C# to pull all BLOBs out of a database and save them to a path

Someone on here already pushed me in the right direction, and I believe I am now on the right track to pull all of my BLOBs out of the database. The only thing I cannot figure out is where to set the path the files are saved to.
Here is all the code:
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

namespace PullBlobs
{
    class Program
    {
        static void Main()
        {
            // Note: with Integrated Security=true the User Id/Password pair is ignored.
            string connectionString = "Data Source=NYOPSSQL05;Initial Catalog=Opsprod;Integrated Security=true;User Id=username;Password=password;";
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand cmd = new SqlCommand();
                cmd.CommandText = "Select FileName, FileData from IntegrationFile where IntegrationFileID = @Request_ID";
                cmd.Connection = connection;
                cmd.Parameters.Add("@Request_ID", SqlDbType.UniqueIdentifier).Value = new Guid("EBFF2CEA-3FF9-4D22-ABEF-3240647119CC");
                connection.Open();
                SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
                while (reader.Read())
                {
                    // Writes the BLOB to a file.
                    FileStream stream;
                    // Streams the BLOB to the FileStream object.
                    BinaryWriter writer;
                    // Size of the BLOB buffer.
                    int bufferSize = 100;
                    // The BLOB byte[] buffer to be filled by GetBytes.
                    byte[] outByte = new byte[bufferSize];
                    // The number of bytes returned from GetBytes.
                    long retval;
                    // The starting position in the BLOB output.
                    long startIndex = 0;
                    // Get the file name, which must be read before the BLOB column
                    // because the reader is in SequentialAccess mode.
                    string FileName = reader.GetString(0);
                    // Create a file to hold the output.
                    stream = new FileStream(FileName + ".txt", FileMode.OpenOrCreate, FileAccess.Write);
                    writer = new BinaryWriter(stream);
                    // Reset the starting byte for the new BLOB.
                    startIndex = 0;
                    // Read bytes into outByte[] and retain the number of bytes returned.
                    retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
                    // Continue while there are bytes beyond the size of the buffer.
                    while (retval == bufferSize)
                    {
                        writer.Write(outByte);
                        writer.Flush();
                        // Reposition the start index to the end of the last buffer and refill the buffer.
                        startIndex += bufferSize;
                        retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
                    }
                    // Write the remaining buffer. (The MSDN sample uses (int)retval - 1 here,
                    // which drops the last byte of every file; retval is the correct count.)
                    writer.Write(outByte, 0, (int)retval);
                    writer.Flush();
                    // Close the output file.
                    writer.Close();
                    stream.Close();
                }
                // Close the reader and the connection.
                reader.Close();
                connection.Close();
            }
        }
    }
}
I think I have everything altered the way I need it, except for setting the path the files are saved to. Here is the MSDN link where I pulled most of this code from (and altered it):
https://msdn.microsoft.com/en-us/library/87z0hy49(v=vs.110).aspx

Dumb question, but are there any actual BLOBs in there for that request ID?
Furthermore, since you are just reading BLOBs from the database and writing them to files, why not use BCP?
https://msdn.microsoft.com/en-us/library/ms162802.aspx
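For reference, a bcp export might look roughly like this (a hedged sketch: the server, table, and GUID come from the question, and blob.fmt is a hypothetical format file, which is normally needed so the single varbinary column is written as raw bytes with no length prefix):

bcp "SELECT FileData FROM Opsprod.dbo.IntegrationFile WHERE IntegrationFileID = 'EBFF2CEA-3FF9-4D22-ABEF-3240647119CC'" queryout blob.bin -S NYOPSSQL05 -T -f blob.fmt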

I figured out the answer of where to put the path. Make sure you put the @ symbol (a verbatim string literal) before the opening quotation mark:
stream = new FileStream(@"c:\dev\Blobs\" + FileName + ".txt", FileMode.OpenOrCreate, FileAccess.Write);
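As a small refinement (a sketch, not part of the original answer): Path.Combine avoids hand-concatenating the path, FileMode.Create truncates any older, longer file that OpenOrCreate would leave partially intact, and using blocks close the stream even if a write throws:

using (var stream = new FileStream(Path.Combine(@"c:\dev\Blobs", FileName + ".txt"),
                                   FileMode.Create, FileAccess.Write))
using (var writer = new BinaryWriter(stream))
{
    // ... GetBytes loop as above ...
}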

Related

Converting XLS to PDF: Getting "Data index must be a valid index in the field" exception during GetBytes()

I have an application that converts files into PDF. It first saves a blob from MySQL into a temp file and then converts that temp file into a PDF. I'm getting the "Data index must be a valid index in the field" exception at GetBytes(), but only when I try to convert an XLS file. Other file types (BMP, XLSX, DOC, DOCX, etc.) all convert successfully.
private WriteBlobToTempFileResult WriteBlobToTempFile(int id, string fileType)
{
    Logger.Log(string.Format("Inside WriteBlobToTempFile() id: {0} fileType: {1}", id, fileType));
    WriteBlobToTempFileResult res = new WriteBlobToTempFileResult // return object
    {
        PrimaryKey = id
    };
    FileStream fs;                         // Writes the BLOB to a file.
    BinaryWriter bw;                       // Streams the BLOB to the FileStream object.
    int bufferSize = 100;                  // Size of the BLOB buffer.
    byte[] outbyte = new byte[bufferSize]; // The BLOB byte[] buffer to be filled by GetBytes.
    long retval;                           // The bytes returned from GetBytes.
    long startIndex = 0;                   // The starting position in the BLOB output.
    string connectionString = ConfigurationManager.AppSettings["MySQLConnectionString"]; // connection string from App.config
    string path = ConfigurationManager.AppSettings["fileDirectory"];                     // directory from App.config
    try
    {
        MySqlConnection conn = new MySqlConnection(connectionString);
        conn.Open();
        // Determine records to convert; retrieve the primary key and file type.
        string sql = "SELECT FILE_DATA from " + TableName + " WHERE PK_TSP_DOCS_ID = @id";
        MySqlCommand cmd = new MySqlCommand(sql, conn);
        cmd.Parameters.AddWithValue("@id", id);
        MySqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        while (rdr.Read())
        {
            // Create a file to hold the output.
            fs = new FileStream(path + @"\" + id + "." + fileType, FileMode.OpenOrCreate, FileAccess.Write);
            bw = new BinaryWriter(fs);
            // Reset the starting byte for the new BLOB.
            startIndex = 0;
            // Read the bytes into outbyte[] and retain the number of bytes returned.
            retval = rdr.GetBytes(rdr.GetOrdinal("FILE_DATA"), startIndex, outbyte, 0, bufferSize);
            // Continue reading and writing while there are bytes beyond the size of the buffer.
            while (retval == bufferSize)
            {
                bw.Write(outbyte);
                bw.Flush();
                // Reposition the start index to the end of the last buffer and fill the buffer.
                startIndex += bufferSize;
                // *****IT FAILS AT THE LINE BELOW*****
                retval = rdr.GetBytes(rdr.GetOrdinal("FILE_DATA"), startIndex, outbyte, 0, bufferSize);
                // *****IT FAILS AT THE LINE ABOVE*****
            }
            // Write the remaining buffer.
            bw.Write(outbyte, 0, (int)retval);
            bw.Flush();
            // Close the output file.
            bw.Close();
            fs.Close();
        }
        // Close the reader and the connection.
        rdr.Close();
        conn.Close();
        res.FullPath = path + @"\" + id + "." + fileType;
    }
    catch (Exception ex)
    {
        res.Error = true;
        res.ErrorMessage = string.Format("Failed to write temporary file for record id: {0} of file type: {1}", id.ToString(), fileType);
        res.InternalErrorMessage = ex.Message;
    }
    return res;
}
This is a bug in Oracle's MySQL Connector/NET. You can see at https://mysql-net.github.io/AdoNetResults/#GetBytes_reads_nothing_at_end_of_buffer that MySql.Data throws an IndexOutOfRangeException when trying to read 0 bytes at the end of the buffer, but no other ADO.NET provider does.
You should switch to MySqlConnector (disclaimer: I'm a contributor) and use the MySqlDataReader.GetStream() and Stream.CopyTo methods to simplify your code:
MySqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
while (rdr.Read())
{
    // Create a file to hold the output.
    using (var fs = new FileStream(path + @"\" + id + "." + fileType, FileMode.OpenOrCreate, FileAccess.Write))
    {
        // Open a Stream over the BLOB column directly from the data reader.
        using (var stream = rdr.GetStream(rdr.GetOrdinal("FILE_DATA")))
        {
            // Copy the data to the file.
            stream.CopyTo(fs);
        }
    }
}
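As a usage note, Stream.CopyTo manages its own buffer internally (80 KB by default), so the manual GetBytes loop, the byte[] bookkeeping, and the off-by-one risks all disappear, and the using blocks guarantee both streams are closed even if the copy throws.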

Saving to disk directly from SqlFileStream, or figuring out how to store large data inside one byte array

Hello, I have been following numerous tutorials online for a new project I'm working on. I am obtaining my data from a FILESTREAM column, and I'm getting an OutOfMemoryException at this line:
byte[] buffer = new byte[(int)sfs.Length];
What I'm doing is immediately taking the byte array and then saving it to disk. If there is no easy way to avoid the OutOfMemoryException, is there a way to write to disk from the SqlFileStream without creating a new byte array?
string cs = @"Data Source=<your server>;Initial Catalog=MyFsDb;Integrated Security=TRUE";
using (SqlConnection con = new SqlConnection(cs))
{
    con.Open();
    SqlTransaction txn = con.BeginTransaction();
    string sql = "SELECT fData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT(), fName FROM MyFsTable";
    SqlCommand cmd = new SqlCommand(sql, con, txn);
    SqlDataReader rdr = cmd.ExecuteReader();
    while (rdr.Read())
    {
        string filePath = rdr[0].ToString();
        byte[] objContext = (byte[])rdr[1];
        string fName = rdr[2].ToString();
        SqlFileStream sfs = new SqlFileStream(filePath, objContext, System.IO.FileAccess.Read);
        byte[] buffer = new byte[(int)sfs.Length];   // <-- OutOfMemoryException here
        sfs.Read(buffer, 0, buffer.Length);
        sfs.Close();
        string filename = @"C:\Temp\" + fName;
        System.IO.FileStream fs = new System.IO.FileStream(filename, FileMode.Create, FileAccess.Write, FileShare.Write);
        fs.Write(buffer, 0, buffer.Length);
        fs.Flush();
        fs.Close();
    }
    rdr.Close();
    txn.Commit();
    con.Close();
}
Here's a method that can be used to copy bytes from one Stream to another without regard for what type of Stream each one is.
Public Sub CopyStream(source As Stream, destination As Stream, Optional blockSize As Integer = 1024)
    Dim buffer(blockSize - 1) As Byte
    'Read the first block.
    Dim bytesRead = source.Read(buffer, 0, blockSize)
    Do Until bytesRead = 0
        'Write the current block.
        destination.Write(buffer, 0, bytesRead)
        'Read the next block.
        bytesRead = source.Read(buffer, 0, blockSize)
    Loop
End Sub
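In C#, the question's loop can use the same idea with no file-sized buffer at all: since .NET 4, Stream.CopyTo does the chunked copy internally, so the SqlFileStream can be written straight to disk (a sketch reusing the names from the question):

while (rdr.Read())
{
    string filePath = rdr[0].ToString();
    byte[] objContext = (byte[])rdr[1];
    string fName = rdr[2].ToString();
    using (var sfs = new SqlFileStream(filePath, objContext, FileAccess.Read))
    using (var fs = new FileStream(@"C:\Temp\" + fName, FileMode.Create, FileAccess.Write))
    {
        // Copies in fixed-size chunks; memory use stays constant
        // no matter how large the BLOB is.
        sfs.CopyTo(fs);
    }
}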

C# read content of blob column - when converting it to a file, the file is unreadable

I've made a tool which gets the content of a blob column and then saves the file.
The problem, which has bugged me for almost three days, is that after I save the file it's unreadable. The exception when I try to open the file is:
'Can't read file header'
I have to mention that most of the files are in .tif format.
I would appreciate any help.
FbCommand cmd = new FbCommand(String.Format("SELECT FIRST 1 ID, DOCID, FILENAME, FILESIZE, DATA FROM ORIGINALS WHERE ID > {0} ORDER BY ID", initialIndex), con);
var reader = cmd.ExecuteReader();
if (reader.HasRows)
{
    while (reader.Read())
    {
        //MessageBox.Show(reader.GetInt32(0).ToString());
        int docId = (int)reader["DOCID"];
        long newDocId = dictDocs[docId];
        initialIndex = (int)reader["ID"];
        string fileName = reader["FILENAME"].ToString();
        int size = (int)reader["FILESIZE"];
        byte[] data = (byte[])reader["DATA"];
        System.IO.FileStream fs =
            new System.IO.FileStream("D:" + fileName, System.IO.FileMode.Create, System.IO.FileAccess.Write);
        fs.Write(data, 0, data.Length);
        fs.Close();
        var Writer = new BinaryWriter(File.OpenWrite("D:" + fileName));
        Writer.Write(data);
        Writer.Flush();
    }
}
Everything after
System.IO.FileStream fs =
    new System.IO.FileStream("D:" + fileName, System.IO.FileMode.Create, System.IO.FileAccess.Write);
fs.Write(data, 0, data.Length);
fs.Close();
is unnecessary. You are overwriting your own file, and the second BinaryWriter is never closed, so its file handle leaks.
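A minimal corrected version (a sketch, keeping the question's variable names) writes the data once and disposes the stream deterministically; note that @"D:\" targets the drive root explicitly, whereas the original "D:" + fileName resolves relative to the current directory on drive D:

byte[] data = (byte[])reader["DATA"];
using (var fs = new FileStream(@"D:\" + fileName, FileMode.Create, FileAccess.Write))
{
    // One write; Dispose flushes and closes the handle.
    fs.Write(data, 0, data.Length);
}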

SqlFileStream append new data create new file

I am sending a file to a SqlFileStream in parts, like this:
public void StreamFile(int storageId, Stream stream)
{
    var select = string.Format(
        @"Select TOP(1) Content.PathName(),
        GET_FILESTREAM_TRANSACTION_CONTEXT() FROM FileStorage WHERE FileStorageID={0}",
        storageId);
    using (var conn = new SqlConnection(this.ConnectionString))
    {
        conn.Open();
        var sqlTransaction = conn.BeginTransaction(IsolationLevel.ReadCommitted);
        string serverPath;
        byte[] serverTxn;
        using (var cmd = new SqlCommand(select, conn))
        {
            cmd.Transaction = sqlTransaction;
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                rdr.Read();
                serverPath = rdr.GetSqlString(0).Value;
                serverTxn = rdr.GetSqlBinary(1).Value;
                rdr.Close();
            }
        }
        this.SaveFile(stream, serverPath, serverTxn);
        sqlTransaction.Commit();
    }
}

private void SaveFile(Stream clientStream, string serverPath, byte[] serverTxn)
{
    const int BlockSize = 512;
    using (var dest = new SqlFileStream(serverPath, serverTxn,
        FileAccess.ReadWrite, FileOptions.SequentialScan, 0))
    {
        var buffer = new byte[BlockSize];
        int bytesRead;
        dest.Seek(dest.Length, SeekOrigin.Begin);
        while ((bytesRead = clientStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            dest.Write(buffer, 0, bytesRead);
        }
    }
    clientStream.Close();
}
I tried uploading a file in 10 parts. When I looked at the folder where the data is held, I saw 10 files. For each additional part, SqlFileStream creates a new file. The last file holds all the data, but the rest just waste space. Is it possible to hold all the data in one file - the one from the last append operation?
SqlFileStream doesn't support "partial updates". Each time the content changes, a new file is written to disk. These extra files are de-referenced and will eventually be cleaned up, but until they are, they will affect your backups, logs, etc.
If you want all the data in a single file, you will need to process the entire file in a single stream.
If you don't need to support large files ( > 2 GB) and you expect lots of small partial updates, you might be better off storing the file in a VARBINARY(MAX) column instead.
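To illustrate the single-stream approach (a sketch assuming the question's FileStorage schema and the same path/transaction lookup as StreamFile): open the SqlFileStream once for the complete content and copy the whole client stream in one pass, so SQL Server materializes exactly one FILESTREAM file for the transaction:

private void SaveFileInOnePass(Stream clientStream, string serverPath, byte[] serverTxn)
{
    // Writing the whole content in one open/close cycle means only one
    // new FILESTREAM file is created for this transaction.
    using (var dest = new SqlFileStream(serverPath, serverTxn,
        FileAccess.Write, FileOptions.SequentialScan, 0))
    {
        clientStream.CopyTo(dest);
    }
}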

Trying to read a blob

I am trying to read a BLOB from an Oracle database. The function GetFileContent takes p_file_id as a parameter and returns a BLOB. The BLOB is a DOCX file that needs to be written to a folder somewhere. But I can't quite figure out how to read the BLOB. There is definitely something stored in the return_value parameter after
OracleDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
The value is {byte[9946]}. But I get an error when executing
long retrievedBytes = reader.GetBytes(1, startIndex, buffer, 0, ChunkSize);
It says an InvalidOperationException was caught: "No data exists for the row or column."
Here is the code:
cmd = new OracleCommand("GetFileContent", oraCon);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("p_file_id", OracleType.Number).Direction = ParameterDirection.Input;
cmd.Parameters[0].Value = fileID;
cmd.Parameters.Add("return_value", OracleType.Blob).Direction = ParameterDirection.ReturnValue;
cmd.Connection.Open();
OracleDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
reader.Read();
MemoryStream memory = new MemoryStream();
long startIndex = 0;
const int ChunkSize = 256;
while (true)
{
    byte[] buffer = new byte[ChunkSize];
    long retrievedBytes = reader.GetBytes(1, startIndex, buffer, 0, ChunkSize); // FAILS
    memory.Write(buffer, 0, (int)retrievedBytes);
    startIndex += retrievedBytes;
    if (retrievedBytes != ChunkSize)
        break;
}
cmd.Connection.Close();
byte[] data = memory.ToArray();
memory.Dispose();
How can I read the BLOB from the function?
Looks like you are using the Microsoft Oracle Client. You probably want to use the LOB objects rather than using GetBytes(...).
I think the first link below would be the easiest for you. Here is an excerpt:
using (reader)
{
    // Obtain the first row of data.
    reader.Read();
    // Obtain the LOBs (all 3 varieties).
    OracleLob BLOB = reader.GetOracleLob(1);
    ...
    // Example - reading binary data (in chunks).
    byte[] buffer = new byte[100];
    while ((actual = BLOB.Read(buffer, 0, buffer.Length)) > 0)
        Console.WriteLine(BLOB.LobType + ".Read(" + buffer + ", " + buffer.Length + ") => " + actual);
    ...
}
OracleLob::Read Method
OracleLob Class
OracleDataReader::GetOracleLob Method
On a side note, the Microsoft Oracle client is being deprecated. You may want to look into switching to Oracle's ODP.NET, as that will be the only officially supported client going forward.
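For what it's worth, here is a rough sketch of the same call through ODP.NET (Oracle.ManagedDataAccess); the procedure and parameter names come from the question, outputPath is hypothetical, and the exact OracleDbType for p_file_id is an assumption. With ODP.NET the ReturnValue parameter should be added first (parameters bind by position by default), and OracleBlob derives from Stream, so it can be copied straight to a file:

using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;

var cmd = new OracleCommand("GetFileContent", oraCon);
cmd.CommandType = CommandType.StoredProcedure;
// Return value first: ODP.NET binds parameters by position by default.
cmd.Parameters.Add("return_value", OracleDbType.Blob).Direction = ParameterDirection.ReturnValue;
cmd.Parameters.Add("p_file_id", OracleDbType.Int32).Value = fileID;
oraCon.Open();
cmd.ExecuteNonQuery();
using (var blob = (OracleBlob)cmd.Parameters["return_value"].Value)
using (var fs = File.Create(outputPath)) // outputPath: where to write the .docx
{
    blob.CopyTo(fs); // OracleBlob is a Stream, so CopyTo streams in chunks
}
oraCon.Close();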
