Trying to read a BLOB - C#

I am trying to read a BLOB from an Oracle database. The function GetFileContent takes p_file_id as a parameter and returns a BLOB. The BLOB is a DOCX file that needs to be written to a folder somewhere, but I can't quite figure out how to read it. There is definitely something stored in the return_value parameter after
OracleDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
The value is {byte[9946]}. But I get an error when executing
long retrievedBytes = reader.GetBytes(1, startIndex, buffer, 0, ChunkSize);
It says InvalidOperationException was caught: "No data exists for the row or column."
Here is the code:
cmd = new OracleCommand("GetFileContent", oraCon);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("p_file_id", OracleType.Number).Direction = ParameterDirection.Input;
cmd.Parameters[0].Value = fileID;
cmd.Parameters.Add("return_value", OracleType.Blob).Direction = ParameterDirection.ReturnValue;
cmd.Connection.Open();
OracleDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
reader.Read();
MemoryStream memory = new MemoryStream();
long startIndex = 0;
const int ChunkSize = 256;
while (true)
{
byte[] buffer = new byte[ChunkSize];
long retrievedBytes = reader.GetBytes(1, startIndex, buffer, 0, ChunkSize); //FAILS
memory.Write(buffer, 0, (int)retrievedBytes);
startIndex += retrievedBytes;
if (retrievedBytes != ChunkSize)
break;
}
cmd.Connection.Close();
byte[] data = memory.ToArray();
memory.Dispose();
How can I read the BLOB from the function?

Looks like you are using the Microsoft Oracle Client. You probably want to use the LOB objects rather than using GetBytes(...).
I think the first link below would be the easiest for you. Here is an excerpt:
using(reader)
{
//Obtain the first row of data.
reader.Read();
//Obtain the LOBs (all 3 varieties).
OracleLob BLOB = reader.GetOracleLob(1);
...
//Example - Reading binary data (in chunks).
byte[] buffer = new byte[100];
while ((actual = BLOB.Read(buffer, 0, buffer.Length)) > 0)
Console.WriteLine(BLOB.LobType + ".Read(" + buffer + ", " + buffer.Length + ") => " + actual);
...
}
OracleLob::Read Method
OracleLob Class
OracleDataReader::GetOracleLob Method
On a side note, the Microsoft Oracle client is being deprecated. You may want to look into switching to Oracle's ODP.NET, as that will be the only officially supported client moving forward.
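To make the excerpt concrete, here is a minimal sketch of reading the function's return value as an OracleLob and writing it to disk with the Microsoft client (System.Data.OracleClient). It reuses the oraCon, fileID, and GetFileContent names from the question; outputPath is a hypothetical destination. Since GetFileContent is a function, ExecuteNonQuery is enough to populate the ReturnValue parameter, and there is no data reader involved:

```csharp
// Sketch only: assumes oraCon (open-able OracleConnection), fileID, and the
// GetFileContent function from the question; outputPath is hypothetical.
using (OracleCommand cmd = new OracleCommand("GetFileContent", oraCon))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("p_file_id", OracleType.Number).Value = fileID;
    cmd.Parameters.Add("return_value", OracleType.Blob).Direction = ParameterDirection.ReturnValue;

    oraCon.Open();
    cmd.ExecuteNonQuery(); // fills the ReturnValue parameter

    // The BLOB comes back as an OracleLob, which reads like a Stream.
    OracleLob blob = (OracleLob)cmd.Parameters["return_value"].Value;
    using (FileStream fs = new FileStream(outputPath, FileMode.Create, FileAccess.Write))
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = blob.Read(buffer, 0, buffer.Length)) > 0)
            fs.Write(buffer, 0, read);
    }
    oraCon.Close();
}
```

This sidesteps the original error: with ExecuteReader there is no result set row to read from, so GetBytes(1, ...) has no data, whereas the ReturnValue parameter is filled either way.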

Related

Converting XLS to PDF: Getting "Data index must be a valid index in the field" exception during GetBytes()

I have an application that converts files into PDF. It first saves a blob from MySQL into a temp file and then converts that temp file into a PDF. I'm getting this "Data index must be a valid index in the field" exception at GetBytes() only when I try to convert an XLS file. Other file types (BMP, XLSX, DOC, DOCX, etc.) all convert fine.
private WriteBlobToTempFileResult WriteBlobToTempFile(int id, string fileType)
{
Logger.Log(string.Format("Inside WriteBlobToTempFile() id: {0} fileType: {1}", id, fileType));
WriteBlobToTempFileResult res = new WriteBlobToTempFileResult //return object
{
PrimaryKey = id
};
FileStream fs; // Writes the BLOB to a file
BinaryWriter bw; // Streams the BLOB to the FileStream object.
int bufferSize = 100; // Size of the BLOB buffer.
byte[] outbyte = new byte[bufferSize]; // The BLOB byte[] buffer to be filled by GetBytes.
long retval; // The bytes returned from GetBytes.
long startIndex = 0; // The starting position in the BLOB output.
string connectionString = ConfigurationManager.AppSettings["MySQLConnectionString"]; //connection string from app.config
string path = ConfigurationManager.AppSettings["fileDirectory"]; //get directory from App.Config
try
{
MySqlConnection conn = new MySqlConnection(connectionString);
conn.Open();
//Determine records to convert, retrieve Primary Key and file type
string sql = "SELECT FILE_DATA from " + TableName + " WHERE PK_TSP_DOCS_ID = @id";
MySqlCommand cmd = new MySqlCommand(sql, conn);
cmd.Parameters.AddWithValue("@id", id);
MySqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
while (rdr.Read())
{
// Create a file to hold the output.
fs = new FileStream(path + @"\" + id + "." + fileType, FileMode.OpenOrCreate, FileAccess.Write);
bw = new BinaryWriter(fs);
// Reset the starting byte for the new BLOB.
startIndex = 0;
// Read the bytes into outbyte[] and retain the number of bytes returned.
retval = rdr.GetBytes(rdr.GetOrdinal("FILE_DATA"), startIndex, outbyte, 0, bufferSize);
// Continue reading and writing while there are bytes beyond the size of the buffer.
while (retval == bufferSize)
{
bw.Write(outbyte);
bw.Flush();
// Reposition the start index to the end of the last buffer and fill the buffer.
startIndex += bufferSize;
// *****IT FAILS AT THE LINE BELOW*****
retval = rdr.GetBytes(rdr.GetOrdinal("FILE_DATA"), startIndex, outbyte, 0, bufferSize);
// *****IT FAILS AT THE LINE ABOVE*****
}
// Write the remaining buffer.
bw.Write(outbyte, 0, (int)retval);
bw.Flush();
// Close the output file.
bw.Close();
fs.Close();
}
// Close the reader and the connection.
rdr.Close();
conn.Close();
res.FullPath = path + @"\" + id + "." + fileType;
}
catch (Exception ex)
{
res.Error = true;
res.ErrorMessage = string.Format("Failed to write temporary file for record id: {0} of file type: {1}", id.ToString(), fileType);
res.InternalErrorMessage = ex.Message; //string.Format("Caught Exception in WriteBlobToTempPDF(). Stack Trace: {0}", ex.StackTrace);
}
return res;
}
This is a bug in Oracle's MySQL Connector/NET. You can see at https://mysql-net.github.io/AdoNetResults/#GetBytes_reads_nothing_at_end_of_buffer that MySql.Data throws an IndexOutOfRangeException when trying to read 0 bytes at the end of the buffer, but no other ADO.NET provider does.
You should switch to MySqlConnector (disclaimer: I'm a contributor) and use the MySqlDataReader.GetStream() and Stream.CopyTo methods to simplify your code:
MySqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
while (rdr.Read())
{
// Create a file to hold the output.
using (var fs = new FileStream(path + @"\" + id + "." + fileType, FileMode.OpenOrCreate, FileAccess.Write))
{
// Open a Stream from the data reader
using (var stream = rdr.GetStream(rdr.GetOrdinal("FILE_DATA")))
{
// Copy the data
stream.CopyTo(fs);
}
}
}

C# to pull all blobs out of database and save to path

Someone on here already pushed me in the right direction, and I believe I am on the correct track now to pull all of my blobs out of my database. The only thing I cannot figure out (I think) is where to set the path the files save to.
Here is all the code:
namespace PullBlobs
{
class Program
{
static void Main()
{
string connectionString = "Data Source=NYOPSSQL05;Initial Catalog=Opsprod;Integrated Security=true;User Id=username;Password=password;";
using (SqlConnection connection = new SqlConnection(connectionString))
{
SqlCommand cmd = new SqlCommand();
cmd.CommandText = "Select FileName, FileData from IntegrationFile where IntegrationFileID = @Request_ID";
cmd.Connection = connection;
cmd.Parameters.Add("@Request_ID", SqlDbType.UniqueIdentifier).Value = new Guid("EBFF2CEA-3FF9-4D22-ABEF-3240647119CC");
connection.Open();
SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
while (reader.Read())
{
// Writes the BLOB to a file (*.bmp).
FileStream stream;
// Streams the BLOB to the FileStream object.
BinaryWriter writer;
// Size of the BLOB buffer.
int bufferSize = 100;
// The BLOB byte[] buffer to be filled by GetBytes.
byte[] outByte = new byte[bufferSize];
// The bytes returned from GetBytes.
long retval;
// The starting position in the BLOB output.
long startIndex = 0;
// The publisher id to use in the file name.
string FileName = "";
// Get the publisher id, which must occur before getting the logo.
FileName = reader.GetString(0);
// Create a file to hold the output.
stream = new FileStream(
FileName + ".txt", FileMode.OpenOrCreate, FileAccess.Write);
writer = new BinaryWriter(stream);
// Reset the starting byte for the new BLOB.
startIndex = 0;
// Read bytes into outByte[] and retain the number of bytes returned.
retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
// Continue while there are bytes beyond the size of the buffer.
while (retval == bufferSize)
{
writer.Write(outByte);
writer.Flush();
// Reposition start index to end of last buffer and fill buffer.
startIndex += bufferSize;
retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
}
// Write the remaining buffer.
writer.Write(outByte, 0, (int)retval);
writer.Flush();
// Close the output file.
writer.Close();
stream.Close();
}
// Close the reader and the connection.
reader.Close();
connection.Close();
}
}
}
}
I think I have everything altered the way I need it except for where to set the path to save the files to. Here is the link I found on MSDN where I pulled most of this code from (and altered it).
https://msdn.microsoft.com/en-us/library/87z0hy49(v=vs.110).aspx
Dumb question, but are there any actual blobs in there for that request id?
Furthermore, since you are just reading blobs from the database and writing them to files, why not use BCP?
https://msdn.microsoft.com/en-us/library/ms162802.aspx
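For reference, a bcp invocation for this table might look something like the sketch below, assuming Windows authentication plus the server, catalog, table, and GUID from the question; the output path is illustrative. Note that -n writes SQL Server's native format, which adds a length prefix to the binary value, so a byte-exact dump would instead need a format file declaring the column as SQLBINARY with prefix length 0:

```shell
# Sketch: export one blob to a file with bcp (queryout = write query result
# to a file, -S = server, -T = trusted/Windows auth, -n = native format).
bcp "SELECT FileData FROM Opsprod.dbo.IntegrationFile WHERE IntegrationFileID = 'EBFF2CEA-3FF9-4D22-ABEF-3240647119CC'" queryout C:\dev\Blobs\blob.bin -S NYOPSSQL05 -T -n
```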
I figured out the answer to where to put the path. Make sure you put the @ symbol before the quotation marks.
stream = new FileStream(@"c:\dev\Blobs\" + FileName + ".txt", FileMode.OpenOrCreate, FileAccess.Write);

How do I OPEN a stored Excel file in SQL Server 2008 via C#

I have stored an Excel file in my SQL Server 2008 DB (as a VARBINARY(MAX)) in the following (so far, hard-coded) way:
// C# - visual studio 2008
var update = new SqlCommand("UPDATE Requests SET Attachment = @xls" +
" WHERE RequestsID = 27", conn);
update.Parameters.AddWithValue("xls", File.ReadAllBytes("C:/aFolder/hello.xlsx"));
update.ExecuteNonQuery();
It works, but I want to open it too!
How do I do that?
Note, not read blob-data, but open the actual "hello.xlsx" file.
I have tried the following:
http://dotnetsoldier.blogspot.com/2007/07/how-to-retrieve-blob-object-in-winforms.html
I can see it works, as "Binary.Length" is exactly the size of my "hello.xlsx" when executing - but the file doesn't open, and that's my problem.
Please help me out!
EDIT:
HERE IS THE CODE THAT I CURRENTLY USE TO "OPEN" THE SPREADSHEET:
SqlConnection conn =
new SqlConnection
(global::MY_PROJECT.Properties.Settings.Default.DB_1ConnectionString);
conn.Open();
SqlCommand Cmd = new SqlCommand("select Attachment from Requests where RequestsID = 27", conn);
Cmd.CommandType = CommandType.Text;
SqlDataReader Reader = Cmd.ExecuteReader(CommandBehavior.CloseConnection);
//
string DocumentName = null;
FileStream FStream = null;
BinaryWriter BWriter = null;
//
//
//
byte[] Binary = null;
const int ChunkSize = 100;
int SizeToWrite = 0;
MemoryStream MStream = null;
//
while (Reader.Read())
{
DocumentName = Reader["Attachment"].ToString();
// Create a file to hold the output.
FStream = new FileStream(@"c:\" + DocumentName, FileMode.OpenOrCreate, FileAccess.Write);
BWriter = new BinaryWriter(FStream);
Binary = (Reader["Attachment"]) as byte[];
SizeToWrite = ChunkSize;
MStream = new MemoryStream(Binary);
//
for (int i = 0; i < Binary.GetUpperBound(0) - 1; i = i + ChunkSize)
{
if (i + ChunkSize >= Binary.Length) SizeToWrite = Binary.Length - i;
byte[] Chunk = new byte[SizeToWrite];
MStream.Read(Chunk, 0, SizeToWrite);
BWriter.Write(Chunk);
BWriter.Flush();
}
BWriter.Close();
FStream.Close();
}
FStream.Dispose();
conn.Close();
The code you posted looks like it probably writes the spreadsheet to disc but I can't see any code to open it. You would need to use something like
System.Diagnostics.Process.Start(@"c:\" + DocumentName)
I think.

Downloading a 50 MB file from SQL Server in ASP.NET stops in middle for a while

When I'm trying to download a 50 MB file from a database (it is not an issue with smaller file sizes), it sometimes stops in the middle and resumes again after a long time. Am I missing something?
The code,
SqlConnection con = new SqlConnection(ConnectionString);
SqlCommand cmd = new SqlCommand("DownloadFile", con);
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandTimeout = 60;
cmd.Parameters.AddWithValue("@Id", Id);
try
{
con.Open();
SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
// File name
string fileName = reader.GetString(0);
// Total bytes to read
int dataToRead = reader.GetInt32(1);
Context.Server.ScriptTimeout = 600;
Context.Response.Buffer = true;
Context.Response.Clear();
Context.Response.ContentType = "application/octet-stream";
Context.Response.AddHeader("Content-Disposition",
"attachment; filename=\"" + fileName + "\";");
Context.Response.AddHeader("Content-Length", dataToRead.ToString());
int ChunkSize = 262144;
// Buffer to read 10K bytes in chunk
byte[] buffer = new Byte[ChunkSize];
long offset = 0;
long length;
// Read the bytes.
while (dataToRead > 0)
{
// Verify that the client is connected.
if (Context.Response.IsClientConnected)
{
// Read the data in buffer
length = reader.GetBytes(2, offset, buffer, 0, ChunkSize);
offset += ChunkSize;
// Write the data to the current output stream.
Context.Response.OutputStream.Write(buffer, 0, (int) length);
// Flush the data to the HTML output.
Context.Response.Flush();
buffer = new Byte[ChunkSize];
dataToRead = dataToRead - (int) length;
}
else
{
//prevent infinite loop if user disconnects
dataToRead = -1;
}
}
}
finally
{
cmd.Dispose();
}
int ChunkSize = 262144;
There you go, that's 25 MB - right in the middle of a 50 MB download. Try modifying this and see what happens. Any reason you set it to that particular value?
That's not how you stream content from a SQL Server query result. You must specify CommandBehavior.SequentialAccess:
Provides a way for the DataReader to handle rows that contain columns with large binary values. Rather than loading the entire row, SequentialAccess enables the DataReader to load data as a stream. You can then use the GetBytes or GetChars method to specify a byte location to start the read operation, and a limited buffer size for the data being returned.
And, as others have pointed out, 25 MB is not a reasonable buffer chunk size. Try something like 4 KB (typical TDS packet size) or 16 KB (typical SSL frame size if the connection is encrypted).
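Putting the two suggestions together, the download loop from the question might be reshaped roughly as follows. This is a sketch reusing the question's cmd and Context; the response headers would be set exactly as in the original code and are left out here. The key changes are CommandBehavior.SequentialAccess and a small 4 KB buffer, with the loop driven by the return value of GetBytes:

```csharp
// Sketch: streaming the blob with SequentialAccess and a 4 KB chunk size.
// Assumes cmd and Context from the question; headers set as in the original.
SqlDataReader reader = cmd.ExecuteReader(
    CommandBehavior.SequentialAccess | CommandBehavior.CloseConnection);
if (reader.Read())
{
    // With SequentialAccess, columns must be read in ordinal order:
    // 0 = file name, 1 = total length, 2 = file data.
    string fileName = reader.GetString(0);
    int dataToRead = reader.GetInt32(1);

    const int ChunkSize = 4096; // roughly one TDS packet
    byte[] buffer = new byte[ChunkSize];
    long offset = 0;
    long length;
    while ((length = reader.GetBytes(2, offset, buffer, 0, ChunkSize)) > 0
           && Context.Response.IsClientConnected)
    {
        Context.Response.OutputStream.Write(buffer, 0, (int)length);
        Context.Response.Flush();
        offset += length;
    }
}
```

GetBytes returns 0 once the column is exhausted, so the loop no longer depends on a byte count fetched up front, and a dropped client simply ends the loop.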

What is the most efficient way to read many bytes from SQL Server using SqlDataReader (C#)

What is the most efficient way to read bytes (8-16 K) from SQL Server using SqlDataReader.
It seems I know 2 ways:
byte[] buffer = new byte[4096];
MemoryStream stream = new MemoryStream();
long l, dataOffset = 0;
while ((l = reader.GetBytes(columnIndex, dataOffset, buffer, 0, buffer.Length)) > 0)
{
stream.Write(buffer, 0, (int)l);
dataOffset += l;
}
and
reader.GetSqlBinary(columnIndex).Value
The data type is IMAGE
GetSqlBinary will load the whole value into memory, while your first approach reads it in chunks, which takes less memory, especially if you only need to process the binary in parts. But once again it depends on what you are going to do with the binary and how it will be processed.
For that blob size, I would go with GetSqlBinary. Below is an example that also Base64-encodes the result:
using (SqlConnection con = new SqlConnection("...")) {
con.Open();
using (SqlCommand cmd = con.CreateCommand()) {
cmd.CommandText = "SELECT TOP 1 * FROM product WHERE DATALENGTH(picture)>0";
using (SqlDataReader reader = cmd.ExecuteReader()) {
reader.Read();
byte[] dataBinary = reader.GetSqlBinary(reader.GetOrdinal("picture")).Value;
string dataBase64 = System.Convert.ToBase64String(dataBinary, Base64FormattingOptions.InsertLineBreaks);
//TODO: use dataBase64
}
}
}
