SqlFileStream: appending new data creates a new file - c#

I am sending a file to SqlFileStream in parts, like this:
public void StreamFile(int storageId, Stream stream)
{
    var select = string.Format(
        @"Select TOP(1) Content.PathName(),
        GET_FILESTREAM_TRANSACTION_CONTEXT() FROM FileStorage WHERE FileStorageID={0}",
        storageId);
    using (var conn = new SqlConnection(this.ConnectionString))
    {
        conn.Open();
        var sqlTransaction = conn.BeginTransaction(IsolationLevel.ReadCommitted);
        string serverPath;
        byte[] serverTxn;
        using (var cmd = new SqlCommand(select, conn))
        {
            cmd.Transaction = sqlTransaction;
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                rdr.Read();
                serverPath = rdr.GetSqlString(0).Value;
                serverTxn = rdr.GetSqlBinary(1).Value;
                rdr.Close();
            }
        }
        this.SaveFile(stream, serverPath, serverTxn);
        sqlTransaction.Commit();
    }
}

private void SaveFile(Stream clientStream, string serverPath, byte[] serverTxn)
{
    const int BlockSize = 512;
    using (var dest = new SqlFileStream(serverPath, serverTxn,
        FileAccess.ReadWrite, FileOptions.SequentialScan, 0))
    {
        var buffer = new byte[BlockSize];
        int bytesRead;
        dest.Seek(dest.Length, SeekOrigin.Begin);
        while ((bytesRead = clientStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            dest.Write(buffer, 0, bytesRead);
        }
    }
    clientStream.Close();
}
I tried uploading a file in 10 parts. When I looked in the folder where the data is held, I saw 10 files: for each additional part, SqlFileStream creates a new file. The last file holds all the data, but the rest just waste space. Is it possible to hold all the data in one file, i.e. only the result of the last append operation?

SqlFileStream doesn't support "partial updates". Each time the content changes, a new file is written to disk. These extra files are de-referenced and will eventually be cleaned up, but until they are, they will affect your backups, logs, etc.
If you want all the data in a single file, you will need to process the entire file in a single stream.
If you don't need to support large files ( > 2 GB) and you expect lots of small partial updates, you might be better off storing the file in a VARBINARY(MAX) column instead.
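If you do switch to VARBINARY(MAX), SQL Server's UPDATE ... .WRITE clause supports efficient appends without rewriting the whole value. A minimal sketch, assuming the same FileStorage table as above but with Content as a plain VARBINARY(MAX) column (no FILESTREAM):
private void AppendChunk(SqlConnection conn, int storageId, byte[] chunk)
{
    // .WRITE with a NULL offset appends the chunk at the end of the value.
    // The column must already be initialised (e.g. to 0x), since .WRITE
    // cannot update a NULL value.
    const string sql = @"UPDATE FileStorage
                         SET Content.WRITE(@chunk, NULL, 0)
                         WHERE FileStorageID = @id";
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.Add("@chunk", SqlDbType.VarBinary, -1).Value = chunk;
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = storageId;
        cmd.ExecuteNonQuery();
    }
}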

Related

Read Image stored in Oracle in Long (Finacle)

I want to read images stored in an Oracle LONG datatype column. A number of images are stored in a remote Oracle database in a column with datatype LONG. I just need to retrieve those images and show them on my aspx page. I can retrieve the value from the database, but when I try to cast it to a byte array it throws the error "string cannot be converted to byte[]". Does anybody have suggestions on how to retrieve these images stored in a LONG column? I have already tried various beginner techniques.
OracleDataReader TableReader = null;
string connectionString = ConfigurationManager.ConnectionStrings["System.Data.OracleClient"].ConnectionString;
using (OracleConnection connection = new OracleConnection(connectionString))
{
    connection.Open();
    command = new OracleCommand("Select * From IMAGE_TABLE where CUST_ID ='xxxxxxxx' fetch first 1 rows only", connection);
    command.InitialLONGFetchSize = -1;
    TableReader = command.ExecuteReader();
    while (TableReader.Read())
    {
        try
        {
            var tt = Convert.ToInt32(TableReader[14]);
            byte[] bytes = new byte[tt];
            //long bytesRead = TableReader.GetBytes(13, 0, bytes, 0, tt);
            //FileStream file = new FileStream("d:\\Img1.jpg", FileMode.Create, FileAccess.Write);
            //file.Write(bytes, 0, (int)bytesRead);
            //file.Close();
            byte[] blob_ = System.Text.UTF32Encoding.ASCII.GetBytes(TableReader[13].ToString());
            using (MemoryStream ms = new MemoryStream(blob_))
            {
                image = Image.FromStream(ms);
            }
        }
        catch (Exception rr)
        {
        }
    }
}
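If the column is actually LONG RAW (binary) rather than LONG (character data), the commented-out GetBytes approach is the more promising direction. A rough, untested sketch, keeping the assumptions from the code above that column 13 holds the image bytes and column 14 their length:
// With InitialLONGFetchSize = -1 the entire LONG value is fetched with the row,
// so GetBytes can copy it out as raw bytes instead of round-tripping it
// through a string (which is what corrupts the data).
command.InitialLONGFetchSize = -1;
using (OracleDataReader reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        int length = Convert.ToInt32(reader[14]);
        byte[] bytes = new byte[length];
        long bytesRead = reader.GetBytes(13, 0, bytes, 0, length);
        using (var ms = new MemoryStream(bytes, 0, (int)bytesRead))
        {
            image = Image.FromStream(ms);
        }
    }
}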

Stored exe file in SQL is not executable after loading from SQL Server

We are trying to store an executable (exe) file in SQL Server. We get no error either writing or reading; the problem is that the file we stored does not work after downloading it back.
This is how we store the file:
databaseFilePut(@"FilePath", con, dt.Rows[0].ItemArray[0].ToString(), "ASIL");
And this is the inside of the function:
public static void databaseFilePut(string varFilePath, SqlConnection con, string version, string OFSET)
{
    byte[] file;
    using (var stream = new FileStream(varFilePath, FileMode.Open, FileAccess.Read))
    {
        using (var reader = new BinaryReader(stream))
        {
            file = reader.ReadBytes((int)stream.Length);
        }
    }
    using (var sqlWrite = new SqlCommand("UPDATE ERP_TOOL_UPDATE SET Version=@Version1, ExeDosyasi= @ExeDosyasi1, OFSET= @OFSET1", con))
    {
        sqlWrite.Parameters.AddWithValue("@Version1", (double.Parse(version) + 1).ToString());
        sqlWrite.Parameters.Add("@ExeDosyasi1", SqlDbType.VarBinary, file.Length).Value = file;
        sqlWrite.Parameters.AddWithValue("@OFSET1", "ASIL");
        sqlWrite.ExecuteNonQuery();
    }
}
After saving to the database, this is what the data looks like:
0x4D5A90000300000004000000FFFF0000B8000.... and goes on.
After reading it back, we try to recreate the stored exe with this code:
SqlCommand com = new SqlCommand("Select ExeDosyasi From ERP_TOOL_UPDATE WHERE OFSET = 'ASIL' ", con);
com.CommandType = CommandType.Text;
SqlDataReader reader = com.ExecuteReader();
reader.Read();
byte[] blob;
byte[] blob2;
blob = (byte[])reader[0];
blob2 = System.Text.Encoding.Default.GetBytes(System.Text.Encoding.Unicode.GetString(blob));
using (var fs = new FileStream(@"C:\Users\Bilal\Desktop\ERPAnalizTool.exe", FileMode.Create, FileAccess.Write))
{
    fs.Write(blob2, 0, blob2.Length);
    fs.Flush();
}
We are not getting any errors; it saves the file. The problem is that the file is slightly smaller, and when we try to run it, it doesn't run, as if it had never been an exe.
Any help would be appreciated. Thank you all.
Your problem is the following line:
blob2 = System.Text.Encoding.Default.GetBytes(System.Text.Encoding.Unicode.GetString(blob));
The blob variable contains the bytes you wrote to the database, which is the content of the file read in the databaseFilePut method. There is no reason at all to convert it to a Unicode string and then to the system default encoding (Windows-1252 on my system). The data is not a string, it is binary. The double conversion will simply produce a mangled byte sequence.
Simply write the blob variable to disk:
blob = (byte[])reader[0];
File.WriteAllBytes(@"C:\Users\Bilal\Desktop\ERPAnalizTool.exe", blob);
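A minimal illustration of why the round-trip is lossy (the exact bytes depend on the system's default code page):
byte[] original = { 0x4D, 0x5A, 0x90, 0x00, 0x03 };       // start of an exe's "MZ" header
string asText = Encoding.Unicode.GetString(original);      // byte pairs become UTF-16 chars
byte[] roundTripped = Encoding.Default.GetBytes(asText);   // re-encoded as single-byte chars
// roundTripped is now 3 bytes instead of 5, and unmappable characters were
// replaced along the way - exactly why the saved exe is smaller and broken.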

C# to pull all blobs out of database and save to path

Someone on here already pushed me in the right direction, and I believe I am now on the correct track to pull all of my blobs out of my database. The only thing I cannot figure out is where to set the path the files are saved to.
Here is all the code:
namespace PullBlobs
{
    class Program
    {
        static void Main()
        {
            string connectionString = "Data Source=NYOPSSQL05;Initial Catalog=Opsprod;Integrated Security=true;User Id=username;Password=password;";
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand cmd = new SqlCommand();
                cmd.CommandText = "Select FileName, FileData from IntegrationFile where IntegrationFileID = @Request_ID";
                cmd.Connection = connection;
                cmd.Parameters.Add("@Request_ID", SqlDbType.UniqueIdentifier).Value = new Guid("EBFF2CEA-3FF9-4D22-ABEF-3240647119CC");
                connection.Open();
                SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
                while (reader.Read())
                {
                    // Writes the BLOB to a file.
                    FileStream stream;
                    // Streams the BLOB to the FileStream object.
                    BinaryWriter writer;
                    // Size of the BLOB buffer.
                    int bufferSize = 100;
                    // The BLOB byte[] buffer to be filled by GetBytes.
                    byte[] outByte = new byte[bufferSize];
                    // The bytes returned from GetBytes.
                    long retval;
                    // The starting position in the BLOB output.
                    long startIndex = 0;
                    // The file name to use for the output file.
                    string FileName = "";
                    // Get the file name, which must be read before the blob column
                    // when using SequentialAccess.
                    FileName = reader.GetString(0);
                    // Create a file to hold the output.
                    stream = new FileStream(
                        FileName + ".txt", FileMode.OpenOrCreate, FileAccess.Write);
                    writer = new BinaryWriter(stream);
                    // Reset the starting byte for the new BLOB.
                    startIndex = 0;
                    // Read bytes into outByte[] and retain the number of bytes returned.
                    retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
                    // Continue while there are bytes beyond the size of the buffer.
                    while (retval == bufferSize)
                    {
                        writer.Write(outByte);
                        writer.Flush();
                        // Reposition start index to end of last buffer and fill buffer.
                        startIndex += bufferSize;
                        retval = reader.GetBytes(1, startIndex, outByte, 0, bufferSize);
                    }
                    // Write the remaining buffer (all retval bytes; the MSDN sample's
                    // "retval - 1" silently drops the last byte).
                    writer.Write(outByte, 0, (int)retval);
                    writer.Flush();
                    // Close the output file.
                    writer.Close();
                    stream.Close();
                }
                // Close the reader and the connection.
                reader.Close();
                connection.Close();
            }
        }
    }
}
I think I have everything altered the way I need it, except for where to set the path the files save to. Here is the MSDN link where I pulled most of this code from (and altered it):
https://msdn.microsoft.com/en-us/library/87z0hy49(v=vs.110).aspx
Dumb question, but are there any actual blobs in there for that request id?
Furthermore, since you are just reading blobs from the database and writing them to files, why not use BCP?
https://msdn.microsoft.com/en-us/library/ms162802.aspx
I figured out where to put the path. Make sure you put the @ symbol before the quotation marks:
stream = new FileStream(@"c:\dev\Blobs\" + FileName + ".txt", FileMode.OpenOrCreate, FileAccess.Write);

How to attach files to dotnet zip library c#

I am using the DotNetZip library to create a zip file. My files are saved in a database, and I read each row, load the file data into memory, and add the memory stream to the zip file. My code below works fine.
using System.IO;
using Ionic.Zip;
private void button2_Click(object sender, EventArgs e)
{
    using (SqlConnection sqlConn = new SqlConnection(@"Data Source=BBATRIDIP\SQLSERVER2008R2;Initial Catalog=test;Integrated Security=True"))
    {
        string query = String.Format(@"SELECT Name, ContentType, Data FROM [TestTable]");
        SqlCommand cmd = new SqlCommand(query, sqlConn);
        cmd.Connection.Open();
        System.IO.MemoryStream memStream = null;
        ZipFile zip = new ZipFile();
        zip.MaxOutputSegmentSize = 1024 * 1024;
        // the above line splits the zip into multiple segment files,
        // each 1MB in size
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                byte[] data = (byte[])reader["Data"];
                memStream = new System.IO.MemoryStream(data);
                string strFile = reader["Name"].ToString() + "\\" + reader["ContentType"].ToString();
                ZipEntry ze = zip.AddEntry(strFile, memStream);
                int xx = 0;
            }
        }
        zip.Save(@"e:\MyCustomZip.zip");
        memStream.Dispose();
        MessageBox.Show("Job Done");
        // you can also save the zip to a memory stream instead of disk;
        // there is an overload of Save for that
    }
}
But if I change the code a bit, a corrupted zip file is created. The problem occurs if I move this line: zip.Save(@"e:\MyCustomZip.zip");
using (SqlConnection sqlConn = new SqlConnection(@"Data Source=BBATRIDIP\SQLSERVER2008R2;Initial Catalog=test;Integrated Security=True"))
{
    string query = String.Format(@"SELECT Name, ContentType, Data FROM [TestTable]");
    SqlCommand cmd = new SqlCommand(query, sqlConn);
    cmd.Connection.Open();
    System.IO.MemoryStream memStream = null;
    ZipFile zip = new ZipFile();
    zip.MaxOutputSegmentSize = 1024 * 1024;
    // the above line splits the zip into multiple segment files,
    // each 1MB in size
    zip.Save(@"e:\MyCustomZip.zip");
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            byte[] data = (byte[])reader["Data"];
            memStream = new System.IO.MemoryStream(data);
            string strFile = reader["Name"].ToString() + "\\" + reader["ContentType"].ToString();
            ZipEntry ze = zip.AddEntry(strFile, memStream);
            int xx = 0;
        }
    }
    memStream.Dispose();
    MessageBox.Show("Job Done");
    // you can also save the zip to a memory stream instead of disk;
    // there is an overload of Save for that
}
I want the zip to be created at zero KB initially and to grow gradually as I add streams to it in the loop. You can see in the code above where I try to do that, but I had no success. What am I doing wrong?
Another issue:
The DotNetZip compression ratio is not good. I zipped a 152 KB doc file with DotNetZip, and the resulting zip file was 136 KB. Is there any tweak that produces a smaller zip file? Please share. Thanks.
So, looking at the documentation for ZipFile.Save, we see the following:
Use this when creating a new zip file, or when updating a zip archive.
So, it seems that you need to call this repeatedly to get the "update" behaviour you are looking for. As such, you just need to move your Save into the loop:
//...
ZipFile zip = new ZipFile();
zip.MaxOutputSegmentSize = 1024 * 1024;
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        byte[] data = (byte[])reader["Data"];
        memStream = new System.IO.MemoryStream(data);
        string strFile =
            reader["Name"].ToString() + "\\" + reader["ContentType"].ToString();
        ZipEntry ze = zip.AddEntry(strFile, memStream);
        zip.Save(@"e:\MyCustomZip.zip");
    }
}
With regards to your second question (never a good idea on SO, try to stick to one problem per question):
Without knowing what you are compressing, it's very difficult to speculate about what kind of compression ratio you might achieve. If the data is already compressed (e.g. compressed image/video such as jpeg/h264), zipping isn't going to give you any gains whatsoever. Your only hint about the nature of the content is that it is a "doc". If you're talking about a modern Word document (docx), this is already a zip compressed folder structure. Gains will be minimal. If it's another kind of doc, perhaps it contains embedded (compressed) media? This will also be relatively incompressible.
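For content that is compressible, the one knob DotNetZip does expose is the compression level; a small tweak like the following may help a little, though it will not help already-compressed data:
ZipFile zip = new ZipFile();
// Trade CPU time for a smaller archive; the library default is a middle setting.
zip.CompressionLevel = Ionic.Zlib.CompressionLevel.BestCompression;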

What is the most efficient way to read many bytes from SQL Server using SqlDataReader (C#)

What is the most efficient way to read bytes (8-16 K) from SQL Server using SqlDataReader?
It seems I know 2 ways:
byte[] buffer = new byte[4096];
MemoryStream stream = new MemoryStream();
long l, dataOffset = 0;
while ((l = reader.GetBytes(columnIndex, dataOffset, buffer, 0, buffer.Length)) > 0)
{
    stream.Write(buffer, 0, (int)l); // write only the bytes actually read, not the full buffer
    dataOffset += l;
}
and
reader.GetSqlBinary(columnIndex).Value
The data type is IMAGE
GetSqlBinary will load the whole value into memory, while your first approach reads it in chunks, which uses less memory, especially if you only need to process the binary in parts. But again, it depends on what you are going to do with the binary and how it will be processed.
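If you do take the chunked route for larger values, it pairs naturally with CommandBehavior.SequentialAccess, which tells ADO.NET to stream the column instead of buffering the entire row; a sketch along the lines of the question's first snippet:
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
{
    if (reader.Read())
    {
        var buffer = new byte[4096];
        var stream = new MemoryStream();
        long dataOffset = 0, bytesRead;
        // With SequentialAccess, columns must be read in order and GetBytes
        // streams the value without materialising it all at once.
        while ((bytesRead = reader.GetBytes(columnIndex, dataOffset, buffer, 0, buffer.Length)) > 0)
        {
            stream.Write(buffer, 0, (int)bytesRead);
            dataOffset += bytesRead;
        }
    }
}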
For that blob size, I would go with GetSqlBinary. Below I've also included a Base64 encode example; something like this:
using (SqlConnection con = new SqlConnection("...")) {
    con.Open();
    using (SqlCommand cmd = con.CreateCommand()) {
        cmd.CommandText = "SELECT TOP 1 * FROM product WHERE DATALENGTH(picture)>0";
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            reader.Read();
            byte[] dataBinary = reader.GetSqlBinary(reader.GetOrdinal("picture")).Value;
            string dataBase64 = System.Convert.ToBase64String(dataBinary, Base64FormattingOptions.InsertLineBreaks);
            //TODO: use dataBase64
        }
    }
}
