I need to read blob data into a memory stream from an Image type column in a SQL Server database. How can I do this using Dapper?
I was reading the Dapper manual but was unable to find information about this.
UPDATE: I need to read the data from the database (from a query). All the links suggested so far have information about how to store the blob in the database.
Figured it out. On the dynamic result row, the blob column is already a byte[].
var row = con.QueryFirst("SELECT BLOBFIELD FROM TABLE WHERE ID = 1");
byte[] bytes = row.BLOBFIELD; // the dynamic property is already a byte[]
using (var stream = new System.IO.FileStream(@"C:\Temp\Test.dat", System.IO.FileMode.CreateNew))
    stream.Write(bytes, 0, bytes.Length);
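Since the goal was a memory stream rather than a file, a typed query should also work; this is a minimal sketch assuming Dapper maps the single BLOBFIELD column straight to byte[]:

byte[] bytes = con.QueryFirst<byte[]>("SELECT BLOBFIELD FROM TABLE WHERE ID = 1");
using (var ms = new System.IO.MemoryStream(bytes))
{
    // hand the stream to whatever needs it
}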
I have a process that archives MongoDB collections by getting an IAsyncCursor and writing the raw bytes out to an Azure Blob stream. This seems to be quite efficient and works. Here is the working code.
var cursor = await clientDb.GetCollection<RawBsonDocument>(collectionPath).Find(new BsonDocument()).ToCursorAsync();
while (cursor.MoveNext())
{
    foreach (var document in cursor.Current)
    {
        // copy each document's raw BSON bytes straight into the blob stream
        var bytes = new byte[document.Slice.Length];
        document.Slice.GetBytes(0, bytes, 0, document.Slice.Length);
        blobStream.Write(bytes, 0, bytes.Length);
    }
}
However, the only way I've figured out to move this data from the archive back into MongoDB is to load the entire raw byte array into a memory stream and call .InsertOneAsync() on MongoDB. This works fine for smaller collections, but for very large collections I'm getting MongoDB errors. It also obviously isn't very memory efficient. Is there any way to stream raw byte data into MongoDB, or to use a cursor like I'm doing on the read?
var rawRef = clientDb.GetCollection<RawBsonDocument>(collectionPath);
using (var ms = new MemoryStream())
{
    // 'stream' is the archived blob stream opened for reading elsewhere
    await stream.CopyToAsync(ms);
    var bytes = ms.ToArray();
    var rawBson = new RawBsonDocument(bytes);
    await rawRef.InsertOneAsync(rawBson);
}
Here is the error I get if the collection is too large.
MongoDB.Driver.MongoConnectionException : An exception occurred while sending a message to the server.
---- System.IO.IOException : Unable to write data to the transport connection: An established connection was aborted by the software in your host machine..
-------- System.Net.Sockets.SocketException : An established connection was aborted by the software in your host machine.
Instead of copying the whole stream into a byte array and parsing that into a single RawBsonDocument, you can deserialize the documents one by one, e.g.:
while (stream.Position < stream.Length)
{
var rawBson = BsonSerializer.Deserialize<RawBsonDocument>(stream);
await rawRef.InsertOneAsync(rawBson);
}
The stream is read one document at a time. The sample above inserts each document directly into the database. If you want to insert in batches, you can collect a reasonable number of documents in a list and use InsertManyAsync, as sketched below.
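A minimal sketch of that batching idea (the batch size of 1000 is an arbitrary assumption; tune it to your document sizes):

var batch = new List<RawBsonDocument>();
while (stream.Position < stream.Length)
{
    batch.Add(BsonSerializer.Deserialize<RawBsonDocument>(stream));
    if (batch.Count >= 1000) // flush a full batch
    {
        await rawRef.InsertManyAsync(batch);
        batch.Clear();
    }
}
if (batch.Count > 0) // flush the remainder
    await rawRef.InsertManyAsync(batch);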
I am storing image file details in the database. Along with the path, I am also planning to store the byte array in case the upload folder is accidentally deleted. I will be using a SQL Server Express database since I want to use the app locally. Given that SQL Server Express is limited to 10GB, is it advisable to store the image byte arrays in the database?
Is the size of the byte array the same as the size of the image? I'm expecting the images to be around 5-10MB.
using (var fileStream = new FileStream(filePath, FileMode.Create))
{
await postPatientInformationEntity.PatientPhoto.CopyToAsync(fileStream);
fileInformationEntity.Path = filePath;
fileInformationEntity.Name = postPatientInformationEntity.PatientPhoto.FileName;
fileInformationEntity.Type = postPatientInformationEntity.PatientPhoto.ContentType;
}
using (var ms = new MemoryStream())
{
    await postPatientInformationEntity.PatientPhoto.CopyToAsync(ms);
    var fileBytes = ms.ToArray();
    fileInformationEntity.FileData = fileBytes;
    // act on the raw byte array
}
An alternative, and better, method is to store the images outside the database and store only a link to the image file; a text field in your database table is enough to hold this information. The only problem with this approach is that you must keep the link field synchronized with your file system. If the images need to be accessed from another server, tools like rsync can move files from one server to the other, and such tools can be automated to sync the file systems at regular intervals. The path of the uploaded file is all you need to store.
I'm trying to grab an image from the web and add it to my database.
String lsResponse = string.Empty;
using (HttpWebResponse lxResponse = (HttpWebResponse)req1.GetResponse())
{
    using (BinaryReader reader = new BinaryReader(lxResponse.GetResponseStream()))
    {
        Byte[] lnByte = reader.ReadBytes(1 * 1024 * 1024 * 10); // read up to 10MB
        using (FileStream lxFS = new FileStream(id + ".png", FileMode.Create))
        {
            lxFS.Write(lnByte, 0, lnByte.Length);
        }
    }
}
My SQL Server data type is varbinary(MAX). I tried different types (image, ...), but it did not work.
Then I put it into the database:
SqlCommand cmd = new SqlCommand("INSERT INTO PlayersDB (id) VALUES (@id);", conn);
cmd.Parameters.AddWithValue("@id", lnByte);
I keep getting an error:
Implicit conversion from data type nvarchar to varbinary(max) is not allowed. Use the CONVERT function to run this query.
So my program doesn't see my lnByte as binary?
First, be sure the data type is correct. I don't know much about SQL Server, but I've encountered a similar problem in MySQL: I changed the data type to BLOB and it worked.
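If the column really is varbinary(MAX), one thing worth checking (an assumption on my part, since the full code isn't shown) is that the parameter is explicitly typed as binary rather than left to AddWithValue's type inference:

using (SqlCommand cmd = new SqlCommand("INSERT INTO PlayersDB (id) VALUES (@id);", conn))
{
    cmd.Parameters.Add("@id", SqlDbType.VarBinary, -1).Value = lnByte; // -1 length means varbinary(MAX)
    cmd.ExecuteNonQuery();
}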
We use blobs for storing images within a SQL database at my workplace. I'm not particularly experienced with it myself, but here are some TechNet resources that can explain it better than I can.
Binary Large Object (Blob) Data (SQL Server)
FILESTREAM (SQL Server)
I want to select a picture that is saved as a large object in a PostgreSQL database.
I know I can use lo_export to do this,
but there is a problem: I want to save the picture directly to my computer, because I can't access the files saved on the server using lo_export.
(I think it would be best if the picture could be transferred to my computer by a select query.)
I don't exactly know my way around C#, but the Npgsql manual has an example, sort of like this, of writing a bytea column to a file:
command = new NpgsqlCommand("select blob from t where id = 1", conn);
Byte[] result = (Byte[])command.ExecuteScalar();
FileStream fs = new FileStream(args[0] + "database", FileMode.Create, FileAccess.Write);
BinaryWriter bw = new BinaryWriter(new BufferedStream(fs));
bw.Write(result);
bw.Flush();
bw.Close();
fs.Close();
So you just read it out of the database pretty much like any other column and write it to a local file. The example is about halfway down the page I linked to; just search for "bytea" and you'll find it.
UPDATE: For large objects, the process appears to be similar but less SQL-ish. The manual (as linked to above) includes a few large object examples:
NpgsqlTransaction t = Polacz.BeginTransaction();
LargeObjectManager lbm = new LargeObjectManager(Polacz);
LargeObject lo = lbm.Open(takeOID(idtowaru), LargeObjectManager.READWRITE); // take picture OID from method takeOID
byte[] buf = lo.Read(lo.Size()); // read the whole large object into memory
MemoryStream ms = new MemoryStream();
ms.Write(buf, 0, buf.Length);
ms.Position = 0; // rewind before handing the stream to Image.FromStream
// ...
Image zdjecie = Image.FromStream(ms);
Search the manual for "large object" and you'll find it.
Not familiar with C#, but if you have contrib/dblink around and better access to a separate PostgreSQL server, this might work:
1. Select the large object from the bad DB server.
2. Copy the large object into the good DB server using dblink.
3. Run lo_export on the good DB server.
If your pictures don't exceed 1GB (and you don't need to access only parts of their bytes), then bytea is the better choice for storing them.
A lot of SQL GUI tools let you download (or even view) the content of bytea columns directly.
I am building a C# desktop application and I need to save a file into a database. I have a file chooser which gives me the correct path of the file. Now my question is how to save that file into the database using its path.
It really depends on the type and size of the file. If it's a text file, then you could use File.ReadAllText() to get a string that you can save in your database.
If it's not a text file, then you could use File.ReadAllBytes() to get the file's binary data, and then save that to your database.
Be careful though: databases are not a great way to store heavy files (you'll run into performance issues).
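As a minimal sketch of those two calls (filePath here stands for the path from your file chooser):

string text = File.ReadAllText(filePath);   // text file: save the string in a text/nvarchar column
byte[] data = File.ReadAllBytes(filePath);  // any other file: save the bytes in a varbinary column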
long numBytes = new FileInfo(fileName).Length; // FileInfo.Length is a long
byte[] buff;
using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
using (BinaryReader br = new BinaryReader(fs))
{
    buff = br.ReadBytes((int)numBytes); // cast needed; ReadBytes takes an int
}
Then you upload it to the DB like anything else; I assume you are using a varbinary column (SQL Server's BLOB type).
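A sketch of that upload, assuming a table named Files with an nvarchar Name column and a varbinary(max) Data column (both names invented for illustration):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("INSERT INTO Files (Name, Data) VALUES (@name, @data)", conn))
{
    cmd.Parameters.AddWithValue("@name", Path.GetFileName(fileName));
    cmd.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = buff; // buff from the snippet above; -1 means MAX
    conn.Open();
    cmd.ExecuteNonQuery();
}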
So FILESTREAM would be the way to go, but since you're using SQL 2K5 you'll have to do it the read-into-memory way, which consumes a lot of resources.
First off, the column type varchar(MAX) is your friend; this gives you ~2GB of data to play with, which is pretty big for most uses.
Next, read the data into a byte array and convert it to a Base64 string:
FileInfo _fileInfo = new FileInfo(openFileDialog1.FileName);
if (_fileInfo.Length < 2147483647) // 2147483647 bytes (~2GB) is the varchar(MAX) limit; note Base64 output is ~33% larger than the input
{
    byte[] _fileData = File.ReadAllBytes(_fileInfo.FullName); // reads the whole file and closes it
    string _data = Convert.ToBase64String(_fileData);
}
else
{
    MessageBox.Show("File is too large for database.");
}
And reverse the process to recover
byte[] _fileData = Convert.FromBase64String(_data);
You'll want to release those strings as quickly as possible by setting them to string.Empty as soon as you have finished using them!
But if you can, just upgrade to 2008 and use FILESTREAM.
If you're using SQL Server 2008, you could use FILESTREAM (getting started guide here). An example of using this functionality from C# is here.
You would need to read the file into a byte array and then store that as a blob field in the database, possibly along with the name you want to give the file and the file type.
You could just reverse the process for getting the file back out again.
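A rough sketch of the reverse direction, reading the blob back out into a file (the Files/Name/Data schema is again just an assumption for illustration):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Data FROM Files WHERE Name = @name", conn))
{
    cmd.Parameters.AddWithValue("@name", fileName);
    conn.Open();
    byte[] data = (byte[])cmd.ExecuteScalar();
    File.WriteAllBytes(outputPath, data); // outputPath is wherever you want to restore the file
}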