I have a function that reads a posted file into a byte array (data):
int contentLength = postedFile.ContentLength;
byte[] data = new byte[contentLength];
postedFile.InputStream.Read(data, 0, contentLength);
Later I use this byte array to construct a System.Drawing.Image object (where data is the byte array):
MemoryStream ms = new MemoryStream(data);
Image bitmap = Image.FromStream(ms);
I get the following exception "ArgumentException: Parameter is not valid."
The original posted file contained a 500 KB JPEG image...
Any ideas why this isn't working?
Note: I assure you I have a valid reason for converting to a byte array and then to a MemoryStream!
That's most likely because you didn't get all the file data into the byte array. The Read method doesn't have to return as many bytes as you request, and it returns the number of bytes actually put in the array. You have to loop until you have gotten all the data:
int contentLength = postedFile.ContentLength;
byte[] data = new byte[contentLength];
for (int pos = 0; pos < contentLength; ) {
pos += postedFile.InputStream.Read(data, pos, contentLength - pos);
}
This is a common mistake when reading from a stream; I have seen it many times.
Edit:
With the check for an early end of stream, as Matthew suggested, the code would be:
int contentLength = postedFile.ContentLength;
byte[] data = new byte[contentLength];
for (int pos = 0; pos < contentLength; ) {
int len = postedFile.InputStream.Read(data, pos, contentLength - pos);
if (len == 0) {
throw new ApplicationException("Upload aborted.");
}
pos += len;
}
You're not checking the return value of postedFile.InputStream.Read. It is not at all guaranteed to fill the array on the first call. That will leave a corrupt JPEG in data (0's instead of file content).
Have you checked the return value from the Read() call to verify that it is actually reading all of the content? Perhaps Read() is only returning a portion of the stream, requiring you to loop the Read() call until all of the bytes are consumed.
Any reason why you don't simply do this:
Image bitmap = Image.FromStream(postedFile.InputStream);
I have had problems loading images in .NET that more robust image libraries could open. It's possible that the specific JPEG you have is not supported by .NET; JPEG is not a single encoding, and a variety of compression schemes are allowed.
You could try it with another image that you know is in a supported format.
I'm a novice programmer. I'm creating a library to process binary files of a certain type -- like a codec (though without a need to process a progressive stream coming over a wire). I'm looking for an efficient way to read the file into memory and then parse portions of the data as needed. In particular, I'd like to avoid large memory copies, hopefully without a lot of added complexity to avoid that.
In some situations, I want to do sequential reading of values in the data. For this, a MemoryStream works well.
FileStream fs = new FileStream(_fileName, FileMode.Open, FileAccess.Read);
byte[] bytes = new byte[fs.Length];
fs.Read(bytes, 0, bytes.Length);
_ms = new MemoryStream(bytes, 0, bytes.Length, false, true);
fs.Close();
(That involved a copy from the bytes array into the memory stream; that's one time, and I don't know of a way to avoid it.)
With the memory stream, it's easy to seek to arbitrary positions and then start reading structure members. E.g.,
_ms.Seek(_tableRecord.Offset, SeekOrigin.Begin);
byte[] ab32 = new byte[4];
_ms.Read(ab32, 0, 4);
_version = ConvertToUint(ab32);
_ms.Read(ab32, 0, 4);
_numRecords = ConvertToUint(ab32);
// etc.
But there may also be times when I want to take a slice out of the memory corresponding to some large structure and then pass into a method for certain processing. MemoryStream doesn't support that. I could always pass the MemoryStream plus offset and length, though that might not always be the most convenient.
Instead of MemoryStream, I could store the data in memory using Memory<byte>. That supports slicing, but not sequential reading.
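For example, slicing with Memory<byte> is cheap and never copies (a rough sketch reusing the bytes array and _tableRecord fields from above; BinaryPrimitives and the big-endian field are only for illustration):
// Wrapping the array makes no copy; Memory<byte> just references it.
Memory<byte> memory = bytes;

// Slice is O(1): it records an offset and length over the same underlying array.
Memory<byte> table = memory.Slice(_tableRecord.Offset, _tableRecord.Length);

// Reading a field requires explicit offsets; there is no built-in "current position".
uint version = System.Buffers.Binary.BinaryPrimitives.ReadUInt32BigEndian(table.Span);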
If for some situation I want to get a slice (rather than pass stream & offset/length), I could construct an ArraySegment from MemoryStream.GetBuffer.
ArraySegment<byte> segment = new ArraySegment<byte>(ms.GetBuffer(), offset, length);
It's not clear to me, though, if that will result in a (potentially large) copy, or if that uses a reference into the same memory held by the MemoryStream. I gather that GetBuffer exposes the underlying memory rather than providing a copy; and that ArraySegment will point into the same memory?
There will be times when I need to get a slice that is a copy as I'll need to modify some elements and then process that, but without changing the original. If ArraySegment gets a reference rather than a copy, I gather I could use ArraySegment<byte>.ToArray()?
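To make the two cases concrete, this is roughly what I have in mind (a sketch only; I haven't verified the no-copy behaviour myself):
// A view over the MemoryStream's internal buffer: no copy, same memory.
// (GetBuffer works here because the stream was created with publiclyVisible: true.)
byte[] buffer = _ms.GetBuffer();
ArraySegment<byte> view = new ArraySegment<byte>(buffer, _tableRecord.Offset, _tableRecord.Length);

// An independent copy that can be modified without touching the original.
byte[] editable = view.ToArray();
editable[0] ^= 0xFF;   // changes only the copy, not the data in _ms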
So, my questions are:
Is MemoryStream the best approach? Is there any other type that allows sequential reading like MemoryStream but also allows slicing like Memory?
If I want a slice without copying memory, will ArraySegment<byte>(ms.GetBuffer(), offset, length) do that?
Then if I need a copy that can be modified without affecting the original, use ArraySegment<byte>.ToArray()?
Is there a way to read the data from a file directly into a new MemoryStream without creating a temporary byte array that gets copied?
Am I approaching all this the best way?
To get the initial MemoryStream from reading the file, the following works:
byte[] bytes;
try
{
// File.ReadAllBytes opens a filestream and then ensures it is closed
bytes = File.ReadAllBytes(_fi.FullName);
_ms = new MemoryStream(bytes, 0, bytes.Length, false, true);
}
catch (IOException)
{
throw; // rethrow, preserving the original stack trace
}
File.ReadAllBytes() copies the file content into memory. Internally it uses a using statement, which ensures the file gets closed, so no finally block is needed.
I can read individual values from the MemoryStream using MemoryStream.Read. These calls involve copies of those values, which is fine.
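For example, pulling a single four-byte field out at the current position looks like this (a sketch; the big-endian conversion is my assumption about the file format):
// Read copies the next 4 bytes out of the stream's internal buffer into 'field'.
byte[] field = new byte[4];
if (_ms.Read(field, 0, 4) != 4)
    throw new EndOfStreamException();
uint value = (uint)((field[0] << 24) | (field[1] << 16) | (field[2] << 8) | field[3]);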
In one situation, I needed to read a table out of the file, change a value, and then calculate a checksum of the entire file with that change in place. Instead of copying the entire file so that I could edit one part, I was able to calculate the checksum in progressive steps: first on the initial, unchanged segment of the file, then continue with the middle segment that was changed, then continue with the remainder.
For this I could process the first and final segments using the MemoryStream. This involved lots of reads, with each read copying; but those copies were transient variables, so no significant working set increase.
For the middle segment, that needed to be copied since it had to be changed (but the original version needed to be kept intact). The following worked:
// get ref (not copy!) to the byte array underlying the MemoryStream
byte[] fileData = _ms.GetBuffer();
// determine the required length
int length = _tableRecord.Length;
// create array to hold the copy
byte[] segmentCopy = new byte[length];
// get the copy
Array.ConstrainedCopy(fileData, _tableRecord.Offset, segmentCopy, 0, length);
After modifying values in segmentCopy, I then needed to pass this to my static method for calculating checksums, which expected a MemoryStream (for sequential reading). This worked:
// new MemoryStream will hold a ref to the segmentCopy array (no new copy!)
MemoryStream ms = new MemoryStream(segmentCopy, 0, segmentCopy.Length);
What I haven't needed to do yet, but will want to do, is to get a slice of the MemoryStream that doesn't involve copying. This works:
MemoryStream sliceFromMS = new MemoryStream(fileData, offset, length);
From above, fileData was a ref to the array underlying the original MemoryStream. Now sliceFromMS will have a ref to a segment within that same array.
You can use FileStream.Seek directly; as I understand it, there is no need to load the data into memory first just to get that behaviour from MemoryStream.
In the following example, str1 and str2 are equal:
using (var fs = new FileStream(@"C:\Users\bar_v\OneDrive\Desktop\js_balancer.txt", FileMode.Open))
{
var buffer = new byte[20];
fs.Read(buffer, 0, 20);
var str1= Encoding.ASCII.GetString(buffer);
fs.Seek(0, SeekOrigin.Begin);
fs.Read(buffer, 0, 20);
var str2 = Encoding.ASCII.GetString(buffer);
}
By the way, when you create a new MemoryStream over an existing byte array, you don't copy the array; the stream just keeps a reference to it:
public MemoryStream(byte[] buffer, bool writable)
{
if (buffer == null)
throw new ArgumentNullException(nameof(buffer), SR.ArgumentNull_Buffer);
_buffer = buffer;
_length = _capacity = buffer.Length;
_writable = writable;
_exposable = false;
_origin = 0;
_isOpen = true;
}
But when reading, as we can see, copying occurs:
public override int Read(byte[] buffer, int offset, int count)
{
if (buffer == null)
throw new ArgumentNullException(nameof(buffer), SR.ArgumentNull_Buffer);
if (offset < 0)
throw new ArgumentOutOfRangeException(nameof(offset), SR.ArgumentOutOfRange_NeedNonNegNum);
if (count < 0)
throw new ArgumentOutOfRangeException(nameof(count), SR.ArgumentOutOfRange_NeedNonNegNum);
if (buffer.Length - offset < count)
throw new ArgumentException(SR.Argument_InvalidOffLen);
EnsureNotClosed();
int n = _length - _position;
if (n > count)
n = count;
if (n <= 0)
return 0;
Debug.Assert(_position + n >= 0, "_position + n >= 0"); // len is less than 2^31 -1.
if (n <= 8)
{
int byteCount = n;
while (--byteCount >= 0)
buffer[offset + byteCount] = _buffer[_position + byteCount];
}
else
Buffer.BlockCopy(_buffer, _position, buffer, offset, n);
_position += n;
return n;
}
I am uploading JPEG images as fast as I can to a web service (it is the requirement I have been given).
I am using an async call to the web service and I am calling it from a timer.
I am trying to optimise as much as possible and tend to use an old laptop for testing. On a normal/reasonable build PC all is OK. On the laptop I get high RAM usage.
I know I will get a higher RAM usage using that old laptop but I want to know the lowest spec PC the app will work on.
As you can see in the code below, I am converting the JPEG image into a byte array and then uploading that byte array.
If I can reduce/compress/zip the byte array then I am hoping this will be one way of improving memory usage.
I know JPEGs are already compressed, but if I compare the current byte array with the previous byte array and upload only the difference between the two, I could perhaps compress it even further, on the basis that many of the byte values would be zero.
If I used a video encoder (which would do the trick) it would not be as close to real time as I would like.
Is there an optimal way of comparing two byte arrays and outputting the difference to a third byte array? I have looked around but could not find an answer that I liked.
This is my code on the client:
bool _uploaded = true;
private void tmrLiveFeed_Tick(object sender, EventArgs e)
{
try
{
if (_uploaded)
{
_uploaded = false;
_live.StreamerAsync(Shared.Alias, imageToByteArray((Bitmap)_frame.Clone()), Guid.NewGuid().ToString()); //web service being called here
}
}
catch (Exception _ex)
{
//do some thing but probably time out error here
}
}
//web service has finished the client invoke
void _live_StreamerCompleted(object sender, AsyncCompletedEventArgs e)
{
_uploaded = true; //we are now saying we start to upload the next byte array
}
private wsLive.Live _live = new wsLive.Live(); //web service
private byte[] imageToByteArray(Image imageIn)
{
MemoryStream ms = new MemoryStream();
imageIn.Save(ms,System.Drawing.Imaging.ImageFormat.Jpeg); //convert image to best image compression
imageIn.Dispose();
return ms.ToArray();
}
thanks...
As C.Evenhuis said, JPEG files are compressed, and changing even a few pixels results in a completely different file. So comparing the resulting JPEG files is useless.
BUT you can compare your Image objects; a quick search turns up this:
unsafe Bitmap PixelDiff(Bitmap a, Bitmap b)
{
Bitmap output = new Bitmap(a.Width, a.Height, PixelFormat.Format32bppArgb);
Rectangle rect = new Rectangle(Point.Empty, a.Size);
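// Note: LockBitsDisposable is not part of System.Drawing; it is assumed to be a helper extension method that wraps LockBits/UnlockBits in an IDisposable.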
using (var aData = a.LockBitsDisposable(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb))
using (var bData = b.LockBitsDisposable(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb))
using (var outputData = output.LockBitsDisposable(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb))
{
byte* aPtr = (byte*)aData.Scan0;
byte* bPtr = (byte*)bData.Scan0;
byte* outputPtr = (byte*)outputData.Scan0;
int len = aData.Stride * aData.Height;
for (int i = 0; i < len; i++)
{
// For alpha use the average of both images (otherwise pixels with the same alpha won't be visible)
if ((i + 1) % 4 == 0)
*outputPtr = (byte)((*aPtr + *bPtr) / 2);
else
*outputPtr = (byte)~(*aPtr ^ *bPtr);
outputPtr++;
aPtr++;
bPtr++;
}
}
return output;
}
If your goal is to find out whether two byte arrays contain exactly the same data, you can create an MD5 hash and compare these as others have suggested. However in your question you mention you want to upload the difference which means the result of the comparison must be more than a simple yes/no.
As JPEGs are already compressed, the smallest change to the image could lead to a large difference in the binary data. I don't think any two JPEGs contain binary data similar enough to easily compare.
For BMP files you may find that changing a single pixel affects only one or a few bytes, and more importantly, the data for the pixel at a certain offset in the image is located at the same position in both binary files (given that both images are of equal size and color depth). So for BMPs the difference in binary data directly relates to the difference in the images.
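If you do end up diffing equal-length uncompressed arrays, the comparison itself is simple (a minimal sketch; XOR is just one way to encode the difference so that identical bytes become zero, which then compresses well):
// Produce a third array in which identical bytes become 0x00.
static byte[] Diff(byte[] current, byte[] previous)
{
    if (current.Length != previous.Length)
        throw new ArgumentException("Arrays must be the same length.");

    byte[] diff = new byte[current.Length];
    for (int i = 0; i < current.Length; i++)
        diff[i] = (byte)(current[i] ^ previous[i]);
    return diff;   // XOR-ing the diff with 'previous' on the receiver restores 'current'
}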
In short, I don't think obtaining the binary difference between JPEG files will improve the size of the data to be sent.
I'm trying to send an image over a network stream. I have send and receive functions,
and I always get an invalid parameter exception when using the Image.FromStream function.
This is my code:
I am getting the picture from the screen, converting it to a byte[],
then inserting it into a MemoryStream that I send over a NetworkStream.
private void SendData()
{
StreamWriter swWriter = new StreamWriter(this._nsClient);
// BinaryFormatter bfFormater = new BinaryFormatter();
// this method
lock (this._secLocker)
{
while (this._bShareScreen)
{
// Check if you need to send the screen
if (this._bShareScreen)
{
MemoryStream msStream = new MemoryStream();
this._imgScreenSend = new Bitmap(this._imgScreenSend.Width, this._imgScreenSend.Height);
// Send an image code
swWriter.WriteLine(General.IMAGE);
swWriter.Flush();
// Copy image from screen
this._grGraphics.CopyFromScreen(0, 0, 0, 0, this._sizScreenSize);
this._imgScreenSend.Save(msStream, System.Drawing.Imaging.ImageFormat.Jpeg);
msStream.Seek(0, SeekOrigin.Begin);
// Create the package
byte[] btPackage = msStream.ToArray();
// Send its length
swWriter.WriteLine(btPackage.Length.ToString());
swWriter.Flush();
// Send the package
_nsClient.Write(btPackage, 0, btPackage.Length);
_nsClient.Flush();
}
}
}
}
private void ReciveData()
{
StreamReader srReader = new StreamReader(this._nsClient);
string strMsgCode = String.Empty;
bool bContinue = true;
//BinaryFormatter bfFormater = new BinaryFormatter();
DataContractSerializer x = new DataContractSerializer(typeof(Image));
// Lock this method
lock (this._objLocker)
{
while (bContinue)
{
// Get the next msg
strMsgCode = srReader.ReadLine();
// Check code
switch (strMsgCode)
{
case (General.IMAGE):
{
// Read bytearray
int nSize = int.Parse(srReader.ReadLine().ToString());
byte[] btImageStream = new byte[nSize];
this._nsClient.Read(btImageStream, 0, nSize);
// Get the Stream
MemoryStream msImageStream = new MemoryStream(btImageStream, 0, btImageStream.Length);
// Set the position, so we read the image from the beginning of the stream
msImageStream.Position = 0;
// Build the image from the stream
this._imgScreenImg = Image.FromStream(msImageStream); // Error Here
Part of the problem is that you're using WriteLine() which adds Environment.NewLine at the end of the write. When you just call Read() on the other end, you're not dealing with that newline properly.
What you want to do is just Write() to the stream and then read it back on the other end.
The conversion to a string is strange.
What you're doing, when transferring an image, is sending an array of bytes. All you need to do is send the length of the expected stream and then the image itself, and then read the length and the byte array on the other side.
The most basic and naive way of transferring a byte array over the wire is to first send an integer that represents the length of the array, and read that length on the receiving end.
Once you now know how much data to send/receive, you then send the array as a raw array of bytes on the wire and read the length that you previously determined on the other side.
Now that you have the raw bytes and a size, you can reconstruct the array from your buffer into a valid image object (or whatever other binary format you've just sent).
Also, I'm not sure why that DataContractSerializer is there. It's raw binary data, and you're already manually serializing it to bytes anyway, so that thing isn't useful.
One of the fundamental problems of network programming using sockets and streams is defining your protocol, because the receiving end can't otherwise know what to expect or when the stream will end. That's why every common protocol out there either has a very strictly defined packet size and layout or else does something like sending length/data pairs, so that the receiving end knows what to do.
If you implement a very simple protocol such as sending an integer which represents array length and reading an integer on the receiving end, you've accomplished half the goal. Then, both sender and receiver are in agreement as to what happens next. Then, the sender sends exactly that number of bytes on the wire and the receiver reads exactly that number of bytes on the wire and considers the read to be finished. What you now have is an exact copy of the original byte array on the receiving side and you can then do with it as you please, since you know what that data was in the first place.
If you need a code example, I can provide a simple one or else there are numerous examples available on the net.
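For instance, a minimal sketch of such an exchange (not your exact code; it assumes System.IO, System.Net.Sockets and System.Drawing, and a connected NetworkStream on each side):
// Sender: write a 4-byte length prefix, then the raw JPEG bytes.
static void SendImage(NetworkStream stream, byte[] imageBytes)
{
    BinaryWriter writer = new BinaryWriter(stream);
    writer.Write(imageBytes.Length);   // 4-byte length prefix
    writer.Write(imageBytes);          // the payload itself
    writer.Flush();
}

// Receiver: read the length, then keep reading until that many bytes have arrived.
static Image ReceiveImage(NetworkStream stream)
{
    BinaryReader reader = new BinaryReader(stream);
    int length = reader.ReadInt32();
    byte[] buffer = reader.ReadBytes(length);   // ReadBytes loops internally until 'length' bytes or end of stream
    if (buffer.Length != length)
        throw new EndOfStreamException("Connection closed before the full image arrived.");
    return Image.FromStream(new MemoryStream(buffer));
}
The BinaryWriter/BinaryReader pair is just one convenient way to agree on the 4-byte length; writing and reading the length bytes by hand works equally well.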
Trying to keep it short:
The Stream.Read function (which you use) returns an int stating how many bytes were actually read; this is returned to you so you can verify that all the bytes you need have been received.
something like:
int byteCount = 0;
while (byteCount < nSize)
{
int read = this._nsClient.Read(btImageStream, byteCount, nSize - byteCount);
if (read == 0)
throw new EndOfStreamException("Connection closed before the full image arrived.");
byteCount += read;
}
This is not the best code for the job, but it illustrates the idea.
I have an HttpHandler returning an image through Response.OutputStream. I have the following code:
_imageProvider.GetImage().CopyTo(context.Response.OutputStream);
The GetImage() method returns a Stream, which is actually a MemoryStream instance, and 0 bytes are returned to the browser. If I change the GetImage() method signature to return a MemoryStream and use the following line of code:
_imageProvider.GetImage().WriteTo(context.Response.OutputStream);
It works and the browser gets an image. So what is the difference between WriteTo and CopyTo in the MemoryStream class, and what is the recommended way to make this work while keeping Stream in the GetImage() method signature?
WriteTo() writes the entire contents of the MemoryStream regardless of the current position; CopyTo(), on the other hand, will copy only whatever data remains after the current position in the stream. That means if you did not reset the position yourself, no data will be read at all.
Most likely you just miss the following in your first version:
memoryStream.Position = 0;
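In other words, something like this should make the Stream-returning version work (a sketch; it assumes GetImage() leaves the MemoryStream positioned at the end after writing into it, and that the returned Stream is seekable):
Stream image = _imageProvider.GetImage();
image.Position = 0;                           // rewind to the start before copying
image.CopyTo(context.Response.OutputStream);  // now the full content is sent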
According to Reflector, this is what CopyTo() ends up calling:
private void InternalCopyTo(Stream destination, int bufferSize)
{
int num;
byte[] buffer = new byte[bufferSize];
while ((num = this.Read(buffer, 0, buffer.Length)) != 0)
{
destination.Write(buffer, 0, num);
}
}
I don't see any "remaining data" mechanism here... It copies everything from this stream to the destination (in blocks of the buffer size).
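The "remains" behaviour is implicit in that loop: this.Read() starts at the stream's current Position, so anything before it is never copied. A quick sketch of the difference (a minimal example, not from the question's code):
MemoryStream ms = new MemoryStream();
ms.Write(new byte[] { 1, 2, 3, 4 }, 0, 4);    // Position is now 4 (the end)

MemoryStream target1 = new MemoryStream();
ms.CopyTo(target1);                           // copies nothing: Read() starts at Position 4

ms.Position = 0;
MemoryStream target2 = new MemoryStream();
ms.CopyTo(target2);                           // copies all 4 bytes

MemoryStream target3 = new MemoryStream();
ms.WriteTo(target3);                          // WriteTo ignores Position and writes the whole content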
I'm very sorry for the conservative title and for the question itself, but I'm lost.
The samples provided with ICSharpCode.SharpZipLib don't include what I'm searching for.
I want to decompress a byte[] by feeding it to an InflaterInputStream (ICSharpCode.SharpZipLib.Zip.Compression.Streams.InflaterInputStream).
I found a decompress function, but it doesn't work.
public static byte[] Decompress(byte[] Bytes)
{
ICSharpCode.SharpZipLib.Zip.Compression.Streams.InflaterInputStream stream =
new ICSharpCode.SharpZipLib.Zip.Compression.Streams.InflaterInputStream(new MemoryStream(Bytes));
MemoryStream memory = new MemoryStream();
byte[] writeData = new byte[4096];
int size;
while (true)
{
size = stream.Read(writeData, 0, writeData.Length);
if (size > 0)
{
memory.Write(writeData, 0, size);
}
else break;
}
stream.Close();
return memory.ToArray();
}
It throws an exception at the line (size = stream.Read(writeData, 0, writeData.Length);) saying the data has an invalid header.
My question is not how to fix that function; it is not part of the library, I just found it by googling. My question is: how can I decompress the same way the function does with InflaterInputStream, but without the exceptions?
Thanks and again - sorry for the conservative question.
The code in Lucene is very nice:
public static byte[] Compress(byte[] input) {
// Create the compressor with highest level of compression
Deflater compressor = new Deflater();
compressor.SetLevel(Deflater.BEST_COMPRESSION);
// Give the compressor the data to compress
compressor.SetInput(input);
compressor.Finish();
/*
* Create an expandable byte array to hold the compressed data.
* You cannot use an array that's the same size as the original because
* there is no guarantee that the compressed data will be smaller than
* the uncompressed data.
*/
MemoryStream bos = new MemoryStream(input.Length);
// Compress the data
byte[] buf = new byte[1024];
while (!compressor.IsFinished) {
int count = compressor.Deflate(buf);
bos.Write(buf, 0, count);
}
// Get the compressed data
return bos.ToArray();
}
public static byte[] Uncompress(byte[] input) {
Inflater decompressor = new Inflater();
decompressor.SetInput(input);
// Create an expandable byte array to hold the decompressed data
MemoryStream bos = new MemoryStream(input.Length);
// Decompress the data
byte[] buf = new byte[1024];
while (!decompressor.IsFinished) {
int count = decompressor.Inflate(buf);
bos.Write(buf, 0, count);
}
// Get the decompressed data
return bos.ToArray();
}
Well it sounds like the data is just inappropriate, and that otherwise the code would work okay. (Admittedly I'd use a "using" statement for the streams instead of calling Close explicitly.)
Where did you get your data from?
Why don't you use the System.IO.Compression.DeflateStream class (available since .Net 2.0)? This uses the same compression/decompression method but doesn't require an extra library dependency.
Since .NET 2.0 you only need ICSharpCode.ZipLib if you need the file container support.
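For reference, a deflate-based Decompress with the built-in class looks roughly like this (a sketch only; note that, as far as I know, DeflateStream expects raw deflate data, while SharpZipLib's Deflater/InflaterInputStream default to the zlib format with a two-byte header, so the two are not interchangeable unless you create the Deflater with the no-header option or skip the header yourself):
public static byte[] Decompress(byte[] bytes)
{
    using (MemoryStream input = new MemoryStream(bytes))
    using (System.IO.Compression.DeflateStream inflater =
           new System.IO.Compression.DeflateStream(input, System.IO.Compression.CompressionMode.Decompress))
    using (MemoryStream output = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int size;
        while ((size = inflater.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, size);
        }
        return output.ToArray();
    }
}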