I am uploading jpeg images as fast as i can to a web service (it is the requirement I have been given).
I am using an async call to the web service, and I am calling it within a timer.
I am trying to optimise as much as possible and tend to use an old laptop for testing. On a normal/reasonable build PC all is OK. On the laptop I get high RAM usage.
I know I will get higher RAM usage on that old laptop, but I want to know the lowest-spec PC the app will work on.
As you can see in the code below I am converting the jpeg image into a byte array and then I upload the byte array.
If I can reduce/compress/zip the byte array, then I am hoping this will be one of the ways of improving memory usage.
I know JPEGs are already compressed, but if I compare the current byte array with the previous byte array and upload only the difference between the two byte arrays, I could perhaps compress it even more, on the basis that some of the byte values will be zero.
If I used a video encoder (which would do the trick), it would not be as real-time as I would like.
Is there an optimum way of comparing 2 byte arrays and outputting to a 3rd byte array? I have looked around but could not find an answer that I liked.
This is my code on the client:
bool _uploaded = true;

private void tmrLiveFeed_Tick(object sender, EventArgs e)
{
    try
    {
        if (_uploaded)
        {
            _uploaded = false;
            _live.StreamerAsync(Shared.Alias, imageToByteArray((Bitmap)_frame.Clone()), Guid.NewGuid().ToString()); //web service being called here
        }
    }
    catch (Exception _ex)
    {
        //do something, but probably a time out error here
    }
}

//web service has finished the client invoke
void _live_StreamerCompleted(object sender, AsyncCompletedEventArgs e)
{
    _uploaded = true; //we are now saying we can start to upload the next byte array
}

private wsLive.Live _live = new wsLive.Live(); //web service

private byte[] imageToByteArray(Image imageIn)
{
    MemoryStream ms = new MemoryStream();
    imageIn.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg); //convert image to best image compression
    imageIn.Dispose();
    return ms.ToArray();
}
thanks...
As C.Evenhuis said - JPEG files are compressed, and changing even a few pixels results in a completely different file. So - comparing the resulting JPEG files is useless.
BUT you can compare your Image objects - a quick search turns up this:
unsafe Bitmap PixelDiff(Bitmap a, Bitmap b)
{
    Bitmap output = new Bitmap(a.Width, a.Height, PixelFormat.Format32bppArgb);
    Rectangle rect = new Rectangle(Point.Empty, a.Size);
    using (var aData = a.LockBitsDisposable(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb))
    using (var bData = b.LockBitsDisposable(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb))
    using (var outputData = output.LockBitsDisposable(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb))
    {
        byte* aPtr = (byte*)aData.Scan0;
        byte* bPtr = (byte*)bData.Scan0;
        byte* outputPtr = (byte*)outputData.Scan0;
        int len = aData.Stride * aData.Height;
        for (int i = 0; i < len; i++)
        {
            // For alpha use the average of both images (otherwise pixels with the same alpha won't be visible)
            if ((i + 1) % 4 == 0)
                *outputPtr = (byte)((*aPtr + *bPtr) / 2);
            else
                *outputPtr = (byte)~(*aPtr ^ *bPtr);
            outputPtr++;
            aPtr++;
            bPtr++;
        }
    }
    return output;
}
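Note that LockBitsDisposable is not a built-in Bitmap method; the snippet assumes a small extension that wraps LockBits/UnlockBits in an IDisposable. A minimal sketch of such a helper (the wrapper type and its members are assumptions, exposing only what PixelDiff uses):
// requires System.Drawing and System.Drawing.Imaging
static class BitmapExtensions
{
    public static DisposableBitmapData LockBitsDisposable(this Bitmap bitmap,
        Rectangle rect, ImageLockMode mode, PixelFormat format)
    {
        return new DisposableBitmapData(bitmap, bitmap.LockBits(rect, mode, format));
    }
}

sealed class DisposableBitmapData : IDisposable
{
    private readonly Bitmap _bitmap;
    private readonly BitmapData _data;

    public DisposableBitmapData(Bitmap bitmap, BitmapData data)
    {
        _bitmap = bitmap;
        _data = data;
    }

    // Only the members used by PixelDiff are exposed here.
    public IntPtr Scan0 { get { return _data.Scan0; } }
    public int Stride { get { return _data.Stride; } }
    public int Height { get { return _data.Height; } }

    public void Dispose()
    {
        _bitmap.UnlockBits(_data); // unlock when the using block ends
    }
}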
If your goal is to find out whether two byte arrays contain exactly the same data, you can create an MD5 hash and compare these as others have suggested. However in your question you mention you want to upload the difference which means the result of the comparison must be more than a simple yes/no.
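For that simple yes/no case, a minimal sketch of the hash comparison might look like this (for in-memory arrays, comparing them directly with SequenceEqual works just as well):
// requires System.Linq and System.Security.Cryptography
static bool SameJpegData(byte[] a, byte[] b)
{
    using (var md5 = MD5.Create())
    {
        // identical inputs give identical hashes; different hashes mean different data
        return md5.ComputeHash(a).SequenceEqual(md5.ComputeHash(b));
    }
}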
As JPEGs are already compressed, the smallest change to the image could lead to a large difference in the binary data. I don't think any two JPEGs contain binary data similar enough to easily compare.
For BMP files you may find that changing a single pixel affects only one or a few bytes, and more importantly, the data for the pixel at a certain offset in the image is located at the same position in both binary files (given that both images are of equal size and color depth). So for BMPs the difference in binary data directly relates to the difference in the images.
In short, I don't think obtaining the binary difference between JPEG files will improve the size of the data to be sent.
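If you still want to experiment with the difference idea, it only pays off on uncompressed data (raw pixels rather than JPEG bytes), where unchanged regions really do become runs of zeros. A minimal sketch, assuming both arrays have the same length:
// requires System.IO and System.IO.Compression
static byte[] CompressedDiff(byte[] current, byte[] previous)
{
    var diff = new byte[current.Length];
    for (int i = 0; i < current.Length; i++)
        diff[i] = (byte)(current[i] ^ previous[i]); // identical bytes become 0

    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(diff, 0, diff.Length);
        }
        return output.ToArray(); // long runs of zeros compress well
    }
}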
I am using the following render code. This is slightly modified code because, in the actual code, I am getting all the files from an FTP server. CurrentWindowCenter and CurrentWindowWidth hold the current values of WL and WW. These values can be changed from the UI. I am showing the rendered WriteableBitmap directly to the user using an image component inside a canvas.
But the rendering of the image is very slow. Especially for images with large file sizes such as X-Ray. So, the WW and WL change is also very slow since it also uses the render function.
I am not very knowledgeable about this. But is there a way to make the rendering or WW/WL change faster? Is there a way to skip the rendering of the image every time a WW/WL change happens?
Any advice in the right direction is appreciated.
Thanks in advance.
// assume filePath holds an actual file location.
var filePath = "";
var dicomFile = DicomFile.Open(filePath);
var dicomImage = new DicomImage(dicomFile.Dataset);
if (CurrentWindowCenter.HasValue && CurrentWindowWidth.HasValue)
{
    dicomImage.WindowCenter = Convert.ToDouble(CurrentWindowCenter.Value);
    dicomImage.WindowWidth = Convert.ToDouble(CurrentWindowWidth.Value);
}
dicomImage.RenderImage().AsWriteableBitmap();
Environment
Fo-Dicom (4.0.8)
.NET Framework 4.8
WPF
My guess is that fo-dicom is not really intended for high performance and more for compatibility. For best performance you should probably use the GPU via DirectX, OpenCL or similar. Second best would be some tightly optimized SIMD code, probably written in C++.
But there might be some improvements to be had using just C#. From the looks of it, fo-dicom creates a new image, copies pixels to this image, and then creates a WriteableBitmap and does another copy. These steps will take some extra time.
My code for copying pixels and applying a lut/transfer function looks like this:
public static unsafe void Mono16ToMono8(
    byte* source,
    byte* target,
    int sourceStride,
    int targetStride,
    int width,
    int height,
    byte[] lut)
{
    Parallel.For(0, height, y =>
    {
        var ySource = source + y * sourceStride;
        var yTarget = target + y * targetStride;
        var srcUshort = (ushort*)ySource;
        for (int x = 0; x < width; x++)
        {
            var sample = srcUshort[x];
            yTarget[x] = lut[sample];
        }
    });
}
And the code to do the actual update of the writeable bitmap:
public static unsafe void Update(
    this WriteableBitmap self,
    IImage source,
    byte[] lut)
{
    self.Lock();
    try
    {
        var targetPtr = (byte*)self.BackBuffer;
        fixed (byte* sourcePtr = source.Data)
        {
            if (source.PixelFormat == PixelType.Mono16)
            {
                Mono16ToMono8(
                    sourcePtr,
                    targetPtr,
                    source.Stride,
                    self.BackBufferStride,
                    source.Width,
                    source.Height,
                    lut);
            }
        }
        self.AddDirtyRect(new Int32Rect(0, 0, (int)self.Width, (int)self.Height));
    }
    finally
    {
        self.Unlock();
    }
}
This uses an internal IImage format, where source.Data is a ReadOnlySpan<byte>, but it could just as well be a byte[]. I hope most of the other properties are self-explanatory. I would expect this code to be a bit faster since it avoids both allocations and some copying steps.
All of this assumes the image is in 16-bit unsigned format, which is common for DICOM, but not the only format. It also assumes you can actually get hold of a pointer to the actual pixel buffer, and an array for the lut that maps each possible pixel value to a byte. It also assumes a WriteableBitmap of the correct size and color space.
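The lut itself is just a table with one output byte per possible 16-bit value. A minimal sketch of building it for a plain linear window (the linear mapping is an assumption; DICOM also defines other VOI LUT functions):
// Maps every possible unsigned 16-bit sample to a display byte for the given
// window center/width (values below the window clamp to 0, above to 255).
static byte[] BuildWindowLut(double windowCenter, double windowWidth)
{
    var lut = new byte[ushort.MaxValue + 1];
    double low = windowCenter - windowWidth / 2.0;
    double high = windowCenter + windowWidth / 2.0;

    for (int value = 0; value < lut.Length; value++)
    {
        if (value <= low)
            lut[value] = 0;
        else if (value >= high)
            lut[value] = 255;
        else
            lut[value] = (byte)((value - low) / windowWidth * 255.0);
    }
    return lut;
}
When WW or WL changes, only this table needs to be rebuilt before calling Update again, which is much cheaper than a full re-render.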
And as previously mentioned, if you want both high performance, and handle all possible image formats, you might need to invest time to build your own image rendering pipeline.
My game takes a screenshot each game loop and stores it memory. The user can then press "print screen" to trigger "SaveScreenshot" (see code below) to store each screenshot as a PNG and also compile them into an AVI using SharpAvi. The saving of images works fine, and a ~2sec AVI is produced, but it doesn't show any video when played. It's just the placeholder VLC Player icon. I think this is very close to working, but I can't determine what's wrong. Please see my code below. If anyone has any ideas, I'd be very appreciative!
private Bitmap GrabScreenshot()
{
    try
    {
        Bitmap bmp = new Bitmap(this.ClientSize.Width, this.ClientSize.Height);
        System.Drawing.Imaging.BitmapData data =
            bmp.LockBits(this.ClientRectangle, System.Drawing.Imaging.ImageLockMode.WriteOnly,
                System.Drawing.Imaging.PixelFormat.Format24bppRgb);
        GL.ReadPixels(0, 0, this.ClientSize.Width, this.ClientSize.Height, PixelFormat.Bgr, PixelType.UnsignedByte,
            data.Scan0);
        bmp.UnlockBits(data);
        bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
        return bmp;
    }
    catch (Exception ex)
    {
        // occasionally getting GDI generic exception when rotating the image... skip that one.
        return null;
    }
}

private void SaveScreenshots()
{
    var directory = "c:\\helioscreenshots\\";
    var rootFileName = string.Format("{0}_", DateTime.UtcNow.Ticks);
    var writer = new AviWriter(directory + rootFileName + ".avi")
    {
        FramesPerSecond = 30,
        // Emitting AVI v1 index in addition to OpenDML index (AVI v2)
        // improves compatibility with some software, including
        // standard Windows programs like Media Player and File Explorer
        EmitIndex1 = true
    };

    // returns IAviVideoStream
    var aviStream = writer.AddVideoStream();

    // set standard VGA resolution
    aviStream.Width = this.ClientSize.Width;
    aviStream.Height = this.ClientSize.Height;
    // class SharpAvi.KnownFourCCs.Codecs contains FOURCCs for several well-known codecs
    // Uncompressed is the default value, just set it for clarity
    aviStream.Codec = KnownFourCCs.Codecs.Uncompressed;
    // Uncompressed format requires to also specify bits per pixel
    aviStream.BitsPerPixel = BitsPerPixel.Bpp32;

    var index = 0;
    while (this.Screenshots.Count > 0)
    {
        Bitmap screenshot = this.Screenshots.Dequeue();
        var screenshotBytes = ImageToBytes(screenshot);

        // write data to a frame
        aviStream.WriteFrame(true,         // is key frame? (many codecs use concept of key frames, for others - all frames are keys)
            screenshotBytes,               // array with frame data
            0,                             // starting index in the array
            screenshotBytes.Length);       // length of the data

        // save it!
        // NOTE: compared jpeg, gif, and png. PNG had smallest file size.
        index++;
        screenshot.Save(directory + rootFileName + index + ".png", System.Drawing.Imaging.ImageFormat.Png);
    }

    // save the AVI!
    writer.Close();
}

public static byte[] ImageToBytes(Image img)
{
    using (var stream = new MemoryStream())
    {
        img.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
        return stream.ToArray();
    }
}
From what I see, you're providing the byte-array in png-encoding, yet the stream is configured as KnownFourCCs.Codecs.Uncompressed.
Furthermore, from the manual:
AVI expects uncompressed data in format of standard Windows DIB, that is bottom-up bitmap of the specified bit-depth. For each frame, put its data in byte array and call IAviVideoStream.WriteFrame()
Next, all encoders expect input image data in specific format. It's BGR32 top-down - 32 bits per pixel, blue byte first, alpha byte not used, top line goes first. This is the format you can often get from existing images. [...] So, you simply pass an uncompressed top-down BGR32
I would retrieve the byte-array directly from the Bitmap using LockBits and Marshal.Copy as described in the manual.
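A sketch of what that could look like, assuming the Bitmap already matches the stream's width and height (the manual quote above says uncompressed streams expect bottom-up rows, so depending on how the frame was produced you may still need a vertical flip):
// requires System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices
static byte[] BitmapToBgr32(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    // Format32bppRgb is 32 bits per pixel, blue byte first, alpha byte unused
    var data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);
    try
    {
        var buffer = new byte[data.Stride * data.Height];
        Marshal.Copy(data.Scan0, buffer, 0, buffer.Length);
        return buffer;
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
The returned buffer would then be passed to aviStream.WriteFrame instead of the PNG-encoded bytes.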
I would like to access the value of each individual pixel value of a 16UC1-formatted png image, which I receive as a byte[].
I am relatively new to image processing in C# and I have been stuck on this problem for days now.
I can work with a "typical" bgr8-formatted jpg/png byte array simply by:
private static Bitmap getBitmap(byte[] array)
{
    return new Bitmap(new MemoryStream(array));
}
I tried many things for the 16UC1 format. The furthest I got is:
private Bitmap getBitmap(byte[] array)
{
    var bitmap = new Bitmap(640, 480, PixelFormat.Format16bppRgb555);
    var bitmapData = bitmap.LockBits(new Rectangle(0, 0, 640, 480), ImageLockMode.WriteOnly, PixelFormat.Format16bppRgb555);
    System.Runtime.InteropServices.Marshal.Copy(bitmapData.Scan0, array, 0, array.Length);
    bitmap.UnlockBits(bitmapData);
    return bitmap;
}
This at least returns a bitmap, though it is completely black.
Trying PixelFormat.Format16bppGrayScale instead of PixelFormat.Format16bppRgb555 gives me a "General error in GDI+".
When writing the byte array to a file, e.g. by
File.WriteAllBytes(filename, array);
I can see the image with image viewers like IrfanView, though Windows photo viewer fails.
Reading the file as a Bitmap is not required. I want to avoid file operations for performance reasons. I simply want to access each individual xy-pixel of that image.
Update:
I started using Emgu.CV and applying imdecode as Dan suggested below.
private Bitmap getCompressedDepthBitmap(byte[] data)
{
    Mat result = new Mat(new Size(640, 480), DepthType.Cv16U, 1);
    CvInvoke.Imdecode(data, LoadImageType.AnyDepth, result);
    return result.Bitmap;
}
This again gives me a black image. (By saving the byte array via WriteAllBytes I see useful contents.) I also tried
Image<Gray, float> image = result.ToImage<Gray, float>();
image.Save(Path.Combine(localPath, "image.png"));
which as well gave me a black image.
I am planning to normalize the Mat now somehow, maybe this helps...
Thank you for your interest and your support!
After hours and hours of wasted working time and despair I finally found the solution...
One important thing I was missing in my description above is that the image data byte[] is coming from a ROS sensor_msgs/CompressedImage.msg.
The data byte array, which is supposed to contain the PNG data, sometimes starts with a 12-byte header; seemingly only if the data is a (1-channel) compressedDepth image. I accidentally found this info here.
Removing these awesomely obnoxious 12 bytes and continuing as usual does the job:
var bitmap = new Bitmap(new MemoryStream(data.Skip(12).ToArray()));
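Since the header is only sometimes present, checking for the PNG signature before skipping makes this more robust. A small sketch (the 12-byte size comes from the ROS compressedDepth transport mentioned above):
// requires System.Linq; PNG data always starts with 89 50 4E 47 0D 0A 1A 0A
static byte[] StripCompressedDepthHeader(byte[] data)
{
    byte[] pngSignature = { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A };
    bool isPlainPng = data.Length >= 8 && data.Take(8).SequenceEqual(pngSignature);
    return isPlainPng ? data : data.Skip(12).ToArray();
}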
I have an application where I need to check whether my scanner generates a blank TIF file or not.
Here is my example code:
private void button1_Click(object sender, EventArgs e)
{
    string path2 = @"F:\3333.tif";
    string path = @"F:\Document Scanned # 1-blank.tif";

    System.Drawing.Image img = System.Drawing.Image.FromFile(path);
    byte[] bytes;
    using (MemoryStream ms = new MemoryStream())
    {
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Tiff);
        bytes = ms.ToArray();
    }

    System.Drawing.Image img2 = System.Drawing.Image.FromFile(path2);
    byte[] bytes2;
    using (MemoryStream ms3 = new MemoryStream())
    {
        img2.Save(ms3, System.Drawing.Imaging.ImageFormat.Tiff);
        bytes2 = ms3.ToArray();
    }

    bool t = false;
    t = bytes.SequenceEqual(bytes2);
}
Note: a blank TIF image means a white page.
In the above, bool t always returns true. Why? I used two different images.
Essentially, you are comparing the bytes of two TIF files (with the indirection of using Image). This can fail for various reasons:
Dimension: If the two images do not have the exact same height and width, the byte sequences will -of course- be different even if both are completely white.
Metadata: As far as I know, the TIF format contains various metadata. Therefore, two files may be different even if they have the same pixels. I would recommend manually checking all pixel values (e.g. Bitmap.GetPixel) and comparing them to white (Color.FromArgb(255,255,255,255)); see the sketch after this list.
Noise: Are you sure that a blank file is always pure white (255,255,255)? Maybe some random pixels have slightly different values such as (255,254,255)...
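A minimal sketch of that per-pixel check (the 250 threshold and the 0.5% tolerance are arbitrary assumptions to allow for a little scanner noise; GetPixel is slow, so LockBits would be the faster route for large pages):
static bool IsBlankPage(Bitmap bmp, int whiteThreshold = 250, double maxNonWhiteFraction = 0.005)
{
    int nonWhite = 0;
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            if (c.R < whiteThreshold || c.G < whiteThreshold || c.B < whiteThreshold)
                nonWhite++;
        }
    }
    // treat the page as blank if almost every pixel is (near) white
    return nonWhite <= bmp.Width * bmp.Height * maxNonWhiteFraction;
}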
I have a function which extracts a file into a byte array (data).
int contentLength = postedFile.ContentLength;
byte[] data = new byte[contentLength];
postedFile.InputStream.Read(data, 0, contentLength);
Later I use this byte array to construct a System.Drawing.Image object
(where data is the byte array)
MemoryStream ms = new MemoryStream(data);
Image bitmap = Image.FromStream(ms);
I get the following exception "ArgumentException: Parameter is not valid."
The original posted file contained a 500k jpeg image...
Any ideas why this isn't working?
Note: I assure you I have a valid reason for converting to a byte array and then to a memorystream!!
That's most likely because you didn't get all the file data into the byte array. The Read method doesn't have to return as many bytes as you request, and it returns the number of bytes actually put in the array. You have to loop until you have gotten all the data:
int contentLength = postedFile.ContentLength;
byte[] data = new byte[contentLength];
for (int pos = 0; pos < contentLength; )
{
    pos += postedFile.InputStream.Read(data, pos, contentLength - pos);
}
This is a common mistake when reading from a stream. I have seen this problem a lot of times.
Edit:
With the check for an early end of stream, as Matthew suggested, the code would be:
int contentLength = postedFile.ContentLength;
byte[] data = new byte[contentLength];
for (int pos = 0; pos < contentLength; )
{
    int len = postedFile.InputStream.Read(data, pos, contentLength - pos);
    if (len == 0)
    {
        throw new ApplicationException("Upload aborted.");
    }
    pos += len;
}
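On .NET 4 and later, Stream.CopyTo does the same read loop for you; a short sketch, assuming you do not need ContentLength up front:
byte[] data;
using (var ms = new MemoryStream())
{
    postedFile.InputStream.CopyTo(ms); // reads until the end of the stream
    data = ms.ToArray();
}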
You're not checking the return value of postedFile.InputStream.Read. It is not at all guaranteed to fill the array on the first call. That will leave a corrupt JPEG in data (0's instead of file content).
Have you checked the return value from the Read() call to verify that is actually reading all of the content? Perhaps Read() is only returning a portion of the stream, requiring you to loop the Read() call until all of the bytes are consumed.
Any reason why you don't simply do this:
Image bitmap = Image.FromStream(postedFile.InputStream);
I have had problems loading images in .NET that were openable by more robust image libraries. It's possible that the specific jpeg image you have is not supported by .NET. jpeg files are not just one type of encoding, there's a variety of possible compression schemes allowed.
You could try it with another image that you know is in a supported format.