Changing Bitmaps using C#

I was working on my college project, where I was trying to change the bit values of a Bitmap.
I load the bitmap into a memory stream, then extract it into a byte[] array. I then change a few of the central bytes of this array and convert it back into a bitmap image.
But I get a runtime exception saying "Invalid Bitmap".
Does a bitmap have some special format instead of being simple bits?
Here is the code I used:
MemoryStream mstream = new MemoryStream();
Bitmap b = new Bitmap(@"D:\my_pic.bmp");
b.Save(mstream, System.Drawing.Imaging.ImageFormat.Bmp);
byte[] ba = mstream.ToArray();
mstream.Close();
byte[] _Buffer = null;
System.IO.FileStream _FileStream = new System.IO.FileStream(_FileName, System.IO.FileMode.Open, System.IO.FileAccess.Read);
System.IO.BinaryReader _BinaryReader = new System.IO.BinaryReader(_FileStream);
long _TotalBytes = new System.IO.FileInfo(_FileName).Length;
_Buffer = _BinaryReader.ReadBytes((Int32)_TotalBytes);
// close file reader
_FileStream.Close();
_FileStream.Dispose();
_BinaryReader.Close();
// Overwrite the second half of the bitmap bytes with the buffer contents
int leng1 = ba.Length;
int leng2 = _Buffer.Length;
int j = 0;
for (int i = leng1 / 2; i < leng1 && j < leng2; i++, j++)
{
    ba[i] = _Buffer[j];
}
TypeConverter tc = TypeDescriptor.GetConverter(typeof(Bitmap));
Bitmap bitmap1 = (Bitmap)tc.ConvertFrom(ba);

You must have your own reasons for wanting to operate at this low level with a bitmap, and that's fine. Unless you are performance bound, it is far easier to do graphics at the graphics API level. Even if you are performance sensitive, other people have already cut a path through the jungle.
Back to the question. The BMP file format is simpler than some others, but it still comes in many varieties. Here is a detailed introduction to the gory details of the BMP format:
BMP file format
Now, if you are just parsing your own BMP files and you know they are 32-bit RGB, and the header is going to be such-and-such a size that you can skip over, and so on, then that might work for you. If you need to handle any old BMP, it gets messy very fast, which is why the libraries try to take care of everything for you.
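Going back to the question: if the file really is a plain uncompressed BMP, one way to avoid the "Invalid Bitmap" error is to leave the headers alone and only overwrite bytes from the pixel-data offset onward. Here is a minimal sketch of that idea, assuming a simple BI_RGB file; the paths and the particular edit are only examples, not the original poster's code:
// Read the whole file; the pixel-data offset (bfOffBits) is stored
// as a 32-bit little-endian value at byte 10 of the BMP file header.
byte[] bmp = File.ReadAllBytes(@"D:\my_pic.bmp");
int pixelDataOffset = BitConverter.ToInt32(bmp, 10);
// Only touch bytes at or after pixelDataOffset, never the headers.
for (int i = pixelDataOffset; i < bmp.Length; i++)
{
    bmp[i] = 0x00; // example edit
}
File.WriteAllBytes(@"D:\my_pic_modified.bmp", bmp);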

Related

How to compress bitmap to smaller size in C#?

My bitmap is too large to upload to the printer. I want to compress it so that less data is transmitted to the printer, but I don't want to reduce the width and height of the bitmap. I have done some research, but everything I found requires a stream, for example:
bitmap.compress(Bitmap.Format.jpeg,50,outputStream);
Why do I need a stream to store the file? How can I skip that and just get the compressed bitmap I want? I have tried:
originalBitmap = Bitmap.decodeByteArray(imageByteData);
//Line below not working and got error
compressedBitmap = Bitmap.compress(Bitmap.Format.jpeg,50,outputStream);
The outputStream points to my Download folder, and I did see the compressed image there, but how can I access the compressed image again? Unfortunately, the compress method is not that straightforward. My question is: how can I compress a bitmap and use the compressed bitmap in another action? Thank you.
You can compress it to an in-memory stream:
// Compress to a stream in memory
byte[] compressedData = null;
using (var stream = new MemoryStream())
{
    bitmap.Compress(Bitmap.CompressFormat.Jpeg, 50, stream);
    compressedData = stream.ToArray();
}
// Load the compressed bitmap back from memory
using (var stream = new MemoryStream(compressedData))
{
    var compressedBitmap = BitmapFactory.DecodeStream(stream);
}
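If the compressed image also needs to be reachable later, for example from another activity, one option is to persist the bytes and decode them again. A hedged sketch; the file name and location are illustrative only:
// Persist the compressed JPEG bytes.
var path = System.IO.Path.Combine(
    System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal),
    "compressed.jpg");
File.WriteAllBytes(path, compressedData);
// Later (e.g. in another activity), decode the saved file.
var reloadedBitmap = BitmapFactory.DecodeFile(path);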

BitmapSource.Create issue

I have a question about BitmapSource.Create. I have the following code, and it's not behaving as expected:
reader.BaseStream.Position += BytesInMetadata;
var rawData = new UInt16[NumberOfPixels];
// Read in the raw image data in 16 bit format.
NumberOfPixels.Times((Action<int>)(i => rawData[i] = reader.ReadUInt16()));
var stats = new MsiStats()
{
    Mean = rawData.Average(v => (Double)v),
    StdDev = rawData.StandardDeviation(v => (Double)v),
    Min = rawData.Min(),
    Max = rawData.Max()
};
// Convert the 16-bit image to an 8-bit image that can actually be displayed.
var scaledData = ScaleData(rawData, 4.0f, CType);
GCHandle handle = GCHandle.Alloc(scaledData, GCHandleType.Pinned);
using (var bmp = new Bitmap(2048, 2048, 2048, System.Drawing.Imaging.PixelFormat.Format8bppIndexed, handle.AddrOfPinnedObject()))
{
    bmp.Save(@"C:\Users\icyr\Work Folders\COBRA_I-3\CAST Data\myOGBitmap.bmp");
}
handle.Free();
var src = BitmapSource.Create(NumberOfColumns, NumberOfRows,
    96, 96,
    PixelFormats.Gray8, null,
    scaledData,
    NumberOfRows);
using (var fileStream = new FileStream(@"C:\<somefolder>\myBitmap.bmp", FileMode.OpenOrCreate))
{
    BitmapEncoder enc = new BmpBitmapEncoder();
    enc.Frames.Add(BitmapFrame.Create(src));
    enc.Save(fileStream);
}
I'm reading 12-bit values from a proprietary image file, converting them to 8 bits, and then saving the result as a BitmapSource object. However, when I read it back (or save it, as I do in the code above) it comes out... wrong. I'm not even sure how to describe it. When I read the saved images into Matlab, the file saved from the BitmapSource object only has pixel values that are multiples of 17, while the file saved from the scaledData object has the full range of values.
What's going on here? Unfortunately I'm working within a framework of code that I didn't write, and unless I want to overhaul the entire project (which I don't, nor do I have the time to) I need to keep using BitmapSource objects for my data storage.
I'm at a loss as to what to do here, so I'm hoping you might have a better understanding of why this is occurring and how to prevent it with minimal changes.
Apparently the issue was the use of PixelFormats.Gray8. I changed it to PixelFormats.Indexed8, using BitmapPalettes.Gray256 for my palette, and that seemed to fix my issue.
var src = BitmapSource.Create(NumberOfColumns, NumberOfRows,
    96, 96,
    PixelFormats.Indexed8, BitmapPalettes.Gray256,
    scaledData,
    NumberOfRows);
Still don't understand what was going on.

Image size is drastically increasing after applying a simple watermark

I have a set of images on which I'm programmatically drawing a simple watermark using System.Windows and System.Windows.Media.Imaging (yes, not GDI+), following a tutorial from here.
Most of the images are no more than 500 KB, but after applying a simple watermark, which is just text on a transparent background, the image size increases drastically.
For example, a 440 KB image becomes 8.33 MB after applying the watermark with the method below, which shocked me.
private static BitmapFrame ApplyWatermark(BitmapFrame image, string waterMarkText) {
    const int x = 5;
    var y = image.Height - 20;
    var targetVisual = new DrawingVisual();
    var targetContext = targetVisual.RenderOpen();
    var brush = (SolidColorBrush)(new BrushConverter().ConvertFrom("#FFFFFF"));
    brush.Opacity = 0.5;
    targetContext.DrawImage(image, new Rect(0, 0, image.Width, image.Height));
    targetContext.DrawRectangle(brush, new Pen(), new Rect(0, y, image.Width, 20));
    targetContext.DrawText(new FormattedText(waterMarkText, CultureInfo.CurrentCulture, FlowDirection.LeftToRight,
        new Typeface("Batang"), 13, Brushes.Black), new Point(x, y));
    targetContext.Close();
    var target = new RenderTargetBitmap((int)image.Width, (int)image.Height, 96, 96, PixelFormats.Default);
    target.Render(targetVisual);
    var targetFrame = BitmapFrame.Create(target);
    return targetFrame;
}
I've noticed that the image quality improves compared to the original: the image is smoother and the colors are lighter. But I don't really want this. I want the image to stay as it is, just with the watermark added: no quality increase, and of course no drastic change in image size.
Is there a setting I'm missing to tell my program to keep the quality the same as the source image? How can I prevent the significant change in image size after the changes in my ApplyWatermark method?
Edit
1. This is how I convert BitmapFrame to Stream. Then I use that Stream to save the image to AmazonS3
private Stream EncodeBitmap(BitmapFrame image) {
    BitmapEncoder enc = new BmpBitmapEncoder();
    enc.Frames.Add(BitmapFrame.Create(image));
    var memoryStream = new MemoryStream();
    enc.Save(memoryStream);
    return memoryStream;
}
2. This is how I get the BitmapFrame from Stream
private static BitmapFrame ReadBitmapFrame(Stream stream) {
    var photoDecoder = BitmapDecoder.Create(
        stream,
        BitmapCreateOptions.PreservePixelFormat,
        BitmapCacheOption.None);
    return photoDecoder.Frames[0];
}
3. This is how I read the file from local directory
public Stream FindFileInLocalImageDir() {
    try {
        var path = @"D:\Some\Path\Image.png";
        return !File.Exists(path) ? null : File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    } catch (Exception) {
        return null;
    }
}
The problem is that when you edit the image, the compression is gone. A 730x1108 JPG that is 433 KB on disk, expanded to 32 bits per pixel (you mentioned transparency, so ARGB), needs at least 730 * 1108 * 4 bytes = ~3.09 MB. Of course you can compress it again afterwards (for disk, a network stream or whatever else).
This is the reason image software always needs a lot of memory, even when working with compressed data.
Conclusion: you will need the free memory to work with the image; there is no way to avoid having it fully expanded at some point.
The reason I asked my question in the comments earlier is that I noticed there are several different encoders available. A bitmap usually has a significantly larger file size because of the amount of information it stores about your image.
I haven't tested this myself, but have you tried a different encoder?
var pngEncoder = new PngBitmapEncoder();
pngEncoder.Frames.Add(ApplyWatermark(image, waterMarkText)); // the frame and text from your method
// File.Create returns a FileStream (not a MemoryStream); outputPath is the target file
var stm = File.Create(outputPath);
pngEncoder.Save(stm);
return stm;
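If some loss of quality is acceptable, a lossy encoder keeps the size down far more aggressively than PNG. A minimal sketch; the method name and quality value are illustrative, not part of the original answer:
// Re-encode the watermarked frame as JPEG instead of an uncompressed BMP.
private Stream EncodeAsJpeg(BitmapFrame image)
{
    var encoder = new JpegBitmapEncoder { QualityLevel = 90 }; // 90 is just an example
    encoder.Frames.Add(image);
    var ms = new MemoryStream();
    encoder.Save(ms);
    ms.Position = 0; // rewind so the caller can read from the start
    return ms;
}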

C# Image: To preserve image's checksum

I have the following code to convert an image (bitmap) to a byte array:
public byte[] ConvertImageToByteArray(Image imageToConvert, ImageFormat formatOfImage)
{
    byte[] Ret;
    try
    {
        using (MemoryStream ms = new MemoryStream())
        {
            imageToConvert.Save(ms, formatOfImage);
            Ret = ms.ToArray();
        }
    }
    catch (Exception)
    {
        throw;
    }
    return Ret;
}
and to convert the byte array back to an image (bitmap):
public Bitmap ConvertByteArrayToImage(byte[] myByteArray)
{
    Bitmap newImage;
    using (MemoryStream ms = new MemoryStream(myByteArray, 0, myByteArray.Length))
    {
        // Copy the decoded image so the stream can be disposed safely
        newImage = new Bitmap(Image.FromStream(ms, true));
    }
    return newImage;
}
Here's my Main Program:
byte[] test = ConvertImageToByteArray(Image.FromFile("oldImage.bmp"), ImageFormat.Bmp);
Bitmap bmp = ConvertByteArrayToImage(test);
bmp.Save("newImage.bmp");
But when I compare the two image files (the old and the new bitmap), their checksums turn out to be different. Why is that happening? How can I fix it so the image keeps its integrity?
Basically, there are many ways an identical image can be encoded in a BMP file. If I try your example on a random image I found, I see the .NET Bitmap class saves the file without filling the biSizeImage field in the BITMAPINFOHEADER structure in the BMP header (but the original image produced by IrfanView has it filled), which is a completely correct and documented possibility. (“This may be set to zero for BI_RGB bitmaps.”)
And this is definitely not the only variable thing in the BMP format. For instance, there are multiple possible orderings of pixel data in the image (top-to-bottom, bottom-to-top), specified in the header. (“If biHeight is positive, the bitmap is a bottom-up DIB and its origin is the lower-left corner. If biHeight is negative, the bitmap is a top-down DIB and its origin is the upper-left corner.”)
So, if you receive a BMP file from a source not under your control and really need to produce an image using exactly the same BMP variant, you have a lot of work to do, and I don't think you could use the standard .NET helper classes for that.
See also this question: Save bitmap to file has zero in image size field
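If you want to see that particular difference yourself, you can read the field straight out of both files. A small sketch, assuming a standard BITMAPINFOHEADER, in which biSizeImage sits at byte offset 34 (the 14-byte file header plus 20 bytes into the info header); the file names come from the example above:
// Compare the biSizeImage field of the original and the re-saved BMP.
byte[] oldBmp = File.ReadAllBytes("oldImage.bmp");
byte[] newBmp = File.ReadAllBytes("newImage.bmp");
int oldSizeImage = BitConverter.ToInt32(oldBmp, 34);
int newSizeImage = BitConverter.ToInt32(newBmp, 34);
Console.WriteLine($"biSizeImage: old={oldSizeImage}, new={newSizeImage}");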
After chatting a bit, your solution comes down to reading and writing bytes: take the image object out of the equation and just deal with the raw bytes.
To read the file:
MemoryStream ms = new MemoryStream(File.ReadAllBytes("filename"));
To write the file:
File.WriteAllBytes("outputfile", ms.ToArray());
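To verify that this byte-for-byte round trip really preserves the checksum, you can hash both files and compare. A hedged sketch; the hash algorithm and helper name are illustrative (needs using System.IO, System.Linq and System.Security.Cryptography):
// Returns true when the two files are byte-identical.
static bool SameChecksum(string pathA, string pathB)
{
    using (var md5 = MD5.Create())
    {
        byte[] hashA = md5.ComputeHash(File.ReadAllBytes(pathA));
        byte[] hashB = md5.ComputeHash(File.ReadAllBytes(pathB));
        return hashA.SequenceEqual(hashB);
    }
}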

Out of memory error while loading a Bitmap

I'm working with very large images (e.g. 16000x9440 px) and cutting out some regions for other purposes. I get an "Out of memory" exception when creating a new Bitmap instance:
using (FileStream fileStream = new FileStream(mapFileResized, FileMode.Open))
{
    byte[] data = new byte[fileStream.Length];
    fileStream.Read(data, 0, data.Length);
    using (MemoryStream memoryStream = new MemoryStream(data))
    {
        using (Bitmap src = new Bitmap(memoryStream)) // <-- exception
        {
            tile = new Bitmap(tileWidth, tileHeight, PixelFormat.Format24bppRgb);
            tile.SetResolution(src.HorizontalResolution, src.VerticalResolution);
            tile.MakeTransparent();
            using (Graphics grRect = Graphics.FromImage(tile))
            {
                grRect.CompositingQuality = CompositingQuality.HighQuality;
                grRect.SmoothingMode = SmoothingMode.HighQuality;
                grRect.DrawImage(
                    src,
                    new RectangleF(0, 0, tileWidth, tileHeight),
                    rTile,
                    GraphicsUnit.Pixel
                );
            }
        }
    }
}
When I use smaller images (e.g. 8000x4720 px), everything works fine.
How can I work with such large images?
PS: the tile Bitmap is disposed in a finally block.
Best regards, Alex.
You are using about a gigabyte of RAM, so it is not very surprising that you run out of memory.
Assuming a 32 bpp file format with 16000x9440 pixels, you get a file size of about:
16000 * 9440 * (32/8) = ~576 MB
byte[] data = new byte[fileStream.Length];
fileStream.Read(data, 0, data.Length);
using (MemoryStream memoryStream = new MemoryStream(data))
{
[... snip ...]
}
You load the whole file into a memory stream; this requires 576 MB.
[... snip ...]
using (Bitmap src = new Bitmap(memoryStream)) // <-- exception
{
[... snip ...]
}
[... snip ...]
You load the whole stream contents into a bitmap; this requires at least another 576 MB (depending on how much memory the bitmap needs per pixel; at least 4 bytes, probably more). At that point you have the image in memory twice, which seriously hurts for images this big.
You can reduce the memory footprint by getting rid of the memory stream and loading the bitmap directly from the file stream.
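A minimal sketch of that suggestion, reusing the variable names from the question (the tile drawing is omitted):
// Hand the FileStream straight to Bitmap so the ~576 MB byte[] and
// MemoryStream copies are never allocated. GDI+ needs the stream to stay
// open for the Bitmap's lifetime, which the nested usings ensure here.
using (var fileStream = new FileStream(mapFileResized, FileMode.Open, FileAccess.Read))
using (var src = new Bitmap(fileStream))
{
    // ... create and draw the tile from src exactly as in the original code ...
}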
Another solution would be to load only part of the bitmap and load the other parts on demand (much like Google Maps), but I can't help you with that solution; it might require reading the bitmap manually.
Not a complete answer to your question, but you are probably better off using a library like ImageMagick.NET.
MemoryStream is implemented using an array of bytes to store the data. If you write more data than the array can hold, a new array of double the size is allocated and the bytes are copied from one array to the other.
Since you apparently know how much data you're going to need, you can allocate the correct size up front and thus avoid the resizing.
However, once you reach a certain size you will run out of memory anyway. .NET imposes a 2 GB limit on a single object (even on 64-bit), so the internal array in MemoryStream can never grow beyond that. If your image is larger than that, you'll get an out of memory exception.
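If you do keep a MemoryStream, pre-sizing it avoids the repeated double-and-copy growth described above. A sketch under the same assumptions (and still subject to the 2 GB object limit):
// Allocate the backing array once, sized from the file length.
using (var fileStream = new FileStream(mapFileResized, FileMode.Open, FileAccess.Read))
using (var memoryStream = new MemoryStream((int)fileStream.Length))
{
    fileStream.CopyTo(memoryStream);
    memoryStream.Position = 0; // rewind before handing the stream to Bitmap
    // ... pass memoryStream to the Bitmap constructor as before ...
}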
