I'm working on a Mandelbrot fractal renderer and have a basic version working. As a next step, I tried writing to the bitmap in chunks so the user gets faster feedback. Doing this, I ran into an error when calling the WritePixels method on my WriteableBitmap ("Buffer size is not sufficient").
var bytesPerPixel = _bitmap.Format.BitsPerPixel / 8;
var stride = _bitmap.PixelWidth * bytesPerPixel;
var imageBytes = new byte[stride * _bitmap.PixelHeight];
var chunks = _bitmap.PixelHeight;
var chunkSize = _bitmap.PixelWidth;
for (var chunk = 0; chunk < chunks; chunk++)
{
    var chunkPixelIndex = chunk * chunkSize;
    for (var pixel = chunkPixelIndex; pixel < chunkPixelIndex + chunkSize; pixel++)
    {
        ColorPixel(pixel, bytesPerPixel, imageBytes);
    }
    var imageRect = new Int32Rect(0, 0, _bitmap.PixelWidth, _bitmap.PixelHeight);
    _bitmap.WritePixels(imageRect, imageBytes, stride, chunkPixelIndex * bytesPerPixel);
}
Here's the last known working version of the code for comparison:
var bytesPerPixel = _bitmap.Format.BitsPerPixel / 8;
var stride = _bitmap.PixelWidth * bytesPerPixel;
var imageBytes = new byte[stride * _bitmap.PixelHeight];
var numberOfPixels = _bitmap.PixelWidth * _bitmap.PixelHeight;
for (var pixel = 0; pixel < numberOfPixels; pixel++)
{
    ColorPixel(pixel, bytesPerPixel, imageBytes);
}
var imageRect = new Int32Rect(0, 0, _bitmap.PixelWidth, _bitmap.PixelHeight);
_bitmap.WritePixels(imageRect, imageBytes, stride, 0);
EDIT: Using the debugger, I've established that the first time the code throws is when it calls WritePixels with an offset of 4320 (1080 * 4). imageBytes has a size of 3110400, so I really don't see how its size isn't sufficient. Am I just blatantly misunderstanding this API?
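Sanity-checking the sizes involved with plain arithmetic (no WPF required; the buffer requirement encoded below is my reading of how WritePixels validates the call, and the 1080x720 dimensions are inferred from the numbers above, so treat this as a sketch rather than the renderer's actual code):

```csharp
using System;

class WritePixelsCheck
{
    // WritePixels needs the buffer, starting at offset, to cover the whole rect:
    // offset + (rect.Height - 1) * stride + rect.Width * bytesPerPixel <= buffer.Length
    public static bool BufferSufficient(int rectWidth, int rectHeight, int stride,
                                        int bytesPerPixel, int offset, int bufferLength)
    {
        long required = (long)offset + (long)(rectHeight - 1) * stride
                        + (long)rectWidth * bytesPerPixel;
        return required <= bufferLength;
    }

    static void Main()
    {
        const int width = 1080, height = 720, bpp = 4;  // implied by 4320 = 1080 * 4
        int stride = width * bpp;                       // 4320
        int bufferLength = stride * height;             // 3110400

        // Full-bitmap rect with offset 4320: needs 4320 more bytes than the buffer has.
        Console.WriteLine(BufferSufficient(width, height, stride, bpp, stride, bufferLength)); // False

        // A one-row rect with the same offset fits comfortably.
        Console.WriteLine(BufferSufficient(width, 1, stride, bpp, stride, bufferLength));      // True
    }
}
```

If that reading is right, shrinking the rect to the single row being written (for example `new Int32Rect(0, chunk, _bitmap.PixelWidth, 1)`) while keeping the per-chunk offset would satisfy the check.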
I am trying to create a movie from images. I am following these links:
https://www.leadtools.com/support/forum/posts/t11084- (here I am trying option 2) and https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b61726a4-4b87-49c7-b4fc-8949cd1366ac/visual-c-visual-studio-2017-how-do-you-convert-jpg-images-to-video-in-visual-c?forum=csharpgeneral
void convert()
{
    bmp = new Bitmap(320, 240, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    // create sample source object
    SampleSource smpsrc = new SampleSource();
    ConvertCtrl convertCtrl = new ConvertCtrl();
    // create a new media type wrapper
    MediaType mt = new MediaType();
    double AvgTimePerFrame = (10000000 / 15);
    // set the type to 24-bit RGB video
    mt.Type = Constants.MEDIATYPE_Video;
    mt.SubType = Constants.MEDIASUBTYPE_RGB24;
    // set the format
    mt.FormatType = Constants.FORMAT_VideoInfo;
    VideoInfoHeader vih = new VideoInfoHeader();
    int bmpSize = GetBitmapSize(bmp);
    // set up the video info header
    vih.bmiHeader.biCompression = 0; // BI_RGB
    vih.bmiHeader.biBitCount = 24;
    vih.bmiHeader.biWidth = bmp.Width;
    vih.bmiHeader.biHeight = bmp.Height;
    vih.bmiHeader.biPlanes = 1;
    vih.bmiHeader.biSizeImage = bmpSize;
    vih.bmiHeader.biClrImportant = 0;
    vih.AvgTimePerFrame.lowpart = (int)AvgTimePerFrame;
    vih.dwBitRate = bmpSize * 8 * 15;
    mt.SetVideoFormatData(vih, null, 0);
    // set fixed-size samples matching the bitmap size
    mt.SampleSize = bmpSize;
    mt.FixedSizeSamples = true;
    // assign the source media type
    smpsrc.SetMediaType(mt);
    // select the LEAD compressor
    convertCtrl.VideoCompressors.MCmpMJpeg.Selected = true;
    convertCtrl.SourceObject = smpsrc;
    convertCtrl.TargetFile = @"D:\Projects\LEADTool_Movie_fromImage\ImageToVideo_LeadTool\ImageToVideo_LeadTool\Images\Out\aa.avi";
    //convertCtrl.TargetFile = "C:\\Users\\vipul.langalia\\Documents\\count.avi";
    convertCtrl.TargetFormat = TargetFormatType.WMVMux;
    convertCtrl.StartConvert();
    BitmapData bmpData;
    int i = 1;
    byte[] a = new byte[bmpSize];
    System.Drawing.Rectangle rect = new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height);
    var imgs = GetAllFiles();
    foreach (var item in imgs)
    {
        bmpSize = GetBitmapSize(item);
        MediaSample ms = smpsrc.GetSampleBuffer(30000);
        ms.SyncPoint = true;
        bmpData = item.LockBits(rect, ImageLockMode.ReadWrite, item.PixelFormat);
        Marshal.Copy(bmpData.Scan0, a, 0, bmpSize);
        item.UnlockBits(bmpData);
        ms.SetData(bmpSize, a);
        SetSampleTime(ms, i, AvgTimePerFrame);
        smpsrc.DeliverSample(1000, ms);
        i++;
    }
    smpsrc.DeliverEndOfStream(1000);
}
byte[] GetByteArrayFromWriteableBitmap(WriteableBitmap bitmapSource)
{
    var width = bitmapSource.PixelWidth;
    var height = bitmapSource.PixelHeight;
    var stride = width * ((bitmapSource.Format.BitsPerPixel + 7) / 8);
    var bitmapData = new byte[height * stride];
    bitmapSource.CopyPixels(bitmapData, stride, 0);
    return bitmapData;
}

private int GetBitmapSize(WriteableBitmap bmp)
{
    int bytesPerLine = (((int)bmp.Width * 24 + 31) & ~31) / 8;
    return bytesPerLine * (int)bmp.Height;
}

private int GetBitmapSize(Bitmap bmp)
{
    int bytesPerLine = ((bmp.Width * 24 + 31) & ~31) / 8;
    return bytesPerLine * bmp.Height;
}
It throws an OutOfMemoryException when executing the ms.SetData(bmpSize, a); statement. Also, if I instead pass a byte[] obtained from var a = System.IO.File.ReadAllBytes(imagePath); to ms.SetData(bmpSize, a);, it does not throw, but the video file is not created properly.
Can anybody please help me?
There are a couple of problems with your code:
Are all your images 320x240 pixels? If not, you should resize them to these exact dimensions before delivering them as video samples to the Convert control.
If you want to use a different size, you can, but it should be the same size for all images, and you should modify the code accordingly.
You are setting the TargetFormat property to WMVMux, but the output file name has an ".avi" extension. If you want to save AVI files, set TargetFormat = TargetFormatType.AVI.
If you still face problems after this, feel free to contact support@leadtools.com and provide full details about what you tried and what errors you're getting. Email support is free for LEADTOOLS SDK owners and also for free evaluation users.
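If the source images are not already 320x240, something along these lines could normalise them before LockBits (a sketch using System.Drawing; ResizeTo is a hypothetical helper name, not part of the LEADTOOLS API):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

static class FrameHelper
{
    // Hypothetical helper: stretch an image to the exact frame size the
    // media type declares, in the 24bpp RGB format the sample source expects.
    public static Bitmap ResizeTo(Image source, int width, int height)
    {
        var result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        using (var g = Graphics.FromImage(result))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, new Rectangle(0, 0, width, height));
        }
        return result;
    }
}
```

Inside the foreach loop, each file would then be wrapped as `using (var frame = FrameHelper.ResizeTo(item, 320, 240)) { ... }` and the LockBits/Copy done on `frame` instead of `item`.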
I followed this solution for my project: How to create bitmap from Surface (SharpDX)
I don't have enough reputation to comment, so I'm opening a new question here.
My project is basically in Direct2D: I have a Surface buffer and a swap chain. I want to put my buffer into a DataStream and read its values into a bitmap and save it to disk (like a screen capture), but my code won't work, since all the byte values are 0 (which is black). This doesn't make sense, since my image is fully white with a bit of blue.
Here is my code:
SwapChainDescription description = new SwapChainDescription()
{
    ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
    SampleDescription = new SampleDescription(1, 0),
    Usage = Usage.RenderTargetOutput,
    BufferCount = 1,
    SwapEffect = SwapEffect.Discard,
    IsWindowed = true,
    OutputHandle = this.Handle
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug | DeviceCreationFlags.BgraSupport, description, out device, out swapChain);
SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
SharpDX.DXGI.Adapter dxgiAdapter = dxgiDevice.Adapter;
SharpDX.Direct2D1.Device d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);
d2dContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice, SharpDX.Direct2D1.DeviceContextOptions.None);
SharpDX.Direct3D11.DeviceContext d3DeviceContext = new SharpDX.Direct3D11.DeviceContext(device);
properties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);
Surface backBuffer = swapChain.GetBackBuffer<Surface>(0);
d2dTarget = new SharpDX.Direct2D1.Bitmap(d2dContext, backBuffer, properties);
d2dContext.Target = d2dTarget;
playerBitmap = this.LoadBitmapFromContentFile(@"C:\Users\ndesjardins\Desktop\wave.png");
//System.Drawing.Bitmap bitmapCanva = new System.Drawing.Bitmap(1254, 735);
d2dContext.BeginDraw();
d2dContext.Clear(SharpDX.Color.White);
d2dContext.DrawBitmap(playerBitmap, new SharpDX.RectangleF(0, 0, playerBitmap.Size.Width, playerBitmap.Size.Height), 1f, SharpDX.Direct2D1.BitmapInterpolationMode.NearestNeighbor);
SharpDX.Direct2D1.SolidColorBrush brush = new SharpDX.Direct2D1.SolidColorBrush(d2dContext, SharpDX.Color.Green);
d2dContext.DrawRectangle(new SharpDX.RectangleF(200, 200, 100, 100), brush);
d2dContext.EndDraw();
swapChain.Present(1, PresentFlags.None);
Texture2D backBuffer3D = backBuffer.QueryInterface<SharpDX.Direct3D11.Texture2D>();
Texture2DDescription desc = backBuffer3D.Description;
desc.CpuAccessFlags = CpuAccessFlags.Read;
desc.Usage = ResourceUsage.Staging;
desc.OptionFlags = ResourceOptionFlags.None;
desc.BindFlags = BindFlags.None;
var texture = new Texture2D(device, desc);
d3DeviceContext.CopyResource(backBuffer3D, texture);
byte[] data = null;
using (Surface surface = texture.QueryInterface<Surface>())
{
    DataStream dataStream;
    var map = surface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
    int lines = (int)(dataStream.Length / map.Pitch);
    data = new byte[surface.Description.Width * surface.Description.Height * 4];
    dataStream.Position = 0;
    int dataCounter = 0;
    // width of the surface - 4 bytes per pixel.
    int actualWidth = surface.Description.Width * 4;
    for (int y = 0; y < lines; y++)
    {
        for (int x = 0; x < map.Pitch; x++)
        {
            if (x < actualWidth)
            {
                data[dataCounter++] = dataStream.Read<byte>();
            }
            else
            {
                dataStream.Read<byte>();
            }
        }
    }
    dataStream.Dispose();
    surface.Unmap();
    int width = surface.Description.Width;
    int height = surface.Description.Height;
    byte[] bytewidth = BitConverter.GetBytes(width);
    byte[] byteheight = BitConverter.GetBytes(height);
    Array.Copy(bytewidth, 0, data, 0, 4);
    Array.Copy(byteheight, 0, data, 4, 4);
}
Do you have any idea why the byte array returned at the end is full of 0s, when it should be mostly 255? All I did in my back buffer was draw a bitmap image and a rectangle. The Array.Copy calls add the width and height as a header to the byte array, so that I can create a bitmap out of it.
I answered in a comment, but the formatting there is horrible, so apologies!
https://gamedev.stackexchange.com/a/112978/29920 This looks promising, but as you said in reply to mine, it was some time ago and I'm pulling this out of thin air. If it doesn't work, either someone with more current knowledge will have to answer or I'll have to grab some source code and try it myself.
SharpDX.Direct2D1.Bitmap dxbmp = new SharpDX.Direct2D1.Bitmap(renderTarget,
    new SharpDX.Size2(bmpWidth, bmpHeight),
    new BitmapProperties(renderTarget.PixelFormat));
dxbmp.CopyFromMemory(bmpBits, bmpWidth * 4);
This looks kind of like what you need. I'm assuming bmpBits here is either a byte array or a memory stream, either of which could then be saved off, or would at least give you something to look at to see whether you're actually getting pixel data.
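If you want to eyeball the bytes copied out of the staging surface, one way (a sketch, assuming a tightly packed 32bpp BGRA buffer without the width/height header you prepend) is to wrap them in a GDI+ bitmap and save a PNG:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class CaptureDebug
{
    // Sketch: copy a tightly packed BGRA buffer into a GDI+ bitmap and save it,
    // so the captured bytes can be inspected as an ordinary image file.
    public static void SaveBgra(byte[] pixels, int width, int height, string path)
    {
        using (var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb))
        {
            var rect = new Rectangle(0, 0, width, height);
            BitmapData data = bmp.LockBits(rect, ImageLockMode.WriteOnly, bmp.PixelFormat);
            // Copy row by row in case the GDI+ stride is padded beyond width * 4.
            for (int y = 0; y < height; y++)
                Marshal.Copy(pixels, y * width * 4, data.Scan0 + y * data.Stride, width * 4);
            bmp.UnlockBits(data);
            bmp.Save(path, ImageFormat.Png);
        }
    }
}
```

GDI+'s Format32bppArgb stores bytes in BGRA order in memory, which matches the B8G8R8A8_UNorm back buffer, so no channel swizzling should be needed.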
I have some raw sensor data in a single-dimension byte array. The data is actually in IEEE single-precision floating-point format. I know the X and Y axis lengths, and I want to create a Windows bitmap (greyscale; there is only one colour plane, containing luminance data) from my data.
Here's what I'm trying so far:
var bitmap = new Bitmap(xAxis, yAxis, PixelFormat.Format16bppGrayScale);
var pixelReader = GetPixelReader(hdu.MandatoryKeywords.BitsPerPixel);
using (var stream = new MemoryStream(hdu.RawData, writable: false))
{
    using (var reader = new BinaryReader(stream, Encoding.ASCII))
    {
        for (var y = 0; y < yAxis; y++)
        {
            for (var x = 0; x < xAxis; x++)
            {
                var pixel = pixelReader(reader);
                var argb = Color.FromArgb(pixel, pixel, pixel);
                bitmap.SetPixel(x, y, argb);
            }
        }
    }
}
return bitmap;
pixelReader is a delegate, defined as:
private static int ReadIeeeSinglePrecision(BinaryReader reader)
{
    return (int)reader.ReadSingle();
}
When I run this code, I get an InvalidArgumentException on the line where I try to set the pixel value. I stepped through it in the debugger, and x = 0, y = 0 and pixel = 0. It doesn't say which argument is invalid or why (thanks, Microsoft).
So clearly I'm doing something wrong, and I suspect there is probably a more efficient way of going about this anyway. I would appreciate any suggestions. For reasons I can't quite put my finger on, I am finding this code very challenging to write.
OK, here is what worked in the end. It's based on code taken from Mark Dawson's answer to this question: https://social.msdn.microsoft.com/Forums/vstudio/en-US/10252c05-c4b6-49dc-b2a3-4c1396e2c3ab/writing-a-16bit-grayscale-image?forum=csharpgeneral
private static Bitmap CreateBitmapFromBytes(byte[] pixelValues, int width, int height)
{
    // Create an image that will hold the image data
    Bitmap pic = new Bitmap(width, height, PixelFormat.Format16bppGrayScale);
    // Get a reference to the image's pixel data
    Rectangle dimension = new Rectangle(0, 0, pic.Width, pic.Height);
    BitmapData picData = pic.LockBits(dimension, ImageLockMode.ReadWrite, pic.PixelFormat);
    IntPtr pixelStartAddress = picData.Scan0;
    // Copy the pixel data into the bitmap structure
    System.Runtime.InteropServices.Marshal.Copy(pixelValues, 0, pixelStartAddress, pixelValues.Length);
    pic.UnlockBits(picData);
    return pic;
}
So then I modified my own code to convert the IEEE float data into 16-bit integers and create the bitmap directly from that, like so:
var pixelReader = GetPixelReader(hdu.MandatoryKeywords.BitsPerPixel);
var imageBytes = new byte[xAxis * yAxis * sizeof(Int16)];
using (var outStream = new MemoryStream(imageBytes, writable: true))
using (var writer = new BinaryWriter(outStream))
using (var inStream = new MemoryStream(hdu.RawData, writable: false))
using (var reader = new BinaryReader(inStream, Encoding.ASCII))
{
    for (var y = 0; y < yAxis; y++)
    {
        for (var x = 0; x < xAxis; x++)
        {
            writer.Write(pixelReader(reader));
        }
    }
}
var bitmap = CreateBitmapFromBytes(imageBytes, xAxis, yAxis);
return bitmap;
This appears to also address the efficiency problem highlighted in the comments.
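One caveat: if the raw floats are not already in the 0-65535 range, a plain cast to an integer will clip or wrap them. A minimal scaling sketch (the min/max normalisation is an assumption about the sensor data, not something the code above does):

```csharp
using System;

static class PixelScaling
{
    // Map an IEEE float sample onto the full 16-bit range,
    // given the data's known (or pre-scanned) minimum and maximum.
    public static ushort ToUInt16(float sample, float min, float max)
    {
        if (max <= min) return 0;                       // degenerate range: emit black
        double normalised = (sample - min) / (max - min);
        normalised = Math.Max(0.0, Math.Min(1.0, normalised)); // clamp outliers
        return (ushort)Math.Round(normalised * ushort.MaxValue);
    }
}
```

Writing the resulting ushort (rather than an int) also keeps each sample at the two bytes that the Int16-sized buffer allows for.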
I am trying to split and merge a multi-page TIFF image. The reason I split is to draw annotations at each image level. The code works, but the merged TIFF is much larger than the source TIFF. For example, I tested with a 17-page colour TIFF (5 MB); after splitting and merging, it produces an 85 MB TIFF. I am using BitMiracle.LibTiff. I even commented out the annotation code temporarily while investigating the size issue; I am not sure what I am doing wrong. Here is the code to split:
private List<Bitmap> SplitTiff(byte[] imageData)
{
    var bitmapList = new List<Bitmap>();
    var tiffStream = new TiffStreamForBytes(imageData);
    // open tif file
    var tif = Tiff.ClientOpen("", "r", null, tiffStream);
    // get number of pages
    var num = tif.NumberOfDirectories();
    if (num == 1)
        return new List<Bitmap>
        {
            new Bitmap(GetImage(imageData))
        };
    for (short i = 0; i < num; i++)
    {
        // set current page
        tif.SetDirectory(i);
        FieldValue[] photoMetric = tif.GetField(TiffTag.PHOTOMETRIC);
        Photometric photo = Photometric.MINISBLACK;
        if (photoMetric != null && photoMetric.Length > 0)
            photo = (Photometric)photoMetric[0].ToInt();
        if (photo != Photometric.MINISBLACK && photo != Photometric.MINISWHITE)
            bitmapList.Add(GetBitmapFromTiff(tif));
        // else
        //     bitmapList.Add(GetBitmapFromTiffBlack(tif, photo)); // commented temporarily to fix size issue
    }
    return bitmapList;
}
private static Bitmap GetBitmapFromTiff(Tiff tif)
{
    var value = tif.GetField(TiffTag.IMAGEWIDTH);
    var width = value[0].ToInt();
    value = tif.GetField(TiffTag.IMAGELENGTH);
    var height = value[0].ToInt();
    // Read the image into the memory buffer
    var raster = new int[height * width];
    if (!tif.ReadRGBAImage(width, height, raster))
    {
        return null;
    }
    var bmp = new Bitmap(width, height, PixelFormat.Format32bppRgb);
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    var bmpdata = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppRgb);
    var bits = new byte[bmpdata.Stride * bmpdata.Height];
    for (var y = 0; y < bmp.Height; y++)
    {
        int rasterOffset = y * bmp.Width;
        int bitsOffset = (bmp.Height - y - 1) * bmpdata.Stride;
        for (int x = 0; x < bmp.Width; x++)
        {
            int rgba = raster[rasterOffset++];
            bits[bitsOffset++] = (byte)((rgba >> 16) & 0xff);
            bits[bitsOffset++] = (byte)((rgba >> 8) & 0xff);
            bits[bitsOffset++] = (byte)(rgba & 0xff);
            bits[bitsOffset++] = (byte)((rgba >> 24) & 0xff);
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(bits, 0, bmpdata.Scan0, bits.Length);
    bmp.UnlockBits(bmpdata);
    return bmp;
}
And here is the code to merge the individual bitmaps back into a TIFF:
public static PrizmImage PrizmImageFromBitmaps(List<Bitmap> imageItems, string ext)
{
    if (imageItems.Count == 1 && !(ext.ToLower().Equals(".tif") || ext.ToLower().Equals(".tiff")))
        return new PrizmImage(new MemoryStream(ImageUtility.BitmapToByteArray(imageItems[0])), ext);
    var codecInfo = GetCodecInfo();
    var memoryStream = new MemoryStream();
    var encoderParams = new EncoderParameters(1);
    encoderParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
    var initialImage = imageItems[0];
    var masterBitmap = imageItems[0]; // new Bitmap(initialImage);
    masterBitmap.Save(memoryStream, codecInfo, encoderParams);
    encoderParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
    for (var i = 1; i < imageItems.Count; i++)
    {
        var img = imageItems[i];
        masterBitmap.SaveAdd(img, encoderParams);
        img.Dispose();
    }
    encoderParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
    masterBitmap.SaveAdd(encoderParams);
    memoryStream.Seek(0, SeekOrigin.Begin);
    encoderParams.Dispose();
    masterBitmap.Dispose();
    return new PrizmImage(memoryStream, ext);
}
Most probably, the issue is caused by the fact that you are converting all images to 32 bits-per-pixel bitmaps.
Suppose you have a black-and-white, fax-encoded image. It might be a 100 KB TIFF file, yet the same image might take 10+ megabytes when you save it as a 32bpp bitmap. Compressing those megabytes helps, but you will never achieve the same compression ratio as in the source image, because you have increased the amount of image data from 1 bit per pixel to 32 bits per pixel.
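To put rough numbers on the blow-up (plain arithmetic, using a hypothetical 300 dpi scanned page; the dimensions are illustrative, not taken from the question):

```csharp
using System;

class SizeEstimate
{
    // Raw (uncompressed) size of an image at a given bit depth.
    public static long RawBytes(int width, int height, int bitsPerPixel)
        => (long)width * height * bitsPerPixel / 8;

    static void Main()
    {
        const int width = 2550, height = 3300;   // a typical 300 dpi letter-size page

        long oneBpp = RawBytes(width, height, 1);       // bilevel fax data
        long thirtyTwoBpp = RawBytes(width, height, 32); // after conversion to 32bpp

        Console.WriteLine(oneBpp);                 // 1051875 bytes (~1 MB before G4 compression)
        Console.WriteLine(thirtyTwoBpp / oneBpp);  // 32: the compressor starts with 32x more data
    }
}
```

Even a good lossless compressor working on the 32bpp data cannot recover that 32x head start, which is why the round-tripped file balloons.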
So you should not convert images to 32bpp bitmaps if you can avoid it. Try to preserve their properties and compression as much as possible. Have a look at the source code of the TiffCP utility for hints on how to do that.
If you absolutely have to convert images to 32bpp bitmaps (and you might have to, if you add colourful annotations to them), then there is not much that can be done to reduce the resulting size. You might decrease the output size by 10-20% if you choose a better compression scheme and tune it properly. But that's all, I'm afraid.
I suspect that this part of my code causes a memory leak:
public FileResult ShowCroppedImage(int id, int size)
{
    string path = "~/Uploads/Photos/";
    string sourceFile = Server.MapPath(path) + id + ".jpg";
    MemoryStream stream = new MemoryStream();
    var bitmap = imageManipulation.CropImage(sourceFile, size, size);
    bitmap.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg);
    Byte[] bytes = stream.ToArray();
    return File(bytes, "image/png");
}
How could I write a test to see whether this piece of code is the cause?
EDIT:
public Image CropImage(string sourceFile, int newWidth, int newHeight)
{
    Image img = Image.FromFile(sourceFile);
    Image outimage;
    int sizeX = newWidth;
    int sizeY = newHeight;
    MemoryStream mm = null;
    double ratio = 0;
    int fromX = 0;
    int fromY = 0;
    if (img.Width < img.Height)
    {
        ratio = img.Width / (double)img.Height;
        newHeight = (int)(newHeight / ratio);
        fromY = (img.Height - img.Width) / 2;
    }
    else
    {
        ratio = img.Height / (double)img.Width;
        newWidth = (int)(newWidth / ratio);
        fromX = (img.Width - img.Height) / 2;
    }
    if (img.Width == img.Height)
        fromX = 0;
    Bitmap result = new Bitmap(sizeX, sizeY);
    // use a graphics object to draw the resized image into the bitmap
    Graphics grPhoto = Graphics.FromImage(result);
    // set the resize quality modes to high quality
    grPhoto.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
    grPhoto.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    grPhoto.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
    // draw the image into the target bitmap and do the crop
    grPhoto.DrawImage(
        img,
        new System.Drawing.Rectangle(0, 0, newWidth, newHeight),
        new System.Drawing.Rectangle(fromX, fromY, img.Width, img.Height),
        System.Drawing.GraphicsUnit.Pixel);
    // Save out to memory and get an image from it to send back out of the method.
    mm = new MemoryStream();
    result.Save(mm, System.Drawing.Imaging.ImageFormat.Jpeg);
    img.Dispose();
    result.Dispose();
    grPhoto.Dispose();
    outimage = Image.FromStream(mm);
    return outimage;
}
I would write it as
public FileResult ShowCroppedImage(int id, int size)
{
    string path = "~/Uploads/Photos/";
    string sourceFile = Server.MapPath(path) + id + ".jpg";
    using (MemoryStream stream = new MemoryStream())
    {
        using (var bitmap = imageManipulation.CropImage(sourceFile, size, size))
        {
            bitmap.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg);
            Byte[] bytes = stream.ToArray();
            return File(bytes, "image/png");
        }
    }
}
to ensure that stream.Dispose and bitmap.Dispose are called.
You might want to call stream.Dispose(); after Byte[] bytes = stream.ToArray();.
Given that the question was how to detect memory leaks/usage, I'd recommend writing a method that calls your function, recording the memory usage before and after:
public void SomeTestMethod()
{
    var before = System.GC.GetTotalMemory(false);
    // call your method
    var used = System.GC.GetTotalMemory(false) - before;
    var unreclaimed = System.GC.GetTotalMemory(true) - before;
}
The before variable measures the memory usage before your function runs. The used variable then holds how much memory your function allocated before the garbage collector ran, and unreclaimed tells you how many bytes your function is still holding on to even after the collector has tried to clean up your objects.
I suspect used will be high and unreclaimed will not be. Putting a using around your memory stream, as the other posters suggest, should bring the two closer together, although bear in mind you still have a byte array holding on to memory.
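To make the comparison repeatable, the same idea can be wrapped in a reusable helper that takes the code under test as a delegate (a sketch; the helper and its names are mine, not from the original post):

```csharp
using System;

static class MemoryProbe
{
    // Run an action and report how many bytes it allocated (before any collection)
    // and how many remain reachable even after a forced, blocking collection.
    public static (long used, long unreclaimed) Measure(Action action)
    {
        long before = GC.GetTotalMemory(true);   // start from a collected baseline
        action();
        long used = GC.GetTotalMemory(false) - before;
        long unreclaimed = GC.GetTotalMemory(true) - before;
        return (used, unreclaimed);
    }
}
```

Calling it repeatedly around ShowCroppedImage and watching whether unreclaimed trends upward over many iterations is a more reliable leak signal than any single measurement, since GC numbers are noisy.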