I'm working on a C# form that reads graphics from a binary file and lets me create maps with them. I'm having trouble displaying an image on the form based on the bytes in the binary file. The binary file stores a bunch of 8x8 pixel images, and each row of an image is stored in two different bytes, which I have to do some processing on. After reading all the bytes, I need to look up palette data based on them, and then I can finally get my image colored properly.
I won't get into the details of how the image processing works, since I think I can do that by myself (unless someone is curious, in which case I can explain further - I'm basically reading a CHR file that contains the patterns for a NES game and displaying a tile using the NES palette). By the end of all this processing, I have the RGB values for each pixel of the image stored as a byte array. My problem is that I don't know how to convert this byte array into an 8x8 image.
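For the curious, here is a rough sketch of that decoding step, assuming the standard NES CHR layout (16 bytes per tile, two bitplanes, with row y of a tile stored in bytes y and y + 8):

static byte[,] DecodeChrTile(byte[] tile)
{
    // Each pixel is a 2-bit palette index built from the two bitplanes.
    byte[,] pixels = new byte[8, 8];
    for (int y = 0; y < 8; y++)
    {
        byte lo = tile[y];     // low bitplane
        byte hi = tile[y + 8]; // high bitplane
        for (int x = 0; x < 8; x++)
        {
            int bit = 7 - x;   // leftmost pixel is the most significant bit
            pixels[y, x] = (byte)((((hi >> bit) & 1) << 1) | ((lo >> bit) & 1));
        }
    }
    return pixels;
}

The 2-bit indices then go through the chosen NES palette to produce the RGB triplets described below.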
So for example, suppose I have an 8x8 tile that is just solid red. By the end of the processing, this tile would be represented by an array like the one below (each pixel uses 3 bytes: byte 1 = R, byte 2 = G, byte 3 = B):
byte[] tilecolors =
{
    // 8x8 pixels, 3 bytes (R, G, B) per pixel = 192 values
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
    255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0, 255,0,0,
};
So, the binary file has a bunch of these images stored in it as bytes, and what I want to do is transform those bytes into 8x8 images that will be displayed in the app (probably inside a PictureBox, since I want the user to be able to use these tiles to create maps and such).
I tried using a function I saw on another website that loads an image from bytes using a MemoryStream, but I get a "Parameter is not valid" error. This is the function I tried:
public Image byteArrayToImage(byte[] byteArrayIn)
{
    MemoryStream ms = new MemoryStream(byteArrayIn);
    // Throws "Parameter is not valid" here:
    Image returnImage = Image.FromStream(ms);
    return returnImage;
}
EDIT:
I got it to work by following @ckuri's suggestion. After I calculate the RGB colors from reading the binary file, I can use the Bitmap class's SetPixel method to draw the image, as in the sketch below.
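A minimal sketch of that approach (assumes System.Drawing; the method name and the pictureBox1 control are just for illustration):

public Bitmap TileToBitmap(byte[] rgb)
{
    // Build an 8x8 bitmap from raw RGB triplets (3 bytes per pixel,
    // laid out row by row, like the tilecolors array above).
    Bitmap bmp = new Bitmap(8, 8, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    for (int y = 0; y < 8; y++)
    {
        for (int x = 0; x < 8; x++)
        {
            int i = (y * 8 + x) * 3;
            bmp.SetPixel(x, y, Color.FromArgb(rgb[i], rgb[i + 1], rgb[i + 2]));
        }
    }
    return bmp;
}

// Usage, e.g. in the form:
pictureBox1.Image = TileToBitmap(tilecolors);

Note that SetPixel is slow; it's fine for 8x8 tiles, but for larger images LockBits is the faster route.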
I have a .png file and I did the following two things:
1. Read the file as a byte array:
byte[] arr = File.ReadAllBytes(Filename);
2. Read it using Emgu: loaded the file into an Image<Gray,Byte> and then converted its Bitmap to a byte array using the following:
Image<Gray,Byte> Img = new Image<Gray,Byte>(Filename);
byte[] arr = ImageToByte2(Img.Bitmap);
public static byte[] ImageToByte2(Image img)
{
    using (MemoryStream stream = new MemoryStream())
    {
        img.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
        return stream.ToArray();
    }
}
The two byte arrays have different lengths, and I don't understand why. Please help.
The first option reads all the bytes of the file as it exists on disk, header, metadata and pixel data alike. The second one decodes the image and then re-encodes it as a brand-new PNG, so the bytes you get are not the original file's bytes at all.
For more info on the structure and the header of a PNG, look here: https://en.wikipedia.org/wiki/Portable_Network_Graphics
When you use Emgu to generate the PNG byte array, you can't be sure that the compression level is the same as in the PNG image you had in the first place.
There is an overload of the Save method where you can specify encoder parameters as well. If you adjust the compression level, maybe you can get the same byte length; see the sketch below.
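A sketch of what that overload looks like (assumes using directives for System.Drawing, System.Drawing.Imaging, System.IO and System.Linq; note that GDI+'s PNG encoder ignores most encoder parameters, so in practice this may not change the output size at all):

public static byte[] ImageToPngBytes(Image img)
{
    // Look up the built-in PNG encoder.
    ImageCodecInfo pngCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Png.Guid);
    using (EncoderParameters parameters = new EncoderParameters(1))
    using (MemoryStream stream = new MemoryStream())
    {
        // Encoder.Quality is the usual knob; for PNG it is frequently a no-op.
        parameters.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
        img.Save(stream, pngCodec, parameters);
        return stream.ToArray();
    }
}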
I am trying to convert YUV420 frames to Bitmap or Image. I am reading these frames from an MP4 video in C# using the AVBlocks library. After creating an input and an output socket with the AVBlocks classes, I pull each frame from the video with a YUV420 color format and an UncompressedVideo stream type. I basically do this by calling Transcoder.Pull(int outputIndex, MediaSample outputData); the MediaBuffer that is part of the outputData then holds the frame data in an array of bytes. I am trying to convert these bytes to a Bitmap or Image so that I can eventually show each frame in a PictureBox in the WinForms application.
What I've tried:
I have tried using a MemoryStream, as shown below, but I get an unhandled ArgumentException saying that the parameter is not valid. I tried using an ImageConverter as well to convert to an Image, but I get the same exception. Then I converted the byte array from YUV to RGB format and gave the updated array to the MemoryStream, but again no luck. I also tried changing the color format of the output socket from YUV420 to a BGR format, but it resulted in the same issue. The code that tries to convert to a bitmap using a MemoryStream:
while (transcoder.Pull(out inputIndex, yuvFrame))
{
    buffer = (MediaBuffer)yuvFrame.Buffer.Clone();
    Bitmap b;
    // Copy the frame's raw bytes out of the media buffer.
    byte[] temp = new byte[buffer.DataSize];
    Array.Copy(buffer.Start, buffer.DataOffset, temp, 0, buffer.DataSize);
    var ms = new MemoryStream(temp);
    b = new Bitmap(ms); // throws ArgumentException: "Parameter is not valid"
}
The aforementioned exception is thrown on the last line of the code. I'm not sure if it's the color format, the stream type, or something else that's causing the problem. If someone wants to see more of the code (setting up the input & output sockets, etc.), let me know. For reference, the link to the example I've been following from AVBlocks is this and the link to the MediaBuffer class is this.
The Bitmap(Stream stream) constructor expects the bytes of an actual image file, like a PNG, JPEG, BMP or GIF. If I'm reading this correctly, you don't have that; you only have raw pixel data. That isn't enough on its own, because it lacks all the information about the image's width, height, colour depth, etc.
You will need to actually construct an image object from the raw data. This isn't entirely trivial; it means you need to make a new image object with the correct dimensions and colour format, then access its backing byte array and write your data into it. The actual code for creating an image out of a byte array can be found in this answer.
Note that you'll have to take the stride of the resulting data into account: the number of bytes on each line of the image. Images are saved per line, and those lines are usually padded to a multiple of 4 bytes. This messes things up badly if you don't account for it.
If your data is completely compact, then the stride to give to the BuildImage function I linked to will just be your image width multiplied by the number of bytes per pixel (3 for 24bpp RGB), but if not, you'll have to pad it to the next multiple of 4.
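Since the linked answer isn't reproduced here, this is a generic reconstruction of that kind of BuildImage helper, not the exact linked code (assumes System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices):

public static Bitmap BuildImage(byte[] sourceData, int width, int height,
                                int stride, PixelFormat pixelFormat)
{
    Bitmap bmp = new Bitmap(width, height, pixelFormat);
    BitmapData targetData = bmp.LockBits(new Rectangle(0, 0, width, height),
                                         ImageLockMode.WriteOnly, pixelFormat);
    try
    {
        // Copy row by row: the bitmap's internal stride (targetData.Stride)
        // may differ from the stride of the source data.
        int bytesPerRow = Math.Min(stride, targetData.Stride);
        for (int y = 0; y < height; y++)
        {
            Marshal.Copy(sourceData, y * stride,
                         IntPtr.Add(targetData.Scan0, y * targetData.Stride), bytesPerRow);
        }
    }
    finally
    {
        bmp.UnlockBits(targetData);
    }
    return bmp;
}

For compact 24bpp data the source stride is simply width * 3; the padded stride GDI+ itself uses works out to ((width * 3 + 3) / 4) * 4 bytes per row.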
I'm trying to convert a JPG image to a (double) 2D array. Using:
Image image = Image.FromFile("image.jpg");
I get a 500 x 500 image (according to image.Size.Height and image.Size.Width). But when I try to convert this to a byte array using
byte[] arr;
using (MemoryStream ms = new MemoryStream())
{
    image.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
    arr = ms.ToArray();
}
I get arr.GetLength(0) = 35640, which is smaller than 500*500 = 250000. I plan to convert the 1D array arr to a 2D array after that. Am I missing something?
You are not saving a pixel representation; you are saving the bytes of a JPEG file. If you want the actual pixels, you need to loop over the pixels.
Also be aware that each pixel has a minimum of 3 components: Red, Green, Blue.
If you save the image in JPEG format, the pixels written to the stream will be compressed.
If you want to manipulate the pixels of the image, you should probably load the image into a Bitmap and then call Bitmap.LockBits to get at the raw pixels in memory.
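For example, a minimal sketch of that approach (assumes System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices; averaging the three channels into one double per pixel is just one plausible reading of "a (double) 2D array"):

static double[,] ImageToDoubleArray(Bitmap bmp)
{
    // Lock the pixels as 24bpp so each pixel is exactly 3 bytes (B, G, R).
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                                   ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    byte[] pixels = new byte[data.Stride * bmp.Height];
    Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
    bmp.UnlockBits(data);

    double[,] result = new double[bmp.Height, bmp.Width];
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            int i = y * data.Stride + x * 3; // rows are padded to data.Stride bytes
            result[y, x] = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3.0;
        }
    }
    return result;
}

// Usage:
double[,] values = ImageToDoubleArray(new Bitmap("image.jpg"));

For a 500 x 500 image this gives a 500 x 500 array, unlike the 35640 bytes of the re-encoded JPEG stream.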