Unity Texture2D LoadImage exact values - C#

Why, when I load an external 1024x1024 RGBA32 .png (saved via either PaintXP or Gimp) with a blob of (64, 64, 64) pixels in the centre, does the Debug.Log line at the bottom return incorrect values? The closest I can get is with an uncompressed .png (from Gimp), which gives values like (65, 66, 65), but with a standard image they come back as (56, 56, 56).
Texture2D tex = null;
byte[] fileData;
if (File.Exists(mapPath + "/map.png"))
{
    fileData = File.ReadAllBytes(mapPath + "/map.png");
    tex = new Texture2D(size, size, TextureFormat.RGBA32, false);
    tex.anisoLevel = 0;
    tex.Compress(false);
    tex.filterMode = FilterMode.Point;
    tex.LoadImage(fileData); // Auto-resize the texture dimensions
    Color32[] pixelsRaw = tex.GetPixels32(0);
    Color32[,] pixels = new Color32[size, size];
    for (int j = 0; j < size - 1; j++)
    {
        for (int i = 0; i < size - 1; i++)
        {
            pixels[i, j] = pixelsRaw[(j * tex.height) + i];
        }
    }
    Debug.Log(pixels[512, 512]);
}
This was all in an attempt to read a tile-based level from a .png image. But with the returned values being so inaccurate, I can't seem to find a way to make this possible. (I've done this loads of times with Java.awt/LWJGL and it works fine there, why not Unity?)
To clarify, this image is being loaded from outside the Unity project, so there is no way to manually set the compression/format settings via the editor.

There are a couple of problems: compression and gamma correction.
1. When you call Compress on your Texture2D, it will compress your texture; the bool parameter only controls whether it does a low-quality or a high-quality compression. So just remove the line: tex.Compress(false);
2. The PNG has gamma information. Gimp has an option, when you export to PNG, to save the gamma or not. So open your image in Gimp and export it with the "Save Gamma" option unchecked.
Alternatively I was able to get the same result by removing the gAMA and sRGB attributes from the png with TweakPNG.
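For reference, a minimal sketch of the corrected loading code, under the same mapPath/size assumptions as the question:
Texture2D tex = new Texture2D(size, size, TextureFormat.RGBA32, false);
tex.filterMode = FilterMode.Point;
tex.anisoLevel = 0;
// No Compress() call, so the pixel data stays uncompressed RGBA32
tex.LoadImage(File.ReadAllBytes(mapPath + "/map.png")); // LoadImage resizes the texture to match the file
Color32[] pixelsRaw = tex.GetPixels32(0);
Debug.Log(pixelsRaw[(512 * tex.width) + 512]); // centre pixel; should read (64, 64, 64, 255) once the gamma/sRGB chunks are gone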

Related

C# cropping bitmaps depending on bitmap location

I have a PDF file containing numerous pages of hand-written surveys. My C# application currently breaks each PDF page down into single Bitmap objects (each PDF page is a Bitmap object) and then uses various APIs to read the hand-written data from each Bitmap object and then enters the extracted data into a database.
My problem is, in order for my API extraction to work, each checkbox and each answer box needs to be in the exact same X/Y pixel location in every Bitmap. Because these PDF files are scanned images, each PDF page may be a few pixels off in any direction, e.g. some pages are a few pixels off to the left, right, top or bottom.
Does anybody know if it is possible to crop a Bitmap based on some constant in each Bitmap? For example (please see Bitmap image below), if I could crop each Bitmap starting at the "S" in Secondary School Study at the top left of each page, then each Bitmap would be cropped at the exact same location and this would solve my problem of each checkbox and answer box being in the same XY locations.
Any advice would be appreciated.
EDIT: the only possible solution I can think of is looping over each pixel, starting at the top left-hand corner, until it hits a black pixel (which would be the first "S" in Secondary School Study). Could I then crop the Bitmap from this location?
I came up with a solution similar to the one I mentioned above. I scan over the pixels until I reach the first pixel of the "S" in Secondary School Study, then use that pixel's X/Y location to crop a rectangle of fixed height and width starting there. I used bm.GetPixel().GetBrightness() to find out when the scan reached the "S".
Bitmap bm = new Bitmap(@"C:\IronPDFDoc\2.png", true);
bool cropFlag = false;
int cropX = 0;
int cropY = 0;
// Scan the top-left region until the first dark pixel (the "S") is found
for (int y = 0; y < 155; y++)
{
    for (int x = 0; x < 115; x++)
    {
        float pixelBrightness = bm.GetPixel(x, y).GetBrightness();
        if (pixelBrightness < 0.8 && cropFlag == false)
        {
            cropFlag = true;
            cropX = x;
            cropY = y;
        }
    }
}
// Crop a fixed-size rectangle starting at the detected location
Rectangle crop = new Rectangle(cropX, cropY, 648, 915);
Bitmap croppedSurvey = new Bitmap(crop.Width, crop.Height);
using (Graphics g = Graphics.FromImage(croppedSurvey))
{
    g.DrawImage(bm, new Rectangle(0, 0, croppedSurvey.Width, croppedSurvey.Height),
        crop, GraphicsUnit.Pixel);
}
croppedSurvey.Save(@"C:\IronPDFDoc\croppedSurvey.png", ImageFormat.Png);

Unity: Reading image pixel color and instantiating object based on that

I need to read an image's pixel colors; the image will be only black and white. If the pixel is white I want to instantiate a white cube, and if the pixel is black I want to instantiate a black cube. This is all new to me, so I did some digging and ended up using System.Drawing and bitmaps. However, now I'm stuck: I don't know how to check for a white pixel.
For example
private void Pixelreader()
{
    Bitmap img = new Bitmap("ImageName.png");
    for (int i = 0; i < img.Width; i++)
    {
        for (int j = 0; j < img.Height; j++)
        {
            System.Drawing.Color pixel = img.GetPixel(i, j);
            if (pixel == /* ...the pixel is white? */)
            {
                // instantiate white color.
            }
        }
    }
}
Is there any other way of doing this? Thanks!
If the image is truly black and white only (that is, every pixel is either pure black or pure white), you can compare against those colors. Note that System.Drawing.Color's == operator compares more than the ARGB values (it also checks the named-color state), so a color returned by GetPixel will not compare equal to System.Drawing.Color.White directly; compare the ARGB values instead. Within the code you posted, it would look like this:
if (pixel.ToArgb() == System.Drawing.Color.White.ToArgb())
{
    //instantiate white color.
}
If the image is part of your Unity assets, a better approach is to read it using Resources. Place the image into an Assets/Resources folder; then you can use the following code:
Texture2D image = (Texture2D)Resources.Load("ImageName"); // note: no file extension in the Resources path
If the image is entirely black or entirely white, no need to loop - just check one pixel:
if (image.GetPixel(0, 0) == Color.white)
{
//Instantiate white cube
}
else
{
//Instantiate black cube
}
You can actually load an image as a resource into a Texture2D, then use Texture2D.GetPixel and the grayscale property of UnityEngine.Color to check whether the color you get back is sufficiently close to white.
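A minimal sketch of that idea, assuming the texture lives in a Resources folder with Read/Write enabled in its import settings, that this runs inside a MonoBehaviour, and that whiteCubePrefab/blackCubePrefab are prefab fields you have assigned (the 0.5f threshold is arbitrary):
Texture2D image = Resources.Load<Texture2D>("ImageName"); // no file extension in the Resources path
for (int x = 0; x < image.width; x++)
{
    for (int y = 0; y < image.height; y++)
    {
        // grayscale is 0 for pure black and 1 for pure white
        bool isWhite = image.GetPixel(x, y).grayscale > 0.5f;
        GameObject prefab = isWhite ? whiteCubePrefab : blackCubePrefab;
        Instantiate(prefab, new Vector3(x, 0f, y), Quaternion.identity);
    }
}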
It sounds like you are going a bit overboard with it and could instead use features already built into Unity. Try taking a look into getting the pixel color during a raycast.
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
    Texture2D textureMap = (Texture2D)hit.transform.GetComponent<Renderer>().material.mainTexture;
    Vector2 pixelUV = hit.textureCoord;
    pixelUV.x *= textureMap.width;
    pixelUV.y *= textureMap.height;
    print("x=" + pixelUV.x + ", y=" + pixelUV.y + " " + textureMap.GetPixel((int)pixelUV.x, (int)pixelUV.y));
}
Adapted to C# from the example here.

Kinect V2 Color Stream Byte Order

I'm working on an application which will stream the color, depth, and IR video data from the Kinect V2 sensor. Right now I'm just putting together the color video part of the app. I've read through some tutorials and actually got some video data coming into my app; the problem seems to be that the bytes arrive in the wrong order, which gives me an oddly discolored image (see below).
So, let me explain how I got here. In my code, I first open the sensor and also instantiate a new multi source frame reader. After I've created the reader, I create an event handler called Reader_MultiSourceFrameArrived:
void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    if (proccessing || gotframe) return;
    // Get a reference to the multi-frame
    var reference = e.FrameReference.AcquireFrame();
    // Open color frame
    using (ColorFrame frame = reference.ColorFrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            proccessing = true;
            var description = frame.ColorFrameSource.FrameDescription;
            bw2 = description.Width / 2;
            bh2 = description.Height / 2;
            bpp = (int)description.BytesPerPixel;
            if (imgBuffer == null)
            {
                imgBuffer = new byte[description.BytesPerPixel * description.Width * description.Height];
            }
            frame.CopyRawFrameDataToArray(imgBuffer);
            gotframe = true;
            proccessing = false;
        }
    }
}
Now, every time a frame is received (and not processing) it should copy the frame data into an array called imgBuffer. When my application is ready I then call this routine to convert the array into a Windows Bitmap that I can display on my screen.
if (gotframe)
{
    if (theBitmap.Rx != bw2 || theBitmap.Ry != bh2) theBitmap.SetSize(bw2, bh2);
    int kk = 0;
    for (int j = 0; j < bh2; ++j)
    {
        for (int i = 0; i < bw2; ++i)
        {
            kk = (j * bw2 * 2 + i) * 2 * bpp;
            theBitmap.pixels[i, bh2 - j - 1].B = imgBuffer[kk];
            theBitmap.pixels[i, bh2 - j - 1].G = imgBuffer[kk + 1];
            theBitmap.pixels[i, bh2 - j - 1].R = imgBuffer[kk + 2];
            theBitmap.pixels[i, bh2 - j - 1].A = 255;
        }
    }
    theBitmap.needupdate = true;
    gotframe = false;
}
So, after this runs, theBitmap contains the image information needed to draw the image on the screen. However, as seen in the image above, it looks quite strange. The most obvious fix would be to simply change the order of the B, G, R values when they get assigned to the bitmap in the double for loop (which I tried); however, that just produces other strangely colored images, and none of the orderings gives an accurate color image. Any thoughts on where I might be going wrong?
Is this Bgra?
The normal "RGB" in Kinect v2 for C# is BGRA.
Using the Kinect SDK 2.0, you don't need all of those for loops.
The function used to copy the pixels into the bitmap is this one:
colorFrame.CopyConvertedFrameDataToIntPtr(
    this.colorBitmap.BackBuffer,
    (uint)(colorFrameDescription.Width * colorFrameDescription.Height * 4),
    ColorImageFormat.Bgra);
1) Get the Frame From the kinect, using Reader_ColorFrameArrived (go see ColorSamples - WPF);
2) Create the colorFrameDescription from the ColorFrameSource using Bgra format;
3) Create the bitmap to display;
If you have any problems, please say. But if you follow the sample, it's actually pretty clear how to do it.
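A rough sketch of steps 2 and 3 above, loosely following the SDK's WPF color sample; kinectSensor is assumed to be your opened KinectSensor:
// Step 2: describe the color frames as BGRA
FrameDescription colorFrameDescription =
    kinectSensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
// Step 3: create the bitmap the converted frames are copied into
WriteableBitmap colorBitmap = new WriteableBitmap(
    colorFrameDescription.Width, colorFrameDescription.Height,
    96.0, 96.0, PixelFormats.Bgr32, null);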
I was stuck on this problem forever. The problem is that almost all of the examples you find are WPF examples, but for Windows Forms it's a different story.
frame.CopyRawFrameDataToArray(imgBuffer);
gets you the raw data, which is
ColorImageFormat.Yuy2
By converting it to RGB you should be able to fix your color problem. The transformation from YUY2 to RGB is very expensive, so you might want to use a Parallel.ForEach loop to maintain your frame rate.
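Alternatively, you can let the SDK do the YUY2-to-BGRA conversion for you. A minimal sketch using the fields from the question, with imgBuffer sized for 4 bytes per pixel:
// Inside Reader_MultiSourceFrameArrived, replace the raw copy with a converted copy
if (imgBuffer == null)
{
    imgBuffer = new byte[description.Width * description.Height * 4];
}
frame.CopyConvertedFrameDataToArray(imgBuffer, ColorImageFormat.Bgra);
// imgBuffer is now laid out B, G, R, A per pixel, which matches the order
// the drawing loop already assumes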

Recording RGB stream from the kinect sensor

I'm doing a WPF application, and one of its functions is to record video (only the RGB stream) from the Kinect sensor (using AForge and SDK 1.5).
In my application I have a button that, when clicked, should save the video stream into an AVI file.
I've added the references and copied all the .dll files into my project's folder (as was explained on other forums), but for some reason I receive this error:
{"Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.":null}
So inside private void button4_Click(object sender, RoutedEventArgs e) is the following code:
int width = 640;
int height = 480;
// create instance of video writer
VideoFileWriter writer = new VideoFileWriter();
// create new video file
writer.Open("test.avi", width, height, 25, VideoCodec.MPEG4);
// create a bitmap to save into the video file
Bitmap image = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
for (int i = 0; i < 1000; i++)
{
    image.SetPixel(i % width, i % height, Color.Red);
    writer.WriteVideoFrame(image);
}
writer.Close();
I would really appreciate your help. I'm also flexible about the way to record the RGB stream (if you recommend another way), as long as it's not complicated, because I'm new to C#.
The reason the video is red is because you are turning it red with
for (int i = 0; i < 1000; i++)
{
    image.SetPixel(i % width, i % height, Color.Red);
    writer.WriteVideoFrame(image);
}
What you should do instead is convert the BitmapSource/WriteableBitmap you are presumably using to display the Kinect's data into a System.Drawing.Bitmap, and then add that bitmap to your video frame. Hope this helps!
(If you are using a WriteableBitmap, convert it to a BitmapImage first, then convert that to a Bitmap.)
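A minimal sketch of that conversion, assuming colorBitmapSource is the BitmapSource (or WriteableBitmap) you already display from the Kinect and writer is the AForge VideoFileWriter from the question:
using (var stream = new System.IO.MemoryStream())
{
    // Encode the WPF bitmap into the stream, then read it back as a GDI+ Bitmap
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(colorBitmapSource));
    encoder.Save(stream);
    using (var frameBitmap = new System.Drawing.Bitmap(stream))
    {
        writer.WriteVideoFrame(frameBitmap);
    }
}
Encoding a PNG per frame is not fast, but it keeps the example short; a raw pixel copy would be quicker.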

Export transparent images in C#?

I've edited a bitmap in C#: for every pixel, I changed it to a certain color if a condition was true, otherwise I set the color to Color.Transparent (the operations were done with GetPixel/SetPixel). I've exported the image in .png format, but the image isn't transparent. Any ideas why, or how I should do it?
Regards,
Alexandru Badescu
Here is the code:
-- here I load the image and convert it to PixelFormat.Format24bppRgb if it is a PNG
m_Bitmap = (Bitmap)Bitmap.FromFile(openFileDialog.FileName, false);
if (openFileDialog.FilterIndex == 3) // 3 is png
    m_Bitmap = ConvertTo24(m_Bitmap);
-- this is for changing the pixels after a certain position in a matrix
for (int i = startX; i < endX; i++)
    for (int j = startY; j < endY; j++)
    {
        if (indexMatrix[i][j] == matrixFillNumber)
            m_Bitmap.SetPixel(j, i, selectedColor);
        else
            m_Bitmap.SetPixel(j, i, Color.Transparent);
    }
It's because of the pixel format.
Here is some sample code for you:
Bitmap inp = new Bitmap("path of the image to edit");
Bitmap outImg = new Bitmap(inp.Width, inp.Height, PixelFormat.Format32bppArgb);
outImg.SetResolution(inp.HorizontalResolution, inp.VerticalResolution);
Graphics g = Graphics.FromImage(outImg);
g.DrawImage(inp, Point.Empty);
g.Dispose();
inp.Dispose();
////
// Check your condition and set pixel here (outImg.GetPixel, outImg.SetPixel)
////
outImg.Save("out file path", ImageFormat.Png);
outImg.Dispose();
This is the code that requires minimal change to your current code, but I would recommend checking out the LockBits method for better performance:
http://msdn.microsoft.com/en-us/library/5ey6h79d.aspx
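For completeness, a rough LockBits sketch under the same assumptions (outImg is the 32bpp ARGB bitmap from the code above; x and y stand in for the pixel you want to make transparent):
Rectangle rect = new Rectangle(0, 0, outImg.Width, outImg.Height);
BitmapData data = outImg.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
try
{
    int byteCount = Math.Abs(data.Stride) * outImg.Height;
    byte[] buffer = new byte[byteCount];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, buffer, 0, byteCount);
    // Layout per pixel is B, G, R, A; zeroing the alpha byte makes that pixel transparent
    int offset = (y * data.Stride) + (x * 4);
    buffer[offset + 3] = 0;
    System.Runtime.InteropServices.Marshal.Copy(buffer, 0, data.Scan0, byteCount);
}
finally
{
    outImg.UnlockBits(data);
}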
I would need more code to verify, but my best guess is that you first write something to the bitmap to clear it (like filling it with white), and then draw with Color.Transparent on top to make the transparent pixels. That simply will not work, since white (or anything else) with Transparent drawn over it is still white.
If you created the bitmap in code, it will most likely be 24-bit and will not support alpha blending/transparency.
Provide the code used to create it and we should be able to help.
