How to determine edges in an image optimally? - c#

I was recently faced with the problem of cropping and resizing images. I needed to crop the 'main content' of an image; for example, if I had an image similar to this:
(source: msn.com)
the result should be an image with the msn content without the white margins (left & right).
I search on the X axis for the first and last color change, and do the same on the Y axis. The problem is that traversing the image line by line takes a while: for an image of 2000x1600px it takes up to 2 seconds to return the CropRect => x1,y1,x2,y2 data.
I tried making a separate traversal for each coordinate and stopping at the first value found, but it didn't work in all test cases; sometimes the returned data wasn't what was expected, and the duration of the operation was similar anyway.
Any idea how to cut down the traversal time and the discovery of the rectangle around the 'main content'?
public static CropRect EdgeDetection(Bitmap Image, float Threshold)
{
    CropRect cropRectangle = new CropRect();
    int lowestX = Image.Width;
    int lowestY = Image.Height;
    int largestX = 0;
    int largestY = 0;

    // find the bounding box of the content by looking for color changes on both axes
    for (int y = 0; y < Image.Height - 1; ++y)
    {
        for (int x = 0; x < Image.Width - 1; ++x)
        {
            Color currentColor = Image.GetPixel(x, y);
            Color tempXcolor = Image.GetPixel(x + 1, y);
            Color tempYColor = Image.GetPixel(x, y + 1);
            if (Math.Sqrt(((currentColor.R - tempXcolor.R) * (currentColor.R - tempXcolor.R)) +
                          ((currentColor.G - tempXcolor.G) * (currentColor.G - tempXcolor.G)) +
                          ((currentColor.B - tempXcolor.B) * (currentColor.B - tempXcolor.B))) > Threshold)
            {
                if (lowestX > x)
                    lowestX = x;
                if (largestX < x)
                    largestX = x;
            }
            if (Math.Sqrt(((currentColor.R - tempYColor.R) * (currentColor.R - tempYColor.R)) +
                          ((currentColor.G - tempYColor.G) * (currentColor.G - tempYColor.G)) +
                          ((currentColor.B - tempYColor.B) * (currentColor.B - tempYColor.B))) > Threshold)
            {
                if (lowestY > y)
                    lowestY = y;
                if (largestY < y)
                    largestY = y;
            }
        }
    }

    if (lowestX < Image.Width / 4)
        cropRectangle.X = lowestX - 3 > 0 ? lowestX - 3 : 0;
    else
        cropRectangle.X = 0;
    if (lowestY < Image.Height / 4)
        cropRectangle.Y = lowestY - 3 > 0 ? lowestY - 3 : 0;
    else
        cropRectangle.Y = 0;
    cropRectangle.Width = largestX - lowestX + 8 > Image.Width ? Image.Width : largestX - lowestX + 8;
    cropRectangle.Height = largestY + 8 > Image.Height ? Image.Height - lowestY : largestY - lowestY + 8;
    return cropRectangle;
}

One possible optimisation is to use LockBits to access the color values directly rather than through the much slower GetPixel.
The Bob Powell page on LockBits is a good reference.
On the other hand, my testing has shown that the overhead associated with LockBits makes that approach slower if you write a GetPixelFast equivalent to GetPixel and drop it in as a replacement. Instead you need to ensure that all pixel access is done in one hit rather than many: lock once, do the whole scan, then unlock. This should fit nicely with your code, provided you don't lock/unlock on every pixel.
Here is an example:
// Derive the pixel size from the format (3 bytes for 24bpp, 4 for 32bpp).
int pixelSize = System.Drawing.Image.GetPixelFormatSize(b.PixelFormat) / 8;
BitmapData bmd = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
                            System.Drawing.Imaging.ImageLockMode.ReadOnly, b.PixelFormat);
unsafe
{
    byte* row = (byte*)bmd.Scan0 + (y * bmd.Stride);
    // Memory layout is Blue, Green, Red: red is at offset 2, blue at offset 0.
    Color c = Color.FromArgb(row[x * pixelSize + 2], row[x * pixelSize + 1], row[x * pixelSize]);
}
b.UnlockBits(bmd);
Two more things to note:
This code is unsafe because it uses pointers, so it must be compiled with the /unsafe option.
This approach depends on the pixel size within the bitmap data, which is why pixelSize is derived from the bitmap's PixelFormat above.
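To keep all pixel access inside a single lock/unlock pair, as suggested above, the whole scan can look like this (a minimal sketch; the per-pixel work is left as a placeholder):
// One LockBits call for the whole scan; assumes a 24bpp or 32bpp RGB format.
int pixelSize = System.Drawing.Image.GetPixelFormatSize(b.PixelFormat) / 8;
BitmapData bmd = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
                            System.Drawing.Imaging.ImageLockMode.ReadOnly, b.PixelFormat);
unsafe
{
    for (int y = 0; y < b.Height; y++)
    {
        byte* row = (byte*)bmd.Scan0 + (y * bmd.Stride);
        for (int x = 0; x < b.Width; x++)
        {
            byte blue = row[x * pixelSize];
            byte green = row[x * pixelSize + 1];
            byte red = row[x * pixelSize + 2];
            // ... run the edge test on (red, green, blue) here ...
        }
    }
}
b.UnlockBits(bmd);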

GetPixel is probably your main culprit (I recommend running some profiling tests to track it down), but you could also restructure the algorithm like this:
Scan the first row (y = 0) from left-to-right and from right-to-left, and record the first and last edge location. It's not necessary to check all pixels, as you only want the extreme edges.
Scan all subsequent rows, but now only search outward, starting at the last known extreme edges. We want to find the extreme boundaries, so we only need to search in the region where we could find new extrema.
Repeat the first two steps for the columns, establishing initial extrema and then using those extrema to iteratively bound the search.
This should greatly reduce the number of comparisons if your images are typically mostly content. The worst case is a completely blank image, for which this would probably be less efficient than the exhaustive search.
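Here is a minimal sketch of the horizontal pass, assuming a hypothetical IsEdgePixel(x, y) helper that wraps the color-difference test from the question:
// Track the leftmost/rightmost edge seen so far; later rows only need to
// be searched outside the current bounds, where a new extreme could exist.
int minX = width, maxX = -1;
for (int y = 0; y < height; y++)
{
    // A new left extreme can only lie left of the current minimum.
    for (int x = 0; x < minX; x++)
        if (IsEdgePixel(x, y)) { minX = x; break; }

    // A new right extreme can only lie right of the current maximum.
    for (int x = width - 1; x > maxX; x--)
        if (IsEdgePixel(x, y)) { maxX = x; break; }
}
The same pattern, transposed, yields minY and maxY for the vertical pass.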
In extreme cases, image processing can also benefit from parallelism (split up the image and process it in multiple threads on a multi-core CPU), but this is quite a bit of additional work and there are other, simpler changes you can still make. Threading overhead tends to limit the applicability of this technique; it is mainly helpful if you expect to run this thing in 'realtime', with dedicated, repeated processing of incoming data (to make up for the initial setup costs).

This won't improve things asymptotically, but if you square your threshold you won't need to take a square root, which is very expensive.
That should give a significant speed increase.
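For example, the first test in the question's code could become (a sketch; the squared threshold would be computed once, outside the loops):
// Compare the squared distance against the squared threshold; no Math.Sqrt needed.
float thresholdSquared = Threshold * Threshold;
int dr = currentColor.R - tempXcolor.R;
int dg = currentColor.G - tempXcolor.G;
int db = currentColor.B - tempXcolor.B;
if (dr * dr + dg * dg + db * db > thresholdSquared)
{
    // same bookkeeping as before
}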

Related

Issue with Kinect V2 Coordinate Mapper

I'm currently undertaking a university project that involves object detection and recognition with a Kinect. I'm using the MapDepthFrameToColorSpace method to map the depth data to the RGB image. I believe the issue is with this loop:
for (int i = 0; i < _depthData.Length; ++i)
{
    ColorSpacePoint newpoint = cPoints[i];
    if (!float.IsNegativeInfinity(newpoint.X) && !float.IsNegativeInfinity(newpoint.Y))
    {
        int colorX = (int)Math.Floor(newpoint.X + 0.5);
        int colorY = (int)Math.Floor(newpoint.Y + 0.5);
        if ((colorX >= 0) && (colorX < colorFrameDescription.Width) && (colorY >= 0) && (colorY < colorFrameDescription.Height))
        {
            int j = ((colorFrameDescription.Width * colorY) + colorX) * bytesPerPixel;
            int newdepthpixel = i * 4;
            displaypixel[newdepthpixel] = colorpixels[j];         // B
            displaypixel[newdepthpixel + 1] = colorpixels[j + 1]; // G
            displaypixel[newdepthpixel + 2] = colorpixels[j + 2]; // R
            displaypixel[newdepthpixel + 3] = 255;                // A
        }
    }
}
It appears that the indexing is not correct, or that there are pixels/depth values missing, because the output looks like multiple copies of the same image, but small and limited in the x direction.
http://postimg.org/image/tecnvp1nx/
Let me guess: your output image (displaypixel) is 1920x1080 pixels big? (Though from the link you posted, it seems to be 1829x948?)
That's your problem. MapDepthFrameToColorSpace returns the corresponding position in the color image for each depth pixel. That means you get 512x424 values. Putting those into a 1920x1080 image means only about 10% of the image is filled, and the part that's filled will be jumbled.
If you make your output image 512x424 pixels big instead, it should give you an image like the second one in this article.
Or you could keep your output image at 1920x1080, but instead of putting one pixel after the other, you'd also calculate the position where to put the pixel. So instead of doing
int newdepthpixel = i * 4;
you'd need to do
int newdepthpixel = ((colorFrameDescription.Width * colorY) + colorX) * 4;
That would give you a 1920x1080 image, but with only 512x424 pixels filled, with lots of space in between.

Parallel.For statement returns "System.InvalidOperationException" with Bitmap processing

Well, I have code that applies a rainbow filter to an image. I have to implement it in two ways: sequential and parallel. My sequential code works without problems, but the parallel section doesn't, and I have no idea why.
Code
public static Bitmap RainbowFilterParallel(Bitmap bmp)
{
    Bitmap temp = new Bitmap(bmp.Width, bmp.Height);
    int raz = bmp.Height / 4;
    Parallel.For(0, bmp.Width, i =>
    {
        Parallel.For(0, bmp.Height, x =>
        {
            if (i < (raz))
            {
                temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R / 5, bmp.GetPixel(i, x).G, bmp.GetPixel(i, x).B));
            }
            else if (i < (raz * 2))
            {
                temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R, bmp.GetPixel(i, x).G / 5, bmp.GetPixel(i, x).B));
            }
            else if (i < (raz * 3))
            {
                temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R, bmp.GetPixel(i, x).G, bmp.GetPixel(i, x).B / 5));
            }
            else if (i < (raz * 4))
            {
                temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R / 5, bmp.GetPixel(i, x).G, bmp.GetPixel(i, x).B / 5));
            }
            else
            {
                temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R / 5, bmp.GetPixel(i, x).G / 5, bmp.GetPixel(i, x).B / 5));
            }
        });
    });
    return temp;
}
Also, at moments the program returns the same exception but saying "The object is already in use".
P.S. I'm a beginner with C#; I searched for this topic in other posts and found nothing.
Thank you very much in advance.
As commenter Ron Beyer points out, using the SetPixel() and GetPixel() methods is very slow. Each call to one of those methods involves a lot of overhead in the transition from your managed code down to the actual binary buffer that the Bitmap object represents. There are a lot of layers there, and the video driver typically gets involved, which requires transitions between user- and kernel-level execution.
But besides being slow, these methods also make the object "busy": if an attempt to use the bitmap (including calling one of those methods) is made while such a call is already in progress, an error occurs with the exception you saw.
Since parallelizing your current code would only be helpful if these method calls could occur concurrently, and since they simply cannot, this approach isn't going to work.
On the other hand, using the LockBits() method is not only guaranteed to work, there's a very good chance that you will find the performance is so much better with LockBits() that you don't even need to parallelize the algorithm. But should you decide you do, then because of the way LockBits() works (you gain access to a raw buffer of bytes that represents the bitmap image), you can easily parallelize the algorithm and take advantage of multiple CPU cores (if present).
Note that when using LockBits() you will be working with the Bitmap object at a level that you might not be accustomed to. If you are not already knowledgeable about how bitmaps really work "under the hood", you will have to familiarize yourself with the way bitmaps are actually stored in memory. This includes understanding what the different pixel formats mean, how to interpret and modify pixels for a given format, and how a bitmap is laid out in memory (e.g. the order of rows, which can vary depending on the bitmap, as well as the "stride" of the bitmap).
These things are not terribly hard to learn, but it will require patience. It is well worth the effort though, if performance is your goal.
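For instance, once the buffer is locked, addressing a pixel comes down to a multiply and an add (a sketch; bmpData is the BitmapData returned by LockBits() and buffer a byte[] filled via Marshal.Copy()):
int bytesPerPixel = System.Drawing.Image.GetPixelFormatSize(bmpData.PixelFormat) / 8;
int index = y * bmpData.Stride + x * bytesPerPixel;
// For the common 24/32 bpp RGB formats the channel order in memory is B, G, R(, A):
byte blue = buffer[index];
byte green = buffer[index + 1];
byte red = buffer[index + 2];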
Parallel is hard on the singular mind, and mixing it with legacy GDI+ code can lead to strange results.
Your code has numerous issues:
You call GetPixel three times per pixel instead of once.
You are accessing the pixels column by column instead of row by row (horizontally), as you should.
You call your y variable x and your x variable i; the machine won't mind, but we people do.
You are using way too much parallelization. There is no use having much more of it than you have cores; it creates overhead that is bound to eat up any gains, unless the inner loop has a really hard job to do, like millions of calculations.
But the exception you get has nothing to do with these issues. And one mistake you don't make is accessing the same pixel in parallel... so why the crash?
After cleaning up the code I found that the error in the stack trace pointed to SetPixel and, within it, to System.Drawing.Image.get_Width(). The former was obvious, but the latter is not part of our code!?
So I dug into the source code at referencesource.microsoft.com and found this:
/// <include file='doc\Bitmap.uex' path='docs/doc[@for="Bitmap.SetPixel"]/*' />
/// <devdoc>
///    <para>
///       Sets the color of the specified pixel in this <see cref='System.Drawing.Bitmap'/> .
///    </para>
/// </devdoc>
public void SetPixel(int x, int y, Color color) {
    if ((PixelFormat & PixelFormat.Indexed) != 0) {
        throw new InvalidOperationException(SR.GetString(SR.GdiplusCannotSetPixelFromIndexedPixelFormat));
    }

    if (x < 0 || x >= Width) {
        throw new ArgumentOutOfRangeException("x", SR.GetString(SR.ValidRangeX));
    }

    if (y < 0 || y >= Height) {
        throw new ArgumentOutOfRangeException("y", SR.GetString(SR.ValidRangeY));
    }

    int status = SafeNativeMethods.Gdip.GdipBitmapSetPixel(new HandleRef(this, nativeImage), x, y, color.ToArgb());

    if (status != SafeNativeMethods.Gdip.Ok)
        throw SafeNativeMethods.Gdip.StatusException(status);
}
The real work is done by SafeNativeMethods.Gdip.GdipBitmapSetPixel, but before that the method does a bounds check against the Bitmap's Width and Height. And while these never change in our case, the system still won't allow accessing them in parallel, and hence it crashes when at some point the checks happen interleaved. Totally unnecessary, of course, but there you go.
So GetPixel (which has the same behaviour) and SetPixel can't safely be used in parallel processing.
Two ways out of it:
We can add locks to the code and thus make sure the checks won't happen at the 'same' time:
public static Bitmap RainbowFilterParallel(Bitmap bmp)
{
    Bitmap temp = new Bitmap(bmp);
    int raz = bmp.Height / 4;
    int height = bmp.Height;
    int width = bmp.Width;

    // set a limit to parallelism
    int maxCore = 7;
    int blockH = height / maxCore + 1;

    Parallel.For(0, maxCore, cor =>
    {
        for (int yb = 0; yb < blockH; yb++)
        {
            int i = cor * blockH + yb;
            if (i >= height) continue;
            for (int x = 0; x < width; x++)
            {
                Color c;
                // lock the Bitmap just for the GetPixel:
                lock (temp) c = temp.GetPixel(x, i);
                byte R = c.R;
                byte G = c.G;
                byte B = c.B;
                if (i < (raz)) { R = (byte)(c.R / 5); }
                else if (i < raz + raz) { G = (byte)(c.G / 5); }
                else if (i < raz * 3) { B = (byte)(c.B / 5); }
                else if (i < raz * 4) { R = (byte)(c.R / 5); B = (byte)(c.B / 5); }
                else { G = (byte)(c.G / 5); R = (byte)(c.R / 5); }
                // lock the Bitmap just for the SetPixel:
                lock (temp) temp.SetPixel(x, i, Color.FromArgb(R, G, B));
            }
        }
    });
    return temp;
}
Note that limiting parallelism is so important that there is even a member in the ParallelOptions class, and a parameter in Parallel.For, to control it! I have set the maximum core number to 7, but this would be better:
int degreeOfParallelism = Environment.ProcessorCount - 1;
So this should save us some overhead. But still: I'd expect it to be slower than a corrected sequential method!
Instead, going for LockBits, as Peter and Ron have suggested, makes things really fast (10x), and adding parallelism makes it potentially even faster still.
So, finally, to finish up this lengthy answer, here is a LockBits plus limited-parallel solution:
public static Bitmap RainbowFilterParallelLockbits(Bitmap bmp)
{
    Bitmap temp = new Bitmap(bmp);
    int raz = bmp.Height / 4;
    int height = bmp.Height;
    int width = bmp.Width;
    Rectangle rect = new Rectangle(Point.Empty, bmp.Size);
    // ReadWrite, since the modified buffer is copied back below:
    BitmapData bmpData = temp.LockBits(rect, ImageLockMode.ReadWrite, temp.PixelFormat);
    int bpp = (temp.PixelFormat == PixelFormat.Format32bppArgb) ? 4 : 3;
    int size = bmpData.Stride * bmpData.Height;
    byte[] data = new byte[size];
    System.Runtime.InteropServices.Marshal.Copy(bmpData.Scan0, data, 0, size);

    var options = new ParallelOptions();
    int maxCore = Environment.ProcessorCount - 1;
    options.MaxDegreeOfParallelism = maxCore > 0 ? maxCore : 1;

    Parallel.For(0, height, options, y =>
    {
        for (int x = 0; x < width; x++)
        {
            int index = y * bmpData.Stride + x * bpp;
            if (y < (raz)) data[index + 2] = (byte)(data[index + 2] / 5);
            else if (y < (raz * 2)) data[index + 1] = (byte)(data[index + 1] / 5);
            else if (y < (raz * 3)) data[index] = (byte)(data[index] / 5);
            else if (y < (raz * 4))
            {
                data[index + 2] = (byte)(data[index + 2] / 5);
                data[index] = (byte)(data[index] / 5);
            }
            else
            {
                data[index + 2] = (byte)(data[index + 2] / 5);
                data[index + 1] = (byte)(data[index + 1] / 5);
                data[index] = (byte)(data[index] / 5);
            }
        }
    });

    System.Runtime.InteropServices.Marshal.Copy(data, 0, bmpData.Scan0, data.Length);
    temp.UnlockBits(bmpData);
    return temp;
}
While not strictly relevant, I want to post a better, faster version than any of the ones given in the other answers. This is the fastest way I know of to iterate through a bitmap and save the results in C#. In my work we need to go through millions of large images; here I'm just grabbing the red channel and saving it for my own purposes, but it should give you the idea of how to work with the bits.
// Parallel, unsafe, corrected channel and standard deviation; about 5x faster
private void TakeApart_Much_Faster(Bitmap processedBitmap)
{
    _RedMin = byte.MaxValue;
    _RedMax = byte.MinValue;
    _arr = new byte[processedBitmap.Width, processedBitmap.Height];
    long Sum = 0,
         SumSq = 0;

    BitmapData bitmapData = processedBitmap.LockBits(
        new Rectangle(0, 0, processedBitmap.Width, processedBitmap.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    // this is a much more useful data structure than the array, but it's slower to fill.
    points = new ConcurrentDictionary<Point, byte>();

    unsafe
    {
        int bytesPerPixel = Image.GetPixelFormatSize(bitmapData.PixelFormat) / 8;
        int heightInPixels = bitmapData.Height;
        int widthInBytes = bitmapData.Width * bytesPerPixel;
        byte* PtrFirstPixel = (byte*)bitmapData.Scan0;

        Parallel.For(0, heightInPixels, y =>
        {
            // pointer to the first pixel of the row so we don't lose track of where we are
            byte* currentLine = PtrFirstPixel + (y * bitmapData.Stride);
            for (int x = 0; x < widthInBytes; x += bytesPerPixel)
            {
                // offset 2 is the red channel (BGR layout)
                byte redPixel = currentLine[x + 2];
                Interlocked.Add(ref Sum, redPixel);
                Interlocked.Add(ref SumSq, redPixel * redPixel);
                // divide by bytesPerPixel since x advances in byte steps
                _arr[x / bytesPerPixel, y] = redPixel;
                // note: these min/max updates are not synchronized and can race
                // between rows; use a lock or per-row partials if that matters.
                _RedMin = redPixel < _RedMin ? redPixel : _RedMin;
                _RedMax = redPixel > _RedMax ? redPixel : _RedMax;
            }
        });

        _RedMean = (double)Sum / TotalPixels;
        _RedStDev = Math.Sqrt((double)SumSq / TotalPixels - (_RedMean * _RedMean));
        processedBitmap.UnlockBits(bitmapData);
    }
}

How to select by color range?

In my application I have loaded a picture, and I want to be able to detect similar colors. So if I select a color, I want the application to find all pixels with that same (or almost the same) color. This is what I wrote for a detection system that looks in a vertical direction between the point of the mouse click and the end of the bitmap.
for (int y = mouseY; y < m_bitmap.Height; y++)
{
    Color pixel = m_bitmap.GetPixel(mouseX, y);
    //check if there is another color
    if ((pixel.R > curcolor.R + treshold || pixel.R < curcolor.R - treshold) ||
        (pixel.G > curcolor.G + treshold || pixel.G < curcolor.G - treshold) ||
        (pixel.B > curcolor.B + treshold || pixel.B < curcolor.B - treshold))
    {
        //YESSSSS!
        if ((y - ytop > minheight) && (curcolor != Color.White)) //no white, at least 15px height
        {
            colorlayers.Add(new ColorLayer(curcolor, y - 1, ytop));
        }
        curcolor = pixel;
        ytop = y;
    }
}
Would this be the best way? Somehow it doesn't seem to work too well with yellowish colors.
RGB is a 3D space.
A color that is a full threshold away in all three channels is much farther from the original than one that differs in a single channel (and what is similar according to the numbers may not be so similar to human eyes).
I would do the check in HSL (for example), where the hue value is a finite 1D range. Just as an example:
for (int y = mouseY; y < m_bitmap.Height; y++)
{
    Color pixel = m_bitmap.GetPixel(mouseX, y);
    if (Math.Abs(pixel.GetHue() - curcolor.GetHue()) <= threshold)
    {
        // ...
    }
}
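One caveat with the snippet above: hue is an angle in degrees, so values near 0 and near 360 are actually close together. A wrap-aware comparison (a sketch) would be:
// Hue is circular: the distance between 5 and 355 degrees is 10, not 350.
float diff = Math.Abs(pixel.GetHue() - curcolor.GetHue());
float hueDistance = Math.Min(diff, 360f - diff);
if (hueDistance <= threshold)
{
    // similar hue
}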
Moreover, please note that using bitmaps this way (GetPixel()) is terribly slow; take a look at this post to see a (much) faster alternative.
It might be interesting to look at how the magic wand tool in Paint.NET works.
This is how they compare 2 colors:
private static bool CheckColor(ColorBgra a, ColorBgra b, int tolerance)
{
    int sum = 0;
    int diff;

    diff = a.R - b.R;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.G - b.G;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.B - b.B;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.A - b.A;
    sum += diff * diff;

    return (sum <= tolerance * tolerance * 4);
}
Source
The reason why yellow colors give a problem might be that RGB is not a perceptually uniform colorspace. This means that, for a given distance between two points/colors in the colorspace, the perceived color difference will in general not be the same.
That said, you might want to use another color space, like HSL as suggested by Adriano, or perhaps Lab.
If you want to stick to RGB, I would suggest calculating the Euclidean distance, like this (I think it's simpler):
// note: ^ is XOR in C#, not exponentiation, so the squares are written out
double distance = Math.Sqrt((pixel.R - curcolor.R) * (pixel.R - curcolor.R) +
                            (pixel.G - curcolor.G) * (pixel.G - curcolor.G) +
                            (pixel.B - curcolor.B) * (pixel.B - curcolor.B));
if (distance < threshold)
{
    // Do what you have to.
}

Reducing the color depth of an image does not reduce the file size?

I use this code to reduce the color depth of an image:
public void ApplyDecreaseColourDepth(int offset)
{
    int A, R, G, B;
    Color pixelColor;
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            pixelColor = bitmapImage.GetPixel(x, y);
            A = pixelColor.A;
            R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
            if (R < 0)
            {
                R = 0;
            }
            G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);
            if (G < 0)
            {
                G = 0;
            }
            B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);
            if (B < 0)
            {
                B = 0;
            }
            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}
My first question is: the offset that I give the function is not the depth, is that right?
The second is that when I save the image after reducing its color depth, I get the same file size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?
This is the code that I use to save the modified image:
private Bitmap bitmapImage;

public void SaveImage(string path)
{
    bitmapImage.Save(path);
}
You are just setting the pixel values to a lower level.
For example, if a pixel is represented by 3 channels with 16 bits per channel, you are reducing each channel's value to an 8-bit range. This will never reduce the image size, as the allocated pixels still have a fixed depth of 16 bits per channel.
Try saving the new values to a new image with a maximum depth of 8 bits.
You will then have an image reduced in bytes, but not in overall size; that is, the X,Y dimensions of the image will remain intact. What you are doing reduces image quality.
Let's start by cleaning up the code a bit. The following pattern:
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
if (R < 0)
{
R = 0;
}
Is equivalent to this:
R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
You can thus simplify your function to this:
public void ApplyDecreaseColourDepth(int offset)
{
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            Color pixelColor = bitmapImage.GetPixel(x, y);
            int A = pixelColor.A;
            int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
            int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
            int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);
            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}
To answer your questions:
Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component. If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
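To make the step function concrete, here is a small sketch (assuming 8 bpc and offset = 16; the loop just samples a few input values):
int offset = 16;
for (int v = 0; v <= 255; v += 51)
{
    // quantize exactly as in the simplified function above
    int q = Math.Max(0, (v + offset / 2) / offset * offset - 1);
    Console.WriteLine(v + " -> " + q);
}
// prints: 0 -> 0, 51 -> 47, 102 -> 95, 153 -> 159, 204 -> 207, 255 -> 255
// i.e. every component collapses onto one of roughly 16 evenly spaced levels.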
First I would like to ask you one simple question :)
int i = 10;
and now i = i--;
Does that affect the size of i? The answer is no.
You are doing the same thing.
Indexed images are represented by two matrices:
one for the color mapping, and
one for the image mapping.
You just change the values of elements, not delete them, so it will not affect the size of the image.
You can't decrease color depth with Get/SetPixel. Those methods only change the color.
It seems you can't easily save an image to a certain pixel format, but I did find some code to change the pixel format in memory. You can try saving it, and it might work, depending what format you save to.
From this question: https://stackoverflow.com/a/2379838/785745
He gives this code to change color depth:
public static Bitmap ConvertTo16bpp(Image img)
{
    var bmp = new Bitmap(img.Width, img.Height, System.Drawing.Imaging.PixelFormat.Format16bppRgb555);
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.DrawImage(img, new Rectangle(0, 0, img.Width, img.Height));
    }
    return bmp;
}
You can change the PixelFormat in the code to whatever you need.
A Bitmap image of a certain pixel count is always the same size, because the bitmap format does not apply compression.
If you compress the image with an algorithm (e.g. JPEG) then the 'reduced' image should be smaller.
R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2))
Doesn't this always return 0?
If you want to reduce the size of your image, you can specify a different compression format when calling Image.Save().
GIF file format is probably a good candidate, since it works best with contiguous pixels of identical color (which happens more often when your color depth is low).
JPEG works great with photos, but you won't see significant results if you convert a 24-bit image into a 16-bit one and then compress it using JPEG, because of the way the algorithm works (you're better off saving the 24-bit picture as JPEG directly).
And as others have explained, your code won't reduce the size used by the Image object unless you actually copy the resulting data into another Bitmap object with a different PixelFormat such as Format16bppRgb555.

XNA - Culling Performance Issue

This method that draws my tiles seems to be quite slow. I'm not sure exactly what's wrong; I believe my culling method isn't working and is drawing stuff offscreen, but I'm not completely sure. Here it is:
// Calculate the visible range of tiles.
int left = (int)Math.Floor(cameraPosition.X / 16);
int right = left + spriteBatch.GraphicsDevice.Viewport.Width / 16;
right = Math.Min(right, Width) + 1; // Width - 1 originally; didn't look good as tiles were drawn on screen
if (right > tiles.GetUpperBound(0))
    right = tiles.GetUpperBound(0) + 1; // adding 1 to get the last right tile drawn

int top = (int)Math.Floor(cameraPosition.Y / 16);
int bottom = top + spriteBatch.GraphicsDevice.Viewport.Height / 16; // top, not left
bottom = Math.Min(bottom, Height) + 1; // Height - 1 originally; didn't look good as tiles were drawn on screen
if (bottom > tiles.GetUpperBound(1))
    bottom = tiles.GetUpperBound(1) + 1; // adding 1 to get the last bottom tile drawn

// For each tile position
for (int y = top; y < bottom; ++y)
{
    for (int x = left; x < right; ++x)
    {
        // If there is a visible tile in that position, draw it
        if (tiles[x, y].BlockType.Name != "Blank")
        {
            Texture2D texture = tileContent["DirtBlock_" + getTileSetType(tiles, x, y)];
            spriteBatch.Draw(texture, new Vector2(x * 16, y * 16), Color.White);
            if (isMinimap)
                spriteBatch.Draw(pixel, new Vector2(30 + x, 30 + y), Color.White);
        }
    }
}
getTileSetType is a function to get which tiles are around the current one, in order to pick different textures, like DirtBlock_North, DirtBlock_Center, etc.
tileContent is just a class with my block textures.
Try changing SpriteBatch.Begin to use deferred sorting, and combine all of the tiles onto one texture.
See this GameDev question for info about why deferred is most likely the fastest option for you.
Also realize that every time you draw with a new texture, the old one has to be taken out of the GPU and the new one put in. This process is called texture swapping; it usually isn't an issue, but you are swapping textures twice per tile, which is likely to impact performance noticeably.
This can be fixed by combining multiple sprites onto one texture and using the source rectangle argument, which allows you to draw multiple sprites without a texture swap. There are a few OSS tools for this; Sprite Sheet Packer is my personal favorite.
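A minimal sketch of both changes combined (tileAtlas and GetSourceRect are placeholders for your combined texture and its per-tile source rectangles):
// Deferred mode batches all Draw calls and submits them at End(), and a
// single atlas texture means no per-tile texture swaps.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
for (int y = top; y < bottom; ++y)
{
    for (int x = left; x < right; ++x)
    {
        Rectangle source = GetSourceRect(tiles[x, y]); // hypothetical atlas lookup
        spriteBatch.Draw(tileAtlas, new Vector2(x * 16, y * 16), source, Color.White);
    }
}
spriteBatch.End();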
Unfortunately, without the project and a profiler, I'm just guessing; however, these are the two biggest gotchas for rendering tilemaps that I know of. I can't really see anything wrong from here. Below is the code I use to draw my tile maps; as you can see, it's very similar to yours.
If all else fails, I would suggest using a profiler to figure out which bits are running slowly.
//Init the holder
_holder = new Rectangle(0, 0, TileWidth, TileHeight);

//Figure out the min and max tile indices to draw
var minX = Math.Max((int)Math.Floor((float)worldArea.Left / TileWidth), 0);
var maxX = Math.Min((int)Math.Ceiling((float)worldArea.Right / TileWidth), Width);
var minY = Math.Max((int)Math.Floor((float)worldArea.Top / TileHeight), 0);
var maxY = Math.Min((int)Math.Ceiling((float)worldArea.Bottom / TileHeight), Height);

for (var y = minY; y < maxY; y++)
{
    for (var x = minX; x < maxX; x++)
    {
        _holder.X = x * TileWidth;
        _holder.Y = y * TileHeight;
        var t = tileLayer[y * Width + x];
        spriteBatch.Draw(
            t.Texture,
            _holder,
            t.SourceRectangle,
            Color.White,
            0,
            Vector2.Zero,
            t.SpriteEffects,
            0);
    }
}
