How to control Excel's scaling of bitmaps - C#

I have a bitmap image and I want to put it in Excel. I used this code I found here:
xlWorkSheet.Shapes.AddPicture("C:\\filesystem\\mb2.bmp",
msoFalse, msoCTrue, 0, 0, 518, 390);
But the resulting image is 1.333 times wider and higher. OK, so I can just multiply the dimensions by 0.75 and I get an image in Excel with the desired dimensions.
xlWorkSheet.Shapes.AddPicture("C:\\filesystem\\mb2.bmp",
msoFalse, msoCTrue, 0, 0, (float)(518*0.75), (float)(390*0.75));
But that number 0.75 sitting there hard-coded really bothers me, especially since I've seen this question in which the OP's ratio is 0.76. Knowing that this code needs to run on any number of systems with different displays, I want to know how to get the ratio programmatically.
Somewhat also related to this question which has to do with copy-paste without code.

If you are talking about printing, the display is irrelevant.
The dimensions of the image need to be relative to the paper size. The size values in the AddPicture method are in points and are only loosely related to pixels. Points are units of measure that make sense to your printer. The application translates points to pixels for you, so you don't need to worry about that.
You can use the InchesToPoints or CentimetersToPoints methods of the Application object to size your image to paper size.
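For illustration, a hedged sketch of getting the ratio programmatically instead of hard-coding 0.75: sizes in points are fixed at 72 per inch, so the factor is 72 divided by the display DPI (72/96 = 0.75 on a standard 96 DPI display). The interop objects, the path and the 518x390 size come from the question; the rest is an assumption, not the original poster's code.

// Assumes references to System.Drawing and the Office/Excel interop assemblies,
// and that xlWorkSheet (and, for the last line, xlApp) already exist as in the question.
float dpiX, dpiY;
using (var screen = System.Drawing.Graphics.FromHwnd(IntPtr.Zero))
{
    dpiX = screen.DpiX;   // typically 96, giving 72 / 96 = 0.75
    dpiY = screen.DpiY;
}

// Convert the bitmap's pixel size to points so Excel shows it at its native size.
float widthInPoints = 518 * 72f / dpiX;
float heightInPoints = 390 * 72f / dpiY;

xlWorkSheet.Shapes.AddPicture("C:\\filesystem\\mb2.bmp",
    Microsoft.Office.Core.MsoTriState.msoFalse,
    Microsoft.Office.Core.MsoTriState.msoCTrue,
    0, 0, widthInPoints, heightInPoints);

// Or size the picture relative to the paper instead, as described above:
// float fourInches = (float)xlApp.InchesToPoints(4);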

Related

How to set "Resolution Unit" when regenerating/saving an image

I'm trying to investigate an issue I think I'm seeing in an application that generates a TIF image.
To do this I'm trying to regenerate/save the image myself, following this MSDN Encoder.Compression example (except I'm using CompressionCCITT4).
The output file has a 'Resolution unit' = 2 but I would like to set that to 1.
Questions:
Any ideas how to set the 'Resolution unit'?
Any ideas on the effect that would/should have on the image?
Thanks!
Disclaimer: Being a Java guy, I don't know how to set it using the library you describe. So I'll only answer question part 2:
Any ideas on the effect that would/should have on the image?
The quick answer is: None. It only matters when you import the image into a page-setting/layout program, word processor, print it, etc. But a lot of software simply assumes 72dpi in both x/y dimensions regardless.
Here's what the TIFF spec says:
Typically, TIFF pixel editors do not care about the resolution, but applications (such as page layout programs) do care.
And:
1 = No absolute unit of measurement. Used for images that may have a non-square aspect ratio, but no meaningful absolute dimensions.
The drawback of ResolutionUnit=1 is that different applications will import the image at different sizes. Even if the decision is arbitrary, it might be better to use dots per inch or dots per centimeter, and to pick XResolution and YResolution so that the aspect ratio is correct and the maximum dimension of the image is about four inches (the "four" is arbitrary).
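Since the answer above covers only part 2, here is a hedged sketch of one way part 1 could be attempted with GDI+ property items in C#. The paths are placeholders, and whether the TIFF encoder actually preserves the tag on save is worth verifying against your own output; this is not taken from the answer above.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

static void SetResolutionUnitToNone(string inputPath, string outputPath)
{
    using (Bitmap image = new Bitmap(inputPath))
    {
        // TIFF tag 0x0128 (296) is ResolutionUnit: 1 = none, 2 = inch, 3 = centimeter.
        // PropertyItem has no public constructor, so reuse an existing item as a template
        // (this assumes the source image carries at least one property item).
        PropertyItem unit = image.PropertyItems.First();
        unit.Id = 0x0128;
        unit.Type = 3;                                   // 3 = unsigned short
        unit.Len = 2;
        unit.Value = BitConverter.GetBytes((ushort)1);
        image.SetPropertyItem(unit);

        // Save with CCITT4 compression, as in the MSDN Encoder.Compression example.
        ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
                                                 .First(c => c.MimeType == "image/tiff");
        var parameters = new EncoderParameters(1);
        parameters.Param[0] = new EncoderParameter(Encoder.Compression,
                                                   (long)EncoderValue.CompressionCCITT4);
        image.Save(outputPath, tiffCodec, parameters);
    }
}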

How are images stored in EMGU?

How are images stored in emgu?
Those pixel values seem to be of very large magnitude, ~ 9*10^9.
Shouldn't pixels be [0 .. 255]?
When I draw the image it seems to look OK. TemplateMatch is a grayscale float, i.e.:
Image<Gray, Single> TemplateMatch;
Also, when I scale TemplateMatch, it seems to have no effect on its appearance, i.e.:
TemplateMatch._Mul(somevalue);
Yes, pixels are usually between 0 and 255, but when they're not, as in your case, there are a couple of things that can be done to draw your image correctly anyway.
The image can be remapped using the lowest and highest values in your image. Since you see no change when applying _Mul(somevalue), this is probably the algorithm being applied.
Another way to deal with images of depth higher than 8 bits is to apply a modulo operation. It is faster because you don't need to scan all the pixels in the image up front, but it usually gives less interesting results.
When you say that you "draw" the image, I suppose you're using the ImageBox class in EmguCV. Note that if you draw it with another library you may not see the same result, as it might be using a different algorithm.
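To make the remapping idea concrete, here is a minimal plain-C# sketch (not EmguCV's actual implementation) that linearly stretches a single-channel float image to the 0..255 display range; it also shows why a uniform _Mul() has no visible effect under such a remap.

using System;

// 'data' is assumed to be a row-major float[] with one value per pixel.
static byte[] RemapToDisplayRange(float[] data)
{
    float min = float.MaxValue, max = float.MinValue;
    foreach (float v in data)                    // first pass: find the actual value range
    {
        if (v < min) min = v;
        if (v > max) max = v;
    }

    float range = max - min;
    var result = new byte[data.Length];
    for (int i = 0; i < data.Length; i++)
    {
        // Scaling every pixel by the same constant leaves (v - min) / range unchanged,
        // which is why _Mul(somevalue) does not change what gets displayed.
        float normalized = range > 0f ? (data[i] - min) / range : 0f;
        result[i] = (byte)(normalized * 255f);
    }
    return result;
}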

Simple algorithm to crop empty borders from an image by code?

Currently I'm looking for a rather fast and reasonably accurate algorithm in C#/.NET to do these steps in code:
Load an image into memory.
Starting from the color at position (0,0), find the unoccupied space.
Crop away this unnecessary space.
I've illustrated what I want to achieve:
What I can imagine is to get the color of the pixel at (0,0) and then do some unsafe line-by-line/column-by-column walking through all pixels until I meet a pixel with another color, then cut away the border.
I just fear that this is really really slow.
So my question is:
Are you aware of any quick algorithms (ideally without any 3rd-party libraries) to cut away "empty" borders from an in-memory image/bitmap?
Side note: The algorithm should be "reasonably accurate", not 100% accurate. Some tolerance, like one line too many or too few cropped, would be perfectly OK.
Addition 1:
I've just finished implementing my brute force algorithm in the simplest possible manner. See the code over at Pastebin.com.
If you know your image is centered, you might try walking diagonally (i.e. (0,0), (1,1), ... (n,n)) until you have a hit, then backtracking one line at a time, checking until you find an "empty" line (in each dimension). For the image you posted, that would cut out a lot of comparisons.
You should be able to do that from 2 opposing corners concurrently to get some multi-core action.
Of course, hopefully you don't hit the pathological case of a 1-pixel-wide line in the center of the image :) Or the doubly pathological case of disconnected objects in your image such that the whole image is centered, but nothing crosses the diagonal.
One improvement you could make is to give your "hit color" some tolerance (adjustable maybe?)
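A rough sketch of that diagonal walk with an adjustable tolerance (GetPixel is used here for clarity only; LockBits would be much faster, and the default tolerance is an arbitrary placeholder):

using System;
using System.Drawing;

// Walk the diagonal from (0,0) until a pixel differs "enough" from the background colour.
static Point? FindFirstHitOnDiagonal(Bitmap bitmap, int tolerance = 10)
{
    Color background = bitmap.GetPixel(0, 0);
    int steps = Math.Min(bitmap.Width, bitmap.Height);

    for (int i = 0; i < steps; i++)
    {
        Color c = bitmap.GetPixel(i, i);
        int delta = Math.Abs(c.R - background.R)
                  + Math.Abs(c.G - background.G)
                  + Math.Abs(c.B - background.B);
        if (delta > tolerance)
            return new Point(i, i);   // found the object; backtrack line by line from here
    }
    return null;                      // nothing crossed the diagonal (the pathological case above)
}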
The algorithm you are suggesting is a brute-force algorithm and will work every time for all types of images.
But for special cases like yours, where the subject image is centered and is a continuous blob of color (as you have shown in your example), a binary-search-style algorithm can be applied.
Start from the center line (0, length/2), take one direction at a time, and examine the lines as we do in binary search.
Do it for all four sides.
This will reduce the complexity to log n to the base 2.
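For one side, that idea might look like the sketch below; IsColumnEmpty is a hypothetical helper that scans a single column against the background colour, and the search only works under the assumption above of a single centered, contiguous blob.

using System;
using System.Drawing;

// Binary-search the left edge: columns are "empty" from the border up to the blob, then non-empty.
static int FindLeftEdge(Bitmap bitmap, Func<Bitmap, int, bool> isColumnEmpty)
{
    int low = 0;                     // leftmost column, assumed empty
    int high = bitmap.Width / 2;     // center column, assumed to hit the blob

    while (high - low > 1)
    {
        int mid = (low + high) / 2;
        if (isColumnEmpty(bitmap, mid))
            low = mid;               // still in the empty border
        else
            high = mid;              // already inside the object
    }
    return high;                     // first non-empty column from the left
}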
For starters, your current algorithm is basically the best possible.
If you want it to run faster, you could code it in C++. This tends to be more efficient than managed unsafe code.
If you stay in C#, you can use Parallel Extensions to run it on multiple cores. That won't reduce the load on the machine, but it will reduce the latency, if any.
If you happen to have a precomputed thumbnail for the image, you can apply your algorithm to the thumbnail first to get a rough idea.
First, you can convert your bitmap to a byte[] using LockBits(); this will be much faster than GetPixel() and won't require you to go unsafe.
As long as you don't naively search the whole image and instead search one side at a time, you've nailed the algorithm 95%. Just make sure you are not searching already-cropped pixels, as this might actually make the algorithm worse than the naive one if you have two adjacent edges that crop a lot.
A binary search can improve it a tiny bit, but it's not that significant, as in the best-case scenario it will maybe save you one line of searching per direction.
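A hedged sketch of the LockBits approach for one side (the top edge; the other three sides follow the same pattern), assuming the pixel data is read as 32bpp ARGB:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static int FindTopEdge(Bitmap bitmap)
{
    Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
    BitmapData data = bitmap.LockBits(bounds, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        int stride = data.Stride;
        byte[] pixels = new byte[stride * bitmap.Height];
        Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);

        // Background colour taken from (0,0); for 32bpp the byte offset is y * stride + x * 4.
        int background = BitConverter.ToInt32(pixels, 0);

        for (int y = 0; y < bitmap.Height; y++)
            for (int x = 0; x < bitmap.Width; x++)
                if (BitConverter.ToInt32(pixels, y * stride + x * 4) != background)
                    return y;        // first row containing a non-background pixel

        return bitmap.Height;        // the whole image matches the background colour
    }
    finally
    {
        bitmap.UnlockBits(data);
    }
}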
Although I prefer Tarang's answer, I'd like to give some hints on how to 'isolate' objects in an image by referring to a given foreground color and background color. This is called 'segmentation' and is used in the field of 'optical inspection', where an image is not just cropped to some detected object but objects are counted and also measured; things you can measure on an object include area, contour, diameter, etc.
First of all, you usually walk through your image beginning at x/y coordinates (0,0), from left to right and top to bottom, until you find a pixel that has a different value than the background. The sensitivity of the segmentation is given by defining the grayscale value of the background as well as the grayscale value of the foreground. You may think of walking through the image by coordinates, but from the program's view you are just walking through an array of pixels, which means you have to deal with the formula that calculates a pixel's index in the pixel array from its x/y coordinates (index = y * width + x); this formula needs the width and height of the image.
For your concern of cropping: once you've found the so-called 'pivot point' of your foreground object, you usually walk along the found object using a formula that detects neighbor pixels of the same foreground value. If there is only one object to detect, as in your case, it's easy to store the pixel coordinates that are north-most, east-most, south-most and west-most. These four coordinates mark the rectangle your object fits in, and with this information you can calculate the cropped image's width and height.

Factors Affecting Time to Render Image using GDI+

This is a follow-on to my previous question, Speeding Up Image Handling.
Apologies if I should have amended that question in some way rather than starting a new one.
I have tried all sorts of different things to speed up the drawing of an image on screen.
I thought that compressing the image until it is smaller would have an effect. However, while this might save memory for the object, I do not think it has any effect on how long it takes to draw. I tried converting the image to JPEG and using 100% compression. While this creates a blocky image, it does not affect the drawing time. I now think this is because the number of pixels that get rendered is not changed by this.
I tried reducing the color palette to 256 colors. This makes the size smaller, since it uses fewer bytes per pixel, but does not seem to affect drawing on screen. I had thought that reducing the bytes per pixel GDI+ has to handle might save some time, but it is not enough for me to see so far.
So am I wasting my time looking at compression and palette?
I assume that time taken will be affected by the number of pixels to be drawn (width x height) and I have resized the image to match the pixel size as displayed on screen. I think this is the one thing that does have an effect....
I have looked at how to stop autoscaling of the image - does my resize stop that or can the image still be autoscaled when it is rendered?
I am wondering if I can replace the DrawImage call with something using p/Invoke or other API call (which I admit I don't really understand).
You could use the BitBlt P/Invoke, which copies image data directly. It does require some initialisation to convert the image data to something the display device understands, but the copy operation itself is lightning fast. If you would like to know what exactly is happening in DrawImage, by the way, use a tool like ILSpy or Reflector.NET to inspect the method.
See BitBlt code not working for an example. For some information about BitBlt, see http://www.codeproject.com/KB/GDI-plus/flicker_free.aspx.
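For reference, a hedged sketch of what such a BitBlt call could look like; the one-time conversion of the Bitmap to a GDI handle is the 'initialisation' mentioned above and, in a real application, should be cached rather than done on every paint.

using System;
using System.Drawing;
using System.Runtime.InteropServices;

static class FastBlit
{
    private const int SRCCOPY = 0x00CC0020;

    [DllImport("gdi32.dll")] private static extern IntPtr CreateCompatibleDC(IntPtr hdc);
    [DllImport("gdi32.dll")] private static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj);
    [DllImport("gdi32.dll")] private static extern bool DeleteDC(IntPtr hdc);
    [DllImport("gdi32.dll")] private static extern bool DeleteObject(IntPtr hgdiobj);
    [DllImport("gdi32.dll")]
    private static extern bool BitBlt(IntPtr hdcDest, int xDest, int yDest, int width, int height,
                                      IntPtr hdcSrc, int xSrc, int ySrc, int rop);

    // Copies a pre-rendered Bitmap onto a destination Graphics (e.g. from a Paint event) with a raw GDI blit.
    public static void Draw(Graphics destination, Bitmap source)
    {
        IntPtr hdcDest = destination.GetHdc();
        IntPtr hdcSrc = CreateCompatibleDC(hdcDest);
        IntPtr hBitmap = source.GetHbitmap();      // the conversion step; cache this in real code
        IntPtr previous = SelectObject(hdcSrc, hBitmap);
        try
        {
            BitBlt(hdcDest, 0, 0, source.Width, source.Height, hdcSrc, 0, 0, SRCCOPY);
        }
        finally
        {
            SelectObject(hdcSrc, previous);
            DeleteObject(hBitmap);
            DeleteDC(hdcSrc);
            destination.ReleaseHdc(hdcDest);
        }
    }
}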

How can I resize an image in C# while retaining high quality?

I found an article on image processing from here: http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-saving-cropping-and-resizing Everything works fine.
I want to keep the high quality when resizing the image. I think if I can increase the DPI value I can achieve this. Does anyone know if this is possible? And if so, how can I implement it in C#?
For starters, it's worth pointing out that there are two general categories of images: vector [e.g. SVG, WMF, Adobe Illustrator and Corel Draw graphics] and bitmap (also called raster) images [e.g. BMP, JPEG and PNG images].
Vector images are formed from a series of mathematical equations and/or calculations. Bitmap images, on the other hand, are made up of individual dots (pixels) each corresponding to a particular feature on the object the image is taken of.
If it should happen that you want to resize an image, the first thing to consider is if it is a bitmap or vector image. By virtue of the fact that vector images are obtained from calculations, they can be perfectly resized without losing any detail. The case is different for bitmap images. Since each pixel is independent of the other, when you desire to resize it, you are simply increasing or decreasing the source to target pixel ratio.
So in order to double the size of a vector image, simply multiply the target dimensions by two and everything comes out all right. If you should apply the same effect on a bitmap, you are actually increasing each source pixel to cover four pixels (two rows of two horizontal pixels).
Of course, by applying interpolation and filtering, the computer can "smooth" out the edges of the target pixels so they seem to blend into each other and give the appearance of a reasonably resized image but this output is never the same as resizing a vector image; vector images resize perfectly.
You also mentioned DPI in your question. DPI is essentially the number of pixels that correspond to an inch when the image is printed not when it is viewed on a screen. Therefore by increasing the DPI of the image, you do not increase the size of the image on the screen. You only increase the quality of print [which needless to say depends on the maximum resolution of the printer].
If you really desire to resize the image and the image is a bitmap, as a rule of thumb, do not increase the size beyond 200% of the original image's size else you'll lose the quality.
You can see this answer for code to resize bitmap images.
To see a sample vector image, go to this link.
Note: Try zooming in and out of the image to see how well it resizes.
A typical bitmap example is the StackOverflow sprites; they do not keep their quality when resized.
Further Reading
Vector Graphics: http://en.wikipedia.org/wiki/Vector_image
Bitmap Graphics: http://en.wikipedia.org/wiki/Bitmap_image
Simply put: if the original image is smaller than the resized image, there is hardly anything you can do. The rest is a research problem.
This would only be possible if it's a vector graphic. Look into SVG. Otherwise, I think you might need Silverlight or Flex for that part.
What you're asking isn't really possible. You can't enlarge an image while maintaining the same quality.
If you think about an image as a mapped array of pixels (literally, a "bit-map"), this makes sense. The image is saved with a fixed amount of data, and that's all you have to work with when you resize it. Any examples to the contrary (like TV shows, as suggested by one of the comments) are purely fictional.
The best that you can do is set the InterpolationMode property of the Graphics object you're using to do the resizing to "HighQualityBicubic", which is the highest quality smoothing algorithm supported by GDI+ and in fact by every major graphics package on the market. It's the best that even Adobe Photoshop has to offer. Essentially, interpolation means that the computer is calculating the approximate value of the new pixels you're adding to make the image larger from the relative values of neighboring pixels. It's a "best guess" method, but it's the best compromise we've come up with yet.
At the very least, the resulting images won't have "jaggies" or rough, pixelated lines.
Sample code:
// 'resizedBitmap' is assumed to be the target Bitmap being drawn into.
using (Graphics g = Graphics.FromImage(resizedBitmap))
{
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    // ... insert the rest of your drawing code here
}
Beyond that, it's worth noting that GDI+ (which the .NET Framework uses internally for graphics routines) works best with image sizes that are multiples of 16. So if at all possible, you should try to make the width and height of your resized images a multiple of 16. This will preserve as much of the original image quality as possible.
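Putting the pieces together, a minimal resize sketch along these lines (assuming a source Bitmap and target dimensions; this is not the linked article's exact code):

using System.Drawing;
using System.Drawing.Drawing2D;

static Bitmap ResizeHighQuality(Bitmap source, int targetWidth, int targetHeight)
{
    var result = new Bitmap(targetWidth, targetHeight);
    result.SetResolution(source.HorizontalResolution, source.VerticalResolution);

    using (Graphics g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.SmoothingMode = SmoothingMode.HighQuality;
        g.PixelOffsetMode = PixelOffsetMode.HighQuality;
        g.DrawImage(source, new Rectangle(0, 0, targetWidth, targetHeight));
    }
    return result;
}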
The ideal solution is to switch to vector graphics that can be resized at will. Instead of pixel information, they contain mathematical information used to draw or "render" the image. Read more on Wikipedia.
You could also try the image metadata in GDI+; maybe it can suit your request.
