I'm working on a Winforms app that contains a large map image (5500px by 2500px). I've set it up so the map starts in full size, but the user can zoom out to a few different scales to see more of the map. The user is able to drag the map around to shift what they are looking at (like Google Maps, Bing Maps, Civilization, etc.).
When the map is full sized (scale = 1.0), I am able to prevent the user from scrolling past the borders of the image. I do this by calculating if they are trying to move past 0, or past the image width - current window size, similar to this:
if (_currHScroll <= 0) {
    _currHScroll = 0;
}
This all works just fine. But when I zoom out on the map (thus making the image smaller), the limits for the bottom and right of the map break down. I know why this happens: the Transform that is performed basically "compresses" the map a little, so what used to be a 5500px image is now smaller, depending on the scale. But my limiters are based on the original image size.
So, the user can scroll past the end of the map, and just sees white space. Worse things happen, I realize, but if possible I'd like to keep them from doing that.
I'm sure there is a straightforward way to do this, but I haven't figured it out yet. I've tried simply multiplying my calculation by the scale, but that didn't seem to work (it seems to underestimate the size initially, then overestimate at the smallest sizes). I've tried calculating the transformed location of the bottom right of the image and using that, but it turns out that number is inverted, and I can't find what it relates to.
I'm including my transform point method here. It works just fine: it tells me, regardless of zoom level, what pixel was clicked on the original image. Thus, if someone clicks on point 200, 200 but the image is scaled at 0.5, it will report something like 400, 400 as the clicked point (but, as I said, I don't think the scale value is a multiplier; I'm using this just for demonstration purposes).
public Point GetTransformedPoint(Point mousePoint) {
    // Invert a copy of the map transform so a screen-space point can be
    // mapped back to the corresponding pixel on the original image.
    Matrix clickTransform = _mapTransform.Clone();
    clickTransform.Invert();

    Point[] xPoints = { mousePoint };
    clickTransform.TransformPoints(xPoints);

    Debug.Print("Orig: {0}, {1} -- Trans: {2}, {3}", mousePoint.X, mousePoint.Y, xPoints[0].X, xPoints[0].Y);
    return xPoints[0];
}
Many thanks in advance. I'm sure it's something relatively easy that I'm overlooking, but after several hours, I'm just not finding it.
If I understand correctly, you can calculate the maximum with your method GetTransformedPoint by passing the width and height of your Image in as the Point. The result can then be used inside your check.
And by the way, you are right: the scale value is a multiplier used as a factor. The only thing is, you have to cast the result to an integer.
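For example, here is a minimal sketch of that clamping. The _currHScroll/_currVScroll fields are the ones from the question; _mapImage, _scale, and ClientSize are assumed stand-ins for your image, zoom factor, and window size, so adjust the names to your code:

// The scaled (on-screen) size of the map is simply image size * scale, cast to int.
int scaledWidth = (int)(_mapImage.Width * _scale);
int scaledHeight = (int)(_mapImage.Height * _scale);

int maxHScroll = Math.Max(0, scaledWidth - ClientSize.Width);
int maxVScroll = Math.Max(0, scaledHeight - ClientSize.Height);

// Clamp both ends so the user can't drag past any border at any zoom level.
_currHScroll = Math.Max(0, Math.Min(_currHScroll, maxHScroll));
_currVScroll = Math.Max(0, Math.Min(_currVScroll, maxVScroll));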
The "game" I am trying to create has many buttons and images on the screen at once, and the buttons are designed for the base (what I believe to be 800x600) console size. The buttons and sprites are all in set positions.
The issue I am having is trying to get every image to scale when I do isfullscreen=true. The images stay in their relative position, but I need them to 'scale' based on the the actual size of the window.
While searching for an answer, I have found many that scale individual images or scale them based on the aspect ratio but what I am attempting to do is scale all images, no matter the aspect ratio, depending on the actual size of the XNA window. For example, If I have 3 100x60 sprites and 2 200x90 sprites placed on a 800x600 screen, how would I change the sprites to be the same relative size if the window size were to be changed to 1980x720 without having to manipulate each image?
Thanks
Edit: I've tried using a scale matrix, but that seems to require me to set the EXACT scale for that exact size, meaning I would have to create a different scale matrix for each possible window size, which is not what I am trying to achieve.
I've fixed the issue I've been having by using a scale matrix; I was using them incorrectly before.
Before, I was using a fixed matrix of (800, 600), but now (I don't have access to the environment right now, so this is from memory) I have changed the code so it's a variable scale:
// Scale factor = actual viewport size / design (800x600) size; use floats to
// avoid integer division.
float xScale = (float)Viewport.Width / 800f;
float yScale = (float)Viewport.Height / 600f;
Matrix scale = Matrix.CreateScale(xScale, yScale, 1f);
I then passed this to SpriteBatch.Begin. The issue is, if you have something rendered with triangles (such as a background), you may wish to render it in a different SpriteBatch.Begin, as this scale will mess with the triangle.
I have a background rectangle, and applying the scale to it puts it off the screen. It's fine if it is something you want scaled, such as a button rectangle.
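Roughly, the matrix is passed in like this (a sketch against the XNA 4.0 Begin overload; spriteBatch and the scale matrix from above are assumed):

// Everything drawn inside this Begin/End pair is scaled by the matrix.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, scale);
// ... spriteBatch.Draw(...) calls at their original 800x600 positions ...
spriteBatch.End();

// Draw anything that should NOT be scaled in a separate Begin/End
// without the transform matrix.
spriteBatch.Begin();
// ... unscaled draws ...
spriteBatch.End();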
I have a fragment for more options in my app that I want mostly hidden except for one small rectangle that says "Pull up for more options". This rectangle is parked at the bottom of the screen. When the user pulls it up, it then pops up from the bottom but NOT taking over the full screen, only about 1/3rd of the bottom. Just enough to show some of the options (checkboxes).
I am doing this and have it working by setting the TranslationY setting of the options fragment layout, which is in a FrameLayout container by the way, so that it is at the bottom of the screen and just shows my "Pull up for more options" text.
Then, when they pull up on that, I have some motion events to bring it up to where I want.
Here is the issue: I can get it working just fine on one display using hard-coded TranslationY settings. For example, on a Galaxy S2, which has a density of 1.5 (HDPI, 240) and a 480x800 screen, these are my hard-coded values that work. I had to find them just by playing around with the numbers.
int trackOptionsHome = 650; //Parked at bottom of screen.
int trackOptionsExtended = 450; //Extended out where I want it to.
Again, with those hard-coded values on the S2, it works fine and the way I want. However, if I now try a different device that is STILL HDPI (1.5/240) except the screen size is 480x640 (3.5in), it does not display properly, which is to be expected. So then I implemented something like this:
float trackOptionsHome = ((dMetrics.HeightPixels / dMetrics.Density) + 120);
float trackOptionsExtended = ((dMetrics.HeightPixels / dMetrics.Density) - 100);
This was to try to take into account different display densities and sizes. I was then doing math at the end of each line to position my fragment where I want it. However, I am getting inconsistent results and the numbers are still arbitrary; I have to find them by playing around.
This raises two questions:
1. How do I make sure I get the results I want on DIFFERENT display densities? (This is not even the issue I am having at the moment, since I have the same density.)
2. How do I scale properly for different screen sizes, which appears to be my immediate problem?
For math purposes at the moment (I can adjust as needed after I get an answer), let's say I want 50px of the fragment showing from the bottom when it is in the home position and 300px showing up from the bottom when extended.
What is the correct way to do this?
Thanks!
Mike
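One way to get consistent numbers is to work from the bottom edge of the screen rather than from absolute offsets. A hedged sketch in the same Xamarin/C# style as the code above, treating the 50/300 as dp so they also survive density changes (the exact TranslationY base depends on where the FrameLayout is laid out):

// Desired visible sliver, specified in dp and converted to real pixels for this screen.
int visibleHomePx = (int)(50 * dMetrics.Density);
int visibleExtendedPx = (int)(300 * dMetrics.Density);

// Park the fragment so only the sliver remains above the bottom of the screen.
// (Assumes the fragment is laid out at the top of a full-screen container.)
float trackOptionsHome = dMetrics.HeightPixels - visibleHomePx;
float trackOptionsExtended = dMetrics.HeightPixels - visibleExtendedPx;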
I have a chart that displays several lines showing signal strengths over a frequency band.
Each chart is composed of one 'area' and four 'series'. On the parent form there are several graphs like the one shown below. All of them are created dynamically and will have different widths.
What I am trying to do is add a tooltip or annotation (or something) when the mouse hovers over a specific area of the chart as shown in the mockup below:
If the mouse moved to the other side of the chart a different channel number and frequency would be shown in a box surrounding that area of the chart.
It doesn't have to be exactly as shown in the mockup although an outline would be preferred in order to show the user how wide the channel is regardless of the waveform shown in that area at the time. For example, the waveform shown above might only be 8MHz wide but channel 1 itself might have an allocation that is 10MHz wide (the device varies its bandwidth based on its offered load.)
The X axis is MHz and a channel is defined in terms of MHz so it would be ideal to define the outline in terms of the X axis instead of pixels.
Also, note that this is a real-time chart that is updated up to 10 times per second, so it would be best if the information did not need to be updated each time new data arrives.
I was able to combine a couple of items to make the following solution:
(animated GIF demo omitted; captured with LICEcap)
The highlight is a rectangle filled in the 'OnPaint' method of the chart control.
The text is a simple TextAnnotation that is applied during the mousemove event.
It took quite a bit of coordinate conversion to get all the pieces in the right spot, especially the text. I needed to convert between pixels, position, and value.
The first conversion was to pixels, in order to center the text using MeasureString. I then converted the pixel location to an X-axis value, and then to position, since the annotation requires 'position' coordinates. There is no function to convert directly from pixels to position; there is pixels-to-value and value-to-position, which is the way I went.
I don't claim this to be the best or even a proper way to do it but it works. If anyone else has a better solution or a way to improve my code please post.
Here's my code for positioning the text:
// Convert the channel frequency to its pixel X coordinate on the axis.
double temp = chart1.ChartAreas[0].AxisX.ValueToPixelPosition(Convert.ToDouble(ce.sChannelFrequency) * 1000);

// Measure the annotation text so it can be centered; a throwaway 1x1 bitmap
// provides a Graphics object to measure with. Dispose everything created here.
using (var bitmap = new Bitmap(1, 1))
using (var graphics = Graphics.FromImage(bitmap))
using (var font = new Font("eurostile", 13, FontStyle.Bold, GraphicsUnit.Pixel)) {
    SizeF size = graphics.MeasureString(freq.Text, font);
    temp -= (size.Width / 2 + 10);
}
if (temp < 0) temp = 0;

// Pixels -> axis value -> position, since annotations use 'position' coordinates.
temp = chart1.ChartAreas[0].AxisX.PixelPositionToValue(temp);
freq.X = chart1.ChartAreas[0].AxisX.ValueToPosition(temp);
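For the highlight itself, here is a hypothetical sketch of the kind of thing described above, done from the chart's PostPaint event rather than an OnPaint override (axis-to-pixel conversion is only valid while the chart is painting). channelLowMHz/channelHighMHz are assumed channel edges in MHz, and the Y axis is assumed to have explicit Minimum/Maximum set (auto axes report NaN):

private void chart1_PostPaint(object sender, ChartPaintEventArgs e) {
    if (!(e.ChartElement is ChartArea)) return;   // draw once per chart area

    Axis ax = chart1.ChartAreas[0].AxisX;
    Axis ay = chart1.ChartAreas[0].AxisY;

    // The channel is defined in axis (MHz) units and converted to pixels here,
    // so the outline stays correct regardless of the chart's width.
    float left = (float)ax.ValueToPixelPosition(channelLowMHz);
    float right = (float)ax.ValueToPixelPosition(channelHighMHz);
    float top = (float)ay.ValueToPixelPosition(ay.Maximum);
    float bottom = (float)ay.ValueToPixelPosition(ay.Minimum);

    using (var fill = new SolidBrush(Color.FromArgb(48, Color.Yellow)))
        e.ChartGraphics.Graphics.FillRectangle(fill, left, top, right - left, bottom - top);
}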
I'm currently looking for a rather fast and reasonably accurate algorithm in C#/.NET to do these steps in code:
Load an image into memory.
Starting from the color at position (0,0), find the unoccupied space.
Crop away this unnecessary space.
I've illustrated what I want to achieve:
What I can imagine is to get the color of the pixel at (0,0) and then do some unsafe line-by-line/column-by-column walking through all pixels until I meet a pixel with another color, then cut away the border.
I just fear that this is really really slow.
So my question is:
Are you aware of any quick algorithms (ideally without any 3rd-party libraries) to cut away "empty" borders from an in-memory image/bitmap?
Side note: the algorithm should be "reasonably accurate", not 100% accurate. Some tolerance, like cropping one line too many or too few, would be perfectly OK.
Addition 1:
I've just finished implementing my brute force algorithm in the simplest possible manner. See the code over at Pastebin.com.
If you know your image is centered, you might try walking diagonally (i.e. (0,0), (1,1), ..., (n,n)) until you have a hit, then backtracking one line at a time, checking until you find an "empty" line (in each dimension). For the image you posted, it would cut out a lot of comparisons.
You should be able to do that from 2 opposing corners concurrently to get some multi-core action.
Of course, hopefully you don't hit the pathological case of a 1-pixel-wide line in the center of the image :) Or the doubly pathological case of disconnected objects in your image, such that the whole image is centered but nothing crosses the diagonal.
One improvement you could make is to give your "hit color" some tolerance (adjustable maybe?)
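A quick sketch of that diagonal walk, assuming a hypothetical isContent(x, y) predicate over the bitmap (which is also where the color tolerance would live):

// Walk the main diagonal until the first non-background pixel is found;
// returns null in the pathological case where nothing crosses the diagonal.
static Point? FindFirstHitOnDiagonal(Func<int, int, bool> isContent, int width, int height) {
    int n = Math.Min(width, height);
    for (int i = 0; i < n; i++)
        if (isContent(i, i))
            return new Point(i, i);   // backtrack line by line from here
    return null;
}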
The algorithm you are suggesting is a brute-force algorithm and will work every time, for all types of images.
But for special cases like yours, where the subject is centered and is a continuous blob of color (as in your example), a binary-search-style algorithm can be applied:
Start from the center line (0, length/2) and move in one direction at a time, examining lines as in binary search.
Do this for all four sides.
This reduces the complexity to O(log n).
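A sketch of what that looks like for one side, under the stated assumption of a single centered blob (so "this column contains foreground" flips exactly once between the border and the center); colHasContent is a hypothetical predicate:

// Binary search for the leftmost column containing foreground, assuming
// columns go empty, empty, ..., content, content up to centerX.
static int FindLeftEdge(Func<int, bool> colHasContent, int centerX) {
    int lo = 0, hi = centerX;               // invariant: column hi has content
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (colHasContent(mid)) hi = mid;   // edge is at mid or to its left
        else lo = mid + 1;                  // edge is to the right of mid
    }
    return lo;
}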
For starters, your current algorithm is basically the best possible.
If you want it to run faster, you could code it in C++. That tends to be more efficient than managed unsafe code.
If you stay in C#, you can use Parallel Extensions to run it on multiple cores. That won't reduce the load on the machine, but it will reduce the latency, if any.
If you happen to have a precomputed thumbnail for the image, you can apply your algorithm to the thumbnail first to get a rough idea.
First, you can convert your bitmap to a byte[] using LockBits(); this will be much faster than GetPixel() and won't require you to go unsafe.
As long as you don't naively search the whole image and instead search one side at a time, you've nailed the algorithm 95%. Just make sure you are not re-searching already-cropped pixels, as this might actually make the algorithm worse than the naive one if you have two adjacent edges that crop a lot.
A binary search can improve things a tiny bit, but it's not that significant, as it will maybe save you a line of search for each direction in the best-case scenario.
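A minimal sketch of that approach (System.Drawing/System.Drawing.Imaging), assuming 32bpp ARGB, where the stride equals width * 4, and treating the color at (0,0) as the background:

static Rectangle FindContentBounds(Bitmap bmp) {
    // Copy the pixels into a managed int[] once; far faster than GetPixel.
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    int[] px = new int[bmp.Width * bmp.Height];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, px, 0, px.Length);
    bmp.UnlockBits(data);

    int w = bmp.Width, h = bmp.Height;
    int bg = px[0];   // background = color at (0,0)

    Func<int, bool> rowHas = y => { for (int x = 0; x < w; x++) if (px[y * w + x] != bg) return true; return false; };

    int top = 0, bottom = h - 1;
    while (top < bottom && !rowHas(top)) top++;
    while (bottom > top && !rowHas(bottom)) bottom--;

    // Only scan the surviving rows, so already-cropped pixels aren't re-searched.
    Func<int, bool> colHas = x => { for (int y = top; y <= bottom; y++) if (px[y * w + x] != bg) return true; return false; };

    int left = 0, right = w - 1;
    while (left < right && !colHas(left)) left++;
    while (right > left && !colHas(right)) right--;

    return Rectangle.FromLTRB(left, top, right + 1, bottom + 1);
}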
Although I prefer Tarang's answer, I'd like to give some hints on how to 'isolate' objects in an image by referring to a given foreground color and background color. This is called 'segmentation' and is used in the field of 'optical inspection', where an image is not just cropped to some detected object, but objects are counted and also measured; things you can measure on an object are area, contour, diameter, etc.
First of all, you'll usually start walking through your image beginning at x/y coordinates 0,0, going from left to right and top to bottom, until you find a pixel that has a different value than the background. The sensitivity of the segmentation is set by defining the grayscale value of the background as well as the grayscale value of the foreground. You will conceptually walk through the image by coordinates, but from the program's view you'll just walk through an array of pixels. That means you have to deal with the formula that converts an x/y coordinate to the pixel's index in the pixel array; this formula needs the width and height of the image.
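For illustration, that coordinate-to-index formula (assuming a row-major array with one element per pixel; the second form is the usual variant for a raw byte buffer):

// Row-major mapping from (x, y) to a flat pixel array:
int index = y * imageWidth + x;
// ...or, for raw bytes: int byteIndex = y * stride + x * bytesPerPixel;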
For your cropping concern: once you've found the so-called 'pivot point' of your foreground object, you'll usually walk along the found object using a formula that detects neighboring pixels with the same foreground value. If there is only one object to detect, as in your case, it's easy to store the pixel coordinates that are north-most, east-most, south-most, and west-most. These 4 coordinates mark the rectangle your object fits in. With this information you can calculate the new (cropped) image's width and height.
I do NOT want the system trying to scale my drawing; I want to do it entirely on my own, as any attempt to squeeze/stretch the graphics will produce ugly results. The problem is that as the image gets bigger I want to add more detail rather than have it simply scale up.
Right now I'm looking at two sets of stripes. One is black/white, the other is black/white/white. The pen width is set to 1.
When the line is drawn horizontally, it's correct. The same logic drawing vertical lines appears to be doing some anti-aliasing, bleeding the black onto the nearby white. The black/white/white doesn't look as good as the horizontal, and the black/white looks more like medium++ gray/medium-- gray.
The same code is generating the coordinates in all cases; the transform logic is simply selecting which offset to apply where, as I am only supporting orientations on the cardinals. Since there's no floating point involved, I can't be looking at precision issues.
How do I get the system to leave my graphics alone???
(Yeah, I realize this won't work at very high resolution and eventually I'll have to scale up the lines. Over any reasonable on-screen zoom factor this won't matter, for printer use I'll have to play with it and see where I need to scale. The basic problem is that I'm trying to shoehorn things into too few pixels without just making blobs.)
Edit: There is no scaling going on. I'm generating a bitmap the exact size of the target window. All lines are drawn at integer coordinates. The recommendation of setting SmoothingMode to None changes the situation: now the black/white/white draws as a very clear gray/gray/white, and the black/white draws as a solid gray box. Now that this is cleaned up, I can see some individual vertical lines that were supposed to be black are actually doing the same thing, drawing as 2-pixel gray bars. It's like all my vertical lines are off by 1/2 pixel, yet every drawing command gets only integers.
Edit again: I've learned more about the problem. The image is being drawn correctly but trashed when displayed to the screen. (Saving it to disk and viewing it on the very same monitor shows it drawn correctly.)
You really should let the system manage it for you. You have described a certain behavior that is specific to the hardware you are using. Given different hardware, the problem may not exist at all, or it may exist horizontally but not vertically, or may only exist at much smaller or much larger resolutions, etc. etc.
The basic problem you described sounds like the vertical lines are being drawn "between" vertical stacks of pixels, which is causing the system to draw an anti-aliased line. The alternative to anti-aliasing the line is to shift it. The problem with that is the lines will "jitter" or "jerk" if the image is moved around, animated, or scaled or transformed in any other way. Generally, jerk is MUCH less desirable than anti-aliasing because it is more distracting.
You should be able to turn off anti-aliasing using the SmoothingMode enum, or you could try to handle positioning yourself. Either way, you are trading anti-aliasing for jittery, jerky rendering during any movement or transformation.
Have a look at System.Drawing.Drawing2D.SmoothingMode. Setting it to 'Default' or 'None' should turn off anti-aliasing when doing line drawing. If you're talking about scaling an image without anti-aliasing effects, have a look at InterpolationMode. Specifically, you might wish to set it to 'NearestNeighbor', which will keep your rectangular blocks perfectly crisp. Note that you will see some odd effects if you scale your image by anything other than whole numbers.
Perhaps you need to align your lines on half-pixel coordinates? A one-pixel line drawn at, say, x = 5 is centered on that coordinate, which means it goes from x = 4.5 to x = 5.5. If you want it to go from x = 4 to x = 5, you'd need to set its coordinate to x = 4.5.
GDI+ has a property, PixelOffsetMode, that allows you to control this behavior: http://msdn.microsoft.com/en-us/library/system.drawing.graphics.pixeloffsetmode.aspx
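Pulling the suggestions above together, the relevant settings (all from System.Drawing.Drawing2D) look like this; which combination helps depends on your drawing code:

using (Graphics g = Graphics.FromImage(bitmap)) {
    g.SmoothingMode = SmoothingMode.None;                     // no anti-aliased lines
    g.InterpolationMode = InterpolationMode.NearestNeighbor;  // crisp image scaling
    g.PixelOffsetMode = PixelOffsetMode.Half;                 // shift sampling by half a pixel
    // ... draw the stripes here ...
}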
Sounds like you need to change your application to tell the system that it is DPI-aware so scaling doesn't occur. Here's an article on doing that: http://msdn.microsoft.com/en-us/library/ms701681%28VS.85%29.aspx
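One way to declare that from code (SetProcessDPIAware is a real user32 function; the linked article also covers the manifest route, which is generally preferred):

using System.Runtime.InteropServices;

static class DpiHelper {
    // Marks the current process as DPI-aware so Windows does not scale it.
    [DllImport("user32.dll")]
    public static extern bool SetProcessDPIAware();
}

// Call before any windows are created, e.g. at the top of Main():
// DpiHelper.SetProcessDPIAware();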