Is it possible to draw a line at the border of the screen, like an "inline" with a consistent width of, let's say, 10 pixels, so that it aligns to the edges of the screen, even at the rounded corners?
Like this:
[images: a normal screen, and the same screen with the wanted inline drawn as an orange line]
Is there a Unity solution? (Otherwise any other solution would be great too! Android Studio maybe?)
What I want to achieve is a line that's always the same shape as the screen borders, on every screen, no matter the radius of the corners.
I don't think this is possible in the normal ways, since Unity treats the screen as a rectangle and gives you no information about the shape of its corners.
However, it is not impossible. You can use SystemInfo.deviceModel to get the model of the device and then retrieve information about its screen shape from a server or something like that.
The only information the server needs to store is the corner radius. If it's 0, the screen is a rectangle; otherwise the screen is rounded with the given radius r.
Having this information, you can pass it to a post-processing shader that evaluates the minimum distance from each pixel to the (rounded) edge of the screen; if that distance is less than some value you define, you paint the pixel differently.
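The shader itself would be HLSL, but the per-pixel test is just a rounded-rectangle distance function. Here is a minimal C# sketch of that math, assuming cornerRadius comes from your server lookup and borderWidth is the 10-pixel line width:

using UnityEngine;

public static class RoundedBorder
{
    // Signed distance from a pixel to the rounded-rectangle outline of the screen:
    // negative inside the screen, zero exactly on the edge.
    static float SignedDistance(Vector2 pixel, Vector2 screenSize, float cornerRadius)
    {
        Vector2 half = screenSize * 0.5f;
        // Fold the pixel into one quadrant, relative to the screen center.
        Vector2 p = new Vector2(Mathf.Abs(pixel.x - half.x), Mathf.Abs(pixel.y - half.y));
        Vector2 q = p - (half - Vector2.one * cornerRadius);
        float outside = new Vector2(Mathf.Max(q.x, 0f), Mathf.Max(q.y, 0f)).magnitude;
        float inside = Mathf.Min(Mathf.Max(q.x, q.y), 0f);
        return outside + inside - cornerRadius;
    }

    // True if the pixel lies within borderWidth of the screen edge.
    public static bool OnBorder(Vector2 pixel, Vector2 screenSize, float cornerRadius, float borderWidth)
    {
        return SignedDistance(pixel, screenSize, cornerRadius) > -borderWidth;
    }
}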
For some reason, everything I tried online about resolution independence doesn't really work the way I would want it to. Most of the solutions suggest a way to get the width and height and then use a class with them, but the result is always a cut-off picture (too big a picture) or a picture in the top-left corner with a smaller height and width than those of the screen.
P.S. The game is in fullscreen, but we tried windowed and it didn't work either.
This is actually a pretty simple fix. Basically, create a RenderTarget object somewhere and do all your drawing to that object, then resize that to the screen.
Use GraphicsDevice.SetRenderTarget(target) to change the render target, then do all your drawing operations, then change it back to the back buffer afterwards by setting it to null. RenderTarget2D actually derives from Texture2D, so you can use it as an argument, e.g. spriteBatch.Draw(renderTarget, GraphicsDevice.Viewport.Bounds, Color.White), and this will stretch it to fit the screen.
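A minimal XNA 4.0 sketch of that flow; virtualWidth and virtualHeight are assumed names for whatever fixed resolution you design against:

// In LoadContent: create a render target at your fixed "virtual" resolution.
renderTarget = new RenderTarget2D(GraphicsDevice, virtualWidth, virtualHeight);

// In Draw:
GraphicsDevice.SetRenderTarget(renderTarget);    // draw the whole scene at virtual size
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
// ... all your normal drawing here, in virtual coordinates ...
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);            // back to the real back buffer
spriteBatch.Begin();
spriteBatch.Draw(renderTarget, GraphicsDevice.Viewport.Bounds, Color.White); // stretch to fit
spriteBatch.End();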
I need an optimal solution for a program I am working on. The program should take an image (black and white only) with some single-black-line objects, like a star, circle, rectangle, etc. Then the program should find the location of each black point and track it. Based on those locations I will do some computations to make my stepper motor move accordingly. Imagine there is a marker and a whiteboard, and I want the picture on my PC to be drawn on the whiteboard at a specified size.
I don't know, maybe there are other ways besides the image, and which units are better, pixel-based or something else? I need some suggestions and recommendations.
Appreciated.
With printing, scanning and monitors there is the term DPI (dots per inch); this is what you need to inspect to determine the real-world size of an image. Compare the DPI of the device against the number of pixels in your image.
Note that DPI is measured along a linear inch, so an image's physical width in inches is its pixel width divided by its horizontal DPI.
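For example, System.Drawing stores an image's resolution alongside its pixel size, so a minimal sketch of the conversion (the file name is a placeholder) could be:

using System.Drawing;

// Convert the image's pixel size to inches using its embedded DPI.
using (Image image = Image.FromFile("drawing.png"))
{
    float widthInches = image.Width / image.HorizontalResolution;
    float heightInches = image.Height / image.VerticalResolution;
    // Scale these by your whiteboard dimensions to get motor-step distances.
}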
I don't know how to do that with plain C#, but you can use OpenCV and its C# wrapper Emgu CV: read the image you have, convert it to the YCbCr color model, and then use the contour functions or MinMaxLoc to find the points.
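As a hedged sketch of that idea in Emgu CV (for a pure black-and-white drawing you can skip the color conversion and simply threshold; the file name is a placeholder):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

// Load as grayscale, turn the black lines into white foreground, then trace their contours.
using (Mat gray = CvInvoke.Imread("drawing.png", ImreadModes.Grayscale))
using (Mat binary = new Mat())
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    CvInvoke.Threshold(gray, binary, 128, 255, ThresholdType.BinaryInv);
    CvInvoke.FindContours(binary, contours, null, RetrType.External,
                          ChainApproxMethod.ChainApproxSimple);
    // Each contour is an ordered list of pixel coordinates you can feed to the motor logic.
}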
I am drawing lines on a background image in a C# panel. The panel is anchored to the form, so as the form resizes the panel resizes. The background image is set to be stretched, so all you see as you resize the form is the background image.
My initial problem:
The lines drawn on the panel (via the OnPaint event) stay where they were originally drawn as the image resizes.
My current solution:
Record the location of the line and redraw it on a new bitmap by scaling the X and Y coordinates (works fine).
My new problem:
As you continually resize the window and draw lines, you can't calculate the scaling factor from any single point in time and apply it to all lines, since the lines were originally drawn at different image sizes.
The two options I think I have:
After I redraw the line go through my array of lines and update the coordinate information so it now matches the current scale.
Or
In addition to storing the coordinate information of the line also store the size information of the panel at the time it was drawn so I can always calculate the scale for each line based on when it was drawn and the new panel size.
What I'm hoping for:
If you have thoughts on either of the two approaches, that would be greatly appreciated. Even better would be to point me in the direction of a far better method (I am fairly new to graphics processing in C#).
Can't write a comment, much as I want to. You do have a few options:
Draw your lines directly on the original Bitmap. This might not be an option for you, depending on the task.
Do it as you're doing now, keeping track of the lines' coordinates, updating them on resize, and redrawing them on Paint. If you use this, you'll be able to move and delete them, too.
Or do it by introducing a "scale factor" (float) which you update on every resize, and in your Paint event handler you draw everything using that scale factor. As you create a line, you calculate its coordinates using the scale factor BACK TO a unified coordinate system (scale factor 1), and then you don't have to modify your stored coordinates at all. This tends to be easy to debug because of that unified coordinate system. This is what I'd recommend (see the sketch after this list), but it again depends on your task.
Draw to a fully transparent Bitmap of the same size as your original image, using a scale factor like in the previous option. On creating a line, calculate its coordinates in the unified coordinate system, draw it on the Bitmap, and then on every Paint, draw the entire Bitmap over your original one. This, again, might not be an option if you need to delete or move your lines, or if you're tight on memory, or you don't want your lines to be blurred when you scale up, but many people like this because it's like a "layer in Photoshop". :)
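A minimal WinForms sketch of the recommended scale-factor option, assuming the panel's original size is the unified coordinate system and that lines, panel and referenceSize are fields on your form:

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

// Lines are stored in "unified" coordinates (the panel's original size = scale factor 1).
List<Tuple<PointF, PointF>> lines = new List<Tuple<PointF, PointF>>();
SizeF referenceSize = new SizeF(800, 600);   // assumed original panel size

void panel_Paint(object sender, PaintEventArgs e)
{
    float sx = panel.Width / referenceSize.Width;
    float sy = panel.Height / referenceSize.Height;
    foreach (var line in lines)
        e.Graphics.DrawLine(Pens.Red,
            line.Item1.X * sx, line.Item1.Y * sy,
            line.Item2.X * sx, line.Item2.Y * sy);
}

void AddLine(Point start, Point end)   // start/end are in current screen coordinates
{
    float sx = panel.Width / referenceSize.Width;
    float sy = panel.Height / referenceSize.Height;
    // Convert BACK TO unified coordinates before storing, so stored data never changes.
    lines.Add(Tuple.Create(new PointF(start.X / sx, start.Y / sy),
                           new PointF(end.X / sx, end.Y / sy)));
    panel.Invalidate();
}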
I have made a program that reads voltage and current values of some diode curves from an xml file and draws them on screen (Just using plain 2D graphics and some simple commands like DrawCurve and stuff like that).
My main image frame is 800 by 800 pixels (you can see a smaller screenshot down below). Now I want to add a zoom function: when I hover the mouse over this image area, a smaller flying square pops up, zoomed in, and moves as I move the mouse over the area.
I have no idea how to approach this. Of course I'm not asking for the full working code, but please help me get closer!
For instance, can I make the zoom work without reading the curve data and repainting in real time, or is there no escape from that? How can I have a hovering image box when I move the mouse over the original image?
Thanks!
Have you timed how long DrawCurve takes? Perhaps it's fast enough to do in real time. Don't forget, the GDI will clip the drawing primitives to the drawing area. You just need to set up a clipping rectangle as you move the mouse around.
To speed up the redraw, create the main window image (the one you pasted) as an off-screen bitmap, and just DrawImage the off-screen version to the window in the paint events. That way you reduce the impact of the DrawCurve.
Finally, to get good-looking results, override OnPaintBackground so it does nothing (not even call the base class) and do all your painting in the OnPaint method using a BufferedGraphics object.
Update
Your paint function might look like this:
// mousePos, zoomOffset, scaleFactor and backgroundImage are fields kept up to date in MouseMove.
protected override void OnPaint(PaintEventArgs e)
{
    Graphics g = e.Graphics;
    g.DrawImage(backgroundImage, ClientRectangle);      // the pre-rendered off-screen image
    var zoomRect = new Rectangle(mousePos.X - 50, mousePos.Y - 50, 100, 100); // relative to mouse position
    g.Clip = new Region(zoomRect);                      // clip the zoomed drawing to that square
    g.TranslateTransform(zoomOffset.X, zoomOffset.Y);   // drawing offset based on mouse position
    RenderScene(g, scaleFactor);                        // draws grid and curve, etc.
    g.ResetTransform();
    g.ResetClip();
    g.DrawRectangle(Pens.Black, zoomRect);              // draw a frame around the zoomed view
}
This will produce a floating 'window' relative to the mouse position.
Typically, in cases where redrawing can be time-consuming, zooming is tackled by providing a "quick but ugly" implementation alongside the "correct but slow" one. While the zoom operation is actively in progress (say, while the user has a slider clicked, or until 50ms have passed since the last change in zoom value), you use the quick and ugly mode, so the user can see a preview of what the final image will be. Once they let go of the zoom slider (or whatever mechanism you provided), you recalculate the image in detail. The quick version is usually calculated from the original image you are working with.
In your case, you could simply take the original image, work out the bounding box of the new, zoomed view, and scale the relevant part of the original image up to the full image size. If, say, 100ms has passed with no change in zoom, recalculate the entire image in detail.
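A hedged sketch of the quick mode, assuming cachedImage holds the last fully rendered frame and ComputeZoomedRegion is a hypothetical helper that maps the current zoom to a source rectangle:

// Quick preview: stretch the relevant part of the already-rendered image to the window.
Rectangle sourceRect = ComputeZoomedRegion();
e.Graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.NearestNeighbor;
e.Graphics.DrawImage(cachedImage, ClientRectangle, sourceRect, GraphicsUnit.Pixel);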
Examples of this kind of functionality are quite widespread: most fractal generators use exactly this technique, and even unrelated things like Google StreetView do the same (it shows a distorted version of the previous image when you move around, until the actual image has downloaded).
I do NOT want the system trying to scale my drawing; I want to do it entirely on my own, as any attempt to squeeze/stretch the graphics will produce ugly results. The problem is that as the image gets bigger I want to add more detail rather than have it simply scale up.
Right now I'm looking at two sets of stripes. One is black/white, the other is black/white/white. The pen width is set to 1.
When the line is drawn horizontally it's correct. The same logic drawing vertical lines appears to be doing some antialiasing, bleeding the black onto the nearby white. The black/white/white doesn't look as good as the horizontal, the black/white looks more like medium++ gray/medium-- gray.
The same code is generating the coordinates in all cases, the transform logic is simply selecting what offset to apply where as I am only supporting orientations on the cardinals. Since there's no floating point involved I can't be looking at precision issues.
How do I get the system to leave my graphics alone???
(Yeah, I realize this won't work at very high resolution and eventually I'll have to scale up the lines. Over any reasonable on-screen zoom factor this won't matter, for printer use I'll have to play with it and see where I need to scale. The basic problem is that I'm trying to shoehorn things into too few pixels without just making blobs.)
Edit: There is no scaling going on. I'm generating a bitmap the exact size of the target window. All lines are drawn at integer coordinates. The recommendation of setting SmoothingMode to None changes the situation: Now the black/white/white draws as a very clear gray/gray/white and the black/white draws as a solid gray box. Now that this is cleaned up I can see some individual vertical lines that were supposed to be black are actually doing the same thing of drawing as 2-pixel gray bars. It's like all my vertical lines are off by 1/2 pixel--yet every drawing command gets only integers.
Edit again: I've learned more about the problem. The image is being drawn correctly but trashed when displayed to the screen. (Saving it to disk and viewing it on the very same monitor shows it drawn correctly.)
You really should let the system manage it for you. You have described a certain behavior that is specific to the hardware you are using. Given different hardware, the problem may not exist at all, or it may exist horizontally but not vertically, or may only exist at much smaller or much larger resolutions, etc. etc.
The basic problem you described sounds like the vertical lines are being drawn "between" vertical stacks of pixels, which is causing the system to draw an anti-aliased line. The alternative to anti-aliasing the line is to shift it. The problem with that is the lines will "jitter" or "jerk" if the image is moved around, animated, or scaled or transformed in any other way. Generally, jerk is MUCH less desirable than anti-aliasing because it is more distracting.
You should be able to turn off anti-aliasing using the SmoothingMode enum, or you could try to handle positioning yourself. Either way, you are trading anti-aliasing for jittery, jerky rendering during any movement or transformation.
Have a look at System.Drawing.Drawing2D.SmoothingMode. Setting it to 'Default' or 'None' should turn off anti-aliasing when drawing lines. If you're talking about scaling an image without anti-aliasing effects, have a look at InterpolationMode. Specifically, you might wish to set it to 'NearestNeighbor', which will keep your rectangular blocks perfectly crisp. Note that you will see some odd effects if you scale your image by anything other than whole numbers.
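In code, the two settings on a Graphics object g would look like this (a minimal sketch; set them before any drawing):

using System.Drawing.Drawing2D;

g.SmoothingMode = SmoothingMode.None;                    // no anti-aliased lines
g.InterpolationMode = InterpolationMode.NearestNeighbor; // no blending when scaling images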
Perhaps you need to align your lines on half-pixel coordinates? A one-pixel line drawn at, say, x = 5 is drawn centered on that coordinate, which means it spans from x = 4.5 to x = 5.5. If you want it to go from x = 4 to x = 5, you'd need to set its coordinate to x = 4.5.
GDI+ has a property, Graphics.PixelOffsetMode (http://msdn.microsoft.com/en-us/library/system.drawing.graphics.pixeloffsetmode.aspx), that allows you to control this behavior.
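A one-line sketch of that fix; PixelOffsetMode.Half shifts rendering by half a pixel so that integer coordinates land on whole pixels:

g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half; // integer coords map to whole pixels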
Sounds like you need to change your application to tell the system it is DPI aware so scaling doesn't occur. Here's an article on doing that: http://msdn.microsoft.com/en-us/library/ms701681%28VS.85%29.aspx