I'm trying to develop an application with a "map" and every user has some pieces on it.
All the pieces are in a Canvas.
The pieces get a new position every 30 ms; I set them from a timer like this:
myPiece.Margin = new Thickness(x, y, 0, 0);
But the rendering is not really smooth (actually it is when I set my window to 1024x768).
Is there a better way to set the positions to get smoother rendering?
You can try a RenderTransform with a TranslateTransform. It should be faster than setting the margins. But then you will need to keep track of your positions based on something other than the Margin.
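A minimal sketch of that approach, assuming myPiece is a UIElement on the Canvas and x/y come from your timer:

// One-time setup: replace the Margin updates with a TranslateTransform.
var move = new TranslateTransform();
myPiece.RenderTransform = move;

// In the timer callback (on the UI thread), just update the offsets:
move.X = x;
move.Y = y;

If the timer runs on a background thread you still have to marshal these two assignments to the UI thread (for example via Dispatcher.Invoke), or use a DispatcherTimer in the first place.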
I'm trying to make something similar to Paint. I'm trying to figure out how to make different brush styles. Like in Paint 3D, you get certain line fills when using the pen tool vs. the paint brush tool.
I have no idea where to even start. I've spent a good portion of the day looking through documentation and watching YouTube videos. I'm more lost than when I started. The closest thing I came across was line caps, but that's definitely not what I'm looking for.
!!See the UPDATE below!!
Hans' link should point you in the right direction, namely toward TextureBrushes.
To help you further, here are a few points to observe:
A TextureBrush is a brush, not a pen. So you can't follow a path, like the mouse movements, to draw along that curve. Instead, you need to find an area to fill with the brush.
This also implies that you need to decide how and when to trigger the drawing; the basic options are by time and/or by distance. Usually, the user can set parameters for these, often called 'flow' and 'distance'.
Instead of filling a simple shape and drawing many of those, you can keep adding the shapes to a GraphicsPath and fill that path.
To create a TextureBrush you need a pattern file that has transparency. You can either make some yourself or download them from the web, where loads of them are around, many for free.
Most are in the Photoshop brush format 'abr'; if they are not too recent (<= CS5) you can use abrMate to convert them to png files.
You can load a set of brushes into an ImageList, set up for a large enough size (max 256x256) and 32bpp to allow alpha.
Most patterns are black with alpha, so if you want color you need to create a colored version of the current brush image (maybe using a ColorMatrix).
You may also want to change its transparency (best also done with the ColorMatrix).
And you will want to change the size to the current brush size; a sketch combining these steps follows below.
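Here is a minimal sketch of that preparation step, assuming a black-with-alpha pattern bitmap; the helper name and parameters are mine:

using System.Drawing;
using System.Drawing.Imaging;

// Tints a black-with-alpha brush pattern, scales its alpha and resizes it.
static Bitmap PrepareBrush(Bitmap pattern, Color color, float alpha, int size)
{
    var result = new Bitmap(size, size, PixelFormat.Format32bppArgb);
    var cm = new ColorMatrix();
    cm.Matrix33 = alpha;            // scale the pattern's alpha channel
    cm.Matrix40 = color.R / 255f;   // shift the black pixels toward the target color
    cm.Matr41 = color.G / 255f;
    cm.Matrix42 = color.B / 255f;
    using (var g = Graphics.FromImage(result))
    using (var ia = new ImageAttributes())
    {
        ia.SetColorMatrix(cm);
        g.DrawImage(pattern, new Rectangle(0, 0, size, size),
                    0, 0, pattern.Width, pattern.Height, GraphicsUnit.Pixel, ia);
    }
    return result;
}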
Update
After doing a few tests I have to retract the original assumption that a TextureBrush is a suitable tool for drawing with textured tips.
It is OK for filling areas, but for free-hand style drawing it will not work properly. There are several reasons:
One is that the TextureBrush will always tile the pattern in some way, flipped or not, and this will always look like you are revealing one big underlying pattern instead of piling up paint with several strokes.
Another is that finding the area to fill is rather problematic.
Also, tips may or may not be square, but unless you fill with a rectangle there will be gaps.
See here for an example of what you don't want at work.
The solution is really simple and much of the above still applies:
What you do is pretty much regular drawing, but in the end you do a DrawImage with the prepared 'brush' pattern.
Regular drawing involves:
A List<List<Point>> curves that hold all the finished mouse paths
A List<Point> currentCurve for the current path
In the Paint event you draw all the curves and, if it has any points, also the current path.
For drawing with a pattern, it is also necessary to know when to draw which pattern version.
If we make sure not to leak them, we can cache the brush patterns:
Bitmap brushPattern = null;
List<Tuple<Bitmap,List<Point>>> curves = new List<Tuple<Bitmap,List<Point>>>();
Tuple<Bitmap, List<Point>> curCurve = null;
This is a simple/simplistic caching method. For better efficiency you could use a Dictionary<string, Bitmap> with a naming scheme that produces a string from the pattern index, size, color, alpha and maybe a rotation angle; this way each pattern would be stored only once.
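A rough sketch of such a dictionary-based cache (the method and field names are mine; PrepareBrush stands for whatever routine produces the tinted, resized pattern, and imageList1 is the ImageList holding the loaded brush patterns):

Dictionary<string, Bitmap> brushCache = new Dictionary<string, Bitmap>();

Bitmap GetCachedBrush(int patternIndex, int size, Color color, float alpha)
{
    // One cached bitmap per unique combination of pattern, size, color and alpha.
    string key = patternIndex + "_" + size + "_" + color.ToArgb() + "_" + alpha;
    Bitmap bmp;
    if (!brushCache.TryGetValue(key, out bmp))
    {
        bmp = PrepareBrush((Bitmap)imageList1.Images[patternIndex], color, alpha, size);
        brushCache.Add(key, bmp);
    }
    return bmp;
}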
Here is an example at work:
A few notes:
In the MouseDown we create a new current curve:
curCurve = new Tuple<Bitmap, List<Point>>(brushPattern, new List<Point>());
curCurve.Item2.Add(e.Location);
In the MouseUp I add the current curve to the curves list:
curves.Add(new Tuple<Bitmap, List<Point>>(curCurve.Item1, curCurve.Item2.ToList()));
Since we want to clear the current curve, we need to copy its points list; this is achieved by the ToList() call!
In the MouseMove we simply add a new point to it:
if (e.Button == MouseButtons.Left)
{
curCurve.Item2.Add(e.Location);
panel1.Invalidate();
}
The Paint handler goes over all curves, including the current one:
for (int c = 0; c < curves.Count; c++)
{
e.Graphics.TranslateTransform(-curves[c].Item1.Width / 2, -curves[c].Item1.Height / 2);
foreach (var p in curves[c].Item2)
e.Graphics.DrawImage(curves[c].Item1, p);
e.Graphics.ResetTransform();
}
if (curCurve != null && curCurve.Item2.Count > 0)
{
e.Graphics.TranslateTransform(-curCurve.Item1.Width / 2, -curCurve.Item1.Height / 2);
foreach (var p in curCurve.Item2)
e.Graphics.DrawImage(curCurve.Item1, p);
e.Graphics.ResetTransform();
}
It makes sure the patterns are drawn centered.
The ListView is set to SmallIcons and its SmallImageList points to a smaller copy of the original ImageList.
It is important to make the Panel DoubleBuffered to avoid flicker!
Update: Instead of a Panel, which is a container control and not really meant to be drawn onto, you can use a PictureBox or a Label (with AutoSize = false); both have the DoubleBuffered property turned on out of the box and support drawing better than Panels do.
Btw: The above quick and dirty example has only 200 (uncommented) lines. Adding brush rotation, preview, a stepping distance, a save button and implementing the brushes cache takes it to 300 lines.
I'm working on a Winforms app that contains a large map image (5500px by 2500px). I've set it up so the map starts in full size, but the user can zoom out to a few different scales to see more of the map. The user is able to drag the map around to shift what they are looking at (like Google Maps, Bing Maps, Civilization, etc.).
When the map is full sized (scale = 1.0), I am able to prevent the user from scrolling past the borders of the image. I do this by calculating if they are trying to move past 0, or past the image width - current window size, similar to this:
if (_currHScroll <= 0) {
_currHScroll = 0;
}
This all works just fine. But, when I zoom out on the map (thus, making the image smaller), the limits for the bottom and right of the map break down. I know why this happens--because the Transform that is performed basically "compresses" the map a little bit, and so what used to be a 5000 px image is now smaller, depending on the scale. But, my limiters are based on the image size.
So, the user can scroll past the end of the map, and just sees white space. Worse things happen, I realize, but if possible I'd like to keep them from doing that.
I'm sure there is a straight-forward way to do this, but I haven't figured it out yet. I've tried simply multiplying my calculation by the scale, but that didn't seem to work (seems to under-estimate the size initially, then over-estimate on the smallest sizes). I've tried calculating the transform location of the bottom right of the image, and using that, but it turns out, that number is inverted, and I can't find what it relates to.
I'm including my transform point method here. It works just fine. It tells me, regardless of zoom level, what pixel was clicked on the original image. Thus, if someone clicks on point 200, 200 but the image is scaled at .5, it will show something like 400,400 as what was clicked (but, as I said, I don't think the scale value is a multiplier--using this just for demonstration purposes).
public Point GetTransformedPoint(Point mousePoint) {
Matrix clickTransform = _mapTransform.Clone();
Point[] xPoints = { new Point(mousePoint.X, mousePoint.Y) };
clickTransform.Invert();
clickTransform.TransformPoints(xPoints);
Debug.Print("Orig: {0}, {1} -- Trans: {2}, {3}", mousePoint.X, mousePoint.Y, xPoints[0].X, xPoints[0].Y);
return xPoints[0];
}
Many thanks in advance. I'm sure it's something relatively easy that I'm overlooking, but after several hours, I'm just not finding it.
If I understand correctly, you can calculate the maximum with your method GetTransformedPoint by using the width and height of your image as the point. The result can then be used inside your check.
And by the way, you are right: the scale value is a multiplier used as a factor. The only thing is, you have to cast the result to an integer.
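As a rough sketch of what the check could look like (the names _scale, _currVScroll and _mapImage are my placeholders, not your code):

// The maximum scroll offsets shrink along with the scaled image.
int maxHScroll = (int)(_mapImage.Width * _scale) - ClientSize.Width;
int maxVScroll = (int)(_mapImage.Height * _scale) - ClientSize.Height;

if (_currHScroll > maxHScroll) _currHScroll = maxHScroll;
if (_currHScroll < 0) _currHScroll = 0;
if (_currVScroll > maxVScroll) _currVScroll = maxVScroll;
if (_currVScroll < 0) _currVScroll = 0;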
I've been working on a proper slider for my C# WPF project.
I wanted to create a slider, with a background that indicates different parts of the process, by adding a different color to each section on the slider. Furthermore I wanted to add small indicators (like the default ticks, but custom shape and irregular position) to the background.
I achieved this by creating a drawing brush and adding correspondingly colored rectangles. This seemed to work fine, but a small distortion was still present, so I investigated further and realized the following:
With slider.ActualWidth I get the width of the whole widget. So in order to create a background covering the actual "slider" part, I'll have to be aware of the distance from the widget to the actual slider. (See image)
I measured the distance in a very small window, in fullscreen, and stretched across two screens. It seems this distance is always 5 pixels. I tried Google and looked through the info WPF provides on its pages, but either I read over it, or there is no information on this.
Can I be sure this distance is always 5 pixels? Is there any place such information is kept? Is there maybe another way to determine the size of the slider itself?
Assuming you haven't tinkered with the Slider template, you can just walk down the visual tree and check the ActualWidth of the track:
Border b = VisualTreeHelper.GetChild(slider, 0) as Border;
Grid g = VisualTreeHelper.GetChild(b, 0) as Grid;
Border track = VisualTreeHelper.GetChild(g, 2) as Border;
Console.WriteLine("Track ActualWidth: " + track.ActualWidth);
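Note that the visual tree is only built once the template has been applied, so this lookup should run after the control has loaded; a minimal usage sketch:

slider.Loaded += (s, e) =>
{
    Border b = VisualTreeHelper.GetChild(slider, 0) as Border;
    Grid g = VisualTreeHelper.GetChild(b, 0) as Grid;
    Border track = VisualTreeHelper.GetChild(g, 2) as Border;
    Console.WriteLine("Track ActualWidth: " + track.ActualWidth);
};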
I am drawing lines on a background image in a c# panel. The panel is anchored to the form so as the form resizes the panel resizes. The background image is set to be stretched so all you see as you resize the form is the background image.
My initial problem:
The lines drawn on the panel (via the OnPaint event) stay where they were originally drawn as the image resizes.
My current solution:
Record the location of the line and redraw it on a new bitmap by scaling the X and Y coordinates (works fine).
My new problem:
As you continually resize the window and draw lines, you can't calculate the scaling factor from any one point in time and apply it to all lines, since the lines were originally drawn at different image sizes.
The two options I think I have:
After I redraw the line, go through my array of lines and update the coordinate information so it matches the current scale.
Or
In addition to storing the coordinate information of the line, also store the size of the panel at the time it was drawn, so I can always calculate the scale for each line based on when it was drawn and the new panel size.
What I'm hoping for:
If you have thoughts on either of the two approaches, that would be greatly appreciated. Even better would be to point me in the direction of a far better method to do this (I am fairly new to graphics processing in C#).
Can't write a comment, much as I want to. You do have a few options:
Draw your lines directly on the original Bitmap. This might not be an option for you, depending on the task.
Do it as you're doing now, keeping track of the lines' coordinates, updating them on resize, and redrawing them on Paint - if you use this, you'll be able to move and delete them, too.
Or do it by introducing a "scale factor" (float) which you update on every resize; in your Paint event handler you draw everything using that scale factor. As you create a line, you calculate its coordinates using the scale factor BACK TO a unified coordinate system (scale factor 1), and then you don't have to modify your coordinates at all. This might be easy to debug thanks to that unified coordinate system. This is what I'd recommend (see the sketch after this list), but it again depends on your task.
Draw to a fully transparent Bitmap of the same size as your original image, using a scale factor like in the previous option. On creating a line, calculate its coordinates in the unified coordinate system, draw it on the Bitmap, and then on every Paint draw the entire Bitmap over your original one. This, again, might not be an option if you need to delete or move your lines, or if you're tight on memory, or you don't want your lines to be blurred when you scale up, but many people like this because it's like a "layer in Photoshop". :)
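A minimal sketch of the scale-factor option (the names and the two-endpoint line format are my assumptions):

float scale = 1f;                            // recalculated on every resize: current panel width / original width
List<Point[]> lines = new List<Point[]>();   // line endpoints stored in the unified (scale = 1) system

// Convert a mouse point back to the unified coordinate system before storing it.
Point ToUnified(Point p)
{
    return new Point((int)(p.X / scale), (int)(p.Y / scale));
}

private void panel1_Paint(object sender, PaintEventArgs e)
{
    // Scale everything up (or down) only when drawing.
    foreach (var l in lines)
        e.Graphics.DrawLine(Pens.Red,
            l[0].X * scale, l[0].Y * scale,
            l[1].X * scale, l[1].Y * scale);
}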
I have a complex UI system which allows a lot of the stuff that can also be done with WPF, but supports multiple platforms (iOS, Android, Windows, ...). It's not complete yet, and now I'm facing the following issue:
My designer wants rotating objects! Rotating objects are far more complex than simple axis-aligned ones, which is the reason I can't use glScissor. A little graphic which might help to understand the problem:
You can see that I need to clip the object "Subcontainer" by the bounds of the "Parent Container". As far as I know, there are a few options:
Use the stencil buffer. In this case I have a problem because I have objects which are not visible but must still influence the stencil buffer, since they might mask the child object. Also, I have to draw each object twice, because I need to decrease the stencil buffer when going back up the hierarchy.
Cut the plane (triangulate, or any other UI model) which is used to draw the UI object. This seems to be a lot of effort, because the objects might be clipped at different points (imagine a container in a rotated container in a rotated container...); it's also really hard to clip them correctly, and it might be a source of performance issues.
However, both seem to cause a lot of different issues and might be a source of performance problems. Is there any other way to achieve what I want, or is there any way to improve the two approaches above?
I ended up using the stencil buffer; this generates more draw calls than the depth approach but is much easier to implement.
Before I draw, I run this code:
if (_Mask)
{
if (Stage.StencilMaskDepth++ == 0)
GL.Enable(EnableFlags.STENCIL_TEST);
GL.ColorMask(false, false, false, false);
GL.DepthMask(false);
GL.StencilFunc(StencilFunction.ALWAYS, Stage.StencilMaskDepth, Stage.StencilMaskDepth);
GL.StencilOp(StencilOp.INCR, StencilOp.INCR, StencilOp.INCR);
// Draw rectangle
DrawColor(Colors.Black);
GL.ColorMask(true, true, true, true);
GL.DepthMask(true);
GL.StencilFunc(StencilFunction.EQUAL, Stage.StencilMaskDepth, Stage.StencilMaskDepth);
GL.StencilOp(StencilOp.KEEP, StencilOp.KEEP, StencilOp.KEEP);
}
After all children have been drawn, this code is called:
if (_Mask)
{
GL.ColorMask(false, false, false, false);
GL.DepthMask(false);
GL.StencilFunc(StencilFunction.ALWAYS, Stage.StencilMaskDepth, Stage.StencilMaskDepth);
GL.StencilOp(StencilOp.DECR, StencilOp.DECR, StencilOp.DECR);
// Draw rectangle
DrawColor(Colors.Black);
GL.ColorMask(true, true, true, true);
GL.DepthMask(true);
if (--Stage.StencilMaskDepth == 0)
GL.Disable(EnableFlags.STENCIL_TEST);
}
Maybe I'll test some other approaches in a few months, but currently this is the easiest to implement.
This is just an idea, but what about using the depth buffer to do the masking?
Enable the depth buffer and set glDepthFunc(GL_LEQUAL);
Render Container A and its frame at Z = 0
Render Container A internal area / background (where other nested containers will be) at Z = 1
Now you have a "depth stencil" with the container frame at depth 0 and the container internals at depth 1. That means that anything you render in between will be above the internals, but below the frame (and clipped by it).
Now with the next Container B, render its frame at Z = 0.5 (it will get clipped by parent container A on the GPU)
Render container B's internal area at Z = 0.75
Now anything you want to render within container B will have to go at Z = 0.75. It will overlay container B's internal area, but will be clipped by both container A's and B's frames. A rough sketch of the draw order follows below.
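Sketching those steps in code (the helper methods and the exact wrapper calls are placeholders, not taken from the code above):

// Depth-based masking: frames are drawn "nearer" than the content they are meant to clip.
GL.Enable(EnableFlags.DEPTH_TEST);      // assumes the wrapper exposes the depth test like this
GL.DepthFunc(DepthFunction.LEQUAL);     // stands in for glDepthFunc(GL_LEQUAL)

DrawFrame(containerA, 0.0f);            // container A frame at Z = 0
DrawBackground(containerA, 1.0f);       // container A internal area at Z = 1

DrawFrame(containerB, 0.5f);            // clipped by A's frame on the GPU
DrawBackground(containerB, 0.75f);      // container B internal area

DrawContent(containerB, 0.75f);         // overlays B's background, clipped by both frames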
Maybe you could try rendering to a texture. Create a parent texture, render all children to that texture, then render the parent texture to the screen, deforming and displacing it as desired. This solution may or may not have issues, depending on what you want to achieve. Especially if you animate the containers' scale or have a very complex tree of many nested containers, you might run into performance issues.