I have a requirement to scan various images for coloured lines; the result of this determines what we do with an image: no lines = delete, lines = save.
I have been meeting this requirement adequately by simply comparing the colour of each pixel to a list of known colours that we are looking for; if we find more than a certain threshold of matching pixels, we are happy that there is something on the image that we are interested in.
I recently had to re-work this as we started to get highly compressed JPEGs and (for example) the red line ended up being made up of hundreds of shades of red. I got this working reliably, but the process got me thinking that there must be a better way, so I have started to look at AForge to determine if it could be used to detect the different coloured lines.
I have spent a day looking into it and think that it will work but need some guidance on what the best approach/method will be as CV is a very big field and I only need to learn a very small part of it for the time being.
This is an example of one of the images
In this instance I'd want to find out about the red and blue lines.
I'd disregard the black ones.
I've been reading about and testing Hough line detection and have had some very limited success detecting a STRAIGHT line on a black-and-white image, but I can't find many examples of detecting curved coloured lines.
All I'm looking for is a little guidance on whether AForge is the best way forward (if it can even do what I want) and an idea of what the process would look like, so that I can go and investigate the right things!
In case this is of use to anyone else in the future: I found a way to do this. It's still not perfect, but it has improved the reliability of our process.
Step 1 -> Remove all but the colour that we are interested in:
var color = Color.Red;                                   // the colour we are scanning for
EuclideanColorFiltering filter = new EuclideanColorFiltering();
filter.CenterColor = new RGB(color.R, color.G, color.B);
filter.Radius = (short)radius;                           // 'radius' = how far from the exact colour still counts
filter.ApplyInPlace(input);                              // 'input' is the Bitmap being checked
Step 2 -> Convert to gray scale
input = Grayscale.CommonAlgorithms.BT709.Apply(input);   // Apply returns a new 8bpp grayscale bitmap
Step 3 -> Run the result through a Hough transform
var lineTransform = new HoughLineTransformation();
lineTransform.ProcessImage(input);
HoughLine[] lines = lineTransform.GetLinesByRelativeIntensity(_intensity);
Step 1 pretty much yields the same result that I used to get by scanning the image for pixels of a specific colour, but the HoughLineTransformation has the effect of identifying which pixels form a line, removing a lot of the noise that we had on the highly compressed JPEGs.
There is still a bit of an issue in that the way we are filtering out all but the colours we are interested in doesn't work for all colours; we have quite a few shades of grey that we need to identify, and that picks up the outlines of roads etc., so there is still work to do. But what I describe above has got us much closer to a solution.
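For completeness, here is roughly how the three steps fit together end to end. This is only a sketch assuming AForge.NET and a 24bpp RGB input; 'radius' and 'relativeIntensity' stand in for whatever values you have tuned for your images:

using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

static class ColouredLineDetector
{
    public static bool HasLinesOfColour(Bitmap input, Color colour, short radius, double relativeIntensity)
    {
        // Step 1: keep only pixels close to the colour of interest
        var filter = new EuclideanColorFiltering
        {
            CenterColor = new RGB(colour.R, colour.G, colour.B),
            Radius = radius
        };
        filter.ApplyInPlace(input);

        // Step 2: convert to 8bpp grayscale (Apply returns a new bitmap)
        Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(input);

        // Step 3: look for lines in what is left
        var lineTransform = new HoughLineTransformation();
        lineTransform.ProcessImage(gray);
        HoughLine[] lines = lineTransform.GetLinesByRelativeIntensity(relativeIntensity);

        // lines found -> keep the image; nothing found -> safe to delete
        return lines.Length > 0;
    }
}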
I have a WinForms application that allows you to edit documents. Each document is made of chapters and each chapter holds a collection of RTF blocks. The RTF blocks are loaded in a PanelControl using Dock = DockStyle.Top.
The problem is that when the total height of a chapter gets too large (estimating > 32768 pixels) the lower blocks are not properly docked: they appear behind one another. When trying to isolate the problem I noticed that this also happens with simpler controls like a LabelControl.
Things I have tried are methods like Refresh(), Invalidate() and PerformLayout(): they do not resolve the issue.
What does help is resizing the form. After that all controls are laid out correctly.
Can anyone help on how to solve this without resizing the form?
Attached is a simple demo project that illustrates the problem.
From my comment above (seems really to be the problem here):
WinForms (and GDI in general) often behaves unpredictably if one tries to use coordinates outside a 16-bit range, so try to avoid that. The range of possible problems includes things just not getting drawn at all, OverflowExceptions at unexpected code positions, etc.
If it's possible for you to change this layout, I suggest taking another approach to showing/editing the document's chapters, with some kind of pagination, or collapsing the RTF blocks into a menu and showing only the current one.
You see, it makes sense for the height value to be a 16-bit integer.
A screen is far smaller than that.
As the panel height grows that large, the scroll bar becomes very, very sensitive, and that's not a good thing.
Content that is 2x, 3x or 5x the window size is still usable when scrolled. But scrolling content whose height (~32768 pixels) is, even on a good-resolution monitor with the window maximized, at least 32x the size of the window is very uncomfortable.
Plus, I believe the user's productivity will decrease, because of the brain's difficulty in locating "things" in an ever-growing collection of "things".
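To illustrate the pagination idea, here is a rough sketch; the names and the page size are placeholders, not from the original project. Only the blocks for the current page are docked, so the total height stays well below the 16-bit coordinate range:

using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

static class ChapterPager
{
    private const int BlocksPerPage = 20;   // assumed page size

    public static void ShowPage(Control blockPanel, IList<Control> allBlocks, int pageIndex)
    {
        blockPanel.SuspendLayout();
        blockPanel.Controls.Clear();

        // Top-docked controls end up in reverse order of addition,
        // so add them reversed to keep the document order on screen.
        IEnumerable<Control> pageBlocks = allBlocks
            .Skip(pageIndex * BlocksPerPage)
            .Take(BlocksPerPage)
            .Reverse();

        foreach (Control block in pageBlocks)
        {
            block.Dock = DockStyle.Top;
            blockPanel.Controls.Add(block);
        }

        blockPanel.ResumeLayout();
    }
}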
I have bitmaps of lines and text that have anti-aliasing applied to them. I want to develop a filter that removes the anti-alias effect. I'm looking for ideas on how to go about doing that, so to start I need to understand how anti-alias algorithms work. Are there any good links, or even code, out there?
I need to understand how anti-alias algorithms work
Anti-aliasing works by rendering the image at a higher resolution before it is down-sampled to the output resolution. In the down-sampling process the higher resolution pixels are averaged to create lower resolution pixels. This will create smoother color changes in the rendered image.
Consider this very simple example where a block outline is rendered on a white background.
It is then down-sampled to half the resolution, in the process creating pixels with shades of gray:
Here is a more realistic demonstration of anti-aliasing used to render the letter S:
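To make that concrete, here is a toy sketch of the averaging step using System.Drawing; it halves the resolution by averaging each 2x2 block of pixels:

using System.Drawing;

static class DownsampleDemo
{
    // Each output pixel is the mean of a 2x2 block of the higher-resolution image.
    // GetPixel/SetPixel are used for clarity rather than speed.
    public static Bitmap DownsampleByTwo(Bitmap highRes)
    {
        var lowRes = new Bitmap(highRes.Width / 2, highRes.Height / 2);

        for (int y = 0; y < lowRes.Height; y++)
        {
            for (int x = 0; x < lowRes.Width; x++)
            {
                Color c00 = highRes.GetPixel(2 * x,     2 * y);
                Color c10 = highRes.GetPixel(2 * x + 1, 2 * y);
                Color c01 = highRes.GetPixel(2 * x,     2 * y + 1);
                Color c11 = highRes.GetPixel(2 * x + 1, 2 * y + 1);

                int r = (c00.R + c10.R + c01.R + c11.R) / 4;
                int g = (c00.G + c10.G + c01.G + c11.G) / 4;
                int b = (c00.B + c10.B + c01.B + c11.B) / 4;

                // these averaged pixels are where the anti-aliased shades come from
                lowRes.SetPixel(x, y, Color.FromArgb(r, g, b));
            }
        }

        return lowRes;
    }
}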
I am not familiar at all with C# programming, but I do have experience with graphics. The closest thing to an anti-anti-alias filter would be a sharpening filter (at least in practice, using Photoshop), usually applied multiple times, depending on the desired effect. The sharpening filter works best when there is already great contrast between the anti-aliased elements and the background, and even better if the background is one flat color rather than a complex graphic.
If you have access to any advanced graphics editor, you could try a few tests, and if you're happy with the results you could start looking into sharpening filters.
Also, if you are working with grayscale bitmaps, an even better solution is to convert them to B/W images - that will remove any anti-aliasing.
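A minimal sketch of that conversion: threshold each pixel to pure black or pure white, which throws away the anti-aliased shades (the cut-off value is an assumption you would tune):

using System.Drawing;

static class AntiAliasRemoval
{
    // Threshold a grayscale bitmap so every pixel becomes pure black or pure white.
    public static void ThresholdToBlackAndWhite(Bitmap grayscale, int cutoff)
    {
        for (int y = 0; y < grayscale.Height; y++)
        {
            for (int x = 0; x < grayscale.Width; x++)
            {
                Color c = grayscale.GetPixel(x, y);
                int luminance = (c.R + c.G + c.B) / 3;
                grayscale.SetPixel(x, y, luminance < cutoff ? Color.Black : Color.White);
            }
        }
    }
}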
Hope this helps at least a bit :)
I'm having a problem where drawing a grid using LineList and another (larger) grid overlapping it will make them flicker due to z-fighting. Using DepthBias will reduce that kind of problem when polygons and lines overlap but it apparently doesn't work when drawing lines in two separate DrawIndexedPrimitives calls.
Currently I "fixed" it by adding to the position of the second grid a small vector pointing towards the camera to simulate the DepthBias but the problem still happens when the camera is far from the grids.
Is there a better way to work around this problem?
From what I've heard you should take a look at your clip-planes. Example thread: xna.com
Edit: Dunno about grids though, but you could always try! :)
Unfortunately, this is the natural behavior due to the limited precision of 32-bit floating point numbers (as used by the depth buffer). You can either translate one set of lines minimally (as you do now) and try to choose your clipping planes as close to each other as possible (as Rob mentioned), or:
1. Disable the depth test by setting device.RenderState.DepthBufferFunction = CompareFunction.Always (not by actually disabling the buffer!).
2. Draw all your lines.
3. Re-enable the depth test by reversing the change from step 1.
4. Draw all your other geometry.
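In code, that draw order would look roughly like this inside your Draw method (assuming the XNA 3.x RenderState API; in XNA 4.0 you would swap DepthStencilState objects instead, and DrawGrids/DrawScene stand in for your own drawing calls):

GraphicsDevice device = GraphicsDevice;                          // inside Game.Draw

// 1. Make the depth test always pass so the overlapping grids cannot z-fight.
device.RenderState.DepthBufferFunction = CompareFunction.Always;

// 2. Draw both grids (all of the line lists).
DrawGrids();

// 3. Restore the default depth test.
device.RenderState.DepthBufferFunction = CompareFunction.LessEqual;

// 4. Draw everything else with normal depth testing.
DrawScene();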
I have written a chart that displays financial data. Performance was good while I was drawing fewer than 10,000 points displayed as a connected line using PathGeometry together with PathFigure and LineSegments. But now I need to display up to 100,000 points at the same time (without scrolling), and it's already very slow with 50,000 points. I was thinking of StreamGeometry, but I am not sure, since it's basically the same as a PathGeometry, storing the information as a byte stream. Does anyone have an idea how to make this much more performant, or has someone even done something similar already?
EDIT: These data points do not change once drawn, so if there is potential for optimizing that, please let me know (the line segments are frozen right now).
EDIT: I tried StreamGeometry. Creating the graphic took even longer for some reason, but this is not the issue. Drawing on the chart after drawing all the points is still as slow as the previous method. I think it's just too many data points for WPF to deal with.
EDIT: I've experimented a bit and noticed that performance improved a bit by converting the coordinates (previously double) to int, to prevent WPF from anti-aliasing sub-pixel lines.
EDIT: Thanks for all the responses suggesting to reduce the number of line segments. I have reduced them to at most twice the horizontal resolution for stepped lines and at most the horizontal resolution for simple lines and the performance is pretty good now.
I'd consider downsampling the number of points you are trying to render. You may have 50,000 points of data but you're unlikely to be able to fit them all on the screen; even if you charted every single point in one display you'd need 100,000 pixels of horizontal resolution to draw them all! Even in D3D that's a lot to draw.
Since you are more likely to have something like 2,048 pixels, you may as well reduce the points you are graphing and draw an approximate curve that fits onto the screen and has only a couple thousand verts. If for example the user graphs a time frame including 10000 points, then downsample those 10000 points to 1000 before graphing. There are numerous techniques you could try, from simple averaging to median-neighbor to Gaussian convolution to (my suggestion) bicubic interpolation. Drawing any number of points greater than 1/2 the screen resolution will simply be a waste.
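For example, a minimal sketch of the simple-averaging option, assuming the data is an ordered IList<Point> and targetCount is roughly your horizontal pixel count:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;

static class ChartDownsampler
{
    public static Point[] Downsample(IList<Point> source, int targetCount)
    {
        if (source.Count <= targetCount)
            return source.ToArray();

        var result = new Point[targetCount];
        double bucketSize = (double)source.Count / targetCount;

        for (int i = 0; i < targetCount; i++)
        {
            // average every source point that falls into this bucket
            int start = (int)(i * bucketSize);
            int end = Math.Min(source.Count, (int)((i + 1) * bucketSize));
            int n = Math.Max(1, end - start);

            double sumX = 0, sumY = 0;
            for (int j = start; j < start + n; j++)
            {
                sumX += source[j].X;
                sumY += source[j].Y;
            }

            result[i] = new Point(sumX / n, sumY / n);
        }

        return result;
    }
}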
As the user zooms in on a part of a graph, you can resample to get higher resolutions and more accurate curve fitting.
When you start dealing with hundreds of thousands of distinct vertices and vectors in your geometry, you should probably consider migrating your graphics code to use a graphics framework instead of depending on WPF (which, while built on top of Direct3D and therefore capable of remarkably efficient vector graphics rendering, has a lot of extra overhead going on that hampers its efficiency). It's possible to host both Direct3D and OpenGL graphics rendering windows within WPF -- I'd suggest moving that direction instead of continuing to work solely within WPF.
(EDIT: changed "DirectX" in original answer to "Direct3D")
Just ran into this question, but as I mentioned in this thread, the most performant approach might be to program against WPF's Visual layer.
Everything Visual in WPF eventually goes against this layer ... and so it is the most lightweight approach of them all.
See this and this for more info. Chapter 14 of Matthew MacDonald's Pro WPF in C# 2008 book also has a good section on it.
As another reference ... see Chapter 2 of Pavan Podila's book WPF Control Development Unleashed. On page 13, he discusses how DrawingVisuals would be an excellent choice for a charting component.
Finally, I just noticed that Charles Petzold wrote an MSDN Magazine article where the best overall (performant anyway) solution (to a scatter plot) was a DrawingVisual approach.
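To give an idea of what programming against the Visual layer looks like, here is a minimal sketch of a host element exposing a single DrawingVisual that contains the whole polyline (names are illustrative):

using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;

public class ChartVisualHost : FrameworkElement
{
    private readonly DrawingVisual _visual = new DrawingVisual();

    public ChartVisualHost(IList<Point> points)
    {
        using (DrawingContext dc = _visual.RenderOpen())
        {
            var pen = new Pen(Brushes.SteelBlue, 1.0);
            pen.Freeze();

            // one DrawLine call per segment; everything ends up in a single visual
            for (int i = 1; i < points.Count; i++)
                dc.DrawLine(pen, points[i - 1], points[i]);
        }

        AddVisualChild(_visual);
    }

    protected override int VisualChildrenCount
    {
        get { return 1; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return _visual;
    }
}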
Another idea would be to use the Image control with the Source property set to a DrawingImage that you've dynamically created.
According to Pavan Podila in WPF Control Development Unleashed, this approach can be very helpful when you have thousands and thousands of visuals that don't need any interactivity. Check out page 25 of his book for more info.
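A rough sketch of that idea (not the book's code): build a frozen DrawingImage from the series and hand it to an Image control.

using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Media;

static class ChartImageBuilder
{
    public static DrawingImage BuildChartImage(IList<Point> points)
    {
        // one StreamGeometry holding the whole polyline
        var geometry = new StreamGeometry();
        using (StreamGeometryContext ctx = geometry.Open())
        {
            ctx.BeginFigure(points[0], false, false);
            ctx.PolyLineTo(points.Skip(1).ToList(), true, false);
        }
        geometry.Freeze();

        var pen = new Pen(Brushes.SteelBlue, 1.0);
        pen.Freeze();

        var drawing = new GeometryDrawing(null, pen, geometry);
        drawing.Freeze();

        var image = new DrawingImage(drawing);
        image.Freeze();
        return image;
    }
}

// usage: chartImage.Source = ChartImageBuilder.BuildChartImage(screenPoints);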
This is an old thread, but I thought it was worth mentioning that you could attain interactivity with the above method by using the MouseUp() event. You know the size of the image's viewport, the resolution of the image, and the mouse's position. For example, you could maintain the collection actualScreenPoints through a timer attached to your UserControl_SizeChanged event:
double xworth = viewport.ActualWidth / (XEnd - XStart);
double yworth = viewport.ActualHeight / (YEnd - YStart);

List<Point> actualScreenPoints = new List<Point>();

for (var i = 0; i < points.Count; i++)
{
    double posX = points[i].X * xworth;
    double posY = points[i].Y * yworth;
    actualScreenPoints.Add(new Point(posX, posY));
}
And then when your MouseUp() event fires, check if any of the points in the collection are within +-2px. There's your MouseUp on a given point.
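For what it's worth, a sketch of what that handler could look like in the chart's code-behind; viewport and actualScreenPoints are the names used above, and OnPointClicked is a hypothetical callback:

private void Chart_MouseUp(object sender, MouseButtonEventArgs e)
{
    Point click = e.GetPosition(viewport);

    foreach (Point p in actualScreenPoints)
    {
        // treat anything within +-2px of the click as a hit on that data point
        if (Math.Abs(p.X - click.X) <= 2 && Math.Abs(p.Y - click.Y) <= 2)
        {
            OnPointClicked(p);
            break;
        }
    }
}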
I don't know how well it scales, but I've had some success using ZedGraph in WPF (a WinForms control hosted inside a WindowsFormsHost). I'm surprised no one has mentioned it yet. It's worth taking a look at, even if you're not planning on using it for your current project.
ZedGraph
Good luck!
I believe the only method that might be faster while remaining in the WPF framework would be to override OnRender in a custom control. You can then render your geometry directly to the persisted scene, culling anything out of view. If the user can only see a small part of the data set at a time, culling could be enough on its own.
With this many data points, it's unlikely that the user can see full detail when the entire dataset is in view. So it might also be worthwhile to consider simplifying the dataset for full view and then showing a more detailed view if and when they zoom in.
Edit: Also, give StreamGeometry a shot. Its whole reason for existing is performance, and you never know until you try.
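To make that concrete, a minimal sketch of the OnRender approach combined with the StreamGeometry suggestion above: the element builds one frozen geometry up front and simply draws it when asked.

using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Media;

public class FastLineChart : FrameworkElement
{
    private readonly StreamGeometry _geometry;
    private readonly Pen _pen;

    public FastLineChart(IList<Point> points)
    {
        _pen = new Pen(Brushes.SteelBlue, 1.0);
        _pen.Freeze();

        _geometry = new StreamGeometry();
        using (StreamGeometryContext ctx = _geometry.Open())
        {
            ctx.BeginFigure(points[0], false, false);
            ctx.PolyLineTo(points.Skip(1).ToList(), true, false);
        }
        _geometry.Freeze();   // frozen geometry is much cheaper for WPF to render
    }

    protected override void OnRender(DrawingContext drawingContext)
    {
        drawingContext.DrawGeometry(null, _pen, _geometry);
    }
}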
This is a very good question, and at its heart it begs the question: "Can any user make practical use of, or business decisions from, a screen containing 100,000 discrete points?"
Following best practice in GUI design philosophy, the answer should be No, which would lead me to question whether there isn't a different way to meet the requirement for the application.
If there really is a bona-fide case for displaying 100,000 points on screen, with no scrolling, then using an off-screen buffer is the way to go. Composite your image to a bitmap, then whack that bitmap onto your Window / Page as needed. This way the heavy lifting is only done once, after which hardware acceleration can be used every time the window needs to be drawn.
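A rough sketch of that off-screen buffer idea using RenderTargetBitmap; the size, DPI and names are illustrative:

using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class ChartBitmapRenderer
{
    public static ImageSource RenderToBitmap(IList<Point> points, int width, int height)
    {
        // draw the whole series once into an off-screen visual
        var visual = new DrawingVisual();
        using (DrawingContext dc = visual.RenderOpen())
        {
            var pen = new Pen(Brushes.SteelBlue, 1.0);
            pen.Freeze();
            for (int i = 1; i < points.Count; i++)
                dc.DrawLine(pen, points[i - 1], points[i]);
        }

        // render it to a bitmap that can simply be blitted afterwards
        var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
        bitmap.Render(visual);
        bitmap.Freeze();
        return bitmap;
    }
}

// usage: chartImage.Source = ChartBitmapRenderer.RenderToBitmap(screenPoints, 2048, 1024);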
Hope this helps.
I haven't worked with WPF (disclaimer), but I suspect that your performance problem is because your code is trying to fit a smooth curved line through all of your data, and the time required increases geometrically (or worse) with the number of data points.
I don't know if this would be acceptable appearance-wise, but try graphing your data by connecting each point to the last with a straight line. This should make the time-to-graph proportional to the number of data points, and with as many points as you have the graph may end up looking exactly the same anyway.