I'm currently plotting XY data on a canvas and drawing a curve with it. So far it is simple and working for a thin line but when I increase the thickness a peculiar effect happens due to how the lines are drawn to form a curve.
I've attached an example image that shows a nice smooth line that works fine when the line is thin. But when the line is thicker you can obviously see the problem.
Is there a way to connect these endpoints to make a nice smooth line?
If not, is there another drawing tool that is useful in creating a nice line?
I'm not happy with the implementation as it is, because the Canvas quickly becomes cluttered with hundreds if not thousands of Line objects. This seems like an awful way of doing it, but I haven't found a better way as of yet. I'd much rather go another route that creates a single curve object.
Any help is appreciated as always.
Thanks!
Point previousPoint;

public void DrawLineToBox(DrawLineAction theDrawAction, Point drawPoint)
{
    Line myLine = new Line();
    myLine.Stroke = new SolidColorBrush(Color.FromArgb(255, 0, 0, 0));
    myLine.StrokeThickness = 29;

    if (theDrawAction == DrawLineAction.KeepDrawing)
    {
        myLine.X1 = previousPoint.X; //draw from the last point
        myLine.Y1 = previousPoint.Y;
    }
    else if (theDrawAction == DrawLineAction.StartDrawing)
    {
        myLine.X1 = drawPoint.X; //draw from the same point
        myLine.Y1 = drawPoint.Y;
    }

    myLine.X2 = drawPoint.X; //draw to this point
    myLine.Y2 = drawPoint.Y;

    canvasToDrawOn.Children.Add(myLine); //add to canvas

    previousPoint.X = drawPoint.X; //remember the current point as the last point
    previousPoint.Y = drawPoint.Y;
}
Try adding the following two lines:
myLine.StrokeStartLineCap = PenLineCap.Round;
myLine.StrokeEndLineCap = PenLineCap.Round;
Also, you really should use a Polyline or Path object to do what you are currently doing. Personally, I always set StrokeStartLineCap and StrokeEndLineCap to PenLineCap.Round and StrokeLineJoin to PenLineJoin.Round for the Polyline objects I use.
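For illustration, here is a minimal sketch (not drop-in code) of replacing the per-segment Line objects with a single Polyline. It reuses the canvasToDrawOn and DrawLineAction names from the question; everything else is an assumption you would adapt:

Polyline myPolyline;

public void DrawLineToBox(DrawLineAction theDrawAction, Point drawPoint)
{
    if (theDrawAction == DrawLineAction.StartDrawing || myPolyline == null)
    {
        myPolyline = new Polyline
        {
            Stroke = Brushes.Black,
            StrokeThickness = 29,
            StrokeLineJoin = PenLineJoin.Round,    // smooths the joins between segments
            StrokeStartLineCap = PenLineCap.Round, // rounds the free ends
            StrokeEndLineCap = PenLineCap.Round
        };
        canvasToDrawOn.Children.Add(myPolyline);   // one object on the Canvas, not thousands
    }

    myPolyline.Points.Add(drawPoint);              // just append the new point
}

Because there is only one Shape, the Canvas stays uncluttered and the round line join removes the gaps you see with thick strokes.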
I am trying to use openCV.NET to read scanned forms. The problem is that sometimes the positions of the relevant regions of interest and the alignment may differ depending on the printer the form was printed from and the way the user scanned the form.
So I thought I could use an ArUco marker as a reference point as there are libraries (ArUco.NET) already built to recognize them. I was hoping to find out how much the ArUco code is rotated and then rotate the form backwards by that amount to make sure the text is straight. Then I can use the center of the ArUco code as a reference point to use OCR on specific regions on the form.
I am using the following code to get the OpenGL modelViewMatrix. However, it always seems to be the same numbers no matter which angle the ArUco code is rotated. I only just started with all of these libraries but I thought that the modelViewMatrix would give me different values depending on the rotation of the marker. Why would it always be the same?
Mat cameraMatrix = new Mat(3, 3, Depth.F32, 1);
Mat distortion = new Mat(1, 4, Depth.F32, 1);

using (Mat image2 = OpenCV.Net.CV.LoadImageM("./image.tif", LoadImageFlags.Grayscale))
{
    using (var detector = new MarkerDetector())
    {
        detector.ThresholdMethod = ThresholdMethod.AdaptiveThreshold;
        detector.Param1 = 7.0;
        detector.Param2 = 7.0;
        detector.MinSize = 0.01f;
        detector.MaxSize = 0.5f;
        detector.CornerRefinement = CornerRefinementMethod.Lines;
        var markerSize = 10;
        IList<Marker> detectedMarkers = detector.Detect(image2, cameraMatrix, distortion);
        foreach (Marker marker in detectedMarkers)
        {
            Console.WriteLine("Detected a marker top left at: " + marker[0].X + " " + marker[0].Y);
            //The upper 3x3 of the modelview matrix (indices 0,4,8, 1,5,9, 2,6,10) is the rotation matrix.
            double[] modelViewMatrix = marker.GetGLModelViewMatrix();
        }
    }
}
It looks like you have not initialized your camera parameters.
cameraMatrix and distortion are the intrinsic parameters of your camera. You can use OpenCV to find them.
This is for OpenCV 2.4 but will help you to understand the basics:
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
Once you have found them and filled in cameraMatrix and distortion, you should get meaningful values back from GetGLModelViewMatrix.
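As a rough sketch of what "initialized" means here (the numbers are placeholders you would replace with the results of calibrating your own camera or scanner, and the SetReal element accessor is an assumption about OpenCV.Net's Arr API that you should check against the library docs):

// Intrinsics: fx, fy (focal lengths in pixels) and cx, cy (principal point).
Mat cameraMatrix = new Mat(3, 3, Depth.F32, 1);
cameraMatrix.SetZero();
cameraMatrix.SetReal(0, 0, 800.0);  // fx  (placeholder)
cameraMatrix.SetReal(1, 1, 800.0);  // fy  (placeholder)
cameraMatrix.SetReal(0, 2, 320.0);  // cx  (placeholder: image width / 2)
cameraMatrix.SetReal(1, 2, 240.0);  // cy  (placeholder: image height / 2)
cameraMatrix.SetReal(2, 2, 1.0);

// Distortion coefficients k1, k2, p1, p2 from the same calibration.
Mat distortion = new Mat(1, 4, Depth.F32, 1);
distortion.SetZero();               // zeros only as a placeholder

With all-zero (or uninitialized) intrinsics the pose estimation has nothing to work with, which is why the model-view matrix never changes.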
I need a way to grab the coordinates of the face in C# for Windows Phone 8.1 in the camera view. I haven't been able to find anything on the web so I'm thinking it might not be possible. What I need is the x and y (and possibly area) of the "box" that forms around the face when it is detected in the camera view. Has anyone done this before?
Code snippet (bear in mind this is part of an app from the tutorial I linked below the code. It's not copy-pasteable, but should provide some help)
const string MODEL_FILE = "haarcascade_frontalface_alt.xml";
FaceDetectionWinPhone.Detector m_detector;

public MainPage()
{
    InitializeComponent();
    m_detector = new FaceDetectionWinPhone.Detector(System.Xml.Linq.XDocument.Load(MODEL_FILE));
}

void photoChooserTask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        BitmapImage bmp = new BitmapImage();
        bmp.SetSource(e.ChosenPhoto);
        WriteableBitmap btmMap = new WriteableBitmap(bmp);

        //find faces in the image
        List<FaceDetectionWinPhone.Rectangle> faces =
            m_detector.getFaces(
                btmMap, 10f, 1f, 0.05f, 1, false, false);

        //go through each face, and draw a red rectangle on top of it.
        foreach (var r in faces)
        {
            int x = Convert.ToInt32(r.X);
            int y = Convert.ToInt32(r.Y);
            int width = Convert.ToInt32(r.Width);
            int height = Convert.ToInt32(r.Height);
            btmMap.FillRectangle(x, y, x + width, y + height, System.Windows.Media.Colors.Red);
        }

        //update the bitmap before drawing it.
        btmMap.Invalidate();
        facesPic.Source = btmMap;
    }
}
This is taken from developer.nokia.com
To do this in real time, you need to intercept the viewfinder image, perhaps using the NewCameraFrame method (EDIT: not sure if you should use this method or PhotoCamera.GetPreviewBufferArgb32 as described below; I have to leave that up to your research).
So basically your task has 2 parts:
Get the viewfinder image
Detect faces on it (using something like the code above)
If I were you, I'd first do step 2 on an image loaded from disk, and once you can detect faces on that, I'd work out how to obtain the current viewfinder image and detect faces on that. X and Y coordinates are easy enough to obtain once you've detected the face - see the code above.
(EDIT): I think you should try the PhotoCamera.GetPreviewBufferArgb32 method to obtain the viewfinder image. Look at the MSDN documentation, and be sure to search through the MSDN docs and tutorials. This should be more than enough to complete step 1.
A lot of face detection algorithms use Haar classifiers, Viola-Jones algorithm etc. If you're familiar with that, you'll feel more confident in what you're doing, but you can do without. Also, read the materials that I linked - they seem fairly good.
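To make step 1 concrete, here is a rough sketch (assumptions: _camera is a Microsoft.Devices.PhotoCamera that has already been initialized as the viewfinder source, and m_detector plus the detection parameters are taken from the snippet above):

PhotoCamera _camera;             // assumed initialized elsewhere and set as the viewfinder source
WriteableBitmap _previewFrame;

void DetectFacesInViewfinder()
{
    int width = (int)_camera.PreviewResolution.Width;
    int height = (int)_camera.PreviewResolution.Height;

    if (_previewFrame == null)
        _previewFrame = new WriteableBitmap(width, height);

    // Copy the current viewfinder frame (ARGB32) straight into the bitmap's pixel buffer.
    _camera.GetPreviewBufferArgb32(_previewFrame.Pixels);
    _previewFrame.Invalidate();

    List<FaceDetectionWinPhone.Rectangle> faces =
        m_detector.getFaces(_previewFrame, 10f, 1f, 0.05f, 1, false, false);

    foreach (var r in faces)
    {
        // r.X, r.Y, r.Width, r.Height are the coordinates you asked for, in
        // preview-buffer pixels; scale them to the on-screen viewfinder size.
    }
}

Call this from a timer or per preview frame; detection is expensive, so you may want to run it only every few frames rather than on every one.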
I'm trying to detect rectangles on this image:
with this code:
static void Main(string[] args)
{
    // Open your image
    string path = "test.png";
    Bitmap image = (Bitmap)Bitmap.FromFile(path);

    // locating objects
    BlobCounter blobCounter = new BlobCounter();
    blobCounter.FilterBlobs = true;
    blobCounter.MinHeight = 5;
    blobCounter.MinWidth = 5;
    blobCounter.ProcessImage(image);
    Blob[] blobs = blobCounter.GetObjectsInformation();

    // check for rectangles
    SimpleShapeChecker shapeChecker = new SimpleShapeChecker();
    foreach (var blob in blobs)
    {
        List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
        List<IntPoint> cornerPoints;

        // use the shape checker to extract the corner points
        if (shapeChecker.IsQuadrilateral(edgePoints, out cornerPoints))
        {
            // only do things if the corners form a rectangle
            if (shapeChecker.CheckPolygonSubType(cornerPoints) == PolygonSubType.Rectangle)
            {
                // here I use the Graphics class to draw an overlay, but you
                // could also just use the cornerPoints list to calculate your
                // x, y, width, height values.
                List<Point> Points = new List<Point>();
                foreach (var point in cornerPoints)
                {
                    Points.Add(new Point(point.X, point.Y));
                }

                Graphics g = Graphics.FromImage(image);
                g.DrawPolygon(new Pen(Color.Red, 5.0f), Points.ToArray());

                image.Save("result.png");
            }
        }
    }
}
but it doesn't recognize the rectangles (walls). It only recognizes the big square, and when I reduce MinHeight and MinWidth, it recognizes trapezoids in the writing.
I propose a different approach. After working almost a year with image processing algorithms, what I can tell you is that to create an efficient algorithm you have to "reflect" on how you, as a human, would do it. Here is the proposed approach:
We don't really care about the textures, we care about the edges (rectangles are edges), so we apply an Edge detection > Difference filter (http://www.aforgenet.com/framework/docs/html/d0eb5827-33e6-c8bb-8a62-d6dd3634b0c9.htm). This gives us:
We want to exaggerate the walls. As humans we know that we are looking for the walls, but the computer does not know this, so apply two rounds of Morphology > Dilatation (http://www.aforgenet.com/framework/docs/html/88f713d4-a469-30d2-dc57-5ceb33210723.htm). This gives us:
We only care about what is wall and what is not, so apply a Binarization > Threshold (http://www.aforgenet.com/framework/docs/html/503a43b9-d98b-a19f-b74e-44767916ad65.htm). We get:
(Optional) We can apply a blob extraction to erase the labels ("QUARTO, BANHEIRO", etc)
We apply a Color > Invert; this is done only because the next step detects white, not black.
Apply a Blobs Processing > Connected Components Labeling (http://www.aforgenet.com/framework/docs/html/240525ea-c114-8b0a-f294-508aae3e95eb.htm). This gives us all the rectangles, like this:
Note that for each colored box you have its coordinates, center, width and height, so you can extract a snip from the real image at those coordinates. A code sketch of this pipeline follows below.
PS: Using the program AForge Image Processing Lab is highly recommended to test your algos.
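If it helps, here is a rough code sketch of that pipeline using AForge.NET's filter classes; the threshold value and the two dilation passes are guesses you would tune (ideally in the Image Processing Lab first):

using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

static Blob[] FindRooms(Bitmap source)
{
    // The filters below expect an 8bpp grayscale image.
    Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(source);

    new DifferenceEdgeDetector().ApplyInPlace(gray);  // 1. keep edges, drop textures
    Dilatation dilate = new Dilatation();
    dilate.ApplyInPlace(gray);                        // 2. exaggerate the walls...
    dilate.ApplyInPlace(gray);                        //    ...two passes
    new Threshold(60).ApplyInPlace(gray);             // 3. wall / not wall (60 is a guess)
    new Invert().ApplyInPlace(gray);                  // 4. rooms become white blobs

    // 5. Label the connected components; each blob's Rectangle is a room's bounding box.
    BlobCounter blobCounter = new BlobCounter();
    blobCounter.ProcessImage(gray);
    return blobCounter.GetObjectsInformation();
}

The Connected Components Labeling filter itself produces the colored preview image; BlobCounter exposes the same components as Blob objects, whose Rectangle, CenterOfGravity and Area properties give you the numbers to crop with.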
Each time a rectangle is found, the polygon is drawn on Graphics and the file is saved only for THAT rectangle. This means that result.png will only contain a single rectangle at a time.
Try first saving all the rectangles in a List<List<Point>>, then go over it and add ALL the rectangles to the image. Something like this (pseudocode first, a concrete version follows below):
var image..
var rectangles..
var blobs..

foreach (blob in blobs)
{
    if (blob is rectangle)
    {
        rectangles.add(blob);
    }
}

foreach (r in rectangles)
{
    image.draw(r.points);
}

image.save("result.png");
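Roughly, against the question's code (reusing its blobs, blobCounter, shapeChecker and image variables), that pseudocode becomes:

var rectangles = new List<List<Point>>();

foreach (var blob in blobs)
{
    List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
    List<IntPoint> cornerPoints;

    if (shapeChecker.IsQuadrilateral(edgePoints, out cornerPoints) &&
        shapeChecker.CheckPolygonSubType(cornerPoints) == PolygonSubType.Rectangle)
    {
        var corners = new List<Point>();
        foreach (var point in cornerPoints)
        {
            corners.Add(new Point(point.X, point.Y));
        }
        rectangles.Add(corners);
    }
}

using (Graphics g = Graphics.FromImage(image))
using (Pen pen = new Pen(Color.Red, 5.0f))
{
    foreach (var corners in rectangles)
    {
        g.DrawPolygon(pen, corners.ToArray());
    }
}

image.Save("result.png");   // saved once, with every detected rectangle drawn on it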
If your problem now is to avoid noise from the writing on the image, use FillHoles with a hole width and height smaller than the smallest rectangle but larger than any of the writing.
If the quality of the image is good and no text touches the border of the image, inverting the image and running FillHoles will remove most of the stuff.
Hope I understood your problem correctly.
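A small sketch of that FillHoles suggestion (the input image grayThresholded and the size limits are assumptions; FillHoles expects white objects on a black background, hence the Invert):

Bitmap binary = new Invert().Apply(grayThresholded);   // grayThresholded: an 8bpp thresholded image

FillHoles fillHoles = new FillHoles();
fillHoles.MaxHoleWidth = 50;       // tune: larger than the writing, smaller than the smallest room
fillHoles.MaxHoleHeight = 50;
fillHoles.CoupledSizeFiltering = false;
fillHoles.ApplyInPlace(binary);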
We are trying to detect rectangles among many other rectangles (consider the gray rectangles of the grid). Almost all algorithms will get confused here. You're not eliminating the extraneous detail from the input image. Why not replace the grid line color with the background color, or use a threshold as above to eliminate the grid first?
Then grow all pixels by an amount equal to the wall width, find all horizontal and vertical lines, and then use the maths to build rectangles from the detected lines. Uncontrolled filling is risky: when boundaries are not closed, a fill will merge two rooms into one rectangle.
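A rough sketch of the line-finding part with AForge's Hough transform (binaryWalls is an assumed 8bpp image with the walls in white after the clean-up above; turning the kept lines into room rectangles, i.e. the intersection maths, is left out):

HoughLineTransformation hough = new HoughLineTransformation();
hough.ProcessImage(binaryWalls);

foreach (HoughLine line in hough.GetLinesByRelativeIntensity(0.5))
{
    // Theta is in degrees: near 0 or 180 means a vertical line, near 90 a horizontal one.
    bool vertical = line.Theta < 10 || line.Theta > 170;
    bool horizontal = Math.Abs(line.Theta - 90) < 10;

    if (vertical || horizontal)
    {
        // keep (line.Theta, line.Radius) for the rectangle-building step
    }
}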
I have an application that is very "connection-based", i.e. multiple inputs/outputs.
The UI concept of a "cable" is exactly what I'm looking for to make the concept clear to the user. Propellerhead took a similar approach in their Reason software for audio components, illustrated in this YouTube video (fast forward to 2m:50s).
I can make this concept work in GDI by painting a spline from point A to point B, but there's got to be a more elegant way to do this with Paths or something similar in WPF. Where do you start? Is there a good way to simulate the animation of the cable swinging when you grab it and shake it?
I'm also open to control libraries (commercial or open source) if this wheel has already been invented for WPF.
Update: Thanks to the links in the answers so far, I'm almost there.
I've created a BezierCurve programmatically, with Point 1 being (0, 0), Point 2 being the bottom "hang" point, and Point 3 being wherever the mouse cursor is. I've created a PointAnimation for Point 2 with an ElasticEase easing function applied to it to give the "Swinging" effect (i.e., bouncing the middle point around a bit).
The only problem is that the animation seems to run a little late. I'm starting the Storyboard each time the mouse moves; is there a better way to do this animation? My solution so far is located here:
Bezier Curve Playground
Code:
private Path _path = null;
private BezierSegment _bs = null;
private PathFigure _pFigure = null;
private Storyboard _sb = null;
private PointAnimation _paPoint2 = null;
private ElasticEase _eEase = null;

private void cvCanvas_MouseMove(object sender, MouseEventArgs e)
{
    var position = e.GetPosition(cvCanvas);
    AdjustPath(position.X, position.Y);
}

// basic idea: when the mouse moves, call AdjustPath and draw a line from (0,0) to the mouse position with a "hang" in the middle
private void AdjustPath(double x, double y)
{
    if (_path == null)
    {
        _path = new Path();
        _path.Stroke = new SolidColorBrush(Colors.Blue);
        _path.StrokeThickness = 2;
        cvCanvas.Children.Add(_path);

        _bs = new BezierSegment(new Point(0, 0), new Point(0, 0), new Point(0, 0), true);
        PathSegmentCollection psCollection = new PathSegmentCollection();
        psCollection.Add(_bs);

        _pFigure = new PathFigure();
        _pFigure.Segments = psCollection;
        _pFigure.StartPoint = new Point(0, 0);

        PathFigureCollection pfCollection = new PathFigureCollection();
        pfCollection.Add(_pFigure);

        PathGeometry pathGeometry = new PathGeometry();
        pathGeometry.Figures = pfCollection;
        _path.Data = pathGeometry;
    }

    double bottomOfCurveX = x / 2;
    double bottomOfCurveY = y + (x * 1.25);

    _bs.Point3 = new Point(x, y);

    if (_sb == null)
    {
        _paPoint2 = new PointAnimation();
        _paPoint2.From = _bs.Point2;
        _paPoint2.To = new Point(bottomOfCurveX, bottomOfCurveY);
        _paPoint2.Duration = new Duration(TimeSpan.FromMilliseconds(1000));

        _eEase = new ElasticEase();
        _paPoint2.EasingFunction = _eEase;

        _sb = new Storyboard();
        Storyboard.SetTarget(_paPoint2, _path);
        Storyboard.SetTargetProperty(_paPoint2, new PropertyPath("Data.Figures[0].Segments[0].Point2"));
        _sb.Children.Add(_paPoint2);
        _sb.Begin(this);
    }

    _paPoint2.From = _bs.Point2;
    _paPoint2.To = new Point(bottomOfCurveX, bottomOfCurveY);
    _sb.Begin(this);
}
If you want true dynamic motion (i.e., when you "shake" the mouse pointer you can create waves that travel along the cord), you will need to use finite element techniques. However, if you are OK with static behavior you can simply use Bezier curves.
First I'll briefly describe the finite element approach, then go into more detail on the static approach.
Dynamic approach
Divide your "cord" into a large number (1000 or so) "elements", each with a position and velocity Vector. Use the CompositionTarget.Rendering event to compute each element position as follows:
Compute the pull on each element along the "cord" from adjacent elements, which is proportional to the distance between elements. Assume the cord itself is massless.
Compute the net force vector on each "element" which consists of the pull from each adjacent element along the cord, plus the constant force of gravity.
Use a mass constant to convert the force vector to acceleration, and update the position and velocity using the equations of motion.
Draw the line using a StreamGeometry built with a BeginFigure followed by a PolyLineTo. With so many points there is little reason to do the extra computations to create a cubic bezier approximation. A rough sketch of this update loop follows below.
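For illustration only, here is a very rough sketch of that update loop; the element count, spring constant, mass, damping and time step are made-up tuning knobs, the endpoints are assumed to be pinned to the two connection points, and cordPath is an assumed Path element on the canvas:

const int N = 200;
Vector[] pos = new Vector[N];     // element positions; pos[0] and pos[N - 1] are set from the connectors
Vector[] vel = new Vector[N];
const double SpringK = 2000, Mass = 0.01, Damping = 0.98, Dt = 1.0 / 60.0;
static readonly Vector Gravity = new Vector(0, 400);

void OnRendering(object sender, EventArgs e)   // hooked up via CompositionTarget.Rendering += OnRendering;
{
    for (int i = 1; i < N - 1; i++)            // endpoints are pinned, so skip them
    {
        // Pull from each neighbour along the cord, proportional to the distance to it, plus gravity.
        Vector force = SpringK * (pos[i - 1] - pos[i])
                     + SpringK * (pos[i + 1] - pos[i])
                     + Gravity * Mass;

        // Explicit Euler with a little damping; a real implementation needs tuning to stay stable.
        vel[i] = (vel[i] + (force / Mass) * Dt) * Damping;
        pos[i] += vel[i] * Dt;
    }

    // Rebuild the cord geometry from the element positions each frame.
    var geometry = new StreamGeometry();
    using (StreamGeometryContext ctx = geometry.Open())
    {
        ctx.BeginFigure((Point)pos[0], false, false);
        var rest = new List<Point>();
        for (int i = 1; i < N; i++)
        {
            rest.Add((Point)pos[i]);
        }
        ctx.PolyLineTo(rest, true, true);
    }
    cordPath.Data = geometry;
}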
Static approach
Divide your cord into perhaps 30 segments, each of which is a cubic bezier approximation to the catenary y = a cosh(x/a). Your end control points should be on the catenary curve, the control lines should be tangent to the catenary, and the control line lengths should be set based on the second derivative of the catenary.
In this case you will probably also want to render a StreamGeometry, using BeginFigure and PolyBezierTo to build it.
I would implement this as a custom Shape subclass "Catenary", similar to Rectangle and Ellipse. In that case, all you have to do is override the DefiningGeometry property. For efficiency I would also override CacheDefiningGeometry, GetDefiningGeometryBounds, and GetNaturalSize.
You would first decide how to parameterize your catenary, then add DependencyProperties for all your parameters. Make sure you set the AffectsMeasure and AffectsRender flags in your FrameworkPropertyMetadata.
One possible parameterization would be XOffset, YOffset, Length. Another might be XOffset, YOffset, SagRelativeToWidth. It would depend on what would be easiest to bind to.
Once your DependencyProperties are defined, implement your DefiningGeometry property to compute the cubic bezier control points, construct the StreamGeometry, and return it.
If you do this, you can drop a Catenary control anywhere and get a catenary curve.
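A skeleton of what such a Catenary shape could look like (the XOffset/YOffset/Sag parameterization is just one of the options mentioned above, the 'a' value uses the usual a ≈ width²/(8·sag) approximation, and for brevity the geometry samples the curve into a polyline; swapping that for fitted cubic beziers via PolyBezierTo works the same way):

public class Catenary : Shape
{
    public static readonly DependencyProperty XOffsetProperty =
        DependencyProperty.Register("XOffset", typeof(double), typeof(Catenary),
            new FrameworkPropertyMetadata(100.0,
                FrameworkPropertyMetadataOptions.AffectsMeasure |
                FrameworkPropertyMetadataOptions.AffectsRender));

    public static readonly DependencyProperty YOffsetProperty =
        DependencyProperty.Register("YOffset", typeof(double), typeof(Catenary),
            new FrameworkPropertyMetadata(0.0,
                FrameworkPropertyMetadataOptions.AffectsMeasure |
                FrameworkPropertyMetadataOptions.AffectsRender));

    public static readonly DependencyProperty SagProperty =
        DependencyProperty.Register("Sag", typeof(double), typeof(Catenary),
            new FrameworkPropertyMetadata(50.0,
                FrameworkPropertyMetadataOptions.AffectsMeasure |
                FrameworkPropertyMetadataOptions.AffectsRender));

    public double XOffset { get { return (double)GetValue(XOffsetProperty); } set { SetValue(XOffsetProperty, value); } }
    public double YOffset { get { return (double)GetValue(YOffsetProperty); } set { SetValue(YOffsetProperty, value); } }
    public double Sag { get { return (double)GetValue(SagProperty); } set { SetValue(SagProperty, value); } }

    protected override Geometry DefiningGeometry
    {
        get
        {
            // Screen y grows downward, so y = a*cosh(x/a) is flipped here so the cord
            // visibly sags between (0,0) and (XOffset, YOffset).
            double a = Math.Max(1.0, (XOffset * XOffset) / (8.0 * Math.Max(1.0, Sag)));
            var geometry = new StreamGeometry();
            using (StreamGeometryContext ctx = geometry.Open())
            {
                ctx.BeginFigure(new Point(0, 0), false, false);
                var points = new List<Point>();
                for (int i = 1; i <= 30; i++)
                {
                    double x = XOffset * i / 30.0;
                    double t = x - XOffset / 2.0;   // distance from mid-span
                    double sag = a * (Math.Cosh(XOffset / (2.0 * a)) - Math.Cosh(t / a));
                    points.Add(new Point(x, YOffset * i / 30.0 + sag));
                }
                ctx.PolyLineTo(points, true, true);
            }
            geometry.Freeze();
            return geometry;
        }
    }
}

You could then drop a Catenary element anywhere, give it a Stroke and StrokeThickness, and bind XOffset/YOffset to the connector positions.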
Use bezier curve segments in a path.
http://www.c-sharpcorner.com/UploadFile/dbeniwal321/WPFBezier01302009015211AM/WPFBezier.aspx
IMHO 'hanging' (physically simulated) cables are a case of over-doing it - favouring looks over usability.
Are you sure you're not just cluttering the user-experience ?
In a node/connection-based UI I find clear connections (like in Quartz Composer: http://ellington.tvu.ac.uk/ma/wp-content/uploads/2006/05/images/Quartz%20Composer_screenshot_011.png) way more important than eye-candy like swinging cables that head in a different direction (down, due to gravity) than where the actual connection point is. (And in the meantime they eat up CPU cycles for a simulation that could be more useful elsewhere.)
Just my $0.02
I'm working on a project that involves drawing curved paths between two objects. Currently, I've been writing some test code to play around with bezier curves and animation. The first test is simply to move the endpoint (Point3) from the origin object (a rectangle) to the destination object (another rectangle), in a straight line. Here is the code which sets up the actual line:
connector = new Path();
connector.Stroke = Brushes.Red;
connector.StrokeThickness = 3;
PathGeometry connectorGeometry = new PathGeometry();
PathFigure connectorPoints = new PathFigure();
connectorCurve = new BezierSegment();
connectorPoints.StartPoint = new Point((double)_rect1.GetValue(Canvas.LeftProperty) + _rect1.Width / 2,
(double)_rect1.GetValue(Canvas.TopProperty) + _rect1.Height / 2);
connectorCurve.Point1 = connectorPoints.StartPoint;
connectorCurve.Point2 = connectorPoints.StartPoint;
connectorCurve.Point3 = connectorPoints.StartPoint;
connectorPoints.Segments.Add(connectorCurve);
connectorGeometry.Figures.Add(connectorPoints);
connector.Data = connectorGeometry;
MainCanvas.Children.Add(connector);
OK, so we now have a line collapsed to a point. Now, let's animate that line, going from _rect1 to _rect2 (the two objects at the endpoints):
PointAnimation pointAnim = new PointAnimation();
pointAnim.From = connectorCurve.Point3;
pointAnim.To = new Point((double)_rect2.GetValue(Canvas.LeftProperty) + _rect2.Width / 2,
(double)_rect2.GetValue(Canvas.TopProperty) + _rect2.Height / 2);
pointAnim.Duration = new Duration(TimeSpan.FromSeconds(5));
connectorCurve.BeginAnimation(BezierSegment.Point3Property, pointAnim);
Works beautifully. However, when I try to do it with a storyboard, I get nothing. Here's the storyboarded code:
Storyboard board = new Storyboard();
PointAnimation pointAnim = new PointAnimation();
pointAnim.From = connectorCurve.Point3;
pointAnim.To = new Point((double)_rect2.GetValue(Canvas.LeftProperty) + _rect2.Width / 2,
(double)_rect2.GetValue(Canvas.TopProperty) + _rect2.Height / 2);
pointAnim.Duration = new Duration(TimeSpan.FromSeconds(5));
Storyboard.SetTarget(pointAnim, connectorCurve);
Storyboard.SetTargetProperty(pointAnim, new PropertyPath(BezierSegment.Point3Property));
board.Children.Add(pointAnim);
board.Begin();
Nothing moves. I'm suspecting there is a problem with what I'm feeding SetTarget or SetTargetProperty, but can't seem to figure it out. Does anyone have experience with animating line / bezier points in WPF?
I recreated your code, and this works:
Storyboard.SetTarget(pointAnim, connector);
Storyboard.SetTargetProperty(pointAnim, new PropertyPath("Data.Figures[0].Segments[0].Point3"));
That fixes it :) It seems that the target needs to be the control itself.
Going one step down, like this:
Storyboard.SetTarget(pointAnim, connectorGeometry);
Storyboard.SetTargetProperty(pointAnim, new PropertyPath("Figures[0].Segments[0].Point3"));
...gives the InvalidOperationException:
'[Unknown]' property value in the path 'Figures[0].Segments[0].Point3' points to immutable instance of 'System.Windows.Media.PathFigure'.
http://msdn.microsoft.com/en-us/library/system.windows.media.animation.storyboard(VS.95).aspx says:
Do not attempt to call Storyboard members (for example, Begin) within the constructor of the page. This will cause the animation to fail silently.
..in case you were doing that!
The sample on that page also sets the Duration property of the Storyboard object.
Finally, a general tip: with these kinds of UI objects and weird XAML object graphs, once you've got the basics working it's best to put the storyboard in a ResourceDictionary and use something like 'Resources["Name"] as Storyboard' to get it back later.
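For example (the resource key is made up): define the storyboard once as a resource, then fetch and reuse it instead of rebuilding it each time:

// In XAML: <Storyboard x:Key="ConnectorAnimation"> ... </Storyboard>
Storyboard board = (Storyboard)this.Resources["ConnectorAnimation"];
board.Begin();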
Hope that's helpful: looks like the missing Duration should do the trick.
Edit: Looks like Duration is set to Automatic by default. I will see what else I can come up with; please bear with me. :)