I want to detect objects using the cvHoughCircles method in Visual C#. If anyone knows how to do this, please help me.
Edit Details:
I searched on the Internet and there are examples using the gray.HoughCircles method.
This is my code:
Image<Bgr, Byte> image = capture.QueryFrame();
// HSV range for the target color; note that OpenCV stores 8-bit hue as 0-179,
// so a hue of 358 here is out of range for an 8-bit image
MCvScalar hsv_min = new MCvScalar(150, 84, 130, 0);
MCvScalar hsv_max = new MCvScalar(358, 256, 255, 0);
IntPtr hsv_frame = CvInvoke.cvCreateImage(new System.Drawing.Size(640, 480), IPL_DEPTH.IPL_DEPTH_8U, 3);
IntPtr thresholded = CvInvoke.cvCreateImage(new System.Drawing.Size(640, 480), IPL_DEPTH.IPL_DEPTH_8U, 1);
// Convert to HSV and keep only the pixels inside the color range
CvInvoke.cvCvtColor(image, hsv_frame, COLOR_CONVERSION.CV_BGR2HSV);
CvInvoke.cvInRangeS(hsv_frame, hsv_min, hsv_max, thresholded);
IntPtr storage = CvInvoke.cvCreateMemStorage(0);
// Smooth the binary image to reduce false circle detections
CvInvoke.cvSmooth(thresholded, thresholded, SMOOTH_TYPE.CV_GAUSSIAN, 9, 9, 0, 0);
IntPtr circles = CvInvoke.cvHoughCircles(thresholded, storage, HOUGH_TYPE.CV_HOUGH_GRADIENT, 2, 4, 100, 50, 10, 400);
In the following link there is code, but it is in Python, so what I'm doing is trying to convert it into Visual C#.
http://www.lirtex.com/robotics/fast-object-tracking-robot-computer-vision/#comment-847
I want to take all the detected circles into a for loop and then draw a circle around each corresponding object, as in the Python code.
I tried to use a foreach loop but I get this error:
foreach statement cannot operate on variables of type 'System.IntPtr' because 'System.IntPtr' does not contain a public definition for 'GetEnumerator'.
Is there any way to avoid this error?
Did you try this tutorial?
Shape (Triangle, Rectangle, Circle, Line) Detection in CSharp
It is a good tutorial which may help you.
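As for the foreach error: cvHoughCircles returns a raw IntPtr to a native CvSeq, which C# cannot enumerate. The typed Emgu API you mentioned (gray.HoughCircles) avoids the problem because it returns managed arrays. A minimal sketch, assuming Emgu CV 2.x and that your thresholded result is available as a typed Image<Gray, Byte> named gray rather than an IntPtr:

// HoughCircles returns one CircleF[] per channel; a grayscale image has one channel.
// The two Gray arguments are the Canny and accumulator thresholds (100 and 50 in your call).
CircleF[][] circles = gray.HoughCircles(
    new Gray(100),   // Canny edge threshold
    new Gray(50),    // accumulator threshold
    2,               // dp: accumulator resolution divisor
    4,               // minimum distance between circle centers
    10,              // minimum radius
    400);            // maximum radius

foreach (CircleF circle in circles[0])
{
    // Draw each detected circle on the original color frame, 2 pixels thick
    image.Draw(circle, new Bgr(System.Drawing.Color.Red), 2);
}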
I'm using Emgu.CV to template match and to save images.
Unfortunately I have run into an issue that I have not been able to solve for weeks.
The problem is that I serialize the byte array and size of the original image to a JSON file, and whenever I try to convert it back, sometimes the image is distorted.
I have already tried skipping the serialization process entirely, and it still became distorted.
Here is the code of the conversion process:
// Method creates a screenshot; at this point the displayed images are 100% correct
Image<Bgr565, byte> screenCrop = SnipMaker.takeSnip();
// Normally this byte array would come from the JSON file (skipped here)
byte[] data = screenCrop.Bytes;
Mat mat = new Mat(screenCrop.Rows, screenCrop.Cols, screenCrop.Mat.Depth, screenCrop.NumberOfChannels);
Marshal.Copy(data, 0, mat.DataPointer, screenCrop.Cols * screenCrop.Rows * screenCrop.NumberOfChannels);
Image<Bgr565, byte> img = mat.ToImage<Bgr565, byte>(); // This image is suddenly distorted
The problem is that, depending on I'm not sure what, this results in either a perfectly good image or a skewed one:
normal result
same code different result
It's almost like it's sometimes 1 pixel behind, but the only thing that changes is the size and dimensions of the screenshots.
I have tried direct ways like:
Image<Bgr, byte> img = new Image<Bgr, byte>(width, height);
img.Bytes = data; // data is the byte array that I got from the file
This also sometimes gives a correct picture, but other times it throws an exception (an out-of-range exception in marshal.cs when trying to copy bytes from data to img).
The only thing I suspect at this point is that I'm doing something wrong when taking the screenshot, but I'm not sure what:
public static Image<Bgr565, byte> Snip()
{
    int screenWidth = (int)System.Windows.SystemParameters.PrimaryScreenWidth;
    int screenHeight = (int)System.Windows.SystemParameters.PrimaryScreenHeight;
    using (Bitmap bmp = new Bitmap(screenWidth, screenHeight))
    {
        // Capture the whole primary screen into the bitmap
        using (Graphics gr = Graphics.FromImage(bmp))
            gr.CopyFromScreen(0, 0, 0, 0, bmp.Size);
        using (var snipper = new SnippingTool(bmp))
        {
            if (snipper.ShowDialog() == true)
            {
                Bitmap bitmapImage = new Bitmap(snipper.Image);
                Rectangle rectangle = new Rectangle(0, 0, bitmapImage.Width, bitmapImage.Height); // System.Drawing
                // Note: the bitmap is locked as 24bpp RGB (3 bytes/pixel) but wrapped in a
                // Bgr565 image (2 bytes/pixel) -- this pixel-format mismatch is suspect
                BitmapData bmpData = bitmapImage.LockBits(rectangle, ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb); // System.Drawing.Imaging
                Image<Bgr565, byte> outputImage = new Image<Bgr565, byte>(bitmapImage.Width, bitmapImage.Height, bmpData.Stride, bmpData.Scan0);
                bitmapImage.Dispose();
                snipper.Close();
                return outputImage;
            }
        }
        return null;
    }
}
So far I have not been able to solve this, and knowing my luck no one will probably answer me here. But please, could someone help me with this?
Thank you in advance
So, thank you to everyone who helped.
The issue was indeed in the screenshot script: I used an incorrect combination of pixel formats, which resulted in an inconsistent byte transfer. GDI+ pads every bitmap row to a 4-byte boundary, so with a 24bpp format the stride can contain padding bytes that a plain width * height * channels copy does not account for.
But because the step property in Image<Bgr, byte>.Mat is calculated based on the width of the image (Emgu CV source):
step = sizeof(byte) * s.Width * channels;
some of the images looked normal and others didn't (speculation based on observation).
Fix:
change all Image<Bgr, byte> to Image<Bgra, byte>
to make it 32-bit, and then change:
BitmapData bmpData = bitmapImage.LockBits(rectangle, ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
to:
BitmapData bmpData = bitmapImage.LockBits(rectangle, ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
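For anyone who cannot switch formats: copying row by row while honoring the bitmap's Stride also avoids the skew. A minimal sketch with a hypothetical helper (ToImageRowByRow is not part of Emgu; it assumes a 32bpp source bitmap):

static Image<Bgra, byte> ToImageRowByRow(Bitmap bmp)
{
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData bmpData = bmp.LockBits(rect, ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    try
    {
        int rowBytes = bmp.Width * 4; // 4 channels * 1 byte; always 4-byte aligned, so no padding
        byte[] buffer = new byte[rowBytes * bmp.Height];
        for (int y = 0; y < bmp.Height; y++)
        {
            // Stride may be larger than rowBytes; skip the padding at the end of each row
            Marshal.Copy(IntPtr.Add(bmpData.Scan0, y * bmpData.Stride), buffer, y * rowBytes, rowBytes);
        }
        Image<Bgra, byte> img = new Image<Bgra, byte>(bmp.Width, bmp.Height);
        img.Bytes = buffer;
        return img;
    }
    finally
    {
        bmp.UnlockBits(bmpData);
    }
}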
Hope this will help someone in the future. : )
I need to create a screw shape using Eyeshot's libraries in a .NET application.
In SolidWorks this is easily accomplished by creating the shape/profile that needs to be swept/stretched, and a helix curve which is used as a rail or direction.
In SolidWorks, positioning the helix start point at some point on the profile's diameter and using the "Sweep" command causes that point to be driven/rotated around the helix, and the wanted shape is created.
SolidWorks example
In Eyeshot I create my profile as a LinearPath entity and use the
SweepAsSolid(ICurve rail, double tol, sweepMethodType sweepMethod = sweepMethodType.RotationMinimizingFrames)
function, but the result is different. It seems that SweepAsSolid positions the helix start point at the center of the profile, so a different shape is created.
Using helix as a rail:
Eyeshot example with helix as a rail
Using straight line as a rail:
Eyeshot example with straight line as a rail
Is there a way to get the wanted shape with Eyeshot's libraries, using the same procedure as in SolidWorks?
I think the method ExtrudeWithTwist() is what you are looking for:

// Build a closed rectangular profile from four lines
Line l1 = new Line(-3, 0, 3, 0);
Line l2 = new Line(3, 0, 3, 2);
Line l3 = new Line(3, 2, -3, 2);
Line l4 = new Line(-3, 2, -3, 0);
CompositeCurve cc1 = new CompositeCurve(l1, l2, l3, l4);
// Extrude the profile 10 units along -Z while twisting it PI radians
// around the axis through (0, 1, 0)
Surface[] loft1 = Surface.ExtrudeWithTwist(cc1, new Vector3D(0, 0, -10), new Point3D(0, 1, 0), Math.PI, 0.1);
foreach (Surface s in loft1)
{
    s.Translate(10, 0, 0); // move the result aside so it doesn't overlap the profile
}
model.Entities.AddRange(loft1, 0, Color.Orange);
I found the methods for Chamfer and Fillet but could not really understand how to implement them.
Basically, I am not able to invoke the Fillet method.
http://documentation.devdept.com/100/WPF/topic4434.html
If anybody can guide me, I would appreciate it.
Code:
ICurve line1 = new Line(0, 0, 0, 57.06, 0, 0);
ICurve line2 = new Line(0, 0, 0, 0, 45, 0);
So how do I fillet between these two lines? I can't locate the Fillet method to pass these ICurves to.
Adding an image for a better understanding of the problem. As you can see, I am not able to invoke the Curve class and subsequently the Fillet method. I am using Eyeshot version 12.
Image of all the DLLs added, but still the same error.
Thanks.
Here you go, bud. Hopefully this is a decent start. I set flip1 to true, so that line1's end point is at the start of line2. (I didn't actually plot this, but I think that's what they're asking for.)
I also made the assumption that you want to trim lines 1 and 2.
Eyeshot's documentation on their website is pretty decent. Reading it can definitely help you understand the constructors a little better.
The output of the fillet command is an arc, which makes sense. You will most likely need to add myFillet separately from lines 1 and 2 to the viewport, as they are all separate entities.
ICurve line1 = new Line(0, 0, 0, 57.06, 0, 0);
ICurve line2 = new Line(0, 0, 0, 0, 45, 0);
double radius = 10.0;
bool flip1 = true;   // reverse line1 so its end point meets the start of line2
bool flip2 = false;
bool trim1 = true;   // shorten both lines back to the fillet's tangent points
bool trim2 = true;
Arc myFillet;
// Fillet is a static method on the Curve class; the resulting arc comes back
// through the out parameter
Curve.Fillet(line1, line2, radius, flip1, flip2, trim1, trim2, out myFillet);
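Then each entity needs to be added to the viewport. A minimal sketch, assuming the same model object and the same style of Entities.Add/AddRange overloads as in the ExtrudeWithTwist answer above (the exact Add signature may differ by Eyeshot version):

// The two lines and the fillet arc are separate entities; add each one
model.Entities.Add((Entity)line1, 0, Color.Black);
model.Entities.Add((Entity)line2, 0, Color.Black);
model.Entities.Add(myFillet, 0, Color.Black);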
I have been trying to accomplish this for quite some time with no success; I've looked at several relevant questions here on StackOverflow, and I've also followed 6 different tutorials that ALL followed pretty much the same process:
1. Build the vertices: -1, -1, 0; -1, 1, 0; 1, 1, 0; and 1, -1, 0.
2. Build the indices: 0, 1, 2, 0, 2, 3.
3. Create the vertex and index buffers.
4. Clear the RenderTargetView.
5. Set the current vertex and pixel shaders.
6. Update the constant buffers if you have any.
7. Render the quad (see below).
8. Rinse and repeat steps 4 - 7.
Now, the reason this is driving me up the wall is that I can render far more advanced objects such as:
Spheres
3D Lines
Fans
My code for creating the quad is pretty much the same as everyone else's:
public class Quad {
    private void SetVertices() {
        float width = Rescale(Size.Width, 0, Application.Width, -1, 1);
        float height = Rescale(Size.Height, 0, Application.Height, -1, 1);
        vertices = new Vertex[] {
            new Vertex(new Vector3(-width, -height, 0), Vector3.ForwardLH, Color.White),
            new Vertex(new Vector3(-width, height, 0), Vector3.ForwardLH, Color.White),
            new Vertex(new Vector3(width, height, 0), Vector3.ForwardLH, Color.White),
            new Vertex(new Vector3(width, -height, 0), Vector3.ForwardLH, Color.White)
        };
        indices = new int[] { 0, 1, 2, 0, 2, 3 };
        vertexBuffer = Buffer.Create(Device3D, BindFlags.VertexBuffer, vertices);
        vertexBinding = new VertexBufferBinding(vertexBuffer, Utilities.SizeOf<Vertex>(), 0);
        indexBuffer = Buffer.Create(Device3D, BindFlags.IndexBuffer, indices);
        indexCount = indices.Length;
    }

    public void Render() {
        if (shaderResourceView != null)
            context3D.PixelShader.SetShaderResource(0, shaderResourceView);
        context3D.PixelShader.SetSampler(0, samplerState);
        // The indices 0,1,2 / 0,2,3 describe two independent triangles,
        // so the topology should be TriangleList, not TriangleStrip
        context3D.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
        context3D.InputAssembler.SetVertexBuffers(0, vertexBinding);
        context3D.InputAssembler.SetIndexBuffer(indexBuffer, Format.R32_UInt, 0);
        context3D.DrawIndexed(indexCount, 0, 0);
    }
}
Notes
I am using a right-handed coordinate system (for some reason the previous developer hardwired Vector3.ForwardLH into some places that I cannot get rid of yet), if that helps any; I cannot currently convert to a left-handed coordinate system.
Am I missing something here? Why am I unable to render a basic quad?
If you feel more information is needed feel free to let me know and I will add it on request.
When rendering with Direct3D 11, you need to know all the state. You do not mention what your BlendState, DepthStencilState, or RasterState settings are here, which is a likely reason you aren't getting the results you want.
If the DepthStencilState is set to use the Z-buffer, then the fact that your vertices have a 0 for the Z means they are going to get culled. You can set a depth/stencil state without Z writes or Z tests, and you should definitely turn off Z writes when drawing 2D stuff anyhow. You can also pass something like 0.5 for the Z value of your vertices, which is fairly common for 2D drawing with 3D APIs.
If you have backface culling enabled in the RasterState, then the winding order of your vertices could result in them being skipped. You can play with different winding orders, or disable culling; a sketch of both states follows below.
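For example, a minimal SharpDX sketch of the two states described above (assuming device is your SharpDX.Direct3D11.Device and context3D is the immediate context, as in your Quad class):

// Depth/stencil state with no Z test and no Z writes, suitable for 2D quads
var depthDesc = DepthStencilStateDescription.Default();
depthDesc.IsDepthEnabled = false;               // no Z test
depthDesc.DepthWriteMask = DepthWriteMask.Zero; // no Z writes
var depthOff = new DepthStencilState(device, depthDesc);

// Rasterizer state with backface culling disabled, so winding order
// cannot cause the quad to be skipped
var rasterDesc = RasterizerStateDescription.Default();
rasterDesc.CullMode = CullMode.None;
var cullOff = new RasterizerState(device, rasterDesc);

// Bind both states before drawing the quad
context3D.OutputMerger.SetDepthStencilState(depthOff);
context3D.Rasterizer.State = cullOff;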
It also really matters what your Vertex Shader and Pixel Shader are here. You don't show the code for setting your shaders or shader constants, and you don't show the HLSL.
You should seriously consider using the SharpDX Toolkit SpriteBatch class for efficient quad rendering, or looking at its source.
I know you are using SharpDX and C#, but you might find it useful to see the DirectX Tool Kit for DX11 tutorial on drawing 2D shapes.
I tried following the algorithm, but it does not work and I can't figure out what the problem is.
Can somebody help me?
Where can I learn/find examples of gesture recognitions streamed from Kinect, using OpenCV?
Image<Gray, Byte> dest = new Image<Gray, Byte>(this.bitmap.Width, this.bitmap.Height);
// Binarize: pixels above 220 become white (255 is the effective maximum for 8-bit images)
CvInvoke.cvThreshold(src, dest, 220, 300, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY);
Bitmap nem1 = new Bitmap(dest.Bitmap);
this.bitmap = nem1;
Graphics g = Graphics.FromImage(this.bitmap);
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
{
    // Walk the linked list of top-level contours
    for (Contour<Point> contours = dest.FindContours();
         contours != null;
         contours = contours.HNext)
    {
        g.DrawRectangle(new Pen(new SolidBrush(Color.Green)), contours.BoundingRectangle);
        IntPtr seq = CvInvoke.cvConvexHull2(contours, storage.Ptr, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE, 0);
        IntPtr defects = CvInvoke.cvConvexityDefects(contours, seq, storage);
        Seq<Point> tr = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        // GetConvexityDefacts is Emgu's own (misspelled) method name
        Seq<Emgu.CV.Structure.MCvConvexityDefect> te = contours.GetConvexityDefacts(storage, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        g.DrawRectangle(new Pen(new SolidBrush(Color.Green)), tr.BoundingRectangle);
    }
}
Without having some graphical data it's hard to help (I'm also without the proper hardware). Anyway, I suggest two things:
since it's a graphical procedure, debug everything by saving or showing each intermediate step (threshold, contours, convex hull)
change to a simpler approach, sketched after this list. For example:
apply a threshold (resulting in a 0/1 map of your hands)
for each row, count the 0/1 transitions
test the maximum number of transitions: i.e., if it's above 7, the hands are open
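A minimal sketch of that transition-counting idea, assuming an already-thresholded Emgu image where hand pixels are 255 (MaxRowTransitions is a hypothetical helper, and the threshold of 7 is just the heuristic above):

// Count value changes per row of a binary image and return the maximum.
// An open hand with spread fingers produces many transitions per row.
static int MaxRowTransitions(Image<Gray, byte> binary)
{
    int max = 0;
    for (int y = 0; y < binary.Rows; y++)
    {
        int transitions = 0;
        for (int x = 1; x < binary.Cols; x++)
        {
            if (binary.Data[y, x, 0] != binary.Data[y, x - 1, 0])
                transitions++;
        }
        max = Math.Max(max, transitions);
    }
    return max;
}

// Usage: bool handOpen = MaxRowTransitions(dest) > 7;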
Let me know if it works :-)