Detect small rectangles in an image with AForge - C#

I'm trying to detect rectangles on this image:
with this code:
static void Main(string[] args)
{
    // Open your image
    string path = "test.png";
    Bitmap image = (Bitmap)Bitmap.FromFile(path);

    // locating objects
    BlobCounter blobCounter = new BlobCounter();
    blobCounter.FilterBlobs = true;
    blobCounter.MinHeight = 5;
    blobCounter.MinWidth = 5;
    blobCounter.ProcessImage(image);
    Blob[] blobs = blobCounter.GetObjectsInformation();

    // check for rectangles
    SimpleShapeChecker shapeChecker = new SimpleShapeChecker();
    foreach (var blob in blobs)
    {
        List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
        List<IntPoint> cornerPoints;

        // use the shape checker to extract the corner points
        if (shapeChecker.IsQuadrilateral(edgePoints, out cornerPoints))
        {
            // only do things if the corners form a rectangle
            if (shapeChecker.CheckPolygonSubType(cornerPoints) == PolygonSubType.Rectangle)
            {
                // here I use the Graphics class to draw an overlay, but you
                // could also just use the cornerPoints list to calculate your
                // x, y, width, height values.
                List<Point> Points = new List<Point>();
                foreach (var point in cornerPoints)
                {
                    Points.Add(new Point(point.X, point.Y));
                }

                Graphics g = Graphics.FromImage(image);
                g.DrawPolygon(new Pen(Color.Red, 5.0f), Points.ToArray());

                image.Save("result.png");
            }
        }
    }
}
but it doesn't recognize the rectangles (walls). It only recognizes the big square, and when I reduce MinHeight and MinWidth it recognizes trapezoids in the writing.

I propose a different approach. After working for almost a year with image processing algorithms, what I can tell you is that to create an efficient algorithm you have to "reflect" how you, as a human, would solve the task. Here is the proposed approach (a code sketch of the full pipeline follows the steps):
We don't really care about the textures, we care about the edges (rectangles are edges), therefore we apply an Edge Detection > Difference filter (http://www.aforgenet.com/framework/docs/html/d0eb5827-33e6-c8bb-8a62-d6dd3634b0c9.htm), which gives us:
We want to exaggerate the walls. As humans we know that we are looking for the walls, but the computer does not know this, therefore we apply two rounds of Morphology > Dilatation (http://www.aforgenet.com/framework/docs/html/88f713d4-a469-30d2-dc57-5ceb33210723.htm), which gives us:
We only care about what is wall and what is not, so we apply a Binarization > Threshold (http://www.aforgenet.com/framework/docs/html/503a43b9-d98b-a19f-b74e-44767916ad65.htm) and get:
(Optional) We can apply a blob extraction to erase the labels ("QUARTO", "BANHEIRO", etc.).
We apply a Color > Invert; this is done only because the next step detects white, not black.
Apply Blob Processing > Connected Components Labeling (http://www.aforgenet.com/framework/docs/html/240525ea-c114-8b0a-f294-508aae3e95eb.htm); this gives us all the rectangles, like this:
Note that for each colored box you have its coordinates, center, width and height, so you can extract a snippet from the real image at those coordinates.
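As a minimal code sketch of this pipeline with AForge.NET filters (the grayscale conversion, the threshold value of 100 and the use of BlobCounter for the labeling step are my assumptions and will need tuning for your floor plan):
// the filters used below live in AForge.Imaging and AForge.Imaging.Filters
Bitmap source = (Bitmap)Bitmap.FromFile("test.png");

// edge detection works on 8bpp grayscale images
Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(source);
Bitmap edges = new DifferenceEdgeDetector().Apply(gray);

// two rounds of dilatation to exaggerate the walls
Dilatation dilatation = new Dilatation();
Bitmap thick = dilatation.Apply(dilatation.Apply(edges));

// keep only "wall / not wall", then invert so the rooms become white blobs
Bitmap binary = new Threshold(100).Apply(thick);
Bitmap inverted = new Invert().Apply(binary);

// connected components: each blob carries its rectangle, center, width and height
BlobCounter counter = new BlobCounter();
counter.ProcessImage(inverted);
foreach (Blob blob in counter.GetObjectsInformation())
{
    Rectangle room = blob.Rectangle;
    // room.X, room.Y, room.Width, room.Height map back to the original image
}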
PS: Using the program AForge Image Processing Lab is highly recommended to test your algos.

Each time a rectangle is found, the polygon is drawn on Graphics and the file is saved only for THAT rectangle. This means that result.png will only contain a single rectangle at a time.
Try first collecting all the rectangles in a List<List<IntPoint>>, then iterate over it and draw ALL the rectangles onto the image before saving once. Something like this, based on the variables from your code:
Bitmap image = (Bitmap)Bitmap.FromFile("test.png");
var rectangles = new List<List<IntPoint>>();

// first pass: collect every blob whose corners form a rectangle
foreach (var blob in blobs)
{
    List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints(blob);
    List<IntPoint> cornerPoints;
    if (shapeChecker.IsQuadrilateral(edgePoints, out cornerPoints) &&
        shapeChecker.CheckPolygonSubType(cornerPoints) == PolygonSubType.Rectangle)
    {
        rectangles.Add(cornerPoints);
    }
}

// second pass: draw them all, then save once
using (Graphics g = Graphics.FromImage(image))
using (var pen = new Pen(Color.Red, 5.0f))
{
    foreach (var corners in rectangles)
    {
        g.DrawPolygon(pen, corners.Select(p => new Point(p.X, p.Y)).ToArray());
    }
}
image.Save("result.png");

If your problem now is to avoid noise from the writing on the image, use FillHoles with hole width and height smaller than the smallest rectangle but larger than any of the writing.
If the quality of the image is good and no text touches the border of the image, invert the image and FillHoles will remove most of the noise.
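A minimal sketch of that idea with AForge's FillHoles filter, assuming invertedBinary is the inverted, binarized plan from the steps above; the 40x40 limit is a placeholder that should be smaller than the smallest room but larger than any letter:
FillHoles fillHoles = new FillHoles();
fillHoles.MaxHoleWidth = 40;            // holes up to this width get filled
fillHoles.MaxHoleHeight = 40;           // holes up to this height get filled
fillHoles.CoupledSizeFiltering = true;  // fill only holes that are small in both dimensions
Bitmap cleaned = fillHoles.Apply(invertedBinary);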
Hope I understood your problem correctly.

We are trying to detect rectangles among many other rectangles (considering the gray rectangles of the grid). Almost any algorithm will get confused here, because you are not eliminating the extraneous content from the input image. Why not replace the grid line color with the background color, or use the threshold suggested above, to eliminate all the grid lines first?
Then grow all pixels by the width of a wall, find all horizontal and vertical lines, and afterwards use some maths to find rectangles from the detected lines (a rough sketch follows). Uncontrolled filling is risky: when boundaries are not closed, a fill will merge two rooms into one rectangle.
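As a rough sketch of the line-finding part, assuming AForge's HoughLineTransformation on a binarized image of the walls (the 0.5 relative intensity and the 5-degree tolerance are placeholder values):
HoughLineTransformation hough = new HoughLineTransformation();
hough.ProcessImage(binaryWalls);
HoughLine[] lines = hough.GetLinesByRelativeIntensity(0.5);

List<HoughLine> horizontal = new List<HoughLine>();
List<HoughLine> vertical = new List<HoughLine>();
foreach (HoughLine line in lines)
{
    // in AForge, Theta is the angle of the line's normal: values near 90
    // correspond to horizontal lines, values near 0/180 to vertical lines
    if (Math.Abs(line.Theta - 90) <= 5)
        horizontal.Add(line);
    else if (line.Theta <= 5 || line.Theta >= 175)
        vertical.Add(line);
}
// intersections of horizontal and vertical lines are candidate rectangle corners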

How to draw rectangle on original image after rotating image for tesseract?

I have an image with some text rotated 90 degrees, which I'm reading using Tesseract and C#. Since the accuracy of reading rotated text is low in Tesseract, I'm creating an ROI around the text and rotating the ROI to make that part of the image straight, and then reading it with Tesseract.
To summarize: there is a main image; within the main image I'm drawing an ROI around the text >> then I'm rotating the ROI 90 degrees to make it straight >> then I'm reading the text >> then I draw the bounding rect around each character.
But the bounding boxes that I get are drawn as if the ROI were a straight image, not the original 90-degree ROI. I need the bounding boxes to be drawn on the original ROI. How do I do that?
Here is how it looks:
Here is how I want it to look:
This is the code I use to draw a rectangle around each character:
rotatedimg = mainimg.Clone();
for (int i = 0; i < roiincrement; i++)
{
    rotatedimg.ROI = ROIRect[i];
    // rotating the ROI 90 degrees to make it straight for Tesseract to read
    rotatedimg.Rotate(90.0, new MCvScalar(255, 255, 255));

    // ....reading part of Tesseract
    var page = tesseract.Process(rotatedimg, PageSegMode.Auto);

    using (var iter = page.GetIterator()) // this part draws the rect for each character
    {
        iter.Begin();
        Rect symbolBounds;
        do
        {
            if (iter.TryGetBoundingBox(PageIteratorLevel.Symbol, out symbolBounds))
            {
                CvInvoke.cvRectangle(resultimg,
                    new System.Drawing.Point(ROIRect[i].X + symbolBounds.X1, ROIRect[i].Y + symbolBounds.Y1),
                    new System.Drawing.Point(ROIRect[i].X + symbolBounds.X2, ROIRect[i].Y + symbolBounds.Y2),
                    new MCvScalar(0, 255, 0), 1, LINE_TYPE.FOUR_CONNECTED, 0);
            }
        } while (iter.Next(PageIteratorLevel.Symbol));
    }
}
Well, I couldn't find any reasonable answer to this question, so I took a shortcut. I simply take the ROI of the image before it is rotated for Tesseract, find the contours in it, and draw rectangles around those contours. By doing this I get the required bounding boxes. It adds a couple of ms to the processing time, but not much.
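A rough sketch of this workaround, reusing the variable names from the question (mainimg, ROIRect, resultimg) and the same Emgu CV 2.x style calls; the grayscale conversion and the 128 threshold are assumptions and may need tuning:
mainimg.ROI = ROIRect[i];  // take the ROI *before* it is rotated for Tesseract
using (Image<Gray, byte> roi = mainimg.Convert<Gray, byte>().ThresholdBinaryInv(new Gray(128), new Gray(255)))
{
    for (var contour = roi.FindContours(); contour != null; contour = contour.HNext)
    {
        Rectangle box = contour.BoundingRectangle;
        // offset the box back into the coordinates of the full, unrotated image
        CvInvoke.cvRectangle(resultimg,
            new System.Drawing.Point(ROIRect[i].X + box.Left, ROIRect[i].Y + box.Top),
            new System.Drawing.Point(ROIRect[i].X + box.Right, ROIRect[i].Y + box.Bottom),
            new MCvScalar(0, 255, 0), 1, LINE_TYPE.FOUR_CONNECTED, 0);
    }
}
mainimg.ROI = Rectangle.Empty;  // reset the ROI afterwards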
This is the result I get now:

C# LinearGradientBrush Bitmap repeating Vertically

I am playing around with the Microsoft Vision API and learning C# as I go, and one of the properties returned for an image is its "Accent Color".
From a series of analysed images, I want to show those colors ordered in a linear gradient -- that way the user can see that most pictures are (for example) blue, because blue colors take up half of the gradient, etc.
I have this working, in that I am ordering the colors by hue and can produce a linear gradient that I fill into a Bitmap.
BUT the gradient by default is horizontal and I need it vertical -- so I've used LinearGradientBrush.RotateTransform(90), which rotates the actual gradient fine, but it doesn't seem to fill the entire rectangle and it repeats. This is what I'm getting as a result:
How do I create a vertical LinearGradient that fills the entire height of the Rectangle object for my Bitmap?
Here is my code:
private Bitmap CreateColorGradient(System.Drawing.Rectangle rect, System.Drawing.Color[] colors)
{
    Bitmap gradient = new Bitmap(rect.Width, rect.Height);
    LinearGradientBrush br = new LinearGradientBrush(rect, System.Drawing.Color.White, System.Drawing.Color.White, 0, false);

    ColorBlend cb = new ColorBlend();

    // Positions
    List<float> positions = new List<float>();
    for (int i = 0; i < colors.Length; i++)
        positions.Add((float)i / (colors.Length - 1));
    cb.Positions = positions.ToArray();
    cb.Colors = colors;

    br.InterpolationColors = cb;
    br.RotateTransform(90);

    using (Graphics g = Graphics.FromImage(gradient))
        g.FillRectangle(br, rect);

    return gradient;
}
Thanks for reading and for any help -- also, if you see something in my code that could be done better, please point it out; it helps me learn :)
You are ignoring the angle parameter in the constructor. And since you instead do a rotation on the brush, the brush rectangle no longer correctly fits the target bitmap and the gradient can't fill it; so it repeats.
To correct this:
simply set the angle to 90, and
remove the br.RotateTransform(90); call (a corrected version of the method is shown below).
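Here is the question's CreateColorGradient with just those two changes applied; everything else is unchanged:
private Bitmap CreateColorGradient(System.Drawing.Rectangle rect, System.Drawing.Color[] colors)
{
    Bitmap gradient = new Bitmap(rect.Width, rect.Height);
    // pass the 90 degree angle to the constructor instead of rotating the brush
    LinearGradientBrush br = new LinearGradientBrush(rect, System.Drawing.Color.White, System.Drawing.Color.White, 90, false);

    ColorBlend cb = new ColorBlend();
    List<float> positions = new List<float>();
    for (int i = 0; i < colors.Length; i++)
        positions.Add((float)i / (colors.Length - 1));
    cb.Positions = positions.ToArray();
    cb.Colors = colors;
    br.InterpolationColors = cb;

    // no RotateTransform call here
    using (Graphics g = Graphics.FromImage(gradient))
        g.FillRectangle(br, rect);

    return gradient;
}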
Here this changes the result from the left to the middle version:
While we're looking at it, do take note of the WrapMode property of LinearGradientBrush. What you see in the first image is the default, WrapMode.Clamp. Often changing to one of the Flip modes helps, so let's have a look at its impact on the first version, shown at the right position.
It looks like WrapMode.TileFlipY, but since I have brought back the rotation it actually takes a value of WrapMode.TileFlipX or WrapMode.TileFlipXY:
br.WrapMode = WrapMode.TileFlipX;

Facial detection coordinates using a camera

I need a way to grab the coordinates of the face in C# for Windows Phone 8.1 in the camera view. I haven't been able to find anything on the web so I'm thinking it might not be possible. What I need is the x and y (and possibly area) of the "box" that forms around the face when it is detected in the camera view. Has anyone done this before?
Code snippet (bear in mind this is part of an app from the tutorial I linked below the code. It's not copy-pasteable, but should provide some help)
const string MODEL_FILE = "haarcascade_frontalface_alt.xml";
FaceDetectionWinPhone.Detector m_detector;

public MainPage()
{
    InitializeComponent();
    m_detector = new FaceDetectionWinPhone.Detector(System.Xml.Linq.XDocument.Load(MODEL_FILE));
}

void photoChooserTask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        BitmapImage bmp = new BitmapImage();
        bmp.SetSource(e.ChosenPhoto);
        WriteableBitmap btmMap = new WriteableBitmap(bmp);

        // find faces in the image
        List<FaceDetectionWinPhone.Rectangle> faces =
            m_detector.getFaces(btmMap, 10f, 1f, 0.05f, 1, false, false);

        // go through each face and draw a red rectangle on top of it
        foreach (var r in faces)
        {
            int x = Convert.ToInt32(r.X);
            int y = Convert.ToInt32(r.Y);
            int width = Convert.ToInt32(r.Width);
            int height = Convert.ToInt32(r.Height);
            btmMap.FillRectangle(x, y, x + width, y + height, System.Windows.Media.Colors.Red);
        }

        // update the bitmap before drawing it
        btmMap.Invalidate();
        facesPic.Source = btmMap;
    }
}
This is taken from developer.nokia.com
To do this in real-time, you need to intercept the viewfinder image, perhaps using the NewCameraFrame method (EDIT: not sure if you should use this method or PhotoCamera.GetPreviewBufferArgb32 as described below. I have to leave it up to your research)
So basically your task has 2 parts:
Get the viewfinder image
Detect faces on it (using something like the code above)
If I were you, I'd first do step 2 on an image loaded from disk, and once you can detect faces on that, I'd look at how to obtain the current viewfinder image and detect faces on it. The X, Y coordinates are easy enough to obtain once you've detected the face - see the code above.
(EDIT): I think you should try using the PhotoCamera.GetPreviewBufferArgb32 method to obtain the viewfinder image (see the MSDN documentation). Also, be sure to search through the MSDN docs and tutorials. This should be more than enough to complete step 1.
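A rough sketch of that step, assuming camera is an already initialized Microsoft.Devices.PhotoCamera instance and that you call this for example on a timer tick; m_detector and getFaces come from the snippet above:
int width = (int)camera.PreviewResolution.Width;
int height = (int)camera.PreviewResolution.Height;

// grab the current viewfinder frame as ARGB pixels
int[] previewPixels = new int[width * height];
camera.GetPreviewBufferArgb32(previewPixels);

// copy the pixels into a WriteableBitmap so the face detector can run on it
WriteableBitmap frame = new WriteableBitmap(width, height);
previewPixels.CopyTo(frame.Pixels, 0);

List<FaceDetectionWinPhone.Rectangle> faces =
    m_detector.getFaces(frame, 10f, 1f, 0.05f, 1, false, false);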
A lot of face detection algorithms use Haar classifiers, Viola-Jones algorithm etc. If you're familiar with that, you'll feel more confident in what you're doing, but you can do without. Also, read the materials that I linked - they seem fairly good.

DrawLine method provides higher quality than DrawLines method

In my application I need to plot an equation. The plotted equation is composed of many small linear segments. When I plot it using the DrawLine method inside a for loop, I get higher quality than when using the DrawLines method.
Graphics canvas = pnlCanvas.CreateGraphics();
canvas.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;

// High quality
for (int i = 0; i < plot_points.Length - 1; i++)
{
    canvas.DrawLine(penKat, plot_points[i], plot_points[i + 1]);
}

// Low quality
canvas.DrawLines(penKat, plot_points);
I need to plot it using the DrawLines method because of some issues. Is there a way to get high quality using that method?
Try:
penKat.EndCap = System.Drawing.Drawing2D.LineCap.Round;
penKat.StartCap = System.Drawing.Drawing2D.LineCap.Round;
penKat.LineJoin = LineJoin.Round;
MiterLimit might help, if your lines are thicker than a few pixels..
Edit:
For crisp joins you may want to experiment with other LineJoin values:
penKat.LineJoin = LineJoin.MiterClipped;
penKat.MiterLimit = 1.5f;
Or
penKat.LineJoin = LineJoin.Miter;
penKat.MiterLimit = 1.5f;
Do try out other MiterLimit values until you're happy!
Or post an example image with the two versions..
For stroke widths of 2-4 pixels the difference between the LineJoins will not be very visible. This changes dramatically with growing stroke widths; so remember this property for those thicker lines!

Drawing a clock timer with a fill

I'm trying to make a timer that mimics a clock, with a ticking hand. I have no problem drawing a clock texture and then a line for the hand, but I also want the space behind the clock hand to have a fill. So as time goes on, I want the clock to "fill up" starting at the origin (0:00) all the way up to the clock hand.
I basically want to do this:
What's the best way for me to do this? I have the foundation, just don't know how to add the fill part.
You could approximate it by building a triangle fan.
int n = 0;
VertexPositionColor[] V = new VertexPositionColor[num_triangles + 2];
V[0].Position = Center;   // fan center
V[0].Color = CircleColor;
for (var angle = start; angle <= end; angle += (end - start) / num_triangles)
{
    V[++n].Position = new Vector3((float)Math.Cos(angle), (float)Math.Sin(angle), 0) * radius + Center;
    V[n].Color = CircleColor;
}

short[] Index = new short[num_triangles * 3];
for (int i = 0; i < num_triangles; i++)
{
    Index[i * 3] = 0;                  // every triangle starts at the center
    Index[i * 3 + 1] = (short)(i + 1);
    Index[i * 3 + 2] = (short)(i + 2);
}

GraphicsDevice.DrawUserIndexedPrimitives(...);
If you want to do it with a SpriteBatch instead, you have to use a small sector texture and draw it multiple times, rotating it about the center.
Here is an example; it needs to be tuned to be precise.
float SectorAngle = MathHelper.ToRadians(10);
Texture2D SectorTex;
Vector2 Origin = new Vector2(SectorTex.Width / 2, SectorTex.Height);

for (var angle = start; angle <= end; angle += SectorAngle)
{
    spriteBatch.Draw(SectorTex, Center, null, Color.White, angle, Origin, scale, ...);
}
If you want to do it using textures, you should be able to manage it with two simple textures: a semi-circle (exactly half a circle), and a full circle.
First, draw the full circle white. Then it's just a matter of calculating how much of the circle needs to be filled.
If it's less than half, draw the half circle blue, rotated to match the "minute hand". Then draw another half circle white to cover the left side.
If it's more than half, draw the half circle blue, covering the entire right side. Then draw another half circle blue, rotated to match the "minute hand".
Once the fill is complete, you just need to draw the other clock components; the hands and border.
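For reference, here is a rough, untested sketch of that two-texture idea in XNA/MonoGame SpriteBatch code. halfCircleTex and fullCircleTex are assumed white textures (the half circle covering the right half at zero rotation), fillAngle is the current hand angle in radians measured clockwise from 12 o'clock, and the rotation offsets may need adjusting to your textures:
Vector2 center = new Vector2(200, 200);
Vector2 fullOrigin = new Vector2(fullCircleTex.Width / 2f, fullCircleTex.Height / 2f);
Vector2 halfOrigin = new Vector2(halfCircleTex.Width / 2f, halfCircleTex.Height / 2f);

spriteBatch.Begin();

// 1. the full circle, drawn white
spriteBatch.Draw(fullCircleTex, center, null, Color.White, 0f, fullOrigin, 1f, SpriteEffects.None, 0f);

if (fillAngle <= MathHelper.Pi)
{
    // less than half: blue half circle rotated to the hand,
    // then a white half circle covering the left side again
    spriteBatch.Draw(halfCircleTex, center, null, Color.Blue, fillAngle, halfOrigin, 1f, SpriteEffects.None, 0f);
    spriteBatch.Draw(halfCircleTex, center, null, Color.White, MathHelper.Pi, halfOrigin, 1f, SpriteEffects.None, 0f);
}
else
{
    // more than half: blue half circle covering the whole right side,
    // plus a second blue one rotated to the hand
    spriteBatch.Draw(halfCircleTex, center, null, Color.Blue, 0f, halfOrigin, 1f, SpriteEffects.None, 0f);
    spriteBatch.Draw(halfCircleTex, center, null, Color.Blue, fillAngle, halfOrigin, 1f, SpriteEffects.None, 0f);
}

spriteBatch.End();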
