I use:
Emgu CV 4.0.1
OpenCV 4.1.0
I have a series of circles detected with the HoughCircles function that I need to analyze one by one.
I need to measure how much color is inside the green circle, so I have to extract only the circular region. However, I only know how to extract the bounding box, which contains many more pixels than the circle. How can I extract only the image inside the green circle?
See images in link below.
1) Source Image
2) Boxed image I retrieved, with more pixels than I want
3) Image that I would like to extract
// m_ListCircles = list of circle coordinates returned by HoughCircles().
// cell = cell number to extract.
// pictureBox1 = main picturebox with all circles detected.
// pictureBoxROI = destination picturebox for the single cropped circle.
int id = (int)m_ListCircles[0 + cell ];
int x = (int)m_ListCircles[1 + cell ];
int y = (int)m_ListCircles[2 + cell ];
int r = (int)m_ListCircles[3 + cell ]; // radius
// box area around the circle
int X0 = x;
int Y0 = y;
int X1 = x + r * 2;
int Y1 = y + r * 2;
// area to copy
int wid = Math.Abs(X0 - X1);
int hgt = Math.Abs(Y0 - Y1);
if ((wid < 1) || (hgt < 1)) return;
// create a rectangle area to copy
Rectangle source_rectangle = new Rectangle(Math.Min(X0, X1),Math.Min(Y0,Y1), wid, hgt);
// assign the area copied to image var
var image = new Image<Bgr, byte>(new Bitmap(pictureBox1.Image));
image.ROI = source_rectangle;
// show image
pictureBoxROI.Image = image.Bitmap;
pictureBoxROI.Refresh();
/*
// tried this but result is always a black image.
Point xyCell = new Point();
xyCell.X = X0;
xyCell.Y = Y0;
Image<Gray, byte> mask = new Image<Gray, byte>(image.Width, image.Height);
CvInvoke.Circle(mask, xyCell, r, new MCvScalar(255, 255, 255), -1,
LineType.AntiAlias, 0);
Image<Bgr, byte> dest = new Image<Bgr, byte>(image.Width, image.Height);
dest = image.And(image, mask);
pictureBoxROI.Image = dest.Bitmap;
pictureBoxROI.Refresh();
*/
You can always create ROI masks from the found circles and analyze the images like that.
Custom ROI - this shows how to use the mask
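For illustration, here is a minimal sketch of that mask idea in Emgu CV 4.x / C#. It reuses the question's pictureBox1 and assumes x, y is the circle centre and r the radius returned by HoughCircles (adjust if your list stores the top-left corner instead):
var sourceImage = new Image<Bgr, byte>(new Bitmap(pictureBox1.Image));
// crop the bounding square of the circle first
sourceImage.ROI = new Rectangle(x - r, y - r, 2 * r, 2 * r);
Image<Bgr, byte> box = sourceImage.Copy();        // box is now 2r x 2r
sourceImage.ROI = Rectangle.Empty;                // reset the ROI
// build a filled circular mask the same size as the box
Image<Gray, byte> mask = new Image<Gray, byte>(box.Width, box.Height);
CvInvoke.Circle(mask, new Point(box.Width / 2, box.Height / 2), r,
                new MCvScalar(255), -1, LineType.EightConnected, 0);
// keep only the pixels inside the circle...
Image<Bgr, byte> circleOnly = box.Copy(mask);
// ...or measure the colour directly through the mask without copying
MCvScalar meanColor = CvInvoke.Mean(box, mask);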
You can only have rectangular images. However, after cutting the rectangle you can set all the pixels outside of the circle to transparent.
You can determine which pixels are outside of the circle by calculating their distance from the center point of your image using Pythagoras' theorem.
This is very slow of course, as you must loop over all pixels, but for low pixel counts it's reasonably fast.
try
{
Image rectCroppedImage = originalImage.Clone(CropRect, originalImage.PixelFormat);
double r = rectCroppedImage.Height / 2.0; // radius: the crop box is 2r wide and centered on your circle
Bitmap img = new Bitmap(rectCroppedImage);
for (int x = 0; x < img.Width; x++)
{
for (int y = 0; y < img.Height; y++)
{
// offset to center
int virtX = x - img.Width / 2;
int virtY = y - img.Height / 2;
if (Math.Sqrt(virtX * virtX + virtY * virtY) > r)
{
img.SetPixel(x, y, Color.Transparent);
}
}
}
return img; // your circle cropped image
}
catch (Exception ex)
{
}
This could also be achieved by using a mask and "multiplying" your image with a white circle. Such a thing can be achieved, for example, with ImageMagick. You can find an ImageMagick NuGet package here: https://github.com/dlemstra/Magick.NET
Related
I want to detect a display on an image (more precisely its corners).
I segment the image in display color and not display color:
Image<Gray, byte> segmentedImage = greyImage.InRange(new Gray(180), new Gray(255));
Then I use corner Harris to find the corners:
Emgu.CV.Image<Emgu.CV.Structure.Gray, Byte> harrisImage = new Image<Emgu.CV.Structure.Gray, Byte>(greyImage.Size);
CvInvoke.CornerHarris(segmentedImage, harrisImage, 2);
CvInvoke.Normalize(harrisImage, harrisImage, 0, 255, NormType.MinMax, DepthType.Cv32F);
There are now white pixels in the corners, but I cannot access them:
for (int j = 0; j < harrisImage.Rows; j++)
{
for (int i = 0; i < harrisImage.Cols; i++)
{
Console.WriteLine(harrisImage[j, i].Intensity);
}
}
It writes only 0s. How can I access them? And if I can access them, how can I find the 4 corners of the screen in the harris image? Is there a function to find a perspectively transformed rectangle from points?
EDIT:
On the OpenCV IRC they said FindContours is not that precise. And when I try to run it on the segmentedImage, I get this:
(ran FindContours on the segmentedImage, then ApproxPolyDP and drew the found contour on the original greyscale image)
I cannot get it to find the contours more precisely...
EDIT2:
I cannot get this to work for me. Even with your code, I get the exact same result...
Here is my full Emgu code:
Emgu.CV.Image<Emgu.CV.Structure.Gray, Byte> imageFrameGrey = new Image<Emgu.CV.Structure.Gray, Byte>(bitmap);
Image<Gray, byte> segmentedImage = imageFrameGrey.InRange(new Gray(180), new Gray(255));
// get rid of small objects
int morph_size = 2;
Mat element = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Rectangle, new System.Drawing.Size(2 * morph_size + 1, 2 * morph_size + 1), new System.Drawing.Point(morph_size, morph_size));
CvInvoke.MorphologyEx(segmentedImage, segmentedImage, Emgu.CV.CvEnum.MorphOp.Open, element, new System.Drawing.Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());
// Find edges that form rectangles
List<RotatedRect> boxList = new List<RotatedRect>();
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
CvInvoke.FindContours(segmentedImage, contours, null, Emgu.CV.CvEnum.RetrType.External, ChainApproxMethod.ChainApproxSimple);
int count = contours.Size;
for (int i = 0; i < count; i++)
{
using (VectorOfPoint contour = contours[i])
using (VectorOfPoint approxContour = new VectorOfPoint())
{
CvInvoke.ApproxPolyDP(contour, approxContour, CvInvoke.ArcLength(contour, true) * 0.01, true);
if (CvInvoke.ContourArea(approxContour, false) > 10000)
{
if (approxContour.Size == 4)
{
bool isRectangle = true;
System.Drawing.Point[] pts = approxContour.ToArray();
LineSegment2D[] edges = Emgu.CV.PointCollection.PolyLine(pts, true);
for (int j = 0; j < edges.Length; j++)
{
double angle = Math.Abs(edges[(j + 1) % edges.Length].GetExteriorAngleDegree(edges[j]));
if (angle < 80 || angle > 100)
{
isRectangle = false;
break;
}
}
if (isRectangle)
boxList.Add(CvInvoke.MinAreaRect(approxContour));
}
}
}
}
}
So as promised, I tried it myself. It's in C++, but you should be able to adapt it easily to Emgu.
First I get rid of small objects in your segmented image with an opening:
int morph_elem = CV_SHAPE_RECT;
int morph_size = 2;
Mat element = getStructuringElement(morph_elem, Size(2 * morph_size + 1, 2 * morph_size + 1), Point(morph_size, morph_size));
// Apply the opening
morphologyEx(segmentedImage, segmentedImage_open, CV_MOP_OPEN, element);
Then detect all the contours and take the large ones and check for rectangular shape:
vector< vector<Point>> contours;
findContours(segmentedImage_open, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (const vector<Point>& var : contours)
{
double area = contourArea(var);
if (area> 30000)
{
vector<Point> approx;
approxPolyDP(var, approx, 0.01*arcLength(var, true), true);
if (4 == approx.size()) //rectangular shape
{
// do something
}
}
}
Here is the result with the contour in red and the approximated curve in green:
Edit:
You can improve your code by increasing the approximation factor until you get a contour with 4 points or you pass a threshold. Just wrap a for loop around approxPolyDP. You can define a range for your approximation value and prevent your code from failing if your object differs too much from a rectangle.
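A rough C#/Emgu sketch of that loop (the epsilon range is hypothetical; contour is one entry of the VectorOfVectorOfPoint returned by FindContours, as in your code above):
using (VectorOfPoint approxContour = new VectorOfPoint())
{
    double perimeter = CvInvoke.ArcLength(contour, true);
    // increase the approximation factor until the curve collapses to 4 points
    for (double eps = 0.01; eps <= 0.05; eps += 0.005)
    {
        CvInvoke.ApproxPolyDP(contour, approxContour, eps * perimeter, true);
        if (approxContour.Size == 4)
            break;
    }
    if (approxContour.Size == 4)
    {
        // rectangle-like shape found, e.g. boxList.Add(CvInvoke.MinAreaRect(approxContour));
    }
    // otherwise the object differs too much from a rectangle within the chosen range
}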
This might be a tricky question.
I will just cut to the chase.
First I make a blank Bitmap image:
Bitmap MasterImage = new Bitmap(PageSize.Width, PageSize.Height, PixelFormat.Format32bppArgb);
System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(MasterImage);
Afterwards:
I get a parameter for how many images I want to place in the blank Bitmap (page), based on columns and rows:
int numOfColumns=2; //(could be any value)
int numOfRows=4;
So we should get something like this:
I also get parameters for a possible margin at the top and left of the page:
int PagetopMargin=0; //(could be any value)
int PageLeftMargin=0;
Then I have a variable called imagesPath, of type List<String>, which contains the full paths of the images.
Now I loop through the images:
while (imagesPath.Count > 0)
{
Image image = Image.FromFile(imagesPath[0]);
//Logic comes here.
//after finishing processing and drawing the image into the Bitmap, I remove it from the list
imagesPath.RemoveAt(0);
}
What I am trying to achieve is to place as many images as possible on that Bitmap page, based on the column/row and margin variables, while preserving the aspect ratio of the images (so the images won't get distorted). If some images are left over, that's okay; I will continue with the next blank Bitmap to place them. I just need to fill each blank Bitmap page completely.
Here is a function to fit a rectangle into another, either centered or uncentered (aligned top-left):
Rectangle FitToBox(Rectangle scr, Rectangle dest, bool centered)
{
var ratioX = (double)dest.Width / scr.Width;
var ratioY = (double)dest.Height / scr.Height;
var ratio = Math.Min(ratioX, ratioY);
var newWidth = (int)(scr.Width * ratio);
var newHeight = (int)(scr.Height * ratio);
if (!centered)
return new Rectangle(0, 0, newWidth, newHeight);
else
return new Rectangle((dest.Width - newWidth) / 2,
(dest.Height - newHeight) / 2, newWidth, newHeight);
}
Its basic math is taken from this post.
Here is a test bed, centered and uncentered using different random values:
Random rnd = new Random();
int cols = rnd.Next(3) + 2;
int rows = rnd.Next(4) + 3;
int w = pan_dest.Width / cols;
int h = pan_dest.Height / rows;
using (Graphics G = pan_dest.CreateGraphics())
{
G.Clear(Color.White);
for (int c = 0; c < cols; c++)
for (int r = 0; r < rows; r++)
{
Rectangle rDest = new Rectangle(c * w, r * h, w, h);
Rectangle rSrc = new Rectangle(0, 0, rnd.Next(200) + 10, rnd.Next(200) + 10);
Rectangle rResult = FitToBox(rSrc, rDest, checkBox1.Checked);
Rectangle rDestPlaced = new Rectangle(c * w + (int)rResult.X,
r * h + rResult.Y, rResult.Width, rResult.Height);
using (Pen pen2 = new Pen(Color.SlateGray, 4f))
G.DrawRectangle(pen2, Rectangle.Round(rDest));
G.DrawRectangle(Pens.Red, rDestPlaced);
G.FillEllipse(Brushes.LightPink, rDestPlaced);
}
}
You would then draw your images like this:
G.DrawImage(someImage, rDestPlaced, rSrc, GraphicsUnit.Pixel);
This maximizes the image sizes, not the number of images you can fit on a page, as specified in your comment. For the latter you should look into something like 2-dimensional packing.
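To tie this back to the question's loop, here is a hedged sketch (not the answer's exact code) that fills the grid cell by cell with FitToBox. It reuses the question's MasterImage, g, numOfColumns, numOfRows, PagetopMargin, PageLeftMargin and imagesPath, and treats the margins as a simple offset of the whole grid:
int cellW = MasterImage.Width / numOfColumns;
int cellH = MasterImage.Height / numOfRows;
for (int row = 0; row < numOfRows && imagesPath.Count > 0; row++)
{
    for (int col = 0; col < numOfColumns && imagesPath.Count > 0; col++)
    {
        using (Image image = Image.FromFile(imagesPath[0]))
        {
            Rectangle cell = new Rectangle(PageLeftMargin + col * cellW,
                                           PagetopMargin + row * cellH, cellW, cellH);
            Rectangle src = new Rectangle(0, 0, image.Width, image.Height);
            Rectangle fit = FitToBox(src, cell, true);   // centered inside the cell
            fit.Offset(cell.X, cell.Y);                  // move from cell-relative to page coordinates
            g.DrawImage(image, fit, src, GraphicsUnit.Pixel);
        }
        imagesPath.RemoveAt(0);                          // leftover paths go onto the next page
    }
}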
This is what I did in the Form1 constructor:
Bitmap bmp2 = new Bitmap(@"e:\result1001.jpg");
CropImageWhiteAreas.ImageTrim(bmp2);
bmp2.Save(@"e:\result1002.jpg");
bmp2.Dispose();
The class CropImageWhiteAreas:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
namespace Test
{
class CropImageWhiteAreas
{
public static Bitmap ImageTrim(Bitmap img)
{
//get image data
BitmapData bd = img.LockBits(new Rectangle(Point.Empty, img.Size),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
int[] rgbValues = new int[img.Height * img.Width];
Marshal.Copy(bd.Scan0, rgbValues, 0, rgbValues.Length);
img.UnlockBits(bd);
#region determine bounds
int left = bd.Width;
int top = bd.Height;
int right = 0;
int bottom = 0;
//determine top
for (int i = 0; i < rgbValues.Length; i++)
{
int color = rgbValues[i] & 0xffffff;
if (color != 0xffffff)
{
int r = i / bd.Width;
int c = i % bd.Width;
if (left > c)
{
left = c;
}
if (right < c)
{
right = c;
}
bottom = r;
top = r;
break;
}
}
//determine bottom
for (int i = rgbValues.Length - 1; i >= 0; i--)
{
int color = rgbValues[i] & 0xffffff;
if (color != 0xffffff)
{
int r = i / bd.Width;
int c = i % bd.Width;
if (left > c)
{
left = c;
}
if (right < c)
{
right = c;
}
bottom = r;
break;
}
}
if (bottom > top)
{
for (int r = top + 1; r < bottom; r++)
{
//determine left
for (int c = 0; c < left; c++)
{
int color = rgbValues[r * bd.Width + c] & 0xffffff;
if (color != 0xffffff)
{
if (left > c)
{
left = c;
break;
}
}
}
//determine right
for (int c = bd.Width - 1; c > right; c--)
{
int color = rgbValues[r * bd.Width + c] & 0xffffff;
if (color != 0xffffff)
{
if (right < c)
{
right = c;
break;
}
}
}
}
}
int width = right - left + 1;
int height = bottom - top + 1;
#endregion
//copy image data
int[] imgData = new int[width * height];
for (int r = top; r <= bottom; r++)
{
Array.Copy(rgbValues, r * bd.Width + left, imgData, (r - top) * width, width);
}
//create new image
Bitmap newImage = new Bitmap(width, height, PixelFormat.Format32bppArgb);
BitmapData nbd
= newImage.LockBits(new Rectangle(0, 0, width, height),
ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(imgData, 0, nbd.Scan0, imgData.Length);
newImage.UnlockBits(nbd);
return newImage;
}
}
}
Before that I also tried Peter's solution.
In both cases the result (this is a screenshot of my Facebook after uploading the image) still has the white areas around it:
You can see the rectangle around the image I just uploaded and see what I mean by the white area around it.
If I understand correctly, you have found a sample code snippet that uses LockBits(), but you are not sure how it works or how to modify it to suit your specific need. So I will try to answer from that perspective.
First, a wild guess (since you didn't include the implementation of the LockBitmap class you're using in the first example): the LockBitmap class is some kind of helper class that is supposed to encapsulate the work of calling LockBits() and using the result, including providing versions of GetPixel() and SetPixel() which are presumably much faster than calling those methods on a Bitmap object directly (i.e. access the buffer obtained by calling LockBits()).
If that's the case, then modifying the first example to suit your need is probably best:
public void Change(Bitmap bmp)
{
Bitmap newBitmap = new Bitmap(bmp.Width, bmp.Height, bmp.PixelFormat);
LockBitmap source = new LockBitmap(bmp),
target = new LockBitmap(newBitmap);
source.LockBits();
target.LockBits();
Color white = Color.FromArgb(255, 255, 255, 255);
for (int y = 0; y < source.Height; y++)
{
for (int x = 0; x < source.Width; x++)
{
Color old = source.GetPixel(x, y);
if (old != white)
{
target.SetPixel(x, y, old);
}
}
}
source.UnlockBits();
target.UnlockBits();
newBitmap.Save("d:\\result.png");
}
In short: copy the current pixel value to a local variable, compare that value to the white color value, and if it is not the same, go ahead and copy the pixel value to the new bitmap.
Some variation on the second code example should work as well. The second code example does explicitly what is (I've assumed) encapsulated inside the LockBitmap class that the first code example uses. If for some reason, the first approach isn't suitable for your needs, you can follow the second example.
In that code example you provide, most of the method there is just handling the "grunt work" of locking the bitmap so that the raw data can be accessed, and then iterating through that raw data.
It computes the oRow and nRow array offsets (named for "old row" and "new row", I presume) based on the outer y loop, and then accesses individual pixel data by computing the offset within a given row based on the inner x loop.
Since you want to do essentially the same thing, but instead of converting the image to grayscale, you just want to selectively copy all non-white pixels to the new bitmap, you can (should be able to) simply modify the body of the inner x loop. For example:
byte red = oRow[x * pixelSize + 2],
green = oRow[x * pixelSize + 1],
blue = oRow[x * pixelSize];
if (red != 255 || green != 255 || blue != 255)
{
nRow[x * pixelSize + 2] = red;
nRow[x * pixelSize + 1] = green;
nRow[x * pixelSize] = blue;
}
The above would entirely replace the body of the inner x loop.
One caveat: do note that when using the LockBits() approach, knowing the pixel format of the bitmap is crucial. The example you've shown assumes the bitmaps are in 24 bpp format. If your own bitmaps are in this format, then you don't need to change anything. But if they are in a different format, you'll need to adjust the code to suit that. For example, if your bitmap is in 32 bpp format, you need to pass the correct PixelFormat value to the LockBits() method calls, and then set pixelSize to 4 instead of 3 as the code does now.
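For example, a small sketch (not the poster's code) of deriving the bytes-per-pixel count from the bitmap itself instead of hard-coding 3 or 4, using the bmp parameter from the examples above:
int pixelSize = Image.GetPixelFormatSize(bmp.PixelFormat) / 8;   // 24bpp => 3, 32bpp => 4
BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                               ImageLockMode.ReadOnly, bmp.PixelFormat);
// the byte offset of pixel (x, y) is then y * data.Stride + x * pixelSize
bmp.UnlockBits(data);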
Edit:
You've indicated that you would like to crop the new image so that it is the minimize size required to contain all of the non-white pixels. Here is a version of the first example above that should accomplish that:
public void Change(Bitmap bmp)
{
LockBitmap source = new LockBitmap(bmp);
source.LockBits();
Color white = Color.FromArgb(255, 255, 255, 255);
int minX = int.MaxValue, maxX = int.MinValue,
minY = int.MaxValue, maxY = int.MinValue;
// Brute-force scan of the bitmap to find image boundary
for (int y = 0; y < source.Height; y++)
{
for (int x = 0; x < source.Width; x++)
{
if (source.GetPixel(x, y) != white)
{
if (x < minX) minX = x;
if (x > maxX) maxX = x;
if (y < minY) minY = y;
if (y > maxY) maxY = y;
}
}
}
Bitmap newBitmap = new Bitmap(maxX - minX + 1, maxY - minY + 1, bmp.PixelFormat);
LockBitmap target = new LockBitmap(newBitmap);
target.LockBits();
for (int y = 0; y < target.Height; y++)
{
for (int x = 0; x < target.Width; x++)
{
target.SetPixel(x, y, source.GetPixel(x + minX, y + minY));
}
}
source.UnlockBits();
target.UnlockBits();
newBitmap.Save("d:\\result.png");
}
This example includes an initial scan of the original bitmap, after locking it, to find the minimum and maximum coordinate values for any non-white pixel. Having done that, it uses the results of that scan to determine the dimensions of the new bitmap. When copying the pixels, it restricts the x and y loops to the dimensions of the new bitmap, adjusting the x and y values to map from the location in the new bitmap to the given pixel's original location in the old one.
Note that since the initial scan determines where the non-white pixels are, there's no need to check again when actually copying the pixels.
There are more efficient ways to scan the bitmap than the above. This version simply looks at every single pixel in the original bitmap, keeping track of the min and max values for each coordinate. I'm guessing this will be fast enough for your purposes, but if you want something faster, you can change the scan so that it scans for each min and max in sequence:
Scan each row from y of 0 to determine the first row with a non-white pixel. This is the min y value.
Scan each row from y of source.Height - 1 backwards, to find the max y value.
Having found the min and max y values, now scan the columns from x of 0 to find the min x and from source.Width - 1 backwards to find the max x.
Doing it that way involves a lot more code and is probably harder to read and understand, but would involve inspecting many fewer pixels in most cases.
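For illustration, here is a sketch of that directional scan, reusing the locked source and the white value from the example above (degenerate all-white images are not handled here):
int minY = 0, maxY = source.Height - 1, minX = 0, maxX = source.Width - 1;
bool RowHasContent(int y)
{
    for (int x = 0; x < source.Width; x++)
        if (source.GetPixel(x, y) != white) return true;
    return false;
}
bool ColumnHasContent(int x)
{
    for (int y = minY; y <= maxY; y++)               // only the rows we already know matter
        if (source.GetPixel(x, y) != white) return true;
    return false;
}
while (minY < source.Height && !RowHasContent(minY)) minY++;     // first non-white row
while (maxY > minY && !RowHasContent(maxY)) maxY--;              // last non-white row
while (minX < source.Width && !ColumnHasContent(minX)) minX++;   // first non-white column
while (maxX > minX && !ColumnHasContent(maxX)) maxX--;           // last non-white column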
Edit #2:
Here is a sample of the output of the second code example:
Note that all of the white border of the original bitmap (shown on the left side) has been cropped out, leaving only the smallest subset of the original bitmap that can contain all of the non-white pixels (shown on the right side).
I'm implementing the Harris corner detector using Emgu CV,
and I've converted the code to C#
using this link:
http://docs.opencv.org/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.html
I want to access the coordinates of those corners, or any other information about these interest points.
How do I do this?
Thanks
You can make a threshold image of the Harris corner image and then iterate over it. This way, where the intensity of the point is 255 you have a corner point with X and Y values on the image. Example:
// create corner strength image and do Harris
m_CornerImage = new Image<Gray, float>(m_SourceImage.Size);
CvInvoke.cvCornerHarris(m_SourceImage, m_CornerImage, 3, 3, 0.01);
// create and show inverted threshold image
m_ThresholdImage = new Image<Gray, Byte>(m_SourceImage.Size);
CvInvoke.cvThreshold(m_CornerImage, m_ThresholdImage, 0.0001, 255.0, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY_INV);
imageBox2.Image = m_ThresholdImage;
imageBox1.Image = m_CornerImage;
const double MAX_INTENSITY = 255;
int contCorners = 0;
for (int x = 0; x < m_ThresholdImage.Width; x++)
{
for (int y = 0; y < m_ThresholdImage.Height; y++)
{
Gray imagenP = m_ThresholdImage[y,x];
if (imagenP.Intensity == MAX_INTENSITY)
{
//X and Y are Harris point coordinates
}
}
}
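For example, a small sketch that collects the coordinates into a list instead of the placeholder comment. Note that with the inverted threshold (CV_THRESH_BINARY_INV) above, strong corners may end up as 0 rather than 255, so compare against whichever value actually marks corners in your threshold image:
List<System.Drawing.Point> cornerPoints = new List<System.Drawing.Point>();
for (int x = 0; x < m_ThresholdImage.Width; x++)
{
    for (int y = 0; y < m_ThresholdImage.Height; y++)
    {
        // keep the pixel if it is marked as a corner in the threshold image
        if (m_ThresholdImage[y, x].Intensity == MAX_INTENSITY)
            cornerPoints.Add(new System.Drawing.Point(x, y));
    }
}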
I am writing a small 2d game engine in C# for my own purposes, and it works fine except for the sprite collision detection. I've decided to make it a per-pixel detection (easiest for me to implement), but it is not working the way it's supposed to. The code detects a collision long before it happens. I've examined every component of the detection, but I can't find the problem.
The collision detection method:
public static bool CheckForCollision(Sprite s1, Sprite s2, bool perpixel) {
if(!perpixel) {
return s1.CollisionBox.IntersectsWith(s2.CollisionBox);
}
else {
Rectangle rect;
Image img1 = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(s1.Image, s1.Position, s1.Origin, s1.Rotation, out rect), s1.Scale);
int posx1 = rect.X;
int posy1 = rect.Y;
Image img2 = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(s2.Image, s2.Position, s2.Origin, s2.Rotation, out rect), s2.Scale);
int posx2 = rect.X;
int posy2 = rect.Y;
Rectangle abounds = new Rectangle(posx1, posy1, (int)img1.Width, (int)img1.Height);
Rectangle bbounds = new Rectangle(posx2, posy2, (int)img2.Width, (int)img2.Height);
if(Utilities.RectangleIntersects(abounds, bbounds)) {
uint[] bitsA = s1.GetPixelData(false);
uint[] bitsB = s2.GetPixelData(false);
int x1 = Math.Max(abounds.X, bbounds.X);
int x2 = Math.Min(abounds.X + abounds.Width, bbounds.X + bbounds.Width);
int y1 = Math.Max(abounds.Y, bbounds.Y);
int y2 = Math.Min(abounds.Y + abounds.Height, bbounds.Y + bbounds.Height);
for(int y = y1; y < y2; ++y) {
for(int x = x1; x < x2; ++x) {
if(((bitsA[(x - abounds.X) + (y - abounds.Y) * abounds.Width] & 0xFF000000) >> 24) > 20 &&
((bitsB[(x - bbounds.X) + (y - bbounds.Y) * bbounds.Width] & 0xFF000000) >> 24) > 20)
return true;
}
}
}
return false;
}
}
The image rotation method:
internal static Image RotateImagePoint(Image img, Vector pos, Vector orig, double rotation, out Rectangle sz) {
if(!(new Rectangle(new Point(0), img.Size).Contains(new Point((int)orig.X, (int)orig.Y))))
Console.WriteLine("Origin point is not present in image bound; unwanted cropping might occur");
rotation = (double)ra_de((double)rotation);
sz = GetRotateDimensions((int)pos.X, (int)pos.Y, img.Width, img.Height, rotation, false);
Bitmap bmp = new Bitmap(sz.Width, sz.Height);
Graphics g = Graphics.FromImage(bmp);
g.SmoothingMode = SmoothingMode.AntiAlias;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = PixelOffsetMode.HighQuality;
g.RotateTransform((float)rotation);
g.TranslateTransform(sz.Width / 2, sz.Height / 2, MatrixOrder.Append);
g.DrawImage(img, (float)-orig.X, (float)-orig.Y);
g.Dispose();
return bmp;
}
internal static Rectangle GetRotateDimensions(int imgx, int imgy, int imgwidth, int imgheight, double rotation, bool Crop) {
Rectangle sz = new Rectangle();
if (Crop == true) {
// absolute trig values goes for all angles
double dera = de_ra(rotation);
double sin = Math.Abs(Math.Sin(dera));
double cos = Math.Abs(Math.Cos(dera));
// general trig rules:
// length(adjacent) = cos(theta) * length(hypotenuse)
// length(opposite) = sin(theta) * length(hypotenuse)
// applied width = lo(img height) + la(img width)
sz.Width = (int)(sin * imgheight + cos * imgwidth);
// applied height = lo(img width) + la(img height)
sz.Height = (int)(sin * imgwidth + cos * imgheight);
}
else {
// get image diagonal to fit any rotation (w & h =diagonal)
sz.X = imgx - (int)Math.Sqrt(Math.Pow(imgwidth, 2.0) + Math.Pow(imgheight, 2.0));
sz.Y = imgy - (int)Math.Sqrt(Math.Pow(imgwidth, 2.0) + Math.Pow(imgheight, 2.0));
sz.Width = (int)Math.Sqrt(Math.Pow(imgwidth, 2.0) + Math.Pow(imgheight, 2.0)) * 2;
sz.Height = sz.Width;
}
return sz;
}
Pixel getting method:
public uint[] GetPixelData(bool useBaseImage) {
Rectangle rect;
Image image;
if (useBaseImage)
image = Image;
else
image = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(Image, Position, Origin, Rotation, out rect), Scale);
BitmapData data;
try {
data = ((Bitmap)image).LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
}
catch (ArgumentException) {
data = ((Bitmap)image).LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, image.PixelFormat);
}
byte[] rawdata = new byte[data.Stride * image.Height];
Marshal.Copy(data.Scan0, rawdata, 0, data.Stride * image.Height);
((Bitmap)image).UnlockBits(data);
int pixelsize = 4;
if (data.PixelFormat == PixelFormat.Format24bppRgb)
pixelsize = 3;
else if (data.PixelFormat == PixelFormat.Format32bppArgb || data.PixelFormat == PixelFormat.Format32bppRgb)
pixelsize = 4;
double intdatasize = Math.Ceiling((double)rawdata.Length / pixelsize);
uint[] intdata = new uint[(int)intdatasize];
Buffer.BlockCopy(rawdata, 0, intdata, 0, rawdata.Length);
return intdata;
}
The pixel retrieval method works, and the rotation method works as well, so the only place that the code might be wrong is the collision detection code, but I really have no idea where the problem might be.
I don't think many people here will bother to scrutinize your code to figure out what exactly is wrong. But I can offer some hints on how you can find the problem.
If the collision happens long before it is supposed to, I suspect your bounding box check isn't working properly.
I would change the code to dump out all the rectangle data at the moment of collision. Then you can write some code that displays the situation at collision; that might be easier than poring over the numbers.
Apart from that, I doubt that per-pixel collision detection is easier for you to implement. When you allow for rotation and scaling, it quickly becomes difficult to get right. I would do polygon-based collision detection instead.
I have made my own 2D engine like you, but I used polygon-based collision detection and that worked fine.
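For illustration, here is a minimal separating-axis (SAT) sketch for convex polygons, as an example of the polygon-based approach suggested above (not part of the asker's engine; vertices must be given in order around each polygon):
using System.Drawing;
static class PolygonCollision
{
    // Projects every vertex onto an axis and returns the min/max extent.
    static void Project(PointF[] poly, PointF axis, out float min, out float max)
    {
        min = max = poly[0].X * axis.X + poly[0].Y * axis.Y;
        for (int i = 1; i < poly.Length; i++)
        {
            float p = poly[i].X * axis.X + poly[i].Y * axis.Y;
            if (p < min) min = p;
            if (p > max) max = p;
        }
    }
    // True if the projections of a and b overlap on every edge normal of both polygons.
    public static bool Intersect(PointF[] a, PointF[] b)
    {
        foreach (PointF[] poly in new[] { a, b })
        {
            for (int i = 0; i < poly.Length; i++)
            {
                PointF p1 = poly[i];
                PointF p2 = poly[(i + 1) % poly.Length];
                PointF axis = new PointF(p2.Y - p1.Y, p1.X - p2.X);   // edge normal
                Project(a, axis, out float minA, out float maxA);
                Project(b, axis, out float minB, out float maxB);
                if (maxA < minB || maxB < minA)
                    return false;                                     // separating axis found
            }
        }
        return true;                                                  // no separating axis => overlap
    }
}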
I think I've found your problem.
internal static Rectangle GetRotateDimensions(int imgx, int imgy, int imgwidth, int imgheight, double rotation, bool Crop) {
Rectangle sz = new Rectangle(); // <-- Default constructed rect here.
if (Crop == true) {
// absolute trig values goes for all angles
double dera = de_ra(rotation);
double sin = Math.Abs(Math.Sin(dera));
double cos = Math.Abs(Math.Cos(dera));
// general trig rules:
// length(adjacent) = cos(theta) * length(hypotenuse)
// length(opposite) = sin(theta) * length(hypotenuse)
// applied width = lo(img height) + la(img width)
sz.Width = (int)(sin * imgheight + cos * imgwidth);
// applied height = lo(img width) + la(img height)
sz.Height = (int)(sin * imgwidth + cos * imgheight);
// <-- Never gets the X & Y assigned !!!
}
Since you never assigned imgx and imgy to the X and Y coordinates of the Rectangle, every call of GetRotateDimensions will produce a Rectangle with the same location. They may be of differing sizes, but they will always be in the default X,Y position. This would cause the really early collisions that you are seeing because any time you tried to detect collisions on two sprites, GetRotateDimensions would put their bounds in the same position regardless of where they actually are.
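A small sketch of the fix described above, using the same variable names as the method (give the cropped rectangle its position as well as its size):
if (Crop == true)
{
    double dera = de_ra(rotation);
    double sin = Math.Abs(Math.Sin(dera));
    double cos = Math.Abs(Math.Cos(dera));
    sz.X = imgx;                                       // position was previously left at the default 0
    sz.Y = imgy;
    sz.Width = (int)(sin * imgheight + cos * imgwidth);
    sz.Height = (int)(sin * imgwidth + cos * imgheight);
}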
Once you have corrected that problem, you may run into another error:
Rectangle rect;
Image img1 = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(s1.Image, s1.Position, s1.Origin, s1.Rotation, out rect), s1.Scale);
// <-- Should resize rect here.
int posx1 = rect.X;
int posy1 = rect.Y;
You get your boundary rect from the RotateImagePoint function, but you then resize the image. The X and Y from the rect are probably not exactly the same as that of the resized boundaries of the image. I'm guessing that you mean for the center of the image to remain in place while all points contract toward or expand from the center in the resize. If this is the case, then you need to resize rect as well as the image in order to get the correct position.
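For illustration, a rough sketch of resizing rect around its centre by the same factor used for the image (this assumes s1.Scale is a uniform scale factor; it is not part of the original code):
int scaledW = (int)(rect.Width * s1.Scale);
int scaledH = (int)(rect.Height * s1.Scale);
rect = new Rectangle(rect.X + (rect.Width - scaledW) / 2,
                     rect.Y + (rect.Height - scaledH) / 2,
                     scaledW, scaledH);
int posx1 = rect.X;   // now matches the resized image
int posy1 = rect.Y;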
I doubt this is the actual problem, but LockBits doesn't guarantee that the bits data is aligned to the image's Width.
I.e., there may be some padding. You need to access the image using data[x + y * stride] and not data[x + y * width]. The Stride is also part of the BitmapData.
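For illustration, a hedged sketch of one way to remove that padding inside GetPixelData, so the width-based indexing in CheckForCollision stays valid (this assumes 32bpp data, i.e. 4 bytes per pixel, as the collision loop already does):
// copy row by row so the returned array is exactly Width * Height pixels
uint[] intdata = new uint[image.Width * image.Height];
for (int y = 0; y < image.Height; y++)
{
    Buffer.BlockCopy(rawdata, y * data.Stride,       // source: start of row y (stride-aligned)
                     intdata, y * image.Width * 4,   // destination: tightly packed row (offset in bytes)
                     image.Width * 4);               // copy only the visible pixels of the row
}
return intdata;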