Forming bounding box only around visible sprites - c#

This site has been really amazing for helping me with game development; however, I'm unable to find an answer to the following question (nor am I able to solve it on my own).
I am trying to do rectangle collision in my game. My idea is to 1) get the original collision bounding rectangle, 2) transform the texture (position/rotation/scale), and 3) factor the item's changes into a matrix, then use this matrix to transform the item's original collision bounds.
However, my textures contain a lot of transparency, transparency that affects the overall height/width of the texture (I do this to maintain power-of-two dimensions).
My problem: how do I create a rectangle whose dimensions ignore the transparency around the object? A picture is provided below:

I guess you could step through each row of pixels in the bounding rectangle, starting from the top, and check when you first hit a coloured pixel by testing its alpha value (Color.A != 0).
That way you'll get the top (Y coordinate) of the rectangle.
Then step through each column starting from the left of the bounding rectangle, looking for the first coloured pixel; that gives you the X.
Then step through each row again, but starting from the bottom, to find the bottom edge (and therefore the height), and through each column again, starting from the right, to find the right edge (and therefore the width).
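Here's a rough sketch of that idea in XNA/C# (untested; it makes a single pass over the pixel data and tracks the first and last opaque row and column rather than four separate scans):
Rectangle GetOpaqueBounds(Texture2D texture)
{
    // Pull the full pixel data once.
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    int top = texture.Height, left = texture.Width, bottom = -1, right = -1;
    for (int y = 0; y < texture.Height; y++)
    {
        for (int x = 0; x < texture.Width; x++)
        {
            if (pixels[y * texture.Width + x].A != 0)
            {
                if (y < top) top = y;
                if (y > bottom) bottom = y;
                if (x < left) left = x;
                if (x > right) right = x;
            }
        }
    }

    if (bottom < 0)
        return Rectangle.Empty; // fully transparent texture
    return new Rectangle(left, top, right - left + 1, bottom - top + 1);
}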
Hope that helps

I think dois' answer is the way to do it. I use the following code to find the pixel value at a specific point inside my texture. You can adapt this code to read the pixels line by line, check for transparency, and stop when you find a line with a pixel that is not transparent.
Texture2D texture = yourTexture; // get your Texture2D here
Color[] colorData = new Color[1];
// Read the single pixel at targetPoint (mip level 0) into colorData.
texture.GetData<Color>(0, new Rectangle(targetPoint.X, targetPoint.Y, 1, 1), colorData, 0, 1);
To check whether the pixel is not transparent I do this:
if(colorData[0].A > 0)
I don't know how expensive this operation would be for collision detection, though.
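If it turns out to be too slow, one option (just a sketch) is to call GetData once for the whole texture up front and then index the cached array per pixel:
// Cache all pixel data once (e.g. at load time) instead of calling GetData per pixel.
Color[] allPixels = new Color[texture.Width * texture.Height];
texture.GetData(allPixels);
// Later, per pixel:
bool opaque = allPixels[targetPoint.Y * texture.Width + targetPoint.X].A > 0;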

For my games, I use circles. If your objects are reasonably round, a circle can often provide a closer fit, AND collision detection is dead easy: if the distance between the centres of the objects is less than the sum of the radii, then they are colliding.
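For illustration, a minimal version of that test (names are made up; uses XNA's Vector2):
bool CirclesCollide(Vector2 centerA, float radiusA, Vector2 centerB, float radiusB)
{
    // Compare squared distance against the squared sum of radii to avoid a square root.
    float radii = radiusA + radiusB;
    return Vector2.DistanceSquared(centerA, centerB) < radii * radii;
}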
If circles are out of the question, then muku or Dois have provided decent answers.

Related

Segmenting Kinect body arms

I am trying to segment arms from a Kinect depth image in my app.
I tried using the joint positions to get the vector between the elbow and the wrist/hand-tip, created a 2D rotated bounding rectangle between these two joints, and then removed all pixels outside the rectangle. The problem is that, depending on the distance from the sensor, this rectangle changes width and can become trapezoidal (e.g. if the hand is closer to the camera), so it basically only lets me discard parts of the image before doing the actual processing.
When the hand is near the body (like my left arm below), I need to detect the edge of the hand - presumably by checking the depth gradient. But I couldn't find a flood fill algorithm which "stops" at gradients.
Is there a better approach perhaps? I could use an algorithm idea.

OpenCV/EMGU (C#) detection of objects

I'm trying to write some image detection code for a pick and place machine. I'm new to OpenCV and have been going through a lot of examples, but I still have two outstanding questions. I think I have a solution for the first one, but I'm lost on the second.
I'm trying to detect the offset and angle of the bottom of a part. Essentially: how far is the object from the cross (just an indicator of the center of the frame), and what angle of rotation does the part have about its own center? I've used filters to show the pads of the components.
I'm pretty sure that I want to implement something like this http://felix.abecassis.me/2011/10/opencv-bounding-box-skew-angle/ - but I'm not sure how to translate the code into C# (http://www.emgu.com/wiki/index.php/Main_Page). Any pointers would be helpful.
One issue is if the part is smaller than the needle that's holding it and you can see both the part and the needle.
The square bit is the part I want to detect. The round part is part of the needle that is still exposed. I've got no clue how to approach this - I'm thinking something along the lines of detecting the straight lines and discarding the curved ones to generate a shape. Again, I'm interested in the offset from the center and the angle of rotation.
First you should detect every object with FindContours. Then you can use the minimum-area-rectangle function on every found contour. I assume you know the size and coordinates of your cross, so you can use the center coordinates of the MCvBox2D to get the offset to it. Furthermore, you can read the angle property of the box, so it should fit your purpose.
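Roughly like this (an untested Emgu CV 2.x sketch; binarized is assumed to be your filtered Image<Gray, byte> and crossCenter the centre of your cross):
for (Contour<Point> c = binarized.FindContours(
         CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
         CvEnum.RETR_TYPE.CV_RETR_LIST);
     c != null; c = c.HNext)
{
    MCvBox2D box = c.GetMinAreaRect();            // rotated bounding box of this contour
    float offsetX = box.center.X - crossCenter.X; // offset of the part from the cross
    float offsetY = box.center.Y - crossCenter.Y;
    float angle = box.angle;                      // rotation of the part
}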
For the second part I would try to fit a least-squares rectangle. The round part seems to be very small compared to the square one, so maybe it will work.
Maybe the detection of quadrilaterals in the AForge library could help you too.
Edit:
To merge the contours I would try something like this:
Rectangle merged = Rectangle.Empty; //will grow to enclose every accepted pad
//img is your binarized image
Contour<Point> Pad_contours = img.FindContours(CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE, CvEnum.RETR_TYPE.CV_RETR_LIST);
while (Pad_contours != null)
{
    if (Pad_contours.Area < Pad_Area_max && Pad_contours.Area > Pad_Area_min) //Filter pads to avoid false positive contours
    {
        //Merge this pad's bounding box into the single rectangle
        Rectangle bounds = Pad_contours.BoundingRectangle;
        merged = merged.IsEmpty ? bounds : Rectangle.Union(merged, bounds);
    }
    //get next pad
    if (Pad_contours.VNext == null)
        Pad_contours = Pad_contours.HNext;
    else
        Pad_contours = Pad_contours.VNext;
}
The rectangle "merged" should enclose all your Pads now. The problem is you wont get an angle this way because the rectangle is always vertically 90°. To solve this I would iterate through the contours like shown above and store every point of every contour in an extra datacontainer. Then I would use the minimum area rectangle function mentioned above and apply it on all gathered points. This should give you a bounding rectangle with an angle property.

Draw a one pixel line around square sprite

I have a 15 x 15 pixel box, and I draw several of them in different colours using:
spriteBatch.Draw(texture, position, colour);
What I'd like to do is draw a one pixel line around the outside, in different colours, thus making it a 17 x 17 box, with (for example), a blue outline one pixel wide and a grey middle.
The only way I can think of doing it is to draw two boxes, one 17x17 in the outline colour, one 15x15 with the box colour, and layer them to give the appearance of an outline:
spriteBatch.Draw(texture17by17, position, outlineColour);
spriteBatch.Draw(texture15by15, position, boxColour);
Obviously the position vector would need to be modified (see the sketch below), but I think that gives a clear picture of the idea.
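For example, something like this (just a sketch; the 17x17 texture is shifted up and left by one pixel so the two boxes stay centred on each other):
spriteBatch.Draw(texture17by17, position - new Vector2(1, 1), outlineColour);
spriteBatch.Draw(texture15by15, position, boxColour);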
The question is: is there a better way?
You can draw lines and triangles using DrawUserIndexedPrimitives; see Drawing 3D Primitives using Lists or Strips on MSDN for more details. Other figures like rectangles and circles are constructed from lines, but you'll need to implement them yourself.
To render lines in 2D, just use an orthographic projection that mirrors the transformation matrix SpriteBatch uses.
You can find a more complete example, with the PrimitiveBatch class that encapsulates the drawing logic, in the Primitives sample from Xbox Live Indie Games.
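As a rough, untested sketch, a BasicEffect set up so those primitives use the same pixel coordinate system as SpriteBatch might look like this:
BasicEffect effect = new BasicEffect(GraphicsDevice);
effect.VertexColorEnabled = true;
effect.World = Matrix.Identity;
effect.View = Matrix.Identity;
// Orthographic projection with the origin at the top left and y pointing down,
// matching SpriteBatch's coordinate system.
effect.Projection = Matrix.CreateOrthographicOffCenter(
    0, GraphicsDevice.Viewport.Width,
    GraphicsDevice.Viewport.Height, 0,
    0, 1);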
Considering XNA can't draw "lines" the way OpenGL immediate mode can, it is far more efficient to draw a sprite from a pre-generated textured quad (2 triangles) than to draw additional geometry with dynamic texturing, particularly when each "line" would require a triangle of its own: 2 triangles versus 4, respectively, with fewer triangles and vertices in the former.
So I would not try to mimic a "thin" line around the outside with additional geometry; instead, continue with what you are doing and draw 2 different sprites (each is a quad anyway).
Every object drawn in 3D is drawn using triangles. - Would you like to know more?

XNA Rotation Help (Interesting...)

Hello Stack Overflow users, I have a fun problem in my XNA game.
So basically I have an asteroid, 80x80, and I set the origin to (imageW / 2, imageH / 2) (not that the order would matter here; the asteroid is a square).
Here is an image, explaining the problem! Visualization FTW :D
http://i.imgur.com/dsawS.png
So, any ideas on what is causing this? I spent an hour on it and looked at examples, and I found out it is supposed to rotate like this:
http://www.riemers.net/images/Tutorials/XNA/Csharp/Series2D/rotation.jpg
But it's not.
Here is a code sample. I have a class named Drawable that has properties which hold the position vector, etc.
Vector2 asteroidOrigin = new Vector2(asteroidImgs[asteroid.asteroidType].Width / 2, asteroidImgs[asteroid.asteroidType].Height / 2);
drawableList.Add(new Drawable(asteroidImgs[asteroid.asteroidType], asteroid.asteroidPos, asteroid.angle, asteroidOrigin));
Here is the Draw Method:
foreach (Drawable drawable in renderManager.getRenderList)
{
spriteBatch.Draw(drawable.image, drawable.position, drawable.sourceRectangle, drawable.tint, drawable.angle, drawable.origin, drawable.imageScale, drawable.spriteEffects, drawable.depth);
}
And yes, the Drawable Class has multiple constructors and they assign default values.
When you define an Origin in SpriteBatch.Draw, you are defining the new point on your texture which will draw at the Position argument. Obviously this affects translation as well as your desired rotation. When you set the origin to the center of the image, the image is translated so that the center is at your Position, then rotated around that point. When you set the origin to Vector2.Zero, the translation is not changed, but the image rotates around its top left corner.
The solution is to either redefine what you mean as "Position" for sprites to be where the CENTER of the image draws on screen (I recommend this, makes things nice) or perform a bit of work before drawing by adding the Origin to the Position before calling Draw.
I, again, recommend the first solution, because then when you want to draw a circle in the center of the screen you can just set its position to be the center of the screen and be done. You won't need to take its size into account. And so on.
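For example, the second option could be sketched like this (untested): shift the draw position by the origin so the sprite ends up where it used to be while still rotating about its centre.
foreach (Drawable drawable in renderManager.getRenderList)
{
    // position still means "top-left corner"; compensate for the centred origin.
    spriteBatch.Draw(drawable.image, drawable.position + drawable.origin,
                     drawable.sourceRectangle, drawable.tint, drawable.angle,
                     drawable.origin, drawable.imageScale,
                     drawable.spriteEffects, drawable.depth);
}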

create 3D rectangle using c#

I read about the Rectangle structure in C# and its intersection function.
My question is: how do I customise it so that I can have a 3D rectangle with x, y, z coordinates,
and get its intersection with another one?
Any ideas?
Just create your own. Here are some ideas:
a 3D rectangle not only has a width and a height, but also a plane
planes can be described with a normal vector and a point (origin)
the origin would be similar to the (x, y) in the 2D rectangle, that is, the "upper left" point, but any corner would do
intersecting with another rectangle could be as easy as intersecting the two planes and then checking to see if the intersection line "cuts" either of the rectangles
there are tons of math related websites to check for the formulas on how to do this
chances are pretty good that in your application you won't need to do this in an optimized manner. Really. Just code it already and try it out. You can optimize later.
EDIT:
Wait. On second thoughts: An origin, a height, a width and a normal vector won't really cut it, since you don't have a sense of "up" as you do in 2D.
So, scratch that. Thinking about it reveals that the width and the height in 2D are actually vectors too, except that their direction is implied: Width is the length of a vector in the x direction, Height is the length of a vector in the y direction.
So, model your rectangle like this (a sketch follows the list):
a point (Origin)
a vector Width (this is often called u in maths)
a vector Height (this is often called v in maths)
the normal vector is not necessary anymore, since it can be calculated as the cross product of Width x Height
The three other points of your rectangle can then be calculated as:
Origin + Width
Origin + Width + Height
Origin + Height
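A minimal sketch of that model (names are made up; using XNA's Vector3, but any 3D vector type works):
struct Rect3D
{
    public Vector3 Origin;  // one corner of the rectangle
    public Vector3 U;       // the "width" edge vector
    public Vector3 V;       // the "height" edge vector

    // The normal falls out of the cross product of the two edge vectors.
    public Vector3 Normal { get { return Vector3.Cross(U, V); } }

    // The four corners: Origin, Origin + U, Origin + U + V, Origin + V.
    public Vector3[] Corners()
    {
        return new[] { Origin, Origin + U, Origin + U + V, Origin + V };
    }
}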
The rectangle class you have linked to models a 2D rectangle (I don't know what a 3D rectangle would be, BTW).
Pretty much the whole System.Drawing namespace deals with 2D, so you can't customise it that way.
The System.Drawing parent namespace contains types that support basic GDI+ graphics functionality. Child namespaces support advanced two-dimensional and vector graphics functionality, advanced imaging functionality, and print-related and typographical services.
(emphasis mine)
(about the intersection function)
You cannot create such a function.
The intersection function of 2 rectangles in 2D is interesting because it returns a third rectangle (which can be empty).
The intersection of 2 "3D rectangles" in space is not always a 3D rectangle!
(for example take 2 identical rectangles and rotate one, then take the intersection...)
So you cannot just create a rectangle object, then an intersection function that returns a rectangle object.
You need a more complete 3D object management library.
remark:
A 3D rectangle is delimited by 6 planes.
so you can identify it by 6 constraints on x, y, z.
Then the intersection of 2 3D rectangles will just be a 3D object identified by 12 constraints.
If these 12 constraints can be simplified to 6, the result can be a rectangle (but that's not always the case),
and if they cannot, then it's not a rectangle.
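(In the special case of axis-aligned boxes the 12 constraints do always collapse back to 6, so the intersection stays a box. A minimal sketch, with made-up names and XNA's Vector3:)
struct Box3D
{
    public Vector3 Min, Max;   // the 6 constraints: a min and a max per axis

    // Intersecting two boxes just tightens each pair of constraints;
    // the result is empty as soon as any min exceeds its max.
    public bool TryIntersect(Box3D other, out Box3D result)
    {
        result.Min = Vector3.Max(Min, other.Min);
        result.Max = Vector3.Min(Max, other.Max);
        return result.Min.X <= result.Max.X
            && result.Min.Y <= result.Max.Y
            && result.Min.Z <= result.Max.Z;
    }
}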
