Some colleagues and I are creating a simple game in XNA 4.0 (yes, I know it's no longer supported by Microsoft, but it's a requirement set by our tutors). Recently I wrote a light pre-pass renderer based on J. Coluna's. It was working fine until we added some meshes with bump and albedo maps, and now we've got a strange bug. Here are some examples:
I don't have a clue what causes these artifacts (green/purple). Sometimes similar artifacts appear on the floor, and those are black. Do you have any idea what might be wrong in the renderer?
If my post isn't clear enough, let me know and I'll try to clarify it.
You have not provided any code so I have to base my answer on observation only.
I believe the problem comes from flipped normals, related to how the object geometry was created.
If you take a close look at image #3, you will notice that the purple artifact can also be seen on the left side of the object, but over a narrower area. Based on that, this is the theory I offer:
Your animator created the object and didn't like the sharp edges, so to get rid of them he or she rotated the edge vertices and moved them inward in a way that overlaps the shape's exterior.
If I try to illustrate it, it would look something like this:
original object:
+-----+
|=====|
|=====|
|=====|
+-----+
vs. tinkered object:
-------
|x===x|
|=====|
|x===x|
-------
Here the '+' corners have been converted to 'x', meaning the vertex was rotated and moved further inside the shape. This probably inverted the normals, which affects how light is reflected back from the object.
The reason we see a narrower artifact area on the left in image #3 is probably that the artist rotated all the corners at the same time. If that is the case, I believe the phenomenon will be symmetric when you rotate the shape: you will again see a wider artifact to the right of the object and a narrower one to the left, but if you flip the shape (rotate it 180 degrees on Y) the phenomenon will flip with it.
Another way to test for this is to put a new, simple box shape into the scene and check whether the artifact is gone.
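If you want to check for flipped winding/normals from code as well, a minimal sketch (XNA 4.0 built-in states; DrawSuspectMesh is a placeholder standing in for your existing draw call) is to temporarily swap the cull mode and see whether the artifact flips with it:

    // Quick diagnostic, not a fix: if the artifacts move to the opposite faces when the
    // cull mode is swapped, the triangle winding (and usually the normals) of that mesh
    // is flipped. "device" is your GraphicsDevice; the XNA default is CullCounterClockwise.
    RasterizerState previous = device.RasterizerState;
    device.RasterizerState = RasterizerState.CullClockwise;
    DrawSuspectMesh();   // hypothetical placeholder for your existing draw call
    device.RasterizerState = previous;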
Hi, I am new to DirectX with C#. I have a problem in one project: I draw two cubes one after another (i.e. same x and y location, different z), but when I view the front cube it appears transparent and the back cube is visible through it. I checked the transparency; no transparency level has been set, and cullmode = null. Can anyone suggest what the problem is?
I think the pixels of the back cube are overwriting those of the front cube. How do I overcome this?
Here are the screenshots:
Front Facing: http://postimg.org/image/6irstpv75/
Top View: http://postimg.org/image/o7ktw54h3/
Welcome :)
Please consider adding tags to your post (programming language, "DirectX", etc.). Without knowing which framework you use (edit: C#... you should put it in the tags... So, which framework? SharpDX? SlimDX? :)), I cannot be more specific.
It looks like you aren't using a depth buffer: you draw your distant cube first and the closer cube after it, so it overrides the existing pixels in the back buffer.
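As a rough sketch (assuming Direct3D 9 through SlimDX; SharpDX is very similar, so adjust the names for your framework), enabling the depth buffer involves three things: requesting a depth/stencil surface, turning on the Z test, and clearing the depth buffer every frame:

    // Assumed SlimDX.Direct3D9-style names; check your framework's documentation.
    var pp = new PresentParameters();
    pp.EnableAutoDepthStencil = true;            // ask for a depth/stencil surface
    pp.AutoDepthStencilFormat = Format.D24S8;    // 24-bit depth, 8-bit stencil
    // ... plus your existing back-buffer settings, then create the device with pp ...

    device.SetRenderState(RenderState.ZEnable, true);        // enable the depth test
    device.SetRenderState(RenderState.ZWriteEnable, true);   // write depth for opaque geometry

    // Every frame, clear the depth buffer together with the colour target
    // (the exact Clear overload depends on the wrapper):
    device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(System.Drawing.Color.Black), 1.0f, 0);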
I am currently working on a project in which we have a set of photos of trucks going past a camera. I need to detect what type of truck it is (how many wheels it has), so I am using EMGU to try to detect this.
The problem I have is that I cannot seem to detect the wheels using EMGU's HoughCircles detection: it doesn't detect all the wheels, and it also detects random circles in the foliage.
So I don't know what I should try next. I tried implementing the SURF algorithm to match wheels against each other, but this does not seem to work either, since they aren't exactly the same. Is there a way I could implement a "loose" SURF match?
This is what I start with.
This is what I get after the Hough circle detection. There are many erroneous detections; some are not even close to containing a circle, and the back wheels are detected as a single one for some reason.
Would it be possible to confirm that the detected circles are actually wheels, using SURF and matching them against each other? I am a bit lost about what I should do next; any help would be greatly appreciated.
(sorry for the bad English)
UPDATE
Here is what I did.
I used blob tracking to find the blob in my set of photos; with this I can effectively locate the moving truck. Then I split the blob's rectangle in two and take the lower half, so I know I get the zone that should contain the wheels, which greatly improves the detection. I then run a loose light-intensity check on the wheels I get: since they are generally darker, I should get a fairly low value for them and can discard anything that is too white (180/255 and up). I also know that my circle radius cannot be greater than half the height of the detection zone.
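A rough Emgu CV sketch of that filtering step, as I understand it (the Emgu CV 2.x Image<,> API is assumed here; gray is the grayscale frame, blobRect is the tracked blob's rectangle, and the thresholds are the ones reasoned about above):

    using System.Collections.Generic;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Lower half of the blob's rectangle: the zone that should contain the wheels.
    Rectangle lowerHalf = new Rectangle(
        blobRect.X, blobRect.Y + blobRect.Height / 2, blobRect.Width, blobRect.Height / 2);
    Image<Gray, byte> zone = gray.Copy(lowerHalf);

    int maxRadius = lowerHalf.Height / 2;     // a wheel cannot be larger than the zone's height

    CircleF[] circles = zone.HoughCircles(
        new Gray(120),    // Canny threshold (tune for your images)
        new Gray(50),     // accumulator threshold (tune for your images)
        2.0,              // dp: accumulator resolution
        20.0,             // minimum distance between circle centres
        5,                // minimum radius
        maxRadius)[0];

    var wheels = new List<CircleF>();
    foreach (CircleF c in circles)
    {
        // Mean intensity inside the circle's bounding box: wheels should be dark.
        var box = new Rectangle(
            (int)(c.Center.X - c.Radius), (int)(c.Center.Y - c.Radius),
            (int)(2 * c.Radius), (int)(2 * c.Radius));
        box.Intersect(new Rectangle(Point.Empty, zone.Size));

        zone.ROI = box;
        double meanIntensity = zone.GetAverage().Intensity;
        zone.ROI = Rectangle.Empty;

        if (meanIntensity < 180)              // discard anything that is too white
            wheels.Add(c);
    }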
In this answer I describe an approach that was tested successfully with the following images:
The image processing pipeline begins by either downsampling the input image or performing a color reduction operation to decrease the amount of data (colors) in the image. This creates smaller groups of pixels to work with. I chose to downsample:
The 2nd stage of the pipeline performs a Gaussian blur in order to smooth/blur the images:
Next, the images are ready to be thresholded, i.e. binarized:
The 4th stage requires executing Hough Circles on the binarized image to locate the wheels:
The final stage of the pipeline would be to draw the circles that were found over the original image:
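A rough Emgu CV translation of these five stages (the method names are assumed from the Emgu CV 2.x Image<,> wrapper, and the threshold values are placeholders to tune) might look like this:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    Image<Bgr, byte> input = new Image<Bgr, byte>("truck.jpg");

    // 1. Downsample to reduce the amount of data to process.
    Image<Bgr, byte> small = input.PyrDown();

    // 2. Gaussian blur to smooth the image.
    Image<Bgr, byte> blurred = small.SmoothGaussian(9);

    // 3. Threshold (binarize) a grayscale version of the image.
    Image<Gray, byte> binary = blurred.Convert<Gray, byte>()
                                      .ThresholdBinary(new Gray(90), new Gray(255));

    // 4. Hough circles on the binarized image to locate the wheels.
    CircleF[] circles = binary.HoughCircles(
        new Gray(100),   // Canny threshold
        new Gray(40),    // accumulator threshold
        2.0,             // dp: accumulator resolution
        20.0,            // minimum distance between centres
        5, 60)[0];       // radius range

    // 5. Draw the detected circles over the downsampled original.
    foreach (CircleF c in circles)
        small.Draw(c, new Bgr(Color.Red), 2);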
This approach is not a robust solution. It's meant only to inspire you to continue your search for answers.
I don't do C#, sorry. Good luck!
First, the wheel projections are ellipses, not circles. Second, background gradients can easily produce circle-like objects, so there should be no surprise there. The problem with ellipses, of course, is that they have 5 degrees of freedom rather than the 3 that circles have, and a five-dimensional Hough space becomes impractical. Some generalized Hough transforms can probably handle the ellipse problem, at the expense of a lot of additional false-alarm (FA) circles. To counter the FAs you have to verify that they really are wheels that belong to a truck and nothing else.
You probably need to start by specifying your problem in terms of objects and background rather than wheel detection. This is important since the object creates a visual context for detecting wheels, and background analysis will show how easy it is to segment a truck (the object) in the first place. If the camera is static, you can use motion to detect the background. If the background is relatively uniform, a Gaussian mixture model of its colors may help to eliminate much of it.
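If the camera really is static, a background-subtraction sketch along these lines could isolate the moving truck before any wheel detection (the class and method names are assumed from Emgu CV's VideoSurveillance wrapper, which has changed between releases, so treat this only as an outline):

    using Emgu.CV;
    using Emgu.CV.Structure;
    using Emgu.CV.VideoSurveillance;

    // Gaussian-mixture background model: 200 frames of history, default sensitivity,
    // shadow detection on. Parameter order assumed from Emgu CV 2.4.
    var subtractor = new BackgroundSubtractorMOG2(200, 16, true);

    foreach (Image<Bgr, byte> frame in frames)   // "frames" stands in for your photo sequence
    {
        subtractor.Update(frame);                                   // learn/update the background
        Image<Gray, byte> foreground = subtractor.ForegroundMask;   // white pixels = moving truck
        // Run the wheel detection only inside the foreground region.
    }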
I strongly suggest using:
http://cvlabwww.epfl.ch/~lepetit/papers/hinterstoisser_pami11.pdf
and the C# implementation:
https://github.com/dajuric/accord-net-extensions
(take a look at samples)
This algorithm can achieve real-time performance (20-30 fps) even with more than 2000 templates, so you can cover both the ellipse (projection) and circle shape cases.
You can modify the hand tracking sample (FastTemplateMatchingDemo) by putting in your own binary templates (make them in Paint :-)).
P.S:
To suppress false positives, some kind of tracking is usually incorporated as well. The library I linked also contains some tracking algorithms, such as the discrete Kalman filter and the particle filter, all with samples!
The library is still under development, so there is a possibility that something will not work.
Please do not hesitate sending me a message.
In my XNA 3D game, for some reason, the depth buffer is off and being ignored, even though I've done everything I can find to enable it (which admittedly isn't much, but it's supposed to be simple... not to mention the default).
Before the models are rendered:
global.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
Somewhere earlier:
graphics.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;
graphics.ApplyChanges();
The models are rendered using a vertex/index list with device.DrawUserIndexedPrimitives and a BasicEffect.
It comes out like this:
The purple object is very far from the camera, and is drawn 1st.
The gray object is very near the camera, and is drawn 2nd.
The blue object is medium-distance to the camera and is drawn 3rd.
The gray object is rendering behind the blue object. That is correct if you go by draw order, but I want them ordered by distance from the camera (using the depth buffer), in which case the gray object should draw in front of the blue object.
(And, no, just quickly sorting them manually, while it may provide a temporary solution, is not the way to fix this problem)
Update: This is directly related to the fact that I'm rendering onto a RenderTarget2D instead of straight onto the screen. (If I render on-screen instead of to the rendertarget, the depth is calculated correctly. The rendertarget is needed for other parts of the program... or an equivalent system.)
Well, I found what I had missed on my own after some searching around:
A RenderTarget2D has its depth buffer disabled by default.
I just had to set the DepthFormat on the render target I was drawing to.
Knew I missed something obvious...
(I'll green-check-mark-this in 4 hours when the site lets me.)
Edit: that is, unless, in that time, somebody posts a nice list of everywhere the XNA depth buffer could/should be set up, to better help people who might find this topic via Google.
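For reference, a minimal sketch of the three places depth settings come into play in XNA 4.0 (the RenderTarget2D constructor overload below is the one that fixed this; width, height and the formats are just example values):

    // 1. Request a depth/stencil format for the back buffer (typically in the Game constructor):
    graphics.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;
    graphics.ApplyChanges();

    // 2. Give the render target its own depth buffer; the simpler constructors use DepthFormat.None.
    RenderTarget2D target = new RenderTarget2D(
        GraphicsDevice, width, height,
        false,                          // no mipmaps
        SurfaceFormat.Color,
        DepthFormat.Depth24Stencil8);   // this was the missing piece

    // 3. Make sure depth testing/writing is enabled before drawing the models:
    GraphicsDevice.SetRenderTarget(target);
    GraphicsDevice.Clear(Color.CornflowerBlue);   // also clears the depth buffer
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;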
My current project has required me to learn face detection/tracking and image processing. Given my experience in C#, I chose Emgu CV as my library for face detection and tracking. From what I've learned so far, I can do face detection and tracking, and basic image processing.
My goal is to be able to place virtual hair on the detected face. What I want to achieve is similar to this video: http://www.youtube.com/watch?v=BdPmECfUFcI.
What I would like to know is which technique(s) to use for handling hair placement with different kinds of hairstyles on the detected face. In what image format do I store the hair?
After watching the video, I noticed it treats the head as a flat rectangle rather than a rectangular prism (a 3D object), so it doesn't use perspective transformations, and I won't consider them either. This is a limitation, but it serves as a decent first step for doing such placements. Note that it is not simply a matter of taking perspective into consideration: your face tracking algorithm also needs to be able to handle more complicated configurations (the eyes might not be fully visible, for example).
So, the first thing you want is a bounding rectangle aligned to the angle the eyes make with the x axis, as illustrated in the right figure below (the red segment indicates the line connecting the eyes). The left figure shows a typical axis-aligned bounding box, which doesn't work for this problem.
The problem is also simplified once you assume the head is symmetric, so you know the top middle point of that rectangle is the middle of the top of the head. Also, considering that a typical head is likely to be wider at the top than at the bottom, you end up with something like the following figure, where the width of the rectangle is close to the width of the forehead. You could also consider a bounding rectangle over only the upper half of the head, for example.
Now all that is left is positioning some object in this rectangle. For that, you need to augment the description of the object to be positioned so that it is not purely pixels. We can define an "entrance width" (EW) and an "entrance middle point" (EM). The EW establishes the width needed in the other rectangle (the head's) to position the object: if EW is smaller than the needed value you upscale the object, and correspondingly downscale it when EW is larger. Note that the full width of the head's rectangle is usually an overestimate for positioning this object, so you can experiment with percentages of that width. The EM value determines how you position the object over the head. In the following figure, EW is the horizontal blue dashed segment and EM is the middle point on it. The vertical blue line indicates how far above the EM you want to move the object relative to the top segment of the head's rectangle.
The only other special thing this object needs is a value that is treated as background, so that when painting the object it is easy to know whether to make a point fully transparent (the background value) or fully opaque (anything else). This is the sketch I had in mind of what basically needs to be done.
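To make the positioning step concrete, here is a minimal sketch using System.Drawing (headRect, hairBitmap, hairEW, hairEM and backgroundColor are assumptions standing in for the values described above; the rotation by the eye angle is omitted, and GetPixel/SetPixel are used only for clarity, not speed):

    using System.Drawing;

    // Sketch only: scales the hair so its entrance width matches ~80% of the head rectangle,
    // aligns the entrance middle point with the top middle of the head, and paints every
    // non-background pixel as fully opaque.
    static void PlaceHair(Bitmap frame, Rectangle headRect,
                          Bitmap hairBitmap, int hairEW, Point hairEM,
                          Color backgroundColor)
    {
        float scale = (0.8f * headRect.Width) / hairEW;
        int w = (int)(hairBitmap.Width * scale);
        int h = (int)(hairBitmap.Height * scale);
        using (Bitmap scaled = new Bitmap(hairBitmap, w, h))
        {
            // Align the (scaled) entrance middle point with the top middle of the head rectangle.
            int offsetX = headRect.Left + headRect.Width / 2 - (int)(hairEM.X * scale);
            int offsetY = headRect.Top - (int)(hairEM.Y * scale);

            for (int y = 0; y < h; y++)
            {
                for (int x = 0; x < w; x++)
                {
                    Color c = scaled.GetPixel(x, y);
                    if (c.ToArgb() == backgroundColor.ToArgb())
                        continue;                       // background value -> fully transparent

                    int fx = offsetX + x, fy = offsetY + y;
                    if (fx >= 0 && fy >= 0 && fx < frame.Width && fy < frame.Height)
                        frame.SetPixel(fx, fy, c);      // anything else -> fully opaque
                }
            }
        }
    }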
I'm making a game in C# and XNA, and I was trying to come up with a method to render massive terrains without using a tremendous amount of memory or exceeding the polygon limit hard-coded into XNA.
My solution so far is to create a massive heightmap that is loaded into memory at the beginning of the game, during the initialization phase. Then, terrain is generated only nearest to the camera. This is accomplished by projecting a triangle with one vertex at the character and the other two endpoints extending to the sides of the character's viewing area. All the pixels inside that triangle on the heightmap are then rendered and drawn into the game, thus rendering only what is seen.
The problem is this: I've successfully found (I think; I can't test until I get the terrain rendering) the three vertices of the triangle. Now I need a list of the coordinates of every single pixel inside that triangle, whole numbers only, because I just need a list of pixels to render.
I know it sounds a little confusing, so here's the gist of it:
I have an image, and I project a triangle onto that image. The only thing I know about that triangle are the three vertices. I need a list of the pixels inside that triangle.
I've been Googling around for maybe 20 minutes now, and I figured I might as well go ahead and post something here, since what I'm trying to do isn't all that common. If I find an answer, I'll be sure to post it here.
But until then, can anyone tell me how to accomplish this?
Edit: A formula, please. If you can provide a formula or algorithm, and an explanation, that would be just perfect.
Edit: I've posted a new question, as I've ditched this method of rendering large terrains. The question is here.
Start here:
http://mathworld.wolfram.com/TriangleInterior.html
One of the non-trivial problems, not mentioned there, that you have to deal with is the pixelization along the boundary.
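A sketch of the usual approach, assuming the vertices are already in heightmap/pixel space: scan the triangle's bounding box and keep the integer points that pass the edge-sign test described on that page. Treating points that lie exactly on an edge as inside is one simple way of dealing with the boundary pixelization:

    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;   // for Point (System.Drawing.Point works the same way)

    static List<Point> PixelsInTriangle(Point a, Point b, Point c)
    {
        int minX = Math.Min(a.X, Math.Min(b.X, c.X));
        int maxX = Math.Max(a.X, Math.Max(b.X, c.X));
        int minY = Math.Min(a.Y, Math.Min(b.Y, c.Y));
        int maxY = Math.Max(a.Y, Math.Max(b.Y, c.Y));

        var result = new List<Point>();
        for (int y = minY; y <= maxY; y++)
        {
            for (int x = minX; x <= maxX; x++)
            {
                // Signed areas (cross products) of the point against each edge.
                long d1 = Cross(a, b, x, y);
                long d2 = Cross(b, c, x, y);
                long d3 = Cross(c, a, x, y);

                bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
                bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);

                // Inside (or exactly on an edge) when the signs never disagree,
                // regardless of the triangle's winding order.
                if (!(hasNeg && hasPos))
                    result.Add(new Point(x, y));
            }
        }
        return result;
    }

    static long Cross(Point p, Point q, int x, int y)
    {
        return (long)(q.X - p.X) * (y - p.Y) - (long)(q.Y - p.Y) * (x - p.X);
    }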