I'm making a Unity SteamVR game.
I want to achieve an unlit art style reminiscent of Antichamber.
I want to draw lines on the edges of objects, but I couldn't find an online solution that wasn't outdated and actually worked.
Here is a picture from antichamber.
You can see that every object has a line across all of its edges.
How can I achieve this effect?
You can do this with Unity's "Edge Detect Effect Normals" image effect, which is included in Unity's Standard Assets. You can get it here.
There is also a newer set of image effects for Unity called the "Post Processing Stack", which is significantly faster than the one I linked above; you can get that here. Edge Detect is not included in the new Post Processing Stack, but if you decide to use it, you can get a ported version of "Edge Detect Effect Normals" here.
Any one of these effects can draw lines on the edges of objects.
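If you want to set it up from code, attaching the legacy effect looks roughly like this - a minimal sketch, assuming the Standard Assets "Effects" package is imported; the exact field names may vary between versions:

    using UnityEngine;
    using UnityStandardAssets.ImageEffects; // from Standard Assets "Effects"

    [RequireComponent(typeof(Camera))]
    public class OutlineSetup : MonoBehaviour
    {
        void Start()
        {
            // The normals-based modes need the camera's depth+normals texture;
            // the effect normally requests this itself, but it doesn't hurt.
            GetComponent<Camera>().depthTextureMode = DepthTextureMode.DepthNormals;

            var edges = gameObject.AddComponent<EdgeDetection>();
            edges.mode = EdgeDetection.EdgeDetectMode.RobertsCrossDepthNormals;
            edges.sensitivityDepth = 1.0f;   // assumption: field names as in the
            edges.sensitivityNormals = 1.0f; // Standard Assets version I've seen
        }
    }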
"I want to draw lines on the edges of objects"
This free asset promises to do just that.
"Supports outlines with variable thickness, color, that work in Editor, Ortho and Perspective cameras."
Source: https://assetstore.unity.com/packages/vfx/shaders/toon-shader-free-21288
Edit: The bad-angle picture was due to depth testing. I still need the 'right' way to do 3D text rendering, though!
I got 2D text rendering in OpenTK working. It's very simple: I use the .NET Graphics class to draw a string to a Bitmap, which I can load onto the GPU via OpenTK. OK, great - but how about 3D?
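(For reference, the working 2D path is roughly the following - simplified, with a hard-coded bitmap size:)

    using System.Drawing;
    using System.Drawing.Imaging;
    using OpenTK.Graphics.OpenGL;

    static int MakeTextTexture(string text, Font font)
    {
        var bmp = new Bitmap(256, 64);
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.Transparent);
            g.DrawString(text, font, Brushes.White, 0, 0);
        }

        int tex = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, tex);

        // Lock the bitmap bits and hand them to OpenGL (BGRA byte order).
        BitmapData data = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadOnly,
            System.Drawing.Imaging.PixelFormat.Format32bppArgb);

        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
            bmp.Width, bmp.Height, 0,
            OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte,
            data.Scan0);
        bmp.UnlockBits(data);

        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
            (int)TextureMinFilter.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
            (int)TextureMagFilter.Linear);
        return tex;
    }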
I'm thinking about this in terms of a cylinder. What is a cylinder? It's just a circle stretched out over a certain height. That's EXACTLY what I want to do here! I researched a bunch, but surprisingly there isn't all that much info readily available for such a basic task, IMO.
Here's what I've tried:
1) Rendering the bitmap 100 times from Z = 0.0f to 1.0f. This actually works pretty well! For certain rotations anyway.
2) Drawing 16x16x16 Voxels (well, I think I'm drawing voxels). Basically the idea is, use the typical GL.TexCoord3 and GL.Vertex3 methods for drawing the SURFACE of a cube, but because we are drawing so freakin many of them, I figured it would actually give depth to my text. It sort of does, but the results are actually worse than attempt 1.
I want to get this working with a really simple solution, if one exists. I'm using Immediate mode, and if possible I'd like to keep using that.
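For what it's worth, attempt 1 boils down to something like this in immediate mode (a sketch - the quad size, slice count, and depth are all placeholders):

    // Draw the same textured quad many times along Z to fake thickness.
    void DrawLayeredText(int texture, int slices = 100, float depth = 1.0f)
    {
        GL.Enable(EnableCap.Texture2D);
        GL.BindTexture(TextureTarget.Texture2D, texture);

        for (int i = 0; i < slices; i++)
        {
            float z = depth * i / (slices - 1);
            GL.Begin(PrimitiveType.Quads); // BeginMode.Quads on older OpenTK
            GL.TexCoord2(0f, 1f); GL.Vertex3(0f, 0f, z);
            GL.TexCoord2(1f, 1f); GL.Vertex3(1f, 0f, z);
            GL.TexCoord2(1f, 0f); GL.Vertex3(1f, 1f, z);
            GL.TexCoord2(0f, 0f); GL.Vertex3(0f, 1f, z);
            GL.End();
        }
    }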
This is what solution 1 looks like at a good angle:
Bad Angle:
I know that my method is inherently flawed because these bitmaps don't actually have any depth when I draw them, which is why at some critical angles the text either looks flat or disappears from view.
I am currently working on a project in which we have a set of photos of trucks going by a camera. I need to detect what type of truck it is (how many wheels it has), so I am using EMGU to try to detect this.
The problem is that I cannot seem to detect the wheels using EMGU's HoughCircle detection; it doesn't find all the wheels, and it also detects random circles in the foliage.
So I don't know what I should try next. I tried implementing the SURF algorithm to match wheels against each other, but that does not seem to work either, since they aren't exactly the same. Is there a way I could implement a "loose" SURF matching?
This is what I start with.
This is what I get after the Hough Circle detection. There are many erroneous detections, as some are not even close to being circles, and the back wheels are detected as a single one for some reason.
Would it be possible to confirm that the detected circles are actually wheels by using SURF and matching them against each other? I am a bit lost on what I should do next; any help would be greatly appreciated.
(sorry for the bad English)
UPDATE
Here is what I did.
I used blob tracking to find the blob in my set of photos; with this I can effectively locate the moving truck. Then I split the blob's rectangle in two and take the lower half. From there I know I get the zone that should contain the wheels, which greatly improves detection. I then run a loose light-intensity check on the wheels I get: since they are generally darker, I should get a fairly low value for those and can discard anything that is too bright (180/255 and up). I also know that a circle's radius cannot be greater than half the height of the detection zone.
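In rough Emgu CV terms (3.x-style API - a hedged sketch, where `frame` and `blobRect` come from your blob tracker, and `HoughModes` is named `HoughType` in older Emgu versions):

    using System.Collections.Generic;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    static CircleF[] FindWheelCandidates(Mat frame, Rectangle blobRect)
    {
        // The lower half of the blob is where the wheels should be.
        var lowerHalf = new Rectangle(
            blobRect.X, blobRect.Y + blobRect.Height / 2,
            blobRect.Width, blobRect.Height / 2);

        using (Mat roi = new Mat(frame, lowerHalf))
        using (Mat gray = new Mat())
        {
            CvInvoke.CvtColor(roi, gray, ColorConversion.Bgr2Gray);

            // Radius can't exceed half the detection zone's height.
            int maxRadius = lowerHalf.Height / 2;
            CircleF[] circles = CvInvoke.HoughCircles(
                gray, HoughModes.Gradient,
                2,          // dp (accumulator resolution)
                20,         // min distance between circle centers
                100, 50,    // Canny / accumulator thresholds - tune these
                5, maxRadius);

            var wheels = new List<CircleF>();
            foreach (CircleF c in circles)
            {
                // Loose intensity check: wheels are mostly dark, so discard
                // circles whose average gray level is 180/255 or brighter.
                var box = new Rectangle(
                    (int)(c.Center.X - c.Radius), (int)(c.Center.Y - c.Radius),
                    (int)(2 * c.Radius), (int)(2 * c.Radius));
                box.Intersect(new Rectangle(Point.Empty, gray.Size));
                using (Mat patch = new Mat(gray, box))
                {
                    if (CvInvoke.Mean(patch).V0 < 180)
                        wheels.Add(c);
                }
            }
            return wheels.ToArray();
        }
    }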
In this answer I describe an approach that was tested successfully with the following images:
The image processing pipeline begins by either downsampling the input image or performing a color-reduction operation to decrease the amount of data (colors) in the image. This creates smaller groups of pixels to work with. I chose to downsample:
The 2nd stage of the pipeline performs a Gaussian blur in order to smooth the images:
Next, the images are ready to be thresholded, i.e. binarized:
The 4th stage requires executing Hough Circles on the binarized image to locate the wheels:
The final stage of the pipeline draws the circles that were found over the original image:
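Put together in rough Emgu CV (C#) terms, the pipeline might look like this - untested, with every numeric parameter a placeholder to tune (and `HoughModes` is named `HoughType` in older Emgu versions):

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    using (Mat input = CvInvoke.Imread("truck.jpg", ImreadModes.Color))
    using (Mat small = new Mat())
    using (Mat gray = new Mat())
    using (Mat blurred = new Mat())
    using (Mat binary = new Mat())
    {
        CvInvoke.PyrDown(input, small);                           // 1: downsample
        CvInvoke.CvtColor(small, gray, ColorConversion.Bgr2Gray);
        CvInvoke.GaussianBlur(gray, blurred, new Size(9, 9), 2);  // 2: blur
        CvInvoke.Threshold(blurred, binary, 90, 255,
            ThresholdType.Binary);                                // 3: binarize
        CircleF[] circles = CvInvoke.HoughCircles(                // 4: Hough circles
            binary, HoughModes.Gradient, 2, 30, 100, 50, 10, 80);
        foreach (CircleF c in circles)                            // 5: draw results
            CvInvoke.Circle(small, Point.Round(c.Center), (int)c.Radius,
                new MCvScalar(0, 255, 0), 2); // drawn on the downsampled image;
                                              // scale up to draw on the original
    }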
This approach is not a robust solution; it's meant only to inspire you to continue your search for answers.
I don't do C#, sorry, so treat the sketch above as a rough translation rather than tested code. Good luck!
First, the wheels' projections are ellipses, not circles. Second, a background gradient can easily produce circle-like objects, so there should be no surprise here. The problem with ellipses, of course, is that they have five degrees of freedom rather than the three a circle has, and a five-dimensional Hough space becomes impractical. Some generalized Hough transforms can probably solve the ellipse problem, at the expense of a lot of additional false-alarm (FA) circles. To counter the FAs you have to verify that they really are wheels that belong to a truck and nothing else.
You probably need to start by specifying your problem in terms of objects and backgrounds rather than wheel detection. This is important because objects create a visual context for detecting wheels, and background analysis will show how easy it would be to segment a truck (object) in the first place. If the camera is static, you can use motion to detect the background, as sketched below. If the background is relatively uniform, a Gaussian mixture model of its colors may help eliminate much of it.
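A minimal Emgu CV sketch of that motion-based background idea (the BackgroundSubtractorMOG2 class lives in Emgu.CV.VideoSurveillance in the 3.x bindings; `frames` is an assumed photo sequence):

    using Emgu.CV;
    using Emgu.CV.VideoSurveillance;

    // Gaussian mixture background model:
    // history = 200 frames, variance threshold = 16, shadow detection on.
    var subtractor = new BackgroundSubtractorMOG2(200, 16, true);

    using (Mat foregroundMask = new Mat())
    {
        foreach (Mat frame in frames) // your photo sequence - an assumption here
        {
            subtractor.Apply(frame, foregroundMask);
            // foregroundMask is now ~255 where something moved (the truck),
            // 0 on the static background - run wheel detection only there.
        }
    }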
I strongly suggest using:
http://cvlabwww.epfl.ch/~lepetit/papers/hinterstoisser_pami11.pdf
and the C# implementation:
https://github.com/dajuric/accord-net-extensions
(take a look at samples)
This algorithm achieves real-time performance (20-30 fps) even when using more than 2000 templates - so you can cover both the ellipse (projection) and circle shape cases.
You can modify the hand-tracking sample (FastTemplateMatchingDemo) by putting in your own binary templates (make them in Paint :-)).
P.S.:
To suppress false positives, some kind of tracking is also incorporated. The library I linked also contains some tracking algorithms, such as the Discrete Kalman Filter and the Particle Filter, all with samples!
This library is still under development, so there is a possibility that something will not work.
Please do not hesitate to send me a message.
In my XNA 3D game, for some reason the depth buffer is off and ignored, even though I've done everything I can find to enable it (which admittedly isn't much, but it's supposed to be simple... not to mention the default).
Before the models are rendered:
global.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
Somewhere earlier:
graphics.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;
graphics.ApplyChanges();
The models are rendered using a vertex/index list with device.DrawUserIndexedPrimitives and a BasicEffect.
It comes out like this:
The purple object is very far from the camera, and is drawn 1st.
The gray object is very near the camera, and is drawn 2nd.
The blue object is medium-distance to the camera and is drawn 3rd.
The gray object is rendering behind the blue object. That is correct if you're going by draw order, but I want it sorted by distance from the camera (using the depth buffer), in which case the gray object should draw in front of the blue object.
(And, no, just quickly sorting them manually, while it may provide a temporary solution, is not the way to fix this problem)
Update: This is directly related to the fact that I'm rendering onto a RenderTarget2D instead of straight onto the screen. (If I render on-screen instead of to the rendertarget, the depth is calculated correctly. The rendertarget is needed for other parts of the program... or an equivalent system.)
Well I found what I missed on my own after searching randomly:
A RenderTarget2D has its DepthBuffer disabled by default.
I just had to set the DepthFormat on the render target I was drawing to.
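For reference, the fix amounts to using the longer RenderTarget2D constructor (XNA 4.0; width/height are whatever your render target already uses):

    // Passing a DepthFormat here is what enables the depth buffer - the short
    // constructor defaults to DepthFormat.None.
    renderTarget = new RenderTarget2D(
        GraphicsDevice,
        width, height,
        false,                        // no mipmaps
        SurfaceFormat.Color,
        DepthFormat.Depth24Stencil8); // matches PreferredDepthStencilFormat above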
Knew I missed something obvious...
(I'll green-check-mark this in 4 hours when the site lets me.)
Edit: that is, unless, in that time, somebody posts a nice list of everywhere XNA depth-buffer data could/should be set, to better help people who might find this topic via Google.
I'm making a game in C# and XNA, and I was trying to come up with a method to render massive terrains without using a tremendous amount of memory or exceeding the poly limit hard-coded into XNA.
My solution so far is to create a massive heightmap that is loaded into memory at the beginning of the game, in the initialization phase. Then, terrain is only generated nearest to the camera. This is accomplished by projecting a triangle whose apex is the character and whose other two vertices extend to the sides of the character's viewing area. Then, all the pixels inside that triangle on the heightmap are rendered and drawn into the game, thus only rendering what is seen.
The problem is this: I've found (I think - I can't test until I get terrain rendering working) the three vertices of the triangle. Now I need a list of the coordinates of every single pixel inside that triangle - whole numbers only, because I just need a list of pixels to render.
I know it sounds a little confusing, so here's the gist of it:
I have an image, and I project a triangle onto that image. The only thing I know about that triangle are the three vertices. I need a list of the pixels inside that triangle.
I've been Googling around for maybe 20 minutes now, and I figured I might as well go ahead and post something here, since what I'm trying to do isn't all that common. If I find an answer, I'll be sure to post it here.
But until then, can anyone tell me how to accomplish this?
Edit: A formula, please. If you can provide a formula or algorithm, and an explanation, that would be just perfect.
Edit: I've posted a new question, as I've ditched this method of rendering large terrains. The question is here.
Start here:
http://mathworld.wolfram.com/TriangleInterior.html
One of the non-trivial problems, not mentioned there, that you have to deal with is pixelization along the boundary.
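To make that concrete, here is a minimal sketch of the bounding-box-plus-sign-test approach (using XNA's integer Point type; treating an edge function of exactly zero as inside is one way to resolve that boundary question):

    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    static class TriangleRaster
    {
        // Twice the signed area of (p0, p1, p); the sign tells which side of
        // the edge p0->p1 the point (px, py) lies on.
        static long Edge(Point p0, Point p1, int px, int py)
        {
            return (long)(p1.X - p0.X) * (py - p0.Y)
                 - (long)(p1.Y - p0.Y) * (px - p0.X);
        }

        public static List<Point> PixelsInTriangle(Point a, Point b, Point c)
        {
            var result = new List<Point>();
            int minX = Math.Min(a.X, Math.Min(b.X, c.X));
            int maxX = Math.Max(a.X, Math.Max(b.X, c.X));
            int minY = Math.Min(a.Y, Math.Min(b.Y, c.Y));
            int maxY = Math.Max(a.Y, Math.Max(b.Y, c.Y));

            // Walk the triangle's bounding box and keep the points whose three
            // edge functions agree in sign.
            for (int y = minY; y <= maxY; y++)
                for (int x = minX; x <= maxX; x++)
                {
                    long e0 = Edge(a, b, x, y);
                    long e1 = Edge(b, c, x, y);
                    long e2 = Edge(c, a, x, y);
                    // Inside if all non-negative or all non-positive
                    // (handles either winding order).
                    if ((e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                        (e0 <= 0 && e1 <= 0 && e2 <= 0))
                        result.Add(new Point(x, y));
                }
            return result;
        }
    }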
I am making a WPF program that lets the user modify data graphically in 3D. To give the user the option of selecting multiple graphical objects at the same time, I want to implement a selection rectangle (just like the one in Windows Explorer). A common feature in programs like this is to offer two different behaviors for the selection rectangle, with the user choosing which one to use:
Any object that is partially or completely inside the rectangle is selected.
Only objects that are completely inside the rectangle are selected.
The 2nd method is straightforward: use the bounding box of each object and check whether it is inside the rectangle. The 1st one, on the other hand, seems to be quite some work. All my graphical objects are complicated 3D figures and can be rotated by the user in any way. At the moment I cannot find any way other than checking whether any of the triangles in the mesh of any of the objects intersect my 2D rectangle, and that can be quite time-consuming.
I have little experience with WPF 3D, but I have done this before in OpenGL. There I could tell OpenGL to draw a specific area of the screen and then collect a list of the objects that were visible in that area. All I needed to get the functionality I wanted was about 5 lines of code.
I guess my question is this:
Is there a way to do this with WPF 3D, similar to the OpenGL approach?
If not, is there any other smart way to find all the objects (Visual3D) in a viewport that are partially behind a 2D rectangle?
I refuse to believe I am the only one with this kind of problem, so I hope a clever mind can point me in the right direction.
Regards,
Sverre
Thank you for your answer!
The 2D rectangle is just in front of the camera and extends infinitely forward. I want to get any object that is partially or completely inside that frustum.
The camera we are using is an orthographic or perspective projection camera (System.Windows.Media.Media3D.ProjectionCamera). The reason we are not using the matrix camera is that we are using a 3rd party tool that does not support the matrix camera. But I am sure there is a way to get the matrix from a projection camera as well, so that is hopefully not the problem.
In theory your solution sounds like just what we need, but I am not sure how to proceed. Do you have any links to sample-code, or can you give some more hints on how to actually implement this?
By the way: since we are working with WPF, we do not have direct access to DirectX - at least, that's what we have concluded after some research. You mention using the z-buffer, which we haven't been able to access through WPF. If you know a way to access the z-buffer, it would be greatly appreciated! This is off-topic, but we have struggled to disable the z-buffer for some time and have given up...
Best regards,
Sverre
Is your intersection region a 2D rectangle, or a frustum based at a 2D rectangle and extending infinitely forward (or perhaps to some clipping limit)? If it can be construed as a viewing frustum, then you can leverage the existing capabilities of the graphics system: render the scene using a camera View and Projection that correspond to your originating rectangle, with all lighting and shading disabled and colors chosen specifically to 'tag' the different objects in your scene. This means you can use the graphics hardware to perform the clipping/projection as a 'rendering' operation, then simply enumerate the pixel values as 'tags' to determine the objects present in the rectangular view.
If you need to restrict selection to an actual 2D slice (or a very shallow frustum), you can use the Z-buffer (if you can get access to it) to exclude tagged pixels that are outside the Z range of your desired selection frustum.
The nice thing about this approach is that you probably already have the camera matrix (it's the same one used for your selection window) and only need to change the Projection matrix to a subset of the viewing window.
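A rough sketch of that tag-color pass in WPF terms (hedged - `models`, `viewport3D`, and the selection bounds are assumptions; you'd restore the real materials afterwards, and anti-aliased edge pixels can blend tag colors, so count only exact matches):

    using System.Collections.Generic;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Media.Media3D;

    // Render each object in a flat, light-independent tag color, then read the
    // pixels back and see which tags appear inside the selection rectangle.
    var rtb = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);

    for (int i = 0; i < models.Count; i++)
    {
        // EmissiveMaterial shows its color regardless of scene lighting.
        Color tag = Color.FromRgb((byte)(i + 1), 0, 0); // index in the red channel
        models[i].Material = new EmissiveMaterial(new SolidColorBrush(tag));
    }

    rtb.Render(viewport3D);
    var pixels = new byte[width * height * 4];
    rtb.CopyPixels(pixels, width * 4, 0);

    var hits = new HashSet<int>();
    for (int y = selTop; y < selBottom; y++)
        for (int x = selLeft; x < selRight; x++)
        {
            byte red = pixels[(y * width + x) * 4 + 2]; // Pbgra32 order: B,G,R,A
            if (red != 0)
                hits.Add(red - 1); // back to the object's index
        }
    // hits now holds the indices of every model visible in the rectangle.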
A 'smart' way would be to transform the rectangle into a box (a frustum) using the camera's matrix, and then intersect all the objects with that box.
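A sketch of that intersection test, assuming you have already built the box by unprojecting the rectangle's corners at the near and far planes and derived its six inward-facing planes:

    using System.Windows.Media.Media3D;

    struct Plane3D
    {
        public Vector3D Normal; // points toward the inside of the box
        public double D;        // plane equation: Normal . p + D = 0
    }

    static bool BoundsIntersectBox(Rect3D bounds, Plane3D[] boxPlanes)
    {
        foreach (Plane3D plane in boxPlanes)
        {
            bool anyCornerInside = false;
            for (int i = 0; i < 8; i++)
            {
                // Enumerate the 8 corners of the object's bounding box.
                var corner = new Point3D(
                    (i & 1) == 0 ? bounds.X : bounds.X + bounds.SizeX,
                    (i & 2) == 0 ? bounds.Y : bounds.Y + bounds.SizeY,
                    (i & 4) == 0 ? bounds.Z : bounds.Z + bounds.SizeZ);
                if (Vector3D.DotProduct(plane.Normal, (Vector3D)corner) + plane.D >= 0)
                {
                    anyCornerInside = true;
                    break;
                }
            }
            // Every corner outside this plane => the object cannot intersect.
            if (!anyCornerInside)
                return false;
        }
        return true; // conservative: may accept near-misses at the box's corners
    }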