Currently I am working on a Kinect virtual jewel shop app in which the user can choose jewels and see how they look.
The app started with 2D images, but it does not look realistic.
So can anyone suggest ideas for the following queries?
How can I make the 2D images look realistic without moving to 3D?
I chose 3D for fitting the jewels to the neck, since we can skew and rotate the images in 3D. Can the same thing be accomplished in 2D? If so, how?
If we do go forward with 3D, is there any tool available to convert the 2D images into 3D?
I am completely new to 3D objects. Please tell me how we can fit a 3D object to the skeleton data. Is there a standard format for 3D models?
The current application is in WPF 4.0, so how can we use a 3D object in WPF?
I don't think it can look realistic without 3D. But you can generate a 3D mesh in the shape of the jewel
(see this image to get the idea: http://fenicsproject.org/_images/hollow_cylinder.png)
and put your 2D images onto the mesh as a texture.
Check Riemer's tutorials:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Textures.php
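Since the app is already WPF 4.0, here is a minimal, hedged sketch of that "2D image as a texture on a mesh" idea in WPF 3D. The flat quad and the image path are placeholders; a real necklace would be a curved strip of triangles following the neck.

// Minimal sketch: map a 2D jewel image onto a simple quad mesh in WPF 3D.
// "jewel.png" would be a placeholder path; replace the flat quad with a curved strip.
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

static class JewelMesh
{
    public static GeometryModel3D CreateTexturedQuad(string imagePath)
    {
        var mesh = new MeshGeometry3D();

        // four corners of a flat quad in the XY plane
        mesh.Positions = new Point3DCollection
        {
            new Point3D(-1, -1, 0),
            new Point3D( 1, -1, 0),
            new Point3D( 1,  1, 0),
            new Point3D(-1,  1, 0)
        };

        // texture coordinates: stretch the whole image across the quad
        mesh.TextureCoordinates = new PointCollection
        {
            new Point(0, 1), new Point(1, 1), new Point(1, 0), new Point(0, 0)
        };

        // two counter-clockwise triangles
        mesh.TriangleIndices = new Int32Collection { 0, 1, 2, 0, 2, 3 };

        var brush = new ImageBrush(new BitmapImage(new Uri(imagePath, UriKind.RelativeOrAbsolute)));
        return new GeometryModel3D(mesh, new DiffuseMaterial(brush));
    }
}

Put the returned model in a ModelVisual3D inside a Viewport3D and add a light (an AmbientLight is enough), otherwise the DiffuseMaterial renders black. A Transform3DGroup (scale, rotate, translate) driven by the Kinect skeleton's neck and shoulder joints can then position it on the user.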
In my (limited) experience of 3D programming, we usually set up a 3D model with materials and a texture, then set up the light and camera, and finally get a 2D view through the camera.
But I need to reverse this procedure: given a 2D view image, a camera setup, and a 3D model without a texture, I want to find the texture for the model such that it produces the same 2D view. To simplify, ignore the light and materials and assume they are uniform.
Although it is not easy, I think I could write a program to do this. But is there an existing wheel out there so I don't have to reinvent it? (C#, WPF 3D or OpenCV)
Helix 3D Toolkit for WPF has an interesting example called "ContourDemo". If you download the full source you get a very comprehensive example app showcasing its capabilities.
This particular example uses a number of helper methods to generate a contour mesh from a given 3D model file (.3ds, .obj, .stl).
With some extension this could possibly be the basis for reverse-calculating the UV mapping.
Even if there is nothing suitable to perform the core requirement (extracting the texture), it is a great toolkit for displaying your original files and any outputs you have generated.
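If the camera setup is fully known, one hedged way to get the texture is not to search for it at all but to project every mesh vertex through that camera and use the resulting screen position as that vertex's texture coordinate on the captured 2D view. A sketch under those assumptions (pinhole perspective camera, given vertical field of view; occluded surfaces simply receive wrong colours):

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Media3D;

static class ViewProjectionUv
{
    public static PointCollection ProjectToUv(
        Point3DCollection positions,
        Point3D cameraPosition, Vector3D look, Vector3D up,
        double verticalFovDegrees, double aspectRatio)
    {
        look.Normalize();
        Vector3D right = Vector3D.CrossProduct(look, up);
        right.Normalize();
        Vector3D trueUp = Vector3D.CrossProduct(right, look);

        double f = 1.0 / Math.Tan(verticalFovDegrees * Math.PI / 360.0); // cot(fov / 2)
        var uvs = new PointCollection();

        foreach (Point3D p in positions)
        {
            Vector3D d = p - cameraPosition;
            double x = Vector3D.DotProduct(d, right);   // camera-space coordinates
            double y = Vector3D.DotProduct(d, trueUp);
            double z = Vector3D.DotProduct(d, look);    // depth along the view axis

            double ndcX = (f / aspectRatio) * x / z;    // perspective divide -> [-1, 1]
            double ndcY = f * y / z;

            // normalized device coordinates -> texture space, (0,0) at the image's top-left
            uvs.Add(new Point(0.5 * (ndcX + 1.0), 0.5 * (1.0 - ndcY)));
        }
        return uvs;
    }
}

Assign the result to MeshGeometry3D.TextureCoordinates and use the captured view image as an ImageBrush on a DiffuseMaterial. Note that WPF's PerspectiveCamera.FieldOfView is the horizontal angle, so convert if you take the value from there.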
We're making a sign language translator using a Kinect 1.0 device for my undergrad final-year project.
So far we have achieved recognizing gestures in 2D using the skeleton APIs in the Kinect SDK and applied the DTW algorithm to them.
We also tracked fingers and distinguished how many fingers are shown in the frame by contouring and applying a convex hull to the contour. We used C# and Emgu CV to achieve this.
Now we're stuck on how to transform the data into 3D coordinates. What I don't get is:
What will the 3D visualization look like? For now we just use the depth stream and apply a skin classifier to it to show only the skin parts as white pixels and the rest of the objects as black pixels, and we show the contoured and convex-hulled area in the color stream. For 3D, will we use the same depth and color streams? If yes, how do we transform the data and coordinates into 3D?
For gestures that involve fingers touching the nose, how will I isolate the contoured area so it does not include the whole face, and tell which finger touches which side of the nose? Is this where 3D comes in?
Which APIs and libraries are there that can help us in C#?
Extracted Fingers after Contouring and Convex Hull
The Kinect has support for creating a depth map using an infrared laser: it projects an infrared grid and measures the distance for each point in the grid. It seems you're already using the depth info from this grid.
For converting to 3D you should indeed use the depth info. Some basic trigonometry will transform the depth map into 3D (x, y, z) coordinates, and the color stream from the camera can then be mapped onto those points.
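For the back-projection itself, a hedged sketch (the focal length is an approximate Kinect v1 value, not a calibrated one; where your SDK version provides a CoordinateMapper, that is the more accurate route and it can also map the colour stream onto the same points):

// Back-project a depth pixel (u, v) into camera-space metres with a pinhole model.
public struct CameraSpacePoint3
{
    public float X, Y, Z;
}

public static class DepthBackProjection
{
    const float FocalLengthPx = 580f;           // approximate depth-camera focal length, in pixels
    const float CenterX = 320f, CenterY = 240f; // principal point of a 640x480 depth frame

    public static CameraSpacePoint3 ToCameraSpace(int u, int v, int depthMillimetres)
    {
        float z = depthMillimetres / 1000f;     // depth in metres
        return new CameraSpacePoint3
        {
            X = (u - CenterX) * z / FocalLengthPx,  // right of the optical centre
            Y = (CenterY - v) * z / FocalLengthPx,  // up (image rows grow downwards)
            Z = z
        };
    }
}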
Detecting whether a finger is touching the nose is a difficult issue. Since the grid density of the Kinect is not very high, 3D probably won't help you here. I would suggest using edge detection (e.g. the Canny algorithm) with contour recognition on the camera images to detect whether a finger is in front of the face. Testing whether the finger actually touches the nose or is merely close to it is the real challenge.
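As a rough illustration of that edge-detection idea, here is a sketch using an Emgu CV 2.x-style API (the method names and thresholds are from memory, so treat them as assumptions and adjust to your version):

using Emgu.CV;
using Emgu.CV.Structure;

static class FingerOverFace
{
    // Edge map of the face region: a finger held in front of the face still produces
    // edges even when its depth is almost the same as the face behind it.
    public static Image<Gray, byte> EdgeMap(Image<Gray, byte> faceRegion)
    {
        Image<Gray, byte> smoothed = faceRegion.SmoothGaussian(5); // reduce noise first
        // feed the result into your existing contour / convex-hull step to pick out
        // the finger silhouette in front of the face
        return smoothed.Canny(100, 60);                            // illustrative thresholds
    }
}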
I'm currently facing a problem with WPF 3D using C#. To put it simply, I need to animate some simple mechanical parts by moving only two of them (one at a time or both together). Here is a simple drawing depicting the situation:
So by translating the P1 and/or P2 parts vertically, the whole assembly needs to move accordingly.
I guess it may be possible to do this by computing a lot of angles and applying numerous transformations, but that is not my goal.
Instead I would imagine something like attaching the parts together by means of a pivot point.
What is the preferred way to do this and preview it using WPF 3D?
WPF 3D, Ogre, Mogre, OpenTK... are libraries for display. They have nothing to do with mechanical constraint calculations, but they go well with physics engines.
WPF 3D is a subset of WPF dedicated to 3D drawing. If you only need 2D, then plain WPF is enough.
As your project looks 2D, you might want to have a look at Farseer Physics, which is a port of Box2D. The feature you need is called joints. Both libraries target 2D game development, but they can be used for simple kinematic animations, and Farseer Physics works very well with WPF.
It's a simple problem for any 2D kinematics package.
http://books.google.com/books?id=IGtIWmM2GWIC&pg=PR12&lpg=PR12&dq=c%23+kinematics&source=bl&ots=eCJZLq_i6R&sig=wC42cNOdtw4VX9ElTk4IBDAYtzc&hl=en&sa=X&ei=3YkXU4u1EeHu2wXum4GYDA&ved=0CFsQ6AEwBQ#v=onepage&q=c%23%20kinematics&f=false
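That said, if the linkage really is just one rigid bar whose endpoints follow P1 and P2 in a single plane, you can also pose it directly with WPF 3D transforms and a single Atan2. A hedged sketch, assuming a unit-length bar model built along +X and centred at the origin (my own convention, not something from your scene):

using System;
using System.Windows.Media.Media3D;

static class LinkagePose
{
    // Returns the transform that stretches, rotates and moves a unit bar so that its
    // ends sit on the two pivot points (motion assumed to stay in the XY plane).
    public static Transform3D PoseBar(Point3D pivotA, Point3D pivotB)
    {
        Vector3D d = pivotB - pivotA;
        double length = d.Length;
        double angleDeg = Math.Atan2(d.Y, d.X) * 180.0 / Math.PI;

        var t = new Transform3DGroup();
        t.Children.Add(new ScaleTransform3D(length, 1, 1));                 // stretch to the pivot distance
        t.Children.Add(new RotateTransform3D(
            new AxisAngleRotation3D(new Vector3D(0, 0, 1), angleDeg)));     // align with the pivots
        t.Children.Add(new TranslateTransform3D(
            (pivotA.X + pivotB.X) / 2,
            (pivotA.Y + pivotB.Y) / 2,
            (pivotA.Z + pivotB.Z) / 2));                                    // centre between them
        return t;
    }
}

Reassign the returned transform to the bar model's Transform whenever P1 or P2 moves. For anything more complicated than a single link, the physics-engine joints suggested above are the saner route.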
I'm thinking about making a 3D point-and-click game. Is it possible to make one in WinForms or WPF? I don't need any physics; all I need is for the application to render 3D objects. I know that I could use XNA, but then I would have to relearn almost everything. My third approach would be to make the scenes in a 3D game engine, take a screenshot, and load it as an image. Any suggestion would be appreciated.
There's a big difference between a 3D game and just letting players interact with a rendered image.
Your approach of loading a pre-rendered image is possible in both WinForms and WPF. You would just need to capture click events on the image and check the location against your list of active areas, then handle whatever needs to be done, i.e. move to the next area, activate an item, etc.
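As a hedged sketch of that click-checking idea in WPF (the HotSpot class and HandleAction are illustrative names, not an existing API):

using System.Collections.Generic;
using System.Windows;
using System.Windows.Input;

public class HotSpot
{
    public Rect Area;       // clickable region of the rendered image
    public string Action;   // e.g. "openDoor", "goToNextRoom"
}

public partial class SceneView
{
    private readonly List<HotSpot> hotSpots = new List<HotSpot>();

    // hook this up to the Image control's MouseLeftButtonDown event
    public void OnSceneClick(object sender, MouseButtonEventArgs e)
    {
        Point click = e.GetPosition((IInputElement)sender);
        foreach (HotSpot spot in hotSpots)
        {
            if (spot.Area.Contains(click))
            {
                HandleAction(spot.Action);   // move to the next area, activate an item, ...
                break;
            }
        }
    }

    private void HandleAction(string action) { /* game logic goes here */ }
}

If the image is stretched on screen, remember to scale the click position back to image pixels before testing it against the hot-spot rectangles.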
Edit from comment:
It's not so much which is friendlier. You can host an XNA viewport in WinForms/WPF. It's more about how you want your game to work. If you never have moving 3D scenes, XNA is overkill and images will work just fine.
If you want dynamic scenes, you'll need to be able to render them on the fly; then XNA makes more sense. It is a lot more work, though, compared to just displaying images.
If you just want to show pre-rendered 3D images in your game, why not create them using a real 3D graphics tool, such as 3D Studio Max or Maya (or a free one, such as Blender)? It sounds like there's no need to actually render the 3D scenes in a game engine at all.
I am going to make a game like the XNA example game "Platformer1" which comes with XNA, but I need longer levels that don't fit on the screen (like Super Mario levels). How can I manage this kind of level? Do I need to use a 2D camera that follows the sprite? If I do it that way, how can I load the level? I am a bit confused and I am not sure whether I have explained my problem clearly. I hope someone can help.
The tutorial based on the Platformer Starter Kit on MSDN has a step, Adding a Scrolling Level, which guides you through the creation of longer levels. The tutorial is very detailed; I highly recommend it.
I couldn't find the tutorial in the section for XNA Game Studio 4.0, but the differences should be minimal. According to the comment at the bottom of the page, all you need to change is to replace
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None, cameraTransform);
with
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullCounterClockwise, null, cameraTransform);
in the tutorial code.
If you want to create a side-scrolling game, then I would look into parallax scrolling. A quick Google/Bing search will turn up lots of tutorials. Another useful tip is to search YouTube for XNA videos, as a lot of posters share their source code.
Here is a link to Microsoft's Parallax Scrolling sample.
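The core of parallax scrolling is small enough to sketch here (the layer fields and the scroll factors are illustrative):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class ParallaxLayer
{
    public Texture2D Texture;
    public float Factor;       // 0 = fixed like the sky, 1 = moves with the world

    public void Draw(SpriteBatch spriteBatch, Vector2 cameraPosition)
    {
        // offset the layer by only a fraction of the camera movement, so distant
        // layers appear to scroll more slowly than the foreground
        Vector2 position = -cameraPosition * Factor;
        spriteBatch.Draw(Texture, position, Color.White);
    }
}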
Sounds like you have a few problems ahead of you.
But I need longer levels that don't fit on the screen (like Super Mario levels). How can I manage this kind of level?
There are several ways to do this, but a fairly easy way would be to have a 2D array (or sparse array, depending on how large your levels are) of a class named Tile that stores info about the tile image, animation, and whatever else you need.
Yes, you'll probably want a "camera". This can be as simple as only drawing a certain range of that array, or a more featured camera that uses transforms to zoom out and translate across your level.
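A rough sketch of that tile array plus visible range, with illustrative member names and a fixed tile size:

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Tile
{
    public Texture2D Image;    // plus animation info, collision flags, ...
}

public class Level
{
    private Tile[,] tiles;     // [column, row]
    private const int TileSize = 32;

    public void Draw(SpriteBatch spriteBatch, Vector2 cameraPosition, int screenWidth, int screenHeight)
    {
        // visible tile range, clamped to the level bounds
        int firstCol = Math.Max(0, (int)(cameraPosition.X / TileSize));
        int lastCol  = Math.Min(tiles.GetLength(0) - 1, (int)((cameraPosition.X + screenWidth) / TileSize));
        int firstRow = Math.Max(0, (int)(cameraPosition.Y / TileSize));
        int lastRow  = Math.Min(tiles.GetLength(1) - 1, (int)((cameraPosition.Y + screenHeight) / TileSize));

        for (int x = firstCol; x <= lastCol; x++)
            for (int y = firstRow; y <= lastRow; y++)
                if (tiles[x, y] != null)
                    spriteBatch.Draw(tiles[x, y].Image,
                                     new Vector2(x * TileSize, y * TileSize) - cameraPosition,
                                     Color.White);
    }
}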
Hopefully this will help get you started.
I've done a decent amount of work in XNA, and from my experience there are two ways to draw a 2D scene:
1) Strictly 2D. This method is much easier, but it has a few limitations. There is no "camera" per se; what you do is move everything underneath the fixed 2D "camera". I say "camera" in quotes because the camera is fixed (as far as I know). The upside is that it's easy; the downside is that you can't easily zoom in or out or do other camera effects.
2) 2D in 3D. Set up a 3D world with a 2D plane. This is more flexible, but also more challenging to work with because you will need to set up a 3D world and a 3D camera. If this is your first attempt at making a game, I would highly recommend against this method.
I'm really only familiar with the strictly 2D method. You would want a list of map objects that each have a 2D coordinate, and you would also want to store which section of the map you are looking at; I do this with a Rectangle or Vector2. That value moves forward as the character moves. You can then take a map object's coordinate and subtract the (X, Y) of the top-left of what you are looking at to determine the object's screen position. So:
float screenX = myMapObject.X - focusPoint.X; // object position relative to the top-left of the view
float screenY = myMapObject.Y - focusPoint.Y;
Another thing to note: use floats or Vector2/Vector3 to store locations. You may not think it's required now, but it will be down the line.
It might be overkill, but my SourceForge project uses XNA to draw a strictly 2D scene that you can move around: http://sourceforge.net/projects/asteroidoutpost/
I hope this helps.
Have a look at Nick Gravelyn's tutorials. They helped me a ton when I was first starting out, and they are really worth a look for learning a lot about 2D games.
All the videos are now on YouTube here.