What I want to do is draw and animate a skeleton (like we can do with the sensor stream) from saved data, so I have the x, y and z values of every joint.
I have searched a lot, but I can't find anything that helps.
I can convert the data to a joint collection and associate it with a skeleton, but then what? I don't know how to map the skeleton to a ColorImagePoint.
Maybe I have to create a DepthImageFrame?
Thank you so much!
Look into the Kinect Toolbox. It offers recorder and playback functionality which may match your needs as-is, or provide you with a starting point:
http://kinecttoolbox.codeplex.com/
If you roll your own, I'm not sure why you would need to map it to a color or depth frame, unless I'm missing a requirement of what you are doing.
Have a look at the SkeletonBasics example in the Microsoft Kinect for Windows SDK Toolkit examples. It will show you how to draw a skeleton manually based on skeleton data. From there, you could look into doing the following for your application:
Set up your skeleton tracking callback
At each skeleton frame (or less often, if you don't need that many), save the joint positions
Also save a 0-based timestamp
Save the data to a format of your choice when complete
During playback, read in your recorded data and start a timer. When the timer hits the next skeleton frame's stored timestamp, update your drawn skeleton on the screen (using the SkeletonBasics example app as guidance).
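A minimal recording/playback sketch along those lines, assuming the Kinect for Windows SDK v1 types (Skeleton, Joint, JointType, SkeletonPoint); the RecordedFrame and SkeletonRecorder classes are just illustrative names, not SDK types:

    using System.Collections.Generic;
    using System.Diagnostics;
    using Microsoft.Kinect;

    // Hypothetical container for one recorded skeleton frame.
    public class RecordedFrame
    {
        public long TimestampMs;                                   // 0-based timestamp
        public Dictionary<JointType, SkeletonPoint> Joints =
            new Dictionary<JointType, SkeletonPoint>();
    }

    public class SkeletonRecorder
    {
        private readonly List<RecordedFrame> frames = new List<RecordedFrame>();
        private readonly Stopwatch clock = Stopwatch.StartNew();

        // Call this from your SkeletonFrameReady handler for the tracked skeleton.
        public void Record(Skeleton skeleton)
        {
            var frame = new RecordedFrame { TimestampMs = clock.ElapsedMilliseconds };
            foreach (Joint joint in skeleton.Joints)
                frame.Joints[joint.JointType] = joint.Position;    // x, y, z in skeleton space
            frames.Add(frame);
        }

        // During playback, return the frame whose timestamp has most recently passed.
        public RecordedFrame FrameAt(long elapsedMs)
        {
            RecordedFrame current = null;
            foreach (RecordedFrame f in frames)
            {
                if (f.TimestampMs > elapsedMs) break;
                current = f;
            }
            return current;
        }
    }

Regarding the original mapping question: if you do need color-space coordinates for drawing, recent SDK 1.x versions expose KinectSensor.CoordinateMapper (for example MapSkeletonPointToColorPoint), so you can map a saved SkeletonPoint directly without constructing a DepthImageFrame.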
Related
I am implementing an eye tracker using Emgu CV (the OpenCV C# wrapper). So far I have been able to detect the iris center and eye corner accurately.
As the next step I want to get the screen coordinate the user is focusing on (also known as the gaze point). As a beginner to image processing, I am completely unaware of gaze mapping and gaze estimation.
I would be thankful if you could provide any code snippets or algorithms to perform gaze mapping and retrieve the gaze coordinate on screen.
Thanks in advance
If you want to research the field in more depth, there's this interesting piece of research called "EyeTab: Model-based gaze estimation on unmodified tablet computers". It might have some of the information you want, or at least help you understand the field more.
http://www.cl.cam.ac.uk/research/rainbow/projects/eyetab/
(also GitHub)
How do I track points in a video in real time, like in the video below?
https://www.youtube.com/watch?v=jg6Nz6BfoSQ
I managed to use the optical flow method to get this output for my video, but I couldn't find a way to do point tracking with Emgu CV. Can someone suggest what I should do?
In the YouTube video he used C++ as the language. Does the language affect the real-time response of the system?
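In case it helps as a starting point, here is a rough sparse point-tracking sketch using Emgu CV's pyramidal Lucas-Kanade optical flow. It assumes an Emgu CV 3.x-style CvInvoke.CalcOpticalFlowPyrLK overload that takes PointF[] arrays (check the exact signature for your Emgu CV version) and that you already have the points you want to track from the previous frame:

    using System.Collections.Generic;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    public static class LkTracker
    {
        // prevGray, currGray: consecutive grayscale frames; prevPoints: points chosen in the previous frame.
        public static PointF[] TrackPoints(Image<Gray, byte> prevGray, Image<Gray, byte> currGray, PointF[] prevPoints)
        {
            PointF[] currPoints;
            byte[] status;        // 1 if the corresponding point was found in the current frame
            float[] trackError;

            CvInvoke.CalcOpticalFlowPyrLK(
                prevGray, currGray, prevPoints,
                new Size(21, 21),                  // search window size
                3,                                 // pyramid levels
                new MCvTermCriteria(30, 0.01),     // stop after 30 iterations or 0.01 accuracy
                out currPoints, out status, out trackError);

            // Keep only the points that were successfully tracked.
            List<PointF> kept = new List<PointF>();
            for (int i = 0; i < currPoints.Length; i++)
                if (status[i] == 1) kept.Add(currPoints[i]);
            return kept.ToArray();
        }
    }

As for the language: Emgu CV is a thin wrapper over native OpenCV, so the heavy lifting runs as native code either way; C++ versus C# mostly affects the overhead around the calls, not whether real-time tracking of a modest number of points is feasible.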
You can get the floor coordinates (http://msdn.microsoft.com/en-us/library/hh973078.aspx); there is an API function for it. You can also get the raw depth stream data and determine the 3D coordinates of a point. Building a full 3D point cloud is a little expensive and is usually done on the GPU for real-time applications. The problem is that you have to track objects; for tracking you can check OpenCV and combine it with the Kinect raw data.
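For reference, a hedged sketch of reading that floor plane from a skeleton frame with the Kinect for Windows SDK v1 (it comes back as the coefficients A, B, C, D of the plane Ax + By + Cz + D = 0) and measuring how far a 3D skeleton-space point is above the floor; the FloorHelper class is just an illustrative wrapper:

    using System;
    using Microsoft.Kinect;

    public class FloorHelper
    {
        // Plane coefficients of Ax + By + Cz + D = 0, updated every skeleton frame.
        private Tuple<float, float, float, float> floorPlane;

        // Call from your SkeletonFrameReady handler with the opened SkeletonFrame.
        public void Update(SkeletonFrame frame)
        {
            if (frame != null)
                floorPlane = frame.FloorClipPlane;   // floor estimate in skeleton space (meters)
        }

        // Signed height of a 3D skeleton-space point above the estimated floor.
        // The SDK documents the coefficients as normalized, so this gives meters directly.
        public float HeightAboveFloor(SkeletonPoint p)
        {
            if (floorPlane == null) return float.NaN;
            return floorPlane.Item1 * p.X + floorPlane.Item2 * p.Y +
                   floorPlane.Item3 * p.Z + floorPlane.Item4;
        }
    }

To turn a raw depth pixel into such a 3D point, SDK 1.6 and later also expose KinectSensor.CoordinateMapper (for example MapDepthPointToSkeletonPoint).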
I am working on a project and so far I can read analog signals from my sensors and, with the PIC's ADC, convert them to digital data. Via USB I can then transfer all the data to my Windows application (user interface), which is written in C#. When I look into my input buffer, all the data is there.
My problem is what comes after these steps: how can I draw this data as a continuous signal? I use ZedGraph and I want to observe my sensor data as a continuous signal. I know how to draw something using ZedGraph; I have even drawn my input buffer once. But I still cannot manage to draw it as a continuous signal.
Which architecture is more suitable for me? Should I use a circular buffer?
Can I use the PerformanceCounter class for my custom events, such as drawing my sensor data, or is this class only useful for system events?
You could create a Performance Counter; that is easy to do.
Drawing the graph yourself is a bit harder:
(I take it that by "continuous signal" you mean a line instead of points.) Just draw a line from the previous position to the current position.
A circular buffer might help (although I think the oldest data should shift back out), but you still need to keep track of the previous position so you know how to draw the line. Make sure you shift the buffer in proportion to the time elapsed.
An alternative would be to push the data to an Excel sheet and use the charts there.
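A minimal sketch of that draw-from-previous-to-current idea with plain GDI+, keeping the newest samples in a fixed-size queue that acts as the circular buffer (the SignalPanel class and its OnSample entry point are illustrative, not part of any library):

    using System.Collections.Generic;
    using System.Drawing;
    using System.Windows.Forms;

    public class SignalPanel : Panel
    {
        private readonly Queue<float> samples = new Queue<float>();
        private const int Capacity = 500;                 // how many samples fit across the panel

        public SignalPanel() { DoubleBuffered = true; }   // avoids flicker on repaint

        // Call this whenever a new sample is pulled out of the input buffer.
        public void OnSample(float value)
        {
            samples.Enqueue(value);
            while (samples.Count > Capacity) samples.Dequeue();   // drop the oldest samples
            Invalidate();                                         // trigger a repaint
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            float prevX = 0, prevY = Height / 2f;
            int i = 0;
            foreach (float v in samples)
            {
                float x = i * (Width / (float)Capacity);
                float y = Height / 2f - v;                // naive scaling; adjust for your signal range
                if (i > 0) e.Graphics.DrawLine(Pens.Lime, prevX, prevY, x, y);
                prevX = x; prevY = y;
                i++;
            }
        }
    }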
Can't you just assign a timestamp to each data point in your sample?
Then you should be able to easily plot the sample using time as the X-axis and value as the Y-axis with ZedGraph; you only need to adjust some parameters like line smoothing, etc.
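A hedged ZedGraph sketch along those lines, using ZedGraph's RollingPointPairList as the circular buffer and seconds since start as the X value; zedGraphControl1 is assumed to be a ZedGraphControl already placed on the form:

    using System;
    using System.Drawing;
    using ZedGraph;

    public partial class MainForm : System.Windows.Forms.Form
    {
        // Keeps only the newest 1200 samples -- ZedGraph's own circular buffer.
        private readonly RollingPointPairList points = new RollingPointPairList(1200);
        private readonly DateTime start = DateTime.Now;

        // One-time setup, e.g. called from the form constructor after InitializeComponent().
        private void SetupGraph()
        {
            GraphPane pane = zedGraphControl1.GraphPane;
            pane.Title.Text = "Sensor signal";
            pane.XAxis.Title.Text = "Time (s)";
            pane.YAxis.Title.Text = "Value";
            pane.AddCurve("sensor", points, Color.Blue, SymbolType.None);
        }

        // Call this for every sample pulled out of the input buffer.
        private void AddSample(double value)
        {
            points.Add((DateTime.Now - start).TotalSeconds, value);
            zedGraphControl1.AxisChange();     // rescale the axes to the new data
            zedGraphControl1.Invalidate();     // redraw the control
        }
    }

If the samples arrive on a background (USB read) thread, marshal the AddSample call onto the UI thread (for example with Control.BeginInvoke) before touching the control.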
How do I take a 3D model that I created in 3D Studio Max and put it into my WinForms C# program and make it spin? I'd prefer not to use DirectX if possible. I don't want anything complex; I simply want my model to rotate along the X axis. That's it.
Thanks
You should use a 3D rendering engine for C#.
Something like:
http://axiom3d.net/wiki/index.php/Main_Page
http://www.codeproject.com/KB/GDI-plus/exoengine.aspx
http://irrlicht.sourceforge.net/features.html
http://freegamedev.net/wiki/Free,_cross-platform,_real-time_3D_engines
I have never used any of these rendering engines, but for your requirements (letting the user move the object) I think a 3D engine would do. Perhaps it is overkill, though.
If you want it to be dynamic, then the simplest option would be to render out an animation of the object rotating, but make each frame a separate file. Then you just show the correct image based on how the user is dragging the mouse. If the user drags the mouse to the right, then increment the frame and show the next image. If moving to the left, decrement the frame.
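A rough WinForms sketch of that frame-flipping idea, assuming the pre-rendered rotation frames are saved as frame000.png, frame001.png, ... in a folder (the file naming and the 5-pixels-per-frame drag sensitivity are made up for illustration):

    using System.Drawing;
    using System.IO;
    using System.Linq;
    using System.Windows.Forms;

    public class SpinViewer : Form
    {
        private readonly Image[] frames;
        private readonly PictureBox box = new PictureBox { Dock = DockStyle.Fill, SizeMode = PictureBoxSizeMode.Zoom };
        private int frameIndex;
        private int dragStartX;

        public SpinViewer(string frameFolder)
        {
            // Load the pre-rendered rotation frames in name order.
            frames = Directory.GetFiles(frameFolder, "frame*.png")
                              .OrderBy(f => f)
                              .Select(Image.FromFile)
                              .ToArray();
            box.Image = frames[0];
            Controls.Add(box);

            box.MouseDown += (s, e) => dragStartX = e.X;
            box.MouseMove += (s, e) =>
            {
                if (e.Button != MouseButtons.Left) return;
                int delta = (e.X - dragStartX) / 5;              // 5 px of drag per frame step
                if (delta == 0) return;
                dragStartX = e.X;
                frameIndex = ((frameIndex + delta) % frames.Length + frames.Length) % frames.Length;
                box.Image = frames[frameIndex];                  // show the frame for the new angle
            };
        }
    }

For the simple non-interactive spin the question asks about, the same frames can just be advanced by a Timer instead of the mouse.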
For something non-interactive:
Export the animation to an AVI and embed that in your form:
Embedding Video in a WinForms app
It's not really what I'd recommend, but it's an alternative to creating an animated gif.
For something partially interactive (i.e. allowing limited movement):
I've seen QuickTime movies that you can control with the mouse. There's an example on this page. It's not 3D though.
For something fully interactive:
You need a 3D rendering engine of some sort and that does (usually) require DirectX or OpenGL. However, if you're only dealing with simple objects you might (repeat might) get away with a software renderer.
Problem Overview
I am working on a game application and need to be able to implement scrollable maps in Silverlight similar to those found in Google Maps. However, I am unsure as to how to implement this effectively. The following paragraphs provide much additional detail. Any ideas or guidance is greatly appreciated!
Problem Detail
I have been working on a new MMOG (massively multi-player online game). The game will implement a coordinate (x,y) based map. Only a very small fraction (less than 0.1%) of the map will be displayed on the screen at any given time. The player should have the ability to click on the map and drag the mouse to scroll and view map areas which are not presently visible. (This is somewhat similar to Google Maps.)
The map background is made up of a series of stitched (repeating) images. These images are woven together to give the basic appearance of the game's "world". A standard set of additional graphics is then superimposed, as appropriate, on each of the coordinate locations. For example, point (0,0) might be a lake, (0,1) might be a city, and (0,2) might be a forest. The respective images for a lake, a city, and a forest would be superimposed on the background.
It is important to mention that the entire map is NOT stored on the local client machine. Rather, as a player scrolls to or opens a specific location, the appropriate map information is retrieved from the remote game server. It is infeasible for us to build the entire game world map ahead of time due to its size and the fact that portions of the map are constantly changing.
I have toyed with the idea of building a bitmap on-the-fly of the new map each time a player moves. However, I think there may be a much better way to add to the map as the player scrolls.
When scrolling, movement of the map should not, if possible, result in a "flickered" refresh of the screen. I believe recreating a bitmap each and every time a player moves even one or two pixels would almost certainly result in flicker.
I am open to 3rd party tools and solutions. However, to the degree possible, I would prefer to use standard Microsoft libraries or open source tools rather than commercial tools.
What are some ideas as to the best way to implement this functionality so that it performs well, is reliable, and transitions to new areas of the map appear seamless to the player?
Thank you in advance for all your help!
Update
Here are a few pieces of additional information that may prove helpful.
Since my initial post, I have been introduced to the concept of a "tile engine". (Many thanks to Michael and Paul for pointing me towards Bing and BruTile.)
My understanding is that a tile engine basically breaks larger images into sections and renders them side by side. As a user scrolls, additional tiles are rendered as others are removed from view. This is very much what I am looking for.
However, there may be a couple of wrinkles that affect my use of a standard tile engine. All of the graphics for the game, including the backgrounds which would be displayed on any tile, will already be downloaded on the client. It is important that the tile engine not retrieve the graphics from a server as this would consume significant unnecessary bandwidth.
Other graphics (e.g. a lake, forest, hill), which represent objects from the gameworld, must be superimposed when the tiles are rendered on the screen. Tile engines such as Bing appear to provide the ability to superimpose custom images. Whatever tile engine is used must not only support this feature but allow exact placement of these superimposed images.
Finally, there is a requirement to support popup descriptions when the user mouses over one of the superimposed graphics. Unlike the graphics, which are already stored on the client, the descriptions contain information that must be downloaded from the game server. BruTile, while excellent in many ways, does not appear to support these popup descriptions yet.
We are making great progress. Thanks for all your help so far!
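For what it's worth, the core tile bookkeeping is mostly arithmetic. A hedged sketch of working out which tiles a viewport needs as the player drags (the tile size, world-pixel offsets and TileKey type here are illustrative, not taken from Bing or BruTile):

    using System.Collections.Generic;

    public struct TileKey
    {
        public int Column, Row;
        public TileKey(int column, int row) { Column = column; Row = row; }
    }

    public static class TileMath
    {
        // Returns the tile indices that intersect a viewport positioned at
        // (offsetX, offsetY) in world pixels, plus one extra ring as a scroll margin.
        public static IEnumerable<TileKey> VisibleTiles(
            double offsetX, double offsetY, double viewWidth, double viewHeight, int tileSize)
        {
            int firstCol = (int)(offsetX / tileSize) - 1;
            int firstRow = (int)(offsetY / tileSize) - 1;
            int lastCol  = (int)((offsetX + viewWidth) / tileSize) + 1;
            int lastRow  = (int)((offsetY + viewHeight) / tileSize) + 1;

            for (int col = firstCol; col <= lastCol; col++)
                for (int row = firstRow; row <= lastRow; row++)
                    yield return new TileKey(col, row);
        }
    }

On each drag you would recompute this set, add Image elements (positioned on a Canvas with Canvas.SetLeft/SetTop) for tiles that have just become visible, remove those that dropped out, and simply move the existing ones. Because nothing is recreated from scratch, the map scrolls without a full-screen redraw or flicker, and the tile images can come from the graphics already on the client rather than from a server.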
For an open source solution you could look at BruTile. It too has all the features you describe. It can also be used on the Microsoft Surface and on Windows Phone (for your marketplace version).
Use the Bing Maps control or the MultiScaleImage (Deep Zoom) which it uses.
To see an example, go here. You can use the Deep Zoom Composer to create maps or topologies using your own photos and images.
Here is the SDK for the control.