I am working on a project where I read analog signals from my sensors and convert them to digital data with a PIC's ADC. I then transfer all the data over USB to my Windows application (a user interface written in C#). When I look into my input buffer, all the data is there.
My problem is what comes next: how can I draw this data as a continuous signal? I use ZedGraph and I want to observe my sensor data as a continuous signal. I know how to draw with ZedGraph, and I have even plotted the contents of my input buffer once, but I still cannot manage to draw it as a continuous signal.
Which architecture is more suitable for this? Should I use a circular buffer?
Can I use the PerformanceCounter class for my own custom events, such as drawing my sensor data, or is this class only useful for system events?
You could create a custom performance counter; that is easy to do.
Drawing the graph yourself is a bit harder:
(I take it that by a continuous signal you mean a line instead of points.) Just draw a line from the previous position to the current position.
A circular buffer might help (although I think the oldest data should shift back out of view), but you still need to keep track of the previous position so you know where to draw the line from. Make sure you shift the buffer in proportion to the elapsed time.
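A minimal sketch of that idea in C#, assuming you stay with ZedGraph: its RollingPointPairList is exactly such a circular buffer, and a timer drains the USB input buffer into it. Here zedGraphControl1, the 1 kHz sample rate and ReadNextSamples() are placeholders for your own control, sample rate and dequeue code:

    using System;
    using System.Windows.Forms;
    using ZedGraph;

    public partial class ScopeForm : Form
    {
        // Circular buffer: once full, the oldest points fall off the front.
        private readonly RollingPointPairList buffer = new RollingPointPairList(2000);
        private double t;  // running time in seconds

        public ScopeForm()
        {
            InitializeComponent();
            GraphPane pane = zedGraphControl1.GraphPane;
            pane.AddCurve("Sensor", buffer, System.Drawing.Color.Blue, SymbolType.None);

            var timer = new Timer { Interval = 50 };  // redraw ~20 times per second
            timer.Tick += (s, e) =>
            {
                foreach (double sample in ReadNextSamples())  // your USB dequeue code
                    buffer.Add(t += 0.001, sample);           // 1 kHz assumed

                pane.XAxis.Scale.Min = Math.Max(0, t - 2.0);  // show the last 2 s
                pane.XAxis.Scale.Max = t;
                zedGraphControl1.AxisChange();
                zedGraphControl1.Invalidate();
            };
            timer.Start();
        }

        private double[] ReadNextSamples() { /* drain your input buffer here */ return new double[0]; }
    }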
An alternative would be to push the data to an Excel sheet and use the charts there.
Can't you just assign a timestamp to each data point in your sample?
Then you should be able to plot the sample easily with ZedGraph, using time as the X-axis and value as the Y-axis; you only need to adjust some parameters like line smoothing.
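Something like this (a hedged sketch; list stands for whatever IPointListEdit backs your curve) using ZedGraph's date axis:

    // Let ZedGraph treat X values as timestamps.
    GraphPane pane = zedGraphControl1.GraphPane;
    pane.XAxis.Type = AxisType.Date;

    // For each incoming value, use its arrival (or sample) time as X.
    double x = new XDate(DateTime.Now);  // XDate converts implicitly to double
    list.Add(x, sensorValue);
    zedGraphControl1.AxisChange();
    zedGraphControl1.Invalidate();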
I'm trying to capture sound from several interfaces of a single audio card. Can I get arrays that are not shifted relative to each other? I want to use 4 microphones (2 microphones per interface, one per channel) to detect the position of a sound emitter. I use Windows, so I can't create an aggregate device. I have also recorded sound from different threads, but the delay between the arrays varied randomly. This is the main problem, because I want to apply a cross-correlation function to the arrays to find the delay (shift) that gives the maximum value; this shift defines the angle to the sound source. I can use something other than ASIO, but it must be stable over the whole recording interval. If there isn't a solution for C#, I also know C++. Please tell me how I can solve this problem.
If all mics are connected to the same hardware device you can use ASIO, assuming the device actually has ASIO drivers. If not, you can either try ASIO4All (but I have no idea whether it will synchronize independent devices) or use WASAPI and perform the synchronization manually. WASAPI's IAudioCaptureClient::GetBuffer method will give you both the stream position and the stream time at which that position was recorded; from there you should be able to work out the time shift between each of the 4 mics and then perform the "unshifting" yourself.
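The arithmetic for the manual synchronization is simple; here is a hedged sketch (the qpcPosition values come from GetBuffer, in 100-nanosecond units, via whatever WASAPI wrapper or interop layer you use):

    using System;

    static class StreamAlignment
    {
        // qpcA/qpcB: qpcPosition of the FIRST captured buffer of each stream.
        // Returns the shift in samples (positive: stream B started later, so
        // trim that many samples from the front of stream A).
        public static int SampleOffset(long qpcA, long qpcB, int sampleRate)
        {
            double secondsApart = (qpcB - qpcA) / 1e7;  // 100 ns units -> seconds
            return (int)Math.Round(secondsApart * sampleRate);
        }
    }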
I'm trying to write a C# hand and fingertip detection program. So far I have been able to get the hand points and store them in a List, but I'm a little stuck on how to present that data in order to visualise the results.
My solution at the moment is to draw a black point on a canvas (I'm trying to use the Ellipse shape for this) for each point I have, but I think this is so time-consuming that I can't see the results.
Is there a way to make the Kinect ignore the next, for example, 30 frames? In other words, can I make the Kinect raise the onFrameReadyEvent only once every 30 frames?
If anyone has any other solution for result presentation feel free to share ;)
Thanks in advance.
Since the OnFrameReadyEvent is an event, look into Reactive Extensions.
Rx has a Throttle extension method that you can use to only get 1 frame a second. For an example, check out this SO question:
How to throttle event stream using RX?
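A hedged sketch of the Rx route against the Kinect 1.x SDK (sensor is your started KinectSensor; SkeletonFrameReady is shown, but the same pattern works for whichever frame-ready event you subscribe to; DrawHandPoints is your own code). One caveat: Throttle only emits after a quiet gap, which a steady 30 fps stream never provides, so for "one frame per second" the Sample operator is the one you actually want:

    using System;
    using System.Reactive.Linq;
    using Microsoft.Kinect;

    var frames = Observable.FromEventPattern<SkeletonFrameReadyEventArgs>(
        h => sensor.SkeletonFrameReady += h,
        h => sensor.SkeletonFrameReady -= h);

    frames.Sample(TimeSpan.FromSeconds(1))            // latest frame, once a second
          .Subscribe(e => DrawHandPoints(e.EventArgs));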
I have the following scenario in mind:
I want to send (via serial port) some commands to a device. The device then sends me back a continuous stream of data (max. 12,000 values per second).
To control some settings I need buttons that send commands to the device to start/stop/change settings before and during the data stream. I also want a real-time plot of this data, which I will of course filter. In addition, at certain timestamps there will be a signal indicating that I want to cut out a certain window of the received data.
This means I will have two charts. I have already made some progress using WPF, but when I interact (zoom/pan) with the lower chart, the upper one freezes noticeably. This is because both have to be refreshed very often!
The work (data receiving/filtering) is done on background threads, but the plot update has to happen on the UI thread.
Any ideas how to solve this issue? Maybe using multiple processes?
You should use Reactive Extensions. It was built for this kind of thing.
http://msdn.microsoft.com/en-us/data/gg577609.aspx
Requesting a clear, picturesque explanation of Reactive Extensions (RX)?
In this second link, although the topic is JavaScript, much of what it says is about Reactive Extensions in general and carries over to Rx in C#.
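For instance, a sketch of how Rx can tame the 12,000-values-per-second stream (samples is an assumed IObservable&lt;double&gt; fed from your serial-port receive thread, e.g. through a Subject&lt;double&gt;; AppendToChart is your chart-update code):

    using System;
    using System.Reactive.Linq;

    samples
        .Buffer(TimeSpan.FromMilliseconds(100))   // batch ~1200 values per tick
        .ObserveOnDispatcher()                    // marshal batches to the WPF UI thread
        .Subscribe(batch => AppendToChart(batch));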
I'm building a similar WPF application with real-time waveforms (about 500 Hz). I have a background thread that receives the real-time data and a separate thread that processes it and prepares it for drawing (I keep a buffer the "size" of the screen where I put the prepared values). On the UI thread I draw the waveforms to a RenderTargetBitmap, which in the end is rendered to the Canvas. This technique lets me have a lot of real-time waveforms on screen with zoom and pan working without any problems (about 40-50 fps).
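A condensed sketch of that drawing step, assuming points is the prepared screen-sized buffer (one Y value per pixel column) and waveformImage is an Image element on the Canvas; width and height are your bitmap dimensions:

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        var pen = new Pen(Brushes.Lime, 1.0);
        pen.Freeze();  // frozen freezables render faster
        for (int x = 1; x < points.Length; x++)
            dc.DrawLine(pen, new Point(x - 1, points[x - 1]),
                             new Point(x, points[x]));
    }

    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);         // must happen on the UI thread
    waveformImage.Source = bitmap;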
Please let me know if you need more technical details; I can share them with you later.
I think you have some code on the UI thread that is not well optimized or that could be moved to a background thread.
Btw, do you use any framework for charts?
Edit
philologon is right: you should use Rx for real-time data, it simplifies the code A LOT. I also use it in my project.
It's a commercial product, but there is a real-time WPF chart which can handle this use case and then some. Please take a look at the tutorial below:
http://www.scichart.com/synchronizing-chartmodifier-mouse-events-across-charts/
There is a live Silverlight demo of this behaviour here:
Sync Multichart Mouse Silverlight Demo
And this chart should be able to handle zooming while inputting values at high speed:
Realtime Performance Demo
Disclosure: I am the owner and tech-lead of SciChart
What I want to do is draw and animate a skeleton (like we can do with the sensor stream) from saved data (so I have the x, y and z values of every joint).
I searched a lot, but I can't find anything that can help me.
I can convert the data to a joint collection and associate it with a skeleton, but then what? I don't know how to map the skeleton to the colorImagePoint.
Maybe I have to create a depthImageFrame?
Thank you so much!
Look into the Kinect Toolbox. It offers a recorder and playback functionality which may match your needs as is, or provide you with a starting point:
http://kinecttoolbox.codeplex.com/
If you roll your own, I'm not sure why you would need to map it to a color or depth frame, unless I'm missing a requirement of what you are doing.
Have a look at the SkeletonBasics example in the Microsoft Kinect for Windows SDK Toolkit examples. It will show you how to draw a skeleton manually based on skeleton data. From there, you could look into doing the following for your application:
Set up your skeleton tracking callback
At each skeleton frame, or less often (if you don't need that many), save the joint positions
Also save a 0-based timestamp
Save the data to a format of your choice when complete
During playback, read in your recorded data and start a timer. When the timer hits the next skeleton frame's stored timestamp, update the drawn skeleton on the screen (using the SkeletonBasics example app as guidance).
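A hedged sketch of that playback loop (frames, TimestampMs, Joints and DrawSkeleton are assumed names for your recorded data and your SkeletonBasics-style drawing code):

    using System;
    using System.Diagnostics;
    using System.Windows.Threading;

    var clock = Stopwatch.StartNew();
    int next = 0;
    var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(15) };
    timer.Tick += (s, e) =>
    {
        // Draw every recorded frame whose timestamp has been reached.
        while (next < frames.Count && frames[next].TimestampMs <= clock.ElapsedMilliseconds)
            DrawSkeleton(frames[next++].Joints);
        if (next >= frames.Count) timer.Stop();  // end of the recording
    };
    timer.Start();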
I need to be able to generate a 3D perspective from a bunch of 2D images of a pipe.
Basically... We have written software that interprets combined data from laser and sonar units to give us an image slice from a section of pipe. These units travel through the pipe and scan the inside of the pipe every 100mm.
All of this is working great. My client now wants to take all these 2D image slices and generate a 3D view so they can "travel" through the pipe looking at defects etc.. that are picked up by the scans. We can see the defects in the 2D images but there can be hundreds of images in a single inspection - hence the requirement to be able to look through the pipe.
I am doing this in VS2010 on the .NET 4 platform in C#.
I am honestly clueless as to where to start here. I am not a graphics developer so this is all new territory to me. I see it as a great challenge but need some help kicking off - and a bit of direction.
Any help appreciated :)
Mike
Well, every 10 cm isn't very detailed. However, you need to scan the pixels of each slice, creating a list of closed polygons, then just use a triangle strip to connect one set to the next, all the way down the pipe.
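A sketch of the stitching step (Vector3 stands in for whatever 3-component vertex type your renderer uses; both rings are assumed to have the same point count and ordering):

    static Vector3[] RingStrip(Vector3[] ringA, Vector3[] ringB)
    {
        int n = ringA.Length;
        var strip = new Vector3[2 * (n + 1)];
        for (int i = 0; i <= n; i++)   // i == n wraps around, closing the tube
        {
            strip[2 * i] = ringA[i % n];
            strip[2 * i + 1] = ringB[i % n];
        }
        return strip;  // render with PrimitiveType.TriangleStrip or equivalent
    }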
Try starting with very basic 2D instead of full-blown 3D rendering; it may be good enough. A pipe, when you look at it from the inside, can be represented as several trapezoids. Assuming your images are small cylindrical portions of the pipe, map each strip to trapezoids (4 would be a good start, as they are easy to position) and draw them in a circular pattern. You can draw several strips this way at the same time. To move back/forward, just reassign the images to the trapezoids.
If you need full 3D, consider whether WPF would work; if not, XNA or an OpenGL library will give you full 3D.
You don't specify the context: 100 mm sample intervals may be sparse (for a 1 m pipe) or detailed (for a 10 km pipe). Nor do you specify how many sample points there are (the number of cross sections and the size of each cross-section image).
A simple way to show the data is to use voxels, where each pixel on a cross section is treated as a cube and adjacent samples form adjacent cubes (think Minecraft). The result will look blocky, but as it's an engineering/scientific application this is probably preferable. Interpolating the model to produce a smooth surface may hide defects or make areas appear to be defective. Also, rendering a cross section through a voxel model is a bit easier than through a polygon surface.
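The voxel construction itself is a few nested loops; a hedged sketch (slices, IsSolid and AddCube are stand-ins for your scan data and renderer):

    const double sliceSpacingMm = 100.0;  // the scan interval from the question
    for (int z = 0; z < slices.Count; z++)
        for (int y = 0; y < slices[z].Height; y++)
            for (int x = 0; x < slices[z].Width; x++)
                if (slices[z].IsSolid(x, y))
                    AddCube(x, y, z * sliceSpacingMm);  // one cube per solid pixel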