AForge - incomplete list of VideoCapabilities - high frame rate - C#

I use AForge and C# to record video from a Logitech BRIO webcam connected via USB, and I want to capture at about 60 fps. The camera supports this; other applications offer such modes for it. AForge, however, does not let me choose a VideoCapabilities entry with this frame rate (the list is incomplete). I mean an option with AverageFrameRate = 60 fps (for MaximumFrameRate, AForge reports e.g. 120 fps). In other applications these modes are not tied to a low number of bits per pixel, as I read in other threads. Where is the problem and how can I solve it?
I understand that if VideoCapabilities has two options with the same resolution, only one of them will be shown: the one with the most bits per pixel. Is it possible to choose the second one (with fewer bits per pixel)?
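For reference, here is a minimal sketch of how the capability list is enumerated and a mode selected with AForge.Video.DirectShow; the device index and the target rate of 60 fps are assumptions:

```csharp
using System;
using AForge.Video.DirectShow;

// Enumerate the video modes AForge exposes and pick one by frame rate.
// Note: AForge collapses entries that share a resolution, so some
// lower-bit-depth, high-fps modes the driver reports may not appear here.
var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
var camera = new VideoCaptureDevice(devices[0].MonikerString); // assumes the BRIO is device 0

foreach (VideoCapabilities cap in camera.VideoCapabilities)
{
    Console.WriteLine($"{cap.FrameSize} avg {cap.AverageFrameRate} fps, " +
                      $"max {cap.MaximumFrameRate} fps, {cap.BitCount} bpp");
    if (cap.AverageFrameRate == 60)   // the mode missing in the OP's case
        camera.VideoResolution = cap; // must be set before Start()
}
```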

Related

Observed periodic noise with ASIO sound card in my audio processing application

I observe noise that shows up periodically (every 5 seconds or so) when I use an ASIO sound card with my custom-built audio processing application, in the visualisation tab which displays the frequency analysis.
The noise is not observed when using a DirectSound device with the same audio.
I have tried changing the number of channels listening to the audio for ASIO from 8 to 2, but that doesn't fix the issue.
The sampling rate is 48 kHz (I tweaked it to 44.1 kHz; that doesn't fix the issue either).
The audio processing application is written in C# and uses the NAudio API.
I've included the images for the waveform in the link:
https://www.sendbig.com/view-files?Id=8fe0ff05-d27e-9ec2-161f-415d923599b7
The first image is the clean signal with no noise and the next image shows the audio along with the noise.
Any input on this is appreciated!
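For context, a minimal sketch of the kind of ASIO capture path being described, using NAudio's AsioOut; the driver index, channel count, and sample rate are assumptions:

```csharp
using System;
using NAudio.Wave;

// Minimal ASIO capture with NAudio: open a driver, record 2 channels at
// 48 kHz, and receive interleaved float samples on each ASIO buffer callback.
string driverName = AsioOut.GetDriverNames()[0]; // assumes the first installed driver
using (var asio = new AsioOut(driverName))
{
    asio.InitRecordAndPlayback(null, recordChannels: 2, recordOnlySampleRate: 48000);
    asio.AudioAvailable += (s, e) =>
    {
        float[] samples = e.GetAsInterleavedSamples();
        // ... hand the samples to the frequency-analysis/visualisation code ...
    };
    asio.Play(); // starts the ASIO callbacks
    Console.ReadLine();
}
```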

C# thermal image server stream for workstation client

I am Tommaso and I have just signed up. I would like to open a new discussion that I hope will be interesting.
I am working with a thermal camera (C#, Visual Studio 2012, Windows 7 x64) and I have already created a server that performs the following tasks:
Get a raw frame from the camera
Optionally rotate it
Convert raw pixel values to Kelvin
Calculate the min, average, and max pixel in a frame
Check temperature alerts and alarms
Now I have been asked to let 4 workstations see the real-time thermal frame stream from the cameras. Unfortunately, in this project these are located over a really wide area, many meters (600-700 m) from the main server. At 3.75 frames/s, a frame resolution of 640x512 pixels, and a pixel depth of 14 bits (stored in 16), we are talking about 2.5 MB per second. No compression is applied.
So I decided to take the frames arriving at the server and create a socket that listens for the 1 to 4 workstations that need the stream. Each time a client connects, I create a dedicated queue where the main thread enqueues frames and a socket thread dequeues them and sends them to the connected client (a sketch of this follows below).
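A minimal sketch of that per-client queue design; the class name, the bound of 8 frames, and the TCP transport are assumptions for illustration:

```csharp
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

// One bounded queue per connected client: the main capture thread enqueues
// frames, a dedicated sender thread dequeues and writes them to the socket.
class ClientStream
{
    // Bounded so a slow client drops frames instead of exhausting memory
    // (the capacity of 8 is an arbitrary assumption).
    private readonly BlockingCollection<byte[]> queue =
        new BlockingCollection<byte[]>(boundedCapacity: 8);
    private readonly TcpClient client;

    public ClientStream(TcpClient client)
    {
        this.client = client;
        new Thread(SendLoop) { IsBackground = true }.Start();
    }

    // Called by the main thread for every captured frame.
    public void EnqueueFrame(byte[] frame)
    {
        queue.TryAdd(frame); // drop the frame if this client's queue is full
    }

    private void SendLoop()
    {
        NetworkStream stream = client.GetStream();
        foreach (byte[] frame in queue.GetConsumingEnumerable())
        {
            stream.Write(frame, 0, frame.Length);
        }
    }
}
```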
Here is my question: given the importance of each single frame, would you suggest the reliable but heavier TCP, or the simpler UDP, considering the amount of traffic?
Sorry for my prolixity, but it's just for explanation.
If you want to know more about my project, please ask.
Tommaso
You want to stream video. If a frame doesn't reach its destination, there would be no problem, because after 250 ms (I'll assume your video is 4 fps) another frame will be sent. Since no single frame is vital, you are better off using UDP.
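One practical note if you go the UDP route: a 640x512 16-bit frame is about 655 KB, larger than the ~64 KB UDP datagram limit, so each frame has to be split into chunks. A minimal sketch; the chunk size and header layout are assumptions:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

static class FrameSender
{
    // Split one frame into sequence-numbered UDP packets so the receiver can
    // reassemble it (or discard the whole frame if any chunk is lost).
    public static void SendFrame(UdpClient udp, IPEndPoint client, byte[] frame, ushort frameId)
    {
        const int ChunkSize = 1400; // stay under a typical Ethernet MTU
        int chunks = (frame.Length + ChunkSize - 1) / ChunkSize;
        for (int i = 0; i < chunks; i++)
        {
            int offset = i * ChunkSize;
            int length = Math.Min(ChunkSize, frame.Length - offset);
            byte[] packet = new byte[6 + length];
            BitConverter.GetBytes(frameId).CopyTo(packet, 0);        // which frame
            BitConverter.GetBytes((ushort)i).CopyTo(packet, 2);      // chunk index
            BitConverter.GetBytes((ushort)chunks).CopyTo(packet, 4); // chunk count
            Buffer.BlockCopy(frame, offset, packet, 6, length);
            udp.Send(packet, packet.Length, client);
        }
    }
}
```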

DirectShow transform filter with multiple video frames - Sync with audio

I've written a DirectShow transform filter (in C#, but the concept is the same in C++) which buffers multiple video frames before sending them to the renderer (hence a delay). These frames are processed before producing an output frame (think of a sliding window of, say, 6 frames).
On a 6 fps video source, this causes a 1-second delay, and the audio ends up playing back 1 second ahead of the video. How do I tell the graph to delay the audio by the same amount?
Video and audio renderers present data according to the attached time stamps. You need to restamp your audio data, adding the desired delay (a sketch follows below).
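A minimal sketch of such restamping, assuming the DirectShow.NET interop types (IMediaSample, DsLong); the class name, method name, and the fixed 1-second delay are assumptions:

```csharp
using DirectShowLib;

// Shift an audio sample's presentation times by a fixed delay so the audio
// renders later, matching the video path's buffering delay.
static class AudioRestamper
{
    const long OneSecondIn100ns = 10000000; // REFERENCE_TIME uses 100 ns units

    // Call this on each audio sample passing through the filter.
    public static int Restamp(IMediaSample sample, long delay = OneSecondIn100ns)
    {
        long start, end;
        int hr = sample.GetTime(out start, out end);
        if (hr == 0) // S_OK: the sample carries time stamps
        {
            sample.SetTime(new DsLong(start + delay), new DsLong(end + delay));
        }
        return hr;
    }
}
```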

DirectShow stream images are sometimes flipped

I'm using WPF MediaKit with two IDS Ethernet cameras over the uEye drivers, and sometimes when I launch the app the video feed is upside down. I am using the controls just as shown in the documentation on the project's front page. I have observed two things:
The video feed is perfectly normal when used with USB cameras (Microsoft, Logitech, and uEye USB cameras)
When I use the demo application, the video feed is always correct
Has anyone experienced similar issues?
When an image is flipped upside down in DirectShow, there is one common cause for this artifact:
The normal row order for RGB images is bottom-to-top, that is, the last row comes first. A top-to-bottom format also exists and is indicated by a negative value in the biHeight field of the underlying media type. It is quite rare, and some components may ignore it. A similar but much rarer issue is that YUV images are always top-to-bottom regardless of the biHeight sign, and some buggy components incorrectly flip such images.
All in all, somewhere in the pipeline the top-to-bottom order is likely being confused with bottom-to-top, and as a result the image is flipped.
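For completeness, a minimal sketch of flipping a frame buffer vertically as a workaround if one end of the pipeline cannot be fixed; the packed-pixel layout (e.g. RGB24) is an assumption:

```csharp
using System;

static class FrameFlipper
{
    // Flip an image buffer vertically in place by swapping rows.
    // For RGB media types, a positive biHeight means bottom-up row order
    // (the usual case); a negative biHeight means top-down.
    public static void FlipVertically(byte[] pixels, int width, int height, int bytesPerPixel)
    {
        int stride = width * bytesPerPixel;
        byte[] row = new byte[stride];
        for (int y = 0; y < height / 2; y++)
        {
            int top = y * stride;
            int bottom = (height - 1 - y) * stride;
            Buffer.BlockCopy(pixels, top, row, 0, stride);
            Buffer.BlockCopy(pixels, bottom, pixels, top, stride);
            Buffer.BlockCopy(row, 0, pixels, bottom, stride);
        }
    }
}
```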

Programmable cameras C# for vehicle system

I recently joined a project where I need to build a vehicle-based computer vision system. What sort of special functionality does a camera need to be able to capture images while traveling at varying speeds? For example, how high a frame rate is required, and what exposure duration and shutter speed? Do you think that webcams (even high-end ones) would be able to achieve this? The project requires the camera to be programmable in C# ...
Thank you very much in advance!
Unless the camera is capable of producing high-quality, low-blur video on its own, I would go with one that has a really fast shutter speed and a very short exposure duration; for frame rate, following Seth's math, 44 centimeters is roughly a little more than a foot, which should be decent for calculations.
The reaction time for a human to respond to someone hitting the brakes in front of them is about 1.5 seconds. If you can determine that their brake light came on within 1/30th of a second, and it takes you 1 second to calculate and apply the brakes, you have already beaten human reaction time.
How fast your shutter speed needs to be depends on how fast your vehicle is moving. A faster shutter speed reduces motion blur, giving a more accurate picture to analyze.
Try different speeds (a camera with this value configurable might help).
I'm not sure that's an answerable question. It sounds like the sort of thing the DARPA Grand Challenge hopes to determine :)
With regard to frame rate: if your vehicle is going 30 miles per hour, a 30 fps webcam will capture one frame for every 44 centimeters the vehicle travels (see the quick calculation below). Whether or not that's "enough" depends on what you're planning to do with the images.
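The arithmetic behind that figure, as a quick snippet (30 mph and 30 fps are just the example numbers above):

```csharp
using System;

// Distance traveled per frame = speed / frame rate.
double speedMph = 30.0;
double fps = 30.0;
double metersPerSecond = speedMph * 0.44704; // 1 mph = 0.44704 m/s
double metersPerFrame = metersPerSecond / fps;
Console.WriteLine($"{metersPerFrame * 100:F1} cm per frame"); // ~44.7 cm
```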
I'm not sure about out-of-the-box C# programmability, but a specific webcam-style camera to consider would be the PS3 Eye.
It was specifically engineered for motion capture and (as I understand it) is capable of higher-quality images at high frame rates than the majority of the competition. Windows drivers are available for it, which opens the door to creating a C# wrapper.
Here is the product page; note the 120 fps upper-end spec (I'm not sure the Windows drivers run at this rate, but the hardware is obviously capable of it).
One note on shutter speed: images taken at a high frame rate in low light will likely be underexposed and unusable. If you need this to work in varying light conditions, the frame rate will likely either need to be fixed at the low end of your acceptable range or to self-adjust based on available light.
These guys, Mobileye, develop such commercial systems for lane departure warnings and for vehicle and pedestrian detection.
If you go to "Manufacturer Products -> Development and Evaluation Platforms -> Cameras",
you can see what they use as cameras and also their processing platforms.
30 fps should be sufficient for the applications mentioned above.
If money isn't an issue, take a look at cameras from companies like Opeton and others. You can control every aspect of every image capture, including capture time, image size, and more.
My iPhone can take pictures out the side of a car that are fairly blur-free... past 10-20 feet. Closer than that, things are simply moving too fast; the shutter speed would need to be higher to avoid blurring them.
Start with a middle-of-the-road webcam and move up as necessary? A laptop and a ride in your car while capturing still images would probably give you an idea of how well it works.
