Microsoft LifeCam HD MJPEG capture in C#

I have a Microsoft LifeCam HD-5000 webcam. According to AMCap, the camera outputs an MJPEG stream at 720p and 30 fps. I want to capture each JPEG frame in a small application, without doing any preview or decompression/transcoding, to keep CPU utilization as low as possible.
I'm a C# developer, but I'm new to DirectShow. Is there a simple way to capture the MJPEG stream frame by frame, as it is output from the camera, in C#/.NET without decompressing it?

First of all, you might not need to use DirectShow to access your camera. Check out the OpenCV project, which has .NET bindings available from the opencvdotnet project.
If you'd like to go the DirectShow route, you'll need the .NET bindings, available from the DirectShowNet project. I believe your best bet is to create a filter graph that contains your webcam as the source filter and a sample grabber as the destination filter; documentation for the sample grabber is on MSDN. This gives you access to the raw data. You can also request a particular data format and let DirectShow's Intelligent Connect fill in the filter graph with the right conversion filters.
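For what it's worth, here is a rough, untested sketch of that graph using the DirectShowLib bindings. The MjpegGrabberCallback class, the device index, and the exact wiring are my own illustration rather than code from the samples; the idea is to set the grabber's media type to MJPEG and terminate the graph with a null renderer so nothing gets decoded or rendered:
using System;
using System.Runtime.InteropServices;
using DirectShowLib;
class MjpegGrabberCallback : ISampleGrabberCB
{
    // Only BufferCB is used; SampleCB would be used with SetBufferSamples(true).
    public int SampleCB(double sampleTime, IMediaSample pSample) { return 0; }
    public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen)
    {
        // Each call delivers one compressed MJPEG frame straight from the camera pin.
        byte[] jpeg = new byte[bufferLen];
        Marshal.Copy(pBuffer, jpeg, 0, bufferLen);
        // TODO: hand the JPEG bytes to your own queue/consumer here.
        return 0;
    }
}
class MjpegCapture
{
    public static void Run()
    {
        // Graph: camera -> sample grabber (MJPEG) -> null renderer (nothing drawn, nothing decoded).
        var graph = (IGraphBuilder)new FilterGraph();
        var builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        builder.SetFiltergraph(graph);
        DsDevice cam = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice)[0];
        IBaseFilter source;
        ((IFilterGraph2)graph).AddSourceFilterForMoniker(cam.Mon, null, cam.Name, out source);
        var grabber = (ISampleGrabber)new SampleGrabber();
        grabber.SetMediaType(new AMMediaType { majorType = MediaType.Video, subType = MediaSubType.MJPG });
        grabber.SetBufferSamples(false);
        grabber.SetCallback(new MjpegGrabberCallback(), 1); // 1 = deliver samples via BufferCB
        graph.AddFilter((IBaseFilter)grabber, "Sample Grabber");
        var nullRenderer = (IBaseFilter)new NullRenderer();
        graph.AddFilter(nullRenderer, "Null Renderer");
        builder.RenderStream(PinCategory.Capture, MediaType.Video, source, (IBaseFilter)grabber, nullRenderer);
        ((IMediaControl)graph).Run();
    }
}
Because the media type is fixed to MJPEG before the pins connect, Intelligent Connect should not insert a decoder, so BufferCB should receive the compressed frames directly.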
That being said, I definitely recommend OpenCV over DirectShow. DirectShow is very general-purpose and probably does more than you need it to. OpenCV can be used to access your camera quickly. Perhaps check out the Stack Overflow question Webcam Usage in C# for more information and answers.

With DirectShowNet, I have never been able to access a single frame and display it as a Bitmap. Every project I see is just a converter, DirectShow-to-screen rendering, or streaming. How can I capture the stream as individual Bitmap frames?

In the DirectShow.NET download package there is sample code; inside the Capture folder there is a project called DxSnap that connects to a webcam through DirectShow.NET and snaps a picture from the stream. You can look at it and use it as a starting point.

Related

HoloLens - Capturing photo when locatable camera is in VideoMode (streaming)

I am trying to create a HoloLens application that uses the built-in WebCam to take photos and sends them to a REST interface for further face recognition. This is working well so far. To capture photos from the WebCam, it needs to be in PhotoMode.
The problem:
If I now want to present my application via live stream, the WebCam is automatically set to VideoMode and capturing photos is no longer possible.
The locatable camera description https://developer.microsoft.com/en-us/windows/mixed-reality/locatable_camera_in_unity says:
"Only a single operation can occur with the camera at a time."
Since the application has to be presented to a great number of people it is absolutely essential to show it via live stream.
Does somebody have a general idea of how to solve this problem, or maybe some hack to access the WebCam in PhotoMode while streaming at the same time?
Many thanks in advance!
This is possible if you can live with preview frames from the MediaCapture streams. Just start the video capture (layering holograms on top if you need to), and then use the preview frames as your 'photos'. This limits you to the resolution of the camera stream, of course.
I was able to get this plugin working on a HoloLens. I had to use the .NET scripting backend instead of IL2CPP, and I used Unity 2017.4.22f1. At the very least the code shows how to use MediaCapture and preview frames to get a video feed from the camera, from which you can grab the current frame to save as a photo. The sample doesn't do that last bit, but the bytes for the frames are being passed around; you just need to make them available for your needs. =)
https://github.com/VulcanTechnologies/HoloLensCameraStream
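In case it helps someone, here is a minimal sketch of that preview-frame approach with the UWP MediaCapture / MediaFrameReader APIs (the class, field, and method names are mine, and on a HoloLens you will still have to pick the right MediaFrameSourceGroup instead of simply taking the first one):
using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;
class PreviewFrameGrabber
{
    private MediaFrameReader _reader;
    private SoftwareBitmap _latestFrame;   // most recent color frame, usable as a "photo"
    public async Task StartAsync()
    {
        var sourceGroup = (await MediaFrameSourceGroup.FindAllAsync()).First();
        var capture = new MediaCapture();
        await capture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = sourceGroup,
            MemoryPreference = MediaCaptureMemoryPreference.Cpu,   // frames arrive as SoftwareBitmap
            StreamingCaptureMode = StreamingCaptureMode.Video
        });
        var colorSource = capture.FrameSources.Values
            .First(s => s.Info.SourceKind == MediaFrameSourceKind.Color);
        _reader = await capture.CreateFrameReaderAsync(colorSource);
        _reader.FrameArrived += OnFrameArrived;
        await _reader.StartAsync();
    }
    private void OnFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
    {
        using (MediaFrameReference frame = sender.TryAcquireLatestFrame())
        {
            SoftwareBitmap bmp = frame?.VideoMediaFrame?.SoftwareBitmap;
            if (bmp != null)
            {
                // Keep a copy; the frame reference is disposed when this handler returns.
                _latestFrame = SoftwareBitmap.Copy(bmp);
            }
        }
    }
}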

How to grab a constant stream of bitmap images from a webcam in C#

We have a C# application that performs processing on video streams. It is a low-level application that receives each frame in Bitmap format, so basically we need 25 images each second. This application already works for some of our media sources, but we now need to add a webcam as an input device.
So we basically need to capture bitmap images from a webcam continuously so that we can pass all these frames as a "stream" to our application.
What is the best and simplest way to access the webcam and read the actual frames directly from the webcam as individual images? I am still in the starting blocks.
There are a multitude of libraries out there that allow one to access the webcam, preview its content on a Windows panel, and then use screen capturing to grab that image again. Unfortunately, this will not give us the necessary performance when capturing 25 frames per second. The alternatives I have looked at so far:
1. IVMRWindowlessControl9::GetCurrentImage has been mentioned, but it seems aimed at infrequent snapshots rather than a constant stream of images.
2. DirectShow.NET is mentioned by many as a good candidate, but it is unclear how to simply grab the images from the webcam. Many sources also raise the concern that Microsoft no longer supports DirectShow, and the implementations I've seen require ImageGrabber, which is apparently also no longer supported.
3. The newer alternative from Microsoft seems to be Media Foundation, but my research hasn't turned up any working examples of how to implement this (and I'm not sure whether it will run on older versions of Windows such as XP).
4. DirectX.Capture is an awesome library (see a nice implementation) but seems to lack the filters and methods to get the video images directly.
5. I have also started looking at filters and filter graphs, but this seems awfully complex and feels a bit like reinventing the wheel.
Overall, all the solutions briefly mentioned above seem rather old. Can someone please point me in the direction of a step-by-step guide for getting a webcam working in C# and grabbing several images per second from it? (We will also have to do audio at some point, so a solution that does not exclude audio would be most helpful.)
I use AForge.Video (find it here: code.google.com/p/aforge/) because it's a very fast C# implementation. I am very pleased with the performance, and it effortlessly captures from two HD webcams at 30 fps on an 8-year-old PC. The data is supplied as a native IntPtr, so it's ideal for further processing using native code or OpenCV.
The OpenCV wrappers Emgu CV and OpenCvSharp both implement rudimentary video capture functionality, which might be sufficient for your purposes. Clearly, if you are going to perform image processing / computer vision, you might want to use one of those anyway.
As dr.mo suggests, AForge was the answer.
I used the tutorial from here: http://en.code-bude.net/2013/01/02/how-to-easily-record-from-a-webcam-in-c/
The tutorial wires up an event handler that fires each time a frame is received from the webcam and writes the resulting bitmap to a PictureBox. I have simply modified it to save the bitmap image to a file instead. So I replaced the following code:
pictureBoxVideo.BackgroundImage = (Bitmap)eventArgs.Frame.Clone();
with the following code:
Bitmap myImage = (Bitmap)eventArgs.Frame.Clone();
string strGrabFileName = String.Format("C:\\My_folder\\Snapshot_{0:yyyyMMdd_hhmmss.fff}.bmp", DateTime.Now);
myImage.Save(strGrabFileName, System.Drawing.Imaging.ImageFormat.Bmp);
and it works like a charm!
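For anyone following this, the wiring around that handler from the tutorial looks roughly like the sketch below, assuming the AForge.Video and AForge.Video.DirectShow packages (the device index and handler body are placeholders to adapt to your own pipeline):
using System.Drawing;
using AForge.Video;
using AForge.Video.DirectShow;
// Pick the first webcam and start it; NewFrame fires for every captured frame.
var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
var videoSource = new VideoCaptureDevice(devices[0].MonikerString);
videoSource.NewFrame += (object sender, NewFrameEventArgs eventArgs) =>
{
    // eventArgs.Frame is reused by AForge, so clone it before keeping or saving it.
    using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
    {
        // Hand 'frame' to your processing pipeline here (or save it, as in the snippet above).
    }
};
videoSource.Start();
// ... and when you are done:
// videoSource.SignalToStop();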

Axis camera video streaming with C# (64-bit)

I'm looking for a way to stream video from an Axis M10 IP camera and display the feed using Windows Forms (or better, WPF). However, it needs to run on a 64-bit platform.
This means that I can't use the AXIS Media Control ActiveX component.
Also, I found that these methods work, but only in a 32-bit environment:
1. Using the MediaElement class in WPF
2. Using an embedded media player
3. VlcLib (for .NET)
So far it looks like my only option is to directly implement the RTSP protocol, decode the resulting RTP/AVP stream using Media Foundation (for .NET), and display it somehow. (I was able to get the camera to stream to a UDP port using RTSP calls.)
I'm fairly new to RTSP/streaming, so I'm concerned that I might be missing the big picture. Will I be able to use Media Foundation to render/display video in WinForms/WPF, or do I have to look for that functionality elsewhere? (From my research it looks like it can decode H.264 streams, but I did not see any video-playing capabilities.) I also came across DirectShow; should I use DirectShow over Media Foundation?
Or better yet, is there a library that is able to handle RTSP streaming that runs in 64bit?
VisioForge Video Capture SDK .Net, for example (commercial), which includes WPF controls.
It decodes using FFmpeg with a DirectShow engine; really, I don't see any Media Foundation advantage here.
Any other FFmpeg-based approach would also work.
Or you can write an RTSP source filter (based on the DirectShow Push Source sample) with an H.264 output pin for video and G.726/G.711/AAC for audio. You could also make a virtual video capture source filter and use it in Media Foundation or DirectShow. The live555 library can be used for the RTSP implementation.
So there is no simple way here if you are starting from zero.
If you just need the video, I would prefer to simply display the camera's MJPEG stream. This is easily done without the complexity of DirectShow or Media Foundation. I display 12 cameras at the same time in my WPF application with this little library: MJPEG Decoder. You can also use it in WinForms. It decodes the MJPEG stream and gives you the images to display.
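If it helps, usage is roughly the following sketch, assuming the MjpegDecoder class from that library and the usual Axis VAPIX MJPEG URL (replace the placeholder address and the cameraImage control with your own):
using System;
using MjpegProcessor;
var decoder = new MjpegDecoder();
decoder.FrameReady += (sender, e) =>
{
    // e.BitmapImage is ready for a WPF Image control; e.Bitmap is the WinForms equivalent.
    cameraImage.Source = e.BitmapImage;
};
decoder.ParseStream(new Uri("http://<camera-ip>/axis-cgi/mjpg/video.cgi"));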
The 64-bit Axis Media Control SDK is available now, but it requires an account on the Axis web site to download.
After signing in, you need to join the Axis Developer Program (free) and download the AMC SDK.
The installer (an .exe) puts all the libraries and samples on your drive under C:\Program Files\Axis Communication\SDK.
I found a way to use VLC in 64-bit without the ActiveX DLL:
The LibVLCSharp library is composed of multiple NuGet packages for using the VLC player on several platforms (WPF, WinForms, Xamarin, tvOS).
It works fine with ONVIF cameras.
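For completeness, a minimal LibVLCSharp sketch for WPF, assuming the LibVLCSharp.WPF and VideoLAN.LibVLC.Windows NuGet packages, a VideoView named videoView in the XAML, and the usual Axis RTSP path (replace the placeholder address with your camera's actual URL):
using System;
using LibVLCSharp.Shared;
Core.Initialize();                        // loads the native libvlc (x64 when built for 64-bit)
var libVLC = new LibVLC();
var mediaPlayer = new MediaPlayer(libVLC);
videoView.MediaPlayer = mediaPlayer;      // videoView is the LibVLCSharp.WPF VideoView control
var media = new Media(libVLC, new Uri("rtsp://<camera-ip>/axis-media/media.amp"));
mediaPlayer.Play(media);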

Save Kinect's color camera video stream to an .avi video

I want to save the video stream captured by the Kinect's color camera to an .avi video. I tried many ways of doing this, but none of them succeeded. Has anyone done this successfully? I'm using the Kinect for Windows SDK and WPF for application development.
I guess the easiest workaround would be to use screen-capture software such as CamStudio (http://camstudio.org/).
There is also a post with the same question here:
Kinect recording a video in C# WPF
As far as I understand, you need to save the single frames delivered by the Kinect into a video file. This post should explain how to do it: How to render video from raw frames in WPF?
You can use the AVIFile Windows API via interop:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd756808(v=vs.85).aspx
or you can use a wrapper like this one by Corinna John:
http://www.codeproject.com/Articles/7388/A-Simple-C-Wrapper-for-the-AviFile-Library
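As a rough sketch of that wrapper route, assuming the AviFile wrapper from the article above and that you have already converted each Kinect color frame to a System.Drawing.Bitmap (the conversion itself is not shown):
using System.Drawing;
using AviFile;   // Corinna John's AviFile wrapper
class KinectAviRecorder
{
    private readonly AviManager _aviManager = new AviManager(@"C:\temp\kinect.avi", false); // false = create a new file
    private VideoStream _stream;
    // Call this for every color frame, already converted to a Bitmap.
    public void WriteFrame(Bitmap colorFrame)
    {
        if (_stream == null)
            _stream = _aviManager.AddVideoStream(false, 30, colorFrame); // uncompressed, 30 fps, first frame
        else
            _stream.AddFrame(colorFrame);
    }
    // Call once when recording is finished to flush and close the file.
    public void Close()
    {
        _aviManager.Close();
    }
}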

How do I get a snapshot of a DirectShow stream

I am using DirectShow.NET to capture video from the camera. The video is visible on a form. Can someone tell me how I can take a snapshot of it as a .NET Image object? I need to take snapshots intermittently.
I believe there is a sample application that shows you how to do this with DirectShow.NET. When you download the SDK, look for the DxSnap demo. It shows how to take a screenshot of a webcam's video stream. You can modify it to fit your scenario, but this should be all you need.
Even on this site, however, Microsoft recommends that you use WIA (Windows Image Acquisition) to accomplish this instead. Just in case you have the option and haven't come across this before, here is a tutorial on how to get started:
http://channel9.msdn.com/coding4fun/articles/Look-at-me-Windows-Image-Acquisition
