Creating a video encoder app with multiple video sources. Is there a certain type of IP camera that can be used as a Windows video source (i.e., a DirectShow device), or a generic IP camera driver that can be used to connect both video and audio from a hardware camera?
To do video capture in DirectShow, you must acquire an IBaseFilter pointer to the video device and then add that filter to the graph.
You can get these IBaseFilter pointers for your cameras by enumerating the CLSID_VideoInputDeviceCategory category.
Getting audio follows the same procedure; this time, though, you acquire the audio IBaseFilters by enumerating the CLSID_AudioInputDeviceCategory category.
I can post code showing how this is done if you're interested, but I only have C++ code; I've never tried DirectShow programming in C#.
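That said, the third-party DirectShowLib (DirectShow.NET) wrapper exposes the same enumeration from C#; a rough, untested sketch, assuming DirectShowLib is referenced and that you simply take the first device in each category:

using DirectShowLib;

// build a filter graph and enumerate the capture device categories
var graph = (IFilterGraph2)new FilterGraph();

// pick the first video capture device and add it to the graph as an IBaseFilter
DsDevice[] videoDevices = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);
IBaseFilter videoFilter;
graph.AddSourceFilterForMoniker(videoDevices[0].Mon, null, videoDevices[0].Name, out videoFilter);

// audio capture devices live in their own category and are handled the same way
DsDevice[] audioDevices = DsDevice.GetDevicesOfCat(FilterCategory.AudioInputDevice);
IBaseFilter audioFilter;
graph.AddSourceFilterForMoniker(audioDevices[0].Mon, null, audioDevices[0].Name, out audioFilter);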
Is there any way to play audio directly into a capture device in C#? In my project I will later have to feed a virtual capture driver with audio so I can use it in other programs and play the desired audio anywhere else, but I'm not sure it is possible in C#. I tried to do this with NAudio (which is truly amazing):
using NAudio.CoreAudioApi;
using NAudio.Wave;

var enumerator = new MMDeviceEnumerator();
MMDevice captureDevice = enumerator.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Multimedia);
WasapiOut wasapiOut = new WasapiOut(captureDevice, AudioClientShareMode.Shared, false, 0);
But ultimately it just throws a COMException with the code 0x88890003, which translates to the error "The AUDCLNT_STREAMFLAGS_LOOPBACK flag is set but the endpoint device is a capture device, not a rendering device". So in the end, is there any possible solution, or do I have to turn to another language like C++?
You cannot push audio into a "capture device"; a capture device generates audio on its own.
Loopback mode means that you can get a copy of the audio stream from a rendering device, but it does not work the other way around.
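For illustration, the direction that does work looks like this with NAudio's WasapiLoopbackCapture, which reads back a copy of what the default rendering device is playing; a minimal sketch:

using NAudio.Wave;

// loopback capture gives you the mix that the default *render* device is playing
var loopback = new WasapiLoopbackCapture();
loopback.DataAvailable += (s, e) =>
{
    // e.Buffer holds e.BytesRecorded bytes of the rendered audio
};
loopback.StartRecording();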
The way things can work more or less as you assumed is with a special implementation of an audio capture device (custom or third party, since no stock implementation of this kind exists), designed to generate audio supplied by an external application such as yours, which pushes the payload audio data via an API.
Switching to C++ will be of no help with this challenge.
I have a little desktop app which uses the UWP API (MediaCapture) to capture data from a webcam. On my computer it works fine -- I can capture video and audio. When I run the same program on another computer it crashes -- as I found out, I had to disable audio recording:
var media_settings = new MediaCaptureInitializationSettings();
// audio+video by default
media_settings.StreamingCaptureMode = Windows.Media.Capture.StreamingCaptureMode.Video;
await mediaCapture.InitializeAsync(media_settings);
Is there a way to find out in advance whether a given webcam supports audio recording? By "in advance" I mean other than trying, catching the exception, and only then disabling audio on a second attempt :-).
You can find out whether a given webcam supports audio recording by enumerating the audio devices before you initialize the MediaCaptureInitializationSettings object. After enumerating the audio devices, you can tell whether the webcam exposes an audio device or not.
You can follow the Enumerate devices topic or look at the DeviceEnumerationAndPairing sample directly to find the AudioCapture devices; then you should be able to judge whether there is an audio device belonging to the webcam.
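A minimal sketch of that check, assuming a mediaCapture instance as in the question; note that it only tests whether any AudioCapture device is present, so matching the audio device to the particular webcam (for example via its container or enclosure information) is a further step:

using Windows.Devices.Enumeration;
using Windows.Media.Capture;

// enumerate audio capture devices before initializing MediaCapture
var audioDevices = await DeviceInformation.FindAllAsync(DeviceClass.AudioCapture);

var media_settings = new MediaCaptureInitializationSettings();
media_settings.StreamingCaptureMode = audioDevices.Count > 0
    ? StreamingCaptureMode.AudioAndVideo   // at least one audio device is available
    : StreamingCaptureMode.Video;          // no audio device: fall back to video only

await mediaCapture.InitializeAsync(media_settings);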
Is there any way in C#/.NET to record the audio currently being played? I've searched a lot on the internet, but the only results I could find are about recording from a microphone.
I don't want to record using the microphone input; I want to record what is being played on the computer when I click a record button.
Thanks
You have two options here:
Hardware loopback device - a virtual "Stereo Mix" audio device, which acts as a regular audio capture device and at the same time produces a copy of the mixed audio feed played through the system's default audio output device. Since such a device shows up as a real audio input device, you can use standard APIs, libraries, and even existing applications to record from it.
Programmatic access to a virtual loopback device as if it were a microphone-like device. The API in the background duplicates the played audio content and makes it available for reading back as it plays. The good news is that you can access the mixed audio feed for the device of your interest.
Both options are described in detail in the Loopback Recording article on MSDN and are available via the standard audio APIs, specifically WASAPI.
For C# development you are likely to use a wrapper like NAudio.
For option 1 you will find quite a few questions on StackOverflow, and for the other option the keyword is AUDCLNT_STREAMFLAGS_LOOPBACK.
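A minimal sketch of option 2 using NAudio's WasapiLoopbackCapture (the output file name below is just for illustration):

using NAudio.Wave;

// loopback-capture the default render device and dump it to a WAV file
var capture = new WasapiLoopbackCapture();
var writer = new WaveFileWriter("loopback.wav", capture.WaveFormat);

capture.DataAvailable += (s, e) =>
    writer.Write(e.Buffer, 0, e.BytesRecorded);   // write each captured buffer

capture.RecordingStopped += (s, e) =>
{
    writer.Dispose();
    capture.Dispose();
};

capture.StartRecording();
// ... later, when the user clicks "stop":
// capture.StopRecording();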
The only way to be able to receive data from another application is if the developer provides an access point, normally through some SDK, API, or other means. Without this, there is no way for your application code to receive the bytes from the other application.
The reason a microphone works is that it is receiving the sound output bytes from the application and sending those sound-wave bytes back into your PC to render and output the sound. Since you have access to these bytes from the microphone, you are able to capture the sound.
See if there is an API or an SDK from the developer of the application you are trying to get sound from.
I am trying to create a HoloLens application which uses the built-in webcam to take photos and sends them to a REST interface for further face recognition. This is working well so far. To capture photos from the webcam, it needs to be in PhotoMode.
The problem:
If I now want to present my application via live stream, the webcam is automatically switched to VideoMode and capturing photos is not possible.
The locatable camera description https://developer.microsoft.com/en-us/windows/mixed-reality/locatable_camera_in_unity says:
"Only a single operation can occur with the camera at a time."
Since the application has to be presented to a great number of people it is absolutely essential to show it via live stream.
Does somebody have any general idea how to solve this problem, or maybe some hack to access the webcam in PhotoMode simultaneously with the streaming?
Many thanks in advance!
This is possible if you can live with Preview Frames from the MediaCapture streams. Just start the video capture (layer on holograms if you need to), and then use the PreviewFrames as your 'photos'. This limits you to the resolution of the camera stream of course.
I was able to get this plugin working on a HoloLens. I had to use the .NET scripting backend instead of IL2CPP, and I used Unity 2017.4.22f1. At the very least the code shows how to use MediaCapture and PreviewFrames to get a video feed from the camera, from which you can grab the current frame to save as a photo. The sample doesn't do that last bit, but the bytes for the frames are being passed around; you just need to make them available for your needs. =)
https://github.com/VulcanTechnologies/HoloLensCameraStream
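Outside of that plugin, the plain UWP route to the same idea is MediaCapture.GetPreviewFrameAsync; a rough sketch, assuming mediaCapture has already been initialized and its preview started, and with an illustrative resolution:

using Windows.Graphics.Imaging;
using Windows.Media;
using Windows.Media.Capture;

// grab the current preview frame as a 'photo' while the video stream keeps running
var destination = new VideoFrame(BitmapPixelFormat.Bgra8, 1280, 720);
VideoFrame previewFrame = await mediaCapture.GetPreviewFrameAsync(destination);

SoftwareBitmap photo = previewFrame.SoftwareBitmap;
// encode the bitmap (e.g. with BitmapEncoder) and send it to the REST interface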
I need to be able to record video from an external camera in a C# application.
Unfortunately, a webcam is pretty much out of the question, as the application will record outside and during the evening/night. That is why I was thinking of a camcorder, since it also has manual control over exposure and focus, lower noise, and a better sensor.
So far the plan is to use the AV/S-Video output from the camcorder and send the signal to a USB capture card (the computer is a laptop, so no PCIe cards).
How would I be able to access the video stream from the C# application, now that it comes from the capture card?
Does my proposed system seem feasible (achievable, good video quality, good fps)? Does anybody have another working solution?
Thanks
This Code Project article could be a good starting point.
The author mentions:
The main goal of the application was to make it flexible and extensible. The application itself can communicate with any video source – it may be an IP video camera or a server, it may be a local camera attached to USB, it may be an MMS stream from a remote server, or it may be any other video source. And more of it, the application can work with all these video sources simultaneously, displaying them all on a single screen.
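One commonly used C# wrapper for that kind of DirectShow capture is the AForge.NET library; a rough sketch, assuming the USB capture card shows up as a standard DirectShow video input device (device index 0 is just for illustration):

using AForge.Video;
using AForge.Video.DirectShow;

// enumerate DirectShow video input devices; the capture card should appear here
var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
var videoSource = new VideoCaptureDevice(devices[0].MonikerString);

videoSource.NewFrame += (sender, eventArgs) =>
{
    // eventArgs.Frame is a System.Drawing.Bitmap; clone it if you keep it past this handler
    using (var frame = (System.Drawing.Bitmap)eventArgs.Frame.Clone())
    {
        // display, encode, or save the frame here
    }
};

videoSource.Start();
// ... later: videoSource.SignalToStop();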
The solution I used in the end was Microsoft Expression Encoder.