I need to be able to record video from an external camera in a C# application.
Unfortunately, a webcam is pretty much out of the question, as the application will record outside during the evening/night. That is why I was thinking of a camcorder, since it offers manual control over exposure and focus, lower noise, and a better sensor.
So far, my plan is to use the AV/S-Video output from the camcorder and send the signal to a USB capture card (the computer is a laptop, so no PCI-E cards).
How would I be able to access the video stream from the C# application, now that it comes through the capture card?
Does my proposed system seem feasible (achievable, good video quality, good fps)? Does anybody have another working solution?
Thanks
This Code Project article could be a good starting point.
The author mentions:
The main goal of the application was to make it flexible and extensible. The application itself can communicate with any video source – it may be an IP video camera or a server, it may be a local camera attached to USB, it may be an MMS stream from a remote server, or it may be any other video source. What is more, the application can work with all these video sources simultaneously, displaying them all on a single screen.
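As a rough sketch of that approach, a DirectShow wrapper such as AForge.Video.DirectShow (an assumption on my part; the article describes a similar architecture) can open a USB capture card just like a webcam and deliver frames to your C# code:

    using System;
    using AForge.Video;
    using AForge.Video.DirectShow;

    class CaptureSketch
    {
        static void Main()
        {
            // A USB capture card shows up as a regular DirectShow
            // video input device, next to any webcams.
            var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
            if (devices.Count == 0)
                throw new InvalidOperationException("No video input devices found.");

            var source = new VideoCaptureDevice(devices[0].MonikerString);

            // Each decoded frame arrives as a Bitmap; display it,
            // save it, or pass it to a recording component here.
            source.NewFrame += (sender, e) =>
                Console.WriteLine("Frame: {0}x{1}", e.Frame.Width, e.Frame.Height);

            source.Start();
            Console.ReadKey();
            source.SignalToStop();
            source.WaitForStop();
        }
    }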
The solution I used in the end was Microsoft Expression Encoder.
Is there any way in C#/.NET of recording the audio currently being played? I've searched a lot on the internet, but the only results I could find are about recording with a microphone.
I don't want to record using the microphone input; I want to record what is being played on the computer when I click a record button.
Thanks
You have two options here:
Hardware loopback device – a virtual "Stereo Mix" audio device, which acts as a regular audio capture device and at the same time produces a copy of the mixed audio feed played through the system's default audio output device. Since such a device shows up as a real audio input device, you can use standard APIs, libraries, and even existing applications to record from it.
Programmatic access to a virtual loopback device as if it were a microphone-like device. The API in the background duplicates the played audio content and makes it available for reading back as it plays. The good news is that you can access the mixed audio feed for the device of your interest.
Both options are described in detail in the Loopback Recording article on MSDN and are available via standard audio APIs, specifically WASAPI.
For C# development you are likely to use a wrapper like NAudio.
For option 1 you will find quite a few questions on Stack Overflow; for the other option, the keyword is AUDCLNT_STREAMFLAGS_LOOPBACK.
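As a minimal sketch of option 2 with NAudio (its WasapiLoopbackCapture wraps the WASAPI loopback mode mentioned above; the file name and stop logic are just illustrative):

    using System;
    using NAudio.Wave;

    class LoopbackSketch
    {
        static void Main()
        {
            // Captures whatever is rendered to the default output device.
            using (var capture = new WasapiLoopbackCapture())
            {
                var writer = new WaveFileWriter("loopback.wav", capture.WaveFormat);

                capture.DataAvailable += (s, e) =>
                    writer.Write(e.Buffer, 0, e.BytesRecorded);
                capture.RecordingStopped += (s, e) => writer.Dispose();

                capture.StartRecording();
                Console.WriteLine("Recording... press any key to stop.");
                Console.ReadKey();
                capture.StopRecording();
            }
        }
    }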
The only way to receive data from another application is if the developer provides an access point, normally through an SDK, an API, or some other means. Without this, there is no way for your application code to receive the bytes from the other application.
The reason a microphone works is that it picks up the sound waves produced by the application's output and feeds them back into your PC as new input. Since you have access to those bytes from the microphone, you are able to capture the sound.
See if there is an API or an SDK from the developer of the application you are trying to get sound from.
Our app uses C#/WinForms/VMR9/DirectShowLib-2005 to either play back a local video file or receive (and render) a live video stream over UDP using a third-party DirectShow filter. The video stream is H.265-encoded at 1080p.
I also have that DirectShow filter recording the live video feed to a local file for me.
When I resize the form during video playback or live-feed playback, I get a lost device and need to reset it. I free all the resources, but the device reset still fails unless I also destroy the graph, which is what receives and records my live video feed.
So, the problem is that I would like to keep recording the video feed without interruption from a resize, a move to another monitor, or a device loss/reset. What are my options to achieve this? We can consider converting the code to WPF/WF, purchasing a commercial plugin, using a free one, etc. I need advice here.
A second question on the same subject, if I may. While the live feed is recorded to a local file and played back in the video window, we also display a timeline (slider control) representing the time from the beginning of the live feed to the present moment (it moves forward while the feed is active). I need to give the user the ability to select any previous moment in time and immediately play back that part of the recorded video, while the live feed is still being recorded to the same file. After reviewing part of the recording, the user must be able to continue watching the live feed.
I am not sure which technology we should be using to achieve that as well. I would appreciate any help.
Thank you very much.
Recording filter graphs are sensitive to unexpected state transitions; the assumption is that recording takes place in one pass, without pauses and resumptions such as those caused by the need to reset video hardware or change formats.
The typical method is to separate recording from other activity into its own graph. A dedicated recording graph receives data produced outside of it and records it to a file (or streams it to the network). Playback and presentation run in another graph, which can be flexibly reset or reconfigured as needed.
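A rough sketch of such a dedicated recording graph with DirectShowLib (the sourceFilter parameter stands in for your third-party UDP receiving filter, and the AVI mux is just one possible sink; treat this as an outline, not drop-in code):

    using DirectShowLib;

    static class RecordingGraph
    {
        // Builds a graph whose only job is recording. The preview graph
        // lives elsewhere and can be reset freely on resize/device loss.
        public static IMediaControl Build(IBaseFilter sourceFilter, string path)
        {
            var graph = (IGraphBuilder)new FilterGraph();
            var builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
            DsError.ThrowExceptionForHR(builder.SetFiltergraph(graph));

            // 'sourceFilter' is hypothetical: the third-party filter that
            // receives the UDP stream.
            DsError.ThrowExceptionForHR(graph.AddFilter(sourceFilter, "UDP Source"));

            // AVI is just one container choice; pick a mux suited to H.265.
            IBaseFilter mux;
            IFileSinkFilter sink;
            DsError.ThrowExceptionForHR(
                builder.SetOutputFileName(MediaSubType.Avi, path, out mux, out sink));

            DsError.ThrowExceptionForHR(
                builder.RenderStream(null, MediaType.Video, sourceFilter, null, mux));

            var control = (IMediaControl)graph;
            DsError.ThrowExceptionForHR(control.Run());
            return control;
        }
    }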
See also:
Directshow Preview Only and Capture & Preview with a single Graph
I am trying to create a HoloLens application that uses the built-in WebCam to take photos and send them to a REST interface for further face recognition. This is working well so far. To capture photos, the WebCam needs to be in PhotoMode.
The problem:
If I now want to present my application via live stream, the WebCam is automatically set to VideoMode, and capturing photos is not possible.
The locatable camera documentation (https://developer.microsoft.com/en-us/windows/mixed-reality/locatable_camera_in_unity) says:
"Only a single operation can occur with the camera at a time."
Since the application has to be presented to a great number of people, it is absolutely essential to show it via live stream.
Does somebody have a general idea of how to solve this problem, or maybe some hack to access the WebCam in PhotoMode while streaming at the same time?
Many thanks in advance!
This is possible if you can live with preview frames from the MediaCapture streams. Just start the video capture (layer on holograms if you need to) and then use the PreviewFrames as your 'photos'. This limits you to the resolution of the camera stream, of course.
I was able to get this plugin working on a HoloLens. I had to use .NET instead of IL2CPP, and I used Unity 2017.4.22f1. At the very least, the code shows how to use MediaCapture and PreviewFrames to get a video feed from the camera, from which you can grab the current frame to save as a photo. The sample doesn't do that last bit, but the bytes for the frames are being passed around; you just need to make them available for your needs. =)
https://github.com/VulcanTechnologies/HoloLensCameraStream
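For reference, the core of the MediaCapture/PreviewFrames idea in a UWP app looks roughly like this (a sketch; frame-source selection and encoding details vary by device):

    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Windows.Graphics.Imaging;
    using Windows.Media.Capture;
    using Windows.Media.Capture.Frames;

    class PreviewFrameSketch
    {
        // Starts the video stream and exposes individual preview frames,
        // which can be encoded and sent to the face-recognition REST API.
        public static async Task StartAsync()
        {
            var capture = new MediaCapture();
            await capture.InitializeAsync(new MediaCaptureInitializationSettings
            {
                StreamingCaptureMode = StreamingCaptureMode.Video
            });

            // Pick the color (RGB) frame source of the device.
            var source = capture.FrameSources.Values.First(
                s => s.Info.SourceKind == MediaFrameSourceKind.Color);

            var reader = await capture.CreateFrameReaderAsync(source);
            reader.FrameArrived += (sender, args) =>
            {
                using (var frame = sender.TryAcquireLatestFrame())
                {
                    // May be null until the stream is fully running.
                    SoftwareBitmap bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
                    if (bitmap != null)
                    {
                        // ... encode to JPEG and upload as a "photo" ...
                    }
                }
            };
            await reader.StartAsync();
        }
    }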
The question is simple, but I guess the answer might not be.
I want to create a display device on which the GPU would render a desktop or a video game. However, this device would not be connected to any physical screen on the video card's ports. The rendered output would be retrieved and streamed somewhere else over the network.
A bit similar to what OnLive did, but I would like to stream that video output over the LAN. Obviously it must be a full, real display so that existing applications and video games work properly on it.
Is it even possible in C#?
Surely you could link it up as an output stream with your processed frames, piped to a network socket or such?
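A true virtual display means writing a display driver, which is out of reach for pure C#; but if capturing whatever the desktop renders is enough, a managed sketch along the lines of that comment could grab frames via GDI and pipe them to a socket (too slow for fast games, but it illustrates the idea; the host and port are hypothetical):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using System.Net.Sockets;
    using System.Windows.Forms;

    class ScreenStreamer
    {
        static void Main()
        {
            var bounds = Screen.PrimaryScreen.Bounds;
            using (var client = new TcpClient("192.168.0.10", 9000)) // hypothetical receiver
            using (var net = client.GetStream())
            using (var frame = new Bitmap(bounds.Width, bounds.Height))
            using (var g = Graphics.FromImage(frame))
            {
                while (true)
                {
                    // Grab the current desktop contents via GDI.
                    g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);

                    // Naive framing: 4-byte length prefix, then one JPEG.
                    using (var ms = new MemoryStream())
                    {
                        frame.Save(ms, ImageFormat.Jpeg);
                        var jpeg = ms.ToArray();
                        net.Write(BitConverter.GetBytes(jpeg.Length), 0, 4);
                        net.Write(jpeg, 0, jpeg.Length);
                    }
                }
            }
        }
    }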
I am experimenting with C#, and I want to create a fun/useful network program. I've programmed for most of my years in C++; C# seems a lot cleaner and easier to program in. I've mostly programmed data structures and algorithms and haven't really touched networking much.
I have video files on my computer that I would like to be able to share/stream/send to other computers on my network. I'm going to eventually expand on it and add a lot of features, but I want to conquer the hardest part first.
Is there a library out there that helps with the data management for this?
I see three ways of accomplishing this; I don't know which is easiest and best.
Maybe using Windows file sharing (like how other computers on a network can open videos in a shared folder)?
Streaming the video data to the client computer, then having their native video program open the data stream (buffered, like on YouTube)?
Silverlight or some other library, so I can use its built-in video player, etc., to run it.
Features:
I eventually want to allow the client to copy the video tutorial file to their own computer if necessary, so maybe buffering is the best solution.
I want to allow the client to pause/download the video.
Hopefully I can learn a lot in this project.
You can use the Microsoft Expression Encoder SDK to push a video stream to a local port, or publish it in Windows or IIS Media Services. Windows Media Player, Silverlight, or a player-based application can be used for playback on another computer. Also, there are some options for playback on Apple devices. For H.264 support, you would need the Pro version of the encoder.
For more information see the SDK documentation on MSDN, and the articles Getting started with IIS Live Smooth Streaming and Apple HTTP Live Streaming with IIS Media Services.
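A rough sketch of pushing a live device source to a local port with the SDK's LiveJob (API names from memory; treat as approximate and check the MSDN documentation):

    using System;
    using Microsoft.Expression.Encoder.Devices;
    using Microsoft.Expression.Encoder.Live;

    class BroadcastSketch
    {
        static void Main()
        {
            // First available video/audio capture devices.
            var video = EncoderDevices.FindDevices(EncoderDeviceType.Video)[0];
            var audio = EncoderDevices.FindDevices(EncoderDeviceType.Audio)[0];

            using (var job = new LiveJob())
            {
                var source = job.AddDeviceSource(video, audio);
                job.ActivateSource(source);

                // Publish a pull stream; players connect to this port.
                job.PublishFormats.Add(new PullBroadcastPublishFormat
                {
                    BroadcastPort = 8080,
                    MaximumNumberOfConnections = 5
                });

                job.StartEncoding();
                Console.WriteLine("Streaming on port 8080; press a key to stop.");
                Console.ReadKey();
                job.StopEncoding();
            }
        }
    }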
You should be able to use VLC to transcode the file (or just stream it), then connect to the stream it produces. I know you're experimenting with C#, but it seems odd to reinvent the wheel, especially when it's such a good one!
I'm sure you'd have some fun automating vlc.
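For example, a minimal sketch that launches VLC from C# to serve a file as an HTTP stream (this assumes vlc is on the PATH; the --sout chain transcodes to H.264, and clients then open http://<host>:8080/ in VLC):

    using System.Diagnostics;

    class VlcSketch
    {
        static void Main()
        {
            // Transcode video.mp4 and serve it as an HTTP MPEG-TS stream.
            var args = "\"video.mp4\" " +
                "--sout=#transcode{vcodec=h264,acodec=mp3}:http{mux=ts,dst=:8080/}";

            using (var vlc = Process.Start("vlc", args))
            {
                vlc.WaitForExit();
            }
        }
    }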