How to Multicast a Stream Captured with DirectShow? - c#

I have a requirement to build a very simple streaming server. It needs to be able to capture video from a device and then stream that video via multicast to several clients on a LAN.
The capture part of this is pretty easy (in C#) thanks to a library someone wrote with DirectShow.Net (http://www.codeproject.com/KB/directx/directxcapture.aspx).
The question I have now is how to multicast this? This is the part I'm stuck on. I'm not sure what to do next, or what steps to take.

There are no ready-made filters that you can just plug in and use.
You need to do three things here:
Compress the video into MPEG2 or MPEG4
Mux it into MPEG Transport Stream
Broadcast it
There are lots of codecs available for part 1, and some devices can even output compressed video.
Part 3 is quite simple too.
The main problem is part 2, as MPEG Transport Stream is patented. It is licensed in a way that does not allow developing free software based on it (VLC and FFmpeg violate that license), and you have to pay several hundred dollars just to obtain a copy of the specification.
If you have to develop it, you need to:
Obtain a copy of ISO/IEC 13818-1-2000 (you may download it as a PDF from their site); it describes MPEG Transport Stream
Develop a renderer filter that takes MPEG elementary streams and muxes them into a Transport Stream
It has to be a renderer rather than a transform filter, because the Transport Stream carries some out-of-band data (program association tables and reference clocks) that must be sent on a regular basis, so you need to keep a worker thread to do that.
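For the broadcast step, plain UDP multicast is usually enough on a LAN. Below is a minimal sketch in C#; the TsMulticastSender class name, the group address/port parameters and the 7-packets-per-datagram batching are illustrative choices of mine, not anything mandated by DirectShow or the TS specification.

    using System;
    using System.Net;
    using System.Net.Sockets;

    // Minimal sketch of the "broadcast" step, assuming the mux hands you
    // already-formed 188-byte MPEG-TS packets.
    class TsMulticastSender : IDisposable
    {
        private readonly UdpClient client = new UdpClient();
        private readonly IPEndPoint group;

        public TsMulticastSender(string groupAddress, int port)
        {
            group = new IPEndPoint(IPAddress.Parse(groupAddress), port);
            client.Ttl = 8; // small TTL keeps the stream inside the LAN
        }

        // A common convention is to batch 7 TS packets (7 * 188 = 1316 bytes)
        // per datagram so each one fits a typical 1500-byte Ethernet MTU.
        public void Send(byte[] tsChunk)
        {
            client.Send(tsChunk, tsChunk.Length, group);
        }

        public void Dispose()
        {
            client.Close();
        }
    }

Clients on the LAN would join the same group (for example with UdpClient.JoinMulticastGroup) or simply point VLC at udp://@239.255.0.1:1234 if you used that address and port.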

To achieve that you need to set up or write some kind of video streaming server.
I've used VideoCapX for the same purpose on my project. The documentation and support are not top notch, but they're good enough. It uses WMV streaming technology; the stream is called an MMS stream, and you can view it with most media players (I've tested Windows Media Player, Media Player Classic and VLC). If you would like to see its capabilities without writing any code just yet, take a look at U-Broadcast, which uses VideoCapX to do the job behind the scenes.
I've been using DirectShow.Net for almost 2 years, and I still find it hard to write a streaming server myself, due to the complexity of DirectShow technology.
Other than WMV, you can take a look at Helix Server or Apple Streaming Server. The latter is not free, and neither is Microsoft's WMV streaming server.
You can also take a look at VLC or Windows Media Encoder to stream straight from the application, but so far I find U-Broadcast outdoes both of them: VLC has some codec compatibility issues when playing back in non-VLC players, and WME has problems starting up the capture device.
Good Luck
NOTE: I'm not associated with VideoCapX or its company, I'm just a happy user of it.

http://www.codeproject.com/KB/directx/DShowStreamingServer.aspx might help, and so might http://en.wikipedia.org/wiki/VLC_media_player#cite_note-14
VLC also "should" be able to stream from any device natively.

Related

How to make audio broadcast live streaming

I need to broadcast audio content over the network, receive it and play it in the browser "on the fly". The audio content is just a list of mp3 files, but on the client side it should look like an endless, stateless audio stream, something like a YouTube live stream or an online radio station.
I really don't know anything about this. Can anyone help me with it? How does it work, and which protocols are used for sending and receiving the data? Anything that helps me understand is welcome.
Ideally I'm looking for a solution for .NET, but I will be glad for anything at all, at least to understand how it works in general.
Thank you.
One way to do it would be with the help of ffmpeg.
You can use ffmpeg to create a DASH or HLS playlist: https://ffmpeg.org/ffmpeg-formats.html#hls-2
FFmpeg supports other streaming solutions too.
To invoke ffmpeg you have to find binaries compatible with the system your server runs on (Windows, Linux). Here is how you can start an external process from C#: How do I start a process from C#?
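As a rough illustration of that step, here is one way to launch ffmpeg from C# to build an HLS playlist; ffmpeg is assumed to be on the PATH, and the input name, segment length and output file names are placeholders for your own setup.

    using System.Diagnostics;

    // Sketch: run ffmpeg as an external process to segment audio into HLS.
    var ffmpeg = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-re -i input.mp3 -c:a aac -b:a 128k " +
                        "-f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments " +
                        "stream.m3u8",
            UseShellExecute = false,
            RedirectStandardError = true // ffmpeg writes its log to stderr
        }
    };
    ffmpeg.Start();

The resulting stream.m3u8 and its .ts segments are what the HTTP service mentioned below would serve to the player.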
To play your playlist in the browser you can use VideoJS, which has built-in support for DASH and HLS (and can play audio too): https://videojs.com/
Build your logic to manage and update the playlists; then you just need an HTTP service that serves the playlist file, and VideoJS will play it for you.
If you go with HLS then you probably should read this: https://developer.apple.com/streaming/
If you go with DASH then read this: https://mpeg.chiariglione.org/standards/mpeg-dash
Another way is to use out-of-the-box solutions, which often aren't free:
https://azure.microsoft.com/en-us/services/media-services/
https://www.wowza.com/

How to compress video from a PCL in Xamarin

I'm doing a "WhatsApp"-like app and I need to send user videos (from camera/gallery).
I need to send video from iOS to Android and from Android to iOS (Windows Phone in the future).
The first thing I thought of was to use camera parameters to record the video in low resolution, but that won't help with videos already stored on the phone.
My second thought was to zip the video file, but I guess that is not enough for very large files.
Third: actually compress the video file, generating a new file, and then zip it before sending it over the network.
So this is what I need before actually sending the video:
1. Compress the video file, generating a new file that will play nicely on both platforms (iOS and Android)
2. Make the compression process async (as I don't want to block the UI thread for a really long time)
3. Zip it (this is the easy part, just for the record)
Any ideas or help are appreciated
1. You are best off using each platform's native framework so you can leverage existing hardware support for encoding (mainly H.264 hardware encoding). A PCL-only solution would eat too much battery, as it would have to run on the CPU alone, giving you poor performance and even worse battery life.
2. This ties in with 1: just use each platform's native method and expose it to the shared code asynchronously, as sketched below.
3. Skip this part. It would add overhead and rule out video streaming, and there are virtually zero benefits to running a zip algorithm on top of an already compressed video stream.
Just make sure that you end up with a cross-platform compatible video format like H.264.
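To make points 1 and 2 concrete, here is a sketch using the Xamarin.Forms DependencyService pattern; the IVideoCompressor interface and method names are my own, and the platform implementations (e.g. AVAssetExportSession on iOS, MediaCodec on Android) are not shown.

    using System.Threading.Tasks;
    using Xamarin.Forms;

    // Shared (PCL) contract only; each platform project supplies an implementation
    // that calls its native hardware encoder and registers it with
    // [assembly: Dependency(typeof(...))].
    public interface IVideoCompressor
    {
        // Writes an H.264/MP4 file and returns its path.
        Task<string> CompressAsync(string inputPath, string outputPath);
    }

    public static class VideoSender
    {
        // Awaiting the platform implementation keeps the UI thread free
        // while the (potentially long) encode runs.
        public static async Task<string> PrepareForUploadAsync(string inputPath)
        {
            var compressor = DependencyService.Get<IVideoCompressor>();
            return await compressor.CompressAsync(inputPath, inputPath + ".compressed.mp4");
        }
    }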

Using video codecs like XVid in c#

I'm trying to develop an application which captures a series of images from a webcam using DirectShow.Net and then sends them over the network to other clients.
Everything is working fine, except that the images are too big, and compression methods such as GZipStream, JPEG compression and so on do not help much in reducing the size.
Now I want to know how to use codecs like XVid, or any other codec, to reduce the size.
Playing around with the VisioForge demos shows that XVid files are much smaller than regular AVI files.
Thanks for any help
There are specific video compression algorithms which compress video effectively; some of the most popular are M-JPEG, MPEG-4, H.261, H.263, H.264, VP8 and Theora. In DirectShow, video compression comes in the form of video compression filters (codecs). A standard Windows installation does not normally include much for this task (for various reasons, patents in particular), so you need to use a third-party or otherwise installable codec. Luckily, the codecs have a more or less uniform interface, and you use them similarly from C#.
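For illustration, here is a sketch of building such a graph with the DirectShow.NET wrapper the question already uses; the "Xvid" friendly-name match and the null sink are placeholders, since the exact codec and output filter depend on what is installed and where you want the compressed stream to go.

    using System;
    using System.Linq;
    using DirectShowLib;

    class CompressedCaptureSketch
    {
        static void Main()
        {
            // Filter graph manager plus the capture graph builder helper.
            IGraphBuilder graph = (IGraphBuilder)new FilterGraph();
            ICaptureGraphBuilder2 builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
            builder.SetFiltergraph(graph);

            // Use the first webcam found as the source filter.
            DsDevice camera = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice).First();
            IBaseFilter source;
            ((IFilterGraph2)graph).AddSourceFilterForMoniker(camera.Mon, null, camera.Name, out source);

            // Installed compressors show up under VideoCompressorCategory;
            // "Xvid" is just an example of a friendly name that may be installed.
            DsDevice codecDevice = DsDevice.GetDevicesOfCat(FilterCategory.VideoCompressorCategory)
                                           .First(d => d.Name.Contains("Xvid"));
            Guid iid = typeof(IBaseFilter).GUID;
            object codecObject;
            codecDevice.Mon.BindToObject(null, null, ref iid, out codecObject);
            IBaseFilter compressor = (IBaseFilter)codecObject;
            graph.AddFilter(compressor, codecDevice.Name);

            // Wire source -> compressor -> sink. Replace the final null with your
            // file writer or network sink filter; null falls back to a default renderer.
            builder.RenderStream(PinCategory.Capture, MediaType.Video, source, compressor, null);
        }
    }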
See related questions with helpful information:
Real-time video encoding in DirectShow
Capturing webcam using DirectShow.NET library
Be sure to check DirectShow.NET samples out:
\Samples\Misc\DxWebCam - a poor man's web cam program. This application runs as a Win32 service. It takes the output of a capture graph, turns it into a stream of JPEG files, and sends it over TCP/IP to a client application.
\Samples\Capture\CapWMV - a .NET sample application using the WM ASF Writer filter to create a WMV file.

Multiple Audio, Live Smooth Streaming

I could not find any documentation that explains how to provide multiple audio streams for Live Smooth Streaming.
For example, in Microsoft's PDC streams it is possible to select languages.
Does SMF provide this feature? If so, how? What will my .isml file look like?
This link gives a sample of multiple audio languages in Smooth Streaming.
If you are looking into this, please note that, unlike video, Smooth Streaming currently does not support multiple bit rates for audio.
There is a SmoothStreamingMediaElement.ManifestMerge event that lets you add additional streams to the manifest loaded when opening media. This is called manifest merging and is described here:
http://msdn.microsoft.com/en-us/library/ff432455%28v=vs.90%29.aspx
In SMF you can access the SSME through the IAdaptiveMediaPlugin.VisualElement interface.
So if you have two live streaming endpoints:
AudioAndVideo.isml/Manifest (standard audio and video streams)
Audio2.isml/Manifest (second audio stream with dummy video streams)
you could open the first one and merge in the audio stream from the second one, as in the sketch below. This requires two Expression Encoder encoding sessions.
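A rough sketch of that merge, following the pattern in the MSDN article linked above (the Audio2 URL is a placeholder, and the exact member signatures should be verified against the Smooth Streaming Client SDK version you use):

    // ssme is the SmoothStreamingMediaElement reached through
    // IAdaptiveMediaPlugin.VisualElement. Subscribe before opening the media:
    //     ssme.ManifestMerge += OnManifestMerge;

    private void OnManifestMerge(SmoothStreamingMediaElement ssme)
    {
        // Parse the second endpoint's manifest and merge its audio stream
        // into the manifest that is currently being opened.
        object parsed = ssme.ParseExternalManifest(new Uri("http://server/Audio2.isml/Manifest"));
        ssme.MergeExternalManifest(parsed);
    }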

How can I emulate a video capture device and provide dynamic video content?

I would like to emulate video input from a webcam for testing purposes.
So I need to be able to emulate a software video capture device in Windows and be able to dynamically generate its output.
How can I achieve this?
I would prefer a solution in C# or C++.
You can use a Virtual Webcam (old link, but there are others): it takes a video or image file and presents it as a webcam device, so your system will think it is a normal capture device.
Then you will need to create something that generates the video/images; if you only need a static image, it's pretty easy to generate a BMP.
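Once the virtual device is installed, a quick sanity check (sketched here with the DirectShow.NET wrapper) is to enumerate the video input category and confirm the emulated camera appears alongside any real ones:

    using System;
    using DirectShowLib;

    // List every registered video capture device; a correctly registered
    // virtual webcam appears here exactly like a physical one.
    foreach (DsDevice device in DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice))
    {
        Console.WriteLine(device.Name);
    }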
This is an old question (no selected answer)... actually probably one of the oldest I've ever seen... but I came across it looking for an answer myself, and I remember the days when "Virtual Webcam" still existed (the link is now just a Chinese ad site).
Fear not! There are new resources to solve your decade-long quest.
First of all, check out OBS; this open-source project does a LOT with video streams:
https://obsproject.com/
Second, check out this virtual webcam plugin for it. It does exactly what you're talking about, and it uses #qbeuek's suggestion of DirectShow:
https://obsproject.com/forum/resources/obs-virtualcam.949/
It is written in C++, so grabbing the bits you need and rewriting them in C# is left as an exercise for the reader, but the capability is there.
As far as I know, there is a set of COM interfaces that governs the recording and playback of audio and video in Windows. It used to be called DirectShow, but maybe the name has changed in the meantime. Those interfaces are used to construct a graph of audio and video filters to encode and decode the data stream.
The way to go:
- read about the Microsoft DirectShow API,
- implement a COM object that implements the video source interface.
