I'm developing a C# video streaming application using the NReco library. At the moment I am able to encode audio and video separately and save the data into queues. I can stream video over UDP without audio and it plays nicely in ffplay; likewise, I can stream audio over UDP without video and it also plays nicely. Now I want to merge these two streams into one, stream that over UDP, and have the player play both audio and video, but I have no idea how to do it. I would appreciate it if someone could give me some pointers on this, or any other method to achieve it.
Thank You.
The answer depends heavily on the source of the video and audio streams. NReco.VideoConverter is a wrapper around the FFMpeg tool, and it can combine video and audio streams (see the filters configuration in the FFMpeg documentation) if either the video or the audio input can be specified as an ffmpeg input source (a UDP stream or a DirectShow input device).
If both the video and the audio data are represented as byte streams in your C# code, you cannot pass them together using NReco.VideoConverter (the ConvertLiveMedia method), because it uses stdin to communicate with ffmpeg, and only one stream can be passed from C# code.
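If each stream can be exposed as a network input instead of stdin, one workaround is to bypass the wrapper and let ffmpeg itself consume both inputs and mux them. A minimal sketch, assuming ffmpeg is on PATH and both elementary streams are already being pushed to the (hypothetical) local ports below:

```shell
# Take video from one UDP input and audio from another, copy both
# without re-encoding, mux into MPEG-TS and send to a multicast address
ffmpeg -i udp://127.0.0.1:5004 -i udp://127.0.0.1:5006 \
       -map 0:v -map 1:a -c copy \
       -f mpegts udp://239.0.0.1:1234
```

ffplay should then be able to pick up both tracks from the single muxed stream.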
Related
I have a RTP video stream encoded in h.264 and I would like to capture it into a file.
I'm trying to create a graph in GraphEdit that will listen to a specific port (RTP stream) and will save it to a file.
If you know any good filters I can use or good guides I would love to try them.
MainConcept is good but not cheap.
I am using NAudio to capture an audio signal from my line in device into a byte array. I can successfully send that byte array across my WLAN via UDP broadcast and receive it on another computer. Once the byte array has been received, I am able to play the audio stream.
My goal is to stream an audio signal from a line-in device so it can be consumed by an HTML5 audio tag or jPlayer. Do you have an example or reading material on how to convert the input byte array into a stream in an HTML5-compatible format?
I would like to create a .Net solution without using any third-party applications.
Here is a sample of how I am capturing and broadcasting the audio signal via UDP.
// Capture audio from the line-in device in 50 ms buffers
var waveIn = new WaveInEvent();
waveIn.DeviceNumber = deviceID;
waveIn.WaveFormat = Program.WAVEFORMAT;
waveIn.BufferMilliseconds = 50;
waveIn.DataAvailable += OnDataAvailable;

// udpSender is a field so the event handler below can use it
udpSender = new UdpClient();
udpSender.JoinMulticastGroup(Program.MulticastIP);

waveIn.StartRecording();

private void OnDataAvailable(object sender, WaveInEventArgs e)
{
    // Send each captured buffer as one UDP datagram to the multicast group
    udpSender.Send(e.Buffer, e.BytesRecorded, Program.EndPoint);
}
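For completeness, a sketch of the receiving side under the same assumptions (Program.WAVEFORMAT and Program.MulticastIP as above; Program.Port is a hypothetical name for the port used by the sender), using NAudio's BufferedWaveProvider for playback:

```csharp
// Join the multicast group and bind to the shared port
var udpReceiver = new UdpClient();
udpReceiver.Client.SetSocketOption(SocketOptionLevel.Socket,
    SocketOptionName.ReuseAddress, true);
udpReceiver.Client.Bind(new IPEndPoint(IPAddress.Any, Program.Port));
udpReceiver.JoinMulticastGroup(Program.MulticastIP);

// Buffer incoming raw PCM and play it through the default output device
var provider = new BufferedWaveProvider(Program.WAVEFORMAT);
var waveOut = new WaveOutEvent();
waveOut.Init(provider);
waveOut.Play();

var remote = new IPEndPoint(IPAddress.Any, 0);
while (true)
{
    byte[] datagram = udpReceiver.Receive(ref remote); // blocks until data arrives
    provider.AddSamples(datagram, 0, datagram.Length);
}
```

Note this plays raw PCM between two machines that agree on the wave format; it is not yet an HTML5-compatible stream.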
The answer to your question really depends on whether you want to stream the audio to an HTML5-compatible browser, or whether you simply want to play back recorded audio.
According to W3Schools, all major browsers support the major audio formats (wav/ogg/mp3):
http://www.w3schools.com/html/html5_audio.asp
Playing a pre-recorded audio file using the HTML5 audio tag is pretty easy. The link above describes how to accomplish that.
Live-streaming the audio to an HTML5 browser is a whole different story. You need some kind of streaming server. I would advise against trying to implement this yourself; it's much easier to use an external library for this.
The only reliable solution I can think of right now is using one of the freely available .Net P/Invoke libraries for libvlc, which provides access to the streaming capabilities of the VLC Media Player.
The following StackOverflow question describes how to setup an audio streaming server using libvlc:
Use libvlc to stream mp3 to network
You can try to accomplish the same thing in C#. It might require some P/Invoke work, but it's the most reliable solution I can think of.
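For orientation, the libvlc setup described in that question corresponds roughly to the following VLC command line (device, port and path are placeholders; assumes a VLC build with an MP3 encoder available), which serves the capture as raw MP3 over HTTP so an HTML5 audio tag can point at it:

```shell
# Transcode the capture to MP3 and serve it at http://<host>:8080/stream.mp3
vlc -I dummy dshow:// --sout \
  "#transcode{acodec=mp3,ab=128,channels=2}:standard{access=http,mux=raw,dst=:8080/stream.mp3}"
```

The same sout chain string is what you would hand to libvlc through P/Invoke.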
I could not find any document that explains how to provide multiple audio streams for Live Smooth Streaming.
For example, in Microsoft PDC's streams, it is possible to select languages.
Does SMF provide this feature? If so, how? What would my isml file look like?
This link gives a sample of multiple audio languages in Smooth Streaming.
If you are looking into this, please note that, unlike video, Smooth Streaming currently does not support multiple bit rates for audio.
There is a SmoothStreamingMediaElement.ManifestMerge event that enables adding additional streams to the manifest loaded when opening media. This is called manifest merging and is described here:
http://msdn.microsoft.com/en-us/library/ff432455%28v=vs.90%29.aspx
In SMF you can access the SSME through the IAdaptiveMediaPlugin.VisualElement member.
So if you have two live streaming endpoints:
AudioAndVideo.isml/Manifest (standard audio and video streams)
Audio2.isml/Manifest (second audio stream with dummy video streams)
you could open the first one and merge it with the audio stream from the second one. This requires two Expression Encoder encoding sessions.
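A rough sketch of what the merge could look like, using the handler and method names from the MSDN page linked above (ParseExternalManifest / MergeExternalManifest); the URI and timeout are placeholders, and the exact signatures should be verified against that page:

```csharp
// Hook the merge event before opening the primary publishing point.
// When it fires, pull in the audio-only manifest and merge its streams.
ssme.ManifestMerge += (SmoothStreamingMediaElement sender) =>
{
    // Parse the manifest of the second (audio-only) publishing point...
    object audioManifest = sender.ParseExternalManifest(
        new Uri("http://server/Audio2.isml/Manifest"), 3000, null);

    // ...and merge its streams into the manifest being opened.
    sender.MergeExternalManifest(audioManifest);
};
```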
I need to convert an AMR (Adaptive Multi-Rate) audio file recorded on a phone (received as a Stream object) to an uncompressed PCM wav audio Stream so it can be processed afterwards for speech recognition; the speech recognizer doesn't like the AMR format. This is going to be a server application using the Microsoft Speech Platform. I am not sure about using ffdshow or similar libraries.
Right now I am researching NAudio and DirectShowNet to see if they can help me accomplish this but was hoping someone can point in the right direction.
After a lot of searching for a solution to this, I am going to use ffmpeg. It provides an AMR-NB (NB = Narrow Band) decoder. There are a lot of C# wrappers for ffmpeg around; most of them are abandoned efforts, and the one that is up to date is not free. Just running ffmpeg with the basic parameters provides what I need, plus it is really fast.
I don't like the idea of calling an external process to do the conversion, and I need to save the AMR stream as a file first so it can be converted to a wav file, but I believe I can make it work efficiently.
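As a sketch of what that external call might look like (file paths, sample rate and channel count are illustrative; assumes ffmpeg is on PATH):

```csharp
using System.Diagnostics;

static void ConvertAmrToWav(string amrPath, string wavPath)
{
    // Decode AMR-NB and resample to 16 kHz mono PCM wav,
    // a format speech recognizers generally accept
    var psi = new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments = $"-y -i \"{amrPath}\" -ar 16000 -ac 1 \"{wavPath}\"",
        UseShellExecute = false,
        RedirectStandardError = true   // ffmpeg writes its log to stderr
    };
    using (var proc = Process.Start(psi))
    {
        proc.StandardError.ReadToEnd(); // drain stderr to avoid a pipe deadlock
        proc.WaitForExit();
    }
}
```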
I have a requirement to build a very simple streaming server. It needs to be able to capture video from a device and then stream that video via multicast to several clients on a LAN.
The capture part of this is pretty easy (in C#) thanks to a library someone wrote with DirectShow.Net (http://www.codeproject.com/KB/directx/directxcapture.aspx).
The question I have now is how to multicast this? This is the part I'm stuck on. I'm not sure what to do next, or what steps to take.
There are no ready-made filters that you can just plug in and use.
You need to do three things here:
Compress the video into MPEG2 or MPEG4
Mux it into MPEG Transport Stream
Broadcast it
There are lots of codecs available for part 1, and some devices can even output compressed video.
Part 3 is quite simple too.
The main problem is part 2, as MPEG Transport Stream is patented. It is licensed in such a way that you cannot develop free software based on it (VLC and FFMPEG violate that license), and you have to pay several hundred dollars just to obtain a copy of the specification.
If you have to develop it, you need to:
Obtain a copy of ISO/IEC 13818-1:2000 (you can download it as a PDF from their site); it describes the MPEG Transport Stream
Develop a renderer filter that takes MPEG elementary streams and muxes them into a Transport Stream
It has to be a renderer rather than a transform filter, because a Transport Stream carries out-of-band data (program association tables and reference clocks) that must be sent on a regular basis, and you need to keep a worker thread to do that.
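For comparison, the three steps above are exactly what a single ffmpeg invocation performs internally; a sketch, assuming ffmpeg, a DirectShow capture device (the device name is a placeholder) and a hypothetical multicast address:

```shell
# 1) compress to MPEG-2 video, 2) mux into MPEG-TS, 3) multicast over UDP
ffmpeg -f dshow -i video="Capture Device" \
       -c:v mpeg2video -b:v 4M \
       -f mpegts "udp://239.0.0.1:1234?ttl=1"
```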
To achieve that, you need to set up or write some kind of video streaming server.
I've used VideoCapX for the same purpose on my project. The documentation and support are not top notch, but they're good enough. It uses WMV streaming technology; the stream is an MMS stream, and you can view it with most media players. I've tested it with Windows Media Player, Media Player Classic and VLC. If you would like to see its capabilities without writing any code just yet, take a look at U-Broadcast, which uses VideoCapX to do the job behind the scenes.
I've been using DirectShow.Net for almost 2 years, and I still find it hard to write a streaming server myself, due to the complexity of DirectShow technology.
Other than WMV, you can take a look at Helix Server or Apple Streaming Server. The latter is not free, and neither is the WMV Streaming Server from Microsoft.
You can also take a look at VLC or Windows Media Encoder to stream straight from the application, but so far I find that U-Broadcast outdoes both of the above: VLC has some codec compatibility issues with playback from non-VLC players, and WME has problems starting up the capture device.
Good Luck
NOTE: I'm not associated with VideoCapX or its company; I'm just a happy user of it.
http://www.codeproject.com/KB/directx/DShowStreamingServer.aspx might help, as might http://en.wikipedia.org/wiki/VLC_media_player#cite_note-14
VLC also "should" be able to stream from any device natively.
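A hedged example of that, streaming from a DirectShow device with VLC's stream output (codec, bitrate, multicast address and port are placeholders):

```shell
# Capture, transcode to MPEG-4 video, mux into TS and multicast via RTP
vlc -I dummy dshow:// --sout \
  "#transcode{vcodec=mp4v,vb=1024}:rtp{mux=ts,dst=239.0.0.1,port=1234}"
```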