Using video codecs like Xvid in C#

I'm trying to develop an application which captures a series of images from web cam using DirectShow.Net and then sends it over network to other clients.
Everything is working fine, except the images are too big, and compression methods such as GZipStream or JPEG compression don't reduce the size enough.
Now I want to know how to use a codec such as Xvid (or any other codec) to reduce the size.
Playing around with the VisioForge demos shows that Xvid files are far smaller than regular (uncompressed) AVI files.
Thanks for any help

There are dedicated video compression algorithms that compress video effectively; some of the most popular are M-JPEG, MPEG-4, H.261, H.263, H.264, VP8 and Theora. In DirectShow, video compression comes in the form of video compressor filters (codecs). A standard Windows installation does not normally include much for this task (for various reasons, patents in particular), so you need to install a third-party codec. Luckily, these codecs expose a more or less uniform interface, so you use them in a similar way from C#.
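For instance, with DirectShow.NET (the DirectShowLib assembly) you can enumerate the video compressor filters installed on the machine and pick one, such as Xvid, by its friendly name. A minimal sketch, assuming the library is referenced:

// List the video compressor filters (codecs) registered on this machine,
// e.g. "Xvid MPEG-4 Codec" if Xvid is installed.
using System;
using DirectShowLib;

class ListVideoCompressors
{
    static void Main()
    {
        DsDevice[] compressors =
            DsDevice.GetDevicesOfCat(FilterCategory.VideoCompressorCategory);

        foreach (DsDevice device in compressors)
        {
            // The friendly name is what you match against when picking a codec.
            Console.WriteLine(device.Name);
        }
        // To use one, add it to your graph (e.g. via IFilterGraph2.AddSourceFilterForMoniker
        // with device.Mon) and connect it between the capture output and the mux/writer.
    }
}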
See related questions with helpful information:
Real-time video encoding in DirectShow
Capturing webcam using DirectShow.NET library
Be sure to check out the DirectShow.NET samples:
\Samples\Misc\DxWebCam
A poor man's web cam program. This application runs as a Win32 service. It takes the output of a capture graph, turns it into a stream of JPEG files, and sends it through TCP/IP to a client application.
\Samples\Capture\CapWMV
A .NET sample application using the WM ASF Writer filter to create a WMV file.

Related

How to compress video from a PCL in Xamarin

I'm building a WhatsApp-like app and I need to send user videos (from the camera/gallery).
I need to send video from iOS to Android and from Android to iOS (Windows Phone in the future).
My first thought was to use the camera parameters to record the video in low resolution, but that won't help with videos already stored on the phone.
My second thought was to zip the video file, but I guess that is not enough for very large files.
Third: actually compress the video file, generating a new file, and then zip it before sending it over the network.
So this is what I need before actually sending the video:
Compress the video file, generating a new file that will play nicely on both platforms (iOS and Android)
Make the compression process async (as I don't want to block the UI thread for a really long time)
Zip it (this is the easy part, just for the record)
Any ideas or help are appreciated
You would be best off using your platform's framework so you can leverage the existing hardware support for encoding (mainly H.264 hardware encoding). A PCL-only solution would have to run on the CPU alone, giving you poor performance and even worse battery life.
This ties in with 1. Just use your platform's native mechanisms to execute the framework's methods asynchronously.
Skip this part. It adds overhead and rules out video streaming; there is virtually no benefit in applying a zip algorithm on top of an already compressed video stream.
Just make sure you end up with a cross-platform-compatible video format like H.264.
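A minimal sketch of how the shared code could hand the work to the platforms, assuming Xamarin.Forms' DependencyService; IVideoCompressor and its platform implementations are hypothetical names, and the actual encoding would be done natively (e.g. AVAssetExportSession on iOS, MediaCodec/MediaMuxer on Android):

// Shared (PCL/.NET Standard) side: declare a contract and resolve the
// platform implementation at runtime via DependencyService.
using System.Threading.Tasks;
using Xamarin.Forms;

public interface IVideoCompressor
{
    // Re-encodes the input to H.264 and returns the path of the new file.
    Task<string> CompressAsync(string inputPath, string outputPath);
}

public static class VideoSender
{
    public static async Task<string> PrepareForUploadAsync(string inputPath)
    {
        IVideoCompressor compressor = DependencyService.Get<IVideoCompressor>();
        // The native implementation runs asynchronously, so the UI thread is not blocked.
        return await compressor.CompressAsync(inputPath, inputPath + ".h264.mp4");
    }
}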

DirectShow: Decrease Video File Size

I am using DirectShow to capture video and save it to a file. I have tried in vain to find ways to decrease the resulting file size. I would like to know if anyone can tell me how I can:
Decrease the frame rate of the video
Decrease the quality of the video (even down to 320 x 240)
Apply compression to the video (MPEG, etc.).
Raw video is huge, and size-efficient storage assumes you compress the video. You need to use one of the video encoders, such as MPEG-4 AVC (H.264) or Windows Media. You typically insert an additional encoder filter into your pipeline between the capture filter and the multiplexer/file writer (see the sketch after the links below). Read up on this in several past topics:
Using video codecs like XVid in c#
Real-time video encoding in DirectShow
How to properly build a directshow graph to compress video...
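A minimal sketch of that filter arrangement with DirectShow.NET, assuming captureFilter and compressor have already been created (for example from DsDevice monikers); paths and names are placeholders:

// Build a capture graph with an encoder filter between the capture filter
// and the AVI mux/file writer: capture -> compressor -> mux -> file.
using DirectShowLib;

class CompressedCapture
{
    public static void BuildGraph(IBaseFilter captureFilter, IBaseFilter compressor)
    {
        var graph = (IFilterGraph2)new FilterGraph();
        var builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        builder.SetFiltergraph(graph);

        graph.AddFilter(captureFilter, "Capture");
        graph.AddFilter(compressor, "Video Encoder");

        // Create the AVI mux and file writer for the output file.
        IBaseFilter mux;
        IFileSinkFilter sink;
        builder.SetOutputFileName(MediaSubType.Avi, @"C:\capture\out.avi", out mux, out sink);

        // Connect capture -> compressor -> mux. Frame rate and resolution can be
        // lowered beforehand through IAMStreamConfig on the capture output pin.
        builder.RenderStream(PinCategory.Capture, MediaType.Video, captureFilter, compressor, mux);
    }
}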

How to change the resolution of a WMV file in C#

I have a C# application and I would like to be able to read in a WMV file and then write out a WMV file with reduced resolution/quality.
Are there any built-in libraries for C# that can do this? Do I need the Windows Media Format SDK?
Does anyone have experience with this?
Can I use something like FFmpeg for this?
You will have to decode and re-encode (= transcode) the file to do this. By doing so you will inherently reduce quality, since you are working from an already compressed base.
One way to do it, if you need a high degree of control, is with a DirectShow wrapper for C#, e.g. DirectShow.NET. Then you just need to define a simple transcoding graph.
Actually, the simplest way to do this is with Expression Encoder (the successor to Windows Media Encoder), which has a simple managed API and should do the job with much less effort than integrating DirectShow.
There's a summary article here. A simple transcoding job looks like this (sample from article, only presets changed):
MediaItem src = new MediaItem(@"C:\WMdownloads\AdrenalineRush.wmv");

Job job = new Job();
job.MediaItems.Add(src);
job.ApplyPreset(Presets.VC1WindowsMobile);
job.OutputDirectory = @"C:\EncodedFiles";
job.Encode();
I don't think there are any classes in the .NET Framework that deal with transcoding WMV files.
But you can install the Windows Media Encoder 9 SDK and create the appropriate objects in C# to do the conversion. See CodeProject.com - Convert MP3, MPEG, AVI to Windows Media Formats for a starting point. Even though that link starts with non-WMV files, the Windows Media Encoder doesn't restrict the input file format (at least when I've used the VBScript encoding batch file).
N.B. If you use the WM9 Encoder on Vista or Windows 7, you may need a hotfix; see TechNet - issues in using Windows Media Encoder 9 Series on Windows 7.

Multiple Audio, Live Smooth Streaming

I could not find any document which explains how to provide multiple audio streams for Live Smooth Streaming.
For example, in Microsoft PDC's streams, it is possible to select languages.
Does SMF provide this feature? If so, how? What will my ISML file look like?
This link gives a sample of multiple audio languages in Smooth Streaming.
If you are looking for this, please note that, unlike video, Smooth Streaming currently does not support multiple bit rates for audio.
There is the SmoothStreamingMediaElement.ManifestMerge event, which enables adding additional streams to the manifest that is loaded when the media is opened. This is called manifest merging and is described here:
http://msdn.microsoft.com/en-us/library/ff432455%28v=vs.90%29.aspx
In SMF you can access the SSME via IAdaptiveMediaPlugin.VisualElement.
So if you have two live streaming endpoints:
AudioAndVideo.isml/Manifest (standard audio and video streams)
Audio2.isml/Manifest (second audio stream with dummy video streams)
you could open the first one and merge in the audio stream from the second one (a hedged sketch follows). This requires two Expression Encoder encoding sessions.
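A minimal sketch of that merge, based on the MSDN article linked above; the exact signatures of ParseExternalManifest and MergeExternalManifest are an assumption recalled from that page, so verify them against the documentation:

// Merge the audio-only manifest (Audio2.isml) into the main presentation
// (AudioAndVideo.isml) while the main manifest is being opened.
using System;
using Microsoft.Web.Media.SmoothStreaming;

public class MultiLanguagePlayer
{
    public void Attach(SmoothStreamingMediaElement ssme)
    {
        ssme.ManifestMerge += OnManifestMerge;
        ssme.SmoothStreamingSource = new Uri("http://server/AudioAndVideo.isml/Manifest");
    }

    private void OnManifestMerge(SmoothStreamingMediaElement ssme)
    {
        // Assumed signature: parse the external manifest (3 s timeout), then merge it.
        object externalManifest = ssme.ParseExternalManifest(
            new Uri("http://server/Audio2.isml/Manifest"), 3000, null);
        ssme.MergeExternalManifest(externalManifest);
    }
}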

How to Multicast a Stream Captured with DirectShow?

I have a requirement to build a very simple streaming server. It needs to be able to capture video from a device and then stream that video via multicast to several clients on a LAN.
The capture part of this is pretty easy (in C#) thanks to a library someone wrote with DirectShow.Net (http://www.codeproject.com/KB/directx/directxcapture.aspx).
The question I have now is how to multicast this? This is the part I'm stuck on. I'm not sure what to do next, or what steps to take.
There are no stock filters that you can just plug in and use.
You need to do three things here:
Compress the video into MPEG2 or MPEG4
Mux it into MPEG Transport Stream
Broadcast it
There are lots of codecs available for part 1, and some devices can even output compressed video.
Part 3 is quite simple too (a minimal multicast sketch follows below).
The main problem is part 2, as MPEG Transport Stream is patented. It is licensed in a way that prevents you from developing free software based on it (VLC and FFmpeg violate that license), and you have to pay several hundred dollars just to obtain a copy of the specification.
If you have to develop it, you need to:
Obtain a copy of ISO/IEC 13818-1-2000 (you may download it as a PDF from their site); it describes MPEG Transport Stream
Develop a renderer filter that takes MPEG elementary streams and muxes them into a Transport Stream
It has to be a renderer rather than a transform filter, because a Transport Stream muxer has to emit certain out-of-band data (program association tables and reference clocks) on a regular schedule, so you need to keep a worker thread to do that.
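For part 3, the broadcast itself is standard UDP multicast; a minimal sketch, with the group address, port and payload source as placeholders:

// Send already-muxed MPEG-TS packets to a multicast group on the LAN.
using System;
using System.Net;
using System.Net.Sockets;

public class MulticastSender : IDisposable
{
    private readonly UdpClient _client = new UdpClient();
    private readonly IPEndPoint _group;

    public MulticastSender(string groupAddress, int port)
    {
        _group = new IPEndPoint(IPAddress.Parse(groupAddress), port);
        _client.Ttl = 1; // keep the packets on the local network
    }

    // Typically called with 7 x 188-byte TS packets per datagram (1316 bytes),
    // which fits comfortably in a single Ethernet frame.
    public void Send(byte[] tsPackets)
    {
        _client.Send(tsPackets, tsPackets.Length, _group);
    }

    public void Dispose()
    {
        _client.Close();
    }
}

// Usage: var sender = new MulticastSender("239.0.0.1", 1234); sender.Send(packet);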
To achieve that, you need to set up or write some kind of video streaming server.
I've used VideoCapX for the same purpose in my project. The documentation and support are not top notch, but they are good enough. It uses WMV streaming technology; the stream is an MMS stream, which you can view with most media players. I've tested it with Windows Media Player, Media Player Classic and VLC. If you would like to see its capabilities without writing any code just yet, take a look at U-Broadcast, which uses VideoCapX to do the job behind the scenes.
I've been using DirectShow.Net for almost 2 years, and I still find it hard to write a streaming server myself, due to the complexity of DirectShow technology.
Other than WMV, you can take a look at Helix Server or Apple's streaming server. The latter is not free, and neither is the WMV streaming server from Microsoft.
You can also take a look at VLC or Windows Media Encoder to stream straight from the application, but so far I find that U-Broadcast outdoes both of the above. VLC has some codec compatibility issues and playback problems with non-VLC players, and WME has problems starting up the capture device.
Good Luck
NOTE: I'm not associated with VideoCapX or its company; I'm just a happy user of it.
http://www.codeproject.com/KB/directx/DShowStreamingServer.aspx might help, and http://en.wikipedia.org/wiki/VLC_media_player#cite_note-14
VLC also "should" be able to stream from any device natively.
