I can't play a UDP video stream with LibVLCSharp - C#

I am just starting to learn the LibVLCSharp library. Right now I am trying to play a video stream that arrives over multicast UDP at 224.XX.XX.XX:PORT. The problem is that the stream arrives without a container format.
I can play it from the command line with:
vlc udp://#224.XX.XX.XX:PORT --demux=mp4v --rawvid-fps=12
This is my code:
public void PlayURLFile(string file)
{
    var media = new Media(_libVLC, "udp://#224.XX.XX.XX:XXXXX");
    media.AddOption(":demux=mp4v");
    media.AddOption(":rawvid-fps=12");
    _mp.Play(media);
    isPlaying = true;
}
When it runs, no error is shown.
The VideoView that should display the video just stays black.
I understand that the problem may be that I am not passing the options to AddOption correctly, or that the option names are different, but after fighting with the code and looking at the documentation I can't find a clear answer.
Can someone help me?
Thanks in advance.

Pass the following options to the LibVLC constructor instead:
new LibVLC("--demux=mp4v", "--rawvid-fps=12");
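For context, here is a minimal sketch of how that can look end to end. It assumes LibVLCSharp 3.x (where the Media constructor takes a FromType), reuses the _libVLC/_mp fields and the placeholder multicast address from the question, and assumes videoView is the VideoView control that currently stays black:

using LibVLCSharp.Shared;

// ...

Core.Initialize();

// Stream-level options go on the LibVLC instance, not on the Media
_libVLC = new LibVLC("--demux=mp4v", "--rawvid-fps=12");
_mp = new MediaPlayer(_libVLC);
videoView.MediaPlayer = _mp; // attach the player to the VideoView control

// Same placeholder MRL as in the question
var media = new Media(_libVLC, "udp://#224.XX.XX.XX:XXXXX", FromType.FromLocation);
_mp.Play(media);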

Related

Google VR SDK (Unity) - GvrVideoPlayerTexture swap URL

I'm working with the Google VR SDK for Unity trying to build a simple 360 Video viewer using the components that come with the SDK. I'm trying to extend their PanoVideoSample to dynamically change the source video when the user navigates from a menu.
I'm having trouble changing the URL for the GvrVideoPlayerTexture via code. In their demo scene (VideoDemo) they have a PanoVideoSample that contains a Video Sphere, which you can edit the GVRVideoPlayerTexture script in the inspector panel to point at the proper video URL.
I'd like to dynamically set the video URL in C# rather than hard-code a bunch of individual video spheres then hide/show them. I've almost got this working with the following code.
public void SwapVideo(int index)
{
    videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().videoURL = urls[index]; // my new url
    videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().ReInitializeVideo();
    videoSphere.SetActive(true);
}

public void ReturnToMainMenu()
{
    videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().CleanupVideo();
    videoSphere.SetActive(false);
    this.gameObject.SetActive(true);
}
The code above seems to work, but the problem is the texture on the videoSphere turns white after the url is set and the texture is re-initialized. I can see that the new video loads and I can hear the audio for the new video, but the scene just shows a white texture.
I'm wondering if I'm missing a key step with the GvrVideoPlayerTexture, or perhaps additional calls to update the StereoPanoSphereMaterial that is used to render the scene. This SDK is pretty new and there don't seem to be many people writing about it, so any help is appreciated.
I eventually found the answer to my question in the Google VR documentation (Streaming Video Support).
I'm still not totally clear what I was doing wrong in my first attempt, but this is the code that worked for me.
videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().videoURL = urls[index];
videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().videoType = GvrVideoPlayerTexture.VideoType.Other;
videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().videoProviderId = string.Empty;
videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().videoContentID = string.Empty;
videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().CleanupVideo();
videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>().ReInitializeVideo();
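For reference, a sketch of how those calls could slot back into the earlier SwapVideo method. This is untested and just rearranges the answer's code, assuming the same videoSphere and urls fields from the question:

public void SwapVideo(int index)
{
    var texture = videoSphere.GetComponentInChildren<GvrVideoPlayerTexture>();
    texture.videoURL = urls[index];
    texture.videoType = GvrVideoPlayerTexture.VideoType.Other;
    texture.videoProviderId = string.Empty;
    texture.videoContentID = string.Empty;
    texture.CleanupVideo();
    texture.ReInitializeVideo();
    videoSphere.SetActive(true);
}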

Getting MP4 File Duration with DirectShow

I need to get the duration of an mp4 file, preferably as a double in seconds. I was using DirectShow (see code below), but it keeps throwing a particularly unhelpful error. I'm wondering if someone has an easy solution to this. (Seriously, who knew that getting that information would be so difficult)
public static void getDuration(string moviePath)
{
    FilgraphManager m_objFilterGraph = null;
    m_objFilterGraph = new FilgraphManager();
    m_objFilterGraph.RenderFile(moviePath);

    IMediaPosition m_objMediaPosition = null;
    m_objMediaPosition = m_objFilterGraph as IMediaPosition;

    Console.WriteLine(m_objMediaPosition.Duration);
}
Whenever I run this code, I get the error: "Exception from HRESULT: 0x80040265"
I also tried using this: Getting length of video
but that doesn't work either; I don't think it handles MP4 files.
Seriously, I feel like there has to be a much easier way to do this.
Note: I would prefer to avoid using exe's like ffmpeg and then parsing the output to get the information.
You are approaching the problem correctly. You need to build a working pipeline from the source .MP4 file all the way to the video and audio renderers; then IMediaPosition.Duration will give you what you want. Currently you are getting VFW_E_UNSUPPORTED_STREAM because the pipeline cannot be built.
Note that a clean Windows install has no good support for MPEG-4 in DirectShow; you need a third-party parser installed to supply the missing filters, and this is the likely cause of your problem. There are good free DirectShow MPEG-4 filters available to fill this gap.
The code sample behind the Getting length of video link is basically valid too; however, it uses a deprecated component and makes additional assumptions about the media file in question. Provided the system has support for .MP4, IMediaPosition.Duration will give you what you are looking for.
You can use get_Duration() from the IMediaPosition interface.
It returns a double with the video duration in seconds.
double length;
m_FilterGraph = new FilterGraph();
// configure the FilterGraph() and render the file here
m_mediaPosition = m_FilterGraph as IMediaPosition;
m_mediaPosition.get_Duration(out length);
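Put together, a minimal sketch of the same idea using the DirectShowLib NuGet package (the method name is illustrative, and, as the answer above notes, it assumes an MP4 demuxer/decoder is installed so the graph can actually be rendered):

using DirectShowLib;

public static double GetDurationSeconds(string moviePath)
{
    // Build and render a playback graph for the file
    var graphBuilder = (IGraphBuilder)new FilterGraph();
    int hr = graphBuilder.RenderFile(moviePath, null);
    DsError.ThrowExceptionForHR(hr);

    // Query the graph for its media position / duration
    var mediaPosition = (IMediaPosition)graphBuilder;
    double duration;
    hr = mediaPosition.get_Duration(out duration);
    DsError.ThrowExceptionForHR(hr);

    return duration; // seconds
}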
You can also get the duration of a video using the Windows Media Player component.
I hope the following code snippet helps:
using WMPLib;
// ...
var player = new WindowsMediaPlayer();
var clip = player.newMedia(filePath);
Console.WriteLine(TimeSpan.FromSeconds(clip.duration));
Don't forget to add a reference to wmp.dll, which is in the System32 folder.

Generating video from a sequence of images in C#

I have a task of generating a video from a sequence of images in my app, and while searching for that I found out that FFmpeg is able to do it. Can anyone provide a tutorial or link that can guide me in the right direction? I am a newbie at this, so I will appreciate any sort of help.
I could not manage to get the above example to work. However, I did find another library that works amazingly well. Install "accord.extensions.imaging.io" via NuGet; then I wrote the following little function:
private void makeAvi(string imageInputfolderName, string outVideoFileName, float fps = 12.0f, string imgSearchPattern = "*.png")
{
    // reads all images in folder
    VideoWriter w = new VideoWriter(outVideoFileName,
        new Accord.Extensions.Size(480, 640), fps, true);
    Accord.Extensions.Imaging.ImageDirectoryReader ir =
        new ImageDirectoryReader(imageInputfolderName, imgSearchPattern);
    while (ir.Position < ir.Length)
    {
        IImage i = ir.Read();
        w.Write(i);
    }
    w.Close();
}
It reads all images from a folder and makes a video out of them.
If you want to make it nicer you could probably read the image dimensions instead of hard-coding them, but you get the point.
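Calling it is then just, for example (the folder and output paths are placeholders):

// turn all PNG frames in a folder into a 12 fps AVI
makeAvi(@"C:\frames", @"C:\output\video.avi");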
http://electron.mit.edu/~gsteele/ffmpeg/
http://www.codeproject.com/Articles/7388/A-Simple-C-Wrapper-for-the-AviFile-Library
http://ffmpeg.org/ffmpeg.html -> search for "For creating a video from many images"
All the links above are from this question on SO.
Links related to FFmpeg in .NET (from this question):
FFMpeg.NET
FFMpeg-Sharp
FFLib.NET
http://ivolo.mit.edu/post/Convert-Audio-Video-to-Any-Format-using-C.aspx
Other resources
Expression Encoder
VLC
A bit late, but I have made a tutorial on how I solved a similar problem, in case you have not succeeded yet: Image sequence to video stream?
Same question asked here.
The answer there points to here, which is not exactly what you're doing, but is easily configurable to do the job.

How to capture screen to be video using C# .Net?

I know there are lots of questions like this.
But I don't want to use Windows Media Encoder 9, because it is hard to get hold of and is no longer supported.
I know that one possibility is to capture lots of screenshots and create a video with ffmpeg, but I don't want to use third-party executables.
Is there a .NET-only solution?
The answer is Microsoft Expression Encoder. In my opinion it is the easiest way to record something on Vista and Windows 7.
private void CaptureMoni()
{
    try
    {
        Rectangle _screenRectangle = Screen.PrimaryScreen.Bounds;
        _screenCaptureJob = new ScreenCaptureJob();
        _screenCaptureJob.CaptureRectangle = _screenRectangle;
        _screenCaptureJob.ShowFlashingBoundary = true;
        _screenCaptureJob.ScreenCaptureVideoProfile.FrameRate = 20;
        _screenCaptureJob.CaptureMouseCursor = true;
        _screenCaptureJob.OutputScreenCaptureFileName = @"C:\test.wmv";
        if (File.Exists(_screenCaptureJob.OutputScreenCaptureFileName))
        {
            File.Delete(_screenCaptureJob.OutputScreenCaptureFileName);
        }
        _screenCaptureJob.Start();
    }
    catch (Exception e)
    {
        // Don't swallow exceptions silently in production code; log or rethrow.
    }
}
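To end the recording, the job also has to be stopped; a minimal sketch, assuming the same _screenCaptureJob field (ScreenCaptureJob.Stop comes from the Expression Encoder screen-capture API):

private void StopCapture()
{
    if (_screenCaptureJob != null)
    {
        _screenCaptureJob.Stop(); // finalizes C:\test.wmv
        _screenCaptureJob = null;
    }
}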
Edit Based on Comment Feedback:
A developer by the name of baSSiLL has graciously shared a repository that contains a C# screen-recording library as well as a C# sample project showing how it can be used to capture the screen and microphone.
Starting a screen capture using the sample code is as straightforward as:
recorder = new Recorder(_filePath,
KnownFourCCs.Codecs.X264, quality,
0, SupportedWaveFormat.WAVE_FORMAT_44S16, true, 160);
_filePath is the path of the file I'd like to save the video to.
You can pass in a variety of codecs including AVI, MotionJPEG, X264, etc. In the case of x264 I had to install the codec on my machine first but AVI works out of the box.
Quality only comes into play when using AVI or MotionJPEG. The x264 codec manages its own quality settings.
The 0 above is the audio device I'd like to use; the default is zero.
It currently supports two wave formats: 44100 Hz at 16-bit, either stereo or mono.
The true parameter indicates that I want the audio encoded into MP3 format. I believe this is required when choosing x264, as uncompressed audio combined in an .mp4 file would not play back for me.
The 160 is the bitrate at which to encode the audio.
~~~~~
To stop the recording you just call:
recorder.Dispose();
recorder = null;
Everything is open source so you can edit the recorder class and change dimensions, frames per second, etc.
~~~~
To get up and running with this library you will need to either download or pull from the github / codeplex libraries below. You can also use NuGet:
Install-Package SharpAvi
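If you would rather drive SharpAvi directly instead of the sample project's Recorder class, here is a rough sketch based on the library's documented AviWriter usage. The dimensions and frame data are placeholders, and newer SharpAvi versions renamed KnownFourCCs to CodecIds, so check the API of the version you install:

using SharpAvi;
using SharpAvi.Output;

var writer = new AviWriter(@"C:\test.avi")
{
    FramesPerSecond = 20,
    EmitIndex1 = true // improves compatibility with older players
};

// Uncompressed 32-bit BGRA video stream; dimensions are placeholders
var stream = writer.AddVideoStream();
stream.Width = 1920;
stream.Height = 1080;
stream.Codec = KnownFourCCs.Codecs.Uncompressed;
stream.BitsPerPixel = BitsPerPixel.Bpp32;

var frame = new byte[stream.Width * stream.Height * 4];
// ... fill "frame" with captured screen pixels, once per frame ...
stream.WriteFrame(true, frame, 0, frame.Length);

writer.Close(); // finalize the AVI file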
Original Post:
Sharp AVI:
https://sharpavi.codeplex.com/
or
https://github.com/baSSiLL/SharpAvi
There is a sample project within that library that has a great screen recorder in it along with a menu for settings/etc.
I found Screna first, from another answer on this Stack Overflow question, but I ran into a couple of issues getting the MP3 LAME encoder to work correctly. Screna is a wrapper for SharpAvi. I found that by removing Screna and working from SharpAvi's sample I had better luck.

Create Video out of image files?

I'm trying to make a video out of a folder full of JPEG files. I tried Google, and everybody seems stuck on this. I have downloaded the Windows Media SDK and the Encoder, but since I don't know their object model I can't do much.
Does somebody here have some working code for creating a WMV, AVI, or MPEG video file out of a folder full of JPEGs? (In C#)
I can see from the answers that apparently there is no way to do it from C# alone, only by using a third-party library. I will check your suggestions.
Take a look at Corinna John's AVIFile wrapper. I used it in the AVI output plugin for Cropper.
VirtualDub is capable of making a video out of several image files. Here's a quick overview of how to do it.
FFMPEG, as CptSkippy mentioned, also has this feature.
See the AVBlocks Slideshow sample. It creates a video (like MP4) from images. The input is a series of JPEG images. The output is configured with an AVBlocks preset.
Install "accord.extensions.imaging.io" via NuGet; then I wrote the following little function:
private void makeAvi(string imageInputfolderName, string outVideoFileName, float fps = 12.0f, string imgSearchPattern = "*.png")
{
    // reads all images in folder
    VideoWriter w = new VideoWriter(outVideoFileName,
        new Accord.Extensions.Size(480, 640), fps, true);
    Accord.Extensions.Imaging.ImageDirectoryReader ir =
        new ImageDirectoryReader(imageInputfolderName, imgSearchPattern);
    while (ir.Position < ir.Length)
    {
        IImage i = ir.Read();
        w.Write(i);
    }
    w.Close();
}
It reads all images from a folder and makes a video out of them.
If you want to make it nicer you could probably read the image dimensions instead of hard-coding them, but you get the point.
Have you considered using FFMPEG? I've used it to create thumbnails from video in several projects.
I finally settled on Splicer. Free, simple to use, and it works. More info at Working way to make video from images in C#
