DirectShow GetDuration gives wrong duration value - C#

I am trying to get a media file's duration with DirectShow. I use the following code (C#):
var seekingParser = filter as IMediaSeeking;
if (seekingParser != null)
{
    long duration;
    // GetDuration reports the duration in 100-nanosecond units
    // when the time format is TimeFormat.MediaTime
    if (seekingParser.SetTimeFormat(TimeFormat.MediaTime) == 0
        && seekingParser.GetDuration(out duration) == 0)
        track.Duration = duration / 10000000f;
}
to get the media file duration in seconds. However, when I open 3-4 minute MP3 files, track.Duration comes out as 11-12 minutes. I tried multiple files and the effect is always the same. What may be the reason?

You would normally use the IMediaPosition interface (rather than IMediaSeeking) from the application side; it always reports the duration in seconds. However, this is unlikely to make a difference here. What might make a difference is reading the duration from the ID3 tags instead, using the Windows Media API (ID3 Tag Support).
Are there more reliable ways to get the exact duration of a media file with the DirectShow API?
Windows Media Player plays MP3 files through Media Foundation, a non-DirectShow API, so you cannot expect DirectShow to behave exactly the same way.
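For reference, the IMediaPosition route mentioned above looks roughly like this. A minimal sketch, assuming the DirectShowLib interop package is available (error handling kept short):

using System;
using System.Runtime.InteropServices;
using DirectShowLib;

static double GetDurationSeconds(string path)
{
    // Build a default playback graph; Duration is only meaningful
    // once RenderFile has succeeded.
    var graphBuilder = (IGraphBuilder)new FilterGraph();
    try
    {
        DsError.ThrowExceptionForHR(graphBuilder.RenderFile(path, null));
        var mediaPosition = (IMediaPosition)graphBuilder;
        double duration;
        DsError.ThrowExceptionForHR(mediaPosition.get_Duration(out duration));
        return duration; // already in seconds
    }
    finally
    {
        Marshal.ReleaseComObject(graphBuilder);
    }
}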

From the documentation:
Depending on the source format, the duration might not be exact. For example, if the source contains a variable bit-rate (VBR) stream, the method might return an estimated duration.
Are you using a VBR stream, by any chance?
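If VBR estimation is the cause, one workaround outside DirectShow is to index the MP3 frames yourself. NAudio's Mp3FileReader scans every frame when it is constructed, so its TotalTime should be exact even for VBR files, at the cost of reading the whole file once. A minimal sketch, assuming the NAudio package:

using System;
using NAudio.Wave;

// Mp3FileReader builds a table of contents from the actual MP3 frames,
// so TotalTime does not rely on a bitrate-based estimate.
using (var reader = new Mp3FileReader(@"C:\music\track.mp3"))
{
    Console.WriteLine(reader.TotalTime.TotalSeconds);
}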

You can try the same on a clean Windows installation. It is possible that you have a buggy codec (pack) installed.

Related

Saving Bigger Files with TTS Android

Recently I used Android TTS. I save the file as MP3 and play it using MediaPlayer so users can pause/resume etc.
It all works fine, except that with large text it just does not work.
I read that Android TTS has a limit of 4000 characters? What should I do to handle a large amount of text?
The following is the code I am using to save the MP3:
Android.Speech.Tts.TextToSpeech textToSpeech;
...
textToSpeech = new Android.Speech.Tts.TextToSpeech(this, this, "com.google.android.tts");
...
textToSpeech.SynthesizeToFile(ReadableText, null, new Java.IO.File(System.IO.Path.Combine(documentsPath, ID + "_audio.mp3")), ID);
The following is the code I am using to play back the audio:
MediaPlayer MP = new MediaPlayer();
...
MP.SetDataSource(System.IO.Path.Combine(documentsPath, ID + "_audio.mp3"));
MP.Prepare();
MP.Start();
It works for a small amount of text but not for large text.
The file gets saved (most likely just a corrupt file), because when I play it I get the following error:
setDataSourceFD failed: status=0x80000000
A Java solution is also acceptable.
FYI: the question is about the maximum text size, as I can generate the file for smaller text.
Cheers
In Android AOSP (at least since API 18), TextToSpeech.MaxSpeechInputLength is set to 4000.
Note: OEMs could change this value in their OS image, so it would be wise to check the value and not make any assumptions.
Note: You are naming the output with an .mp3 extension, but by default the files created will be .wav formatted. Some speech engines do support other formats/bitrates/etc., but you are passing null for the parameters.
Unless you want to deal with properly joining multiple wave files, I would recommend that you break your text into smaller parts and synthesize multiple files.
You can then play these back in sequence (using the MediaPlayer Completion event|listener), as in the sketch below.
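A rough sketch of that approach (the chunking below splits on length only; splitting on sentence boundaries would sound better, and note that SynthesizeToFile is asynchronous, so real code should wait for the utterance-progress callback before starting playback):

// Assumes Android.Media, Android.Speech.Tts, System and
// System.Collections.Generic are in scope.
int max = TextToSpeech.MaxSpeechInputLength;
var files = new List<string>();
for (int i = 0, part = 0; i < ReadableText.Length; i += max, part++)
{
    string chunk = ReadableText.Substring(i, Math.Min(max, ReadableText.Length - i));
    // Default output is WAV, so name the files accordingly.
    string path = System.IO.Path.Combine(documentsPath, ID + "_audio_" + part + ".wav");
    textToSpeech.SynthesizeToFile(chunk, null, new Java.IO.File(path), ID + "_" + part);
    files.Add(path);
}

// Play the parts back to back using the Completion event.
int index = 0;
var player = new MediaPlayer();
player.Completion += (sender, args) =>
{
    if (++index >= files.Count) return;
    player.Reset();
    player.SetDataSource(files[index]);
    player.Prepare();
    player.Start();
};
player.SetDataSource(files[0]);
player.Prepare();
player.Start();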

Increase fps in record video ffmpeg

I need to record 2 videos from 2 cameras in full HD at 30 fps.
I use ffmpeg and a wrapper, AForge, for C#.
init device:
_videoCaptureDevice = new VideoCaptureDevice(deviceName);
_videoCaptureDevice.VideoResolution = _videoCaptureDevice.VideoCapabilities[0];
_videoCaptureDevice.DesiredFrameRate = _fps;
_videoSourcePlayer.VideoSource = _videoCaptureDevice;
_videoCaptureDevice.NewFrame += _videoCaptureDevice_NewFrame;
_videoSourcePlayer.Start();
saving frames:
if (_videoRecordStatus == VideoRecordStatus.Recording)
{
    _videoFileWriter.WriteVideoFrame(eventArgs.Frame);
}
and init the file writer:
_videoFileWriter = new VideoFileWriter();
_videoFileWriter.Open(_fileName, _videoCaptureDevice.VideoResolution.FrameSize.Width,
    _videoCaptureDevice.VideoResolution.FrameSize.Height, 30, VideoCodec.MPEG4, 10 * 1000 * 1000);
Now _videoCaptureDevice.VideoResolution.FrameSize equals 1280x720 (and 640x480 for the second device). But I already have problems with recording: the maximum fps is 24 for 480p and 13-14 for 720p (when I try to record videos from the 2 cameras at the same time).
How to increase it?
Or is it not possible? Maybe a more powerful computer would solve this problem (I have a Pentium(R) Dual-Core CPU @ 2.50GHz, an ordinary video card (GeForce 8500 GT) for working with two displays, an ordinary HDD, and USB 2.0)?
I will be glad of any help (maybe another library, but not another language (C#)).
PS
I already used Emgu.CV and faced similar problems.
The framerate is limited by the hardware.
Use AMCap or GraphEdit to check what your camera really supports. It will depend on the chosen resolution and output format (higher resolution -> lower framerate).
Be aware that AForge always uses the highest value across all resolutions, which can lead to oversampling (e.g. AForge produces frames at 60Hz, but the camera only supports 15Hz at the given resolution, so the images will mostly be duplicates). See here.
Also use Process Explorer and a profiler to see how busy your CPU really is and what it is doing.
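To see what the driver actually reports per mode, you can enumerate the capabilities and pick one that supports the frame rate you need. A short sketch, assuming AForge.Video.DirectShow (AverageFrameRate/MaximumFrameRate are reported per capability):

using System;
using AForge.Video.DirectShow;

var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
foreach (FilterInfo device in devices)
{
    Console.WriteLine(device.Name);
    var capture = new VideoCaptureDevice(device.MonikerString);
    foreach (VideoCapabilities cap in capture.VideoCapabilities)
    {
        Console.WriteLine("  {0}x{1}: avg {2} fps, max {3} fps",
            cap.FrameSize.Width, cap.FrameSize.Height,
            cap.AverageFrameRate, cap.MaximumFrameRate);
    }
}

Assign the capability you want to _videoCaptureDevice.VideoResolution instead of relying on VideoCapabilities[0], which may not be the mode that supports 30 fps.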

Getting MP4 File Duration with DirectShow

I need to get the duration of an mp4 file, preferably as a double in seconds. I was using DirectShow (see code below), but it keeps throwing a particularly unhelpful error. I'm wondering if someone has an easy solution to this. (Seriously, who knew that getting that information would be so difficult)
public static void getDuration(string moviePath)
{
    FilgraphManager m_objFilterGraph = null;
    m_objFilterGraph = new FilgraphManager();
    m_objFilterGraph.RenderFile(moviePath);
    IMediaPosition m_objMediaPosition = null;
    m_objMediaPosition = m_objFilterGraph as IMediaPosition;
    Console.WriteLine(m_objMediaPosition.Duration);
}
Whenever I run this code, I get the error: "Exception from HRESULT: 0x80040265"
I also tried using this: Getting length of video
but it doesn't work either because I don't think that it works on MP4 files.
Seriously, I feel like there has to be a much easier way to do this.
Note: I would prefer to avoid using exe's like ffmpeg and then parsing the output to get the information.
You are approaching the problem correctly. You need to build a good pipeline starting from the source .MP4 file up to the video and audio renderers. Then IMediaPosition.Duration will get you what you want. Currently you are getting VFW_E_UNSUPPORTED_STREAM because you cannot build the pipeline.
Note that there is no good support for MPEG-4 in DirectShow in clean Windows, you need a third party parser installed to add missing blocks. This is the likely cause of your problem. There are good Free DirectShow Mpeg-4 Filters available to fill this gap.
The code sample under the link Getting length of video is basically valid too; however, it uses a deprecated component which additionally makes assumptions about the media file in question. Provided that there is support for .MP4 in the system, IMediaPosition.Duration will give you what you are looking for.
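The deprecated component referred to is most likely MediaDet from DirectShow Editing Services. For completeness, a hedged sketch of that route via DirectShowLib.DES (it relies on the same parser support, so it will also fail on .MP4 without a third-party filter installed):

using System;
using System.Runtime.InteropServices;
using DirectShowLib;
using DirectShowLib.DES;

var mediaDet = (IMediaDet)new MediaDet();
try
{
    DsError.ThrowExceptionForHR(mediaDet.put_Filename(@"C:\video\movie.mp4"));
    double seconds;
    DsError.ThrowExceptionForHR(mediaDet.get_StreamLength(out seconds));
    Console.WriteLine(seconds);
}
finally
{
    Marshal.ReleaseComObject(mediaDet);
}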
You can use get_Duration() from the IMediaPosition interface.
This returns a double value with the video duration in seconds.
double length;
var m_FilterGraph = new FilterGraph();
// Configure the FilterGraph() here (e.g. render the source file)
var m_mediaPosition = m_FilterGraph as IMediaPosition;
m_mediaPosition.get_Duration(out length);
Using the Windows Media Player component, we can also get the duration of a video.
I hope the following code snippet may help you:
using WMPLib;
// ...
var player = new WindowsMediaPlayer();
var clip = player.newMedia(filePath);
Console.WriteLine(TimeSpan.FromSeconds(clip.duration));
and don't forget to add a reference to wmp.dll, which is in the System32 folder.

How to capture screen to be video using C# .Net?

I know there are lots of questions like this.
But I don't want to use Windows Media Encoder 9, because it is hard to obtain and is no longer supported.
I know that one possibility is to capture lots of screenshots and create a video with ffmpeg, but I don't want to use third-party executables.
Is there a .NET-only solution?
The answer is Microsoft Expression Encoder. In my opinion it is the easiest way to record something on Vista and Windows 7.
private void CaptureMoni()
{
    try
    {
        Rectangle _screenRectangle = Screen.PrimaryScreen.Bounds;
        _screenCaptureJob = new ScreenCaptureJob();
        _screenCaptureJob.CaptureRectangle = _screenRectangle;
        _screenCaptureJob.ShowFlashingBoundary = true;
        _screenCaptureJob.ScreenCaptureVideoProfile.FrameRate = 20;
        _screenCaptureJob.CaptureMouseCursor = true;
        _screenCaptureJob.OutputScreenCaptureFileName = @"C:\test.wmv";
        if (File.Exists(_screenCaptureJob.OutputScreenCaptureFileName))
        {
            File.Delete(_screenCaptureJob.OutputScreenCaptureFileName);
        }
        _screenCaptureJob.Start();
    }
    catch (Exception e)
    {
        // Don't swallow exceptions silently; at least log them.
        Debug.WriteLine(e);
    }
}
Edit Based on Comment Feedback:
A developer by the name of baSSiLL has graciously shared a repository that contains a screen-recording C# library, as well as a sample project in C# that shows how it can be used to capture the screen and mic.
Starting a screen capture using the sample code is as straightforward as:
recorder = new Recorder(_filePath,
    KnownFourCCs.Codecs.X264, quality,
    0, SupportedWaveFormat.WAVE_FORMAT_44S16, true, 160);
_filePath is the path of the file I'd like to save the video to.
You can pass in a variety of codecs, including AVI, MotionJPEG, X264, etc. In the case of x264 I had to install the codec on my machine first, but AVI works out of the box.
Quality only comes into play when using AVI or MotionJPEG. The x264 codec manages its own quality settings.
The 0 above is the audio device I'd like to use; the default is zero.
It currently supports two wave formats: 44100Hz at 16-bit, either stereo or mono.
The true parameter indicates that I want the audio encoded into MP3 format. I believe this is required when choosing x264, as uncompressed audio combined in an .mp4 file would not play back for me.
The 160 is the bitrate at which to encode the audio.
~~~~~
To stop the recording you just
recorder.Dispose();
recorder = null;
Everything is open source so you can edit the recorder class and change dimensions, frames per second, etc.
~~~~
To get up and running with this library you will need to either download or pull from the GitHub/CodePlex repositories below. You can also use NuGet:
Install-Package SharpAvi
Original Post:
Sharp AVI:
https://sharpavi.codeplex.com/
or
https://github.com/baSSiLL/SharpAvi
There is a sample project within that library that has a great screen recorder in it along with a menu for settings/etc.
I found Screna first, from another answer on this Stack Overflow question, but I ran into a couple of issues getting the MP3 LAME encoder to work correctly. Screna is a wrapper around SharpAvi. I found that by removing Screna and working from SharpAvi's sample I had better luck.
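For a feel of the underlying API, here is a minimal sketch of writing uncompressed screen grabs with SharpAvi directly (fixed frame count, no audio, no frame pacing; the sample project handles all of that properly; AddUncompressedVideoStream expects top-down BGR32 rows):

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Windows.Forms;
using SharpAvi.Output;

var bounds = Screen.PrimaryScreen.Bounds;
using (var writer = new AviWriter("screen.avi") { FramesPerSecond = 10 })
{
    var stream = writer.AddUncompressedVideoStream(bounds.Width, bounds.Height);

    var buffer = new byte[bounds.Width * bounds.Height * 4];
    for (int i = 0; i < 30; i++) // ~3 seconds at 10 fps
    {
        using (var bitmap = new Bitmap(bounds.Width, bounds.Height))
        using (var graphics = Graphics.FromImage(bitmap))
        {
            graphics.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
            var bits = bitmap.LockBits(new Rectangle(Point.Empty, bounds.Size),
                ImageLockMode.ReadOnly, PixelFormat.Format32bppRgb);
            Marshal.Copy(bits.Scan0, buffer, 0, buffer.Length);
            bitmap.UnlockBits(bits);
        }
        stream.WriteFrame(true, buffer, 0, buffer.Length); // every frame is a key frame
    }
}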

Sound quality issues when using NAudio to play several wav files at once

My objective is this: to allow users of my .NET program to choose their own .wav files for sound effects. These effects may be played simultaneously. NAudio seemed like my best bet.
I decided to use WaveMixerStream32. One early challenge was that my users had .wav files of different formats, so to be able to mix them together with WaveMixerStream32, I needed to "normalize" them to a common format. I wasn't able to find a good example of this to follow so I suspect my problem is a result of my doing this part wrong.
My problem is that when some sounds are played, there are very noticeable "clicking" sounds at their end. I can reproduce this myself.
Also, my users have complained that sometimes, sounds aren't played at all, or are "scratchy" all the way through. I haven't been able to reproduce this in development but I have heard this for myself in our production environment.
I've played the users' wav files myself using Windows Media Player and VLC, so I know the files aren't corrupt. It must be a problem with how I'm using them with NAudio.
My NAudio version is v1.4.0.0.
Here's the code I used. To set up the mixer:
_mixer = new WaveMixerStream32 { AutoStop = false };
_waveOutDevice = new WaveOut(WaveCallbackInfo.NewWindow())
{
    DeviceNumber = -1,
    DesiredLatency = 300,
    NumberOfBuffers = 3,
};
_waveOutDevice.Init(_mixer);
_waveOutDevice.Play();
Surprisingly, if I set "NumberOfBuffers" to 2 here I found that sound quality was awful, with audible "ticks" occurring several times a second.
To initialize a sound file, I did this:
var sample = new AudioSample(fileName);
sample.Position = sample.Length; // To prevent the sample from playing right away
_mixer.AddInputStream(sample);
AudioSample is my class. Its constructor is responsible for the "normalization" of the wav file format. It looks like this:
private class AudioSample : WaveStream
{
    private readonly WaveChannel32 _channelStream;

    public AudioSample(string fileName)
    {
        // Copy the file into memory so the file handle can be released.
        MemoryStream memStream;
        using (var fileStream = File.OpenRead(fileName))
        {
            memStream = new MemoryStream();
            memStream.SetLength(fileStream.Length);
            fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
        }
        // "Normalize": decode to PCM, then convert to 44.1kHz stereo.
        WaveStream originalStream = new WaveFileReader(memStream);
        var pcmStream = WaveFormatConversionStream.CreatePcmStream(originalStream);
        var blockAlignReductionStream = new BlockAlignReductionStream(pcmStream);
        var waveFormatConversionStream = new WaveFormatConversionStream(
            new WaveFormat(44100, blockAlignReductionStream.WaveFormat.BitsPerSample, 2),
            blockAlignReductionStream);
        var waveOffsetStream = new WaveOffsetStream(waveFormatConversionStream);
        _channelStream = new WaveChannel32(waveOffsetStream);
    }
Basically, the AudioSample delegates to its _channelStream object. To play an AudioSample, my code sets its Position to 0. The code that does this is marshalled onto the UI thread.
This almost works great. I can play multiple sounds simultaneously. Unfortunately the sound quality is bad as described above. Can anyone help me figure out why?
Some points in response to your question:
Yes, you have to have all inputs at the same sample rate before you feed them into a mixer. This is simply how digital audio works. The ACM sample rate conversion provided by WaveFormatConversionStream isn't brilliant (it has no aliasing protection). What sample rates are your input files typically at?
You are passing every input through two WaveFormatConversionStreams. Only do this if it is absolutely necessary.
I'm surprised that you are getting bad sound with NumberOfBuffers = 2, which is now the default in NAudio. Have you been pausing and resuming? There was a bug where a buffer could get dropped (fixed in the latest code, and it will be fixed for the NAudio 1.4 final).
A click at the end of a file can just mean it doesn't end on a zero sample. You would have to add a fade-out to eliminate this (a lot of media players do this automatically).
Whenever you are troubleshooting a bad sound issue, I always recommend using WaveFileWriter to convert your WaveStream into a WAV file (taking care not to produce a never-ending file!), so you can listen to it in another media player. This allows you to quickly determine whether it is your audio processing that is causing the problem, or the playback itself. A minimal sketch of this follows.
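Something along these lines, assuming a recent NAudio (on 1.4-era builds the write method was called WriteData; the read loop is bounded because the mixer has AutoStop = false and never reports end-of-stream):

using NAudio.Wave;

static void DumpToWav(WaveStream source, string path, int seconds)
{
    using (var writer = new WaveFileWriter(path, source.WaveFormat))
    {
        // One second of audio per Read call.
        var buffer = new byte[source.WaveFormat.AverageBytesPerSecond];
        for (int i = 0; i < seconds; i++)
        {
            int read = source.Read(buffer, 0, buffer.Length);
            if (read == 0) break; // a finite stream ended early
            writer.Write(buffer, 0, read);
        }
    }
}

Call it with the same stream you would normally pass to _waveOutDevice.Init (e.g. DumpToWav(_mixer, @"C:\debug.wav", 10)) and listen to the result.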
