I would like to see the current playout buffer value while streaming a video over a wireless network link.
I couldn't find any software that does this, so I have decided to write a small app that can show me the current buffer size.
I have put a Windows Media Player object in a form, which plays a video from a URL that I specify. Is there any way to display the current buffer size as the video is played out?
Thanks
You can access the bufferingProgress property with this code:
int progressPercent = axWindowsMediaPlayer1.network.bufferingProgress;
IWMPNetwork interface
http://msdn.microsoft.com/en-us/library/windows/desktop/dd563492(v=vs.85).aspx
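If you want to show the value while the video plays, you can poll it on a timer. Here is a minimal sketch; bufferTimer and bufferLabel are placeholder names for a WinForms Timer and Label on the form, not part of the WMP API:
// Poll the WMP network statistics on each Timer tick and show them in a Label.
private void bufferTimer_Tick(object sender, EventArgs e)
{
    // Percentage of the current buffer that has been filled.
    int progressPercent = axWindowsMediaPlayer1.network.bufferingProgress;

    // Percentage of the clip that has been downloaded so far.
    int downloadPercent = axWindowsMediaPlayer1.network.downloadProgress;

    bufferLabel.Text = string.Format("Buffering: {0}%  Downloaded: {1}%",
        progressPercent, downloadPercent);
}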
I have an app that plays audio, and I would like to record at least 10 seconds of audio while playback is happening. I have checked the official documentation of the MediaRecorder class from both Google and Microsoft, and I have followed every step of the recipe for recording audio with that class. I have also declared all the needed permissions, but I have a problem setting the output file of my MediaRecorder object. The documentation says I need a FileDescriptor object for the method mediaRecorder.SetOutputFile(FileDescriptor dp), but the constructor of that class does not accept a string for a file name. Instinctively I know that if the MediaRecorder object writes audio to a file, then that file needs a name so that I can find it later and test how the audio was recorded.
Instantiating FileDescriptor like this compiles, but what about the file name?
MediaRecorder recorder = new MediaRecorder(this);
//set the source of the audio
recorder.SetAudioSource(AudioSource.Mic);
//set the media encoding of the output
recorder.SetOutputFormat(Android.Media.OutputFormat.Mpeg2Ts);
//specify a file descriptor where to save the recording
//what about the file name? It surely needs a string for the file name
FileDescriptor destination = new FileDescriptor();
recorder.SetOutputFile(destination);
//prepare the recorder
recorder.Prepare();
//limit the recording to 10 seconds
recorder.SetMaxDuration(10000);
//start the recording session
recorder.Start();
Checking how FileDescriptor is used in Java, I learnt that it can work like below, due to inheritance and stuff.
FileDescriptor destination = new FileOutputStream("myrecording");
//the line above does not compile for Xamarin Android
How can I create an instance of a FileDescriptor, tied to a file name, that I can use to find the recorded audio later?
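For what it's worth, here is a minimal sketch of the route I would try, based on the Java pattern above: open a Java.IO.FileOutputStream on a path of my choosing and hand its descriptor to the recorder. It assumes this runs inside an Activity and that the standard Xamarin.Android bindings expose getFD() as the FD property; the file name is just my own example.
// Hypothetical path under the app's external files directory; pick any name you like.
string path = System.IO.Path.Combine(
    GetExternalFilesDir(null).AbsolutePath, "myrecording.ts");

// Open (or create) the file by name, then pass its FileDescriptor to the recorder.
var outputStream = new Java.IO.FileOutputStream(path);
recorder.SetOutputFile(outputStream.FD);

// Remember to Close() the stream after recorder.Stop() so the data is flushed.
// (The bindings also expose a SetOutputFile(string path) overload, which avoids
// the FileDescriptor question entirely.)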
I am trying to record two audio inputs (webcam + microphone) and one video input (webcam) via MediaCapture from C#. One audio and one video input works like a charm, but the class itself does not allow specifying two audio input device IDs.
An example:
var captureDeviceSettings = new MediaCaptureInitializationSettings
{
    VideoDeviceId = videoDeviceId,                              // webcam video input
    AudioDeviceId = audioDeviceId,                              // webcam audio input
    StreamingCaptureMode = StreamingCaptureMode.AudioAndVideo,
};
I thought about using an audio graph with a submix node, but I need to specify a device ID for the media capture and the submix node does not give me one. Furthermore, using the output device of the audio graph does not seem like the solution either, because I do not want to play the microphone to the default output device. I tried that, but it sounds horrible. I also thought about creating a virtual audio device, but I don't know how.
Any suggestions on that?
MediaCapture won't be able to do this. It may be possible with WASAPI, but it is not trivial to do.
Your best option may be to use loopback recording: https://learn.microsoft.com/en-us/windows/win32/coreaudio/loopback-recording.
You will still have to mux in the video stream, though, if you get loopback recording to work.
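For what it's worth, if a managed wrapper is acceptable, NAudio's WasapiLoopbackCapture wraps the same loopback mechanism. A minimal sketch (the output file name and the choice of NAudio are my own, not part of the linked article):
// Capture whatever the default render device is playing and write it to a WAV file.
var capture = new NAudio.Wave.WasapiLoopbackCapture();
var writer = new NAudio.Wave.WaveFileWriter("loopback.wav", capture.WaveFormat);

capture.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);
capture.RecordingStopped += (s, e) => { writer.Dispose(); capture.Dispose(); };

capture.StartRecording();
// ... later, when you are done:
// capture.StopRecording();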
I am using a MediaPlaybackList to essentially 'stream' audio data coming in via Bluetooth as a byte[] on a fixed gather interval. According to the MS documentation, MediaPlaybackList provides 'gapless' playback between items. But in my case, there is a popping sound and a gap when transitioning to the next audio sample.
byte[] audioContent = new byte[audioLength];
chatReader.ReadBytes(audioContent);
MediaPlaybackItem mediaPlaybackItem = new MediaPlaybackItem(MediaSource.CreateFromStream(new MemoryStream(audioContent).AsRandomAccessStream(), "audio/mpeg"));
playbackList.Items.Add(mediaPlaybackItem);
if (_mediaPlayerElement.MediaPlayer.PlaybackSession.PlaybackState != MediaPlaybackState.Playing)
{
    _mediaPlayerElement.MediaPlayer.Play();
}
How can I achieve truly 'gapless' streaming audio using a method similar to this?
Also, I have tried writing my stream to a file in real time as the data comes in, just to see if the popping sound or the gap is there. The file that the bytes are appended to plays back perfectly, with no pop or gap.
using (var stream = await playbackFile.OpenStreamForWriteAsync())
{
stream.Seek(0, SeekOrigin.End);
await stream.WriteAsync(audioContent, 0, audioContent.Length);
}
The MediaPlayer, and in particular the MediaPlaybackList, is not designed to be used with a "live" audio stream. You must finish writing the data to the byte stream before adding it to the list and starting the MediaPlayer. The MediaPlayer is not the correct solution for this particular scenario.
A better solution would be to use the Audio Graph. The Audio Graph allows you to add input sources from actual audio endpoints so you don't need to fill the byte buffer with the streaming audio. You can then use sub-mix nodes to mix between the audio endpoint streams with no clicks or pops.
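For reference, a minimal AudioGraph sketch along those lines (UWP, Windows.Media.Audio; device selection and error handling are trimmed down, and the category values are just reasonable defaults):
using System.Threading.Tasks;
using Windows.Media.Audio;
using Windows.Media.Capture;
using Windows.Media.Render;

private async Task BuildGraphAsync()
{
    // Create the graph itself.
    var settings = new AudioGraphSettings(AudioRenderCategory.Media);
    var graphResult = await AudioGraph.CreateAsync(settings);
    if (graphResult.Status != AudioGraphCreationStatus.Success) return;
    var graph = graphResult.Graph;

    // Default capture endpoint (pass a DeviceInformation to pick a specific device).
    var inputResult = await graph.CreateDeviceInputNodeAsync(MediaCategory.Other);
    var inputNode = inputResult.DeviceInputNode;

    // Route the input through a submix node to the default render endpoint.
    var submixNode = graph.CreateSubmixNode();
    var outputResult = await graph.CreateDeviceOutputNodeAsync();
    var outputNode = outputResult.DeviceOutputNode;

    inputNode.AddOutgoingConnection(submixNode);
    submixNode.AddOutgoingConnection(outputNode);

    graph.Start();
}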
I'd like to be able to mix the microphone output with an mp3 file, and output that to a specific device.
I got playing the mp3 file to a specific device working:
Mp3FileReader reader = new Mp3FileReader("C:\\Users\\Victor\\Music\\Musik\\Attack.mp3");
var waveOut = new WaveOut();// or WaveOutEvent()
waveOut.DeviceNumber = deviceId; //deviceId, like 0 or 1
waveOut.Init(reader);
waveOut.Play();
So what I would like to be able to do is basically always send the microphone output to a specific output device, and overlay it with the sound of an mp3 file on that same device when, for example, a button is pressed.
Now, is what I'm trying to do possible with NAudio, and if so, how would I go about it?
Thanks!
The basic strategy is to put the audio received from the microphone into a BufferedWaveProvider. Then turn that into an ISampleProvider with the ToSampleProvider extension method. Now you can pass that into a MixingSampleProvider and play from the MixingSampleProvider. At any time you can mix in other audio by adding an input to the MixingSampleProvider.
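To make that concrete, here is a minimal sketch of that strategy. It assumes NAudio 1.8+, a 44.1 kHz mono microphone, and a 44.1 kHz stereo mp3; every mixer input must match the mixer's sample rate and channel count.
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Capture the microphone into a BufferedWaveProvider.
var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 1) };
var micBuffer = new BufferedWaveProvider(waveIn.WaveFormat);
waveIn.DataAvailable += (s, e) => micBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

// Mixer runs as IEEE float, 44.1 kHz stereo; ReadFully keeps it playing when inputs run dry.
var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2))
{
    ReadFully = true
};
mixer.AddMixerInput(micBuffer.ToSampleProvider().ToStereo());

// Play the mix on the chosen output device (deviceId as in the question).
var waveOut = new WaveOutEvent { DeviceNumber = deviceId };
waveOut.Init(mixer);
waveIn.StartRecording();
waveOut.Play();

// Later, e.g. in a button click handler, overlay the mp3:
// mixer.AddMixerInput(new Mp3FileReader(mp3Path).ToSampleProvider());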
I know there are lots of questions like this.
But I don't want to use Windows Media Encoder 9, because it is hard to obtain and it is no longer supported.
I know that one possibility is to capture lots of screenshots and create a video with ffmpeg, but I don't want to use third-party executables.
Is there a .NET-only solution?
The answer is Microsoft Expression Encoder. In my opinion, it is the easiest way to record the screen on Vista and Windows 7.
private void CaptureMoni()
{
    try
    {
        Rectangle _screenRectangle = Screen.PrimaryScreen.Bounds;

        _screenCaptureJob = new ScreenCaptureJob();
        _screenCaptureJob.CaptureRectangle = _screenRectangle;
        _screenCaptureJob.ShowFlashingBoundary = true;
        _screenCaptureJob.ScreenCaptureVideoProfile.FrameRate = 20;
        _screenCaptureJob.CaptureMouseCursor = true;
        _screenCaptureJob.OutputScreenCaptureFileName = @"C:\test.wmv";
        if (File.Exists(_screenCaptureJob.OutputScreenCaptureFileName))
        {
            File.Delete(_screenCaptureJob.OutputScreenCaptureFileName);
        }
        _screenCaptureJob.Start();
    }
    catch (Exception e)
    {
        // Swallowing the exception keeps the sample short; log it in real code.
    }
}
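To stop a capture started this way, the job exposes a Stop method (from the same Expression Encoder SDK; call it from, say, a button handler):
// Stop the running screen capture job; the .wmv is finalized at this point.
_screenCaptureJob.Stop();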
Edit Based on Comment Feedback:
A developer by the name of baSSiLL has graciously shared a repository that contains a screen-recording C# library, as well as a sample project in C# that shows how it can be used to capture the screen and mic.
Starting a screen capture using the sample code is as straightforward as:
recorder = new Recorder(_filePath,
KnownFourCCs.Codecs.X264, quality,
0, SupportedWaveFormat.WAVE_FORMAT_44S16, true, 160);
_filePath is the path of the file I'd like to save the video to.
You can pass in a variety of codecs including AVI, MotionJPEG, X264, etc. In the case of x264 I had to install the codec on my machine first but AVI works out of the box.
Quality only comes into play when using AVI or MotionJPEG. The x264 codec manages its own quality settings.
The 0 above is the audio device I'd like to use; the default is zero.
It currently supports two wave formats: 44100 Hz at 16-bit, either stereo or mono.
The true parameter indicates that I want the audio encoded into mp3 format. I believe this is required when choosing x264, as uncompressed audio combined into an .mp4 file would not play back for me.
The 160 is the bitrate at which to encode the audio.
~~~~~
To stop the recording, you just call:
recorder.Dispose();
recorder = null;
Everything is open source so you can edit the recorder class and change dimensions, frames per second, etc.
~~~~
To get up and running with this library, you will need to either download or pull from the GitHub / CodePlex repositories below. You can also use NuGet:
Install-Package SharpAvi
Original Post:
Sharp AVI:
https://sharpavi.codeplex.com/
or
https://github.com/baSSiLL/SharpAvi
There is a sample project within that library that has a great screen recorder in it along with a menu for settings/etc.
I found Screna first, from another answer on this Stack Overflow question, but I ran into a couple of issues getting the MP3 LAME encoder to work correctly. Screna is a wrapper for SharpAvi. I found that by removing Screna and going off SharpAvi's sample, I had better luck.