MediaStreamSource video streaming in UWP - c#

I just started experimenting with MediaStreamSource in UWP.
I took the MediaStreamSource streaming example from Microsoft and tried to rewrite it to support MP4 instead of MP3.
I changed nothing but the InitializeMediaStreamSource part; it now looks like this:
{
    var clip = await MediaClip.CreateFromFileAsync(inputMP3File); // the picked MP4 file, despite the variable name
    var audioTrack = clip.EmbeddedAudioTracks.First();
    var property = clip.GetVideoEncodingProperties();

    // initialize parsing variables
    byteOffset = 0;
    timeOffset = new TimeSpan(0);

    var videoDescriptor = new VideoStreamDescriptor(property);
    var audioDescriptor = new AudioStreamDescriptor(audioTrack.GetAudioEncodingProperties());

    MSS = new MediaStreamSource(videoDescriptor)
    {
        Duration = clip.OriginalDuration
    };

    // hook up the MediaStreamSource event handlers
    MSS.Starting += MSS_Starting;
    MSS.SampleRequested += MSS_SampleRequested;
    MSS.Closed += MSS_Closed;

    media.SetMediaStreamSource(MSS);
}
My problem is that I cannot find a single example where a video stream is used instead of audio, so I can't figure out what's wrong with my code. If I set the MediaElement's Source property to the given MP4 file, it works like a charm. The same goes for picking an MP3 and leaving the videoDescriptor out. But if I try to do the same with a video, nothing happens. (I'm still not sure whether I should pass the audioDescriptor as a second argument to the MediaStreamSource, but since I've got one mixed stream, I assume it's not needed.) The SampleRequested event is triggered and no error is thrown, which makes this really hard to debug. It's a real pain. :S

I have a solution for building a working video MediaStreamSource from bitmap files, but unfortunately I haven't found a solution for an RGBA buffer yet.
First of all, read the MediaStreamSource class documentation: https://learn.microsoft.com/en-us/uwp/api/windows.media.core.mediastreamsource
Here is how I create an MJPEG MediaStreamSource:
var MediaStreamSource = new MediaStreamSource(
    new VideoStreamDescriptor(
        VideoEncodingProperties.CreateUncompressed(
            CodecSubtypes.VideoFormatMjpg, size.Width, size.Height
        )
    )
);
Then give it some buffer time:
MediaStreamSource.BufferTime = TimeSpan.FromSeconds(1);
Then subscribe to the SampleRequested event to supply each requested frame:
MediaStreamSource.SampleRequested += async (MediaStreamSource sender, MediaStreamSourceSampleRequestedEventArgs args) =>
{
    // take a deferral because the sample is loaded asynchronously
    var deferral = args.Request.GetDeferral();
    try
    {
        var timestamp = DateTime.Now - startedAt;
        var file = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync(@"Assets\grpPC1.jpg");
        using (var stream = await file.OpenReadAsync())
        {
            args.Request.Sample = await MediaStreamSample.CreateFromStreamAsync(
                stream.GetInputStreamAt(0), (uint)stream.Size, timestamp);
        }
        args.Request.Sample.Duration = TimeSpan.FromSeconds(5);
    }
    finally
    {
        deferral.Complete();
    }
};
As you can see, I use CodecSubtypes.VideoFormatMjpg and a hardcoded path to a JPEG file that I reuse for every MediaStreamSample. What still needs research is which subtype to set so that an RGBA (4 bytes per pixel) bitmap buffer can be used, like this:
var buffer = new Windows.Storage.Streams.Buffer(size.Width * size.Height * 4);
// latestBitmap is a SoftwareBitmap
latestBitmap.CopyToBuffer(buffer);
args.Request.Sample = MediaStreamSample.CreateFromBuffer(buffer, timestamp);
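One direction I would try for that research (an untested sketch; the Bgra8 subtype, the frame duration, and latestBitmap being a BGRA8 SoftwareBitmap are all my assumptions): MediaEncodingSubtypes in Windows.Media.MediaProperties also exposes uncompressed formats such as Bgra8, and CreateUncompressed accepts any subtype string, so a BGRA8 descriptor might let samples built with CreateFromBuffer play directly:
// Untested sketch: an uncompressed BGRA8 video stream fed from a SoftwareBitmap.
// Namespaces: Windows.Media.Core, Windows.Media.MediaProperties, Windows.Storage.Streams.
var mss = new MediaStreamSource(
    new VideoStreamDescriptor(
        VideoEncodingProperties.CreateUncompressed(
            MediaEncodingSubtypes.Bgra8, size.Width, size.Height)));

mss.SampleRequested += (sender, args) =>
{
    // Assumption: latestBitmap is a SoftwareBitmap in BitmapPixelFormat.Bgra8
    var buffer = new Windows.Storage.Streams.Buffer((uint)(size.Width * size.Height * 4));
    latestBitmap.CopyToBuffer(buffer);
    args.Request.Sample = MediaStreamSample.CreateFromBuffer(buffer, DateTime.Now - startedAt);
    args.Request.Sample.Duration = TimeSpan.FromMilliseconds(33); // ~30 fps, arbitrary choice
};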

Related

How to change default image folder on Xamarin forms (android)

I am having the "Canvas: trying to draw too large bitmaps" issue. After a quick search, I found a thread that promptly helped me understand the problem.
The solution is to put the image in drawable-xxhdpi/ instead of simply drawable/. And here lies my issue: the image is not static; it is imported when I need it. As such, I do not choose where the image ends up stored; it stores itself in drawable. Is there 1) a way to choose which folder is used, or 2) a way to tell it not to load the image if it's too heavy?
var file = new SmbFile(path, auth);
try
{
    if (file.Exists())
    {
        // Get a readable stream.
        var readStream = file.GetInputStream();
        // Copy the bytes into a memory buffer.
        MemoryStream memStream = new MemoryStream();
        ((Stream)readStream).CopyTo(memStream);
        var stream1 = new MemoryStream(memStream.ToArray());
        if (stream1.Length < 120188100) // size threshold in bytes (~115 MB)
        {
            // Save the image.
            ProductImage = ImageSource.FromStream(() => stream1);
            // Dispose of the readable stream.
            readStream.Dispose();
            InfoColSpan = 1;
        }
        else
        {
            Common.AlertError("Image trop lourde pour l'affichage"); // "Image too heavy to display"
        }
    }
}
catch (Exception ex)
{
    // the original snippet ends without a catch; one (or a finally) is required to compile
    Common.AlertError(ex.Message);
}
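One possible take on point 2, as a hedged sketch rather than a verified answer: instead of checking the stream's byte length, decode only the image bounds in the Android platform project and skip bitmaps whose decoded pixel size would exceed the canvas limit. The ~100 MB threshold and the use of BitmapFactory here are my assumptions:
// Android platform project. InJustDecodeBounds reads only the header,
// so no pixels are allocated; decoded size is width * height * 4 (RGBA_8888).
var options = new Android.Graphics.BitmapFactory.Options { InJustDecodeBounds = true };
Android.Graphics.BitmapFactory.DecodeStream(stream1, null, options);
long decodedBytes = (long)options.OutWidth * options.OutHeight * 4;
if (decodedBytes < 100L * 1024 * 1024) // assumed draw limit of ~100 MB
{
    stream1.Position = 0; // rewind after the bounds-only decode
    ProductImage = ImageSource.FromStream(() => stream1);
}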

How to record an input device with more than 2 channels to mp3 format

I am building recording software that records all devices connected to the PC into MP3 format.
Here is my code:
IWaveIn _captureInstance = inputDevice.DataFlow == DataFlow.Render
    ? new WasapiLoopbackCapture(inputDevice)
    : new WasapiCapture(inputDevice);

var waveFormatToUse = _captureInstance.WaveFormat;
var sampleRateToUse = waveFormatToUse.SampleRate;
var channelsToUse = waveFormatToUse.Channels;

if (sampleRateToUse > 48000) // LameMP3FileWriter doesn't support rates above 48000 Hz
{
    sampleRateToUse = 48000;
}
else if (sampleRateToUse < 8000) // LameMP3FileWriter doesn't support rates below 8000 Hz
{
    sampleRateToUse = 8000;
}
if (channelsToUse > 2) // LameMP3FileWriter doesn't support more than 2 channels
{
    channelsToUse = 2;
}

// Note: AverageBytesPerSecond and BlockAlign below are still the capture's
// original values, even when the rate or channel count was clamped above.
waveFormatToUse = WaveFormat.CreateCustomFormat(_captureInstance.WaveFormat.Encoding,
    sampleRateToUse,
    channelsToUse,
    _captureInstance.WaveFormat.AverageBytesPerSecond,
    _captureInstance.WaveFormat.BlockAlign,
    _captureInstance.WaveFormat.BitsPerSample);

_mp3FileWriter = new LameMP3FileWriter(_currentStream, waveFormatToUse, 32);
This code works properly, except when a connected device (including virtual ones such as SteelSeries Sonar) has more than 2 channels.
With more than 2 channels, the recording contains nothing but noise.
How can I solve this? I'm not required to use LameMP3FileWriter; I just need MP3 or any format with good compression, and ideally all processing in memory (no intermediate files on disk), only the final audio file.
My recording code:
// When the capturer receives audio, write the buffer through the writer instance
_captureInstance.DataAvailable += (s, a) =>
{
    lock (_writerLock)
    {
        _mp3FileWriter?.Write(a.Buffer, 0, a.BytesRecorded);
    }
};

// When the capturer stops, dispose of the writer and capturer instances
_captureInstance.RecordingStopped += (s, a) =>
{
    lock (_writerLock)
    {
        _mp3FileWriter?.Dispose();
    }
    _captureInstance?.Dispose();
};

// Start audio recording
_captureInstance.StartRecording();
If LAME doesn't support more than 2 channels, you can't use this encoder for your purpose. Have you tried the Fraunhofer surround MP3 encoder?
Link: https://download.cnet.com/mp3-surround-encoder/3000-2140_4-165541.html
Also, here's a nice article discussing how to convert between most audio formats (with C# code samples): https://www.codeproject.com/articles/501521/how-to-convert-between-most-audio-formats-in-net
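A likely reason for the noise, by the way: the capture keeps delivering N-channel frames, but the writer was told to expect 2-channel frames, so the bytes get misinterpreted. If you want to stay with NAudio and LAME, one in-memory option is to downmix the frames to stereo yourself before they reach the writer. This is an untested sketch that assumes shared-mode WASAPI delivers 32-bit IEEE-float samples (check _captureInstance.WaveFormat.Encoding first) and that the sample rate is already within LAME's 8-48 kHz range:
// Downmix N-channel 32-bit float capture buffers to 16-bit stereo for LAME.
var src = _captureInstance.WaveFormat;
var target = new WaveFormat(src.SampleRate, 16, 2);
_mp3FileWriter = new LameMP3FileWriter(_currentStream, target, 32);

int channels = src.Channels;
int leftCount = (channels + 1) / 2, rightCount = Math.Max(channels / 2, 1);

_captureInstance.DataAvailable += (s, a) =>
{
    int frames = a.BytesRecorded / (4 * channels); // 4 bytes per float sample
    var stereo = new byte[frames * 4];             // 2 channels * 2 bytes per frame
    for (int f = 0; f < frames; f++)
    {
        float left = 0, right = 0;
        for (int c = 0; c < channels; c++)
        {
            float sample = BitConverter.ToSingle(a.Buffer, (f * channels + c) * 4);
            if (c % 2 == 0) left += sample; else right += sample; // even -> L, odd -> R
        }
        // average, clamp to [-1, 1], and scale to 16-bit
        short l = (short)(Math.Max(-1f, Math.Min(1f, left / leftCount)) * short.MaxValue);
        short r = (short)(Math.Max(-1f, Math.Min(1f, right / rightCount)) * short.MaxValue);
        BitConverter.GetBytes(l).CopyTo(stereo, f * 4);
        BitConverter.GetBytes(r).CopyTo(stereo, f * 4 + 2);
    }
    lock (_writerLock)
    {
        _mp3FileWriter?.Write(stereo, 0, stereo.Length);
    }
};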

c# cscore how to instantly play loopback capture recorded bytes

Yo guys, it's me again with my noob questions. This time I've used CSCore to record Windows sounds, then send the recorded bytes to another PC over sockets and play them there.
I just could not figure out how to play the bytes I get in the DataAvailable callback.
I've tried writing the bytes to a file and playing that file; it worked, but the sound doesn't play correctly, like there are unexpected sounds mixed in with it.
So here's my code:
WasapiCapture capture = new WasapiLoopbackCapture();
capture.Initialize();
capture.DataAvailable += (s, e) =>
{
    // note: WaveWriter writes WAV data, regardless of the .mp3 extension
    WaveWriter w = new WaveWriter("file.mp3", capture.WaveFormat);
    w.Write(e.Data, e.Offset, e.ByteCount);
    w.Dispose();

    MemoryStream stream = new MemoryStream(File.ReadAllBytes("file.mp3"));
    SoundPlayer player = new SoundPlayer(stream);
    player.Play();
    stream.Dispose();
};
capture.Start();
Any help would be highly appreciated ;-;.
If you want to hear how the sound comes out this way, I can record the result for you.
NOTE: if I just record sounds to a file and open it later, it plays perfectly, but if I write and play instantly, the unexpected sounds appear.
Use the SoundInSource as an adapter.
var capture = new WasapiCapture(...);
capture.Initialize(); // always initialize first!
var soundInSource = new SoundInSource(capture)
{
    // FillWithZeros prevents WasapiOut from stopping when
    // WasapiCapture has no data to serve
    FillWithZeros = true
};
var soundOut = new WasapiOut();
soundOut.Initialize(soundInSource);
soundOut.Play();
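That covers local monitoring. For the remote half of the question (playing bytes that arrive over a socket), CSCore's WriteableBufferingSource can be fed manually. The following is an untested sketch; the WaveFormat values and receivedBytes are placeholders, and the format must match the sending side's capture.WaveFormat exactly:
// Receiving PC: push raw bytes from the socket into a buffering source.
var format = new WaveFormat(48000, 32, 2, AudioEncoding.IeeeFloat); // must match the capture format
var bufferSource = new WriteableBufferingSource(format) { FillWithZeros = true };
var soundOut = new WasapiOut();
soundOut.Initialize(bufferSource);
soundOut.Play();

// whenever a packet arrives from the socket:
bufferSource.Write(receivedBytes, 0, receivedBytes.Length);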

Naudio float Sample and Concentus.OggFile short Sample, convert mp3 to opus ogg

I'm trying to convert an MP3 file to an Opus Ogg file using:
NAudio: https://github.com/naudio/NAudio
Concentus.OggFile: https://github.com/lostromb/concentus.oggfile
using (var source = new MemoryStream(mp3File))
using (var mp3Reader = new MyAudioFileReader(source, FileReaderType.Mp3))
using (var memo = new MemoryStream())
{
    var bufferFloat = new float[mp3Reader.Length / (mp3Reader.WaveFormat.BitsPerSample / 8)];
    var count = mp3Reader.Read(bufferFloat, 0, bufferFloat.Length);

    // convert float samples to short
    var buffShort = new short[count];
    var scale = (float)short.MaxValue;
    for (int i = 0; i < count; i++)
    {
        buffShort[i] = (short)(bufferFloat[i] * scale);
    }

    // encoder
    var encoder = OpusEncoder.Create(48000,
        mp3Reader.WaveFormat.Channels,
        OpusApplication.OPUS_APPLICATION_AUDIO);
    encoder.Bitrate = 65536; // 64 kbps

    // tags
    var tags = new OpusTags();
    tags.Fields[OpusTagName.Title] = "Title";
    tags.Fields[OpusTagName.Artist] = "Artist";

    var oggOut = new OpusOggWriteStream(encoder, memo, tags);
    oggOut.WriteSamples(buffShort, 0, buffShort.Length);
    oggOut.Finish();

    result = memo.ToArray();
}
I don't know the basics; I did some digging, and here is what I found:
OpusOggWriteStream.WriteSamples()
requires a short[] sample buffer as input.
Is the way I convert the NAudio float[] sample provider to short[] okay?
Probably not, because the output file can't be played.
This code doesn't work and I have no idea why :"D
This is probably too little, too late, but whatever. As far as I can tell your code looks fine, so to debug I would try a few things:
There is actually a WriteSamples() overload in OpusOggWriteStream that accepts float[]. Try using that first.
I would make sure that mp3Reader.Read actually produces as much data as you believe it does. I wonder if it might only be returning a single frame of decoded data or something like that. Try writing the data out as uncompressed PCM and sanity-checking it.
I checked whether there was some bug in Concentus.Oggfile since you only ever call WriteSamples() once; I thought maybe it wouldn't finalize the pages properly in that case, but I can't find anything.
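For what it's worth, one subtle hazard in the original conversion loop: float decoders can produce samples slightly outside [-1.0, 1.0], and those wrap around after the cast to short. A minimal sketch of a clamped conversion:
// Convert float samples to short with clamping, so out-of-range
// samples saturate instead of wrapping around after the cast.
static short[] FloatToShort(float[] samples, int count)
{
    var result = new short[count];
    for (int i = 0; i < count; i++)
    {
        float clamped = Math.Max(-1f, Math.Min(1f, samples[i]));
        result[i] = (short)(clamped * short.MaxValue);
    }
    return result;
}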

Storing a wav file in an array

I need a fast method to store all the samples of a WAV file in an array. I am currently working around this by playing the file and storing the values from the sample provider, but that is not very elegant.
From the NAudio demo I have the Audioplayer class with this method:
private ISampleProvider CreateInputStream(string fileName)
{
    if (fileName.EndsWith(".wav"))
    {
        fileStream = OpenWavStream(fileName);
    }
    else
    {
        throw new InvalidOperationException("Unsupported extension");
    }
    var inputStream = new SampleChannel(fileStream, true);
    var sampleStream = new NotifyingSampleProvider(inputStream);
    SampleRate = sampleStream.WaveFormat.SampleRate;
    sampleStream.Sample += (s, e) => { aggregator.Add(e.Left); }; // the aggregator receives each sample value while the wav file plays
    return sampleStream;
}
I want to skip this process of collecting the sample values while playing the file; instead I want the values immediately, without waiting until the end of the file. Basically like the wavread command in MATLAB.
Use AudioFileReader to read the file. This will automatically convert to IEEE float samples. Then repeatedly call the Read method to read a block of samples into a float[] array.
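A minimal sketch of that approach (the file name is a placeholder):
// Read every sample of the file into memory as IEEE floats.
var samples = new List<float>();
using (var reader = new AudioFileReader("input.wav"))
{
    // one second of samples per Read call
    var buffer = new float[reader.WaveFormat.SampleRate * reader.WaveFormat.Channels];
    int read;
    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        for (int i = 0; i < read; i++)
            samples.Add(buffer[i]);
    }
}
float[] allSamples = samples.ToArray(); // the whole file, like MATLAB's wavread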
