How to use a track bar with NAudio MP3 streaming - C#

I am using the NAudio library in C# and I need a track bar to control the music. The NAudio Mp3Stream example project has the standard controls such as play, stop, pause and volume, but there is no track bar.
Can I use a track bar with Mp3Stream at all? When I set a breakpoint on the do loop, the CanSeek property of the readFullyStream variable is false.
How can I use a track bar while streaming?
using (var responseStream = resp.GetResponseStream())
{
    var readFullyStream = new ReadFullyStream(responseStream);
    do
    {
        if (IsBufferNearlyFull)
        {
            Debug.WriteLine("Buffer getting full, taking a break");
            Thread.Sleep(500);
        }
        else
        {
            Mp3Frame frame;
            try
            {
                frame = Mp3Frame.LoadFromStream(readFullyStream);
            }
            catch (EndOfStreamException)
            {
                fullyDownloaded = true;
                // reached the end of the MP3 file / stream
                break;
            }
            catch (WebException)
            {
                // probably we have aborted download from the GUI thread
                break;
            }
            if (decompressor == null)
            {
                // don't think these details matter too much - just help ACM select the right codec
                // however, the buffered provider doesn't know what sample rate it is working at
                // until we have a frame
                decompressor = CreateFrameDecompressor(frame);
                bufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
                bufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(20); // allow us to get well ahead of ourselves
                //this.bufferedWaveProvider.BufferedDuration = 250;
            }
            int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
            //Debug.WriteLine(String.Format("Decompressed a frame {0}", decompressed));
            bufferedWaveProvider.AddSamples(buffer, 0, decompressed);
        }
    } while (playbackState != StreamingPlaybackState.Stopped);
    Debug.WriteLine("Exiting");
    // was doing this in a finally block, but for some reason
    // we are hanging on response stream .Dispose so never get there
    decompressor.Dispose();
}

The NAudio streaming demo simply decompresses and plays MP3 frames as they arrive. There is no code to store previously received audio, so if you wanted the ability to rewind, you'd need to implement that yourself.
One option you could consider is using MediaFoundationReader, passing in the stream URL as the path. This can play audio as it downloads, and should support repositioning as well.
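As a rough sketch of that second option (assuming the URL points to a fixed-length MP3 file rather than a live stream, and using placeholder names such as trackBar1 and a placeholder URL), a WinForms form could wire a track bar to the reader's CurrentTime like this:

// Sketch only: MediaFoundationReader can open an HTTP URL directly
var reader = new MediaFoundationReader("http://example.com/file.mp3"); // placeholder URL
var waveOut = new WaveOutEvent();
waveOut.Init(reader);
waveOut.Play();

// Scale the track bar to the total duration, in seconds
trackBar1.Maximum = (int)reader.TotalTime.TotalSeconds;

// A timer keeps the track bar in sync with the playback position
var timer = new System.Windows.Forms.Timer { Interval = 500 };
timer.Tick += (s, e) =>
    trackBar1.Value = Math.Min(trackBar1.Maximum, (int)reader.CurrentTime.TotalSeconds);
timer.Start();

// When the user drags the track bar, reposition the reader
trackBar1.Scroll += (s, e) =>
    reader.CurrentTime = TimeSpan.FromSeconds(trackBar1.Value);

For a live, endless stream there is no total duration to map onto a track bar, so seeking would only make sense within whatever audio you choose to cache yourself.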

Related

How to record an input device with more than 2 channels to mp3 format

I am building recording software that records all devices connected to the PC into MP3 format.
Here is my code:
IWaveIn _captureInstance = inputDevice.DataFlow == DataFlow.Render ?
    new WasapiLoopbackCapture(inputDevice) : new WasapiCapture(inputDevice);
var waveFormatToUse = _captureInstance.WaveFormat;
var sampleRateToUse = waveFormatToUse.SampleRate;
var channelsToUse = waveFormatToUse.Channels;
if (sampleRateToUse > 48000) // LameMP3FileWriter doesn't support a rate more than 48000Hz
{
    sampleRateToUse = 48000;
}
else if (sampleRateToUse < 8000) // LameMP3FileWriter doesn't support a rate less than 8000Hz
{
    sampleRateToUse = 8000;
}
if (channelsToUse > 2) // LameMP3FileWriter doesn't support a number of channels more than 2
{
    channelsToUse = 2;
}
waveFormatToUse = WaveFormat.CreateCustomFormat(_captureInstance.WaveFormat.Encoding,
    sampleRateToUse,
    channelsToUse,
    _captureInstance.WaveFormat.AverageBytesPerSecond,
    _captureInstance.WaveFormat.BlockAlign,
    _captureInstance.WaveFormat.BitsPerSample);
_mp3FileWriter = new LameMP3FileWriter(_currentStream, waveFormatToUse, 32);
This code works properly except when a connected device (including a virtual one such as SteelSeries Sonar) has more than 2 channels.
With more than 2 channels, the recordings contain only noise.
How can I solve this issue? It isn't required to use LameMP3FileWriter; I just need MP3 or any other format with good compression. If possible, it should also work without saving intermediate files to disk (all processing in memory), producing only the final audio file.
My recording code:
// When the capturer receives audio, start writing the buffer into the mentioned file
_captureInstance.DataAvailable += (s, a) =>
{
    lock (_writerLock)
    {
        // Write buffer into the file of the writer instance
        _mp3FileWriter?.Write(a.Buffer, 0, a.BytesRecorded);
    }
};
// When the Capturer Stops, dispose instances of the capturer and writer
_captureInstance.RecordingStopped += (s, a) =>
{
    lock (_writerLock)
    {
        _mp3FileWriter?.Dispose();
    }
    _captureInstance?.Dispose();
};
// Start audio recording
_captureInstance.StartRecording();
If LAME doesn't support more than 2 channels, you can't use this encoder for your purpose. Have you tried it with the Fraunhofer surround MP3 encoder?
Link: https://download.cnet.com/mp3-surround-encoder/3000-2140_4-165541.html
Also, here's a nice article discussing how to convert between most audio formats (with C# code samples): https://www.codeproject.com/articles/501521/how-to-convert-between-most-audio-formats-in-net
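If staying with LameMP3FileWriter is acceptable, another option (not part of the answer above, only a sketch) is to keep just the first two channels of each captured frame and create the writer with a matching 2-channel format. This assumes the WASAPI capture delivers interleaved IEEE-float samples and that the device's sample rate is already within LAME's supported range:

// Sketch: drop all channels beyond the first two before encoding
int channels = _captureInstance.WaveFormat.Channels;
int bytesPerSample = _captureInstance.WaveFormat.BitsPerSample / 8;
int inFrameSize = channels * bytesPerSample;
int outFrameSize = 2 * bytesPerSample;

// 2-channel IEEE-float format at the capture sample rate (assumption: float capture)
var stereoFormat = WaveFormat.CreateIeeeFloatWaveFormat(_captureInstance.WaveFormat.SampleRate, 2);
_mp3FileWriter = new LameMP3FileWriter(_currentStream, stereoFormat, 192); // hypothetical bitrate

_captureInstance.DataAvailable += (s, a) =>
{
    int frames = a.BytesRecorded / inFrameSize;
    var stereo = new byte[frames * outFrameSize];
    for (int f = 0; f < frames; f++)
    {
        // copy channels 0 and 1 of this frame, skip the rest
        Buffer.BlockCopy(a.Buffer, f * inFrameSize, stereo, f * outFrameSize, outFrameSize);
    }
    lock (_writerLock)
    {
        _mp3FileWriter?.Write(stereo, 0, stereo.Length);
    }
};

Whether LameMP3FileWriter accepts a float input format should be checked against the NAudio.Lame version in use; if not, the samples would also need converting to 16-bit PCM first.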

C# BroadCast Mp3 File To ShoutCast Server

I'm trying to make a radio-like auto DJ that plays a list of MP3 files in series, like what happens on a radio station.
I tried a lot of workarounds, and finally decided to send the MP3 files to a SHOUTcast server and play that server's output. My problem is that I don't know how to do that.
I have tried bass.radio with BASS.NET, and this is my code:
private int _recHandle;
private BroadCast _broadCast;
EncoderLAME l;
IStreamingServer server = null;
// Init Bass
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT,IntPtr.Zero);
// create the stream
int _stream = Bass.BASS_StreamCreateFile("1.mp3", 0, 0,
BASSFlag.BASS_SAMPLE_FLOAT | BASSFlag.BASS_STREAM_PRESCAN);
l= new EncoderLAME(_stream);
l.InputFile = null; //STDIN
l.OutputFile = null;
l.Start(null, IntPtr.Zero, false);
// decode the stream (if not using a decoding channel, simply call "Bass.BASS_ChannelPlay" here)
byte[] encBuffer = new byte[65536]; // our dummy encoder buffer
while (Bass.BASS_ChannelIsActive(_stream) == BASSActive.BASS_ACTIVE_PLAYING)
{
    // getting sample data will automatically feed the encoder
    int len = Bass.BASS_ChannelGetData(_stream, encBuffer, encBuffer.Length);
}
//l.Stop(); // finish
//Bass.BASS_StreamFree(_stream);
//Server
SHOUTcast shoutcast = new SHOUTcast(l);
shoutcast.ServerAddress = "50.22.219.37";
shoutcast.ServerPort = 12904;
shoutcast.Password = "01008209907";
shoutcast.PublicFlag = true;
shoutcast.Genre = "Hörspiel";
shoutcast.StationName = "Kravis Server";
shoutcast.Url = "";
shoutcast.Aim = "";
shoutcast.Icq = "";
shoutcast.Irc = "";
server = shoutcast;
server.SongTitle = "BASS.NET";
// disconnect, if connected
if (_broadCast != null && _broadCast.IsConnected)
{
    _broadCast.Disconnect();
}
_broadCast = null;
GC.Collect();
_broadCast = new BroadCast(server);
_broadCast.Notification += OnBroadCast_Notification;
_broadCast.AutoReconnect = true;
_broadCast.ReconnectTimeout = 5;
_broadCast.AutoConnect();
But the file doesn't get streamed to the server, even though _broadCast is connected.
Any solution, in code or otherwise, would be appreciated.
I haven't used BASS in many years, so I can't give you specific advice on the code you have there. But, I wanted to give you the gist of the process of what you need to do... it might help you get started.
As your file is in MP3, it is possible to send it directly to the server and hear it on the receiving end. However, there are a few problems with that. The first is rate control. If you simply transmit the file data, you'll send say 5 minutes of data in perhaps a 10 second time period. This will eventually cause failures as the clients aren't going to buffer much data, and they will disconnect. Another problem is that your MP3 files often have extra data in them in the form of ID3 tags. Some players will ignore this, others won't. Finally, some of your files might be in different sample rates than others, so even if you rate limit your sending, the players will break when they hit a file in a different sample rate.
What needs to happen is the generation of a fresh stream. The pipeline looks something like this:
[Source File] -> [Codec] -> [Raw PCM Audio] -> [Codec] -> [MP3 Stream] -> [SHOUTcast Server] -> [Clients]
Additionally, that raw PCM audio step needs to run at a realtime rate. While your computer can certainly decode and encode faster than realtime, it needs to run at realtime so that the players can listen in realtime.
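To illustrate just the rate-control part of that pipeline in C# (using NAudio for the decode step as a stand-in for BASS, and a hypothetical SendToEncoder placeholder for whatever feeds your encoder/broadcaster):

// Sketch: decode each MP3 to one common PCM format and pace it at realtime
// before handing it to the encoder that feeds the SHOUTcast server.
using (var reader = new AudioFileReader("1.mp3"))
using (var pcm = new MediaFoundationResampler(reader, new WaveFormat(44100, 16, 2)))
{
    int bytesPerSecond = pcm.WaveFormat.AverageBytesPerSecond;
    var buffer = new byte[bytesPerSecond / 10];   // roughly 100 ms per chunk
    var clock = System.Diagnostics.Stopwatch.StartNew();
    long bytesSent = 0;
    int read;
    while ((read = pcm.Read(buffer, 0, buffer.Length)) > 0)
    {
        SendToEncoder(buffer, read);              // hypothetical: LAME / broadcaster feed
        bytesSent += read;
        // sleep until wall-clock time catches up with the audio time already sent
        double ahead = (double)bytesSent / bytesPerSecond - clock.Elapsed.TotalSeconds;
        if (ahead > 0)
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(ahead));
    }
}

Because every file is decoded to the same 44.1 kHz, 16-bit stereo format before re-encoding, sample-rate differences and ID3 tags in the source files never reach the listeners, and the sleep keeps the send rate at realtime.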

Play real-time sound buffer using C# Media.SoundPlayer

I have developed a system in which a C# program receives sound buffers (byte arrays) from another subsystem. It is supposed to play the incoming buffers continuously. I searched the web and decided to use SoundPlayer. It works perfectly in offline mode (playing the buffers after receiving them all). However, I have a problem in real-time mode.
In real-time mode the program first waits to receive and accumulate a number of buffer arrays (for example 200). Then it adds a WAV header and plays them. However, after that, for each subsequent group of 200 arrays it repeatedly plays the first buffer.
I have read the following pages:
Play wav/mp3 from memory
https://social.msdn.microsoft.com/Forums/vstudio/en-US/8ac2847c-3e2f-458c-b8ff-533728e267e0/c-problems-with-mediasoundplayer?forum=netfxbcl
and according to their suggestions I implemented my code as follows:
public class MediaPlayer
{
    System.Media.SoundPlayer soundPlayer;

    public MediaPlayer(byte[] buffer)
    {
        byte[] headerPlusBuffer = AddWaveHeader(buffer, false, 1, 16, 8000, buffer.Length / 2); //add wav header to the **first** buffer
        MemoryStream memoryStream = new MemoryStream(headerPlusBuffer, true);
        soundPlayer = new System.Media.SoundPlayer(memoryStream);
    }

    public void Play()
    {
        soundPlayer.PlaySync();
    }

    public void Play(byte[] buffer)
    {
        soundPlayer.Stream.Seek(0, SeekOrigin.Begin);
        soundPlayer.Stream.Write(buffer, 0, buffer.Length);
        soundPlayer.PlaySync();
    }
}
I use it like this:
MediaPlayer _mediaPlayer;
if (firstBuffer)
{
    _mediaPlayer = new MediaPlayer(dataRaw);
    _mediaPlayer.Play();
}
else
{
    _mediaPlayer.Play(dataRaw);
}
Each time _mediaPlayer.Play(dataRaw) is called, the first buffer is played again and again, even though dataRaw has been updated.
I appreciate your help.

MediaElement not playing after changing position

I have a video player which downloads a video file in chunks. After a chunk of 1MB has been downloaded, an event is called that gives the MediaElement its source and makes it play. While the video is playing, the rest of the 1MB chunks are downloaded until the file is complete. If only 1MB of the video is downloaded, the playback time is equal to 17 seconds (this will come in later).
When the file is completely downloaded, the user is allowed to change the position of the video, i.e. to seek. If the user seeks to a position at or under 17 seconds, the MediaElement changes its position and keeps playing; however, if the user seeks to a position greater than 17 seconds, the video freezes.
This could be because the MediaElement has buffered only 1MB of the video, so it will only seek within that timeframe, but that doesn't make sense because if I let it play without interruption it plays the whole video without any problem. Can someone tell me what's going on?
Code:
private void downloadchunks()
{
    for (int i = 1; i <= 20; i++)
    {
        WriteStream = new System.IO.FileStream(DownloadLocation, System.IO.FileMode.Create, System.IO.FileAccess.Write, System.IO.FileShare.ReadWrite);
        // request and receive a response of 1MB of a file
        rpstream = response.GetResponseStream();
        byte[] buffer;
        using (var SReader = new MemoryStream())
        {
            rpstream.CopyTo(SReader);
            buffer = SReader.ToArray();
            WriteStream.Seek(WritePos, SeekOrigin.Begin);
            WriteStream.Write(buffer, 0, buffer.Length);
            WriteStream.Close();
        }
        if (i == 1)
        {
            PlayVideo();
        }
    }
}

private void PlayVideo()
{
    MediaElement.Source = new Uri(DownloadLocation);
    MediaElement.Play();
}
I've figured it out. Just create a dummy file before you assign it to the MediaElement, and then start the download.
File.WriteAllBytes(location, new byte[filesize]);
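Putting that together as a sketch (filesize, DownloadLocation, buffer and WritePos are the placeholders from the question), the idea is to pre-allocate the full-size file once, point the MediaElement at it, and then write each downloaded chunk into the existing file at its offset rather than recreating the file each time:

// Pre-allocate the final file size so MediaElement sees the full length up front
File.WriteAllBytes(DownloadLocation, new byte[filesize]);
MediaElement.Source = new Uri(DownloadLocation);
MediaElement.Play();

// For each downloaded chunk, open the existing file and write at the chunk's offset
using (var writeStream = new FileStream(DownloadLocation, FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
{
    writeStream.Seek(WritePos, SeekOrigin.Begin);
    writeStream.Write(buffer, 0, buffer.Length);
}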

Accessing MediaRecorder bytes while recording in Xamarin Android

I would like to access the audio bytes that are being recorded by MediaRecorder during the recording process, so I can send them with UdpClient to a server application.
I'm able to do this with AudioRecord by doing the following (note the while(true) loop):
endRecording = false;
isRecording = true;
audioBuffer = new Byte[1024];
audioRecord = new AudioRecord (
    // Hardware source of recording.
    AudioSource.Mic,
    // Frequency
    11025,
    // Mono or stereo
    ChannelIn.Mono,
    // Audio encoding
    Android.Media.Encoding.Pcm16bit,
    // Length of the audio clip.
    audioBuffer.Length
);
audioRecord.StartRecording ();
while (true) {
    if (endRecording) {
        endRecording = false;
        break;
    }
    try {
        // Keep reading the buffer while there is audio input.
        int numBytes = await audioRecord.ReadAsync (audioBuffer, 0, audioBuffer.Length);
        // Send the audio data with the DataReceived event, where it gets sent over UdpClient in the Activity code
        byte[] encoded = audioBuffer; // TODO: encode audio data, for now just stick with regular PCM audio
        DataReceived(encoded);
    } catch (Exception ex) {
        Console.Out.WriteLine (ex.Message);
        break;
    }
}
audioRecord.Stop ();
audioRecord.Release ();
isRecording = false;
But I'm not sure how to get the bytes out of MediaRecorder so I can do something similar. Most of the examples I see only work with a file after the recording has finished, like the following example code from here and here.
I don't want to wait for a complete recording before it starts to send. I don't need MediaRecorder to record a file; just give me access to the bytes. But having the option to both write to a file and send the bytes would work well. Is there a way to do this, perhaps by using ParcelFileDescriptor or something else?
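One direction for the ParcelFileDescriptor idea mentioned in the question (an untested sketch, not a confirmed answer): MediaRecorder can write to the write end of a pipe, and the encoded bytes can then be read from the other end and forwarded over UdpClient. Note that what comes out is the encoder's container output (AMR here), not raw PCM, and container formats that need a seekable output, such as MP4, won't work over a pipe:

// Sketch: route MediaRecorder output into a pipe and read the bytes as they arrive
var pipe = ParcelFileDescriptor.CreatePipe();     // pipe[0] = read side, pipe[1] = write side
var recorder = new MediaRecorder();
recorder.SetAudioSource(AudioSource.Mic);
recorder.SetOutputFormat(OutputFormat.AmrNb);     // a streamable format
recorder.SetAudioEncoder(AudioEncoder.AmrNb);
recorder.SetOutputFile(pipe[1].FileDescriptor);   // write side of the pipe
recorder.Prepare();
recorder.Start();

// Read encoded bytes from the read side and forward them, reusing the question's DataReceived callback
var input = new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]);
var buffer = new byte[1024];
int read;
while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
{
    var chunk = new byte[read];
    Array.Copy(buffer, chunk, read);
    DataReceived(chunk);
}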
