Recording live audio streams on iOS - C#

//Declare string for application temp path and tack on the file extension
string fileName = string.Format ("Myfile{0}.wav", DateTime.Now.ToString ("yyyyMMddHHmmss"));
string audioFilePath = Path.Combine (Path.GetTempPath (), fileName);
Console.WriteLine("Audio File Path: " + audioFilePath);
url = NSUrl.FromFilename(audioFilePath);
//set up the NSObject Array of values that will be combined with the keys to make the NSDictionary
NSObject[] values = new NSObject[]
{
    NSNumber.FromFloat (44100.0f), //Sample Rate
    NSNumber.FromInt32 ((int)AudioToolbox.AudioFormatType.LinearPCM), //AVFormat
    NSNumber.FromInt32 (2), //Channels
    NSNumber.FromInt32 (16), //PCMBitDepth
    NSNumber.FromBoolean (false), //IsBigEndianKey
    NSNumber.FromBoolean (false) //IsFloatKey
};
//Set up the NSObject Array of keys that will be combined with the values to make the NSDictionary
NSObject[] keys = new NSObject[]
{
    AVAudioSettings.AVSampleRateKey,
    AVAudioSettings.AVFormatIDKey,
    AVAudioSettings.AVNumberOfChannelsKey,
    AVAudioSettings.AVLinearPCMBitDepthKey,
    AVAudioSettings.AVLinearPCMIsBigEndianKey,
    AVAudioSettings.AVLinearPCMIsFloatKey
};
//Set Settings with the Values and Keys to create the NSDictionary
settings = NSDictionary.FromObjectsAndKeys (values, keys);
//Set recorder parameters
recorder = AVAudioRecorder.Create(url, new AudioSettings(settings), out error);
//Set Recorder to Prepare To Record
recorder.PrepareToRecord();
This code works well, but how can I record from the microphone directly to a stream?
I couldn't find any information on the Internet; I hope you can help me.

You are looking for buffered access to the audio stream (for recording or playback). iOS provides it via Audio Queue Services (AVAudioRecorder is too high level). As audio buffers are filled, iOS calls your callback with a filled buffer from the queue; you do something with it (save it to disk, write it to a C#-based Stream, send it to a playback audio queue [the speakers], etc.) and, normally, place it back into the queue for reuse.
Something like this starts recording to a queue of audio buffers:
var recordFormat = new AudioStreamBasicDescription() {
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
    FramesPerPacket = 1,
    ChannelsPerFrame = 1,
    BitsPerChannel = 16,
    BytesPerPacket = 2,
    BytesPerFrame = 2,
    Reserved = 0
};
recorder = new InputAudioQueue (recordFormat);
for (int count = 0; count < BufferCount; count++) {
    IntPtr bufferPointer;
    recorder.AllocateBuffer(AudioBufferSize, out bufferPointer);
    recorder.EnqueueBuffer(bufferPointer, AudioBufferSize, null);
}
recorder.InputCompleted += HandleInputCompleted;
recorder.Start ();
So, assuming an AudioBufferSize of 8k and a BufferCount of 3 in this example, once the first of the three buffers is filled, our handler HandleInputCompleted is called (and since there are still 2 buffers in the queue, recording continues into them).
Our InputCompleted handler:
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // We received a new buffer of audio, do something with it...
    // Some unsafe code will be required to rip the buffer...
    // Place the buffer back into the queue so iOS knows you are done with it
    recorder.EnqueueBuffer(e.IntPtrBuffer, AudioBufferSize, null);
    // At some point you need to call `recorder.Stop();` ;-)
}
(I ripped our code out of the handler, as it feeds a custom audio-to-text learning neural network; we use really small buffers in a very large queue to reduce feedback latency, and load that audio data into single TCP/UDP packets for cloud processing. Think Siri ;-)
In this handler you have access to a pointer to the buffer that was just filled via InputCompletedEventArgs.IntPtrBuffer. Using that pointer you can peek at each byte in the buffer and poke them into your C#-based Stream, if that is your goal.
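For example, here is a minimal sketch of that copy step; the MemoryStream destination named audioStream is illustrative and not part of the original code (a real handler might write to a file or socket instead):
MemoryStream audioStream = new MemoryStream(); // hypothetical managed destination

private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // Copy AudioBufferSize bytes out of the unmanaged queue buffer...
    byte[] managed = new byte[AudioBufferSize];
    System.Runtime.InteropServices.Marshal.Copy (e.IntPtrBuffer, managed, 0, AudioBufferSize);
    // ...and append them to the managed stream
    audioStream.Write (managed, 0, managed.Length);
    // Hand the buffer back to the queue so iOS can refill it
    recorder.EnqueueBuffer (e.IntPtrBuffer, AudioBufferSize, null);
}
Marshal.Copy avoids the unsafe block mentioned above, at the cost of one managed allocation per buffer.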
Apple has a great tech article concerning Audio Queue: https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AboutAudioQueues/AboutAudioQueues.html

Related

How to record an input device with more than 2 channels to mp3 format

I am building recording software that records all devices connected to the PC into MP3 format.
Here is my code:
IWaveIn _captureInstance = inputDevice.DataFlow == DataFlow.Render ?
    new WasapiLoopbackCapture(inputDevice) : new WasapiCapture(inputDevice);
var waveFormatToUse = _captureInstance.WaveFormat;
var sampleRateToUse = waveFormatToUse.SampleRate;
var channelsToUse = waveFormatToUse.Channels;
if (sampleRateToUse > 48000) // LameMP3FileWriter doesn't support a rate of more than 48000 Hz
{
    sampleRateToUse = 48000;
}
else if (sampleRateToUse < 8000) // LameMP3FileWriter doesn't support a rate of less than 8000 Hz
{
    sampleRateToUse = 8000;
}
if (channelsToUse > 2) // LameMP3FileWriter doesn't support more than 2 channels
{
    channelsToUse = 2;
}
waveFormatToUse = WaveFormat.CreateCustomFormat(_captureInstance.WaveFormat.Encoding,
    sampleRateToUse,
    channelsToUse,
    _captureInstance.WaveFormat.AverageBytesPerSecond,
    _captureInstance.WaveFormat.BlockAlign,
    _captureInstance.WaveFormat.BitsPerSample);
_mp3FileWriter = new LameMP3FileWriter(_currentStream, waveFormatToUse, 32);
This code works properly, except when a connected device (including virtual ones, such as SteelSeries Sonar) has more than 2 channels. In that case the recordings contain nothing but noise.
How can I solve this issue? I'm not required to use LameMP3FileWriter; I just need MP3 or any other format with good compression. Ideally the processing would happen without saving intermediate files to disk (everything in memory), producing only the final audio file.
My recording code:
// When the capturer receives audio, write the buffer into the writer's file
_captureInstance.DataAvailable += (s, a) =>
{
    lock (_writerLock)
    {
        // Write buffer into the file of the writer instance
        _mp3FileWriter?.Write(a.Buffer, 0, a.BytesRecorded);
    }
};
// When the capturer stops, dispose the capturer and writer instances
_captureInstance.RecordingStopped += (s, a) =>
{
    lock (_writerLock)
    {
        _mp3FileWriter?.Dispose();
    }
    _captureInstance?.Dispose();
};
// Start audio recording
_captureInstance.StartRecording();
If LAME doesn't support more than 2 channels, you can't use this encoder for your purpose. Have you tried the Fraunhofer surround MP3 encoder?
Link: https://download.cnet.com/mp3-surround-encoder/3000-2140_4-165541.html
Also, here's a nice article discussing how to convert between most audio formats (with C# code samples): https://www.codeproject.com/articles/501521/how-to-convert-between-most-audio-formats-in-net
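If stereo output is acceptable, another option is to downmix the capture to 2 channels before it reaches LAME. Here is a rough sketch, assuming the WASAPI capture delivers interleaved 32-bit IEEE-float samples (the usual shared-mode format; check _captureInstance.WaveFormat.Encoding first). DownmixToStereo is a hypothetical helper, not tested against your setup:
// Average all source channels into both output channels (interleaved 32-bit float).
static byte[] DownmixToStereo(byte[] buffer, int bytesRecorded, int channels)
{
    const int bytesPerSample = 4; // 32-bit IEEE float
    int frames = bytesRecorded / (bytesPerSample * channels);
    byte[] output = new byte[frames * bytesPerSample * 2];
    for (int frame = 0; frame < frames; frame++)
    {
        float sum = 0f;
        for (int ch = 0; ch < channels; ch++)
            sum += BitConverter.ToSingle(buffer, (frame * channels + ch) * bytesPerSample);
        byte[] mixed = BitConverter.GetBytes(sum / channels); // average to avoid clipping
        // Write the same mixed sample to the left and right slots
        Buffer.BlockCopy(mixed, 0, output, (frame * 2) * bytesPerSample, bytesPerSample);
        Buffer.BlockCopy(mixed, 0, output, (frame * 2 + 1) * bytesPerSample, bytesPerSample);
    }
    return output;
}
In the DataAvailable handler you would then write DownmixToStereo(a.Buffer, a.BytesRecorded, sourceChannels) to a writer created with a 2-channel format such as WaveFormat.CreateIeeeFloatWaveFormat(sampleRateToUse, 2); recent NAudio.Lame versions accept IEEE-float input, but verify that against your version.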

What is causing my image to corrupt during transfer via Bluetooth RFCOMM using FTP?

I'm developing two separate applications for data transfer over Bluetooth RFCOMM using the Obex File Transfer protocol. On one side, a Windows C# Console Application running on a PC listens for incoming bluetooth connections and sends an image whenever a client makes a request. On the other side, an Android application running on a mobile device scans the nearby bluetooth devices, looks for the server and receives the image.
In most cases, everything works fine and the image is transmitted without problems. Sometimes - not very often, I still can't figure out how to reproduce the error - the image is corrupted during the transmission, as some of the received bytes from the Android app do not match the original buffer (I compute the CRC of the received buffer and compare it with the original one to check if the image has been sent successfully).
Here's an example: [original image] vs. [received image with glitches].
This kind of "glitchy" image is just an example, every time something goes wrong the received image has a different 'glitch effect'.
A few things I tried to solve the problem:
- Changing the UUID: neither the OPP UUID nor a custom UUID seems to work, as the exact same problem arises.
- Freeing storage: my smartphone (a Xiaomi Redmi Note 8T), which runs the client app, had almost zero free internal storage, so I got desperate and freed some memory to see if that was somehow causing the error (it doesn't make much sense, but it's worth mentioning). At first I thought that solved the problem, but then the error reappeared just as before.
- Using an ACK system to control each sub-array of data sent from the server to the client: the PC sends the first sub-array, waits until the smartphone sends an ACK acknowledging its reception, and only then sends the next sub-array, and so on until the end of the buffer. Needless to say, this didn't work either (same error, corrupted data).
- Checking whether other devices trying to connect to my smartphone could cause the problem, but that wasn't the case.
CODE
Server side
Here's my implementation of the listener in the C# Console app running on Windows 10. I took this Server sample as a reference.
// Initialize the provider for the hosted RFCOMM service
_provider = await RfcommServiceProvider.CreateAsync(
    RfcommServiceId.ObexFileTransfer); // Use Obex FTP protocol
    // UUID is 00001106-0000-1000-8000-00805F9B34FB
// Create a listener for this service and start listening
StreamSocketListener listener = new StreamSocketListener();
listener.ConnectionReceived += OnConnectionReceivedAsync;
await listener.BindServiceNameAsync(
    _provider.ServiceId.AsString(),
    SocketProtectionLevel.BluetoothEncryptionAllowNullAuthentication);
// Set the SDP attributes and start advertising
InitializeServiceSdpAttributes(_provider);
_provider.StartAdvertising(listener);
InitializeServiceSdpAttributes function:
const uint SERVICE_VERSION_ATTRIBUTE_ID = 0x0300;
const byte SERVICE_VERSION_ATTRIBUTE_TYPE = 0x0A; // UINT32
const uint SERVICE_VERSION = 200;
void InitializeServiceSdpAttributes(RfcommServiceProvider provider)
{
    Windows.Storage.Streams.DataWriter writer = new Windows.Storage.Streams.DataWriter();
    // First write the attribute type
    writer.WriteByte(SERVICE_VERSION_ATTRIBUTE_TYPE);
    // Then write the data
    writer.WriteUInt32(SERVICE_VERSION);
    IBuffer data = writer.DetachBuffer();
    provider.SdpRawAttributes.Add(SERVICE_VERSION_ATTRIBUTE_ID, data);
}
Whenever a new connection attempt is detected, the OnConnectionReceivedAsync function stops the advertisement, disposes of the listener and creates a new StreamSocket object. At this point I set up the input and output streams, convert the image to an array of bytes and send the buffer length to the remote device through the socket. Once the Android app has received the length of the buffer, it sends an ACK, which means that it is ready to receive the actual data.
// Create input and output stream
DataWriter writer = new DataWriter(_socket.OutputStream);
// Convert image to array of bytes
byte[] imageByteArray;
using (var inputStream = await file.OpenSequentialReadAsync())
{
    var readStream = inputStream.AsStreamForRead();
    imageByteArray = new byte[readStream.Length];
    await readStream.ReadAsync(imageByteArray, 0, imageByteArray.Length);
}
// Write length of data
writer.WriteBytes(intToByteArray(imageByteArray.Length));
await writer.StoreAsync();
// Wait for ACK ...
Finally, I send the image:
// Write bytes and send
writer.WriteBytes(imageByteArray);
await writer.StoreAsync();
// Wait for ACK ...
As soon as the image is sent, the app waits for a final ACK from the remote device confirming that all the data has been received, and then closes the connection.
Client Side
First of all, the Android app creates a BluetoothSocket object using the same UUID specified by the server app:
// Scan devices and find the remote Server by specifying the target MAC address
// ...
// targetDevice is the Server device
BluetoothSocket socket = targetDevice.createInsecureRfcommSocketToServiceRecord(
    UUID.fromString("00001106-0000-1000-8000-00805F9B34FB") // FTP
);
// Connect to the server
socket.connect();
Finally, it reads the incoming data from the socket's InputStream. First it reads the length of the incoming buffer and sends an ACK to confirm that it's ready to receive the image. Then it waits for each sub-array until the whole buffer is complete. At that point, it sends a final ACK and closes the connection.
// Get input stream
InputStream inputStream = socket.getInputStream();
// Buffer that contains the incoming data
byte[] buffer = null;
// numOfBytes is the expected length of the buffer
int numOfBytes = 0;
// Index of the sub array within the complete buffer
int index = 0;
// flag is true while the receiver is computing the number of bytes it has to receive
// flag is false while the receiver is actually reading the image sub arrays from the stream
boolean flag = true;
while(true){
    // Estimate number of incoming bytes
    if(flag){
        try{
            // inputStream.available() estimates the number of bytes that can be read
            byte[] temp = new byte[inputStream.available()];
            // Read the incoming data and store it in byte array temp (returns > 0 if successful)
            if(inputStream.read(temp) > 0){
                // Get length of expected data as an array and parse it to an Integer
                String lengthString = new String(temp, StandardCharsets.UTF_8);
                numOfBytes = Integer.parseInt(lengthString);
                // Create buffer
                buffer = new byte[numOfBytes];
                // Set the flag to false (turn on read-image mode)
                flag = false;
                // Send ACK
            }
        }
        catch (IOException e){
            // ...
        }
    }
    // Read image sub arrays
    else {
        try{
            byte[] data = new byte[inputStream.available()];
            // Read sub array and store it
            int numbers = inputStream.read(data);
            if(numbers <= 0 && index < numOfBytes)
                continue;
            // Copy sub array into the full image byte array
            System.arraycopy(data, 0, buffer, index, numbers);
            // Update index
            index = index + numbers;
            // Reached the end of the buffer (received all the data)
            if(index == numOfBytes){
                // Send ACK (transfer success)
                // ...
                // Decode buffer and create image from byte array
                Bitmap bmp = BitmapFactory.decodeByteArray(buffer, 0, numOfBytes);
                // Store output image
                outputImage = bmp;
                // Dismiss the bluetooth manager (close socket, exit waiting loop...)
                dismiss();
                // Return the image
                return bmp;
            }
        }
        catch (IOException e){
            // ...
        }
    }
}

C# BroadCast Mp3 File To ShoutCast Server

I'm trying to make a radio-style auto DJ that plays a list of MP3 files in series, like what happens on the radio.
I tried a lot of workarounds; finally I thought of sending the MP3 files to a SHOUTcast server and playing the output of that server. My problem is that I don't know how to do that.
I have tried the radio/broadcast features of BASS.NET, and this is my code:
private int _recHandle;
private BroadCast _broadCast;
EncoderLAME l;
IStreamingServer server = null;
// Init Bass
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT,IntPtr.Zero);
// create the stream
int _stream = Bass.BASS_StreamCreateFile("1.mp3", 0, 0,
BASSFlag.BASS_SAMPLE_FLOAT | BASSFlag.BASS_STREAM_PRESCAN);
l= new EncoderLAME(_stream);
l.InputFile = null; //STDIN
l.OutputFile = null;
l.Start(null, IntPtr.Zero, false);
// decode the stream (if not using a decoding channel, simply call "Bass.BASS_ChannelPlay" here)
byte[] encBuffer = new byte[65536]; // our dummy encoder buffer
while (Bass.BASS_ChannelIsActive(_stream) == BASSActive.BASS_ACTIVE_PLAYING)
{
    // getting sample data will automatically feed the encoder
    int len = Bass.BASS_ChannelGetData(_stream, encBuffer, encBuffer.Length);
}
//l.Stop(); // finish
//Bass.BASS_StreamFree(_stream);
//Server
SHOUTcast shoutcast = new SHOUTcast(l);
shoutcast.ServerAddress = "50.22.219.37";
shoutcast.ServerPort = 12904;
shoutcast.Password = "01008209907";
shoutcast.PublicFlag = true;
shoutcast.Genre = "Hörspiel";
shoutcast.StationName = "Kravis Server";
shoutcast.Url = "";
shoutcast.Aim = "";
shoutcast.Icq = "";
shoutcast.Irc = "";
server = shoutcast;
server.SongTitle = "BASS.NET";
// disconnect, if connected
if (_broadCast != null && _broadCast.IsConnected)
{
    _broadCast.Disconnect();
}
_broadCast = null;
GC.Collect();
_broadCast = new BroadCast(server);
_broadCast.Notification += OnBroadCast_Notification;
_broadCast.AutoReconnect = true;
_broadCast.ReconnectTimeout = 5;
_broadCast.AutoConnect();
But the file never gets streamed to the server, even though _broadCast is connected.
Any solution, in code or otherwise, would be appreciated.
I haven't used BASS in many years, so I can't give you specific advice on the code you have there. But, I wanted to give you the gist of the process of what you need to do... it might help you get started.
As your file is already MP3, it is possible to send it directly to the server and hear it on the receiving end. However, there are a few problems with that. The first is rate control: if you simply transmit the file data, you'll send, say, 5 minutes of audio in perhaps a 10-second period. This will eventually cause failures, as the clients aren't going to buffer much data, and they will disconnect. Another problem is that MP3 files often have extra data in them in the form of ID3 tags; some players will ignore this, others won't. Finally, some of your files might have different sample rates than others, so even if you rate-limit your sending, the players will break when they hit a file with a different sample rate.
What needs to happen is the generation of a fresh stream. The pipeline looks something like this:
[Source File] -> [Codec] -> [Raw PCM Audio] -> [Codec] -> [MP3 Stream] -> [SHOUTcast Server] -> [Clients]
Additionally, that raw PCM audio step needs to run at a realtime rate. While your computer can certainly decode and encode faster than realtime, the pipeline needs to run at realtime so that the players can listen in realtime.
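To illustrate the rate-control idea in isolation (a sketch only: ThrottledCopy and its source/destination streams are hypothetical names standing in for the decode and send stages of the pipeline above):
// Copy raw PCM from source to destination no faster than realtime.
// bytesPerSecond = sampleRate * channels * bytesPerSample,
// e.g. 44100 * 2 * 2 = 176400 for 44.1 kHz stereo 16-bit PCM.
static void ThrottledCopy(Stream source, Stream destination, int bytesPerSecond)
{
    byte[] chunk = new byte[bytesPerSecond / 10]; // roughly 100 ms per chunk
    var clock = System.Diagnostics.Stopwatch.StartNew();
    long bytesSent = 0;
    int read;
    while ((read = source.Read(chunk, 0, chunk.Length)) > 0)
    {
        destination.Write(chunk, 0, read);
        bytesSent += read;
        // If we are ahead of the wall clock, sleep off the difference
        double aheadMs = bytesSent * 1000.0 / bytesPerSecond - clock.ElapsedMilliseconds;
        if (aheadMs > 0)
            System.Threading.Thread.Sleep((int)aheadMs);
    }
}
However you implement it, the key point is that the bytes reach the encoder and the server at the rate the audio actually plays.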

Play real-time sound buffer using C# Media.SoundPlayer

I have developed a system in which a C# program receives sound buffers (byte arrays) from another subsystem and is supposed to play the incoming buffers continuously. I searched the web and decided to use SoundPlayer. It works perfectly in offline mode (playing the buffers after receiving them all). However, I have a problem in real-time mode.
In real-time mode the program first waits to receive and accumulate a number of buffer arrays (for example 200). Then it adds a WAV header and plays the result. However, after that, for each next 200 arrays it repeatedly plays the first buffer.
I have read the following pages:
Play wav/mp3 from memory
https://social.msdn.microsoft.com/Forums/vstudio/en-US/8ac2847c-3e2f-458c-b8ff-533728e267e0/c-problems-with-mediasoundplayer?forum=netfxbcl
and according to their suggestions I implemented my code as follows:
public class MediaPlayer
{
    System.Media.SoundPlayer soundPlayer;

    public MediaPlayer(byte[] buffer)
    {
        byte[] headerPlusBuffer = AddWaveHeader(buffer, false, 1, 16, 8000, buffer.Length / 2); //add wav header to the **first** buffer
        MemoryStream memoryStream = new MemoryStream(headerPlusBuffer, true);
        soundPlayer = new System.Media.SoundPlayer(memoryStream);
    }

    public void Play()
    {
        soundPlayer.PlaySync();
    }

    public void Play(byte[] buffer)
    {
        soundPlayer.Stream.Seek(0, SeekOrigin.Begin);
        soundPlayer.Stream.Write(buffer, 0, buffer.Length);
        soundPlayer.PlaySync();
    }
}
I use it like this:
MediaPlayer _mediaPlayer;
if (firstBuffer)
{
    _mediaPlayer = new MediaPlayer(dataRaw);
    _mediaPlayer.Play();
}
else
{
    _mediaPlayer.Play(dataRaw);
}
Each time _mediaPlayer.Play(dataRaw) is called, the first buffer is played again and again, even though dataRaw has been updated.
I appreciate your help.

WasapiCapture NAudio

We are using the NAudio stack (written in C#) and trying to capture audio in exclusive mode with PCM, 8 kHz, and 16 bits per sample.
In the following function:
private void InitializeCaptureDevice()
{
    if (initialized)
        return;
    long requestedDuration = REFTIMES_PER_MILLISEC * 100;
    if (!audioClient.IsFormatSupported(AudioClientShareMode.Shared, WaveFormat) &&
        (!audioClient.IsFormatSupported(AudioClientShareMode.Exclusive, WaveFormat)))
    {
        throw new ArgumentException("Unsupported Wave Format");
    }
    var streamFlags = GetAudioClientStreamFlags();
    audioClient.Initialize(AudioClientShareMode.Shared,
        streamFlags,
        requestedDuration,
        requestedDuration,
        this.waveFormat,
        Guid.Empty);
    int bufferFrameCount = audioClient.BufferSize;
    this.bytesPerFrame = this.waveFormat.Channels * this.waveFormat.BitsPerSample / 8;
    this.recordBuffer = new byte[bufferFrameCount * bytesPerFrame];
    Debug.WriteLine(string.Format("record buffer size = {0}", this.recordBuffer.Length));
    initialized = true;
}
We configure the WaveFormat to (8000 Hz, 1 channel) before calling this function, and request a period of 100 ms.
We expected the system to allocate a 1600-byte buffer and an interval of 100 ms, as requested.
But we noticed the following:
1. The system set audioClient.BufferSize to 4800 and allocated "this.recordBuffer" as an array of 9600 bytes (which means a buffer for 600 ms, not 100 ms).
2. The thread goes to sleep and then gets 2400 samples (4800 bytes), not the expected frames of 1600 bytes.
Any idea what is going on there?
You say you are capturing audio in exclusive mode, but in the example code you call the Initialize method with AudioClientShareMode.Shared. It strikes me as very unlikely that shared mode will let you work at 8 kHz. Unlike the wave... APIs, WASAPI does no resampling for you on playback or capture, so the soundcard itself must be operating at the sample rate you specify.
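As a sketch of that fix (assuming this sits inside the same NAudio-style capture class, so audioClient, waveFormat, requestedDuration and GetAudioClientStreamFlags() are already in scope; the fallback logic is illustrative, not the library's own):
// Validate the format against the share mode you actually intend to use,
// then pass that same mode to Initialize.
var shareMode = AudioClientShareMode.Exclusive;
if (!audioClient.IsFormatSupported(shareMode, waveFormat))
{
    // Exclusive 8 kHz not supported by the device: either fall back to
    // shared mode at the engine's mix format, or resample the capture yourself.
    shareMode = AudioClientShareMode.Shared;
    if (!audioClient.IsFormatSupported(shareMode, waveFormat))
        throw new ArgumentException("Wave format not supported in either share mode");
}
audioClient.Initialize(shareMode,
    GetAudioClientStreamFlags(),
    requestedDuration,
    requestedDuration,
    waveFormat,
    Guid.Empty);
Note that in shared mode the engine dictates the actual rate and buffer sizes; for instance, if the shared-mode engine runs at 48 kHz, a 100 ms buffer is exactly 4800 frames, which matches the numbers observed in the question.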
