I would like to access the audio bytes that are being recorded by MediaRecorder during the recording process, so I can send them with UdpClient to a server application.
I'm able to do this with AudioRecord by doing the following (note the while (true) loop):
endRecording = false;
isRecording = true;
audioBuffer = new Byte[1024];
audioRecord = new AudioRecord (
    // Hardware source of recording.
    AudioSource.Mic,
    // Frequency
    11025,
    // Mono or stereo
    ChannelIn.Mono,
    // Audio encoding
    Android.Media.Encoding.Pcm16bit,
    // Length of the audio clip.
    audioBuffer.Length
);
audioRecord.StartRecording ();
while (true) {
    if (endRecording) {
        endRecording = false;
        break;
    }
    try {
        // Keep reading the buffer while there is audio input.
        int numBytes = await audioRecord.ReadAsync (audioBuffer, 0, audioBuffer.Length);
        // Raise the DataReceived event; the audio data gets sent over UdpClient in the Activity code.
        byte[] encoded = audioBuffer; // TODO: encode audio data; for now just stick with regular PCM audio
        DataReceived (encoded);
    } catch (Exception ex) {
        Console.Out.WriteLine (ex.Message);
        break;
    }
}
audioRecord.Stop ();
audioRecord.Release ();
isRecording = false;
But I'm not sure how to get the bytes out of MediaRecorder so I can do something similar. Most of the examples I see only work with a file after the recording has finished, like the example code from here and here.
I don't want to wait for a complete recording before it starts sending. I don't need MediaRecorder to record to a file; I just need access to the bytes. (Having the option to both write to a file and send the bytes would also work.) Is there a way to do this, perhaps by using ParcelFileDescriptor or something else?
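For reference, here's an untested sketch of what I imagine the ParcelFileDescriptor approach might look like in Xamarin.Android: create a pipe, give MediaRecorder the write end as its output file, and read the bytes back from the read end. SendOverUdp is a placeholder for the UdpClient code, and raw AMR is used because container formats that need seeking (MP4/3GPP) reportedly don't survive being written to a pipe:

using Android.Media;          // MediaRecorder, OutputFormat, AudioEncoder
using Android.OS;             // ParcelFileDescriptor
using System.Threading.Tasks;

// Untested sketch: stream MediaRecorder output through a pipe instead of a file.
ParcelFileDescriptor[] pipe = ParcelFileDescriptor.CreatePipe();
ParcelFileDescriptor readSide = pipe[0];
ParcelFileDescriptor writeSide = pipe[1];

var recorder = new MediaRecorder();
recorder.SetAudioSource(AudioSource.Mic);
recorder.SetOutputFormat(OutputFormat.AmrNb); // a streamable (non-seeking) format
recorder.SetAudioEncoder(AudioEncoder.AmrNb);
recorder.SetOutputFile(writeSide.FileDescriptor); // write end of the pipe instead of a file path
recorder.Prepare();
recorder.Start();

// Read the recorded bytes from the read end on a background thread.
await Task.Run(() =>
{
    using (var input = new ParcelFileDescriptor.AutoCloseInputStream(readSide))
    {
        var buffer = new byte[1024];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            SendOverUdp(buffer, read); // hypothetical: hand the bytes to the UdpClient code
        }
    }
});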
I'm developing two separate applications for data transfer over Bluetooth RFCOMM using the OBEX File Transfer protocol. On one side, a Windows C# console application running on a PC listens for incoming Bluetooth connections and sends an image whenever a client makes a request. On the other side, an Android application running on a mobile device scans the nearby Bluetooth devices, looks for the server and receives the image.
In most cases everything works fine and the image is transmitted without problems. Sometimes (not very often; I still can't figure out how to reproduce the error) the image is corrupted during transmission: some of the bytes received by the Android app do not match the original buffer. I compute the CRC of the received buffer and compare it with the original one to check whether the image was sent successfully.
Here's an example (the original and received images are omitted here; the received copy shows glitch-like corruption). This kind of "glitchy" image is just an example; every time something goes wrong, the received image has a different glitch effect.
A few things I tried to solve the problem:
Changing the UUID: neither the OOP UUID nor a custom UUID seems to work; the exact same problem arises.
My smartphone (Xiaomi Redmi Note 8T), from which I am running the client app, had almost zero free internal storage, so I got desperate and freed some memory to see if that was somehow causing the error (it doesn't make much sense, but it's worth mentioning). At first it seemed to work and I thought that had somehow solved the problem, but then the error reappeared just like before.
Using an ACK system to control each sub array of data sent from the server to the client: the PC sends the first sub array, waits until the smartphone sends an ACK acknowledging its reception, and ONLY then sends the next sub array, and so on until the end of the buffer. Needless to say, this didn't work either (again, same error and corrupted data).
I also checked whether other devices trying to connect to my smartphone could be causing the problem, but that wasn't the case.
CODE
Server side
Here's my implementation of the listener in the C# Console app running on Windows 10. I took this Server sample as a reference.
// Initialize the provider for the hosted RFCOMM service
_provider = await RfcommServiceProvider.CreateAsync(
    RfcommServiceId.ObexFileTransfer); // Use the OBEX File Transfer protocol
    // UUID is 00001106-0000-1000-8000-00805F9B34FB

// Create a listener for this service and start listening
StreamSocketListener listener = new StreamSocketListener();
listener.ConnectionReceived += OnConnectionReceivedAsync;
await listener.BindServiceNameAsync(
    _provider.ServiceId.AsString(),
    SocketProtectionLevel.BluetoothEncryptionAllowNullAuthentication);

// Set the SDP attributes and start advertising
InitializeServiceSdpAttributes(_provider);
_provider.StartAdvertising(listener);
InitializeServiceSdpAttributes function:
const uint SERVICE_VERSION_ATTRIBUTE_ID = 0x0300;
const byte SERVICE_VERSION_ATTRIBUTE_TYPE = 0x0A; // UINT32
const uint SERVICE_VERSION = 200;

void InitializeServiceSdpAttributes(RfcommServiceProvider provider)
{
    Windows.Storage.Streams.DataWriter writer = new Windows.Storage.Streams.DataWriter();
    // First write the attribute type
    writer.WriteByte(SERVICE_VERSION_ATTRIBUTE_TYPE);
    // Then write the data
    writer.WriteUInt32(SERVICE_VERSION);
    IBuffer data = writer.DetachBuffer();
    provider.SdpRawAttributes.Add(SERVICE_VERSION_ATTRIBUTE_ID, data);
}
Whenever a new connection attempt is detected, the OnConnectionReceivedAsync function stops the advertisement, disposes of the listener and creates a new StreamSocket object. At this point I set up the input and output streams, convert the image to an array of bytes and send the buffer length to the remote device through the socket. Once the Android app has received the length of the buffer, it sends an ACK, which means it is ready to receive the actual data.
// Create the output stream writer
DataWriter writer = new DataWriter(_socket.OutputStream);

// Convert image to array of bytes
byte[] imageByteArray;
using (var inputStream = await file.OpenSequentialReadAsync())
{
    var readStream = inputStream.AsStreamForRead();
    imageByteArray = new byte[readStream.Length];
    await readStream.ReadAsync(imageByteArray, 0, imageByteArray.Length);
}

// Write length of data
writer.WriteBytes(intToByteArray(imageByteArray.Length));
await writer.StoreAsync();
// Wait for ACK ...
Finally, I send the image:
// Write bytes and send
writer.WriteBytes(imageByteArray);
await writer.StoreAsync();
// Wait for ACK ...
Once the image is sent, the app waits for a final ACK from the remote device confirming that all the data has been received, and then closes the connection.
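As a side note, one variant I could try is explicit fixed-size framing instead of intToByteArray; a sketch, not my current code:

// Sketch only: a fixed 4-byte big-endian length prefix means the receiver
// can read exactly 4 bytes for the length, then exactly that many bytes for
// the image, instead of guessing message boundaries with available().
DataWriter writer = new DataWriter(_socket.OutputStream);
writer.ByteOrder = ByteOrder.BigEndian; // DataWriter's default byte order
writer.WriteUInt32((uint)imageByteArray.Length);
writer.WriteBytes(imageByteArray);
await writer.StoreAsync();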
Client Side
First of all, the Android app creates a BluetoothSocket object using the same UUID specified by the Server app:
// Scan devices and find the remote Server by specifying the target MAC address
// ...
// targetDevice is the Server device
BluetoothSocket socket = targetDevice.createInsecureRfcommSocketToServiceRecord(
    UUID.fromString("00001106-0000-1000-8000-00805F9B34FB") // FTP
);
// Connect to the server
socket.connect();
Finally, it reads the incoming data from the socket InputStream. First it reads the length of the incoming buffer and sends an ACK to confirm that it's ready to receive the image. Then it waits for each sub array until the whole buffer is complete. At that point it sends a final ACK and closes the connection.
// Get input stream
InputStream inputStream = socket.getInputStream();
// Buffer that contains the incoming data
byte[] buffer = null;
// numOfBytes is the expected length of the buffer
int numOfBytes = 0;
// Index of the sub array within the complete buffer
int index = 0;
// flag is true while the receiver is reading the number of bytes it has to receive,
// false while it is actually reading the image sub arrays from the stream
boolean flag = true;

while (true) {
    // Read the number of incoming bytes
    if (flag) {
        try {
            // inputStream.available() estimates the number of bytes that can be read
            byte[] temp = new byte[inputStream.available()];
            // Read the incoming data and store it in byte array temp (returns > 0 if successful)
            if (inputStream.read(temp) > 0) {
                // Get the length of the expected data as a string and parse it to an Integer
                String lengthString = new String(temp, StandardCharsets.UTF_8);
                numOfBytes = Integer.parseInt(lengthString);
                // Create buffer
                buffer = new byte[numOfBytes];
                // Set the flag to false (turn on read-image mode)
                flag = false;
                // Send ACK
            }
        } catch (IOException e) {
            // ...
        }
    }
    // Read image sub arrays
    else {
        try {
            byte[] data = new byte[inputStream.available()];
            // Read a sub array and store it
            int numbers = inputStream.read(data);
            if (numbers <= 0 && index < numOfBytes)
                continue;
            // Copy the sub array into the full image byte array
            System.arraycopy(data, 0, buffer, index, numbers);
            // Update index
            index = index + numbers;
            // Reached the end of the buffer (received all the data)
            if (index == numOfBytes) {
                // Send ACK (transfer success)
                // ...
                // Decode the buffer and create an image from the byte array
                Bitmap bmp = BitmapFactory.decodeByteArray(buffer, 0, numOfBytes);
                // Store the output image
                outputImage = bmp;
                // Dismiss the bluetooth manager (close socket, exit waiting loop...)
                dismiss();
                // Return the image
                return bmp;
            }
        } catch (IOException e) {
            // ...
        }
    }
}
I need to display raw UDP video data coming from my drone.
Unfortunately the drone does not use one of the existing standards for streaming video, such as RTSP. Instead, the raw video packets are sent via UDP and need to be reassembled and decoded before they can be viewed.
Since the size of a single video frame is larger than the size of a single UDP packet, the Tello breaks each frame apart and sends the packets with a 2-byte header that indicates how they need to be reassembled.
The video data itself is just H.264-encoded YUV420p. Using this information, it is possible to decode the video using standard tools such as ffmpeg, once you remove the header bytes.
Any idea how I can do this with C#?
Position   Usage
0          Sequence number
1          Sub-sequence number
2-n        Video data
UdpClient receiver = new UdpClient(11111);
//receiver.Client.ReceiveBufferSize = 1024;
IPEndPoint hostEP = new IPEndPoint(IPAddress.Parse("192.168.10.1"), 0);
receiver.Connect(hostEP);
IPEndPoint ep = new IPEndPoint(IPAddress.Parse("0.0.0.0"), 0);
int i = 0;
string fileName = "car_pic";
while (true)
{
    if (receiver.Available > 0)
    {
        //Debug.Write("Packet Received");
        byte[] data = receiver.Receive(ref ep);
        MemoryStream stream = new MemoryStream(data);
        Device.BeginInvokeOnMainThread(() =>
        {
            try
            {
                //DronController.displayImage.Source = ImageSource.FromStream(() => stream);
                //DronController.displayPath.Text = data.ToString();
                //stream.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("Exception!!!");
                Debug.WriteLine(ex);
            }
        });
    }
}
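Based on the header layout above, I imagine the reassembly would look something like this untested sketch: strip the 2-byte header from each packet and pipe the raw H.264 payload into ffmpeg's stdin. The ffmpeg invocation is an assumption (it needs an SDL-enabled build; alternatively the output can be changed to write a file, e.g. "-c copy out.h264"):

using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

// Launch ffmpeg reading raw H.264 from stdin and playing it in an SDL window.
var ffmpeg = new Process
{
    StartInfo = new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments = "-f h264 -i pipe:0 -f sdl \"Tello\"",
        RedirectStandardInput = true,
        UseShellExecute = false
    }
};
ffmpeg.Start();

var receiver = new UdpClient(11111);
var ep = new IPEndPoint(IPAddress.Any, 0);
while (true)
{
    byte[] packet = receiver.Receive(ref ep);
    if (packet.Length <= 2)
        continue; // nothing after the 2-byte header
    // packet[0] is the sequence number, packet[1] the sub-sequence number (see table above).
    // Strip both and feed the remaining H.264 payload to the decoder.
    ffmpeg.StandardInput.BaseStream.Write(packet, 2, packet.Length - 2);
    ffmpeg.StandardInput.BaseStream.Flush();
}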
I am using the NAudio library in C# and I need a track bar to control the music. The NAudio Mp3Stream example project has standard controls such as play, stop, pause and volume, but there is no track bar.
Can I use a track bar with Mp3Stream at all? When I set a breakpoint on the do statement, the CanSeek property of the readFullyStream variable is false.
How can I use a track bar on a stream?
using (var responseStream = resp.GetResponseStream())
{
    var readFullyStream = new ReadFullyStream(responseStream);
    do
    {
        if (IsBufferNearlyFull)
        {
            Debug.WriteLine("Buffer getting full, taking a break");
            Thread.Sleep(500);
        }
        else
        {
            Mp3Frame frame;
            try
            {
                frame = Mp3Frame.LoadFromStream(readFullyStream);
            }
            catch (EndOfStreamException)
            {
                fullyDownloaded = true;
                // reached the end of the MP3 file / stream
                break;
            }
            catch (WebException)
            {
                // probably we have aborted download from the GUI thread
                break;
            }
            if (decompressor == null)
            {
                // don't think these details matter too much - just help ACM select the right codec
                // however, the buffered provider doesn't know what sample rate it is working at
                // until we have a frame
                decompressor = CreateFrameDecompressor(frame);
                bufferedWaveProvider = new BufferedWaveProvider(decompressor.OutputFormat);
                bufferedWaveProvider.BufferDuration = TimeSpan.FromSeconds(20); // allow us to get well ahead of ourselves
                //this.bufferedWaveProvider.BufferedDuration = 250;
            }
            int decompressed = decompressor.DecompressFrame(frame, buffer, 0);
            //Debug.WriteLine(String.Format("Decompressed a frame {0}", decompressed));
            bufferedWaveProvider.AddSamples(buffer, 0, decompressed);
        }
    } while (playbackState != StreamingPlaybackState.Stopped);
    Debug.WriteLine("Exiting");
    // was doing this in a finally block, but for some reason
    // we are hanging on response stream .Dispose so never get there
    decompressor.Dispose();
}
The NAudio streaming demo simply decompresses and plays MP3 frames as they arrive. There is no code to store previously received audio, so if you wanted the ability to rewind, you'd need to implement that yourself.
One option you could consider is using MediaFoundationReader, passing in the stream URL as the path. This can play audio as it downloads, and should support repositioning as well.
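Something along these lines (a minimal sketch; the URL is a placeholder and error handling is omitted):

using System;
using NAudio.Wave;

// MediaFoundationReader can open an HTTP URL directly and supports
// repositioning, which is what a track bar needs.
var reader = new MediaFoundationReader("http://example.com/stream.mp3"); // placeholder URL
var waveOut = new WaveOutEvent();
waveOut.Init(reader);
waveOut.Play();

// Track bar -> position: map the track bar value (0..100) onto the duration.
void OnTrackBarScroll(int trackBarValue)
{
    reader.CurrentTime = TimeSpan.FromSeconds(
        reader.TotalTime.TotalSeconds * trackBarValue / 100.0);
}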
I'm trying to make a radio-style auto DJ that plays a list of MP3 files in series, like what happens on the radio.
I tried a lot of workarounds, and finally I thought of sending the MP3 files to a SHOUTcast server and playing the output of that server; my problem is I don't know how to do that.
I have tried bass.radio with BASS.NET, and this is my code:
private int _recHandle;
private BroadCast _broadCast;
EncoderLAME l;
IStreamingServer server = null;

// Init Bass
Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero);

// create the stream
int _stream = Bass.BASS_StreamCreateFile("1.mp3", 0, 0,
    BASSFlag.BASS_SAMPLE_FLOAT | BASSFlag.BASS_STREAM_PRESCAN);
l = new EncoderLAME(_stream);
l.InputFile = null; // STDIN
l.OutputFile = null;
l.Start(null, IntPtr.Zero, false);

// decode the stream (if not using a decoding channel, simply call "Bass.BASS_ChannelPlay" here)
byte[] encBuffer = new byte[65536]; // our dummy encoder buffer
while (Bass.BASS_ChannelIsActive(_stream) == BASSActive.BASS_ACTIVE_PLAYING)
{
    // getting sample data will automatically feed the encoder
    int len = Bass.BASS_ChannelGetData(_stream, encBuffer, encBuffer.Length);
}
//l.Stop(); // finish
//Bass.BASS_StreamFree(_stream);

// Server
SHOUTcast shoutcast = new SHOUTcast(l);
shoutcast.ServerAddress = "50.22.219.37";
shoutcast.ServerPort = 12904;
shoutcast.Password = "01008209907";
shoutcast.PublicFlag = true;
shoutcast.Genre = "Hörspiel";
shoutcast.StationName = "Kravis Server";
shoutcast.Url = "";
shoutcast.Aim = "";
shoutcast.Icq = "";
shoutcast.Irc = "";
server = shoutcast;
server.SongTitle = "BASS.NET";

// disconnect, if connected
if (_broadCast != null && _broadCast.IsConnected)
{
    _broadCast.Disconnect();
}
_broadCast = null;
GC.Collect();

_broadCast = new BroadCast(server);
_broadCast.Notification += OnBroadCast_Notification;
_broadCast.AutoReconnect = true;
_broadCast.ReconnectTimeout = 5;
_broadCast.AutoConnect();
But my file doesn't get streamed to the server, even though _broadCast is connected.
Any solution, in code or otherwise, would be appreciated.
I haven't used BASS in many years, so I can't give you specific advice on the code you have there. But, I wanted to give you the gist of the process of what you need to do... it might help you get started.
As your file is in MP3, it is possible to send it directly to the server and hear it on the receiving end. However, there are a few problems with that. The first is rate control. If you simply transmit the file data, you'll send say 5 minutes of data in perhaps a 10 second time period. This will eventually cause failures as the clients aren't going to buffer much data, and they will disconnect. Another problem is that your MP3 files often have extra data in them in the form of ID3 tags. Some players will ignore this, others won't. Finally, some of your files might be in different sample rates than others, so even if you rate limit your sending, the players will break when they hit a file in a different sample rate.
What needs to happen is the generation of a fresh stream. The pipeline looks something like this:
[Source File] -> [Codec] -> [Raw PCM Audio] -> [Codec] -> [MP3 Stream] -> [SHOUTcast Server] -> [Clients]
Additionally, that raw PCM audio step needs to run at a realtime rate. While your computer can certainly decode and encode faster than realtime, the pipeline needs to run at realtime so that the players can listen in realtime.
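To illustrate the realtime requirement, here's a rough sketch of the throttling idea; readPcm and writePcm are hypothetical placeholders for whatever decode and encode stages you end up using:

using System;
using System.Diagnostics;
using System.Threading;

// Rough sketch of realtime throttling between a decoder and an encoder.
static void StreamAtRealtimeRate(Func<byte[], int> readPcm, Action<byte[]> writePcm)
{
    // Assumed format: raw 16-bit stereo PCM at 44.1 kHz.
    const int bytesPerSecond = 44100 * 2 * 2;      // rate * channels * bytes per sample
    byte[] chunk = new byte[bytesPerSecond / 10];  // 100 ms of audio per chunk
    var clock = Stopwatch.StartNew();
    long bytesSent = 0;

    while (readPcm(chunk) > 0)                     // hypothetical: fill chunk with decoded PCM
    {
        writePcm(chunk);                           // hypothetical: feed the MP3 encoder
        bytesSent += chunk.Length;

        // Sleep until the wall clock catches up with the amount of audio sent,
        // so data leaves at realtime rate instead of as fast as possible.
        double audioSeconds = (double)bytesSent / bytesPerSecond;
        double ahead = audioSeconds - clock.Elapsed.TotalSeconds;
        if (ahead > 0)
            Thread.Sleep(TimeSpan.FromSeconds(ahead));
    }
}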
I have developed a system in which a C# program receives sound buffers (byte arrays) from another subsystem. It is supposed to play the incoming buffers continuously. I searched the web and decided to use SoundPlayer. It works perfectly in offline mode (playing the buffers after receiving them all). However, I have a problem in real-time mode.
In real-time mode the program first waits to receive and accumulate a number of buffer arrays (for example 200). Then it adds a WAV header and plays the result. However, after that, for every subsequent 200 arrays it plays the first buffer again.
I have read following pages:
Play wav/mp3 from memory
https://social.msdn.microsoft.com/Forums/vstudio/en-US/8ac2847c-3e2f-458c-b8ff-533728e267e0/c-problems-with-mediasoundplayer?forum=netfxbcl
and according to their suggestions I implemented my code as follows:
public class MediaPlayer
{
    System.Media.SoundPlayer soundPlayer;

    public MediaPlayer(byte[] buffer)
    {
        // Add a WAV header to the **first** buffer
        byte[] headerPlusBuffer = AddWaveHeader(buffer, false, 1, 16, 8000, buffer.Length / 2);
        MemoryStream memoryStream = new MemoryStream(headerPlusBuffer, true);
        soundPlayer = new System.Media.SoundPlayer(memoryStream);
    }

    public void Play()
    {
        soundPlayer.PlaySync();
    }

    public void Play(byte[] buffer)
    {
        soundPlayer.Stream.Seek(0, SeekOrigin.Begin);
        soundPlayer.Stream.Write(buffer, 0, buffer.Length);
        soundPlayer.PlaySync();
    }
}
I use it like this:
MediaPlayer _mediaPlayer;
if (firstBuffer)
{
_mediaPlayer = new MediaPlayer(dataRaw);
_mediaPlayer.Play();
}
else
{
_mediaPlayer.Play(dataRaw);
}
Each time _mediaPlayer.Play(dataRaw) is called, the first buffer is played again and again, even though dataRaw has been updated.
I appreciate your help.
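For reference, a commonly suggested alternative to SoundPlayer for continuous buffer playback is NAudio's BufferedWaveProvider; here's a minimal sketch, assuming the 8 kHz, 16-bit, mono format implied by my AddWaveHeader call:

using NAudio.Wave;

// Not my current code: feed every incoming raw PCM buffer into a
// BufferedWaveProvider, which plays continuously without per-chunk WAV headers.
var format = new WaveFormat(8000, 16, 1); // rate, bits, channels
var provider = new BufferedWaveProvider(format);
var output = new WaveOutEvent();
output.Init(provider);
output.Play();

// Call this for every incoming buffer; playback is continuous.
void OnBufferReceived(byte[] dataRaw)
{
    provider.AddSamples(dataRaw, 0, dataRaw.Length);
}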