I've been trying to get SlimDX DirectSound working. Here's the code I have. It fills the secondary buffer from a wav file and then, in a thread loop, alternately fills the lower or upper halves of the buffer.
It plays the first load of the buffer fine. The AutoResetEvents fire when they should and the lower half then upper half of the buffer are populated (verified with Debug statements). But playing does not continue after the first load of the buffer. So somehow the repopulation of the buffer doesn't work as it should.
Ideas?
(I'm using DirectSound because it's the only way I've found to set the guid of the audio device that I want to use. Am open to other .NET-friendly approaches.)
private void PlaySound(Guid soundCardGuid, string audioFile) {
    DirectSound ds = new DirectSound(soundCardGuid);
    ds.SetCooperativeLevel(this.Handle, CooperativeLevel.Priority);

    WaveFormat format = new WaveFormat();
    format.BitsPerSample = 16;
    format.BlockAlignment = 4;
    format.Channels = 2;
    format.FormatTag = WaveFormatTag.Pcm;
    format.SamplesPerSecond = 44100;
    format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlignment;

    SoundBufferDescription desc = new SoundBufferDescription();
    desc.Format = format;
    desc.Flags = BufferFlags.GlobalFocus;
    desc.SizeInBytes = 8 * format.AverageBytesPerSecond;
    PrimarySoundBuffer pBuffer = new PrimarySoundBuffer(ds, desc);

    SoundBufferDescription desc2 = new SoundBufferDescription();
    desc2.Format = format;
    desc2.Flags = BufferFlags.GlobalFocus | BufferFlags.ControlPositionNotify | BufferFlags.GetCurrentPosition2;
    desc2.SizeInBytes = 8 * format.AverageBytesPerSecond;
    SecondarySoundBuffer sBuffer1 = new SecondarySoundBuffer(ds, desc2);

    NotificationPosition[] notifications = new NotificationPosition[2];
    notifications[0].Offset = desc2.SizeInBytes / 2 + 1;
    notifications[1].Offset = desc2.SizeInBytes - 1;
    notifications[0].Event = new AutoResetEvent(false);
    notifications[1].Event = new AutoResetEvent(false);
    sBuffer1.SetNotificationPositions(notifications);

    byte[] bytes1 = new byte[desc2.SizeInBytes / 2];
    byte[] bytes2 = new byte[desc2.SizeInBytes];
    Stream stream = File.Open(audioFile, FileMode.Open);

    Thread fillBuffer = new Thread(() => {
        int readNumber = 1;
        int bytesRead;
        bytesRead = stream.Read(bytes2, 0, desc2.SizeInBytes);
        sBuffer1.Write<byte>(bytes2, 0, LockFlags.None);
        sBuffer1.Play(0, PlayFlags.None);
        while (true) {
            if (bytesRead == 0) { break; }
            notifications[0].Event.WaitOne();
            bytesRead = stream.Read(bytes1, 0, bytes1.Length);
            sBuffer1.Write<byte>(bytes1, 0, LockFlags.None);
            if (bytesRead == 0) { break; }
            notifications[1].Event.WaitOne();
            bytesRead = stream.Read(bytes1, 0, bytes1.Length);
            sBuffer1.Write<byte>(bytes1, desc2.SizeInBytes / 2, LockFlags.None);
        }
        stream.Close();
        stream.Dispose();
    });
    fillBuffer.Start();
}
You haven't set it to loop on the play buffer. Change your code to:
sBuffer1.Play(0, PlayFlags.Looping);
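One more thing to watch with `PlayFlags.Looping`: the buffer now cycles forever, so once the stream is exhausted you need to stop playback yourself, or the last buffer contents will keep repeating. A sketch of that shutdown at the end of the fill thread (assuming SlimDX's `SoundBuffer.Stop()`; untested):

```csharp
// At the end of the fill thread from the question, after the
// while (true) loop exits because stream.Read returned 0:
notifications[1].Event.WaitOne(); // let the last written half play out
sBuffer1.Stop();                  // otherwise the looping buffer repeats stale data
stream.Close();
stream.Dispose();
```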
I have a file transfer function that works fine with small files but fails with big files (above 5 or 10 MB). Basically, I split the file into chunks and prepend a 'header' that tells the client what type of data it is. I'm using this header approach because I also want to be able to upload files and do other things at the same time. The problem is that the client receives only half of the bytes even though the server sends all of them, so I'm assuming the header gets corrupted and the `if (header == "s^")` check fails.
Here is my code
Server (Uploader):
int max_buffer = 1024 * 1024 * 2;
using (var fs = new FileStream(fo_path, FileMode.Open, FileAccess.Read))
{
    byte[] header = Encoding.Unicode.GetBytes("s^fd");
    byte[] buffer = new byte[max_buffer];
    int bytesRead = 0;
    long bytesToRead = fs.Length;
    while (bytesToRead > 0)
    {
        int n = fs.Read(buffer, 0, max_buffer);
        if (n == 0) break;
        if (n != buffer.Length)
            Array.Resize(ref buffer, n);
        byte[] data = new byte[header.Length + n];
        Buffer.BlockCopy(header, 0, data, 0, header.Length);
        Buffer.BlockCopy(buffer, 0, data, header.Length, n);
        _client.Send(data);
        bytesRead += n;
        bytesToRead -= n;
    }
}
Client (Downloader):
private static MemoryStream fileStream = new MemoryStream(); // global scope

byte[] buffer = new byte[2048];
int received = _clientSocket.Receive(buffer, SocketFlags.None);
if (received == 0) return;
byte[] data = new byte[received];
Array.Copy(buffer, data, received);
string header = Encoding.Unicode.GetString(data, 0, 4);
if (header == "s^")
{
    string header_cmd = Encoding.Unicode.GetString(data, 4, 4);
    if (header_cmd == "fd")
    {
        byte[] fileBytes = new byte[data.Length - 32];
        Buffer.BlockCopy(data, 32, fileBytes, 0, fileBytes.Length);
        fileStream.Write(fileBytes, 0, fileBytes.Length);
        fo_writeSize += fileBytes.Length;
        if (fo_writeSize >= fo_size) // fo_size is the file size that the server sends before sending the file itself
        {
            var fs = new FileStream(fo_path, FileMode.Create, FileAccess.Write);
            fileStream.WriteTo(fs);
            fileStream.Close();
            fs.Close();
        }
    }
}
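A likely culprit here isn't header corruption at all: TCP is a byte stream, so a single `Send` of a 2 MB chunk will arrive split across many `Receive` calls, and only the first fragment starts with "s^". A minimal sketch of length-prefixed framing that sidesteps this, shown over in-memory streams (the `WriteFrame`/`ReadFrame`/`ReadExact` names are mine, not from the code above):

```csharp
using System;
using System.IO;

static class Framing
{
    // Prefix each message with its 4-byte little-endian length, then the payload.
    public static void WriteFrame(Stream s, byte[] payload)
    {
        s.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        s.Write(payload, 0, payload.Length);
    }

    // Read exactly count bytes, looping because a stream read may return fewer.
    static byte[] ReadExact(Stream s, int count)
    {
        var buf = new byte[count];
        int off = 0;
        while (off < count)
        {
            int n = s.Read(buf, off, count - off);
            if (n == 0) throw new EndOfStreamException();
            off += n;
        }
        return buf;
    }

    public static byte[] ReadFrame(Stream s)
    {
        int len = BitConverter.ToInt32(ReadExact(s, 4), 0);
        return ReadExact(s, len);
    }
}
```

On the socket side the same loop applies: keep calling `Receive` until the whole frame has arrived before parsing the header.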
I'm capturing audio with WasapiLoopbackCapture
- format = IeeeFloat
- SampleRate = 48000
- BitsPerSample = 32
I need to convert this to mu-law (8 kHz, 8-bit, mono) - eventually it'll be sent to a phone via SIP trunking. I've tried hundreds of samples (most of them with NAudio) and solutions but still have no clue how to do this ...
The mu-law tools in NAudio are limited, so you might have to roll your own.
You'll need to set up a chain of IWaveProvider/ISampleProvider filters to convert to mono, change the sample rate, and change the bit depth.
waveBuffer = new BufferedWaveProvider(waveIn.WaveFormat);
waveBuffer.DiscardOnBufferOverflow = true;
waveBuffer.ReadFully = false; // leave a buffer?
sampleStream = new WaveToSampleProvider(waveBuffer);
// Stereo to mono
monoStream = new StereoToMonoSampleProvider(sampleStream)
{
LeftVolume = 1f,
RightVolume = 1f
};
// Downsample to 8000
resamplingProvider = new WdlResamplingSampleProvider(monoStream, 8000);
// Convert to 16-bit in order to use ACM or MuLaw tools.
ieeeToPcm = new SampleToWaveProvider16(resamplingProvider);
Then create a custom IWaveProvider for the next step.
// In MuLawConversionProvider
public int Read(byte[] destinationBuffer, int offset, int readingCount)
{
// Source buffer has twice as many items as the output array.
var sizeOfPcmBuffer = readingCount * 2;
_sourceBuffer = BufferHelpers.Ensure(_sourceBuffer, sizeOfPcmBuffer);
var sourceBytesRead = _sourceProvider.Read(_sourceBuffer, offset * 2, sizeOfPcmBuffer);
var samplesRead = sourceBytesRead / 2;
var outIndex = 0;
for (var n = 0; n < sizeOfPcmBuffer; n += 2)
{
destinationBuffer[outIndex++] = MuLawEncoder.LinearToMuLawSample(BitConverter.ToInt16(_sourceBuffer, offset + n));
}
return samplesRead * 2;
}
The new provider can be sent directly to WaveOut
outputStream = new MuLawConversionProvider(ieeeToPcm);
waveOut.Init(outputStream);
waveOut.Play();
These filters remain in place with the BufferedWaveProvider as the "root". Whenever you call BufferedWaveProvider.AddSamples(), the data will go through all these filters.
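For reference, the heavy lifting in `MuLawEncoder.LinearToMuLawSample` is the standard G.711 μ-law companding step. A standalone sketch of that transform (illustrative only; in practice keep using NAudio's encoder):

```csharp
using System;

static class MuLaw
{
    const int Bias = 0x84;  // 132, the standard G.711 bias
    const int Max = 32635;  // clip level before encoding

    // Compress one 16-bit linear PCM sample to one 8-bit mu-law byte.
    public static byte Encode(short pcm)
    {
        int sign = (pcm >> 8) & 0x80;          // keep the sign bit
        int magnitude = sign != 0 ? -pcm : pcm;
        if (magnitude > Max) magnitude = Max;  // clip
        magnitude += Bias;

        // Find the segment (exponent): position of the highest set bit.
        int exponent = 7;
        for (int mask = 0x4000; (magnitude & mask) == 0 && exponent > 0; exponent--, mask >>= 1) { }

        int mantissa = (magnitude >> (exponent + 3)) & 0x0F;
        return (byte)~(sign | (exponent << 4) | mantissa); // mu-law bytes are stored inverted
    }
}
```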
I have an IRandomAccessStream to store data on Windows Phone 8.1 and I want to process the data in chunks of a certain BLOCKSIZE.
Here is the code
in the capture function:
await this.mediaCapture.StartRecordToStreamAsync(recordProfile, this.message);
in the stop function, when recording is done:
await this.mediaCapture.StopRecordAsync();
using (var dataReader = new DataReader(message.GetInputStreamAt(0)))
{
    dataReader.UnicodeEncoding = Windows.Storage.Streams.UnicodeEncoding.Utf8;
    dataReader.ByteOrder = Windows.Storage.Streams.ByteOrder.LittleEndian;
    // await dataReader.LoadAsync((uint)message.Size);
    await dataReader.LoadAsync(((uint)message.Size + (int)FREQUENCY) * 10);
    byte[] buffer = new byte[((int)message.Size + (int)FREQUENCY) * 10];
    dataReader.ReadBytes(buffer);
    short[] signal = new short[BLOCKSIZE];
    int bufferReadResult = 0;
    while (bufferReadResult != buffer.Length)
    {
        for (int index = 0; index < BLOCKSIZE; index += 2, bufferReadResult++)
        {
            signal[bufferReadResult] = BitConverter.ToInt16(buffer, index);
        }
        // ..... process signal[] in BLOCKSIZE chunks,
        // keep processing BLOCKSIZE until the end of the buffer,
        // so process [BLOCKSIZE][BLOCKSIZE]..............buffer end
    }
}
The problem is that once bufferReadResult reaches BLOCKSIZE, that's it; it won't continue with a new block until the end of the buffer.
What is the best way to do this? I have a buffer (or IRandomAccessStream) and I want to process all of the data in chunks of short[BLOCKSIZE].
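In the code above, the inner for loop keeps incrementing bufferReadResult past the end of signal, and index never advances past the first block. One way to structure it is two nested loops: the outer one advancing a position through the byte buffer one block at a time, the inner one converting that block's bytes to shorts. A sketch assuming 16-bit little-endian samples (`ProcessInBlocks` and the `processBlock` callback are hypothetical names, not WinRT APIs):

```csharp
using System;

static class BlockProcessor
{
    // Walk the byte buffer in chunks of blockSize 16-bit samples,
    // converting each chunk to short[] and handing it off for processing.
    public static int ProcessInBlocks(byte[] buffer, int blockSize, Action<short[], int> processBlock)
    {
        var signal = new short[blockSize];
        int blocks = 0;
        for (int pos = 0; pos + 1 < buffer.Length; pos += blockSize * 2)
        {
            int samples = Math.Min(blockSize, (buffer.Length - pos) / 2);
            for (int i = 0; i < samples; i++)
                signal[i] = BitConverter.ToInt16(buffer, pos + i * 2);
            processBlock(signal, samples); // samples < blockSize on the final, partial block
            blocks++;
        }
        return blocks;
    }
}
```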
I have audio samples extracted through NAudio, and I know the parameters:
channels
bytes per sample
sample rate
How can I play those samples using the .NET API or another .NET library?
Here is the code:
openFileDialog1.ShowDialog();
using (var reader = new Mp3FileReader(openFileDialog1.FileName))
{
    var pcmLength = (int)reader.Length;
    var _leftBuffer = new byte[pcmLength / 2];
    var buffer = new byte[pcmLength];
    var bytesRead = reader.Read(buffer, 0, pcmLength);
    int index = 0;
    for (int i = 0; i < bytesRead; i += 4)
    {
        // extracting only the left channel
        _leftBuffer[index] = buffer[i];
        index++;
        _leftBuffer[index] = buffer[i + 1];
        index++;
    }
    // How to play _leftBuffer (single channel, 2 bytes per sample, 44100 samples per second)?
}
First, you need to implement IWaveProvider or use one of the IWaveProvider implementations that come with NAudio, like WaveProvider16, for example. Next, initialize a WaveOut object with your IWaveProvider using WaveOut.Init(IWaveProvider provider), and finally, call WaveOut.Play().
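A minimal sketch of that advice, assuming NAudio's RawSourceWaveStream (which wraps a raw PCM byte stream in a given WaveFormat) and the mono/16-bit/44100 Hz data from the question; untested:

```csharp
using System.IO;
using NAudio.Wave;

// _leftBuffer holds raw PCM: mono, 16-bit, 44100 Hz (as in the question).
var format = new WaveFormat(44100, 16, 1);
var source = new RawSourceWaveStream(new MemoryStream(_leftBuffer), format);

var waveOut = new WaveOutEvent();
waveOut.Init(source);
waveOut.Play(); // playback is asynchronous; keep waveOut alive until it finishes
```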
YES, I have found a solution: A low-level audio player in C#
Full working code:
public partial class Form1 : Form
{
    private byte[] _leftBuffer;
    private BiQuadFilter _leftFilter;
    private BiQuadFilter _rightFilter;

    public Form1()
    {
        InitializeComponent();
    }

    private void button1_Click(object sender, EventArgs e)
    {
        openFileDialog1.ShowDialog();
        using (var reader = new Mp3FileReader(openFileDialog1.FileName))
        {
            var pcmLength = (int)reader.Length;
            _leftBuffer = new byte[pcmLength / 2];
            var buffer = new byte[pcmLength];
            var bytesRead = reader.Read(buffer, 0, pcmLength);
            int index = 0;
            for (int i = 0; i < bytesRead; i += 4)
            {
                _leftBuffer[index] = buffer[i];
                index++;
                _leftBuffer[index] = buffer[i + 1];
                index++;
            }
            var player = new WaveLib.WaveOutPlayer(-1, new WaveLib.WaveFormat(44100, 16, 1), _leftBuffer.Length, 1, (data, size) =>
            {
                byte[] b = _leftBuffer;
                System.Runtime.InteropServices.Marshal.Copy(b, 0, data, size);
            });
        }
    }
}
I would like to know how I can read a file into byte arrays that are 4 bytes long.
These arrays will be manipulated and then have to be converted back to a single array ready to be written to a file.
EDIT:
Code snippet.
var arrays = new List<byte[]>();
using (var f = new FileStream("file.cfg.dec", FileMode.Open))
{
    for (int i = 0; i < f.Length; i += 4)
    {
        var b = new byte[4];
        var bytesRead = f.Read(b, 0, 4);
        if (bytesRead < 4)
        {
            var b2 = new byte[bytesRead];
            Array.Copy(b, b2, bytesRead);
            arrays.Add(b2);
        }
        else if (bytesRead > 0)
            arrays.Add(b);
    }
}
foreach (var b in arrays)
{
    BitArray source = new BitArray(b);
    BitArray target = new BitArray(source.Length);
    target[26] = source[0];
    target[31] = source[1];
    target[17] = source[2];
    target[10] = source[3];
    target[30] = source[4];
    target[16] = source[5];
    target[24] = source[6];
    target[2] = source[7];
    target[29] = source[8];
    target[8] = source[9];
    target[20] = source[10];
    target[15] = source[11];
    target[28] = source[12];
    target[11] = source[13];
    target[13] = source[14];
    target[4] = source[15];
    target[19] = source[16];
    target[23] = source[17];
    target[0] = source[18];
    target[12] = source[19];
    target[14] = source[20];
    target[27] = source[21];
    target[6] = source[22];
    target[18] = source[23];
    target[21] = source[24];
    target[3] = source[25];
    target[9] = source[26];
    target[7] = source[27];
    target[22] = source[28];
    target[1] = source[29];
    target[25] = source[30];
    target[5] = source[31];
    var back2byte = BitArrayToByteArray(target);
    arrays.Clear();
    arrays.Add(back2byte);
}
using (var f = new FileStream("file.cfg.enc", FileMode.Open))
{
    foreach (var b in arrays)
        f.Write(b, 0, b.Length);
}
EDIT 2:
Here is the Ugly Betty-looking code that accomplishes what I wanted. Now I must refine it for performance...
var arrays_ = new List<byte[]>();
var arrays_save = new List<byte[]>();
var arrays = new List<byte[]>();
using (var f = new FileStream("file.cfg.dec", FileMode.Open))
{
    for (int i = 0; i < f.Length; i += 4)
    {
        var b = new byte[4];
        var bytesRead = f.Read(b, 0, b.Length);
        if (bytesRead < 4)
        {
            var b2 = new byte[bytesRead];
            Array.Copy(b, b2, bytesRead);
            arrays.Add(b2);
        }
        else if (bytesRead > 0)
            arrays.Add(b);
    }
}
foreach (var b in arrays)
{
    arrays_.Add(b);
}
foreach (var b in arrays_)
{
    BitArray source = new BitArray(b);
    BitArray target = new BitArray(source.Length);
    target[26] = source[0];
    target[31] = source[1];
    target[17] = source[2];
    target[10] = source[3];
    target[30] = source[4];
    target[16] = source[5];
    target[24] = source[6];
    target[2] = source[7];
    target[29] = source[8];
    target[8] = source[9];
    target[20] = source[10];
    target[15] = source[11];
    target[28] = source[12];
    target[11] = source[13];
    target[13] = source[14];
    target[4] = source[15];
    target[19] = source[16];
    target[23] = source[17];
    target[0] = source[18];
    target[12] = source[19];
    target[14] = source[20];
    target[27] = source[21];
    target[6] = source[22];
    target[18] = source[23];
    target[21] = source[24];
    target[3] = source[25];
    target[9] = source[26];
    target[7] = source[27];
    target[22] = source[28];
    target[1] = source[29];
    target[25] = source[30];
    target[5] = source[31];
    var back2byte = BitArrayToByteArray(target);
    arrays_save.Add(back2byte);
}
using (var f = new FileStream("file.cfg.enc", FileMode.Open))
{
    foreach (var b in arrays_save)
        f.Write(b, 0, b.Length);
}
EDIT 3:
Loading a big file into byte arrays of 4 bytes wasn't the smartest idea...
I have over 68 million arrays being processed and manipulated. I really wonder if it's possible to load it into a single array and still have the bit manipulation work. :/
Here's another way, similar to @igofed's solution:
var arrays = new List<byte[]>();
using (var f = new FileStream("test.txt", FileMode.Open))
{
    for (int i = 0; i < f.Length; i += 4)
    {
        var b = new byte[4];
        var bytesRead = f.Read(b, 0, 4);
        if (bytesRead < 4)
        {
            var b2 = new byte[bytesRead];
            Array.Copy(b, b2, bytesRead);
            arrays.Add(b2);
        }
        else if (bytesRead > 0)
            arrays.Add(b);
    }
}
// make changes to arrays
using (var f = new FileStream("test-out.txt", FileMode.Create))
{
    foreach (var b in arrays)
        f.Write(b, 0, b.Length);
}
Regarding your "Edit 3" ... I'll bite, although it's really a diversion from the original question.
There's no reason you need Lists of arrays, since you're just breaking up the file into a continuous list of 4-byte sequences, looping through and processing each sequence, and then looping through and writing each sequence. You can do much better. NOTE: The implementation below does not check for or handle input files whose lengths are not exactly multiples of 4. I leave that as an exercise to you, if it is important.
To directly address your comment, here is a single-array solution. We'll ditch the List objects, read the whole file into a single byte[] array, and then copy out 4-byte sections of that array to do your bit transforms, then put the result back. At the end we'll just slam the whole thing into the output file.
byte[] data;
using (Stream fs = File.OpenRead("E:\\temp\\test.bmp")) {
    data = new byte[fs.Length];
    fs.Read(data, 0, data.Length);
}

byte[] element = new byte[4];
for (int i = 0; i < data.Length; i += 4) {
    Array.Copy(data, i, element, 0, element.Length);
    BitArray source = new BitArray(element);
    BitArray target = new BitArray(source.Length);
    target[26] = source[0];
    target[31] = source[1];
    // ...
    target[5] = source[31];
    target.CopyTo(data, i);
}

using (Stream fs = File.OpenWrite("E:\\temp\\test_out.bmp")) {
    fs.Write(data, 0, data.Length);
}
All of the ugly initial read code is gone since we're just using a single byte array. Notice I reserved a single 4-byte array before the processing loop to re-use, so we can save the garbage collector some work. Then we loop through the giant data array 4 bytes at a time, copy each group into the working array, use that to initialize the BitArray for your transforms, and the last statement in the block converts the BitArray back into bytes and copies them directly back to their original location within the giant data array. This replaces your BitArrayToByteArray method, which you did not provide. At the end, writing is also easy since it's just slamming out the now-transformed giant data array.
When I ran your original solution I got an OutOfMemoryException on my original test file of 100 MB, so I used a 44 MB file instead. It consumed 650 MB of memory and ran in 30 seconds. The single-array solution used 54 MB of memory and ran in 10 seconds. Not a bad improvement, and it demonstrates how costly holding onto millions of small array objects is.
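If you want to squeeze it further, you can skip BitArray entirely and permute each 4-byte group as a 32-bit integer. This sketch assumes BitArray's LSB-first, little-endian bit numbering (so bit i of `new BitArray(bytes)` is bit i of `BitConverter.ToUInt32(bytes, 0)`); the `Map` table encodes the `target[...] = source[...]` list from the question:

```csharp
using System;

static class BitPermuter
{
    // Map[src] = destination bit index, taken from target[Map[src]] = source[src].
    static readonly int[] Map =
    {
        26, 31, 17, 10, 30, 16, 24, 2, 29, 8, 20, 15, 28, 11, 13, 4,
        19, 23, 0, 12, 14, 27, 6, 18, 21, 3, 9, 7, 22, 1, 25, 5
    };

    // Move each set bit of value to its mapped position.
    public static uint Permute(uint value)
    {
        uint result = 0;
        for (int src = 0; src < 32; src++)
            if ((value & (1u << src)) != 0)
                result |= 1u << Map[src];
        return result;
    }
}
```

Applied in the loop, that becomes `BitConverter.ToUInt32(data, i)`, then `Permute`, then writing the 4 bytes back with `BitConverter.GetBytes`.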
Here is another approach using StreamReader. Note that StreamReader reads characters, not raw bytes, so this is only safe for text files:
using (var reader = new StreamReader("inputFileName"))
{
    using (var writer = new StreamWriter("outputFileName"))
    {
        char[] buff = new char[4];
        int readCount = 0;
        while ((readCount = reader.Read(buff, 0, 4)) > 0)
        {
            // manipulations with buff
            writer.Write(buff);
        }
    }
}
IEnumerable<byte[]> arraysOf4Bytes = File
    .ReadAllBytes(path)
    .Select((b, i) => new { b, i })
    .GroupBy(x => x.i / 4)
    .Select(g => g.Select(x => x.b).ToArray());
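If you're on .NET 6 or newer, Enumerable.Chunk does the same grouping directly and far more cheaply than the GroupBy trick:

```csharp
using System;
using System.IO;
using System.Linq;

// Chunk(4) yields 4-byte arrays; the final array is shorter
// if the file length is not a multiple of 4.
byte[][] arraysOf4Bytes = File.ReadAllBytes(path).Chunk(4).ToArray();
```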