I made my own beep code as an exercise. However, it lags:
2-3 beeps arrive late, then the 3rd or 4th comes quicker.
Can someone please explain why? And how do I get rid of the latency?
I used bits from Mark Heath's blogs and the NAudio GitHub code.
This is the main console code:
var waveSine = new BeepStream(waveWhite.WaveFormat);
var sineChannel = new WaveChannel32(waveSine) { PadWithZeroes = false };
List<WaveChannel32> inputs = new List<WaveChannel32>();
inputs.Add(sineChannel);
var mixer = new MixingWaveProvider32(inputs);
var output = new WaveOut();
output.Init(mixer);
output.Play();
output.Volume = 0.5f;
while (true)
{
    Thread.Sleep(1000);
    waveSine.Beep(1000, 150);
}
This is the beep code (it basically outputs zeros, but when Beep is called it pushes cached sine samples):
(note 1: this is not production code, so please ignore obvious inconsistencies)
(note 2: in reality there are at least 2 channels - one is always playing, which is why I had to make the second one - the beep channel - always playing as well. I would gladly write sample providers instead, but couldn't find a good example)
public class BeepStream : WaveStream
...
    public override int Read(byte[] buffer, int offset, int count)
    {
        int totalBytesRead = 0;
        int beepBytesRead = 0;
        while (totalBytesRead < count)
        {
            if (playingNow == null)
            {
                // silence
                buffer[totalBytesRead] = 0;
                totalBytesRead += 1;// bytesRead;
            }
            else
            {
                // beep
                buffer[totalBytesRead] = playingNow[beepBytesRead++];
                totalBytesRead += 1;// bytesRead;
                if (beepBytesRead >= playingNow.Length)
                    playingNow = null;
            }
        }
        return totalBytesRead;
    }
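Regarding the note above about sample providers: here is a minimal ISampleProvider sketch (my own illustration, not code from the question; the class and member names are made up) - it outputs silence by default and plays a cached sine burst whenever Beep is called:
public class BeepSampleProvider : ISampleProvider
{
    private float[] playingNow;
    private int position;

    // 44.1 kHz mono IEEE float, to match the format mentioned in the question
    public WaveFormat WaveFormat { get; } = WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);

    public void Beep(int frequency, int milliseconds)
    {
        int sampleCount = WaveFormat.SampleRate * milliseconds / 1000;
        var samples = new float[sampleCount];
        for (int i = 0; i < sampleCount; i++)
            samples[i] = (float)Math.Sin(2 * Math.PI * frequency * i / WaveFormat.SampleRate);
        position = 0;
        playingNow = samples;
    }

    public int Read(float[] buffer, int offset, int count)
    {
        for (int i = 0; i < count; i++)
        {
            // play the cached beep if there is one, otherwise emit silence
            if (playingNow != null && position < playingNow.Length)
                buffer[offset + i] = playingNow[position++];
            else
                buffer[offset + i] = 0f;
        }
        if (playingNow != null && position >= playingNow.Length)
            playingNow = null;
        return count; // never return 0, so the output keeps pulling
    }
}
Such a provider can be wrapped in a SampleToWaveProvider (or fed to a MixingSampleProvider) for output.Init(...), without the WaveChannel32 wrapper.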
An observation: Read is always called with count = 52920. What is this magic number? Can I reduce it? The wave format is 44100 Hz IEEE float, if that helps.
I found a solution by simply trying everything.
Use exclusive-mode WASAPI output:
var output = new WasapiOut(NAudio.CoreAudioApi.AudioClientShareMode.Exclusive, 3);
output.Init(mixer);
output.Play();
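A note on the magic number: 52920 bytes happens to be exactly 150 ms of 44.1 kHz, 32-bit float stereo audio (44100 × 8 × 0.15), which matches WaveOut's defaults of DesiredLatency = 300 ms split across NumberOfBuffers = 2 - assuming the stream reaching BeepStream really is stereo float at that point. If that assumption holds, the read size (and the worst-case beep delay) can also be reduced by shrinking the WaveOut buffers instead of switching APIs, for example:
// Hedged alternative: smaller WaveOut buffers (set the properties before Init)
var output = new WaveOut
{
    DesiredLatency = 100,   // total buffered audio in ms (the default is 300)
    NumberOfBuffers = 2     // each buffer is roughly DesiredLatency / NumberOfBuffers
};
output.Init(mixer);
output.Play();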
Related
In order to clean up some messy code and get a better understanding of the SocketAsyncEventArgs class, I'd like to know the most efficient technique to reassemble partially received messages from SocketAsyncEventArgs buffers.
To give you the big picture, I'm connected to a TCP server using a C# Socket client that essentially just receives data. The data received is message-based, delimited by a \n character.
As you're probably already aware, when using the ReceiveAsync method it is almost certain that the last received message will be incomplete, so you have to locate the index of the last complete message, copy the incomplete tail of the buffer, keep it as the start of the next received buffer, and so on.
The thing is, I wish to abstract this operation from the upper layer and call ProcessReceiveDataImpl as soon as I have complete messages in _tmpBuffer. I find my Buffer.BlockCopy code not very readable (it's also very old code (-:), but anyway, what do you do in this typical use case?
Code to reassemble messages:
public class SocketClient
{
    private const int _receiveBufferSize = 8192;
    private byte[] _remBuffer = new byte[2 * _receiveBufferSize];
    private byte[] _tmpBuffer = new byte[2 * _receiveBufferSize];
    private int _remBufferSize = 0;
    private int _tmpBufferSize = 0;

    private void ProcessReceiveData(SocketAsyncEventArgs e)
    {
        // the buffer to process
        byte[] curBuffer = e.Buffer;
        int curBufferSize = e.BytesTransferred;
        int curBufferOffset = e.Offset;
        int curBufferLastIndex = e.BytesTransferred - 1;
        int curBufferLastSplitIndex = int.MinValue;

        if (_remBufferSize > 0)
        {
            curBufferLastSplitIndex = GetLastSplitIndex(curBuffer, curBufferOffset, curBufferSize);
            if (curBufferLastSplitIndex != curBufferLastIndex)
            {
                // copy the remain + part of the current into tmp
                Buffer.BlockCopy(_remBuffer, 0, _tmpBuffer, 0, _remBufferSize);
                Buffer.BlockCopy(curBuffer, curBufferOffset, _tmpBuffer, _remBufferSize, curBufferLastSplitIndex + 1);
                _tmpBufferSize = _remBufferSize + curBufferLastSplitIndex + 1;
                ProcessReceiveDataImpl(_tmpBuffer, _tmpBufferSize);
                Buffer.BlockCopy(curBuffer, curBufferLastSplitIndex + 1, _remBuffer, 0, curBufferLastIndex - curBufferLastSplitIndex);
                _remBufferSize = curBufferLastIndex - curBufferLastSplitIndex;
            }
            else
            {
                // copy the remain + entire current into tmp
                Buffer.BlockCopy(_remBuffer, 0, _tmpBuffer, 0, _remBufferSize);
                Buffer.BlockCopy(curBuffer, curBufferOffset, _tmpBuffer, _remBufferSize, curBufferSize);
                ProcessReceiveDataImpl(_tmpBuffer, _remBufferSize + curBufferSize);
                _remBufferSize = 0;
            }
        }
        else
        {
            curBufferLastSplitIndex = GetLastSplitIndex(curBuffer, curBufferOffset, curBufferSize);
            if (curBufferLastSplitIndex != curBufferLastIndex)
            {
                // we must copy the unused byte into remaining buffer
                _remBufferSize = curBufferLastIndex - curBufferLastSplitIndex;
                Buffer.BlockCopy(curBuffer, curBufferLastSplitIndex + 1, _remBuffer, 0, _remBufferSize);
                // process the msg
                ProcessReceiveDataImpl(curBuffer, curBufferLastSplitIndex + 1);
            }
            else
            {
                // we can process the entire msg
                ProcessReceiveDataImpl(curBuffer, curBufferSize);
            }
        }
    }

    protected virtual void ProcessReceiveDataImpl(byte[] buffer, int bufferSize)
    {
    }

    private int GetLastSplitIndex(byte[] buffer, int offset, int bufferSize)
    {
        for (int i = offset + bufferSize - 1; i >= offset; i--)
        {
            if (buffer[i] == '\n')
            {
                return i;
            }
        }
        return -1;
    }
}
Your input is very important and appreciated!
Thank you!
Update:
Also, rather than calling ProcessReceiveDataImpl and blocking further receive operations, would it be useful to queue completed messages and make them available to the consumer?
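One way to do that (a sketch, not taken from any answer; the member names are made up and it assumes a subclass of SocketClient) is to split complete messages inside ProcessReceiveDataImpl and push them onto a thread-safe queue, so the consumer can drain them independently of the receive loop:
// requires System.Collections.Concurrent
private readonly ConcurrentQueue<byte[]> _completedMessages = new ConcurrentQueue<byte[]>();

protected override void ProcessReceiveDataImpl(byte[] buffer, int bufferSize)
{
    int start = 0;
    for (int i = 0; i < bufferSize; i++)
    {
        if (buffer[i] == '\n')
        {
            // copy one complete message (without the delimiter) and queue it
            var msg = new byte[i - start];
            Buffer.BlockCopy(buffer, start, msg, 0, msg.Length);
            _completedMessages.Enqueue(msg);
            start = i + 1;
        }
    }
}

// Consumer side: drain whatever is available without blocking the receive path.
public bool TryDequeueMessage(out byte[] message)
{
    return _completedMessages.TryDequeue(out message);
}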
I'm capturing audio with WasapiLoopbackCapture
- format = IeeeFloat
- SampleRate = 48000
- BitsPerSample = 32
I need to convert this to mu-law (8 kHz, 8-bit, mono) - eventually it'll be sent to a phone via SIP trunking. I've tried hundreds of samples (most of them using NAudio) and solutions, but still have no clue how to do this.
The Mu-Law tools in NAudio are limited, so you might have to roll your own.
You'll need to set up a chain of IWaveProvider filters to convert to mono, change the sample rate, and change the bit depth.
waveBuffer = new BufferedWaveProvider(waveIn.WaveFormat);
waveBuffer.DiscardOnBufferOverflow = true;
waveBuffer.ReadFully = false; // leave a buffer?
sampleStream = new WaveToSampleProvider(waveBuffer);
// Stereo to mono
monoStream = new StereoToMonoSampleProvider(sampleStream)
{
    LeftVolume = 1f,
    RightVolume = 1f
};
// Downsample to 8000
resamplingProvider = new WdlResamplingSampleProvider(monoStream, 8000);
// Convert to 16-bit in order to use ACM or MuLaw tools.
ieeeToPcm = new SampleToWaveProvider16(resamplingProvider);
Then create a custom IWaveProvider for the next step.
// In MuLawConversionProvider
public int Read(byte[] destinationBuffer, int offset, int readingCount)
{
    // The 16-bit PCM source needs twice as many bytes as the mu-law output.
    var sizeOfPcmBuffer = readingCount * 2;
    _sourceBuffer = BufferHelpers.Ensure(_sourceBuffer, sizeOfPcmBuffer);
    var sourceBytesRead = _sourceProvider.Read(_sourceBuffer, 0, sizeOfPcmBuffer);
    var outIndex = 0;
    // Each pair of PCM bytes becomes one mu-law byte.
    for (var n = 0; n < sourceBytesRead; n += 2)
    {
        destinationBuffer[offset + outIndex++] = MuLawEncoder.LinearToMuLawSample(BitConverter.ToInt16(_sourceBuffer, n));
    }
    // Return the number of mu-law bytes actually written.
    return outIndex;
}
The new provider can be sent directly to WaveOut:
outputStream = new MuLawConversionProvider(ieeeToPcm);
waveOut.Init(outputStream);
waveOut.Play();
These filters remain in place with the BufferedWaveProvider as the "root". Whenever you call BufferedWaveProvider.AddSamples(), the data will go through all these filters.
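For completeness, a sketch of the wiring (the steps above don't show it; it assumes waveIn is the WasapiLoopbackCapture from the question): the capture's DataAvailable event feeds the root buffer, and WaveOut pulls everything through the chain.
// Assumed glue code: each captured block goes into the BufferedWaveProvider,
// from where WaveOut pulls it through the mono/resample/16-bit/mu-law chain.
waveIn.DataAvailable += (sender, e) => waveBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);
waveIn.StartRecording();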
I cannot find a way to play wav files "chunk-wise", meaning playing one byte chunk (byte[]) after another.
Between the comments in the code, I have a made-up method called playAudio(byte[] b, int firstIndex, int stopIndex).
I wonder if there is a method in some class, similar to that one, that can handle "chunk-wise" audio playing.
int chunkSize = 512;
int chunkIndex = 0;
int fileSize = 512000;
byte[] b = new byte[fileSize];   // was null, which would throw inside Read
try
{
    using (var stream = Properties.Resources.sju_tre_g_i)
    {
        stream.Read(b, 0, fileSize);
    }
    while (chunkIndex < b.Length)
    {
        //-------------The method I would like to use------
        playAudio(b, chunkIndex, chunkSize);
        //-------------------------------------------------
        chunkIndex += chunkSize;
    }
}
catch (Exception)
{
}
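A hedged suggestion (not from the question; it assumes the resource is a standard PCM WAV and that NAudio is acceptable): BufferedWaveProvider accepts raw byte chunks via AddSamples, so a chunk loop like the one above can feed it while a WaveOutEvent plays.
// Sketch: play a WAV chunk by chunk with NAudio's BufferedWaveProvider.
using (var reader = new WaveFileReader("sju_tre_g_i.wav"))   // the path is illustrative
using (var output = new WaveOutEvent())
{
    var buffered = new BufferedWaveProvider(reader.WaveFormat);
    output.Init(buffered);
    output.Play();

    var chunk = new byte[512];
    int read;
    while ((read = reader.Read(chunk, 0, chunk.Length)) > 0)
    {
        // each AddSamples call is one "playAudio(b, chunkIndex, chunkSize)" step;
        // for long files, throttle this loop or enlarge BufferDuration
        buffered.AddSamples(chunk, 0, read);
    }
    Thread.Sleep(500); // crude: let the last buffered chunk drain before disposing
}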
I want to communicate with a DSP over RS232, so I use System.IO.SerialPort. Everything goes well except the reading performance.
Every 200 ms the port receives a packet of 144 bytes, but in my tests the application skips almost every other packet. I printed the system time to the console, and it amazed me that the code below (with length = 140) takes over 200 ms, so the application cannot handle the data in time.
Am I doing something wrong?
Port properties:
BaudRate = 9600
Parity = None
StopBits = One
private byte[] ReadBytesInSpicifiedLength(int length)
{
    byte[] des = new byte[length];
    for (int i = 0; i < length; i++)
    {
        des[i] = (byte)serialPort.ReadByte();
    }
    return des;
}
You're doing a lot of individual I/O calls, which means a lot of kernel transitions. Those are expensive. Not being able to reach 720 bytes per second is surprising, but you can make the data handling an order of magnitude faster by doing block reads:
private byte[] ReadBytesWithSpecifiedLength(int length)
{
    byte[] des = new byte[length];
    serialPort.BaseStream.Read(des, 0, des.Length);
    return des;
}
If you have timeouts enabled, you could get partial reads. Then you need to do something like:
private byte[] ReadBytesWithSpecifiedLength(int length)
{
    byte[] des = new byte[length];
    int recd = 0;
    do
    {
        int partial = serialPort.BaseStream.Read(des, recd, length - recd);
        if (partial == 0) throw new IOException("Transfer Interrupted");
        recd += partial;
    } while (recd < length);
    return des;
}
The nice thing about BaseStream is that it also has async support (via ReadAsync). That's what new C# code should be using.
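For example, an async variant might look roughly like this (a sketch along the same lines; the method name is made up and it hasn't been tested against the poster's device):
// Sketch: read exactly 'length' bytes asynchronously from the serial port.
private async Task<byte[]> ReadBytesAsync(int length, CancellationToken ct = default)
{
    byte[] des = new byte[length];
    int recd = 0;
    while (recd < length)
    {
        int partial = await serialPort.BaseStream.ReadAsync(des, recd, length - recd, ct);
        if (partial == 0) throw new IOException("Transfer Interrupted");
        recd += partial;
    }
    return des;
}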
Best explained with code:
long pieceLength = (long)Math.Pow(2, 18); // simplification
...
public void HashFile(string path)
{
    using (FileStream fin = File.OpenRead(path))
    {
        byte[] buffer = new byte[(int)pieceLength];
        int pieceNum = 0;
        long remaining = fin.Length;
        int done = 0;
        int offset = 0;
        while (remaining > 0)
        {
            while (done < pieceLength)
            {
                int toRead = (int)Math.Min(pieceLength, remaining);
                int read = fin.Read(buffer, offset, toRead);
                // if read == 0, EOF reached
                if (read == 0)
                    break;
                offset += read;
                done += read;
                remaining -= read;
            }
            HashPiece(buffer, pieceNum);
            done = 0;
            pieceNum++;
            buffer = new byte[(int)pieceLength];
        }
    }
}
This works fine if the file is smaller than pieceLength and the outer loop only runs once. However, if the file is larger, it throws this at me on the int read = fin.Read(buffer, offset, toRead); line:
Unhandled Exception: System.ArgumentException: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
   at System.IO.FileStream.Read(Byte[] array, Int32 offset, Int32 count)
done and buffer DO get reinitialized properly. The file is larger than 1 MB.
Thanks in advance
Well, at least one problem is that you're not taking into account the "piece already read" when you work out how much to read. Try this:
int toRead = (int)Math.Min(pieceLength - done, remaining);
And then also adjust where you're reading to within the buffer:
int read = fin.Read(buffer, done, toRead);
(as you're resetting done for the new buffer, but not offset).
Oh, and at that point offset is irrelevant, so remove it.
Then note djna's answer as well - consider the case where for whatever reason you read to the end of the file, but without remaining becoming zero. You may want to consider whether remaining is actually useful at all... why not just keep reading blocks until you get to the end of the stream?
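That simplification might look something like this (a sketch, not the poster's code; it assumes HashPiece can accept a partial final piece):
public void HashFile(string path)
{
    using (FileStream fin = File.OpenRead(path))
    {
        int pieceNum = 0;
        while (true)
        {
            byte[] buffer = new byte[(int)pieceLength];
            int done = 0;
            int read;
            // fill this piece until it is full or the stream ends
            while (done < buffer.Length &&
                   (read = fin.Read(buffer, done, buffer.Length - done)) > 0)
            {
                done += read;
            }
            if (done == 0)
                break;                      // nothing left to hash
            HashPiece(buffer, pieceNum++);  // the last piece may be partial
            if (done < buffer.Length)
                break;                      // end of file reached
        }
    }
}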
You don't adjust the value of "remaining" in this case
if (read == 0)
    break;
The FileStream.Read method's offset and count parameters relate to positions in the buffer, not to positions in the file.
Basically, this should fix it:
int read = fin.Read(buffer, 0, toRead);