I have a problem using NAudio and WasapiLoopbackCapture...
I'm a beginner and I don't fully understand everything I'm doing.
I want to convert a byte buffer into another buffer with a 44100 Hz, 16-bit wave format.
It works, but I get some sizzle or strange noise after the conversion.
Edit: I tried merging all the buffers without converting them, converting and writing only at the end, and then it works fine. So I think the problem is in Convert16 or readStream. If I merge the converted buffers, the sizzle appears between two buffers. Could it be a length problem? I need to convert each buffer without merging, because I send them over UDP.
The functions Convert16 and readStream are from this topic:
public byte[] Convert16(byte[] input, int length, WaveFormat format)
{
    if (length == 0)
        return new byte[0];

    using (var memStream = new MemoryStream(input, 0, length))
    {
        using (var inputStream = new RawSourceWaveStream(memStream, format))
        {
            var sampleStream = new NAudio.Wave.SampleProviders.WaveToSampleProvider(inputStream);
            var resamplingProvider = new NAudio.Wave.SampleProviders.WdlResamplingSampleProvider(sampleStream, audioRate);
            var ieeeToPCM = new NAudio.Wave.SampleProviders.SampleToWaveProvider16(resamplingProvider);
            var sampleStreams = new NAudio.Wave.StereoToMonoProvider16(ieeeToPCM);
            sampleStreams.RightVolume = 0.5f;
            sampleStreams.LeftVolume = 0.5f;
            return readStream(sampleStreams, length);
        }
    }
}
private byte[] readStream(IWaveProvider waveStream, int length)
{
    byte[] buffer = new byte[length];
    using (var stream = new MemoryStream())
    {
        int read;
        while ((read = waveStream.Read(buffer, 0, length)) > 0)
        {
            stream.Write(buffer, 0, read);
        }
        return stream.ToArray();
    }
}
public void InputBufferToFileCallback(object sender, WaveInEventArgs e)
{
    // Used to see WaveViewer and to test
    baseWriter.Write(e.Buffer, 0, e.BytesRecorded);
    // byte[] convertedTo16 -- PROBLEM IS HERE
    convertedTo16 = Convert16(e.Buffer, e.BytesRecorded, waveFormatIn);
    // Used to see WaveViewer and to test
    convertedWriter.Write(convertedTo16, 0, convertedTo16.Length);
    // Send over UDP in real time
    SendSoundController(convertedTo16);
}
This image shows the difference between Audacity's resampling and mine; the artefact is clearly visible:
https://i.imgur.com/H3PbNYR.png
Thanks and have a good day.
When you're resampling recorded audio, you need to maintain a single resampler that lives across multiple buffers. I've written a couple of articles on "input driven resampling" with NAudio that explain how to do this:
Input driven resampling with NAudio using ACM
Fully managed input driven resampling with WDL
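A minimal sketch of that idea, reusing the same chain of providers from the question but creating it once and keeping it alive between callbacks. The names SetupConversion, OnDataAvailable, bufferedProvider and convertedProvider are illustrative, not from the question; the output buffer sizing is a rough assumption.

BufferedWaveProvider bufferedProvider;        // field, created once in capture setup
NAudio.Wave.IWaveProvider convertedProvider;  // field, end of the persistent conversion chain

void SetupConversion(NAudio.Wave.WaveFormat captureFormat, int targetRate)
{
    // The resampler lives here for the whole capture session, so it keeps
    // its internal filter state between buffers instead of restarting each time.
    bufferedProvider = new NAudio.Wave.BufferedWaveProvider(captureFormat);
    var samples = new NAudio.Wave.SampleProviders.WaveToSampleProvider(bufferedProvider);
    var resampled = new NAudio.Wave.SampleProviders.WdlResamplingSampleProvider(samples, targetRate);
    var pcm16 = new NAudio.Wave.SampleProviders.SampleToWaveProvider16(resampled);
    var mono = new NAudio.Wave.StereoToMonoProvider16(pcm16);
    mono.LeftVolume = 0.5f;
    mono.RightVolume = 0.5f;
    convertedProvider = mono;
}

void OnDataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    // Feed the captured bytes in, then pull out whatever converted audio is ready.
    bufferedProvider.AddSamples(e.Buffer, 0, e.BytesRecorded);
    byte[] outBuffer = new byte[e.BytesRecorded]; // rough upper bound; the real size depends on the formats
    int read = convertedProvider.Read(outBuffer, 0, outBuffer.Length);
    if (read > 0)
    {
        byte[] packet = new byte[read];
        Array.Copy(outBuffer, packet, read);
        SendSoundController(packet); // the UDP send from the question
    }
}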
I have written the following simple test:
[Test]
public void TestUTF8()
{
    var c = "abc☰def";
    var b = Encoding.UTF8.GetBytes(c);
    Assert.That(b.Length, Is.EqualTo(9));

    // Assuming you are reading a byte stream and got a partial result with the first 5 bytes
    var p = Encoding.UTF8.GetChars(b, 0, 5);
    Trace.WriteLine(new string(p));
    Assert.That(p.Length, Is.EqualTo(3));
}
The Trace output is abc� and the last assert fails because p.Length is 4.
However, I want the Trace output to be abc and the last assert to pass: in reality I know the stream contains valid characters, and when the last few bytes are incomplete I just want to leave them alone until more data arrives.
So how can I achieve this in C#?
Encoding.GetChars isn't designed for bytes coming from a stream, where state needs to be kept between calls because a single character might span multiple buffer segments. For that you should use a Decoder obtained from Encoding.GetDecoder. However, Decoder.Convert is quite low-level, giving you control over both the input and output buffers, and is somewhat awkward to use. Decoder.GetChars is easier and still does the important work of storing state between calls. We can easily expand on Peter Duniho's answer for an arbitrary buffer size:
public static void Main(string[] args)
{
    var c = "abc☰def";
    var b = Encoding.UTF8.GetBytes(c);
    var result = DecodeFromStream(new MemoryStream(b), Encoding.UTF8, 3);
    Console.WriteLine(result);
    Console.WriteLine(c == result);
}

private static string DecodeFromStream(Stream dataStream, Encoding encoding, int bufferSize)
{
    Decoder decoder = encoding.GetDecoder();
    StringBuilder sb = new StringBuilder();
    int inputByteCount;
    byte[] inputBuffer = new byte[bufferSize];
    char[] charBuffer = new char[encoding.GetMaxCharCount(inputBuffer.Length)];
    while ((inputByteCount = dataStream.Read(inputBuffer, 0, inputBuffer.Length)) > 0)
    {
        int readChars = decoder.GetChars(inputBuffer, 0, inputByteCount, charBuffer, 0);
        if (readChars > 0)
            sb.Append(charBuffer, 0, readChars);
    }
    return sb.ToString();
}
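Applied back to the original test, a stateful Decoder gives exactly the desired behaviour on the 5-byte partial read. A minimal sketch, reusing the same string as the test:

var c = "abc☰def";
var b = Encoding.UTF8.GetBytes(c);
Decoder decoder = Encoding.UTF8.GetDecoder();
char[] chars = new char[Encoding.UTF8.GetMaxCharCount(5)];
// Only 'a', 'b' and 'c' come out; the two bytes of the partially received '☰'
// stay buffered inside the decoder until the next call supplies the rest.
int charCount = decoder.GetChars(b, 0, 5, chars, 0);
Console.WriteLine(new string(chars, 0, charCount)); // abc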
I want to communicate with a DSP over RS-232, so I use System.IO.SerialPort. Everything works well except the read performance.
Every 200 ms the port receives a packet of 144 bytes, but in my tests the application skips almost every other packet. I printed the system time to the console, and to my surprise the code below (with length = 140) takes over 200 ms, so the application cannot handle the data in time.
Am I doing something wrong?
Port properties:
BaudRate = 9600
Parity = None
StopBits = One
private byte[] ReadBytesInSpecifiedLength(int length)
{
    byte[] des = new byte[length];
    for (int i = 0; i < length; i++)
    {
        des[i] = (byte)serialPort.ReadByte();
    }
    return des;
}
You're doing a lot of individual I/O calls, which means a lot of kernel transitions. Those are expensive. Not being able to reach 720 bytes per second is surprising, but you can make the data handling an order of magnitude faster by doing block reads:
private byte[] ReadBytesWithSpecifiedLength(int length)
{
    byte[] des = new byte[length];
    serialPort.BaseStream.Read(des, 0, des.Length);
    return des;
}
If you have timeouts enabled, you could get partial reads. Then you need to do something like:
private byte[] ReadBytesWithSpecifiedLength(int length)
{
    byte[] des = new byte[length];
    int recd = 0;
    do
    {
        int partial = serialPort.BaseStream.Read(des, recd, length - recd);
        if (partial == 0) throw new IOException("Transfer Interrupted");
        recd += partial;
    } while (recd < length);
    return des;
}
The nice thing about BaseStream is that it also has async support (via ReadAsync). That's what new C# code should be using.
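For example, an async version of the loop above might look like the following sketch, assuming the same serialPort field; the method name is illustrative and this isn't tested against the questioner's device:

private async Task<byte[]> ReadBytesWithSpecifiedLengthAsync(int length)
{
    byte[] des = new byte[length];
    int recd = 0;
    while (recd < length)
    {
        // ReadAsync avoids blocking a thread while waiting for the next bytes to arrive
        int partial = await serialPort.BaseStream.ReadAsync(des, recd, length - recd);
        if (partial == 0) throw new IOException("Transfer Interrupted");
        recd += partial;
    }
    return des;
}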
I am doing some data chunking and I'm seeing an interesting issue when sending binary data in my response. I can confirm that the length of the byte array is below my data limit of 4 megabytes, but when I receive the message, its total size is over 4 megabytes.
For the example below, I used the largest chunk size I could, so that I could illustrate the issue while still receiving a usable chunk.
The size of the binary data is 3,040,870 bytes on both the service side and the client (once the message is deserialized). However, I can also confirm that the data in the message is actually just under 4 megabytes (I verified this by copying the binary data out of the message and pasting it into a text file).
So, is WCF causing this and, if so, is there anything I can do to prevent it? If not, what might be causing the inflation on my side?
Thanks!
The usual way of sending a byte[] in a SOAP message is to base64-encode the data. This encoding takes about 33% more space than the raw binary (3,040,870 bytes become roughly 3,040,870 × 4 / 3 ≈ 4.05 million characters), which accounts for the size difference almost precisely.
You could adjust the max size or the chunk size slightly so that the end result stays within range, or use another encoding, e.g. MTOM, to eliminate this 33% overhead.
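For illustration, switching a BasicHttpBinding to MTOM in code could look like the sketch below; the binding object and the 4 MB limit are assumptions for the example, not taken from the question's configuration:

// Hypothetical client-side binding: MTOM transmits the byte[] as a binary MIME part
// instead of base64 text inside the SOAP body, avoiding the ~33% inflation.
var binding = new BasicHttpBinding
{
    MessageEncoding = WSMessageEncoding.Mtom,
    MaxReceivedMessageSize = 4 * 1024 * 1024  // the 4 MB limit from the question
};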
If you're stuck with SOAP, you can offset the base64 overhead Tim S. talked about by using the System.IO.Compression library in .NET: compress the data first, before building and sending the SOAP message.
You'd compress with this:
public static byte[] Compress(byte[] data)
{
    using (MemoryStream ms = new MemoryStream())
    {
        using (DeflateStream ds = new DeflateStream(ms, CompressionMode.Compress))
        {
            ds.Write(data, 0, data.Length);
        }
        // The DeflateStream must be closed before reading the result,
        // so that the final compressed block is flushed to the MemoryStream.
        return ms.ToArray();
    }
}
On the receiving end, you'd use this to decompress:
public static byte[] Decompress(byte[] data)
{
    const int BUFFER_SIZE = 256;
    byte[] tempArray = new byte[BUFFER_SIZE];
    List<byte[]> tempList = new List<byte[]>();
    int count = 0;
    int length = 0;
    MemoryStream ms = new MemoryStream(data);
    DeflateStream ds = new DeflateStream(ms, CompressionMode.Decompress);
    while ((count = ds.Read(tempArray, 0, BUFFER_SIZE)) > 0) {
        if (count == BUFFER_SIZE) {
            tempList.Add(tempArray);
            tempArray = new byte[BUFFER_SIZE];
        } else {
            byte[] temp = new byte[count];
            Array.Copy(tempArray, 0, temp, 0, count);
            tempList.Add(temp);
        }
        length += count;
    }
    // Stitch the collected chunks back into one contiguous array
    byte[] retVal = new byte[length];
    count = 0;
    foreach (byte[] temp in tempList) {
        Array.Copy(temp, 0, retVal, count, temp.Length);
        count += temp.Length;
    }
    return retVal;
}
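A quick round-trip with the two helpers, just to show the intended usage; the file name here is a hypothetical stand-in for wherever the byte[] in the question comes from:

byte[] payload = File.ReadAllBytes("chunk.bin");   // hypothetical source of the original data
byte[] compressed = Compress(payload);             // put this in the SOAP message
byte[] restored = Decompress(compressed);          // on the receiving side
System.Diagnostics.Debug.Assert(restored.Length == payload.Length);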
I'm trying to encrypt and decrypt a file stream over a socket using RijndaelManaged, but I keep bumping into the exception
CryptographicException: Length of the data to decrypt is invalid.
at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount)
at System.Security.Cryptography.CryptoStream.FlushFinalBlock()
at System.Security.Cryptography.CryptoStream.Dispose(Boolean disposing)
The exception is thrown at the end of the using statement in receiveFile, when the whole file has been transferred.
I tried searching the web but only found answers to problems that arise from using Encoding when encrypting and decrypting a single string. I use a FileStream, so I don't specify any Encoding, so that should not be the problem. These are my methods:
private void transferFile(FileInfo file, long position, long readBytes)
{
    // transfer on socket stream
    Stream stream = new FileStream(file.FullName, FileMode.Open);
    if (position > 0)
    {
        stream.Seek(position, SeekOrigin.Begin);
    }
    // if this should be encrypted, wrap the encryptor stream
    if (UseCipher)
    {
        stream = new CryptoStream(stream, streamEncryptor, CryptoStreamMode.Read);
    }
    using (stream)
    {
        int read;
        byte[] array = new byte[8096];
        while ((read = stream.Read(array, 0, array.Length)) > 0)
        {
            streamSocket.Send(array, 0, read, SocketFlags.None);
            position += read;
        }
    }
}
private void receiveFile(FileInfo transferFile)
{
    byte[] array = new byte[8096];
    // receive file
    Stream stream = new FileStream(transferFile.FullName, FileMode.Append);
    if (UseCipher)
    {
        stream = new CryptoStream(stream, streamDecryptor, CryptoStreamMode.Write);
    }
    using (stream)
    {
        long position = new FileInfo(transferFile.FullName).Length;
        while (position < transferFile.Length)
        {
            int maxRead = Math.Min(array.Length, (int)(transferFile.Length - position));
            int read = position < array.Length
                ? streamSocket.Receive(array, maxRead, SocketFlags.None)
                : streamSocket.Receive(array, SocketFlags.None);
            stream.Write(array, 0, read);
            position += read;
        }
    }
}
This is the method I use to set up the ciphers. byte[] init is a generated byte array.
private void setupStreamCipher(byte[] init)
{
    RijndaelManaged cipher = new RijndaelManaged();
    cipher.KeySize = cipher.BlockSize = 256; // bit size
    cipher.Mode = CipherMode.ECB;
    cipher.Padding = PaddingMode.ISO10126;
    byte[] keyBytes = new byte[32];
    byte[] ivBytes = new byte[32];
    Array.Copy(init, keyBytes, 32);
    Array.Copy(init, 32, ivBytes, 0, 32);
    streamEncryptor = cipher.CreateEncryptor(keyBytes, ivBytes);
    streamDecryptor = cipher.CreateDecryptor(keyBytes, ivBytes);
}
Does anyone have an idea of what I might be doing wrong?
It looks to me like you're not properly sending the final block. You need to at least FlushFinalBlock() the sending CryptoStream in order to ensure that the final block (which the receiving stream is looking for) is sent.
By the way, CipherMode.ECB is more than likely an epic fail in terms of security for what you're doing. At least use CipherMode.CBC (cipher-block chaining) which actually uses the IV and makes each block dependent on the previous one.
EDIT: Whoops, the enciphering stream is in read mode. In that case you need to make sure you read to EOF so that the CryptoStream can deal with the final block, rather than stopping after readBytes. It's probably easier to control if you run the enciphering stream in write mode.
One more note: You cannot assume that bytes in equals bytes out. Block ciphers have a fixed block size they process, and unless you are using a cipher mode that converts the block cipher to a stream cipher, there will be padding that makes the ciphertext longer than the plaintext.
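A minimal sketch of the write-mode variant suggested above, assuming a NetworkStream wrapped around the question's streamSocket (the NetworkStream wrapper is my addition, not from the question; file, streamSocket and streamEncryptor are the question's own names):

// Encrypt while writing to the socket, then flush the final padded block explicitly.
using (var fileStream = new FileStream(file.FullName, FileMode.Open))
using (var netStream = new NetworkStream(streamSocket))
using (var cryptoStream = new CryptoStream(netStream, streamEncryptor, CryptoStreamMode.Write))
{
    fileStream.CopyTo(cryptoStream);
    cryptoStream.FlushFinalBlock(); // makes sure the receiver gets the final block the decryptor is waiting for
}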
After the comment made by Jeffrey Hantin, I changed receiveFile from this:
using (stream)
{
    FileInfo finfo = new FileInfo(transferFile.FullName);
    long position = finfo.Length;
    while (position < transferFile.Length)
    {
        int maxRead = Math.Min(array.Length, (int)(transferFile.Length - position));
        int read = position < array.Length
            ? streamSocket.Receive(array, maxRead, SocketFlags.None)
            : streamSocket.Receive(array, SocketFlags.None);
        stream.Write(array, 0, read);
        position += read;
    }
}
to this:
using (stream)
{
    int read = array.Length;
    while ((read = streamSocket.Receive(array, read, SocketFlags.None)) > 0)
    {
        stream.Write(array, 0, read);
        if ((read = streamSocket.Available) == 0)
        {
            break;
        }
    }
}
And voilà, it works (thanks to the ever so kind padding that I hadn't bothered with earlier). I'm not sure what happens if Available returns 0 even though all the data hasn't been transferred yet, but I'll deal with that later if it comes up. Thanks for your help, Jeffrey!
Regards.
cipher.Mode = CipherMode.ECB;
Argh! Rolling your own security code is almost always a bad idea.
In my case I just removed the padding and it works.
I commented this out: cipher.Padding = PaddingMode.ISO10126;
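For reference, here is what setupStreamCipher might look like with the mode change suggested above. This is only a sketch: the rest of the transfer code still needs the FlushFinalBlock / read-to-EOF handling discussed earlier, and the padding choice is my assumption.

private void setupStreamCipher(byte[] init)
{
    RijndaelManaged cipher = new RijndaelManaged();
    cipher.KeySize = cipher.BlockSize = 256;
    cipher.Mode = CipherMode.CBC;        // each block depends on the previous one, so the IV is actually used
    cipher.Padding = PaddingMode.PKCS7;  // a common, widely interoperable padding choice
    byte[] keyBytes = new byte[32];
    byte[] ivBytes = new byte[32];       // IV length must match the block size (256 bits here)
    Array.Copy(init, keyBytes, 32);
    Array.Copy(init, 32, ivBytes, 0, 32);
    streamEncryptor = cipher.CreateEncryptor(keyBytes, ivBytes);
    streamDecryptor = cipher.CreateDecryptor(keyBytes, ivBytes);
}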
My task is to decompress a received packet using zlib and then use an algorithm to build a picture from the data.
The good news is that I have the code in C++, but the task is to do it in C#.
C++:
// Read the first values of the packet received
DWORD image[200 * 64] = {0};            // used for the algorithm (width always == 200 and height always == 64)
int imgIndex = 0;                       // used for the algorithm
unsigned char rawbytes_[131072] = {0};  // read below
unsigned char * rawbytes = rawbytes_;   // destination parameter for decompression (ptr)

compressed = r.Read<WORD>();        // the length of the compressed bytes (picture)
uncompressed = r.Read<WORD>();      // the length that should result after decompression
width = r.Read<WORD>();             // the width of the picture
height = r.Read<WORD>();            // the height of the picture
LPBYTE ptr = r.GetCurrentStream();  // the bytes (data that must be decompressed)
outLen = uncompressed;              // copy the length into another variable

// Decompress
if(uncompress((Bytef*)rawbytes, &outLen, ptr, compressed) != Z_OK)
{
    printf("Could not uncompress the image code.\n");
    Disconnect();
    return;
}

// Algorithm to build the picture
// Loop through the data
for(int c = 0; c < (int)height; ++c)
{
    for(int r = 0; r < (int)width; ++r)
    {
        imgIndex = (height - 1 - c) * width + r;
        image[imgIndex] = 0xFF000000;
        if(-((1 << (0xFF & (r & 0x80000007))) & rawbytes[((c * width + r) >> 3)]))
            image[imgIndex] = 0xFFFFFFFF;
    }
}
I'm trying to do this with zlib.NET, but all the demos use code like this to decompress (C#):
private void decompressFile(string inFile, string outFile)
{
    System.IO.FileStream outFileStream = new System.IO.FileStream(outFile, System.IO.FileMode.Create);
    zlib.ZOutputStream outZStream = new zlib.ZOutputStream(outFileStream);
    System.IO.FileStream inFileStream = new System.IO.FileStream(inFile, System.IO.FileMode.Open);
    try
    {
        CopyStream(inFileStream, outZStream);
    }
    finally
    {
        outZStream.Close();
        outFileStream.Close();
        inFileStream.Close();
    }
}
public static void CopyStream(System.IO.Stream input, System.IO.Stream output)
{
    byte[] buffer = new byte[2000];
    int len;
    while ((len = input.Read(buffer, 0, 2000)) > 0)
    {
        output.Write(buffer, 0, len);
    }
    output.Flush();
}
My problem: I don't want to save a file after decompression, because I have to use the algorithm shown in the C++ code.
How do I convert the byte[] array into a stream similar to the one in the C# zlib code, decompress the data, and then convert the stream back into a byte array?
Also, how do I change the zlib.NET code so that it does NOT save files?
Just use MemoryStreams instead of FileStreams:
// Assuming inputData is a byte[]
MemoryStream input = new MemoryStream(inputData);
MemoryStream output = new MemoryStream();
Then you can use output.ToArray() afterwards to get a byte array out.
Note that it's generally better to use using statements instead of a single try/finally block - as otherwise if the first call to Close fails, the rest won't be made. You can nest them like this:
using (MemoryStream output = new MemoryStream())
using (Stream outZStream = new zlib.ZOutputStream(output))
using (Stream input = new MemoryStream(bytes))
{
    CopyStream(input, outZStream);
    return output.ToArray();
}
I just ran into this same issue.
For Completeness... (since this stumped me for several hours)
In the case of ZLib.Net you also have to call finish(), which usually happens during Close(), before you call return output.ToArray()
Otherwise you will get an empty/incomplete byte array from your memory stream, because the ZStream hasn't actually written all of the data yet:
public static void CompressData(byte[] inData, out byte[] outData)
{
    using (MemoryStream outMemoryStream = new MemoryStream())
    using (ZOutputStream outZStream = new ZOutputStream(outMemoryStream, zlibConst.Z_DEFAULT_COMPRESSION))
    using (Stream inMemoryStream = new MemoryStream(inData))
    {
        CopyStream(inMemoryStream, outZStream);
        outZStream.finish();
        outData = outMemoryStream.ToArray();
    }
}

public static void DecompressData(byte[] inData, out byte[] outData)
{
    using (MemoryStream outMemoryStream = new MemoryStream())
    using (ZOutputStream outZStream = new ZOutputStream(outMemoryStream))
    using (Stream inMemoryStream = new MemoryStream(inData))
    {
        CopyStream(inMemoryStream, outZStream);
        outZStream.finish();
        outData = outMemoryStream.ToArray();
    }
}
In this example I'm also using the zlib namespace:
using zlib;
Originally found in this thread:
ZLib decompression
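Tying this back to the question, the decompressed bytes can then stand in for the C++ rawbytes buffer in the pixel loop. A small sketch; compressedPayload is a hypothetical byte[] holding the packet data that follows the four WORD header fields:

static byte[] InflateImageData(byte[] compressedPayload)
{
    byte[] rawbytes;
    DecompressData(compressedPayload, out rawbytes);
    // rawbytes now plays the role of the C++ "rawbytes" buffer in the image loop,
    // and rawbytes.Length should match the "uncompressed" length from the packet header.
    return rawbytes;
}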
I don't have enough points to vote up yet, so...
Thanks to Tim Greaves for the tip about calling finish before ToArray, and to Jon Skeet for the tip about nesting the using statements for streams (which I like much better than try/finally).