Generate a sliding window over a temporarily pushed buffer in C#

I get some big chunks of audio samples pushed by the sound card.
I want to start from the beginning of the chunk and apply a function to part of it. This function checks the zero-crossing rate.
I thought of copying part of the chunk to a temporary buffer - something like a shifted buffer - and always pushing new samples into it. I don't want to miss any samples coming from the sound card (so that previous bytes that haven't been checked yet aren't overrun).
What is the best way to implement this?
This is what my audio-samples event from the sound card looks like:
void myWaveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    // Convert the byte stream into floating-point samples in a loop
    for (int index = 0; index < e.BytesRecorded; index += 2)
    {
        short sample = (short)((e.Buffer[index + 1] << 8) |
                               e.Buffer[index + 0]);
        SamplesBuf.Add(sample / 32768f); // scale to an IEEE 754 32-bit float in [-1, 1)
    }
    // Do some processing on SamplesBuf
}
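One way to get the sliding-window behaviour described above is to append every incoming sample to a pending list and only consume whole windows from its front, so nothing from the sound card is ever dropped. A minimal sketch follows; WindowSize, HopSize and ZeroCrossingRate are illustrative names, not part of the original code:
using System.Collections.Generic;

private readonly List<float> pending = new List<float>();
private const int WindowSize = 1024; // samples per analysis window
private const int HopSize = 256;     // how far the window slides each step

void myWaveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    for (int index = 0; index < e.BytesRecorded; index += 2)
    {
        short sample = (short)((e.Buffer[index + 1] << 8) | e.Buffer[index + 0]);
        pending.Add(sample / 32768f);
    }
    // Consume complete windows; leftover samples stay queued for the next event,
    // so no samples are ever missed between DataAvailable calls.
    while (pending.Count >= WindowSize)
    {
        float zcr = ZeroCrossingRate(pending, 0, WindowSize);
        // ... use zcr ...
        pending.RemoveRange(0, HopSize); // slide the window forward
    }
}

static float ZeroCrossingRate(List<float> s, int start, int length)
{
    int crossings = 0;
    for (int i = start + 1; i < start + length; i++)
        if ((s[i - 1] >= 0f) != (s[i] >= 0f))
            crossings++;
    return (float)crossings / length;
}
A circular buffer would avoid the copying done by RemoveRange; the List keeps the sketch short.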


C# screen transfer over socket - efficient ways to improve

That's how I wrote your beautiful code (with some simple changes to make it easier for me to understand):
private void Form1_Load(object sender, EventArgs e)
{
    prev = GetDesktopImage(); // get a screenshot of the desktop
    cur = GetDesktopImage();  // get a screenshot of the desktop
    var locked1 = cur.LockBits(new Rectangle(0, 0, cur.Width, cur.Height),
                               ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    var locked2 = prev.LockBits(new Rectangle(0, 0, prev.Width, prev.Height),
                                ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    ApplyXor(locked1, locked2);
    compressionBuffer = new byte[1920 * 1080 * 4];
    // Compressed buffer -- where the data goes that we'll send.
    int backbufSize = LZ4.LZ4Codec.MaximumOutputLength(this.compressionBuffer.Length) + 4;
    backbuf = new CompressedCaptureScreen(backbufSize);
    MessageBox.Show(compressionBuffer.Length.ToString());
    int length = Compress();
    MessageBox.Show(backbuf.Data.Length.ToString()); // prints the new buffer size
}
The compression buffer length is, for example, 8294400,
and backbuf.Data.Length is 8326947.
I didn't like the compression suggestions, so here's what I would do.
You don't want to compress a video stream (so MPEG, AVI, etc. are out of the question -- those codecs aren't designed to be real-time) and you don't want to compress individual pictures (that's just wasteful).
Basically what you want to do is detect whether things change and send only the differences. You're on the right track with that; most video compressors do the same. You also want a fast compression/decompression algorithm; especially if you go to a higher FPS, that will become more relevant.
Differences. First off, eliminate all branches in your code, and make sure memory access is sequential (e.g. iterate x in the inner loop). The latter gives you cache locality. As for the differences themselves, I'd probably use a 64-bit XOR; it's easy, branchless and fast.
If you want performance, it's probably better to do this in C++: The current C# implementation doesn't vectorize your code, and that will help you a great deal here.
Do something like this (I'm assuming a 32-bit pixel format):
for (int y = 0; y < height; ++y) // change to Parallel.For if you like
{
    ulong* row1 = (ulong*)(image1BasePtr + image1Stride * y);
    ulong* row2 = (ulong*)(image2BasePtr + image2Stride * y);
    for (int x = 0; x < width / 2; ++x) // one ulong spans two 32-bit pixels
        row2[x] ^= row1[x];
}
Fast compression and decompression usually means simpler compression algorithms. https://code.google.com/p/lz4/ is such an algorithm, and there's a proper .NET port available as well. You might want to read up on how it works too; LZ4 has a streaming feature, and if you can make it handle 2 images instead of 1, that will probably give you a nice compression boost.
All in all, if you're trying to compress white noise, it simply won't work and your frame rate will drop. One way to solve this is to reduce the colors if you have too much 'randomness' in a frame. A measure for randomness is entropy, and there are several ways to get a measure of the entropy of a picture ( https://en.wikipedia.org/wiki/Entropy_(information_theory) ). I'd stick with a very simple one: check the size of the compressed picture -- if it's above a certain limit, reduce the number of bits; if below, increase the number of bits.
Note that increasing and decreasing bits is not done with shifting in this case; you don't need your bits to be removed, you simply need your compression to work better. It's probably just as good to use a simple 'AND' with a bitmask. For example, if you want to drop 2 bits, you can do it like this:
for (int y = 0; y < height; ++y) // change to Parallel.For if you like
{
    ulong* row1 = (ulong*)(image1BasePtr + image1Stride * y);
    ulong* row2 = (ulong*)(image2BasePtr + image2Stride * y);
    ulong mask = 0xFFFCFCFCFFFCFCFC; // keep alpha, drop the low 2 bits of each color channel
    for (int x = 0; x < width / 2; ++x)
        row2[x] = (row2[x] ^ row1[x]) & mask;
}
PS: I'm not sure what I would do with the alpha component; I'll leave that up to your experimentation.
Good luck!
The long answer
I had some time to spare, so I just tested this approach. Here's some code to support it all.
This code normally runs at over 130 FPS with nice, constant memory pressure on my laptop, so the bottleneck shouldn't be here anymore. Note that you need LZ4 to get this working and that LZ4 is aimed at high speed, not high compression ratios. A bit more on that later.
First we need something that we can use to hold all the data we're going to send. I'm not implementing the sockets stuff itself here (although that should be pretty simple using this as a start); I mainly focused on getting the data you need to send ready.
// The thing you send over a socket
public class CompressedCaptureScreen
{
    public CompressedCaptureScreen(int size)
    {
        this.Data = new byte[size];
        this.Size = 4;
    }
    public int Size;
    public byte[] Data;
}
We also need a class that will hold all the magic:
public class CompressScreenCapture
{
Next, if I'm running high performance code, I make it a habit to preallocate all the buffers first. That'll save you time during the actual algorithmic stuff. 4 buffers of 1080p is about 33 MB, which is fine - so let's allocate that.
public CompressScreenCapture()
{
    // Initialize with black screen; get bounds from screen.
    this.screenBounds = Screen.PrimaryScreen.Bounds;
    // Initialize 2 buffers - 1 for the current and 1 for the previous image
    prev = new Bitmap(screenBounds.Width, screenBounds.Height, PixelFormat.Format32bppArgb);
    cur = new Bitmap(screenBounds.Width, screenBounds.Height, PixelFormat.Format32bppArgb);
    // Clear the 'prev' buffer - this is the initial state
    using (Graphics g = Graphics.FromImage(prev))
    {
        g.Clear(Color.Black);
    }
    // Compression buffer -- we don't really need this but I'm lazy today.
    compressionBuffer = new byte[screenBounds.Width * screenBounds.Height * 4];
    // Compressed buffer -- where the data goes that we'll send.
    int backbufSize = LZ4.LZ4Codec.MaximumOutputLength(this.compressionBuffer.Length) + 4;
    backbuf = new CompressedCaptureScreen(backbufSize);
}
private Rectangle screenBounds;
private Bitmap prev;
private Bitmap cur;
private byte[] compressionBuffer;
private int backbufSize;
private CompressedCaptureScreen backbuf;
private int n = 0;
First thing to do is capture the screen. This is the easy part: simply fill the bitmap of the current screen:
private void Capture()
{
    // Fill 'cur' with a screenshot
    using (var gfxScreenshot = Graphics.FromImage(cur))
    {
        gfxScreenshot.CopyFromScreen(screenBounds.X, screenBounds.Y, 0, 0,
                                     screenBounds.Size, CopyPixelOperation.SourceCopy);
    }
}
As I said, I don't want to compress 'raw' pixels. Instead, I'd much rather compress XOR masks of the previous and current images. Most of the time this will give you a whole lot of 0's, which are easy to compress:
private unsafe void ApplyXor(BitmapData previous, BitmapData current)
{
    byte* prev0 = (byte*)previous.Scan0.ToPointer();
    byte* cur0 = (byte*)current.Scan0.ToPointer();
    int height = previous.Height;
    int width = previous.Width;
    int halfwidth = width / 2; // number of ulongs per row (each covers 2 pixels)
    fixed (byte* target = this.compressionBuffer)
    {
        ulong* dst = (ulong*)target;
        for (int y = 0; y < height; ++y)
        {
            ulong* prevRow = (ulong*)(prev0 + previous.Stride * y);
            ulong* curRow = (ulong*)(cur0 + current.Stride * y);
            for (int x = 0; x < halfwidth; ++x)
            {
                *(dst++) = curRow[x] ^ prevRow[x];
            }
        }
    }
}
For the compression algorithm I simply pass the buffer to LZ4 and let it do its magic.
private int Compress()
{
    // Grab the backbuf in an attempt to update it with new data
    var backbuf = this.backbuf;
    backbuf.Size = LZ4.LZ4Codec.Encode(
        this.compressionBuffer, 0, this.compressionBuffer.Length,
        backbuf.Data, 4, backbuf.Data.Length - 4);
    // Store the compressed size in the first 4 bytes so the receiver knows how much to read
    Buffer.BlockCopy(BitConverter.GetBytes(backbuf.Size), 0, backbuf.Data, 0, 4);
    return backbuf.Size;
}
One thing to note here is that I make it a habit to put everything in my buffer that I need to send over the TCP/IP socket. I don't want to move data around if I can easily avoid it, so I'm simply putting everything that I need on the other side there.
As for the sockets themselves, you can use async TCP sockets here (I would), but if you do, you will need to add an extra buffer.
The only thing that remains is to glue everything together and put some statistics on the screen:
public void Iterate()
{
    Stopwatch sw = Stopwatch.StartNew();
    // Capture a screen:
    Capture();
    TimeSpan timeToCapture = sw.Elapsed;
    // Lock both images:
    var locked1 = cur.LockBits(new Rectangle(0, 0, cur.Width, cur.Height),
                               ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    var locked2 = prev.LockBits(new Rectangle(0, 0, prev.Width, prev.Height),
                                ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    try
    {
        // Xor screen:
        ApplyXor(locked2, locked1);
        TimeSpan timeToXor = sw.Elapsed;
        // Compress screen:
        int length = Compress();
        TimeSpan timeToCompress = sw.Elapsed;
        if ((++n) % 50 == 0)
        {
            Console.Write("Iteration: {0:0.00}s, {1:0.00}s, {2:0.00}s " +
                          "{3} Kb => {4:0.0} FPS \r",
                          timeToCapture.TotalSeconds, timeToXor.TotalSeconds,
                          timeToCompress.TotalSeconds, length / 1024,
                          1.0 / sw.Elapsed.TotalSeconds);
        }
    }
    finally
    {
        cur.UnlockBits(locked1);
        prev.UnlockBits(locked2);
    }
    // Swap buffers after unlocking, so each lock is released
    // on the same bitmap that created it:
    var tmp = cur;
    cur = prev;
    prev = tmp;
}
Note that I reduce Console output to ensure that's not the bottleneck. :-)
Simple improvements
It's a bit wasteful to compress all those 0's, right? While doing the XOR it's pretty easy to track whether anything changed at all with a simple boolean (and, if you like, the min and max y positions that have data):
ulong tmp = curRow[x] ^ prevRow[x];
*(dst++) = tmp;
hasdata |= tmp != 0;
You also probably don't want to call Compress if you don't have to.
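For instance (a tiny sketch; hasdata comes from the XOR loop above, and the send step is hypothetical):
// Skip compression and sending entirely when nothing changed.
if (hasdata)
{
    int length = Compress();
    // ... send backbuf over the socket ...
}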
After adding this feature you'll get something like this on your screen:
Iteration: 0.00s, 0.01s, 0.01s 1 Kb => 152.0 FPS
Using another compression algorithm might also help. I stuck to LZ4 because it's simple to use, it's blazing fast and compresses pretty well -- still, there are other options that might work better. See http://fastcompression.blogspot.nl/ for a comparison.
If you have a bad connection or if you're streaming video over a remote connection, all this won't work. Best to reduce the pixel values here. That's quite simple: apply a simple 64-bit mask during the xor to both the previous and current picture... You can also try using indexed colors - anyhow, there's a ton of different things you can try here; I just kept it simple because that's probably good enough.
You can also use Parallel.For for the xor loop; personally I didn't really care about that.
A bit more challenging
If you have 1 server that is serving multiple clients, things will get a bit more challenging, as they will refresh at different rates. We want the fastest refreshing client to determine the server speed - not the slowest. :-)
To implement this, the relation between prev and cur has to change. If we simply 'xor' away like above, we'll end up with a completely garbled picture on the slower clients.
To solve that, we don't want to swap prev anymore: it should hold key frames (which you refresh when the compressed data becomes too big), and cur will hold the incremental data from the 'xor' results. This means you can basically grab an arbitrary 'xor'red frame and send it over the line - as long as the prev bitmap is recent.
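A hedged sketch of what the per-frame logic might look like under that scheme; keyFrameLimit, PromoteToKeyFrame, SendKeyFrame and SendDelta are illustrative names, not part of the code above:
// Each frame: XOR 'cur' against the fixed key frame 'prev', then compress.
ApplyXor(lockedPrev, lockedCur);
int length = Compress();
if (length > keyFrameLimit)
{
    // The delta stopped compressing well: make the current capture the new
    // key frame and send it whole so every client can resynchronize from it.
    PromoteToKeyFrame(); // hypothetical: copy cur into prev
    SendKeyFrame(cur);   // hypothetical: compress and send the raw frame
}
else
{
    SendDelta(backbuf);  // hypothetical: send the compressed XOR delta
}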
H264 or Equivalent Codec Streaming
There are various compressed streaming formats available that do almost everything you could do to optimize screen sharing over a network, and there are many open source and commercial libraries for streaming.
Screen transfer in Blocks
H264 already does this, but if you want to do it yourself, you can divide the screen into smaller blocks of 100x100 pixels, compare each block with its previous version, and send only the changed blocks over the network, as in the sketch below.
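A rough sketch of such a block-level dirty check; the 100x100 block size matches the description above, while the names and the 32bpp assumption are mine:
const int BlockSize = 100;

// Returns true if the block at block coordinates (bx, by)
// differs between two 32bpp frames stored as raw byte arrays.
static bool BlockChanged(byte[] prevFrame, byte[] curFrame,
                         int stride, int width, int height, int bx, int by)
{
    int x0 = bx * BlockSize, y0 = by * BlockSize;
    int w = Math.Min(BlockSize, width - x0);
    int h = Math.Min(BlockSize, height - y0);
    for (int y = y0; y < y0 + h; y++)
    {
        int rowStart = y * stride + x0 * 4; // 4 bytes per pixel
        for (int i = 0; i < w * 4; i++)
            if (prevFrame[rowStart + i] != curFrame[rowStart + i])
                return true;
    }
    return false;
}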
Window Render Information
Microsoft RDP does a lot better: it does not send the screen as a raster image; instead it analyzes the screen and creates blocks based on the windows on it. It then analyzes the contents and sends an image only when needed; if there is a text box with some text in it, RDP sends the information needed to render the text box - the text, the font, and other properties. So instead of sending an image, it sends instructions on what to render.
You can combine all of these techniques into a mixed protocol that sends screen blocks as images along with other rendering information.
Instead of handling data as an array of bytes, you can handle it as an array of integers.
int* p = (int*)((byte*)scan0.ToPointer() + y * stride);
int* p2 = (int*)((byte*)scan02.ToPointer() + y * stride2);
for (int x = 0; x < nWidth; x++)
{
    // always copy the complete pixel when a difference is found
    if (*p2 != 0)
        *p = *p2;
    ++p;
    ++p2;
}

Inserting adjustable delay on each Input Source

I'm currently using NAudio to create a mixer with multiple inputs and multiple outputs.
For each input, while the audio is playing, I would like to be able to add a sample sized delay whenever I choose to.
The effect will be similar to pausing for, say 100 samples, and then playing again.
For example, a mixingSampleProvider is currently playing audio to output channel 5 via a multiplexingSampleProvider. On a certain button click, I would like to delay the mixingSampleProvider by 200 samples before it continues playing from the point where it paused.
I would like to know if it's possible to do so, and what options can be looked at to achieve this effect.
Edit:
The specific problem that needs fixing is the knowledge gap I have in inserting zeros into the buffer while pushing all the remaining samples back, all while the audio is playing. I'm not looking for code to be written for me, but for resources to help me create a new class to solve this issue - or to find out whether it is even possible.
Edit 2: Code I have tried (failed)
public int Read(float[] buffer, int offset, int count)
{
    int sampleRead = source.Read(buffer, offset, count);
    int delaySamples = Math.Min(count, DelayBySamples - delayPos);
    float[] copyBuffer = buffer; // BUG: copies the reference, not the data, so the
                                 // second loop reads values the first loop already zeroed
    for (int n = 0; n < delaySamples; n++)
    {
        buffer[offset + n] = 0;
    }
    for (int n = 0; n < sampleRead; n++)
    {
        buffer[offset + delaySamples + n] = copyBuffer[offset + n]; // can also write past 'count'
    }
    delayPos += delaySamples;
    sampleRead += delaySamples;
    return sampleRead;
}
You should implement this by creating your own custom ISampleProvider that wraps each input to your mixer (the Decorator pattern). Then add a method that tells it to insert n samples of silence. In the Read method of your sample provider, if silence is required, put up to the desired number of silent samples into the read buffer; when the pause finishes, start returning samples from the source provider again.
You can have a look at the source code of the OffsetSampleProvider for inspiration; it allows you to add a specified number of silence samples at the start of your stream.
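A minimal sketch of such a decorator, assuming Read and InsertSilence are called from compatible contexts; the class name SilenceInjectingSampleProvider and the InsertSilence method are illustrative - only ISampleProvider itself is NAudio's:
using System;
using NAudio.Wave;

public class SilenceInjectingSampleProvider : ISampleProvider
{
    private readonly ISampleProvider source;
    private int silenceRemaining; // samples of silence still to emit

    public SilenceInjectingSampleProvider(ISampleProvider source)
    {
        this.source = source;
    }

    public WaveFormat WaveFormat { get { return source.WaveFormat; } }

    // Call this (e.g. from a button click) to pause the source for n samples.
    // For multi-channel audio, pass a multiple of WaveFormat.Channels
    // so the channels stay aligned.
    public void InsertSilence(int samples)
    {
        silenceRemaining += samples;
    }

    public int Read(float[] buffer, int offset, int count)
    {
        int written = 0;
        // Emit pending silence first, without consuming the source,
        // so the source resumes exactly where it left off.
        int silence = Math.Min(count, silenceRemaining);
        if (silence > 0)
        {
            Array.Clear(buffer, offset, silence);
            silenceRemaining -= silence;
            written = silence;
        }
        if (written < count)
        {
            written += source.Read(buffer, offset + written, count - written);
        }
        return written;
    }
}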

NAudio FFT result gives intensity at all frequencies in C#

I have a working implementation of NAudio's WASAPI loopback recording and an FFT of the data.
Most of the data I get is just as it should be, but every once in a while (at intervals from 10 seconds to minutes) it shows amplitude on almost all frequencies.
Basically, the picture rolls from right to left with time, with frequencies on a logarithmic scale and the lowest frequencies at the bottom. The lines are the errors; as far as I can tell, they are not supposed to be there.
I get the audio buffer and send the samples to an aggregator (which applies a Hamming window) that runs NAudio's FFT. I have checked the FFT result before modifying it in any way (the picture is not the raw FFT output, but decibel-scaled), confirming that the FFT result itself contains those lines. I should also point out that the picture is drawn with LockBits, so at first I thought I had something wrong with the logic there, but that's why I checked the raw FFT output, which shows the same problem.
Well, I could be wrong and the problem might be somewhere I said it isn't, but it really seems to originate from the FFT or the buffer data (the data itself or the aggregation of samples). Somehow I doubt the buffer itself is corrupted like this.
If anyone has any idea what could cause this I would greatly appreciate it!
UPDATE
So I decided to draw the whole FFT result range rather than half of it, and it showed something strange. I'm no FFT expert, but I thought a Fourier transform of a real signal was supposed to give a result that is mirrored around the middle. That is certainly not the case here.
The picture is on a linear scale, so the exact middle of the picture is the middle point of the FFT result; the bottom is the first bin and the top is the last.
I was playing a 10 kHz sine wave, which produces the two horizontal lines there, but the top part is beyond me. The lines also seem to be mirrored around the bottom quarter of the picture, which seems strange to me as well.
UPDATE 2
So I increased the FFT size from 4096 to 8192 and tried again. This is the output while I was varying the sine frequency.
It would seem the result is mirrored twice: once in the middle, and then again in the top and bottom halves. The huge lines are now gone, and the smaller lines only seem to appear in the bottom half now.
After some further testing with different FFT lengths, it seems the lines are completely random in that regard.
UPDATE 3
I have been testing many things. The latest thing I added was overlapping of samples, so that the last half of the sample array is reused at the beginning of the next FFT. With the Hamming and Hann windows this gives me massive intensities (much like in the second picture I posted), but not with Blackman-Harris. Disabling the overlap removes the biggest errors for every window function. The smaller errors, like those in the top picture, still remain even with the BH window. I still have no idea why those lines appear.
My current form allows control over which window function to use (of the three previously mentioned), overlapping (on/off), and multiple different drawing options, so I can compare the effect of each parameter as it changes.
I shall investigate further (I am quite sure I have made a mistake at some point) but good suggestions are more than welcome!
The problem was in the way I handled the data arrays. It's working like a charm now.
Code (excess removed, and I might have introduced mistakes):
// Other inputs are also usable. Just look through the NAudio library.
private IWaveIn waveIn;
private static int fftLength = 8192; // NAudio fft wants powers of two!
// There might be a sample aggregator in NAudio somewhere but I made a variation for my needs
private SampleAggregator sampleAggregator = new SampleAggregator(fftLength);

public Main()
{
    sampleAggregator.FftCalculated += new EventHandler<FftEventArgs>(FftCalculated);
    sampleAggregator.PerformFFT = true;
    // Here you decide what you want to use as the waveIn.
    // There are many options in NAudio and you can use other streams/files.
    // Note that the code varies for each different source.
    waveIn = new WasapiLoopbackCapture();
    waveIn.DataAvailable += OnDataAvailable;
    waveIn.StartRecording();
}
void OnDataAvailable(object sender, WaveInEventArgs e)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new EventHandler<WaveInEventArgs>(OnDataAvailable), sender, e);
    }
    else
    {
        byte[] buffer = e.Buffer;
        int bytesRecorded = e.BytesRecorded;
        int bufferIncrement = waveIn.WaveFormat.BlockAlign;
        for (int index = 0; index < bytesRecorded; index += bufferIncrement)
        {
            float sample32 = BitConverter.ToSingle(buffer, index);
            sampleAggregator.Add(sample32);
        }
    }
}

void FftCalculated(object sender, FftEventArgs e)
{
    // Do something with e.Result!
}
And the SampleAggregator class:
using System;
using NAudio.Dsp; // The Complex and FFT are here!

class SampleAggregator
{
    // FFT
    public event EventHandler<FftEventArgs> FftCalculated;
    public bool PerformFFT { get; set; }

    // This Complex is NAudio's own!
    private Complex[] fftBuffer;
    private FftEventArgs fftArgs;
    private int fftPos;
    private int fftLength;
    private int m;

    public SampleAggregator(int fftLength)
    {
        if (!IsPowerOfTwo(fftLength))
        {
            throw new ArgumentException("FFT Length must be a power of two");
        }
        this.m = (int)Math.Log(fftLength, 2.0);
        this.fftLength = fftLength;
        this.fftBuffer = new Complex[fftLength];
        this.fftArgs = new FftEventArgs(fftBuffer);
    }

    bool IsPowerOfTwo(int x)
    {
        return (x & (x - 1)) == 0;
    }

    public void Add(float value)
    {
        if (PerformFFT && FftCalculated != null)
        {
            // Remember the window function! There are many others as well.
            fftBuffer[fftPos].X = (float)(value * FastFourierTransform.HammingWindow(fftPos, fftLength));
            fftBuffer[fftPos].Y = 0; // This is always zero with audio.
            fftPos++;
            if (fftPos >= fftLength)
            {
                fftPos = 0;
                FastFourierTransform.FFT(true, m, fftBuffer);
                FftCalculated(this, fftArgs);
            }
        }
    }
}
public class FftEventArgs : EventArgs
{
    [DebuggerStepThrough] // requires using System.Diagnostics;
    public FftEventArgs(Complex[] result)
    {
        this.Result = result;
    }
    public Complex[] Result { get; private set; }
}
And that is it I think. I might have missed something though.
Hope this helps!

C# serial command to move Arduino servo X degrees?

I have an Arduino wired up with a servo (pin 9, 5.5 V, and ground); it will run with any ol' test sketch on the Arduino itself. However, when I send a serial command to move it, nothing happens. The RX light flashes, so I know the Arduino is getting the data. I think the issue is in my byte conversion.
Code Time:
Arduino Code:
#include <Servo.h>

Servo myservo; // create servo object to control a servo
               // a maximum of eight servo objects can be created

void setup()
{
    Serial.begin(9600); // required before Serial.available()/Serial.read() will work
    // attaches the servo on pin 9 to the servo object and sets the rotation to 0
    myservo.attach(9);
    myservo.write(0);
}

int pos1 = 0;
int pos2 = 0;
int pos3 = 0;
int totalMove = 0;

void loop()
{
    // note: the original condition also required totalMove > 0, but totalMove
    // starts at 0 and is never updated, so the body could never run
    if (Serial.available() > 0)
    {
        pos1 = Serial.read() - '0';
        // pos2 = Serial.read() - '0';
        // pos3 = Serial.read() - '0';
        // totalMove = ((pos3) + (pos2*10) + pos1*100);
        myservo.write(pos1);
    }
}
You see the other pos holders because eventually I would like to be able to send values larger than 9; for now, however, I just need to get it to respond :)
C# code:
public void moveServo()
{
    if (!serialPort1.IsOpen)
    {
        Console.WriteLine("Oops");
        serialPort1.Open();
        return;
    }
    serialPort1.DtrEnable = true;
    serialPort1.DataReceived +=
        new System.IO.Ports.SerialDataReceivedEventHandler(serialPort1_DataReceived);
    serialPort1.Write(new byte[] { 57 }, 0, 1);
}
Any ideas?
Have you made sure it works using other software? That's typically the first step, even if you have to use HyperTerminal. That will shake out any cable problems and give you the correct parameters.
I also recommend PortMon from Sysinternals. It lets you monitor serial port activity while your application runs.
Make sure you're setting all the serial port parameters: baud rate, data bits, stop bits, parity, handshake, and read and write timeouts. You can also set the value to use for a NewLine character.
Also, rather than relying on the DataReceived event, you might try reading the data yourself.
You are sending this byte:
serialPort1.Write(new byte[] { 57 }, 0, 1);
which is the character '9'. The receiver code is
pos1 = Serial.read() - '0';
which means that pos1 gets the numeric value 9 (note the missing quotes - it's the number, not the character). This value is then written directly to the Servo instance:
myservo.write(pos1);
Putting all the parts together: you can effectively send only the values 0 to 9 to the servo, but the reference page tells you that write expects the range 0 to 180. Sending only 0 to 9 might just wiggle it a little bit.
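A minimal sketch of one fix, assuming the Arduino sketch is changed to pass the received byte straight to the servo (if (Serial.available() > 0) myservo.write(Serial.read());), so that one raw byte carries the full 0-180 angle:
// Send the angle as a single raw byte (0-180) instead of an ASCII digit.
byte angle = 90; // target position in degrees
serialPort1.Write(new byte[] { angle }, 0, 1);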
Maybe it's because of your logic level. The LPT and serial port outputs are 2.5 V, and some drivers need 5 V to set and reset, so you may need an IC like the MAX232 to convert the logic level from 2.5 V to 5 V.

Does XNA have a BGRA/BGRX surface format?

I have a video stream coming in from the Kinect. Its data is packed in a 32-bit BGRX format. I would like to move this data directly into a Texture2D, but I can't find a matching SurfaceFormat. The closest thing I can find is SurfaceFormat.Color, which looks to be 32-bit RGBX.
Assuming there is no compatible format, what is the quickest way to convert and push the data to a Texture2D?
I was thinking something like this would be good, but it seems to slow down the framerate:
Edit: I changed the algorithm a bit and it seems to run decently now.
void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    byte x;
    for (int i = 0; i < e.ImageFrame.Image.Bits.Length; i += 4)
    {
        // swap the B and R bytes in place
        x = e.ImageFrame.Image.Bits[i];
        e.ImageFrame.Image.Bits[i] = e.ImageFrame.Image.Bits[i + 2];
        e.ImageFrame.Image.Bits[i + 2] = x;
    }
    canvas.SetData(e.ImageFrame.Image.Bits);
}
This is the fastest thing I have been able to come up with:
void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    byte[] bits = e.ImageFrame.Image.Bits;
    for (int i = 0; i < bits.Length; i += 4)
    {
        // copy with R and B swapped; note that frameBuffer[i + 3] (the X/alpha byte)
        // is never written, so the preallocated buffer must already hold the value you want
        frameBuffer[i] = bits[i + 2];
        frameBuffer[i + 1] = bits[i + 1];
        frameBuffer[i + 2] = bits[i];
    }
    canvas.SetData(frameBuffer);
}
Without the Kinect involved at all I get a framerate of 4200.
Moving the bytes directly to the Texture2D without correcting the format difference yields 3100.
Giving yourself a local variable for e.ImageFrame.Image.Bits and copying the bytes into a preallocated buffer, fixing the byte order as you go, yields 2700.
The algorithm listed in my original question, where you swap the bytes in place and then send them to the Texture2D, yields about 750.
So, to recap: changing to this algorithm bumped the framerate from 750 up to 2700.
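If more speed were ever needed, one further possibility (a hedged sketch, not benchmarked against the code above; it assumes an /unsafe build and a 32bpp BGRX/RGBX layout) is to swizzle the R and B bytes with 32-bit reads and writes instead of three byte copies:
unsafe void ConvertBgrxToRgbx(byte[] src, byte[] dst)
{
    fixed (byte* s0 = src)
    fixed (byte* d0 = dst)
    {
        uint* s = (uint*)s0;
        uint* d = (uint*)d0;
        int count = src.Length / 4;
        for (int i = 0; i < count; i++)
        {
            uint p = s[i];
            // keep G and the X/alpha byte, swap the B and R bytes
            d[i] = (p & 0xFF00FF00u)
                 | ((p & 0x00FF0000u) >> 16)
                 | ((p & 0x000000FFu) << 16);
        }
    }
}
This could be called as ConvertBgrxToRgbx(e.ImageFrame.Image.Bits, frameBuffer) before canvas.SetData(frameBuffer), like the version above.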
