C# thermal image server stream for workstation clients

I am Tommaso and I have just signed up. I would like to open a new discussion, hoping it will be interesting.
I am working with a thermal camera (C#, Visual Studio 2012, Windows 7 x64) and I have already created a server that performs the following tasks:
Get the raw frame from the camera
Rotate it, if required
Convert raw pixel values to Kelvin
Calculate the min, average, and max pixel in each frame
Check temperature alerts and alarms
Now I am asked to allow 4 workstations to see the real-time thermal frame stream from the cameras. Unfortunately, in this project these are spread over a very wide area, many meters (600-700 m) from the main server. At 3.75 frames/s, a frame resolution of 640x512 pixels, and a pixel depth of 14 bits (stored in 16), we are talking about 2.5 MB per second. No compression is applied.
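The arithmetic above can be checked quickly (Python used here only as a calculator; the 2-byte storage per 14-bit pixel is taken from the question):

```python
# Raw bandwidth of the uncompressed thermal stream.
width, height = 640, 512        # frame resolution in pixels
bytes_per_pixel = 2             # 14-bit depth stored in 16 bits
fps = 3.75                      # frames per second

frame_bytes = width * height * bytes_per_pixel   # bytes per frame
stream_bytes_per_s = frame_bytes * fps           # bytes per second

print(frame_bytes)          # 655360
print(stream_bytes_per_s)   # 2457600.0  (~2.5 MB/s, matching the estimate)
```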
So I decided to use the frames arriving at the server, creating a socket that listens for the 1 to 4 workstations that need the stream. Each time a client connects, I create a dedicated queue where the main thread enqueues frames and from which the socket thread dequeues them and sends them to the connected client.
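The queue-per-client design described above can be sketched like this (a Python sketch of the shape only; in the actual C# server the equivalent would be something like one BlockingCollection per client):

```python
import queue
import threading

class FrameFanout:
    """Main thread enqueues each frame into one bounded queue per connected client."""

    def __init__(self, max_depth=8):
        self.max_depth = max_depth   # bound memory use if a client is slow
        self.clients = {}            # client id -> that client's frame queue
        self.lock = threading.Lock()

    def add_client(self, client_id):
        q = queue.Queue(maxsize=self.max_depth)
        with self.lock:
            self.clients[client_id] = q
        return q                     # the client's sender thread dequeues from this

    def publish(self, frame):
        with self.lock:
            queues = list(self.clients.values())
        for q in queues:
            try:
                q.put_nowait(frame)
            except queue.Full:
                # Drop the oldest frame rather than stall the camera thread.
                try:
                    q.get_nowait()
                except queue.Empty:
                    pass
                q.put_nowait(frame)
```

Dropping the oldest frame on overflow keeps a slow workstation from backing up the camera thread; for live monitoring, the newest frame matters most.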
Here is my question: given the importance of each single frame, do you suggest using the reliable but heavier TCP, or the simpler UDP, considering the amount of traffic?
Sorry for my prolixity, but it was needed for the explanation.
If you want to know more about my project please ask .
Tommaso

You want to stream video. If a frame doesn't reach its destination, there is no great problem, because after about 250 ms (assuming your video is roughly 4 fps) another frame will be sent. Since no single frame is vital, you are better off using UDP.
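If you do go with UDP, it helps to prefix each datagram with a small header so the receiver can discard stale or out-of-order data; note that a 640x512x2 frame is ~655 KB and must be split across many datagrams (payloads are typically kept under ~1,400 bytes to avoid IP fragmentation). A minimal sketch of such a header (the field layout here is an assumption, not any standard):

```python
import struct

# Header: frame sequence number, fragment index, fragment count (big-endian).
HEADER = struct.Struct(">IHH")

def pack_fragment(seq, frag_index, frag_count, payload):
    """Prepend the header to one fragment of a frame."""
    return HEADER.pack(seq, frag_index, frag_count) + payload

def unpack_fragment(datagram):
    """Split a received datagram back into header fields and payload."""
    seq, frag_index, frag_count = HEADER.unpack_from(datagram)
    return seq, frag_index, frag_count, datagram[HEADER.size:]

def is_stale(seq, newest_seq):
    """Receiver side: keep only fragments of the newest frame seen so far."""
    return seq < newest_seq
```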

Related

AForge - incomplete list of VideoCapabilities - high frame rate

I use AForge and C# to record video from a Logitech BRIO webcam connected via USB. I want to capture at about 60 fps. The camera is capable of this - other applications offer that option for it. However, AForge does not let me choose a VideoCapabilities entry with this frame rate (the list is incomplete). I mean an option with AverageFrameRate = 60 fps (for MaximumFrameRate, AForge does report e.g. 120 fps). In other applications these settings are not tied to a low number of bits per pixel, as I read in other threads. Where is the problem, and how can I solve it?
I understand that if VideoCapabilities has two options with the same resolution, only one of them is shown - the one with the most bits per pixel. Is it possible to choose the other one (with fewer bits per pixel)?

Arduino Mega 2560 with Raspberry Pi 2 project

I am currently working on a project where I need to control 16 pumps, 1 stepper motor, and 2 distance sensors - 21 digital pins and 2 analog pins. I need to make a UI and have this UI send information to the Arduino, which will control my system. I would only need to receive a 1 or 0 from each button press on the UI in order to determine which pump needs to be turned on. I'm using an Arduino Mega 2560 and coding the UI in Visual Studio C#.
I have done various research on serial communication for the Arduino, including using the serialEvent() function and the Firmata library. However, I am having trouble understanding how it all ties together and whether what I want to do is even possible! Here are my questions:
Is this possible?
Is it possible using serialEvent1() ... serialEvent21()? Or using Serial.available() and Serial.read()?
Instead of reading one button click on the UI at a time, can the inputs on the UI be collected and sent to the Arduino as a group, and then have the UI reset and clear out the values?
Any information and/or advice will help! I just need to be pointed in the correct direction!
Thanks
DG
Have you considered the following article?
It uses an Arduino Mega 2560, and the article provides both the C# code and the Arduino code.
It communicates over the serial port and sends data in both directions.
Yes, it is.
The article above uses Serial.print and readSerialInputCommand, which is similar to Serial.read. You can use Serial.read instead if you wish; it performs the same task but returns a different data type.
You can compile the values into a group. If you want to be highly optimized, you can use bitwise operators to pack the 21 pin values into a byte array and send that.
However, since it's only 21 digital pins, I recommend just using a string with each character linked to a pin. E.g. "10110" could set pin0, pin2, and pin3 HIGH and set pin1 and pin4 LOW.
I would recommend not restarting your UI, as it would then need to reconnect to the serial port. Rather, just clear all the values in your code.
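The bit-packing option mentioned above can be sketched like this (a language-agnostic Python sketch; the same bitwise operators work in C# and on the Arduino):

```python
def pack_pins(states):
    """Pack a list of 0/1 pin states into bytes, 8 pins per byte (bit 0 = lowest pin)."""
    packed = bytearray((len(states) + 7) // 8)
    for pin, state in enumerate(states):
        if state:
            packed[pin // 8] |= 1 << (pin % 8)
    return bytes(packed)

def unpack_pins(packed, count):
    """Recover the pin states on the receiving side."""
    return [(packed[pin // 8] >> (pin % 8)) & 1 for pin in range(count)]

# 21 digital pin states fit in 3 bytes instead of a 21-character string.
states = [1, 0, 1, 1, 0] + [0] * 16
wire = pack_pins(states)
assert len(wire) == 3
assert unpack_pins(wire, 21) == states
```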

Sony RemoteAPI delays after starting Liveview

Hello!
I'm writing a C# wrapper around the Sony Remote API, using the Android test app as an example.
A problem has come up with Liveview.
I start liveview streaming with the API method "startLiveview". The liveview data consumer works in a separate thread and simply skips frames when there is no time to draw them all (just like the test app does, as far as I understand).
However, after the startLiveview method has been processed and stream fetching has started, the camera has difficulty processing all other API commands in time.
For example, after my wrapper discovers the camera and connects to it, performs startRecMode and so on, it can take pictures very quickly. But once liveview has started, the camera can no longer process actTakePicture calls in time: I can end up waiting seconds or even tens of seconds before the shutter clicks.
I've tried stopping liveview before taking a picture. It doesn't help - the stopLiveview command suffers from the same problem, taking even minutes to complete (it takes minutes for System.Net.WebClient.UploadString to return).
I've tried using startLiveviewWithSize instead of startLiveview and passing the smallest size available ("M" on the Sony A7R I'm using). No result.
What can I do to stop the liveview reliably or, ideally, to get rid of the performance penalty while liveview is on?
Thank you in advance!
P.S. Using MS VS 2010, .NET Framework 3.5, and a Sony ILCE Alpha 7R camera; all the preprocessing settings I could find are switched off.

Multiple Inputs to MixingWaveProvider cause quality to suffer

I am making a VoIP program for fun, and I have it mostly working. Since my last question, another issue has come up. When two or more voices are played through the client using a MixingWaveProvider, there are strange stutters, clicks, pops, and static in the final mixed audio. Most of the time it sounds like a portion of one person's voice plays, pauses, and lets another person's voice play for a short while. This continues for as long as both are talking (each voice seems to "take turns" outputting to the waveMixer).
I won't bother posting the Speex encoding/decoding code, as this issue happens with or without it. I capture the input through a WaveInEvent, which feeds its data into a UDP network stream. The UDP stream sends the sound data to the other clients.
Here is the code that I use to initialize the WaveOut and MixingWaveProvider32:
waveOut = new DirectSoundOut(settings.GetOutputDevice(), 50);
waveMixer = new MixingWaveProvider32();
waveOut.Init(waveMixer);
waveOut.Play();
When a client connects, I input the received packet data into the user's BufferedWaveProvider:
provider = new BufferedWaveProvider(format) { DiscardOnBufferOverflow = true };
wave16ToFloat = new Wave16ToFloatProvider(provider);
After that, I use this code to add the above 32bit provider to the MixingWaveProvider32:
waveMixer.AddInputStream(wave16ToFloat);
The issue seems less severe with streams added before the MixingWaveProvider32 is passed to WaveOut; however, I really need to be able to add them dynamically - assuming that is even why this happens.
This may have something to do with my network implementation, so I will look into that if nothing else is found here. Could it be that each voice data packet is blocking the next one from being read, causing the back-and-forth kind of sound? If so, how could I buffer the data longer on the server, or wait and send larger chunks from the client?
Edit:
I am almost sure this is caused by the BufferedWaveProviders draining completely several times a second. The packets are not filling them fast enough, so they run dry, leaving nothing to play. As I asked above, is there any way I can send the data from the client in larger chunks? Or can I somehow make the buffers drain more slowly?
Edit 2:
I have now implemented an auto-pausing buffer that keeps itself filled. The buffer unpauses when its internal buffer rises above 1 second of sound and pauses when the data drops below 0.5 seconds. The buffer now hovers around 1 second of sound, and I have verified that it is not running out or pausing the sound mid-stream. Although this should be a good thing, the sound distortion still exists and is just as bad as before. It seems to be something wrong with the mixer or my setup.
Sounds like you have already diagnosed the problem. If the BufferedWaveProviders aren't filling up, you will get gaps of silence. You need to implement some kind of auto-pause that delays playback until enough audio has been buffered. A cheap way to do this is to start each buffer off with five seconds of silence, hopefully allowing another five seconds of audio to arrive while that buffer plays out.
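That auto-pause can be sketched as a small state machine (a sketch only; the thresholds would map onto whatever buffered duration your BufferedWaveProvider reports):

```python
class JitterBuffer:
    """Delays playback until enough audio is buffered, to ride out network jitter."""

    def __init__(self, start_at=1.0, pause_below=0.5):
        self.start_at = start_at        # seconds buffered before playback (re)starts
        self.pause_below = pause_below  # seconds below which playback pauses
        self.buffered = 0.0             # seconds of audio currently buffered
        self.playing = False

    def add(self, seconds):
        """Network thread: audio arrived from a packet."""
        self.buffered += seconds
        if not self.playing and self.buffered >= self.start_at:
            self.playing = True

    def play(self, seconds):
        """Playback thread: consume audio for output; returns seconds actually played."""
        if not self.playing:
            return 0.0
        out = min(seconds, self.buffered)
        self.buffered -= out
        if self.buffered < self.pause_below:
            self.playing = False        # pause and refill instead of playing gaps
        return out
```

The key design point is the hysteresis: pausing at 0.5 s but not resuming until 1 s prevents the buffer from flapping between playing and paused many times a second.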

Best image compression for C#

I'm currently working on software that captures your monitor image and sends it to clients over the internet.
So far I have it working on my local area network, but when I test it over the internet, hardly any of the images get through to the client.
I am using Lidgren for my networking. At the moment, I grab a Bitmap of the screen, convert it to a JPEG at quality 30, gzip it, and send it on its way. Each image is about 80 KB, and I try to send 10 images a second to the client. That works out to roughly a 7 Mbit upload connection; mine is only 2 Mbit.
So basically, is anyone aware of any compression libraries or techniques that would dramatically decrease the size of each image? This might be completely impossible, but I thought I would give it a go.
Any help is much appreciated, Thanks!
Do you really need to send the whole frame each time? Could you not send only what has changed between the current and the previous frame, and then apply those changes to the client's frame to bring it up to date? This should be pretty quick, assuming the server isn't watching a video or some such. This answer suggests that this is what both RDP (Microsoft) and VNC use for remote desktop viewing.
See https://stackoverflow.com/a/4098515/171703 and https://stackoverflow.com/a/1876848/171703 for some ideas on how to do this.
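The changed-region idea can be sketched by splitting each frame into fixed-size tiles and sending only the tiles that differ from the previous frame (a simplified sketch; RDP and VNC do considerably more sophisticated region tracking):

```python
def changed_tiles(prev, curr, width, height, tile=16):
    """Return the (x, y) origins of tiles whose pixels differ between two frames.

    Frames are flat row-major sequences of pixel values, width*height long.
    """
    dirty = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                row = y * width
                if prev[row + tx:row + min(tx + tile, width)] != \
                   curr[row + tx:row + min(tx + tile, width)]:
                    dirty.append((tx, ty))
                    break  # one changed row marks the whole tile dirty
    return dirty
```

Only the dirty tiles would then be JPEG-compressed and sent along with their coordinates; the client pastes them over its last full frame. On a mostly static desktop this cuts the payload to a small fraction of a full frame.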
