How to speed up packet processing? (C#)

I am developing a call recording application in C#/.NET using the Pcap.Net library. For packet capturing I am using Wireshark's Dumpcap.exe, and the packet files are created in 5-second intervals. To read each packet file I do the following:
OfflinePacketDevice selectedDevice = new OfflinePacketDevice(filename);

using (PacketCommunicator communicator =
    selectedDevice.Open(65536,  // portion of the packet to capture
                                // 65536 guarantees that the whole packet will be captured on all the link layers
                        PacketDeviceOpenAttributes.Promiscuous, // promiscuous mode
                        0))                                     // read timeout
{
    communicator.ReceivePackets(0, DispatcherHandler);
}
In the DispatcherHandler method I process each packet; the DispatcherHandler call itself takes close to 0 seconds per file.
The delay appears when processing the RTP packets inside that same method.
To identify RTP packets I use an OrderedDictionary whose key is the IP address plus the port number, so for every incoming RTP packet I need to check whether that key exists in the dictionary. This check gets slower with each dump file that is processed.
if (objPortIPDict.Contains(ip.Source.ToString().Replace(".", "") + port))
{
    // here I write the RTP payload to a file
}

I see a couple of strange things here:
1) Why use Contains on a dictionary?
objPortIPDict.Contains(ip.Source.ToString().Replace(".", "") + port)
If objPortIPDict is a Dictionary, use ContainsKey instead.
2) This follows from the first point. If this is a Dictionary, its ContainsKey runs in O(1) time, so it is not affected by the amount of data in the dictionary itself.
Yes, it can be affected if the amount of data becomes so big that the entire application slows down, but the lookup time itself remains constant regardless of how much the dictionary holds.
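As an illustration of point 2, a minimal sketch (the HandleRtpPacket method, the FileStream map and the "ip:port" key format are assumptions for the example, not the poster's actual code):

// A generic Dictionary keyed by "sourceIp:port"; ContainsKey/TryGetValue are O(1)
// hash lookups, so the number of tracked streams should not slow this down.
private readonly Dictionary<string, FileStream> _rtpStreams = new Dictionary<string, FileStream>();

private void HandleRtpPacket(string sourceIp, ushort port, byte[] payload)
{
    // Building the key with a separator avoids the string Replace() call entirely.
    string key = sourceIp + ":" + port;

    FileStream stream;
    if (_rtpStreams.TryGetValue(key, out stream))
    {
        // Known stream: append the RTP payload to its file.
        stream.Write(payload, 0, payload.Length);
    }
}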

Related

UDP packets always arrive at the transport layer (detected by Wireshark) but are not always read by the application

Context
I have been creating a system where a Raspberry Pi sends images to a remote client in real time.
The Raspberry Pi captures the images using a Raspberry Pi camera. A captured image is available as a 3-dimensional array of all the pixels (rows, columns and RGB). By sending and displaying the images fast enough, they appear as a video to the user.
My goal is to send these images in real time with the image resolution as high as possible. An acceptable frame rate is around 30 fps. I selected UDP rather than TCP because data can be transferred much faster over UDP due to its lower overhead. Retransmission of individual packets is not necessary because losing some pixels is acceptable in my case. The Raspberry Pi and the client are on the same network, so few packets should be dropped anyway.
Taking into account that the maximum transmission unit (MTU) on the Ethernet layer is 1500 bytes, and that the UDP packets should not be fragmented or dropped, I selected a maximum payload length of 1450 bytes, of which 1447 bytes are data and 3 bytes are application-layer overhead. The remaining 50 bytes are reserved for the headers that are automatically added by the IP and transport layers.
I mentioned that captured images are available as an array. Assuming the size of this array is, for example, 1,036,800 bytes (e.g. width=720 * height=480 * numberOfColors=3), then 717 (1,036,800 / 1447, rounded up) UDP packets are needed to send the entire array. The C++ application on the Raspberry Pi does this by splitting the array into fragments of 1447 bytes and adding a fragment index number, which is between 1 and 717, as overhead to the packet. We also add an image number, to distinguish it from a previously sent image/array. The packet looks like this:
(image: layout of the UDP packet)
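As a rough illustration (a C# sketch rather than the actual C++ sender code; it assumes the 3 overhead bytes are a 1-byte image number followed by a 2-byte big-endian fragment index, which matches the parsing in the receiver code further below):

// Split a frame buffer into 1447-byte fragments and prepend a 1-byte image number
// and a 2-byte fragment index, giving at most 1450 bytes per UDP payload.
const int FragmentDataSize = 1447;

static IEnumerable<byte[]> BuildFragments(byte imageNumber, byte[] frame)
{
    int fragmentIndex = 1; // indices start at 1, matching the receiver's validity check
    for (int offset = 0; offset < frame.Length; offset += FragmentDataSize)
    {
        int dataLength = Math.Min(FragmentDataSize, frame.Length - offset);
        byte[] packet = new byte[3 + dataLength];
        packet[0] = imageNumber;
        packet[1] = (byte)(fragmentIndex >> 8);   // high byte of the fragment index
        packet[2] = (byte)(fragmentIndex & 0xFF); // low byte of the fragment index
        Array.Copy(frame, offset, packet, 3, dataLength);
        yield return packet;
        fragmentIndex++;
    }
}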
Problem
On the client side, I developed a C# application that receives all the packets and reassembles the array using the included index numbers. Using the EmguCV library, the received array is converted to an image and drawn in a GUI. However, some of the received images are drawn with black lines/chunks. When debugging, I discovered that this problem is not caused by drawing the image; the black chunks are actually missing array fragments that never arrived. Because the byte values in an array are initialized to 0 by default, the missing fragments show up as black chunks.
Debugging
Using Wireshark on the client's side, I searched for the index of such a missing fragment, and was surprised to find it there, intact. This means that the data is received correctly on the transport layer (and observed by Wireshark), but never read on the application layer.
This image shows that a chunk of a received array is missing around index 174,000. Because there are 1447 data bytes in a packet, the index of this missing data corresponds to a UDP packet with fragment index 121 (174,000 / 1447, rounded up). The hexadecimal equivalent of 121 is 0x79. The following image shows the corresponding UDP packet in Wireshark, proving the data was still intact on the transport layer. (image: the packet in Wireshark)
What have I tried so far
When I lower the frame rate, there are fewer black chunks, and they are often smaller. With a frame rate of 3 fps there is no black at all. However, this frame rate is not acceptable. That is a speed of around (3 fps * 720x480x3 bytes) 3,110,400 bytes per second, roughly 3 MB/s. A normal computer should be capable of reading more than this. And as I explained, the packets DID arrive in Wireshark; they are just not read in the application layer.
I have also tried changing the UDP payload length from 1447 to 500 bytes. This only makes it worse (see image).
I implemented multi-threading so that data is read and processed in different threads.
I tried a TCP implementation. The images were received intact, but it was not fast enough to transfer the images in real time.
It is notable that a 'black chunk' does not represent a single missing fragment of 1447 bytes, but many consecutive fragments. So at some point, while reading data, a number of packets is not read. Also, not every image has this problem; some arrive intact.
I am wondering what is wrong with my implementation that results in this unwanted effect. So I will be posting some of my code below.
Please note that the exception 'SocketException' is never actually thrown, and the Console.WriteLine for 'invalid overhead' is also never printed. _client.Receive always receives 1450 bytes, except for the last fragment of an array, which is smaller.
Also
Besides solving this bug, if anyone has alternative suggestions for transmitting these arrays in a more efficient way (requiring less bandwidth but without quality loss), I would be glad to hear it, as long as the solution has the array as input/output on both endpoints.
Most importantly: NOTE that the missing packets were never returned by the UdpClient.Receive() method.
I did not post the code for the C++ application running on the Raspberry Pi, because the data did arrive (in Wireshark), as I have already shown. So the transmission is working fine, but receiving is not.
private const int ClientPort = 50000;
private UdpClient _client;
private Thread _receiveThread;
private Thread _processThread;
private volatile bool _started;
private ConcurrentQueue<byte[]> _receivedPackets = new ConcurrentQueue<byte[]>();
private IPEndPoint _remoteEP = new IPEndPoint(IPAddress.Parse("192.168.4.1"), 2371);
public void Start()
{
    if (_started)
    {
        throw new InvalidOperationException("Already started");
    }
    _started = true;
    _client = new UdpClient(ClientPort);
    _receiveThread = new Thread(new ThreadStart(ReceiveThread));
    _processThread = new Thread(new ThreadStart(ProcessThread));
    _receiveThread.Start();
    _processThread.Start();
}
public void Stop()
{
    if (!_started)
    {
        return;
    }
    _started = false;
    _receiveThread.Join();
    _receiveThread = null;
    _processThread.Join();
    _processThread = null;
    _client.Close();
}
public void ReceiveThread()
{
    _client.Client.ReceiveTimeout = 100;
    while (_started)
    {
        try
        {
            byte[] data = _client.Receive(ref _remoteEP);
            _receivedPackets.Enqueue(data);
        }
        catch (SocketException ex)
        {
            Console.WriteLine(ex.Message);
            continue;
        }
    }
}
private void ProcessThread()
{
    while (_started)
    {
        byte[] data;
        bool dequeued = _receivedPackets.TryDequeue(out data);
        if (!dequeued)
        {
            continue;
        }
        int imgNr = data[0];
        int fragmentIndex = (data[1] << 8) | data[2];
        if (imgNr <= 0 || imgNr > 255 || fragmentIndex <= 0)
        {
            Console.WriteLine("Received data with invalid overhead");
            return;
        }
        // I omitted the code for this method because it does not interfere with the
        // socket and is therefore not really relevant to the issue that I described.
        ProccessReceivedData(imgNr, fragmentIndex, data);
    }
}

Server does not receive all messages

I have 2 computers; I will call them Comp A and Comp B.
I have to:
send a sound file in PCM format from Comp A to Comp B;
verify how precisely this file was transmitted;
play what I sent to Comp B on Comp B.
To send the file I use the function
socket.SendTo(packet,0,count,SocketFlags.None,remoteEP);
from System.Net.Sockets.
To verify that the file was being transmitted precisely, I monitor it using Wireshark on Comp A and Comp B. However, the packets of bytes arriving at Comp B do not coincide at all with the file being transmitted.
The program which sends the file data opens the file correctly and passes the right bytes of the source PCM file to Socket.SendTo(...). But Wireshark on Comp A (the sending side) displays completely incorrect bytes, i.e. Comp A sends incorrect bytes.
What could be the problem?
I figured out that socket.SendTo(packet,0,count,SocketFlags.None,remoteEP);
sends the correct bytes if I send them with a delay. That is, I can send 400 bytes (without loops) and my program sends 400 absolutely precise, correct bytes.
But I have a big PCM file. Its size is about 50 MB and its duration is 1 minute. I have to send this file over the course of that minute so that it is transmitted evenly and uniformly, which means about 800 KB needs to be transmitted per second.
So here is my program code. I send about 800 KB per second using a timer function called twice per second.
private void m_pTimer_Tick(object sender, EventArgs e)
{
    uint sent_data = 0;
    while ((sent_data <= (BUFFERSIZE / 120)) && ((num * RAW_PACKET) + sent_data < BUFFERSIZE))
    {
        uint bytes_count = ((BUFFERSIZE - (RAW_PACKET * num)) > RAW_PACKET) ? RAW_PACKET : (BUFFERSIZE - (RAW_PACKET * num));
        byte[] buffer = new byte[bytes_count];
        Array.Copy(ReadBuffer, num * RAW_PACKET, buffer, 0, bytes_count);
        num++;
        // Send and read next.
        m_pUdpServer.SendPacket(buffer, 0, Convert.ToInt32(bytes_count), m_pTargetEP);
        sent_data += bytes_count;
    }
    if ((num * RAW_PACKET) + sent_data == BUFFERSIZE)
    {
        m_pTimer.Enabled = false;
    }
    m_pPacketsReceived.Text = m_pUdpServer.PacketsReceived.ToString();
    m_pBytesReceived.Text = m_pUdpServer.BytesReceived.ToString();
    m_pPacketsSent.Text = m_pUdpServer.PacketsSent.ToString();
    m_pBytesSent.Text = m_pUdpServer.BytesSent.ToString();
}
If I call m_pUdpServer.SendPacket(buffer, 0, Convert.ToInt32(bytes_count), m_pTargetEP); without a timer or any loops (while, etc.), I see the correct result on the output.
Here, 120 is the number of file parts that are transmitted per timer call; the timer function is called twice per second.
BUFFERSIZE is the total file size.
ReadBuffer is an array that contains all the PCM file data.
RAW_PACKET = 400 bytes.
sent_data is the total byte count sent within each timer call.
num is the total count of sent packets.
I suppose there are too many packets (bytes) to be sent within a single timer call, and therefore I see incorrect values on the output.
So what is the solution to this problem?
I think I can build an RTP-like packet (i.e. add a sequence number to every sent packet). That would help me identify received packets and restore the correct order of the received packets, provided that each received packet itself has the correct byte sequence. Because if the bytes inside a received packet arrive in a mixed order, I do not understand how to restore the correct byte sequence within that packet.
I was advised to drop the timer and to send packets evenly, paced by time. I honestly do not know how to do that. Maybe I should use threads, a thread pool or something like that. What do you think?
The only guarantee UDP gives you is that each message that does arrive is delivered in its entirety. However, if you send pieces 1, 2, 3, 4 in order, they may be received in any order, for instance 4, 1, 3, 2. That is, UDP does not guarantee ordering.
You MUST include a sequence number to be able to store the PCM correctly.
UDP does not guarantee delivery either. If the server receives piece #4 but not #5 within X seconds, it should probably request that piece again.
Or just switch to TCP. Much easier. All you need is some way to tell the length of the file, and then just transfer it.
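A rough sketch of what such a sequence-numbered packet could look like (this is a hypothetical framing, not the poster's SendPacket API: a 4-byte big-endian sequence number in front of each 400-byte chunk, which the receiver uses to place the data at the correct offset regardless of arrival order):

// RawPacket mirrors the RAW_PACKET chunk size of 400 bytes from the question.
const int RawPacket = 400;

// Sender side: prepend the sequence number to the chunk.
static byte[] BuildPacket(uint seq, byte[] chunk, int count)
{
    byte[] packet = new byte[4 + count];
    packet[0] = (byte)(seq >> 24);
    packet[1] = (byte)(seq >> 16);
    packet[2] = (byte)(seq >> 8);
    packet[3] = (byte)seq;
    Array.Copy(chunk, 0, packet, 4, count);
    return packet;
}

// Receiver side: recover the file offset from the sequence number and write the chunk there.
static void HandlePacket(byte[] packet, int length, FileStream output)
{
    uint seq = (uint)((packet[0] << 24) | (packet[1] << 16) | (packet[2] << 8) | packet[3]);
    output.Seek((long)seq * RawPacket, SeekOrigin.Begin);
    output.Write(packet, 4, length - 4);
}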

Measuring the TCP Bytes waiting to be read by the Socket

I have a socket connection that receives data, and reads it for processing.
When data is not processed/pulled fast enough from the socket, there is a bottleneck at the TCP level and the received data is delayed (I can tell by the timestamps after parsing).
How can I see how many TCP bytes are waiting to be read from the socket? (via some external tool like Wireshark or otherwise)
private void InitiateRecv(IoContext rxContext)
{
    rxContext._ipcSocket.BeginReceive(rxContext._ipcBuffer.Buffer, rxContext._ipcBuffer.WrIndex,
        rxContext._ipcBuffer.Remaining(), 0, CompleteRecv, rxContext);
}

private void CompleteRecv(IAsyncResult ar)
{
    IoContext rxContext = ar.AsyncState as IoContext;
    if (rxContext != null)
    {
        int rxBytes = rxContext._ipcSocket.EndReceive(ar);
        if (rxBytes > 0)
        {
            EventHandler<VfxIpcEventArgs> dispatch = EventDispatch;
            dispatch(this, new VfxIpcEventArgs(rxContext._ipcBuffer));
            InitiateRecv(rxContext);
        }
    }
}
The fact is, I guess the "dispatch" somehow blocks reception until it is done, which ends up adding latency (i.e. data processed by the dispatch is delayed), hence my (possibly false) conclusion that data was accumulating at the socket level or earlier.
How can I see how much TCP bytes are awaiting to be read by the socket
By specifying a protocol that indicates how many bytes it's about to send. Using sockets you operate a few layers above the byte level, and you can't see how many send() calls end up as receive() calls on your end because of buffering and delays.
If you specify the number of bytes on beforehand, and send a string like "13|Hello, World!", then there's no problem when the message arrives in two parts, say "13|Hello" and ", World!", because you know you'll have to read 13 bytes.
You'll have to keep some sort of state and a buffer in between different receive() calls.
When it comes to external tools like Wireshark, they cannot know how many bytes are left in the socket buffer; they only know which packets have passed the network interface.
The only way to check it with Wireshark is to know the last bytes you read from the socket, locate them in Wireshark, and count from there.
However, the best way to get this information is to check the Available property on the socket object in your .NET application.
You can use socket.Available if you are using the normal Socket class. Otherwise you have to define a header byte which gives the number of bytes to be sent from the other end.
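For reference, a minimal sketch of checking Available on the socket from the code above (this only reports what the OS has already buffered for that socket, not what is still in flight on the network):

// Socket.Available returns the number of bytes already received by the OS and
// waiting to be read from this socket; it says nothing about data still on the wire.
int pending = rxContext._ipcSocket.Available;
if (pending > 0)
{
    Console.WriteLine("Bytes buffered by the OS but not yet read: " + pending);
}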

How can I read from a socket repeatedly?

To start, I am coding in C#. I am writing data of varying sizes to a device through a socket. After writing the data I want to read from the socket, because the device will write back an error code/completion message once it has finished processing all of the data. Currently I have something like this:
byte[] resultErrorCode = new byte[1];
resultErrorCode[0] = 255;
while (resultErrorCode[0] == 255)
{
    try
    {
        ReadFromSocket(ref resultErrorCode);
    }
    catch (Exception)
    {
    }
}
Console.WriteLine(ErrorList[resultErrorCode[0] - 48]);
I use ReadFromSocket in other places, so I know that it works correctly. What ends up happening is that the port I am connecting from (on my machine) changes to random ports. I think this causes the firmware on the other side to have a bad connection: when it writes data back, it tries to write to the original port I connected through, but after my side tries to read several times, the connection port changes on my side.
How can I read from the socket continuously until I receive a completion command? I know that something is wrong with the loop, because for my smallest test file it takes 1 minute and 13 seconds pretty consistently. I have tested the code by removing the loop and putting the thread to sleep for 1 minute and 15 seconds; when it resumes, it successfully reads the completion command that I am expecting. Does anyone have any advice?
What you should have is a separate thread which acts as a driver for your external hardware. This thread receives all data, parses it, and passes the appropriate messages to the rest of your application. This portion of code will give you an idea of how to receive and parse data from your hardware:
public void ContinuousReceive()
{
    byte[] buffer = new byte[1024];
    bool terminationCodeReceived = false;
    while (!terminationCodeReceived)
    {
        try
        {
            if (server.Receive(buffer) > 0)
            {
                // We got something.
                // Parse the received data and check whether the termination code
                // has been received or not.
            }
        }
        catch (SocketException e)
        {
            Console.WriteLine("Oops! Something bad happened: " + e.Message);
        }
    }
}
Notes:
If you want to open a specific port on your machine (some external hardware is configured to talk to a predefined port), then you should specify that when you create your socket, as in the sketch after these notes.
Never close your socket until you want to stop your application, or unless the external hardware's API requires it. Keeping your socket open will resolve the random port changes.
Using Thread.Sleep when dealing with external hardware is not a good idea. When possible, you should either use events (in the case of RS-232 connections) or blocking calls on separate threads, as is done in the code above.
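A small sketch of the first note, assuming a plain TCP Socket that is bound to a fixed local port before connecting (the port numbers and address below are placeholders):

// Bind the client socket to a fixed local port so the device always sees the
// same source port, then keep this socket open for the whole session.
Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
server.Bind(new IPEndPoint(IPAddress.Any, 50123));                       // fixed local port (placeholder)
server.Connect(new IPEndPoint(IPAddress.Parse("192.168.0.10"), 5000));   // device endpoint (placeholder)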

Handling dropped TCP packets in C#

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which calls socket.Receive(). Each string is delimited by unique characters, which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anything between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get around this? Receiving all 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
// ...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(string));
If you're dropping packets, you'll see a delay in transmission, since the stack has to re-transmit the dropped packets. This can be significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, triggers a resend of only the packets that were dropped rather than every packet since the dropped one. There's no way to control that in your code. By default, you can assume that every packet is delivered in order for TCP; if for some reason the stack can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. Instead of sending each bit of data as you post it, the stack sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet before sending it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the receiver's maximum window size is less than 65k, the sender will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.
The reason you see this is that on the internet, unlike your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so the stack is able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side by using setsockopt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagle on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
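For completeness, this is roughly how Nagle's algorithm can be disabled on a .NET Socket (again, only do this if you are sure you need to):

// Disable Nagle's algorithm (TCP_NODELAY) on an existing connected Socket.
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
// or, equivalently:
socket.NoDelay = true;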
Well, there's one thing wrong with your code to start with, if you're counting the number of calls to Receive that complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol; you shouldn't be worrying about individual packets or reads. You should be concerned with reading the data, expecting that sometimes you won't get a whole message in one read and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, either.)
You should either prefix each message with its length before sending, or have a delimiter between messages, as sketched below.
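A minimal sketch of length-prefixed framing over a TCP Socket (the 4-byte little-endian length header and the helper names are illustrative, not part of the poster's code):

// Each message is sent as a 4-byte length followed by the payload.
static void SendMessage(Socket socket, byte[] message)
{
    socket.Send(BitConverter.GetBytes(message.Length)); // 4-byte little-endian length header
    socket.Send(message);
}

// Read exactly one framed message: first the 4-byte length, then that many payload bytes.
static byte[] ReceiveMessage(Socket socket)
{
    byte[] header = ReadExactly(socket, 4);
    int length = BitConverter.ToInt32(header, 0);
    return ReadExactly(socket, length);
}

// Receive may return fewer bytes than requested, so loop until the full count arrives.
static byte[] ReadExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int received = 0;
    while (received < count)
    {
        int read = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (read == 0)
        {
            throw new SocketException(); // connection closed before the full message arrived
        }
        received += read;
    }
    return buffer;
}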
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their lengths so the receiver knows when it's actually done receiving them.
How long is each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read in large blocks in your Receive call.
For example, using three Send calls to send "A", "B" and "C" will most likely arrive at your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to arrive without being merged with other strings, look into adding a "protocol" with an identifier to mark the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
