I have two computers; I'll call them Comp A and Comp B.
I have to:
send a sound file in PCM format from Comp A to Comp B;
verify how precisely this file was transmitted;
play the file I sent on Comp B.
To send the file I use the function
socket.SendTo(packet,0,count,SocketFlags.None,remoteEP);
from System.Net.Sockets.
In the end I need to determine whether the file is transmitted precisely. I monitor the transfer with Wireshark on both Comp A and Comp B; however, the packets of bytes arriving at Comp B don't match the file being transmitted at all.
The sending program opens the file correctly and passes the correct bytes of the source PCM file to Socket.SendTo(...). Yet Wireshark on Comp A (the sending side) shows completely different bytes, i.e. Comp A is sending incorrect bytes.
What could be the problem?
I figured out that socket.SendTo(packet,0,count,SocketFlags.None,remoteEP);
sends the correct bytes if I send them with a delay. For example, I can send 400 bytes (without any loop) and my program sends exactly those 400 bytes, correct and intact.
But I have a big PCM file: it is about 50 MB and one minute long. I have to send it over the course of that minute so that it is transmitted evenly, which means about 800 KB must be transmitted per second.
So here is my code. I send about 800 KB per second using a timer callback that fires twice per second.
private void m_pTimer_Tick(object sender, EventArgs e)
{
    uint sent_data = 0;
    // Send roughly BUFFERSIZE / 120 bytes per tick, in RAW_PACKET-sized pieces.
    while ((sent_data <= (BUFFERSIZE / 120)) && ((num * RAW_PACKET) + sent_data < BUFFERSIZE))
    {
        // The last piece may be shorter than RAW_PACKET.
        uint bytes_count = ((BUFFERSIZE - (RAW_PACKET * num)) > RAW_PACKET) ? RAW_PACKET : (BUFFERSIZE - (RAW_PACKET * num));
        byte[] buffer = new byte[bytes_count];
        Array.Copy(ReadBuffer, num * RAW_PACKET, buffer, 0, bytes_count);
        num++;
        // Send and read next.
        m_pUdpServer.SendPacket(buffer, 0, Convert.ToInt32(bytes_count), m_pTargetEP);
        sent_data += bytes_count;
    }
    if ((num * RAW_PACKET) + sent_data == BUFFERSIZE)
    {
        m_pTimer.Enabled = false;
    }
    m_pPacketsReceived.Text = m_pUdpServer.PacketsReceived.ToString();
    m_pBytesReceived.Text = m_pUdpServer.BytesReceived.ToString();
    m_pPacketsSent.Text = m_pUdpServer.PacketsSent.ToString();
    m_pBytesSent.Text = m_pUdpServer.BytesSent.ToString();
}
If I call m_pUdpServer.SendPacket(buffer, 0, Convert.ToInt32(bytes_count), m_pTargetEP); without the timer or any loops (while, etc.), I see the correct result in the output.
Here 120 is the number of timer ticks over which the whole file is sent (the timer callback fires 2 times per second for one minute), so BUFFERSIZE / 120 is the amount of data sent per call.
BUFFERSIZE is the total file size.
ReadBuffer is an array containing all of the PCM file's data.
RAW_PACKET = 400 bytes.
sent_data is the total number of bytes sent within the current timer callback.
num is the total number of packets sent so far.
I suppose too many packets (bytes) are being sent within a single timer callback, and that is why I see incorrect values in the output.
So what is the solution to this problem?
I think I could build an RTP-style packet (adding a sequence number to every packet I send). That would let me identify received packets and put them back into the correct order. But that only helps if each received packet itself contains the correct byte sequence; if the bytes inside a received packet arrive scrambled, I don't see how to restore their order.
I was also advised to drop the timer and instead send the packets evenly, pacing them by time. I don't actually know how to do that; maybe I should use threads, a thread pool, or something like that. What do you think?
The only guarantee UDP gives you is that each datagram arrives complete or not at all. However, if you send pieces 1, 2, 3, 4 in that order, they may be received in any order, for instance 4, 1, 3, 2. That is, UDP does not guarantee ordering.
You MUST include a sequence number to be able to store the PCM correctly.
UDP does not guarantee delivery either. If the server receives piece #4 but not #5 within X seconds, it should probably request that piece again.
Or just switch to TCP. Much easier. All you need is some way to communicate the length of the file, and then just transfer it.
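As a rough illustration of the sequence-number idea (a sketch, not code from the original post; the 4-byte big-endian header is just one possible layout):

// Prefix each UDP payload with a 4-byte big-endian sequence number so the
// receiver can detect loss and restore the original order.
static byte[] BuildPacket(uint sequence, byte[] payload, int offset, int count)
{
    var packet = new byte[4 + count];
    packet[0] = (byte)(sequence >> 24);
    packet[1] = (byte)(sequence >> 16);
    packet[2] = (byte)(sequence >> 8);
    packet[3] = (byte)sequence;
    Array.Copy(payload, offset, packet, 4, count);
    return packet;
}

On the receiving side, the first 4 bytes then tell you where the remaining count bytes belong in the reassembled PCM buffer (sequence number times payload size).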
Related
When using .NET (for example from Microsoft Visual Studio), there are several ways to read data from a serial port, mainly these:
Read
SerialPort.Read(byte[] buffer, int offset, int count);
This reads up to count bytes into the defined buffer
Read Existing
SerialPort.ReadExisting();
This reads the data bytes currently available at the serial port
Read To
SerialPort.ReadTo(string delimiter);
This reads until a defined delimiter, e.g. "\r", is found.
Problem
The issue I am currently facing is that my device operates in two modes. In normal mode, I can send a command and a response ending with \n is returned, so it can be processed with
string response = SerialPort.ReadTo("\n");
In data-sending mode, the device sends binary data in packets of variable size; a packet can be 5 to 1024 bytes and has no unique character at the end. In fact, the last bytes are a checksum, so they will differ in every packet/stream. Therefore, the function
SerialPort.Read(byte[] buffer, int offset, int count);
cannot be used, since count is unknown. Additionally, the function
SerialPort.ReadExisting();
is of no use, since it will only read the first bytes of the packet, not the complete data stream.
Workaround
My current workaround is the following. It has the drawback that it is slow and relies on an estimate of the maximum packet size of 1024 bytes:
private void SerialPortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Task.Delay(900).Wait(); // 9600 baud = 1200 byte/s; 1024 byte / 1200 byte/s = 0.85 s
    string response = SerialPort.ReadExisting();
}
The major issue with my workaround is that it always waits the full time, even if the device sends only a small number of bytes (e.g. 100). Additionally, if the device is switched to a higher speed (e.g. 38400 baud), the wait time could be much lower.
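For example, the wait could at least be derived from the configured baud rate instead of being hardcoded. A rough sketch (the serialPort field name is an assumption, and this is still a worst-case wait rather than a real fix):

private void SerialPortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    const int maxPacketBytes = 1024;                      // estimated largest packet
    double bytesPerSecond = serialPort.BaudRate / 10.0;   // roughly 10 bits on the wire per data byte
    int waitMs = (int)(maxPacketBytes / bytesPerSecond * 1000.0);

    Task.Delay(waitMs).Wait();                            // still waits for the worst case
    string response = serialPort.ReadExisting();
}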
How could I deal with this problem in a proper way?
I have 2 GUI applications, one in C++ and one in C#.
The applications are the same and there is a function that writes and reads from COM port.
When I run my C++ app I get the correct result from serial.read: a buffer of 24 bytes.
But when I run my C# app I get inconsistent results:
* Just a 1-byte buffer if I don't put a sleep between the write and the read.
* Buffers of varying size (between 10 and 22 bytes) if I do put a sleep between the write and the read.
What could be the reason for that?
My C++ code:
serial.write(&c, 1, &written);
serial.read(read_buf, read_len, &received); // received = 24
My C# code:
serial.Write(temp_char, 0, 1);
received = serial.Read(read_buff, 0,read_len); // received = 1
C# with sleep:
serial.Write(temp_char, 0, 1);
Thread.Sleep(100);
received = serial.Read(read_buff, 0,read_len); // received = (10~22)
Serial ports just give you a stream of bytes; they don't know how you wrote blocks of data to them. When you call read, whatever bytes have been received so far are returned; if that isn't a complete message, you need to call read repeatedly until you have the whole message.
You need to define a protocol that indicates message boundaries. This could be a special character (e.g. a newline in a text-based protocol), or you can prefix your messages with a length.
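For the fixed 24-byte reply described above, a minimal sketch of reading in a loop until the whole message has arrived (variable names follow the snippets in the question; this is an illustration, not the original code):

byte[] read_buff = new byte[24];
int total = 0;
while (total < read_buff.Length)
{
    // Each Read call appends to what has already arrived; it blocks until at
    // least one byte is available or the ReadTimeout expires (TimeoutException).
    int n = serial.Read(read_buff, total, read_buff.Length - total);
    total += n;
}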
Context
I have been creating a system where a Raspberry Pi sends images to a remote client in real time.
The Raspberry Pi captures the images with a Raspberry Pi camera. A captured image is available as a 3-dimensional array of all the pixels (rows, columns and RGB). By sending and displaying the images fast enough, they appear as video to the user.
My goal is to send these images in real time with the image resolution as high as possible. An acceptable frame rate is around 30 fps. I selected UDP rather than TCP because data can be transferred much faster over UDP due to less overhead. Retransmission of individual packets is not necessary, because losing some pixels is acceptable in my case. The Raspberry Pi and the client are on the same network, so not many packets will be dropped anyway.
Taking into account that the maximum transmission unit (MTU) on the Ethernet layer is 1500 bytes, and that the UDP packets should not be fragmented or dropped, I selected a maximum payload length of 1450 bytes, of which 1447 bytes are data and 3 bytes are application-layer overhead. The remaining 50 bytes are reserved for the headers that the lower layers (IP and UDP) add automatically.
I mentioned that captured images are available as an array. Assuming the size of this array is, for example, 1,036,800 bytes (e.g. width 720 * height 480 * 3 colors), then 717 UDP packets (1,036,800 / 1447, rounded up) are needed to send the entire array. The C++ application on the Raspberry Pi does this by splitting the array into fragments of 1447 bytes and adding a fragment index number, between 1 and 717, as overhead to each packet. We also add an image number, to distinguish it from a previously sent image/array. The packet looks like this:
(image: UDP packet layout)
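For reference, here is a sketch of how the receiver code below interprets that 3-byte application header (image number in byte 0, big-endian fragment index in bytes 1-2); the method name is made up for illustration:

static void ParseHeader(byte[] packet, out int imageNumber, out int fragmentIndex)
{
    imageNumber = packet[0];                       // 1 byte: image number
    fragmentIndex = (packet[1] << 8) | packet[2];  // 2 bytes: big-endian fragment index (1..717)
    // packet[3..] holds up to 1447 data bytes of the pixel array
}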
Problem
On the client side, I developed a C# application that receives all the packets and reassembles the array using the included index numbers. Using the Emgu CV library, the received array is converted to an image and drawn in a GUI. However, some of the received images are drawn with black lines/chunks. When debugging, I discovered that this problem is not caused by drawing the image; the black chunks are actually missing array fragments that never arrived. Because byte values in an array are initialized to 0 by default, the missing fragments show up as black chunks.
Debugging
Using Wireshark on the client's side, I searched for the index of such a missing fragment and was surprised to find it there, intact. This means the data is received correctly at the transport layer (and observed by Wireshark), but never read at the application layer.
This image shows that a chunk of a received array is missing, starting around index 174,000. Because there are 1447 data bytes in a packet, this index corresponds to a UDP packet with fragment index 121 (174,000 / 1447, rounded up). The hexadecimal equivalent of 121 is 0x79. The following image shows the corresponding UDP packet in Wireshark, proving the data was still intact at the transport layer.
What I have tried so far
When I lower the frame rate, there are fewer black chunks and they are often smaller. With a frame rate of 3 fps there is no black at all. However, this frame rate is not acceptable. That is only about 3 fps * 720 * 480 * 3 = 3,110,400 bytes per second (roughly 3 MB/s). A normal computer should easily be able to read more than that per second. And as I explained, the packets DID arrive in Wireshark; they are just not read at the application layer.
I have also tried changing the UDP payload length from 1447 to 500 bytes. This only makes it worse (see image).
I implemented multithreading so that data is read and processed in different threads.
I tried a TCP implementation. The images arrived intact, but it was not fast enough to transfer them in real time.
It is notable that a 'black chunk' does not represent a single missing fragment of 1447 bytes, but many consecutive fragments. So at some point while reading data, a whole run of packets is not read. Also, not every image has this problem; some arrive intact.
I am wondering what is wrong with my implementation that causes this unwanted effect, so I am posting some of my code below.
Please note that the 'SocketException' is never actually thrown and the Console.WriteLine for 'invalid overhead' is never printed. _client.Receive always receives 1450 bytes, except for the last fragment of an array, which is smaller.
Also
Besides solving this bug, if anyone has alternative suggestions for transmitting these arrays more efficiently (requiring less bandwidth but without quality loss), I would gladly hear them, as long as the solution has the array as input/output on both endpoints.
Most importantly: NOTE that the missing packets were never returned by the UdpClient.Receive() method.
I did not post the code for the C++ application running on the Raspberry Pi, because the data does arrive (in Wireshark), as I have already shown. So the transmission is working fine, but receiving is not.
private const int ClientPort = 50000;
private UdpClient _client;
private Thread _receiveThread;
private Thread _processThread;
private volatile bool _started;
private ConcurrentQueue<byte[]> _receivedPackets = new ConcurrentQueue<byte[]>();
private IPEndPoint _remoteEP = new IPEndPoint(IPAddress.Parse("192.168.4.1"), 2371);

public void Start()
{
    if (_started)
    {
        throw new InvalidOperationException("Already started");
    }
    _started = true;
    _client = new UdpClient(ClientPort);
    _receiveThread = new Thread(new ThreadStart(ReceiveThread));
    _processThread = new Thread(new ThreadStart(ProcessThread));
    _receiveThread.Start();
    _processThread.Start();
}

public void Stop()
{
    if (!_started)
    {
        return;
    }
    _started = false;
    _receiveThread.Join();
    _receiveThread = null;
    _processThread.Join();
    _processThread = null;
    _client.Close();
}

public void ReceiveThread()
{
    _client.Client.ReceiveTimeout = 100;
    while (_started)
    {
        try
        {
            byte[] data = _client.Receive(ref _remoteEP);
            _receivedPackets.Enqueue(data);
        }
        catch (SocketException ex)
        {
            Console.WriteLine(ex.Message);
            continue;
        }
    }
}

private void ProcessThread()
{
    while (_started)
    {
        byte[] data;
        bool dequeued = _receivedPackets.TryDequeue(out data);
        if (!dequeued)
        {
            continue;
        }
        int imgNr = data[0];
        int fragmentIndex = (data[1] << 8) | data[2];
        if (imgNr <= 0 || imgNr > 255 || fragmentIndex <= 0)
        {
            Console.WriteLine("Received data with invalid overhead");
            return;
        }
        // I omitted the code for this method because it does not interact with the
        // socket and is therefore not really relevant to the issue I described.
        ProccessReceivedData(imgNr, fragmentIndex, data);
    }
}
I am developing a call-recording application in C#/.NET using the Pcap.Net library. For packet capturing I am using Wireshark's dumpcap.exe, and the capture files are created in 5-second chunks. To read each packet file, this is what I have done:
OfflinePacketDevice selectedDevice = new OfflinePacketDevice(filename);
using (PacketCommunicator communicator =
    selectedDevice.Open(65536, // portion of the packet to capture
                               // 65536 guarantees that the whole packet will be captured on all the link layers
                        PacketDeviceOpenAttributes.Promiscuous, // promiscuous mode
                        0)) // read timeout
{
    communicator.ReceivePackets(0, DispatcherHandler);
In the DispatcherHandler method I process each packet; the DispatcherHandler call itself takes almost no time per file.
The delay occurs when processing the RTP packets in that same method.
To identify RTP packets I use an ordered dictionary whose key is the IP address + port number, so for every incoming RTP packet I need to check whether that key exists in the dictionary. This check gets slower and slower as each dump file is processed.
if (objPortIPDict.Contains(ip.Source.ToString().Replace(".", "") + port))
{
// here i write the rtp payload to a file
}
I see a couple of strange things here:
1) Why use Contains on a dictionary?
objPortIPDict.Contains(ip.Source.ToString().Replace(".", "") + port)
If objPortIPDict is a dictionary, use ContainsKey.
2) This follows from the first point. If this is a Dictionary, its ContainsKey executes in O(1) time, so it cannot be affected by the amount of data in the dictionary itself.
Yes, the application can be affected if the total amount of data becomes so big that the entire application slows down, but the lookup time itself always remains constant with respect to the current state of the application.
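A minimal sketch of that suggestion (the class, field and method names are assumptions, not from the original code): key the dictionary on the IP/port string and use TryGetValue so the hash lookup happens only once per packet.

using System.Collections.Generic;
using System.IO;

class RtpDemux
{
    // Maps "sourceIp:port" to the file that stream's RTP payload is appended to.
    private readonly Dictionary<string, FileStream> rtpStreams =
        new Dictionary<string, FileStream>();

    public void HandleRtpPacket(string sourceIp, int port, byte[] payload)
    {
        string key = sourceIp + ":" + port;
        FileStream target;
        if (rtpStreams.TryGetValue(key, out target))
        {
            // Write the RTP payload to the file for this stream.
            target.Write(payload, 0, payload.Length);
        }
    }
}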
I am writing an application that receives input from a long-range radio over a serial connection. I am currently using the C# SerialPort class to receive and send data, but I am having a few issues. I've noticed that the function I use for receiving data is not setting the buffer byte-array size correctly. All data is sent as raw bytes (hex).
Say the other node sends 103 bytes of data. Stepping through my code with a breakpoint at the Read() line, I see that serialPort1.BytesToRead - 1 evaluates to 103, BUT the byte[] array is only initialized to 17. I have no explanation for this behavior. As a result, only the first 17 bytes are put into the array. Continuing to step through, the same event is triggered again, this time with serialPort1.BytesToRead - 1 evaluating to 85 (presumably because only the first 17 of the 103 bytes were read).
If I hardcode the data array size at 103, it works flawlessly in one pass. However, at the moment I am unable to store all the data in my byte array in one pass, which is causing a lot of problems. Does anyone know why my byte array is being initialized to such an arbitrary size?
private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    byte[] data = new byte[serialPort1.BytesToRead - 1];
    serialPort1.Read(data, 0, data.Length);
    DisplayData(ByteToHex(data) /*+ "\n"*/);
}
Update: here's the new method I am attempting. isHeader is a boolean initially set to true (the first two bytes received in a packet are in fact the length of the packet).
const int NUM_HEADER_BYTES = 2;

private void serialPort1_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    byte[] headerdata = new byte[2];
    if (isHeader)
    {
        serialPort1.Read(headerdata, 0, NUM_HEADER_BYTES);
        int totalSize = (headerdata[0] << 8 | headerdata[1]) >> 6;
        serialPort1.ReceivedBytesThreshold = totalSize - NUM_HEADER_BYTES;
        data = new byte[totalSize - NUM_HEADER_BYTES];
        isHeader = false;
    }
    else
    {
        serialPort1.Read(data, 0, data.Length);
        double[][] results = ParseData(data, data.Length);
        serialPort1.ReceivedBytesThreshold = NUM_HEADER_BYTES;
        isHeader = true;
        DisplayData(ByteToHex(data) /*+ "\n"*/);
    }
}
BytesToRead is equal to the number of bytes waiting in the buffer. It changes from moment to moment as new data arrives, and that's what you're seeing here.
When you step through with the debugger, that takes additional time, and the rest of the serial data comes in while you're stepping in the debugger, and so BytesToRead changes to the full value of 104.
If you know that you need 103 bytes, I believe setting ReceivedBytesThreshold to 104 will trigger the DataReceived event at the proper time. If you don't know the size of the message you need to receive, you'll need to do something else. I notice you're throwing away one byte (serialPort1.BytesToRead - 1); is that an end-of-message byte that you could search for as you read data?
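A minimal sketch of that suggestion, assuming a fixed 103-byte message (the threshold of 104 comes from the paragraph above; the handler body is illustrative, not from the original post):

// During initialization: raise DataReceived only once the whole message
// (103 payload bytes plus the byte the question's code currently discards) is buffered.
serialPort1.ReceivedBytesThreshold = 104;

private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    byte[] data = new byte[103];
    serialPort1.Read(data, 0, data.Length); // the full 103 bytes are already in the buffer
    serialPort1.ReadByte();                 // consume the remaining byte
    DisplayData(ByteToHex(data));
}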
Debuggers won't deliver when it comes to real time data transfer. Use debug traces.
BTW I'd go with polling the data myself, not putting my trust in events. With serial ports this is a sound and reliable approach.
Edit per comment:
Serial data transfer rate is bounded by your baud-rate.
You're worried about losing data, so let's look at the numbers:
Assuming:
baud_rate = 19600 [bytes/sec] // it's usually *bits*, but we want to err upwards
buffer_size = 4096 [bytes] // windows allocated default buffer size
So it takes:
4096/19600 [sec] ~ 200 [ms]
To overflow the buffer (upper bound).
So if you sample at 50 Hz, you're running on an order of magnitude safety net, and that's a good spot. On each 'sample' you read the whole buffer. There is no timing issue here.
Of course you should adapt the numbers to your case, but I'd be surprised if your low-bandwidth RF channel produces a transfer rate for which 50 Hz polling isn't a sufficient overkill.
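A minimal sketch of that polling approach (the port name, the 20 ms period and the buffer size are assumptions for illustration):

using System;
using System.IO.Ports;
using System.Threading;

class SerialPoller
{
    static void Main()
    {
        using (var port = new SerialPort("COM1", 9600))
        {
            port.Open();
            var buffer = new byte[4096];
            while (true)
            {
                int available = port.BytesToRead;
                if (available > 0)
                {
                    // Drain whatever has arrived since the last poll.
                    int read = port.Read(buffer, 0, Math.Min(available, buffer.Length));
                    // ... hand the 'read' bytes to the message parser here ...
                }
                Thread.Sleep(20); // ~50 Hz polling, well inside the ~200 ms overflow bound
            }
        }
    }
}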
LAST EDIT:
Needless to say, if what you currently have works, then don't touch it.