SharpPcap - Incoming packets are dropped - C#

I am writing a C# application which communicates with an external device via ethernet. I am using SharpPcap Version 4.5.0 for this.
Unfortunately, I have noticed that some incoming packets are dropped. For testing, I also put a switch between the external device and my computer, which logs every packet. In this log, the packet is visible. Hence I am quite sure that the packet is really sent (and it's not an error of the external device).
This is the code that I use:
public bool TryActivateChannel(uint channelNumber, out string message)
{
    message = string.Empty;
    devices[(int)channelNumber].Open(DeviceMode.Promiscuous);
    devices[(int)channelNumber].OnPacketArrival += PacketArrived;
    devices[(int)channelNumber].StartCapture();
    return true;
}

public bool CloseChannel(uint channelNumber, out string message)
{
    message = string.Empty;
    devices[(int)channelNumber].OnPacketArrival -= PacketArrived;
    devices[(int)channelNumber].Close();
    return true;
}

private void PacketArrived(object sender, CaptureEventArgs e)
{
    if (e.Packet.LinkLayerType != PacketDotNet.LinkLayers.Ethernet)
    {
        return;
    }
    else
    {
        inputQueue.Enqueue(e);
    }
}
devices is just CaptureDeviceList.Instance, and inputQueue is a ConcurrentQueue that is dequeued in another thread. That thread writes every incoming packet into a *.pcap file (where the packets are missing). Additionally, I looked at the Statistics property of my ICaptureDevice, which claims that no packets are dropped. I also tried running it on a different computer, in order to make sure it is not a problem with the network card.
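For reference, a simplified version of that consumer thread (ProcessLoop, running and WritePacketToPcap are placeholders, not my exact code; the real thread writes each RawCapture to the pcap file, e.g. via SharpPcap's CaptureFileWriterDevice):

private void ProcessLoop()
{
    while (running)
    {
        if (inputQueue.TryDequeue(out CaptureEventArgs e))
        {
            WritePacketToPcap(e.Packet);   // e.Packet is the RawCapture to be written
        }
        else
        {
            Thread.Sleep(1);               // don't spin while the queue is empty
        }
    }
}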
At this point, I am really helpless. Did I do anything wrong in my code? Is this a known issue? I read somewhere else that SharpPcap can manage up to 3 MBit/s. I am far away from this value, hence I don't believe it's a performance problem.
Addendum: Instead of the ConcurrentQueue, I also tried the List-based approach suggested by the author. The result is the same: some packets are missing. I also had a version without a second thread, where the packets are processed directly in the event handler. Same result: packets are missing. Moreover, I captured simultaneously with Wireshark; there, the packets are also missing. I noticed that the missing packets have one thing in common: a certain length (more than about 60 bytes). I never observed a shorter packet going missing. I am using WinPcap 4.1.3. Is the problem located there?

For the record, if you don't see the packets in Wireshark, the problem is neither in your code nor in SharpPcap.
It means it's either in the hardware or in the driver/OS.
Common reasons that you don't receive packets:
VLAN tagging: depending on the adapter configuration, the adapter may drop VLAN-tagged frames before they reach the OS.
Firewall: some firewalls are capable of preventing packets from reaching the Npcap/WinPcap driver; this usually affects IP packets.
Faulty driver: for example, the Npcap bug https://github.com/nmap/npcap/issues/119
Packets "discarded": the packet was rejected by the hardware itself; you can check for this with the command netstat -e. Usual reasons:
Bad cables: yes, really.
Frame collisions: these occur more frequently on half-duplex links and when the time between packets is too short.
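As a quick sanity check, you can also compare the capture statistics you already mentioned (a rough sketch; the exact property names may vary slightly between SharpPcap versions):

// Sketch: dump the libpcap counters for a capture device.
var stats = device.Statistics;
Console.WriteLine("Received by filter:   " + stats.ReceivedPackets);
Console.WriteLine("Dropped by libpcap:   " + stats.DroppedPackets);
Console.WriteLine("Dropped by interface: " + stats.InterfaceDroppedPackets);
// If these counters stay at zero while packets are still missing in Wireshark,
// the loss is happening below the capture driver (NIC, cabling, switch).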

Related

Using SerialPort to discard RF Device Buffer

I'm writing a small application that automatically connects to the correct serial port by sending a list of commands, and then waiting for a response back from the serial device (RF Transmitter). The serial port object sends certain commands in decimal format: a reset, login and then a query command.
When the query command is sent, the device then replies back with a response - when this response is received I know I have the correct serial port connection.
All of this works fine, but sometimes I receive an error back from the device - Error 130: TX Queue Overflow. This error can be resolved by simply restarting the device (RF Transmitter), but the frequency of this error is just silly.
Am I correct in thinking that a TX Overflow error would be caused when the buffer on the hardware becomes full? I thought a simple DiscardInBuffer just after opening a connection to the device would fix this - but it doesn't.
When should I use the DiscardInBuffer, am I using it in the correct context?
-- Edit
After some more comments and thoughts, I've come to the conclusion that SerialPort.DiscardInBuffer won't do anything for my current situation; rather, I need to discard the buffer on the actual RF device - hence why unplugging it works.
You've sent too much data to the device, and its output queue has overflowed, meaning it is not able to forward the data as fast as you're providing it.
There's no method you can call on the SerialPort class to fix this; these are two completely different buffers we're talking about. Calling SerialPort.DiscardOutBuffer will only discard the output data pending for your serial port, not the device.
To temporarily fix the issue, the manual indicates that you can:
Use the command “reset txqueue” to clear the queue.
The better solution, however, is to prevent the issue and not flood the device with data. The exact way to do this will depend on your hardware.
One way might be to introduce some sort of CommandQueue class which has an associated SerialPort object to push the commands to the hardware. In this class, you could queue up commands to be sent, and send them out at a configurable maximum rate. You would use a timer, and only send commands out if one hasn't been sent in the last X msec.
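A minimal sketch of that idea (the class name and interval are illustrative, not from any specific library):

// Sketch of a rate-limited command queue in front of a SerialPort.
using System;
using System.Collections.Concurrent;
using System.IO.Ports;
using System.Threading;

class CommandQueue : IDisposable
{
    private readonly SerialPort port;
    private readonly ConcurrentQueue<string> pending = new ConcurrentQueue<string>();
    private readonly Timer timer;

    public CommandQueue(SerialPort port, TimeSpan minInterval)
    {
        this.port = port;
        // Fire at most once per minInterval; each tick sends at most one command.
        timer = new Timer(_ => SendNext(), null, TimeSpan.Zero, minInterval);
    }

    public void Enqueue(string command)
    {
        pending.Enqueue(command);
    }

    private void SendNext()
    {
        if (pending.TryDequeue(out string command))
            port.WriteLine(command);
    }

    public void Dispose()
    {
        timer.Dispose();
    }
}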
Another way would be to implement some sort of software flow control. It appears that your device supports querying the queue length with the "?STATE" command (page 13). It will respond with:
STATE x1/x2 x3 x4
x1: Number of datapackets in TX queue
x2: Size of TX queue
x3: Status byte (8 bit hexadecimal)
Normal state: status byte = 0
Bit 0 = 1: Error in transceiver
Bit 1 = 1: Error in EEPROM
x4: Current value of the dataset counter (number of last received and saved datapacket)
You could query this before attempting to send a data packet, and simply sleep while the queue is full.
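In sketch form (the response parsing is an assumption based on the "STATE x1/x2 x3 x4" format quoted above, and the back-off delay is arbitrary):

// Sketch: query the device's TX queue before sending, and wait while it is full.
private void SendWithFlowControl(SerialPort port, string dataCommand)
{
    while (true)
    {
        port.WriteLine("?STATE");
        string reply = port.ReadLine();          // e.g. "STATE 3/10 0 42"
        string[] parts = reply.Split(' ', '/');
        int queued   = int.Parse(parts[1]);      // x1: datapackets in TX queue
        int capacity = int.Parse(parts[2]);      // x2: size of TX queue

        if (queued < capacity)
            break;                               // room available, send now

        Thread.Sleep(100);                       // queue full, back off briefly
    }
    port.WriteLine(dataCommand);
}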
Having written a lot of code to interface with finicky hardware (Serial, Ethernet, etc.) in C#, I can offer the following advice:
Implement an abstract class TN9000DeviceBase which has abstract methods for all of the commands supported by the device.
Derive a class TN9000SerialDevice : TN9000DeviceBase which executes the command using serial port.
This will allow you to come back and implement it via Ethernet when requirements change.
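A rough skeleton of that layering (member names and command strings are just examples, not the device's real command set):

// Sketch: device commands behind an abstract base so the transport can change later.
public abstract class TN9000DeviceBase
{
    public abstract void Reset();
    public abstract void Login(string password);
    public abstract string QueryState();          // e.g. wraps the "?STATE" command
}

public class TN9000SerialDevice : TN9000DeviceBase
{
    private readonly System.IO.Ports.SerialPort port;

    public TN9000SerialDevice(System.IO.Ports.SerialPort port)
    {
        this.port = port;
    }

    public override void Reset()          { port.WriteLine("reset"); }
    public override void Login(string pw) { port.WriteLine("login " + pw); }
    public override string QueryState()
    {
        port.WriteLine("?STATE");
        return port.ReadLine();
    }
}
// A future TN9000EthernetDevice : TN9000DeviceBase would implement the same
// methods over a TcpClient without touching the calling code.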

C# Serial Port Check if Device is Connected

I've been working with the SerialPort class a lot lately. Currently I'm trying to figure out the proper way to check if a device is connected to the comm port my application uses. Is there any proper way to check if a device is connected to the comm port? My current method is as follows:
while (isReading == true)
{
    try
    {
        received += serialPort.ReadExisting();
        if (received.Contains('>'))
            isReading = false;
    }
    catch (Exception e)
    {
    }
    if (tick == 10000)
        if (received == "")
        {
            Console.WriteLine("No Data Received. Device isn't connected.");
            isReading = false;
        }
    tick++;
}
Console.WriteLine(received);
It works but I feel it's a little hacky and unreliable. I can keep it if need be but I'd like it if there's a proper alternative to doing this.
Edit: I actually have to set the tick value to about 10,000 to ensure it's reliable. Otherwise I fail to receive data on occasion. Even setting it to 1000 or 5000 is unreliable. Even then, it's not guaranteed to be reliable across multiple machines.
I too need to work with serial ports, and believe me they are a pain.
My method to check if a device is connected usually revolves around issuing a polling command.
While your method may work, I can't help but be reluctant to use a while loop when an event will suffice.
The .NET serial port class offers some useful events:
SerialPort.DataReceived, SerialPort.ErrorReceived and SerialPort.Write
Usually I would issue a polling command at a specified interval to ensure the device is connected.
When the device responds it will fire the DataReceived event, and you can deal with the response accordingly (along with any other necessary data). This can be used in conjunction with a simple Timer or incremented variable to time the response. Note you will need to set the ReadTimeout and WriteTimeout values appropriately. This, along with the ReadExisting and/or ReadLine method, may be of use in your DataReceived event handler.
So, to summarize, (in pseudo code)
Send Polling command, Start Timer
Timer to CountDown for a specified time
If Timer fires, then assume no response
If DataReceived fires (with the expected response), assume connection
(of course, handle any specific exceptions, e.g. TimeoutException, InvalidOperationException)
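A compact sketch of that pattern (the polling command "?ID", the expected '>' reply and the 2-second timeout are placeholders for whatever your device uses):

// Sketch: poll the device and treat a missing reply within the timeout as "not connected".
using System;
using System.IO.Ports;
using System.Threading;

static bool IsDevicePresent(SerialPort port)
{
    var gotReply = new ManualResetEventSlim(false);

    SerialDataReceivedEventHandler handler = (s, e) =>
    {
        string response = port.ReadExisting();
        if (response.Contains(">"))           // expected reply from the device
            gotReply.Set();
    };

    port.DataReceived += handler;
    try
    {
        port.WriteLine("?ID");                              // send the polling command
        return gotReply.Wait(TimeSpan.FromSeconds(2));      // true if a reply arrived in time
    }
    finally
    {
        port.DataReceived -= handler;
    }
}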
Unfortunately with serial ports, there's no proper way to determine if a certain device is connected. You could write a magic message that only your device would respond correctly to, but as described in this answer, this method could cause problems for other connected devices.
Ultimately, you just have to depend on the user selecting the correct port.
Also, if you happen to lose connection to the device, you would only know when you fail to read/write to it. In this case, just throw a LostConnection event.
I would agree that it is hacky, because any device could be connected and sending '>'; that doesn't mean it's your device.
Instead, be dynamic and use something like SerialPort.GetPortNames and WMI Queries to interrogate the devices plugged into the COM ports.
You could use this example as a starting point.
After reading documentation and examples, you should be able to create a list of all devices that register drivers on the computer, along with their connected COM ports.
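A sketch of such a WMI lookup (requires a reference to System.Management; the Win32_PnPEntity query shown is one common way to do it):

// Sketch: list Plug-and-Play devices whose friendly name includes a COM port.
using System;
using System.Management;

static void ListSerialDevices()
{
    using (var searcher = new ManagementObjectSearcher(
        "SELECT Name, DeviceID FROM Win32_PnPEntity WHERE Name LIKE '%(COM%'"))
    {
        foreach (ManagementObject device in searcher.Get())
        {
            // Name typically looks like "USB Serial Port (COM5)".
            Console.WriteLine("{0}  [{1}]", device["Name"], device["DeviceID"]);
        }
    }
}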
Edit:
Since the device doesn't register itself, consider looking at the product drivers for Visual Studio that might make your job a lot easier.

How to safely stream data through a server socket to another socket?

I'm writing a server application for an iPhone application I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and be able to send data bi-directionally.
Depending on the incoming message, the server either processes the message itself, or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first, and deciding what type of packet it is.
This is done by framing the stream in a way that the first 8 bytes are two integers, the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) from the socket the first 8 bytes so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
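In simplified, synchronous form, reading one frame looks like this (my real code does the same thing with async reads; the helper names and the deserialization step are placeholders):

// Sketch: read the 8-byte prefix (header length + payload length), then the header.
static byte[] ReadExactly(System.IO.Stream stream, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new System.IO.EndOfStreamException();
        offset += read;
    }
    return buffer;
}

static void ReadFrame(System.IO.Stream stream)
{
    byte[] prefix = ReadExactly(stream, 8);
    int headerLength  = BitConverter.ToInt32(prefix, 0);
    int payloadLength = BitConverter.ToInt32(prefix, 4);

    byte[] header = ReadExactly(stream, headerLength);
    // Deserialize the header here, then either consume the payload
    // (payloadLength bytes) locally or relay it in chunks to the recipient.
}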
If it needs to be forwarded onto another device, then the next step is to read data coming into the socket in chunks of say, 1024 bytes, and write these directly using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as I'm not loading the entire packet into a buffer and then re-sending it down the wire to the recipient.
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so I have to keep reading until I receive all the bytes. In the case of relaying onto its final destination, this means that I'm calling BeginSend() for each chunk of bytes I receive from the sender, and forwarding that chunk onto the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, this leaves the possibility of another thread doing a similar operation with the same recipient (and therefore same final destination socket), and so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: If the first thread sends a chunk, and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data, and corrupt the first thread's (and the second thread's for that matter) data.
As I write this, I'm just wondering: is it as simple as just locking the socket object?! Would this be the correct option, or could this cause other issues (e.g. issues with receiving data through the locked socket that's being sent BACK from the remote device)?
Thanks in advance!
I was facing a similar scenario a while back. I don't have the complete solution anymore, but here's pretty much what I did:
I didn't use sync sockets; I decided to explore the async sockets in C# - fun ride
I don't allow multiple threads to share a single resource unless I really have to
My "packets" were containing information about size, index and total packet count for a message
My packet's 1st byte was unique to signify that it's a start of a message, I used 0xAA
My packets' last 2 bytes were the result of a CRC-CCITT checksum (ushort)
The objects that did the receiving bit contained a buffer with all received bytes. From that buffer I was extracting "complete" messages once the size was ok, and the checksum matched
The only "locking" I needed to do was in the temp buffer so I could safely analyze it's contents between write/read operations
Hope that helps a bit
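For reference, CRC-CCITT uses the 0x1021 polynomial; a bitwise sketch looks like this (whether the initial value is 0x0000 or 0xFFFF depends on the variant both ends agree on):

// Sketch: CRC-CCITT (polynomial 0x1021), MSB-first, bit by bit.
static ushort CrcCcitt(byte[] data, int offset, int count, ushort initial = 0xFFFF)
{
    ushort crc = initial;
    for (int i = offset; i < offset + count; i++)
    {
        crc ^= (ushort)(data[i] << 8);
        for (int bit = 0; bit < 8; bit++)
        {
            if ((crc & 0x8000) != 0)
                crc = (ushort)((crc << 1) ^ 0x1021);
            else
                crc <<= 1;
        }
    }
    return crc;
}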
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket-pair is established. It sends the data off to the server socket. The socket-pair is unique - no other streams that might be happening between the two phones should interrupt this (they will slow it up, of course).
I don't see how async/sync sockets, assuming implemented correctly, should affect this, either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster up the protocol by adding an 'AA' start byte is an excellent idea - protocols depending on sending just a length as the first element always seem to screw up eventually and result in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem. (I completely misunderstood the topology of the OP's network - I thought each phone was running a TCP server as well as client/s, but there is just one server on a PC/whatever, a-la-chatrooms.) I don't see why you could not lock the socket class with a mutex, thereby serializing the messages. You could queue the messages to the socket, but this has the memory implications that you are trying to avoid.
You could dedicate a connection to supplying only instructions to the phone, eg 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket-pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, that again would require chunking the messages, is to introduce a 'control message', (maybe a chunk with 55 at the start instead of AA), that contains a message ID, (GUID?), that the phone uses to establish a second socket connection to the server, passes up the ID and is then sent the second message on the new socket connection.
Another, (getting bored yet?), way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again, tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after break' functionality anyway since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up...
Rgds,
Martin
Thanks for the help everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than various parts of the app just calling send() when they need to send something.
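In sketch form, that send queue looks roughly like this (simplified and with illustrative names, not my exact code; a single worker thread does the blocking sends so messages can never interleave on the wire):

// Sketch: a per-connection send queue with one worker thread.
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

class SendQueue
{
    private readonly NetworkStream stream;
    private readonly BlockingCollection<byte[]> outgoing = new BlockingCollection<byte[]>();

    public SendQueue(NetworkStream stream)
    {
        this.stream = stream;
        new Thread(SendLoop) { IsBackground = true }.Start();
    }

    // Any part of the app can enqueue; only the worker thread touches the socket.
    public void Enqueue(byte[] message)
    {
        outgoing.Add(message);
    }

    private void SendLoop()
    {
        foreach (byte[] message in outgoing.GetConsumingEnumerable())
        {
            stream.Write(message, 0, message.Length);   // blocking send, one message at a time
        }
    }
}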

Why I can't get all UDP packets?

My program uses UdpClient to try to receive 27 responses from 27 hosts. The size of each response is 10KB. My broadband incoming bandwidth is 150KB/s.
The 27 responses are sent from the hosts almost at the same time, once every 10 seconds.
However, I can only receive 8 - 17 responses each time. The number of responses that I can receive is quite dynamic but within the range.
Can anyone tell me why? Why can't I receive all of them?
I understand UDP is not reliable, but I tried receiving 5 - 10 responses at the same time and it worked. I guess the network links are not that bad.
The code is very simple. On the 27 hosts, I just use UdpClient to send 10KB to my machine.
On my machine, I have one UdpClient receiving datagrams. Each time I get a datagram, I create a thread to handle it (handling it basically means just printing "I received 10KB", but it runs in a thread).
// UDPListener/UDPContext appear to be custom wrappers around UdpClient
listener = new UDPListener(Port);
listener.Start();
while (true) {
    try {
        UDPContext context = listener.Accept();
        ThreadPool.QueueUserWorkItem(new WaitCallback(HandleMessage), context);
    } catch (Exception) { }
}
If I reduce the size of the response down to 3KB, things get much better: roughly 25 responses can be received.
Any more ideas? UDP buffer problems?
As you said yourself, UDP is not reliable. So chances are packets are dropped somewhere.
Note that packet drops are caused just as much by overloaded switches/routers/network cards as by bad links. If someone sends you 27 10KB responses "simultaneously", it might very well be that the buffers of your network card, or a nearby switch, are full, and packets get dropped.
Until you have some code to show, there's probably not much else to say.
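One generic thing worth trying is enlarging the receiving socket's OS buffer, so a burst of 27 x 10KB datagrams (about 270KB) fits while your code catches up (a sketch; 512KB is an arbitrary value with some headroom):

// Sketch: give the receiving socket a bigger OS buffer so bursts of datagrams
// are not dropped before the application reads them.
var udp = new System.Net.Sockets.UdpClient(Port);
udp.Client.ReceiveBufferSize = 512 * 1024;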
The 10kb packets are probably being fragmented. If even one of the fragments is dropped, the packet can't be reassembled. Depending on your network, the 3kb packets may not be fragmented, but in any case they would be fragmented less, increasing the chances that they make it through. You could run a PMTU discovery tool to find out the largest packet size the links support.
UDP is not reliable at all, so I think this trouble is because you are hitting a bottleneck (as it is typically called). UDP sends everything, but out of order and without any checks.
So I think you must build this kind of protocol on top of UDP; I say this because I have already done it.
The key is to send all the packets with an ID. This way the receiver knows which packets are missing and can ask the transmitter for them, like TCP normally does.
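A tiny sketch of the sending side of that idea (a 4-byte sequence number prefixed to each datagram, so the receiver can spot gaps and request a resend):

// Sketch: prefix every datagram with a sequence number for loss detection.
static void SendNumbered(System.Net.Sockets.UdpClient udp, byte[] payload, int sequence)
{
    byte[] datagram = new byte[payload.Length + 4];
    System.BitConverter.GetBytes(sequence).CopyTo(datagram, 0);
    payload.CopyTo(datagram, 4);
    udp.Send(datagram, datagram.Length);   // assumes udp.Connect(...) was called
}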

NetworkStream.Write returns immediately - how can I tell when it has finished sending data?

Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
bool done;

void SendData(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    done = false;
    ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
    while (done == false) Thread.Sleep(10);
}

public void myWriteCallBack(IAsyncResult ar)
{
    NetworkStream ns = (NetworkStream)ar.AsyncState;
    ns.EndWrite(ar);
    done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait for 10 seconds (for example) for a response from the server after sending my data; otherwise I'll assume something was wrong. If it takes 15 seconds to send my data, then it will always time out since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary - it could take 1 second to send it, it could take 10 seconds to send it, it could take a minute to send it. The server does send a response when it has received the data (it's an SMTP server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know if I'm waiting for the data to be sent, or if I'm waiting for the server to respond.
I might want to show the status to the user - I'd like to show "sending data to server", and "waiting for response from server" - how could I do that?
I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
the CLR
the operating system kernel
a virtualized network interface
a switch
a software firewall
a hardware firewall
a router performing network address translation
a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
I can not think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs -- does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, field, and Thread.Sleep. It performs the exact same way functionally.
void SendData(TcpClient tcp, byte[] data) {
    NetworkStream ns = tcp.GetStream();
    // BUG?: should bytWriteBuffer == data?
    IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
    r.AsyncWaitHandle.WaitOne();
    ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help your timeout issue. It can be passed a timeout parameter. This is a lazy wait -- the thread will not be scheduled again until the result is finished, or the timeout expires.
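For example (a sketch, assuming a 10-second budget for the send itself):

// Sketch: give the write 10 seconds to complete before treating it as a failure.
if (!r.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    // the send did not finish in time; close the connection or report an error
}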
I tried to understand the intent of the .NET NetworkStream designers, and they must have designed it this way. After Write, the data to send is no longer handled by .NET. Therefore, it is reasonable that Write returns immediately (and the data will be sent out from the NIC some time soon).
So in your application design, you should follow this pattern rather than trying to make it work your way. For example, using a longer timeout before receiving any data from the NetworkStream can compensate for the time consumed before your command leaves the NIC.
In all, it is bad practice to hard code a timeout value inside source files. If the timeout value is configurable at runtime, everything should work fine.
How about using the Flush() method?
ns.Flush()
That should ensure the data is written before continuing.
Below .NET sits Windows Sockets, which uses TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when data has been transferred, but there is no way (that I am aware of) to get that information in .NET.
edit:
Just an idea, never tried:
Write() blocks only if the socket's buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?), we may get what we want :)
Perhaps try setting
tcp.NoDelay = true
