Determine if UDP host is reachable? - C#

I have a Metro app talking to a device over Wi-Fi using UDP. However, when I disconnect the device from the network, or start the app with the device disconnected, nothing happens: ConnectAsync doesn't throw an exception, the app doesn't throw an exception, the app runs like nothing's wrong.
I can't ping the other end, but if I give it a formatted string it will respond. The device is currently connected to a router which has internet access, but I'm eventually planning to use a router without internet access. I've never done anything with UDP, so I'm at a loss here.
Here is an implementation of a UDP listener/writer (taken from Pete Bright at 10rem.net):
class Network
{
    private DatagramSocket _socket;

    public bool IsConnected { get; set; }
    public bool recieved;
    public string ret;

    public Network()
    {
        IsConnected = false;
        _socket = new DatagramSocket();
        _socket.MessageReceived += OnSocketMessageReceived;
    }

    public async void Connect(HostName remoteHostName, string remoteServiceNameOrPort)
    {
        try
        {
            await _socket.ConnectAsync(remoteHostName, remoteServiceNameOrPort);
        }
        catch (Exception e)
        {
            var msg = new MessageDialog(e.ToString());
            msg.ShowAsync();
        }
        IsConnected = true;
    }

    private void OnSocketMessageReceived(DatagramSocket sender, DatagramSocketMessageReceivedEventArgs args)
    {
        var reader = args.GetDataReader();
        var count = reader.UnconsumedBufferLength;
        var data = reader.ReadString(count);
        ret = data.Trim();
        recieved = true;
    }

    DataWriter _writer = null;

    public async void SendMessage(string message)
    {
        if (String.IsNullOrEmpty(message)) return;

        if (_writer == null)
        {
            var stream = _socket.OutputStream;
            _writer = new DataWriter(stream);
        }

        _writer.WriteString(message);
        await _writer.StoreAsync();
    }
}

UDP sockets are "connection-less", so the protocol does not know anything about whether or not the server and client are connected. To know if a connection is still "active" you will have to implement your own connection detection.
I might recommend reading Beej's guide to sockets. It's a good read and pretty funny:
http://beej.us/guide/bgnet/

As was said, there is no concept like TCP's SYN/ACK handshake to communicate back and forth and ensure something is there.
Clients are neither connected nor disconnected; really they are only listening or sending.
So with that said, you need to implement a receive timeout on the client.
There are some quirks with UDP, since you send data and essentially just fling it out into space. The order in which the packets are received can't matter either, or you are stuck implementing your own ordering scheme here as well.
What you'll need to do here is actually try to reach the device. If you care, you can do this every X seconds.
As stated here: How to test a remote UDP Port
(bear with me, there's a better approach below this, but I wanted to provide multiple options)
You can use UdpClient, set a receive timeout on the underlying socket,
make a connection to that remote server/port, Send some small message
(byte[] !) and call Receive.
IF the port is closed you get an exception saying that the connection
was forcibly closed (SocketException with
ErrorCode 10054 = WSAECONNRESET)... which means the port is NOT open.
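For what it's worth, here is a minimal sketch of that probe idea using UdpClient (note that System.Net.Sockets may not be usable from a Metro app, so treat this as an illustration of the technique rather than drop-in code; the host, port, and timeout are placeholders):

    // Sketch only. Requires System.Net and System.Net.Sockets.
    static bool ProbeUdpEndpoint(string host, int port, int timeoutMs)
    {
        using (var udp = new UdpClient())
        {
            udp.Client.ReceiveTimeout = timeoutMs;  // applies to the blocking Receive below
            udp.Connect(host, port);                // only fixes the default destination; no handshake happens
            udp.Send(new byte[] { 0x01 }, 1);       // any small payload the device tolerates

            var remote = new IPEndPoint(IPAddress.Any, 0);
            try
            {
                udp.Receive(ref remote);            // the device answered: reachable
                return true;
            }
            catch (SocketException)
            {
                // 10054 (WSAECONNRESET) = ICMP "port unreachable" came back,
                // 10060 = the receive timed out (silent or unreachable host).
                return false;
            }
        }
    }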
However, I think a better approach is to actually agree upon a protocol ID or some specific data that the clients send every X seconds. If it's received, update your 'client connected' table; otherwise consider them disconnected until the client sends a packet with the protocol ID again.
A great series on this that you can probably easily adapt to c# is at:
http://gafferongames.com/networking-for-game-programmers/virtual-connection-over-udp/
I believe your code above can be refactored as well to only Send() to an address rather than connect, since there really is no true connect.
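And a rough sketch of the heartbeat bookkeeping on the receiving side (the magic byte, the 10-second timeout, and the dictionary are all arbitrary choices for illustration, not part of any existing protocol):

    // Sketch only: mark a sender as "connected" if it has sent the agreed-upon
    // protocol ID recently. MAGIC_ID and the timeout are made-up values.
    const byte MAGIC_ID = 0x7E;
    static readonly Dictionary<string, DateTime> lastSeen = new Dictionary<string, DateTime>();

    static void OnDatagram(string senderAddress, byte[] payload)
    {
        if (payload.Length > 0 && payload[0] == MAGIC_ID)
            lastSeen[senderAddress] = DateTime.UtcNow;   // refresh the 'client connected' table
    }

    static bool IsClientConnected(string senderAddress)
    {
        DateTime seen;
        return lastSeen.TryGetValue(senderAddress, out seen)
            && DateTime.UtcNow - seen < TimeSpan.FromSeconds(10);
    }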

To help out people that stumble upon this: Apparently my google-fu is pretty weak. This shows how to set timeouts for TCP and UDP sockets. Default behavior is to never time out (which is consistent with what I saw).
Edit: It doesn't work. Even with a timeout of 500 ms I'm still seeing the same behavior of "no exception thrown".

Related

How can I investigate application/network bottlenecks in my TCP application server and the environment?

I'm trying to write a high-performance TCP server (an LDAP server) using this tutorial by David Fowler as the basis of the MyServerListener.cs part that handles incoming connections.
This is a simple .NET 7 console app (with a few changes) that I borrowed from David; it just accepts incoming clients, processes the requests and writes "hello" to the response:
using System.Buffers;
using System.IO.Pipelines;
using System.Net;
using System.Net.Sockets;
using System.Text;

internal class Program
{
    const int PORT = 389; // injecting from config
    const int BACKLOG_LENGTH = 200; // max backlog size in windows server

    static async Task Main(string[] args)
    {
        var listenSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
        listenSocket.Bind(new IPEndPoint(IPAddress.Any, PORT));
        Console.WriteLine("Listening on port " + PORT);
        listenSocket.Listen(BACKLOG_LENGTH);

        while (true)
        {
            var socket = await listenSocket.AcceptAsync();
            _ = ProcessLinesAsync(socket);
        }
    }

    private static async Task ProcessLinesAsync(Socket socket)
    {
#if DEBUG
        Console.WriteLine($"[{socket.RemoteEndPoint}]: connected");
#endif
        // Create a PipeReader/PipeWriter over the network stream
        var stream = new NetworkStream(socket);
        var reader = PipeReader.Create(stream);
        var writer = PipeWriter.Create(stream);

        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            while (TryReadLine(ref buffer, out ReadOnlySequence<byte> line))
            {
                // Process the line.
                ProcessLine(line);

                try
                {
                    // writing a sample message to the response
                    var helloBytes = Encoding.ASCII.GetBytes("hello\n");
                    await writer.WriteAsync(helloBytes);
                }
                catch (Exception ex)
                {
                    throw;
                }
            }

            // Tell the PipeReader how much of the buffer has been consumed.
            reader.AdvanceTo(buffer.Start, buffer.End);

            // Stop reading if there's no more data coming.
            if (result.IsCompleted)
            {
                break;
            }
        }

        // Mark the PipeReader as complete.
        await reader.CompleteAsync();
#if DEBUG
        Console.WriteLine($"[{socket.RemoteEndPoint}]: disconnected");
#endif
    }

    private static bool TryReadLine(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> line)
    {
        // Look for an EOL in the buffer.
        SequencePosition? position = buffer.PositionOf((byte)'\n');
        if (position == null)
        {
            line = default;
            return false;
        }

        // Skip the line + the \n.
        line = buffer.Slice(0, position.Value);
        buffer = buffer.Slice(buffer.GetPosition(1, position.Value));
        return true;
    }

    private static void ProcessLine(in ReadOnlySequence<byte> buffer)
    {
        foreach (var segment in buffer)
        {
            // Doing some tasks
#if DEBUG
            Console.Write(Encoding.UTF8.GetString(segment.Span));
            Console.WriteLine();
#endif
        }
    }
}
This server listens on a port (389), processes the incoming request, does some work, and then writes a message to the response using PipeReader and PipeWriter.
I'm doing my best to keep memory/heap allocations low (using Span<>, Memory<>, ...) so the code base stays fast and optimized. But for now I'm trying to test the production environment with the above code to examine the throughput, meaning the server resources, my TCP server application itself, the clients and the network.
I'm using Apache JMeter for the load/stress tests.
In some scenarios (sending more than 5000 requests/sec) I get "Connection refused" error messages in the JMeter logs, but there is no notable pressure on the server's or the clients' (JMeter) resources (CPU/memory).
I tried to optimize the server's configuration and changed some TCP-related parameters (which I googled), like MaxUserPort: 65534, TcpTimedWaitDelay: 30, and different backlog sizes, but no improvement.
So I'm almost sure there is something network-related going on (packets being dropped/rejected or something like that).
I also turned off the firewall on the testing clients and the server, but I don't have any access to the network infrastructure (and I don't know what it consists of), e.g. firewalls, ISA, TMG, etc.
_____________
Update 1:
I already increased our clients' ephemeral port range to the maximum using this netsh command (run from PowerShell):
netsh int ipv4 set dynamic tcp start=5000 num=65535
and now we have this:
netsh int ipv4 show dynamicport tcp
Start Port : 1024
Number of Ports : 64511
We also checked the JMeter logs for any error indicating this situation (ephemeral port exhaustion); at first we saw this message:
Non HTTP response code: java.net.BindException, Non HTTP response
message: Address already in use
But now it's gone, and we don't have a large number of TIME_WAIT ports to worry about.
We are also testing our scenario with SO_LINGER:0 and monitoring TIME_WAIT ports in real time (using some tools), and we are sure that this isn't our concern right now.
_____________
So my question is: how can I find out why I can't send more traffic (threads/requests per second from the JMeter clients) to the server to test my TCP server application's performance? For now, the server's CPU usage doesn't go above ~10%.
At this point, is this a network-related problem? How can I be sure about that? For example, can I use a network analyzer (e.g. PRTG Network Monitor) to find out whether any TCP packets are being dropped? Any other tips are welcome.
Most probably TCP ports are not being recycled fast enough; there is a network parameter which controls how long a connection can stay in the TIME_WAIT state, so you might also want to reduce TcpTimedWaitDelay.
It might also be a good idea to increase the maximum number of TCP connections via the TcpNumConnections parameter.
And last but not least, it might be the case that JMeter is not capable of sending the requests fast enough, so you might need to play the same trick on the load generator side. In addition, make sure to follow JMeter Best Practices and monitor CPU/RAM/network/disk/swap usage on the JMeter side, as you may need to switch to Distributed Testing if one machine is not capable of producing more than 5k requests per second.
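If you want to rule out JMeter itself, one option is a throwaway C# client that opens a burst of plain TCP connections against the server; if "connection refused" still shows up here, the limit is not the load generator. This is only a sketch, and HOST, PORT and COUNT are placeholder values:

    // Rough sketch of a minimal connection-burst tester (not a JMeter replacement).
    using System.Net.Sockets;
    using System.Text;

    const string HOST = "server-under-test";   // placeholder
    const int PORT = 389;                      // placeholder
    const int COUNT = 5000;                    // placeholder

    int failed = 0;
    var tasks = Enumerable.Range(0, COUNT).Select(async _ =>
    {
        try
        {
            using var client = new TcpClient();
            await client.ConnectAsync(HOST, PORT);
            var stream = client.GetStream();
            await stream.WriteAsync(Encoding.ASCII.GetBytes("ping\n"));
            var buffer = new byte[64];
            await stream.ReadAsync(buffer);            // wait for the "hello\n" reply
        }
        catch (Exception)
        {
            Interlocked.Increment(ref failed);         // refused, reset, or read failure
        }
    });

    await Task.WhenAll(tasks);
    Console.WriteLine($"Failed connections: {failed} of {COUNT}");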

Server not responding to incoming connections

I'm writing a Connect-Four game in C#, and now want to include the possibility to play games online using TCP. Each instance of the game exe should work both as a server, in order to listen for incoming game invitations, and as a client, to send said invitations. Of course, only one role at a time is important.
I have read and watched a few C# tutorials on this (namely Jeff Chastine's tutorial 22) and I understand the basics of network communication. After getting past a few permission errors, fixed by executing as administrator, I am now running into two issues.
1) When I try connecting from a machine on the same network, I always get an error saying the desired server did not respond to the request. When I enter the debugger, the program is stuck at the .AcceptTcpClient call (as if no connection has been attempted). I understand that this is a blocking call, but the code should continue when a connection is attempted. I have not tried connecting two machines on different networks, as I have only one network available.
2) This one is a rather minor issue regarding threading: even though I call listenerThread.Abort() when I close the application, the thread does not stop. I do not have too tight a grip on threads in C#, so I assume this problem is a rather easy fix.
Initialisation of listener and listenerThread
listenerThread = new Thread(ListenForInvites);
listener = new TcpListener(Dns.Resolve("localhost").AddressList[0], setting.port);
client = new TcpClient();
The method for listening to incoming connections
private void ListenForInvites()
{
    try
    {
        listener.Start();
        TcpClient enemyClient = listener.AcceptTcpClient(); // the call where it gets stuck even if someone connects
        onlineSr = new StreamReader(enemyClient.GetStream());
        onlineSw = new StreamWriter(enemyClient.GetStream());
        onlineSw.WriteLine($"ACCEPT {player.name} {player.color.R} {player.color.G} {player.color.B}"); // I am using my own protocol, not HTTP (no clue if this is a horrible idea)
        HandleConnection().Wait();
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message, "Error");
    }
}
The method for attempting the connection
public void SendInvite(string ip)
{
    try
    {
        string[] ipSplit = ip.Split(':');
        client.Connect(ipSplit[0], Convert.ToInt16(ipSplit[1]));
        onlineSr = new StreamReader(client.GetStream());
        onlineSw = new StreamWriter(client.GetStream());
        onlineSw.WriteLine($"INVITE {player.name} {player.color.R} {player.color.G} {player.color.B}"); // player is an instance variable
        onlineSw.Flush();
        HandleConnection().Wait();
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message, "Fehler");
    }
}
What am I doing wrong? Any help is appreciated.

Tcp.BeginConnect thinks it is connected, even if there is no server

I am trying to make an async connection to a server using TcpClient.BeginConnect, but am encountering some difficulties. This is my first time using TCP, so please bear with me.
The connection itself works fine when the server is running; I can send and receive messages without problems. However, when I stop the server and try to connect to it, TcpClient.BeginConnect will pretend it is actually connected to a server without returning an error, until I try to actually send data, which will obviously fail.
When I use TcpClient.Connect() instead, it returns "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond." when no connection is established after a few seconds, letting me know the connection failed.
Is there a way to get this same behaviour with TcpClient.BeginConnect? Or am I doing something wrong myself?
I looked around and found C# BeginConnect callback is fired when not connected, which is somewhat similar; the answer there was that EndConnect had to be called in the callback before the socket becomes usable, but I'm already doing that.
My code:
public static void OpenTcpASyncConnection()
{
    if (client == null)
    {
        client = new TcpClient();
        IAsyncResult connection = client.BeginConnect(serverIp, serverPort, new AsyncCallback(ASyncCallBack), client);
        bool succes = connection.AsyncWaitHandle.WaitOne(); // returns true
        if (!succes)
        {
            client.Close();
            client.EndConnect(connection);
            throw new Exception("TcpConnection::Failed to connect.");
        }
        else
        {
            Debug.LogFormat("TcpConnection::Connecting to {0} succeeded", serverIp);
        }
    }
    else
    {
        Debug.Log("TcpConnection::Client already exists");
    }
}

public static void ASyncCallBack(IAsyncResult ar)
{
    Debug.Log("Pre EndConnect");
    client.EndConnect(ar);
    Debug.Log("Post EndConnect"); // this never gets called?
}
public static void ASyncCallBack(IAsyncResult ar)
{
Debug.Log("Pre EndConnect");
client.EndConnect(ar);
Debug.Log("Post EndConnect");//this never gets called?
}
The boolean succes is true even if the server is offline (or does this always return true as long as the operation finishes?), so I assume it thinks it is actually connected, and the Debug.Log after client.EndConnect(ar) never gets called. Not a single error gets returned.
In summary: am I forgetting something/doing something wrong, or is this expected behaviour?
Edit: the language is C# with the .NET 3.5 Framework. It is meant for a Unity application, though I'm not inheriting from MonoBehaviour for this. If you require any additional information I will try to provide it.
Kind regards and thanks for your time,
Remy.

TCP socket.receive() seems to drop packets

I am working on a client-server application in C#. The communication between them is over TCP sockets. The server listens on a specific port for incoming client connections. After a new client arrives, its socket is saved in a socket list. I give every new client socket a receive timeout of 1 ms. To receive from the client sockets without blocking my server, I use the thread pool like this:
private void CheckForData(object clientSocket)
{
    Socket client = (Socket)clientSocket;
    byte[] data = new byte[client.ReceiveBufferSize];
    try
    {
        int dataLength = client.Receive(data);
        if (dataLength == 0) // means client disconnected
        {
            throw (new SocketException(10054));
        }
        else if (DataReceivedEvent != null)
        {
            string RemoteIP = ((IPEndPoint)client.RemoteEndPoint).Address.ToString();
            int RemotePort = ((IPEndPoint)client.RemoteEndPoint).Port;
            Console.WriteLine("SERVER GOT NEW MSG!");
            DataReceivedEvent(data, new IPEndPoint(IPAddress.Parse(RemoteIP), RemotePort));
        }
        ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForData), client);
    }
    catch (SocketException e)
    {
        if (e.ErrorCode == 10060) // receive timeout
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForData), client);
        }
        else if (e.ErrorCode == 10054) // client disconnected
        {
            if (ConnectionLostEvent != null)
            {
                ConnectionLostEvent(((IPEndPoint)client.RemoteEndPoint).Address.ToString());
                DisconnectClient(((IPEndPoint)client.RemoteEndPoint).Address.ToString());
                Console.WriteLine("client forcibly disconected");
            }
        }
    }
}
My problem is that sometimes, when the client sends 2 messages one after another, the server doesn't receive the second message. I checked with Wireshark and it shows that both messages were received and ACKed.
I can force this problem to occur by putting a breakpoint here:
if (e.ErrorCode == 10060) // receive timeout
{
    ThreadPool.QueueUserWorkItem(new WaitCallback(CheckForData), client);
}
then sending the two messages from the client and releasing the breakpoint.
Has anyone run into this problem before?
my problem is when sometimes the client send 2 messages one after another, the server doesn't receive the second message
I think it's much more likely that it does receive the second message, but in a single Receive call.
Don't forget that TCP is a stream protocol - just because the data is broken into packets at a lower level doesn't mean that one "send" corresponds to one "receive". (Multiple packets may be sent due to a single Send call, or multiple Send calls may be coalesced into a single packet, etc.)
It's generally easier to use something like TcpClient and treat its NetworkStream as a stream. If you want to layer "messages" on top of TCP, you need to do so yourself - for example, prefixing each message with its size in bytes, so that you know when you've finished receiving one message and can start on the next. If you want to handle this asynchronously, I'd suggest using C# 5 and async/await if you possibly can. It'll be simpler than dealing with the thread pool explicitly.
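As a rough illustration of that length-prefix idea (the 4-byte header and the helper names here are just one possible convention, not an established protocol):

    // Sketch of length-prefixed message framing over a NetworkStream.
    // Requires System, System.IO, System.Net.Sockets, System.Threading.Tasks.
    static async Task SendMessageAsync(NetworkStream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length);   // 4-byte length prefix
        await stream.WriteAsync(header, 0, header.Length);
        await stream.WriteAsync(payload, 0, payload.Length);
    }

    static async Task<byte[]> ReadMessageAsync(NetworkStream stream)
    {
        byte[] header = await ReadExactAsync(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return await ReadExactAsync(stream, length);
    }

    // Keep reading until exactly 'count' bytes arrive; a single ReadAsync may
    // return fewer bytes than requested.
    static async Task<byte[]> ReadExactAsync(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = await stream.ReadAsync(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }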
Message framing is what you need to do. Here: http://blog.stephencleary.com/2009/04/message-framing.html
If you are new to socket programming, I recommend reading these FAQs: http://blog.stephencleary.com/2009/04/tcpip-net-sockets-faq.html

C# UDP client receive too slow

I'm trying to communicate between a PLC (an electronic device) and a PC. The firewall is turned off. I can see the received packets in Wireshark.
Question 1: Receiving messages is too slow. Why? It takes quite a while for them to arrive in my code. My code is below.
Question 2: How can Wireshark capture these messages so quickly? How can I achieve that in C#?
Question 3: I have to turn off the firewall to receive messages, but Wireshark doesn't need the firewall turned off. How can I achieve this without ever turning off the firewall? This is basically a 1-to-1 local communication.
private void udpcommincate()
{
    sock_rcv = new UdpClient(6002);
    try
    {
        sock_rcv.BeginReceive(new AsyncCallback(recv), null);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
    }
}

private void recv(IAsyncResult res)
{
    IPEndPoint RemoteIpEndPoint = new IPEndPoint(IPAddress.Any, 6002);
    plc_gelen = sock_rcv.EndReceive(res, ref RemoteIpEndPoint);
    flag = BitConverter.ToInt32(plc_gelen, 0);
    sock_rcv.BeginReceive(new AsyncCallback(recv), null);
}
For simple UDP communication you don't need all this asynchronous machinery - it takes time to post the request to the thread pool, dispatch the callback, etc. If you want speed, just do a blocking read in a loop, all in one thread (see the sketch below).
And for 3: Wireshark taps into a special kernel interface (implemented in the winpcap library) that gets a copy of all packets matching a given filter, often before the in-kernel firewall gets its hands on them.
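A minimal sketch of that blocking loop (port 6002 is taken from the code above; the dedicated thread and the Int32 payload interpretation are assumptions):

    // Sketch: one dedicated thread doing blocking receives, no async machinery.
    // Requires System.Net, System.Net.Sockets, System.Threading.
    var receiveThread = new Thread(() =>
    {
        using (var udp = new UdpClient(6002))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] datagram = udp.Receive(ref remote);   // blocks until a packet arrives
                int flag = BitConverter.ToInt32(datagram, 0);
                // hand 'flag' off to the UI/processing code here
            }
        }
    });
    receiveThread.IsBackground = true;   // don't keep the process alive on exit
    receiveThread.Start();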
