This may be a bit of a long post.
I have a server and a client, and I can see and count the sent/received messages live on both machines, so that I don't have to go into debug in VS2012.
I am running my server on a separate machine in California, so the server and the client are on completely different IPs and communicating over the internet for live tests. They are using DNS names and resolving them fine.
Both physical PCs have no antivirus and no third-party firewall, and Windows Firewall is off on both.
Both machines have Wireshark installed for tracking my packets on UDP port 29999.
The sequence works like this: the client sends a logon, the server verifies the client by some credentials, and the server sends player information (stats).
When starting the server executable the first time, the messages come through to the client without failing, every time.
If I restart the client and try again, the client does not receive the messages:
1) The counter on the server.exe increments properly.
2) The server's Wireshark shows the messages being sent.
3) The client's Wireshark sees the UDP packets come in on the proper port from the proper IP address.
But the client.exe message counter does not increment.
If I run the client in DEBUG mode in VS2012 and set a breakpoint as shown here:
while ((_NetworkIncomingMsg = _NetworkClient.ReadMessage()) != null)
{
    ReadInTime = DateTime.Now; // <<-- breakpoint here
    // blah blah more code
}
It never hits; no message is ever received.
It's important to note that if I put a breakpoint on the while statement, it does fire, but no message is read, so ReadMessage() returns null and the body is skipped.
I believe it has something to do with either the timing or the placement of the ReadServerMessages() method.
I have ReadServerMessages() firing as the Elapsed event of a Timer, as shown here. The timer is constructed to fire every 1.0 milliseconds. This works pretty much flawlessly in every other portion of the software, including when actually connected to a dedicated server and constantly sending packets.
public System.Timers.Timer ClientNetworkTick = new System.Timers.Timer(1.0);

void update_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    // Check if the server sent new messages
    try
    {
        ReadServerMessages();
    }
    catch (Exception ex)
    {
        // note: any exception thrown by ReadServerMessages() is silently swallowed here
    }
}
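One thing worth ruling out first: because that catch block is empty, an exception inside ReadServerMessages() (for instance, if the client object is in a bad state after the restart) would vanish without a trace and look exactly like "no messages received". A minimal sketch that just surfaces those exceptions, where LogError is a hypothetical stand-in for whatever logging the program already has:

void update_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    try
    {
        ReadServerMessages();
    }
    catch (Exception ex)
    {
        // LogError is a placeholder for your own logging; the point is simply
        // that a failure here should be visible instead of silently swallowed
        LogError("ReadServerMessages failed: " + ex);
    }
}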
Any thoughts? Anything I left out, let me know. Thanks!
I am writing a C# application which communicates with an external device via Ethernet. I am using SharpPcap version 4.5.0 for this.
Unfortunately, I had to realize that some incoming packets are dropped. For testing, I put a switch between the external device and my computer, which also logs every packet. In this log, the missing packet is visible. Hence I am quite sure that the packet is really sent (and that it's not an error of the external device).
This is the code that I use:
public bool TryActivateChannel(uint channelNumber, out string message)
{
    message = string.Empty;
    devices[(int)channelNumber].Open(DeviceMode.Promiscuous);
    devices[(int)channelNumber].OnPacketArrival += PacketArrived;
    devices[(int)channelNumber].StartCapture();
    return true;
}

public bool CloseChannel(uint channelNumber, out string message)
{
    message = string.Empty;
    devices[(int)channelNumber].OnPacketArrival -= PacketArrived;
    devices[(int)channelNumber].Close();
    return true;
}

private void PacketArrived(object sender, CaptureEventArgs e)
{
    if (e.Packet.LinkLayerType != PacketDotNet.LinkLayers.Ethernet)
    {
        return;
    }
    else
    {
        inputQueue.Enqueue(e);
    }
}
devices is just CaptureDeviceList.Instance, and inputQueue is a ConcurrentQueue which is dequeued on another thread. That thread writes every incoming packet into a *.pcap file (where the packets are missing). Additionally, I looked at the Statistics property of my ICaptureDevice, which claims that no packet is dropped. I also ran it on a different computer, to make sure it is not a problem with the network card.
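For reference, the consumer thread described above would look roughly like this. This is a sketch, not the asker's actual code: the queue and loop mirror the description, and it assumes SharpPcap's CaptureFileWriterDevice for the *.pcap output (check the exact constructor for your SharpPcap version):

private readonly ConcurrentQueue<CaptureEventArgs> inputQueue = new ConcurrentQueue<CaptureEventArgs>();
private volatile bool running = true; // assumed shutdown flag

private void ConsumerLoop()
{
    // CaptureFileWriterDevice writes RawCapture packets to a capture file
    var writer = new CaptureFileWriterDevice("capture.pcap");
    CaptureEventArgs e;
    while (running)
    {
        if (inputQueue.TryDequeue(out e))
        {
            writer.Write(e.Packet); // e.Packet is the RawCapture from the event
        }
        else
        {
            System.Threading.Thread.Sleep(1); // don't spin while the queue is empty
        }
    }
    writer.Close();
}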
At this point, I am really helpless. Did I do anything wrong in my code? Is this a known issue? I read somewhere that SharpPcap can manage up to 3 MBit/s. I am far away from this value, hence I don't believe it's a performance problem.
Addendum: Instead of the ConcurrentQueue, I also tried the approach with the List provided by the author. Same result: some packets are missing. I also had a version without a second thread, where the packets were processed directly in the event handler. Same result: packets are missing. Moreover, I captured simultaneously with Wireshark; there, the packets are also missing. I realized that the missing packets all have one thing in common: a certain length (more than about 60 bytes). For shorter packets, I never observed any loss. I am using WinPcap 4.1.3. Is the problem located there?
For the record, if you don't see the packets in Wireshark, the problem is neither in your code nor in SharpPcap. It means it's either in the hardware or in the driver/OS.
Common reasons that you don't receive packets:
- VLAN tagging: depending on the adapter configuration, it may drop VLAN-tagged frames before they ever reach the OS.
- Firewall: some firewalls are capable of preventing packets from reaching the Npcap/WinPcap driver; this usually affects IP packets.
- Faulty driver: for example, the Npcap bug https://github.com/nmap/npcap/issues/119
- Packets "discarded": this means that the packet was rejected by the hardware itself. You can check for this using the command netstat -e. Usual reasons:
  - Bad cables: yes, really.
  - Frame collisions: these occur more frequently on half-duplex links and when the time between packets is too short.
I just searched for a possible solution to identify when the client disconnects.
I found this:
public bool IsConnected(Socket s)
{
    try
    {
        return !(s.Poll(1, SelectMode.SelectRead) && s.Available == 0);
    }
    catch (SocketException)
    {
        return false;
    }
}
I'm using a while loop in my Main with Thread.Sleep(500), running the IsConnected method each iteration. It works all right when I run it through Visual Studio: when I click Stop Debugging, the server-side program is in fact notified. But when I just go to the exe in the bin directory and launch it, it does notify me of a connection, yet when I close the program (manually via the 'X' button, or through Task Manager) the IsConnected method apparently still returns true.
I'm using a simple TCP connection:
client = new TcpClient();
client.Connect("10.0.0.2", 10);

Server:

Socket s = tcpClient.Client;
while (true)
{
    if (!IsConnected(s))
        MessageBox.Show("disconnected");
    Thread.Sleep(500); // the 500 ms delay described above
}
(It's running on a thread, by the way.)
Any suggestions, guys?
I even tried to close the connection when the client closes:
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    // note: FormClosing does not run when the process is killed from Task Manager
    client.Close();
    s.Close();
    Environment.Exit(0);
}
I don't know what to do.
What you are asking for is not possible. TCP will not report an error on the connection unless an attempt is made to send on the connection. If all your program ever does is receive, it will never notice that the connection no longer exists.
There are some platform-dependent exceptions to this rule, but none involving the simple disappearance of the remote endpoint.
The correct way for a client to disconnect is for it to gracefully close the connection with a "shutdown" operation. In .NET, this means the client code calls Socket.Shutdown(SocketShutdown.Send). The client must then continue to receive until the server calls Socket.Shutdown(SocketShutdown.Both). Note that the shutdown "reason" is generally "send" for the endpoint initiating the closure, and "both" for the endpoint acknowledging and completing the closure.
Each endpoint will detect that the other endpoint has shutdown its end by the completion of a receive operation with 0 as the byte count return value for that operation. Neither endpoint should actually close the socket (i.e. call Socket.Close()) until this two-way graceful closure has completed. I.e. each endpoint has both called Socket.Shutdown() and seen a zero-byte receive operation completion.
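Putting those two paragraphs together, a minimal sketch of the initiating side might look like this (the socket and buffer names are illustrative, and error handling is omitted):

void DisconnectGracefully(Socket socket)
{
    // 1. Tell the other end we are done sending
    socket.Shutdown(SocketShutdown.Send);

    // 2. Keep receiving until the other end completes its shutdown,
    //    signalled by a receive operation returning 0 bytes
    var buffer = new byte[4096];
    while (socket.Receive(buffer) > 0)
    {
        // process any remaining data the other end sends
    }

    // 3. Only now is it safe to actually close the socket
    socket.Close();
}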
The above is how graceful closure works, and it should be the norm for server/client interactions. Of course, things do break. A client could crash, the network might be disconnected, etc. Typically, the right thing to do is to delay recognition of such problems as long as possible; for example, as long as the server and client have no need to actually communicate, then a temporary network outage should not cause an error. Forcing one is pointless in that case.
In other words, don't add code to try to detect a connection failure. For maximum reliability, let the network try to recover on its own.
In some less-common cases, it is desirable to detect connection failures earlier. In these cases, you can enable "keep alive" on the socket (to force data to be sent over the connection, thus detecting interruptions in the connection…see SocketOptionName.KeepAlive) or implement some timeout mechanism (to force the connection to fail if no data is sent after some period of time). I would generally recommend against the use of this kind of technique, but it's a valid approach in some cases.
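Enabling keep-alive is one line; a sketch, assuming socket is a connected Socket:

// Ask the OS to probe the connection periodically; a broken connection
// will then surface as an error on a subsequent operation
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);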
I have a TCP socket-based client/server system.
Everything works fine, but when the network is disconnected on the client end and then reconnected, the server automatically gets a SocketError.ConnectionReset sent from the client, and on that the socket is closed on the server side. This is also fine.
But when I look into the client side, it shows the socket is still connected to the server. (This does not happen every time; sometimes it shows disconnected and sometimes connected.)
Does it make sense that "the server gets a SocketError.ConnectionReset from the client end but the client is still connected"?
So I want to know: what are the possible reasons for SocketError.ConnectionReset, and how do I handle the type of problem I have mentioned?
Again, everything works fine in the normal case (e.g. if I exit the client, the socket is disconnected; same for the server).
Thanks in advance.
EDIT:
Here is the code on the client side. It's actually a timer that ticks every 3 seconds throughout the program's lifetime and checks whether the socket is connected; if it's disconnected, it tries to reconnect through a new socket instance.
private void timerSocket_Tick(object sender, EventArgs e)
{
    try
    {
        if (sck == null || !sck.Connected)
        {
            ConnectToServer();
        }
    }
    catch (Exception ex)
    {
        RPLog.WriteDebugLog("Exception occurred at: " + System.Reflection.MethodBase.GetCurrentMethod().ToString() + " Message: " + ex.Message);
    }
}
In a normal situation (without network disconnect/reconnect), if the TCP server gets a SocketError.ConnectionReset from any client, on the client side I see the client's socket is disconnected, and it tries to reconnect again through the code shown. But when the situation explained earlier happens, the server gets a SocketError.ConnectionReset while the client shows it is still connected, even though the TCP server shows the reset command was sent from that exact client.
There are several causes, but the most common is that you have written to a connection that has already been closed by the other end. In other words, an application protocol error. When it happens you have no choice but to close the socket; it is dead. However, you can fix the underlying cause.
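In code terms, the only sensible response when this surfaces is along these lines (a sketch; sck and ConnectToServer are taken from the question's own code):

try
{
    sck.Send(data);
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode == SocketError.ConnectionReset)
    {
        // The connection is dead: close the socket and reconnect if appropriate
        sck.Close();
        ConnectToServer();
    }
}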
When discussing a TCP/IP issue like this, you must mention the network details between the client and the server.
When one side says the connection was reset, it simply means that a RST packet appeared on the wire. But to know who sent the RST packet and why, you must use network packet captures (Wireshark or any other similar tool):
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
You won't easily find out the cause at the .NET Framework level.
The problem with using Socket.Connected as you are is that it only gives you the connected state as of the last Send or Receive operation. That is, it will not tell you that the socket has disconnected unless you first try to send some data to it or receive data from it.
From MSDN description of the Socket.Connected property:
Gets a value that indicates whether a Socket is connected to a remote host as of the last Send or Receive operation.
So in your example, if the socket was functioning correctly when you last sent or received any data from it, the timerSocket_Tick() method would never call ConnectToServer(), even if the socket was now not connected.
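The MSDN documentation for Socket.Connected also shows how to get an up-to-date answer: perform a non-blocking, zero-byte Send and inspect the result. A sketch along those lines, adapted to this question's sck (not production code):

bool IsStillConnected(Socket sck)
{
    bool blockingState = sck.Blocking;
    try
    {
        byte[] tmp = new byte[1];
        sck.Blocking = false;
        sck.Send(tmp, 0, 0);   // zero-byte send: no payload actually travels
        return true;
    }
    catch (SocketException e)
    {
        // 10035 == WSAEWOULDBLOCK: the socket is fine, just busy
        return e.NativeErrorCode == 10035;
    }
    finally
    {
        sck.Blocking = blockingState;
    }
}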
how to handle such type of problem i have mentioned?
Close the socket and initiate a new connection.
Okay, I know there is lots of info out there on this, and I promise you I have read it all and tried umpteen different methods to get this working!
I have a socket server program which runs on a laptop. I then have up to 50 laptops connected wirelessly via the same LAN to the server. The client laptops all connect to the server (using Socket.ConnectAsync), and the server uses async methods as well to send and receive data. The server shows the user a list of connected client laptops; this list seems to be accurate and picks up whenever a client disconnects and connects. However, the client laptops never seem to detect that the connection to the server has been lost under certain circumstances (i.e. if the server program crashes, if the server laptop goes into standby mode, etc.). I have a timer on the client laptops which polls the connection every 5 seconds as follows:
bool SocketConnected(Socket s)
{
    bool part1 = s.Poll(0, SelectMode.SelectWrite);
    bool part2 = (s.Available == 0);
    if (!part1 && part2)
    {
        return false;
    }
    else
    {
        return true;
    }
}
I have tried all the SelectMode values (SelectWrite, SelectRead, SelectError) and different timeout values. I have tried checking s.Connected after these operations and all manner of other methods to determine the connection state, and nothing produces reliable results! I think I could achieve the result I want by sending dummy information every 5 seconds and checking s.Connected after doing so, but I don't really want to do this, as each laptop is already sending lots of data to the server as it is. Any help at all is massively appreciated! Thanks.
The only reliable way to check if a connection is alive is to send something to the other end and see if it arrives. You can do this either manually by sending and receiving a "ping" value from time to time, or automatically by enabling the KeepAlive socket option.
The MSDN documentation for Socket.Poll is very explicit about the exact situations (server crashes, standby) you mentioned:
This method cannot detect certain kinds of connection problems, such as a broken network cable, or that the remote host was shut down ungracefully. You must attempt to send or receive data to detect these kinds of errors.
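If you do go the KeepAlive route, note that the OS default probe timing is very long (on Windows, the first probe is sent only after two hours of idle time by default), so for a 5-second-style check you would want to shorten it. On Windows this can be done per socket via IOControl; a sketch, with illustrative timing values:

static void EnableFastKeepAlive(Socket socket)
{
    // Structure expected by SIO_KEEPALIVE_VALS: three 32-bit values
    // (on/off flag, idle time before first probe in ms, interval between probes in ms)
    byte[] keepAlive = new byte[12];
    BitConverter.GetBytes((uint)1).CopyTo(keepAlive, 0);     // enable
    BitConverter.GetBytes((uint)5000).CopyTo(keepAlive, 4);  // 5 s idle before probing
    BitConverter.GetBytes((uint)1000).CopyTo(keepAlive, 8);  // 1 s between probes
    socket.IOControl(IOControlCode.KeepAliveValues, keepAlive, null);
}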
I tried creating a poison message scenario in the following manner.
1- Created a message queue on a server (transactional queue).
2- Created a receiver app that handles incoming messages on that server.
3- Created a client app, located on a client machine, which sends messages to that server using the specific name of the queue.
4- I used the sender client app with the following code (C#, .NET Framework 4.0):
// mq is the System.Messaging.MessageQueue opened earlier for the remote queue
System.Messaging.Message mm = new System.Messaging.Message("Some msg");
mm.TimeToBeReceived = new TimeSpan(0, 0, 50);
mm.TimeToReachQueue = new TimeSpan(0, 0, 30);
mm.UseDeadLetterQueue = true;
mq.Send(mm);
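The question doesn't show how mq is created. For context, a hypothetical setup might look like the following; the format name is purely illustrative, and note that a send to a transactional queue (step 1 above) normally participates in a transaction:

// Illustrative only: the actual queue path is not shown in the question
System.Messaging.MessageQueue mq =
    new System.Messaging.MessageQueue(@"FormatName:DIRECT=OS:myserver\private$\testqueue");

// For a transactional queue, the send is typically wrapped in a transaction
mq.Send(mm, System.Messaging.MessageQueueTransactionType.Single);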
So this is setting the timeout to reach the queue to 30 seconds.
The first test worked fine. The message went through and was received by the server app.
For my second test, I disconnected my Ethernet cable, then did another send from the client machine.
I can see in the outgoing message queue on the client machine that the message is waiting to be sent ("Waiting for connection"). My problem is that once it goes beyond the 30 sec (or the 50 sec, for that matter), the message never goes into the dead-letter queue on the client machine.
Why is that? I was expecting it to go there once it timed out.
Tested on Windows 7 (client) / Windows Server 2008 R2 (server).
Your question is a few days old already. Did you find out anything?
My interpretation of your scenario is that the unplugged cable is the key.
In the scenario John describes, there is an existing connection and the receiver could not process the message correctly within the set time limit.
In your scenario, however, the receiving endpoint never gets the chance to process the message, so the timeout can never occur. As you said, the state of the message is "Waiting for connection". A message that was never sent cannot logically have timed out on the way to its destination.
Just ask yourself how many resources Windows/MSMQ would unnecessarily sacrifice - and how often - to check message queues for how many conditions, if the queue is essentially inactive. There might be a lot of queues with a lot of messages on a system.
The behavior I would expect is that if you plug the network cable back in and the connection is re-established, then, and only when it is needed, your poison message will be checked for the timeout and eventually moved to the dead-letter queue.
You might want to check this scenario out - or did you already check it in the meantime?