I have been working on a C# WebSocket server for some time now, and there is one issue that I have worked around but never resolved or found a proper explanation for.
My test environments have used Google Chrome (a range of versions over the past year or so) on Windows XP and Windows 7, and the server has been tested on both operating systems as well.
The problem I notice only occurs when the browser is running under Windows XP. Upon completion of the WebSocket handshake, the browser/client cannot send data to the server unless a message is sent from the server to the client first.
What I have done is simply tack a Ping frame onto the end of the server handshake, and then everything functions as expected. I have tested with other frames as well; as long as the server sends a message first, the client proceeds as normal.
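For reference, a minimal unmasked Ping frame per RFC 6455 is just two bytes (the bytes here are illustrative, not copied from my server):
byte[] pingFrame = { 0x89, 0x00 }; // FIN=1, opcode 0x9 (Ping), zero-length payload, unmasked (server-to-client)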
The message from the server does not need to be immediate either. If the client attempts to send a message, it can wait indefinitely; as soon as the server sends a message to the client, the client proceeds.
Now, I figured I was doing something incorrectly in my WebSocket server, but if that were the case, why does everything work as expected when the browser is running under Windows 7? There I do not need to send a message to the client before the client will release a message to the server.
As a very basic example, here is server code that will never complete if Chrome connects from an XP machine:
byte[] textPound = { 0x81, 0x01, 0x23 }; // unmasked, single-frame text message containing "#"
Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
IPEndPoint ip = new IPEndPoint(IPAddress.Any, 56100);
server.Bind(ip);
server.Listen(100);
byte[] buffer = new byte[1000];
Socket client = server.Accept();
int rec = client.Receive(buffer);
Handshake(buffer, ref rec); // custom function: writes the handshake response back into the buffer and updates rec
client.Send(buffer, rec, SocketFlags.None);
//client.Send(textPound);
rec = client.Receive(buffer);
client.Close();
So long as the .Send() is commented out, the .Receive() will never complete if Chrome is run from an XP machine, no matter how you send a message from the browser. If you start a thread before the .Receive() that issues a .Send() after some amount of time, the receive completes as soon as that send happens.
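The Handshake helper above is not shown; for anyone curious, a standard RFC 6455 handshake response is built roughly like this (a sketch, not my exact implementation or signature):
using System;
using System.Security.Cryptography;
using System.Text;
using System.Text.RegularExpressions;

static string BuildHandshakeResponse(string request)
{
    // Pull the client's key out of the Sec-WebSocket-Key header.
    string key = Regex.Match(request, "Sec-WebSocket-Key: (.*)").Groups[1].Value.Trim();

    // Append the fixed GUID from RFC 6455, SHA-1 hash it, and base64-encode the result.
    string accept = Convert.ToBase64String(
        SHA1.Create().ComputeHash(
            Encoding.UTF8.GetBytes(key + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11")));

    return "HTTP/1.1 101 Switching Protocols\r\n" +
           "Upgrade: websocket\r\n" +
           "Connection: Upgrade\r\n" +
           "Sec-WebSocket-Accept: " + accept + "\r\n\r\n";
}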
Has anyone experienced this, or know why this may be?
EDIT -- For those who do not know what the WebSocket protocol is:
Protocol Documentation
API Documentation
If you are writing a new WebSocket implementation, you might have a look at AutobahnTestsuite.
AutobahnTestsuite provides a fully automated test suite to verify client and server implementations of The WebSocket Protocol for specification conformance and implementation robustness.
It's used by dozens of projects and is something of a de facto standard for testing.
Disclosure: I am the original author of Autobahn and work for Tavendo.
I am trying to do discovery via UDP: my code sends a multicast message and other devices on the network reply back to me. I am using UdpClient on .NET 4.5.2, binding it to a random port and sending my message to a multicast address that the devices are listening on (e.g. 233.255.255.237:8003). The devices reply to me from the multicast port 8003, but some reply from their own IP (e.g. 10.0.23.66) and some from the local broadcast IP (e.g. 10.0.23.255).
This runs just fine on Windows, where I can see replies from all devices, but when running on Mono (version 5.2.0.224) I can only see the messages sent from the devices that don't use the local broadcast IP.
When I do a tcpdump and open it in Wireshark, I can clearly see the UDP messages reaching me. Even those with Source = 10.0.23.255 have the correct destination IP and port (my random port), but the code never picks them up...
I have searched SO and the web and tried literally everything over the past 2 days: different UdpClient constructors; two UdpClients on the same port, one for sending and one for receiving (otherwise no replies are picked up, since the receiving code has to listen on the same port the sending code is using); using a plain socket for receiving; setting EnableBroadcast = true; binding to a specific port (the multicast port and others); using JoinMulticastGroup (which shouldn't matter, since I am only sending messages to the multicast address, not receiving on it, and the replies come directly to my local endpoint); and then some. Nothing works and I'm at the end of my wits...
Maybe there is a Mono bug / different behavior, or some arcane setting that can be used; I would appreciate any help.
Here is roughly what my code looks like (omitting parts like cleanup when disposing, etc.):
client = new UdpClient { MulticastLoopback = false, EnableBroadcast = true };
client.Client.Bind(new IPEndPoint(IPAddress.Any, 0)); // bind to a random local port
client.BeginReceive(EndReceive, null);
private void EndReceive(IAsyncResult ar)
{
try
{
var source = new IPEndPoint(IPAddress.Any, 0);
var data = client.EndReceive(ar, ref source);
Console.WriteLine("{0} received msg:\r\n{1}", source.Address, Encoding.UTF8.GetString(data));
}
catch (Exception e)
{
Console.WriteLine(e);
}
client.BeginReceive(EndReceive, null); // re-arm the receive for the next datagram
}
For sending the multicast message I use client.Send() in a try/catch as well. The messages are definitely sent and the devices are responding, as seen in Wireshark. Under Windows I get all the responses written out to the console; under Linux/Mono only the ones that respond from Source=10.0.23.XXX show up, and the ones from 10.0.23.255 seem to be filtered out (I compared the messages carefully in Wireshark; they are the same whether my code runs on Windows or Linux/Mono).
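For completeness, the send side looks roughly like this (the payload contents are illustrative; the multicast endpoint is the one mentioned above):
byte[] payload = Encoding.UTF8.GetBytes("DISCOVER"); // illustrative discovery message
client.Send(payload, payload.Length, new IPEndPoint(IPAddress.Parse("233.255.255.237"), 8003)); // wrapped in try/catch in the real code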
I have a TCP socket-based client/server system.
Everything works fine, but when the network is disconnected from the client end and then reconnected, the server automatically gets a SocketError.ConnectionReset from the client, and on that the socket is closed on the server side. This is also fine.
But when I look at the client side, it shows the socket is still connected to the server. (This does not happen every time; sometimes it shows disconnected and sometimes connected.)
Does it make sense that the server gets a SocketError.ConnectionReset from the client end but the client is still connected?
So I want to know: what are the possible reasons for SocketError.ConnectionReset, and how do I handle the kind of problem I have described?
Again, everything works fine in the normal case (e.g. if I exit the client, the socket is disconnected; the same goes for the server).
Thanks in advance.
EDIT:
Here is the code on the client side. It's a timer that ticks every 3 seconds throughout the program's lifetime and checks whether the socket is connected; if it is disconnected, it tries to reconnect again through a new socket instance.
private void timerSocket_Tick(object sender, EventArgs e)
{
try
{
if (sck == null || !sck.Connected)
{
ConnectToServer();
}
}
catch (Exception ex)
{
RPLog.WriteDebugLog("Exception occurred at: " + System.Reflection.MethodBase.GetCurrentMethod().ToString() + " Message: " + ex.Message);
}
}
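ConnectToServer() is not shown above; it does roughly the following (a sketch with placeholder serverIp/serverPort fields, not my exact code):
private void ConnectToServer()
{
    if (sck != null)
        sck.Close(); // drop the old instance before reconnecting
    sck = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    sck.Connect(serverIp, serverPort); // serverIp/serverPort stand in for the real endpoint
}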
In a normal situation (without a network disconnect/reconnect), if the TCP server gets a SocketError.ConnectionReset from any client, on the client side I see that the client's socket is disconnected and it tries to reconnect again through the code shown. But when the situation explained earlier happens, the server gets a SocketError.ConnectionReset while the client shows it is still connected, even though the TCP server shows the reset was sent from that exact client.
There are several causes, but the most common is that you have written to a connection that has already been closed by the other end: in other words, an application protocol error. When it happens you have no choice but to close the socket; it is dead. However, you can fix the underlying cause.
When discussing a TCP/IP issue like this, you must mention the network details between the client and the server.
When one side says the connection was reset, it simply means that an RST packet appeared on the wire. But to know who sent the RST packet and why, you must use network packet captures (with Wireshark or any other similar tool):
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
You won't easily find the cause at the .NET Framework level.
The problem with using Socket.Connected as you are is that it only gives you the connected state as of the last Send or Receive operation, i.e. it will not tell you that the socket has disconnected unless you first try to send data to it or receive data from it.
From MSDN description of the Socket.Connected property:
Gets a value that indicates whether a Socket is connected to a remote host as of the last Send or Receive operation.
So in your example, if the socket was functioning correctly the last time you sent or received any data, the timerSocket_Tick() method would never call ConnectToServer(), even if the socket is now disconnected.
how to handle such type of problem i have mentioned?
Close the socket and initiate a new connection.
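If you want a more active liveness check than Socket.Connected before deciding whether to reconnect, one common approach (a sketch, not part of the original answer) is to combine Poll with Available:
// A socket that reports readable but has zero bytes available has usually been closed or reset by the peer.
static bool LooksDisconnected(Socket s)
{
    try
    {
        return s.Poll(1000 /* microseconds */, SelectMode.SelectRead) && s.Available == 0;
    }
    catch (SocketException)
    {
        return true;
    }
}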
I have a server which uses TcpListener.AcceptTcpClient() to accept clients.
I also have a client which also uses C# TcpClient like this:
var client = new TcpClient();
client.Connect(ip, port);
This works as it should, but when I try to connect to my server with tcp_socket_send from kontiki (C code), it shows unwanted behavior. It looks like when it sends a message, it first sends a SYN; the server returns SYN-ACK, so that seems fine. But when the client then sends the actual message, the server has accepted it as a new TcpClient.
I hope I made my question clear enough, otherwise ask away.
Thanks
EDIT
I've isolated the problem, but I do not have the answer yet.
The problem is that the way kontiki sends data is SYN, then PSH, while my C# TcpClient sends data directly with PSH and leaves out the SYN. The problem is (I think) that the SYN somehow closes my previous connection.
I know this question has been asked many times. I've read ALL the answers and tried EVERY piece of code I could find. After a few days I'm so desperate that I have to ask you for help.
I have a device and a PC in my home network. The device sends UDP broadcast messages. On my PC I can see those messages in wireshark:
Source          Destination  Protocol  Length  Info
192.168.1.102   0.0.0.0      UDP       60      Source port: 9050  Destination port: 0
That means the packets are arriving at my PC. My next step was to create a C# application that receives those packets. As mentioned above, I have tried every possible solution, but it just won't receive anything.
So I guess there must be something very basic I'm doing wrong.
Can anyone help me out? Thanks!
Just experienced the same issue, and wanted to share what fixed it for me.
Briefly: it seems that Windows Firewall was somehow the cause of this weird behavior, and just disabling the service does not help. You have to explicitly allow incoming UDP packets for the specific program (executable) in the Windows Firewall inbound rules list.
For full case description, read on.
My network setup: the IP of my (receiving) machine was 192.168.1.2, the IP of the sending machine was 192.168.1.50, and the subnet mask on both machines was 255.255.255.0.
My machine is running Windows 7 x64.
This is the code (roughly) that I used:
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
IPEndPoint iep = new IPEndPoint(IPAddress.Any, 0);
sock.Bind(iep);
sock.EnableBroadcast = true;
EndPoint ep = (EndPoint)iep;
byte[] buffer = new byte[1000];
sock.ReceiveFrom(buffer, ref ep); // blocks until a datagram arrives; ep is filled with the sender's endpoint
Initially this did not work unless I sent a broadcast packet from that socket before calling ReceiveFrom on it, i.e. adding this line before the ReceiveFrom call:
sock.SendTo(someData, new IPEndPoint(IPAddress.Broadcast, somePort))
When I didn't send the broadcast packet first from receiving socket, incoming broadcast packets were not received by it, even though they appeared in Wireshark (destination of packets was 255.255.255.255).
I thought it looked like the firewall was messing with incoming packets (unless some kind of UDP hole is opened first by an outgoing packet, even though I hadn't heard before that UDP hole punching applies to broadcast packets somehow), so I went to Services and disabled the Windows Firewall service altogether. This changed nothing.
However, after trying everything else, I re-enabled the firewall service and ran the program again. This time a firewall prompt appeared asking whether I wanted to allow the MyProgram.vshost.exe process (I was debugging in Visual Studio) through the firewall. I accepted it, and voila, everything worked! Incoming packets were now being received!
You are OK; they have something odd wired into the code which causes the problem. (I haven't read the article, just copy-pasted.)
It always works from the local machine, but from a remote machine it will fail for some reason.
To fix this:
In Broadcst.cs they broadcast twice: once for localhost and then for the target IP address (iep2). Simply remove the
sock.SendTo(data, iep1);
and it should work.
No idea why.
Context: I'm porting a Linux Perl app to C#. The server listens on a UDP port and maintains multiple concurrent dialogs with remote clients via a single UDP socket. During testing, I send high volumes of packets to the UDP server, randomly restarting the clients to observe the server registering the new connections. The problem is this: when I kill a UDP client, there may still be data on the server destined for that client. When the server tries to send this data, it gets an ICMP "no service available" message back, and consequently an exception occurs on the socket.
I cannot reuse this socket: when I try to associate a C# async handler with it, it complains about the exception, so I have to close and reopen the UDP socket on the server port. Is this the only way around the problem? Surely there's some way of "fixing" the UDP socket, since technically UDP sockets shouldn't be aware of the status of a remote socket?
Any help or pointers would be much appreciated. Thanks.
I think you are right in saying 'the server should not be aware'. If you send a UDP packet to some IP/port which may or may not be open, there is no way for the server to know whether it reached its destination.
The only way for the server to know is to have the client send an ACK back. (Also, both the client and the server must have resend mechanisms in place for cases of lost packets.)
So clearly something else is going on in your code (or with the .NET UDP implementation).
EDIT:
After Nikolai's remark I checked the docs, and indeed .NET distinguishes between being 'connected' and 'connectionless' when using UDP.
If you use code like this:
UdpClient udpClient = new UdpClient(11000); // source port
try
{
    udpClient.Connect("www.contoso.com", 11000); // 'connect' to the destination machine and port
    // Sends a message to the host to which you have connected.
    Byte[] sendBytes = Encoding.ASCII.GetBytes("Is anybody there?");
    udpClient.Send(sendBytes, sendBytes.Length);
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}
then apparently you are 'connected'.
However, if you use code like this:
UdpClient udpClientB = new UdpClient();
udpClientB.Send(sendBytes, sendBytes.Length, "AlternateHostMachineName", 11000);
then you can send to whomever you choose without 'connecting'.
I'm not sure what your code looks like, but it might be worthwhile to check whether you are using the set of calls that doesn't assume a 'connection'.
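To make that concrete, a connectionless receive/reply pattern on the server side might look roughly like this (names and ports are illustrative, not taken from your code):
UdpClient udpServer = new UdpClient(11000); // bound to the server port, never calls Connect()
IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
try
{
    byte[] request = udpServer.Receive(ref remote); // remote is filled with the sender's endpoint
    byte[] reply = Encoding.ASCII.GetBytes("ack");
    udpServer.Send(reply, reply.Length, remote); // reply to that sender without being 'connected'
}
catch (SocketException)
{
    // An ICMP "port unreachable" from a client that has gone away surfaces here;
    // catching it lets you decide whether to keep using the socket or recreate it.
}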