I call Socket.Receive immediately after a handshake, but sometimes the program hangs waiting for the packet.
int receiveLength = tcpSock.Receive(handShake, SocketFlags.None);
int bitfieldLength = tcpSock.Receive(bitfieldReceive, SocketFlags.None);
The first is received perfectly; the second does not seem to be received. I suspect a race condition, as the first is sent at "83.1969" and the second at "83.1970".
When the timeout is reached, bitfieldReceive is just 65535 bytes of zeros.
I can see the packet in Wireshark, and it is only one packet. How can I get the program to catch the next packet to be sent?
Is there a way to do this with Socket.Receive?
Related
I have a 3rd party server to which I send data and then wait for a response.
It first responds with an ACK, then with the full response.
I am doing something like this. Is there a better way?
// Connect
eftSocket.Connect(remoteEp);
// Encode the data string into a byte array.
byte[] msg = Encoding.ASCII.GetBytes(strRequest);
// Send the data through the socket.
int bytesSent = eftSocket.Send(msg);
// Receive the first time.
int bytesRec = eftSocket.Receive(bytes);
// Did we get an ACK (0x06)?
if (bytes[0] != 6)
{
    throw new Exception("Could not connect to EFT terminal");
}
// Receive a second time.
bytesRec = eftSocket.Receive(bytes);
That code looks broken. First, you have to accept that TCP is a stream and that there are no packets. Even if the remote sends something of a given length, you can never expect to receive it all in one go. If you call eftSocket.Receive(bytes), you may receive any number of the bytes the remote sent, not necessarily the ACK byte in the first call and the remaining data in the second. The only thing you know is that you won't receive more than bytes.Length. Conversely, if you hand it a sufficiently large buffer, you might receive everything (the ACK plus the remaining data) in one go. Therefore you should always check the return value first, and if you expect more bytes, repeat the Receive calls with the appropriate offsets.
You should also check whether your Receive succeeded at all. It might return 0, which means the remote closed the socket; in that case bytes[0] will contain whatever was stored in it before the Receive started.
Finally, you should properly close your socket. It's not obvious from the code example whether that happens, but if you leave the method via the exception, it might be skipped.
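A minimal sketch of such a receive loop, assuming a connected Socket (the ReceiveExact helper name is mine, not part of the framework):
using System;
using System.Net.Sockets;

static class SocketExtensions
{
    // Loops until exactly 'count' bytes have been read into 'buffer'.
    // Returns false if the remote closed the connection first.
    public static bool ReceiveExact(this Socket socket, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0)
                return false; // remote closed the socket
            offset += read;
        }
        return true;
    }
}
With that in place, the ACK check is explicit about how many bytes it waits for:
byte[] ack = new byte[1];
if (!eftSocket.ReceiveExact(ack, 1) || ack[0] != 6)
    throw new Exception("Could not connect to EFT terminal");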
I have the following code
byte[] bytes = Encoding.Default.GetBytes(data);
IAsyncResult res = socket.BeginSend(bytes, 0, bytes.Length, 0, new AsyncCallback(SendCallback), socket);
int waitingCounter = 0;
while (!res.IsCompleted && waitingCounter < 10)
{
    if (Tracing.TraceInfo) Tracing.WriteLine("Waiting for data to be transmitted. Give a timeout of 1 second", _traceName);
    Thread.Sleep(1 * 1000);
    waitingCounter++;
}
This code has been installed on many machines, but in some cases the condition res.IsCompleted takes a long time to become true.
Is the reason related to the network (maybe a firewall or a proxy), to the client (too slow), or to the server?
I have not been able to reproduce this scenario.
Edit: I tried to reproduce the error by using an asynchronous client and a synchronous server with the following modifications:
Client =>
while (true)
{
    Send(client, "This is a test<EOF>");
    sendDone.WaitOne();
}
Server =>
while (true)
{
    Console.WriteLine("Waiting for a connection...");
    // Program is suspended while waiting for an incoming connection.
    Socket handler = listener.Accept();
    data = null;
    // Show the data on the console (note: nothing is actually read from the socket).
    Console.WriteLine("Text received : {0}", data);
    handler.Shutdown(SocketShutdown.Both);
    handler.Close();
}
On the second Send() I get a socket exception, which is normal because the server is not reading the data.
But what I actually want to reproduce is:
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
Waiting for data to be transmitted. Give a timeout of 1 second
As it happens in one of our installations.
Edit:
An answer disappeared from this question!
Even though BeginSend won't stall your application, it is still subject to the same constraints as Send. That answer explained why things can go wrong (I've paraphrased):
For your application, the remote TCP receive buffer, the network, and the local TCP send buffer together form one big buffer. If the remote application is delayed in reading new bytes from its TCP buffer, eventually your local TCP buffer will end up (nearly) full. The send won't complete until the TCP buffer has enough space to store the payload you're trying to send.
Also don't forget that when the first check of res.IsCompleted fails, you always wait a full second before the next check.
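Rather than polling with Thread.Sleep, you can block on the IAsyncResult's wait handle with a single timeout. A sketch under the same assumptions as the code above (socket, bytes, SendCallback, Tracing and _traceName all come from the question):
IAsyncResult res = socket.BeginSend(bytes, 0, bytes.Length, 0, new AsyncCallback(SendCallback), socket);
// Wait up to 10 seconds in one call instead of ten 1-second sleeps.
if (!res.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    // The send did not complete in time; most likely the local TCP send
    // buffer is full because the remote side is not reading fast enough.
    if (Tracing.TraceInfo) Tracing.WriteLine("Send did not complete within 10 seconds", _traceName);
}
This wakes up as soon as the send completes instead of on the next one-second tick.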
I'm currently writing some async UDP network code in C#. I'm sending small packets (fewer than 50 bytes of data in each so far) back and forth, and my first thought was to split each logical packet in two: send it as one datagram, but receive it as two. A header with extra information is always added to the start of the real packet, containing an ID and the data length.
So I thought I could split it on the receiving end (async receive) and first receive the header, then the actual data. That way I don't have to worry about the order between packets and "packet headers".
So I wrote code that basically worked like this:
Client sends 30 bytes of data to the server, where the first 3 bytes are the packet header.
The server would have called (PACKET_HEADER_SIZE = 3):
_socket.BeginReceiveFrom(state.Buffer, 0, PACKET_HEADER_SIZE, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
Then it receives the data:
private void ReceivePacketInfo(IAsyncResult ar)
{
    StateObj state = (StateObj) ar.AsyncState;
    int bytesRead = _socket.EndReceiveFrom(ar, ref endPoint);
    state.BytesReceived += bytesRead;
    if (state.BytesReceived < state.Buffer.Length)
    {
        _socket.BeginReceiveFrom(state.Buffer, state.BytesReceived, state.Buffer.Length - state.BytesReceived, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
    }
    else
    {
        // My thought was to receive the rest of the packet here.
    }
}
but when calling _socket.EndReceiveFrom(ar, ref endPoint) I get a SocketException:
"A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself"
So now I have a couple of questions.
Do I have to make sure I receive the whole packet (in this case both the header and the data) before I call EndReceiveFrom?
Can I assume that I will either get the whole packet in one go or get nothing, so that the if-statement in ReceivePacketInfo is redundant (as long as its size is less than the maximum packet size, of course)?
If I cannot, is there a good way to solve my problem? I could tag all my packet headers and all my packets to be able to map them together, I suppose. I could also try a standardized "packet ending" and just read until I hit the end of the packet.
Thanks in advance for any help!
Can I assume that I will either get the whole packet in one go or get nothing
That's almost the only thing UDP does guarantee: the content of a packet. If a packet is received at all, it is guaranteed to have the same size and the same content as when it was sent. So you have to make sure that your buffer is large enough for the whole datagram.
Neither the order of packets nor delivery itself is guaranteed; it is up to you and your application to handle dropped and out-of-order packets.
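In practice that means receiving the whole datagram into one sufficiently large buffer and splitting header from payload afterwards. A sketch, assuming the 3-byte header from the question holds a 1-byte ID plus a 2-byte length (that layout is my assumption):
using System;
using System.Net;
using System.Net.Sockets;

// Receives one complete datagram, then splits header and payload in memory.
static void ReceiveWholePacket(Socket socket)
{
    const int PACKET_HEADER_SIZE = 3;
    byte[] buffer = new byte[65507]; // maximum possible UDP payload
    EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    int received = socket.ReceiveFrom(buffer, ref remote);
    if (received < PACKET_HEADER_SIZE)
        return; // malformed datagram, ignore it
    byte id = buffer[0];
    ushort dataLength = BitConverter.ToUInt16(buffer, 1);
    // The payload occupies buffer[PACKET_HEADER_SIZE .. received - 1].
}
Because the datagram arrives in one piece, there is no second BeginReceiveFrom to interleave with other packets.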
The problem just started on the client side. Here is my code where I receive the TCP/IP message. On my local PC this listener receives many kilobytes with no problem. I tried to increase the buffer size, but on the client site they still report issues related to it: it still gets only the first 1K (1024 bytes).
public void Start()
{
//Define TCP listener
tcpListener = new TcpListener(IPAddress.Any, IDLocal.LocalSession.PortNumber);
try
{
// Start the TCP listener
tcpListener.Start();
while (true)
{
var clientSocket = tcpListener.AcceptSocket();
if (clientSocket.Connected)
{
var netStream = new NetworkStream(clientSocket);
// Check to see if this NetworkStream is readable.
if (netStream.CanRead)
{
var myReadBuffer = new byte[1024];
var myCompleteMessage = new StringBuilder();
// Incoming message may be larger than the buffer size.
do
{
var numberOfBytesRead = netStream.Read(myReadBuffer, 0, myReadBuffer.Length);
myCompleteMessage.AppendFormat("{0}", Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));
} while (netStream.DataAvailable);
// All we do is respond with an "OK" message
var sendBytes = Encoding.ASCII.GetBytes("OK");
netStream.Write(sendBytes, 0, sendBytes.Length);
clientSocket.Close();
netStream.Dispose();
// Raise event with message we received
DataReceived(myCompleteMessage.ToString());
}
}
}
}
catch (Exception e)
{
// If we catch a network-related exception, send the event up
IDListenerException(e.Message);
}
}
I don't see any problem with the code you posted to extract the message into a string, so I'm guessing that something else is afoot.
TCP isn't required to send all the data you queue to it in one go. It can send as few bytes as it wants at a time, and it can split your data into pieces at will. In particular, it is guaranteed to split your data if it doesn't fit into one packet; typically the MTU (maximum packet size) is about 1500 bytes on Ethernet.
Therefore there's a real possibility that the data was sent, but as more than one packet. The delay between the arrival of the first and second packet could mean that when the first one arrives, your code happily reads everything it contains and then stops (no more data available) before the second packet has had time to arrive.
You can test this hypothesis either by observing network traffic or by letting your app keep pulling messages from the wire and seeing if it finally does get all the data you sent (albeit in pieces).
Ultimately the underlying issue is TCP's fundamental stream-based (not message-based) nature; even if you get this code to work correctly, there is no guarantee it will keep working, because it makes assumptions about things TCP does not guarantee.
To be safe, you need to incorporate a message-based structure into your protocol, e.g. prepend each piece of data with exactly 4 bytes that hold its length; then you can simply keep reading until you have received that many bytes, as in the sketch below.
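A sketch of that length-prefixed reading over a NetworkStream like the one in your code (the ReadExact and ReadMessage helper names are mine):
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Blocks until exactly 'count' bytes have arrived, however TCP split them.
static byte[] ReadExact(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-message");
        offset += read;
    }
    return buffer;
}

// One complete message: a 4-byte little-endian length prefix, then the payload.
static string ReadMessage(NetworkStream stream)
{
    int length = BitConverter.ToInt32(ReadExact(stream, 4), 0);
    return Encoding.ASCII.GetString(ReadExact(stream, length));
}
The sender must of course write the same 4-byte prefix before each message; with that in place, DataAvailable no longer matters.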
Now let's take a scenario where we use a blocking socket Receive, the message is 5000 bytes, and ReceiveTimeout is set to one second:
s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 1000);
int bytesReceived = 0;
byte[] receiveBuffer = new byte[8192];
try
{
    bytesReceived = s.Receive(receiveBuffer);
}
catch (SocketException e)
{
    if (e.ErrorCode == 10060) // WSAETIMEDOUT
    {
        Array.Clear(receiveBuffer, 0, receiveBuffer.Length);
    }
}
Now our scenario dictates that 4000 bytes have already gone through, the socket is still blocking, and some error occurred on the receiving end, so on the receiving end we dispose of the 4000 bytes by catching the socket exception.
Is there any guarantee that the socket on the sending end won't send the 1000 bytes that remain? Does the sending socket know to truncate them? If it wasn't disconnected, won't they be the first bytes we receive when we attempt to receive again?
What I'm asking is:
a) Does TCP have some mechanism that tells the socket to dispose of the rest of the message?
b) Is there a socket flag we could send or receive with that tells the buffers to dispose of the rest of the message?
First off, TCP/IP operates on streams, not packets. So you need some kind of message framing in your protocol, regardless of buffer sizes, blocking calls, or MTUs.
Secondly, each TCP connection is independent. The normal design is to close a socket when there's a communications error. A new socket connection can then be established, which is completely independent from the old one.
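A sketch of that pattern, assuming remoteEp is the server's endpoint (the ConnectFresh helper name is mine):
using System;
using System.Net;
using System.Net.Sockets;

static Socket ConnectFresh(EndPoint remoteEp, Socket oldSocket)
{
    // Discard the broken connection; any state it had dies with it.
    if (oldSocket != null)
        oldSocket.Close();

    // The new connection is completely independent: no leftover bytes
    // from the old, partially received message can appear on this stream.
    Socket fresh = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    fresh.Connect(remoteEp);
    return fresh;
}
This sidesteps the question entirely: rather than trying to flush the remaining 1000 bytes, you drop the connection and both ends discard their buffers.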