I am having some trouble figuring out why I am only receiving one reply from a server application running on my computer (localhost). I do not have the source for this server application, but it is a Java application. Messages that are sent are XML structures and have to end with an EoT tag.
The communication:
Client connects to server.
Client sends message to server.
Server sends "message received" to client.
Client sends message to server.
Server sends an End of Transmission character.
Client sends message to server.
Server sends an End of Transmission character.
This is how my client connects, sends and receives:
public bool ConnectSocket(string server, int port)
{
    System.Net.IPHostEntry hostEntry = null;
    try
    {
        // Get host related information.
        hostEntry = System.Net.Dns.GetHostEntry(server);
    }
    catch (System.Exception ex)
    {
        return false;
    }

    // Loop through the AddressList to obtain the supported AddressFamily. This is to avoid
    // an exception that occurs when the host IP Address is not compatible with the address family
    // (typical in the IPv6 case).
    foreach (System.Net.IPAddress address in hostEntry.AddressList)
    {
        System.Net.IPEndPoint ipe = new System.Net.IPEndPoint(address, port);
        System.Net.Sockets.Socket tempSocket = new System.Net.Sockets.Socket(ipe.AddressFamily,
            System.Net.Sockets.SocketType.Stream, System.Net.Sockets.ProtocolType.Tcp);
        tempSocket.Connect(ipe);
        if (tempSocket.Connected)
        {
            m_pSocket = tempSocket;
            m_pSocket.NoDelay = true;
            return true;
        }
        else
            continue;
    }
    return false;
}
public void Send(string message)
{
    message += (char)4; //We add end of transmission character
    m_pSocket.Send(m_Encoding.GetBytes(message.ToCharArray()));
}

private void Recive()
{
    byte[] tByte = new byte[1024];
    m_pSocket.Receive(tByte);
    string recivemessage = (m_Encoding.GetString(tByte));
}
Your Receive code looks very wrong; you should never assume that data arrives in the same chunks that the server sends messages in - TCP is just a stream. So: you must capture the return value of Receive to see how many bytes you actually received. It could be part of one message, an entire message, multiple entire messages, or the last half of one message and the first half of the next. Normally you need some kind of "framing" decision, which could mean "messages split by LF characters", or could mean "the length of each message is prefixed as a 4-byte network-byte-order integer". This usually means you need to buffer until you have a full frame, and deal with spare data at the end of the buffer that belongs to the next frame. Key bits to add, though:
int bytes = m_pSocket.Receive(tByte);
// now process "bytes" bytes **only** from tByte, noting that this
// could be part of a message, an entire message, several messages, or
// the end of message "A", the entire of message "B", and the first byte of
// message "C" (which might not be an entire character)
In particular, with text formats, you cannot start decoding until you are sure you have an entire message buffered, because a multi-byte character might be split between two messages.
There may well also be problems in your receive loop, but you don't show that (nothing calls Receive), so we can't comment.
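Since the protocol in the question terminates every server reply with the EoT character (0x04), one workable framing rule here is "buffer until you see that byte". Below is a minimal sketch along those lines, reusing the question's m_pSocket and m_Encoding fields; it is an illustration of the idea, not the only way to do it.

// A minimal sketch of a framed receive, assuming each server reply ends with
// the EoT byte (0x04) as described above. m_pSocket and m_Encoding are the
// fields from the client class in the question. Any data arriving after the
// EoT belongs to the next message and would need to be kept in a persistent
// buffer in real code.
private string ReceiveMessage()
{
    var frame = new System.IO.MemoryStream();
    byte[] tByte = new byte[1024];

    while (true)
    {
        int bytes = m_pSocket.Receive(tByte);
        if (bytes == 0)
            return null; // connection closed by the server

        // Look for the EoT terminator inside the chunk we just received.
        int eot = System.Array.IndexOf(tByte, (byte)4, 0, bytes);
        if (eot < 0)
        {
            frame.Write(tByte, 0, bytes); // partial message, keep buffering
            continue;
        }

        // Decode only the bytes that belong to this message.
        frame.Write(tByte, 0, eot);
        return m_Encoding.GetString(frame.ToArray());
    }
}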
Related
On my Xamarin.Android app I'm sending a UDP broadcast message to find my servers on the local LAN.
My servers then return the MachineName they run on which I display in a ListView on my app in the form of
<MachineName> - <Ip address>
This all works well the first time; however, from the second time on, all it reads is empty bytes.
The number of bytes it reads is correct, but they are all zero.
Here is the code:
private static void ListenForBroadcastResponses()
{
    udp.BeginReceive(OnBroadcastResponse, new object());
}

private static void OnBroadcastResponse(IAsyncResult ar)
{
    // Receive the message
    IPEndPoint ip = new IPEndPoint(IPAddress.Any, port);
    byte[] bytes = udp.EndReceive(ar, ref ip);

    // Decode it
    string message = Encoding.ASCII.GetString(bytes);

    // If message is the awaited message from client or from the awaited port
    if (ip.Port == port) //|| message == BRDCAST_ANSWER)
    {
        // Raise server found event
        var handler = ServerFound;
        if (handler != null)
            handler(null, new ServerEventArgs
            {
                Server = new Server(message, ip.Address)
            });
    }

    // Start listening again
    ListenForBroadcastResponses();
}
Debug screenshots (not included): the first time, full bytes are received; from the second time on, the bytes are all empty.
What is wrong here?
Eventually I figured it out. It seems to be a bug caused by multiple threads trying to listen for replies simultaneously (see this post), so I'll post my solution:
I added a ReceiveTimeout of X seconds to the UdpClient.
Upon timeout, I call EndReceive and Close.
This triggers the callback passed to BeginReceive, so I added a check in the callback method to see if the client is still open:
if (udp.Client != null) ...
It fixed things for me, so hopefully it will help others.
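A rough sketch of that guard in the callback follows; exactly when and how the UdpClient gets closed after the timeout is omitted here, since the poster's full code isn't shown.

private static void OnBroadcastResponse(IAsyncResult ar)
{
    // If the UdpClient has been closed (for example after the receive timeout),
    // its underlying socket is gone and EndReceive would throw, so stop here.
    if (udp.Client == null)
        return;

    IPEndPoint ip = new IPEndPoint(IPAddress.Any, port);
    byte[] bytes = udp.EndReceive(ar, ref ip);
    string message = Encoding.ASCII.GetString(bytes);
    // ... raise ServerFound as before ...

    // Only re-arm the listener while the client is still open.
    ListenForBroadcastResponses();
}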
I had never used UDP before, so I gave it a go. To see what would happen, I had the 'server' send data every half a second, and the client receive data every 3 seconds. So even though the server is sending data much faster than the client can receive, the client still receives it all neatly one by one.
Can anyone explain why/how this happens? Where is the data buffered exactly?
Send
class CSimpleSend
{
    CSomeObjServer obj = new CSomeObjServer();

    public CSimpleSend()
    {
        obj.changedVar = varUpdated;
        obj.threadedChangeSomeVar();
    }

    private void varUpdated(int var)
    {
        string send = var.ToString();
        byte[] packetData = System.Text.UTF8Encoding.UTF8.GetBytes(send);

        string ip = "127.0.0.1";
        int port = 11000;
        IPEndPoint ep = new IPEndPoint(IPAddress.Parse(ip), port);

        Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        client.SendTo(packetData, ep);

        Console.WriteLine("Sent Message: " + send);
        Thread.Sleep(100);
    }
}
All CSomeObjServer does is increment an integer by one every half second
Receive
class CSimpleReceive
{
    CSomeObjClient obj = new CSomeObjClient();
    public Action<string> showMessage;
    Int32 port = 11000;
    UdpClient udpClient;

    public CSimpleReceive()
    {
        udpClient = new UdpClient(port);
        showMessage = Console.WriteLine;
        Thread t = new Thread(() => ReceiveMessage());
        t.Start();
    }

    private void ReceiveMessage()
    {
        while (true)
        {
            //Thread.Sleep(1000);
            IPEndPoint remoteIPEndPoint = new IPEndPoint(IPAddress.Any, port);
            byte[] content = udpClient.Receive(ref remoteIPEndPoint);
            if (content.Length > 0)
            {
                string message = Encoding.UTF8.GetString(content);
                if (showMessage != null)
                    showMessage("Recv:" + message);

                int var_out = -1;
                bool succ = Int32.TryParse(message, out var_out);
                if (succ)
                {
                    obj.alterSomeVar(var_out);
                    Console.WriteLine("Altered var to :" + var_out);
                }
            }
            Thread.Sleep(3000);
        }
    }
}
CSomeObjClient stores the variable and has one function (alterSomeVar) to update it
Output:
Sent Message: 1
Recv:1
Altered var to :1
Sent Message: 2
Sent Message: 3
Sent Message: 4
Sent Message: 5
Recv:2
Altered var to :2
Sent Message: 6
Sent Message: 7
Sent Message: 8
Sent Message: 9
Sent Message: 10
Recv:3
Altered var to :3
The operating system kernel maintains separate send and receive buffers for each UDP and TCP socket. If you google SO_SNDBUF and SO_RCVBUF you'll find lots of information about them.
When you send data, it is copied from your application space into the send buffer. From there it is copied to the network interface card, and then onto the wire. The receive side is the reverse: NIC to receive buffer, where it waits until you read it. Additional copies and buffering can also occur, depending on the OS.
It is critical to note that the sizes of these buffers can vary radically. Some systems might default to as little as 4 kilobytes, while others give you 2 megabytes. You can find the current size using getsockopt() with SO_SNDBUF or SO_RCVBUF and likewise set it using setsockopt(). But many systems limit the size of the buffer, sometimes to arbitrarily small amounts. This is typically a kernel value like net.core.wmem_max or net.core.rmem_max, but the exact reference will vary by system.
Also note that setsockopt() can fail even if you request an amount less than the supposed limit. So to actually get a desired size, you need to repeatedly call setsockopt() using decreasing amounts until it finally succeeds.
The following page is a Tech Note from my company which touches on this topic a little bit and provides references for some common systems: http://www.dataexpedition.com/support/notes/tn0024.html
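In C#, these options are exposed as the Socket.SendBufferSize and Socket.ReceiveBufferSize properties. A quick sketch of checking what the OS actually grants might look like this (the 2 MB figure is just an example):

// Request a 2 MB receive buffer, then read the property back to see what the
// OS actually granted (kernels may clamp or adjust the value).
using System;
using System.Net.Sockets;

class BufferSizeCheck
{
    static void Main()
    {
        using (var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
        {
            int requested = 2 * 1024 * 1024;
            sock.ReceiveBufferSize = requested;   // setsockopt(SO_RCVBUF)
            int granted = sock.ReceiveBufferSize; // getsockopt(SO_RCVBUF)
            Console.WriteLine("Requested {0} bytes, granted {1} bytes", requested, granted);
        }
    }
}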
It looks to me like the UdpClient class provides a buffer for received data. Try using a socket directly. You might also want to set that socket's ReceiveBufferSize to zero, even though I believe it is only used for TCP connections.
I am developing a server (in C#) and a client (in Flash, ActionScript 3.0). The server sends data (around 90 bytes per message) to the clients continuously, and the clients behave according to the data they receive (the data is JSON formatted).
For a while everything works as expected, but after some time the clients start to receive messages with a lag: they wait for a while and then act on the last message received (some messages are lost). After more time passes, the clients wait and then process all the messages at the same time. I could not figure out what is causing this. My network connection is stable.
Here is part of my C# code for sending a message:
public void Send(byte[] buffer)
{
    if (ClientSocket != null && ClientSocket.Connected)
    {
        ClientSocket.BeginSend(buffer, 0, buffer.Length, 0, WriteCallback, ClientSocket);
    }
}

private void WriteCallback(IAsyncResult result)
{
    //
}
And here is part of my client code for receiving a message (ActionScript):
socket.addEventListener(ProgressEvent.SOCKET_DATA, onResponse);

function onResponse(e:ProgressEvent):void {
    trace(socket.bytesAvailable);
    if(socket.bytesAvailable > 0) {
        try
        {
            var serverResponse:String = socket.readUTFBytes(socket.bytesAvailable);
            ....
I hope I have explained my problem. How should I optimize my code? What could be causing the lag? Thanks.
You really need to give more detail as to how you're setting up the socket (is it TCP or UDP?)
Assuming it's a TCP socket, then it would appear that your client relies on each receive call returning the same number of bytes that were sent by the server's Send() call. This is however not the case, and could well be the cause of your issues if a message is only being partially received on the client, or multiple messages are received at once.
For example, the server may send a 90 byte message in a single call, but your client may receive it in one 90-byte receive, or two 45-byte chunks, or even 90 x 1-byte chunks, or anything in between. Multiple messages sent by the server may also be partially combined when received by the client. E.g. two 90-byte messages may be received in a single 180-byte chunk, or a 150-byte and a 30-byte chunk, etc. etc.
You need therefore to provide some kind of framing on your messages so that when the stream of data is received by the client, it can be reliably reconstructed into individual messages.
The most basic framing mechanism would be to prefix each message sent with a fixed-length field indicating the message size. You may be able to get away with a single byte if you can guarantee that your messages will never be > 255 bytes long, which will simplify the receiving code.
On the client side, you first need to receive the length prefix, and then read up to that many bytes off the socket to construct the message data. If you receive fewer than the required number of bytes, your receiving code must wait for more data (appending it to the partially-received message when it eventually arrives) until it has a complete message.
Once the full message is received it can be processed as you are currently.
Unfortunately I don't know ActionScript, so can't give you an example of the client-side code, but here's how you might write the server and client framing in C#:
Server side:
public void SendMessage(string message)
{
    var data = Encoding.UTF8.GetBytes(message);
    if (data.Length > byte.MaxValue) throw new Exception("Data exceeds maximum size");

    var bufferList = new[]
    {
        new ArraySegment<byte>(new[] {(byte) data.Length}),
        new ArraySegment<byte>(data)
    };

    ClientSocket.Send(bufferList);
}
Client side:
public string ReadMessage()
{
    var header = new byte[1];

    // Read the header indicating the data length
    var bytesRead = ServerSocket.Receive(header);
    if (bytesRead > 0)
    {
        var dataLength = header[0];
        // If the message size is zero, return an empty string
        if (dataLength == 0) return string.Empty;

        var buffer = new byte[dataLength];
        var position = 0;
        while ((bytesRead = ServerSocket.Receive(buffer, position, buffer.Length - position, SocketFlags.None)) > 0)
        {
            // Advance the position by the number of bytes read
            position += bytesRead;

            // If there's still more data to read before we have a full message, call Receive again
            if (position < buffer.Length) continue;

            // We have a complete message - return it.
            return Encoding.UTF8.GetString(buffer);
        }
    }

    // If Receive returns 0, the socket has been closed, so return null to indicate this.
    return null;
}
I am writing a simple test program to send a string from a client to a server and then display the text on the server program. How would I go about doing this?
Here is my client code.
using System;
using System.Net;
using System.Net.Sockets;

class client
{
    static String hostName = Dns.GetHostName();

    public static void Main(String[] args)
    {
        String test = Console.ReadLine();

        IPHostEntry ipEntry = Dns.GetHostByName(hostName);
        IPAddress[] addr = ipEntry.AddressList;
        //The first one in the array is the ip address of the hostname
        IPAddress ipAddress = addr[0];

        TcpClient client = new TcpClient();
        //First parameter is Hostname or IP, second parameter is port on which server is listening
        client.Connect(ipAddress, 8);

        client.Close();
    }
}
Here is my server code
using System;
using System.Net;
using System.Net.Sockets;

class server
{
    static int port = 8;
    static String hostName = Dns.GetHostName();
    static IPAddress ipAddress;

    public static void Main(String[] args)
    {
        IPHostEntry ipEntry = Dns.GetHostByName(hostName);
        //Get a list of possible ip addresses
        IPAddress[] addr = ipEntry.AddressList;
        //The first one in the array is the ip address of the hostname
        ipAddress = addr[0];

        TcpListener server = new TcpListener(ipAddress, port);
        Console.Write("Listening for Connections on " + hostName + "...\n");

        //start listening for connections
        server.Start();

        //Accept the connection from the client, you are now connected
        Socket connection = server.AcceptSocket();
        Console.Write("You are now connected to the server\n\n");

        int pauseTime = 10000;
        System.Threading.Thread.Sleep(pauseTime);

        connection.Close();
    }
}
You can use the Send and Receive overloads on Socket. Asynchronous versions exist as well, through the Begin.../End... method pairs.
You send raw bytes, so you'll need to decide on a protocol, however simple. To send some text, pick an encoding (I would use UTF-8) and get the raw bytes with Encoding.GetBytes.
Example:
Byte[] bytes = Encoding.UTF8.GetBytes("Royale With Cheese");
UTF8 is a static property on the Encoding class, it's there for your convenience.
You can then send the data to the server/client with:
int sent = connection.Send(bytes);
Receive:
Byte[] bytes = new Byte[100];
int received = connection.Receive(bytes);
This looks easy, but there are caveats. The connection may be dropped at any time, so you must be prepared for exceptions, especially SocketException. Also, if you look at the examples you can see that I have two variables, sent and received. They hold how many bytes Send and Receive actually sent or received. You cannot rely on the socket sending or receiving all the data (especially not when receiving: what if the other party sent less than you expected?).
One way to do this is to loop and send until all the data is indicated as sent:
var bytes = Encoding.UTF8.GetBytes("Royale With Cheese");
int count = 0;
while (count < bytes.Length) // if you are brave you can do it all in the condition.
{
    count += connection.Send(
        bytes,
        count,
        bytes.Length - count, // This can be anything as long as it doesn't overflow the buffer, bytes.
        SocketFlags.None);
}
Send and Receive are synchronous: they block until they have sent or received something. You should probably set some kind of timeout value on the socket (Socket.SendTimeout & Socket.ReceiveTimeout). The default is 0, which means they may block forever.
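For example (the 5-second values are arbitrary):

// Arbitrary 5-second timeouts so a blocked Send/Receive eventually throws a
// SocketException instead of waiting forever (the default of 0 means no timeout).
connection.SendTimeout = 5000;    // milliseconds
connection.ReceiveTimeout = 5000; // milliseconds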
Now how about receiving? Let's try to do a similar loop.
int count = 0;
var bytes = new Byte[100];
do
{
    count += connection.Receive(
        bytes,
        count,
        bytes.Length - count,
        SocketFlags.None);
} while (count < bytes.Length);
Hmm. Well... what happens if the client sends fewer than 100 bytes? We would block until the timeout hits (if there is one) or until the client sends enough data. What happens if the client sends more than 100? You will only get the first 100 bytes.
In your case we could try reading very small amounts and print. Read, print, loop:
var sunIsBurning = true;
while (sunIsBurning) {
    var bytes = new Byte[16];
    int received = socket.Receive(bytes);
    if (received > 0)
        Console.Out.WriteLine("Got {0} bytes. START:{1}END;", received, Encoding.UTF8.GetString(bytes, 0, received));
}
Here we say "receive data in 16-byte chunks as long as sunIsBurning, and decode it as UTF-8". This means the data sent by the other party must be UTF-8, or you'll get an exception (which you should probably catch). It also means that the server will only stop receiving when the connection fails.
If you have received 15 bytes, and the client kills the connection, those 15 bytes will never be printed. You will need better error handling for that :)
An uncaught exception will kill your application in most cases; not desirable for a server.
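A minimal sketch of that error handling around the same loop, decoding only the bytes actually received and stopping cleanly when the peer disconnects, might look like this:

// Guard the receive loop so a dropped connection doesn't take the whole
// server down with an unhandled SocketException.
var sunIsBurning = true;
while (sunIsBurning)
{
    var bytes = new Byte[16];
    int received;
    try
    {
        received = socket.Receive(bytes);
    }
    catch (SocketException ex)
    {
        Console.Out.WriteLine("Connection failed: " + ex.SocketErrorCode);
        break; // stop receiving on this socket
    }

    if (received == 0)
        break; // the other side closed the connection gracefully

    // Decode only the bytes actually received, not the whole 16-byte buffer.
    Console.Out.WriteLine("Got {0} bytes: {1}", received, Encoding.UTF8.GetString(bytes, 0, received));
}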
"connection fails" could mean a lot of things, it might be that the connection was reset by the peer or your network card caught fire.
Problem:
When I do something like this:
for (int i = 0; i < 100; i++)
{
    SendMessage(someSocket, i.ToString());
    Thread.Sleep(250); // works with this, doesn't work without
}
With or without the sleep, the server logs each message being sent separately. However, without the sleep the client ends up receiving multiple messages in a single OnDataReceived call, so the client receives messages like:
0,
1,
2,
34,
5,
678,
9 ....
Server sending Code:
private void SendMessage(Socket socket, string message)
{
    logger.Info("SendMessage: Preparing to send message:" + message);
    byte[] byteData = Encoding.ASCII.GetBytes(message);

    if (socket == null) return;
    if (!socket.Connected) return;

    logger.Info("SendMessage: Sending message to non " +
        "null and connected socket with ip:" + socket.RemoteEndPoint);

    // Record this message so unit testing can verify this works.
    socket.Send(byteData);
}
Client receiving code:
private void OnDataReceived(IAsyncResult asyn)
{
    logger.Info("OnDataReceived: Data received.");
    try
    {
        SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
        int iRx = theSockId.Socket.EndReceive(asyn);

        char[] chars = new char[iRx + 1];
        System.Text.Decoder d = System.Text.Encoding.UTF8.GetDecoder();
        int charLen = d.GetChars(theSockId.DataBuffer, 0, iRx, chars, 0);
        System.String szData = new System.String(chars);

        logger.Info("OnDataReceived: Received message:" + szData);
        InvokeMessageReceived(new SocketMessageEventArgs(szData));

        WaitForData(); // .....
Socket Packet:
public class SocketPacket
{
    private Socket _socket;
    private readonly int _clientNumber;
    private byte[] _dataBuffer = new byte[1024]; ....
My hunch is that it's something to do with the buffer size, or that between OnDataReceived and EndReceive we're getting multiple messages.
Update: It turns out that when I put a Thread.Sleep at the start of OnDataReceived it gets every message. Is the only solution to this wrapping my messages with a length prefix and a string to signify the end?
This is expected behaviour. A TCP socket represents a linear stream of bytes, not a sequence of well-delimited “packets”. You must not assume that the data you receive is chunked the same way it was when it was sent.
Notice that this has two consequences:
Two messages may get merged into a single callback call. (You noticed this one.)
A single message may get split up (at any point) into two separate callback calls.
Your code must be written to handle both of these cases, otherwise it has a bug.
There is no need to abandon TCP just because it is stream oriented.
You can fix the problems that you are having by implementing message framing.
See
http://blogs.msdn.com/malarch/archive/2006/06/26/647993.aspx
also:
http://nitoprograms.blogspot.com/2009/04/message-framing.html
TCP sockets don't always send data right away -- in order to minimize network traffic, TCP/IP implementations will often buffer the data for a bit and send it when it sees there's a lull (or when the buffer's full).
If you want to ensure that the messages are processed one by one, you'll need to either set socket.NoDelay = true (which might not help much, since data received may still be bunched up together in the receive buffer), implement some protocol to separate messages in the stream (like prefixing each message with its length, or perhaps using CR/LF to separate them), or use a message-oriented protocol like SCTP (which might not be supported without additional software) or UDP (if you can deal with losing messages).
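As a rough sketch of the delimiter approach applied to the asker's callback (ProcessMessage is a hypothetical stand-in for InvokeMessageReceived, and the server would have to append a '\n' after every message it sends):

// Persistent across callbacks: holds any partial message left over from the
// previous receive. (With non-ASCII text a multi-byte character could be split
// across receives; a byte-level buffer or a persistent Decoder would be safer.)
private readonly StringBuilder _pending = new StringBuilder();

private void OnDataReceived(IAsyncResult asyn)
{
    SocketPacket packet = (SocketPacket)asyn.AsyncState;
    int iRx = packet.Socket.EndReceive(asyn);

    // Append only the bytes actually received this time.
    _pending.Append(Encoding.UTF8.GetString(packet.DataBuffer, 0, iRx));

    // Hand out every complete, '\n'-terminated message; keep the remainder.
    string buffered = _pending.ToString();
    int newline;
    while ((newline = buffered.IndexOf('\n')) >= 0)
    {
        ProcessMessage(buffered.Substring(0, newline)); // hypothetical handler
        buffered = buffered.Substring(newline + 1);
    }
    _pending.Clear();
    _pending.Append(buffered);

    WaitForData(); // re-arm BeginReceive, as in the original code
}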