I am writing a simple test program to send a string from a client to a server and then display the text in the server program. How would I go about doing this?
Here is my client code.
using System;
using System.Net;
using System.Net.Sockets;

class client
{
    static String hostName = Dns.GetHostName();

    public static void Main(String[] args)
    {
        String test = Console.ReadLine();
        IPHostEntry ipEntry = Dns.GetHostByName(hostName);
        IPAddress[] addr = ipEntry.AddressList;
        // The first one in the array is the IP address of the hostname
        IPAddress ipAddress = addr[0];
        TcpClient client = new TcpClient();
        // First parameter is hostname or IP, second parameter is the port on which the server is listening
        client.Connect(ipAddress, 8);
        client.Close();
    }
}
Here is my server code.
using System;
using System.Net;
using System.Net.Sockets;

class server
{
    static int port = 8;
    static String hostName = Dns.GetHostName();
    static IPAddress ipAddress;

    public static void Main(String[] args)
    {
        IPHostEntry ipEntry = Dns.GetHostByName(hostName);
        // Get a list of possible IP addresses
        IPAddress[] addr = ipEntry.AddressList;
        // The first one in the array is the IP address of the hostname
        ipAddress = addr[0];
        TcpListener server = new TcpListener(ipAddress, port);
        Console.Write("Listening for Connections on " + hostName + "...\n");
        // Start listening for connections
        server.Start();
        // Accept the connection from the client; you are now connected
        Socket connection = server.AcceptSocket();
        Console.Write("You are now connected to the server\n\n");
        int pauseTime = 10000;
        System.Threading.Thread.Sleep(pauseTime);
        connection.Close();
    }
}
You can use the Send and Receive overloads on Socket. Asynchronous versions exist as well, through the BeginSend/EndSend and BeginReceive/EndReceive method pairs.
You send raw bytes, so you'll need to decide on a protocol, however simple. To send some text, pick an encoding (I would use UTF-8) and get the raw bytes with Encoding.GetBytes.
Example:
Byte[] bytes = Encoding.UTF8.GetBytes("Royale With Cheese");
UTF8 is a static property on the Encoding class; it's there for your convenience.
You can then send the data to the server/client with:
int sent = connection.Send(bytes);
Receive:
Byte[] bytes = new Byte[100];
int received = connection.Receive(bytes);
This looks easy, but there are caveats. The connection may be dropped at any time, so you must be prepared for exceptions, especially SocketException. Also, if you look at the examples, you can see two variables, sent and received. They hold how many bytes Send and Receive actually transferred. You cannot rely on the socket sending or receiving all the data in one call (especially not when receiving: what if the other party sent less than you expected?).
One way to handle this is to loop and send until all the data is indicated as sent:
var bytes = Encoding.UTF8.GetBytes("Royale With Cheese");
int count = 0;
while (count < bytes.Length) // if you are brave you can do it all in the condition.
{
    count += connection.Send(
        bytes,
        count,
        bytes.Length - count, // This can be anything as long as it doesn't overflow the buffer, bytes.
        SocketFlags.None);
}
Send and Receive are synchronous; they block until they have transferred something. You should probably set some kind of timeout value on the socket (Socket.SendTimeout and Socket.ReceiveTimeout). The default is 0, which means they may block forever.
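For example, a minimal sketch (the 5000 ms values are arbitrary; pick what makes sense for your protocol):

    // Give the socket finite timeouts so a silent peer cannot block
    // Send/Receive forever. A call that times out throws a
    // SocketException, which you should catch.
    connection.SendTimeout = 5000;    // milliseconds
    connection.ReceiveTimeout = 5000; // milliseconds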
Now how about receiving? Let's try to do a similar loop.
int count = 0;
var bytes = new Byte[100];
do
{
    count += connection.Receive(
        bytes,
        count,
        bytes.Length - count,
        SocketFlags.None);
} while (count < bytes.Length);
Hmm. Well... what happens if the client sends fewer than 100 bytes? We would block until we hit the timeout, if there is one, or until the client sends enough data. What happens if the client sends more than 100? You will only get the first 100 bytes.
In your case we could try reading very small amounts and printing. Read, print, loop:
var sunIsBurning = true;
while (sunIsBurning)
{
    var bytes = new Byte[16];
    int received = socket.Receive(bytes);
    if (received > 0)
        Console.Out.WriteLine("Got {0} bytes. START:{1}END;", received,
            Encoding.UTF8.GetString(bytes, 0, received)); // decode only the bytes actually received
}
Here we say "receive data in 16-byte chunks for as long as sunIsBurning, and decode them as UTF-8". This means the data sent by the other party must be valid UTF-8, or the decoded text will be mangled (a strict decoder would throw an exception instead, which you should be prepared to catch). It also means the server will only stop receiving when the connection fails.
If you have received 15 bytes, and the client kills the connection, those 15 bytes will never be printed. You will need better error handling for that :)
An uncaught exception will kill your application in most cases; not desirable for a server.
"connection fails" could mean a lot of things, it might be that the connection was reset by the peer or your network card caught fire.
Related
I am trying to receive a message from a piece of equipment. The equipment is an authentication terminal, and it will send the message as soon as the user enters their credentials.
Also, the equipment's manual says the message will be sent in the ILV format, where I stands for identification, L for length, and V for value.
A normal message would be:
I -> 0x00 (byte 0 indicating success)
L -> 0x04 0x00 (two bytes for length, being 4 the length in this case)
V -> 0x35 0x32 0x38 0x36 (the message itself)
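For reference, my understanding is that one complete message could be parsed like this (a hypothetical sketch of mine; the helper name is made up, and I am assuming the two length bytes are little-endian as in the example above):

    // Hypothetical sketch: read one I-L-V message from a NetworkStream.
    // (EOF checks on ReadByte omitted for brevity: it returns -1 when the connection closes.)
    static byte[] ReadIlvMessage(NetworkStream stream, out int id)
    {
        id = stream.ReadByte();              // I: identification byte
        int lenLow = stream.ReadByte();      // L: low byte of length
        int lenHigh = stream.ReadByte();     //    high byte of length
        int length = lenLow | (lenHigh << 8); // 0x04 0x00 -> 4
        byte[] value = new byte[length];     // V: the payload
        int read = 0;
        while (read < length)                // Read may return fewer bytes than asked for
        {
            int n = stream.Read(value, read, length - read);
            if (n == 0) throw new System.IO.IOException("Connection closed mid-message.");
            read += n;
        }
        return value;
    }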
The message is sent over TCP, so I created a socket using the TcpListener class, following this sample from Microsoft:
https://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener(v=vs.110).aspx
new Thread(() =>
{
    TcpListener server = null;
    try
    {
        Int32 port = 11020;
        IPAddress localAddr = IPAddress.Parse("192.168.2.2");
        server = new TcpListener(localAddr, port);
        server.Server.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        server.Start();
        byte[] bytes = new byte[256];
        String data = null;
        while (true)
        {
            TcpClient client = server.AcceptTcpClient();
            data = null;
            NetworkStream stream = client.GetStream();
            int i = 0;
            while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
            {
                // this code is never reached, as the stream.Read above runs for a while and receives nothing
                data = System.Text.Encoding.ASCII.GetString(bytes, 0, i);
            }
            client.Close();
        }
    }
    catch (SocketException ex)
    {
        // Actions for exceptions.
    }
    finally
    {
        server.Stop();
    }
}).Start();
If the stream.Read is removed, the code flows (although I get nothing either way); however, if I put in any stream.Read statement, the execution stalls for a while as if it were waiting for some response, and then ends with no response: all the bytes read are zero.
I am running Wireshark on the computer and the data is being sent.
Does anybody know what I am doing wrong?
I think the problem is right in the Read method, which halts until the buffer fills completely, but that obviously won't happen here.
In that while-loop, you should spin by checking for available data first, then read it and append it to the byte array. Moreover, since the loop becomes very "tight", it's better to yield control to the task scheduler for a bit.
Here is an example:
int i = 0;
var bytes = new byte[256]; // same buffer as in the question's code
while (true)
{
    if (stream.DataAvailable)
    {
        // Read only into the unused remainder of the buffer
        int count = stream.Read(bytes, i, bytes.Length - i);
        i += count;
        // ...add something to detect the end of message
    }
    else
    {
        Thread.Sleep(0);
    }
}
It's worth noticing that there's no timeout here; adding one is a very common feature, so you can quit the loop when no data arrives (or the data is broken).
Furthermore, the above pattern is not the best way to accumulate data, because the fixed-size buffer limits you: a spurious stream of 300 bytes would not fit in the 256-byte buffer. Either use bytes purely as a temporary buffer for whatever Read returns, or add a safety check before calling Read (so that you always pass a valid byte count).
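For instance, a sketch of the first approach, reusing stream and the sleep-yield pattern from above (a MemoryStream grows as needed, so nothing can overflow):

    var buffer = new byte[256];           // temporary read buffer only
    var accumulated = new System.IO.MemoryStream(); // grows as needed
    while (true)
    {
        if (stream.DataAvailable)
        {
            int count = stream.Read(buffer, 0, buffer.Length);
            if (count == 0) break;               // connection closed by the peer
            accumulated.Write(buffer, 0, count); // keep only the bytes actually read
            // ...check 'accumulated' here to detect the end of message
        }
        else
        {
            Thread.Sleep(0);
        }
    }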
Useful link:
https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.networkstream.read?view=netframework-4.7.1#System_Net_Sockets_NetworkStream_Read_System_Byte___System_Int32_System_Int32_
I'm using the C# Socket class to send/receive TCP messages to/from our server over short-lived connections. The pseudo-code (excluding some fault-tolerance logic for clarity) is as follows.
class MyTCPClient
{
    private Socket mSocket;

    MyTCPClient()
    {
        mSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    }

    void Connect(ipEndpoint, sendTimeOut, receiveTimeOut)
    {
        mSocket.Connect(ipEndpoint);
        mSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendTimeout, sendTimeOut);
        mSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, receiveTimeOut);
    }

    void Send(msg)
    {
        mSocket.Send(msg);
    }

    void Receive(data)
    {
        ByteStreamWriter bsw = new ByteStreamWriter();
        int totalLen = 0;
        while (true)
        {
            byte[] temp = new byte[1024];
            int len = mSocket.Receive(temp, SocketFlags.None);
            System.Threading.Thread.Sleep(50); // This is the weirdest part
            if (len <= 0) break;
            totalLen += len;
            bsw.WriteBytes(temp);
        }
        byte[] buff = bsw.GetBuffer();
        data = new byte[totalLen];
        Array.Copy(buff, 0, data, 0, totalLen);
    }

    ~MyTCPClient()
    {
        mSocket.Close();
    }
}
I was using this class to request the same message from our server several times over short-lived connections, and the following occurred (only when the message size was larger than one MTU, 1500 bytes):
If the "Sleep(50)" was commented out, most of the time (90%) I received wrong "data"; specifically, the "totalLen" was right, but the "data" was wrong.
If I replaced "Sleep(50)" with "Sleep(10)", about half of the time I received wrong "data", and "totalLen" was still always right.
If I used "Sleep(50)", I occasionally received wrong "data".
I can guarantee that, every time, our server sent the right data, and the client received the right data at the TCP layer too (I used Wireshark to monitor all messages through the port I used). Can anyone help explain why my C# code cannot get the right data?
Also, if I use mSocket.Available instead of the return value of Socket.Receive() to control the while loop, I always get the right data and data length...
You don't truncate temp to len bytes before writing it. All the bytes after len in temp have not been filled with current data and thus contain nonsensical, stale data.
When copying to another stream you could use stream.Write(temp, 0, len).
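Applied to the Receive loop above, a corrected sketch could look like this (I have swapped the custom ByteStreamWriter for a MemoryStream, since its exact API isn't shown):

    // Accumulate only the bytes each Receive call actually returned.
    var ms = new System.IO.MemoryStream();
    byte[] temp = new byte[1024];
    while (true)
    {
        int len = mSocket.Receive(temp, SocketFlags.None);
        if (len <= 0) break;    // 0 means the peer closed the connection
        ms.Write(temp, 0, len); // write 'len' bytes, not the whole 1024-byte buffer
    }
    data = ms.ToArray();        // the Sleep(50) is no longer needed once the copy is correct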
I had never used UDP before, so I gave it a go. To see what would happen, I had the 'server' send data every half second and the client receive data every 3 seconds. Even though the server sends data much faster than the client can receive it, the client still receives it all, neatly, one message at a time.
Can anyone explain why/how this happens? Where is the data buffered exactly?
Send
class CSimpleSend
{
    CSomeObjServer obj = new CSomeObjServer();

    public CSimpleSend()
    {
        obj.changedVar = varUpdated;
        obj.threadedChangeSomeVar();
    }

    private void varUpdated(int var)
    {
        string send = var.ToString();
        byte[] packetData = System.Text.UTF8Encoding.UTF8.GetBytes(send);
        string ip = "127.0.0.1";
        int port = 11000;
        IPEndPoint ep = new IPEndPoint(IPAddress.Parse(ip), port);
        Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        client.SendTo(packetData, ep);
        Console.WriteLine("Sent Message: " + send);
        Thread.Sleep(100);
    }
}
All CSomeObjServer does is increment an integer by one every half second.
Receive
class CSimpleReceive
{
    CSomeObjClient obj = new CSomeObjClient();
    public Action<string> showMessage;
    Int32 port = 11000;
    UdpClient udpClient;

    public CSimpleReceive()
    {
        udpClient = new UdpClient(port);
        showMessage = Console.WriteLine;
        Thread t = new Thread(() => ReceiveMessage());
        t.Start();
    }

    private void ReceiveMessage()
    {
        while (true)
        {
            //Thread.Sleep(1000);
            IPEndPoint remoteIPEndPoint = new IPEndPoint(IPAddress.Any, port);
            byte[] content = udpClient.Receive(ref remoteIPEndPoint);
            if (content.Length > 0)
            {
                string message = Encoding.UTF8.GetString(content);
                if (showMessage != null)
                    showMessage("Recv:" + message);
                int var_out = -1;
                bool succ = Int32.TryParse(message, out var_out);
                if (succ)
                {
                    obj.alterSomeVar(var_out);
                    Console.WriteLine("Altered var to :" + var_out);
                }
            }
            Thread.Sleep(3000);
        }
    }
}
CSomeObjClient stores the variable and has one function (alterSomeVar) to update it.
Output:
Sent Message: 1
Recv:1
Altered var to :1
Sent Message: 2
Sent Message: 3
Sent Message: 4
Sent Message: 5
Recv:2
Altered var to :2
Sent Message: 6
Sent Message: 7
Sent Message: 8
Sent Message: 9
Sent Message: 10
Recv:3
Altered var to :3
The operating system kernel maintains separate send and receive buffers for each UDP and TCP socket. If you google SO_SNDBUF and SO_RCVBUF you'll find lots of information about them.
When you send data, it is copied from your application space into the send buffer. From there it is copied to the network interface card, and then onto the wire. The receive side is the reverse: NIC to receive buffer, where it waits until you read it. Additional copies and buffering can also occur, depending on the OS.
It is critical to note that the sizes of these buffers can vary radically. Some systems might default to as little as 4 kilobytes, while others give you 2 megabytes. You can find the current size using getsockopt() with SO_SNDBUF or SO_RCVBUF and likewise set it using setsockopt(). But many systems limit the size of the buffer, sometimes to arbitrarily small amounts. This is typically a kernel value like net.core.wmem_max or net.core.rmem_max, but the exact reference will vary by system.
Also note that setsockopt() can fail even if you request an amount less than the supposed limit. So to actually get a desired size, you need to repeatedly call setsockopt() with decreasing amounts until it finally succeeds.
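In C#, these options are exposed as the Socket.SendBufferSize and Socket.ReceiveBufferSize properties, which wrap getsockopt/setsockopt. A small sketch; the 1 MB request is an arbitrary example, and the OS may grant less:

    var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    sock.ReceiveBufferSize = 1 << 20; // request 1 MB for SO_RCVBUF
    // Read the property back: the OS may have clamped the request.
    Console.WriteLine("Effective receive buffer: " + sock.ReceiveBufferSize + " bytes");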
The following page is a Tech Note from my company which touches on this topic a little bit and provides references for some common systems: http://www.dataexpedition.com/support/notes/tn0024.html
It looks to me like the UdpClient class provides a buffer for received data. Try using a Socket directly. You might also want to set that socket's ReceiveBufferSize to zero, even though I believe it is only used for TCP connections.
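A sketch of what receiving with a raw Socket could look like (same port as the example above):

    var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    sock.Bind(new IPEndPoint(IPAddress.Any, 11000));
    var buffer = new byte[1500];
    EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    int len = sock.ReceiveFrom(buffer, ref remote); // blocks until one datagram arrives
    Console.WriteLine("Got " + len + " bytes from " + remote);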
I wrote two small applications (a client and a server) to test UDP communication, and I found the 'connection' (yeah, I know, there's no real connection) between them gets lost frequently for no reason.
I know UDP is an unreliable protocol, but the problem here does not seem to be the loss of packets, but the loss of the communication channel between the apps.
Here's client app code:
class ClientProgram
{
    static void Main(string[] args)
    {
        var localEP = new IPEndPoint(GetIPAddress(), 0);
        Socket sck = new UdpClient(localEP).Client;
        sck.Connect(new IPEndPoint(IPAddress.Parse("[SERVER_IP_ADDRESS]"), 10005));
        Console.WriteLine("Press any key to request a connection to the server.");
        Console.ReadLine();
        // This signals the server that this client wishes to receive data
        SendData(sck);
        while (true)
        {
            ReceiveData(sck);
        }
    }

    private static void ReceiveData(Socket sck)
    {
        byte[] buff = new byte[8];
        int cnt = sck.Receive(buff);
        long ticks = BitConverter.ToInt64(buff, 0);
        Console.WriteLine(cnt + " bytes received: " + new DateTime(ticks).TimeOfDay.ToString());
    }

    private static void SendData(Socket sck)
    {
        // Just some random data
        sck.Send(new byte[] { 99, 99, 99, 99 });
    }

    private static IPAddress GetIPAddress()
    {
        IPHostEntry he = Dns.GetHostEntry(Dns.GetHostName());
        if (he.AddressList.Length == 0)
            return null;
        return he.AddressList
            .Where(ip => ip.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork && !IPAddress.IsLoopback(ip))
            .FirstOrDefault();
    }
}
Here's the server app code:
class ServerProgram
{
    private static int SLEEP = 5;

    static void Main(string[] args)
    {
        // This is a static IP address
        var localEP = new IPEndPoint(GetIPAddress(), 10005);
        Socket sck = new UdpClient(localEP).Client;
        // When this method returns, a client is ready to receive data
        var remoteEP = ReceiveData(sck);
        sck.Connect(remoteEP);
        while (true)
        {
            SendData(sck);
            System.Threading.Thread.Sleep(ServerProgram.SLEEP * 1000);
        }
    }

    private static EndPoint ReceiveData(Socket sck)
    {
        byte[] buff = new byte[8];
        EndPoint clientEP = new IPEndPoint(IPAddress.Any, 0);
        int cnt = sck.ReceiveFrom(buff, ref clientEP);
        Console.WriteLine(cnt + " bytes received from " + clientEP.ToString());
        return (IPEndPoint)clientEP;
    }

    private static void SendData(Socket sck)
    {
        DateTime n = DateTime.Now;
        byte[] b = BitConverter.GetBytes(n.Ticks);
        Console.WriteLine("Sending " + b.Length + " bytes : " + n.TimeOfDay.ToString());
        sck.Send(b);
    }

    private static IPAddress GetIPAddress()
    {
        // Same as client app...
    }
}
(this is just test code, don't pay attention to the infinite loops or lack of data validation)
The problem is that after a few messages are sent, the client stops receiving them. The server keeps sending, but the client gets stuck at sck.Receive(buff). If I change the SLEEP constant to a value higher than 5, the 'connection' gets lost after 3 or 4 messages most of the time.
I can confirm the client machine doesn't receive any packets once the connection is lost, since I use Wireshark to monitor the communication.
The server app runs on a server with a direct connection to the Internet, but the client is a machine in a local network, behind a router. Neither of them has a firewall running.
Does anyone have a clue what could be happening here?
Thank you!
EDIT - Additional data:
I tested the client app on several machines in the same network, and the connection was always lost. I also tested the client app on another network behind a different router, and there was no problem there. My router is a Linksys RV042, and I have never had any problem with it; in fact, this app is the only one with problems.
PROBLEM SOLVED - SHORT ANSWER: It was a hardware problem.
I don't see any overt problems in the source code. If it is true that:
The server keeps sending packets (confirmed by WireShark on the server)
The client never receives the packets (confirmed by WireShark on the client)
...then the problem could be related to the networking equipment between the two machines, which doesn't always handle UDP flows as expected, especially behind routers/firewalls.
I would recommend the following approach to troubleshooting:
Run client & server on the same system and use the loopback interface (confirm UDP code works on the loopback; see the sketch after this list)
Run the client & server on two different systems that are plugged into the same Ethernet switch (confirm that UDP communication works switch-local)
If everything works switch-local you can be fairly sure you have a network configuration problem.
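For that first loopback step, a minimal single-process smoke test could look like this (a sketch; the port is arbitrary):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class LoopbackUdpTest
    {
        static void Main()
        {
            using (var receiver = new UdpClient(10005)) // arbitrary test port
            using (var sender = new UdpClient())
            {
                sender.Connect(IPAddress.Loopback, 10005);
                byte[] payload = Encoding.UTF8.GetBytes("ping");
                sender.Send(payload, payload.Length);

                var remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] received = receiver.Receive(ref remote); // blocks until the datagram arrives
                Console.WriteLine("Loopback OK: " + Encoding.UTF8.GetString(received));
            }
        }
    }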
Problem:
When I do something like this:
for (int i = 0; i < 100; i++)
{
    SendMessage(someSocket, i.ToString());
    Thread.Sleep(250); // works with this, doesn't work without
}
With or without the sleep, the server logs sending the messages separately. However, without the sleep the client ends up receiving multiple messages in a single OnDataReceived call, so the client receives messages like:
0,
1,
2,
34,
5,
678,
9 ....
Server sending code:
private void SendMessage(Socket socket, string message)
{
    logger.Info("SendMessage: Preparing to send message:" + message);
    byte[] byteData = Encoding.ASCII.GetBytes(message);
    if (socket == null) return;
    if (!socket.Connected) return;
    logger.Info("SendMessage: Sending message to non " +
        "null and connected socket with ip:" + socket.RemoteEndPoint);
    // Record this message so unit testing can verify this works.
    socket.Send(byteData);
}
Client receiving code:
private void OnDataReceived(IAsyncResult asyn)
{
    logger.Info("OnDataReceived: Data received.");
    try
    {
        SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
        int iRx = theSockId.Socket.EndReceive(asyn);
        char[] chars = new char[iRx + 1];
        System.Text.Decoder d = System.Text.Encoding.UTF8.GetDecoder();
        int charLen = d.GetChars(theSockId.DataBuffer, 0, iRx, chars, 0);
        System.String szData = new System.String(chars);
        logger.Info("OnDataReceived: Received message:" + szData);
        InvokeMessageReceived(new SocketMessageEventArgs(szData));
        WaitForData(); // .....
Socket Packet:
public class SocketPacket
{
    private Socket _socket;
    private readonly int _clientNumber;
    private byte[] _dataBuffer = new byte[1024]; ....
My hunch is that it's something to do with the buffer size, or that between OnDataReceived and EndReceive we're receiving multiple messages.
Update: It turns out that when I put a Thread.Sleep at the start of OnDataReceived, it gets every message. Is the only solution to wrap my message with a length prefix and a terminator string to signify the end?
This is expected behaviour. A TCP socket represents a linear stream of bytes, not a sequence of well-delimited “packets”. You must not assume that the data you receive is chunked the same way it was when it was sent.
Notice that this has two consequences:
Two messages may get merged into a single callback call. (You noticed this one.)
A single message may get split up (at any point) into two separate callback calls.
Your code must be written to handle both of these cases, otherwise it has a bug.
There is no need to abandon TCP just because it is stream-oriented.
You can fix the problems that you are having by implementing message framing.
See
http://blogs.msdn.com/malarch/archive/2006/06/26/647993.aspx
also:
http://nitoprograms.blogspot.com/2009/04/message-framing.html
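The essence of length-prefixed framing, as a sketch (a 4-byte length header written with BitConverter, which is little-endian on common platforms; both ends must agree on the header format, and this assumes a blocking socket):

    // Sender: prefix each message with its byte length.
    static void SendFramed(Socket socket, byte[] message)
    {
        socket.Send(BitConverter.GetBytes(message.Length)); // 4-byte header
        socket.Send(message);
    }

    // Receiver: read exactly 4 header bytes, then exactly that many payload bytes.
    static byte[] ReceiveFramed(Socket socket)
    {
        int length = BitConverter.ToInt32(ReceiveExactly(socket, 4), 0);
        return ReceiveExactly(socket, length);
    }

    static byte[] ReceiveExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int read = 0;
        while (read < count) // Receive may return fewer bytes than requested
        {
            int n = socket.Receive(buffer, read, count - read, SocketFlags.None);
            if (n == 0) throw new SocketException((int)SocketError.ConnectionReset);
            read += n;
        }
        return buffer;
    }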
TCP sockets don't always send data right away -- in order to minimize network traffic, TCP/IP implementations will often buffer the data for a bit and send it when it sees there's a lull (or when the buffer's full).
If you want to ensure that the messages are processed one by one, you'll need to either set socket.NoDelay = true (which might not help much, since received data may still be bunched up in the receive buffer), implement some protocol to separate the messages in the stream (like prefixing each message with its length, or using CR/LF to separate them), or use a message-oriented protocol such as SCTP (which might not be supported without additional software) or UDP (if you can deal with losing messages).
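For the CR/LF variant, a sketch of splitting the received stream in the callback (buffering any partial tail until the next chunk arrives; the names here are illustrative, not from the code above):

    // Hypothetical sketch: turn an unframed byte stream into CR/LF-separated
    // messages. Requires System.Text for StringBuilder and Encoding.
    StringBuilder pending = new StringBuilder();

    void OnChunk(byte[] buffer, int count)
    {
        pending.Append(Encoding.ASCII.GetString(buffer, 0, count));
        string all = pending.ToString();
        int nl;
        while ((nl = all.IndexOf("\r\n")) >= 0) // emit every complete line
        {
            Console.WriteLine("Message: " + all.Substring(0, nl));
            all = all.Substring(nl + 2);
        }
        pending.Clear();     // keep only the partial tail for the next call
        pending.Append(all);
    }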