I have implemented a WebSocket server using Alchemy WebSockets, and I am now trying to stress test it. I have written the following method in C# to create a number of clients that connect to the server and send some data:
private void TestWebSocket()
{
    int clients = 10;
    long messages = 10000;
    long messagesSent = 0;
    String host = "127.0.0.1";
    String port = "11005";

    WSclient[] clientArr = new WSclient[clients];
    for (int i = 0; i < clientArr.Length; i++)
    {
        clientArr[i] = new WSclient(host, port);
    }

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < messages; i++)
    {
        // Distribute messages across the clients round-robin
        clientArr[i % clients].Send("Message " + i);
        messagesSent++;
    }
    sw.Stop();

    Console.WriteLine("Clients: " + clients);
    Console.WriteLine("Messages to Send: " + messages);
    Console.WriteLine("Messages Sent: " + messagesSent);
    Console.WriteLine("Time: " + sw.Elapsed.TotalSeconds);
    Console.WriteLine("Messages/s: " + messages / sw.Elapsed.TotalSeconds);
    Console.ReadLine();

    for (int i = 0; i < clientArr.Length; i++)
    {
        clientArr[i].Disconnect();
    }
    Console.ReadLine();
}
However, the server is receiving fewer messages than were sent (even with a small number, e.g. 100). Or sometimes multiple messages are received concatenated as a single message, e.g.:
Message1 = abc Message2 = def
Received As = abcdef
I am trying to more or less replicate the example shown here. At the moment both the server and the client are running locally. Any ideas on what the problem is, or how to improve the test method?
There are two open issues on the GitHub project that sound similar:
Server drops inbound messages and receives corrupted input
JSON messages truncated
One of the commenters reported better luck with Fleck.
TCP is a streaming protocol, not a message-oriented protocol. That means the receiver is responsible for finding the beginning and end of each message within the stream. It also means the receiver must not only break apart large reads into individual messages, but sometimes also collect small reads until a complete message has arrived. Your example shows that two messages were sent and all of their bytes were received, but your server cannot determine where one message ends and the next begins. You probably need to add some sort of internal protocol to your data to mark the beginning and end of each message. If your messages are always exactly the same length, you could just work with the size, but that's less reliable and potentially difficult to port to other communication methods (if that is ever needed later in the program's life -- something that almost always happens to me!).
If your messages are all the same length, the receiver can normally limit the read size to that length (I don't know your library, though) so that picking apart large reads is not necessary. HOWEVER, small reads may still occur due to the way the TCP/IP stack collects data from the stream into packets for transmission on the physical network. If you don't want to write collection code, then you need to find a peek function that tells you how much data is available before you actually perform the read, so your program can wait until at least one whole message is ready.
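For example, here is a minimal length-prefix framing sketch over a plain NetworkStream (the Framing class and its methods are illustrative, not part of Alchemy's API; WebSocket libraries normally frame messages for you, so this applies to the raw TCP layer underneath):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class Framing
{
    // Write a 4-byte length header followed by the payload.
    public static void WriteMessage(NetworkStream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Read exactly one framed message, collecting small reads as needed.
    public static string ReadMessage(NetworkStream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.UTF8.GetString(payload);
    }

    // Loop until `count` bytes have arrived; a single Read may return less.
    private static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}

With this scheme, "abc" and "def" can never be mistaken for "abcdef", because each message carries its own length.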
I'm fairly new to programming with sockets. I have a class called clientInfo whose instance variables include a client's socket and a client's thread. I created a list of clientInfos to keep track of the connections coming into the server, and I've successfully managed to have multiple clients send messages to each other.
listOfClients.Add(new clientInfo(listen.Accept()));
The thread in each clientInfo runs an infinite loop to receive incoming data, as shown in the code below. The idea I had was: if I get an exception from the server trying to receive data from a disconnected client, all I should do is remove the client in the list causing the exception, right?
I would iterate through the clients to find exactly which spot in the list the error is coming from by sending a heartbeat message. Should sending fail, I would then have the exact location of the problematic socket, close that socket, abort its thread, and remove its clientInfo from the list, right? I hope I have the right idea for that logic. However, when I do so, I still haven't truly solved the exception, which is why (I think) the code shoots itself in the foot by closing all the other connections as well. Honestly, I'm at a loss for how to solve this.
There's also the unfortunate matter of sending packets to each socket in the list, where an ObjectDisposedException is raised if I close, abort, and remove a socket from the list. Is there a way to completely remove an item from the list as if it were never added in the first place? I assumed RemoveAt(i) would do so, but I'm wrong about that.
I've read many answers stating that the best way to handle clients disconnecting is to use socket.Close() and list.RemoveAt(i). My desired goal is that, even if 98 out of 100 clients unexpectedly lose connection, the remaining two clients can still send each other packets through the server. Am I on the right path, or is my approach completely wrong?
byte[] buff;
int readBytes;

while (true)
{
    try
    {
        buff = new byte[clientSocket.SendBufferSize];
        // This line raises an exception should a client disconnect unexpectedly.
        readBytes = clientSocket.Receive(buff);
        if (readBytes > 0)
        {
            Packet pack = new Packet(buff);
            handleData(pack);
        }
    }
    catch (SocketException e)
    {
        Console.WriteLine("A client disconnected!");
        for (int i = 0; i < listOfClients.Count; i++)
        {
            try
            {
                string message = "This client is alive!";
                Packet heartbeat = new Packet(Packet.PacketType.Send, "Server");
                heartbeat.data.Add(message);
                clientSocket.Send(heartbeat.toByte());
            }
            catch (SocketException ex)
            {
                Console.WriteLine("Removing " + listOfClients[i].clientEndPointy.Address + ":" + listOfClients[i].clientEndPointy.Port);
                //listOfClients[i].clientSocket.Disconnect(reuseSocket: true);
                listOfClients[i].clientSocket.Close();
                listOfClients[i].clientThread.Abort();
                listOfClients.RemoveAt(i);
            }
        }
    }
}
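One likely culprit in the code above: the heartbeat in the catch block is sent on the outer clientSocket (the very socket whose Receive just failed), not on listOfClients[i].clientSocket, so every probe throws and every client gets removed. A minimal sketch of a safer sweep, iterating backwards so RemoveAt doesn't shift entries that haven't been visited yet (the clientInfo members are assumed from the question):

// Probe each client individually and drop only the dead ones.
for (int i = listOfClients.Count - 1; i >= 0; i--)
{
    try
    {
        Packet heartbeat = new Packet(Packet.PacketType.Send, "Server");
        heartbeat.data.Add("This client is alive!");
        // Send on the client being inspected, not the socket that just failed.
        listOfClients[i].clientSocket.Send(heartbeat.toByte());
    }
    catch (SocketException)
    {
        Console.WriteLine("Removing " + listOfClients[i].clientEndPointy.Address);
        listOfClients[i].clientSocket.Close();
        // Walking backwards means RemoveAt can't skip the next entry.
        listOfClients.RemoveAt(i);
    }
}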
I currently have to send files larger than 4 MB through several servers using MSMQ. The files are initially sent in chunks, like so:
using (MessageQueueTransaction oTransaction = new MessageQueueTransaction())
{
    // Begin the transaction
    oTransaction.Begin();

    // Start reading the file
    using (FileStream oFile = File.OpenRead(PhysicalPath))
    {
        // Bytes read
        int iBytesRead;

        // Buffer for the file itself
        var bBuffer = new byte[iMaxChunkSize];

        // Read the file, a block at a time
        while ((iBytesRead = oFile.Read(bBuffer, 0, bBuffer.Length)) > 0)
        {
            // Get the right length
            byte[] bBody = new byte[iBytesRead];
            Array.Copy(bBuffer, bBody, iBytesRead);

            // New message
            System.Messaging.Message oMessage = new System.Messaging.Message();

            // Set the label
            oMessage.Label = "TEST";

            // Set the body
            oMessage.BodyStream = new MemoryStream(bBody);

            // Log
            iByteCount = iByteCount + bBody.Length;
            Log("Sending data (" + iByteCount + " bytes sent)", EventLogEntryType.Information);

            // Send within the transaction
            oQueue.Send(oMessage, oTransaction);
        }
    }

    // Commit
    oTransaction.Commit();
}
These messages are sent from Machine A to Machine B, and then forwarded to Machine C. However, I've noticed that the PeekCompleted event on Machine B is triggered before all the messages have been sent.
For example, a test run just now showed 8 messages sent, which were processed on Machine B in groups of 1, 1 and then 6.
I presume this is due to the transactional part ensuring the messages arrive in exactly the right order, but not guaranteeing they are all collected at exactly the same time.
The worry I have is that when Machine B passes the messages to Machine C, these now count as 3 separate transactions, and I'm unsure as to whether the transactions themselves are delivered in the correct order (for example, 1 then 6 then 1).
My question is, is it possible to receive messages using PeekCompleted by transaction (meaning, all 8 messages are collected first), and pass them on so Machine C gets all 8 messages together? Even in a system where multiple transactions are being sent at the same time?
Or are the transactions themselves guaranteed to arrive in the correct order?
I think I missed this when looking at the topic:
https://msdn.microsoft.com/en-us/library/ms811055.aspx
That these messages will either be sent together, in the order they were sent, or not at all. In addition, consecutive transactions initiated from the same machine to the same queue will arrive in the order they were committed relative to each other.
So, no matter how the messages end up being split across deliveries, the order will never be affected.
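For illustration, a minimal sketch of draining all available chunks on Machine B inside a single transaction before forwarding them (the queue path, the timeout, and the ForwardToMachineC helper are assumptions, not part of the original code; note that a timeout cannot distinguish "stream finished" from "stream delayed", so a real protocol would mark the final chunk explicitly):

using System;
using System.Messaging;

using (var oQueue = new MessageQueue(@".\private$\fileChunks"))
using (var oTransaction = new MessageQueueTransaction())
{
    oTransaction.Begin();
    try
    {
        while (true)
        {
            // Receive the next chunk within the same transaction; throws
            // MessageQueueException with IOTimeout when nothing arrives in time.
            Message oMessage = oQueue.Receive(TimeSpan.FromSeconds(5), oTransaction);
            ForwardToMachineC(oMessage); // hypothetical helper
        }
    }
    catch (MessageQueueException ex) when (ex.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
    {
        // No more chunks available right now.
    }
    oTransaction.Commit();
}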
First, I don't know if Stack Overflow is the best site to post this kind of message, but I don't know of another site like it.
In order to properly understand TCP programming in C#, I decided to work through all the possible approaches from scratch. Here is what I want to cover (not necessarily in the right order):
- Simple One Thread Socket Server (this article)
- Simple Multiple Threads Socket Server (I don't know how yet, because threads are complicated)
- Simple Thread Socket Server (put the client management in another thread)
- Multiple Threads Socket Server
- Using tcpListener
- Using async / Await
- Using tasks
The ultimate objective is to know how to build the best TCP server, without just copy/pasting parts of code, but properly understanding everything.
So, this is my first part: a single-thread TCP server.
Here is my code, but I don't expect anybody to correct much, because it's largely a copy of the MSDN example: http://msdn.microsoft.com/en-us/library/6y0e13d3(v=vs.110).aspx
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;

namespace SimpleOneThreadSocket
{
    public class ServerSocket
    {
        private int _iPport = -1;
        private static int BUFFER_SIZE = 1024;
        private Socket _listener = null;

        public ServerSocket(int iPort)
        {
            // Create a TCP/IP socket.
            this._listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            // Save the port
            this._iPport = iPort;
        }

        public void Start()
        {
            byte[] buffer = null;
            String sDatasReceived = null;

            // Bind the socket to the loopback address
            try
            {
                this._listener.Bind(new System.Net.IPEndPoint(System.Net.IPAddress.Loopback, _iPport));
                this._listener.Listen(2);
            }
            catch (Exception e)
            {
                System.Console.WriteLine(e.ToString());
            }

            // Listening
            try
            {
                Console.WriteLine("Server listening on 127.0.0.1:" + _iPport);
                while (true)
                {
                    Socket client = this._listener.Accept();
                    Console.WriteLine("Incoming connection from : " + ((IPEndPoint)client.RemoteEndPoint).Address + ":" + ((IPEndPoint)client.RemoteEndPoint).Port);

                    // An incoming connection needs to be processed.
                    while (true)
                    {
                        buffer = new byte[BUFFER_SIZE];
                        int bytesRec = client.Receive(buffer);
                        sDatasReceived += Encoding.ASCII.GetString(buffer, 0, bytesRec);

                        if (sDatasReceived.IndexOf("<EOF>") > -1)
                        {
                            // Show the data on the console.
                            Console.WriteLine("Text received : {0}", sDatasReceived);

                            // Echo the data back to the client.
                            byte[] msg = Encoding.ASCII.GetBytes(sDatasReceived);
                            client.Send(msg);
                            sDatasReceived = "";
                            buffer = null;
                        }
                        else if (sDatasReceived.IndexOf("exit") > -1)
                        {
                            client.Shutdown(SocketShutdown.Both);
                            client.Close();
                            sDatasReceived = "";
                            buffer = null;
                            break;
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }
    }
}
But I have some questions about it:
The Socket.Listen method has a parameter: backlog. According to MSDN, backlog is the number of available connections. I don't know why, but when I pass 0, I can still connect to my server with multiple Telnet sessions. EDIT: 0 and 1 both allow 2 connections (1 current, 1 pending), 2 allows 3 connections (1 current, 2 pending), etc. So I didn't understand MSDN's meaning very well.
Can you confirm that the Accept method takes each connection one after another? Is that why I see text from different Telnet sessions in my server?
Can you confirm that (my server being a C# library) I can't kill my server with this kind of code without killing the process? It could be possible with threads, but that will come later.
If something is wrong in my code, please help me :)
I will come back soon with a simple multi-threaded socket server, but I don't know how yet (I think there is one intermediate step before using threads or async/await).
First off, do your best not to even learn this. If you can possibly use a SignalR server, then do so. There is no such thing as a "simple" socket server at the TCP/IP level.
If you insist on the painful route (i.e., learning proper TCP/IP server design), then there's a lot to learn. First, the MSDN examples are notoriously bad starting points; they barely work and tend to not handle any kind of error conditions, which is absolutely necessary in the real world when working at the TCP/IP level. Think of them as examples of how to call the methods, not examples of socket clients or servers.
I have a TCP/IP FAQ that may help you, including a description of the backlog parameter. This is how many connections the OS will accept on your behalf before your code gets around to accepting them, and it's only a hint anyway.
To answer your other questions: A single call to Accept will accept a single new socket connection. The code as-written has an infinite loop, so it will work like any other infinite loop; it will continue executing until it encounters an exception or its thread is aborted (which happens on process shutdown).
If something is wrong in my code, please help me
Oh, yes. There are lots of things wrong with this code. It's an MSDN socket example, after all. :) Off the top of my head:
The buffer size is an arbitrary value, rather low. I would start at 8K myself, so it's possible to get a full Ethernet packet in a single read.
The Bind explicitly uses the loopback address. OK for playing around, I guess, but remember to set this to IPAddress.Any in the real world.
backlog parameter is OK for testing, but should be int.MaxValue on a true server to enable the dynamic backlog in modern server OSes.
The code will fall through the first catch and attempt to Accept even after Bind/Listen has failed.
If any exception occurs (e.g., from Listen or Receive), then the entire server shuts down. Note that a client socket being terminated will result in an exception that should be logged/ignored, but it would stop this server.
The read buffer is re-allocated each time through the loop, even though the old buffer is never used again.
ASCII is a lossy encoding.
If a client cleanly shuts down without sending <EOF>, then the server enters an infinite busy loop (see the sketch after this list).
Received data is not properly separated into messages; it is possible that the echoed message contains all of one message and part of another. In this particular example it doesn't matter (since it's just an echo server and it's using ASCII instead of a real encoding), but this example hides the fact that you need to handle message framing properly in any real-world application.
The decoding should be done after the message framing. This isn't necessary for ASCII (a lossy encoding), but it's required for any real encodings like UTF8.
Since the server is only either receiving or sending at any time (and never both), it cannot detect or recover from a half-open socket situation. A half-open socket will cause this server to hang.
The server is only capable of a single connection at a time.
That was just after a brief readthrough. There could easily be more.
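To illustrate just the busy-loop item, here is a sketch of a receive loop that treats a zero-byte Receive as the client closing (framing and error handling are still omitted; `client` is the accepted Socket from the example above):

// Socket.Receive returns 0 when the peer shuts down cleanly,
// so the loop must exit instead of spinning forever on empty reads.
byte[] buffer = new byte[8192]; // allocated once and reused
while (true)
{
    int bytesRec = client.Receive(buffer);
    if (bytesRec == 0)
    {
        // Graceful shutdown from the client.
        client.Shutdown(SocketShutdown.Both);
        client.Close();
        break;
    }
    // ... accumulate bytesRec bytes and apply message framing here ...
}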
Okay, from my knowledge, UDP works like this:
You have data you want to send, and you say to the UDP client: hey, send this data.
The UDP client then says: sure, why not, and sends the data to the selected IP and port.
Whether it gets through, or arrives in the right order, is another story: it has sent the data, and you didn't ask for anything else.
From this perspective, it's pretty much impossible to send data and reassemble it.
For example, I have a 1 MB image, and I send it.
So I divide it into 60 KB chunks (or whatever fits the packets), and send them one by one, from first to last.
In theory, if everything is put back together, the image should be exactly the same.
But that theory breaks down, as nothing dictates whether one packet arrives faster or slower than another, so it may only work if you add some kind of wait timer and hope for the best that they arrive in the order they were sent.
Anyway, what I want to understand is why this works:
void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    if (connect == true && MuteMic.Checked == false)
    {
        udpSend.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
    }
}
Receiving:
while (connect == true)
{
    byte[] byteData = udpReceive.Receive(ref remoteEP);
    waveProvider.AddSamples(byteData, 0, byteData.Length);
}
So basically, it sends the audio buffer over UDP.
The receiving part just adds the received UDP data to a buffer and plays it.
Now, this works.
And I wonder: why?
How can this work? How come the data is sent in the right order and added so that it appears as a continuous audio stream?
Because if I were to do this with an image, I would probably get all the data,
but probably in a random order, and I can only solve that by marking packets and things like that. And then there is simply no reason for it, and TCP takes over.
So if someone can please explain this, I just don't get it.
Here is a code example for sending an image, and well, it works. But the entire byte array doesn't always seem to arrive intact, meaning some part of the image is corrupted (not sure why; probably something to do with the sizes of the byte arrays).
Send:
using (var udpcap = new UdpClient(0))
{
    udpcap.Client.SendBufferSize *= 16;
    bsize = ms.Length;
    var buff = new byte[7000];
    int c = 0;
    int size = 7000;
    for (int i = 0; i < ms.Length; i += size)
    {
        c = Math.Min(size, (int)ms.Length - i);
        Array.Copy(ms.GetBuffer(), i, buff, 0, c);
        udpcap.Send(buff, c, adress.Address.ToString(), 1700);
    }
}
Receive:
using (var udpcap = new UdpClient(1700))
{
    udpcap.Client.SendBufferSize *= 16;
    var databyte = new byte[1619200];
    int i = 0;
    for (int q = 0; q < 11; ++q)
    {
        byte[] data = udpcap.Receive(ref adress);
        Array.Copy(data, 0, databyte, i, data.Length);
        i += data.Length;
    }
    var newImage = Image.FromStream(new MemoryStream(databyte));
    gmp.DrawImage(newImage, 0, 0);
}
You should be using TCP. You write that it's pretty much impossible to send data and reassemble it: you divide a 1 MB image into 60 KB chunks, send them one by one, and since nothing dictates whether one packet arrives faster or slower than another, you can only add some kind of wait timer and hope for the best that they arrive in the order they were sent. That's exactly what TCP does for you: it ensures that all the pieces of a stream of data are received in the order they were sent, with no omissions, duplications, or modifications. If you really want to re-implement that yourself, you should read RFC 793; it talks at length about how to build a reliable data stream atop an unreliable packet service.
But really, just use TCP.
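For comparison, a minimal sketch of the sending side over TCP with a 4-byte length prefix (the endpoint and port are illustrative, and the ms MemoryStream is assumed from the UDP example):

using System;
using System.Net;
using System.Net.Sockets;

using (var client = new TcpClient())
{
    client.Connect(IPAddress.Parse("192.168.0.2"), 1700); // illustrative endpoint
    NetworkStream stream = client.GetStream();
    byte[] image = ms.ToArray();
    byte[] header = BitConverter.GetBytes(image.Length); // 4-byte length prefix
    stream.Write(header, 0, header.Length);
    stream.Write(image, 0, image.Length);
}

The receiver reads the 4-byte length first, then loops on Stream.Read until that many bytes have arrived; TCP guarantees they show up complete and in order.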
You're missing a lot of helpful details from your question, but based on the level of understanding presented, I'll attempt to answer at a similar level:
You're absolutely right: in general, the UDP protocol doesn't guarantee order of delivery, or even delivery at all. Your local host is going to send the packets (i.e. parts of your message) in the order it receives them from the sending application, and from there it's up to network components to decide how your request gets delivered. In local networks, however (within a handful of hops of the original requester), there aren't really a lot of directions for the packets to go, so they will likely just flow in line and never see a hiccup.
On the greater internet, however, there is likely a wide variety of routing choices available to each router between your requesting host and the destination. Every router along the way can make a choice about which direction parts of your message take. Assuming all paths were equal (they aren't) and every network segment between the two hosts were guaranteed reliable, you would likely see results similar to a local network (with added latency). Unfortunately, neither of those conditions can be relied on: different paths on the internet perform differently depending on client and server, and no single path on the internet should ever be considered reliable (that's why it's a "net").
These are of course general observations from my own experience in network support and admin roles. Members of other Stack Exchange sites may be able to provide better feedback.
I am sending and receiving bytes between a server and a client. The server regularly sends messages as bytes and the client receives them.
Message format is below:
{Key:Value,Key:Value,Key:Value}
Now, at the client side, instead of receiving the message once, I am receiving multiple copies of it, which is not what I want.
The client is receiving something like this:
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,Key:Value}
{Key:Value,Key:Value,
Can someone help me figure out the problem?
Updated
This code is sending instructions.
var client = (param as System.Net.Sockets.Socket);
while (true)
{
    try
    {
        var instructions = "{";
        instructions += "Window:" + window + ",";
        instructions += "Time:" + System.DateTime.Now.ToShortTimeString() + ",";
        instructions += "Message:" + msgToSend + "";
        instructions += "}";

        var bytes = System.Text.Encoding.Default.GetBytes(instructions);
        client.Send(bytes, 0, bytes.Length, System.Net.Sockets.SocketFlags.None);
    }
    catch (Exception ex)
    {
        continue;
    }
}
This code is receiving at client side.
while (true)
{
    try
    {
        var data = new byte[tcpClient.ReceiveBufferSize];
        stream.Read(data, 0, tcpClient.ReceiveBufferSize);
        instructions = System.Text.Encoding.Default.GetString(data.ToArray());
    }
    catch (Exception ex)
    {
        continue;
    }
}
Okay, a few problems with this code:
You're using Encoding.Default, which is almost certainly not what you want to do
You're always decoding the whole string, rather than just the amount you've actually managed to read - you're ignoring the return value of stream.Read
You're just continuing after an exception, with no logging, error handling or anything
As Dean says, you're repeatedly sending the same data
Ideally, it would be useful for your messages to have a prefix saying how long each one is, in bytes. Then in the receiving side you can read that length, then loop to repeatedly read into a buffer until you've read all the data you need. Then perform the decoding.
If you can't change the protocol, you'll still need to loop round, but checking for the end delimiter ("}" presumably) explicitly - and noting that you may receive data from the next message which you'll have to store until you next want to read.
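For instance, a minimal sketch of delimiter-based framing for this protocol (it assumes the payload itself never contains "}", that `stream` is the NetworkStream from the question, and that HandleMessage is a hypothetical handler):

using System;
using System.Text;

// Accumulate reads and split on the '}' delimiter, keeping any
// partial trailing message for the next pass. A stateful Decoder
// handles multi-byte characters split across reads.
Decoder decoder = Encoding.UTF8.GetDecoder();
var pending = new StringBuilder();
var buffer = new byte[4096];
int read;
while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Decode only the bytes actually read.
    char[] chars = new char[decoder.GetCharCount(buffer, 0, read)];
    decoder.GetChars(buffer, 0, read, chars, 0);
    pending.Append(chars);

    string text = pending.ToString();
    int end;
    while ((end = text.IndexOf('}')) >= 0)
    {
        HandleMessage(text.Substring(0, end + 1)); // hypothetical handler
        text = text.Substring(end + 1);
    }
    pending.Clear();
    pending.Append(text); // keep the incomplete tail
}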
You've got:
while (true)
In the sender: it's just going to keep sending the same thing over and over...
Also, if you get an exception trying to send or receive the data, you can't just try again and expect it to work. Depending on the exact error, you might need to reestablish the connection, or it might be that the network has gone away completely. In any case, simply retrying again is almost always going to be the wrong thing to do.
The problem has been figured out, as Dean Harding said.
But besides that, you should be clearer about "client" versus "server".
Basically:
Only the server side should wait (in a loop) for messages. The client (sender) should send messages when needed, or when some condition is met.
You can send messages in a loop, but you should control and regulate it with a Sleep or a Timer. That way, you spare resources and give the receiver more time to process each message completely.
You are sending your data through TCP. TCP is a stream-oriented protocol, so you know the client will receive the same stream of bytes in the same order, but you lose the packet boundaries. Your protocol seems to be packet-oriented instead. You then have a choice:
switch to a packet-oriented protocol (UDP) or
delimit the packets yourself at the receiving side (as Jon Skeet said, by looking for the delimiters).
Keep in mind that TCP has some reliability features not found in UDP. If reliability is not a concern, switch to UDP. Otherwise, finding the delimiters at the client side should be easier than implementing your own reliability layer.