My program uses sockets for inter-process communication. There is one server listening on a socket port(B) on localhost, waiting for a number of TCP clients to connect. On the other side of the server is another socket(A) that sends data out to the internet. The server is designed to take everything the TCP clients send it and forward it to a server on the internet. My question: if two of the TCP clients happen to send data at the same time, is this going to be a problem for the server's outgoing socket(A)?
Thanks
The MSDN docs recommend that you use BeginSend and EndSend if multiple threads will be using the same socket to transmit data.
So I would suggest that you either use those methods or write outgoing data to a synchronized queue, from which a single thread picks items and sends them over socket(A).
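If you go the queue route, a minimal sketch might look like this (it assumes socket(A) is already connected and is represented by a Socket instance; the Forwarder name and the raw byte[] message type are just placeholders for illustration):

    using System.Collections.Concurrent;
    using System.Net.Sockets;
    using System.Threading;

    class Forwarder
    {
        // Client-handling threads enqueue here instead of touching socket(A) directly.
        private readonly BlockingCollection<byte[]> _outgoing = new BlockingCollection<byte[]>();
        private readonly Socket _socketA;   // assumed to be already connected

        public Forwarder(Socket socketA)
        {
            _socketA = socketA;
            new Thread(SenderLoop) { IsBackground = true }.Start();
        }

        // Called by any client handler, from any thread.
        public void Enqueue(byte[] message) => _outgoing.Add(message);

        // The only code that ever writes to socket(A), so client payloads never interleave.
        private void SenderLoop()
        {
            foreach (byte[] message in _outgoing.GetConsumingEnumerable())
            {
                int sent = 0;
                while (sent < message.Length)   // Send may accept fewer bytes than requested
                    sent += _socketA.Send(message, sent, message.Length - sent, SocketFlags.None);
            }
        }
    }

Each client handler calls Enqueue, and since only the sender thread ever touches socket(A), two clients sending at the same time can no longer interleave their bytes.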
You don't describe how you multiplex the traffic of multiple client streams onto a single outgoing stream. Just arbitrarily putting chunks of client traffic into the stream is guaranteed not to work. The receiving end on the opposite end of the intertube will have no idea which bytes belong to which conversation.
I'd recommend you focus on the opposite end first. What machine is out there, what does it do, and what does it need to know about the multiple clients at the local end?
Related
I'm working on a game that depends on the standard System.Net.Sockets library for networking. What's the most efficient and standardized "system" I should use? Should the client send data requests every set amount of seconds, or when a certain event happens? My other question, is a port forward required for a client to listen and receive data? How is this done? Is there another socket created specifically for listening only on the client? How can I send messages and listen on the same socket on the client? I'm having a difficult time grasping the concept of networking; I started messing with it two days ago.
Should the client send data requests every set amount of seconds, or when a certain event happens?
No. Send your message as soon as you can. The socket stack has algorithms that determine when data is actually sent, for instance the Nagle algorithm.
However, if you send a LOT of messages it can be beneficial to enqueue several of them into the same socket method call. That said, you need to send several thousand messages per client per second for that to give you any benefit.
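As a side note, if latency matters more to you than throughput, Nagle's algorithm can be switched off per connection; a tiny sketch (the host name and port are made up):

    using System.Net.Sockets;

    var client = new TcpClient("game.example.com", 9000);   // hypothetical server address and port
    client.NoDelay = true;   // disable Nagle so small messages go out without waiting to be coalesced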
My other question, is a port forward required for a client to listen and receive data?
No. Once a socket connection has been established it's bidirectional, i.e. both end points can send and receive information without screwing anything up for the other end point.
But to achieve that you typically have to use asynchronous operations so that you can keep receiving all the time.
How is this done? Is there another socket created specifically for listening only on the client?
The server has a dedicated socket (a listener) whose only purpose is to accept client sockets. When the listener has accepted a new connection from a remote end point, you get a new socket object which represents the connection to the newly connected end point.
How can I send messages and listen on the same socket on the client?
The easiest way is to use asynchronous receives and blocking sends.
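A minimal sketch of that pattern, assuming an already connected Socket (message parsing is left as a stub, and the fixed 8 KB buffer is an arbitrary choice):

    using System;
    using System.Net.Sockets;
    using System.Text;

    class Connection
    {
        private readonly Socket _socket;                  // assumed already connected
        private readonly byte[] _buffer = new byte[8192]; // arbitrary receive buffer size

        public Connection(Socket socket)
        {
            _socket = socket;
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
        }

        private void OnReceive(IAsyncResult ar)
        {
            int bytesRead = _socket.EndReceive(ar);
            if (bytesRead == 0) return;                   // remote side closed the connection

            // ... handle _buffer[0..bytesRead) here ...

            // keep receiving all the time
            _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
        }

        // Blocking send, callable from your game loop or any other thread.
        public void Send(string message) => _socket.Send(Encoding.UTF8.GetBytes(message));
    }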
If you do not want to take care of everything by yourself, you can try my Apache licensed library http://sharpmessaging.net.
Creating a stable, high quality server will require you to have a wealth of knowledge on networking and managing your objects.
I highly recommend you start with something smaller before attempting to create your own server from scratch, or at the very least play around with a server for a different game that's already made, attempt to improve upon it or add new features.
That being said, there are a few ways you can set up the server. If you plan on having more than a couple of clients, you generally don't want them all sending data whenever they feel like it, as this can bog down the server. You want to structure it in such a way that the client sends as little data as possible on a scheduled basis, and the server can request more when it's ready. How that's set up and structured is up to you.
A server generally has to have a port forwarded on the router in order for requests to make it to the server from the internet, and here is why. When your computer makes a connection to a website (Stack Overflow, for example) it sends out a request on a random port; the router remembers the port you sent out on and remembers who sent it (you), so when the web server sends the information you requested back, the router knows you wanted that data and sends it back to you. In the case of RUNNING a server there is no outbound request to a client (Jack, for example), so the router doesn't know where Jack's request is supposed to go. By adding a port forwarding rule in the router you're saying that all information passed to port 25565 (for example) is supposed to go to your server.
Clients generally do not need to forward ports because they are only making outbound requests and receiving data.
Server starts and begins listening on port 25565
Client starts and initiates a connection to the server on port 25565
Server responds to the client on whatever port the client used to connect (this is done behind the scenes in sockets)
Communication continues from here.
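A stripped-down sketch of that flow with TcpListener/TcpClient (port 25565 simply mirrors the example above; the client-side address is hypothetical):

    using System.Net;
    using System.Net.Sockets;

    // --- server process ---
    var listener = new TcpListener(IPAddress.Any, 25565);
    listener.Start();                                    // server starts listening on port 25565
    TcpClient accepted = listener.AcceptTcpClient();     // blocks until a client initiates a connection
    NetworkStream serverStream = accepted.GetStream();   // replies go back on whatever port the client used

    // --- client process (a separate program) ---
    // var client = new TcpClient("server.example.com", 25565);   // hypothetical server address
    // NetworkStream clientStream = client.GetStream();           // no port forward needed on the client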
I am writing a program which will first authenticate to a server and then send and receive messages through UDP. While sending and receiving messages, I also need to maintain the connection to the server, so I need to periodically send a KeepAlive to the server and receive a response from it.
I understand it is possible to send and receive packets on the same socket at the same time, but I am concerned about the scenario where I take a while to build a packet for sending, and during that time the server sends me two messages.
When I call receive(), does the second message overwrite the first message?
Is it best practice to set up two UDP sockets for what I am trying to accomplish, one socket for sending and one for receiving?
Thank you,
Lex
It is unfortunate that the server, which you say you have no control over, requires UDP and has an authentication scheme. This is a serious security problem, because it is relatively easy for an attacker to pretend to be the same UDP end point as your client which has been authenticated.
As far as your specific questions go:
In a perfect world, if the server sends you two datagrams (messages) before you have tried to receive any, you will still receive both datagrams, in the order in which they were sent, because the OS has buffered them and presents them both to you in order.
The problem is that it's not a perfect world, and especially when you are dealing with UDP. UDP has three significant limitations:
Delivery of any given datagram is not guaranteed. Any datagram may be lost at any time.
Order of delivery is not guaranteed. Datagrams are not necessarily received in the same order in which they were sent.
Uniqueness of delivery is not guaranteed. Any given datagram may be delivered more than once.
So it is not just that the first datagram might be lost if the server sends a second one before you have received the first; the first one may be lost in any case. You have no guarantee that any datagram will be delivered to you.
Now, there is also the issue of what you do with the datagram once you have received it. Unfortunately, it's not clear from your question whether this is part of what you're asking about. But naturally, once the call to ReceiveFrom() completes (or Receive() if you have used Connect() on the UDP Socket) and you have the datagram in hand, it's entirely up to your code how this is handled and whether previously-received data is overwritten or not.
It is not best practice to create two separate UDP Socket instances, one to handle receiving and one to handle sending. Not only is it unnecessary, you would have to either have some mechanism for the server to understand that two different remote endpoints actually represent the same client (since they would normally have different port numbers, even if you guaranteed they were on the same IP address), or you would have to abuse the Socket by using the "reuse address" option (which is a way to allow the same port to be used by two different Sockets, but which leads to all kinds of other issues).
It is perfectly fine to use the same Socket instance for both receiving and sending, and in fact this is how you're expected to use it. The best practice is to do it that way.
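A minimal sketch of a single UdpClient doing both jobs (the server address, port, and keep-alive payload are made up for illustration):

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    var udp = new UdpClient(0);   // bind to any free local port
    var server = new IPEndPoint(IPAddress.Parse("203.0.113.10"), 5000);   // hypothetical server

    // Send the keep-alive from the same socket you receive on.
    byte[] keepAlive = Encoding.ASCII.GetBytes("KEEPALIVE");
    udp.Send(keepAlive, keepAlive.Length, server);

    // Each Receive call returns exactly one datagram; datagrams queued by the OS are not
    // overwritten, they are handed to you one at a time (when they arrive at all).
    IPEndPoint remote = null;
    byte[] datagram = udp.Receive(ref remote);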
I am writing a socket client for an IO device, which acts as the TCP server. I am able to connect and listen for the data which the server periodically sends. But from time to time I need to send an instruction to the server and read its response.
What is the proper way to handle this task: to be able to listen for incoming data and, after an instruction is sent, receive the response and know which part of the received data the response is?
What if the device is sending data while I need to send data to the device? How can I dispatch such traffic? Do I need one thread for reading and one for writing? Is it possible to handle this using a single thread per device (there will be up to hundreds of devices connected)?
I am using Socket.Receive and Socket.Send methods.
What is the proper way to handle this task: to be able to listen for incoming data and, after an instruction is sent, receive the response and know which part of the received data the response is?
The protocol must support this. When you send an instruction it is unclear when its response will arrive relative to the periodic data, so the protocol must allow you to differentiate between normal data and a response. You could prepend a header to each message that has a boolean field indicating which of the two it is.
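As a sketch of one possible header layout (the type byte, the length field, and the Framing class name are choices made here for illustration, not something your device already supports):

    using System.IO;

    static class Framing
    {
        // Layout: [1 byte type: 0 = unsolicited device data, 1 = response to an instruction]
        //         [4 byte payload length] [payload]
        public static byte[] Frame(byte type, byte[] payload)
        {
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                writer.Write(type);
                writer.Write(payload.Length);   // written as a little-endian Int32
                writer.Write(payload);
                return ms.ToArray();
            }
        }
    }

    // Receiver: read the 5 header bytes first, then exactly 'length' payload bytes,
    // and dispatch on 'type' to tell the periodic data apart from a response.

The length field also solves the separate problem of finding message boundaries in the TCP stream.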
What if the device is sending data while I need to send data to the device?
You can send and receive on the same socket concurrently. This is not a problem. You would usually have one thread for reading. You can write on demand; no need for a thread dedicated to that.
there will be up to hundreds of devices connected
This means you need to have hundreds of reads outstanding. This is doable with one thread per socket. On the other hand, this starts to be a good use case for async IO. Find out how to use async/await with sockets. If async/await is not available to you because you are on VS2010, use one of the other ways of achieving async IO with sockets.
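A rough async/await sketch of one read loop per device, using NetworkStream.ReadAsync (the buffer size and the ReadLoopAsync name are arbitrary); with hundreds of devices these loops share the thread pool instead of pinning a thread each:

    using System.Net.Sockets;
    using System.Threading.Tasks;

    static async Task ReadLoopAsync(TcpClient device)
    {
        NetworkStream stream = device.GetStream();
        byte[] buffer = new byte[4096];
        while (true)
        {
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            if (read == 0) break;   // device closed the connection
            // ... parse buffer[0..read) with your framing/header scheme ...
        }
    }

    // Writes can still happen on demand from any other code path:
    // await device.GetStream().WriteAsync(instruction, 0, instruction.Length);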
I need to create a server process which can push high frequency data (1000 updates per second) to around 50 clients. I'm thinking the best way to do this is using async sockets with the SocketAsyncEventArgs type.
The client -> server connections will be long running, from at least several days to indefinite. I plan to have a server process listening; the clients connect and the server starts pushing the data to them.
Can someone point me to or show me an example of how to do this? I can't find any example showing a server process pushing an object to a client.
EDIT: This is over a gigabit LAN, using Windows Server with 16 cores and 24 GB of RAM.
thanks
First, some more requirements from your side are needed. You have a server with lots of muscle, but it will fail miserably if you don't do what has to be done.
1. Can the client live without some of the data? I mean, does the stream of data need to reach the other side in proper order, without any drops?
2. How big is 'the data'? A few bytes, or more?
3. Fact: the scheduling interval on Windows is 10 msec.
4. Fact: no matter WHEN you send, clients will receive it depending on lots of stuff: network config, number of routers in between, client processor load, and so on. So you need some kind of timestamping here.
Depending on all this, you could design a priority queue with one thread servicing it and sending out UDP datagrams for each client. Also, since (4) is in effect, you can 'clump' some of your data together and have 10 updates per second of 100 data items each (see the sketch below).
If you want to achieve something more than that, then a LAN with lots of quality network equipment will be required.
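A very rough sketch of the clump-and-send idea above, with a plain queue standing in for the priority queue and a 100 ms tick giving roughly 10 batches per second (the client list, queue contents, and size cap are all placeholders):

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    var pending = new ConcurrentQueue<byte[]>();   // updates are produced elsewhere and enqueued here
    var clients = new List<IPEndPoint>();          // the ~50 subscriber endpoints
    var udp = new UdpClient();

    void SenderLoop()
    {
        while (true)
        {
            // Drain queued updates into one datagram (a real build would respect the MTU
            // and add the timestamp mentioned above).
            var batch = new List<byte>();
            byte[] update;
            while (batch.Count < 60000 && pending.TryDequeue(out update))
                batch.AddRange(update);

            if (batch.Count > 0)
            {
                byte[] datagram = batch.ToArray();
                foreach (IPEndPoint client in clients)
                    udp.Send(datagram, datagram.Length, client);
            }
            Thread.Sleep(100);   // roughly 10 batches per second
        }
    }

    // new Thread(SenderLoop) { IsBackground = true }.Start();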
If you want to use .NET Sockets to create this server-client project, then this is a good outline of what's needed:
Since the server will be transferring data to several clients simultaneously, you'll need to use the asynchronous Socket.BeginXXX methods or the SocketAsyncEventArgs class.
You'll have clients connect to your server. The server will accept those connections and then add the newly connected client to an internal clients list.
You'll have a thread running within the server that periodically sends notifications to all sockets in the clients list. If any exceptions/errors occur while sending data to a socket, then that client is removed from the list.
You'll have to make sure that access to the clients list is synchronized since the server is a multithreaded application.
You don't need to worry about buffering your send data since the TCP stack takes care of that. If you do not want to buffer your data at all (i.e. have the socket send data immediately), then set Socket.NoDelay to true.
It doesn't seem like you need any data from your clients, but if you do, you'd have to make sure your server has a Socket.BeginReceive loop if using the Socket.BeginXXX pattern, or a Socket.ReceiveAsync call if using SocketAsyncEventArgs.
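A bare-bones sketch of the accept loop, the synchronized client list, and the push loop described above (blocking Send is used here for brevity where a real build would use BeginSend or SocketAsyncEventArgs, and BuildNextUpdate is a placeholder for your serialization step):

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class PushServer
    {
        private readonly List<Socket> _clients = new List<Socket>();
        private readonly object _sync = new object();   // guards _clients
        private readonly Socket _listener =
            new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        public void Start(int port)
        {
            _listener.Bind(new IPEndPoint(IPAddress.Any, port));
            _listener.Listen(100);
            _listener.BeginAccept(OnAccept, null);
            new Thread(PushLoop) { IsBackground = true }.Start();
        }

        private void OnAccept(IAsyncResult ar)
        {
            Socket client = _listener.EndAccept(ar);
            lock (_sync) _clients.Add(client);           // newly connected client joins the list
            _listener.BeginAccept(OnAccept, null);       // keep accepting further clients
        }

        private void PushLoop()
        {
            while (true)                                 // pacing to ~1000 updates/sec is left out
            {
                byte[] update = BuildNextUpdate();
                List<Socket> snapshot;
                lock (_sync) snapshot = new List<Socket>(_clients);
                foreach (Socket client in snapshot)
                {
                    try { client.Send(update); }
                    catch (SocketException)
                    {
                        lock (_sync) _clients.Remove(client);   // drop clients whose sends fail
                        client.Close();
                    }
                }
            }
        }

        private byte[] BuildNextUpdate() { /* serialize your object here */ return new byte[0]; }
    }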
Once you have the connection and transmission of data between server and client going, you then need to worry about serialization and deserialization of objects between client and server.
Serialization which occurs on the server is easy, since you can use the BinaryFormatter or other encoders to encode your object and dump the data onto the socket.
Deserialization on the other hand, which occurs on the client, can be pretty complex because an object can span multiple packets and you can have multiple objects in one packet. You essentially need a way to identify the beginning and end of an object within the stream, so that you can pluck out the object data and deserialize it.
One way to do this is to embed your data in a well known protocol, like HTTP, and send it using that format. Unfortunately, this also means you'd have to write an HTTP parser at the client. Not an easy task.
Another way is to leverage an existing encoding scheme like Google's protocol buffers. This approach would require learning how to use the protocol buffers stack.
You can also embed the data in an XML structure and then have a stream-to-XML decoder on your client side. This is probably the easiest approach but the least efficient.
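One more option, not listed above but probably the lightest, is a simple length prefix in front of each serialized object; the client reads the prefix, then reads exactly that many bytes, and only then hands the buffer to the deserializer (the helper names here are made up):

    using System;
    using System.IO;
    using System.Net.Sockets;

    // Server side: prefix each serialized object with its length.
    static void SendFramed(Socket socket, byte[] serializedObject)
    {
        socket.Send(BitConverter.GetBytes(serializedObject.Length));   // 4-byte length prefix
        socket.Send(serializedObject);
    }

    // Client side: read exactly 4 bytes, then exactly 'length' bytes, no matter how the
    // TCP stack split or merged packets, then hand the payload to your deserializer.
    static byte[] ReadFramed(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new IOException("connection closed mid-object");
            offset += read;
        }
        return buffer;
    }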
As you can see, this is not an easy project, but you can get started with the Socket.BeginSend examples here and here, and the SocketAsyncEventArgs example here.
Other Tips:
You can improve the reliability of your communication by having the client maintain two connections to the server for redundancy purposes. The reason being that TCP connections take a while to establish, so if one fails, you can still receive data from the other one while the client attempts to reconnect the failed connection.
You can look into using the TcpClient class for the client implementation, since it's mostly reading a stream from a network connection.
What about Rendezvous or 29West? It would save reinventing the wheel. Dunno about AMQP or ZeroMQ; they might work fine too....
What makes more sense?
use one socket to send and receive data to/from an embedded hardware device
use one socket to send data and separate socket to read data
Communication is not very intensive, but the important point is to receive data as fast as possible. The application works under Windows XP and up.
Sockets were designed for two way communication, so most likely the developers of the embedded device didn't design their system to work off two sockets.
I have some experience working with embedded hardware and I've seen devices work in various ways:
Device connects to your application and starts streaming data via UDP
In this scenario I've seen up to three sockets in play. One TCP listening socket accepts a connection from the embedded device. The embedded device then sends through some connection parameters, such as how quickly it's going to send you the data. The embedded device then starts streaming data via UDP. Once you've received the data you send a message down a second UDP socket to say "I got that one". The device then starts streaming the next bit of data (again via UDP). This then continues ad infinitum. I've seen variations where the initial TCP connection is skipped and the device just constantly streams data.
Request/Response
How many sockets you'll need here depends on who's making the initial connection, as that'll determine who needs the listening socket. Since you're making the initial connection, I'll use that. This is the more connection-oriented scenario. Here you make a connection to the device and request some data, and the device then sends you the response to that request. In this scenario you can only use one socket, as the device will respond to each request on the socket on which it was received.
So to answer your question "What makes more sense?", it completely depends on the design of your embedded device. If it's responding on the same socket you're requesting on, the answer is simple, as only one socket is possible. Streaming devices via UDP should give better performance with two sockets, but again only if your device supports it.
As for the second part of your question, "to receive data as fast as possible", that's easy: go asynchronous. Here are some excellent blogs on asynchronous socket programming:
.NET Sockets - Two Way - Single Client (C# Source Code - Included)
.NET Sockets in Two Directions with Multiple Client Support (C# Source Code Included)
If you're using a custom/third party protocol to communicate with the device you can't go wrong having a read through these either:
How to Transfer Fixed Sized Data With Async Sockets
Part 2: How to Transfer Variable Length Messages With Async Sockets
I'm no expert, but is there any downside to just using one socket?
It can already send and receive, and my guess is that you end up with more overhead if you have one socket for reading and one for sending...