I'm messing around (for the first time) with socket programming in C#. I'm making a Skype-like application (video call, IM, file share, screen share) and I have a couple of questions...
1) How should sockets truly work? On the client side I have a while loop which is effectively keeping the socket open. Is this correct? Or should I be closing the socket after each Send/Receive (I'm using BeginSend() and BeginReceive()) and creating a new socket? interact() is called after the connection has been made. My code:
private static void interact()
{
    try
    {
        // Keep alternating receive and send requests for as long as
        // the connection lives.
        while (true)
        {
            receive(client);
            send(client);
        }
    }
    catch (Exception e)
    {
        // Any socket error lands here; log it and drop the client.
        Logging.errorDisconnect(client, e);
    }
}
2) How would you design a robust client/server application: by using BeginSend/BeginReceive from System.Net.Sockets, or by creating your own Task-based implementation?
3) Are there any good tutorials on client/server architecture for robust/scalable applications? I've had a look at P2P but I'm not entirely sure it's what I need.
Also, I'm trying to avoid third-party implementations so I can learn how it's done myself.
Thanks in advance.
I assume you want to keep a persistent connection and occasionally send commands to the remote side. This usually means that both sides must have a read loop running at all times to receive commands. Alternating between reading and writing makes sense for a request/reply model, which you do not have here (both sides can send requests, so you might mistake an incoming request for an expected reply).
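For illustration, a minimal sketch of such an always-running read loop, assuming a connected Socket (using System.Net.Sockets) and a hypothetical handleMessage method; run it on its own thread so the side is always ready to receive, and issue sends independently wherever the program needs them:

private static void readLoop(Socket client)
{
    var buffer = new byte[4096];
    while (true)
    {
        int read = client.Receive(buffer);          // blocks until data arrives
        if (read == 0) { client.Close(); return; }  // remote side closed
        handleMessage(buffer, read);                // hypothetical message handler
    }
}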
The loop is not what keeps the socket alive. If you tell me why you thought that, I can clarify. You decide the lifetime of the connection independently of any kind of "loop".
The call style you choose (sync, APM, TAP) does not affect how the socket behaves or what goes over the wire, so you can choose freely. That said, sockets are very hard to get right; start with synchronous IO. Async IO is considerably harder and likely unnecessary here.
In general you should try hard to avoid using raw sockets because they are difficult and low-level. Try a higher-level RPC mechanism such as WCF or HTTP instead. If you insist on a custom wire format, protobuf is a good choice.
Now, I'm interested to know: if I have a program of mine connected to a server through TCP, and the server sends a message to my program while I am sending UDP packets at the same time, will the TCP packet get to me? Everything is in one class!
Thanks for your help!
Your question is actually on the border of several issues that all network application programmers must know and consider.
First of all: all data received from the network is stored in the operating system's internal buffer, where it waits to be read. The buffer is not infinite, so if you wait long enough, some data may be dropped. Usually the chunks of data written there are single packets, but not always. You can never make any assumptions about how much data will be available for reading in TCP/IP communication. In UDP, on the other hand, you must always read a whole packet in one call, otherwise the rest of the datagram is lost. You can use recvfrom to read UDP packets, and I suggest using it.
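In C#, the counterpart of recvfrom is Socket.ReceiveFrom; a minimal sketch (the port and buffer size are arbitrary examples):

using System;
using System.Net;
using System.Net.Sockets;

var udp = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udp.Bind(new IPEndPoint(IPAddress.Any, 9000));

var buffer = new byte[65507];                    // max UDP payload; a smaller buffer truncates the datagram
EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
int read = udp.ReceiveFrom(buffer, ref sender);  // one call reads exactly one whole datagram
Console.WriteLine(read + " bytes from " + sender);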
Secondly: choosing between a blocking and a non-blocking approach is one of the most important decisions for your network app. There is a lot of information about it on the Internet: C - Unix Sockets - Non-blocking read, what is the definition of blocking read vs non-blocking read?, or a non-blocking tutorial.
As for threads: threads are never required to write an application that handles multiple connections. Sometimes they will simplify your code, sometimes they will make it run faster. There are some well-known programming patterns for using threads, like handling each connection in a separate thread. More often than not, especially for an inexperienced programmer, using threads will only be a source of strange errors and headaches.
I hope that my post answers your question and addresses the discussion I've been having below another answer.
Depends on what you mean by "at the same time". Usually the answer is "yes", you can have multiple TCP/IP connections and multiple UDP sockets all transmitting and receiving at the same time.
Unless you're really worried about latency - where a few microseconds can cause you trouble. If that's the case, one connection may interfere with the other.
Short answer - Yes.
You can have many connections at once, all sending and receiving; assuming that you've got multiple sockets.
You don't mention the number of clients that your server will have to deal with, or what you are doing with the data being sent/received.
Depending on your implementation, multiple threads may also be required (as Dariusz Wawer points out, this is not essential, but I mention them because a scalable solution that handles larger numbers of clients will likely use threads).
Check out this post on TCP and UDP Ports for some further info:
TCP and UDP Ports Explained
A good sample C# tutorial can be found here: C# Tutorial - Simple Threaded TCP Server
This is maybe more of a thing for discussion than a question.
Background:
I have two pairs of server/client, one pair written in Java and one written in C#. Both pairs do the same thing. There are no problems with the Java/Java and C#/C# combinations; I would like to make the Java/C# and C#/Java combinations work as well. There is no problem with I/O; I am using a byte array representing an XML-formatted string. I am bound to use TCP.
I need to care about graceful disconnect; in Java there is no such thing. When you close the client's socket, the server-side socket remains in a passive close state, so the server thread handling that socket stays alive, and with many clients I could end up with thousands of unnecessary threads. In C# it is enough to call TcpClient.Available to determine whether there is data in the buffer or whether the client has been closed (SocketException). In Java I can think of two approaches:
1) you have to write something to the underlying stream of the socket to really test whether the other side is still open or not.
2) you have to make the other side aware that you are closing one side of the connection before you close it. I have implemented this one: before closing the client socket I send a packet containing a 0x04 byte (end of transmission) to the server, and the server reacts to this byte by closing the server side of the socket.
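In C#, approach 2) looks roughly like this (a sketch only; the method names are illustrative, assuming using System.Net.Sockets):

// Client side: announce the close with a 0x04 (end of transmission)
// byte, then close the socket.
void CloseGracefully(Socket client)
{
    client.Send(new byte[] { 0x04 });
    client.Shutdown(SocketShutdown.Both);
    client.Close();
}

// Server side: treat 0x04 as "peer is done" so the handling thread
// can close its socket and terminate instead of lingering.
void HandleByte(Socket serverSide, byte b)
{
    if (b == 0x04)
    {
        serverSide.Close();
    }
}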
Unfortunately, both approaches cause me a dilemma when it comes to the C#/Java and Java/C# client/server pairs. If I want these pairs to communicate with each other, I will have to add code for sending the 0x04 byte to the C# client, and of course code handling it to the C# server, which is a kind of overhead. I could live with the unnecessary network traffic; the main problem is that this C# code is part of a core communication library which I do not want to touch unless it is absolutely necessary.
Question:
Is there another approach in Java to disconnect gracefully which does not require writing to the socket's underlying stream and checking the result, so that I do not have to handle this in the C# code as well? I have a free hand when it comes to libraries; I do not have to use only Java SE/EE.
I have given (what I think is) a concise explanation of this "graceful disconnection" topic in this answer; I hope you find it useful.
Remember that both sides have to use the same graceful-disconnection pattern. I have found myself trying to gracefully disconnect a client socket while the server kept sending data (it did not shut down its own side), which resulted in an infinite loop...
The answer was apparent: there is no such thing as a graceful disconnect in C# either; it comes down to the TCP protocol contract. So there is the same problem as in Java, with one slight difference: under certain circumstances you can work around it. If you send 0 bytes through the NetworkStream and then check the TcpClient.Connected state, the result is more or less correct. However, from what I have found on the Internet, this is neither a reliable nor a preferred way to test a connection in C#. Of course, it has been used in my C# library. In Java you cannot use this at all.
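For reference, the zero-byte probe described above looks roughly like this (a sketch only; as noted, it is neither reliable nor preferred, and ProbeConnection is just an illustrative name, assuming using System.IO and System.Net.Sockets):

// Send 0 bytes, then check Connected, which reflects the state as of
// the most recent I/O operation.
bool ProbeConnection(TcpClient client)
{
    try
    {
        client.GetStream().Write(new byte[0], 0, 0);  // zero-byte send
        return client.Connected;
    }
    catch (IOException)
    {
        return false;  // the peer is gone
    }
}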
So my solution of sending an "end of transmission" packet to the server when the client disconnects is solid, and I have introduced it in the C# library as well. Now I just tremble in fear waiting for the first bug to emerge...
One last point: I also have to implement some mechanism that collects the handling threads on the server for crashed clients. Keep that in mind if you are doing something similar; no client is guaranteed to end as expected ;)
I'm very confused about synchronous and asynchronous sockets in C#. I want to develop a game played over a LAN, but I'm not sure which one is better for my application, a Hangman game.
The game can be played in one-player or two-player mode.
In one-player mode, a single player interacts with the server.
In two-player mode, two players interact with the server turn by turn: if player A guesses a wrong word, he loses his turn and player B takes it.
Can you give me a suggestion about synchronous vs. asynchronous?
Besides that, how can the client find the server if the user doesn't have to enter the server IP? In other words, what should I choose between TCP and UDP?
And the last question: can I create a server that is asynchronous while the clients are synchronous? Is that OK?
Thank you very much!
The important part about choosing asynchronous vs. synchronous is how you make the communications interact with your GUI thread. Don't let a synchronous socket block your UI. The article here gives an idea of what to expect and some guidance on using asynchronous sockets in Windows programming:
Winsock tips
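As an illustration of keeping the UI responsive, a minimal sketch of an asynchronous receive loop with async/await (assumes a connected TcpClient; OnMessage is a hypothetical handler that would marshal results to the UI thread as needed):

using System.Net.Sockets;
using System.Threading.Tasks;

async Task ReceiveLoopAsync(TcpClient client)
{
    var stream = client.GetStream();
    var buffer = new byte[4096];
    while (true)
    {
        // Awaiting here yields control, so the UI thread is never blocked.
        int read = await stream.ReadAsync(buffer, 0, buffer.Length);
        if (read == 0) break;       // the server closed the connection
        OnMessage(buffer, read);    // hypothetical: update the game state / UI
    }
}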
On your second question about TCP/UDP: there are a lot of differences between the two that you should be aware of. First and foremost, TCP guarantees packet delivery while the connection is valid. Given your situation, the simple requirements, and the lack of performance needs, TCP is probably your best choice. If you were designing a high-performance game where you have to allow for dropped packets and handle latency better, UDP would become a better option, but then you have to consider what happens when packets are dropped and deal with things like out-of-order packets. TCP hides all of that complexity from you and will make working with it simpler.
Mixing synchronous and asynchronous client/server should not cause problems. They only know about the communication link itself (TCP/UDP).
So, regarding your questions:
Can you give me a suggestion about synchronous and asynchronous? In your case, given the complexity of the application, you can use either sync or async sockets. As twilson stated, sync sockets block your main thread while async ones don't, so if you run into performance issues, go for asynchronous sockets.
Besides that, how can the client find the server, and what should I choose between TCP and UDP? Well, there's a fair difference between TCP and UDP connections. You usually use UDP (connectionless) when you have performance concerns, as in VoIP apps, real-time games, video chat and so on, while in other cases you use TCP. So in your case TCP should suit you well.
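As for finding the server without typing an IP, one common LAN pattern (not covered in the answer above; just an illustrative sketch with an arbitrary port and probe string) is a UDP broadcast probe: the client broadcasts a short message, the server answers, and the reply's source address tells the client where to connect over TCP:

using System.Net;
using System.Net.Sockets;
using System.Text;

// Client process: broadcast a probe and wait for a server to answer.
var udp = new UdpClient { EnableBroadcast = true };
byte[] probe = Encoding.ASCII.GetBytes("HANGMAN?");
udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, 15000));

var serverEp = new IPEndPoint(IPAddress.Any, 0);
udp.Receive(ref serverEp);  // blocks until a reply arrives
// serverEp.Address now holds the server's IP; connect to it over TCP.

// Server process: answer every probe so clients learn our address.
var discovery = new UdpClient(15000);
while (true)
{
    var clientEp = new IPEndPoint(IPAddress.Any, 0);
    byte[] msg = discovery.Receive(ref clientEp);
    if (Encoding.ASCII.GetString(msg) == "HANGMAN?")
        discovery.Send(new byte[] { 1 }, 1, clientEp);  // any reply will do
}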
And the last question: can I create a server that is asynchronous while the clients are synchronous? Is that OK?
Yes, you could use that kind of implementation, even though it's good practice to have the same type of socket on both clients and server.
I am developing an open-source socket server library: https://sourceforge.net/projects/socketservers/
I would like to add a socket-reuse feature to this lib. I have implemented a draft of this feature, but I do not see any benefit in my tests. The client makes 32K connects/disconnects, 8 at a time, to the server and measures the time, but there is no difference between reusing the socket and not reusing it: the same time elapses for this test.
What am I doing wrong in the test?
What benefit should a server get from reusing sockets, and how can this benefit be measured?
I can explain what happens from an unmanaged point of view and how DisconnectEx() is used; perhaps someone can then map this to the managed scenario.
In unmanaged code you would use DisconnectEx() to reuse a socket for a subsequent AcceptEx() or ConnectEx() call, more likely the former. So you'd initially create x sockets and post your overlapped async accept operations using AcceptEx(). When clients connect to these pending accepts, you do your server stuff and then, at the end, call DisconnectEx() on the socket and post a new AcceptEx() using that socket. This avoids the need to create a new socket at that point and is thus more efficient for the server. The performance difference is probably pretty small, but worth having on heavily loaded servers that accept lots of short-lived connections.
So I suggest you post some code showing how you're reusing your socket after calling Disconnect(true) on it...
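For what it's worth, a possible managed mapping of that pattern might look like this (an untested sketch, not your library's actual code; HandleClient is a hypothetical handler, assuming using System.Net.Sockets):

// Recycle a finished connection's socket into a new pending accept.
void RecycleSocket(Socket listener, Socket finished)
{
    finished.Disconnect(true);        // managed counterpart of DisconnectEx()

    var args = new SocketAsyncEventArgs();
    args.AcceptSocket = finished;     // reuse this socket instead of allocating a new one
    args.Completed += (s, e) => HandleClient(e.AcceptSocket);
    if (!listener.AcceptAsync(args))  // false means it completed synchronously
        HandleClient(args.AcceptSocket);
}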
The question is whether the OS or the runtime performs the reuse automatically when a new socket is requested. The Socket.Disconnect method documentation points in this direction:
Closes the socket connection and allows reuse of the socket.
So this seems to be an over-optimization.
In case you mean something like SO_REUSEADDR or SO_REUSEPORT:
Socket reuse is especially important if, for example, your server crashes while there are still connections lingering.
If you restart your server, you'd normally have to wait until the operating system has gracefully closed those connections before you can rebind your socket to that port.
This could mean that processes which heavily rely on your server come to a halt until it has been restarted.
With the socket-reuse feature, you circumvent this problem.
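In C# that option is set before Bind, roughly like this (a sketch with an arbitrary port):

using System.Net;
using System.Net.Sockets;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// SO_REUSEADDR: allow rebinding even while old connections linger in TIME_WAIT.
listener.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
listener.Listen(100);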
There might be other uses for this, but I can only think of this one right now.
Hope that helped.
I suppose similar questions have already been asked, but I was unable to find any. Please feel free to point me to an existing solution.
I'll explain my scenario. I'd like to create a server application. There are many clients (currently only a few dozen, but it should scale up to 1000+) that connect to the server (which runs on a single machine).
Each client periodically sends a small amount of data to the server to process (processing is quick). The server can also send small amounts of data to each client on a regular basis. The response time should be low (<100 ms), but realtime or anything like that is not required.
My first idea was from back when I was still programming in VB6: create a server socket to listen for incoming requests, then create a client socket for each possible client (single-threaded). I doubt this scales well, and it is also difficult to implement the communication.
So I figured I'd create a listener thread to accept new client connections and a different thread to actually read the incoming data from the clients. Since there are going to be many clients, I don't want to create a thread per client. Instead, I'd prefer a single thread that reads all incoming data in a loop and then either processes the data directly or creates work items for a different thread to process. I guess this approach would scale well enough; any comments on it are most welcome.
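Roughly, what I have in mind is something like this (a sketch using Socket.Select; the names and buffer size are arbitrary, assuming using System, System.Collections.Generic, System.Collections.Concurrent, System.Net.Sockets and System.Threading):

var clients = new List<Socket>();             // filled by the accept thread
var work = new BlockingCollection<byte[]>();  // drained by a processing thread

void ReadLoop()
{
    var buffer = new byte[4096];
    while (true)
    {
        List<Socket> readable;
        lock (clients) readable = new List<Socket>(clients);
        if (readable.Count == 0) { Thread.Sleep(10); continue; }

        // Select trims the list down to the sockets that are ready to read.
        Socket.Select(readable, null, null, 100000);  // timeout in microseconds
        foreach (var s in readable)
        {
            int read = s.Receive(buffer);
            if (read == 0)                            // client disconnected
            {
                lock (clients) clients.Remove(s);
                s.Close();
                continue;
            }
            var msg = new byte[read];
            Array.Copy(buffer, msg, read);
            work.Add(msg);                            // hand off to the processing thread
        }
    }
}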
The remaining problem I'm worried about is ease of communication. The above solution seems to require a manual protocol, possibly sending ASCII commands via TCP. While this would work, I think there should be a better way nowadays.
Some interface/proxy-ish approach seems reasonable. I worked a bit with Java RMI before, and from my understanding, .NET Remoting serves a similar purpose. Is Remoting a feasible solution to the scenario I described (many clients)? Is there an even better way I don't know of yet?
Edit:
This is not in LAN, but internet, if that matters.
If possible, it should also run under Linux.
As AresnMkrt pointed out, you should try WCF.
Just take it as-is (with netTcpBinding, but don't forget to switch security off) and create a tracer bullet: measure whether performance meets your requirements.
If not, you can try to tune WCF. It is very extensible, and you can modify message serialization to send ASCII messages if you want.
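A minimal self-hosted netTcpBinding setup with security switched off might look like this (a sketch; IDataService, DataService, and the address are placeholder names, not from the answer):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IDataService
{
    [OperationContract]
    string Process(string data);
}

public class DataService : IDataService
{
    public string Process(string data) { return "ok: " + data; }
}

class Program
{
    static void Main()
    {
        var binding = new NetTcpBinding(SecurityMode.None);  // "switch security off"
        var host = new ServiceHost(typeof(DataService),
                                   new Uri("net.tcp://localhost:8000/data"));
        host.AddServiceEndpoint(typeof(IDataService), binding, "");
        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}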
Are you sure you need a binary protocol? Rather, are you sure you need to invent a whole new protocol when a plain RESTful service with JSON/XML would suffice? WCF can help you a lot in this regard.