I am developing an open source socket server library: https://sourceforge.net/projects/socketservers/
I would like to add a socket reuse feature to this lib. I have implemented a draft of this feature, but I do not see any benefit in my tests. The client makes 32K connect-disconnect cycles against the server, 8 at a time, and measures the elapsed time. But there is no difference between reusing sockets and not reusing them: the test takes the same time either way.
What am I doing wrong in the test?
What benefit should the server get from reusing sockets, and how can I measure this benefit?
I can explain what happens from an unmanaged point of view and how DisconnectEx() is used; perhaps someone can then map this to the managed scenario.
In unmanaged code you would use DisconnectEx() to reuse a socket for a subsequent AcceptEx() or ConnectEx() call, more likely the former. So you'd initially create x sockets and post your overlapped async accept operations using AcceptEx(). When clients connect to these pending connections, you would do your server work and then, at the end, call DisconnectEx() on the socket and post a new AcceptEx() using that socket. This avoids the need to create a new socket at this point and is thus more efficient for the server. The performance difference is probably pretty small, but worth having on heavily loaded servers that are accepting lots of short-lived connections.
So I suggest you post some code showing how you're reusing your socket after calling Disconnect(true) on it...
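For comparison, here is a rough sketch of how that unmanaged pattern might map to managed code, using Socket.Disconnect(true) (the managed counterpart of DisconnectEx with TF_REUSE_SOCKET) and SocketAsyncEventArgs.AcceptSocket. The endpoint and the structure here are my assumptions, not your library's actual code:

    using System.Net;
    using System.Net.Sockets;

    // Hypothetical accept loop that recycles the accepted socket.
    var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    listener.Bind(new IPEndPoint(IPAddress.Any, 9000)); // example port
    listener.Listen(100);

    var args = new SocketAsyncEventArgs();
    args.Completed += (sender, e) =>
    {
        Socket client = e.AcceptSocket;
        // ... serve the client here ...
        client.Disconnect(reuseSocket: true); // keep the socket handle alive
        e.AcceptSocket = client;              // hand the same socket to the next accept
        listener.AcceptAsync(e);              // (synchronous completions omitted for brevity)
    };
    listener.AcceptAsync(args);

The saving is the avoided per-connection socket creation and teardown, which only shows up under heavy accept load, so a benchmark should measure server-side CPU and handle churn rather than client-side elapsed time.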
The question is whether the OS or the runtime already performs the reuse automatically when you create a new socket. The Socket.Disconnect method documentation points in this direction:
Closes the socket connection and allows reuse of the socket.
So this seems to be an over-optimization.
In case you mean something like SO_REUSEADDR or SO_REUSEPORT:
Socket reuse is essential if, for example, your server crashes while there are still connections lingering.
If you restart your server, you would normally have to wait until the operating system has gracefully closed those connections before you can rebind your socket to that port.
This could mean that processes which heavily rely on your server come to a halt until it has been restarted.
With the socket reuse feature, you circumvent this problem.
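In C#, that corresponds to SocketOptionName.ReuseAddress, which must be set before Bind; a minimal sketch (the port number is just an example):

    using System.Net;
    using System.Net.Sockets;

    var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    // Allow rebinding the port even if old connections are lingering in TIME_WAIT.
    listener.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    listener.Bind(new IPEndPoint(IPAddress.Any, 8080));
    listener.Listen(100);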
There might be other uses for this, but I can only think of this one right now.
Hope that helped.
I am currently using .NET Core and would like to be able to drop some TCP and/or UDP connections that are open on the host machine.
I have already built all the logic for checking the open connections and I am now just needing to find some resource that will help me drop the connection on the machine.
On Windows I can use a third-party program, passing some parameters during initialization. However, I need it to work on a Linux machine.
Windows Solution = http://www.nirsoft.net/utils/cports.html
Network connections are a very common example of unmanaged resources that need to be disposed of. A big reason we dispose of SQL connections is the underlying network connection.
When dealing with disposable classes my advice is: "Create. Use. Dispose. All in the same piece of code, ideally using a using statement."
Note that with networking you usually have to apply some form of multitasking. Async/await is the more modern approach, but a separate thread can also be used. That prevents the UI thread from being blocked, while still making sure the using block is not split up logically, and thus stays reliable.
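As a minimal sketch of that "Create. Use. Dispose." pattern with async/await (the host and port are placeholders):

    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    static async Task SendOnceAsync()
    {
        // Create. Use. Dispose. All in one method; await keeps the UI thread free.
        using (var client = new TcpClient())
        {
            await client.ConnectAsync("example.com", 80); // placeholder endpoint
            NetworkStream stream = client.GetStream();
            byte[] payload = Encoding.ASCII.GetBytes("ping");
            await stream.WriteAsync(payload, 0, payload.Length);
        } // Dispose runs here and releases the underlying network connection.
    }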
Now, I'm interested to know: if I have a program of mine connected to a server through TCP and the server sends a message to my program while I am sending UDP packets at the same time, will the TCP packet get to me? Everything is in one class!
Thanks for your help!
Your question is actually on the border of several issues that all network application programmers must know and consider.
First of all: all data received from the network is stored in the operating system's internal buffer, where it waits to be read. The buffer is not infinite, so if you wait long enough, some data may be dropped. Usually the chunks of data written there are single packets, but not always. You can never make any assumptions about how much data will be available for reading in TCP/IP communication. In UDP, on the other hand, you must always read a whole single packet, otherwise the rest of the data will be lost. You can use recvfrom to read UDP packets and I suggest using it.
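In C#, the counterpart of recvfrom is Socket.ReceiveFrom; a minimal sketch (the port is an arbitrary example):

    using System.Net;
    using System.Net.Sockets;

    var udp = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    udp.Bind(new IPEndPoint(IPAddress.Any, 9000)); // arbitrary example port

    var buffer = new byte[65507]; // maximum UDP payload; each read returns one datagram
    EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
    int received = udp.ReceiveFrom(buffer, ref sender);
    // 'received' bytes form one whole datagram; any part that did not fit is lost.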
Secondly: choosing between a blocking and a non-blocking approach is one of the most important decisions for your network app. There is a lot of information about it on the Internet: C - Unix Sockets - Non-blocking read, What is the definition of blocking read vs non-blocking read?, or a non-blocking tutorial.
As for threads: threads are never strictly required to write an application that handles multiple connections. Sometimes they will simplify your code, sometimes they will make it run faster. There are some well-known programming patterns for using threads, like handling each separate connection in a separate thread. More often than not, especially for an inexperienced programmer, using threads will only be a source of strange errors and headaches.
I hope that my post answers your question and addresses the discussion I've been having below another answer.
Depends on what you mean by "at the same time". Usually the answer is "yes", you can have multiple TCP/IP connections and multiple UDP sockets all transmitting and receiving at the same time.
Unless you're really worried about latency, where a few microseconds can cause you trouble; in that case, one connection may interfere with the other.
Short answer - Yes.
You can have many connections at once, all sending and receiving; assuming that you've got multiple sockets.
You don't mention the number of clients that your server will have to deal with, or what you are doing with the data being sent/received.
Depending on your implementation, multiple threads may also be required (as Dariusz Wawer points out, this is not essential, but I mention them because a scalable solution that can handle larger numbers of clients will likely use threads).
Check out this post on TCP and UDP Ports for some further info:
TCP and UDP Ports Explained
A good sample C# tutorial can be found here: C# Tutorial - Simple Threaded TCP Server
This is maybe more a topic for discussion than a question.
Background:
I have two server/client pairs, one written in Java and one written in C#. Both pairs do the same thing. There are no problems with the Java/Java and C#/C# combinations; I would like to make the Java/C# and C#/Java combinations work as well. There is no problem with I/O; I am using a byte array representing an XML-formatted string. I am bound to use TCP.
I need to care about graceful disconnects; in Java there is no such thing. When you close the client's socket, the server-side socket remains in a passive close state, so the server thread handling this socket stays alive, and with many clients I could end up with thousands of unnecessary threads. In C# it is enough to call TcpClient.Available to determine whether there is data in the buffer or whether the client has been closed (SocketException). In Java I can think of two approaches:
you have to write something to the underlying stream of the socket to really test whether the other side is still open or not.
you have to make the other side aware that you are closing one side of the connection before you close it. I have implemented this one: before closing the client socket I send a packet containing the 0x04 byte (end of transmission) to the server, and the server reacts to this byte by closing the server side of the socket.
Unfortunately, both approaches have caused me a dilemma when it comes to the C#/Java and Java/C# client/server pairs. If I want these pairs to communicate with each other, I will have to add code for sending the 0x04 byte to the C# client, and of course code handling it to the C# server, which is a kind of overhead (a sketch of this convention follows below). I could live with the unnecessary network traffic; the main problem is that this C# code is part of a core communication library which I do not want to touch unless it is absolutely necessary.
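For reference, a minimal C# sketch of the 0x04 convention described above (the identifiers are illustrative, not the actual library code):

    using System.Net.Sockets;

    static class GracefulClose
    {
        const byte EndOfTransmission = 0x04;

        // Client side: announce the close before actually closing.
        public static void CloseGracefully(TcpClient client)
        {
            client.GetStream().WriteByte(EndOfTransmission); // tell the peer we are done
            client.Close();
        }

        // Server side: treat 0x04 (or end of stream) as "this client is gone".
        public static bool PeerHasClosed(NetworkStream stream)
        {
            int b = stream.ReadByte(); // blocks until one byte arrives or the stream ends
            return b == EndOfTransmission || b == -1;
        }
    }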
Question:
Is there another approach in Java to disconnect gracefully which does not require writing to the underlying stream of the socket and checking the result, so that I do not have to handle this in the C# code as well? I have a free hand when it comes to libraries; I do not have to use only Java SE/EE.
I have given what I think is a concise explanation of this "graceful disconnection" topic in this answer; I hope you find it useful.
Remember that both sides have to use the same graceful disconnection pattern. I have found myself trying to gracefully disconnect a client socket while the server kept sending data (it did not shut down its own side), which resulted in an infinite loop...
In the end the answer was apparent: there is no such thing as a graceful disconnect in C# either; it is a matter of the TCP protocol contract, so there is the same problem as in Java, with one slight difference. Under certain circumstances you can work around it: if you send 0 bytes through the NetworkStream and then check the TcpClient.Connected state, it is somewhat correct. However, from what I have found on the Internet, this is neither a reliable nor a preferred way to test a connection in C#. It has, of course, been used in my C# library. In Java you cannot use this at all.
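The zero-byte probe mentioned above looks roughly like this in C# (and, as said, it is not considered reliable):

    using System;
    using System.Net.Sockets;

    static bool ProbeConnection(TcpClient client)
    {
        try
        {
            // Sending zero bytes forces the socket to notice a dead peer,
            // which refreshes the Connected property.
            client.Client.Send(Array.Empty<byte>());
            return client.Connected;
        }
        catch (SocketException)
        {
            return false; // the other side is gone
        }
    }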
So my solution of sending an "end of transmission" packet to the server when the client disconnects is solid, and I have introduced it in the C# library as well. Now I just tremble in fear waiting for the first bug to emerge...
One last point: I also have to implement some kind of mechanism that collects the handling threads on the server for crashed clients. Keep that in mind if you are doing a similar thing; no client is guaranteed to end expectedly. ;)
I am very confused about synchronous and asynchronous sockets in C#. I want to develop a game played over a LAN, but I am not sure which one is better for my Hangman game application.
This game can be played in 1-player mode or 2-player mode.
In 1-player mode, just one player interacts with the server.
But in 2-player mode, two players interact with the server turn by turn. That means if player A guesses a wrong word, he loses his turn and player B takes it.
Can you give me a suggestion about synchronous vs. asynchronous?
Besides that, how can the client find the server if the client doesn't need to enter the server IP? In other words, what should I choose between TCP and UDP?
And the last question: can I create a server that is asynchronous while the clients are synchronous? Is that OK?
Thank you very much.
The important part about choosing asynchronous vs. synchronous is how you make the communications interact with your GUI thread. Don't let a synchronous socket block your UI. The article linked here gives an idea of what to expect and offers some guidance about using asynchronous sockets in Windows programming.
Winsock tips
Your second question is about TCP/UDP; there are a lot of differences between the two that you should be aware of. First and foremost, TCP guarantees packet delivery while the connection is valid. Given your situation, the simple requirements, and the lack of performance needs, TCP is probably your best choice. If you were designing a high-performance game where you have to allow for dropped packets and handle latency, UDP would become a better option, but then you would have to consider what happens when packets are dropped and deal with things like out-of-order packets. TCP hides all of that complexity from you and will make working with it simpler.
Mixing synchronous and asynchronous clients and servers should not cause problems. Each side only knows about the communication link itself (TCP/UDP).
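To make the async option concrete, a minimal server sketch (the port and handler body are placeholders); the accept loop awaits instead of blocking, so the rest of the application stays responsive:

    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static async Task RunServerAsync()
    {
        var listener = new TcpListener(IPAddress.Any, 5000); // example port
        listener.Start();
        while (true)
        {
            // Await instead of blocking: no thread sits idle between connections.
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client); // one pending task per player
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        {
            NetworkStream stream = client.GetStream();
            var buffer = new byte[1024];
            int read = await stream.ReadAsync(buffer, 0, buffer.Length);
            // ... apply the game logic to the received guess here ...
        }
    }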
So, regarding your questions:
Can you give me a suggestion about synchronous and asynchronous? In your case, given the complexity of the application, you can use either sync or async sockets. As twilson stated, sync sockets block your main thread while async ones don't, so if you run into performance issues, go for asynchronous sockets.
Besides that, how can the client find the server if the client doesn't need to enter the server IP? What should I choose between TCP and UDP? Well, there is a fair difference between TCP and UDP connections. You usually use UDP (connectionless) when you have performance concerns, as in VoIP apps, real-time games, video chat and so on, while in other cases you use TCP; so in your case TCP should suit you well.
And the last question: can I create a server that is asynchronous while the clients are synchronous? Is that OK?
Yes, you could use this kind of implementation, even though it's good practice to have the same type of socket on both the clients and the server.
We are currently investigating the most efficient way of communicating between 120-140 embedded hardware devices running the .NET Micro Framework and a server.
Each embedded device needs to send data to, and request information from, the server on a fairly regular basis, all in real time over TCP.
My question is this: would it be better to initialise 140 TCP connections to the server and then hang on to them, or to initialise a new connection for each request to and from the devices? Would holding on to and managing 140 TCP connections put a lot of strain on the server?
When the server detects new data in the database, it needs to send this new info to 1..* devices (information is targeted at specific devices). If I held on to the 140 connections, I would need to look up the correct connection each time I needed to send information, instead of just sending to an IP:PORT associated with the new data.
I guess another possibly stupid question would be: is it actually possible to hang on to 140 TCP connections on a single port?
Any suggestions/comments are appreciated!
In general you are better off maintaining the connections for as long as possible. If each device opens a new connection every time it sends a message, you can end up effectively DoS'ing the server, as it accumulates lots of sockets in the TIME_WAIT state taking up space in its tables.
I worked on a system where there were a bunch of clients talking to a server and while they could be turned on and off regularly, it was still better to maintain the connection (and re-establish it when it had dropped and a new message needed to be sent). You may end up needing to write slightly more complex code, but I've found it to be well worth the effort for the reduced load on the server.
Modern operating systems may have bigger buffers than the ones on which I actually encountered the DoS effect, but fundamentally it's not the best idea to use lots of connections like that.
Things can get relatively complicated on the client side, especially when the device tends to go to sleep transparently to the application, because that means connections will time out while the app thinks they are still open. When we did this we ended up with relatively complex network code, because we needed to deal with the fact that the sockets could (and would) fail as a matter of course, and we simply needed to set up a new connection and re-attempt sending the message. You just tuck this code away in your libraries and forget about it once it's done, though.
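A sketch of that "reconnect and retry" pattern; the class name, host and port are placeholders for whatever the client library actually uses:

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class ReconnectingSender
    {
        private TcpClient _client;
        private readonly string _host = "device-server.local"; // placeholder
        private readonly int _port = 5000;                     // placeholder

        public async Task SendAsync(byte[] message)
        {
            for (int attempt = 0; attempt < 2; attempt++)
            {
                try
                {
                    if (_client == null || !_client.Connected)
                    {
                        _client = new TcpClient();
                        await _client.ConnectAsync(_host, _port); // re-establish a dropped link
                    }
                    await _client.GetStream().WriteAsync(message, 0, message.Length);
                    return;
                }
                catch (Exception ex) when (ex is SocketException || ex is IOException)
                {
                    _client?.Close();
                    _client = null; // force a fresh connection on the retry
                }
            }
            throw new IOException("Send failed even after reconnecting.");
        }
    }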
In fact, our initial application had even more complex code, because it was dealing with a network library that was semi-aware of the stop/start nature of the devices and tried to resend failed messages, sometimes meaning the same message got sent twice. We ended up adding an extra layer of communication on top in order to ensure duplicates got rejected. If you're using C# or regular BSD-style sockets you shouldn't have that problem, though, I'm guessing. This was a proprietary library that managed the reconnects but caused headaches with its resends and its inappropriate default time-outs.
You can usually connect many more than 140 "clients" to a server (given a decent network, HW and RAM)...
I always recommend testing this sort of thing with real scenarios (load etc.), since there are aspects like the network (performance, stability...), HW (server RAM etc.) and SW (what exactly does the server do?) that only you can check.
Depending on the protocol, you could (or should) even put some timeout/reconnect mechanism in there.
The lookup you mention would be really fast: just use a ConcurrentDictionary to hold the needed information, with IP:PORT as the key (assuming the server runs on the full .NET 4); see the sketch after the references below.
For some references see:
http://msdn.microsoft.com/en-us/library/dd287191.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/02/17/c.net-little-wonders-the-concurrentdictionary.aspx
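A sketch of that lookup (the key format and types are illustrative):

    using System.Collections.Concurrent;
    using System.Net.Sockets;

    class ConnectionRegistry
    {
        // "ip:port" -> live connection for each device.
        private readonly ConcurrentDictionary<string, TcpClient> _connections =
            new ConcurrentDictionary<string, TcpClient>();

        public void Register(TcpClient client) =>
            _connections[client.Client.RemoteEndPoint.ToString()] = client;

        // Called when new data in the database targets a specific device.
        public bool TrySend(string key, byte[] payload)
        {
            if (_connections.TryGetValue(key, out TcpClient client) && client.Connected)
            {
                client.GetStream().Write(payload, 0, payload.Length);
                return true;
            }
            return false;
        }
    }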
EDIT - as per comments:
Holding on to a TCP/IP connection doesn't take much processing client-side... it costs a bit of memory. I would recommend doing a small test (1-2 clients) to check this assumption for your specific case.
If you are talking about a system with hardware devices, then I suggest closing the connection every time the client finishes sending data.
To make sure the client gets updates from the server, the client can wait for a 5-second period for any data to arrive from the server. If data is received within this timeframe, close the connection and process the data. If not, close the connection and wait until it is time to send the next set of data.
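A sketch of one such exchange with a 5-second receive window (the endpoint values are placeholders):

    using System.IO;
    using System.Net.Sockets;

    static void ExchangeOnce(byte[] data)
    {
        using (var client = new TcpClient("server.local", 5000)) // placeholder endpoint
        {
            client.ReceiveTimeout = 5000; // wait at most 5 s for the server's reply
            NetworkStream stream = client.GetStream();
            stream.Write(data, 0, data.Length);

            var buffer = new byte[1024];
            try
            {
                int read = stream.Read(buffer, 0, buffer.Length);
                // ... process the 'read' bytes from the server ...
            }
            catch (IOException)
            {
                // No reply within 5 s: just close and try again on the next cycle.
            }
        } // the connection is closed here either way
    }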
This way scaling becomes much easier. Keeping connections open always puts a strain on resources and, in my opinion, is not necessary unless it is some life-critical device like a heart rate monitor, an oxygen supply monitor, etc.