Is there any way (preferably in C#) to regularly measure connection-layer latency (round trip) without changing the application protocol and without creating a separate dedicated connection - e.g. using a SYN/ACK trick similar to what tcping does, but without opening and closing a connection?
I'm connecting to the servers via a given ASCII-based protocol (always with TCP_NODELAY set). The servers send me a large number of discrete messages, and I regularly send a 'heartbeat' payload (but there is no response payload to the heartbeat).
I cannot change the protocol and in many cases I also cannot create more than one physical connection to the server.
Keep in mind that TCP does windowing, so this could cause issues when trying to implement an elegant SEQ/ACK solution (you would want SEQ, the sequence number, not SYN, the synchronize flag).
[EDIT: Snipped a very overcomplicated and confusing explanation.]
I'd have to say the best way is a simple stopwatch approach: start a timer, make a very thin request or poll, and measure the time until the response comes back. If that query really is the lightest you can make it, it should give you the minimum amount of time you can reasonably expect to wait, which is sometimes more valuable than the ping time (which can be misleading).
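A minimal sketch of that stopwatch approach; the two delegates stand in for whatever the thinnest request/reply pair in your protocol looks like:

```csharp
using System;
using System.Diagnostics;

// Time one request/response pair on the existing connection.
static long MeasureRoundTripMs(Action sendThinRequest, Action readResponse)
{
    var sw = Stopwatch.StartNew();
    sendThinRequest(); // the lightest request the protocol allows
    readResponse();    // block until the matching reply has arrived
    sw.Stop();
    return sw.ElapsedMilliseconds;
}
```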
If you really, absolutely need just the network time to the machine and back, just use an ICMP ping.
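That is one call with System.Net.NetworkInformation.Ping (host name below is a placeholder):

```csharp
using System;
using System.Net.NetworkInformation;

// One ICMP echo; RoundtripTime is network-only latency, no app protocol involved.
using (var ping = new Ping())
{
    PingReply reply = ping.Send("server.example.com", 1000); // 1 s timeout
    if (reply.Status == IPStatus.Success)
        Console.WriteLine($"RTT: {reply.RoundtripTime} ms");
}
```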
I have two PCs connected by a direct Ethernet cable over a 1 Gbps link. One of them acts as a TCP server and the other acts as TCP client(s). Now I would like to achieve the maximum possible network throughput between these two.
Options I tried:
Creating multiple clients on PC-1 with different port numbers, all connecting to the TCP server. The reason for creating multiple clients is to increase network throughput, but here I have an issue.
I have a buffer queue of events to be sent to the server, and there will be multiple messages with the same event number. The server has to acquire all the messages and then sort them by event number. Each client dequeues a message from the concurrent queue, sends it to the server, and then repeats. I have put a constraint on the client side that Event-2 will not be sent until all messages labelled Event-1 have been sent, so the sent event order is correct. The TCP server continuously receives from all the clients.
Now let's come to the problem:
The server receives the data in a slightly random order, as shown in the image. The randomness between two successive events gets worse after some time of acquisition. I suspect this random behaviour is due to the parallel worker threads executing the I/O completion callbacks.
Technology used: F# async sockets with SocketAsyncEventArgs.
Solution I tried: instead of allowing the server to receive from all the clients at once, I polled for the next available client with pending data. That ensured the correct order, but its performance is not at all comparable to the earlier approach.
I want to receive in the same (or nearly the same) order as the clients send, without non-deterministic randomness. Is there any way I can preserve the order and also maintain the better throughput? What are the best ways to achieve nearly 100% network throughput between two PCs?
As others have pointed out in the comments, a single TCP connection is likely to give you the highest throughput, if it's TCP you want to use.
You can possibly achieve slightly (really marginally) higher throughput with UDP, but then you have the hassle of recreating all the goodies TCP gives you for free.
If you want bidirectional high volume high speed throughput (as opposed to high volume just one way at a time), then it's possible one connection for each direction is easier to cope with, but I don't have that much experience with it.
Design tips
You should keep the connection open. The client will need to ask "are you still there?" at regular intervals if no other communication goes on. (On second thought, I realize that the only purpose of this is to allow quick response and the possibility for the server to initiate a message transaction. So I revise it to: keep the connection open for a full transaction at least.)
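A minimal sketch of such an "are you still there?" heartbeat; the 30-second interval and the single-byte probe are arbitrary choices here, a real protocol would define its own keep-alive message:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

// Fires a tiny probe on an otherwise idle connection at a fixed interval.
static Timer StartHeartbeat(NetworkStream stream) =>
    new Timer(_ =>
    {
        try { stream.Write(new byte[] { 0x00 }, 0, 1); } // protocol-specific probe byte
        catch (IOException) { /* connection dropped: reconnect here */ }
    }, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
```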
Also, you should split up large messages - messages over a certain size. Keep the number of bytes you send in each chunk to a round maximum, typically 8K, 16K, 32K or 64K on a local network. Experiment with sizes. The suggested max sizes have been optimal since Windows 3 at least. You need some sort of protocol with each chunk consisting of a fixed header (typically a magic number for check and resync, a chunk number also for check and for analysis, and a total packet length) followed by the data.
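A sketch of that kind of framing; the magic number and the 32K chunk size are illustrative choices, not fixed values:

```csharp
using System;
using System.IO;
using System.Text;

const uint Magic = 0xC0FFEE01;   // illustrative magic number, for check and resync
const int ChunkSize = 32 * 1024; // experiment: 8K, 16K, 32K or 64K

void WriteChunked(Stream stream, byte[] data)
{
    using var writer = new BinaryWriter(stream, Encoding.UTF8, leaveOpen: true);
    for (int offset = 0, chunkNo = 0; offset < data.Length; offset += ChunkSize, chunkNo++)
    {
        int len = Math.Min(ChunkSize, data.Length - offset);
        writer.Write(Magic);   // fixed header: magic number
        writer.Write(chunkNo); // chunk number, for check and analysis
        writer.Write(len);     // payload length
        writer.Write(data, offset, len);
    }
}
```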
You can possibly further improve throughput with compression (usually light, quick compression) - it depends very much on the data, and whether you're on a fast or slow network.
Then there's a hassle one typically runs into - problems with the Nagle algorithm - and I no longer remember enough of the details. I believe I used to overcome it by sending an acknowledgement in return for each chunk sent, and I suspect that by doing so you satisfy the design requirements and avoid waiting for the last bytes to come in. But do google this.
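If Nagle does turn out to be the culprit, disabling it per connection is a one-liner (host and port below are placeholders):

```csharp
using System.Net.Sockets;

var client = new TcpClient("server.example.com", 5000); // placeholder host/port
client.NoDelay = true; // turn off the Nagle algorithm for this connection
```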
We are currently developing a software solution which has a client and a number of WCF services that it consumes. The issue we are having is the WCF services timing out after a period of inactivity. As far as I understand, there are a few ways to resolve this:
1. Increase timeouts (as far as I understood, this is generally not recommended; e.g. setting the timeout to infinite/weeks is considered bad practice)
2. Periodically ping the WCF services from the client (I'm not sure that I'm a huge fan of this, as it will add redundant, periodic calls)
3. Handle timeout issues and attempt to reconnect (this is slow and requires a lot of manual code)
4. Reliable Sessions - some sources mention that this is the built-in WCF pinging and message reliability mechanism, but other sources mention that this will still time out.
What is the recommended/best way of resolving this issue? Is there any official reading material on this? I could not find all that much info myself.
Thanks!
As I see it, you have to use a combination of your stated points.
You are right, increasing the timeouts is bad practice and can give you a lot of problems.
If you don't want to use Reliable Sessions, then pinging is the only applicable way to hold the connection open.
You need to handle these things regardless: whether a timeout occurs, the connection is lost, or an exception is thrown. There are plenty of ways your connection can fault.
Reliable Sessions are a good way to avoid implementing a ping yourself, but technically they do nearly the same thing: WCF automatically sends an "I am still here" request.
The conclusion is that you need point 3, plus point 2 or point 4. To reduce the manual code for point 3, you can use proxies or a wrapper around your ServiceClient that establishes a new connection if the old one faults during a request. Point 4 is easy to implement, because you only need some small additions to the binding in your config, and the traffic overhead is not that big. Point 2 is the most expensive way: you need a thread/task that does nothing but ping the server, and the service needs to be extended. But as you stated before, Reliable Sessions can fail, and pings should keep you on the safe side.
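Point 4 in code form, as a sketch (the timeout values are example choices; the same settings can also live in config):

```csharp
using System;
using System.ServiceModel;

var binding = new NetTcpBinding();
binding.ReliableSession.Enabled = true;                               // point 4: WCF's built-in keep-alive
binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10); // example value
binding.ReceiveTimeout = TimeSpan.FromMinutes(10);                    // keep in sync with the above
```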
You should ask yourself what your WCF endpoint is actually doing. Is the way you have your command set up optimal?
Perhaps it would be better to base the long-running endpoint on a polling system that allows a quick query instead of waiting on the results of the endpoint's actions.
You should also consider data transfer as a possible issue. Are you transferring back a lot of data?
To get a more pointed answer, we'd need to know more about the specific endpoint as well as any other responsibilities there are for the service.
The question is: how can I send data over multiple internet connections available on the current PC?
Possibly partially similar to This Post.
My idea is (like RAID-0 uses multiple HDDs to gain speed) to take advantage of multiple NICs - actually multiple internet connections/accounts - to maximize upload throughput (upload usually gets about 1/8 of the total bandwidth).
The concept I am trying to implement is to use the fastest protocol regardless of data integrity, so I can send data from one point to the other (with a "client" part of the application handling the data: checking integrity while putting the data back into one piece), or maybe just use TCP if that isn't worth it (handling integrity at the application level to increase speed).
I know there's an existing application called "Connectify" that claims to do something similar, though my idea is to make something a little different, and I need to understand the basics so I can start this project for testing and development.
Thanks in advance!
In general, the approach you will need to take is to create multiple TCP clients, each bound to an individual network adapter in your machine. You can iterate through the available adapters, test that each has a connection to the outside world, and add them to a collection; then, for each packet of data you want to transmit, you send it out over one of them.
See http://msdn.microsoft.com/en-us/library/3bsb3c8f.aspx on how to bind TCPClient to individual IPEndPoints.
Because of the nature of the way TCP operates, you will essentially have to construct a wrapper for each packet of data that includes an order ID, so that packets received out of order (which will happen most of the time in this case) can be pieced back together again.
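A sketch of such a wrapper and of the reassembly on the receiving side; the header layout (sequence number plus length) is an assumption, not a standard:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

// Sender side: prefix each payload with a sequence number and its length.
void WriteWrapped(Stream stream, int seq, byte[] payload)
{
    using var w = new BinaryWriter(stream, Encoding.UTF8, leaveOpen: true);
    w.Write(seq);            // order id
    w.Write(payload.Length); // payload length
    w.Write(payload);
}

// Receiver side: buffer out-of-order packets and release them in sequence.
class Reassembler
{
    private readonly SortedDictionary<int, byte[]> _pending = new();
    private int _next;

    public IEnumerable<byte[]> Accept(int seq, byte[] payload)
    {
        _pending[seq] = payload;
        while (_pending.TryGetValue(_next, out var p)) // flush every contiguous packet
        {
            _pending.Remove(_next++);
            yield return p;
        }
    }
}
```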
Let me know if you need any more help understanding things.
We are currently investigating the most efficient way of communicating between 120-140 embedded hardware devices running on the .NET Micro Framework and a server.
Each embedded device needs to send to, and request information from the server on a fairly regular basis all in real time through TCP.
My question is this: Would it be better to initialise 140 TCP connections to the server, and then hang on to these connections, or initialise a new connection for each requests to and from the devices? Would holding on to and managing 140 TCP connections put a lot of strain on the server?
When the server detects new data in the database it needs to send this new info to 1..* devices (information is targeted to specific devices). If I held on to the 140 connections, I would need to look up the correct connection each time I needed to send information, instead of just sending to an IP:PORT associated with the new data.
I guess another, possibly stupid, question: is it actually possible to hang on to 140 TCP connections on a single port?
Any suggestions/comments are appreciated!
In general you are better off maintaining the connections for as long as possible. If each device opens a connection each time it sends a message, you can end up effectively DoS'ing the server, as it ends up with lots of sockets in the TIME_WAIT state taking up space in its tables.
I worked on a system where there were a bunch of clients talking to a server and while they could be turned on and off regularly, it was still better to maintain the connection (and re-establish it when it had dropped and a new message needed to be sent). You may end up needing to write slightly more complex code, but I've found it to be well worth the effort for the reduced load on the server.
Modern operating systems may have bigger buffers than the ones I actually encountered the DoS effect on, but it's fundamentally not the best idea to be using lots of connections like that.
Things can get relatively complicated on the client side, especially when the device tends to go to sleep transparently to the application because that means connections will time out while the app thinks they are still open. When we did this we ended up with relatively complex network code because we needed to deal with the fact that the sockets could (and would) fail as a matter of course and we simply needed to setup a new connection and re-attempt sending the message. You just tuck this code away into your libraries and forget about it once it's done though.
In fact, in practice our initial application had even more complex code, because it was dealing with a network library that was semi-aware of the stop-start nature of the devices and tried to resend failed messages, sometimes meaning that the same message got sent twice. We ended up adding an extra layer of communication on top to ensure duplicates got rejected. If you're using C# or regular BSD-style sockets you shouldn't have that problem, though, I'm guessing. This was a proprietary library that managed the reconnects but caused headaches with the resends and its inappropriate default time-outs.
You can usually connect many more than 140 "clients" to a server (that is, with a decent network / HW / RAM)...
I always recommend testing this sort of thing with real scenarios (load etc.), since there are aspects like the network (performance, stability...), HW (server RAM etc.) and SW (what exactly does the server do?) that only you can check.
Depending on the protocol you could/should even put some timeout/reconnect mechanism in there.
The lookup you mean would be really fast - just use ConcurrentDictionary to hold the needed information with IP:PORT as the key (assuming the server runs on a full .NET 4).
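A sketch of that lookup; the key format and the addresses are assumptions:

```csharp
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Text;

// Map "ip:port" to the live connection; all operations are thread-safe.
var connections = new ConcurrentDictionary<string, TcpClient>();

// On accept (key format is an assumption):
// connections["10.0.0.17:5000"] = acceptedClient;

// When new data in the database targets a device:
byte[] data = Encoding.ASCII.GetBytes("update"); // placeholder payload
if (connections.TryGetValue("10.0.0.17:5000", out var client))
{
    client.GetStream().Write(data, 0, data.Length); // send to just that device
}
```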
For some references see:
http://msdn.microsoft.com/en-us/library/dd287191.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/02/17/c.net-little-wonders-the-concurrentdictionary.aspx
EDIT - as per comments:
Holding on to a TCP/IP connection doesn't take much processing client-side... it costs a bit of memory. I would recommend doing a small test (1-2 clients) to check this assumption for your specific case.
If you are talking about a system with hardware devices, then I suggest closing the connection every time the client finishes sending data.
To make sure the client gets updates from the server, the client can wait for a 5-second period for any data to arrive from the server. If data is received within this timeframe, close the connection and process the data. If not, close the connection and wait again after sending the next set of data.
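A sketch of that 5-second window; host, port and payload are placeholders:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

byte[] payload = Encoding.ASCII.GetBytes("sensor-data"); // placeholder payload

using var client = new TcpClient("server.example.com", 5000); // placeholder host/port
client.ReceiveTimeout = 5000;             // wait at most 5 s for a server update
var stream = client.GetStream();
stream.Write(payload, 0, payload.Length); // send this round's data

var buffer = new byte[4096];
try
{
    int read = stream.Read(buffer, 0, buffer.Length);
    if (read > 0)
        Console.WriteLine($"Got {read} bytes from server"); // process the update
}
catch (IOException)
{
    // Nothing arrived within the window: just move on to the next batch.
}
// Disposing 'client' closes the connection either way.
```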
This way scaling becomes much easier. Keeping the connections open always puts a strain on resources and, in my opinion, is not necessary unless it is some life-critical device like a heart-rate monitor, oxygen supply monitor, etc.
I suppose similar questions have already been asked, but I was unable to find any. Please feel free to point me to existing solutions.
I'll explain my scenario. I'd like to create a server application. There are many clients (currently only a few dozens, but it should scale up to 1000+) that connect to the server (which is running on a single machine).
Each client periodically sends a small amount of data to the server to process (processing is quick). The server can also send small amounts of data to each client on a regular basis. The response time should be low (<100 ms), but realtime or anything like that is not required.
My first idea was back from when I was still programming in VB6: Create a server socket to listen to incoming requests, then create a client socket for each possible client (singlethreaded). I doubt this scales well. It is also difficult to implement the communication.
So I figured I'd create a listener thread to accept new client connections and a different thread to actually read the incoming data from the clients. Since there are going to be many clients, I don't want to create a thread for each client. Instead, I'd prefer to use a single thread to read all incoming data in a loop, then either process the data directly or create work items for a different thread to process. I guess this approach would scale well enough. Any comments on this idea are most welcome.
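For what it's worth, a sketch of that shape using async/await, which gives you the "no thread per client" property without a manual read loop; the port is a placeholder and message framing is omitted:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

var listener = new TcpListener(IPAddress.Any, 5000); // placeholder port
listener.Start();

while (true)
{
    TcpClient client = await listener.AcceptTcpClientAsync(); // the "listener thread"
    _ = HandleClientAsync(client); // no dedicated thread per client
}

static async Task HandleClientAsync(TcpClient client)
{
    using (client)
    {
        var stream = client.GetStream();
        var buffer = new byte[1024];
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Hand off to a worker queue, or process inline if processing is quick.
            Console.WriteLine($"Received {read} bytes");
        }
    }
}
```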
The remaining problem I'm worried about is ease of communication. The above solution seems to require a manual protocol, possibly sending ASCII commands via TCP. While this would work, I think there should be a better way nowadays.
Some interface/proxy-ish way seems reasonable. I worked a bit with Java RMI before, and from my understanding, .NET Remoting serves a similar purpose. Is Remoting a feasible solution to the scenario I described (many clients)? Is there an even better way I don't know of yet?
Edit:
This is not in LAN, but internet, if that matters.
If possible, it should also run under Linux.
As AresnMkrt pointed out, you should try WCF.
Just take it as-is (with netTcpBinding, but don't forget to switch security off) and create a tracer bullet - measure whether performance meets your requirements.
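A sketch of such a tracer-bullet client; the contract and the address are placeholders for your own service:

```csharp
using System;
using System.ServiceModel;

// netTcpBinding with security switched off, as suggested above.
var binding = new NetTcpBinding(SecurityMode.None);
var factory = new ChannelFactory<IMyService>(
    binding, new EndpointAddress("net.tcp://localhost:8080/MyService")); // placeholder address
IMyService proxy = factory.CreateChannel();
Console.WriteLine(proxy.Echo("ping")); // time a few representative calls like this

// Placeholder contract for the sketch; substitute your own.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string text);
}
```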
If not, you can try to tune WCF - WCF is very extensible, and you can modify message serialization to send ASCII messages as you want.
Are you sure you need a binary protocol? Rather, are you sure you need to invent a whole new protocol where a plain RESTful service with JSON/XML would suffice? WCF can help you a lot in this regard.