Maximum recommended socket sending capacity - C#

I'm creating a program (a game) that will have multiplayer. It will need both client-to-server and server-to-client communication, with the server being run on one of the players' machines rather than on a separate machine. I'll be using C# sockets, and I need to know the maximum amount of data they can handle. My messages will be between roughly 256 B and 128 KB. Am I going to have any trouble sending that through the sockets? (There will never be more than 7 clients connected to the server machine.)
EDIT
After reading some other posts, I don't think it will be a problem. Others have said that sending upwards of 1 MB doesn't cause problems, so I expect I'll be fine.
P.S.
If this is not the case, please let me know. Thanks!
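One thing worth noting, since the question is about message sizes: TCP is a stream, so a single 128 KB send can arrive split across several receive calls. A common fix is to length-prefix each message so the receiver knows where it ends. Below is a minimal sketch of that framing (the class and method names are mine, not from any library); it assumes both ends use the same byte order for the 4-byte prefix.

    using System;
    using System.IO;
    using System.Net.Sockets;

    static class Framing
    {
        // Prefix each message with a 4-byte length so the receiver knows
        // where one game message ends and the next begins.
        public static void SendMessage(NetworkStream stream, byte[] payload)
        {
            byte[] header = BitConverter.GetBytes(payload.Length);
            stream.Write(header, 0, header.Length);
            stream.Write(payload, 0, payload.Length);
        }

        public static byte[] ReceiveMessage(NetworkStream stream)
        {
            byte[] header = ReadExactly(stream, 4);
            int length = BitConverter.ToInt32(header, 0);
            return ReadExactly(stream, length);
        }

        // TCP may deliver data in arbitrary chunks, so loop until the
        // requested number of bytes has actually arrived.
        private static byte[] ReadExactly(NetworkStream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new EndOfStreamException("Connection closed mid-message.");
                offset += read;
            }
            return buffer;
        }
    }

With 7 clients and messages up to 128 KB, this kind of framing plus sensible send/receive buffer sizes is usually all you need.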

Related

Very Slow PLC Programming and Fault Finding

I am working in a full production environment that has a range of PLCs around our production mill. Each of these PLCs talks back over a DataHighway Plus (DH+) network to a special PC on our LAN called the MicroLinks PC, which runs the Rockwell RSLinx Classic OPC server software.
Recently I put together a piece of .NET software in C#, using the OPC .NET API, to read from the Rockwell OPC server on the MicroLinks PC and sync data back into our MySQL database, which sits on our Windows R2 server PC.
Ever since turning on the .NET software, the engineers on site have experienced a massive slowdown when developing new PLC scripts and fault finding.
Some of the reports are as bad as 10-second lags.
Consequently, we have had to turn off the .NET data sync so the engineers can do their work without issues.
So I am looking for advice on what I should look at, resources to read for this type of problem, etc. PLCs and networks are well out of my depth; I am just the .NET programmer.
Here is the structure of our network:
I'm not sure which type of Rockwell PLCs you are using. I'm most familiar with the ControlLogix platform, so I'll talk about that.
The Ethernet card in a ControlLogix PLC connects at 100 Mb/s, but the card can't actually handle 100 Mb/s continuously. A 1756-ENBT card can handle about 5,000 packets per second, and the EN2T roughly double that. There are formulas in the Rockwell docs that explain how to calculate packets per second, but another option on a running system is to connect the Logix5000 Task Monitor that ships with RSLogix and check the CPU usage of the Ethernet card; I think Rockwell recommends keeping it under 60%. If you are requesting too many packets, that CPU won't keep up.
The PLC itself can also starve communications. ControlLogix has an "overhead time slice" setting, which is the percentage of time the PLC spends servicing communication tasks as opposed to running its own logic. Increasing this percentage can improve comms a bit.
It sounds like your program is putting a large burden on the PLC. Does it get better if you slow down your app so that it is not pulling as much data as fast?
One easy way to reduce the number of packets required to retrieve a block of data, without slowing down the update rate, is to put it all in one array. RSLinx will then be able to optimize the request instead of pulling individual tags.
I have had plenty of trouble using Rockwell RSLinx on my local PC when trying to find the IP address of a PLC plugged directly into my Ethernet port. With the "Autobrowse" option it completely locks up my PC while scanning ports and IP addresses for targets.
It might just be poorly optimized Rockwell software causing the issues. You may also be exchanging a whole lot of data, and your server PC is struggling to keep up.
I would contact Rockwell/Allen Bradley support for help with this. They will probably want some cash to help you.
You're almost definitely over-polling the PLC. Try polling less and less frequently until you find a rate that doesn't slow down the network. For example, if you're requesting data every 100 ms now, change that to once per second, then once per minute, then once every 15 minutes. At each step, check the comms speed at the programming terminals.
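If you want a quick way to experiment with the poll rate from the .NET side, a configurable delay between read cycles lets you step through those intervals without rebuilding anything. A rough sketch follows; ReadTagsAndSync is a placeholder for whatever your OPC .NET API read and MySQL insert currently do, not a real API call.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class PollLoop
    {
        // The interval is the knob to tune: start at 100 ms, then back off to
        // 1 s, 1 min, 15 min, and watch the effect on the programming terminals.
        public static async Task RunAsync(TimeSpan interval, CancellationToken token)
        {
            try
            {
                while (!token.IsCancellationRequested)
                {
                    ReadTagsAndSync();                  // placeholder: OPC read + MySQL sync
                    await Task.Delay(interval, token);  // wait before the next batch of requests
                }
            }
            catch (OperationCanceledException)
            {
                // Normal shutdown when the token is cancelled.
            }
        }

        private static void ReadTagsAndSync()
        {
            // Hypothetical: your existing OPC .NET API read and database insert.
        }
    }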

SharpSNMP max-repetitions increase causes buffer size exception through GPRS

I am trying to send SNMP requests to a remote location.
I am using the SharpSNMP 8.5.0 library and the Snmp.BulkWalk example from a Code Project post (here).
In the example they use 10 as max-repetitions, and with sniffing software I noticed that this produces multiple datagram packets to walk the subtree; I get about 120 result packets back every time. So I tried a higher max-repetitions value and noticed that the packet count goes down; in fact I can get all the data in a single packet. Now I have another problem: the remote device is reached over GPRS, and when I snmpwalk the device from the server over GPRS I get a timeout or a buffer-out-of-size error. When I run the same solution on my local PC and access the remote device through my router (no GPRS involved), I get no errors and receive all the data!
Can someone explain this behavior? Does it have to do with a GPRS limitation? Is GPRS unreliable? Or is it a network limitation on the server?
(The MTU on the server is 1500.) Does anyone have experience with best practices and the optimal packet size that can be sent through SNMP UDP datagrams?
Though I am the author of that library, I cannot answer the GPRS part, as I am not a mobile network expert.
What I can answer is the packet-count part, which is relatively simple if you check out the definition of "max-repetitions":
https://www.webnms.com/snmp/help/snmpapi/snmpv3/v2c/maxrepetition.html
By setting a larger value for this parameter, a single packet can contain more results, so obviously fewer packets are needed.
I used 10 in that Code Project article because it was just an example. You can see from the link above that other libraries might use 50 as the default.
Regarding best practices for SNMP packet size, I've always been told that you should avoid exceeding the network MTU. In other words, set max-repetitions so that the Ethernet frames don't regularly exceed 1500 bytes. (Of course, this assumes that the size of your table cells is predictable.)
While larger packets should work on most well-configured networks, it's advisable to avoid fragmented packets, since packet reassembly can create extra overhead in the networking equipment. And if you're going to spread the PDUs over several packets anyway, the drawback of a few more back-and-forth requests is not that bad.
For example, Cisco equipment seems to follow this best practice, and it's recommended in a Microsoft article.
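As a rough illustration of how max-repetitions relates to the MTU, here is a back-of-the-envelope calculation. The header and per-varbind byte counts are assumptions for illustration only; sniff your own traffic to get real numbers for your OIDs and values.

    using System;

    class MaxRepetitionsEstimate
    {
        static void Main()
        {
            // How many varbinds fit in one UDP datagram without exceeding a
            // 1500-byte MTU. All byte counts below are rough assumptions.
            const int mtu = 1500;
            const int ipUdpOverhead = 28;     // IPv4 (20) + UDP (8) headers
            const int snmpPduOverhead = 60;   // version, community, PDU fields (approx.)
            const int bytesPerVarbind = 30;   // OID + value + ASN.1 framing (approx.)

            int payloadBudget = mtu - ipUdpOverhead - snmpPduOverhead;
            int maxRepetitions = payloadBudget / bytesPerVarbind;   // about 47 with these numbers

            Console.WriteLine("max-repetitions of roughly " + maxRepetitions);
        }
    }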
(BTW, next time you have two separate questions, consider posting them as two questions!)

Persisting 140 TCP connections?

We are currently investigating the most efficient way of communicating between 120-140 embedded hardware devices running the .NET Micro Framework and a server.
Each embedded device needs to send information to, and request information from, the server on a fairly regular basis, all in real time over TCP.
My question is this: would it be better to initialise 140 TCP connections to the server and then hang on to them, or to initialise a new connection for each request to and from the devices? Would holding on to and managing 140 TCP connections put a lot of strain on the server?
When the server detects new data in the database it needs to send this new info to 1..* devices (the information is targeted at specific devices). If I held on to the 140 connections, I would need to look up the correct connection each time I needed to send information, instead of just sending to an IP:PORT associated with the new data.
I guess another, possibly stupid, question is: is it actually possible to hold on to 140 TCP connections on a single port?
Any suggestions/comments are appreciated!
In general you are better off maintaining the connections for as long as possible. If each device opens a new connection every time it sends a message, you can end up effectively DoS'ing the server, as it ends up with lots of sockets in the TIME_WAIT state taking up space in its tables.
I worked on a system where a bunch of clients talked to a server, and while they could be turned on and off regularly, it was still better to maintain the connection (and re-establish it when it dropped and a new message needed to be sent). You may end up writing slightly more complex code, but I've found it well worth the effort for the reduced load on the server.
Modern operating systems may have bigger buffers than the ones on which I actually saw the DoS effect, but fundamentally it's not a good idea to use lots of short-lived connections like that.
Things can get relatively complicated on the client side, especially when the device tends to go to sleep transparently to the application, because connections will time out while the app thinks they are still open. When we did this we ended up with relatively complex network code, because we had to accept that the sockets could (and would) fail as a matter of course and simply set up a new connection and re-attempt sending the message. You just tuck this code away in your libraries and forget about it once it's done, though.
In practice our initial application had even more complex code, because it used a network library that was semi-aware of the stop-start nature of the devices and tried to resend failed messages, which sometimes meant the same message got sent twice. We ended up adding an extra layer of communication on top to ensure duplicates were rejected. If you're using C# or regular BSD-style sockets you shouldn't have that problem, though; this was a proprietary library that managed the reconnects but caused headaches with the resends and its inappropriate default time-outs.
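For what it's worth, here is a stripped-down sketch of that "assume the socket can die and reconnect on demand" pattern, written against desktop .NET for brevity; the single-retry policy, host, and port are placeholders rather than a definitive implementation.

    using System;
    using System.Net.Sockets;

    class ResilientSender
    {
        private readonly string _host;
        private readonly int _port;
        private TcpClient _client;

        public ResilientSender(string host, int port)
        {
            _host = host;
            _port = port;
        }

        // Treat a dead socket as routine: reconnect once and retry the send.
        public void Send(byte[] message)
        {
            try
            {
                EnsureConnected();
                _client.GetStream().Write(message, 0, message.Length);
            }
            catch (Exception ex) when (ex is SocketException || ex is System.IO.IOException)
            {
                _client?.Close();
                _client = null;
                EnsureConnected();
                _client.GetStream().Write(message, 0, message.Length);
            }
        }

        private void EnsureConnected()
        {
            if (_client == null || !_client.Connected)
            {
                _client = new TcpClient();
                _client.Connect(_host, _port);
            }
        }
    }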
You can usually connect many more than 140 "clients" to a server (given a decent network, hardware and RAM)...
I always recommend testing this sort of thing with real scenarios (load etc.), since there are aspects like the network (performance, stability...), hardware (server RAM etc.) and software (what exactly does the server do?) that only you can check.
Depending on the protocol you could/should even put some timeout/reconnect mechanism in there.
The lookup you mention would be really fast - just use a ConcurrentDictionary holding the needed information with IP:PORT as the key (assuming the server runs on the full .NET 4); a minimal sketch follows the references below.
For some references see:
http://msdn.microsoft.com/en-us/library/dd287191.aspx
http://geekswithblogs.net/BlackRabbitCoder/archive/2011/02/17/c.net-little-wonders-the-concurrentdictionary.aspx
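Here is that sketch, assuming the server keeps the accepted TcpClient for each device keyed by its remote IP:PORT string (the names are illustrative, not from any framework):

    using System.Collections.Concurrent;
    using System.Net.Sockets;

    class ConnectionRegistry
    {
        // One entry per connected device, keyed by the "ip:port" of its remote endpoint.
        private readonly ConcurrentDictionary<string, TcpClient> _connections =
            new ConcurrentDictionary<string, TcpClient>();

        // Call from the accept loop when a device connects.
        public void Register(TcpClient client)
        {
            string key = client.Client.RemoteEndPoint.ToString();   // e.g. "10.0.0.12:50321"
            _connections[key] = client;
        }

        // Call when new data in the database targets a specific device.
        public bool TrySend(string key, byte[] payload)
        {
            TcpClient client;
            if (_connections.TryGetValue(key, out client))
            {
                client.GetStream().Write(payload, 0, payload.Length);
                return true;
            }
            return false;
        }
    }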
EDIT - as per comments:
Holding on to a TCP/IP connection doesn't take much processing client-side... it costs a bit of memory. I would recommend a small test (1-2 clients) to check this assumption for your specific case.
If you are talking about a system with hardware devices, then I suggest closing the connection every time the client finishes sending data.
To make sure the client gets any updates from the server, the client can wait up to 5 seconds for data to arrive from the server. If data is received within that window, close the connection and process the data. If not, close the connection and wait until after sending the next set of data.
This way scaling becomes much easier. Keeping the connections open always puts strain on resources and, in my opinion, is not necessary unless it is some life-saving device like a heart-rate monitor, an oxygen supply monitor, etc.
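A compact sketch of that send-then-wait-then-close exchange, written against desktop .NET for readability (the .NET Micro Framework side would use the raw Socket class, but the shape is the same; the buffer size and lack of framing are simplifications):

    using System;
    using System.IO;
    using System.Net.Sockets;

    class DeviceExchange
    {
        // Connect, send one batch of data, wait up to 5 s for any reply, then close.
        public static byte[] ExchangeOnce(string host, int port, byte[] data)
        {
            using (var client = new TcpClient(host, port))
            {
                NetworkStream stream = client.GetStream();
                stream.ReadTimeout = 5000;              // the 5-second window
                stream.Write(data, 0, data.Length);

                var buffer = new byte[4096];
                try
                {
                    int read = stream.Read(buffer, 0, buffer.Length);
                    if (read > 0)
                    {
                        var reply = new byte[read];
                        Array.Copy(buffer, reply, read);
                        return reply;                   // caller processes it after the connection closes
                    }
                }
                catch (IOException)
                {
                    // No reply within 5 s: fall through, close, and try again after the next send.
                }
                return null;                            // connection is closed by the using block either way
            }
        }
    }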

Is TCP suitable for network game programming consisting of regular positional updates?

Suppose you were forced to use TCP sockets rather than UDP sockets (e.g. something that Silverlight insists on). Would it be possible to create a multiplayer game that involves sending real-time positional updates to up to, say, eight players, so that each player could accurately see every other player in real time, even though UDP would be the better protocol to use? Given the option, would you go as far as selecting a different technology (e.g. Java) simply to gain UDP support?
Thanks,
Nick
As long as a few milliseconds aren't important, I see no reason to use UDP.
To receive UDP packets, you must have a public IP address.
To receive UDP packets, you need to be able to listen on a port. Not all frameworks in all environments can do this, often for security reasons and such.
As you describe Silverlight as a target platform, we can anticipate that this won't always be the case for your players.
Use TCP.
As an alternative to Silverlight, you might look at Haxe (or Flash).
(From the comments, there is mention of STUN and stuff; that's an interesting if difficult angle to pursue.)
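One practical point if you do settle on TCP for frequent small positional updates: Nagle's algorithm batches tiny writes and adds latency, so it is usually worth disabling. A minimal plain-.NET sketch (the endpoint and the 12-byte position format are placeholders; Silverlight's restricted socket stack differs, so treat this as illustrative only):

    using System;
    using System.Net.Sockets;

    class PositionSender
    {
        static void Main()
        {
            var client = new TcpClient();
            client.NoDelay = true;                       // disable Nagle so small updates go out immediately
            client.Connect("game.example.com", 9000);    // placeholder endpoint

            // Example: a 12-byte position update (x, y, z as floats).
            var update = new byte[12];
            BitConverter.GetBytes(12.5f).CopyTo(update, 0);
            BitConverter.GetBytes(0.0f).CopyTo(update, 4);
            BitConverter.GetBytes(-3.25f).CopyTo(update, 8);

            client.GetStream().Write(update, 0, update.Length);
        }
    }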
It depends on how close to real time you need to be. For example, if you try to make a space battle where everyone is close together but moving at high speed, you may find that the milliseconds of difference matter; but if you are doing something like an auto racing game it won't make any difference, so TCP is fine.
So, try it, get some numbers and decide if it is acceptable.
The bigger problem will be the difference in bandwidth: if one person is playing over a really slow connection and everyone else is on very fast connections, that slower player will be a problem. You may need to scale the updates to the slowest connection, and you may find that TCP/UDP issues are not enough of a concern, as the difference in connection speeds is a far bigger problem.
So, test with various connection speeds, with differing numbers of users, each with their own connection speeds, and see if, as one user, the game is still enjoyable.
UPDATE
It is not bandwidth that will be the concern but latency, as was pointed out in a comment. I picked the wrong term: several players might respond quickly and be close to real time, but one user may be much slower - perhaps on a congested network or a slow computer - and may only send updates every 1000 ms, whereas everyone else is doing it every 100 ms.

Running graphics display on multiple systems, keeping synched

I have a series of systems on a LAN running a synchronized display routine. For example, think of a chorus line. The program they run is fixed. I have each "client" download the entire routine and then contact the central "server" at fixed points in the routine for synchronization. The routine itself is mundane, with perhaps 20 possible instructions.
Each client runs the same routine, but they can be doing completely different things at any one time. One part of the chorus line can be kicking left, another part kicking right, but all in time with each other. Clients can join and drop out at any time, but they're all assigned a part. If no-one is there to run the part, it just doesn't get run.
This is all coded in C# .Net.
The client display is a Windows Forms application. The server accepts TCP connections and then services them round-robin, keeping a master clock of what's going on. The clients send a signal that says "I've reached sync-point 32" (or 19, or 5, or whatever), wait for the server to acknowledge, and then move on. Or the server can say "No, you need to start at sync-point 15".
This all works great. There is a minor bit of delay between the first and last clients to hit a sync-point, but it's hardly noticeable. It ran for months.
Then the Specification changed.
Now the clients need to respond to near-real-time instructions from the server -- it's no longer a pre-set dance program. The server is going to be sending instructions out, and the dance program is made up on the fly. I get the fun job of redesigning the protocol, the servicing loops, and the programming instructions.
My toolkit includes anything in a standard .Net 3.5 toolbox. Installing new software is a pain in the arse, since so many systems (clients) can be involved.
I'm looking for suggestions on keeping the clients synced (some sort of latching system? UDP? broadcast?), on distributing the "dance program", and on anything that might make this easier than a traditional client/server TCP arrangement.
Keep in mind that there are time/speed limitations as well. I could put the dance program in a network database, but I'd have to shove instructions in fairly quickly, and there'd be a lot of readers using a rather thick protocol (DBI, SqlClient, etc.) to fetch a small bit of text. That seems overly complex. And I still need something to keep them all displaying in sync.
Suggestions? Opinions? Wild-ass speculation? Code examples?
PS: Answers may not get marked as "correct" (since this isn't a "correct" answer), but +1 votes for good suggestions for sure.
I did something similar (quite a while back) with synchronizing a bank of 4 displays, each run by a single system, receiving messages from a central server.
The architecture we finally settled on, after a fair amount of testing, involved having one "master" machine. In your case, this would mean having one of your 20 clients act as the master and connect to the server via TCP.
The server would then send the entire series of commands for the routine through to that one machine.
That machine then used UDP to broadcast real-time instructions to each of the other machines (the 19 other clients on its LAN) to keep their displays up to date. We used UDP for a couple of reasons: there was lower overhead involved, which helped keep total resource usage down, and since you're updating in real time, if one or two "frames" were out of sync it was never noticeable - at least not noticeable enough for our purposes (a human sitting and interacting with the system).
The key to this working smoothly, though, is having an intelligent communication scheme between the main server and the "master" machine - you want to keep the bandwidth as low as possible. In a case like yours, I'd probably come up with a single binary blob holding the current instruction set for the 20 machines in its smallest form (maybe something like 20 bytes, or 40 bytes if you need it, etc.). The "master" machine would then worry about translating this out to the other 19 machines and itself.
There are some nice things about this: the server has a much easier time transmitting to one machine in the cluster instead of every machine in the cluster. This let us, for example, have one single, centralized server "drive" multiple clusters efficiently, without ridiculous hardware requirements anywhere. It also keeps the client code very, very simple: it just has to listen for a UDP datagram and do whatever it says - in your case, it sounds like it would be one of 20 commands, so the client stays very simple.
The "master" machine is the trickiest part. In our implementation we actually ran the same client code on it as on the other 19 (as a separate process), plus one "translation" process that took the blob, broke it into 20 pieces, and transmitted them. It was fairly simple to write, and it worked very well.
