SharpSNMP max-repetitions increase causes buffer size exception through GPRS - c#

I am trying to send SNMP requests to a remote location.
I am using the SharpSNMP 8.5.0 library and the Snmp.BulkWalk example from a Code Project post (here).
In the example they use 10 as max-repetitions, and with sniffing software I noticed that this creates multiple datagram packets to walk the subtree; I am getting about 120 result packets back every time. So I tried a higher max-repetitions value and saw the packet count go down, to the point where I can get all the data in one packet. Now I have another problem: the remote device is reached over GPRS, and when I snmpwalk it from the server (over GPRS) I get a timeout or a buffer-out-of-size error. When I run the same solution on my local PC and access the remote device through my router (no GPRS involved), I get no errors and all the data arrives!
Can someone explain this behavior? Does it have to do with a GPRS limitation? Is GPRS unreliable? Or is it a network limitation on the server?
(The MTU on the server is 1500.) Does anyone have experience with best practices and the optimal packet size that can be sent in SNMP UDP datagrams?

Though I am the author of that library, I could not answer the GPRS part, as I am not a mobile network expert.
What I could answer is the packet number part, which is relatively simple if you check out the definition of "max-repetitions",
https://www.webnms.com/snmp/help/snmpapi/snmpv3/v2c/maxrepetition.html
By setting a larger value for this parameter, a single packet can contain more results, so obviously fewer packets are needed.
I used 10 in that Code Project article, because it was just an example. You might see from the link above that other libraries might use 50 as the default.
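For reference, here is a rough sketch of how a larger max-repetitions value might be passed in, based on that Code Project sample. The exact Messenger.BulkWalk overload varies between SharpSnmpLib versions (newer ones also take an SNMPv3 context name), and the address, community and OID below are placeholders:

```csharp
using System.Collections.Generic;
using System.Net;
using Lextm.SharpSnmpLib;
using Lextm.SharpSnmpLib.Messaging;

var results = new List<Variable>();

// Walk a subtree; 50 is an arbitrary, larger max-repetitions value.
// Fewer, larger GetBulk responses come back as this number grows.
Messenger.BulkWalk(
    VersionCode.V2,
    new IPEndPoint(IPAddress.Parse("192.168.1.2"), 161), // placeholder agent address
    new OctetString("public"),                           // community string
    new ObjectIdentifier("1.3.6.1.2.1.2"),               // placeholder subtree
    results,
    10000,                                               // timeout in milliseconds
    50,                                                  // max-repetitions
    WalkMode.WithinSubtree,
    null,                                                // privacy provider (SNMP v3 only)
    null);                                               // report message (SNMP v3 only)
```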

Regarding best practices for SNMP packet size, I've always been told that you should avoid exceeding the network MTU. In other words, set max-repetitions so that the Ethernet frames don't regularly exceed 1500 bytes. (Of course, this assumes that the size of your table cells is predictable.)
While using larger packets should work on most well-configured networks, it's advisable to avoid fragmented packets on the network: packet re-assembly can create extra overhead in the networking equipment. And if the results are going to be spread over several packets anyway, the drawback of a few more back-and-forth requests is not that bad.
For example, Cisco equipment seems to follow this best practice, and it's recommended in a Microsoft article.
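As a rough back-of-envelope example (assumed numbers, since varbind sizes depend entirely on your OIDs and value types): if each varbind encodes to roughly 30-40 bytes and you allow roughly 100 bytes for the IP, UDP and SNMP message headers, a 1500-byte MTU leaves room for about 35-45 varbinds per response, so a max-repetitions in the 30-40 range would typically keep each GetBulk response within a single frame.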
(BTW, next time you have two separate questions, consider posting them as two questions!)

Related

How to preserve the message sent order at TCP server side with multiple clients

I have two PCs connected by a direct Ethernet cable over a 1 Gbps link. One of them acts as a TCP server and the other acts as TCP client(s). Now I would like to achieve the maximum possible network throughput between the two.
Options I tried:
Creating multiple clients on PC-1 with different port numbers, connecting to the TCP server. The reason for creating multiple clients is to increase the network throughput, but here I have an issue.
I have a buffer queue of events to be sent to the server. There will be multiple messages with the same event number. The server has to acquire all the messages and then sort them by event number. Each client dequeues a message from the concurrent queue and sends it to the server; after sending, the client repeats the same. I have put a constraint on the client side that Event-2 will not be sent until all messages labelled Event-1 have been sent, so the send order of events is correct. The TCP server continuously receives from all the clients.
Now let's come to the problem:
The server is receiving the data in a somewhat random order, as shown in the image. The randomness between two successive events gets worse over the course of the acquisition. I suspect this random behaviour is due to the parallel worker threads executing the IO completion callbacks.
Technology used: F# async sockets with SocketAsyncEventArgs
Solution I tried: instead of allowing receives from all clients at the server side, I polled for the next available client with pending data. That ensured the correct order, but its performance is not at all comparable to the earlier approach.
I want to receive in the same order (or nearly the same order, but not non-deterministic randomness) as the clients sent. Is there any way I can preserve the order and also maintain good throughput? What are the best ways to achieve nearly 100% network throughput between two PCs?
As others have pointed out in the comments, a single TCP connection is likely to give you the highest throughput, if it's TCP you want to use.
You can possibly achieve slightly (really marginally) higher throughput with UDP, but then you have the hassle of recreating all the goodies TCP gives you for free.
If you want bidirectional high volume high speed throughput (as opposed to high volume just one way at a time), then it's possible one connection for each direction is easier to cope with, but I don't have that much experience with it.
Design tips
You should keep the connection open. The client will need to ask "are you still there?" at regular intervals if no other communication goes on. (On second thought, I realize that the only purpose of this is to allow quick response and the possibility for the server to initiate a message transaction. So I revise it to: keep the connection open for a full transaction at least.)
Also, you should split up large messages - messages over a certain size. Keep the number of bytes you send in each chunk to a round power-of-two size, typically 8K, 16K, 32K or 64K on a local network; experiment with sizes. The suggested maximum sizes have been optimal since Windows 3 at least. You need some sort of protocol with a chunk consisting of a fixed header (typically a magic number for checking and resync, a chunk number also for checking and for analysis, and a total packet length) followed by the data, as in the sketch below.
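A minimal C# sketch of such a chunk header, assuming a layout of my own choosing (the magic value and field order are illustrative, not a standard):

```csharp
using System.IO;

// Layout: [magic 4B][chunk number 4B][payload length 4B][payload ...]
static class ChunkFraming
{
    const uint Magic = 0xC0DEC0DE; // arbitrary marker used to detect loss of sync

    public static byte[] WriteChunk(uint chunkNumber, byte[] payload)
    {
        using var ms = new MemoryStream();
        using var writer = new BinaryWriter(ms);
        writer.Write(Magic);          // resync / sanity check
        writer.Write(chunkNumber);    // chunk number, for checking and analysis
        writer.Write(payload.Length); // total payload length
        writer.Write(payload);        // the data itself
        writer.Flush();
        return ms.ToArray();
    }

    public static (uint ChunkNumber, byte[] Payload) ReadChunk(BinaryReader reader)
    {
        if (reader.ReadUInt32() != Magic)
            throw new InvalidDataException("Lost synchronisation with the stream.");
        uint chunkNumber = reader.ReadUInt32();
        int length = reader.ReadInt32();
        return (chunkNumber, reader.ReadBytes(length));
    }
}
```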
You can possibly further improve throughput with compression (usually low quick compression) - it depends very much on the data, and whether you're on a fast or slow network.
Then there's a hassle one typically runs into - problems with the Nagle algorithm - and I no longer remember all the details. I believe I used to overcome it by sending an acknowledgement in return for each chunk sent, and I suspect that by doing that you satisfy the design requirements and so avoid waiting for the last bytes to come in. But do google this.
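If Nagle turns out to be the issue, .NET lets you turn it off per socket; a minimal sketch (do measure before and after, since it trades bandwidth efficiency for per-message latency):

```csharp
using System.Net.Sockets;

var client = new TcpClient();
client.NoDelay = true; // disable Nagle's algorithm on the TcpClient...

// ...or equivalently on the underlying Socket:
client.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
```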

Maximum recommended socket sending capacity

I'm creating a program (it's a game) which will have multiplayer. It will need both client-to-server and server-to-client communication, with the server being run on one of the players' machines, not on a separate machine. I'll be using C# sockets, and I need to know the maximum amount of data they can handle. For instance, I can assume my data will be between 256 B and 128 KB. Am I going to have any trouble sending that through the sockets? (There will never be more than 7 clients connected to the server-handler machine.)
EDIT
After reading some other posts, I don't think it will be a problem. Others have said that sending data upwards of 1MB doesn't cause a problem, so I don't think I'll have an issue.
P.S.
If this is not the case, please let me know. Thanks!

UDP File Transfer - Yes, UDP

I have a requirement to create a UDP file transfer system. I know TCP is guaranteed and much more reliable, but I need to transfer huge files between locations and I think the speed advantage in this project outweighs the benefits using TCP. I’m just starting this project, but would like some guidance if anyone has done this before. I will be writing both sides (client and server) so I don’t need to worry about feature limitations in other products.
In a nutshell I need to:
- Take large files and send them in chunks
- Be able to throttle bandwidth from the client
- Create some kind of packet numbering system for errors, retransmissions and assembling files by chunk on the server (yes, all the stuff we get from TCP for free :-)
- Configurable datagram size – I think some firewalls complain if they get too big?
- Anything else I may be missing
I’m starting this journey using UdpClient and would like to write this app in C#. Any words of wisdom (other than to use TCP)?
It’s been done with huge success. We used to use RocketStream.com, but they sold their product to another company for internal use only. We typically get speeds that are 30X faster than FTP or raw TCP byte transfers.
In regards to:
Configurable datagram size – I think some firewalls complain if they get too big?
One datagram can be up to 65,535 bytes; considering the IP and UDP header information you'll end up with 65,507 bytes for the payload. But you have to consider how all the devices along your network path are configured. Typically most devices are set to an MTU of 1500 bytes, so that will usually be your limit "on the Internet". If you set up a dedicated network between your locations you can increase the MTU on all devices.
Further, in regards to:
Create some kind of packet numbering system for errors, retransmissions and assembling files by chunk on the server (yes, all the stuff we get from TCP for free :-)
I think the best thing in your case would be to implement an application-level protocol, for example:
- a 32-bit (4-byte) sequence number
- a 4-byte CRC32 checksum
- any bytes left used for data
Hope this gives you a bit of direction.
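A minimal sketch of that header layout in C#: a 4-byte sequence number, a 4-byte CRC32, then the data. The CRC32 here comes from the System.IO.Hashing NuGet package (an assumption on my part; any CRC32 implementation would do):

```csharp
using System;
using System.IO.Hashing; // System.IO.Hashing NuGet package, for Crc32

// Datagram layout: [sequence 4B][CRC32 of payload 4B][payload ...]
static class UdpFraming
{
    public static byte[] Pack(uint sequence, byte[] payload)
    {
        var datagram = new byte[8 + payload.Length];
        BitConverter.GetBytes(sequence).CopyTo(datagram, 0); // 4-byte sequence number
        Crc32.Hash(payload).CopyTo(datagram, 4);             // 4-byte CRC32 checksum
        payload.CopyTo(datagram, 8);
        return datagram;
    }

    public static bool TryUnpack(byte[] datagram, out uint sequence, out byte[] payload)
    {
        sequence = BitConverter.ToUInt32(datagram, 0);
        uint expectedCrc = BitConverter.ToUInt32(datagram, 4);
        payload = new byte[datagram.Length - 8];
        Array.Copy(datagram, 8, payload, 0, payload.Length);
        // false => corrupted datagram: drop it or request a resend
        return BitConverter.ToUInt32(Crc32.Hash(payload), 0) == expectedCrc;
    }
}
```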
Edit:
From experience I can tell you UDP is about 10-15% faster than TCP on dedicated and UDP-tuned networks.
I'm not convinced the speed gain will be tremendous, but an interesting experiment. Such a protocol will look and behave more like one of the traditional modem based protocols, and probably ZModem is one of the better examples to get some inspiration from (implements an ack window, adaptive block size, etc).
There are already some people who tried this, check out this site.
That would be cool if you succeed.
Don't go into it without Wireshark. You'll need it.
For the algorithm, I guess that you have pretty much the idea of how to start. Maybe some pointers:
start with an MTU that is common to both endpoints, and use packets of only that size, so you'll have control over packet fragmentation (if you're coming down from TCP, I assume it's for more control over the low-level stuff).
you'll probably want to look into STUN or TURN for punching the holes into NATs.
look into ZModem - that also has a nostalgic value :)
since you want to squeeze the maximum from your link, try to put as much as you can into the 'control packets' so you don't waste a single byte.
I wouldn't use any CRC at the packet level, because I guess the networks underneath are handling that stuff.
I just had an idea...
break up a file in 16k chunks (length is arbitrary)
create HASH of each chunk
transmit all hashes of the chunks, using any protocol
at receiving end, prepare by hashing everything you have on your hard drive, network, I mean everything, in 16k chunks
compare received hashes to your local hashes and reconstruct the data you have
download the rest using any protocol
I know that I'm 6 months behind the schedule, but I just couldn't resist.
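A minimal sketch of the per-chunk hashing step of that idea, assuming 16K chunks and SHA-256 (both arbitrary choices here; names are illustrative):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class ChunkHasher
{
    public const int ChunkSize = 16 * 1024;

    // Hash a file in fixed-size chunks so sender and receiver can compare
    // hash lists and transfer only the chunks the receiver doesn't have.
    public static IEnumerable<(long Offset, byte[] Hash)> HashChunks(string path)
    {
        using var sha = SHA256.Create();
        using var stream = File.OpenRead(path);
        var buffer = new byte[ChunkSize];
        long offset = 0;
        int read;
        // Assumes the stream fills the buffer except for the final chunk,
        // which is what both ends must agree on for the hashes to line up.
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            yield return (offset, sha.ComputeHash(buffer, 0, read));
            offset += read;
        }
    }
}
```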
Others have said more interesting things, but I would like to point out that you need to make sure you use a good compression algorithm. That will make a world of difference.
Also I would recommend validating your assumptions as to the speed improvement possibility, make a trivial system of sending data (not worrying about loss, corruption, or other problems) and see what bandwidth you get. This will at least give you a realistic upper bound for what can be done.
Finally consider why you are taking on this task? Will the speed gains be worth it after the amount of time spent developing it?

Tcp Reliability versus Udp Burdens for serious, high-performance server

Speed, optimization, and scalability are the typical comparisons between the Udp and Tcp protocols. Tcp touts reliability with the disadvantage of a little extra overhead, but speed is good to excellent. Once a Tcp socket is instanced, keeping the socket open requires some overhead. But compared to the oft-described burdens of Udp, which protocol actually has more overhead? I've also heard that there are scalability issues with Tcp... yet the Internet (web pages/servers) runs on Tcp - so what is it about Tcp that inhibits scalability?
Okay...so Udp doesn't require that overhead of keeping a connection open. But, it requires that you write extra methods to ensure all of the packet gets there, hopefully in the order that you want it received. If a packet isn't received in full, then you have to tell the client or server to resend. And you also have to keep some sort of message collection for partial packets, rebuild the partial messages, and check for a complete message before the message can finally be processed. Not to mention if the second part of a message never makes it, you have to either say resend the entire thing, or resend the part we are missing, or whatever.
Basically, my questions are:
- Why would I choose Udp over Tcp for a serious, high-performance server with the added "overhead" of message checking and manual ACK versus the "overhead" of a continuous stream?
- If Tcp is good enough for the likes of World of Warcraft, why isn't Tcp more widely accepted as the protocol to use for a game server?
Note: I am not opposed to implementing Udp options for a server. We are using C# on .Net 3.5 framework. So I would also be interested in the best practices for dealing with Udp burdens. I am also using the asynchronous methods at the socket level rather than using TcpListener, TcpClient, etc. etc.
Well, I would recommend reading up some more. There are plenty of places to look at the pros and cons of TCP vs. UDP and vice versa; here are a few:
What Are The Advantages Of Using TCP Over UDP?
When should I use UDP instead of TCP?
TCP and UDP
What are the advantages of UDP over TCP?
However, this link may interest you the most, as it is directly about networked game programming:
Gaffer on Games - UDP vs. TCP
If I were to quote something small:
The decision seems pretty clear then, TCP does everything we want and its super easy to use, while UDP is a huge pain in the ass and we have to code everything ourselves from scratch. So obviously we just use TCP right?
Wrong.
Using TCP is the worst possible mistake you can make when developing a networked game! To understand why, you need to see what TCP is actually doing above IP to make everything look so simple!
I still recommend doing your own research on the matter, though, and making sure which of the protocols suits your needs at the end of the day. That being said, it does seem to be the case that the majority of games use UDP for their data. Anything that updates the entire state continuously does not need the overhead of guaranteed packet delivery.
First, I'll just paraphrase Stevens from Unix Network Programming Section 22.4 "When to Use UDP instead of TCP":
He basically says the following:
UDP is the only option for broadcast / multicast - so you have to use it there.
UDP can be used for simple request / reply apps, but you have to add your own error detection, meaning at least acks, timeouts and retransmission.
UDP should not be used for bulk data transfer (file transfers) since you would have to build in all the functionality already in TCP to make it work right.
UDP should be used for real time data where speed of delivery is most important and some data loss is not an issue such as real time sensor data, live multimedia streams, real time stock quotes, etc.
The answer to your first question depends very much on your definition of "high-performance". If your primary concern is low latency, i.e. the individual data packets / requests arriving as quickly as possible, then UDP would be the way to go. There are two primary reasons for this. Assuming packets / requests are fairly independent of each other, using TCP would introduce a problem known as head-of-line blocking.
Let's say you send two independent packets / requests, first A then B. Since TCP is stream based, if A gets lost in the network and needs to be retransmitted, then even if B has already arrived successfully it can't be delivered to the application by the stack until A arrives, introducing unnecessary latency. Not only that, but until A arrives, B can't be acknowledged by the stack, which might cause B to also be retransmitted, causing needless network congestion.
One way around this problem is to use separate connections for each request, however this also introduces latency and hogs system resources. UDP bypasses all these problems.
Another issue in high performance ( low latency ) servers is the Nagle Algorithm which can add significant latency in TCP communications.
The answer to your second question is that WoW probably sends streams of data, not independent request / reply pairs. Also, some of the latency of TCP can be removed by disabling the Nagle algorithm. If they do use some request / reply communications they may have simply made a design decision that reliability is more important to them than latency.
Define "serious high performance" - how many concurrent connections are you talking about and how much data is flowing?
Take a look at the answers to this question What do you use when you need reliable UDP? which list some of the reliable protocols that have already been built on UDP. You might find one that works for your situation, or you may at least find some useful ideas.
The key to using UDP effectively here is to have some level of reliability and some level of unreliability and you get more of an advantage the more each datagram is able to be handled independently of others. The advantage over TCP is that you get to act on each datagram and decide if you can use it as it arrives. This is why it works for action games.
So, IMHO, if you need 100% reliability AND in order delivery then go with TCP; don't try and reimplement TCP in UDP.
It's Reliability vs Performance.
FPS games don't require -all- the packets to reach the destination, to reach it in order, to be exceptionally big, or to assure big throughput. They only require the packets to reach the server AS SOON AS POSSIBLE. This is the ultimate priority and overhead of TCP is simply an unnecessary burden.
WoW, in its "not quite realtime" communication and often tons of data to transmit (in crowded areas), may have to deal with packets exceeding MTU (requiring fragmentation) and requires reliability (fewer bigger packets = packet lost hurts more). So its choice of TCP is logical. Same would go for most turn-based strategy games and the like. In games where the player with ping of 30ms beats the player with ping 50ms UDP is the king.
I think the biggest part of TCP/IP that inhibits scalability is that it maintains a buffer on every incoming / outgoing connection, up to basically the size of the window. So if I'm talking to a high-latency but high-throughput client, I have to keep all sent packets in the buffer until I receive an ack. For a few connections this is fine, but for handling 100K connections it can start to be problematic overhead. On the receiving end, if a packet is dropped, it will likewise buffer all new packets received until the required one is retransmitted.
If you're going to implement retransmission, you need to do the same thing, and hence will have the same overhead. However, UDP does give you an advantage if you know the end-to-end link speeds, or if certain messages can be delivered out of order, or certain messages don't need retransmission. Keeping with the gaming scenario:
packet 1 = move to 1,1
packet 2 = shoot
packet 3 = move to 2,2
In most game designs, if packet 1 is lost but packet 3 is received, packet 1 is no longer important because it contains out-of-date information anyway. However, you could decide that packet 2 is important, so if it's not acked, send a retransmission.
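A tiny sketch of that "latest state wins" handling, with hypothetical names; the point is just that stale position updates are dropped by sequence number, while anything important would go through an ack path instead:

```csharp
// Hypothetical receiver for positional updates: newest sequence number wins.
// (Sequence-number wraparound handling is omitted for brevity.)
class PositionReceiver
{
    private uint _lastApplied;

    public void OnMoveDatagram(uint sequence, float x, float y)
    {
        if (sequence <= _lastApplied)
            return;               // e.g. packet 1 arriving after packet 3: out of date, drop it
        _lastApplied = sequence;
        ApplyMove(x, y);          // update the game world with the newest position
    }

    private void ApplyMove(float x, float y)
    {
        // game-specific logic goes here
    }
}
```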
If you need high throughput and connect two servers directly with 1000 Mbps Ethernet, TCP/IP will take a while to scale up and has additional overhead, and will likely never achieve a true gigabit connection due to the congestion-avoidance mechanisms. However, you know it's 1 Gbps, so you can set up your UDP to transmit at up to 1 Gbps (minus overhead) yourself.
To answer your questions more directly:
If you are going to ack every packet anyway, there isn't a massive benefit to UDP, other than that you can process some messages while waiting for retransmission (unless you also want in-order delivery).
UDP isn't considered for game servers as much mainly because of the scenario above, and because of real-time combat systems such as first-person shooters, where a message can be dropped and the next message to arrive invalidates the dropped one anyway. World of Warcraft can get away with using TCP since it doesn't have to be as precise with timing, and likely has some good logic that makes it harder for you to tell the difference anyway. The combat system simply doesn't require the speed.
I'd also contend that some of the justification is holdover from years ago, when everyone had less-reliable, and slower Internet connections. TCP is also more lenient for sharing the network, so if there's a lot going on, it will slow down so everyone gets a share of the connection (congestion avoidance).
TCP/IP is a protocol designed by people far smarter than I over years of research. Tuning in the last several years has allowed it to perform better with the faster and faster average network speeds we are seeing, and doesn't require a great understanding to use.
However, replacing this with UDP, does require a significant understanding of networking. I've seen badly written UDP programs saturate 1Gbps links and kill all traffic on the link, because they implemented a rather naive retransmission algorithm.
Here's a list of things TCP/IP can now do that you'd lose by going UDP:
- In-order arrival to your program
- retransmission (Now with Fast retransmit, selective acknowledgement, and other features)
- Maximum segment size
- Path MTU Discovery
- Black Hole Detection (extension of Path MTU)
- Congestion avoidance
Because of this, I'd highly recommend sticking with TCP/IP if it suits your needs.
Also, not to nitpick, but your comment about the Internet running on TCP/IP is wrong; there are in fact dozens of Internet-routable protocols, check them out here. I think you were referring to web pages and web servers all running on top of TCP/IP. Which, again, is great for the web, where we humans won't notice a delay as long as the page shows up correctly. Even for TCP/IP, there is some argument that it isn't aggressive enough for the web: Google thinks tcp/ip should be more aggressive by default

To send an image every 50 ms., should I use TCP or UDP?

I am building a C# application, using the server-client model, where the server is sending an image (100kb) to the client through a socket every 50ms...
I was using TCP, but besides the overhead of this protocol, sometimes the client ended up with more than one image on the socket, and I still haven't thought of a clever mechanism to split the bytes of each image (actually, I just need the most recent one).
I tried using UDP, but came to the conclusion that I can't send 100 KB datagrams, only 64 KB ones. And even so, I shouldn't use more than 1500 bytes; otherwise the packet would be fragmented along the way and the chances of losing parts of it would be greater.
So now I'm a bit confused. Should I continue using TCP and put some escaping bytes in the end of each image so the client can separate them? Or should I use UDP, send dgrams of 1500 bytes and come up with a mechanism for ordering and recovering?
The key goal here is transmitting the images very fast. I don't mind losing some on the way as long as the client keeps receiving newer ones.
Or should I use another protocol? Thanks in advance!
You should consider using Real-time Transport Protocol (aka RTP).
The underlying IP protocol used by RTP is UDP, but it has additional layering to indicate time stamps, sequence order, etc.
RTP is the main media transfer protocol used by VoIP and video-over-IP systems. I'd be quite surprised if you can't find existing C# implementations of the protocol.
Also, if your image files are in JPEG format you should be able to produce an RTP/MJPEG stream. There are quite a few video viewers that already have native support for receiving and displaying such a stream, since some IP webcams output in that format.
First of all, your network might not be able to handle this no matter what you do, but I would go with UDP. You could try splitting up the images into smaller bits, and only display each image if you get all the parts before the next image has arrived.
Also, you could use RTP as others have mentioned, or try UDT. It's a fairly lightweight reliable layer on top of UDP. It should be faster than TCP.
I'd recommend using UDP if:
- your application can cope with an image or small burst of images not getting through, and
- you can squeeze your images into 65,535 bytes.
If you're implementing a video conferencing application then it's worth noting that the majority use UDP.
Otherwise you should use TCP and implement an approach to delimit the images. One suggestion in that regard is to take a look at the RTP protocol; it's specifically designed for carrying real-time data such as VoIP and video.
Edit: I've looked around quite a few times in the past for a .NET RTP library, and apart from wrappers for non-.NET libraries or half-completed ones I did not have much success. I just had another quick look, and ConferenceXP looks a bit more promising.
The other answers cover good options re: UDP or a 'real' protocol like RTP.
However, if you want to stick with TCP, just build yourself a simple 'message' structure to cover your needs. The simplest? length-prefixed. First, send the length of the image as 4 bytes, then send the image itself. Easy enough to write the client and server for.
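A minimal sketch of that length-prefixed framing (names are illustrative; error handling kept to the essentials):

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// Frame layout: [image length 4B][image bytes ...]
static class ImageFraming
{
    public static void SendImage(NetworkStream stream, byte[] image)
    {
        stream.Write(BitConverter.GetBytes(image.Length), 0, 4); // 4-byte length prefix
        stream.Write(image, 0, image.Length);                    // then the image itself
    }

    public static byte[] ReceiveImage(NetworkStream stream)
    {
        var header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    // TCP is a stream, so a single Read may return fewer bytes than asked for.
    private static byte[] ReadExactly(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```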
If the latest is more important than every picture, UDP should be your first choice.
But if you're dealing with frames larger than 64K you'll have to do some form of re-framing yourself. Don't be too concerned with fragmented frames; either you deal with that or the lower layer will, and you only want complete pictures anyway.
What you will want is some form of encapsulation with timestamps/sequences.
