How to safely send multicast packets without interference? (C#)

Sorry if the title is hard to understand, but I don't really know how to put it briefly. Let me explain.
I am currently developing a LAN file-sharing application as a university project.
Everyone running the application has to notify the others that they are available for file transfer. My idea is to join a multicast group on launch and send a sort of "keep alive" packet to the group: by keep alive I mean a packet that tells all receivers that the sender is still available for transferring files, should other users want to. So, for example, I'm running the app and it sends this packet every 50 s or so, and other people on my network running my application receive it and keep me in their list of available peers.
This is what the client does (just an example, not actual code); at this point in the app I have already joined the multicast group and set the destination endpoint:
// Sending first data, e.g.: my ip address...
client.Send(buffer, buffer.Length, MulticastAddressEndPoint);
// Sending other data, e.g.: my real name...
client.Send(buffer2, buffer2.Length, MulticastAddressEndPoint);
// Sending other data, e.g.: my username...
client.Send(buffer3, buffer3.Length, MulticastAddressEndPoint);
From the official documentation I read that Send:
Sends a UDP datagram to the host at the specified remote endpoint.
So I'm guessing that I am sending 3 datagrams.
My listener thread is something like this:
IPEndPoint from = new IPEndPoint(IPAddress.IPv6Any, port);
while (true)
{
    // Receive the ip
    byte[] data = client.Receive(ref from);
    // Receive the first name
    data = client.Receive(ref from);
    // Receive the username
    data = client.Receive(ref from);
}
Now for the real question:
Let's suppose two people send these 3 packets at exactly the same time (with their own values of course, so different IP addresses, etc.), no packet is dropped, and each sender's packets are delivered in the correct sequence (first IP, then name, then username). The question is: I have absolutely NO guarantee that I will receive the packets in this order:
packet1_A | packet2_A | packet3_A | packet1_B | packet2_B | packet3_B
instead of this
packet1_A | packet1_B | packet2_A | packet2_B | packet3_A | packet3_B
am I right?
The only thing I can do is pack all the information into one single byte array and send that, right? This seems the most reasonable thing to do, but what if my information exceeds the 1500-byte Ethernet MTU? My datagram will be sent as multiple fragments, so would I experience the same "interference", or will the receiving end recognize that the fragments belong to the same datagram, reassemble them, and deliver the whole datagram to my application?
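One way to do the packing, as a minimal sketch (the helper name and field layout are illustrative, not from the question): BinaryWriter length-prefixes each string, so a single datagram can carry all three fields and the receiver can split them apart again with BinaryReader.
using System.IO;
using System.Text;

static class PresencePacket
{
    // Pack all fields into one buffer; BinaryWriter.Write(string) writes a
    // length prefix before each string, so BinaryReader.ReadString can split
    // the fields apart again on the receiving side.
    public static byte[] Pack(string ipAddress, string realName, string userName)
    {
        using (var ms = new MemoryStream())
        using (var writer = new BinaryWriter(ms, Encoding.UTF8))
        {
            writer.Write(ipAddress);
            writer.Write(realName);
            writer.Write(userName);
            writer.Flush();
            return ms.ToArray();
        }
    }
}

// Usage: one Send call per keep-alive instead of three.
// byte[] payload = PresencePacket.Pack(myIp, myRealName, myUserName);
// client.Send(payload, payload.Length, MulticastAddressEndPoint);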

Related

UDP Multicast Sending (using JoinMulticastGroup) in C#

I know there are plenty of examples around the web regarding UDP multicasting in C#. This is more to get a clarification on the need to include the method JoinMulticastGroup when sending only. Most code examples I have come across nearly always include this method as part of the initialisation code. But surely if the program or class is only ever sending, then it is not required?
For example, in another Stack Overflow question someone uses this code:
public void SendMessage(string message)
{
    var data = Encoding.Default.GetBytes(message);
    using (var udpClient = new UdpClient(AddressFamily.InterNetwork))
    {
        var address = IPAddress.Parse("224.100.0.1");
        var ipEndPoint = new IPEndPoint(address, 8088);
        udpClient.JoinMulticastGroup(address);
        udpClient.Send(data, data.Length, ipEndPoint);
        udpClient.Close();
    }
}
Is the line udpClient.JoinMulticastGroup(address); not actually redundant in this case?
JoinMulticastGroup is indeed for enabling the socket to receive multicast packets destined for that group address. If your client is only sending, then it's not strictly necessary.
However, it doesn't hurt, and it does make it clear from the code that you're "part of" that multicast group. That way, if the requirements change in the future and this application also needs to receive packets, it will already be a member of the group.
A source host sends data to a multicast group by simply setting the destination IP address of the datagram to be the multicast group address. Any host can become a source and send data to a multicast group. Sources do not need to register in any way before they can begin sending data to a group, and do not need to be members of the group themselves.
-- metaswitch.com
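For comparison, a minimal send-only sketch (the class name is illustrative; the group address and port are the same example values as above) that omits the join entirely:
using System.Net;
using System.Net.Sockets;
using System.Text;

class MulticastSender
{
    public static void SendMessage(string message)
    {
        byte[] data = Encoding.UTF8.GetBytes(message);
        using (var udpClient = new UdpClient(AddressFamily.InterNetwork))
        {
            // Sending to a multicast group only requires the group address as
            // the destination; joining is only needed in order to receive.
            var ipEndPoint = new IPEndPoint(IPAddress.Parse("224.100.0.1"), 8088);
            udpClient.Send(data, data.Length, ipEndPoint);
        }
    }
}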

Cheapest way to send small strings of data (30~40 bytes) over the internet?

I need to send small strings of data every minute or so from 100 or so Android cellphones to some kind of server. The problem is that each MB I use costs around 1.5 dollars, so the cost scales quickly if the data size is too big.
I have tried using POST, but it used 400 bytes of data per string sent. I have also tried making a C# socket server where a client connects, sends the data and disconnects, but it still used 400 bytes of data, perhaps a bit more... how is this possible? Could NetBalancer be measuring it wrong? I used client.Send("string") in C#.
Will it be any different if I do it from a cellphone on mobile data? Right now I'm sending to another laptop on a LAN.
I have also tried FTP, and it was too bloated as well.
I need the network consumption of each string sent to be around 100-120 bytes or so. Is this even possible? What tools could I use?
---Update---
Here is the C# client code (it will eventually be Java, if I can figure out how to optimize the size):
// Connect to the TCP server and send one small position string.
Int32 port = 10000;
byte[] bytes = Encoding.ASCII.GetBytes("0001#37.12489#-106.35871");
TcpClient client = new TcpClient("192.168.15.16", port);
int bytesSent = client.Client.Send(bytes);
client.Close();
Here is the PHP socket server running on XAMPP. I could write it in any language, as I am probably going to have to use it as a relay to a server that doesn't allow sockets.
<?php
error_reporting(E_ALL);
/* Allow the script to hang around waiting for connections. */
set_time_limit(0);
/* Turn on implicit output flushing so we see what we're getting
 * as it comes in. */
ob_implicit_flush();
$address = 'xxx.xxx.xx.xx';
$port = 10000;
if (($sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP)) === false)
{
    echo "socket_create() failed: reason: " . socket_strerror(socket_last_error()) . "\n";
}
if (socket_bind($sock, $address, $port) === false)
{
    echo "socket_bind() failed: reason: " . socket_strerror(socket_last_error($sock)) . "\n";
}
if (socket_listen($sock, 5) === false)
{
    echo "socket_listen() failed: reason: " . socket_strerror(socket_last_error($sock)) . "\n";
}
do
{
    if (($msgsock = socket_accept($sock)) === false)
    {
        echo "socket_accept() failed: reason: " . socket_strerror(socket_last_error($sock)) . "\n";
        break;
    }
    $buf = socket_read($msgsock, 50);
    socket_close($msgsock);
    echo "$buf\n";
} while (true);
socket_close($sock);
?>
Do not send character strings; send binary data (preferably in network byte order). Sending the info in binary format reduces the size of the payload.
Go for UDP, as this reduces protocol overhead.
Invent a minimal protocol to detect packet loss and allow lost packets to be re-requested.
An idea:
Add a serial number to each packet. Keep a small history of sent packets on the sender side. Allow the sender to receive answers from the receiver. Let the receiver ask the sender to resend any missing serial number.
This allows gaps to be filled, as long as they fall within the time window defined by the number of packets the sender keeps in its history.
For the serial number, one additional byte per packet might do; two bytes will most certainly be enough. However, the maximum serial number (and with it its size) depends on the maximum number of packets the sender keeps in its history for resending.
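As a minimal sketch of that idea, assuming the payload is a sequence number, a device id, and the latitude/longitude from the example string (the field choices and sizes are assumptions, not part of the question):
using System;
using System.Net.Sockets;

static class CompactReport
{
    // 2 (sequence) + 2 (device id) + 4 (lat) + 4 (lon) = 12 payload bytes,
    // versus ~25 bytes for the ASCII string "0001#37.12489#-106.35871".
    public static byte[] Pack(ushort sequence, ushort deviceId, float lat, float lon)
    {
        var buffer = new byte[12];
        // 16-bit fields in network byte order (big-endian).
        buffer[0] = (byte)(sequence >> 8);
        buffer[1] = (byte)sequence;
        buffer[2] = (byte)(deviceId >> 8);
        buffer[3] = (byte)deviceId;
        // Coordinates as IEEE-754 singles (about 7 significant digits),
        // written in host byte order for simplicity.
        BitConverter.GetBytes(lat).CopyTo(buffer, 4);
        BitConverter.GetBytes(lon).CopyTo(buffer, 8);
        return buffer;
    }

    public static void Send(string host, int port, byte[] payload)
    {
        using (var udp = new UdpClient())
        {
            // One UDP datagram per report: 12 payload bytes plus 8 (UDP) and
            // 20 (IPv4) header bytes, well under the 100-120 byte target.
            udp.Send(payload, payload.Length, host, port);
        }
    }
}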

Can't receive message with Zeromq PGM protocol

I'm trying to create a ZeroMQ-based PUB-SUB server/client pair using the PGM protocol, all on my local computer.
For some reason I get stuck on:
string a = clientsocket.Receive(Encoding.Unicode);
It's just for a test; I don't get an exception, the program simply waits.
Server code:
var context = ZmqContext.Create();
ZmqSocket serversocket = context.CreateSocket(SocketType.PUB);
try
{
    serversocket.Bind("epgm://192.168.137.127;224.0.0.1:5555");
}
catch (ZmqException)
{
    throw;
}
int x = 0;
Console.WriteLine("UP");
while (x < 100)
{
    serversocket.Send("hello", Encoding.Unicode);
    Console.WriteLine("hello sent {0}", x.ToString());
    Thread.Sleep(2000);
    x++;
}
Client code:
context = ZmqContext.Create();
clientsocket = context.CreateSocket(SocketType.SUB);
try
{
    clientsocket.Connect("epgm://192.168.137.127;224.0.0.1:5555");
}
catch (ZmqException)
{
    throw;
}
clientsocket.SubscribeAll();
clientsocket.ReceiveReady += PollingItemEvens;
string a = clientsocket.Receive(Encoding.Unicode);
if (a == "hello")
{
    Application.Run(_form1);
}
var poller = new Poller(new List<ZmqSocket> { clientsocket });
while (true)
{
    poller.Poll();
}
Edit [2014-08-04 1640 UTC+0000]
I have changed the epgm IP after reading the documentation, yet it didn't solve the problem...
My IPv4 address is 192.168.137.127.
It's a hotspot since I'm on a laptop; does that make any difference?
Also, should I be able to see the epgm connection with 'netstat' in the Windows cmd? Because I don't see anything.
You should take a look at the pgm/epgm documentation for 0MQ:
In particular:
Connecting a socket
When connecting a socket to a peer address using zmq_connect() with
the pgm or epgm transport, the endpoint shall be interpreted as an
interface followed by a semicolon, followed by a multicast address,
followed by a colon and a port number.
An interface may be specified by either of the following:
•The interface name as defined by the operating system.
•The primary IPv4 address assigned to the interface, in its numeric representation.
Interface names are not standardised in any way and should be assumed
to be arbitrary and platform dependent. On Win32 platforms no short
interface names exist, thus only the primary IPv4 address may be used
to specify an interface.
A multicast address is specified by an IPv4 multicast address in its
numeric representation.
If you follow the documentation, an address of "epgm://224.0.0.1:8200" is invalid: it is missing the interface part of the address.
Pragmatic General Multicast (PGM / EPGM) uses a slightly different structure for addressing, with an interface part added:
/* Connecting to the multicast address 224.0.0.1, port 8200, */
/* using the <localhost> first Ethernet network interface on Linux */
/* and the Encapsulated PGM protocol */
rc = zmq_connect( socket, "epgm://eth0;224.0.0.1:8200" );
assert ( rc == 0 );
/* Connecting to the multicast address 224.0.0.1, port 8200, */
/* using the <localhost> network interface setup with the address 192.168.1.1 */
/* and the standard PGM protocol */
rc = zmq_connect( socket, "pgm://192.168.1.1;224.0.0.1:8200" );
assert ( rc == 0 );
Now check and repair the ISO-OSI L3 network addresses on the server side so that they match the valid local IPv4 address where your server resides and where it attempts to .PUB its service.
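If it helps, a hypothetical check (not part of the original answer) that lists the IPv4 addresses actually assigned to the machine, so the interface part of the epgm address can be verified against a real NIC:
using System;
using System.Net.NetworkInformation;
using System.Net.Sockets;

class ListLocalIPv4
{
    static void Main()
    {
        // Every address printed here is a valid candidate for the <interface>
        // part of "epgm://<interface>;<multicast-group>:<port>".
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            foreach (UnicastIPAddressInformation addr in nic.GetIPProperties().UnicastAddresses)
            {
                if (addr.Address.AddressFamily == AddressFamily.InterNetwork)
                {
                    Console.WriteLine("{0}: {1}", nic.Name, addr.Address);
                }
            }
        }
    }
}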
Addendum
The 802.11 (Wi-Fi) standards specify support for multicasting as part of asynchronous services. An 802.11-client station, such as a wireless laptop or PDA (not an access point), begins a multicast delivery by sending multicast packets in 802.11 unicast data frames directed to only the access point. The access point responds with an 802.11 acknowledgement frame sent to the source station if no errors are found in the data frame.
If the 802.11-client sending the frame doesn't receive an acknowledgement, then the client will retransmit the frame. With multicasting, the leg of the data path from the wireless 802.11-client to the access point includes transmission error recovery. The 802.11 protocols ensure reliability between stations in both infrastructure and ad hoc configurations when using unicast data frame transmissions.
After receiving the unicast data frame from the 802.11-client, the access point transmits the data (that the originating 802.11-client wants to multicast) as a multicast frame, which contains a group address as the destination for the intended recipients. Each of the destination stations can receive the frame; however, they do not respond with acknowledgements. As a result, multicasting doesn't ensure a complete, reliable flow of data.
The lack of acknowledgements with multicasting means that some of the data your application is sending may not make it to all of the destinations, and there's no indication of a successful reception.
A note from Martin Sustrik (co-father of ZeroMQ):
However, it should be noted that multicast transports are inherently
complex to set up and often fail due to inadequate networking
hardware, incorrect HW/OS setup etc.
Next step
It would be useful to post both of the following:
•The key benefits that made you opt for the EPGM transport class.
•An application-neutral validation test case proving whether each isolated part of the { ZeroMQ-layer | ZeroMQ-primitives } life-cycle { is | is not } working as you expected.
May be inspired by: https://www.mail-archive.com/zeromq-dev#lists.zeromq.org/msg01580.html

UDP SocketException on EndReceiveFrom (C#)

I'm currently writing some async UDP network code in C#. I'm sending small packets (less than 50 bytes of data in each so far) back and forth, and my first thought was to split each message in two: send it as one packet but receive it as two. So a header, or extra-information packet, is always added to the start of the real packet, containing an ID and the data length.
So I thought I could split it on the receiving end (async receive): first receive the header and then the actual information. That way I don't have to worry about the ordering between packets and "packet headers".
So I wrote code that basically worked like this:
Client sends 30 bytes of data to the server, where the first 3 bytes are the packet header.
The server would have called (PACKET_HEADER_SIZE = 3):
socket.BeginReceiveFrom(state.Buffer, 0, PACKET_HEADER_SIZE, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
Then receives the data:
private void ReceivePacketInfo(IAsyncResult ar)
{
    StateObj state = (StateObj) ar.AsyncState;
    int bytesRead = socket.EndReceiveFrom(ar, ref endPoint);
    state.BytesReceived += bytesRead;
    if (state.BytesReceived < state.Buffer.Length)
    {
        _socket.BeginReceiveFrom(state.Buffer, state.BytesReceived, state.Buffer.Length - state.BytesReceived, SocketFlags.None, ref endPoint, ReceivePacketInfo, state);
    }
    else
    {
        // my thought was to receive the rest of the packet here
    }
}
but when calling socket.EndReceiveFrom(ar) I get a SocketException:
"A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself"
So now I have a couple of questions.
Do I have to make sure I receive the whole packet (in this case both the header and the packet) before I call EndReceiveFrom?
Can I assume that I will either get the whole packet in one go or get nothing, so that my if-statement in ReceivePacketInfo would be redundant (as long as its size is less than the maximum packet size, of course)?
If I cannot, is there a good way of solving my problem? I could tag all my packet headers and all my packets to be able to map them together I suppose. I could also try to have a standardized "packet ending" so that I just read until I hit the end of the packet.
Thanks in advance for any help!
Can I assume that I will either get the whole packet in one go or get nothing
That's almost the only thing UDP can guarantee: the content of a packet. If a packet is received, it is guaranteed to have the same size and content that was sent. So you have to make sure that your buffer is large enough for a whole packet.
Neither the ordering of packets nor delivery itself is guaranteed. It is up to you and your application to handle dropped and out-of-order packets.
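As a minimal sketch of that advice applied to the code in the question (the 2 KB ceiling and the simplified state handling are assumptions): size the buffer for the largest datagram you ever send and treat each completed receive as one whole datagram, parsing header and body out of the same buffer.
using System;
using System.Net;
using System.Net.Sockets;

class DatagramReceiver
{
    private const int MaxDatagramSize = 2048;   // assumed upper bound per datagram
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[MaxDatagramSize];
    private EndPoint _remote = new IPEndPoint(IPAddress.Any, 0);

    public DatagramReceiver(int port)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind(new IPEndPoint(IPAddress.Any, port));
        _socket.BeginReceiveFrom(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 ref _remote, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int bytesRead = _socket.EndReceiveFrom(ar, ref _remote);
        // One completed receive == one whole datagram: the 3-byte header and
        // the body arrive together, so parse both from _buffer[0..bytesRead).
        // ... parse here ...
        _socket.BeginReceiveFrom(_buffer, 0, _buffer.Length, SocketFlags.None,
                                 ref _remote, OnReceive, null);
    }
}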

Handling dropped TCP packets in C#

I'm sending a large amount of data in one go between a client and server written in C#. It works fine when I run the client and server on my local machine, but when I put the server on a remote computer on the internet it seems to drop data.
I send 20000 strings using the socket.Send() method and receive them using a loop which calls socket.Receive(). Each string is delimited by unique characters, which I use to count the number received (this is the protocol, if you like). The protocol is proven, in that even with fragmented messages each string is correctly counted. On my local machine I get all 20000; over the internet I get anywhere between 17000 and 20000. It seems to be worse the slower the remote computer's connection is. To add to the confusion, turning on Wireshark seems to reduce the number of dropped messages.
First of all, what is causing this? Is it a TCP/IP issue or something wrong with my code?
Secondly, how can I get round this? Receiving all of the 20000 strings is vital.
Socket receiving code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
while (socket.Connected)
{
    byte[] recvBuffer = new byte[1024];
    int bytesRead = 0;
    try
    {
        bytesRead = socket.Receive(recvBuffer);
    }
    catch (SocketException e)
    {
        if (!socket.Connected)
        {
            return;
        }
    }
    string input = encoding.GetString(recvBuffer, 0, bytesRead);
    CountStringsIn(input);
}
Socket sending code:
private static readonly Encoding encoding = new ASCIIEncoding();
//...
socket.Send(encoding.GetBytes(message));
If you're dropping packets, you'll see a delay in transmission, since TCP has to retransmit the dropped packets. This could be very significant, although there's a TCP option called selective acknowledgement which, if supported by both sides, will trigger a resend of only those packets which were dropped and not every packet since the dropped one. There's no way to control that in your code. By default, you can always assume that every packet is delivered in order for TCP, and if there's some reason it can't deliver every packet in order, the connection will drop, either by a timeout or by one end of the connection sending a RST packet.
What you're seeing is most likely the result of Nagle's algorithm. What it does is, instead of sending each bit of data as you post it, it sends one byte and then waits for an ack from the other side. While it's waiting, it aggregates all the other data that you want to send and combines it into one big packet and then sends it. Since the max size for TCP is 65k, it can combine quite a bit of data into one packet, although it's extremely unlikely that this will occur, particularly since winsock's default buffer size is about 10k or so (I forget the exact amount). Additionally, if the max window size of the receiver is less than 65k, it will only send as much as the last advertised window size of the receiver. The window size also affects Nagle's algorithm in terms of how much data it can aggregate prior to sending, because it can't send more than the window size.
The reason you see this is that on the internet, unlike your local network, that first ack takes more time to return, so Nagle's algorithm aggregates more of your data into a single packet. Locally, the return is effectively instantaneous, so it's able to send your data as quickly as you can post it to the socket. You can disable Nagle's algorithm on the client side using SetSockOpt (winsock) or Socket.SetSocketOption (.NET), but I highly recommend that you DO NOT disable Nagling on the socket unless you are 100% sure you know what you're doing. It's there for a very good reason.
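For reference, a minimal sketch of the two equivalent ways to do that on a connected .NET TCP Socket (the wrapper method is illustrative; as noted above, only do this if you are sure you need to):
using System.Net.Sockets;

static class SocketTuning
{
    public static void DisableNagle(Socket socket)
    {
        // Property form:
        socket.NoDelay = true;
        // Equivalent socket-option form:
        socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
    }
}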
Well there's one thing wrong with your code to start with, if you're counting the number of calls to Receive which complete: you appear to be assuming that you'll see as many Receive calls finish as you made Send calls.
TCP is a stream-based protocol - you shouldn't be worrying about individual packets or reads; you should be concerned with reading the data, expecting that sometimes you won't get a whole message in one packet and sometimes you may get more than one message in a single read. (One read may not correspond to one packet, too.)
You should either prefix each message with its length before sending, or have a delimiter between messages.
It's definitely not TCP's fault. TCP guarantees in-order, exactly-once delivery.
Which strings are "missing"? I'd wager it's the last ones; try flushing from the sending end.
Moreover, your "protocol" here (I'm talking about the application-layer protocol you're inventing) is lacking: you should consider sending the number of objects and/or their length so the receiver knows when it's actually done receiving them.
How long are each of the strings? If they aren't exactly 1024 bytes, they'll be merged by the remote TCP/IP stack into one big stream, which you read big blocks of in your Receive call.
For example, using three Send calls to send "A", "B", and "C" will most likely arrive at your remote client as "ABC" (as either the remote stack or your own stack will buffer the bytes until they are read). If you need each string to arrive without being merged with other strings, look into adding a "protocol" with an identifier to mark the start and end of each string, or alternatively configure the socket to avoid buffering and combining packets.
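A minimal sketch of the length-prefix approach suggested above (the helper names are illustrative; it assumes both ends use the same BitConverter byte order, i.e. the same kind of machine):
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class Framing
{
    public static void SendString(NetworkStream stream, string message)
    {
        byte[] body = Encoding.ASCII.GetBytes(message);
        byte[] lengthPrefix = BitConverter.GetBytes(body.Length);   // 4-byte length
        stream.Write(lengthPrefix, 0, lengthPrefix.Length);
        stream.Write(body, 0, body.Length);
    }

    public static string ReceiveString(NetworkStream stream)
    {
        byte[] lengthPrefix = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(lengthPrefix, 0);
        return Encoding.ASCII.GetString(ReadExactly(stream, length));
    }

    // TCP is a stream: a single Read may return fewer bytes than requested,
    // so loop until the full count has arrived.
    private static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-message.");
            offset += read;
        }
        return buffer;
    }
}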
