I need to calculate the network latency on a system which has multiple connected adapters.
I am using the System.Net.NetworkInformation.Ping class to ping an address and the RoundtripTime property to determine latency.
This works fine. However, on a system with multiple connected adapters, I need to
provide the source IP to use in order to determine the latency on each of the available connections.
This class, however, does not provide an option to ping using a particular source IP address.
I need something similar to the ping command in DOS, which has the -S option for providing a source IP address.
Is there a way to specify the source IP address in System.Net.NetworkInformation.Ping? The PingOptions class does not provide any such option.
Thanks.
I found this link (http://www.dreamincode.net/forums/topic/71263-using-the-ping-class-in-c%23/) helpful with looking at the Ping class but I have not found a way to set the source for a Ping.
One thing to keep in mind when using ICMP-based pings is that networking equipment will often give ICMP traffic lower priority than normal packets, especially when the packets cross network boundaries such as WAN links. This can lead to pings being dropped or showing higher latency than the traffic is actually experiencing, which makes ping an indicator of problems rather than a measurement (https://stackoverflow.com/a/1671489/901395).
The biggest question may be: is your application going to be on a network with QoS, and if so, what type of traffic are you really looking to measure?
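If ICMP is deprioritized on your network, one alternative (which also sidesteps the missing source-IP option in Ping) is to time a plain TCP handshake from an explicitly bound local address. A minimal sketch, assuming the source address, host, and port below are placeholders for your own values:
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

static long MeasureTcpRtt(IPAddress sourceIp, string host, int port)
{
    using (var sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
    {
        // Binding to a local address selects which adapter the probe leaves on.
        sock.Bind(new IPEndPoint(sourceIp, 0));
        var sw = Stopwatch.StartNew();
        sock.Connect(host, port); // the three-way handshake costs roughly one round trip
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}

// Example: MeasureTcpRtt(IPAddress.Parse("192.168.0.10"), "example.com", 80);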
The IPGlobalStatistics class may be of assistance: http://msdn.microsoft.com/en-us/library/system.net.networkinformation.ipglobalstatistics(v=vs.90).aspx
This answer may be helpful as well: https://stackoverflow.com/a/2506432/901395
You can use the code below to loop through the interfaces:
using System;
using System.Net.NetworkInformation;

class MainClass
{
    static void Main()
    {
        if (!NetworkInterface.GetIsNetworkAvailable())
            return;

        NetworkInterface[] interfaces = NetworkInterface.GetAllNetworkInterfaces();
        foreach (NetworkInterface ni in interfaces)
        {
            // Read the IPv4 statistics once per adapter.
            IPv4InterfaceStatistics stats = ni.GetIPv4Statistics();
            Console.WriteLine("Bytes Sent: {0}", stats.BytesSent);
            Console.WriteLine("Bytes Received: {0}", stats.BytesReceived);
        }
    }
}
using System;
using System.Net.NetworkInformation;
using System.Text;

static bool CanPing()
{
    // Provide any URL to ping.
    Uri objURL = new Uri("ANY URL");
    Ping objPing = new Ping();
    PingOptions objPingOptn = new PingOptions();

    // Don't allow routers to fragment the packet.
    objPingOptn.DontFragment = true;

    // Create a 32-byte payload buffer.
    string tPacketData = "DummyPacketsDataDummyPacketsData";
    byte[] bBuffer = Encoding.ASCII.GetBytes(tPacketData);

    // A host name can be passed directly if one is available; the timeout here is 120 ms.
    PingReply objPingRply = objPing.Send(objURL.Host, 120, bBuffer, objPingOptn);
    objPing.Dispose();

    return objPingRply.Status == IPStatus.Success;
}
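Since Ping and PingOptions expose no source-address parameter, the usual workaround is a raw ICMP socket bound to the chosen local IP. Below is a hedged sketch of that approach (raw sockets require administrator rights on Windows, and it skips reply-identifier matching for brevity):
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

static class SourceBoundPing
{
    // Build an ICMP echo request: type 8, code 0, checksum, identifier, sequence, payload.
    static byte[] BuildEchoRequest(ushort id, ushort seq, byte[] payload)
    {
        byte[] packet = new byte[8 + payload.Length];
        packet[0] = 8; // type: echo request
        packet[1] = 0; // code
        packet[4] = (byte)(id >> 8);  packet[5] = (byte)id;
        packet[6] = (byte)(seq >> 8); packet[7] = (byte)seq;
        Buffer.BlockCopy(payload, 0, packet, 8, payload.Length);
        ushort checksum = Checksum(packet); // checksum field is still zero at this point
        packet[2] = (byte)(checksum >> 8); packet[3] = (byte)checksum;
        return packet;
    }

    // Standard Internet checksum: one's complement of the one's-complement 16-bit sum.
    static ushort Checksum(byte[] data)
    {
        uint sum = 0;
        for (int i = 0; i + 1 < data.Length; i += 2)
            sum += (uint)((data[i] << 8) | data[i + 1]);
        if (data.Length % 2 == 1)
            sum += (uint)(data[data.Length - 1] << 8);
        while ((sum >> 16) != 0)
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (ushort)~sum;
    }

    public static long PingFrom(IPAddress source, IPAddress target, int timeoutMs)
    {
        using (var sock = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp))
        {
            sock.Bind(new IPEndPoint(source, 0)); // the source-IP selection Ping lacks
            sock.ReceiveTimeout = timeoutMs;
            byte[] request = BuildEchoRequest(1, 1, new byte[32]);
            byte[] reply = new byte[1024];
            var sw = Stopwatch.StartNew();
            sock.SendTo(request, new IPEndPoint(target, 0));
            sock.Receive(reply); // raw replies arrive with the 20-byte IP header prepended
            sw.Stop();
            return sw.ElapsedMilliseconds;
        }
    }
}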
I'm trying to write a high-performance TCP server (an LDAP server) using this tutorial by David Fowler as the basis of my MyServerListener.cs, which handles incoming connections.
This is a simple .NET 7 console app (with small changes) that I borrowed from David; it just accepts incoming clients, processes the requests, and writes hello to the response:
internal class Program
{
const int PORT = 389; // injecting from config
const int BACKLOG_LENGTH = 200; // max backlog size in windows server
static async Task Main(string[] args)
{
var listenSocket = new Socket(SocketType.Stream, ProtocolType.Tcp);
listenSocket.Bind(new IPEndPoint(IPAddress.Any, PORT));
Console.WriteLine("Listening on port " + PORT);
listenSocket.Listen(BACKLOG_LENGTH);
while (true)
{
var socket = await listenSocket.AcceptAsync();
_ = ProcessLinesAsync(socket);
}
}
private static async Task ProcessLinesAsync(Socket socket)
{
#if DEBUG
Console.WriteLine($"[{socket.RemoteEndPoint}]: connected");
#endif
// Create a PipeReader over the network stream
var stream = new NetworkStream(socket);
var reader = PipeReader.Create(stream);
var writer = PipeWriter.Create(stream);
while (true)
{
ReadResult result = await reader.ReadAsync();
ReadOnlySequence<byte> buffer = result.Buffer;
while (TryReadLine(ref buffer, out ReadOnlySequence<byte> line))
{
// Process the line.
ProcessLine(line);
try
{
// writing a sample message to the response
var helloBytes = Encoding.ASCII.GetBytes("hello\n");
await writer.WriteAsync(helloBytes);
}
catch (Exception)
{
throw;
}
}
// Tell the PipeReader how much of the buffer has been consumed.
reader.AdvanceTo(buffer.Start, buffer.End);
// Stop reading if there's no more data coming.
if (result.IsCompleted)
{
break;
}
}
// Mark the PipeReader as complete.
await reader.CompleteAsync();
#if DEBUG
Console.WriteLine($"[{socket.RemoteEndPoint}]: disconnected");
#endif
}
private static bool TryReadLine(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> line)
{
// Look for a EOL in the buffer.
SequencePosition? position = buffer.PositionOf((byte)'\n');
if (position == null)
{
line = default;
return false;
}
// Skip the line + the \n.
line = buffer.Slice(0, position.Value);
buffer = buffer.Slice(buffer.GetPosition(1, position.Value));
return true;
}
private static void ProcessLine(in ReadOnlySequence<byte> buffer)
{
foreach (var segment in buffer)
{
// Doing some tasks
#if DEBUG
Console.Write(Encoding.UTF8.GetString(segment.Span));
Console.WriteLine();
#endif
}
}
}
This server listens on a port (389), processes the incoming requests, does some work, and then writes a message to the response using PipeReader and PipeWriter.
I'm trying my best to write low-allocation code (using Span<>, Memory<>, ...) to keep the codebase fast and optimized. But for now, I'm testing the production environment with the above code to examine the throughput, meaning the server resources, my TCP server application itself, the clients, and the network.
I'm using Apache JMeter for the load/stress tests.
In some scenarios (sending more than 5000 requests/sec) I get Connection refused error messages in the JMeter logs, but there is no heavy pressure on the server's or the clients' (JMeter) resources (CPU/memory).
I tried to optimize the server's configuration and changed some TCP-related parameters (which I googled), like MaxUserPort: 65534, TcpTimedWaitDelay: 30, and different backlog sizes, but there were no improvements.
So I'm almost sure there is something network-related going on (packet dropping/rejecting or something like that).
I also turned off the firewall on the testing clients and the server, but I don't have access to the network configuration (and I don't know what it consists of): firewalls, ISA, TMG, etc.
_____________
Update 1:
I already increased our clients' ephemeral port range to the maximum using this netsh command:
netsh int ipv4 set dynamic tcp start=5000 num=65535
and now we have this:
netsh int ipv4 show dynamicport tcp
Start Port : 1024
Number of Ports : 64511
We also checked the JMeter logs for any error indicating this situation (ephemeral port exhaustion); at first we saw this message:
Non HTTP response code: java.net.BindException,Non HTTP response
message: Address already in use
But now it's gone, and we don't have a large number of TIME_WAIT ports to worry about.
We are also testing our scenario with SO_LINGER: 0 while monitoring the TIME_WAIT ports in real time (using some tools), and we are sure this isn't our concern right now.
_____________
So my question is, how can I find out why I can't send more traffic (threads/requests per second in the JMeter clients) to the server to test my TCP server application's performance? For now, the server CPU doesn't go above ~10%.
At this point, is this a network-related problem? How can I be sure about that? For example, can I use a network analyzer (e.g. PRTG Network Monitor) to find dropped TCP packets? Any other tips are welcome.
Most probably TCP ports are not being recycled fast enough. There is a network parameter which controls how long a connection can stay in the TIME_WAIT state, so you might also want to reduce TcpTimedWaitDelay.
It might also be a good idea to increase the maximum number of TCP connections via the TcpNumConnections parameter. A quick way to check whether either value is currently set is sketched below.
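Both values live in the registry under the Tcpip\Parameters key; here is a small sketch for checking whether they are set at all (when absent, Windows uses its built-in defaults):
using System;
using Microsoft.Win32;

class TcpParamCheck
{
    static void Main()
    {
        const string key = @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters";
        // Registry.GetValue returns null when the value does not exist.
        object delay = Registry.GetValue(key, "TcpTimedWaitDelay", null);
        object maxConn = Registry.GetValue(key, "TcpNumConnections", null);
        Console.WriteLine("TcpTimedWaitDelay: " + (delay ?? "(not set, default applies)"));
        Console.WriteLine("TcpNumConnections: " + (maxConn ?? "(not set, default applies)"));
    }
}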
Last but not least, it might be that JMeter is not capable of sending requests fast enough, so you might need to play the same trick on the load generator side. In addition, make sure to follow JMeter Best Practices and monitor CPU/RAM/network/disk/swap usage on the JMeter side, as you may need to switch to Distributed Testing if one machine is not capable of producing more than 5k requests per second.
I'm teaching myself some simple networking using Unity and Sockets and I'm running into problems synchronizing data between a client and server. I'm aware that there are other options using Unity Networking but, before I move on, I want to understand better how to improve my code using the System libraries.
In this example I'm simply trying to stream my mouse position over a multicast UDP socket. I'm encoding a string into a byte array and sending that array once per frame. I'm aware that sending these values as a string is suboptimal, but unless that is likely to be the bottleneck, I'm assuming it's OK to come back and optimize it later.
In my setup the server is sending values at 60 fps, and the client is reading at the same rate. The problem I'm having is that when the client receives values, it typically receives many at once. If I log the values I receive, with a ----- between each frame, I typically get output like this:
------
------
------
------
------
------
------
119,396
91,396
45,391
18,379
-8,362
-35,342
-59,314
------
------
------
------
------
------
------
I would expect unsynchronized update cycles to lead to receiving two values per frame, but I'm not sure what might be accounting for the larger discrepancy.
Here's the Server code:
using UnityEngine;
using System.Net;
using System.Net.Sockets;
using System.Text;
public class Server : MonoBehaviour
{
Socket _socket;
void OnEnable ()
{
var ip = IPAddress.Parse ("224.5.6.7");
var ipEndPoint = new IPEndPoint(ip, 4567);
_socket = new Socket (AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
_socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption (ip));
_socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.MulticastTimeToLive, 2);
_socket.Connect(ipEndPoint);
}
void OnDisable ()
{
if (_socket != null)
{
_socket.Close();
_socket = null;
}
}
public void Send (string message)
{
var byteArray = Encoding.ASCII.GetBytes (message);
_socket.Send (byteArray);
}
}
And the client:
using UnityEngine;
using System.Net;
using System.Net.Sockets;
using System.Text;
public class Client : MonoBehaviour
{
Socket _socket;
byte[] _byteBuffer = new byte[16];
public delegate void MessageRecievedEvent (string message);
public MessageRecievedEvent messageWasRecieved = delegate {};
void OnEnable ()
{
var ipEndPoint = new IPEndPoint(IPAddress.Any, 4567);
var ip = IPAddress.Parse("224.5.6.7");
_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
_socket.Bind (ipEndPoint);
_socket.SetSocketOption (SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(ip,IPAddress.Any));
}
void Update ()
{
while (_socket.Available > 0)
{
// Use the actual datagram length rather than decoding the whole buffer.
int length = _socket.Receive (_byteBuffer);
messageWasRecieved (Encoding.ASCII.GetString (_byteBuffer, 0, length));
}
}
}
If anybody could shed light on what I can do to improve synchronization that would be a great help.
Network I/O is subject to a large number of external influences, and TCP/IP as a protocol has few requirements. Certainly none that would provide a guarantee of the behavior you seem to want.
Unfortunately, without a good Minimal, Complete, and Verifiable code example, it's not possible to verify that your server is in fact sending data at the interval you claim. It's entirely possible you have a bug that's causing this behavior.
But if we assume that the code itself is perfect, there are still no guarantees when using UDP that datagrams won't be batched up at some point along the way, such that a large number appear in the network buffer all at once. I would expect this to happen with higher frequency when the datagrams are sent through multiple network nodes (e.g. a switch and especially over the Internet), but it could just as easily happen when the server and client are both on the same computer.
Ironically, one option that might force the datagrams to be spread out more is to pad each datagram with extra bytes. The exact number of bytes required would depend on the exact network route; to do this "perfectly" might require writing some calibration logic that tries different padding amounts until the code sees datagrams arriving at the intervals it expects.
But that would significantly increase the complexity of your network I/O code, and yet still would not guarantee the behavior you'd like. And it has some obvious negative side-effects, including the extra overhead on the network (something people using metered network connections certainly won't appreciate), as well as increasing the likelihood of a UDP datagram being dropped altogether.
It's not clear from your question whether your project actually requires multicast UDP, or if that's just something in your code because that's what some tutorial or other example you're following was using. If multicast is not actually a requirement, another thing you definitely should try is to use direct UDP without multicasting.
FWIW: I would not implement this the way you have. Instead, I would use asynchronous receive operations, so that my client receives datagrams the instant they are available, rather than only checking periodically each frame of rendering. I would also include a sequence number in the datagram, and discard (ignore) any datagrams that arrive out of sequence (i.e. where the sequence number isn't strictly greater than the most recent sequence number already received). This approach will improve (perhaps only slightly) responsiveness, but also will handle situations where the datagrams arrive out of order, or are duplicated (two of the three main delivery issues one will experience with UDP…the third being, of course, failure of delivery).
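To illustrate, here is a minimal non-Unity sketch of both suggestions (an asynchronous receive loop plus sequence-number filtering); the "seq:payload" wire format is an assumption invented for this example:
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

public class SequencedReceiver
{
    readonly UdpClient _client = new UdpClient(4567);
    long _lastSeq = -1;

    public async Task RunAsync()
    {
        while (true)
        {
            // Await each datagram instead of polling per frame.
            UdpReceiveResult result = await _client.ReceiveAsync();
            string text = Encoding.ASCII.GetString(result.Buffer);
            string[] parts = text.Split(':'); // "seq:payload"
            if (parts.Length != 2 || !long.TryParse(parts[0], out long seq))
                continue; // malformed datagram
            if (seq <= _lastSeq)
                continue; // out of order or duplicate: ignore
            _lastSeq = seq;
            Console.WriteLine(parts[1]); // hand the payload to the application
        }
    }
}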
I'm trying to create a ZeroMQ-based PUB-SUB server/client using the PGM protocol, all on my local computer.
For some reason I get stuck on:
string a = clientsocket.Receive(Encoding.Unicode);
It's just for testing, and I don't get an exception; the program simply waits.
Server code:
var context = ZmqContext.Create();
ZmqSocket serversocket = context.CreateSocket(SocketType.PUB);
try
{
serversocket.Bind("epgm://192.168.137.127;224.0.0.1:5555");
}
catch (ZmqException)
{
throw;
}
int x = 0;
Console.WriteLine("UP");
while (x < 100)
{
serversocket.Send("hello",Encoding.Unicode);
Console.WriteLine("hello sent {0}",x.ToString());
Thread.Sleep(2000);
x++;
}
Client code:
context = ZmqContext.Create();
clientsocket = context.CreateSocket(SocketType.SUB);
try
{
clientsocket.Connect("epgm://192.168.137.127;224.0.0.1:5555");
}
catch (ZmqException)
{
throw;
}
clientsocket.SubscribeAll();
clientsocket.ReceiveReady += PollingItemEvens;
string a = clientsocket.Receive(Encoding.Unicode);
if (a == "hello")
{
Application.Run(_form1);
}
var poller = new Poller(new List<ZmqSocket> {clientsocket});
while (true)
{
poller.Poll();
}
Edit [2014-08-04 1640 UTC+0000]
I have changed the epgm IP after reading the documentation,
yet it didn't solve the problem.
My IPv4 address is 192.168.137.127.
It's a hotspot, since I'm on a laptop; does that make any difference?
And can I see the epgm traffic in 'netstat' in the Windows cmd?
Because I don't see anything.
You should take a look at the pgm/epgm documentation for 0MQ:
In particular:
Connecting a socket
When connecting a socket to a peer address using zmq_connect() with
the pgm or epgm transport, the endpoint shall be interpreted as an
interface followed by a semicolon, followed by a multicast address,
followed by a colon and a port number.
An interface may be specified by either of the following:
•The interface name as defined by the operating system.
•The primary IPv4 address assigned to the interface, in its numeric representation.
Interface names are not standardised in any way and should be assumed
to be arbitrary and platform dependent. On Win32 platforms no short
interface names exist, thus only the primary IPv4 address may be used
to specify an interface.
A multicast address is specified by an IPv4 multicast address in its
numeric representation.
If you follow the documentation, an address of "epgm://224.0.0.1:8200" is invalid: it is missing the interface part of the address.
Pragmatic General Multicast (PGM / EPGM)
uses a slightly different structure for addressing, with an interface part added:
/* Connecting to the multicast address 224.0.0.1, port 8200, */
/* using the <localhost> first Ethernet network interface on Linux */
/* and the Encapsulated PGM protocol */
rc = zmq_connect( socket, "epgm://eth0;224.0.0.1:8200" );
assert ( rc == 0 );
/* Connecting to the multicast address 224.0.0.1, port 8200, */
/* using the <localhost> network interface setup with the address 192.168.1.1 */
/* and the standard PGM protocol */
rc = zmq_connect( socket, "pgm://192.168.1.1;224.0.0.1:8200" );
assert ( rc == 0 );
Now check and repair the ISO-OSI-L3 network addresses on the server side so that they match the valid local IPv4 network address where your server resides and where it attempts to PUB its service.
Addendum
The 802.11 (Wi-Fi) standards specify support for multicasting as part of asynchronous services. An 802.11-client station, such as a wireless laptop or PDA (not an access point), begins a multicast delivery by sending multicast packets in 802.11 unicast data frames directed to only the access point. The access point responds with an 802.11 acknowledgement frame sent to the source station if no errors are found in the data frame.
If the 802.11-client sending the frame doesn't receive an acknowledgement, then the client will retransmit the frame. With multicasting, the leg of the data path from the wireless 802.11-client to the access point includes transmission error recovery. The 802.11 protocols ensure reliability between stations in both infrastructure and ad hoc configurations when using unicast data frame transmissions.
After receiving the unicast data frame from the 802.11-client, the access point transmits the data (that the originating 802.11-client wants to multicast) as a multicast frame, which contains a group address as the destination for the intended recipients. Each of the destination stations can receive the frame; however, they do not respond with acknowledgements. As a result, multicasting doesn't ensure a complete, reliable flow of data.
The lack of acknowledgements with multicasting means that some of the data your application is sending may not make it to all of the destinations, and there's no indication of a successful reception.
A note from Martin Sustrik ( co-father of ZeroMQ ):
However, it should be noted that multicast transports are inherently
complex to set up and often fail due to inadequate networking
hardware, incorrect HW/OS setup etc.
Next step
It would be useful to post both:
The key benefits that made you opt for the EPGM transport class
An application-neutral validation test case proving that the subsequent life-cycle phases of each isolated part of the { ZeroMQ-layer | ZeroMQ-primitives } { are | are not } working as you expected them to.
It may be inspired by: https://www.mail-archive.com/zeromq-dev#lists.zeromq.org/msg01580.html
I have a server application and a client application. I am able to establish a TCP connection between the applications when both happen to be on the same network. So let's say the computer running the server application is listening for new connections on port 2121 and has the LAN IP address 192.168.0.120. On a different computer running the client application, I will be able to establish a connection by providing port number 2121 and IP address 192.168.0.120.
Is there a way to find all computers on a network that are listening on port 2121?
One algorithm that I am thinking now is like:
Get the IP address of the current computer; let's say it comes out as 192.168.0.145.
Now most likely the server will be listening on an IP address of the form 192.168.0.?
Then ping 192.168.0.1 on port 2121, then 192.168.0.2 on port 2121 ... and keep going.
I don't know if that method is efficient. Moreover, the server might happen to be listening on IP address 192.168.1.x.
So what changes will I have to make to my server and client application so that the client is able to find all the servers listening on port 2121?
The algorithm you proposed is the one you need. One problem is in the dynamic generation of the candidate IP addresses.
Normally, the possible IP address range is the one given by the subnet mask ( http://en.wikipedia.org/wiki/Subnetwork ). More exactly, the part of the IP that changes is the part where the subnet mask has 0 bits (always at the end of the mask).
In your example:
if the mask is 255.255.255.0, then your possible ip address range is
192.168.0.*.
if the IP can also be 192.168.1.* then probably the mask should be 255.255.0.0
you can also have a mask like 255.255.255.128, where the range would be 192.168.1.[1-126]. You can learn more hands-on using http://www.subnet-calculator.com/
The only other possibilities I see that would cause you to have these distinct ranges are:
you have more DHCP servers in your network, which is really bad as you will have "race conditions". The solution here is to fix your infrastructure by removing all but 1 DHCP server
you have manually set IP addresses (probably on laptops). The solution is to change to DHCP (if you need a specific IP that will always be assigned to a specific computer, use static DHCP)
Getting back to the problem of checking whether "something" is listening on a specific port: the ICMP protocol is not the best here, as the majority of firewalls filter both broadcast ICMP and single ICMP. If we are truly talking about a server, chances are you had to manually open the port you are looking for. Also, even if all computers respond, you still don't know whether they host the service you want.
The solution below involves computing the possible range of candidate IP addresses. After that you iterate through them to see if you can connect to your port.
In this implementation I test sequentially, which proves to be very slow, as the connect timeout is 30 seconds if the host is down. For several hundred candidates, that doesn't sound too good. However, if the majority of hosts are up (even if they don't host your service), everything will go several times faster.
You can improve the program either by finding out how to decrease this timeout (I couldn't find out how in my allotted time) or by using a custom timeout as presented in How to configure socket connect timeout; a sketch of that approach follows below. You could also use multi-threading, adding each address that worked to a thread-safe collection and working with it from there.
Also, you could try pinging (ICMP) before, but you could miss valid servers.
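For reference, here is a hedged sketch of that bounded-timeout connect (BeginConnect plus a wait on the async handle), which you could use in place of the blocking sock.Connect in the code below:
using System;
using System.Net.Sockets;

static bool TryConnect(string host, int port, int timeoutMs)
{
    var sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    try
    {
        IAsyncResult ar = sock.BeginConnect(host, port, null, null);
        // Wait at most timeoutMs for the three-way handshake to finish.
        if (!ar.AsyncWaitHandle.WaitOne(timeoutMs))
            return false; // timed out: treat as "nothing listening"
        sock.EndConnect(ar);
        return sock.Connected;
    }
    catch (SocketException)
    {
        return false; // refused or unreachable
    }
    finally
    {
        sock.Close();
    }
}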
static void Main(string[] args)
{
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
int wantedPort = 21; //this is the port you want
byte[] msg = Encoding.ASCII.GetBytes("type msg here");
foreach (NetworkInterface netwIntrf in NetworkInterface.GetAllNetworkInterfaces())
{
Console.WriteLine("Interface name: " + netwIntrf.Name);
Console.WriteLine("Inteface working: {0}", netwIntrf.OperationalStatus == OperationalStatus.Up);
//if the current interface doesn't have a gateway, skip it
if (!(netwIntrf.GetIPProperties().GatewayAddresses.Count > 0))
{
    continue; // 'break' here would stop scanning all remaining interfaces
}
//Console.WriteLine("IP Address(es):");
//get current IP Address(es)
foreach (UnicastIPAddressInformation uniIpInfo in netwIntrf.GetIPProperties().UnicastAddresses)
{
//get the subnet mask and the IP address as bytes
byte[] subnetMask = uniIpInfo.IPv4Mask.GetAddressBytes();
byte[] ipAddr = uniIpInfo.Address.GetAddressBytes();
// we reverse the byte array if we are dealing with little endian
if (BitConverter.IsLittleEndian)
{
Array.Reverse(subnetMask);
Array.Reverse(ipAddr);
}
//we convert the subnet mask to uint (just for didactic purposes; to check everything is OK here and below, use the calculator in programmer mode)
uint maskAsInt = BitConverter.ToUInt32(subnetMask, 0);
//Console.WriteLine("\t subnet={0}", Convert.ToString(maskAsInt, 2));
//we convert the ip address to uint (just for didactic purposes; to check everything is OK here and below, use the calculator in programmer mode)
uint ipAsInt = BitConverter.ToUInt32(ipAddr, 0);
//Console.WriteLine("\t ip={0}", Convert.ToString(ipAsInt, 2));
//we negate the subnet to determine the maximum number of hosts possible in this subnet
uint validHostsEndingMax = ~BitConverter.ToUInt32(subnetMask, 0);
//Console.WriteLine("\t !subnet={0}", Convert.ToString(validHostsEndingMax, 2));
//we compute the start of the IP range as uint (the part that is fixed wrt the subnet mask); from here we calculate each new address by incrementing by 1 and converting to byte[] afterwards
uint validHostsStart = BitConverter.ToUInt32(ipAddr, 0) & BitConverter.ToUInt32(subnetMask, 0);
//Console.WriteLine("\t IP & subnet={0}", Convert.ToString(validHostsStart, 2));
//we increment the start IP up to the maximum number of valid hosts in this subnet, and for each candidate we check the intended port (refactoring needed)
for (uint i = 1; i <= validHostsEndingMax; i++)
{
uint host = validHostsStart + i;
byte[] hostBytes = BitConverter.GetBytes(host);
if (BitConverter.IsLittleEndian)
{
Array.Reverse(hostBytes);
}
//this is the candidate IP address in "readable format"
String ipCandidate = Convert.ToString(hostBytes[0]) + "." + Convert.ToString(hostBytes[1]) + "." + Convert.ToString(hostBytes[2]) + "." + Convert.ToString(hostBytes[3]);
Console.WriteLine("Trying: " + ipCandidate);
try
{
//try to connect
sock.Connect(ipCandidate, wantedPort);
if (sock.Connected == true) // if succesful => something is listening on this port
{
Console.WriteLine("\tIt worked at " + ipCandidate);
sock.Close();
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
}
//else -> falls through to the exception handler
}
catch (SocketException)
{
    // connect failed; recreate the socket, since a failed Connect leaves it unusable
    Console.WriteLine("\tDIDN'T work at " + ipCandidate);
    sock.Close();
    sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
}
}
}
Console.ReadLine();
}
sock.Close();
}
(Sorry for my bad English.) I actually need something similar to this and just found out about multicast. Here you can find an article and example. The sample app from the article worked fine on my LAN. I do not know exactly how it works, but maybe you can multicast something from the client and have the server(s) respond with their IP? Or, if that does not work, having the server multicast its IP at a timed interval should do it. Sorry for the lack of information, I just learned about this :)
An option I am not seeing discussed here is to have a Master Server.
The idea is quite simple: a server where your application's servers can register and where your application's clients can get a list of active servers.
Server A is loaded and immediately sends a hello message to the Master Server
Server B is loaded and sends a hello message to the Master Server
Both Server A and B keep sending hellos to the Master Server every X minutes so it knows they are still up
Client A is loaded - needs to issue a command - asks the Master Server for the list of active servers - picks a server from the list - issues the command
Things to keep in mind:
The Master Server must be on a known address/port - either a fixed IP, or an IP obtained through a well-known server name
The purpose of the Master Server is simply to register servers and supply clients with their addresses - at first glance I see no other service it could provide your application
If any server is as good as any other for your application, I would advise ordering the list by the timestamp of the last hello message received from each server - that way the client will have, at the top of the list, the server most likely to still be up (since it reported being up last) and can go down the list sequentially.
Moreover, every time the Master Server receives a hello, that list changes, so client requests will periodically get a different server list and use a different preferred server, spreading load across servers. A bare-bones sketch of such a Master Server follows below.
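A minimal sketch of that idea over UDP; port 9000 and the one-line "HELLO <port>" / "LIST" protocol are assumptions made up for this example:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;

class MasterServer
{
    static readonly ConcurrentDictionary<string, DateTime> Servers =
        new ConcurrentDictionary<string, DateTime>();

    static void Main()
    {
        using (var udp = new UdpClient(9000))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                string msg = Encoding.ASCII.GetString(udp.Receive(ref remote));
                if (msg.StartsWith("HELLO "))
                {
                    // "HELLO <port>": register or refresh this server's last-seen time.
                    Servers[remote.Address + ":" + msg.Substring(6)] = DateTime.UtcNow;
                }
                else if (msg == "LIST")
                {
                    // Reply with registered servers, most recently seen first.
                    var entries = new List<KeyValuePair<string, DateTime>>(Servers);
                    entries.Sort((a, b) => b.Value.CompareTo(a.Value));
                    byte[] reply = Encoding.ASCII.GetBytes(
                        string.Join("\n", entries.ConvertAll(e => e.Key)));
                    udp.Send(reply, reply.Length, remote);
                }
            }
        }
    }
}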
Can't you use the same method as when you get your IP?
Let the client send a broadcast - if there is no response, wait.
The server receives the broadcast and sends one back with its own IP.
Now the client knows that the server is out there and at what IP. A rough sketch of this exchange is below.
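A sketch of that exchange, assuming port 2121 for the service and 2122 for discovery; the "WHO_HAS_2121"/"I_HAVE_2121" strings are invented for this example:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class Discovery
{
    // Client side: shout once, then collect replies briefly.
    public static void FindServers()
    {
        using (var udp = new UdpClient { EnableBroadcast = true })
        {
            byte[] probe = Encoding.ASCII.GetBytes("WHO_HAS_2121");
            udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, 2122));
            udp.Client.ReceiveTimeout = 2000;
            try
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] reply = udp.Receive(ref remote);
                    if (Encoding.ASCII.GetString(reply) == "I_HAVE_2121")
                        Console.WriteLine("Server at " + remote.Address);
                }
            }
            catch (SocketException) { /* timeout: done collecting replies */ }
        }
    }

    // Server side: answer every probe so the client learns our address.
    public static void AnswerProbes()
    {
        using (var udp = new UdpClient(2122))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] msg = udp.Receive(ref remote);
                if (Encoding.ASCII.GetString(msg) == "WHO_HAS_2121")
                {
                    byte[] reply = Encoding.ASCII.GetBytes("I_HAVE_2121");
                    udp.Send(reply, reply.Length, remote);
                }
            }
        }
    }
}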
I assume you have a single server. If you can guarantee that the server location (IP address and port) is constant (or can be looked up), then each client application can 'register' with the server by connecting to it and informing the server of the IP address and local port to call back.
ICMP ping does not determine whether a computer is listening on a specific port, only whether the computer is configured to respond to a ping. ICMP is a protocol, different from TCP or UDP. Its only use for you would be to determine whether an IP address is in use, and even then it is becoming less viable.
You have two options.
Have the client constantly check every IP address on your local network and try to open port 2121. This is not a good option.
Have every server send out an ICMP ping to the broadcast address, with specific data announcing that it is on (and optionally not connected to a client), every so often (I would recommend one minute for testing, and 5 minutes minimum for production). All your software has to do is look for the broadcast ping and connect to the sending IP address.
Update:
using System.Net.NetworkInformation;
using System.Text;
using System.Threading;

private Ping _Ping = new Ping();
private PingOptions _PingOptions = new PingOptions(64, true);
private byte[] _PingID = Encoding.ASCII.GetBytes("MyPingID");
private AutoResetEvent _PingResponse = new AutoResetEvent(false);

public <classname>() // constructor
{
    _Ping.PingCompleted += new PingCompletedEventHandler(PingCompleted);
}
public void PingCompleted(object Sender, PingCompletedEventArgs e)
{
if (e.Cancelled)
{
//Status Unknown;
}
else if (e.Error != null)
{
//Status Error;
}
else if (e.Reply.Status == IPStatus.Success)
{
// Device Replying
}
else
{
// Status Unknown
}
}
public void StartPing(string AddressToPing)
{
IPAddress ipAddress = IPAddress.Parse(AddressToPing);
_Ping.SendAsync(ipAddress, 15000, _PingID, _PingOptions, _PingResponse);
}
You can make the server send its location to a specific port using UDP, with the client listening for it; the client then establishes a connection with the server based on the given IP and port.
I am trying to get some simple UDP communication working on my local network.
All I want to do is multicast to all machines on the network.
Here is my sending code:
public void SendMessage(string message)
{
var data = Encoding.Default.GetBytes(message);
using (var udpClient = new UdpClient(AddressFamily.InterNetwork))
{
var address = IPAddress.Parse("224.100.0.1");
var ipEndPoint = new IPEndPoint(address, 8088);
udpClient.JoinMulticastGroup(address);
udpClient.Send(data, data.Length, ipEndPoint);
udpClient.Close();
}
}
And here is my receiving code:
public void Start()
{
udpClient = new UdpClient(8088);
udpClient.JoinMulticastGroup(IPAddress.Parse("224.100.0.1"), 50);
receiveThread = new Thread(Receive);
receiveThread.Start();
}
public void Receive()
{
while (true)
{
var ipEndPoint = new IPEndPoint(IPAddress.Any, 0);
var data = udpClient.Receive(ref ipEndPoint);
Message = Encoding.Default.GetString(data);
// Raise the AfterReceive event
if (AfterReceive != null)
{
AfterReceive(this, new EventArgs());
}
}
}
It works perfectly on my local machine but not across the network.
- It does not seem to be the firewall. I disabled it on both machines and it still did not work.
- It works if I do a direct send to the hard-coded IP address of the client machine (i.e. not multicast).
Any help would be appreciated.
Does your local network hardware support IGMP?
It's possible that your switch is multicast aware, but if IGMP is disabled it won't notice when attached hardware subscribes to a particular multicast group, so it wouldn't forward those packets.
To test this, temporarily connect two machines directly together with a cross-over cable. That should (AFAICR) always work.
Also, it should be the server half of the code that has the TTL argument supplied to JoinMulticastGroup(), not the client half.
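A sketch of what that looks like on the sending side, reusing the question's group and port; the TTL of 8 is an arbitrary example value:
using System.Net;
using System.Net.Sockets;
using System.Text;

var group = IPAddress.Parse("224.100.0.1");
var sender = new UdpClient();
sender.JoinMulticastGroup(group, 8); // the second argument here is the TTL (hop count)
byte[] payload = Encoding.ASCII.GetBytes("hello");
sender.Send(payload, payload.Length, new IPEndPoint(group, 8088));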
I've just spent 4 hours on something similar (I think). The solution for me was:
client.Client.Bind(new IPEndPoint(IPAddress.Any, SSDP_PORT));
client.JoinMulticastGroup(SSDP_IP,IP.ExternalIPAddresses.First());
client.MulticastLoopback = true;
Using a specific (first external) IP address on the multicast group. (A version using only the stock UdpClient API is sketched below.)
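For reference, the same fix with only the stock UdpClient API; SSDP's well-known group 239.255.255.250:1900 stands in for SSDP_IP/SSDP_PORT, and 192.168.1.10 is a placeholder for your chosen local interface address:
using System.Net;
using System.Net.Sockets;

var local = IPAddress.Parse("192.168.1.10"); // the interface to join the group on
var client = new UdpClient();
client.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
client.Client.Bind(new IPEndPoint(IPAddress.Any, 1900));
client.JoinMulticastGroup(IPAddress.Parse("239.255.255.250"), local);
client.MulticastLoopback = true;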
I can't see a TTL specified anywhere in the code. Remember that TTL was originally meant to be in units of seconds, but it has become units of hops. This means that by using a clever TTL you could eliminate passing through the router. The default TTL on my machine is 32 - I think that should be more than adequate, but yours may actually be different (UdpClient.Ttl) if your system has been through any form of security lockdown.
I can't recommend the TTL you need - as I personally need to do a lot of experimentation.
If that doesn't work, you could have a look at these articles:
OSIX Article
CodeProject Article
All-in-all it looks like there has been success with using Sockets and not UdpClients.
Your chosen multicast group could also be local-only (addresses in the 224.0.0.0/24 block are link-local and are never forwarded by routers, regardless of TTL). Try another one.
Your physical network layer could also be causing issues. I would venture to question switches and direct (x-over) connections. Hubs and anything more intelligent should handle them fine. I don't have any literature to back that, however.