Timing control in C# Socket Programming - c#

I'm currently getting started on a socket programming project for a University network planning course. I'm also doing a Programming class in which we are learning C# so I thought it appropriate to use C# to complete this assignment.
The assignment is to create a simple server-client connection and have data sent between them. That is not too much of a problem; I've read a few tutorials and watched a few videos, and I think I know what I'm doing. The part which has me a little confused is this:
"2) Every second, the server sends Client a command to seek measurement data, e.g., through a single letter "R" or "r" (request). (timing control is required here)."
This has me confused. What exactly is timing control, and how can I go about implementing it?
Thanks in advance.
EDIT: I found an "example" of timing control, but it is in C/C++. Do I need to do something like this?
/* Step 5.2: Send and Receive Data in loops */
time_old = getTime();
iterationStep = 1;
for (;;)
{
    recvStatus = recv(TCPClient, recvBuffer, 128, 0);
    if (recvStatus == 0)
        break;
    else if (recvStatus == SOCKET_ERROR)
    {
        printf("Failed in recv(): %d\n", WSAGetLastError());
        break;
    }
    recvBuffer[recvStatus] = 0x00; /* '\0' */
    time_new = getTime();
    time_interval = time_new - time_old;
    time_old = time_new;
    printf("Step = %5d; Time Interval = %8.6f; Received String: %s\n", iterationStep, time_interval, recvBuffer);
    iterationStep++;
}
/* if "EXIT" received, terminate the program */
if (!strcmp(recvBuffer, "EXIT"))
{
    printf("TCP Server ready to terminate.");
    break;
}
}
EDIT 2:
I'm now trying to use this method, but I'm having some trouble. I have created this method:
private static void SendRequest(Object source, ElapsedEventArgs e)
{
    byte[] buffer = Encoding.Default.GetBytes("WAAAAT");
    acc.Send(buffer, 0, buffer.Length, 0);
}
But as you can see, I cannot use the "acc" socket because I can't pass it in as a parameter. I tried to, but then I get a lot of errors when calling the method. Any advice?

It sounds like you will need to use the "System.Timers.Timer" class to execute your "send request" function every second.
Example:
private static System.Timers.Timer aTimer;

static void Main()
{
    // Create a timer with a one second interval.
    aTimer = new System.Timers.Timer(1000);
    // Hook up the Elapsed event for the timer.
    aTimer.Elapsed += SendRequest;
    aTimer.Enabled = true;

    // Keep the process alive so the timer can fire.
    Console.ReadLine();
}
private static void SendRequest(Object source, ElapsedEventArgs e)
{
    // maybe end existing request if not completed?
    // send next request
}
Whether you time out (cancel) any existing request before starting a new one will depend on your requirements, as it is also possible to have many requests running in parallel. If you do need to kill a request, you can do so by disposing the System.Net.Sockets.Socket that initiated the connection attempt (you will need to handle any errors thrown by Socket.Connect).
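Regarding EDIT 2: a minimal sketch (assumed names, not the assignment's actual code) of one way to get the socket into the handler is to capture it in a lambda, so you don't have to change the (Object, ElapsedEventArgs) signature:

using System;
using System.Net.Sockets;
using System.Text;
using System.Timers;

class PollingServer
{
    private static System.Timers.Timer aTimer;

    // "acc" is assumed to be the accepted/connected Socket from the question.
    static void StartPolling(Socket acc)
    {
        aTimer = new System.Timers.Timer(1000);             // one-second interval
        aTimer.Elapsed += (source, e) => SendRequest(acc);  // the closure carries the socket
        aTimer.AutoReset = true;
        aTimer.Enabled = true;
    }

    private static void SendRequest(Socket acc)
    {
        byte[] buffer = Encoding.ASCII.GetBytes("R");       // the "request" command
        acc.Send(buffer, 0, buffer.Length, SocketFlags.None);
    }
}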

Timing control sounds like a mechanism to ensure that the requests are fired one second apart and that, if a request doesn't complete in a timely fashion, that request is terminated and the next one is fired.
Take this scenario:
Request 1 fires at point 0.
After 1 second, request 1 hasn't received an answer, but request 2 needs to fire.
You probably need to implement a timeout system based on a clock.
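A minimal sketch of that kind of clock-based control, assuming a connected Socket named client and a fixed one-second slot per request (the names and the use of ReceiveTimeout are illustrative, not from the original question):

using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Text;
using System.Threading;

static void PollEverySecond(Socket client)
{
    byte[] request = Encoding.ASCII.GetBytes("R");
    byte[] reply = new byte[128];
    client.ReceiveTimeout = 1000;                // give up on a reply after one second

    while (true)
    {
        Stopwatch slot = Stopwatch.StartNew();
        client.Send(request);

        try
        {
            int read = client.Receive(reply);    // blocks until data arrives or the timeout hits
            Console.WriteLine("Reply: " + Encoding.ASCII.GetString(reply, 0, read));
        }
        catch (SocketException ex) when (ex.SocketErrorCode == SocketError.TimedOut)
        {
            Console.WriteLine("No reply within the slot; moving on.");
        }

        // Sleep out the rest of the slot so requests stay one second apart.
        int remaining = 1000 - (int)slot.ElapsedMilliseconds;
        if (remaining > 0) Thread.Sleep(remaining);
    }
}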

Related

c# detecting disconnect from server with timer

I have a client-type application that is receiving packets from a remote server.
Now and then it happens that, for some reason, the server disconnects me.
Sometimes there are problems on my end: the ISP drops the internet connection, etc.
I have been trying to catch those exceptions and googling for an answer, but in the end everyone points to "make a timer and check periodically for received packets."
Now I have a function that receives all incoming traffic.
This function is executed every time a packet is received.
My idea is to create a function that will create a timer with, let's say, a 50-second timeout.
This function will reset the timer to 0 each time a packet is received and restart it.
If the timer reaches 50 seconds, it will throw a "disconnected!" error and some logic will follow for how to reconnect.
The main problem I have is that I cannot "pause" my main packet-receiving function.
I have tried to do it in another thread, but the program keeps recreating new threads; killing threads by ID is bad practice and I haven't gone down that road... yet.
Is this how I should handle my problem, or does someone have a better idea?
Below is my packet-receive function.
public void OnReceive()
{
    try
    {
        recv_pack = locSec.TrIncom();
        if (recv_pack != null)
        {
            foreach (Packet packet in recv_pack)
            {
                byte[] packet_bytes = packet.GetBytes();
                PacketHandler.HandlePacket(packet, locSec);
                // here I would check for packet receive with a timer
                //CheckDisconnect();
            }
        }
    }
    catch (Exception)
    {
    }
}
So far I have come up with this:
public bool CheckDisconnect()
{
    bool KeepGoing = true;
    for (int i = 0; i <= 50 && KeepGoing; i++)
    {
        Thread.Sleep(1000);
        if (i == 50)
        {
            KeepGoing = false;
            Console.WriteLine("Disconnected!");
            // ... starting reconnect procedure
        }
    }
    return KeepGoing;
}
Not sure if I understand completely, but if those two functions are in the same thread, can't you just make a global variable that controls the OnReceive() function and set it to false in your CheckDisconnect() function?
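If you'd rather not block a thread in a Sleep loop, a common alternative shape for this (a sketch only, with made-up class and member names) is a watchdog timer that you restart on every packet:

using System;
using System.Timers;

class DisconnectWatchdog
{
    // 50 seconds of silence means we consider ourselves disconnected.
    private readonly Timer watchdog = new Timer(50000);

    public DisconnectWatchdog()
    {
        watchdog.AutoReset = false;   // fire once per silence period
        watchdog.Elapsed += (s, e) =>
        {
            Console.WriteLine("Disconnected!");
            // ... start the reconnect procedure here
        };
        watchdog.Start();
    }

    // Call this from OnReceive() for every packet that arrives.
    public void PacketReceived()
    {
        watchdog.Stop();              // reset the countdown
        watchdog.Start();
    }
}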

Why am I losing 40% of syslog messages unless I sleep the thread?

I'm using SyslogNet.Client to send syslog (UDP) messages to a server. I loop through a collection of errors from a database and send them. Only ~40% of the messages arrive. I understand that with UDP there is no guarantee of message arrival, but this is a very high percentage. However, if I call Thread.Sleep(1) between each iteration of the loop, 100% of the messages arrive. I'm having trouble understanding why this is happening.
Here's the loop:
private static void SyslogErrors(Dictionary<int, string> errorCol)
{
    foreach (var error in errorCol)
    {
        SyslogMessage("AppName", error.Value);
        Thread.Sleep(1);
    }
}
And here's SyslogMessage:
private static void SyslogMessage(string appName, string message)
{
    using (_syslogSender = new SyslogUdpSender(Server, Port))
    {
        var msg = new SyslogMessage(DateTime.Now,
                                    Facility.SecurityOrAuthorizationMessages1,
                                    Severity.Informational,
                                    Environment.MachineName,
                                    appName,
                                    message);
        _syslogSender.Send(msg, new SyslogRfc3164MessageSerializer());
    }
}
First of all, why is this happening? Secondly, what's the "best practices" way to slow the loop down? Thread.Sleep(1) doesn't seem like a very clean solution.
Thanks!
It looks like the OS rejects some logs when spammed too much, and the sleep will most likely do just fine. But there are indeed more reliable solutions. You can set up a Timer object and call your logger from its Elapsed event.
Something like:
// likely much longer than needed; I figure you don't need this to be fast anyway
aTimer = new System.Timers.Timer(50);
aTimer.Elapsed += timeToLog;
aTimer.Enabled = true;
Then you log each time the timer fires:
private static void timeToLog(Object source, ElapsedEventArgs e)
{
    var error = getNextError(); // keep an iterator somewhere
    SyslogMessage("AppName", error.Value);
}
I can add the code you'll need in order to create and manage the iterator (getNextError), although we're getting outside the scope of your question.
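For what it's worth, a minimal sketch of that bookkeeping (folding getNextError into the handler; the field names and the Stop() call are assumptions, not part of the question), to sit in the same class as SyslogMessage:

// needs: using System.Collections.Generic; using System.Timers;
private static IEnumerator<KeyValuePair<int, string>> _errors;
private static System.Timers.Timer aTimer;

private static void StartLogging(Dictionary<int, string> errorCol)
{
    _errors = errorCol.GetEnumerator();
    aTimer = new System.Timers.Timer(50);
    aTimer.Elapsed += timeToLog;
    aTimer.Enabled = true;
}

private static void timeToLog(Object source, ElapsedEventArgs e)
{
    if (_errors.MoveNext())
        SyslogMessage("AppName", _errors.Current.Value);
    else
        aTimer.Stop();   // all errors have been sent
}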

BeginSend taking too long till callback

I'm using the asynchronous method BeginSend and I need some sort of timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded. This seems very strange to me and points towards a different cause.
public void Send(String data)
{
    if (client.Connected)
    {
        // Convert the string data to byte data using ASCII encoding.
        byte[] byteData = Encoding.ASCII.GetBytes(data);
        client.NoDelay = true;
        // Begin sending the data to the remote device.
        IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
            new AsyncCallback(SendCallback), client);
        if (!res.IsCompleted)
        {
            sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
        }
    }
    else MessageBox.Show("No connection to target! Send");
}
private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        // the flag was set elsewhere, so return immediately.
        return;
    }
    sendTimeoutflag = 0; // needs to be reset back to 0 for next reception
    // we set the flag to 1, indicating it was completed.
    if (sendTimer != null)
    {
        // stop the timer from firing.
        sendTimer.Dispose();
    }
    try
    {
        // Retrieve the socket from the state object.
        Socket client = (Socket)ar.AsyncState;
        // Complete sending the data to the remote device.
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
}
private void SendTimeoutCallback(object obj)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
    {
        // the flag was set elsewhere, so return immediately.
        return;
    }
    // we set the flag to 2, indicating a timeout was hit.
    sendTimer.Dispose();
    client.Close(); // closing the Socket cancels the async operation.
    MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values up to 30 seconds. A value of 30 seconds has proved to be the only one that never times out, but that just seems like overkill and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code, which is your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but it immediately sets it back to 0 again (this time without the protection of Interlocked.CompareExchange()). Only after it's set the value to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method. And even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (but of course that would mean using a higher-level I/O object, since Socket doesn't support async/await).
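A minimal sketch of that specific change (keeping the asker's field names; note that it does not, by itself, guarantee the timer callback can't run past this point, as cautioned above):

private void SendCallback(IAsyncResult ar)
{
    // Claim completion; if the timeout callback got there first, bail out.
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
        return;

    if (sendTimer != null)
        sendTimer.Dispose();   // stop the timeout timer from firing

    try
    {
        Socket client = (Socket)ar.AsyncState;
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent + " bytes to server.\n");
    }
    finally
    {
        // Only now, after the send has actually completed, re-arm the flag
        // for the next send.
        Interlocked.Exchange(ref sendTimeoutflag, 0);
    }
}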

communicating with multiple slave (Modbus protocol based)

I am developing an application in which, let's say, 50-60 Modbus-supporting devices (slaves) are connected to a COM port and communicate with my application in a request-response mechanism.
Every 15 minutes, I want a request to be sent to every meter and the response received from each meter, one by one.
For this I am making use of System.Timers.Timer to call a method, let's say ReadAllSlave(), every 15 minutes.
In ReadAllSlave() I have used a for loop to send the request and receive the response, using Thread.Sleep to maintain the delay, but it doesn't seem to be working and the loop executes in a weird way.
private void StartPoll()
{
    double txtSampleRate = 15 * 60 * 1000;
    timer.Interval = txtSampleRate;
    timer.AutoReset = true;
    timer.Start();
}

void timer_Elapsed(object sender, ElapsedEventArgs e)
{
    for (int index = 0; index < meterCount; index++)
    {
        // Sending request to connected meter..
        mb.SendFc3(m_slaveID[0], m_startRegAdd[0], m_noOfReg[0], ref value_meter);
        if (mb.modbusStatus == "Read successful")
        {
            // Some code for writing the values in SQL Express database
        }
        // Wait for some time so that we will not get a timeout error for the next
        // request..
        Thread.Sleep(10000);
    }
}
Can anyone please suggest the best approach to implement this?
Thanks in advance.
It looks like your problem is a trivial one... you're always interrogating the same slave!
"index" is never used in your code...
What about something like this:
mb.SendFc3(m_slaveID[index], m_startRegAdd[index], m_noOfReg[index], ref value_meter);
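Putting that into the asker's loop would look roughly like this (a sketch that keeps the original names, with index used for every array access):

void timer_Elapsed(object sender, ElapsedEventArgs e)
{
    for (int index = 0; index < meterCount; index++)
    {
        // Interrogate each slave in turn, using "index" for every array access.
        mb.SendFc3(m_slaveID[index], m_startRegAdd[index], m_noOfReg[index], ref value_meter);
        if (mb.modbusStatus == "Read successful")
        {
            // write the values to the SQL Express database
        }
        // Give the bus some time before the next request to avoid timeouts.
        Thread.Sleep(10000);
    }
}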

Stop waiting for a packet from the serial port

I'm periodically receiving some data via the serial port, in order to plot it and do some more stuff. To achieve this, I send the data from my microcontroller to my computer with a header which specifies the length of each packet.
I have the program running and working perfectly except for one last detail. When the header specifies a length, my program will not stop until it reaches that number of bytes. So if, for some reason, some data from one packet is missed, the program waits and takes the beginning of the next packet... and then the real problems start. From that moment on, everything fails.
I thought about raising a Timer every 0.9 seconds (the packets come every second) that would issue a command to go back to waiting and reset the variables. But I don't know how to do it; I tried, but I get errors while running, since IndCom (see the code below) gets reset in the middle of some function and errors such as "Index out of bounds" arise.
I attach my code (without the timer):
private void routineRx(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    try
    {
        int BytesWaiting;
        do
        {
            BytesWaiting = this.serialPort.BytesToRead;
            // Copy it to the BuffCom
            while (BytesWaiting > 0)
            {
                BuffCom[IndCom] = (byte)this.serialPort.ReadByte();
                IndCom = IndCom + 1;
                BytesWaiting = BytesWaiting - 1;
            }
        } while (IndCom < HeaderLength);
        // I have to read until I get the whole header, which gives the info about the current packet
        PacketLength = getIntInfo(BuffCom, 4);
        while (IndCom < PacketLength)
        {
            BytesWaiting = this.serialPort.BytesToRead;
            // Copy it to the BuffCom
            while (BytesWaiting > 0)
            {
                BuffCom[IndCom] = (byte)this.serialPort.ReadByte();
                IndCom = IndCom + 1;
                BytesWaiting = BytesWaiting - 1;
            }
        }
        // If we have a packet --> check if it is valid and, if so, what kind of packet it is
        this.Invoke(new EventHandler(checkPacket));
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
I'm new to object-oriented programming and C#, so please be gentle! Thank you very much.
What you might do is use a Stopwatch.
const long COM_TIMEOUT = 500;
Stopwatch spw = new Stopwatch();

spw.Restart();
while (IndCom < PacketLength)
{
    // read byte, do stuff
    if (spw.ElapsedMilliseconds > COM_TIMEOUT) break; // etc
}
Restart the stopwatch at the beginning and check the time in each while loop, then break out (and clean up) if the timeout hits. 900 ms is probably too much, even, if you're only expecting a few bytes. COM traffic is quite fast; if you don't get the whole thing immediately, it's probably not coming.
I like to use termination characters in communication protocols (like [CR], etc.). This allows you to read until you find the termination character, then stop. This prevents reading into the next command. Even if you don't want to use termination characters, changing your code to something like this:
while (IndCom < PacketLength)
{
    if (serialPort.BytesToRead > 0)
    {
        BuffCom[IndCom] = (byte)this.serialPort.ReadByte();
        IndCom++;
    }
}
allows you to stop when you reach your packet size, leaving any remaining characters in the buffer for the next round through (i.e. the next command). You can add the stopwatch timeout in the above as well.
The other nice thing about termination characters is that you don't have to know in advance how long the packet should be - you just read until you reach the termination character and then process/parse the whole thing once you've got it. It makes your two-step port read into a one-step port read.
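A minimal sketch of the termination-character idea, dropped into the question's receive handler (reusing BuffCom, IndCom and checkPacket from the question; the '\r' terminator is an assumed choice):

while (this.serialPort.BytesToRead > 0)
{
    byte b = (byte)this.serialPort.ReadByte();
    if (b == (byte)'\r')                       // end-of-packet marker reached
    {
        this.Invoke(new EventHandler(checkPacket));
        IndCom = 0;                            // start collecting a fresh packet
    }
    else
    {
        BuffCom[IndCom] = b;
        IndCom++;
    }
}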
