TCP/IP event that starts the reading process - C#

I'm developing a solution to send data from a microcontroller with a GPRS modem (the server) to my application on the computer (the single client) over a TCP/IP connection.
I first designed my application to work with a serial port connection (going step by step was easier), and now I have to adapt it to receive from a socket instead of the serial port. I'm totally new to sockets, and I thought the switch would be more straightforward.
I've been reading for days and haven't found anything clear. Most of the solutions I see only send and receive a single packet, or are too complicated.
My server is going to send packets every second, and the client has to read them and then analyze them.
I want something like this:
this.serialPort.DataReceived += new SerialDataReceivedEventHandler(this.routineRx);
But for NetworkStream: something that wakes up the reading process (RoutineRx) when there's something to read.
Maybe I have to change my approach and forget about this kind of thing. I'm trying to make the most of my old code.
Maybe I'm asking something that has been asked many times before, but I really didn't find it. If so, sorry.

There is no DataReceived event for a NetworkStream. You can use the DataAvailable property to determine when there is data ready for you to read, and use Read to read that data.
There are indeed many examples of reading/writing TCP data in C#. It's different from reading serial port data, and you may have to rethink your design somewhat.
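As a minimal sketch of what the question asks for, an asynchronous read loop on the NetworkStream can raise an event whenever bytes arrive, roughly mimicking SerialPort.DataReceived. The class and event names here, and the 4096-byte buffer, are illustrative assumptions, not a standard API:

```csharp
using System;
using System.Net.Sockets;

// Hypothetical sketch: re-arming BeginRead so a handler fires on every
// chunk of incoming data, similar to SerialPort.DataReceived.
class StreamReceiver
{
    private readonly NetworkStream stream;
    private readonly byte[] buffer = new byte[4096];

    public event Action<byte[]> DataReceived;   // plays the role of RoutineRx

    public StreamReceiver(NetworkStream stream)
    {
        this.stream = stream;
    }

    public void Start()
    {
        stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);
    }

    private void OnRead(IAsyncResult ar)
    {
        int n = stream.EndRead(ar);
        if (n == 0) return;                     // remote side closed the connection
        byte[] chunk = new byte[n];
        Array.Copy(buffer, chunk, n);
        DataReceived?.Invoke(chunk);            // "wakes up" the reading routine
        stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);  // re-arm for the next chunk
    }
}
```

Usage would then look much like the serial port version: `receiver.DataReceived += bytes => RoutineRx(bytes);`. Note that TCP gives you a byte stream, not packets, so one callback may deliver part of a message or several messages at once.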

Related

Using multiple network interfaces to broadcast fractions of data on each (load balancing / bonding)

The question is: how do I send data over the multiple internet connections available on the current PC?
This is possibly partially similar to This Post,
though my idea is to take advantage of multiple NICs (the way RAID-0 uses multiple HDDs),
actually multiple internet connections/accounts, to maximize
upload throughput (which usually gets only 1/8 of the total bandwidth).
The concept I am trying to implement is to use the fastest protocol, regardless of data integrity,
so I could send data from one point to the other (having a "client" part of the application handle the data... check integrity while putting the data back into one piece),
or maybe just use TCP if that isn't worth it (handling integrity at the application level to increase speed).
I know there's an existing application called "Connectify" that claims to do something similar,
though my idea was to make something a little different, and I need to understand the basics
so I can start this project for testing and development.
...thanks in advance!
As a generalization, the approach you will need to take in this case is to create multiple TCP clients, each bound to an individual network adapter in your machine. You can iterate through the available adapters, test that each has a connection to the outside world, and then add them to a collection; for each packet of data you want to transmit, you pick one of them to send the packet out.
See http://msdn.microsoft.com/en-us/library/3bsb3c8f.aspx on how to bind TCPClient to individual IPEndPoints.
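A sketch of that binding step, assuming the adapter enumeration has already produced a local address (192.168.1.10 and the remote host here are placeholders):

```csharp
using System.Net;
using System.Net.Sockets;

// Bind the TcpClient to a specific local adapter before connecting.
// Port 0 lets the OS pick an ephemeral local port on that interface.
IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Parse("192.168.1.10"), 0);
TcpClient client = new TcpClient(localEndPoint);   // outgoing traffic is pinned to that NIC
client.Connect("example.com", 80);                 // connection leaves via 192.168.1.10
```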
Because of the way TCP operates, you will essentially have to construct a wrapper for each packet of data that includes an order ID, so that packets received out of order (which will happen most of the time in this case) can be pieced back together again.
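A minimal sketch of such a wrapper, assuming an arbitrary 4-byte little-endian sequence header (the framing format is an illustrative choice, not a standard):

```csharp
using System;

// Hypothetical framing: prefix each payload with a sequence number so the
// receiver can reorder chunks that arrived over different connections.
static class PacketFraming
{
    public static byte[] Wrap(int sequenceId, byte[] payload)
    {
        byte[] framed = new byte[4 + payload.Length];
        BitConverter.GetBytes(sequenceId).CopyTo(framed, 0);  // 4-byte header
        payload.CopyTo(framed, 4);
        return framed;
    }

    public static int ReadSequenceId(byte[] framed)
    {
        return BitConverter.ToInt32(framed, 0);  // receiver sorts packets by this
    }
}
```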
Let me know if you need any more help understanding things.

Can your program receive info via TCP while UDP is currently sending packets?

Now, I'm interested to know: if my program is connected to a server through TCP and the server sends a message to my program while I am sending UDP packets at the same time, will the TCP packet get to me? Everything is in one class!
Thanks for your help!
Your question is actually on the border of several issues that all network application programmers must know and consider.
First of all: all data received from the network is stored in the operating system's internal buffer, where it waits to be read. The buffer is not infinite, so if you wait long enough, some data may be dropped. Usually the chunks of data written there are single packets, but not always. You can never make any assumptions about how much data will be available for reading in TCP/IP communication. In UDP, on the other hand, you must always read a whole single packet; otherwise the data will be lost. You can use recvfrom to read UDP packets, and I suggest using it.
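Since the question is in C#, the equivalent of the recvfrom advice above is UdpClient.Receive, which always returns exactly one whole datagram (port 9000 is a placeholder):

```csharp
using System.Net;
using System.Net.Sockets;

// Receive one complete UDP datagram; nothing is silently truncated,
// unlike reading an arbitrary number of bytes from a TCP stream.
UdpClient udp = new UdpClient(9000);
IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
byte[] datagram = udp.Receive(ref sender);   // blocks until one packet arrives
```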
Secondly: using blocking and non-blocking approach is one of the most important decisions for your network app. There is a lot of information about it in the Internet: C- Unix Sockets - Non-blocking read , what is the definition of blocking read vs non- blocking read? or a non-blocking tutorial.
As for threads: threads are never required to write a multiple connection handler application. Sometimes they will simplify your code, sometimes they will make it run faster. There are some well-known programming patterns for using threads, like handling each separate connection in a separate thread. More often than not, especially for an inexperienced programmer, using threads will only be a source of strange errors and headaches.
I hope that my post answers your question and addresses the discussion I've been having below another answer.
Depends on what you mean by "at the same time". Usually the answer is "yes", you can have multiple TCP/IP connections and multiple UDP sockets all transmitting and receiving at the same time.
Unless you're really worried about latency, where a few microseconds can cause you trouble; in that case, one connection may interfere with the other.
Short answer - Yes.
You can have many connections at once, all sending and receiving; assuming that you've got multiple sockets.
You don't mention the number of clients that your server will have to deal with, or what you are doing with the data being sent/received.
Dependent on your implementation, multiple threads may also be required (As Dariusz Wawer points out, this is not essential, but I mention them because a scalable solution that can handle larger numbers of clients will likely use threads).
Check out this post on TCP and UDP Ports for some further info:
TCP and UDP Ports Explained
A good sample C# tutorial can be found here: C# Tutorial - Simple Threaded TCP Server

C#\Java and Java\C# pairs of client\server - graceful disconnect when the client (socket) is closed

This is maybe more of a thing for discussion than a question.
Background:
I have two pairs of server/client, one pair written in Java and one written in C#. Both pairs do the same thing. There are no problems when I use the Java\Java and C#\C# combinations. I would like to make the Java\C# and C#\Java combinations work as well. There is no problem with I/O; I am using a byte array representing an XML-formatted string. I am bound to use TCP.
I need to care about graceful disconnect; in Java there is no such thing. When you close the client's socket, the server-side socket remains in a passive close state - therefore the server thread handling this socket is still alive, and in the case of many clients I could end up with thousands of unnecessary threads. In C# it is enough to call TcpClient.Available to determine whether there is data in the buffer or whether the client has been closed (SocketException). In Java I can think of two approaches:
You have to write something to the underlying stream of the socket to really test whether the other side is still open.
You have to make the other side aware that you are closing one side of the connection before you close it. I have implemented this one: before closing the client socket I send a packet containing the 0x04 byte (end of transmission) to the server, and the server reacts to this byte by closing its side of the socket.
Unfortunately, both approaches have caused me a dilemma when it comes to the C#\Java and Java\C# pairs of client\server. If I want these pairs to be able to communicate with each other, I will have to add code for sending the 0x04 byte to the C# client, and of course code handling it to the C# server, which is a kind of overhead. I could live with the unnecessary network traffic; the main problem is that this C# code is part of a core communication library which I do not want to touch unless it is absolutely necessary.
Question:
Is there another approach in Java to disconnect gracefully which does not require writing to the socket's underlying stream and checking the result, so that I do not have to handle this in the C# code as well? I have a free hand when it comes to libraries; I do not have to use only Java SE\EE.
I have given (what I think is) a concise explanation of this "graceful disconnection" topic in this answer; I hope you find it useful.
Remember that both sides have to use the same graceful disconnection pattern. I have found myself trying to gracefully disconnect a Client socket but the Server kept sending data (did not shutdown its own side), which resulted in an infinite loop...
The answer was apparent. There is nothing like a graceful disconnect in C# either; it is a matter of the TCP protocol contract, so there is the same problem as in Java, with one slight difference: under certain circumstances you can work around it - if you send 0 bytes through the NetworkStream and then check the TcpClient.Connected state, it is somewhat correct. However, from what I have found on the internet, this is neither a reliable nor a preferred way to test a connection in C#. Of course, it has been used in my C# library. In Java you cannot use this at all.
So my solution of sending an "end of transmission" packet to the server when the client disconnects is solid, and I have introduced it in the C# library as well. Now I just tremble in fear waiting for the first bug to emerge...
One last point: I also have to implement some kind of mechanism that collects the handling threads on the server for crashed clients. Keep that in mind if you are doing a similar thing; no client is guaranteed to end expectedly ;)
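A sketch of the zero-byte probe described above, with the caveat repeated: as noted, this workaround is neither reliable nor the preferred way to test a connection, and the method name here is illustrative:

```csharp
using System.Net.Sockets;

// Hypothetical probe: a zero-byte write can surface a dead connection as
// an IOException in some cases, but a "true" result proves little.
static bool ProbeConnection(TcpClient client)
{
    try
    {
        client.GetStream().Write(new byte[0], 0, 0);  // send 0 bytes
        return client.Connected;                      // state as last observed
    }
    catch (System.IO.IOException)
    {
        return false;   // the write failed: the other side is gone
    }
}
```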

Serial data logging

I have a device connected to my computer that sends serial data to the computer every 5 minutes. I want to write a basic program to capture this serial data every 5 minutes and put it into a database. I was hoping to use C# because I have used C# with databases before and found it quite easy.
Can anybody offer me any advice on how I might do this? I really have no idea where to start, and I know in theory it sounds easy, but when I started I actually found it really hard.
Using C#, you can use the System.IO.Ports namespace to communicate over the serial ports - there's a nice article here.
Alternatively, you can use Python and the pySerial module. I've written an app to communicate over the serial port using pySerial - it's quite easy to use, and can run on many different operating systems including OSX and Windows (I'm assuming you're using Windows). Python also has built-in support for SQLite.
The problem with capturing data on a serial port is that serial ports aren't thread-safe, so if there is more than one listener, data will be corrupted.
If you are absolutely sure that you're the only one listening for data on this port, .NET has a built-in wrapper, System.IO.Ports.SerialPort, which you can use to connect to COM1, COM2, etc. You'll need to know the rate in bits/sec at which this device sends data (its baud rate), its error-checking (parity) protocol, and the format of the data it is sending (you'll get it as a byte array, which you must convert byte-by-byte into data you can work with). Then, your program should be able to open the port and listen for DataReceived events with a handler that will read and digest the data. Again, it's VERY important that you never have two threads trying to read at once; the easiest way is to set a volatile boolean indicating that a handler is reading data; if another handler is ever spawned while a previous one is still running, the first thing the new one should do is read that value, and since it's set, exit the new handler immediately.
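The pattern described above could be sketched like this: a DataReceived handler guarded by a volatile flag so two handlers never read at once. COM1, 9600 baud, no parity, 8 data bits and one stop bit are placeholder settings; match them to your device's documentation:

```csharp
using System.IO.Ports;

// Sketch of a single-listener serial capture with a reentrancy guard.
class SerialLogger
{
    private readonly SerialPort port =
        new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One);
    private volatile bool reading;   // true while a handler is draining the port

    public void Start()
    {
        port.DataReceived += OnDataReceived;
        port.Open();
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        if (reading) return;         // another handler is already reading: exit immediately
        reading = true;
        try
        {
            int n = port.BytesToRead;
            byte[] data = new byte[n];
            port.Read(data, 0, n);   // convert byte-by-byte and insert into the database here
        }
        finally
        {
            reading = false;
        }
    }
}
```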

Looking for best practise for writing a serial device communication app

I am pretty new to serial comms, but I would like advice on how best to achieve a robust application which speaks to and listens to a serial device.
I have managed to make use of System.IO.SerialPort, and have successfully connected to, sent data to and received data from my device. The way things work is this:
My application connects to the COM port and opens it... I then connect my device to the COM port, and it detects a connection to the PC, so it sends a bit of text. It's really just copyright info, as well as the firmware version. I don't do anything with that, except display it in my 'activity' window.
The device then waits.
I can then query information by sending a command such as 'QUERY PARAMETER1'. It then replies with something like:
'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n'
I then process that. I can then update it by sending 'SET PARAMETER1 12345', and it will reply with 'QUERY PARAMETER1\r\n\r\n12345\r\n\r\n'.
All pretty basic.
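As a hedged sketch, the reply format above could be parsed by splitting on the CRLF separators; the assumption (from the examples given) is that the value is the first non-empty token after the echoed command:

```csharp
using System;

// Hypothetical parser for replies like "QUERY PARAMETER1\r\n\r\n76767\r\n\r\n".
static string ParseReply(string reply, string parameter)
{
    string[] parts = reply.Split(new[] { "\r\n" },
                                 StringSplitOptions.RemoveEmptyEntries);
    // parts[0] is e.g. "QUERY PARAMETER1", parts[1] the value, e.g. "76767"
    if (parts.Length >= 2 && parts[0].EndsWith(parameter))
        return parts[1];
    return null;   // reply did not match the expected shape
}
```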
So, what I have done is create a Communication class. This class runs in its own thread, sends data back to the main form... and also allows me to send messages to it.
Sending data is easy. Receiving is a bit more tricky. I have employed the DataReceived event, and whenever data comes in, I echo it to my screen. My problem is this:
When I send a command, I feel my handling is very dodgy. Let's say I am sending 'QUERY PARAMETER1'. I send the command to the device, then put 'PARAMETER1' into a global variable, and do a Thread.Sleep(100).
In the DataReceived handler, I then have a bit of logic that checks the incoming data and sees if the string CONTAINS the value in the global variable. As the reply may be 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', it sees that it contains my parameter, parses the string, and returns the value I am looking for by placing it into another global variable.
My sending method was sleeping for 100 ms. It then wakes and checks the returned global variable. If it has data... then I'm happy, and I process the data. The problem is... if the sleep is too short, it will fail. And it feels flaky... putting stuff into variables... then waiting...
The other option is to use ReadLine instead, but that's very blocking. So I remove the DataReceived handler and instead... just send the data... then call ReadLine(). That may give me better results. Except when we connect initially, there is no time at which data comes from the device without me requesting it. So maybe ReadLine will be simpler and safer? Is this known as a 'blocking' read? Also, can I set a timeout?
Hopefully someone can guide me.
Well, Thread.Sleep() is blocking too. Much worse, actually, because you'd have to specify a sleep time that is always safe, even if the machine is under heavy load. Using ReadLine() is always better, it will be quicker and it cannot fail.
Note that your example doesn't require the client code to wait for a response. It can simply assume that the command was effective. All you need is an Error event to signal that something went wrong.
If there is a command that requires the client code to get the response, then you should offer the option to wait as well as to get the result asynchronously. That gives the client code options: waiting is slow but easy; async is difficult to program. This is a very common pattern in the .NET Framework; the asynchronous method name starts with "Begin". Check the MSDN Library article about it.
You also should consider delivering asynchronous notifications on the thread that the client code prefers. The SynchronizingObject property is a good pattern for that.
If you do all of your reads on a background thread, then I don't see any problem with using ReadLine. It's the simplest and most robust solution.
You can use the ReadTimeout property to set the timeout for read operations.
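A sketch of that blocking request/response pattern: write the command, then call ReadLine with a timeout, instead of sleeping and polling globals. The port settings and the 500 ms timeout are placeholder assumptions:

```csharp
using System;
using System.IO.Ports;

SerialPort port = new SerialPort("COM1", 9600);
port.ReadTimeout = 500;        // ms; ReadLine throws TimeoutException past this
port.NewLine = "\r\n";         // matches the device's line terminator
port.Open();

port.WriteLine("QUERY PARAMETER1");
try
{
    string reply = port.ReadLine();   // blocks until a full line arrives or the timeout fires
    // parse the reply here
}
catch (TimeoutException)
{
    // the device did not answer in time
}
```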
You may want to read this: Serial Port
