I am trying to get data from a serial port continuously at a very high speed. The baud rate is 230400.
When I print the data, the timestamp, and BytesToRead to a file, I notice a 200 ms delay whenever BytesToRead drops to a single digit, and ReadLine() reads nothing during that 200 ms. After the delay, BytesToRead climbs back to around 3000 and this process happens again and again. Essentially I am not getting data continuously.
I thought maybe I was reading faster than data accumulates in the buffer, so I tried changing the read buffer size and putting this thread to sleep for 1 ms to let the buffer keep up with my reading. Neither worked; there are still delays.
Any thoughts are welcome.
private void dostuff() //The thread I created after the port is opened
{
    var startTime = DateTime.Now;
    var stopwatch = Stopwatch.StartNew();
    while (serialPortEncoder.IsOpen)
    {
        if (serialPortEncoder.BytesToRead > 210)
        {
            try
            {
                var line = serialPortEncoder.ReadLine();
                var timestamp = (startTime + stopwatch.Elapsed);
                var lineString = string.Format("{0} ----{1}",
                    line,
                    timestamp.ToString("HH:mm:ss:fff") + " " + serialPortEncoder.BytesToRead + "\r\n");
                richTextBoxEncoderData.BeginInvoke(new MethodInvoker(delegate()
                {
                    richTextBoxEncoderData.Text = line; //update UI
                }));
            }
            catch (Exception ex) { MessageBox.Show(ex.ToString()); }
        }
    }
}
Unless there's a line feed every 210 bytes, your ReadLine() call is probably timing out and returning nothing. ReadLine() reads the input buffer until it encounters a newline, then returns whatever data came before it. http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.readline.aspx
What kind of information is coming across the port? If you want to read a buffer of a specific size, just use the Read method. If you need to read until there's a line feed, use ReadLine() and check every so often whether it returns a string.
Your code is pretty fundamentally flawed; it suffers from the "hot wait loop" bug. Your loop burns 100% of a core when the serial port doesn't have enough data. That makes Windows put your thread in the dog house for a while after you've burned your quantum, giving other threads a chance to run. Being in that dog house for 200 msec is a bit long, but certainly not unusual.
You should do this differently: give Windows a chance to wake you up when there's actually data available from the serial port. It favors threads that had an I/O complete when it looks for the next thread to schedule. That is very easy to do: simply remove the BytesToRead test. The ReadLine() call is a blocking call that doesn't return until a NewLine is received. Your thread will now consume close to 0% CPU cycles.
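For illustration, a minimal sketch of that simplified loop, reusing the names from the question (the timestamp logging is omitted, and the TimeoutException handling only matters if you set a finite ReadTimeout):
private void dostuff()
{
    while (serialPortEncoder.IsOpen)
    {
        try
        {
            // ReadLine() blocks until a full line arrives, so the thread waits
            // inside the driver instead of spinning on BytesToRead.
            var line = serialPortEncoder.ReadLine();
            richTextBoxEncoderData.BeginInvoke(new MethodInvoker(delegate()
            {
                richTextBoxEncoderData.Text = line; // update UI
            }));
        }
        catch (TimeoutException)
        {
            // Only thrown if ReadTimeout is set to something other than InfiniteTimeout.
        }
        catch (Exception ex) { MessageBox.Show(ex.ToString()); }
    }
}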
You will still lose arbitrary amounts of time when the machine is heavily loaded or the garbage collector runs. And no, that's not good enough to reliably read an encoder and close a feedback loop. Doing that reliably requires a microcontroller with predictable real-time behavior. Readily available from industrial electronics suppliers.
After endless research on the web, I found the cause of the delay. Device Manager ---> Ports ---> Advanced ---> change the latency timer to 1 ms solves the problem. Now, every time the buffer drops to zero, it needs at most 2 ms to get back to normal. I am now polling data on a separate thread, and it works very well.
But like Hans Passant said, I am trying to figure out a better way to design it.
Related
I am writing a program to simulate a device that transmits data over the serial port. To do this, I created a System.IO.Ports.SerialPort object in my form, and a System.Windows.Forms.Timer to do the transmitting at a given frequency. Everything works fine except that, as the frequency approaches the limit of the serial port speed, the UI starts to lock up and eventually becomes unresponsive once data is handed to the port faster than the port can transmit it. My code is:
private void OnSendTimerTick(object sender, EventArgs e)
{
    StringBuilder outputString = new StringBuilder("$", 51);
    //code to build the first output string
    SendingPort.WriteLine(outputString.ToString());

    outputString = new StringBuilder("$", 44);
    //code to build the second output string
    SendingPort.WriteLine(outputString.ToString());

    if (SendingPort.BytesToWrite > 100)
    {
        OnStartStopClicked(sender, e);
        MessageBox.Show("Warning: Sending buffer is overflowing!");
    }
}
I was expecting the WriteLine function to be asynchronous - return immediately while the port transmits in the background. Instead, it seems that the OnSendTimerTick function is properly threaded, but WriteLine seems to be running in the UI thread.
How can I get the serial port to behave in this way? Creating the SerialPort object in the timer seems like a bad idea, because then I'd have to open and close it on each timer tick.
It is only partly asynchronous: it will return immediately, but only as long as the bytes you write fit in the serial port driver's transmit buffer. That's going to come to a screeching stop when you flood the port with too much data, faster than it can transmit. You can make it truly asynchronous by using the SerialPort.BaseStream.BeginWrite() method. That doesn't make it any faster, but it moves the bottleneck somewhere else, possibly away from the UI.
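A rough sketch of what that could look like, assuming the SendingPort from the question and a line already converted to bytes (the callback and error handling here are illustrative, not a definitive implementation):
byte[] payload = Encoding.ASCII.GetBytes(outputString.ToString() + "\n");

// BeginWrite on the underlying stream hands the buffer to the driver asynchronously,
// so the timer/UI thread is not blocked while the bytes dribble out.
SendingPort.BaseStream.BeginWrite(payload, 0, payload.Length, ar =>
{
    try
    {
        SendingPort.BaseStream.EndWrite(ar); // complete the operation and observe any error
    }
    catch (IOException)
    {
        // the port was closed or the write failed; handle/log as appropriate
    }
}, null);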
If you're using System.Windows.Forms.Timer, the timer event handler is being executed on the UI thread. Same is true if you're using System.Timers.Timer when the SynchronizingObject is set to the form. If the serial port buffer fills up, the thread has to wait until it has enough space to hold the new data that you want to send.
I would suggest that you use a System.Threading.Timer to do this. The timer callback is called on a pool thread, meaning that the UI thread won't lock up if the WriteLine has to wait. If you do this, then you have to make sure that there is only one thread executing the timer callback at any time. Otherwise you can get data out of order. The best way to do that is to make the timer a one-shot and re-initialize it at the end of every callback:
const int TimerFrequency = 50; // or whatever

System.Threading.Timer timer;

void InitTimer()
{
    timer = new System.Threading.Timer(TimerCallback, null, TimerFrequency, Timeout.Infinite);
}

void TimerCallback(object state)
{
    // do your stuff here

    // Now reset the timer
    timer.Change(TimerFrequency, Timeout.Infinite);
}
Passing a value of Timeout.Infinite as the period parameter prevents the timer from being a periodic timer; instead, it fires just once. The Timer.Change call re-initializes the timer after each send.
A possibly better way to handle this is to eliminate the timer altogether by setting the WriteBufferSize to a sufficiently large value. Then your program can just dump all of its data into the buffer and let the SerialPort instance worry about dribbling it out across the wire. This assumes, of course, that you can create a buffer large enough to hold whatever your program is trying to send.
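For example, something like this before opening the port (the buffer size here is arbitrary; pick one that covers your worst-case burst):
// Enlarge the driver-side write buffer so WriteLine can dump data and return.
// Setting it before Open() is the safe choice; values below 2048 are ignored.
SendingPort.WriteBufferSize = 64 * 1024;
SendingPort.Open();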
The slowness of the UI could also be resolved if you created a queue of strings to be written and had a background thread write to the serial port from the queue. If you take that approach, be careful of the size of the queue; a rough sketch follows.
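A minimal sketch of that producer/consumer idea, assuming .NET 4's BlockingCollection and the SendingPort from the question; the bounded capacity is what keeps the queue from growing without limit:
using System.Collections.Concurrent;
using System.Threading;

// Bounded to 100 pending lines; Add blocks (or use TryAdd) once the queue is full.
BlockingCollection<string> sendQueue = new BlockingCollection<string>(100);

// Producer (the timer tick) just enqueues:
// sendQueue.Add(outputString.ToString());

// Consumer runs on a background thread and owns all serial writes.
Thread writer = new Thread(() =>
{
    foreach (string line in sendQueue.GetConsumingEnumerable())
    {
        SendingPort.WriteLine(line); // blocking here only stalls the writer thread, not the UI
    }
});
writer.IsBackground = true;
writer.Start();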
edit: For some reason I can't use Add Comment, so I'll just edit this. The documentation for BeginWrite has this statement "The default implementation of BeginWrite on a stream calls the Write method synchronously, which means that Write might block on some streams." It then goes on to exclude File and Network streams, but not SerialPort. I guess you can try it and see.
I have a network project; there is no timer in it, just a TcpClient that connects to a server and listens for any data from the network.
TcpClient _TcpClient = new TcpClient(_IpAddress, _Port);

_ConnectThread = new Thread(new ThreadStart(ConnectToServer));
_ConnectThread.IsBackground = true;
_ConnectThread.Start();

private void ConnectToServer()
{
    try
    {
        NetworkStream _NetworkStream = _TcpClient.GetStream();
        byte[] _RecievedPack = new byte[1024 * 1000];
        string _Message = string.Empty;
        int _BytesRead;
        int _Length;

        while (_Flage)
        {
            _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
            _Length = BitConverter.ToInt32(_RecievedPack, 0);
            _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
            if (_BytesRead != 0)
            {
                //call a function to manage the data
                _NetworkStream.Flush();
            }
        }
    }
    catch (Exception exp)
    {
        // call a function to alarm that connection is false
    }
}
But after a while the CPU usage of my application goes up (90%, 85%, ...), even if no data is received.
Could anybody give me some tips about CPU usage? I'm totally blank; I don't know which part of the project I should check.
Could anybody give me some tips about CPU usage?
You should check the loops in the application, such as while loops. If you spend a lot of time waiting for some condition to become true, it will take a lot of CPU time. For instance:
while (true)
{
}

or

while (_Flag)
{
    //do something
}
If the code executed inside the while loop is synchronous, the thread will end up eating a lot of CPU cycles. To solve this problem, you could execute the code inside the while loop on a different thread, so it runs asynchronously, and then use a ManualResetEvent or AutoResetEvent to report back when the operation has executed. Another thing to consider is the System.Threading.Thread.Sleep method, which puts the thread to sleep and gives the CPU time to execute other threads, for example:
while (_Flag)
{
    //do something
    Thread.Sleep(100); // blocks the current thread for 100 milliseconds
}
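And a minimal sketch of the wait-handle approach mentioned above; the producer side (whatever actually receives the data) is hypothetical and only hinted at here:
using System.Threading;

class DataWaiter
{
    private readonly AutoResetEvent _dataArrived = new AutoResetEvent(false);
    private volatile bool _running = true;

    // Called by whatever receives the data (hypothetical producer).
    public void SignalDataArrived()
    {
        _dataArrived.Set();
    }

    // Consumer loop: blocks without burning CPU until signaled.
    public void ConsumeLoop()
    {
        while (_running)
        {
            if (_dataArrived.WaitOne(1000)) // wake at least once a second to re-check _running
            {
                // process the data here
            }
        }
    }

    public void Stop()
    {
        _running = false;
        _dataArrived.Set(); // wake the consumer so it can exit
    }
}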
There are several issues with your code... the most important ones are IMHO:
Use async methods (BeginRead etc.), not blocking methods, and don't create your own thread. Threads are "expensive" resources, and using blocking calls in threads is therefore a waste of resources. Using async calls lets the operating system call you back when an event (data received, for instance) has occurred, so no separate thread is needed (the callback runs on a pooled thread).
Be aware that Read may return just a few bytes; it doesn't have to fill the _RecievedPack buffer. Theoretically it may receive just one or two bytes, not even enough for your call to ToInt32! See the sketch below.
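A minimal sketch of how the read loop might accumulate bytes until a complete length-prefixed message is available; the class and method names are illustrative, not from the original code:
using System;
using System.Net.Sockets;
using System.Text;

static class FramedReader
{
    // Read exactly 'count' bytes into 'buffer', looping because Read may return fewer bytes.
    // Returns false if the remote end closed the connection before 'count' bytes arrived.
    public static bool ReadExactly(NetworkStream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                return false; // remote end closed the connection
            offset += read;
        }
        return true;
    }

    // Usage: read the 4-byte length prefix first, then the payload.
    public static string ReadMessage(NetworkStream stream)
    {
        byte[] header = new byte[4];
        if (!ReadExactly(stream, header, 4))
            return null; // connection closed

        int length = BitConverter.ToInt32(header, 0);
        byte[] payload = new byte[length];
        if (!ReadExactly(stream, payload, length))
            return null; // connection closed mid-message

        return Encoding.UTF8.GetString(payload, 0, length);
    }
}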
The CPU usage spikes because you have a while loop that does nothing when it receives nothing from the network. Add a Thread.Sleep() at the end of it when no data was received, and your CPU usage will be normal.
And take the advice, that Lucero gave you.
I suspect that the other end of the connection is closed when the while loop is still running, in which case you'll repeatedly read zero bytes from the network stream (marking connection closed; see NetworkStream.Read on MSDN).
Since NetworkStream.Read will then return immediately (as per MSDN), you'll be stuck in a tight while loop that will consume a lot of processor time. Try adding a Thread.Sleep() or detecting a "zero read" within the loop. Ideally you should handle a read of zero bytes by terminating your end of the connection, too.
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    if (_BytesRead != 0)
    {
        //call a function to manage the data
        _NetworkStream.Flush();
    }
}
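A minimal sketch of the zero-read handling described above, keeping the variable names from the loop quoted here; it still assumes a full message arrives in one Read call (see the framing sketch earlier for handling partial reads):
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    if (_BytesRead == 0)
    {
        // The remote end closed the connection; leave the loop instead of spinning.
        break;
    }

    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    //call a function to manage the data
}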
Have you attached a debugger and stepped through the code to see if it's behaving in the way you expect?
Alternatively, if you have a profiling tool available (such as ANTS), it will help you see where time is being spent in your application.
Here is my Timer Elapsed event. I am receiving a System.OutOfMemoryException on the line Thread thread = new Thread(threadStart);
I am receiving the error fairly quickly (1 to 5 minutes, randomly), and it does not cause unexpected results in my program. I am just wondering what is causing this error, and I am afraid it may cause unexpected results if it is left unchecked. I have searched on the internet and am coming nowhere near the maximum number of threads.
readList contains about 46 entries.
Any help would be appreciated.
private void glob_loopTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    try
    {
        ParameterizedThreadStart threadStart = new ParameterizedThreadStart(readHoldingRegisters);
        foreach (readwriteDataGridRow.Read row in readList)
        {
            Thread thread = new Thread(threadStart);
            thread.IsBackground = true;
            thread.Start(System.Convert.ToInt32(row.Address));
        }
    }
    catch (Exception ex)
    {
        UpdateConsole(new object[] { ex.Message.ToString() + " " + ex.StackTrace.ToString(), Color.Red });
        Thread.CurrentThread.Abort(); // maybe?
    }
}
EDIT:
Here is a bit more information.
My program is reading registers from a Serial Device using the Modbus RTU protocol.
A single register takes less than a tenth of a second to retrieve from readHoldingRegisters
I am open to suggestions on what else to use rather than threads.
note: I need to call readHoldingRegisters 40 - 100 times in a single 'pass'. The passes start when the user hits connect and end when he hits disconnect. Timers are not needed, they just offered a simple way for me to maintain the loop with a start and stop button.
EDIT: Solved
private void glob_loopTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    try
    {
        foreach (readwriteDataGridRow.Read row in readList)
        {
            readHoldingRegisters(row.Address);
        }
    }
    catch (Exception ex)
    {
        UpdateConsole(new object[] { ex.Message.ToString() + " " + ex.StackTrace.ToString(), Color.Red });
    }
}
The additional Threads were the problem and were not needed.
Ughh, do not, ever (well almost ever) abort threads. There are many preferable ways to make a System.Thread stop. Look around SO, you will find plenty of examples on why doing this is a bad idea and alternative approaches.
On with your question: the problem doesn't seem to be the number of rows in readList. It is more likely that your glob_loopTimer_Elapsed event handler is being executed many times and you are basically starting more and more threads.
What is the interval of your glob_loopTimer?
So how many times is glob_loopTimer_Elapsed called? The name implies that it is run on a periodic timer interval. If so, and if the 46 threads that get created on each invocation do not terminate about as quickly as the timer interval fires, then you could easily be spawning too many threads and running out of memory space as a result. Perhaps you could try logging when each thread starts and when each one finishes to get an idea about how many are in flight at once?
Keep in mind that every thread you allocate will have a certain amount of stack space allocated to it. Depending upon your runtime configuration, this amount of stack space may not be negligible (as in, it may be 1 MB per thread or more) and it may quickly consume your available memory even if you're not close to approaching the theoretical maximum number of threads supported by the OS.
Besides your problem, I'd consider using the ThreadPool or the TPL.
When using System.Threading.Thread there is no automatism to manage the threads.
Also, each Thread allocates some memory, which could lead to your problem.
The ThreadPool and the TPL manage these resources themselves.
See also: Thread vs ThreadPool
Reusing threads that have already been created instead of creating new ones (an expensive process)
...
If you queue 100 thread pool tasks, it will only use as many threads as have already been created to service these requests (say 10, for example). The thread pool will make frequent checks (I believe every 500 ms in 3.5 SP1) and, if there are queued tasks, it will make one new thread. If your tasks are quick, then the number of new threads will be small, and reusing the 10 or so threads for the short tasks will be faster than creating 100 threads up front.
If your workload consistently has large numbers of thread pool requests coming in, then the thread pool will tune itself to your workload by creating more threads in the pool by the above process, so that there is a larger number of threads available to process requests.
Check here for more in-depth info on how the thread pool functions under the hood.
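As a rough illustration of the ThreadPool/TPL suggestion (assuming .NET 4 and the readList, readHoldingRegisters, UpdateConsole, and Color members from the question), the per-row threads could be replaced with pooled tasks. Note that the question's own edit shows a plain sequential loop is enough here, and if every call shares one serial port the tasks will effectively serialize anyway; this is only a sketch of the pool-based alternative:
// requires: using System.Collections.Generic; using System.Threading.Tasks;
private void glob_loopTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    try
    {
        var tasks = new List<Task>();
        foreach (readwriteDataGridRow.Read row in readList)
        {
            int address = Convert.ToInt32(row.Address);
            // Schedule the work on the thread pool instead of a dedicated thread per row.
            tasks.Add(Task.Factory.StartNew(() => readHoldingRegisters(address)));
        }
        // Make sure one pass finishes before the next timer tick piles more work on.
        Task.WaitAll(tasks.ToArray());
    }
    catch (Exception ex)
    {
        UpdateConsole(new object[] { ex.Message + " " + ex.StackTrace, Color.Red });
    }
}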
I just know that each thread also consumes (by default) around 1 MB of memory.
I am implementing a very basic thread in C#:
private Thread listenThread;

public void startParser()
{
    this.listenThread = new Thread(new ThreadStart(checkingData));
    this.listenThread.IsBackground = true;
    this.listenThread.Start();
}

private void checkingData()
{
    while (true)
    {
    }
}
Then I immediately get 100% CPU usage. I want to check whether sensor data has been read inside the while(true) loop. Why is it like this?
Thanks in advance.
while (true) is what's killing your CPU.
You can add Thread.Sleep(X) to your while loop to give the CPU some rest before checking again.
Also, it seems like you actually need a timer.
Look at one of the Timer classes here: http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx.
Use a Timer with as long a polling interval as you can afford, e.g. one second or half a second.
You need to trade off between CPU usage and the maximum delay you can afford between checks.
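A minimal sketch of that timer-based polling; ReadSensorData is a hypothetical placeholder for whatever check the question's loop was meant to perform:
using System;
using System.Threading;

class SensorPoller
{
    private Timer _timer;

    public void Start()
    {
        // Poll every 500 ms instead of spinning in a while(true) loop.
        _timer = new Timer(CheckSensor, null, 0, 500);
    }

    private void CheckSensor(object state)
    {
        // ReadSensorData() is a placeholder for the actual sensor check.
        // var data = ReadSensorData();
        // if (data != null) { /* handle it */ }
    }

    public void Stop()
    {
        _timer.Dispose();
    }
}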
Let your loop sleep. It's running around and around and getting tired. At the very least, let it take a break eventually.
Because your function isn't doing anything inside the while block, it grabs the CPU and, for all practical purposes, never lets go of it, so other threads can't do their work:
private void checkingData()
{
    while (true)
    {
        // executes, immediately
    }
}
If you change it to the following, you should see more reasonable CPU consumption:
private void checkingData()
{
    while (true)
    {
        // read your sensor data
        Thread.Sleep(1000);
    }
}
You can use a blocking queue. Taking an item from a blocking queue blocks the thread until an item is put into the queue, and that doesn't cost any CPU.
With .NET 4, you can use BlockingCollection: http://msdn.microsoft.com/en-us/library/dd267312.aspx
Before version 4, there is no blocking queue in the .NET Framework, but you can find many implementations of a blocking queue if you google it. Here is one:
http://www.codeproject.com/KB/recipes/boundedblockingqueue.aspx
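A minimal sketch of the BlockingCollection approach, assuming .NET 4 and some producer elsewhere that enqueues incoming sensor readings:
using System.Collections.Concurrent;

class SensorQueue
{
    private readonly BlockingCollection<string> _readings = new BlockingCollection<string>();

    // Producer side: called by whatever actually receives the sensor data.
    public void Enqueue(string reading)
    {
        _readings.Add(reading);
    }

    // Consumer side: run this on the worker thread; it blocks without using CPU.
    public void ConsumeLoop()
    {
        foreach (string reading in _readings.GetConsumingEnumerable())
        {
            // process the reading here
        }
    }

    // Call when shutting down so GetConsumingEnumerable completes and the loop exits.
    public void Stop()
    {
        _readings.CompleteAdding();
    }
}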
By the way, where does the data you are waiting for come from?
EDIT
If you want to check a file, you can use FileSystemWatcher to watch it without busy-waiting on a thread.
If your data comes from an external API and the API doesn't block the thread, there is no way to block your thread other than Thread.Sleep.
If you're polling for a condition, definitely do as others suggested and put in a sleep. I'd also add that if you need maximum performance, you can use a statistical trick to avoid sleeping when sensor data has been read. When you detect sensor data is idle, say, 10 times in a row, then start to sleep on each iteration again.
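A rough sketch of that adaptive idea; the idle threshold of 10 and the TryReadSensor and running names are illustrative placeholders:
int idleCount = 0;
while (running)
{
    bool gotData = TryReadSensor(); // hypothetical: returns true if new data was handled
    if (gotData)
    {
        idleCount = 0; // data is flowing, keep polling at full speed
    }
    else
    {
        idleCount++;
        if (idleCount >= 10)
        {
            Thread.Sleep(100); // idle for a while, back off to spare the CPU
        }
    }
}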
Quick preface of what I'm trying to do. I want to start a process and start up two threads to monitor its stdout and stderr. Each thread chews off bits of its stream and then fires it out to a NetworkStream. If there is an error in either thread, both threads need to die immediately.
Each of these processes, with its stdout and stderr monitoring threads, is spun off by a main server process. The reason this becomes tricky is that there can easily be 40 or 50 of these processes at any given time. Only during morning restart bursts are there ever more than 50 connections, but it really needs to be able to handle 100 or more. I test with 100 simultaneous connections.
try
{
    StreamReader reader = this.myProcess.StandardOutput;
    char[] buffer = new char[4096];
    byte[] data;
    int read;
    while (reader.Peek() > -1) // This can block before stream is streamed to
    {
        read = reader.Read(buffer, 0, 4096);
        data = Server.ClientEncoding.GetBytes(buffer, 0, read);
        this.clientStream.Write(data, 0, data.Length); //ClientStream is a NetworkStream
    }
}
catch (Exception err)
{
    Utilities.ConsoleOut(string.Format("StdOut err for client {0} -- {1}", this.clientID, err));
    this.ShutdownClient(true);
}
This code block is run in one Thread which is right now not Background. There is a similar thread for the StandardError stream. I am using this method instead of listening to OutputDataReceived and ErrorDataReceived because there was an issue in Mono that caused these events to not always fire properly and even though it appears to be fixed now I like that this method ensures I'm reading and writing everything sequentially.
ShutdownClient with True simply tries to kill both threads. Unfortunately the only way I have found to make this work is to use an interrupt on the stdErrThread and stdOutThread objects. Ideally peek would not block and I could just use a manual reset event to keep checking for new data on stdOut or stdIn and then just die when the event is flipped.
I doubt this is the best way to do it. Is there a way to execute this without using an Interrupt?
I'd like to change this, because I just saw in my logs that I missed a ThreadInterruptedException thrown inside Utilities.ConsoleOut. That method just does a System.Console.Write if a static variable is true, but I guess it blocks somewhere.
Edits:
These threads are part of a parent Thread that is launched en masse by a server upon a request. Therefore I cannot set the StdOut and StdErr threads to background and kill the application. I could kill the parent thread from the main server, but this again would get sticky with Peek blocking.
Added info about this being a server.
Also I'm starting to realize a better Queuing method for queries might be the ultimate solution.
I can tell this whole mess stems from the fact that Peek blocks. You're really trying to fix something that is fundamentally broken in the framework and that is never easy (i.e. not a dirty hack). Personally, I would fix the root of the problem, which is the blocking Peek. Mono would've followed Microsoft's implementation and thus ends up with the same problem.
While I know exactly how to fix the problem should I be allowed to change the framework source code, the workaround is lengthy and time consuming.
But here goes.
Essentially, what Microsoft needs to do is change Process.StartWithCreateProcess such that standardOutput and standardError are both assigned a specialised type of StreamReader (e.g. PipeStreamReader).
In this PipeStreamReader, they need to override both ReadBuffer overloads (i.e. need to change both overloads to virtual in StreamReader first) such that prior to a read, PeekNamedPipe is called to do the actual peek. As it is at the moment, FileStream.Read() (called by Peek()) will block on pipe reads when no data is available for read. While a FileStream.Read() with 0 bytes works well on files, it doesn't work all that well on pipes. In fact, the .NET team missed an important part of the pipe documentation - PeekNamedPipe WinAPI.
The PeekNamedPipe function is similar to the ReadFile function with the following exceptions:
...
The function always returns immediately in a single-threaded application, even if there is no data in the pipe. The wait mode of a named pipe handle (blocking or nonblocking) has no effect on the function.
The best thing at this moment without this issue solved in the framework would be to roll out your own Process class (a thin wrapper around WinAPI would suffice).
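If you do go down the thin-wrapper road, the core of it is a PeekNamedPipe P/Invoke so you can check how many bytes are available before calling Read. A rough sketch follows; the handle plumbing assumes the .NET Framework's redirected-output FileStream and may need adjusting for your setup:
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class PipePeek
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool PeekNamedPipe(
        SafeFileHandle handle,
        IntPtr buffer,
        uint bufferSize,
        IntPtr bytesRead,
        out uint totalBytesAvail,
        IntPtr bytesLeftThisMessage);

    // Returns the number of bytes that can be read from the child's stdout
    // without blocking, or 0 if nothing is available (or the pipe is gone).
    public static uint BytesAvailable(Process process)
    {
        var stream = process.StandardOutput.BaseStream as FileStream;
        if (stream == null)
            return 0;

        uint available;
        if (!PeekNamedPipe(stream.SafeFileHandle, IntPtr.Zero, 0, IntPtr.Zero, out available, IntPtr.Zero))
            return 0;
        return available;
    }
}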
Why don't you just set both threads to be background and then kill the app? It would cause an immediate closing of both threads.
You're building a server. You want to avoid blocking. The obvious solution is to use the asynchronous APIs:
var myProcess = Process.GetCurrentProcess();
StreamReader reader = myProcess.StandardOutput;
char[] buffer = new char[4096];
byte[] data;
int read;

while (!myProcess.HasExited)
{
    read = await reader.ReadAsync(buffer, 0, 4096);
    data = Server.ClientEncoding.GetBytes(buffer, 0, read);
    await this.clientStream.WriteAsync(data, 0, data.Length);
}
No need to waste threads doing I/O work :)
Get rid of Peek() and use the method below to read from the process output streams. ReadLine() returns null when the process ends. To join this thread with your calling thread, either wait for the process to end or kill the process yourself. ShutdownClient() should just Kill() the process, which will cause the other thread reading StdOut or StdErr to also exit.
private void ReadToEnd()
{
    string nextLine;
    while ((nextLine = stream.ReadLine()) != null)
    {
        output.WriteLine(nextLine);
    }
}