Quick preface of what I'm trying to do. I want to start a process and spin up two threads to monitor its stdout and stderr. Each thread chews off bits of its stream and then fires them out to a NetworkStream. If there is an error in either thread, both threads need to die immediately.
Each of these processes, along with its stdout and stderr monitoring threads, is spun off by a main server process. This gets tricky because there can easily be 40 or 50 of these processes at any given time. Only during morning restart bursts are there ever more than 50 connections, but it really needs to be able to handle 100 or more. I test with 100 simultaneous connections.
try
{
StreamReader reader = this.myProcess.StandardOutput;
char[] buffer = new char[4096];
byte[] data;
int read;
while (reader.Peek() > -1 ) // This can block before the stream has been written to
{
read = reader.Read(buffer, 0, 4096);
data = Server.ClientEncoding.GetBytes(buffer, 0, read);
this.clientStream.Write(data, 0, data.Length); //ClientStream is a NetworkStream
}
}
catch (Exception err)
{
Utilities.ConsoleOut(string.Format("StdOut err for client {0} -- {1}", this.clientID, err));
this.ShutdownClient(true);
}
This code block is run in one thread, which right now is not a background thread. There is a similar thread for the StandardError stream. I am using this method instead of listening to OutputDataReceived and ErrorDataReceived because there was an issue in Mono that caused these events to not always fire properly, and even though it appears to be fixed now, I like that this method ensures I'm reading and writing everything sequentially.
ShutdownClient with true simply tries to kill both threads. Unfortunately the only way I have found to make this work is to use an interrupt on the stdErrThread and stdOutThread objects. Ideally Peek would not block, and I could just use a manual reset event to keep checking for new data on stdOut or stdErr and then just die when the event is flipped.
I doubt this is the best way to do it. Is there a way to execute this without using an Interrupt?
I'd like to change this, because I just saw in my logs that I missed a ThreadInterruptedException thrown inside Utilities.ConsoleOut. That method just does a System.Console.Write if a static variable is true, but I guess it blocks somewhere.
Edits:
These threads are part of a parent Thread that is launched en masse by a server upon a request. Therefore I cannot set the StdOut and StdErr threads to background and kill the application. I could kill the parent thread from the main server, but this again would get sticky with Peek blocking.
Added info about this being a server.
Also I'm starting to realize a better Queuing method for queries might be the ultimate solution.
I can tell this whole mess stems from the fact that Peek blocks. You're really trying to work around something that is fundamentally broken in the framework, and that is never easy to do cleanly (i.e. without a dirty hack). Personally, I would fix the root of the problem, which is the blocking Peek. Mono would have followed Microsoft's implementation and thus ends up with the same problem.
While I know exactly how to fix the problem should I be allowed to change the framework source code, the workaround is lengthy and time consuming.
But here goes.
Essentially, what Microsoft needs to do is change Process.StartWithCreateProcess such that standardOutput and standardError are both assigned a specialised type of StreamReader (e.g. PipeStreamReader).
In this PipeStreamReader, they need to override both ReadBuffer overloads (i.e. both overloads need to be changed to virtual in StreamReader first) such that, prior to a read, PeekNamedPipe is called to do the actual peek. As it stands, FileStream.Read() (called by Peek()) will block on pipe reads when no data is available to read. While a FileStream.Read() with 0 bytes works well on files, it doesn't work all that well on pipes. In fact, the .NET team missed an important part of the pipe documentation - the PeekNamedPipe WinAPI function.
The PeekNamedPipe function is similar to the ReadFile function with the following exceptions:
...
The function always returns immediately in a single-threaded application, even if there is no data in the pipe. The wait mode of a named pipe handle (blocking or nonblocking) has no effect on the function.
The best thing at this moment, with the issue unsolved in the framework, would be to roll your own Process class (a thin wrapper around the WinAPI would suffice).
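For illustration only (this is not part of the original answer), here is a minimal P/Invoke sketch of calling PeekNamedPipe against the redirected stdout handle to ask how many bytes are available without blocking. The helper name and the cast of StandardOutput.BaseStream to FileStream are assumptions that hold on the desktop framework:
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
static class PipePeek
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool PeekNamedPipe(
        IntPtr hNamedPipe,
        IntPtr lpBuffer,
        uint nBufferSize,
        IntPtr lpBytesRead,
        out uint lpTotalBytesAvail,
        IntPtr lpBytesLeftThisMessage);
    // Returns the number of bytes waiting on the redirected stdout pipe,
    // or -1 if the peek fails (e.g. the child exited and the pipe is broken).
    public static int BytesAvailable(Process process)
    {
        var baseStream = (FileStream)process.StandardOutput.BaseStream;
        IntPtr handle = baseStream.SafeFileHandle.DangerousGetHandle();
        uint available;
        if (!PeekNamedPipe(handle, IntPtr.Zero, 0, IntPtr.Zero, out available, IntPtr.Zero))
            return -1;
        return (int)available;
    }
}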
Why don't you just set both threads to be background threads and then kill the app? That would cause both threads to terminate immediately.
You're building a server. You want to avoid blocking. The obvious solution is to use the asynchronous APIs:
var myProcess = this.myProcess; // the redirected child process from the question, not Process.GetCurrentProcess()
StreamReader reader = myProcess.StandardOutput;
char[] buffer = new char[4096];
byte[] data;
int read;
while (!myProcess.HasExited)
{
read = await reader.ReadAsync(buffer, 0, 4096);
data = Server.ClientEncoding.GetBytes(buffer, 0, read);
await this.clientStream.WriteAsync(data, 0, data.Length);
}
No need to waste threads doing I/O work :)
Get rid of Peek and use the method below to read from the process output streams. ReadLine() returns null when the process ends. To join this thread with your calling thread, either wait for the process to end or kill the process yourself. ShutdownClient() should just Kill() the process, which will cause the other thread reading StdOut or StdErr to also exit.
private void ReadToEnd()
{
    // stream is the process's StandardOutput (or StandardError) reader;
    // output is whatever the lines are forwarded to (e.g. a writer over the NetworkStream).
    string nextLine;
    while ((nextLine = stream.ReadLine()) != null)
    {
        output.WriteLine(nextLine);
    }
}
I am trying to get data from a serial port continuously at a very high speed. The baud rate is 230400.
When I print the data, the timestamp, and BytesToRead to a file, I notice a 200 ms delay whenever BytesToRead drops to a single digit, and ReadLine() reads nothing during that 200 ms. After the delay, BytesToRead goes back to around 3000 and this process happens again and again. Essentially I am not getting data continuously.
I thought maybe I am reading faster than the data accumulates in the buffer, so I tried changing the read buffer size and putting this thread to sleep for 1 ms to let the buffer keep up with the speed I am reading. Neither worked. There are still delays.
Any thoughts are welcome.
private void dostuff() // the thread I created after the port is opened
{
    var startTime = DateTime.Now;
    var stopwatch = Stopwatch.StartNew();
    while (serialPortEncoder.IsOpen)
    {
        if (serialPortEncoder.BytesToRead > 210)
        {
            try
            {
                var line = serialPortEncoder.ReadLine();
                var timestamp = (startTime + stopwatch.Elapsed);
                var lineString = string.Format("{0} ----{1}",
                    line,
                    timestamp.ToString("HH:mm:ss:fff") + " " + serialPortEncoder.BytesToRead + "\r\n");
                // lineString is what gets written to the log file (file write omitted here)
                richTextBoxEncoderData.BeginInvoke(new MethodInvoker(delegate()
                {
                    richTextBoxEncoderData.Text = line; // update UI
                }));
            }
            catch (Exception ex) { MessageBox.Show(ex.ToString()); }
        }
    }
}
Unless there's a line feed every 210 bytes, your ReadLine() call is probably timing out and returning nothing. ReadLine() reads the input buffer until it encounters a newline value, then returns whatever data came before it. http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.readline.aspx
What kind of information is coming across the port? If you want to read a specific size buffer, just use the Read method. If you need to read till there's a line feed, use ReadLine() and check every so often to see if it returns a string.
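For example, here is a small sketch (not from the original answer; it borrows the question's serialPortEncoder and the 210-byte figure) of reading a fixed-size block with SerialPort.Read:
// Read exactly 210 bytes; SerialPort.Read returns however many bytes are
// currently available (at least one), or throws TimeoutException after ReadTimeout.
byte[] buffer = new byte[210];
int offset = 0;
while (offset < buffer.Length)
{
    offset += serialPortEncoder.Read(buffer, offset, buffer.Length - offset);
}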
Your code is pretty fundamentally flawed; it suffers from the "hot wait loop" bug. Your loop burns 100% of a core whenever the serial port doesn't have enough data. That makes Windows put your thread in the dog house for a while after you've burned your quantum, giving other threads a chance to run. Being in that dog house for 200 msec is a bit long, but certainly not unusual.
You should do this differently: give Windows a chance to wake you up when there's actually data available from the serial port. It favors threads that just had an I/O completion when it looks for the next thread to schedule. That is very easy to do: simply remove the BytesToRead test. The ReadLine() call is a blocking call that doesn't return until a NewLine is received. Your thread will then consume close to 0% CPU cycles.
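A minimal sketch of that suggestion, reusing the question's names (illustrative only, and it assumes a ReadTimeout is set on the port):
private void dostuff()
{
    var startTime = DateTime.Now;
    var stopwatch = Stopwatch.StartNew();
    while (serialPortEncoder.IsOpen)
    {
        try
        {
            // No BytesToRead test: ReadLine() blocks until a full line arrives,
            // so the thread sits at ~0% CPU while waiting.
            var line = serialPortEncoder.ReadLine();
            var timestamp = startTime + stopwatch.Elapsed;
            richTextBoxEncoderData.BeginInvoke(new MethodInvoker(delegate()
            {
                richTextBoxEncoderData.Text = line; // update UI
            }));
        }
        catch (TimeoutException)
        {
            // no complete line within ReadTimeout; just try again
        }
    }
}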
You will still lose arbitrary amounts of time when the machine is heavily loaded or the garbage collector runs. And no, that's not good enough to reliably read an encoder and close a feedback loop. Doing that reliably requires a microcontroller with predictable real-time behavior. Readily available from industrial electronics suppliers.
After some endless research on the web, I found the cause of the delay. Device Manager -> Ports -> Advanced -> changing the latency to 1 ms solved the problem. Now, every time the buffer drops to zero, it only needs 2 ms max to get back to normal. I am now polling data on a separate thread, and it works very well.
But like Hans Passant said, I am trying to figure out a better way to design it.
I have a network project with no timer in it, just a TcpClient that connects to a server and listens for any data coming from the network.
TcpClient _TcpClient = new TcpClient(_IpAddress, _Port);
_ConnectThread = new Thread(new ThreadStart(ConnectToServer));
_ConnectThread.IsBackground = true;
_ConnectThread.Start();
private void ConnectToServer()
{
try
{
NetworkStream _NetworkStream = _TcpClient.GetStream();
byte[] _RecievedPack = new byte[1024 * 1000];
string _Message = string.Empty;
int _BytesRead;
int _Length;
while (_Flage)
{
_BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
_Length = BitConverter.ToInt32(_RecievedPack, 0);
_Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
if (_BytesRead != 0)
{
//call a function to manage the data
_NetworkStream.Flush();
}
}
}
catch (Exception exp)
{
// call a function to alarm that connection is false
}
}
But after a while the CPU usage of my application goes up (90%, 85%, ...), even if no data is received.
Could anybody give me some tips about CPU usage? I'm totally blank; I don't know which part of the project I should check!
Could anybody give me some tips about CPU usage?
You should check the loops in the application, like while loops. If you spend a lot of time spinning while waiting for some condition to become true, it will take a lot of CPU time. For instance:
while (true)
{}
or
while (_Flag)
{
//do something
}
If the code executed inside the while loop is synchronous, the thread will end up eating a lot of CPU cycles. To solve this problem you could execute the work on a different thread, so it runs asynchronously, and then use a ManualResetEvent or AutoResetEvent to report back when the operation has executed (a rough sketch follows after the Sleep example below). Another thing to consider is the System.Threading.Thread.Sleep method, which tells the thread to sleep and gives the CPU time to execute other threads, for example:
while(_Flag)
{
//do something
Thread.Sleep(100);//Blocks the current thread for 100 milliseconds
}
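As mentioned above, here is a rough sketch of the reset-event idea (purely illustrative; the queue and event names are assumptions): the loop sleeps until a producer signals it instead of spinning.
AutoResetEvent dataReady = new AutoResetEvent(false);
Queue<byte[]> pending = new Queue<byte[]>();
// consumer loop
while (_Flag)
{
    dataReady.WaitOne();              // blocks at ~0% CPU until Set() is called
    byte[] packet;
    lock (pending)
    {
        if (pending.Count == 0) continue;
        packet = pending.Dequeue();
    }
    // process packet...
}
// producer side (e.g. the thread that actually reads from the network):
// lock (pending) { pending.Enqueue(packetBytes); }
// dataReady.Set();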
There are several issues with your code... the most important ones are IMHO:
Use async methods (BeginRead etc.), not blocking methods, and don't create your own thread. Threads are expensive resources, and using blocking calls on dedicated threads is therefore a waste of them. Using async calls lets the operating system call you back when an event (data received, for instance) occurs, so no separate thread is needed (the callback runs on a pooled thread). A rough sketch follows after these two points.
Be aware that Read may return just a few bytes; it doesn't have to fill the _RecievedPack buffer. Theoretically it may return only one or two bytes, not even enough for your call to ToInt32!
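A rough sketch of the async approach (illustrative only, not the poster's exact protocol; the 4-byte length prefix and buffer size are assumptions carried over from the question's code):
byte[] _buffer = new byte[8192];
MemoryStream _pending = new MemoryStream();
void StartRead(NetworkStream stream)
{
    stream.BeginRead(_buffer, 0, _buffer.Length, OnRead, stream);
}
void OnRead(IAsyncResult ar)
{
    var stream = (NetworkStream)ar.AsyncState;
    int bytesRead = stream.EndRead(ar);
    if (bytesRead == 0)
    {
        // the remote host closed the connection - stop reading
        return;
    }
    // accumulate bytes until a whole length-prefixed message has arrived
    _pending.Write(_buffer, 0, bytesRead);
    byte[] data = _pending.ToArray();
    if (data.Length >= 4)
    {
        int length = BitConverter.ToInt32(data, 0);
        if (data.Length >= 4 + length)
        {
            string message = Encoding.UTF8.GetString(data, 4, length);
            // call a function to manage the data here
            _pending.SetLength(0); // (carrying leftover bytes to the next frame is omitted for brevity)
        }
    }
    StartRead(stream); // issue the next read; no dedicated thread is blocked in between
}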
The CPU usage spikes because you have a while loop that does nothing when it receives nothing from the network. Add a Thread.Sleep() at the end of it when no data was received, and your CPU usage will return to normal.
And take the advice that Lucero gave you.
I suspect that the other end of the connection is closed while the while loop is still running, in which case you'll repeatedly read zero bytes from the network stream (marking the connection closed; see NetworkStream.Read on MSDN).
Since NetworkStream.Read then returns immediately (as per MSDN), you'll be stuck in a tight while loop that consumes a lot of processor time. Try adding a Thread.Sleep() or detecting a "zero read" within the loop. Ideally you should handle a read of zero bytes by terminating your end of the connection, too.
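A rough sketch of that suggestion (purely illustrative), reusing the question's variable names:
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    if (_BytesRead == 0)
    {
        // zero bytes means the remote host closed the connection - stop looping
        break;
    }
    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    // call a function to manage the data
}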
Have you attached a debugger and stepped through the code to see if it's behaving in the way you expect?
Alternatively, if you have a profiling tool available (such as ANTS), it will help you see where time is being spent in your application.
I need to read from a NetworkStream which sends data at random times, and the size of the data packets also keeps varying. I am implementing a multi-threaded application where each thread has its own stream to read from. If there is no data on the stream, the application should keep waiting for data to arrive. However, if the server is done sending data and has terminated the session, it should exit.
Initially I used the Read method to obtain the data from the stream, but it blocked the thread and kept waiting until data appeared on the stream.
The documentation on MSDN suggests,
If no data is available for reading, the Read method returns 0. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
But in my case, I have never got the Read method to return 0 and exit gracefully. It just waits indefinitely.
In my further investigation, I came across BeginRead, which watches the stream and invokes a callback method asynchronously as soon as it receives data. I have tried to look at various implementations using this approach as well; however, I was unable to identify when using BeginRead would be beneficial as opposed to Read.
As I see it, BeginRead just has the advantage of the async call, which does not block the current thread. But in my application, I already have a separate thread to read and process the data from the stream, so that wouldn't make much difference for me.
Can anyone please help me understand the Wait and Exit mechanism for BeginRead and how is it different from Read?
What would be the best way to implement the desired functionality?
I use BeginRead, but continue blocking the thread using a WaitHandle:
byte[] readBuffer = new byte[32];
var asyncReader = stream.BeginRead(readBuffer, 0, readBuffer.Length,
null, null);
WaitHandle handle = asyncReader.AsyncWaitHandle;
// Give the reader 2 seconds to respond with a value
bool completed = handle.WaitOne(2000, false);
if (completed)
{
int bytesRead = stream.EndRead(asyncReader);
StringBuilder message = new StringBuilder();
message.Append(Encoding.ASCII.GetString(readBuffer, 0, bytesRead));
}
Basically it allows the async read to time out via the WaitHandle and gives you a boolean value (completed) telling you whether the read completed within the set time (2000 ms in this case).
Here's my full stream reading code copied and pasted from one of my Windows Mobile projects:
private static bool GetResponse(NetworkStream stream, out string response)
{
byte[] readBuffer = new byte[32];
var asyncReader = stream.BeginRead(readBuffer, 0, readBuffer.Length, null, null);
WaitHandle handle = asyncReader.AsyncWaitHandle;
// Give the reader 2 seconds to respond with a value
bool completed = handle.WaitOne(2000, false);
if (completed)
{
int bytesRead = stream.EndRead(asyncReader);
StringBuilder message = new StringBuilder();
message.Append(Encoding.ASCII.GetString(readBuffer, 0, bytesRead));
if (bytesRead == readBuffer.Length)
{
// There's possibly more than 32 bytes to read, so get the next
// section of the response
string continuedResponse;
if (GetResponse(stream, out continuedResponse))
{
message.Append(continuedResponse);
}
}
response = message.ToString();
return true;
}
else
{
int bytesRead = stream.EndRead(asyncReader);
if (bytesRead == 0)
{
// 0 bytes were returned, so the read has finished
response = string.Empty;
return true;
}
else
{
throw new TimeoutException(
"The device failed to read in an appropriate amount of time.");
}
}
}
Async I/O can be used to achieve the same amount of I/O in less threads.
As you note, right now your app has one thread per Stream. This is OK with small numbers of connections, but what if you need to support 10000 at once? With async I/O, this is no longer necessary because the read completion callback allows context to be passed identifying the relevant stream. Your reads no longer block, so you don't need one thread per Stream.
Whether you use sync or async I/O, there is a way to detect and handle stream closedown on the relevant API return codes. BeginRead should fail with IOException if the socket has already been closed. A closedown while your async read is pending will trigger a callback, and EndRead will then tell you the state of play.
When your application calls BeginRead, the system will wait until data is received or an error occurs, and then the system will use a separate thread to execute the specified callback method, and blocks on EndRead until the provided NetworkStream reads data or throws an exception.
Did you try server.ReceiveTimeout? You can set the time the Read() function will wait for incoming data before giving up. In your case, this property is probably set to infinite somewhere.
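For illustration (a small sketch, not from the original answer; the names and the 5000 ms value are arbitrary), the timeout can be set like this, after which the read fails with an IOException instead of blocking forever:
// names here are illustrative, not from the question
tcpClient.ReceiveTimeout = 5000;          // timeout on the underlying socket, in ms
NetworkStream stream = tcpClient.GetStream();
stream.ReadTimeout = 5000;                // the equivalent property on the stream
try
{
    int read = stream.Read(buffer, 0, buffer.Length);
}
catch (IOException)
{
    // no data arrived within the timeout window
}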
BeginRead is an asynchronous call, which means the read is started in the background while your main thread keeps running. So now there are two things happening in parallel. If you want the result, you have to call EndRead, which gives you the result.
Some pseudocode:
BeginRead()
//...do something in main object while result is fetching in another thread
var result = EndRead();
But if your main thread doesn't have anything else to do and you need the result, you should just call Read.
Here is the C# code I'm using to launch a subprocess and monitor its output:
using (process = new Process()) {
process.StartInfo.FileName = executable;
process.StartInfo.Arguments = args;
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.CreateNoWindow = true;
process.Start();
using (StreamReader sr = process.StandardOutput) {
string line = null;
while ((line = sr.ReadLine()) != null) {
processOutput(line);
}
}
if (process.ExitCode == 0) {
jobStatus.State = ActionState.CompletedNormally;
jobStatus.Progress = 100;
} else {
jobStatus.State = ActionState.CompletedAbnormally;
}
OnStatusUpdated(jobStatus);
}
I am launching multiple subprocesses in separate ThreadPool threads (but no more than four at a time, on a quad-core machine). This all works fine.
The problem I am having is that one of my subprocesses will exit, but the corresponding call to sr.ReadLine() will block until ANOTHER one of my subprocesses exits. I'm not sure what it returns, but this should NOT be happening unless there is something I am missing.
There's nothing about my subprocess that would cause them to be "linked" in any way - they don't communicate with each other. I can even look in Task Manager / Process Explorer when this is happening, and see that my subprocess has actually exited, but the call to ReadLine() on its standard output is still blocking!
I've been able to work around it by spinning the output monitoring code out into a new thread and doing a process.WaitForExit(), but this seems like very odd behavior. Anyone know what's going on here?
The MSDN documents about ProcessStartInfo.RedirectStandardOutput discuss in detail deadlocks that can arise when doing what you are doing here. A solution is provided that uses ReadToEnd but I imagine the same advice and remedy would apply when you use ReadLine.
Synchronous read operations introduce a dependency between the caller reading from the StandardOutput stream and the child process writing to that stream. These dependencies can cause deadlock conditions. When the caller reads from the redirected stream of a child process, it is dependent on the child. The caller waits for the read operation until the child writes to the stream or closes the stream. When the child process writes enough data to fill its redirected stream, it is dependent on the parent. The child process waits for the next write operation until the parent reads from the full stream or closes the stream. The deadlock condition results when the caller and child process wait for each other to complete an operation, and neither can continue. You can avoid deadlocks by evaluating dependencies between the caller and child process.
The best solution seems to be async I/O rather than the sync methods:
You can use asynchronous read operations to avoid these dependencies and their deadlock potential. Alternately, you can avoid the deadlock condition by creating two threads and reading the output of each stream on a separate thread.
There is a sample here that ought to be useful to you if you go this route.
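If you go the asynchronous route, a minimal sketch using the event-based reading built into Process looks roughly like this (this is not the linked sample; executable, args and processOutput are the names from the question):
using (var process = new Process()) {
    process.StartInfo.FileName = executable;
    process.StartInfo.Arguments = args;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.CreateNoWindow = true;
    process.OutputDataReceived += (sender, e) => {
        if (e.Data != null)        // null signals the end of the stream
            processOutput(e.Data);
    };
    process.Start();
    process.BeginOutputReadLine(); // lines are delivered on pool threads; nothing blocks here
    process.WaitForExit();
}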
I think it's not your code that's the issue. Blocking calls can unblock for a number of reasons, not only because their task was accomplished.
I don't know about Windows, I must admit, but in the Unix world, when a child finishes, a signal is sent to the parent process and this wakes it from any blocking calls. This would unblock a read on whatever input the parent was expecting.
It wouldn't surprise me if Windows worked similarly. In any case, read up on the reasons why a blocking call may unblock.
I want to read from a socket in an asynchronous way.
If I used the synchronous code below
StreamReader aReadStream = new StreamReader(aStream);
String aLine = aReadStream.ReadLine(); // application might hang here
with a valid open connection, but no data available for reading, the application would hang at the ReadLine() function.
If I switch to the code below using Asynchronous Programming Model
NetworkStream aStream = aClient.GetStream();
while(true)
{
IAsyncResult res = aStream.BeginRead(data, 0, data.Length, null, null);
if (res.IsCompleted){
Trace.WriteLine("here");
} else {
Thread.Sleep(100);
}
}
In the same scenario, what would happen? If there is no data to read, would the variable res appear as completed or not?
More importantly, when there is no data to read, are all of my calls to aStream.BeginRead() in that while loop scheduled continuously on the thread pool? If so, am I risking degrading the application's performance because the thread pool has grown too large?
Thanks for the help
AFG
By writing the code in this way, you've basically made it synchronous.
The proper way to make this code work is to call BeginRead, passing a callback handler that processes the data once it has been read, and then go do other work rather than just entering a loop.
The callback handler you specify will also be triggered when the data stream is terminated (e.g. because the connection was closed) which you can handle appropriately.
See http://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream.beginread.aspx for a sample.
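A compact sketch of that shape (illustrative only, reusing the question's aStream and data variables):
void BeginNextRead()
{
    aStream.BeginRead(data, 0, data.Length, OnDataRead, null);
}
void OnDataRead(IAsyncResult res)
{
    int bytesRead = aStream.EndRead(res);
    if (bytesRead == 0)
    {
        // stream terminated (e.g. the connection was closed) - handle appropriately
        return;
    }
    // process data[0..bytesRead) here, then wait for the next chunk
    BeginNextRead();
}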
Have a look at the article I wrote on CodeProject covering asynchronous sockets, which is based on what I learnt from "MSDN, August 2005, 'Winsock - Get closer to the wire with high-performance sockets in .NET', Daryn Kiely, Pg 81."
Hope this helps,
Best regards,
Tom.