Infinite While Loop in C#, CPU usage 100%

I have a web response stream which produces data intermittently. There is no way of knowing when data will be received on this channel.
To read all of the data in my application, I use a while(true) loop, which results in 100% CPU usage.
I cannot use a ManualResetEvent because the application has to keep reading from the stream all the time. This works fine as long as data is being received, but when there is no data on the stream, other threads cannot get enough CPU time to do their processing.
My code looks like this:
StreamReader streamReader = new StreamReader(httpResponse.GetResponseStream());
while (true)
{
    int charvalue = streamReader.Read();
    // More code to process data read above
}
I don't want to use Thread.Sleep() because it slows my application down unnecessarily, and I still want other threads to get CPU time when no data is being received on this thread.

Make the method async and poll with await Task.Delay():
public async void ReadFromStream(StreamReader streamReader)
{
    while (true)
    {
        int charvalue = streamReader.Read();
        await Task.Delay(1000);
        // Add the data read above to a list, queue or similar structure so it
        // can easily be consumed from your other methods.
    }
}
Then just call it; it will run indefinitely in the background. Have it push its results onto a queue and you will be free to use those results as you please: dequeue the data into a list and do whatever you want with it.
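For illustration, here is a minimal sketch of that hand-off through a queue. Note that it uses StreamReader.ReadAsync rather than the Read plus Task.Delay polling shown above, so it waits for data without adding a fixed delay; the StreamPoller class, the ReceivedChars property and the buffer size are placeholders, not from the original answer:
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

public class StreamPoller
{
    // Consumers on other threads can drain this queue whenever they like.
    public ConcurrentQueue<char> ReceivedChars { get; } = new ConcurrentQueue<char>();

    public async Task ReadFromStreamAsync(StreamReader streamReader)
    {
        char[] buffer = new char[1024];
        while (true)
        {
            // Waits for data without burning CPU in a tight loop.
            int count = await streamReader.ReadAsync(buffer, 0, buffer.Length);
            if (count == 0)
                break; // end of stream
            for (int i = 0; i < count; i++)
                ReceivedChars.Enqueue(buffer[i]);
        }
    }
}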

Related

Constantly read from NetworkStream async

I am a fairly new .NET developer and I'm currently reading up on async/await. I need to work on a framework used for testing devices that are controlled by remotely accessing servers over TCP and reading/writing data from/to those servers. This will be used for unit tests.
There is no application-layer protocol and the server may send data based on external events. Therefore I must be able to continuously capture any data coming from the server and write it to a buffer, which can be read from a different context.
My idea goes somewhere along the lines of the following snippet:
// ...
private MemoryStream m_dataBuffer;
private NetworkStream m_stream;
// ...

public async void Listen()
{
    while (Connected)
    {
        try
        {
            int bytesReadable = m_dataBuffer.Capacity - (int)m_dataBuffer.Position;
            // (...) resize m_dataBuffer if necessary (...)
            m_stream.ReadTimeout = Timeout;
            lock (m_dataBuffer)
            {
                int bytesRead = await m_stream.ReadAsync(m_dataBuffer.GetBuffer(),
                    (int)m_dataBuffer.Position, bytesReadable);
                m_dataBuffer.Position += bytesRead;
            }
        }
        catch (IOException ex)
        {
            // handle read timeout.
        }
        catch (Exception)
        {
            throw new TerminalException("ReadWhileConnectedAsync() exception");
        }
    }
}
This seems to have the following disadvantages:
If I call and await the Listen function, the caller hangs, even though the caller must be able to continue (the network stream should be read for as long as the connection is open).
If I declare it async void and don't await it, the application crashes when an exception occurs in the task.
If I declare it async Task and don't await it, I assume the same thing happens (plus I get a warning)?
The following questions ensue:
Can I catch exceptions thrown in Listen if I don't await it?
Is there a better way to constantly read from a network stream using async/await?
Is it actually sane to try to continuously read from a network stream using async/await or is a thread a better option?
async void should at the very least be async Task with the return value thrown away. That makes the method adhere to sane standards and pushes the responsibility onto the caller, which is better equipped to make decisions about waiting and error handling.
But you don't have to throw away the return value. You can attach a logging continuation:
async Task Log(Task t)
{
    try { await t; }
    catch (Exception ex) { /* log the exception */ }
}
And use it like this:
Log(Listen());
Throw away the task returned by Log (or, await it if you wish to logically wait).
Or, simply wrap everything in Listen in a try-catch. This appears to be the case already.
Can I catch exceptions thrown in Listen if I don't await it?
You can find out about exceptions using any way that attaches a continuation or waits synchronously (the latter is not your strategy).
Is there a better way to constantly read from a network stream using async/await?
No, this is the way it's supposed to be done. At any given time there should be one read IO outstanding. (Or zero for a brief period of time.)
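As a hedged illustration of that one-outstanding-read pattern (the NetworkStream and the onData callback are assumptions made for this example, not part of the answer):
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

static class ReadLoop
{
    public static async Task RunAsync(NetworkStream stream, Action<byte[], int> onData)
    {
        var buffer = new byte[4096];
        while (true)
        {
            // Exactly one read is outstanding at any moment.
            int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length);
            if (bytesRead == 0)
                break; // the remote side closed the connection
            onData(buffer, bytesRead);
        }
    }
}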
Is it actually sane to try to continuously read from a network stream using async/await or is a thread a better option?
Both will work correctly. There is a trade-off to be made. Synchronous code can be simpler, easier to debug and even less CPU intensive. Asynchronous code saves on thread stack memory and context switches. In UI apps await has significant benefits.
I would do something like this:
const int MaxBufferSize = ... ;
Queue<byte> m_buffer = new Queue<byte>(MaxBufferSize);
NetworkStream m_stream = ... ;
...
// this will create a thread that reads bytes from
// the network stream and writes them into the buffer
Task.Run(() => ReadNetworkStream());

private void ReadNetworkStream()
{
    while (true)
    {
        var next = m_stream.ReadByte();
        if (next < 0) break; // no more data
        while (m_buffer.Count >= MaxBufferSize)
            m_buffer.Dequeue(); // drop front
        m_buffer.Enqueue((byte)next);
    }
}

WCF response from separate thread

[ServiceContract]
public interface IEventDismiss
{
    [OperationContract]
    [return: MessageParameter(Name = "response")]
    [XmlSerializerFormat]
    Response ProcessRequest(Request request);
}
Hello,
Above is my WCF contract in C#, and it is pretty straightforward. However, it becomes a little more complicated when I receive the request and pass it on to another thread to produce the response, and finally send this response back.
My algorithm is:
Get the request.
Pass it on to a separate thread by putting it onto a static queue that the other thread reads.
Once the thread finishes processing, it puts the response object onto a static response queue.
In my ProcessRequest function I have a while loop that dequeues this response and sends it back to the requester.
public Response ProcessRequest(Request request)
{
    bool sWait = true;
    Response sRes = new Response();
    ResponseProcessor.eventIDQueue.Enqueue(request.EventID);
    while (sWait)
    {
        if (ResponseProcessor.repQ.Count > 0)
        {
            sRes = ResponseProcessor.repQ.Dequeue();
            sWait = false;
        }
    }
    return sRes;
}
Now, before everyone starts to grill me: I realize this is bad practice, and that's why I'm asking the question here, hoping to find a better way to do it. I realize the current code has the following issues:
My while loop may spin continuously and eat up CPU because it has no sleep() between iterations.
My response queue may hand back the wrong response because of the asynchronous nature of the calls.
So I have two questions:
Is there a way to put a sleep in the while loop to eliminate the high CPU usage?
Is there a better way to do this?
There's no point in doing this in the first place. Rather than having the current thread sit around doing nothing while it waits for another thread to compute the work (eating up tons of CPU cycles in the meantime), just compute the response on the current thread and send it back. You gain nothing by queuing it for another thread to handle.
You are also using queue objects that cannot be safely accessed from multiple threads, so in addition to being extremely inefficient, the code is also subject to race conditions that can mean it won't even work.
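A minimal sketch of the inline approach (ResponseProcessor.Process is a hypothetical helper standing in for whatever the worker thread currently does with the event ID):
public Response ProcessRequest(Request request)
{
    // WCF already dispatches each request on its own worker thread, so the
    // response can simply be computed here and returned directly.
    return ResponseProcessor.Process(request.EventID);
}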

Share a BlockingCollection across multiple tasks

Here is my scenario. I am getting a large amount of data in chunks from an external data source and I have to write it locally to two places. One of the destinations is very slow to write to, but the other one is super fast (and I cannot rely on reading it back from the fast destination to write to the slow one). To accomplish this, I am using a producer-consumer pattern (using BlockingCollection).
The issue I have right now is that I have to queue the data into two BlockingCollections, and that takes way too much memory. My code looks very similar to the example below, but I would really like to drive the two Tasks from a single queue. Does anybody know the proper way to do that? Are there any inefficiencies in the code below?
class Program
{
    const int MaxNumberOfWorkItems = 15;
    static BlockingCollection<int> slowBC = new BlockingCollection<int>(MaxNumberOfWorkItems);
    static BlockingCollection<int> fastBC = new BlockingCollection<int>(MaxNumberOfWorkItems);

    static void Main(string[] args)
    {
        Task slowTask = Task.Factory.StartNew(() =>
        {
            foreach (var item in slowBC.GetConsumingEnumerable())
            {
                Console.WriteLine("SLOW -> " + item);
                Thread.Sleep(25);
            }
        });

        Task fastTask = Task.Factory.StartNew(() =>
        {
            foreach (var item in fastBC.GetConsumingEnumerable())
            {
                Console.WriteLine("FAST -> " + item);
            }
        });

        // Populating two BlockingCollections with the same data. How can I have a single collection?
        for (int i = 0; i < 100; i++)
        {
            while (slowBC.TryAdd(i) == false)
            {
                Console.WriteLine("Wait for slowBC...");
            }
            while (fastBC.TryAdd(i) == false)
            {
                Console.WriteLine("Wait for 2...");
            }
        }

        slowBC.CompleteAdding();
        fastBC.CompleteAdding();
        Task.WaitAll(slowTask, fastTask);
        Console.ReadLine();
    }
}
Using a producer-consumer queue to transfer single ints is extremely inefficient. You are receiving the data in chunks, so why not type the queue as '*chunk' and send a whole chunk at a time, immediately creating/depooling a new chunk into the same reference variable to receive the next lot of data? This is how P-C queues are normally used for non-trivial amounts of data: queue references/pointers, not the actual data. Threads share an address space (something some developers seem to think only causes problems), so use it: queue pointers/refs and safely transfer megabytes of data as a single pointer. As long as you, IN THE NEXT LINE OF CODE, always create/depool a new chunk after queueing off the old one, the producer and consumer threads can never be operating on the same chunk.
Queueing *chunks is orders of magnitude more efficient for large chunks.
Send the *chunks to the fast link, then just 'forward' them to the slow link from there.
You may need overall flow control if the slow link is not to clog up your system and eventually cause OOM errors. What I usually do is fix an 'overall' quota for the total buffer size and create a pool of chunks at startup (the pool is just another BlockingCollection, populated with new chunks at startup). The producer thread depools chunks, fills them with data, and queues them to the FAST thread. The FAST thread processes the received chunks and then queues the *chunks on to the SLOW thread. The SLOW thread processes the same data and then repools the 'used' chunk for re-use by the producer thread. This forms a flow-controlled system: if the SLOW thread is too slow, the producer eventually tries to depool a *chunk from an empty pool and blocks there until the SLOW thread repools some used *chunks, which signals the producer thread to run again. You may need some policy in the slow thread to time out its operations and dump its *chunk early, thereby dropping data; you must decide on a policy for that given your overall requirements. It is obviously impossible to keep queueing data to both a fast and a slow consumer forever without memory overflow unless the slow consumer drops some data.
Edit: Oh, and yes, using a pool eliminates GC on the used chunks, further increasing performance.
One overall flow policy would be to not drop any data in the slow thread. With continual high data flow, the *chunks will all end up sitting on the queue between the fast and slow threads, and the producer thread will indeed block on the empty pool. The network connection will then apply its own flow control to stop the network peer from sending any more data over TCP. This extends the flow control all the way from your slow thread to the peer.
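A hedged sketch of the chunk-pool pipeline described above, with byte arrays standing in for the '*chunks' (the sizes, names and placeholder I/O methods are assumptions made for this example, not from the original post):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ChunkPipeline
{
    const int ChunkSize = 64 * 1024;
    const int PoolSize = 16; // fixes the overall memory quota

    static readonly BlockingCollection<byte[]> pool = new BlockingCollection<byte[]>(PoolSize);
    static readonly BlockingCollection<byte[]> fastQueue = new BlockingCollection<byte[]>(PoolSize);
    static readonly BlockingCollection<byte[]> slowQueue = new BlockingCollection<byte[]>(PoolSize);

    static void Main()
    {
        for (int i = 0; i < PoolSize; i++)
            pool.Add(new byte[ChunkSize]); // pre-allocate all chunks once

        var fast = Task.Run(() =>
        {
            foreach (var chunk in fastQueue.GetConsumingEnumerable())
            {
                WriteToFastDestination(chunk);
                slowQueue.Add(chunk); // forward the same chunk onward
            }
            slowQueue.CompleteAdding();
        });

        var slow = Task.Run(() =>
        {
            foreach (var chunk in slowQueue.GetConsumingEnumerable())
            {
                WriteToSlowDestination(chunk);
                pool.Add(chunk); // recycle the chunk for the producer
            }
        });

        // Producer: blocks on pool.Take() when the slow consumer falls behind,
        // which is exactly the flow-control behaviour described above.
        for (int i = 0; i < 100; i++)
        {
            var chunk = pool.Take();
            FillChunkFromSource(chunk);
            fastQueue.Add(chunk);
        }
        fastQueue.CompleteAdding();

        Task.WaitAll(fast, slow);
    }

    // Placeholders standing in for the real I/O described in the question.
    static void FillChunkFromSource(byte[] chunk) { }
    static void WriteToFastDestination(byte[] chunk) { }
    static void WriteToSlowDestination(byte[] chunk) { }
}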

CPU usage problem

I have a network project. There is no timer in it, just a TcpClient that connects to a server and listens in order to receive any data from the network.
TcpClient _TcpClient = new TcpClient(_IpAddress, _Port);
_ConnectThread = new Thread(new ThreadStart(ConnectToServer));
_ConnectThread.IsBackground = true;
_ConnectThread.Start();

private void ConnectToServer()
{
    try
    {
        NetworkStream _NetworkStream = _TcpClient.GetStream();
        byte[] _RecievedPack = new byte[1024 * 1000];
        string _Message = string.Empty;
        int _BytesRead;
        int _Length;
        while (_Flage)
        {
            _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
            _Length = BitConverter.ToInt32(_RecievedPack, 0);
            _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
            if (_BytesRead != 0)
            {
                //call a function to manage the data
                _NetworkStream.Flush();
            }
        }
    }
    catch (Exception exp)
    {
        // call a function to alarm that connection is false
    }
}
But after a while the CPU usage of my application goes up (90%, 85%, ...), even when no data is received.
Could anybody give me some tips about CPU usage? I'm totally blank; I don't know which part of the project I should check!
Could anybody give me some tips about CPU usage
You should check the loops in the application, such as while loops. If the code spends a lot of time busy-waiting for some condition to become true, it will consume a lot of CPU time. For instance:
while (true)
{
}
or
while (_Flag)
{
    //do something
}
If the code executed inside the while loop is synchronous, the thread will end up eating a lot of CPU cycles. To solve this you could execute the code inside the loop on a different thread, so it runs asynchronously, and then use a ManualResetEvent or AutoResetEvent to report back when the operation has completed. Another thing to consider is the System.Threading.Thread.Sleep method, which tells the thread to sleep and gives the CPU time to execute other threads, for example:
while (_Flag)
{
    //do something
    Thread.Sleep(100); // blocks the current thread for 100 milliseconds
}
There are several issues with your code... the most important ones are IMHO:
Use async methods (BeginRead etc.), not blocking methods, and don't create your own thread. Threads are "expensive" resources, and using blocking calls on threads is therefore a waste of resources. Using async calls lets the operating system call you back when an event (data received, for instance) has occurred, so that no separate thread is needed (the callback runs on a pooled thread).
Be aware that Read may return just a few bytes; it doesn't have to fill the _RecievedPack buffer. Theoretically, it may receive just one or two bytes, not even enough for your call to ToInt32!
The CPU usage spikes because you have a while loop which does not do anything when it receives nothing from the network. Add a Thread.Sleep() at the end of it when no data was received, and your CPU usage will return to normal.
And take the advice that Lucero gave you.
I suspect that the other end of the connection is closed while the while loop is still running, in which case you'll repeatedly read zero bytes from the network stream (which indicates the connection is closed; see NetworkStream.Read on MSDN).
Since NetworkStream.Read then returns immediately (as per MSDN), you'll be stuck in a tight while loop that consumes a lot of processor time. Try adding a Thread.Sleep() or detecting a "zero read" within the loop. Ideally you should handle a read of zero bytes by terminating your end of the connection, too.
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    if (_BytesRead != 0)
    {
        //call a function to manage the data
        _NetworkStream.Flush();
    }
}
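A hedged rework of that loop, reusing the fields from the question, that checks the byte count before touching the buffer and exits on a zero-byte read:
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    if (_BytesRead == 0)
    {
        // The remote host closed the connection; stop reading instead of spinning.
        break;
    }
    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    // call a function to manage the data
}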
Have you attached a debugger and stepped through the code to see if it's behaving in the way you expect?
Alternatively, if you have a profiling tool available (such as ANTS), it will help you see where time is being spent in your application.

Difference between NetworkStream.Read() and NetworkStream.BeginRead()?

I need to read from a NetworkStream which sends data at random intervals, and the size of the data packets also keeps varying. I am implementing a multi-threaded application where each thread has its own stream to read from. If there is no data on the stream, the application should keep waiting for data to arrive. However, if the server is done sending data and has terminated the session, it should exit.
Initially I used the Read method to obtain the data from the stream, but it blocked the thread and kept waiting until data appeared on the stream.
The documentation on MSDN suggests:
If no data is available for reading, the Read method returns 0. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and return zero bytes.
But in my case, I have never got the Read method to return 0 and exit gracefully. It just waits indefinitely.
In my further investigation, I came across BeginRead, which watches the stream and invokes a callback method asynchronously as soon as it receives data. I have tried to look at various implementations using this approach as well; however, I was unable to identify when using BeginRead would be beneficial as opposed to Read.
As I see it, BeginRead just has the advantage of the call being asynchronous, so it does not block the current thread. But in my application, I already have a separate thread to read and process the data from the stream, so that wouldn't make much difference for me.
Can anyone please help me understand the wait-and-exit mechanism for BeginRead and how it differs from Read?
What would be the best way to implement the desired functionality?
I use BeginRead, but continue blocking the thread using a WaitHandle:
byte[] readBuffer = new byte[32];
var asyncReader = stream.BeginRead(readBuffer, 0, readBuffer.Length, null, null);
WaitHandle handle = asyncReader.AsyncWaitHandle;

// Give the reader 2 seconds to respond with a value
bool completed = handle.WaitOne(2000, false);
if (completed)
{
    int bytesRead = stream.EndRead(asyncReader);
    StringBuilder message = new StringBuilder();
    message.Append(Encoding.ASCII.GetString(readBuffer, 0, bytesRead));
}
Basically it lets the async read time out via the WaitHandle and gives you a boolean value (completed) telling you whether the read finished within the set time (2000 ms in this case).
Here's my full stream reading code copied and pasted from one of my Windows Mobile projects:
private static bool GetResponse(NetworkStream stream, out string response)
{
    byte[] readBuffer = new byte[32];
    var asyncReader = stream.BeginRead(readBuffer, 0, readBuffer.Length, null, null);
    WaitHandle handle = asyncReader.AsyncWaitHandle;

    // Give the reader 2 seconds to respond with a value
    bool completed = handle.WaitOne(2000, false);
    if (completed)
    {
        int bytesRead = stream.EndRead(asyncReader);
        StringBuilder message = new StringBuilder();
        message.Append(Encoding.ASCII.GetString(readBuffer, 0, bytesRead));
        if (bytesRead == readBuffer.Length)
        {
            // There's possibly more than 32 bytes to read, so get the next
            // section of the response
            string continuedResponse;
            if (GetResponse(stream, out continuedResponse))
            {
                message.Append(continuedResponse);
            }
        }
        response = message.ToString();
        return true;
    }
    else
    {
        int bytesRead = stream.EndRead(asyncReader);
        if (bytesRead == 0)
        {
            // 0 bytes were returned, so the read has finished
            response = string.Empty;
            return true;
        }
        else
        {
            throw new TimeoutException(
                "The device failed to read in an appropriate amount of time.");
        }
    }
}
Async I/O can be used to achieve the same amount of I/O on fewer threads.
As you note, right now your app has one thread per Stream. This is OK with a small number of connections, but what if you need to support 10000 at once? With async I/O, this is no longer necessary, because the read completion callback allows context to be passed identifying the relevant stream. Your reads no longer block, so you don't need one thread per Stream.
Whether you use sync or async I/O, there is a way to detect and handle stream closedown via the relevant API return codes. BeginRead should fail with an IOException if the socket has already been closed. A closedown while your async read is pending will trigger the callback, and EndRead will then tell you the state of play.
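As a rough sketch of that callback flow (StartRead and onData are hypothetical names introduced here; BeginRead and EndRead are the real API calls):
using System;
using System.IO;
using System.Net.Sockets;

static class CallbackReader
{
    // Posts one BeginRead and re-posts it from the callback until the remote
    // side closes the connection or the read fails.
    public static void StartRead(NetworkStream stream, byte[] buffer, Action<byte[], int> onData)
    {
        stream.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int bytesRead;
            try
            {
                bytesRead = stream.EndRead(ar);
            }
            catch (IOException)
            {
                return; // connection failed or was closed while the read was pending
            }
            if (bytesRead == 0)
            {
                return; // orderly shutdown by the remote host
            }
            onData(buffer, bytesRead);
            StartRead(stream, buffer, onData); // post the next read
        }, null);
    }
}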
When your application calls BeginRead, the system will wait until data is received or an error occurs, and then the system will use a separate thread to execute the specified callback method, and blocks on EndRead until the provided NetworkStream reads data or throws an exception.
Did you try server.ReceiveTimeout? You can set how long the Read() function will wait for incoming data before giving up. In your case, this property is probably set to infinite somewhere.
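For instance (illustrative only; tcpClient and networkStream are placeholder names):
// With a receive timeout set, a blocking Read() that gets no data within the
// window throws an IOException instead of waiting forever.
tcpClient.ReceiveTimeout = 30000;    // socket-level receive timeout, in milliseconds
// or, equivalently, on the stream itself:
networkStream.ReadTimeout = 30000;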
BeginRead is asynchronous, which means the read is started and then runs in the background while your main thread continues. So now there are two things executing in parallel. If you want the result, you have to call EndRead, which gives you the result.
Some pseudo-code:
IAsyncResult ar = stream.BeginRead(buffer, 0, buffer.Length, null, null);
// ...do something else on the main thread while the result is being fetched in the background
int result = stream.EndRead(ar);
But if your main thread doesn't have anything else to do and you need the result right away, you should just call Read.
