Constantly read from NetworkStream async - c#

I am a fairly new .NET developer and I'm currently reading up on async/await. I need to work on a framework used for testing devices that are controlled by remotely accessing servers over TCP and reading/writing data from/to these servers. This will be used for unit tests.
There is no application-layer protocol and the server may send data based on external events. Therefore I must be able to continuously capture any data coming from the server and write it to a buffer, which can be read from a different context.
My idea goes somewhere along the lines of the following snippet:
// ...
private MemoryStream m_dataBuffer;
private NetworkStream m_stream;
// ...
public async void Listen()
{
    while (Connected)
    {
        try
        {
            int bytesReadable = m_dataBuffer.Capacity - (int)m_dataBuffer.Position;
            // (...) resize m_dataBuffer if necessary (...)
            m_stream.ReadTimeout = Timeout;
            lock (m_dataBuffer)
            {
                int bytesRead = await m_stream.ReadAsync(m_dataBuffer.GetBuffer(),
                    (int)m_dataBuffer.Position, bytesReadable);
                m_dataBuffer.Position += bytesRead;
            }
        }
        catch (IOException ex)
        {
            // handle read timeout.
        }
        catch (Exception)
        {
            throw new TerminalException("ReadWhileConnectedAsync() exception");
        }
    }
}
This seems to have the following disadvantages:
If the caller awaits the Listen function, it blocks until the connection closes, even though it must be able to continue (the network stream should be read for as long as the connection is open).
If I declare it async void and don't await it, the application crashes when an exception occurs inside the task.
If I declare it async Task and don't await it, I assume the same happens (plus I get a compiler warning)?
The following questions ensue:
Can I catch exceptions thrown in Listen if I don't await it?
Is there a better way to constantly read from a network stream using async/await?
Is it actually sane to try to continuously read from a network stream using async/await or is a thread a better option?

async void should at the very least be async Task with the return value thrown away. That makes the method adhere to sane standards and pushes the responsibility onto the caller, which is better equipped to make decisions about waiting and error handling.
But you don't have to throw away the return value. You can attach a logging continuation:
async Task Log(Task t)
{
    try { await t; }
    catch (Exception ex) { /* log ex */ }
}
And use it like this:
Log(Listen());
Throw away the task returned by Log (or, await it if you wish to logically wait).
Or, simply wrap everything in Listen in a try-catch. This appears to be the case already.
Can I catch exceptions thrown in Listen if I don't await it?
You can find out about exceptions using any way that attaches a continuation or waits synchronously (the latter is not your strategy).
Is there a better way to constantly read from a network stream using async/await?
No, this is the way it's supposed to be done. At any given time there should be one read IO outstanding. (Or zero for a brief period of time.)
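For reference, a minimal sketch of such a loop with a single outstanding read, assuming the m_stream and m_dataBuffer fields from the question (the CancellationToken and the ListenAsync name are illustrative additions, not part of the original code). Note that C# does not allow await inside a lock, so the read happens outside the lock and only the synchronous copy into the shared buffer is synchronized:
private async Task ListenAsync(CancellationToken token)
{
    var buffer = new byte[4096];
    while (!token.IsCancellationRequested)
    {
        // exactly one read outstanding at any time
        int bytesRead = await m_stream.ReadAsync(buffer, 0, buffer.Length, token);
        if (bytesRead == 0)
            break; // the remote side closed the connection

        // only the synchronous copy into the shared buffer is done under the lock
        lock (m_dataBuffer)
        {
            m_dataBuffer.Write(buffer, 0, bytesRead);
        }
    }
}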
Is it actually sane to try to continuously read from a network stream using async/await or is a thread a better option?
Both will work correctly. There is a trade-off to be made. Synchronous code can be simpler, easier to debug and even less CPU intensive. Asynchronous code saves on thread stack memory and context switches. In UI apps await has significant benefits.

I would do something like this:
const int MaxBufferSize = ... ;
Queue<byte> m_buffer = new Queue<byte>(MaxBufferSize);
NetworkStream m_stream = ... ;
...
// this will run a thread-pool task that reads bytes from
// the network stream and writes them into the buffer
Task.Run(() => ReadNetworkStream());

private void ReadNetworkStream()
{
    while (true)
    {
        var next = m_stream.ReadByte();
        if (next < 0) break; // no more data

        while (m_buffer.Count >= MaxBufferSize)
            m_buffer.Dequeue(); // drop front

        m_buffer.Enqueue((byte)next);
    }
}
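Since the question mentions reading the buffer from a different context and Queue<byte> is not thread-safe on its own, a possible variant (a sketch under the same assumptions as the snippet above; the capacity check is best-effort) swaps in ConcurrentQueue<byte> from System.Collections.Concurrent:
ConcurrentQueue<byte> m_buffer = new ConcurrentQueue<byte>();

private void ReadNetworkStream()
{
    while (true)
    {
        var next = m_stream.ReadByte();
        if (next < 0) break; // no more data

        // best-effort bound: drop from the front while over capacity
        while (m_buffer.Count >= MaxBufferSize)
            m_buffer.TryDequeue(out _);

        m_buffer.Enqueue((byte)next);
    }
}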

Related

Recursively call async method vs long running task when receiving data from TcpClient

I'm currently rewriting my TCP server from using StreamSocketListener to TcpListener because I need to be able to use SSL. Since it was some time ago that I wrote the code, I'm also trying to make it cleaner and easier to read, and hopefully improve performance with a higher number of clients, but I'm currently stuck.
I'm calling a receive method recursively until the client disconnects, but I'm starting to wonder if it wouldn't be better to use a single long running task for it. But I hesitate to use it since it will then create a new long running task for every connected client. That's why I'm turning to the Stack Overflow community for some guidance on how to proceed.
Note: The connection is supposed to be open 24/7 or as much as possible for most of the connected clients.
Any comments are appreciated.
The current code looks something like this:
private async Task ReceiveData(SocketStream socket) {
    await Task.Yield();
    try {
        using (var reader = new DataReader(socket.InputStream)) {
            uint received;
            do {
                received = await reader.LoadAsync(4);
                if (received == 0) return;
            } while (reader.UnconsumedBufferLength < 4);
            if (received == 0) return;

            var length = reader.ReadUInt32();
            do {
                received = await reader.LoadAsync(length);
                if (received == 0) return;
            } while (reader.UnconsumedBufferLength < length);
            if (received == 0) return;

            // Publish the data asynchronously using an event aggregator
            Console.WriteLine(reader.ReadString(length));
        }
        ReceiveData(socket);
    }
    catch (IOException ex) {
        // Client probably disconnected. Can check hresult to be sure.
    }
    catch (Exception ex) {
        Console.WriteLine(ex);
    }
}
But I'm wondering if I should use something like the following code instead and start it as a long running task:
// Not sure about this part, never used Factory.StartNew before.
Task.Factory.StartNew(async delegate { await ReceiveData(_socket); }, TaskCreationOptions.LongRunning);
private async Task ReceiveData(SocketStream socket) {
    try {
        using (var reader = new DataReader(socket.InputStream)) {
            while (true) {
                uint received;
                do {
                    received = await reader.LoadAsync(4);
                    if (received == 0) break;
                } while (reader.UnconsumedBufferLength < 4);
                if (received == 0) break;

                var length = reader.ReadUInt32();
                do {
                    received = await reader.LoadAsync(length);
                    if (received == 0) break;
                } while (reader.UnconsumedBufferLength < length);
                if (received == 0) break;

                // Publish the data asynchronously using an event aggregator
                Console.WriteLine(reader.ReadString(length));
            }
        }
        // Client disconnected.
    }
    catch (IOException ex) {
        // Client probably disconnected. Can check hresult to be sure.
    }
    catch (Exception ex) {
        Console.WriteLine(ex);
    }
}
In the first, over-simplified version of the code that was posted, the "recursive" approach had no exception handling. That in and of itself would be enough to disqualify it. However, in your updated code example it's clear that you are catching exceptions in the async method itself; thus the method is not expected to throw any exceptions, and so failing to await the method call is much less of a concern.
So, what else can we use to compare and contrast the two options?
You wrote:
I'm also trying to make it cleaner and easier to read
While the first version is not really recursive, in the sense that each call to itself would increase the depth of the stack, it does share some of the readability and maintainability issues with true recursive methods. For experienced programmers, comprehending such a method may not be hard, but it will at the very least slow down the inexperienced, if not make them scratch their heads for a while.
So there's that. It seems like a significant disadvantage, given the stated goals.
So what about the second option, about which you wrote:
…it will then create a new long running task for every connected client
This is an incorrect understanding of how that would work.
Without delving too deeply into how async methods work, the basic behavior is that an async method will in fact return at each use of await (ignoring for a moment the possibility of operations that complete synchronously…the assumption is that the typical case is asynchronous completions).
This means that the task you initiate with this line of code:
Task.Factory.StartNew(
    async delegate { await ReceiveData(_socket); },
    TaskCreationOptions.LongRunning);
…lives only long enough to reach the first await in the ReceiveData() method. At that point, the method returns and the task which was started terminates (either allowing the thread to terminate completely, or to be returned to the thread pool, depending on how the task scheduler decided to run the task).
There is no "long running task" for every connected client, at least not in the sense of there being a thread being used up. (In some sense, there is since of course there's a Task object involved. But that's just as true for the "recursive" approach as it is for the looping approach.)
So, that's the technical comparison. Of course, it's up to you to decide what the implications are for your own code. But I'll offer my own opinion anyway…
For me, the second approach is significantly more readable. And it is specifically because of the way async and await were designed and why they were designed. That is, this feature in C# is specifically there to allow asynchronous code to be implemented in a way that reads almost exactly like regular synchronous code. And in fact, this is borne out by the false impression that there is a "long running task" dedicated to each connection.
Prior to the async/await feature, the correct way to write a scalable networking implementation would have been to use one of the APIs from the "Asynchronous Programming Model". In this model, the IOCP thread pool is used to service I/O completions, such that a small number of threads can monitor and respond to a very large number of connections.
The underlying implementation details actually do not change when switching over to the new async/await syntax. The process still uses a small number of IOCP thread pool threads to handle the I/O completions as they occur.
The difference is that the when using async/await, the code looks like the kind of code that one would write if using a single thread for each connection (hence the misunderstanding of how this code actually works). It's just a big loop, with all the necessary handling in one place, and without the need for different code to initiate an I/O operation and to complete one (i.e. the call to Begin...() and later to End...()).
And to me, that is the beauty of async/await, and why the first version of your code is inferior to the second. The first fails to take advantage of what makes async/await actually useful and beneficial to code. The second takes full advantage of that.
Beauty is, of course, in the eye of the beholder. And to at least a limited extent, the same thing can be said of readability and maintainability. But given your stated goal of making the code "cleaner and easier to read", it seems to me that the second version of your code serves that purpose best.
Assuming the code is correct now, it will be easier for you to read (and remember, it's not just you today that needs to read it…you want it readable for the "you" a year from now, after you haven't seen the code for a while). And if it turns out the code is not correct now, the simpler second version will be easier to read, debug, and fix.
The two versions are in fact almost identical. In that respect, it almost doesn't matter. But less code is always better. The first version has two extra method calls and dangling awaitable operations, while the second replaces those with a simple while loop. I know which one I find more readable. :)
Related questions/useful additional reading:
Long Running Blocking Methods. Difference between Blocking, Sleeping, Begin/End and Async
Is async recursion safe in C# (async ctp/.net 4.5)?

how to write to a file stream asynchronously using async/await from multiple threads in a thread-safe manner

I have an extremely simple logging utility that is currently synchronous. It gets called from UI (WPF) and threadpool threads (Task.Run) and uses lock(_stream){_stream.WriteLine(message);} for thread safety. I'd like to add an asynchronous write method but not sure how to make it threadsafe.
Will the following async method work?
Will it play nice with the existing synchronous method and can they be used together without issues?
Assuming the previous is resolved, should the final Debug.WriteLine go inside the try block or after the finally where it currently is - does it make a difference?
private static SemaphoreSlim _sync = new SemaphoreSlim(1);

public static async Task WriteLineAsync(string message)
{
    if (_stream != null)
    {
        await _sync.WaitAsync();
        try
        {
            await _stream.WriteLineAsync(string.Format("{0} {1}",
                DateTime.Now.ToString("HH:mm:ss.fff"), message));
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.ToString());
        }
        finally
        {
            _sync.Release();
        }
        Debug.WriteLine("{0} {1}", DateTime.Now.ToString("HH:mm:ss.fff"), message);
    }
}
Will the following async method work?
If the stream is set up properly, then yes, I see no reason why it wouldn't.
Will it play nice with the existing synchronous method and can they be used together without issues?
If your synchronous methods use the same SemaphoreSlim and use the synchronous Wait, they should play along nicely.
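For example, the synchronous counterpart might look like the following sketch, assuming the same _stream and _sync fields as in the question (the WriteLine method name here is illustrative):
public static void WriteLine(string message)
{
    if (_stream == null) return;

    _sync.Wait(); // synchronous counterpart of WaitAsync
    try
    {
        _stream.WriteLine(string.Format("{0} {1}",
            DateTime.Now.ToString("HH:mm:ss.fff"), message));
    }
    finally
    {
        _sync.Release();
    }
}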
Assuming the previous is resolved, should the final Debug.WriteLine go inside the try block or after the finally where it currently is - does it make a difference?
If the debug message should be written if and only if the stream successfully writes a message to the stream, it should be inside your try block, after the call to WriteLineAsync.

What is the async/await equivalent of a ThreadPool server?

I am working on a tcp server that looks something like this using synchronous apis and the thread pool:
TcpListener listener;
void Serve(){
while(true){
var client = listener.AcceptTcpClient();
ThreadPool.QueueUserWorkItem(this.HandleConnection, client);
//Or alternatively new Thread(HandleConnection).Start(client)
}
}
Assuming my goal is to handle as many concurrent connections as possible with the lowest resource usage, this approach seems like it will quickly be limited by the number of available threads. I suspect that by using non-blocking Task APIs, I will be able to handle much more with fewer resources.
My initial impression is something like:
async Task Serve(){
    while(true){
        var client = await listener.AcceptTcpClientAsync();
        HandleConnectionAsync(client); // fire and forget?
    }
}
But it strikes me that this could cause bottlenecks. Perhaps HandleConnectionAsync will take an unusually long time to hit the first await, and will stop the main accept loop from proceeding. Will this only use one thread ever, or will the runtime magically run things on multiple threads as it sees fit?
Is there a way to combine these two approaches so that my server will use exactly the number of threads it needs for the number of actively running tasks, but so that it will not block threads unnecessarily on IO operations?
Is there an idiomatic way to maximize throughput in a situation like this?
I'd let the Framework manage the threading and wouldn't create any extra threads, unless profiling suggests I need to, especially if the calls inside HandleConnectionAsync are mostly IO-bound.
Anyway, if you'd like to release the calling thread (the dispatcher) at the beginning of HandleConnectionAsync, there's a very easy solution: you can jump onto a thread-pool thread with await Task.Yield(). That works if your server runs in an execution environment that does not have a synchronization context installed on the initial thread (a console app, a WCF service), which is normally the case for a TCP server.
The following illustrates this (the code is originally from here). Note that the main while loop doesn't create any threads explicitly:
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class Program
{
    object _lock = new Object(); // sync lock
    List<Task> _connections = new List<Task>(); // pending connections

    // The core server task
    private async Task StartListener()
    {
        var tcpListener = TcpListener.Create(8000);
        tcpListener.Start();
        while (true)
        {
            var tcpClient = await tcpListener.AcceptTcpClientAsync();
            Console.WriteLine("[Server] Client has connected");
            var task = StartHandleConnectionAsync(tcpClient);
            // if already faulted, re-throw any error on the calling context
            if (task.IsFaulted)
                await task;
        }
    }

    // Register and handle the connection
    private async Task StartHandleConnectionAsync(TcpClient tcpClient)
    {
        // start the new connection task
        var connectionTask = HandleConnectionAsync(tcpClient);

        // add it to the list of pending tasks
        lock (_lock)
            _connections.Add(connectionTask);

        // catch all errors of HandleConnectionAsync
        try
        {
            await connectionTask;
            // we may be on another thread after "await"
        }
        catch (Exception ex)
        {
            // log the error
            Console.WriteLine(ex.ToString());
        }
        finally
        {
            // remove the pending task
            lock (_lock)
                _connections.Remove(connectionTask);
        }
    }

    // Handle new connection
    private async Task HandleConnectionAsync(TcpClient tcpClient)
    {
        await Task.Yield();
        // continue asynchronously on another thread
        using (var networkStream = tcpClient.GetStream())
        {
            var buffer = new byte[4096];
            Console.WriteLine("[Server] Reading from client");
            var byteCount = await networkStream.ReadAsync(buffer, 0, buffer.Length);
            var request = Encoding.UTF8.GetString(buffer, 0, byteCount);
            Console.WriteLine("[Server] Client wrote {0}", request);
            var serverResponseBytes = Encoding.UTF8.GetBytes("Hello from server");
            await networkStream.WriteAsync(serverResponseBytes, 0, serverResponseBytes.Length);
            Console.WriteLine("[Server] Response has been written");
        }
    }

    // The entry point of the console app
    static async Task Main(string[] args)
    {
        Console.WriteLine("Hit Ctrl-C to exit.");
        await new Program().StartListener();
    }
}
Alternatively, the code might look like the following, without await Task.Yield(). Note that I pass an async lambda to Task.Run, because I still want to benefit from async APIs inside HandleConnectionAsync and use await in there:
// Handle new connection
private static Task HandleConnectionAsync(TcpClient tcpClient)
{
    return Task.Run(async () =>
    {
        using (var networkStream = tcpClient.GetStream())
        {
            var buffer = new byte[4096];
            Console.WriteLine("[Server] Reading from client");
            var byteCount = await networkStream.ReadAsync(buffer, 0, buffer.Length);
            var request = Encoding.UTF8.GetString(buffer, 0, byteCount);
            Console.WriteLine("[Server] Client wrote {0}", request);
            var serverResponseBytes = Encoding.UTF8.GetBytes("Hello from server");
            await networkStream.WriteAsync(serverResponseBytes, 0, serverResponseBytes.Length);
            Console.WriteLine("[Server] Response has been written");
        }
    });
}
Updated, based upon a comment: if this is going to be library code, the execution environment is indeed unknown, and may have a non-default synchronization context. In this case, I'd rather run the main server loop on a pool thread (which is free of any synchronization context):
private static Task StartListener()
{
    return Task.Run(async () =>
    {
        var tcpListener = TcpListener.Create(8000);
        tcpListener.Start();
        while (true)
        {
            var tcpClient = await tcpListener.AcceptTcpClientAsync();
            Console.WriteLine("[Server] Client has connected");
            var task = StartHandleConnectionAsync(tcpClient);
            if (task.IsFaulted)
                await task;
        }
    });
}
This way, all child tasks created inside StartListener wouldn't be affected by the synchronization context of the client code. So, I wouldn't have to call Task.ConfigureAwait(false) anywhere explicitly.
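For comparison, if the loop were not wrapped in Task.Run, each await on a context-bound thread would need the explicit opt-out. A rough sketch of what that might look like, reusing the names from the listing above:
private async Task StartListener()
{
    var tcpListener = TcpListener.Create(8000);
    tcpListener.Start();
    while (true)
    {
        // ConfigureAwait(false) keeps continuations off the caller's synchronization context
        var tcpClient = await tcpListener.AcceptTcpClientAsync().ConfigureAwait(false);
        Console.WriteLine("[Server] Client has connected");
        var task = StartHandleConnectionAsync(tcpClient);
        if (task.IsFaulted)
            await task.ConfigureAwait(false);
    }
}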
Updated in 2020, someone just asked a good question off-site:
I was wondering what is the reason for using a lock here? This is not necessary for exception handling. My understanding is that a lock is used because List<Task> is not thread-safe, therefore the real question is why add the tasks to a list (and incur the cost of a lock under load).
Since Task.Run is perfectly able to keep track of the tasks it started, my thinking is that in this specific example the lock is useless; however, you put it there because in a real program, having the tasks in a list allows us to, for example, iterate the currently running tasks and terminate them cleanly if the program receives a termination signal from the operating system.
Indeed, in a real-life scenario we almost always want to keep track of the tasks we start with Task.Run (or any other Task objects which are "in-flight"), for a few reasons:
To track task exceptions, which otherwise might be silently swallowed if they go unobserved elsewhere.
To be able to wait asynchronously for the completion of all pending tasks (e.g., consider a Start/Stop UI button, or handling a start/stop request inside a headless Windows service).
To be able to control (and throttle/limit) the number of tasks we allow to be in-flight simultaneously.
There are better mechanisms to handle real-life concurrency workflows (e.g., the TPL Dataflow Library), but I did include the tasks list and the lock on purpose here, even in this simple example. It might be tempting to use a fire-and-forget approach, but it's almost never a good idea. In my own experience, when I did want fire-and-forget, I used async void methods for that (check this).
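As an illustration of the second point above, a minimal shutdown sketch that drains the _connections list from the listing (the StopListenerAsync name is hypothetical and not part of the original code):
private async Task StopListenerAsync()
{
    Task[] pending;
    lock (_lock)
        pending = _connections.ToArray(); // snapshot under the lock

    // wait for all in-flight connection handlers; their exceptions are already
    // caught and logged inside StartHandleConnectionAsync
    await Task.WhenAll(pending);
}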
The existing answers have correctly proposed to use Task.Run(() => HandleConnection(client));, but haven't explained why.
Here's why: you are concerned that HandleConnectionAsync might take some time to hit the first await. If you stick to using async IO (as you should in this case), this means that HandleConnectionAsync is doing CPU-bound work without any blocking. This is a perfect case for the thread pool. It is made to run short, non-blocking CPU work.
And you are right that the accept loop would be throttled by HandleConnectionAsync taking a long time before returning (maybe because there is significant CPU-bound work in it). This is to be avoided if you need a high frequency of new connections.
If you are sure that there is no significant work throttling the loop, you can skip the additional thread-pool Task and not do it.
Alternatively, you can have multiple accepts running at the same time. Replace await Serve(); by (for example):
var serverTasks =
    Enumerable.Range(0, Environment.ProcessorCount)
        .Select(_ => Serve());
await Task.WhenAll(serverTasks);
This removes the scalability problems. Note that await will swallow all but one error here.
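If you need to observe every failure rather than just the first one rethrown by await, a hedged option is to keep the aggregate task around and inspect its Exception, replacing the final await in the snippet above:
var all = Task.WhenAll(serverTasks);
try
{
    await all;
}
catch
{
    // the aggregate task holds every failure, not just the first
    foreach (var ex in all.Exception.InnerExceptions)
        Console.WriteLine(ex);
}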
Try
TcpListener listener;
void Serve(){
    while(true){
        var client = listener.AcceptTcpClient();
        Task.Run(() => this.HandleConnection(client));
        // Or alternatively: new Thread(HandleConnection).Start(client)
    }
}
According to Microsoft (http://msdn.microsoft.com/en-AU/library/hh524395.aspx#BKMK_VoidReturnType), the void return type shouldn't be used because its exceptions cannot be caught by the caller. As you have pointed out, you do need "fire and forget" tasks, so my conclusion is that you should always return Task (as Microsoft says), but you should catch the error using:
TaskInstance.ContinueWith(i => { /* exception handler */ }, TaskContinuationOptions.OnlyOnFaulted);
An example I used as proof is below:
public static void Main()
{
    Awaitable()
        .ContinueWith(
            i =>
            {
                foreach (var exception in i.Exception.InnerExceptions)
                {
                    Console.WriteLine(exception.Message);
                }
            },
            TaskContinuationOptions.OnlyOnFaulted);

    Console.WriteLine("This needs to come out before my exception");
    Console.ReadLine();
}

public static async Task Awaitable()
{
    await Task.Delay(3000);
    throw new Exception("Hey I can catch these pesky things");
}
Is there any reason you need to accept connections asynchronously? I mean, does awaiting the client connection give you any value? The only reason for doing it would be because there is some other work going on in the server while waiting for a connection. If there is, you could probably do something like this:
public async void Serve()
{
    while (true)
    {
        var client = await _listener.AcceptTcpClientAsync();
        Task.Factory.StartNew(() => HandleClient(client), TaskCreationOptions.LongRunning);
    }
}
This way, accepting a connection releases the current thread so other work can be done, and the handling runs on a new thread. The only overhead is spawning a new thread for handling the client before going straight back to accepting a new connection.
Edit:
Just realized it's almost the same code you wrote. Think I need to read your question again to better understand what you're actually asking :S
Edit2:
Is there a way to combine these two approaches so that my server will use exactly the
number of threads it needs for the number of actively running tasks, but so that it will
not block threads unnecessarily on IO operations?
Think my solution actually answers this question. Is it really necessary though?
Edit3:
Made Task.Factory.StartNew() actually create a new thread.

async / await or Begin / End with TcpListener?

I've started to build a TCP server which will be able to accept many clients and receive new data from all of the clients simultaneously.
Until now I used IOCP for TCP servers, which was pretty easy and comfortable, but this time I want to use the async/await technology that was released in C# 5.0.
The problem is that when I started to write the server using async/await, I realized that in the multi-user TCP server use case, async/await and the regular synchronous methods appear to work the same.
Here's a simple example to be more specific:
class Server
{
    private TcpListener _tcpListener;
    private List<TcpClient> _clients;
    private bool IsStarted;

    public Server(int port)
    {
        _tcpListener = new TcpListener(new IPEndPoint(IPAddress.Any, port));
        _clients = new List<TcpClient>();
        IsStarted = false;
    }

    public void Start()
    {
        IsStarted = true;
        _tcpListener.Start();
        Task.Run(() => StartAcceptClientsAsync());
    }

    public void Stop()
    {
        IsStarted = false;
        _tcpListener.Stop();
    }

    private async Task StartAcceptClientsAsync()
    {
        while (IsStarted)
        {
            // ******** Note 1 ********
            var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
            _clients.Add(acceptedClient);

            IPEndPoint ipEndPoint = (IPEndPoint)acceptedClient.Client.RemoteEndPoint;
            Console.WriteLine("Accepted new client! IP: {0} Port: {1}", ipEndPoint.Address, ipEndPoint.Port);

            Task.Run(() => StartReadingDataFromClient(acceptedClient));
        }
    }

    private async void StartReadingDataFromClient(TcpClient acceptedClient)
    {
        try
        {
            IPEndPoint ipEndPoint = (IPEndPoint)acceptedClient.Client.RemoteEndPoint;
            while (true)
            {
                MemoryStream bufferStream = new MemoryStream();

                // ******** Note 2 ********
                byte[] buffer = new byte[1024];
                int packetSize = await acceptedClient.GetStream().ReadAsync(buffer, 0, buffer.Length);
                if (packetSize == 0)
                {
                    break;
                }

                Console.WriteLine("Accepted new message from: IP: {0} Port: {1}\nMessage: {2}",
                    ipEndPoint.Address, ipEndPoint.Port, Encoding.Default.GetString(buffer));
            }
        }
        catch (Exception)
        {
        }
        finally
        {
            acceptedClient.Close();
            _clients.Remove(acceptedClient);
        }
    }
}
Now, if you look at the lines under 'Note 1' and 'Note 2', they can easily be changed as follows:
Note 1 from
var acceptedClient = await _tcpListener.AcceptTcpClientAsync();
to
var acceptedClient = _tcpListener.AcceptTcpClient();
And Note 2 from
int packetSize = await acceptedClient.GetStream().ReadAsync(buffer, 0, 1024);
to
int packetSize = acceptedClient.GetStream().Read(buffer, 0, 1024);
And the server will work exactly the same.
So, why use async/await in a TCP listener for multiple users if it works the same as regular synchronous methods?
Should I keep using IOCP in that case? For me it's pretty easy and comfortable, but I'm afraid it will become obsolete or even unavailable in newer .NET versions.
Until now I used IOCP for TCP servers, which was pretty easy and comfortable, but this time I want to use the async/await technology that was released in C# 5.0.
I think you need to get your terminology right.
Having BeginOperation and EndOperation methods is called Asynchronous Programming Model (APM). Having a single Task (or Task<T>) returning method is called Task-based Asynchronous Pattern (TAP). I/O Completion Ports (IOCP) are a way to handle asynchronous operations on Windows and asynchronous I/O methods using both APM and TAP use them.
What this means is that the performance of APM and TAP is going to be very similar. The big difference between the two is that code using TAP and async-await is much more readable than code using APM and callbacks.
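To make the terminology concrete, here is a hedged sketch (the DemoApmVsTapAsync name, stream, and buffer are assumptions for the example only) showing a read expressed through APM, through an APM-to-Task bridge, and through TAP; all three drive the same overlapped I/O underneath:
static async Task DemoApmVsTapAsync(NetworkStream stream)
{
    var buffer = new byte[4096];

    // APM: a Begin/End pair. Here End simply blocks until the read completes;
    // normally you would pass a callback instead of null.
    IAsyncResult ar = stream.BeginRead(buffer, 0, buffer.Length, null, null);
    int bytesReadApm = stream.EndRead(ar);

    // The same APM pair bridged into a Task via Task.Factory.FromAsync.
    int bytesReadBridged = await Task<int>.Factory.FromAsync(
        stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

    // TAP: a single awaitable method, which is what async/await consumes directly.
    int bytesReadTap = await stream.ReadAsync(buffer, 0, buffer.Length);

    Console.WriteLine("{0} {1} {2}", bytesReadApm, bytesReadBridged, bytesReadTap);
}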
So, if you want to (or have to) write your code asynchronously, use TAP and async-await, if you can. But if you don't have a good reason to do that, just write your code synchronously.
On the server, a good reason to use asynchrony is scalability: asynchronous code can handle many more requests at the same time, because it tends to use fewer threads. If you don't care about scalability (for example because you're not going to have many users at the same time), then asynchrony doesn't make much sense.
Also, your code contains some practices that you should avoid:
Don't use async void methods: there is no good way to tell when they complete, and they have bad exception handling. The exception is event handlers, but that applies mostly to GUI applications.
Don't use Task.Run() if you don't have to. Task.Run() can be useful if you want to leave the current thread (usually the UI thread) or if you want to execute synchronous code in parallel. But it doesn't make much sense to use it to start asynchronous operations in server applications.
Don't ignore Tasks returned from methods, unless you're sure they're not going to throw an exception. Exceptions from ignored Tasks won't surface anywhere, which could very easily mask a bug (see the sketch below).
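For the last point, a hedged sketch of how the accept loop could observe the returned task instead of discarding it, assuming StartReadingDataFromClient is changed to return Task (per the first point):
var readTask = StartReadingDataFromClient(acceptedClient);
readTask.ContinueWith(
    t => Console.WriteLine(t.Exception),
    TaskContinuationOptions.OnlyOnFaulted);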
After a couple of searches I found this.
Q: It is a list of TCP server practices, but which one is the best practice for managing 5000+ clients each second?
A: My assumption is "Writing async methods"; moreover, if you are working with a database at the same time, "Async methods and iterators" will do more.
Here is the sample code using async/await:
async await tcp server
I think it is easier to build a TCP server using async/await than IOCP.

CPU usage problem

I have a network project; there is no timer in it, just a TcpClient that connects to a server and listens to receive any data from the network.
TcpClient _TcpClient = new TcpClient(_IpAddress, _Port);
_ConnectThread = new Thread(new ThreadStart(ConnectToServer));
_ConnectThread.IsBackground = true;
_ConnectThread.Start();

private void ConnectToServer()
{
    try
    {
        NetworkStream _NetworkStream = _TcpClient.GetStream();
        byte[] _RecievedPack = new byte[1024 * 1000];
        string _Message = string.Empty;
        int _BytesRead;
        int _Length;

        while (_Flage)
        {
            _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
            _Length = BitConverter.ToInt32(_RecievedPack, 0);
            _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
            if (_BytesRead != 0)
            {
                // call a function to manage the data
                _NetworkStream.Flush();
            }
        }
    }
    catch (Exception exp)
    {
        // call a function to alarm that the connection is false
    }
}
But after a while the CPU usage of my application goes up (90%, 85%, ...), even if no data is received.
Could anybody give me some tips about CPU usage? I'm totally blank; I don't know which part of the project I should check!
Could anybody give me some tips about CPU usage?
You should consider checking the loops in the application, like while loops: if you spend a lot of time waiting for some condition to become true, it will take a lot of CPU time. For instance:
while (true)
{}
or
while (_Flag)
{
    // do something
}
If the code executed inside the while loop is synchronous, the thread will end up eating a lot of CPU cycles. To solve this problem, you could execute the code inside the loop on a different thread, so it becomes asynchronous, and then use a ManualResetEvent or AutoResetEvent to report back when the operation has executed. Another thing to mention is to consider using the System.Threading.Thread.Sleep method to tell the thread to sleep and give the CPU time to execute other threads, for example:
while (_Flag)
{
    // do something
    Thread.Sleep(100); // blocks the current thread for 100 milliseconds
}
There are several issues with your code... the most important ones are IMHO:
Use async methods (BeginRead etc.), not blocking methods, and don't create your own thread. Threads are "expensive" resources, and using blocking calls in threads is therefore a waste of resources. Using async calls lets the operating system call you back when an event (data received, for instance) occurs, so that no separate thread is needed (the callback runs on a pooled thread).
Be aware that Read may return just a few bytes; it doesn't have to fill the _RecievedPack buffer. Theoretically, it may receive just one or two bytes - not even enough for your call to ToInt32!
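To address the second point, a minimal helper sketch (the ReadExactly name is not from the question) that keeps reading until the requested number of bytes has arrived, so the 4-byte length prefix is complete before BitConverter.ToInt32 is called:
private static bool ReadExactly(NetworkStream stream, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            return false; // the remote side closed the connection
        offset += read;
    }
    return true;
}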
The CPU usage spikes because you have a while loop which does not do anything if it does not receive anything from the network. Add a Thread.Sleep() at the end of it if no data was received, and your CPU usage will be normal.
And take the advice that Lucero gave you.
I suspect that the other end of the connection is closed when the while loop is still running, in which case you'll repeatedly read zero bytes from the network stream (marking connection closed; see NetworkStream.Read on MSDN).
Since NetworkStream.Read will then return immediately (as per MSDN), you'll be stuck in a tight while loop that will consume a lot of processor time. Try adding a Thread.Sleep() or detecting a "zero read" within the loop. Ideally you should handle a read of zero bytes by terminating your end of the connection, too.
while (_Flage)
{
    _BytesRead = _NetworkStream.Read(_RecievedPack, 0, _RecievedPack.Length);
    _Length = BitConverter.ToInt32(_RecievedPack, 0);
    _Message = UTF8Encoding.UTF8.GetString(_RecievedPack, 4, _Length);
    if (_BytesRead != 0)
    {
        // call a function to manage the data
        _NetworkStream.Flush();
    }
}
Have you attached a debugger and stepped through the code to see if it's behaving in the way you expect?
Alternatively, if you have a profiling tool available (such as ANTS) then this will help you see where time is being spent in your application.
