I'm working on a program that uses the S22.Imap library to keep an IDLE connection to Gmail and receive messages in real time.
I am calling the following function from my Main method:
static void RunIdle()
{
    using (ImapClient Client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true))
    {
        if (!Client.Supports("IDLE"))
            throw new Exception("This server does not support IMAP IDLE");
        Client.DefaultMailbox = "label";
        Client.NewMessage += new EventHandler<IdleMessageEventArgs>(OnNewMessage);
        Console.WriteLine("Connected to gmail");
        while (true)
        {
            // keep the program running until terminated
        }
    }
}
Using the infinite while loop works, but it seems there should be a more correct way to do this. If I want to add more IDLE connections in the future, the only way I see my solution working is with a separate thread for each connection.
What is the best way to accomplish what I am doing with the while loop?
Do not dispose of the client; instead, root it in a static variable. That way it just stays running and keeps raising events. There is no need for a waiting loop at all. Remove the using statement.
If this is the last thread in your program, you do indeed need to keep it alive:
Thread.Sleep(Timeout.Infinite);
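A minimal sketch of that approach, reusing the setup from the question (the static field is the only new piece):

static ImapClient client; // rooted in a static field so it stays alive and keeps raising events

static void RunIdle()
{
    // note: no using block - the client must outlive this method
    client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true);
    if (!client.Supports("IDLE"))
        throw new Exception("This server does not support IMAP IDLE");
    client.DefaultMailbox = "label";
    client.NewMessage += OnNewMessage;
    Console.WriteLine("Connected to gmail");
    // only needed if this is the last foreground thread in the process
    Thread.Sleep(Timeout.Infinite);
}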
What is the best way to accomplish what I am doing with the while loop?
You may want to do the following two things:
Provide your RunIdle with a CancellationToken so that it can be stopped cleanly.
In your busy waiting while loop, use Task.Delay to "sleep" until you need to "ping" the mailbox again.
static async Task RunIdle(CancellationToken cancelToken, TimeSpan pingInterval)
{
    // ...
    // the new busy-waiting ping loop
    while (!cancelToken.IsCancellationRequested)
    {
        // Do your stuff to keep the connection alive.
        // Wait a while, while freeing up the thread.
        if (!cancelToken.IsCancellationRequested)
            await Task.Delay(pingInterval, cancelToken);
    }
}
If you don't need to do anything to keep the connection alive, except from keeping the process from terminating:
Read from the console and wait until there is a ctrl+c or some clean "exit" command.
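For example, a rough sketch of the Ctrl+C variant (the TaskCompletionSource is my own addition, and async Main needs C# 7.1 or later):

static async Task Main()
{
    var exitSignal = new TaskCompletionSource<object>();
    Console.CancelKeyPress += (sender, e) =>
    {
        e.Cancel = true;               // don't let Ctrl+C kill the process abruptly
        exitSignal.TrySetResult(null); // let Main finish on its own terms
    };

    // ... set up the IMAP client and its NewMessage handler here ...

    await exitSignal.Task; // keeps the process alive until Ctrl+C is pressed
}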
A simple solution that avoids blocking a thread would be to just use Task.Delay() for some arbitrary amount of time.
static async Task RunIdle(CancellationToken token = default(CancellationToken))
{
    using (ImapClient Client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true))
    {
        ...
        var interval = TimeSpan.FromHours(1);
        while (!token.IsCancellationRequested)
        {
            await Task.Delay(interval, token);
        }
    }
}
Instead of while(true) you could perhaps use a CancellationToken in case the execution needs to be stopped at some point. Task.Delay() supports this as well.
These are all solutions for a console application, though. If it runs as a service, the service host will make sure your program keeps executing after it starts, so you can just return.
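For the service case, a rough sketch using the classic ServiceBase model (the class and field names are invented; the ImapClient setup mirrors the question):

public class ImapIdleService : ServiceBase
{
    ImapClient client;

    protected override void OnStart(string[] args)
    {
        client = new ImapClient("imap.gmail.com", 993, "user", "pass", AuthMethod.Login, true);
        client.DefaultMailbox = "label";
        client.NewMessage += OnNewMessage; // same handler as in the question
        // just return - the service host keeps the process alive
    }

    protected override void OnStop()
    {
        client.Dispose(); // drops the IDLE connection cleanly
    }
}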
In the OnDisconnectedAsync event of my hub, I want to wait a few seconds before performing some action. I made it async so I could use the non-blocking Task.Delay:
public override async Task OnDisconnectedAsync(Exception exception) {
    var session = (VBLightSession)Context.Items["Session"];
    activeUsers.Remove(session.User.Id);
    await Task.Delay(5000);
    if (!activeUsers.Any(u => u.Key == session.User.Id)) {
        await Clients.All.SendAsync("UserOffline", UserOnlineStateDto(session));
    }
    await base.OnDisconnectedAsync(exception);
}
While this works as expected, I noticed that I can't close the console application immediately. It seems to wait for the 5 second delay to finish. How can I make exiting the application skip those delays, too?
The only alternative I see is creating a classic thread and injecting IHubContext, but this does not seem to scale well and feels like overkill for this simple task.
Background
I have a list of online users. When users navigate through the multi-page application, they get disconnected for a short time during the new HTTP request. To avoid such flickering in the online list (the user goes offline and immediately comes back online), I want to remove the user from the list on disconnect, but not notify the WebSocket clients immediately.
Instead I want to wait 5 seconds. Only if the client is still missing from the list do I know it hasn't reconnected, and I then notify the other users. For this purpose, I need to sleep in the disconnect event. The above solution works well, except for the delay on application exit (which is annoying during development).
A single-page application built with Angular or a similar framework shouldn't be used here for a few reasons, mainly performance and SEO.
I learned about CancellationToken, which can be passed to Task.Delay and used to abort the wait. Creating such a token using a CancellationTokenSource seems like the right way to cancel it programmatically (e.g. on some condition).
But then I found the ApplicationStopping token in the IApplicationLifetime interface, which requests cancellation when the application is shutting down. So I can simply inject
namespace MyApp.Hubs {
    public class MyHub : Hub {
        readonly IApplicationLifetime appLifetime;
        static Dictionary<int, VBLightSession> activeUsers = new Dictionary<int, VBLightSession>();

        public MyHub(IApplicationLifetime appLifetime) {
            this.appLifetime = appLifetime;
        }
    }
}
and only sleep if no cancellation is requested from this token
public override async Task OnDisconnectedAsync(Exception exception) {
    var session = (VBLightSession)Context.Items["Session"];
    activeUsers.Remove(session.User.Id);
    // Prevents our application from waiting out the delay when it's shutting down
    // (especially during development this avoids extra waiting time, since the clients disconnect then)
    if (!appLifetime.ApplicationStopping.IsCancellationRequested) {
        // Avoids flickering when the user switches to another page, which causes a reconnect
        // right after the disconnect. If he's still away after 5s, he closed the tab.
        await Task.Delay(5000);
        if (!activeUsers.Any(u => u.Key == session.User.Id)) {
            await Clients.All.SendAsync("UserOffline", UserOnlineStateDto(session));
        }
    }
    await base.OnDisconnectedAsync(exception);
}
This works because when the application is closing, SignalR detects this as a disconnect (although it's caused by the server). So it would wait 5 seconds before exiting, as I assumed in my question. But with the token, IsCancellationRequested is set to true, so there is no additional waiting in this case.
I would not have a problem with the program waiting those X seconds before closing, because it guarantees that no operation is left halfway through.
For example, what if the operation was doing something in the database? It could leave the connection open.
I would introduce a CancellationToken and, instead of waiting 5 seconds, wait 1 second or less and check again:
var cancellationTokenSource = new CancellationTokenSource(); // This goes in some singleton service
var cancellationToken = cancellationTokenSource.Token;
And then
var cont = 0;
while (cont < 5 && !cancellationToken.IsCancellationRequested)
{
    await Task.Delay(1000);
    cont++;
}
if (cancellationToken.IsCancellationRequested)
{
    return;
}
// Do something
Then you can add IApplicationLifetime to let the app know when the signal to shut down comes and cancel your CancellationTokenSource.
You could even move this code into another class and generalize it for use in other places.
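For instance, a generalized helper might look roughly like this (the class and method names are made up for illustration):

public static class CancellableDelay
{
    // Waits up to totalSeconds in 1-second steps, checking the token between steps.
    // Returns true if the full time elapsed, false if cancellation was requested first.
    public static async Task<bool> WaitAsync(int totalSeconds, CancellationToken token)
    {
        for (var i = 0; i < totalSeconds && !token.IsCancellationRequested; i++)
            await Task.Delay(1000);
        return !token.IsCancellationRequested;
    }
}

The hub code above would then become: if (await CancellableDelay.WaitAsync(5, appLifetime.ApplicationStopping)) { /* notify the other clients */ }.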
I have a problem here. In the code below, the async/await pattern is used with HttpListener. When a request is sent, a "delay" query string argument is expected, and its value causes the server to delay processing that request for the given period. I need the server to finish processing the pending requests even after it has stopped accepting new ones.
static void Main(string[] args)
{
    HttpListener httpListener = new HttpListener();
    CountdownEvent sessions = new CountdownEvent(1);
    bool stopRequested = false;

    httpListener.Prefixes.Add("http://+:9000/GetData/");
    httpListener.Start();

    Task listenerTask = Task.Run(async () =>
    {
        while (true)
        {
            try
            {
                var context = await httpListener.GetContextAsync();
                sessions.AddCount();
                Task childTask = Task.Run(async () =>
                {
                    try
                    {
                        Console.WriteLine($"Request accepted: {context.Request.RawUrl}");
                        int delay = int.Parse(context.Request.QueryString["delay"]);
                        await Task.Delay(delay);
                        using (StreamWriter sw = new StreamWriter(context.Response.OutputStream, Encoding.Default, 4096, true))
                        {
                            await sw.WriteAsync("<html><body><h1>Hello world</h1></body></html>");
                        }
                        context.Response.Close();
                    }
                    finally
                    {
                        sessions.Signal();
                    }
                });
            }
            catch (HttpListenerException ex)
            {
                if (stopRequested && ex.ErrorCode == 995)
                {
                    break;
                }
                throw;
            }
        }
    });

    Console.WriteLine("Server is running. ENTER to stop...");
    Console.ReadLine();

    sessions.Signal();
    stopRequested = true;
    httpListener.Stop();

    Console.WriteLine("Stopped accepting requests. Waiting for the pendings...");
    listenerTask.Wait();
    sessions.Wait();

    Console.WriteLine("Finished");
    Console.ReadLine();
    httpListener.Close();
}
The exact problem is that when the server is stopped, HttpListener.Stop is called, but all the pending requests are aborted immediately, i.e. the code is unable to send the responses back.
In the non-async/await pattern (i.e. a simple Thread-based implementation) I have the option to abort the thread (which I suppose is very bad), and this allows me to process the pending requests, because it simply aborts the HttpListener.GetContext call.
Can you please point out what I am doing wrong, and how I can prevent HttpListener from aborting pending requests in the async/await pattern?
It seems that when HttpListener closes the request queue handle, the requests in progress are aborted. As far as I can tell, there is no way to avoid having HttpListener do that - apparently, it's a compatibility thing. In any case, that's how its GetContext-ending system works - when the handle is closed, the native GetContext call that actually gets the request context returns an error immediately.
Thread.Abort doesn't help - really, I've yet to see a place where Thread.Abort is used correctly outside of the "application domain unloading" scenario. Thread.Abort can only ever abort managed code. Since your code is currently running native code, it will only be aborted when it returns back to managed code - which is almost exactly equivalent to just doing this:
var context = await httpListener.GetContextAsync();
if (stopRequested) return;
... and since there's no better cancellation API for HttpListener, this is really your only option if you want to stick with HttpListener.
The shutdown will look like this:
stopRequested = true;
sessions.Wait();
httpListener.Dispose();
listenerTask.Wait();
I'd also suggest using CancellationToken instead of a bool flag - it handles all the synchronization woes for you. If that's not desirable for some reason, make sure you synchronize access to the flag - contractually, the compiler is allowed to omit the check, since it's impossible for the flag to change in single-threaded code.
If you want to, you can make listenerTask complete sooner by sending a dummy HTTP request to yourself right after setting stopRequested - this will cause GetContext to return immediately with the new request, and you can return. This is an approach that's commonly used when dealing with APIs that don't support "nice" cancellation, e.g. UdpClient.Receive.
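A rough sketch of that trick applied to the code above (it assumes the listener loop was changed to check the flag right after GetContextAsync returns, as shown earlier; HttpClient lives in System.Net.Http):

stopRequested = true;

// fire one throw-away request at our own prefix so the pending GetContextAsync returns;
// we deliberately don't await the response, since the loop exits without answering it
var wakeUp = new HttpClient();
var ignored = wakeUp.GetAsync("http://localhost:9000/GetData/?delay=0");

listenerTask.Wait();   // the loop sees stopRequested and returns
sessions.Signal();     // release the initial count of the CountdownEvent
sessions.Wait();       // wait for the requests that are still in flight
httpListener.Close();  // this also aborts the throw-away request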
I am working on a windows service written in C# (.NET 4.5, VS2012), which uses RabbitMQ (receiving messages by subscription). There is a class which derives from DefaultBasicConsumer, and in this class are two actual consumers (so two channels). Because there are two channels, two threads handle incoming messages (from two different queues/routing keys) and both call the same HandleBasicDeliver(...) function.
Now, when the Windows service's OnStop() is called (when someone stops the service), I want to let both those threads finish handling their current messages (if they are processing one), send the ack to the server, and only then stop the service (abort the threads and so on).
I have thought of multiple solutions, but none of them seem to be really good. Here's what I tried:
using one mutex; each thread tries to take it when entering HandleBasicDeliver, then releases it afterwards. When OnStop() is called, the main thread tries to grab the same mutex, effectively preventing the RabbitMQ threads from processing any more messages. The disadvantage is that only one consumer thread can process a message at a time.
using two mutexes: each RabbitMQ thread uses a different mutex, so they won't block each other in HandleBasicDeliver() - I can differentiate which thread is handling the current message based on the routing key. Something like:
HandleBasicDeliver(...)
{
    if (routingKey == firstConsumerRoutingKey)
    {
        // Try to grab the mutex of the first consumer
    }
    else
    {
        // Try to grab the mutex of the second consumer
    }
}
When OnStop() is called, the main thread will try to grab both mutexes; once both mutexes are "in the hands" of the main thread, it can proceed with stopping the service. The problem: if another consumer would be added to this class, I'd need to change a lot of code.
using a counter, or CountdownEvent. Counter starts off at 0, and each time HandleBasicDeliver() is entered, counter is safely incremented using the Interlocked class. After the message is processed, counter is decremented. When OnStop() is called, the main thread checks if the counter is 0. Should this condition be fulfilled, it will continue. However, after it checks if counter is 0, some RabbitMQ thread might begin to process a message.
When OnStop() is called, closing the connection to RabbitMQ (to make sure no new messages arrive), and then waiting a few seconds (in case any messages are still being processed) before closing the application. The problem is that the exact number of seconds I should wait before shutting down the application is unknown, so this isn't an elegant or exact solution.
I realize the design does not conform to the Single Responsibility Principle, and that may contribute to the lack of solutions. However, could there be a good solution to this problem without having to redesign the project?
We do this in our application. The main idea is to use a CancellationTokenSource.
On your Windows service, add this:
private static readonly CancellationTokenSource CancellationTokenSource = new CancellationTokenSource();
Then in your rabbit consumers do this:
1. change from using Dequeue to DequeueNoWait
2. have your rabbit consumer check the cancellation token
Here is our code:
public async Task StartConsuming(IMessageBusConsumer consumer, MessageBusConsumerName fullConsumerName, CancellationToken cancellationToken)
{
    var queueName = GetQueueName(consumer.MessageBusConsumerEnum);
    using (var model = _rabbitConnection.CreateModel())
    {
        // Configure the quality of service for the model. Below is what each setting means.
        // BasicQos(0 = "Don't send me a new message until I've finished", _fetchSize = "Send me N messages at a time", false = "Apply to this model only")
        model.BasicQos(0, consumer.FetchCount.Value, false);

        var queueingConsumer = new QueueingBasicConsumer(model);
        model.BasicConsume(queueName, false, fullConsumerName, queueingConsumer);
        var queueEmpty = new BasicDeliverEventArgs(); // This is what gets returned if nothing in the queue is found.

        while (!cancellationToken.IsCancellationRequested)
        {
            var deliverEventArgs = queueingConsumer.Queue.DequeueNoWait(queueEmpty);
            if (deliverEventArgs == queueEmpty)
            {
                // This 100ms wait allows the processor to go do other work.
                // No sense in going back to an empty queue immediately.
                // CancellationToken intentionally not used!
                // ReSharper disable once MethodSupportsCancellation
                await Task.Delay(100);
                continue;
            }

            //DO YOUR WORK HERE!
        }
    }
}
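What the answer doesn't show is the OnStop side; under the same setup it could look roughly like this (the _consumerTask field is an assumption - whatever holds the Task returned by StartConsuming):

private Task _consumerTask; // holds the task returned by StartConsuming when the service starts

protected override void OnStop()
{
    // ask the consumer loop to finish the message it is currently working on and exit
    CancellationTokenSource.Cancel();

    // block the service shutdown until the loop has acked its last message and returned
    _consumerTask.Wait();
}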
Usually, the way we ensure a Windows service does not stop before processing completes is to use some code like the following. Hope that helps.
protected override void OnStart(string[] args)
{
    // start the worker thread
    _workerThread = new Thread(WorkMethod)
    {
        // !!! set to foreground so the Windows service cannot be stopped
        // until the thread exits, i.e. when all pending tasks are complete
        IsBackground = false
    };
    _workerThread.Start();
}

protected override void OnStop()
{
    // notify the worker thread to stop accepting new migration requests
    // and exit when all tasks are completed
    // some code to notify worker thread to stop accepting new tasks internally

    // wait for worker thread to stop
    _workerThread.Join();
}
I am working on a TCP server that looks something like this, using synchronous APIs and the thread pool:
TcpListener listener;

void Serve()
{
    while (true)
    {
        var client = listener.AcceptTcpClient();
        ThreadPool.QueueUserWorkItem(this.HandleConnection, client);
        // Or alternatively: new Thread(HandleConnection).Start(client)
    }
}
Assuming my goal is to handle as many concurrent connections as possible with the lowest resource usage, it seems this will quickly be limited by the number of available threads. I suspect that by using non-blocking Task APIs, I will be able to handle much more with fewer resources.
My initial impression is something like:
async Task Serve()
{
    while (true)
    {
        var client = await listener.AcceptTcpClientAsync();
        HandleConnectionAsync(client); // fire and forget?
    }
}
But it strikes me that this could cause bottlenecks. Perhaps HandleConnectionAsync will take an unusually long time to hit the first await, and will stop the main accept loop from proceeding. Will this only use one thread ever, or will the runtime magically run things on multiple threads as it sees fit?
Is there a way to combine these two approaches so that my server will use exactly the number of threads it needs for the number of actively running tasks, but so that it will not block threads unnecessarily on IO operations?
Is there an idiomatic way to maximize throughput in a situation like this?
I'd let the Framework manage the threading and wouldn't create any extra threads, unless profiling tests suggest I might need to - especially if the calls inside HandleConnectionAsync are mostly IO-bound.
Anyway, if you'd like to release the calling thread (the dispatcher) at the beginning of HandleConnectionAsync, there's a very easy solution. You can jump onto a new ThreadPool thread with await Task.Yield(). That works if your server runs in an execution environment that does not have a synchronization context installed on the initial thread (a console app, a WCF service), which is normally the case for a TCP server.
The following illustrates this (the code is originally from here). Note that the main while loop doesn't create any threads explicitly:
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class Program
{
    object _lock = new Object(); // sync lock
    List<Task> _connections = new List<Task>(); // pending connections

    // The core server task
    private async Task StartListener()
    {
        var tcpListener = TcpListener.Create(8000);
        tcpListener.Start();
        while (true)
        {
            var tcpClient = await tcpListener.AcceptTcpClientAsync();
            Console.WriteLine("[Server] Client has connected");
            var task = StartHandleConnectionAsync(tcpClient);
            // if already faulted, re-throw any error on the calling context
            if (task.IsFaulted)
                await task;
        }
    }

    // Register and handle the connection
    private async Task StartHandleConnectionAsync(TcpClient tcpClient)
    {
        // start the new connection task
        var connectionTask = HandleConnectionAsync(tcpClient);

        // add it to the list of pending tasks
        lock (_lock)
            _connections.Add(connectionTask);

        // catch all errors of HandleConnectionAsync
        try
        {
            await connectionTask;
            // we may be on another thread after "await"
        }
        catch (Exception ex)
        {
            // log the error
            Console.WriteLine(ex.ToString());
        }
        finally
        {
            // remove the pending task
            lock (_lock)
                _connections.Remove(connectionTask);
        }
    }

    // Handle new connection
    private async Task HandleConnectionAsync(TcpClient tcpClient)
    {
        await Task.Yield();
        // continue asynchronously on another thread
        using (var networkStream = tcpClient.GetStream())
        {
            var buffer = new byte[4096];
            Console.WriteLine("[Server] Reading from client");
            var byteCount = await networkStream.ReadAsync(buffer, 0, buffer.Length);
            var request = Encoding.UTF8.GetString(buffer, 0, byteCount);
            Console.WriteLine("[Server] Client wrote {0}", request);
            var serverResponseBytes = Encoding.UTF8.GetBytes("Hello from server");
            await networkStream.WriteAsync(serverResponseBytes, 0, serverResponseBytes.Length);
            Console.WriteLine("[Server] Response has been written");
        }
    }

    // The entry point of the console app
    static async Task Main(string[] args)
    {
        Console.WriteLine("Hit Ctrl-C to exit.");
        await new Program().StartListener();
    }
}
Alternatively, the code might look like the following, without await Task.Yield(). Note that I pass an async lambda to Task.Run, because I still want to benefit from async APIs inside HandleConnectionAsync and use await in there:
// Handle new connection
private static Task HandleConnectionAsync(TcpClient tcpClient)
{
    return Task.Run(async () =>
    {
        using (var networkStream = tcpClient.GetStream())
        {
            var buffer = new byte[4096];
            Console.WriteLine("[Server] Reading from client");
            var byteCount = await networkStream.ReadAsync(buffer, 0, buffer.Length);
            var request = Encoding.UTF8.GetString(buffer, 0, byteCount);
            Console.WriteLine("[Server] Client wrote {0}", request);
            var serverResponseBytes = Encoding.UTF8.GetBytes("Hello from server");
            await networkStream.WriteAsync(serverResponseBytes, 0, serverResponseBytes.Length);
            Console.WriteLine("[Server] Response has been written");
        }
    });
}
Updated, based upon the comment: if this is going to be library code, the execution environment is indeed unknown, and may have a non-default synchronization context. In this case, I'd rather run the main server loop on a pool thread (which is free of any synchronization context):
private static Task StartListener()
{
    return Task.Run(async () =>
    {
        var tcpListener = TcpListener.Create(8000);
        tcpListener.Start();
        while (true)
        {
            var tcpClient = await tcpListener.AcceptTcpClientAsync();
            Console.WriteLine("[Server] Client has connected");
            var task = StartHandleConnectionAsync(tcpClient);
            if (task.IsFaulted)
                await task;
        }
    });
}
This way, all child tasks created inside StartListener wouldn't be affected by the synchronization context of the client code. So, I wouldn't have to call Task.ConfigureAwait(false) anywhere explicitly.
Updated in 2020, someone just asked a good question off-site:
I was wondering what is the reason for using a lock here? This is not necessary for exception handling. My understanding is that a lock is used because List is not thread safe, therefore the real question is why add the tasks to a list (and incur the cost of a lock under load).
Since Task.Run is perfectly able to keep track of the tasks it started, my thinking is that in this specific example the lock is useless; however, you put it there because in a real program, having the tasks in a list allows us to, for example, iterate currently running tasks and terminate them cleanly if the program receives a termination signal from the operating system.
Indeed, in a real-life scenario we almost always want to keep track of the tasks we start with Task.Run (or any other Task objects which are "in-flight"), for a few reasons:
To track task exceptions, which otherwise might be silently swallowed if they go unobserved elsewhere.
To be able to wait asynchronously for completion of all the pending tasks (e.g., consider a Start/Stop UI button or handling a request to start/stop the server inside a headless Windows service).
To be able to control (and throttle/limit) the number of tasks we allow to be in-flight simultaneously.
There are better mechanisms for handling real-life concurrency workflows (e.g., the TPL Dataflow Library), but I included the task list and the lock on purpose here, even in this simple example. It might be tempting to use a fire-and-forget approach, but that's almost never a good idea. In my own experience, when I did want fire-and-forget, I used async void methods for that (check this).
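For example, throttling the number of in-flight connections can be as simple as wrapping the accept loop in a SemaphoreSlim (a sketch; the limit of 100 is arbitrary):

var throttle = new SemaphoreSlim(100); // at most 100 connections handled concurrently

while (true)
{
    await throttle.WaitAsync(); // suspends (without blocking a thread) when the limit is reached
    var tcpClient = await tcpListener.AcceptTcpClientAsync();
    var connectionTask = StartHandleConnectionAsync(tcpClient);
    // release the slot whenever the connection finishes, successfully or not
    connectionTask.ContinueWith(t => throttle.Release(), TaskScheduler.Default);
}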
The existing answers have correctly proposed to use Task.Run(() => HandleConnection(client));, but have not explained why.
Here's why: you are concerned that HandleConnectionAsync might take some time to hit the first await. If you stick to using async IO (as you should in this case), this means that HandleConnectionAsync is doing CPU-bound work without any blocking. This is a perfect case for the thread pool. It is made to run short, non-blocking CPU work.
And you are right that the accept loop would be throttled by HandleConnectionAsync taking a long time before returning (maybe because there is significant CPU-bound work in it). This is to be avoided if you need a high frequency of new connections.
If you are sure that there is no significant work throttling the loop, you can skip the additional thread-pool Task and not do it.
Alternatively, you can have multiple accepts running at the same time. Replace await Serve(); by (for example):
var serverTasks =
    Enumerable.Range(0, Environment.ProcessorCount)
        .Select(_ => Serve());
await Task.WhenAll(serverTasks);
This removes the scalability problem. Note that await will swallow all but one error here.
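If that matters, one way to surface every failure instead of just the first one (a sketch):

var allServers = Task.WhenAll(serverTasks);
try
{
    await allServers;
}
catch
{
    // allServers.Exception is an AggregateException that contains the error
    // of every faulted Serve() task, not just the first one thrown by await
    foreach (var ex in allServers.Exception.InnerExceptions)
        Console.WriteLine(ex);
}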
Try
TcpListener listener;

void Serve()
{
    while (true)
    {
        var client = listener.AcceptTcpClient();
        Task.Run(() => this.HandleConnection(client));
        // Or alternatively: new Thread(HandleConnection).Start(client)
    }
}
According to Microsoft (http://msdn.microsoft.com/en-AU/library/hh524395.aspx#BKMK_VoidReturnType), the void return type shouldn't be used because exceptions thrown from it cannot be caught by the caller. As you have pointed out, you do need "fire and forget" tasks, so my conclusion is that you must always return Task (as Microsoft says), but you should catch the error using:
TaskInstance.ContinueWith(i => { /* exception handler */ }, TaskContinuationOptions.OnlyOnFaulted);
An example I used as proof is below:
public static void Main()
{
    Awaitable()
        .ContinueWith(
            i =>
            {
                foreach (var exception in i.Exception.InnerExceptions)
                {
                    Console.WriteLine(exception.Message);
                }
            },
            TaskContinuationOptions.OnlyOnFaulted);

    Console.WriteLine("This needs to come out before my exception");
    Console.ReadLine();
}

public static async Task Awaitable()
{
    await Task.Delay(3000);
    throw new Exception("Hey I can catch these pesky things");
}
Is there any reason you need to accept connections asynchronously? I mean, does awaiting a client connection give you any value? The only reason for doing it would be that there is some other work going on in the server while waiting for a connection. If there is, you could probably do something like this:
public async void Serve()
{
    while (true)
    {
        var client = await _listener.AcceptTcpClientAsync();
        Task.Factory.StartNew(() => HandleClient(client), TaskCreationOptions.LongRunning);
    }
}
This way, accepting releases the current thread so other things can be done, and the handling runs on a new thread. The only overhead is spawning a new thread to handle the client before going straight back to accepting a new connection.
Edit:
Just realized it's almost the same code you wrote. Think I need to read your question again to better understand what you're actually asking :S
Edit2:
Is there a way to combine these two approaches so that my server will use exactly the number of threads it needs for the number of actively running tasks, but so that it will not block threads unnecessarily on IO operations?
Think my solution actually answers this question. Is it really necessary though?
Edit3:
Made Task.Factory.StartNew() actually create a new thread.
I'm practicing with the Async CTP framework, and as an exercise I want to create a TCP client able to query a server (using an arbitrary protocol). Anyway, I'm stuck at a very early stage because of an issue with the connection. Either I still haven't understood some basic point, or there is something strange going on.
So, here is the async connector:
public class TaskClient
{
    public static Task<TcpClient> Connect(IPEndPoint endPoint)
    {
        // create a tcp client
        var client = new TcpClient(AddressFamily.InterNetwork);

        // define a function to return the client
        Func<IAsyncResult, TcpClient> em = iar =>
        {
            var c = (TcpClient)iar.AsyncState;
            c.EndConnect(iar);
            return c;
        };

        // create a task to connect the end-point async
        var t = Task<TcpClient>.Factory.FromAsync(
            client.BeginConnect,
            em,
            endPoint.Address.ToString(),
            endPoint.Port,
            client);

        return t;
    }
}
I mean to call this function only once, then get back a TcpClient instance to use for any subsequent query (code not shown here).
Somewhere in my form, I call the function above as follows:
//this method runs on the UI thread, so can't block
private void TryConnect()
{
    //create the end-point
    var ep = new IPEndPoint(
        IPAddress.Parse("192.168.14.112"), //this is not reachable: correct!
        1601);

    var t = TaskClient
        .Connect(ep)
        .ContinueWith<TcpClient>(_ =>
        {
            //tell me what's up
            if (_.IsFaulted)
                Console.WriteLine(_.Exception);
            else
                Console.WriteLine(_.Result.Connected);
            return _.Result;
        })
        .ContinueWith(_ => _.Result.Close());

    Console.WriteLine("connection in progress...");

    //wait for 2" then abort the connection
    //Thread.Sleep(2000);
    //t.Result.Client.Close();
}
The test is to try to connect to a remote server that is known to be unreachable (PC on, but service stopped).
When I run the TryConnect function, it correctly prints "connection in progress..." right away, then displays an exception because the remote endpoint is off. Excellent!
The problem is that it takes several seconds to return the exception, and I would like to give the user the chance to cancel the operation in progress. According to the MSDN docs for the BeginConnect method, if you wish to abort the async operation, just call Close on the working socket.
So, I tried adding a couple of lines at the end (commented out above) to simulate the user's cancellation after 2 seconds. The result looks like a hang of the app (hourglass). When pausing in the IDE, it stops on the very last line, t.Result.Client.Close(). However, when stopping the IDE, everything closes normally, without any exception.
I've also tried to close the client directly as t.Result.Close(), but it's exactly the same.
Is it me, or is there something broken in the connection process?
Thanks a lot in advance.
t.Result.Close() will wait for the t task to complete.
t.ContinueWith() also only runs after the task has completed.
To cancel, you must wait on two tasks: the TCP connect and a timer.
Using the async tcp syntax:
await Task.WhenAny(t, Task.Delay(QueryTimeout));
if (!t.IsCompleted)
    tcpClient.Close(); // cancel the pending connect
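Put together, a cancellable connect could look roughly like this (it assumes TaskClient.Connect is changed to take the TcpClient as a parameter so the caller can close it; QueryTimeout and the method name are placeholders):

private static async Task<TcpClient> ConnectAsync(IPEndPoint endPoint, TimeSpan queryTimeout)
{
    var client = new TcpClient(AddressFamily.InterNetwork);
    var t = TaskClient.Connect(client, endPoint); // hypothetical overload taking the client

    await Task.WhenAny(t, Task.Delay(queryTimeout));
    if (!t.IsCompleted)
    {
        client.Close(); // aborts the pending BeginConnect
        // observe the eventual fault of t so it doesn't surface as an unobserved exception
        t.ContinueWith(x => { var ignored = x.Exception; }, TaskContinuationOptions.OnlyOnFaulted);
        throw new TimeoutException("Connection attempt timed out");
    }

    return await t; // re-throws the connection error, if any
}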
Try calling Dispose() on the object - it's a bit more aggressive than Close(). You could look at the various Timeout members on the TcpClient class and set them to more appropriate values (e.g. 1 second in a LAN environment is probably good enough). You can also have a look at the CancellationTokenSource functionality in .NET 4.0. This allows you to signal to a Task that you wish it to discontinue - I found an article that might get you started.
You should also find out which thread is actually stalling (the primary thread might just be waiting for another thread that is stalled), e.g. the .ContinueWith(_ => _.Result.Close()) might be the problem (you should check what the behaviour is when closing a socket twice). While debugging, open the Threads window (Debug -> Windows -> Threads) and look through each thread.