I have the following C# code:
var response = client.GetAsync(uri).Result;
MemoryStream stream = new MemoryStream();
response.Content.CopyToAsync(stream);
System.Console.WriteLine(stream.Length);
When I insert a breakpoint before the first statement and then continue the program, the code works fine and over 4 MB of data gets stored in the stream.
But if I run the program without any breakpoints, or insert the breakpoint after the first statement shown above, the code runs but no data, or only about 4 KB of data, gets stored in the stream.
Can someone please explain why this is happening?
Edit:
Here is what I am trying to do in my program. I use a couple of HttpClient.PostAsync requests to get a URI for downloading a WAV file. Then I want to download the WAV file into a memory stream. I don't know of any other way to do this yet.
It seems like you are mixing up the flow of async and await.
An asynchronous call is only waited on to completion, with its result captured back into the calling method, when you use the await keyword.
The code you posted does not show whether the enclosing method has an async signature or not, so let me lay out both solutions for you.
Possible solution 1:
public async Task XYZFunction()
{
var response = await client.GetAsync(uri); //we are waiting for the request to be completed
MemoryStream stream = new MemoryStream();
await response.Content.CopyToAsync(stream); //The call will wait until the request is completed
System.Console.WriteLine(stream.Length);
}
Possible solution 2:
public void XYZFunction()
{
var response = client.GetAsync(uri).Result; //wait for the awaitable task to complete and give us its result; it is a blocking call
MemoryStream stream = new MemoryStream();
response.Content.CopyToAsync(stream).Wait(); //same idea, but CopyToAsync returns a plain Task with no Result, so block with Wait()
System.Console.WriteLine(stream.Length);
}
Related
I have a small app that receives a request from a browser, copies the received headers and the POST data (or GET path), and sends it to another endpoint.
It then waits for the result and sends it back to the browser. It works like a reverse proxy.
Everything works fine until it receives a request to download a large file. A file of around 30 MB causes strange behaviour in the browser: when the download reaches roughly 8 MB, the browser stops receiving data from my app and, after some time, aborts the download. Everything else works just fine.
If I change the SendAsync line to use HttpCompletionOption.ResponseContentRead, it works just fine. I am assuming there is something wrong with how the stream and/or task is awaited, but I can't figure out what is going on.
The application is written in C# on .NET Core (latest version available).
Here is the code (partial):
private async Task SendHTTPResponse(HttpContext context, HttpResponseMessage responseMessage)
{
context.Response.StatusCode = (int)responseMessage.StatusCode;
foreach (var header in responseMessage.Headers)
{
context.Response.Headers[header.Key] = header.Value.ToArray();
}
foreach (var header in responseMessage.Content.Headers)
{
context.Response.Headers[header.Key] = header.Value.ToArray();
}
context.Response.Headers.Remove("transfer-encoding");
using (var responseStream = await responseMessage.Content.ReadAsStreamAsync())
{
await responseStream.CopyToAsync(context.Response.Body);
}
}
public async Task ForwardRequestAsync(string toHost, HttpContext context)
{
var requestMessage = this.BuildHTTPRequestMessage(context);
var responseMessage = await _httpClient.SendAsync(requestMessage, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted);
await this.SendHTTPResponse(context, responseMessage);
}
EDIT
Changed SendHTTPResponse to wait for responseMessage.Content.ReadAsStreamAsync using the await operator.
Just a guess but I believe the issue lies with the removal of the transfer encoding:
context.Response.Headers.Remove("transfer-encoding");
If the HTTP request you are making with _httpClient returns the 30 MB file using chunked encoding (the target server doesn't know the file size), then you would need to return the file to the browser with chunked encoding as well.
When you buffer the response on your webservice (by passing HttpCompletionOption.ResponseContentRead) you know the exact message size you are sending back to the browser so the response works successfully.
I would check the response headers you get from responseMessage to see if the transfer encoding is chunked.
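If it helps, a minimal way to inspect that (a sketch; responseMessage is the HttpResponseMessage returned from _httpClient.SendAsync above):
bool upstreamIsChunked = responseMessage.Headers.TransferEncodingChunked == true;
bool hasContentLength = responseMessage.Content.Headers.ContentLength.HasValue;
Console.WriteLine($"Chunked: {upstreamIsChunked}, Content-Length present: {hasContentLength}");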
You are trying to stream a file, but you are not doing it quite right. If you do not specify ResponseHeadersRead, the response will not come back until the server ends the request, because HttpClient will try to read the response to the end.
The HttpCompletionOption enumeration type has two members, and one of them, ResponseHeadersRead, tells HttpClient to read only the headers and then return the result immediately.
var response = await httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
var stream = await response.Content.ReadAsStreamAsync();
using (var reader = new StreamReader(stream)) {
    while (!reader.EndOfStream) {
        //Oh baby we are streaming
        var line = await reader.ReadLineAsync();
        //Do stuff with each line: copy to the response stream, etc.
    }
}
Figure 3 shows a simple example where one method blocks on the result of an async method. This code will work just fine in a console application but will deadlock when called from a GUI or ASP.NET context. This behavior can be confusing, especially considering that stepping through the debugger implies that it’s the await that never completes. The actual cause of the deadlock is further up the call stack when Task.Wait is called.
Figure 3 A Common Deadlock Problem When Blocking on Async Code
public static class DeadlockDemo
{
private static async Task DelayAsync()
{
await Task.Delay(1000);
}
// This method causes a deadlock when called in a GUI or ASP.NET context.
public static void Test()
{
// Start the delay.
var delayTask = DelayAsync();
// Wait for the delay to complete.
delayTask.Wait();
}
}
The root cause of this deadlock is due to the way await handles contexts. By default, when an incomplete Task is awaited, the current “context” is captured and used to resume the method when the Task completes. This “context” is the current SynchronizationContext unless it’s null, in which case it’s the current TaskScheduler. GUI and ASP.NET applications have a SynchronizationContext that permits only one chunk of code to run at a time. When the await completes, it attempts to execute the remainder of the async method within the captured context. But that context already has a thread in it, which is (synchronously) waiting for the async method to complete. They’re each waiting for the other, causing a deadlock.
Note that console applications don’t cause this deadlock. They have a thread pool SynchronizationContext instead of a one-chunk-at-a-time SynchronizationContext, so when the await completes, it schedules the remainder of the async method on a thread pool thread. The method is able to complete, which completes its returned task, and there’s no deadlock. This difference in behavior can be confusing when programmers write a test console program, observe the partially async code work as expected, and then move the same code into a GUI or ASP.NET application, where it deadlocks.
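One way to see the role of the captured context (a sketch, not part of the original figures) is to have the library method opt out of context capture with ConfigureAwait(false); the continuation then resumes on a thread pool thread, and the Test method from Figure 3 no longer deadlocks:
private static async Task DelayAsync()
{
  // ConfigureAwait(false) prevents the await from capturing the current
  // SynchronizationContext, so this method can resume on a thread pool thread
  // instead of needing the thread that is blocked inside Task.Wait.
  await Task.Delay(1000).ConfigureAwait(false);
}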
The best solution to this problem is to allow async code to grow naturally through the codebase. If you follow this solution, you’ll see async code expand to its entry point, usually an event handler or controller action. Console applications can’t follow this solution fully because the Main method can’t be async. If the Main method were async, it could return before it completed, causing the program to end. Figure 4 demonstrates this exception to the guideline: The Main method for a console application is one of the few situations where code may block on an asynchronous method.
Figure 4 The Main Method May Call Task.Wait or Task.Result
class Program
{
static void Main()
{
MainAsync().Wait();
}
static async Task MainAsync()
{
try
{
// Asynchronous implementation.
await Task.Delay(1000);
}
catch (Exception ex)
{
// Handle exceptions.
}
}
}
Try these:
using (HttpResponseMessage responseMessage = await client.SendAsync(request))
{
await this.SendHTTPResponse(context, responseMessage);
}
or
using (HttpResponseMessage responseMessage = await _httpClient.SendAsync(requestMessage,
    HttpCompletionOption.ResponseHeadersRead, context.RequestAborted))
{
    await this.SendHTTPResponse(context, responseMessage);
}
So, in the following snippet, why is ReadAsStringAsync an async method?
var response = await _client.SendAsync(request);
var body = await response.Content.ReadAsStringAsync();
Originally I expected SendAsync to send the request and load the response stream into memory at which point reading that stream would be in-process CPU work (and not really async).
Going down the source code rabbit hole, I arrived at this:
int count = await _stream.ReadAsync(destination, cancellationToken).ConfigureAwait(false);
https://github.com/dotnet/corefx/blob/0aa654834405dcec4aaa9bd416b2b31ab8d3503e/src/System.Net.Http/src/System/Net/Http/Managed/HttpConnection.cs#L967
This makes me think that maybe the connection stays open until the response stream is actually read from some source outside of the process? I fully expect that I am missing some fundamentals regarding how streams from HTTP connections work.
SendAsync() waits for the request to finish and the response to start arriving.
It doesn't buffer the entire response; this allows you to stream large responses without ever holding the entire response in memory.
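For illustration, a sketch (with placeholder names client and uri, not taken from the question) that makes the streaming behaviour explicit by passing HttpCompletionOption.ResponseHeadersRead, so only the headers are buffered and the body is pulled from the connection as you read:
using (var response = await client.GetAsync(uri, HttpCompletionOption.ResponseHeadersRead))
using (var stream = await response.Content.ReadAsStreamAsync())
using (var reader = new StreamReader(stream))
{
    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        // Each iteration reads more of the body from the underlying connection.
    }
}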
I have a situation where I am making an async call to a method that returns an IDisposable instance. For example:
HttpResponseMessage response = await httpClient.GetAsync(new Uri("http://www.google.com"));
Now before async was on the scene, when working with an IDisposable instance, this call and code that used the "response" variable would be wrapped in a using statement.
My question is whether that is still the correct approach when the async keyword is thrown in the mix? Even though the code compiles, will the using statement still work as expected in both the examples below?
Example 1
using(HttpResponseMessage response = await httpClient.GetAsync(new Uri("http://www.google.com")))
{
// Do something with the response
return true;
}
Example 2
using(HttpResponseMessage response = await httpClient.GetAsync(new Uri("http://www.google.com")))
{
await this.responseLogger.LogResponseAsync(response);
return true;
}
Yes, that should be fine.
In the first case, you're really saying:
Asynchronously wait until we can get the response
Use it and dispose of it immediately
In the second case, you're saying:
Asynchronously wait until we can get the response
Asynchronously wait until we've logged the response
Dispose of the response
A using statement in an async method is "odd" in that the Dispose call may execute in a different thread to the one which acquired the resource (depending on synchronization context etc) but it will still happen... assuming the thing you're waiting for ever shows up or fail, of course. (Just like you won't end up calling Dispose in non-async code if your using statement contains a call to a method which never returns.)
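Roughly speaking, the compiler expands Example 2 into a try/finally, which is why Dispose still runs after the awaits complete (a sketch of the equivalent code):
HttpResponseMessage response = await httpClient.GetAsync(new Uri("http://www.google.com"));
try
{
    await this.responseLogger.LogResponseAsync(response);
    return true;
}
finally
{
    // Runs whenever control leaves the block, possibly on a different thread
    // than the one that acquired the response.
    if (response != null) response.Dispose();
}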
I am working on a Windows Phone 8 app. I have to save video into the camera roll folder.
To get a file stream for the camera roll folder, I am using the following function:
[CLSCompliantAttribute(false)]
public static Task<Stream> OpenStreamForWriteAsync(
this IStorageFile windowsRuntimeFile
)
For example:
Stream videoStream = await file.OpenStreamForWriteAsync();
where file is StorageFile.
I want to remove this await and make the function synchronous because of requirements.
EDIT:
PS: I am executing this function on a different thread and I want that thread to be synchronous. I want to write to that file stream after it is created.
Simply access Result:
Stream videoStream = file.OpenStreamForWriteAsync().Result;
This will block until the task has finished its execution.
Please note that this can result in a deadlock of your program, if that code is executed on the UI thread.
Please refer to this blog post for further information.
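If the calling code might end up on the UI thread, one common workaround (a sketch, assuming blocking that thread is acceptable) is to offload the call to the thread pool before blocking, so the continuation does not need the captured UI context:
// Task.Run unwraps the Task<Stream>, and the inner await resumes on the thread pool.
Stream videoStream = Task.Run(() => file.OpenStreamForWriteAsync()).Result;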
Problem: I would like to download 100 files in parallel from AWS S3 using their .NET SDK. The downloaded content should be stored in 100 memory streams (the files are small enough, and I can take it from there). I am getting confused between Task, IAsyncResult, Parallel.*, and the other approaches available in .NET 4.0.
If I try to solve the problem myself, off the top of my head I imagine something like this pseudocode:
(edited to add types to some variables)
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
AmazonS3 _s3 = ...;
IEnumerable<GetObjectRequest> requestObjects = ...;
// Prepare to launch requests
var asyncRequests = from rq in requestObjects
select _s3.BeginGetObject(rq,null,null);
// Launch requests
var asyncRequestsLaunched = asyncRequests.ToList();
// Prepare to finish requests
var responses = from rq in asyncRequestsLaunched
select _s3.EndGetObject(rq);
// Finish requests
var actualResponses = responses.ToList();
// Fetch data
var data = actualResponses.Select(rp => {
var ms = new MemoryStream();
rp.ResponseStream.CopyTo(ms);
return ms;
});
This code launches 100 requests in parallel, which is good. However, there are two problems:
The last statement will download the files serially, not in parallel. There doesn't seem to be a BeginCopyTo()/EndCopyTo() method on Stream...
The preceding statement will not let go until all of the requests have responded. In other words, none of the files will start downloading until all of the requests have come back.
So here I start thinking I am heading down the wrong path...
Help?
It's probably easier if you break the operation down into a method that will handle one request asynchronously and then call it 100 times.
To start, let's identify the final result you want. Since what you'll be working with is a MemoryStream it means that you'll want to return a Task<MemoryStream> from your method. The signature will look something like this:
static Task<MemoryStream> GetMemoryStreamAsync(AmazonS3 s3,
GetObjectRequest request)
Because your AmazonS3 object implements the Asynchronous Design Pattern, you can use the FromAsync method on the TaskFactory class to generate a Task<T> from a class that implements the Asynchronous Design Pattern, like so:
static Task<MemoryStream> GetMemoryStreamAsync(AmazonS3 s3,
GetObjectRequest request)
{
Task<GetObjectResponse> response =
Task.Factory.FromAsync<GetObjectRequest,GetObjectResponse>(
s3.BeginGetObject, s3.EndGetObject, request, null);
    // But what goes here?
}
So you're already in a good place, you have a Task<T> which you can wait on or get a callback on when the call completes. However, you need to somehow translate the GetObjectResponse returned from the call to Task<GetObjectResponse> into a MemoryStream.
To that end, you want to use the ContinueWith method on the Task<T> class. Think of it as the asynchronous version of the Select method on the Enumerable class: it's just a projection into another Task<T>, except that each time you call ContinueWith, you are potentially creating a new Task that runs that section of code.
With that, your method looks like the following:
static Task<MemoryStream> GetMemoryStreamAsync(AmazonS3 s3,
GetObjectRequest request)
{
// Start the task of downloading.
Task<GetObjectResponse> response =
Task.Factory.FromAsync<GetObjectRequest,GetObjectResponse>(
s3.BeginGetObject, s3.EndGetObject, request, null
);
// Translate.
Task<MemoryStream> translation = response.ContinueWith(t => {
    // Copy the response stream into memory, disposing of the response when done.
    using (GetObjectResponse resp = t.Result) {
        var ms = new MemoryStream();
        resp.ResponseStream.CopyTo(ms);
        return ms;
    }
});
// Return the full task chain.
return translation;
}
Note that in the above you could call the overload of ContinueWith that takes TaskContinuationOptions.ExecuteSynchronously, since it appears you are doing minimal work in the continuation (I can't tell for sure, the responses might be huge). When the continuation does so little work that starting a new task to run it would be detrimental, passing TaskContinuationOptions.ExecuteSynchronously avoids wasting time creating new tasks for minimal operations.
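For reference, a sketch of what passing that option might look like (the same continuation as above, with the option supplied):
Task<MemoryStream> translation = response.ContinueWith(t => {
    using (GetObjectResponse resp = t.Result) {
        var ms = new MemoryStream();
        resp.ResponseStream.CopyTo(ms);
        return ms;
    }
}, TaskContinuationOptions.ExecuteSynchronously);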
Now that you have the method that can translate one request into a Task<MemoryStream>, creating a wrapper that will process any number of them is simple:
static Task<MemoryStream>[] GetMemoryStreamsAsync(AmazonS3 s3,
IEnumerable<GetObjectRequest> requests)
{
// Just call Select on the requests, passing our translation into
// a Task<MemoryStream>.
// Also, materialize here, so that the tasks are "hot" when
// returned.
return requests.Select(r => GetMemoryStreamAsync(s3, r)).
ToArray();
}
In the above, you simply take a sequence of your GetObjectRequest instances and it will return an array of Task<MemoryStream>. The fact that it returns a materialized sequence is important. If you don't materialize it before returning, then the tasks will not be created until the sequence is iterated through.
Of course, if you want this behavior, then by all means, just remove the call to .ToArray(), have the method return IEnumerable<Task<MemoryStream>> and then the requests will be made as you iterate through the tasks.
From there, you can process them one at a time (using the Task.WaitAny method in a loop) or wait for all of them to be completed (by calling the Task.WaitAll method). An example of the latter would be:
static IList<MemoryStream> GetMemoryStreams(AmazonS3 s3,
IEnumerable<GetObjectRequest> requests)
{
Task<MemoryStream>[] tasks = GetMemoryStreamsAsync(s3, requests);
Task.WaitAll(tasks);
return tasks.Select(t => t.Result).ToList();
}
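And a sketch of the former, processing each stream as soon as its task completes by calling Task.WaitAny in a loop:
static void ProcessMemoryStreams(AmazonS3 s3,
    IEnumerable<GetObjectRequest> requests)
{
    List<Task<MemoryStream>> tasks =
        GetMemoryStreamsAsync(s3, requests).ToList();
    while (tasks.Count > 0)
    {
        // WaitAny returns the index of the first task to complete.
        int index = Task.WaitAny(tasks.ToArray());
        MemoryStream ms = tasks[index].Result;
        // ... process ms here ...
        tasks.RemoveAt(index);
    }
}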
Also, it should be mentioned that this is a pretty good fit for the Reactive Extensions framework, as it is very well suited to an IObservable<T> implementation.
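A minimal sketch of that idea (assuming the Reactive Extensions package and the GetMemoryStreamAsync method from above):
// Each task becomes a single-element observable; Merge flattens them so each
// MemoryStream is pushed as soon as its download finishes.
IObservable<MemoryStream> streams = requests
    .Select(r => GetMemoryStreamAsync(s3, r).ToObservable())
    .Merge();
streams.Subscribe(ms => { /* process each MemoryStream as it arrives */ });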
You can use Nexus.Tasks from the Nexus.Core package.
var response = await fileNames
.WhenAll(item => GetObject(item, cancellationToken), 10, cancellationToken)
.ConfigureAwait(false);