I am using the HttpClient service for a simple GetStringAsync() request. When running the app on the Android emulator, the average response time is under ~0.5 s. The problem appears on physical devices, where a response takes 3 to even 9 seconds.
Because the app also has another service running in the background (a simple while (true) loop downloading other data every 5 seconds) that uses HttpClient, I tried disabling it, but the response time only improved by ~0.2 s (I also have no idea why it has any impact on other requests, since I am reusing the same HttpClient rather than creating a new one).
I also tried disabling the firewall on my host computer, but that didn't work. Using HttpClientFactory doesn't solve the problem either.
Is there a different way to download a string from a URL in .NET MAUI, am I using HttpClient the wrong way, or should I add some options to my HttpClient?
HttpClient:
public static HttpClient _httpclient;
and
_httpclient = new HttpClient();
Request:
var json = await _httpclient.GetStringAsync("http://192.168.0.209:8032/getStatus?Id=" + id);
if (json != null)
{
    var status = JsonConvert.DeserializeObject<Status>(json);
    return status;
}
else
{
    return null;
}
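For reference, here is a minimal sketch of the shared-client pattern described above (the class and method names are only illustrative; Status and the endpoint are the ones from the snippet). The explicit Timeout just makes a slow call fail fast instead of waiting out the 100-second default:
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class StatusApi
{
    // one shared client for the whole app, created once and never disposed
    private static readonly HttpClient _httpclient = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(10)
    };

    public static async Task<Status> GetStatusAsync(int id)
    {
        // GetStringAsync throws on non-success status codes, so a null check is not needed
        var json = await _httpclient.GetStringAsync("http://192.168.0.209:8032/getStatus?Id=" + id);
        return JsonConvert.DeserializeObject<Status>(json);
    }
}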
Devices I tested on:
Samsung A73 (Android 12.0)
Huawei Nova Y90 (EMUI 12)
My internet speed (tested on host computer):
DOWNLOAD 295.03 Mb/s | UPLOAD 30.70 Mb/s
Related
I have encountered a very strange issue: HttpClient's SendAsync never returns when a request to a specific web server takes 5 minutes or longer.
This is a sample Web API controller method that I am trying to get a response from:
[HttpGet]
[Route("api/Entity/Ping")]
public async Task<HttpResponseMessage> Ping([FromUri] int time)
{
    await Task.Delay(TimeSpan.FromMinutes(time));
    var bytes = Enumerable.Repeat((byte)42, 100_000_000).ToArray();
    HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);
    response.Content = new ByteArrayContent(bytes);
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = "result.bin";
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
    return response;
}
And this is the code for sending the request:
using (var client = HttpClientFactory.Create(handler))
{
    client.Timeout = TimeSpan.FromMinutes(10);
    var url = "http://problem-server/WebApp/api/Entity/Ping?time=5";
    var request = new HttpRequestMessage
    {
        Method = HttpMethod.Get,
        RequestUri = new Uri(url)
    };
    var response = await client.SendAsync(
        request,
        HttpCompletionOption.ResponseHeadersRead,
        default);
    var stream = await response.Content.ReadAsStreamAsync();
    if (response.IsSuccessStatusCode)
        return stream;
    return default;
}
As you can see, everything is pretty simple and should work without issues. But it doesn't: the SendAsync call just hangs until the 10-minute timeout.
At the same time, it works when the [time] parameter is less than 5.
Additionally, when you open the URL in a browser it successfully downloads a result.bin file after 5 minutes of processing, so the method itself works.
At first I thought this was due to a deadlock, but a synchronous request using the old WebRequest class to the same URL also hangs:
var url = "http://problem-server/WebApp/api/Entity/Ping?time=5";
var request = WebRequest.Create(url);
request.Timeout = (int)TimeSpan.FromMinutes(10).TotalMilliseconds;
var response = request.GetResponse();
var stream = response.GetResponseStream();
if (stream != null)
return stream;
return default;
Next, I copied the WebApp folder to another server, let's call it ok-server.
I modified the URLs in the HttpClient and WebRequest methods.
And, magically, everything works - the response is received after [time] minutes.
So the issue is with the problem-server.
But how do I debug/investigate it? IIS request tracing and logs "say" that the request completed successfully after [time] minutes and the response was sent.
Both machines, problem-server and ok-server, have IIS 8.5 and Windows Server 2012 R2.
Web Api uses .NET Framework 4.5.
(I have also tried to use .NET Core 3.1 with ASP.NET Core hosted on IIS for the Web Api - the result is the same)
Can you help me find the reason for this issue?
Perhaps I need to look into global machine configs or maybe network settings.
I am truly lost right now.
UPDATE
problem_server and ok_server are in different network segments.
problem_server IP is 192.168.114.100 and ok_server IP is 192.150.0.15.
To diagnose possible network misconfigurations, I decided to send a request to the problem_server from a machine in its own IP segment.
Here is the result when executing the test client from the 192.168.114.125 machine.
My workstation is in yet another IP segment, 192.135.9/24. Perhaps there are some router settings between the 192.150.0/24 and 192.135.9/24 segments that allow the request to the ok_server to succeed.
I would really recommend that you not execute a five-minute delay in your API controller. It will give you more grief than it's worth. For example, when IIS restarts your AppPool, it will wait up to 90 seconds for requests to finish processing; during these automatic restarts, this request will be aborted.
The problem server may have TCP KeepAlive set to Microsoft's recommended (but not default) value of 5 minutes. Because HttpClient doesn't enable TCP keepalives by default, the problem server's OS is likely disconnecting the TCP socket before the response is sent, since the client fails to respond to the keepalives the server's OS is sending.
You could adjust the TCP KeepAlive setting at the OS level on the problem server by editing the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\ subkey as described here.
Alternatively, you can configure the client to support TCP keepalive before sending the request by configuring the ServicePoint. If there is a network device, such as a stateful firewall, between the client and server, a high frequency keep-alive setting may help keep the connection open.
var sp = ServicePointManager.FindServicePoint(new Uri(url));
sp.SetTcpKeepAlive(true, 6000, 3000);
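Note that ServicePointManager only affects the WebRequest-based stack (and HttpClient on .NET Framework); on .NET 5+ it has no effect on HttpClient. Under the assumption of a SocketsHttpHandler-based client (this is a sketch, not part of the original answer), the comparable option is to enable OS-level TCP keepalive on the socket via ConnectCallback:
using System;
using System.Net.Http;
using System.Net.Sockets;

var handler = new SocketsHttpHandler
{
    ConnectCallback = async (context, cancellationToken) =>
    {
        // open the TCP connection ourselves so OS-level keepalives can be enabled on it
        var socket = new Socket(SocketType.Stream, ProtocolType.Tcp) { NoDelay = true };
        socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
        socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 60);     // seconds idle before the first probe
        socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 10); // seconds between probes
        try
        {
            await socket.ConnectAsync(context.DnsEndPoint, cancellationToken);
            return new NetworkStream(socket, ownsSocket: true);
        }
        catch
        {
            socket.Dispose();
            throw;
        }
    }
};
var client = new HttpClient(handler) { Timeout = TimeSpan.FromMinutes(10) };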
In my .NET application I am using Microsoft's HttpClient to send requests to a server. In my team we have a problem where several people get a timeout on every single request for no apparent reason.
The following code is used to send the request:
var _httpClient = new HttpClient();
// Here are some values that are identical on each system.
var values = new Dictionary<string, string>();
var content = new FormUrlEncodedContent(values);
var response = await _httpClient.PostAsync("http://myservice.com/endpoint", content);
Is there any known issue or reason why HttpClient runs into a timeout on specific systems?
Important fact: calling the same endpoint with the same data via Postman works on every colleague's system.
EDIT:
The error message is:
"The request was canceled due to the configured HttpClient.Timeout of
100 seconds elapsing."
When navigating through the inner exceptions, the native error code is 995 and the SocketErrorCode is OperationAborted.
What I expect:
After at most 5 seconds there should be a 200 response with JSON data.
OK, the problem was that C# couldn't resolve the DNS name on particular systems while Postman could...
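If someone hits the same symptom, a quick way to check whether name resolution (rather than the HTTP call itself) is the failing step is to resolve the host directly. A small sketch, using the hostname from the snippet above:
using System;
using System.Net;

// if this call hangs or throws, the timeout is being spent on DNS resolution,
// not in HttpClient itself
var addresses = await Dns.GetHostAddressesAsync("myservice.com");
foreach (var address in addresses)
{
    Console.WriteLine(address);
}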
I have an Azure function that sends a request to a URL and sends back the response. This function kept failing with a timeout error for URLs from a particular domain (confidential).
To debug this, I created a very minimal Azure function:
byte[] content;
using (var response = await _httpClient.GetAsync(url))
{
    response.EnsureSuccessStatusCode();
    content = await response.Content.ReadAsByteArrayAsync();
}
return new OkObjectResult(content);
This code works fine locally. When I try using the deployed Azure function, it works for all the other domains I tried (e.g. https://google.com), but it hits a request timeout error for a particular domain after trying for about 90 seconds. The error happens at this particular line: _httpClient.GetAsync(url). Again, it works fine for this (confidential) domain locally.
I have tried deploying the Azure function to two completely different Azure service plans and regions. Same result. It doesn't work for URLs from the required domain, but works for URLs of other domains.
Error:
System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request..
Update (solution):
I tried sending a request from Postman, copied the C# code it generated, deployed it to the Azure function, and it is now working for the problematic domain. Something like below:
var client = new RestClient(url);
client.Timeout = -1;
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
The key here is client.Timeout = -1, which seems to have fixed the problem.
Now, in my original code, I tried setting HttpClient's Timeout to Timeout.InfiniteTimeSpan both in the Startup configuration and at the individual request level, but it did not work.
services.AddHttpClient("AzureTestClient", options =>
{
    options.Timeout = Timeout.InfiniteTimeSpan;
});
Am I setting the timeout wrong in the HttpClient solution?
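For what it's worth, HttpClient.Timeout cannot be changed once the client has sent its first request, so the usual per-request alternative is a CancellationToken with its own deadline. This is only a sketch (not from the original code), assuming the named client's Timeout is already InfiniteTimeSpan so that the token becomes the effective limit:
// _httpClient and url as in the snippets above
using (var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5)))
using (var response = await _httpClient.GetAsync(url, HttpCompletionOption.ResponseHeadersRead, cts.Token))
{
    response.EnsureSuccessStatusCode();
    var content = await response.Content.ReadAsByteArrayAsync();
    return new OkObjectResult(content);
}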
If you are using a Consumption plan, then maybe the confidential URL's server needs to whitelist the whole Azure data center's IP range. You can follow the guide here, or consider upgrading from the Consumption plan to a Premium one and having a dedicated linked VNet.
Maybe your local machine is already whitelisted for that domain, while the Azure function operates from a different IP range.
Another possible reason: maybe the URL returns an HttpStatusCode outside the successful range (200-299), so it fails at EnsureSuccessStatusCode in the old code?
Normally, for the HttpClient initialization, I do something like this:
public void Configure(IWebJobsBuilder builder)
{
    builder.Services.AddHttpClient("AzureTestClient",
        options => { options.Timeout = Timeout.InfiniteTimeSpan; });
}
Then, when I want to use it in any other function, I do the following and it works:
var client = clientFactory.CreateClient("AzureTestClient");
I have been developing a OneDrive desktop client app because the one built into Windows has been failing me for reasons I cannot figure out. I'm using the REST API in C# via an HttpClient.
All requests to the OneDrive endpoint work fine (downloading, uploading small files, etc.), and uploading large files worked fine up until recently (about two days ago). I get the upload session URL and start uploading data to it, but after two chunks are uploaded successfully (202 response), the third request and beyond time out (via the HttpClient), whether it is a GET to get the status or a PUT to upload data. The POST to create the session still works.
I have tried: getting a new ClientId, logging into a new Microsoft account, reverting the code to a known working state, and re-cloning the Git repository.
In Postman, I can go through the whole process of creating a session and uploading chunks without experiencing this issue, but if I take an upload URL that my application retrieves from the OneDrive API and try to PUT data to it in Postman, the server doesn't respond (unless the request is invalid, in which case it sometimes tells me). Subsequent GET requests to this URL also don't respond.
Here is a log of all requests going to the OneDrive API after authentication: https://pastebin.com/qRrw2Sb5
and here is the relevant code:
//first, create an upload session
var httpResponse = await _httpClient.StartAuthenticatedRequest(url, HttpMethod.Post).SendAsync(ct);
if (httpResponse.StatusCode != HttpStatusCode.OK)
{
    return new HttpResult<IRemoteItemHandle>(httpResponse, null);
}
//get the upload URL
var uploadSessionRequestObject = await HttpClientHelper.ReadResponseAsJObjectAsync(httpResponse);
var uploadUrl = (string)uploadSessionRequestObject["uploadUrl"];
if (uploadUrl == null)
{
    Debug.WriteLine("Successful OneDrive CreateSession request had invalid body!");
    //TODO: what to do here?
}
//the length of the file total
var length = data.Length;
//setup the headers
var headers = new List<KeyValuePair<string, string>>()
{
    new KeyValuePair<string, string>("Content-Length", ""),
    new KeyValuePair<string, string>("Content-Range", "")
};
JObject responseJObject;
//the response that will be returned
HttpResponseMessage response = null;
//get the chunks
List<Tuple<long, long>> chunks;
do
{
    HttpResult<List<Tuple<long, long>>> chunksResult;
    //get the chunks
    do
    {
        chunksResult = await RetrieveLargeUploadChunksAsync(uploadUrl, _10MB, length, ct);
        //TODO: should we delay on failure?
    } while (chunksResult.Value == null); //keep trying to get the results until we're successful
    chunks = chunksResult.Value;
    //upload each fragment
    var chunkStream = new ChunkedReadStreamWrapper(data);
    foreach (var fragment in chunks)
    {
        //setup the chunked stream with the next fragment
        chunkStream.ChunkStart = fragment.Item1;
        //the size is one more than the difference (because the range is inclusive)
        chunkStream.ChunkSize = fragment.Item2 - fragment.Item1 + 1;
        //setup the headers for this request
        headers[0] = new KeyValuePair<string, string>("Content-Length", chunkStream.ChunkSize.ToString());
        headers[1] = new KeyValuePair<string, string>("Content-Range", $"bytes {fragment.Item1}-{fragment.Item2}/{length}");
        //submit the request until it is successful
        do
        {
            //this should not be authenticated
            response = await _httpClient.StartRequest(uploadUrl, HttpMethod.Put)
                .SetContent(chunkStream)
                .SetContentHeaders(headers)
                .SendAsync(ct);
        } while (!response.IsSuccessStatusCode); // keep retrying until success
    }
    //parse the response to see if there are more chunks or the final metadata
    responseJObject = await HttpClientHelper.ReadResponseAsJObjectAsync(response);
    //try to get chunks from the response to see if we need to retry anything
    chunks = ParseLargeUploadChunks(responseJObject, _10MB, length);
}
while (chunks.Count > 0); //keep going until no chunks left
Everything does what the comments say or what the names suggest, but a lot of the methods/classes are my own, so I'd be happy to explain anything that might not be obvious.
I have absolutely no idea what's going on and would appreciate any help. I'm trying to get this done before I go back to school on Saturday and no longer have time to work on it.
EDIT: After waiting a while, requests can be made to the upload URL again via Postman.
EDIT 2: I can no longer replicate this timeout phenomenon in Postman. Whether I get the upload URL from my application, or from another Postman request, and whether or not the upload has stalled in my application, I can seem to upload all the fragments I want to through Postman.
EDIT 3: This not-responding behavior starts before the content stream is read from.
EDIT 4: Looking at packet info in Wireshark, the first two chunks are almost identical, but only "resend" packets show up on the third.
So after 3 weeks of varying levels of testing, I have finally figured out the issue, and it has almost nothing to do with the OneDrive Graph API. The issue was that when making the HTTP requests, I was using HttpCompletionOption.ResponseHeadersRead but not reading the responses before sending the next one. This meant that the HttpClient was preventing me from sending more requests until I had read the responses from the old ones. It was strange because it allowed me to send 2 requests before locking up.
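In other words, with HttpCompletionOption.ResponseHeadersRead the underlying connection stays reserved until the response body is consumed or the response is disposed. A minimal sketch of the fix using a plain HttpClient (my own helper methods from the code above are left out):
// read (or at least dispose) every response before sending the next request,
// otherwise the pooled connection is never released
using (var statusResponse = await httpClient.GetAsync(uploadUrl, HttpCompletionOption.ResponseHeadersRead, ct))
{
    var json = await statusResponse.Content.ReadAsStringAsync(); // drains the body
    var status = JObject.Parse(json);
    // ...inspect 'status'; disposing the response returns the connection to the pool
}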
I am working on my first Windows Phone 8.1 app (Universal, if that matters, but only the Windows Phone part is implemented at the moment). At first everything works very smoothly, but once my app has been running for about 25-30 minutes I can no longer use my HttpClient. I use Windows.Web.Http.HttpClient.
In my first tries I used a single HttpClient and reused it all the time. When I became aware that this was not working, I started using a new HttpClient for each request. But still no luck.
This is my method to get a new HttpClient:
private HttpClient GetClient()
{
    var filter = new HttpBaseProtocolFilter
    {
        AllowUI = false,
        CacheControl = { WriteBehavior = HttpCacheWriteBehavior.NoCache },
        ServerCredential =
            new PasswordCredential(
                BaseApiUri.ToString(),
                credentials.UserName,
                credentials.Password),
    };
    var httpClient = new HttpClient(filter);
    var headers = httpClient.DefaultRequestHeaders;
    var httpConnectionOptionHeaderValueCollection = headers.Connection;
    httpConnectionOptionHeaderValueCollection.Clear();
    headers.Accept.TryParseAdd("application/json");
    headers.CacheControl.TryParseAdd("no-cache");
    headers.Add("Pragma", "no-cache");
    headers.Add("Keep-Alive", "false");
    headers.Cookie.Clear();
    return httpClient;
}
The extra code setting the headers and clearing the cookies is my attempt to stop any caching of connections that might be happening under the surface. But still no luck.
My method for making requests to my API looks like the following:
private async Task<bool> PostNoResponseRequestTo(string relativeUri, object requestContent, CancellationToken cancellationToken)
{
    var targetUri = new Uri(BaseApiUri, relativeUri);
    var requestJson = JsonConvert.SerializeObject(requestContent);
    var content = new HttpStringContent(requestJson, UnicodeEncoding.Utf8, "application/json");
    try
    {
        using (var httpClient = this.GetClient())
        {
            var post =
                await httpClient.PostAsync(targetUri, content).AsTask(cancellationToken).ContinueWith(
                    async request =>
                    {
                        using (var response = await request)
                        {
                            return response.IsSuccessStatusCode;
                        }
                    },
                    cancellationToken);
            return await post;
        }
    }
    catch (Exception)
    {
        return false;
    }
}
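For comparison, here is a simpler equivalent without the ContinueWith wrapper, just a sketch using the same types and members as the method above and meant as a drop-in replacement for it:
private async Task<bool> PostNoResponseRequestTo(string relativeUri, object requestContent, CancellationToken cancellationToken)
{
    var targetUri = new Uri(BaseApiUri, relativeUri);
    var requestJson = JsonConvert.SerializeObject(requestContent);
    var content = new HttpStringContent(requestJson, UnicodeEncoding.Utf8, "application/json");
    try
    {
        using (var httpClient = this.GetClient())
        using (var response = await httpClient.PostAsync(targetUri, content).AsTask(cancellationToken))
        {
            // awaiting the operation directly avoids the nested Task from ContinueWith
            return response.IsSuccessStatusCode;
        }
    }
    catch (Exception)
    {
        return false;
    }
}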
All of this works fine for about 25-30 minutes, after which the calls to the API suddenly start to fail. I start getting a 401, but as you can see I have specified credentials, and since those work and do not change (I hardcoded them to test this), I'm starting to believe that the problem is on the API side.
This is the response I get:
StatusCode: 401, ReasonPhrase: 'Unauthorized', Version: 2, Content: Windows.Web.Http.HttpStreamContent, Headers:
{
Server: Microsoft-IIS/8.5
Date: Fri, 20 Mar 2015 14:25:06 GMT
WWW-Authenticate: Digest qop="auth",algorithm=MD5-sess,nonce="+Upgraded+NounceRemoved",charset=utf-8,realm="Digest", Negotiate, NTLM
X-Powered-By: ASP.NET
}
{
Content-Length: 1344
Content-Type: text/html
}
My API consists of an ASP.NET project with ServiceStack for its API functionality.
It is running on IIS with Digest authentication enabled (all other authentication methods are disabled).
By inspecting the logs I became aware of a failing API call before each successful call. But if I'm right, this is by design with Digest auth, because I have not found a way to tell the client up front that the other side uses Digest auth. I was able to specify this kind of information in my other .NET projects, but for some reason Microsoft changed the code (and namespace) for the HttpClient. I am also aware of the HttpClient in the original namespace that you can get through NuGet, but that is not working for me: I get an error in my output window as soon as I make any call, and the app closes without any information.
Back to the logs: I was able to get some information with the help of extended logging and the analysis tool. The error is something like (I can't access it right now, will edit later): 'Invalid token passed to function/method'.
I really hope that someone can help me solve this problem, as it makes the app nearly unusable. My users have to restart the app every 15 minutes to be on the safe side.
Thanks for any advice that helps.
Try checking the Machine Key setting in IIS. If "Automatically generate at runtime" is ticked, a new key is generated every time the app pool is restarted, which might be causing your issue. The Machine Key can be set at the server, website, or application level. Since Digest authentication relies on encryption, this might be the issue.
Managing Websites with IIS Manager (part 6) - The Machine Key and Windows Authentication