HttpClient cache breaks when the device date changes - C#

I have a problem in my app with HttpClient. When I change the date or time on my phone, the HttpClient cache gets messed up. For example, when I set the date backwards and run the app again, every response is taken from the cache; when I move the date forward, no request is cached or served from the cache. I don't know where the problem is, so please help if you have encountered a similar problem.
Is there any way to clear the HttpClient cache programmatically? I would do that on every app start.
EDIT:
I just used Fiddler to look at the responses and found that the time in the 'Date' response header is the correct (server) time. Maybe HttpClient combines this header with the time on the device in its cache calculations, which then produces wrong results.
EDIT2:
One use case:
The user opens the app and fetches a URL (http://my.website.com/jsondata) whose cache control header is set to 10 seconds, then closes the app, changes the time backwards or forwards, and opens the app again some time later. In the first case the HttpClient always serves the response for that URL from its cache; in the second case it never uses the cache again (even if the user requests the same URL multiple times within 10 seconds).
My code is :
public async Task<StringServerResponse> DownloadJsonAsync(Uri uri, DecompressionMethods decompressionMethod = DecompressionMethods.None, int timeout = 30000, int retries = 3)
{
    if (retries < 1 || retries > 10) retries = 3;
    int currentRetries = 0;
    // Baseline delay of 1 second
    int baselineDelayMs = 1000;
    // Used for exponential back-off
    Random random = new Random();
    StringServerResponse response = new StringServerResponse();
    if (decompressionMethod == DecompressionMethods.GZip)
    {
        // Use decompression handler
        using (var compressedHttpClientHandler = new HttpClientHandler())
        {
            if (compressedHttpClientHandler.SupportsAutomaticDecompression)
            {
                compressedHttpClientHandler.AutomaticDecompression = System.Net.DecompressionMethods.GZip;
            }
            using (var httpClient = new HttpClient(compressedHttpClientHandler))
            {
                httpClient.Timeout = TimeSpan.FromMilliseconds(timeout);
                do
                {
                    ++currentRetries;
                    try
                    {
                        var httpResponse = await httpClient.GetAsync(uri, HttpCompletionOption.ResponseContentRead);
                        if (httpResponse.IsSuccessStatusCode)
                        {
                            response.StatusCode = httpResponse.StatusCode;
                            response.Content = await httpResponse.Content.ReadAsStringAsync();
                            return response;
                        }
                    }
                    catch (Exception)
                    {
                    }
                    int delayMs = baselineDelayMs + random.Next((int)(baselineDelayMs * 0.5), baselineDelayMs);
                    if (currentRetries < retries)
                    {
                        await Task.Delay(delayMs);
                        DebugLoggingService.Log("Delayed web request " + delayMs + " ms.");
                    }
                    // Double the base delay for exponential back-off
                    baselineDelayMs *= 2;
                } while (currentRetries < retries);
            }
        }
    }
    else
    {
        // same but without gzip handler
    }
    return null;
}

You can try adding one more parameter to your query string in the URL. You can follow another post of mine: Why does the HttpClient always give me the same response?
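As an illustration of that approach, here is a minimal sketch (the `_ts` parameter name and the `CacheBusting` class are made up for this example): a unique query parameter makes every request URL distinct, so a stale cache entry can never match, and a `Cache-Control: no-cache` request header additionally asks caches to revalidate.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class CacheBusting
{
    // Appends a throwaway "_ts" parameter; the server ignores it,
    // but the cache sees a brand-new URL every time.
    public static Uri AddCacheBuster(Uri uri)
    {
        string separator = string.IsNullOrEmpty(uri.Query) ? "?" : "&";
        return new Uri(uri + separator + "_ts=" + DateTime.UtcNow.Ticks);
    }

    public static async Task<string> GetFreshAsync(HttpClient client, Uri uri)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, AddCacheBuster(uri));
        // Ask any intermediate cache to revalidate instead of serving stale data.
        request.Headers.CacheControl = new CacheControlHeaderValue { NoCache = true };
        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

Note this trades away caching entirely for the busted URLs, which may be acceptable given the broken device clock.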

Related

Problems establishing a secure SSL/TLS channel after a certain number of successful runs with HttpClient

I use a C# HttpClient to access a REST API endpoint with PKI certificate authentication/authorization.
The connection works for a certain number of queries without problems. According to the endpoint, there are some limits, which I list here for completeness:
Number of requests on an already established TLS connection = 200
Timeout for sending another request over the same connection = 5 seconds
The HttpClient is instantiated once at application startup.
private HttpClientHandler _httpClientHandler;
private static HttpClient _httpClient;

public MyTestClass()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
    _httpClientHandler = new HttpClientHandler();
    _httpClientHandler.ClientCertificateOptions = ClientCertificateOption.Automatic;
    _httpClient = new HttpClient(_httpClientHandler);
}
And the request method looks like this:
private HttpResponseMessage PerformRequest(string apiCommand, Entity.EntityTypes entityType, List<KeyValuePair<Entity.Attributes, string>> attributes)
{
    try
    {
        string relativeUri = string.Format("{0}/{1}?{2}&{3}",
            apiCommand,
            (apiCommand == Entity.API_SELECT) ? Entity.GetEntityCommand(entityType) : string.Empty,
            SYSTEM_AUTHENTICATION_PRODUCTION,
            PKI_USER_AUTHENTICATION);
        foreach (KeyValuePair<Entity.Attributes, string> attribute in attributes)
            relativeUri += "&" + Entity.GetAttributeCommand(entityType, attribute.Key) + attribute.Value;
        return _httpClient.GetAsync(relativeUri).Result;
    }
    catch (Exception ex)
    {
        throw new ClientException(ex.Message, ex.InnerException);
    }
    finally
    {
        LogReturn(CALL2);
    }
}
At the end of the day, the user can select a bunch of records from a grid and trigger a method which ultimately calls PerformRequest. PerformRequest is called in a loop and the response is analyzed on every run.
The call looks like this:
public string GetInfos(string partNo)
{
    HttpResponseMessage response = new HttpResponseMessage();
    string result = null;
    try
    {
        List<KeyValuePair<Entity.Attributes, string>> attributes = new List<KeyValuePair<Entity.Attributes, string>>();
        attributes.Add(new KeyValuePair<Entity.Attributes, string>(Entity.Attributes.PartKey, partNo));
        response = PerformRequest(Entity.API_SELECT, Entity.EntityTypes.Part, attributes);
        if (response.Content.Headers.ContentType.MediaType == MIME_TYPE_TEXT)
        {
            result = (response.Content.ReadAsStringAsync()).Result;
        }
    }
    finally
    {
        response.Dispose();
    }
    return result;
}
The problem occurs after more than 200 runs of the PerformRequest method (sometimes 218, another time 220, ...). A WebException is thrown: "The request was aborted: Could not create SSL/TLS secure channel".
The surprise is that no further call of PerformRequest works from then on. All new calls end in the same WebException. If I close the application and open it again, all is good and I can run the loop again.
The bigger surprise is that if I copy the same REST API URL that the application uses, paste it into a browser and execute it, it takes some time (I think the browser has to load the PKI certificate from the certificate store), then it loads the data correctly, and afterwards my application can also run the loop once more.
I tried everything I read, from "using blocks" to non-"using blocks" around the HttpClient/HttpClientHandler and response messages. I tried picking the certificate manually by switching from HttpClientHandler to WebRequestHandler. I also tried the ServicePoint.ConnectionLeaseTimeout property, as well as HttpClientHandler.PreAuthenticate, HttpClient.DefaultRequestHeaders.ConnectionClose, ServicePointManager.Expect100Continue, etc. I have spent the last two days on this and don't know what else to try.
Does anyone happen to have an idea what else it could be, or why this does not work?
And why does it work again after the call has been made in the browser?
Thank you very much for your support!
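For reference, here is a sketch of the connection-recycling settings the question mentions trying (the base URL and the 60-second lease value are illustrative, not a confirmed fix). `ServicePoint.ConnectionLeaseTimeout` makes pooled TLS connections close and re-open periodically, so a fresh handshake can happen before the server's 200-request limit is reached.

```csharp
using System;
using System.Net;
using System.Net.Http;

class ConnectionRecycling
{
    public static HttpClient CreateClient(Uri baseUri)
    {
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

        // Tear down and re-establish pooled connections every 60 seconds,
        // so no single TLS connection accumulates 200 requests.
        var servicePoint = ServicePointManager.FindServicePoint(baseUri);
        servicePoint.ConnectionLeaseTimeout = 60 * 1000; // milliseconds

        var handler = new HttpClientHandler
        {
            ClientCertificateOptions = ClientCertificateOption.Automatic
        };
        var client = new HttpClient(handler) { BaseAddress = baseUri };

        // Alternative (heavier) approach: request a new connection every time.
        // client.DefaultRequestHeaders.ConnectionClose = true;
        return client;
    }
}
```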

ASP.NET Core 5 - Timeout middleware outputting phantom data?

I created a timeout middleware that works basically like this:
public async Task InvokeAsync(HttpContext httpContext)
{
    var stopwatch = Stopwatch.StartNew();
    using (var timeoutTS = CancellationTokenSource.CreateLinkedTokenSource(httpContext.RequestAborted))
    {
        var delayTask = Task.Delay(config.Timeout);
        var res = await Task.WhenAny(delayTask, _next(httpContext));
        Trace.WriteLine("Time taken = " + stopwatch.ElapsedMilliseconds);
        if (res == delayTask)
        {
            timeoutTS.Cancel();
            httpContext.Response.StatusCode = 408;
        }
    }
}
In order to test it, I created a controller action:
[HttpGet]
public async Task<string> Get(string timeout)
{
    var result = DateTime.Now.ToString("mm:ss.fff");
    if (timeout != null)
    {
        await Task.Delay(2000);
    }
    var rng = new Random();
    result = result + " - " + DateTime.Now.ToString("mm:ss.fff");
    return result;
}
The configured timeout is 500 ms, and the reported time taken is usually 501-504 ms (a very acceptable skew).
The problem is that every now and then I see an error in the output window saying that the response had already started. I thought to myself: this can't be! It is happening 1 second before the end of the Task.Delay in the corresponding controller.
So I opened Fiddler and, to my surprise, several requests return in 1.3-1.7 seconds WITH A FULL RESPONSE BODY.
By comparing the time written in the response body with the timestamp on Fiddler's "Statistics" tab, I can guarantee that the response I am looking at does not belong to the request at hand!
Does anyone know what's going on? Why is this "jumbling" happening?
Frankly, you're not using middleware the way it is designed to be used.
You might want to read the middleware docs.
The ASP.NET Core request pipeline consists of a sequence of request delegates, called one after the other.
In your case, your middleware runs in parallel with the next middleware.
When a middleware short-circuits, it's called a terminal middleware, because it prevents further middleware from processing the request.
If I understand you correctly, you may want to create such a terminal middleware, but your current one clearly is not one.
You have invoked the _next middleware, which means the request has already been handed off to the next middleware in the pipeline. The subsequent middleware components can start the response before the timeout has elapsed, i.e. there is a race condition between your middleware and the subsequent ones.
To avoid writing to a started response, you should always check HasStarted before assigning the status code. And if the response has already started, all you can do is abort the request, if you don't want the client to wait too long.
static void ResetOrAbort(HttpContext httpContext)
{
    var resetFeature = httpContext.Features.Get<IHttpResetFeature>();
    if (resetFeature is not null)
    {
        resetFeature.Reset(2);
    }
    else
    {
        httpContext.Abort();
    }
}

app.Use(next =>
{
    return async context =>
    {
        var nextTask = next(context);
        var t = await Task.WhenAny(nextTask, Task.Delay(100));
        if (t != nextTask)
        {
            var response = context.Response;
            // If the response has not started, return 408
            if (!response.HasStarted)
            {
                // NOTE: you can still get the same exception,
                // because this check does not eliminate
                // the race condition
                try
                {
                    response.StatusCode = StatusCodes.Status408RequestTimeout;
                    await response.StartAsync();
                }
                catch
                {
                    ResetOrAbort(context);
                }
            }
            // Otherwise, abort the request
            else
            {
                ResetOrAbort(context);
            }
        }
    };
});

Getting "Response status code does not indicate success: 502 (Bad Gateway)" from Asp.net Core MVC Web App hosted in Azure

I have a client application built on ASP.NET Core MVC (v 1.1.1) and a Web API built on ASP.NET Core (v 2.1). I have hosted both on Azure.
While making some requests, the web app fails and gives a 502 Bad Gateway response:
Response status code does not indicate success: 502 (Bad Gateway).
The specified CGI application encountered an error and the server terminated the process.
This issue is intermittent; however, it seems to happen when a request takes more than 2 minutes to process. I have set requestTimeout to 20 minutes on both the client and API side in the Web.config file, but that didn't resolve it. Sometimes the same request is processed in less time and I get a response.
Additionally, a 5-minute timeout for the HttpClient has also been set, but no luck.
<aspNetCore requestTimeout="00:20:00"/>
_httpClient.Timeout = new TimeSpan(0, 5, 0);
I have tested the app locally and don't see this issue there; I get a response even when processing takes more than 3 minutes.
It seems that the Azure web app does not wait for the request to finish if it crosses 2 minutes. The Azure idle timeout is documented as 230 seconds (about 3.8 minutes), yet it is not waiting even that long, and the app does not treat this case as an error, so nothing is logged.
Client Side code:
public class ApiClientFactory
{
    private static Uri ApiUrl;

    private static Lazy<ApiClient> restClient = new Lazy<ApiClient>(
        () => new ApiClient(ApiUrl),
        LazyThreadSafetyMode.ExecutionAndPublication);

    static ApiClientFactory()
    {
        ApiUrl = new Uri(Convert.ToString(ConfigurationManager.AppSettings["WebAPIUrl"]));
    }

    public static ApiClient Instance
    {
        get { return restClient.Value; }
    }
}

public class ApiClient
{
    private readonly HttpClient _httpClient;
    private readonly Uri BaseEndPointUrl;

    public ApiClient(Uri baseEndPointUrl)
    {
        if (baseEndPointUrl == null)
            throw new ArgumentNullException("baseEndPointUrl");
        BaseEndPointUrl = baseEndPointUrl;
        _httpClient = new HttpClient();
        _httpClient.DefaultRequestHeaders.Add("Accept", "application/json");
        _httpClient.Timeout = new TimeSpan(0, 5, 0); // Timeout needed for a few modules to get results from the db.
    }

    private HttpContent CreateHttpContent<T>(T content)
    {
        var json = JsonConvert.SerializeObject(content, MicrosoftDateFormatSettings);
        return new StringContent(json, Encoding.UTF8, "application/json");
    }

    private static JsonSerializerSettings MicrosoftDateFormatSettings
    {
        get
        {
            return new JsonSerializerSettings
            {
                DateFormatHandling = DateFormatHandling.MicrosoftDateFormat
            };
        }
    }

    public async Task<T1> PostAsync<T1, T2>(string url, T2 content, string token)
    {
        try
        {
            _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
            var response = await _httpClient.PostAsync(url, CreateHttpContent<T2>(content));
            if (response.StatusCode == HttpStatusCode.InternalServerError)
            {
                var exception = await response.Content.ReadAsStringAsync();
                throw new Exception(exception);
            }
            response.EnsureSuccessStatusCode();
            var data = await response.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<T1>(data);
        }
        catch (Exception)
        {
            throw; // rethrow without resetting the stack trace
        }
    }
}
Please assist where it's having an issue.
You can implement a method that retries your requests, or simply use .NET Polly for this kind of error. It is not good practice to just keep increasing your timeout.
Method:
private async Task<bool> PostAsync() // params you need
{
    HttpStatusCode[] httpStatusCodesWorthRetrying =
    {
        HttpStatusCode.RequestTimeout,
        HttpStatusCode.InternalServerError,
        HttpStatusCode.BadGateway,
        HttpStatusCode.ServiceUnavailable,
        HttpStatusCode.GatewayTimeout
    };
    const int maxAttempts = 3;
    var attempts = 0;
    while (attempts < maxAttempts)
    {
        var response = await _httpClient.PostAsync(requestUrl, CreateHttpContent<T2>(content));
        if (!httpStatusCodesWorthRetrying.Contains(response.StatusCode))
        {
            return true;
        }
        attempts++;
        await Task.Delay(300);
    }
    return false;
}
Polly:
private async Task PostAsync() // params you need
{
    HttpStatusCode[] httpStatusCodesWorthRetrying =
    {
        HttpStatusCode.RequestTimeout,
        HttpStatusCode.InternalServerError,
        HttpStatusCode.BadGateway,
        HttpStatusCode.ServiceUnavailable,
        HttpStatusCode.GatewayTimeout
    };
    _retryPolicy = Policy
        .Handle<HttpRequestException>()
        .OrResult<HttpResponseMessage>(r => httpStatusCodesWorthRetrying.Contains(r.StatusCode))
        .WaitAndRetryAsync(4, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));
    // Pass a delegate so the policy controls each attempt;
    // awaiting the call first would execute it once, outside the policy.
    await _retryPolicy.ExecuteAsync(() => _httpClient.PostAsync(requestUrl, CreateHttpContent<T2>(content)));
}
We thought the following could be the possible reasons: at the time of testing, a network-related issue between the two hosted applications while processing the request and delivering the response; the web app might be timing out before the API delivers the response.
We made the following change in the API client class (posted in the question). I am not certain about the subtle difference between awaiting the task and blocking on its Result, but after this change we no longer experienced the issue; we also did some load testing and still didn't hit it later on.
From
var response = await _httpClient.PostAsync(requestUrl, CreateHttpContent<T2>(content));
To
var response = _httpClient.PostAsync(requestUrl, CreateHttpContent<T2>(content)).Result;
Note: enabling the Analytics options for the hosted domains in Azure gives good insights for narrowing down this type of issue.

Elasticsearch: Waiting for Long Running requests to complete

What is the best approach for knowing when a long running Elasticsearch request is complete?
Today I have a process that periodically purges ~100K documents from an AWS hosted ES that contains a total of ~60M documents.
var settings = new ConnectionSettings(new Uri("https://mycompany.es.aws.com"));
settings.RequestTimeout(TimeSpan.FromMinutes(3)); // not sure this helps
var client = new ElasticClient(settings);

var request = new DeleteByQueryRequest("MyIndex") { ... };

// this call returns IsValid = true, HTTP status 504 after ~60 s
var response = await client.DeleteByQueryAsync(request);
Even with the timeout set to 3 minutes, the call always returns in about 60 seconds with an empty response and a 504 status code, although through Kibana I can see that the delete action continues (and completes properly) over the next several minutes.
Is there a better way to request and monitor (wait for completion of) a long-running ES request?
UPDATE
Based on Simon Lang's answer, I updated my code to make use of ES tasks. The final solution looks something like this:
var settings = new ConnectionSettings(new Uri("https://mycompany.es.aws.com"));
settings.RequestTimeout(TimeSpan.FromMinutes(3)); // not sure this helps
var client = new ElasticClient(settings);

var request = new DeleteByQueryRequest("MyIndex")
{
    Query = ...,
    WaitForCompletion = false
};

var response = await client.DeleteByQueryAsync(request);
if (response.IsValid)
{
    var taskCompleted = false;
    while (!taskCompleted)
    {
        var taskResponse = await client.GetTaskAsync(response.Task);
        taskCompleted = taskResponse.Completed;
        if (!taskCompleted)
        {
            await Task.Delay(5000);
        }
    }
}
I agree with @LeBigCat that the timeout comes from AWS and that it is not a NEST problem.
But to address your question:
The _delete_by_query request supports the wait_for_completion parameter. If you set it to false, the request returns immediately with a task id. You can then request the task status through the task API.
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
This is not a NEST/Elasticsearch problem; the default timeout in a NEST query is 0 (no timeout).
You are getting the timeout from the Amazon load balancer (60 s default).
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ts-elb-error-message.html
This link explains everything you need to know :)
Regarding @Simon Lang's answer, the same applies to the _update_by_query API. For those unfamiliar with the _tasks API, you can query for your task in Kibana. The string returned by the update or delete by query will be of the form:
{
"tasks" : "nodeId:taskId"
}
and you can view the status of the task using this command in Kibana:
GET _tasks/nodeId:taskId

Httpclient gets a timeout before server starts processing the request

I have a strange error, that I can't reproduce on my own computer.
Here is the complete code I'm using:
public async Task LoadHeaderAndFooter()
{
    // ignore any SSL errors
    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

    var baseAddress = new Uri(Request.Url.GetComponents(UriComponents.Scheme | UriComponents.Host, UriFormat.Unescaped));
    var cookieContainer = new CookieContainer();
    using (var handler = new HttpClientHandler { CookieContainer = cookieContainer })
    using (var client = new HttpClient(handler) { BaseAddress = baseAddress })
    {
        var oCookies = Request.Cookies;
        for (var j = 0; j < oCookies.Count; j++)
        {
            var oCookie = oCookies.Get(j);
            if (oCookie == null) continue;
            var oC = new Cookie
            {
                Domain = baseAddress.Host,
                Name = oCookie.Name,
                Path = oCookie.Path,
                Secure = oCookie.Secure,
                Value = oCookie.Value
            };
            cookieContainer.Add(oC);
        }
        Header.Text = await client.GetStringAsync("/minside/loyalityadmin/header");
        Footer.Text = await client.GetStringAsync("/minside/loyalityadmin/footer");
    }
}
What happens is that the request starts, then waits for the timeout (30 sec default), and the HttpClient throws a "task canceled" exception. THEN the actual request fires on the server.
This code runs in an .ascx.cs file, while /header is an MVC controller with a .cshtml view. Both run on the same server.
How can I get this to work?
After even more testing, it seems that restsharp.org also has the same issue on some platforms (everywhere but my development platform).
It turns out the problem is the session object. On IIS it is not possible to read session state in parallel:
Access to ASP.NET session state is exclusive per session, which means
that if two different users make concurrent requests, access to each
separate session is granted concurrently. However, if two concurrent
requests are made for the same session (by using the same SessionID
value), the first request gets exclusive access to the session
information. The second request executes only after the first request
is finished. (The second session can also get access if the exclusive
lock on the information is freed because the first request exceeds the
lock time-out.) If the EnableSessionState value in the @ Page
directive is set to ReadOnly, a request for the read-only session
information does not result in an exclusive lock on the session data.
However, read-only requests for session data might still have to wait
for a lock set by a read-write request for session data to clear.
from https://msdn.microsoft.com/en-us/library/ms178581.aspx
So the solution was simply not to pass the session id:
if (oCookie == null || oCookie.Name == "ASP.NET_SessionId") continue;
Which works fine, since the secondary request did not need access to the session. Any solution that makes it possible to use the session in both requests would be received with thanks.
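Since /header is served by an MVC controller, one sketch of such a solution (the controller and action names here are hypothetical, and this assumes the nested requests only read the session, never write it) is to mark the controller's session access as read-only, which avoids the exclusive per-session lock described in the quote above:

```csharp
using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session access does not take the exclusive per-session lock,
// so the nested HttpClient request is not serialized behind the parent
// request that holds the same SessionID.
[SessionState(SessionStateBehavior.ReadOnly)]
public class LoyalityAdminController : Controller
{
    public ActionResult Header()
    {
        // Session values can be read here, but not written.
        return PartialView();
    }

    public ActionResult Footer()
    {
        return PartialView();
    }
}
```

With this in place the ASP.NET_SessionId cookie could be forwarded again, at the cost of losing write access to the session inside these actions.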
