When I put the application into production on a pod-managed Kubernetes architecture that can scale (today it runs two servers with the same application), Hangfire recognizes both servers but the dashboard returns a 500 error:
Unable to refresh the statistics: the server responded with 500 (error). Try reloading the page manually, or wait for automatic reload that will happen in a minute.
But on staging, the test environment where there is only one server, Hangfire works normally.
Hangfire Configuration:
Startup.cs
services.AddHangfire(x => x.UsePostgreSqlStorage(Configuration.GetConnectionString("DefaultConnection")));
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] { new AuthorizationFilterHangfire() }
});
app.UseHangfireServer();
Error
You can add IgnoreAntiforgeryToken to your dashboard options, which should resolve this issue.
According to this GitHub post, the issue occurred when you had multiple servers running the dashboard: due to load balancing, when a request went to a different server from the one that originally served the page, you'd see the error.
Adding IgnoreAntiforgeryToken = true to the dashboard options should resolve the issue.
Excerpt Taken from here
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] {new HangfireAuthFilter()},
IgnoreAntiforgeryToken = true // <--This
});
We solved this issue by persisting our DataProtectionKeys in Redis for all replicas to read from.
services.AddDataProtection().PersistKeysToStackExchangeRedis(redis, $"{yourPrefix}::DataProtectionKeys");
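For context, redis in that call is a shared ConnectionMultiplexer. A minimal sketch of the full registration, assuming the StackExchange.Redis and Microsoft.AspNetCore.DataProtection.StackExchangeRedis packages and placeholder names for the connection string and key prefix:

// Sketch only: the "Redis" connection string name and the prefix are placeholders.
var yourPrefix = "myapp";
var redis = ConnectionMultiplexer.Connect(Configuration.GetConnectionString("Redis"));
services.AddDataProtection()
    .PersistKeysToStackExchangeRedis(redis, $"{yourPrefix}::DataProtectionKeys");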
I created a very simple ASP.NET Core app with SignalR with Visual Studio using a Web App MVC application template with the following customization:
added a reference to the @microsoft/signalr library via libman,
referenced <script src="~/lib/microsoft-signalr/signalr.min.js"></script> in _Layout.cshtml,
added the required SignalR services in Startup.cs and created an empty Hub (both sketched after the snippets below), exposed in the following way:
app.UseEndpoints(endpoints =>
{
    endpoints.MapHub<MyHub>("hub/remote");
    // ... MVC route definitions
});
created the SignalR connection in JS:
const connection =
new signalR.HubConnectionBuilder()
.withUrl("/hub/remote")
.configureLogging(signalR.LogLevel.Trace)
.withAutomaticReconnect()
.build();
connection.start().then(() => console.log("Connected."));
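For completeness, the service registration and the empty Hub mentioned above look roughly like this (MyHub matches the mapping; the rest is standard boilerplate):

// In Startup.ConfigureServices:
services.AddSignalR();

// The empty hub mapped by MapHub<MyHub>("hub/remote"); Hub is Microsoft.AspNetCore.SignalR.Hub.
public class MyHub : Hub
{
}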
Then I launched the MVC app and everything started without an error.
However, I took a closer look at SignalR log messages:
I believe this is SignalR's internal heartbeat that keeps the connection alive.
I wonder why it takes 4-5 seconds between sending the message and receiving the response?
I also tried using SignalR in a more complex application, and from time to time I even started receiving "Reconnecting" events, as the load was significantly larger there.
That makes me think I'm doing something wrong when configuring the connection, but I have no idea what exactly.
"Connection Slow" isn't an event in ASP.NET Core SignalR.
The heartbeats are not directly related to each other, so the gaps between client and server pings are normal.
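If you want tighter control over those heartbeats anyway, the intervals are configurable on the server; a minimal sketch with illustrative values (not recommendations):

// Startup.ConfigureServices; the values below are only examples.
services.AddSignalR(options =>
{
    // How often the server pings connected clients.
    options.KeepAliveInterval = TimeSpan.FromSeconds(15);
    // How long the server waits for a client message before it considers the client gone.
    options.ClientTimeoutInterval = TimeSpan.FromSeconds(30);
});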
I'm working on a .NET Core 3.1 webapp using C#.
I use Blazor ServerSide as my front-end. The app is hosted by Azure.
On my page I have an upload component. When I upload 1 file it works fine. Uploading 2-3 files still works, but when I upload more files I get this error:
(Serilog.AspNetCore.RequestLoggingMiddleware) HTTP "POST" "/api/foo/2/bar" responded 500 in 18501.3265 ms
Microsoft.AspNetCore.Connections.ConnectionResetException: The client has disconnected
---> System.Runtime.InteropServices.COMException (0x800704CD): An operation was attempted on a nonexistent network connection. (0x800704CD)
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.IIS.Core.IO.AsyncIOOperation.GetResult(Int16 token)
Most likely it has something to do with the file size of the POST.
When running my webapp in Visual Studio I don't have any problems.
I also see this warning in my logging:
(Microsoft.AspNetCore.Server.IIS.Core.IISHttpServer) Increasing the MaxRequestBodySize conflicts with the max value for IIS limit maxAllowedContentLength. HTTP requests that have a content length greater than maxAllowedContentLength will still be rejected by IIS. You can disable the limit by either removing or setting the maxAllowedContentLength value to a higher limit.
I've tried numerous options to increase the limit:
In my controller:
[Authorize]
[DisableRequestSizeLimit]
public class UploadController : BaseApiController
In Startup.cs:
// To bypass the file limit:
services.Configure<FormOptions>(options =>
{
options.ValueLengthLimit = int.MaxValue;
options.MultipartBodyLengthLimit = long.MaxValue; // <-- !!! long.MaxValue
options.MultipartBoundaryLengthLimit = int.MaxValue;
options.MultipartHeadersCountLimit = int.MaxValue;
options.MultipartHeadersLengthLimit = int.MaxValue;
});
services.Configure<IISServerOptions>(options =>
{
options.MaxRequestBodySize = null;
});
But still no luck.
Am I missing yet another setting or is this error not related to the upload size?
It took me weeks to solve this together with help from Microsoft Support.
It appears the problem was only with a few clients.
I had changed the Web App setting in Azure to use HTTP/2 instead of HTTP/1.1.
It looks like one hop on the connection from my browser to Azure was dropping the HTTP/2 packets. When I reverted the setting, the problem went away.
I have a .Net Core 3.1 app. In the app, I use the IHttpClientFactory to create an HttpClient. When I make a call using SendAsync, the first request takes over 2 seconds whereas subsequent requests take less than 100 ms. This is not acceptable performance for a production application.
I have also noticed that it happens if I don't make any requests for a while. I came across the PooledConnectionIdleTimeout property, which defaults to 2 minutes, and I can extend that time, but that would only work for pooled connections that already exist, not when needing to create a new one.
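For reference, this is roughly how I understand that idle timeout could be extended through the factory (a sketch only, with a placeholder value; I don't currently run this):

// Sketch: extend the pooled connection idle timeout for the named client.
services.AddHttpClient("HttpClient")
    .ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
    {
        PooledConnectionIdleTimeout = TimeSpan.FromMinutes(15) // placeholder value
    });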
I configure the HttpClient in my Startup.cs as such:
services.AddHttpClient("HttpClient",
h =>
{
h.BaseAddress = new Uri(Configuration["PythonUrl"]);
});
I use the HttpClient like this:
var client = httpClientFactory.CreateClient("HttpClient");
var res = await client.GetAsync(nameof(Accounts).ToLower() + "/" + id.ToString() + "/");
when the "Configuration["PythonUrl"]" contains PC name,like this :
{
"AllowedHosts": "*",
"PythonUrl": "http://PC202003261059:8000/",
"url": "http://*:5000"
}
The HttpClient's first request becomes very slow. Can anything be done to avoid this?
I had a similar problem and found a solution that works well for me.
Note that this solution will only work for IIS hosted WebApps.
When hosting an application in IIS, the AppPool is responsible for actually running it. However, it only starts the application when it receives the first request. This is fine in most situations and is therefore the default setting, but this leads to a slow first request.
In IIS Manager, right-click the Application Pool that runs your app and select 'Advanced Settings'. Set 'Start Mode' to 'Always Running'.
Right-click your Site and, under 'Manage Website', select 'Advanced Settings'. Set 'Preload Enabled' to 'true'.
This should make the first request faster and won't put your AppPool to sleep after some time. It should also be noted that this may impact the performance of other apps hosted on the same server.
This article helped me with this.
Disclaimer: the first request might still be slower for several reasons, some of which may be client-side (DNS lookups) or network-related (lost packets, a poor internet connection, etc.).
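If IIS preload isn't available to you, a self-issued warm-up request at startup can take the cold-start hit before real traffic does; a rough sketch, assuming a hosted service and the named client from the question (both names are placeholders on my side):

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Sketch: fire one request at startup so the first real call doesn't pay the warm-up cost.
public class WarmUpService : IHostedService
{
    private readonly IHttpClientFactory _factory;
    public WarmUpService(IHttpClientFactory factory) => _factory = factory;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        try
        {
            // "/" is a placeholder path relative to the client's BaseAddress.
            await _factory.CreateClient("HttpClient").GetAsync("/", cancellationToken);
        }
        catch
        {
            // Warm-up is best-effort; ignore failures.
        }
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}

// Registered in Startup.ConfigureServices:
// services.AddHostedService<WarmUpService>();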
I have a problem with the following code, which uses an MSMQ queue:
Message m;
if (t != TimeSpan.Zero)
{
    m = q.Receive(t);
}
else
{
    m = q.Receive();
}
What I am doing is consuming a web service that uses this code. The problem is that if I test it locally using the Visual Studio debugger, it works. But when I send the message through the service deployed in IIS, the Receive() call returns a timeout error, apparently due to a lack of permissions.
System.Messaging.MessageQueueException (0x80004005): The timeout for the requested operation has expired
From what I have investigated, I think it could be because, when using the debugger, the queue is accessed as the user logged in on the computer, while when the call comes through the service URL it is accessed as the IIS DefaultAppPool identity. But I'm not sure, and I can't make it work. If anyone has any idea what the solution would be, I appreciate it in advance.
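For reference, my understanding is that the app pool identity could be granted rights on the queue along these lines (untested on my side; the queue path and the identity shown are just the defaults):

using System.Messaging;

// Sketch (untested): grant the IIS app pool identity receive/peek rights on the queue.
var queue = new MessageQueue(@".\private$\myQueue"); // placeholder queue path
queue.SetPermissions(
    @"IIS APPPOOL\DefaultAppPool",
    MessageQueueAccessRights.ReceiveMessage | MessageQueueAccessRights.PeekMessage,
    AccessControlEntryType.Allow);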
I am working on Windows Server 2016 with .NetFramework 4.8.
This error occurs when I perform a redirect to another part of the site in question.
return this.RedirectToActionPermanent("Index", "Dashboard");
The code does not step into the Index method in the Dashboard controller before the error occurs.
I think the Fiddler response header I posted is not the 'internal' response header referred to in the error message.
Could anyone explain how best to increase the size limit of the internal response header?
Or are we looking at needing to compress the internal response header as a solution?
For your information, the redirect call above was working day in, day out for about 3 months until it broke about 5 days ago.
[Edit 1]
The program.cs file is now looking like:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args).UseStartup<Startup>().ConfigureKestrel((context, options) => {
options.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(5);
options.Limits.MaxResponseBufferSize = 1024000;
options.Limits.Http2.HeaderTableSize = 16384;
});
However, the same error is still occurring.
[Edit 2 18.03.19 2:28am]
The problem was caused by a User Claim not being filled in (Identity Table) on one of the profile users. I filled the data in manually in SSMS and the error stopped occurring.
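For anyone hitting the same thing who would rather not edit the table by hand in SSMS, the missing claim could presumably also be added through Identity's UserManager; a rough sketch (the user lookup, claim type and value are placeholders):

// Sketch: add the missing user claim via ASP.NET Core Identity instead of editing the table directly.
// _userManager is an injected UserManager<IdentityUser>.
var user = await _userManager.FindByNameAsync("affected-user"); // placeholder user name
await _userManager.AddClaimAsync(user, new System.Security.Claims.Claim("missing-claim-type", "value"));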
TIA.
David
I have also faced a similar issue; for me it was caused by a bug in WinHTTP from a long time ago (.NET Core 2.0). When a redirect response occurs, WinHTTP prefers to reuse the same connection to follow the redirect.
The number of response bytes drained is controlled by the WinHttpHandler.MaxResponseDrainSize.
When the response size exceeds MaxResponseDrainSize the request is aborted with specified error.
The issue disappeared when I upgraded to 2.1.
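If upgrading isn't an option, for reference that limit sits on WinHttpHandler and can be raised; a minimal sketch (the value is arbitrary and the System.Net.Http.WinHttpHandler package is assumed):

// Sketch: raise the drain limit on WinHttpHandler; 1 MB here is only a placeholder.
var handler = new WinHttpHandler
{
    MaxResponseDrainSize = 1024 * 1024
};
var client = new HttpClient(handler);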
Update: Timeout Issue
The setting you added, KeepAliveTimeout = TimeSpan.FromMinutes(5);, applies to Kestrel but not to IIS.
There is a reverse proxy set up between IIS and Kestrel. So you set Kestrel's timeout to 5 minutes, but IIS still sits at its default timeout of 120 seconds. You can change the timeout in
IIS->Sites->Select your Site->Advanced Settings ->Limits->Connection Time Out.
Try changing this setting to check whether the error still happens.
Also there is another setting available in Configuration Editor