.NET Core MVC SignalR hub methods not running concurrently - c#

I am having an issue with the latest version of SignalR in .NET Core 3.1. It seems that one of my hub methods is blocking a second hub method.
The hub methods are invoked correctly on the client side. There is no delay between the call to GetPriceHistoryDifferentialAsync, which runs a LINQ query on the server, and the log entry showing that the second method, GetPriceHistoryAsync, triggered by a quick button click, was also called.
client console log
invoke: GetPriceHistoryAsync
invoke: GetPriceHistoryDifferentialAsync
invoke: GetPriceHistoryAsync
Next is the debug output in Visual Studio on the server side.
server log
Stocks.Web.Hubs.StocksHub: Information: GetPriceHistoryAsync A UKhqAp6VQheTCrxYWeyjbQ
Stocks.Web.Hubs.StocksHub: Information: GetPriceHistoryDifferentialAsync UKhqAp6VQheTCrxYWeyjbQ
There is then a delay until GetPriceHistoryDifferentialAsync completes, after which any remaining GetPriceHistoryAsync calls complete.
Stocks.Web.Hubs.StocksHub: Information: GetPriceHistoryAsync AAN UKhqAp6VQheTCrxYWeyjbQ
Previously there was no delay in returning data for the button-click GetPriceHistoryAsync calls while the GetPriceHistoryDifferentialAsync query was running. I think the issue began when I updated to .NET Core 3.1 and the latest SignalR.
Does anyone know what caused this problem or how to make SignalR hub methods run concurrently again? I am using the async/await model, and both hub methods await Clients.Caller.SendCoreAsync.
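For what it's worth, ASP.NET Core SignalR dispatches invocations from a single client connection one at a time by default, so a hub method that awaits a long operation will hold up later calls from the same connection. A minimal sketch of one workaround, assuming a hypothetical IStocksService and a client callback named "priceHistoryDifferential" (neither is from the question): let the slow method return immediately and push its result back through IHubContext when it finishes.

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class StocksHub : Hub
{
    private readonly IHubContext<StocksHub> _hubContext;
    private readonly IStocksService _stocksService; // hypothetical data service

    public StocksHub(IHubContext<StocksHub> hubContext, IStocksService stocksService)
    {
        _hubContext = hubContext;
        _stocksService = stocksService;
    }

    public Task GetPriceHistoryDifferentialAsync(string symbol)
    {
        var connectionId = Context.ConnectionId;

        // Offload the slow LINQ query so the per-connection dispatch queue is freed
        // immediately and GetPriceHistoryAsync calls are not queued behind it.
        _ = Task.Run(async () =>
        {
            var result = await _stocksService.GetPriceHistoryDifferentialAsync(symbol);

            // Use IHubContext here: the Hub instance is disposed once this method returns.
            await _hubContext.Clients.Client(connectionId)
                .SendCoreAsync("priceHistoryDifferential", new object[] { result });
        });

        return Task.CompletedTask;
    }
}

A fire-and-forget task like this swallows exceptions, so real code would want error handling; in .NET 7+ the HubOptions.MaximumParallelInvocationsPerClient setting addresses the sequential dispatch directly, but it is not available in .NET Core 3.1.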

Related

Check if gRPC server is running for Grpc.Net.Client

This question is basically the same as 1, but instead of Grpc.Core I would use the more up-to-date library Grpc.Net.Client. The Channel class here has neither a "State" property nor a ConnectAsync() method, nor any other way to detect the current connection state. My goal is to write a server and a client application where the startup order is arbitrary, i.e. if the client is started before the server, it should wait until the server, and therefore the connection, becomes ready.
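Since the question notes that Grpc.Net.Client's channel exposes no connection state here, a common workaround is simply to retry a cheap RPC until the server answers. A sketch under that assumption (the Greeter client and SayHello call come from the default gRPC template, and the one-second delay is arbitrary):

using System;
using System.Threading.Tasks;
using Grpc.Core;
using Grpc.Net.Client;

public static class GrpcClientStartup
{
    // Probe the server with a cheap call and retry while it is unreachable.
    public static async Task<GrpcChannel> ConnectWhenReadyAsync(string address)
    {
        var channel = GrpcChannel.ForAddress(address);
        var client = new Greeter.GreeterClient(channel);

        while (true)
        {
            try
            {
                await client.SayHelloAsync(new HelloRequest { Name = "ping" });
                return channel; // server is up, reuse the channel for real calls
            }
            catch (RpcException ex) when (ex.StatusCode == StatusCode.Unavailable)
            {
                await Task.Delay(TimeSpan.FromSeconds(1)); // server not started yet, try again
            }
        }
    }
}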

SignalR takes a long time to respond to a heartbeat

I created a very simple ASP.NET Core app with SignalR in Visual Studio, using the Web App (MVC) application template with the following customizations:
added a reference to the @microsoft/signalr library via LibMan,
referenced <script src="~/lib/microsoft-signalr/signalr.min.js"></script> in _Layout.cshtml,
added the required SignalR services in Startup.cs and created an empty Hub, exposed in the following way:
app.UseEndpoints(endpoints =>
{
    endpoints.MapHub<MyHub>("hub/remote");
    // ... MVC router definitions
});
created the SignalR connection in JS:
const connection =
    new signalR.HubConnectionBuilder()
        .withUrl("/hub/remote")
        .configureLogging(signalR.LogLevel.Trace)
        .withAutomaticReconnect()
        .build();
connection.start().then(() => console.log("Connected."));
Then I launched the MVC app and everything started without an error.
However, I took a closer look at the SignalR log messages.
I believe this is SignalR's internal heartbeat that keeps the connection alive.
I wonder why it takes 4-5 seconds between sending the message and receiving the response.
I also tried using SignalR in a more complex application, and from time to time I even started receiving "Reconnecting" events, as the load was significantly larger there.
That makes me feel that I am doing something wrong while configuring the connection, but I have no idea what exactly.
"Connection Slow" isn't an event in ASP.NET Core SignalR.
The heartbeats are not directly related to each other, so the gaps between client and server pings are normal.
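For reference, both intervals are configurable on the server. A short sketch of the documented HubOptions settings inside Startup.ConfigureServices, with the values shown being the defaults (15 s keep-alive, 30 s client timeout):

services.AddSignalR(options =>
{
    options.KeepAliveInterval = TimeSpan.FromSeconds(15);       // how often the server pings
    options.ClientTimeoutInterval = TimeSpan.FromSeconds(30);   // how long the server waits for any client message
});

The JavaScript client has matching serverTimeoutInMilliseconds and keepAliveIntervalInMilliseconds properties on the connection if the client side needs tuning as well.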

.NET Core 3.1 Post Request Connection Closes After 5 Minutes

I am working on a .NET Core 3.1 Web API application and am having a terrible time with something closing my long-running API request.
I have a POST endpoint that looks like this (slightly simplified):
[HttpPost]
[IgnoreAntiforgeryToken]
[ActionName("LoadDataIntoCache")]
public async Task<IActionResult> LoadDataIntoCache([FromQuery] string filter)
{
    // long running process (15-20 mins)
    var success = await _riskService.LoadDataIntoCache(filter);
    if (success == false)
    {
        return StatusCode(StatusCodes.Status500InternalServerError);
    }
    return Ok();
}
This endpoint works fine when I test it locally via Postman. However, when I push this to the server (IIS) and hit the endpoint via Postman, it produces an error after 5 minutes: Error: read ECONNRESET.
No more details are produced than this. Checking the application logs, it does not throw an exception; in fact, it appears that the long-running process continues to run as if nothing is wrong. It's as if the connection itself is being closed by something, while the application keeps working fine.
I have also tried calling this endpoint via C# instead of Postman. My calling code produced the exception message "Processing of the HTTP request resulted in an exception." and additionally "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
I have checked the IIS timeout, which is set to 120 s and does not align with the 5-minute cutoff I am seeing. I have checked a number of timeout settings on the .NET side, but my understanding is that .NET Core 3.1 does not need these settings because it waits indefinitely by default? This application is also set to run InProcess, if that is significant...
I am really scratching my head on this one. Any pointers would be much appreciated.
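Regardless of which intermediary is resetting the connection at the 5-minute mark, a 15-20 minute request is usually better modelled as a background job: return 202 Accepted right away and do the load in a hosted service. A sketch of that pattern (the ICacheLoadQueue, CacheLoadWorker, RiskController, and IRiskService names are illustrative, not from the question apart from _riskService):

using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Hosting;

// Simple in-memory work queue backed by a Channel.
public interface ICacheLoadQueue
{
    ValueTask EnqueueAsync(string filter);
    ValueTask<string> DequeueAsync(CancellationToken ct);
}

public class CacheLoadQueue : ICacheLoadQueue
{
    private readonly Channel<string> _channel = Channel.CreateUnbounded<string>();
    public ValueTask EnqueueAsync(string filter) => _channel.Writer.WriteAsync(filter);
    public ValueTask<string> DequeueAsync(CancellationToken ct) => _channel.Reader.ReadAsync(ct);
}

// Hosted service that performs the long-running load outside any HTTP request.
public class CacheLoadWorker : BackgroundService
{
    private readonly ICacheLoadQueue _queue;
    private readonly IRiskService _riskService; // the service from the question

    public CacheLoadWorker(ICacheLoadQueue queue, IRiskService riskService)
    {
        _queue = queue;
        _riskService = riskService;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var filter = await _queue.DequeueAsync(stoppingToken);
            await _riskService.LoadDataIntoCache(filter); // 15-20 min, no HTTP connection held open
        }
    }
}

[ApiController]
[Route("[controller]/[action]")]
public class RiskController : ControllerBase
{
    private readonly ICacheLoadQueue _queue;
    public RiskController(ICacheLoadQueue queue) => _queue = queue;

    [HttpPost]
    [IgnoreAntiforgeryToken]
    [ActionName("LoadDataIntoCache")]
    public async Task<IActionResult> LoadDataIntoCache([FromQuery] string filter)
    {
        await _queue.EnqueueAsync(filter);
        return Accepted(); // 202: the load continues in the background
    }
}

The queue and worker would be registered in ConfigureServices with services.AddSingleton<ICacheLoadQueue, CacheLoadQueue>(); and services.AddHostedService<CacheLoadWorker>();.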

How to close C++ REST Sdk websocket?

I'm using cpprestsdk on the client and .NET Core 2.1 on the server side. Everything works except the closing part.
// C++
web::websockets::client::websocket_callback_client _client;
//connecting and working with websocket...
_client.close().wait();
// C#
while (!Socket.CloseStatus.HasValue)
{
    // sending/receiving data
}
await Socket.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, "Connection closed", CancellationToken.None);
The issue is that _client.close().wait(); never returns. The server receives the close request and calls CloseOutputAsync successfully, yet the client still never exits _client.close().wait();. It looks like there is some issue with the close handshake between the C++ and .NET Core implementations, and I have not managed to find a workaround. Is there any way to force _client.close().wait(); to close the connection without waiting for the handshake from the server? Or is there something wrong with the server code for closing the WebSocket?
It was my own mistake. I had set _client.set_close_handler(...) with a handler that uses a lock_guard. This causes a deadlock, since the same mutex was already locked during the close() call.
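Not the fix described above (that was the client-side lock), but for comparison, a typical ASP.NET Core server-side receive loop completes the close handshake by answering the client's close frame with CloseAsync. A sketch:

using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

public static class WebSocketHandler
{
    public static async Task ReceiveLoopAsync(WebSocket socket, CancellationToken ct)
    {
        var buffer = new byte[4 * 1024];
        while (socket.State == WebSocketState.Open)
        {
            var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), ct);

            if (result.MessageType == WebSocketMessageType.Close)
            {
                // Acknowledge the client's close frame so its close() can complete.
                await socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "Connection closed", ct);
                break;
            }

            // ... handle the received data here ...
        }
    }
}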

SignalR Delay firing OnDisconnected in Hub

When I navigate away from a page in Silverlight I call a function in JavaScript which contains the following:
$.connection.hub.stop();
In my Hub class I override OnDisconnected()
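A minimal sketch of that override, assuming the SignalR 1.x Hub API where OnDisconnected takes no parameters (2.1+ adds a bool stopCalled argument):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class MyHub : Hub // hub name is illustrative
{
    public override Task OnDisconnected()
    {
        // per-connection cleanup, e.g. keyed by Context.ConnectionId
        return base.OnDisconnected();
    }
}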
Usually the OnDisconnected event fires immediately, but sometimes it doesn't, and it takes a seemingly variable amount of time before the connection times out and OnDisconnected is fired.
After I navigate away from the page that creates the connection, Fiddler shows that the original signalr/connect?... request has no response code (the column shows '-' and the Body column shows '-1'). Shouldn't this return 200?
If I navigate back to the page, a new connection is established.
Sometimes a signalr/abort request is issued, passing the same connectionToken, but the original connect request remains the same (open).
Why isn't the OnDisconnected event firing immediately every time in the hub, since the connection is being stopped explicitly in the client?
Using SignalR 1.2.1 .NET 4.0
Having read the documentation, I don't believe this applies to my situation:
The OnDisconnected method doesn't get called in some scenarios, such as when a server goes down or the App Domain gets recycled. When another server comes on line or the App Domain completes its recycle, some clients may be able to reconnect and fire the OnReconnected event.
