SignalR to prevent gateway timeout - C#

We have an ASP.NET MVC app running behind a gateway policy that terminates any web request that runs over 5 minutes. One of the features exports some data, and it has been running just over 5 minutes. Would SignalR help? Would having a persistent connection between the client and server be enough for the gateway to consider the request active and not terminate it?

We faced the same issue in our project, where the API has to process some data and the UI can't wait that long for a response from the API.
We use SignalR to notify the requesting UI/client when the data has been processed successfully.
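For illustration, here is a minimal sketch of that pattern with ASP.NET MVC and SignalR 2. The hub name, the exportCompleted client callback, and the RunExport helper are assumptions for the example, not taken from the original posts: the controller returns immediately, and the background job notifies the client over the hub when the export finishes.

// Minimal sketch (assumed names), ASP.NET MVC 5 + SignalR 2.
using System.Threading;
using System.Web.Hosting;
using System.Web.Mvc;
using Microsoft.AspNet.SignalR;

// Clients connect to this hub and wait for the "exportCompleted" callback.
public class ExportHub : Hub { }

public class ExportController : Controller
{
    [HttpPost]
    public ActionResult StartExport()
    {
        // The client sends its SignalR connection id along with the request.
        string connectionId = Request.Form["connectionId"];

        // Return immediately and do the long-running work in the background,
        // so the gateway never sees a request that runs anywhere near 5 minutes.
        HostingEnvironment.QueueBackgroundWorkItem((CancellationToken ct) =>
        {
            string downloadUrl = RunExport(ct); // hypothetical long-running export

            var hub = GlobalHost.ConnectionManager.GetHubContext<ExportHub>();
            hub.Clients.Client(connectionId).exportCompleted(downloadUrl);
        });

        return new HttpStatusCodeResult(202); // Accepted
    }

    private string RunExport(CancellationToken ct)
    {
        // ... the export that currently takes just over 5 minutes ...
        return "/downloads/export.xlsx"; // placeholder
    }
}

The client subscribes to exportCompleted before posting, so the gateway only ever sees the short StartExport request plus the persistent SignalR connection, never a 5-minute response.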

Related

C# Windows Service - subscribe to realtime events (notifications) from an external source

I have a C# Windows Service installed on a customer server that does the following tasks:
Listens to the SQL Broker service for any insert/update in 3 tables and then POSTs the data to an API method so the remote server gets updated with the latest changes (using SqlTableDependency).
Runs a polling method every 5 minutes to verify/validate that these 3 tables have the same data on the remote server (in case the SQL Broker service is not working).
Starts a self-hosted Web API server (this doesn't work because the customer doesn't allow the server to be exposed to the Internet).
This last self-hosted task was implemented so that an external application can ask the customer server to perform some updates on a table.
I would like to know if there is a way to subscribe the Windows service to a realtime broadcast engine/service such as Pusher or AWS SQS, etc. The idea is that I can trigger tasks in the remote customer Windows service from an outside application.
Is this doable? If it is, I could even get rid of the polling task in the Windows service, because I could then have the Windows service push information to the API based on an event that I trigger from an external source.
This might not be the best workaround, but it seems to be working pretty nicely.
What I did was implement an infinite loop in the Windows service with a long-polling call to an AWS SQS queue, setting the 'Receive message wait time' parameter in SQS to its maximum of 20 seconds. This allowed me to reduce the number of empty responses and also to cut the cost of requests to the SQS service to one every 20 seconds.
If a message comes in while the long poll is being handled, the long poll returns immediately and the message is received.
Because messages are not sent that frequently, let's say I receive 1 every 20 seconds, that works out to:
3 receive requests every minute
180 per hour
4,320 per day
roughly 130,000 per month
And AWS pricing is $0.40 per 1 million requests, so that will be practically free.
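A minimal sketch of that loop, assuming the AWS SDK for .NET (AWSSDK.SQS); the queue URL and the HandleMessage helper are placeholders:

// Minimal sketch of the long-polling loop, AWS SDK for .NET (AWSSDK.SQS).
using System;
using System.Threading;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class SqsCommandListener
{
    private readonly IAmazonSQS _sqs = new AmazonSQSClient();
    private const string QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/service-commands"; // placeholder

    public async Task RunAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // Long poll: the call waits up to 20 seconds for a message, which keeps
            // empty responses (and billed requests) down to about 3 per minute.
            var response = await _sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = QueueUrl,
                WaitTimeSeconds = 20,
                MaxNumberOfMessages = 1
            }, ct);

            foreach (var message in response.Messages)
            {
                HandleMessage(message.Body); // trigger the requested task in the service

                // Delete the message so it is not delivered again.
                await _sqs.DeleteMessageAsync(QueueUrl, message.ReceiptHandle, ct);
            }
        }
    }

    private void HandleMessage(string body)
    {
        Console.WriteLine("Received command: " + body);
    }
}

If a message arrives mid-poll, ReceiveMessageAsync returns immediately with it, which is the behavior described above.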

Receive information once the server is updated, from ASP.NET Web API

I am new to ASP.NET. I started learning Web API, and later I wish to learn MVC and SPA.
I am developing a sample API that can be accessed from 2 kinds of desktop application.
App1 will use a POST request to update the data through the API, and App2 will use a GET request (run every 30 seconds) to get the updated information.
Is there any way the server can send a message to App2 so that it can get the updated data without having to check the server every 30 seconds?
At present I am running a dispatcher timer to send the request periodically with a 30 second delay.
Thank you in advance.
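In line with the other posts here, one way to avoid the 30-second polling is to have the API push to App2 over SignalR. A minimal sketch, assuming ASP.NET Web API with SignalR 2; the hub name, the payload shape, and the dataUpdated callback are illustrative, not from the question:

// Minimal sketch (assumed names), ASP.NET Web API + SignalR 2.
using System.Web.Http;
using Microsoft.AspNet.SignalR;

// App2 connects to this hub instead of polling every 30 seconds.
public class UpdatesHub : Hub { }

public class DataController : ApiController
{
    // App1 POSTs here to update the data.
    [HttpPost]
    public IHttpActionResult Post([FromBody] string payload)
    {
        SaveData(payload); // hypothetical persistence step

        // Push the change to every connected client (App2) immediately.
        var hub = GlobalHost.ConnectionManager.GetHubContext<UpdatesHub>();
        hub.Clients.All.dataUpdated(payload);

        return Ok();
    }

    private void SaveData(string payload) { /* ... */ }
}

On the App2 side, a HubConnection from Microsoft.AspNet.SignalR.Client with proxy.On<string>("dataUpdated", ...) would replace the dispatcher timer.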

SignalR (Version 1) hub connection to server during application pool recycle

We have several servers running an ASP.NET Web API application. The Web API is accessed from a Windows desktop client application written in C#. The client uses a SignalR HubConnection object to start a reverse channel with the server through which it receives live feedback (log statements, status, etc.). Last year, one of our users reported that the client was periodically encountering the following error:
An existing connection was forcibly closed by the remote host
This error is received when the HubConnection.Error event is raised. I dug into the Windows Event Viewer logs on the server side and discovered that these errors coincided exactly with the occurrence of the following events:
A process serving application pool 'ASP.NET v4.0' exceeded time limits during shut down. The process id was 'xxxx'.
This event came 90 seconds after the following application pool recycling event:
A worker process with process id of 'xxxx' serving application pool 'ASP.NET v4.0' has requested a recycle because the worker process reached its allowed processing time limit.
So clearly, the old worker process serving the ASP.NET v4.0 application pool was failing to shut down within the ShutdownTimeLimit of 90 seconds. I did some further experiments and discovered that a 'signalr/connect' request being retained in the request queue was causing the forced shutdown of the worker process.
The version of the SignalR libraries we were using at the time was 1.0.20228.0.
Last year, we upgraded to SignalR version 2.2.31215.272 on both the client and server. It seems that this change resolved the problem described above. The 'signalr/connect' request is still retained for the life of the hub connection, but when the application pool recycles, the client and server gracefully reconnect without any issues. Apparently some fix was made between SignalR V1 and V2 that allows it to handle application pool recycle events much more gracefully.
Just for my own understanding, why was this issue being caused with V1 of the SignalR libraries, and what changed between V1 and V2 which resolved this issue?
Thanks.
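For reference, a minimal sketch of the client-side wiring described in the question, using the SignalR 2 .NET client; the hub name, the log callback, and the restart-on-Closed logic are illustrative, not taken from the original code:

// Minimal sketch of the desktop client's reverse channel, SignalR 2 .NET client
// (Microsoft.AspNet.SignalR.Client). Hub and method names are assumed.
using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public class FeedbackChannel
{
    private readonly HubConnection _connection;
    private readonly IHubProxy _proxy;

    public FeedbackChannel(string serverUrl)
    {
        _connection = new HubConnection(serverUrl);
        _proxy = _connection.CreateHubProxy("FeedbackHub");

        // Live feedback pushed from the server (log statements, status, etc.).
        _proxy.On<string>("log", line => Console.WriteLine(line));

        // Raised on errors such as "An existing connection was forcibly closed...".
        _connection.Error += ex => Console.WriteLine("Hub error: " + ex.Message);

        // Raised after the built-in reconnect attempts give up, for example when
        // an app pool recycle takes longer than the reconnect window.
        _connection.Closed += async () =>
        {
            await Task.Delay(TimeSpan.FromSeconds(10));
            await _connection.Start();
        };
    }

    public Task StartAsync() => _connection.Start();
}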

Azure web app load balancer timeout issue, getting HTTP status of 500 and sub-status of 121

We have a web site that loads and analyses Excel data and reports back to the user. The process of analyzing the Excel data takes, on average, over 5 minutes (depending on the data), during which time the client-server communication seems to be idle.
This web site is hosted on Azure as a web app, and it seems that Azure has a load balancer timeout, according to the following link:
https://azure.microsoft.com/en-us/blog/new-configurable-idle-timeout-for-azure-load-balancer/
In this link it is mentioned that:
In its default configuration, Azure Load Balancer has an 'idle timeout' setting of 4 minutes. This means that if you have a period of inactivity on your tcp or http sessions for more than the timeout value, there is no guarantee to have the connection maintained between the client and your service.
Because of this issue the end user constantly gets an HTTP status of 500 with a sub-status of 121.
Currently we can't re-architect the system, nor are we able to change from deploying as a web app.
We have tried sending jQuery AJAX requests to the server at a set interval, but this doesn't seem to be working.
The above article talks about keeping the TCP session alive using ServicePoint.SetTcpKeepAlive(), but we have no idea how to implement this in an MVC web application. (We did not find any samples on the net either.)
We really need to resolve this issue because it could make or break our project, so any help is appreciated. Specifically, any working sample code using ServicePoint.SetTcpKeepAlive() in an MVC application would be greatly appreciated.
Thanks in advance.
UPDATE
I tried out what Irb mentioned, but still no luck. As you can see in the given image, I call KeepSessionAlive repeatedly. At every call to KeepSessionAlive I access a session variable, making sure the session does not time out. But the call to Save still returns 500. Again, this only happens on Azure.
A Web API method that returns JSON will not always keep the session alive. If you request something that invokes session state, then you can call it on a timed interval from the client. Something as simple as requesting an image from a JPEG web handler could work. Here is a link to something similar.
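A minimal sketch of the kind of keep-alive endpoint being suggested, as an ASP.NET MVC action; the controller name and route are assumptions. Because it is an MVC action that touches Session, it keeps the ASP.NET session alive, unlike a sessionless Web API method:

// Minimal sketch (assumed names), ASP.NET MVC keep-alive endpoint.
using System;
using System.Web.Mvc;

public class KeepAliveController : Controller
{
    [HttpGet]
    public ActionResult Ping()
    {
        // Touch session state so the ASP.NET session does not expire while the
        // long-running analysis request is still being processed.
        Session["LastKeepAlive"] = DateTime.UtcNow;

        return Json(new { ok = true, at = DateTime.UtcNow }, JsonRequestBehavior.AllowGet);
    }
}

The client would call /KeepAlive/Ping on a timer (for example every 60 seconds) for as long as the analysis request is in flight.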

WCF multi-session wsdualhttpbinding reconnect to server

I implemented a WCF service that performs a long-running task. It needs to provide the client with notifications about the current progress of that long task. It is working well so far, but the problem is:
When the user closes the client app and then opens it again, the client app should start receiving updates from the server again about the task that is running.
There can be multiple tasks started by different users at the same time.
So for example, the client starts a process named "proc1" that will take 3 hours, and after 15 minutes the user closes the app. The process continues to run on the server. After 30 minutes the client starts the app again, and then the client app needs to start getting notifications about the process it started 30 minutes ago. How can this be accomplished?
Thanks in advance.
You should save a process id on the client side that can be used later to get the progress of that process. Try using that process id to reconnect to the notifications.
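A minimal sketch of that re-subscribe idea with a wsDualHttpBinding-style duplex contract; the contract, method names, and the in-memory subscriber map are assumptions, not from the question:

// Minimal sketch (assumed names) of a duplex WCF contract that lets a client
// re-attach to a running task by process id.
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

public interface IProgressCallback
{
    [OperationContract(IsOneWay = true)]
    void OnProgress(Guid processId, int percentComplete);
}

[ServiceContract(CallbackContract = typeof(IProgressCallback))]
public interface ILongTaskService
{
    // Starts the task and returns an id the client persists locally.
    [OperationContract]
    Guid StartTask(string taskName);

    // Called after the client app restarts: re-attaches the new callback
    // channel to the task identified by the saved process id.
    [OperationContract]
    void Subscribe(Guid processId);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class LongTaskService : ILongTaskService
{
    // processId -> latest callback channel for that task's owner.
    private readonly ConcurrentDictionary<Guid, IProgressCallback> _subscribers =
        new ConcurrentDictionary<Guid, IProgressCallback>();

    public Guid StartTask(string taskName)
    {
        var id = Guid.NewGuid();
        Subscribe(id);
        // ... start the 3-hour job on a background thread, reporting via ReportProgress ...
        return id;
    }

    public void Subscribe(Guid processId)
    {
        var callback = OperationContext.Current.GetCallbackChannel<IProgressCallback>();
        _subscribers[processId] = callback;
    }

    // Called by the background job as it makes progress.
    private void ReportProgress(Guid processId, int percent)
    {
        if (_subscribers.TryGetValue(processId, out var callback))
        {
            try { callback.OnProgress(processId, percent); }
            catch (CommunicationException) { _subscribers.TryRemove(processId, out _); }
        }
    }
}

The client persists the Guid returned by StartTask (for example in a local file) and calls Subscribe with it after the app restarts, so progress notifications resume on the new callback channel.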
