I'm writing a test application with a SignalR server and a web client, and I wanted to know whether there is a way for the server to determine which transport method the client is using to establish the connection.
Regarding WebSockets, which give a persistent two-way connection between the client and server, versus long polling, which keeps polling the server until the server responds and then closes the connection: are there any downsides I should be aware of when the transport is not WebSockets, beyond losing the persistent two-way connection, especially if many long-running requests are going to be made one after another?
I've noticed that multiple requests from a client are handled by the hub and returned when done. For example, if I send a request to wait 10 seconds and then another request to wait 1 second, the hub responds to the 1-second request first and then to the 10-second one. I'm curious whether a thread is created per request and attached to the client via the same persistent duplex connection.
Here is my example code:
using System.Threading;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Cors;
using Owin;

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseCors(CorsOptions.AllowAll);
        app.MapSignalR();
    }
}

public class RunningHub : Hub
{
    public void SendLongRunning(string name, string waitFor)
    {
        Clients.All.addMessage(name, "just requested a long running request, I'll get back to you when I'm done");
        LongRunning(waitFor);
        Clients.All.addMessage(name, "I'm done with the long running request, which took " + waitFor + " ms");
    }

    private void LongRunning(string waitFor)
    {
        int waitTime = int.Parse(waitFor);
        Thread.Sleep(waitTime);
    }
}
jQuery sample:
$(function () {
    // Set the hub's URL for the connection
    $.connection.hub.url = "http://localhost:9090/signalr";

    // Declare a proxy to reference the hub.
    var signalHub = $.connection.runningHub;
    $('#url').append('<strong> Working With Port: ' + $.connection.hub.url + '</strong>');

    // Create a function that the hub can call to broadcast messages.
    signalHub.client.addMessage = function (name, message) {
        // handle the response message here
    };

    // Start the connection.
    $.connection.hub.start().done(function () {
        $('#sendlongrequest').click(function () {
            signalHub.server.sendLongRunning($('#displayname').val(), $('#waitTime').val());
        });
    });
});
For ASP.NET Core, you can use:
var transportType = Context.Features.Get<IHttpTransportFeature>()?.TransportType;
Regarding the transport method:
You can inspect the transport parameter in HubCallerContext.QueryString:
public void SendLongRunning(string name, string waitFor)
{
    var transport = Context.QueryString.First(p => p.Key == "transport").Value;
}
Regarding threading & long-running tasks:
Each request will be handled on a separate thread and the hub pipeline resolves the client-side promise when the hub method completes. This means that you can easily block your connection because of the connection limit in browsers (typically 6 connections at a time).
E.g. if you use long polling and you make six requests to the server, each triggering (or directly executing) a long-running operation, then you'll have six pending AJAX requests which only get resolved once the hub method is done, and you won't be able to make any further requests to the server until then. So you should run the long-running code in separate tasks, and you should not await those tasks, so that the hub dispatcher can send its response without delay.
If the client needs to know when the long-running task is done, then you should do a push notification from the server instead of relying on the .done() callback.
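A minimal sketch of that fire-and-forget approach, reusing the RunningHub from the question; GlobalHost.ConnectionManager.GetHubContext is used for the final push because the hub instance itself is disposed once the hub method returns:
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class RunningHub : Hub
{
    public void SendLongRunning(string name, string waitFor)
    {
        Clients.All.addMessage(name, "long running request started, I'll push a message when it's done");

        int waitTime = int.Parse(waitFor);

        // Fire and forget: the hub method returns immediately, so the transport
        // (and, under long polling, its pending AJAX request) is freed right away.
        Task.Run(async () =>
        {
            await Task.Delay(waitTime);

            // Resolve a fresh hub context instead of capturing this.Clients,
            // since this hub instance is disposed after SendLongRunning returns.
            var hubContext = GlobalHost.ConnectionManager.GetHubContext<RunningHub>();
            hubContext.Clients.All.addMessage(name, "done with the long running request, which took " + waitTime + " ms");
        });
    }
}
The client then reacts to the second addMessage push rather than waiting on the .done() callback of the original invocation.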
Related
I am trying to keep track of the users connected to my hub.
The way I tried to do this was by creating a custom Authorize attribute for my hub and checking for the user that is trying to connect. If the user is already connected, then the hub does not authorize the connection:
public class SingleHubConnectionPerUserAttribute : Microsoft.AspNet.SignalR.AuthorizeAttribute
{
    private static readonly HashSet<UserKey> connections = new HashSet<UserKey>();

    public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
    {
        Type hubType = hubDescriptor.HubType;
        string userId = request.User.Identity.GetUserId();
        UserKey userKey = new UserKey(hubType, userId);
        if (connections.Contains(userKey) || !base.AuthorizeHubConnection(hubDescriptor, request))
        {
            return false;
        }
        connections.Add(userKey);
        return true;
    }
}
This would work fine if the method AuthorizeHubConnection was called only once per connection, but that is not what is happening.
When I load the page that tries to connect to the hub, AuthorizeHubConnection oddly runs multiple times, and the number of times it runs is not always the same: sometimes it's 5, sometimes it's 3. I really have no clue what could be causing it.
Do you know what could cause AuthorizeHubConnection to get called more than once?
Authorization is invoked each time the SignalR server receives an HTTP request, before it does anything else (see: https://github.com/SignalR/SignalR/blob/dev/src/Microsoft.AspNet.SignalR.Core/PersistentConnection.cs#L161). While SignalR maintains a logically persistent connection, it makes multiple HTTP requests behind the scenes. When using the WebSockets transport you will typically see only three of these when starting the connection (for the negotiate, connect and start requests), plus one for each reconnect. The longPolling and serverSentEvents transports create an HTTP request each time they send data (send). In addition, longPolling creates a polling HTTP request to receive data (poll). Each of these requests has to be authorized, and this is why you see multiple calls to the AuthorizeHubConnection method.
I have successfully created an Azure application that sends DbTransactions to a ServiceBus Queue, and then, enqueues a 'notifying message' to a ServiceBus Topic for other clients to monitor (...so they can receive the updates automatically).
Now, I want to use SignalR to monitor & receive the SubscriptionClient messages... and I have test code that works just fine on its own.
I have found many examples for sending messages to an Azure Queue (that is easy). And, I have the code to receive a BrokeredMessage from a SubscriptionClient. However, I cannot get SignalR to continuously monitor my Distribute method.
How do I get SignalR to monitor the Topic?
CODE BEHIND: (updated)
public void Dequeue()
{
    SubscriptionClient subscription = GetTopicSubscriptionClient(TOPIC_NAME, SUBSCRIPTION_NAME);
    BrokeredMessage message = subscription.Receive();

    if (message != null)
    {
        try
        {
            var body = message.GetBody<string>();
            var contextXml = message.Properties[PROPERTIES_CONTEXT_XML].ToString();
            var transaction = message.Properties[PROPERTIES_TRANSACTION_TYPE].ToString();

            Console.WriteLine("Body: " + body);
            Console.WriteLine("MessageID: " + message.MessageId);
            Console.WriteLine("Custom Property [Transaction]: " + transaction);

            var context = XmlSerializer.Deserialize<Person>(contextXml);
            message.Complete();

            Clients.All.distribute(context, transaction);
        }
        catch (Exception ex)
        {
            // Manage later
        }
    }
}
CLIENT-SIDE CODE:
// TEST: Hub - GridUpdaterHub
var hubConnection = $.hubConnection();
var gridUpdaterHubProxy = hubConnection.createHubProxy('gridUpdaterHub');

gridUpdaterHubProxy.on('hello', function (message) {
    console.log(message);
});

// I want this automated
gridUpdaterHubProxy.on('distribute', function (context, transaction) {
    console.log('It is working');
});

hubConnection.start().done(function () {
    // This is successful
    gridUpdaterHubProxy.invoke('hello', "Hello");
});
I would not do it like that. Your code is consuming and retaining ASP.NET thread-pool threads for each incoming connection, so if you have many clients you are not scaling well at all. I do not know the internals of SignalR that deeply, but I'd guess that your never-ending method is preventing SignalR from invoking your client-side callbacks, because that requires the server method to end properly. Just try replacing while(true) with something that exits after, say, 3 messages in the queue; you should be called back 3 times, and probably those calls will all happen together when your method exits.
If that is right, then you can move to something different, like dedicating a specific thread to consuming the queue and having the callbacks invoked from there using GlobalHost.ConnectionManager.GetHubContext. Probably better, you could try a different process consuming the queue and doing an HTTP POST to your web app, which in turn broadcasts to the clients.
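A rough sketch of the first option, started once at application startup rather than from a hub method. The GridUpdaterHub class name is an assumption based on the 'gridUpdaterHub' proxy in the question, and PROPERTIES_TRANSACTION_TYPE is the same constant used in the question's Dequeue method:
using System.Threading;
using Microsoft.AspNet.SignalR;
using Microsoft.ServiceBus.Messaging;

public static class TopicListener
{
    // Call TopicListener.Start(...) once, e.g. from the OWIN Startup class.
    public static void Start(SubscriptionClient subscription)
    {
        var worker = new Thread(() =>
        {
            // Resolve the hub context once; broadcasts happen outside any hub method.
            var hubContext = GlobalHost.ConnectionManager.GetHubContext<GridUpdaterHub>();

            while (true)
            {
                // Blocks this dedicated thread, not an ASP.NET request thread.
                BrokeredMessage message = subscription.Receive();
                if (message == null)
                {
                    continue;
                }

                var body = message.GetBody<string>();
                var transaction = message.Properties[PROPERTIES_TRANSACTION_TYPE].ToString();
                message.Complete();

                // Deserialization of the context XML is omitted here for brevity.
                hubContext.Clients.All.distribute(body, transaction);
            }
        });

        worker.IsBackground = true;
        worker.Start();
    }
}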
I am using SignalR 1.1.2 with SQL Server as the backplane, and frequently (on about 70% of page loads) SignalR fires off a negotiate request which succeeds (doesn't fail) and then just halts; nothing happens anymore. I found out that there is some conflict with jquery.validation.js,
so I updated jQuery, jQuery Validation and jQuery Unobtrusive Ajax to the latest available versions, but it still happens.
Other times, when it works correctly, I can see a few XHR requests being made:
negotiate
connect
ping
long polling requests further on
Here is my hub:
public class NotificationsHub : Hub
{
    public override Task OnConnected()
    {
        Groups.Add(Context.ConnectionId, CurrentUser.UserId.ToString());
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        Groups.Remove(Context.ConnectionId, CurrentUser.UserId.ToString());
        return base.OnDisconnected();
    }
}
And my client code:
$(document).ready(function () {
    var nh = $.connection.notificationsHub;

    nh.notify = function (notificationHtml) {
        $('#notification-box').find('ul').prepend(notificationHtml);
        $('#notification-bubble').text($('#notification-bubble').text() + 1);
    };

    $.connection.hub.start();
});
While refreshing page I get error:
The connection to http://localhost:55087/signalr/connect?transport=serverSentEvents&connectionToken=FuqvvQJaxyCfy0dj2DEoZQFkBmthBk9LxU6ZEGZO4ypbSy8pUpzrvCKWTjykxp9E9GtMdwLI0sX_tWkvK1XfJaEtjFHOL3Qmeg2eILh4pTZnEzXDyq6KPuvPw_4kzj9VtNe89YsWW-3sstPwu_I60A2&connectionData=%5B%5D&tid=4 was interrupted while the page was loading.
And then the following POST request, which succeeds:
http://localhost:55087/signalr/abort?transport=serverSentEvents&connectionToken=FuqvvQJaxyCfy0dj2DEoZQFkBmthBk9LxU6ZEGZO4ypbSy8pUpzrvCKWTjykxp9E9GtMdwLI0sX_tWkvK1XfJaEtjFHOL3Qmeg2eILh4pTZnEzXDyq6KPuvPw_4kzj9VtNe89YsWW-3sstPwu_I60A2
Here is a quick fix, but I would like to know whether this is a bug or normal behaviour of SignalR:
SignalR event becomes intermittent when deployed to a server
Update 2:
The fix from above worked, but long-poll XHR requests lasted for a few minutes, which seemed odd to me, and the client side was only updated after a few minutes when the request completed, which isn't acceptable, so I decided to upgrade SignalR to 2.0.0 beta2.
After that, the first time I ran the application SignalR couldn't connect (it timed out after a second or so).
The second time everything worked fine; long-poll XHRs were completing in 2 seconds tops.
The third time, back to square one: no ping, no connect, no long-poll requests. BUT when something is triggered from the server side it gets updated immediately, as if it were connected through WebSockets, and the only thing I can see in the debug window is:
[00:12:52 GMT+0200 (Central European Standard Time)] SignalR: Triggering client hub event 'notify' on hub 'NotificationsHub'.
Here are my changes to the hub which work with SignalR 2.0.0 beta2:
public class NotificationsHub : Hub
{
    public override async Task OnConnected()
    {
        var user = WebSecurity.GetUserId(Context.User.Identity.Name);
        Groups.Add(Context.ConnectionId, user.ToString());
        await base.OnConnected();
    }

    // OnDisconnected() was dropped thanks to N. Taylor Mullen
}
Client side:
$(document).ready(function () {
    $.connection.notificationsHub.client.notify = function (notificationHtml) {
        $('#notification-box').find('ul').prepend(notificationHtml);
        $('#notification-bubble').text(parseInt($('#notification-bubble').text()) + 1);
    };

    $.connection.hub.logging = true; // optional, if you want to see what's going on
    $.connection.hub.start();
});
Your connection interruption should be resolved by a reconnect event; however, that doesn't seem to be happening. You have some issues in your hub. Modify it to be:
public class NotificationsHub : Hub
{
    public override Task OnConnected()
    {
        // Ensure that the group is added before completing OnConnected
        return Groups.Add(Context.ConnectionId, CurrentUser.UserId.ToString());
    }

    // Never remove from groups in OnDisconnected; ConnectionIds are automatically removed from groups when they disconnect.
}
I have a .NET Windows Service which spawns a thread that basically just acts as an HttpListener. This works fine in synchronous mode, for example...
private void CreateLListener()
{
    HttpListener listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8080/"); // hypothetical prefix; listener setup was omitted in the original
    listener.Start();
    bool listen = true;

    while (listen)
    {
        HttpListenerContext context = null;
        try
        {
            context = listener.GetContext();
        }
        catch (HttpListenerException)
        {
            listen = false;
        }

        if (context == null)
        {
            continue;
        }

        // process request and make response
    }
}
The problem I now have is that I need this to work with multiple requests and have them responded to simultaneously, or at least in an overlapping way.
To explain further - the client is a media player app which starts by making a request for a media file with the request header property Range bytes=0-. As far as I can tell it does this to work out what the media container is.
After it has read a 'chunk' (or if it has read enough to ascertain media type) it then makes another request (from a different client socket number) with Range bytes=X-Y. In this case Y is the Content-Length returned in the first response and X is 250000 bytes less than that (discovered using IIS as a test). At this stage it is getting the last 'chunk' to see if it can get a media time-stamp to gauge length.
Having read that, it makes another request with Range bytes=0- (from another socket number) to start streaming the media file properly.
At any time though, if the user of the client performs a 'skip' operation it then sends another request (from yet another socket number) with Range bytes=Z- where Z is the position to jump to in the media file.
I'm not very good with HTTP stuff but as far as I can tell I need to use multiple threads to handle each request/response while allowing the original HttpListener to return to listening. I've done plenty of searching but can't find a model which seems to fit.
EDIT:
Acknowledgement and gratitude to Rick Strahl for the following example which I was able to adapt to suit my needs...
Add a Web Server to your .NET 2.0 app with a few lines of code
If you're here from the future and trying to handle multiple concurrent requests with a single thread using async/await:
public async Task Listen(string prefix, int maxConcurrentRequests, CancellationToken token)
{
    HttpListener listener = new HttpListener();
    listener.Prefixes.Add(prefix);
    listener.Start();

    // Seed the set with a fixed number of pending accepts.
    var requests = new HashSet<Task>();
    for (int i = 0; i < maxConcurrentRequests; i++)
        requests.Add(listener.GetContextAsync());

    while (!token.IsCancellationRequested)
    {
        Task t = await Task.WhenAny(requests);
        requests.Remove(t);

        if (t is Task<HttpListenerContext>)
        {
            // An accept completed: hand the context off and immediately queue another accept.
            var context = (t as Task<HttpListenerContext>).Result;
            requests.Add(ProcessRequestAsync(context));
            requests.Add(listener.GetContextAsync());
        }
        // Otherwise a ProcessRequestAsync task finished and has simply been removed from the set.
    }
}

public async Task ProcessRequestAsync(HttpListenerContext context)
{
    ...do stuff...
}
If you need a simpler alternative to BeginGetContext, you can merely queue jobs on the ThreadPool instead of executing them on the main thread, like so:
private void CreateLListener()
{
    //....
    while (true)
    {
        ThreadPool.QueueUserWorkItem(Process, listener.GetContext());
    }
}

void Process(object o)
{
    var context = o as HttpListenerContext;
    // process request and make response
}
You need to use the async methods to be able to process multiple requests, i.e. the BeginGetContext and EndGetContext methods.
Have a look here.
The synchronous model is appropriate if your application should block while waiting for a client request and if you want to process only one request at a time. Using the synchronous model, call the GetContext method, which waits for a client to send a request. The method returns an HttpListenerContext object to you for processing when one occurs.
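By contrast, a minimal sketch of the asynchronous BeginGetContext/EndGetContext pattern could look like the following; the prefix URL is a placeholder, and the request handling is left as a comment, as in the question:
private HttpListener listener;

private void StartListening(string prefix)
{
    listener = new HttpListener();
    listener.Prefixes.Add(prefix); // e.g. "http://localhost:8080/" (placeholder)
    listener.Start();

    // Begin the first asynchronous accept; the callback re-arms it.
    listener.BeginGetContext(OnGetContext, null);
}

private void OnGetContext(IAsyncResult ar)
{
    HttpListenerContext context;
    try
    {
        context = listener.EndGetContext(ar);
    }
    catch (HttpListenerException)
    {
        return; // listener was stopped
    }

    // Immediately start listening for the next request, so requests overlap.
    listener.BeginGetContext(OnGetContext, null);

    // process request and make response (this runs on a thread-pool thread)
}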
Given an async controller:
public class MyController : AsyncController
{
    [NoAsyncTimeout]
    public void MyActionAsync() { ... }

    public void MyActionCompleted() { ... }
}
Assume MyActionAsync kicks off a process that takes several minutes. If the user now goes to the MyAction action, the browser will wait with the connection open. If the user closes his browser, the connection is closed. Is it possible to detect when that happens on the server (preferably inside the controller)? If so, how? I've tried overriding OnException but that never fires in this scenario.
Note: I do appreciate the helpful answers below, but the key aspect of this question is that I'm using an AsyncController. This means that the HTTP requests are still open (they are long-lived like COMET or BOSH) which means it's a live socket connection. Why can't the server be notified when this live connection is terminated (i.e. "connection reset by peer", the TCP RST packet)?
I realise this question is old, but it turned up frequently in my search for the same answer.
The details below only apply to .NET 4.5.
HttpContext.Response.ClientDisconnectedToken is what you want. That will give you a CancellationToken you can pass to your async/await calls.
public async Task<ActionResult> Index()
{
    // The connected client 'manages' this token.
    // HttpContext.Response.ClientDisconnectedToken.IsCancellationRequested will be set to true if the client disconnects.
    try
    {
        using (var client = new System.Net.Http.HttpClient())
        {
            var url = "http://google.com";
            var html = await client.GetAsync(url, HttpContext.Response.ClientDisconnectedToken);
        }
    }
    catch (TaskCanceledException e)
    {
        // The client has gone.
        // You can handle this and the request will keep on being processed, but no one is there to see the response.
    }

    return View();
}
You can test the snippet above by putting a breakpoint at the start of the function then closing your browser window.
And another snippet, not directly related to your question but useful all the same...
You can also put a hard limit on the amount of time an action can execute for by using the AsyncTimeout attribute. To use it, add an additional parameter of type CancellationToken. This token will allow ASP.NET to time out the request if execution takes too long.
[AsyncTimeout(500)] // 500 ms
public async Task<ActionResult> Index(CancellationToken cancel)
{
    // ASP.NET manages the cancel token.
    // cancel.IsCancellationRequested will be set to true after 500 ms.
    try
    {
        using (var client = new System.Net.Http.HttpClient())
        {
            var url = "http://google.com";
            var html = await client.GetAsync(url, cancel);
        }
    }
    catch (TaskCanceledException e)
    {
        // ASP.NET has killed the request:
        // Yellow Screen Of Death with System.TimeoutException.
        // The return View() below won't render.
    }

    return View();
}
You can test this one by putting a breakpoint at the start of the function (thus making the request take more than 500ms when the breakpoint is hit) then letting it run out.
Doesn't Response.IsClientConnected work fairly well for this? I have just tried it out, in my case to cancel large file uploads. By that I mean that if a client aborts their (in my case Ajax) request, I can see that in my action. I'm not saying it is 100% accurate, but my small-scale testing shows that the client browser aborts the request and that the action gets the correct response from IsClientConnected.
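For completeness, a minimal sketch of how that check might look inside a long-running synchronous action; DoNextChunkOfWork is a hypothetical placeholder for one unit of the real work:
public ActionResult LongRunning()
{
    for (int i = 0; i < 100; i++)
    {
        // Bail out as soon as the browser has aborted the request
        // (tab closed, navigation away, Ajax abort, ...).
        if (!Response.IsClientConnected)
        {
            return new EmptyResult();
        }

        DoNextChunkOfWork(); // hypothetical: one unit of the real work
    }

    return View();
}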
It's just as #Darin says. HTTP is a stateless protocol which means that there are no way (by using HTTP) to detect if the client is still there or not. HTTP 1.0 closes the socket after each request, while HTTP/1.1 can keep it open for a while (a keep alive timeout can be set as a header). That a HTTP/1.1 client closes the socket (or the server for that matter) doesn't mean that the client has gone away, just that the socket hasn't been used for a while.
There are something called COMET servers which are used to let client/server continue to "chat" over HTTP. Search for comet here at SO or on the net, there are several implementations available.
For obvious reasons the server cannot be notified that the client has closed his browser. Or that he went to the toilet :-) What you could do is have the client continuously poll the server with AJAX requests at a regular interval (window.setInterval), and if the server detects that it is no longer being polled, it means the client is no longer there.
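A rough server-side sketch of that heartbeat idea, with entirely hypothetical names (PresenceController, Ping, IsAlive); the client would call Ping from a window.setInterval timer, and the server treats a client as gone once no ping has arrived within the chosen timeout:
using System;
using System.Collections.Concurrent;
using System.Web.Mvc;

public class PresenceController : Controller
{
    // Last time each client pinged, keyed by a client-generated id.
    private static readonly ConcurrentDictionary<string, DateTime> LastSeen =
        new ConcurrentDictionary<string, DateTime>();

    [HttpPost]
    public ActionResult Ping(string clientId)
    {
        LastSeen[clientId] = DateTime.UtcNow;
        return new EmptyResult();
    }

    // Anything else on the server can ask whether a client is still "there".
    public static bool IsAlive(string clientId, TimeSpan timeout)
    {
        DateTime last;
        return LastSeen.TryGetValue(clientId, out last)
            && DateTime.UtcNow - last < timeout;
    }
}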