When I navigate away from a page in Silverlight, I call a JavaScript function which contains the following:
$.connection.hub.stop();
In my Hub class I override OnDisconnected().
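Roughly like this; a minimal sketch of the SignalR 1.x override, with the hub name and the logging line added purely for illustration:

public class MyHub : Hub
{
    public override Task OnDisconnected()
    {
        // Expected to run as soon as the client calls $.connection.hub.stop().
        Debug.WriteLine("Disconnected: " + Context.ConnectionId);
        return base.OnDisconnected();
    }
}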
Usually the OnDisconnected event fires, but sometimes it doesn't, and it takes a seemingly arbitrary amount of time before the connection times out and OnDisconnected is finally fired.
After I navigate away from the page that creates the connection, Fiddler shows that the original signalr/connect?... request has no response code (the result column shows '-' and the Body column shows '-1'). Shouldn't this return 200?
If I navigate back to the page, a new connection is established.
Sometimes a signalr/abort request is issued with the same connectionToken, but the original connect request remains open.
Why isn't the OnDisconnected event firing immediately every time in the hub, since the connection is being stopped explicitly in the client?
Using SignalR 1.2.1 on .NET 4.0.
Having read the documentation, I don't believe this applies to my situation:
The OnDisconnected method doesn't get called in some scenarios, such as when a server goes down or the App Domain gets recycled. When another server comes on line or the App Domain completes its recycle, some clients may be able to reconnect and fire the OnReconnected event.
Related
I have a scaled-out application where each instance connects to an Azure Service Bus subscription with the same name. The end result is that only a single instance gets to act on any given message, because they are all listening to the same subscription.
Occasionally the application needs to place an instance into an idle state (a Service Fabric ActiveSecondary replica). When this occurs, I need to close the subscription so that this instance no longer receives messages. If there were two instances originally, once one is placed into the idle state, all messages should go to the remaining instance. This is important so that all messages are handled by a properly configured primary instance.
When the instance becomes idle, a cancellation token is cancelled. I have code listening for the cancellation and calling Close() on the SubscriptionClient generated when I created the subscription originally.
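The wiring looks roughly like this; a minimal sketch assuming the SubscriptionClient from the question (the field and method names here are hypothetical):

private SubscriptionClient _subscriptionClient;

public void WatchForIdle(CancellationToken cancellationToken)
{
    cancellationToken.Register(() =>
    {
        // Stop the message pump so this instance no longer competes
        // for messages on the shared subscription.
        _subscriptionClient.Close();
    });
}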
The issue is that even after I call Close() on one instance, messages are still being randomly split between it and the primary.
Is the way I'm doing this inherently wrong, or is something else in my code causing this behavior?
The Azure Service Bus track 0 and track 1 SDKs do not support CancellationTokens. If you're closing your client, any messages that were received but not processed will be picked up by another competing instance once they become visible again. That's where MaxLockDuration and MaxDeliveryCount are important: they ensure messages get enough processing attempts to account for the situation you're describing without waiting too long.
Disregard this post. It turns out I had the same subscription name used twice within a single instance, so the two subscriptions were competing for the events. The Close() function works as expected.
I have a Windows Service that runs some processes, and it must report their progress in the browser. I am not sure whether this is a good approach, but here is what I did:
The Windows Service publishes JSON to a Redis channel called 'web' -> an action in the ASP.NET MVC application subscribes to the 'web' channel and sends the JSON to the browser via a SignalR hub -> the browser takes it and shows the progress.
I have the following helper code to subscribe to a channel. It is called from my controller/action:
// Subscribes to a Redis channel and forwards each message to the callback.
public void Listen(string channel, Action<string, object> action)
{
    var sub = Client.GetSubscriber();
    sub.Subscribe(channel, (c, v) =>
    {
        // c is the channel name, v is the published value.
        action(c.ToString(), v.ToString());
    });
}
The problem: it works as expected and the browser gets notified. The problem is when the user (in the browser) hits F5 or executes the action again. That creates a new subscription and I get duplicated messages. If the user executes it again, I start getting three messages for each publish, and so on. I want to avoid this.
What I have tried: I tried to use IsConnected(channel), but it always returns true. I have tried calling Unsubscribe(channel) before Subscribe(channel) again, and it works, but I am not sure whether I will lose some messages (I am afraid of that). I do not know how to avoid the duplicate subscriptions. Can anyone help me?
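For reference, this is the kind of guard I am considering to avoid double-subscribing; a minimal sketch assuming StackExchange.Redis and that Client is the shared ConnectionMultiplexer (the bookkeeping fields are hypothetical):

private static readonly HashSet<string> ActiveChannels = new HashSet<string>();
private static readonly object Gate = new object();

public void ListenOnce(string channel, Action<string, string> action)
{
    lock (Gate)
    {
        // Skip re-subscribing if this process already listens on the channel.
        if (!ActiveChannels.Add(channel))
            return;
    }
    var sub = Client.GetSubscriber();
    sub.Subscribe(channel, (c, v) => action(c.ToString(), v.ToString()));
}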
Thank you.
Are you using the ConnectionMultiplexer? See Using redis pub/sub.
... in the event of connection failure, the ConnectionMultiplexer will handle all the details of re-subscribing to the requested channels
Consider switching from Pub/Sub to Redis Streams. See What are the main differences between Redis Pub/Sub and Redis Stream?
You can name groups and clients with Consumer Groups, so you can key consumption by session, or anything else, or even use something like fingerprint.js to identify each browser anonymously.
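A rough sketch of what that could look like with StackExchange.Redis 2.x; the stream, group, and consumer names are made up for illustration:

var db = Client.GetDatabase();
const string stream = "progress-stream";
const string group = "web-consumers";

// Create the consumer group once; Redis errors if it already exists.
try { db.StreamCreateConsumerGroup(stream, group, StreamPosition.NewMessages); }
catch (RedisServerException) { /* group already exists */ }

// Each session reads as its own named consumer, so a refresh doesn't
// add a second competing subscription.
StreamEntry[] entries = db.StreamReadGroup(stream, group, "session-42", count: 10);
foreach (var entry in entries)
{
    // ... push entry.Values to the browser via the SignalR hub ...
    db.StreamAcknowledge(stream, group, entry.Id);
}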
I am trying to call client methods from server-side code via:
var client = GlobalHost.ConnectionManager.GetHubContext<HUB>().Clients.Client(ConnectionID);
where ConnectionID comes straight from the client's $.connection.hub.id. However, after roughly 30 seconds, executing client commands with this object fails silently, as if it's talking to a client that doesn't exist.
Additionally, I have tried
var client = GlobalHost.ConnectionManager.GetHubContext<HUB>().Clients.User(User.Identity.Name);
but similarly, this also starts returning null after about a minute.
Is there some kind of cache that is being cleared after a time?
I am listening for OnConnected and OnDisconnected on both the server and the client, and neither fires.
EDIT: I realized that the Clients.Client() method is not returning null, but just an invalid object.
EDIT 2: I am looking into the solution from this post, because it involves load-balanced servers (my situation).
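For anyone else in this situation: with load-balanced servers, SignalR needs a backplane so that a hub context obtained on one server can reach connections held by another. A minimal OWIN startup sketch, assuming SignalR 2.x with the Microsoft.AspNet.SignalR.Redis package (the host, port, and event key are placeholders):

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Route hub messages through Redis so every server sees them.
        GlobalHost.DependencyResolver.UseRedis("redis-host", 6379, "", "myapp-signalr");
        app.MapSignalR();
    }
}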
I am using wsHttpBinding with InstanceContextMode.PerCall. On the server side I need to detect whether the client has properly received the response; if not, I need to clean some things up on the server side. For this I tried to use OperationContext.Current.InstanceContext.Closed and OperationContext.Current.InstanceContext.Faulted. The Closed event always fires, but the Faulted event never fires no matter what I do (time out the connection, close the client while it is still receiving data, unplug the cable). Also, OperationContext.Current.InstanceContext.State is always Closed.
How can I detect that the client did not receive the response?
As an alternative solution, I was thinking of catching the error on the client side and calling a cleanup method on the server, but this may complicate things because the internet connection may be down for a while.
Channel events on the service side are fired when the client calls Close() successfully; you will not get channel events fired on the server side because of client faults or timeouts until the moment the server replies, which is very stupid.
To solve your problem you have three solutions:
1- If the client is yours, use duplex communication to call a dummy method on the client; then the channel will fire its events.
2- Use reliable sessions (a session-capable binding with reliable sessions turned on) so you can set InactivityTimeout, which fires when infrastructure messages like keep-alives stop arriving. I believe the InactivityTimeout default is 10 minutes, while the Windows keep-alive frequency is 1 second, so you could set InactivityTimeout to 5 seconds, for example.
3- Use IIS hosting with one of the HTTP bindings; inside your operation, create a loop that keeps checking ClientDisconnectedToken.IsCancellationRequested on HttpContext.Current.Response (in the System.Web namespace), as sketched below.
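A rough sketch of option 3, assuming .NET 4.5+ and IIS hosting with ASP.NET compatibility enabled so that HttpContext.Current is available (the service and method names are made up):

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class ProgressService : IProgressService
{
    public void LongRunningOperation()
    {
        var disconnected = HttpContext.Current.Response.ClientDisconnectedToken;
        for (int step = 0; step < 100; step++)
        {
            if (disconnected.IsCancellationRequested)
            {
                CleanUpServerState(); // the client is gone; release per-call state
                return;
            }
            DoSliceOfWork(step); // hypothetical unit of work
        }
    }

    private void CleanUpServerState() { /* ... */ }
    private void DoSliceOfWork(int step) { /* ... */ }
}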
We have a pub/sub application that involves an external client subscribing to a Web Role publisher via an Azure Service Bus Topic. Our current billing cycle indicates we've sent/received >25K messages, while our dashboard indicates we've sent <100. We're investigating our implementation and checking our assumptions in order to understand the disparity.
As part of our investigation we've gathered wireshark captures of client<=>service bus traffic on the client machine. We've noticed a regular pattern of communication that we haven't seen documented and would like to better understand. The following exchange occurs once every 50s when there is otherwise no activity on the bus:
The client pushes ~200B to the service bus.
10s later, the service bus pushes ~800B to the client. The client registers the receipt of an empty message (determined via a breakpoint).
The client immediately responds by pushing ~1000B to the service bus.
Some relevant information:
This occurs when our web role is not actively pushing data to the service bus.
Upon receiving a legit message from the Web Role, the pattern described above will not occur again until a full 50s has passed.
Both client and server connect to sb://namespace.servicebus.windows.net via TCP.
Our application messages are <64 KB
Questions
What is responsible for the regular, 3-packet message exchange we're seeing? Is it some sort of keep-alive?
Do each of the 3 packets count as a separately billable message?
Is this behavior configurable or otherwise documented?
EDIT:
This is the code that receives the messages:
private void Listen()
{
    // Ask for the next message and handle it on a continuation.
    _subscriptionClient.ReceiveAsync().ContinueWith(MessageReceived);
}

private void MessageReceived(Task<BrokeredMessage> task)
{
    if (task.Status != TaskStatus.Faulted && task.Result != null)
    {
        task.Result.CompleteAsync();
        // Do some things...
    }

    // Immediately start listening for the next message.
    Listen();
}
I think what you are seeing is the Receive call in the background. Behind the scenes, the Receive calls all use long polling: they call out to the Service Bus endpoint and ask for a message. The Service Bus service gets that request and, if it has a message, returns it immediately. If it doesn't have a message, it holds the connection open for a time period in case a message arrives. If a message arrives within that time frame, it is returned to the client; if not, a response is sent to the client indicating that no message was there (aka your null BrokeredMessage). If you call Receive with no overloads (like you've done here), it will immediately make another request. This loop continues until a message is received.
Thus, what you are seeing is the number of times the client requests a message but there isn't one there. The long polling makes this nicer than Windows Azure Storage Queues, which just immediately return a null result if there is no message. For both technologies it is common to implement an exponential back-off for requests, and there are lots of examples out there of how to do this. It cuts back on how often you need to check the queue and can reduce your transaction count.
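For illustration, an exponential back-off around the same receive loop; a sketch using async/await for brevity, with arbitrary delay values:

private async Task ListenWithBackOffAsync()
{
    TimeSpan delay = TimeSpan.FromSeconds(1);
    TimeSpan maxDelay = TimeSpan.FromMinutes(5);
    while (true)
    {
        BrokeredMessage message = await _subscriptionClient.ReceiveAsync();
        if (message != null)
        {
            await message.CompleteAsync();
            // Do some things...
            delay = TimeSpan.FromSeconds(1); // reset after real traffic
        }
        else
        {
            await Task.Delay(delay); // idle: wait before polling again
            delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
        }
    }
}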
To answer your questions:
Yes, this is normal expected behaviour.
No, this is only one transaction. For Service Bus you get charged a transaction each time you put a message on a queue and each time a message is requested (which can be a little opaque given that Receive makes multiple calls in the background). Note that the docs point out that you get charged for each idle transaction (meaning a null result from a Receive call).
Again, you can implement a back-off methodology so that you aren't hitting the queue so often. Another suggestion I've recently heard: if you have a queue that isn't seeing a lot of traffic, you could check the queue depth to see if it is > 0 before entering the processing loop, and if you get no messages back from a receive call, go back to watching the queue depth. I've not tried that, and I'd think you could get throttled if you checked the queue depth too often.
If these are your production numbers, then your subscription isn't really processing a lot of messages. It would likely be a good idea to back off to whatever wait time is acceptable before a message is processed. For example, if it is okay for a message to sit for up to 10 minutes, create a back-off approach that eventually checks for a message only every 10 minutes; when it gets one, process it and immediately check again.
Oh, and there is a Receive overload that takes a timeout, but I'm not 100% sure whether that is a server-side timeout or a local one. If it is local, it could still be making calls to the service every X seconds. I think this is based on the OperationTimeout value set on the MessagingFactory settings when creating the SubscriptionClient. You'd have to test that.
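For reference, hypothetical usage of that overload:

// Wait up to 30 seconds for a message before getting a null result back.
BrokeredMessage message = _subscriptionClient.Receive(TimeSpan.FromSeconds(30));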