I have a website which runs SignalR to inform the front end of certain back end events so it can update. The back end comprises the website back end as well as other services running as console apps, Azure workers, etc., which may or may not be on the same machine as the website.
I need some way to grab those back end events and send them down to the clients over SignalR. I would immediately use a service bus, but I'm concerned about managing the subscriptions in the website. I'm interested in eventually using SignalR scale out (with service bus as the backplane), but it seems I won't be able to publish to the backplane topics, as the names of the topics are an internal implementation detail.
The question is then: should I use a service bus subscription in the website, and how should I manage the subscription (to ensure that it conforms to the ASP.NET lifecycle)? Or should I try to use the service bus backplane, and how do I publish to it?
Related
I have a microservice (Web API) that publishes messages to a topic (Topic A).
Now I have another Microservice (Web API) that should subscribe to this topic and act upon the messages.
My question is simply: how should I do this, since the microservice that should subscribe to the topic is a Web API? In my Web API, I want to somehow know instantly when a new message is available in the topic. Should I poll the service bus via an endpoint?
I'm uncertain about the best practices for this.
All the examples I have seen use console applications to subscribe, but that's not my case since I have a Web API.
There are different ways of doing this.
1. Using Azure Functions
This way you create two applications: your standard web API, and separately an Azure Function that will handle the messages from the queue. There are multiple benefits to this approach; one of them is that you isolate the code handling the queue, so if you have many messages, it will not affect the performance of your API.
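For illustration, here is a minimal sketch of such a function using a Service Bus topic trigger, assuming the Microsoft.Azure.WebJobs.Extensions.ServiceBus binding; the topic, subscription and connection-setting names are placeholders:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TopicAProcessor
{
    [FunctionName("TopicAProcessor")]
    public static void Run(
        [ServiceBusTrigger("topic-a", "my-subscription", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // Handle the message here, isolated from the web API process.
        log.LogInformation($"Received message from Topic A: {message}");
    }
}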
2. Using a Singleton service inside your web application
The idea here is that your API application handles queue messages in the background. This has the advantage that you have only one application doing everything, which is simpler to maintain, for example. It has the disadvantage that a very big inflow of messages will slow down your APIs.
(Note: in the link above, look for "Consuming messages from the Queue".)
Whether it is a Web API or a console app, it is the responsibility of the consumer to communicate and collect records. Being a Web API doesn't mean that it should only have public endpoints. A typical Web API might have public endpoints (for the external world), private endpoints (for internal communications), or a combination of both. The responsibility of private endpoints could be reading data from the service data store, consuming external services via adapter services, etc. In your case, upon initialization of your Web API, you might want to create a consumer object and start reading and processing data as you want. Hope this helps.
You can poll in a Web Job or background task. But the built-in way to do this is with an Azure Function triggered from the Topic, or with Azure Event Grid.
For listening in the background you can use IHostedService. Inside the StartAsync method you can register a message handler:
queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions)
And on StopAsync you can stop processing messages and close the client.
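Putting those pieces together, here is a minimal sketch of such a hosted service, assuming the Microsoft.Azure.ServiceBus package; the connection string, topic and subscription names are placeholders:

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Extensions.Hosting;

public class TopicListenerService : IHostedService
{
    private SubscriptionClient _client;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _client = new SubscriptionClient(
            "<service-bus-connection-string>", "topic-a", "my-subscription");

        var options = new MessageHandlerOptions(OnExceptionAsync)
        {
            MaxConcurrentCalls = 1,
            AutoComplete = false
        };

        // Start pumping messages; the callback fires for each new message.
        _client.RegisterMessageHandler(ProcessMessagesAsync, options);
        return Task.CompletedTask;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop receiving and release the connection when the host shuts down.
        if (_client != null)
            await _client.CloseAsync();
    }

    private async Task ProcessMessagesAsync(Message message, CancellationToken token)
    {
        Console.WriteLine($"Received: {Encoding.UTF8.GetString(message.Body)}");
        await _client.CompleteAsync(message.SystemProperties.LockToken);
    }

    private Task OnExceptionAsync(ExceptionReceivedEventArgs args)
    {
        Console.WriteLine($"Message handler error: {args.Exception}");
        return Task.CompletedTask;
    }
}

// Registered in Startup.ConfigureServices:
// services.AddHostedService<TopicListenerService>();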
We have a microservice-based application/website currently hosted in Azure, and we need a function where we press a button and it sends some data to another web service currently hosted inside our corporate network.
Our IT bods are against being able to POST to a service hosted inside our network, and I am wondering how people normally deal with this problem.
I can think of 2 possible solutions, neither of which I like particularly:
Set up a VPN to the internal network, which feels a bit of a heavy solution to me
The internal network service polls the cloud application for changes of state continuously, and triggers an update process when a change is recorded. This will generate a lot more traffic than I would ideally want.
How do other people address this issue? Essentially I just want to send some data from the cloud into our network in a secure fashion. Pulls from our network are OK, but pushes into it are not.
Even sending a signal to get the internal network to initiate a pull would also work fine.
Both the solutions you came up with are fairly common patterns in Azure architecture. Of the two, the second would be the one I would generally choose for this particular scenario, but it does depend on how fast you need the push to happen. VPN is going to be the fastest as you have a direct connection between your Azure service and your internal one, but it is a bit more complex to set up for a single pipeline.
The second is generally accomplished through a messaging service like Service Bus, as it adds a lot of resiliency to that sort of arrangement. You can configure your on-prem service to ping Service Bus on the interval you define: more often if you need the updates to happen quickly, less often if you want to reduce traffic. Depending on the size of the data, you can load it directly into Service Bus for pickup, or the message can contain the location of the required data. Event Grid is another option for a messaging service. It sends notifications out instead of waiting for you to poll, so it would be a good choice if you wanted to ping your on-prem service to reach out and pick up the changes.
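As a rough sketch of that second approach, the on-prem service could poll a Service Bus queue like this (assuming the Microsoft.Azure.ServiceBus package; the queue name and interval are illustrative only):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

public static class OnPremPoller
{
    public static async Task PollAsync()
    {
        var receiver = new MessageReceiver(
            "<service-bus-connection-string>", "state-changes", ReceiveMode.PeekLock);

        while (true)
        {
            // Pull the next notification from the cloud, if any.
            var message = await receiver.ReceiveAsync(TimeSpan.FromSeconds(5));
            if (message != null)
            {
                Console.WriteLine($"Change detected: {Encoding.UTF8.GetString(message.Body)}");
                // ... reach out and pull the full data from the cloud app here ...
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(30)); // polling interval
        }
    }
}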
If you are open to using Logic Apps to do the push, they access on-prem resources via a data gateway that you install inside your network. Logic Apps use Service Bus in the background to accomplish this, so you would effectively be using your second solution, but it would be a bit simpler from a development perspective.
I have two applications for my client: one is a web application where users schedule some actions, and the other is a Windows service which runs every 3 minutes, executes the scheduled tasks and emails the client. Both work independently of each other.
Previously both applications were hosted in an Azure VM.
Recently I converted my web application into an Azure Web Role to achieve scalability.
Now I am working on implementing a Worker Role for the Windows service, but I have some confusion: the client's web project needs scalability, so I converted the web project into an Azure Web Role.
But in what case is running the Windows service as an Azure Worker Role better than running the service inside a VM?
Or do I continue running the service in an Azure VM?
But in what case is running the Windows service as an Azure Worker Role better than running the service inside a VM?
There are several benefits to moving a Windows Service to an Azure Worker Role - IMHO, the top three are:
Azure Worker Roles are PaaS (Platform as a Service) - you are provided with a 'platform' on which your code runs (a Windows Server VM, but that is abstracted away from you) - there is no need to manage the underlying infrastructure: OS, networking, disks, etc. This means that you can focus on your code and how it works / scales / performs etc. without having to worry whether the VM is up and running. Furthermore, the underlying Azure Fabric will manage failures for you, starting a new instance of a worker if the underlying hardware fails.
Azure Worker Roles give you the benefit of scale - Because they run on the Azure PaaS platform, your code 'package' can be scaled to multiple instances through the Azure Portal with a few mouse-clicks. Scaling can be automatic (triggered by the underlying Azure Fabric) on a queue length (if you are receiving messages from a queue) or based on average CPU usage; alternatively, you can scale manually or on a set schedule (e.g. 'we do lots of processing overnight so increase the number of workers to 4 from 1am - 6am and then back to 2 for the rest of the day'). See https://azure.microsoft.com/en-gb/documentation/articles/cloud-services-how-to-scale/ for more information on scaling Worker Roles (aka. 'Cloud Services')
Similar API to Windows Services - the API for an Azure Worker Role is almost exactly the same as that exposed within a Windows Service - you have OnStart(), OnStop() and Run() methods^, allowing you to easily port an existing Windows Service to a Worker Role with minimum fuss.
^ Ok, these might not be quite right as it's a couple of months or so since I last worked with Worker Roles and I can't remember the interface exactly, but you get the idea ;-)
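For reference, a minimal Worker Role entry point looks roughly like the sketch below, assuming the Microsoft.WindowsAzure.ServiceRuntime package; the lifecycle methods closely mirror a Windows Service:

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private readonly CancellationTokenSource _cancellation = new CancellationTokenSource();

    public override bool OnStart()
    {
        // One-time initialisation, comparable to ServiceBase.OnStart.
        return base.OnStart();
    }

    public override void Run()
    {
        // Main processing loop; runs until the role is asked to stop.
        while (!_cancellation.IsCancellationRequested)
        {
            // ... poll a queue, send e-mails, etc. ...
            Thread.Sleep(1000);
        }
    }

    public override void OnStop()
    {
        _cancellation.Cancel();
        base.OnStop();
    }
}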
Or do I continue running the service in an Azure VM?
Let me answer that question in the context of your problem (my emphasis):
I have two applications for my client: one is a web application where users schedule some actions, and the other is a Windows service which runs every 3 minutes, executes the scheduled tasks and emails the client.
I think (IMHO) that you need to think about developing for the cloud, rather than the traditional model of development. What I read from this is that you have a web-application that writes something to a persistent store (probably a database table); you then have a second service (that you are looking to migrate to an Azure Worker Role) that polls the persistent store at a specific interval, detects whether there are any new clients to e-mail and sends out the e-mail.
If we were to re-architect this for the cloud, I would keep Worker Roles in the mix, but do the following:
Web-app publishes a message to a queue to indicate that a client needs to be e-mailed - this message could contain their first and last names, e-mail address and possibly some data that would go into the e-mail message body (if required).
The Worker Role would poll this queue for messages. For each message received, the Worker Role would send an e-mail based on the content of the message via your preferred e-mail provider (hopefully they have a nice .Net API - no raw SMTP please!). Once the e-mail was successfully sent, the Worker Role would delete the message off the queue.
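A minimal sketch of that flow using Azure Storage Queues (classic Microsoft.WindowsAzure.Storage SDK; the queue name, message contents and SendEmail helper are placeholders):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class EmailQueueFlow
{
    // Web app side: publish a "client needs an e-mail" message.
    public static void Publish(string connectionString, string emailAddress)
    {
        var queue = CloudStorageAccount.Parse(connectionString)
            .CreateCloudQueueClient()
            .GetQueueReference("email-requests");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(emailAddress));
    }

    // Worker Role side: poll the queue, send the e-mail, then delete the message.
    public static void ProcessNext(string connectionString)
    {
        var queue = CloudStorageAccount.Parse(connectionString)
            .CreateCloudQueueClient()
            .GetQueueReference("email-requests");

        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            SendEmail(message.AsString);   // your preferred e-mail provider
            queue.DeleteMessage(message);  // only after a successful send
        }
    }

    private static void SendEmail(string address) { /* ... */ }
}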
This approach would be both scalable and repeatable - a true cloud architecture!
FYI, if you're interested in using the queue approach, either Azure Storage Queues or Azure Service Bus Queues could work here. It sounds like you have simple queuing requirements and as such, Storage Queues would be a perfect fit. Take a look at their comparison here: https://azure.microsoft.com/en-gb/documentation/articles/service-bus-azure-and-service-bus-queues-compared-contrasted/.
Hope this helps!
In our application there is an event emitter (Windows service A) that emits events onto a queue. There is a notification service (B) (a Windows service hosted in the cloud) that reads these events from the queue and, based on configured rules, sends notifications to logged-in users of the web application (C).
We are planning to use SignalR between the web browser and the web application (C), but we are confused about setting up communication between the event emitter (A) and the notification service (B). Initially we were thinking of using SignalR there as well, but there is a catch: if load increases, the notification service and web app can scale out (having more than one instance).
Let's say there are 3 notification service instances and 4 web server instances. Now, how would SignalR work between them? Each web server would have to open a SignalR channel with every notification instance, but this would lead to an issue:
There would be tight coupling between the notification service and the web application. Whenever a new web instance is added, it has to form a connection to every available notification instance, and when it goes down it has to destroy those connections. The same holds true for notification instances.
So we are thinking of using some pub/sub mechanism rather than SignalR. Right now we are considering Redis pub/sub: the notification services will push messages to Redis, and all the web servers will subscribe to it and eventually get the messages.
Please share your thoughts if we can design it better.
Redis pub/sub isn't a reliable messaging system; it's fire and forget. If a message is sent to a channel and there's no listener, the message is lost.
If you're already in Azure, your solution is Service Bus; take a look at topics and brokered messages.
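For example, the publish side with the Microsoft.Azure.ServiceBus package might look like the sketch below (the topic name and payload are illustrative); unlike Redis pub/sub, the broker holds the message until each subscription consumes it:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class NotificationPublisher
{
    public static async Task PublishAsync(string connectionString, string payload)
    {
        // Send a brokered message to the "notifications" topic; each subscription gets a copy.
        var topicClient = new TopicClient(connectionString, "notifications");
        await topicClient.SendAsync(new Message(Encoding.UTF8.GetBytes(payload)));
        await topicClient.CloseAsync();
    }
}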
I need to build a system that is similar to a pub/sub system. It is composed of multiple sub-systems or services running in separate executables or as Windows Services.
The sub-systems are:
The pub/sub service
A pub/sub service managing communications between the internal sub-systems and the users.
A user can have multiple channels open (A web page connected to a SignalR service, a mobile device connected to a duplex WCF service, etc.).
The service should manage all the channels of a user and be able to send information to them on demand, based on topics or specific users.
The service must support multiple transports like SignalR, WCF, or others ...
Worker services
A worker that runs as a Windows Service and sends information to the users using the pub/sub service.
The SignalR and WCF host
The SignalR service and WCF service will be hosted on IIS
My questions are
As the sub-systems run in separate processes, how do I communicate between the pub/sub service and the other sub-systems (the workers and IIS)? The communication must be really fast. Do I use named pipes, and are they fast enough?
An example: the worker tells the pub/sub system to send a message to a user; the pub/sub system checks the channels opened for the user (let's say a SignalR channel), then in turn it must notify the SignalR service running in IIS to send the message to the user's browser.
Do you know of implementations of similar systems?
Observations
I cannot use third-party service bus services (Azure, etc.), and even with those I can't see a solution to the problems above.
The service must be very scalable and high-demand proof.
If the question is how to bridge SignalR with other transports, there are several solutions.
On a single server you could just connect them up with the Reactive framework's own pub/sub mechanism, which is neatly encapsulated in the Subject class.
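A minimal sketch of that single-server approach with Rx's Subject<T> (System.Reactive package; the message type and class name are illustrative):

using System;
using System.Reactive.Subjects;

public class InProcessBus
{
    private readonly Subject<string> _messages = new Subject<string>();

    // Transports (SignalR hub, WCF service, ...) subscribe here.
    public IDisposable Subscribe(Action<string> handler) => _messages.Subscribe(handler);

    // Workers publish here.
    public void Publish(string message) => _messages.OnNext(message);
}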
If you need to scale out to multiple servers you'll want to use an existing service bus or perhaps roll your own simple one using SQL server and a SqlDependency.
You could also use SignalR as a client on one server communicating with the other servers to copy messages between them.
I recommend you look into some of the existing Service Bus technologies for .NET.
There is an article which clearly explains a possible mechanism for incorporating a pub/sub design pattern in your .NET application. The answer lies in using a .NET in-memory distributed cache and using its clustering capabilities as a publish/subscribe medium. Since it's clustered, you won't have to worry about downtime either.
Basically you'll be using Application Initiated Custom Events.
Register your events
public void OnApplicationEvent(object notifId, object data)
{
...
}
_cache.CustomEvent += new CustomEventCallback(this.OnApplicationEvent);
And fire those events whenever you need to
_cache.RaiseCustomEvent("NotificationID", DateTime.Now);
Pub/Sub design pattern in a .NET distributed cache
Full disclosure: I work for Alachisoft