We have a microservice-based application/website currently hosted in Azure, and we need a function where we press a button and it sends some data to another web service currently hosted inside our corporate network.
Our IT bods are against allowing POSTs to a service hosted inside our network, and I am wondering how people normally deal with this problem.
I can think of 2 possible solutions, neither of which I like particularly:
Set up a VPN to the internal network, which feels like a rather heavyweight solution to me
The internal network service continuously polls the cloud application for changes of state and triggers an update process when a change is recorded. This will generate a lot more traffic than I would ideally want.
How do other people address this issue? Essentially I just want to send some data from the cloud into our network in a secure fashion. Pulls from our network are OK, but pushes into it are not.
Even sending a signal to get the internal network to initiate a pull would also work fine.
Both the solutions you came up with are fairly common patterns in Azure architecture. Of the two, the second is the one I would generally choose for this particular scenario, but it does depend on how fast you need the push to happen. A VPN will be the fastest, as you have a direct connection between your Azure service and your internal one, but it is a bit more complex to set up for a single pipeline.
The second is generally accomplished through a messaging service like Service Bus, as it adds a lot of resiliency to that sort of arrangement. You can configure your on-prem service to poll Service Bus at whatever interval you define: more often if you need the updates to happen quickly, less often if you want to reduce traffic. Depending on the size of the data, you can load it directly into Service Bus for pickup, or the message can simply contain the location of the required data. Event Grid is another option for a messaging service; it pushes notifications out instead of waiting for you to poll, so it would be a good choice if you want to ping your on-prem service to reach out and pick up the changes.
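For the Service Bus variant, here is a minimal sketch of what the on-prem polling side might look like, assuming the current Azure.Messaging.ServiceBus SDK; the connection string, queue name, and wait time are placeholders, not part of your setup:

```csharp
// Sketch of an on-prem worker polling a Service Bus queue for change
// notifications pushed by the cloud application. Connection string and
// queue name are hypothetical placeholders.
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class ChangePoller
{
    static async Task Main()
    {
        const string connectionString = "<service-bus-connection-string>"; // assumption
        const string queueName = "state-changes";                          // assumption

        await using var client = new ServiceBusClient(connectionString);
        ServiceBusReceiver receiver = client.CreateReceiver(queueName);

        while (true)
        {
            // Wait up to 30 seconds for a message; tune this to balance
            // latency against the amount of traffic you generate.
            ServiceBusReceivedMessage message =
                await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(30));

            if (message != null)
            {
                // The body can be the data itself or just a pointer to where it lives.
                Console.WriteLine($"Change received: {message.Body}");
                await receiver.CompleteMessageAsync(message);
            }
        }
    }
}
```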
If you are open to using Logic Apps to do the push, they access on-prem resources via a data gateway that you install inside your network. This uses Service Bus in the background, so you would effectively still be using your second solution, but it would be a bit simpler from a development perspective.
I've tried a few different ways to do this, but I keep coming up short.
In short, here's what I need to do:
Create a WCF service that acts as a router between diagnostic tools run on client desktop PCs and "widgets" (which also run desktop Windows and have internet connectivity). Since these "widgets" are typically behind some sort of firewall, we've decided to use an IIS-hosted WCF service over a TCP connection (port 800, I believe) for callbacks.
Notifications of what the widget is doing need to be sent asynchronously up through the router to any connected clients.
Clients need to be able to synchronously call into the widgets to get diagnostic data or to command them to perform a task.
Right now I have a Windows service running on the widget that monitors its status and provides a link for the internal programs to get data.
I also have a lightweight diagnostic application running on desktops.
I have created a single callback interface for both status-push and data-pulls that both the widget monitoring program and desktop program implement.
My first attempt was to have the router service keep a list of registered devices and clients and pass messages between them.
I.e.: the desktop calls server.getwidgetcolor("widgetid"); and the service calls _widgetlist["widgetId"].getcolor() and returns the result.
Similarly, the widget monitoring program calls server.notifywidgetcolorchange("widgetid") and the service calls _widgetlist["widgetid"].clients.Notifiycolorchange() on every registered client.
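A rough sketch of that first attempt (names simplified and purely illustrative, not my exact code):

```csharp
// Router keeps callback channels for registered widgets and forwards calls
// between clients and widgets. Client registration/fan-out is omitted here.
using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IWidgetCallback))]
public interface IRouterService
{
    [OperationContract]
    void RegisterWidget(string widgetId);

    [OperationContract]
    string GetWidgetColor(string widgetId);        // client -> widget (synchronous pull)

    [OperationContract(IsOneWay = true)]
    void NotifyWidgetColorChange(string widgetId); // widget -> clients (asynchronous push)
}

public interface IWidgetCallback
{
    [OperationContract]
    string GetColor();

    [OperationContract(IsOneWay = true)]
    void NotifyColorChange();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class RouterService : IRouterService
{
    private readonly ConcurrentDictionary<string, IWidgetCallback> _widgets =
        new ConcurrentDictionary<string, IWidgetCallback>();

    public void RegisterWidget(string widgetId) =>
        _widgets[widgetId] = OperationContext.Current.GetCallbackChannel<IWidgetCallback>();

    public string GetWidgetColor(string widgetId) => _widgets[widgetId].GetColor();

    public void NotifyWidgetColorChange(string widgetId)
    {
        // In the real service this fans out to all registered clients; omitted for brevity.
    }
}
```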
The problem I am running into is that if a widget is calling up to the server at the same time a client is calling down to that widget, both calls time out.
I initially had the server set up as a singleton, and have played with changing the concurrency mode to Multiple or Reentrant, but those didn't seem to work.
Conceptually, I'd like the service to be per-call and to somehow persist the device and client callbacks, so that when a call comes in, the server wakes up, rehydrates the callback, sends the message, and then goes back to sleep.
With all that said:
Is that ^^ possible (to persist callback data so that a per-call server can call back to clients)? If not, could I make the service per-session (for clients/widgets) but pass data between service sessions through some other means? Shared memory? A file?
Is the overall design possible/recommended? I've looked into the WCF routing library, but that doesn't seem to do what I want, unless I'm reading it wrong.
Are there other technologies I should be using that can do this more easily?
Thanks,
-Bill
So basically I am thinking about load testing my ASP.NET application, exercising various features all at once. There are a lot of dependencies and AJAX requests in this application, so a simple replay of captured HTTP requests will not suffice, and because of other requirements (picking out random operations, performing them, then verifying the results across several machines) simple load-testing software will not suffice either.
Also, there is no budget for spending on this project, so commercial implementations cannot be used. I'm debating trying MSMQ (which I've never used before) to handle communication between clients, but if that is really complicated to set up then I would either use a database table as a queue or a simple TCP server with each test machine as a client.
Features I want are: immediate failure (if one client crashes, all clients should stop), each test run starting with a brand-new scenario with no prior messages, and the ability to publish start and stop events. It would also be nice if I didn't have to worry about state management (leaning towards a TCP server over a database for this) or concurrency.
It doesn't sound like MSMQ is what you need. It is an asynchronous, message-passing communication method, akin to email: you can send a message to a queue that no one is even listening to (i.e. the application isn't running). It seems to me you want a more "online" communication model.
How about creating agents (client applications that sit on many machines and generate the load) that expose a WCF service, so a controller program can connect to all of them and instruct the agents what to do? It can be a duplex contract, so that the agents can send the controller notifications; when one of them sends an error notification, the controller can instruct all the other agents to shut down. I'd also go for a net.tcp binding rather than an HTTP binding.
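A hedged sketch of what those agent/controller contracts could look like; all the names here are illustrative, not a finished design:

```csharp
// Each load-generating agent hosts this duplex contract over net.tcp; the
// controller connects to every agent and listens for failure notifications
// on the callback so it can stop the whole run (the "immediate failure" case).
using System;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IControllerCallback))]
public interface ILoadAgent
{
    [OperationContract(IsOneWay = true)]
    void StartScenario(string scenarioName);

    [OperationContract(IsOneWay = true)]
    void StopScenario();
}

public interface IControllerCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportFailure(string agentName, string error);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class LoadAgent : ILoadAgent
{
    public void StartScenario(string scenarioName)
    {
        var controller = OperationContext.Current.GetCallbackChannel<IControllerCallback>();
        try
        {
            // Run the captured/randomized operations against the ASP.NET app here.
        }
        catch (Exception ex)
        {
            // Let the controller know so it can tell every other agent to stop.
            controller.ReportFailure(Environment.MachineName, ex.Message);
        }
    }

    public void StopScenario()
    {
        // Cancel any in-flight work.
    }
}
```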
I need to create a system comprising 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents.
Multiple agents installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server.
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' option (where agents are both client and server) seems the obvious choice, but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client pull over server push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be yours, but I do think it is worth reading the "Push vs Pull" section of their paper to decide for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems, or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restrictive networks.
If you use some kind of messaging server (JMS for Java; I'm not sure what the equivalent is for C#), then the messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would let you accomplish the hybrid model without needing to open a port on the agent.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
First part (data needs to get to the server): that can obviously be done by invoking a web method that sends the data as a parameter.
Second part (the server periodically sends out updates to the agents): you can still do that through regular client pulls, via a web service method that asks for the updates since the last pull (some sort of timestamp lets the agent get the updates it missed). A rough sketch follows.
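This is only a sketch of the pull model under the assumptions above, written as a plain WCF contract (operation and type names are made up; expose it over basicHttpBinding if the Java interoperability matters):

```csharp
// Agents call PushResults when they have data, and poll GetUpdatesSince with
// the timestamp of their last successful pull to catch up on missed updates.
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IAgentGateway
{
    [OperationContract]
    void PushResults(string agentId, byte[] collectedData);

    [OperationContract]
    List<AgentUpdate> GetUpdatesSince(string agentId, DateTime lastPullUtc);
}

[DataContract]
public class AgentUpdate
{
    [DataMember] public DateTime IssuedUtc { get; set; }
    [DataMember] public string Payload { get; set; }
}
```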
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might well go "offline" quite often. What will the server do if a push fails? That's usually a tough question/decision, especially if you're not sure whether it was an intended "going offline" or a system/network failure, etc.
I have developed a Windows service which reads data from a database; the database is populated via an ASP.NET MVC application.
I have a requirement to make the service re-load its in-memory data by issuing a select query to the database. This re-load will be triggered by the web app. I have thought of a few ways to accomplish this, e.g. Remoting, MSMQ, or simply making the service listen on a socket for the reload command.
I am just looking for suggestions as to what would be the best approach to this.
How reliable does the notification have to be? If a notification is lost (let's say the communication pipe has a hiccup in a router and drops the socket), will the world end, or is it business as usual? If the service is down, do notifications from the web site need to be queued up for when it starts, or can they be safely dropped?
The more reliable you need it to be, the more you should lean toward a queued solution (MSMQ). If reliability is not an issue, then you can choose from the myriad of non-queued solutions (Remoting, TCP, UDP broadcast, HTTP call, etc.).
Do you care at all about security? Do you fear an attacker may ping your 'refresh' to death, causing at least a DoS if not worse? Do you want to authenticate the web site making the 'refresh' call? Do you need privacy of the notifications (i.e. encryption)? UDP is more difficult to secure (no session).
Does the solution have to allow for easy deployment, configuration, and management in the field (i.e. is it a standalone, packaged product), or is it a one-time deployment that can be fixed 'just in time' if something changes?
Without knowing the details of all these factors, it is difficult to say 'use X'. At least one thing is sure: Remoting is more or less obsolete by now.
My recommendation would be to use WCF, because of the ease of changing bindings on-the-fly, so you can test various configurations (TCP, net pipe, http) w/o any code change.
BTW, have you considered using Query Notifications to detect data changes, instead of active notifications from the web site? I reckon this is a shot in the dark, but equivalent active cache support exists on many databases.
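For the Query Notifications idea, a hedged sketch using SqlDependency (the connection string, table, and columns are placeholders, and the query has to follow the usual notification rules: explicit column list, two-part table name):

```csharp
// Subscribes to SQL Server Query Notifications so the service reloads its
// cache when the underlying data changes, instead of the web app pushing a
// 'refresh' command. Subscriptions fire once, so we re-subscribe on change.
using System.Data.SqlClient;

class CacheWatcher
{
    const string ConnectionString = "<sql-connection-string>"; // assumption

    public void Start()
    {
        SqlDependency.Start(ConnectionString);
        Subscribe();
    }

    void Subscribe()
    {
        using var connection = new SqlConnection(ConnectionString);
        using var command = new SqlCommand(
            "SELECT Id, Value FROM dbo.Settings", connection); // placeholder query

        var dependency = new SqlDependency(command);
        dependency.OnChange += (s, e) =>
        {
            // Reload the in-memory data here, then register a new subscription.
            Subscribe();
        };

        connection.Open();
        command.ExecuteReader().Dispose(); // executing the command registers the notification
    }
}
```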
Simply host a WCF service inside the Windows service. You can use netTcpBinding, which uses binary over TCP/IP. This will be much simpler than raw sockets and easier to develop and maintain.
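A minimal sketch of what that hosting could look like; the contract, class, and endpoint address are illustrative, not prescribed:

```csharp
// The Windows service opens a ServiceHost on start; the web app calls
// Reload() over netTcpBinding to make the service re-query the database.
using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface IReloadService
{
    [OperationContract(IsOneWay = true)]
    void Reload();
}

public class ReloadService : IReloadService
{
    public void Reload()
    {
        // Re-run the select query and refresh the in-memory data here.
    }
}

public class CacheWindowsService : ServiceBase
{
    private ServiceHost _host;

    static void Main() => ServiceBase.Run(new CacheWindowsService());

    protected override void OnStart(string[] args)
    {
        _host = new ServiceHost(typeof(ReloadService),
            new Uri("net.tcp://localhost:9000/reload")); // placeholder address
        _host.AddServiceEndpoint(typeof(IReloadService), new NetTcpBinding(), string.Empty);
        _host.Open();
    }

    protected override void OnStop()
    {
        _host?.Close();
    }
}
```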
I'd use standard TCP sockets - this will survive all sorts of moving of components, and minimize configuration issues IMHO.
I'm creating an application that I want to put into the cloud. This application has one main function.
It hosts socket CLIENT sessions on behalf of other users (think of Beejive IM for the iPhone, where it hosts IM sessions for clients to maintain state on those IM networks, allowing the client to connect/disconnect at will, without breaking the IM network connection).
Now, the way I've planned it, one 'worker instance' can likely only handle a finite number of client sessions (let's say 50,000 for argument's sake). Those sessions will be very long-lived worker tasks.
The issue I'm trying to get my head around is that I will sometimes need to perform tasks to specific client sessions (eg: If I need to disconnect a client session). With Azure, would I be able to queue up a smaller task that only the instance hosting that specific client session would be able to dequeue?
Right now I'm contemplating GoGrid as my provider, and I solve this issue by using Apache's ActiveMQ messaging software. My web app enqueues 'disconnect' tasks that are assigned to a specific instance ID. Each client session is therefore assigned to a specific instance ID, and each instance only dequeues the 'disconnect' tasks assigned to it.
I'm wondering if it's feasible to do something similar on Azure, and how I would generally do it. I like the idea of not having to set up many different VMs to scale, and instead just deploying a single package. It would also be nice to make use of Azure's queues instead of integrating a third-party product such as Apache ActiveMQ, or even MSMQ.
I'd be very concerned about building a production application on Azure until the feature set, pricing, and licensing terms are finalized. For starters, you can't even do a cost comparison between it and, e.g., GoGrid, EC2, or Mosso, so I don't see how it could possibly end up a front-runner. Also, we know that all of these systems will have glitches as they mature. Amazon's services are in much wider use than any of the others and have been publicly available for years. IMHO, choosing Azure now is a recipe for pain while they stabilize.
Have you considered Amazon's Simple Queue Service for queueing?
I think you can absolutely use Windows Azure for this. My recommendation would be to create a queue for each session you're tracking, then enqueue the disconnect message (for example) on the queue for that session. The worker instance that's handling that connection should be the only one polling that queue, so it will be the one that performs the task on that connection.
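A hedged sketch of that per-session polling, written against the current Azure.Storage.Queues SDK; the queue naming convention, connection string, and 'disconnect' message text are assumptions for illustration:

```csharp
// Only the worker instance hosting this client session polls this queue, so a
// 'disconnect' command is always picked up by the right instance.
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

class SessionCommandPoller
{
    static async Task Main()
    {
        const string connectionString = "<storage-connection-string>"; // assumption
        const string sessionId = "session-12345";                      // assumption

        var queue = new QueueClient(connectionString, $"commands-{sessionId}");
        await queue.CreateIfNotExistsAsync();

        while (true)
        {
            QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 10);
            foreach (QueueMessage message in messages)
            {
                if (message.MessageText == "disconnect")
                {
                    // Tear down the long-lived socket session for this client here.
                }
                await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }

            await Task.Delay(TimeSpan.FromSeconds(2)); // simple pause between polls
        }
    }
}
```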
Regarding the application hosting socket connections for clients to connect to, I'd double-check what's allowed, as I think only HTTP and HTTPS connections can be made with Azure.