How can my Azure App Services use more outbound IP addresses? - c#

My collection of app services in Azure has several web jobs that run behind the scenes and consume an external, third-party API over the web. This third-party API throttles me by IP address.

As my app grows, I am scaling out and even separating modules into separate app services. However, even with separate app services, every app service and scale-out instance uses the same set of outbound IP addresses, so no matter how much I scale out, the external API throttles me just the same. I am getting more and more customers, and I am starting to see more and more timeouts from this external API.

Is there a way to configure each of my app services to use a different set of outbound IPs? Or can I have more than 4 separate outbound IP addresses assigned to an app service?

Related

Theory: Azure Websockets

Is it possible to consume an external (to Azure) API that requires you to establish a wss connection to receive change notifications, from some kind of Azure container (Kubernetes/Durable Functions)?
Or do I need to run a virtual machine with a background service that keeps the socket alive until it has no more data to send (hours)? No UI.
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp
Durable functions look promising but I'm unsure if these will cover my requirements.
Any advice welcomed.
Yes, you should be able to use WebSocket connections to services deployed on Kubernetes, and also the other way around, where services in Kubernetes act as WebSocket clients connecting to external services.
I haven't tested this, but Azure Web Apps support WebSockets. Since you can host Azure Functions in the same App Service that runs your web app, I think it should be possible to support WebSockets in your functions together with Durable Functions.
Another point that leads me to think this is the native support in Azure Functions for SignalR Service, which also runs over WebSockets.
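If you do end up keeping the socket open yourself (in a container, WebJob, or worker service), a minimal sketch of a long-lived consumer could look like the following. The wss URL and the message handling are placeholders for the third-party API, not part of anything described above.

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Minimal sketch: a long-running worker that keeps a wss connection open and
// reads notifications until the server closes it. The endpoint URL is a
// placeholder for the external API.
class SocketWorker
{
    static async Task Main()
    {
        var uri = new Uri("wss://example.com/notifications"); // hypothetical endpoint
        using var socket = new ClientWebSocket();
        socket.Options.KeepAliveInterval = TimeSpan.FromSeconds(30); // built-in ping/pong

        await socket.ConnectAsync(uri, CancellationToken.None);

        var buffer = new byte[8192];
        while (socket.State == WebSocketState.Open)
        {
            var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            if (result.MessageType == WebSocketMessageType.Close)
            {
                await socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "done", CancellationToken.None);
                break;
            }

            var message = Encoding.UTF8.GetString(buffer, 0, result.Count);
            Console.WriteLine($"Notification: {message}");
            // TODO: hand the notification off to your own processing.
        }
    }
}
```

The same loop would work inside a durable orchestration's activity or a Kubernetes-hosted worker; the main requirement is simply a host that is allowed to run long enough to keep the connection alive.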

Azure App Service Restrictions - allow app service A for app service B

Azure app service A needs to call Azure app service B using the System.Net.WebClient class.
Access to app service B is restricted to the company's IP range only, through
Azure > App Service > Networking > Access Restrictions.
I tried adding <public ip of app service A>/32 to B's allow list, but that did not work: System.Net.WebClient.DownloadData threw a 403 Forbidden exception.
What else can I try?
It looks like it is impossible to allow app service A by its public IP address in the Access Restrictions of app service B while both app services are in the same App Service plan.
Azure App Service is a multi-tenant service, except for App Service Environments. Apps that are not in an App Service Environment (that is, not in the Isolated tier) share network infrastructure with other apps. Restricting by the inbound or possible outbound public IP addresses of the web app effectively means restricting access from itself. Even so, per my understanding, the call actually arrives from the private IP address of the web app instance over the Azure backbone network, and we cannot know the private IP address of each app service instance.
You could use an App Service plan in the Isolated pricing tier, but it comes at a high cost. So I suggest recreating web app A in a different App Service plan in a different region, and then allowing web app A's possible outbound IP addresses in web app B's access restrictions.
Additionally, you can get a further understanding of Azure App Service plans in this blog.
I understand what you are trying to achieve, and I suggest you use Azure Traffic Manager.
Azure Traffic Manager is a DNS-based traffic load balancer. This service allows you to distribute traffic to your public-facing applications across the global Azure regions. Traffic Manager also provides your public endpoints with high availability and quick responsiveness.
Traffic Manager uses DNS to direct the client requests to the appropriate service endpoint based on a traffic-routing method. The traffic manager also provides health monitoring for every endpoint. The endpoint can be any Internet-facing service hosted inside or outside of Azure. Traffic Manager provides a range of traffic-routing methods and endpoint monitoring options to suit different application needs and automatic failover models. Traffic Manager is resilient to failure, including the failure of an entire Azure region.
Please visit the link below for more information
https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-overview
I faced the same problem and found a solution:
1. Use a NAT gateway to fix the outbound IP address of Service A:
https://learn.microsoft.com/en-us/azure/app-service/networking/nat-gateway-integration
2. Enable Route All and regional virtual network integration on Service A (explained in the link above).
3. Allow access from the NAT gateway on Service B by specifying the NAT gateway's public IP address in B's access restrictions.
With this method you can also disable public access to Service B.
Private endpoints can also be used in this scenario, but they disable the SCM (Kudu) site as well, which is used for deployments from Azure Pipelines and similar tools.
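A quick way to confirm the setup is to have Service A report the source IP that the outside world sees for it. The following is only an illustrative sketch; the echo endpoint (https://api.ipify.org) is just an example of a "what is my IP" service, not something the steps above require.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch: run this from Service A (e.g., in a WebJob or a test endpoint)
// to check which public IP outbound traffic is leaving from.
class OutboundIpCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        string observedIp = await client.GetStringAsync("https://api.ipify.org");
        Console.WriteLine($"Outbound requests appear to come from: {observedIp}");
        // Once Route All and regional VNet integration are enabled on Service A,
        // this should match the NAT gateway's public IP.
    }
}
```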

How to communicate from Azure web app to WCF services hosted locally in IIS?

I have an ASP.NET MVC application hosted in Azure.
This application is complemented by a desktop application that also has WCF services for communicating with third-party interfaces. The WCF services are hosted locally.
There are thousands of clients using the desktop application at different geographical locations.
Until now, every desktop application has talked to the web app via the web API, with the help of WCF.
This was limited to on-demand calls from the desktop application.
Whenever the desktop application needed to talk to the web app, it called the web API from WCF.
Now, what I want is:
To access the different desktop applications (typically called sites) from Azure, depending on the need.
This is required for an online ordering system that works through the web app/mobile app.
I do not want to keep polling from the desktop application to find out whether there is a new order for that site.
I feel it would be better if I could initiate the communication from the other side.
Also, keep in mind that the IP addresses of the sites will not be fixed, there may be firewall issues, and NAT may translate resource identifiers differently.
Could Azure Service Bus be of any help? What confuses me is that every desktop application has its own WCF service, and an order should reach the respective site only.
Any ideas on this would be appreciated.
According to your description, Service Bus messaging is a perfect way to achieve this.
For more information about Service Bus messaging, see: Service Bus queues, topics, and subscriptions.
In addition, you could also use RabbitMQ or ZeroMQ, which are similar to Service Bus messaging, and both of them are free. You can choose the best way to meet your requirements.
About the differences between ZeroMQ and RabbitMQ:
ZeroMQ has better performance, but it is designed to tolerate message loss in order to serve high-throughput / low-latency applications. Unlike ZeroMQ, RabbitMQ fully implements the AMQP protocol; it works like a mailbox service, supporting message persistence, transactions, congestion control, load balancing, and so on, which gives RabbitMQ a wider range of application scenarios.
| Function | RabbitMQ | ZeroMQ |
| --- | --- | --- |
| Message persistence | Supported | Not supported |
| Transactions | Supported | Not supported |
| Performance | Low | High |
| Stability | High | Low |
| AMQP protocol support | Supported | Not supported |
| Application scenario | Data loss is not allowed | High throughput |
For more information about RabbitMQ and ZeroMQ, see:
RabbitMQ
ZeroMQ
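Coming back to the Service Bus suggestion, a topic with one subscription per site is the usual way to make sure an order reaches the respective site only. The sketch below uses the Azure.Messaging.ServiceBus library; the topic name, subscription naming convention, "SiteId" property, and connection string are placeholders, not something from the original question.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

// Minimal sketch: the web app publishes orders to a topic with a "SiteId"
// property; each site's desktop/WCF host listens on its own subscription so it
// only receives its own orders.
class OrderMessaging
{
    const string ConnectionString = "<service-bus-connection-string>"; // placeholder
    const string TopicName = "orders";                                 // placeholder

    // Cloud side: publish an order addressed to one site.
    static async Task PublishOrderAsync(string siteId, string orderJson)
    {
        await using var client = new ServiceBusClient(ConnectionString);
        ServiceBusSender sender = client.CreateSender(TopicName);

        var message = new ServiceBusMessage(orderJson);
        message.ApplicationProperties["SiteId"] = siteId; // used by a filter on each subscription

        await sender.SendMessageAsync(message);
    }

    // Site side: each desktop/WCF host processes only its own subscription.
    static async Task ListenForOrdersAsync(string siteId)
    {
        await using var client = new ServiceBusClient(ConnectionString);
        ServiceBusProcessor processor = client.CreateProcessor(TopicName, subscriptionName: $"site-{siteId}");

        processor.ProcessMessageAsync += async args =>
        {
            string orderJson = args.Message.Body.ToString();
            Console.WriteLine($"Site {siteId} received order: {orderJson}");
            await args.CompleteMessageAsync(args.Message);
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.WriteLine(args.Exception);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        Console.ReadLine(); // keep listening until the host shuts down
        await processor.StopProcessingAsync();
    }
}
```

On the management side, each subscription could be created with a SQL filter such as SiteId = 'site-42' so the broker itself does the per-site routing. Because every site only needs an outbound connection to Service Bus, this sidesteps the changing IPs, firewalls, and NAT concerns raised in the question.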
If you are able to modify the desktop applications, implementing a WebSocket connection with SignalR might be worth a look. The desktop applications sign up with a SignalR hub you provide.
You can then push data to the clients from, for example, an ASP.NET MVC app. It works very reliably and handles lots of connections well. It is typically used for realtime web communication but might be useful in your case, too.
The downside is probably that the desktop app needs to sign up to the hub initially in order to receive push messages.
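To make the idea concrete, here is a minimal sketch using ASP.NET Core SignalR; the hub name, the group-per-site convention, the "NewOrder" method name, and the URL are illustrative assumptions, not part of the original answer. In practice the hub and the client would live in separate projects.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.AspNetCore.SignalR.Client;

// Hub hosted in the web app. Each desktop app registers under its site id,
// so the server can target push messages at a single site.
public class SiteHub : Hub
{
    public Task RegisterSite(string siteId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, siteId);
}

// Server side (e.g., from a controller handling a new online order):
// inject IHubContext<SiteHub> and push to the right site's group.
public class OrderNotifier
{
    private readonly IHubContext<SiteHub> _hub;
    public OrderNotifier(IHubContext<SiteHub> hub) => _hub = hub;

    public Task NotifyNewOrderAsync(string siteId, string orderJson) =>
        _hub.Clients.Group(siteId).SendAsync("NewOrder", orderJson);
}

// Desktop side (Microsoft.AspNetCore.SignalR.Client): connect outbound,
// register the site, and react to pushed orders without polling.
public static class DesktopClient
{
    public static async Task RunAsync()
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("https://<your-web-app>/siteHub") // placeholder URL
            .WithAutomaticReconnect()
            .Build();

        connection.On<string>("NewOrder", orderJson =>
            Console.WriteLine($"New order received: {orderJson}"));

        await connection.StartAsync();
        await connection.InvokeAsync("RegisterSite", "site-42"); // placeholder site id
    }
}
```

Because the desktop apps dial out to the hub, this also works when the sites sit behind NAT or firewalls, at the cost of keeping those connections open.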

Using Azure Service Fabric Reliable Actors across different systems

When using Service Fabric Reliable Actors, is it possible for an actor client in one system (for example, on a local deployment) to communicate with an actor server in a different system (for example, on an Azure cloud deployment)? If so, how can this be configured? If not, what Azure functionality could I use to achieve this instead? The linked overview gives code examples for the client and server, but not any of the necessary configuration steps.
For communicating from a client to an actor running in a cluster, you need to have direct connectivity today - for example the Azure Load Balancer can't be in the way. To configure which cluster to connect to, create a ServicePartitionResolver with the FabricClientSettings, SecurityCredentials, and Endpoints matching the cluster, and then use ServicePartitionResolver.SetDefault (https://msdn.microsoft.com/en-us/library/microsoft.servicefabric.services.client.servicepartitionresolver.aspx)
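As a rough illustration of the configuration that answer describes, something like the following could work. The exact constructor overloads vary between Service Fabric SDK versions, SetDefault is taken from the answer's description, and the endpoint and certificate values are placeholders.

```csharp
using System.Fabric;
using System.Security.Cryptography.X509Certificates;
using Microsoft.ServiceFabric.Services.Client;

// Minimal sketch, assuming a cluster secured with X.509 certificates.
static class RemoteClusterSetup
{
    public static void ConfigureResolver()
    {
        var credentials = new X509Credentials
        {
            FindType = X509FindType.FindByThumbprint,
            FindValue = "<client-cert-thumbprint>",      // placeholder
            StoreLocation = StoreLocation.CurrentUser,
            StoreName = "My",
            RemoteCertThumbprints = { "<cluster-cert-thumbprint>" } // placeholder
        };
        var settings = new FabricClientSettings();

        // Point the resolver at the remote cluster's client connection endpoint.
        var resolver = new ServicePartitionResolver(
            credentials, settings, "mycluster.westus.cloudapp.azure.com:19000");

        // Make it the default so ActorProxy/ServiceProxy resolve against that cluster.
        ServicePartitionResolver.SetDefault(resolver);
    }
}
```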

Azure webjob calling internal service

I have a console app that calls a WCF service. This WCF service is on an Azure Cloud Services VM, and the WCF service is only accessible internally (using Windows credentials). The Cloud Services VM has been added to our domain.
I have deployed this console app as an Azure WebJob. It lives in an Azure App Service web app by itself; there is no related web app.
When I run the WebJob, I get a "System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at..." exception. That seems expected, since the Azure App Service web app is not on the domain and is not talking to internal DNS.
My question: can I, and how can I, add the VM backing the Azure App Service web app to our domain? And if not, what options are there for getting this WebJob to talk to internal DNS?
In general when trying to connect with on-premise resources or other private networks from within Azure, there are a few options that you can check out:
Option #1: App Service Environments: https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-intro/
App Service Environments are isolated to running only a single customer's applications, and are always deployed into a virtual network. Customers have fine-grained control over both inbound and outbound application network traffic, and applications can establish high-speed secure connections over virtual networks to on-premises corporate resources.
This will give you the most flexibility because of the virtual network, but at the highest cost as it is a premium offering.
Option #2: App Service Hybrid Connections: https://azure.microsoft.com/en-us/documentation/articles/integration-hybrid-connection-overview/
Hybrid Connections are a feature of Azure BizTalk Services. Hybrid Connections provide an easy and convenient way to connect the Web Apps feature in Azure App Service (formerly Websites) and the Mobile Apps feature in Azure App Service (formerly Mobile Services) to on-premises resources behind your firewall.
I'm less familiar with this option, but it's designed to work with App Service for these types of scenarios. It may be difficult to use if you require access to an internal DNS server or domain controller, however.
Option #3: Service Bus Relay: https://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-relay/
The Service Bus relay service enables you to build hybrid applications that run in both an Azure datacenter and your own on-premises enterprise environment. The Service Bus relay facilitates this by enabling you to securely expose Windows Communication Foundation (WCF) services that reside within a corporate enterprise network to the public cloud, without having to open a firewall connection, or require intrusive changes to a corporate network infrastructure.
This option has been around for a while and is especially designed for connecting to WCF services. It's not specific to Azure App Service (as you can probably tell from the article), but it might still be a good, fairly lightweight fit for your scenario. However, it also will not help you with DNS or an on-premises domain controller.
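For option #3, the WebJob itself would call the relayed endpoint rather than the internal address. A minimal client-side sketch could look like the following, assuming the on-premises WCF service has already been exposed through a relay; the namespace, path, key, and contract are placeholders.

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

// Minimal sketch of a WebJob/console client calling an on-premises WCF service
// through Azure Service Bus Relay (WindowsAzure.ServiceBus package).
class RelayClientExample
{
    [ServiceContract]
    public interface IInternalService // placeholder contract
    {
        [OperationContract]
        string GetData(int id);
    }

    static void Main()
    {
        // Relay address published by the on-premises service host.
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "<your-namespace>", "internal-service");
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", "<your-key>");

        var factory = new ChannelFactory<IInternalService>(
            new NetTcpRelayBinding(), new EndpointAddress(address));
        factory.Endpoint.EndpointBehaviors.Add(new TransportClientEndpointBehavior(tokenProvider));

        IInternalService channel = factory.CreateChannel();
        Console.WriteLine(channel.GetData(42));
        ((ICommunicationObject)channel).Close();
    }
}
```

The on-premises host opens an outbound connection to the relay, so no inbound firewall ports need to be opened, which is the main appeal of this option for the scenario above.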
