I'd like to have items added to a service bus topic, then pulled off the 'Live' subscription and sent to the live site, and pulled off the 'Development' subscription and sent to the dev site.
[FunctionName("AddFoo")]
public static async Task AddFooAsync(
    [ServiceBusTrigger("topic-foo", "Live")] QueueItem item,
    TraceWriter log)
{
    var endpoint = ConfigurationManager.AppSettings["EndPoint"];
    var httpClient = new HttpClient();
    httpClient.DefaultRequestHeaders.Add("PublisherKey", item.PublisherKey);
    var foos = new HttpFooStore(httpClient, endpoint);
    try
    {
        await foos.AddAsync(item.Value);
    }
    catch (BadRequestException)
    {
        log.Warning("Malformed request was rejected by XXX", item.PublisherName);
        return;
    }
    catch (AuthorizationException)
    {
        log.Warning("Unauthorized request was rejected by XXX", item.PublisherName);
        return;
    }
    catch (ResourceNotFoundException)
    {
        log.Warning("Request for unknown tracker was rejected by XXX", item.PublisherName);
        return;
    }
    catch (Exception e)
    {
        log.Error("Request to XXX was unsuccessful", e, item.PublisherName);
        // 'throw;' preserves the original stack trace, unlike 'throw e;'
        throw;
    }
}
The implementation of the function is exactly the same in both cases; the only differences are the subscription name and the endpoint used. Unfortunately, the subscription name is part of an attribute, so it has to be a constant. Is there any way I can get the desired effect without having to duplicate all the code?
Edit
To clarify, I want to create two separate deployments - one for live, and one for development. For each deployment, I would update the environment settings, which would determine which subscription the function is bound to.
You can refer to environment variables by surrounding them with percent signs:
ServiceBusTrigger("%myTopic%", "%mySubscription%")
where myTopic and mySubscription are environment variables defined in the app settings.
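For example, a minimal sketch applied to your function (the setting names are whatever you define per deployment):
[FunctionName("AddFoo")]
public static async Task AddFooAsync(
    [ServiceBusTrigger("%myTopic%", "%mySubscription%")] QueueItem item,
    TraceWriter log)
{
    // Each deployment's app settings supply myTopic and mySubscription,
    // e.g. the live deployment sets mySubscription = "Live",
    // the development deployment sets mySubscription = "Development".
    // ... same body as the original function ...
}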
You can't have a single function triggered by two Service Bus subscriptions (development vs. live), but you can move the meat of your function into a helper method that both functions call.
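A minimal sketch of that refactor (the LiveEndPoint/DevEndPoint setting names are assumptions, not from your snippet):
[FunctionName("AddFooLive")]
public static Task AddFooLiveAsync(
    [ServiceBusTrigger("topic-foo", "Live")] QueueItem item, TraceWriter log)
    => ProcessAsync(item, log, ConfigurationManager.AppSettings["LiveEndPoint"]);

[FunctionName("AddFooDev")]
public static Task AddFooDevAsync(
    [ServiceBusTrigger("topic-foo", "Development")] QueueItem item, TraceWriter log)
    => ProcessAsync(item, log, ConfigurationManager.AppSettings["DevEndPoint"]);

private static async Task ProcessAsync(QueueItem item, TraceWriter log, string endpoint)
{
    // The shared body from the original function goes here,
    // using the endpoint passed in by the caller.
}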
I have a WebJob in my Azure web app that writes data to an Azure Cosmos DB instance. The WebJob is triggered from a storage queue, and each trigger performs one insert or one update against the Cosmos instance. With the amount of data coming into that queue, the WebJob inserts/updates the Cosmos instance around 1000 times every minute.
In a separate, user-facing portal, users query data from this Cosmos instance. We have been getting a high number of these errors from that public-facing portal:
Only one usage of each socket address (protocol/network address/port) is normally permitted
An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full
To me, this is indicative of SNAT port exhaustion. All of the documentation on this subject, and on these specific error messages, points to ensuring that connections to the Cosmos instance are re-used and that best practices are followed. I believe we are re-using connections to the Cosmos instance properly, but I am not sure. This is the code:
Program.cs
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

internal class Program
{
    private static async Task Main(string[] args)
    {
        var builder = new HostBuilder();
        builder.ConfigureWebJobs(b =>
        {
            b.AddAzureStorageQueues();
        });
        var host = builder.Build();
        using (host)
        {
            await host.RunAsync();
        }
    }
}
Functions.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace WebhookMessageProcessor
{
    public class RingCentralMessageProcessor
    {
        private static List<KeyValuePair<string, CosmosClient>> cosmosClients = new List<KeyValuePair<string, CosmosClient>>();

        // Returning Task (instead of async void) lets the WebJobs host observe failures.
        public static async Task ProcessQueueMessage([QueueTrigger("<<storage-queue-name>>")] string message, ILogger logger)
        {
            var model = Newtonsoft.Json.JsonConvert.DeserializeObject<WebHookHandlerModel>(message);
            // The intention here is to maintain a list of cosmos clients, as each message from the
            // queue indicates which Cosmos instance to update/insert the data to. For now, however,
            // all messages are going to a single instance. More will be added later.
            if (cosmosClients == null) cosmosClients = new List<KeyValuePair<string, CosmosClient>>();
            await HandleCallData(model.ownerId, model.body, storageConnectionString);
        }

        public static async Task HandleCallData(string ownerId, string deserializedData, string storageConnectionString)
        {
            var model = Newtonsoft.Json.JsonConvert.DeserializeObject<PushModel>(deserializedData);
            if (model == null || model.body == null || model.body.sessionId == null)
            {
                // log error
            }
            else
            {
                // Look up (or create) the client for this owner; see the comment above.
                CosmosClient cosmosClient;
                if (!cosmosClients.Any(x => x.Key == ownerId))
                {
                    cosmosClient = new CosmosClient(cosmosConfig.accountEndpoint, cosmosConfig.accountKey);
                    cosmosClients.Add(new KeyValuePair<string, CosmosClient>(ownerId, cosmosClient));
                }
                else
                {
                    cosmosClient = cosmosClients.First(x => x.Key == ownerId).Value;
                }
                // data building logic here
                // ...
                var cosmosContainer = cosmosClient.GetContainer(cosmosConfig.databaseId, cosmosConfig.containerId);
                var success = false;
                var curTries = 0;
                const int maxTries = 3; // assumed retry limit; the original snippet doesn't show this value
                do
                {
                    try
                    {
                        string etag = null;
                        if (condition1) // THEN INSERT
                        {
                            await cosmosContainer.CreateItemAsync(call, partitionKey: new PartitionKey(partitionKey), requestOptions: new ItemRequestOptions() { IfMatchEtag = etag });
                            success = true;
                        }
                        else if (condition2) // THEN FIND AND REPLACE
                        {
                            var response = await cosmosContainer.ReadItemAsync<CallIndex>(call.id, new PartitionKey(partitionKey));
                            var existingCallIndex = response.Resource;
                            etag = response.ETag;
                            await cosmosContainer.ReplaceItemAsync(existingCallIndex, call.id, new PartitionKey(partitionKey), new ItemRequestOptions() { IfMatchEtag = etag });
                            success = true;
                        }
                        else // FIND AND REPLACE BY DEFAULT
                        {
                            var response = await cosmosContainer.ReadItemAsync<CallIndex>(call.id, new PartitionKey(partitionKey));
                            var existingCallIndex = response.Resource;
                            etag = response.ETag;
                            await cosmosContainer.ReplaceItemAsync(existingCallIndex, call.id, new PartitionKey(partitionKey), new ItemRequestOptions() { IfMatchEtag = etag });
                            success = true;
                        }
                    }
                    catch (Exception ex)
                    {
                        // handle exception here
                    }
                    curTries++;
                } while (!success && curTries < maxTries);
            }
        }
    }
}
I am maintaining a list of Cosmos clients in a static variable, as the content of the message may indicate writing to a different Cosmos instance. However, as of now there is only one instance, and all data goes to that single instance. There will be more instances in the future. Is this a good/correct way to reuse connections to the Cosmos instance in my WebJob?
Thanks
This can technically be achieved, but there are trade-offs you need to make, mainly around latency (you can't have an unbounded list of Cosmos clients in Direct mode).
The key of the dictionary should be the account name; that way you don't end up creating multiple clients for the same account even if the "owner" is different. There should be a singleton client per account your application interacts with.
You should put your client in Gateway mode. This uses fewer ports at the cost of potentially higher latency, but there is no scenario where you can have an unbounded number of client instances in Direct mode; that will almost always hit your connection limit. Example on how to change the connection mode.
You are using a List, which is neither concurrent nor handles eviction. You should dispose clients that have not been used for some time, or define a maximum number of clients you can handle; it's impossible to write an app that handles an unbounded/infinite number of clients. MemoryCache may be a good option, but you need to define a limit or make sure you can distribute across multiple machines/instances. A sketch of this idea follows.
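A minimal sketch of that idea, keyed by account endpoint and bounded (the limit of 20 is an assumption; eviction is left out for brevity):
using System;
using System.Collections.Concurrent;
using Microsoft.Azure.Cosmos;

public static class CosmosClientCache
{
    private const int MaxClients = 20; // assumed bound; pick one that fits your workload
    private static readonly ConcurrentDictionary<string, CosmosClient> Clients =
        new ConcurrentDictionary<string, CosmosClient>();

    public static CosmosClient GetOrCreate(string accountEndpoint, string accountKey)
    {
        if (Clients.Count >= MaxClients && !Clients.ContainsKey(accountEndpoint))
            throw new InvalidOperationException("Client limit reached; add eviction before growing further.");

        // Key by account, not by owner, so owners sharing an account share one client.
        return Clients.GetOrAdd(accountEndpoint, endpoint =>
            new CosmosClient(endpoint, accountKey, new CosmosClientOptions
            {
                // Gateway mode uses fewer connections/ports than Direct mode.
                ConnectionMode = ConnectionMode.Gateway
            }));
    }
}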
Putting Cosmos clients in a List will never work, as you can't pool connections across different clients pointing at different accounts. Your single client instance here is likely hitting the 128-port max for your WebJob. For Cosmos you should use a single client per account per instance. You should also cache the container references; not doing so will cause 429s on the master partition (which stores all your account metadata) in Cosmos DB, due to all the metadata requests that happen at larger request volumes.
Take a look at this article here on singleton clients, container reference caching, and PortReuseMode:
Best Practices for .NET SDK
Also see here for Networking Performance Tips for .NET SDK v3
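As a minimal sketch of the singleton-client-plus-cached-container pattern described above (endpoint, key, and database/container ids are placeholders):
using Microsoft.Azure.Cosmos;

public static class Cosmos
{
    // One client for the lifetime of the process; creating clients per message exhausts SNAT ports.
    private static readonly CosmosClient Client =
        new CosmosClient("<account-endpoint>", "<account-key>");

    // Caching the container reference avoids repeated metadata lookups against
    // the master partition at high request volumes.
    public static readonly Container Calls =
        Client.GetContainer("<database-id>", "<container-id>");
}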
We have 3 different environments: test, cert, and prod. These environments have topics configured using Offset Explorer.
The problem is that I can send messages to cert and test, but I can't send to prod until the topic in prod is marked for deletion. As soon as I do that, the messages immediately begin to be sent. I tried creating new topics in test and cert, and the problem persists there too: until I marked those topics for deletion, I did not succeed in sending a message.
The problem happens when I call ProduceAsync. The method runs for 5 minutes and then fails with the error:
Local: Message timed out.
If I use Produce instead, the program moves on to the next step, but the message never appears in the topic.
private readonly KafkaDependentProducer<Null, string> _producer;
private string topic;
private ILogger<SmsService> _logger;

public SmsService(KafkaDependentProducer<Null, string> producer, IConfiguration config, ILogger<SmsService> logger)
{
    _producer = producer;
    topic = config.GetSection("Kafka:Topic").Value;
    _logger = logger;
}

public async Task<Guid?> SendMessage(InputMessageModel sms)
{
    var message = new SmsModel(sms.text, sms.type);
    var kafkaMessage = new Message<Null, string>();
    kafkaMessage.Value = JsonConvert.SerializeObject(message);
    try
    {
        await _producer.ProduceAsync(topic, kafkaMessage);
    }
    catch (Exception e)
    {
        Console.WriteLine($"Oops, something went wrong: {e}");
        return null;
    }
    return message.messageId;
}
I took the KafkaDependentProducer class from the official repo example: https://github.com/confluentinc/confluent-kafka-dotnet/tree/master/examples/Web
I found the solution. In my case I needed to add the Acks parameter to the ProducerConfig: Acks = Acks.Leader (equals acks=1).
Unfortunately the latest version of Confluent.Kafka didn't surface the exception, so I had to downgrade to a lower version. ProduceAsync then gave me the exception "Broker: Not enough in-sync replicas", from which I quickly found the answer on the internet.
The min.insync.replicas parameter on the problem topic equals 2. With the default acks=all, the broker rejects produces whenever fewer than that many replicas are in sync; acknowledging on the leader only (acks=1) avoids that check.
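A minimal sketch of that configuration (broker address and topic are placeholders):
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "<broker:9092>",
    // Acks.Leader (acks=1): the leader acknowledges without waiting for in-sync
    // replicas, so min.insync.replicas=2 no longer blocks produces.
    Acks = Acks.Leader
};

using var producer = new ProducerBuilder<Null, string>(config).Build();
var result = await producer.ProduceAsync("<topic>", new Message<Null, string> { Value = "hello" });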
I created several microservices using ASP.NET Core Web API.
One of these microservices returns the exact addresses of the other microservices.
How can I update the address of any of these microservices without restarting, if the address changes?
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddHttpClient("MainMicroservice", x =>
{
x.BaseAddress = new Uri("http://mainmicroservice.com");
});
services.AddHttpClient("Microservice1", x =>
{
x.BaseAddress = new Uri("http://microservice1.com");
});
services.AddHttpClient("Microservice2", x =>
{
x.BaseAddress = new Uri("http://microservice2.com");
});
services.AddHttpClient("Microservice3", x =>
{
x.BaseAddress = new Uri("http://microservice3.com");
});
}
}
public class Test
{
    private readonly IHttpClientFactory _client;

    public Test(IHttpClientFactory client)
    {
        _client = client;
    }

    // Renamed: a method cannot share its enclosing class's name.
    public async Task<string> RunAsync()
    {
        // Loop until the call succeeds; a 404 triggers a re-resolution of the address.
        while (true)
        {
            try
            {
                return await _client
                    .CreateClient("Microservice1")
                    .GetStringAsync("Test")
                    .ConfigureAwait(false);
            }
            catch (HttpRequestException e) when (e.StatusCode == HttpStatusCode.NotFound)
            {
                var newAddress = await _client
                    .CreateClient("MainMicroservice")
                    .GetStringAsync("Microservice1")
                    .ConfigureAwait(false);
                //todo change address of microservice1
            }
        }
    }
}
If you are building a microservice-based solution, sooner or later (rather sooner) you will encounter a situation where one service needs to talk to another. In order to do this, the caller must know the exact location of the target microservice in the network they operate in.
You must somehow provide an IP address and port where the target microservice listens for requests. You can do this using configuration files or environment variables, but this approach has some drawbacks and limitations.
First, you have to maintain and properly deploy configuration files for all your environments: local development, test, pre-production, and production. Forgetting to update any of these configurations when adding a new service, or when moving an existing one to a different node, will result in errors discovered at runtime.
Second, and more important, this works only in a static environment: you cannot dynamically add or remove nodes, and therefore you won't be able to dynamically scale your system. The ability to scale and deploy a given microservice autonomously is one of the key advantages of microservice-based architecture, and we do not want to lose it.
Therefore we need to introduce service discovery. Service discovery is a mechanism that allows services to find each other's network locations. There are many possible implementations of this pattern.
There are two main types of service discovery, client-side and server-side, and you can find good NuGet packages handling either in ASP.NET projects. A rough sketch of the client-side idea, applied to the //todo in your snippet, follows.
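For illustration, a minimal client-side sketch (not a definitive implementation; it assumes MainMicroservice returns the current base address as a plain string, as in your snippet):
using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public class ServiceAddressCache
{
    private readonly IHttpClientFactory _factory;
    private readonly ConcurrentDictionary<string, Uri> _addresses = new();

    public ServiceAddressCache(IHttpClientFactory factory) => _factory = factory;

    // Ask MainMicroservice for the current address, caching it until invalidated.
    public async Task<Uri> GetAsync(string serviceName)
    {
        if (_addresses.TryGetValue(serviceName, out var cached)) return cached;
        var address = await _factory.CreateClient("MainMicroservice")
                                    .GetStringAsync(serviceName)
                                    .ConfigureAwait(false);
        var uri = new Uri(address);
        _addresses[serviceName] = uri;
        return uri;
    }

    // Call this from the 404 handler so the next request re-resolves the address.
    public void Invalidate(string serviceName) => _addresses.TryRemove(serviceName, out _);
}
The retry loop would then build each request from the cached base address and call Invalidate("Microservice1") before retrying on a 404, so no restart is needed when an address moves.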
I am developing a .NET Core web app, and I am using blob storage to store some objects that can be modified during a request. I have to prevent parallel access to a single object, so I added leases to my storage integration.
In practice, when I receive a request, an object is fetched from blob storage with a lease for some period. At the end of the request the object is updated in storage and the lease is released; pretty simple.
But what is the correct exception handling?
I am facing the problem that when an exception occurs in the middle of a request, the lease is not released. I tried to implement the release in Dispose (in a class where I control fetching and leasing from blob storage), but this is not executed when an unhandled exception is thrown.
Adding try/catch/finally does not seem clean to me. My question is: do you know a common best-practice approach to releasing the lease at the end of the request? Thank you.
Per your description, I wrote a simple demo for you of acquiring and breaking a lease; just try the code below:
using System;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Mvc;
using Azure.Storage.Blobs.Specialized;
using System.Threading;

namespace getSasTest.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class editBlob : ControllerBase
    {
        [HttpGet]
        public string get()
        {
            var connstr = "";
            var container = "";
            var blob = "";
            var blobClient = new BlobContainerClient(connstr, container).GetBlobClient(blob);
            var leaseClient = new BlobLeaseClient(blobClient);
            try
            {
                // The lease expires automatically after 15s.
                var duration = new TimeSpan(0, 0, 15);
                leaseClient.Acquire(duration, null);
            }
            // If some error occurs, the request ends here.
            catch (Azure.RequestFailedException e)
            {
                if (e.ErrorCode.Equals("LeaseAlreadyPresent"))
                {
                    return "Blob is under process, it will take some time, please try again later";
                }
                else
                {
                    return "some other Azure request errors: " + e.Message;
                }
            }
            catch (Exception e)
            {
                return "some other errors: " + e.Message;
            }
            // Mock time consumption to process the blob.
            Thread.Sleep(10000);
            // Break the lease explicitly if processing finishes within the 15s window.
            leaseClient.Break();
            return "Done";
        }
    }
}
So, based on your requirements, can't you authorize a party/cron/event (registering event/TTL handlers?) to break the lease when some weirdness (whatever that means for you) is detected? It looks like you are worried about the "pattern" more than about correctness; the pattern should complement the correctness.
In practice, an exception handling strategy should lead to sufficient actionable information.
For some, that means a chain:
E -> E-1 (digest or no digest) -> E-2 (digest or no digest) -> ...
such that:
E-n is an exception at a certain level of nesting;
"digest" asks whether you propagate the exception or digest it and move forward.
Breaking the lease (which is essentially the correctness of the program) should not mean you hamper your version of elegance.
Every service is usually a family of services:
the service itself
service clean-up handlers 1 through n
service edge-case handlers 1 through n
a primer for the service
and so on...
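For completeness, a minimal sketch of the straightforward guarantee (assuming the Azure.Storage.Blobs lease client shown in the other answer): release in finally, and rely on the short lease duration as the safety net when the process dies before finally can run:
var leaseClient = new BlobLeaseClient(blobClient);
leaseClient.Acquire(TimeSpan.FromSeconds(15), null);
try
{
    // process the blob here
}
finally
{
    // Runs on normal completion and on exceptions alike; if the process is
    // killed outright, the 15s lease simply expires on its own.
    leaseClient.Release();
}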
I'm trying to create a web app which does many things, but the one I'm currently focused on is the inbox count. I want to use an EWS StreamingSubscription so that I can get a notification for each event and return the total count of items in the inbox. How can I use this in MVC terms? I did find some code from a Microsoft tutorial that I was going to test, but I just couldn't figure out how to use it in the MVC world, i.e. what the model is going to be; if the model is the count, how does it get notified every time an event occurs on the Exchange server; and so on.
Here's the code I downloaded from Microsoft; I just couldn't understand how I can convert the count to JSON and push it to the client as soon as a change event occurs. NOTE: This code is unchanged, so it doesn't return a count yet.
using System;
using System.Linq;
using System.Net;
using System.Threading;
using Microsoft.Exchange.WebServices.Data;
namespace StreamingNotificationsSample
{
internal class Program
{
private static AutoResetEvent _Signal;
private static ExchangeService _ExchangeService;
private static string _SynchronizationState;
private static Thread _BackgroundSyncThread;
private static StreamingSubscriptionConnection CreateStreamingSubscription(ExchangeService service,
StreamingSubscription subscription)
{
var connection = new StreamingSubscriptionConnection(service, 30);
connection.AddSubscription(subscription);
connection.OnNotificationEvent += OnNotificationEvent;
connection.OnSubscriptionError += OnSubscriptionError;
connection.OnDisconnect += OnDisconnect;
connection.Open();
return connection;
}
private static void SynchronizeChangesPeriodically()
{
while (true)
{
try
{
// Get all changes from the server and process them according to the business
// rules.
SynchronizeChanges(new FolderId(WellKnownFolderName.Inbox));
}
catch (Exception ex)
{
Console.WriteLine("Failed to synchronize items. Error: {0}", ex);
}
// Since the SyncFolderItems operation is a
// rather expensive operation, only do this every 10 minutes
Thread.Sleep(TimeSpan.FromMinutes(10));
}
}
public static void SynchronizeChanges(FolderId folderId)
{
bool moreChangesAvailable;
do
{
Console.WriteLine("Synchronizing changes...");
// Get all changes since the last call. The synchronization cookie is stored in the _SynchronizationState field.
// Only the ids are requested. Additional properties should be fetched via GetItem calls.
var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
SyncFolderItemsScope.NormalItems, _SynchronizationState);
// Update the synchronization cookie
_SynchronizationState = changes.SyncState;
// Process all changes
foreach (var itemChange in changes)
{
// This example just prints the ChangeType and ItemId to the console
// LOB application would apply business rules to each item.
Console.Out.WriteLine("ChangeType = {0}", itemChange.ChangeType);
Console.Out.WriteLine("ChangeType = {0}", itemChange.ItemId);
}
// If more changes are available, issue additional SyncFolderItems requests.
moreChangesAvailable = changes.MoreChangesAvailable;
} while (moreChangesAvailable);
}
public static void Main(string[] args)
{
// Create new exchange service binding
// Important point: Specify Exchange 2010 with SP1 as the requested version.
_ExchangeService = new ExchangeService(ExchangeVersion.Exchange2010_SP1)
{
Credentials = new NetworkCredential("user", "password"),
Url = new Uri("URL to the Exchange Web Services")
};
// Process all items in the folder on a background-thread.
// A real-world LOB application would retrieve the last synchronization state first
// and write it to the _SynchronizationState field.
_BackgroundSyncThread = new Thread(SynchronizeChangesPeriodically);
_BackgroundSyncThread.Start();
// Create a new subscription
var subscription = _ExchangeService.SubscribeToStreamingNotifications(new FolderId[] {WellKnownFolderName.Inbox},
EventType.NewMail);
// Create new streaming notification connection
var connection = CreateStreamingSubscription(_ExchangeService, subscription);
Console.Out.WriteLine("Subscription created.");
_Signal = new AutoResetEvent(false);
// Wait for the application to exit
_Signal.WaitOne();
// Finally, unsubscribe from the Exchange server
subscription.Unsubscribe();
// Close the connection
connection.Close();
}
private static void OnDisconnect(object sender, SubscriptionErrorEventArgs args)
{
// Cast the sender as a StreamingSubscriptionConnection object.
var connection = (StreamingSubscriptionConnection) sender;
// Ask the user if they want to reconnect or close the subscription.
Console.WriteLine("The connection has been aborted; probably because it timed out.");
Console.WriteLine("Do you want to reconnect to the subscription? Y/N");
while (true)
{
var keyInfo = Console.ReadKey(true);
{
switch (keyInfo.Key)
{
case ConsoleKey.Y:
// Reconnect the connection
connection.Open();
Console.WriteLine("Connection has been reopened.");
break;
case ConsoleKey.N:
// Signal the main thread to exit.
Console.WriteLine("Terminating.");
_Signal.Set();
break;
}
}
}
}
private static void OnNotificationEvent(object sender, NotificationEventArgs args)
{
// Extract the item ids for all NewMail Events in the list.
var newMails = from e in args.Events.OfType<ItemEvent>()
where e.EventType == EventType.NewMail
select e.ItemId;
// Note: For the sake of simplicity, error handling is omitted here.
// Just assume everything went fine
var response = _ExchangeService.BindToItems(newMails,
new PropertySet(BasePropertySet.IdOnly, ItemSchema.DateTimeReceived,
ItemSchema.Subject));
var items = response.Select(itemResponse => itemResponse.Item);
foreach (var item in items)
{
Console.Out.WriteLine("A new mail has been created. Received on {0}", item.DateTimeReceived);
Console.Out.WriteLine("Subject: {0}", item.Subject);
}
}
private static void OnSubscriptionError(object sender, SubscriptionErrorEventArgs args)
{
// Handle error conditions.
var e = args.Exception;
Console.Out.WriteLine("The following error occured:");
Console.Out.WriteLine(e.ToString());
Console.Out.WriteLine();
}
}
}
I just want to understand the basic concept, as in what the model can be, and where I can use the other functions.
Your problem is that you are confusing a service (EWS) with your application's model. They are two different things. Your model is entirely in your control, and you can do whatever you want with it. EWS is outside of your control, and is merely a service you call to get data.
In your controller, you call the EWS service and get the count. Then you populate your model with that count, and in your view you render that model property. It's really that simple.
A web page has no state; it doesn't get notified when things change. You just reload the page and get whatever the current state is (i.e., whatever the current count is).
In more advanced applications, like single-page apps with Ajax, you might periodically query the service in the background. Or you might have a special notification service that uses something like SignalR to notify your SPA of a change, but those concepts are more advanced than where you currently are. You should probably develop your app as a simple stateless app first, then improve it with Ajax functionality or the like once you have a better grasp of things. A minimal sketch of that simple, stateless version follows.
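As a rough illustration (assuming classic ASP.NET MVC; credentials and the URL are the same placeholders as in the Microsoft sample, and binding the inbox folder just to read its TotalCount):
using System;
using System.Net;
using System.Web.Mvc;
using Microsoft.Exchange.WebServices.Data;

public class InboxController : Controller
{
    public ActionResult Count()
    {
        var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1)
        {
            Credentials = new NetworkCredential("user", "password"),
            Url = new Uri("URL to the Exchange Web Services")
        };
        // Bind to the inbox and read its item count; the count is the "model"
        // here, returned as JSON for the page to render.
        var inbox = Folder.Bind(service, WellKnownFolderName.Inbox);
        return Json(new { count = inbox.TotalCount }, JsonRequestBehavior.AllowGet);
    }
}
Each page load (or Ajax poll) simply re-queries the current count, which matches the stateless approach described above.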
That's a very broad question without a clear-cut answer. Your model could certainly have a "Count" property that you could update. The sample code you found would likely be used by your controller.