How do you integrate Application Insights into Service Fabric? - c#

I currently use Azure Application Insights for logging on all of my Web API and MVC applications. Obviously the majority of this logging is automatic, which is great. For events that I want to capture manually, I have implemented a "LoggingUtility" which has methods like "LogError" and "LogInformation" that simply call Trace.TraceError and Trace.TraceInformation (the thinking is that the implementation of logging could be changed in one place in the future). The Trace output is then captured by Application Insights.
I have started to develop some Stateful Services in Azure Service Fabric and cannot seem to find a way to use Application Insights. I have stumbled upon several articles pointing me towards a NuGet package that was in prerelease but has now been removed (https://www.nuget.org/packages/Microsoft.ServiceFabric.Telemetry.ApplicationInsights/).
Of course, the Service Fabric templates generate a "ServiceEventSource" class, but I cannot see how this would be useful for Application Insights, and ideally I want all logging to go through my "LoggingUtility" class.
Is it possible to integrate Application Insights into Service Fabric? If so, can I simply continue using Trace (via my "LoggingUtility" class)?

You have two options:
1. Using the Application Insights SDK in your LoggingUtility class to send information directly to AI (a sketch of this option follows at the end of this answer)
2. Using Windows Azure Diagnostics (WAD) to forward EventSource traces to AI, using the EventSource class provided in the SF project templates. You can modify that class to become your LoggingUtility implementation.
Considering that you are running your SF cluster in Azure, the second approach is the current recommendation, since the Service Fabric system services also emit their events through Event Tracing.
For configuring Azure Diagnostics to AI, follow the steps outlined in this article: https://azure.microsoft.com/en-us/blog/azure-diagnostics-integration-with-application-insights/
Be aware that the article targets Cloud Services and VMs; for Service Fabric, configure Azure Diagnostics on the Virtual Machine Scale Set instead of an individual VM. That should work.
The NuGet package is no longer supported: https://social.msdn.microsoft.com/Forums/en-US/f0f1ad78-4d83-48e5-b1da-4a9f0eddb9b2/application-insights-for-service-fabric?forum=AzureServiceFabric
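For option 1, here is a minimal sketch of what the "LoggingUtility" could look like when it talks to Application Insights directly through the SDK's TelemetryClient. The class shape and the use of TelemetryConfiguration.Active are illustrative assumptions, not the asker's actual code:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public static class LoggingUtility
{
    // Assumes the instrumentation key is supplied via ApplicationInsights.config
    // or set on TelemetryConfiguration.Active during service startup (classic SDK setup).
    private static readonly TelemetryClient Client =
        new TelemetryClient(TelemetryConfiguration.Active);

    public static void LogInformation(string message) =>
        Client.TrackTrace(message, SeverityLevel.Information);

    public static void LogError(string message, Exception exception = null)
    {
        Client.TrackTrace(message, SeverityLevel.Error);
        if (exception != null)
        {
            Client.TrackException(exception);
        }
    }
}
```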

We used the new Microsoft.Extensions.Logging and wrote an Application Insights logger; it picks up the Service Fabric messages via Trace. We also pulled out all the ETW stuff, as it doesn't add much.
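For reference, a minimal sketch of such a logger (assumed names, not the poster's actual code): an ILoggerProvider that forwards Microsoft.Extensions.Logging messages to Application Insights through TelemetryClient. Scopes and structured state are ignored to keep it short.

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Extensions.Logging;

public sealed class AppInsightsLoggerProvider : ILoggerProvider
{
    private readonly TelemetryClient _client;

    public AppInsightsLoggerProvider(TelemetryClient client) => _client = client;

    public ILogger CreateLogger(string categoryName) => new AppInsightsLogger(categoryName, _client);

    public void Dispose() => _client.Flush(); // push any buffered telemetry on shutdown

    private sealed class AppInsightsLogger : ILogger
    {
        private readonly string _category;
        private readonly TelemetryClient _client;

        public AppInsightsLogger(string category, TelemetryClient client)
        {
            _category = category;
            _client = client;
        }

        // Scopes are ignored in this sketch.
        public IDisposable BeginScope<TState>(TState state) => null;

        public bool IsEnabled(LogLevel logLevel) => logLevel != LogLevel.None;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            if (!IsEnabled(logLevel))
            {
                return;
            }

            if (exception != null)
            {
                _client.TrackException(exception);
            }

            // Map the log level to an Application Insights severity and forward the formatted message.
            var severity = logLevel >= LogLevel.Error ? SeverityLevel.Error
                         : logLevel == LogLevel.Warning ? SeverityLevel.Warning
                         : SeverityLevel.Information;
            _client.TrackTrace($"[{_category}] {formatter(state, exception)}", severity);
        }
    }
}
```

The provider can then be added to the logging pipeline at startup (for example via ILoggerFactory.AddProvider), after which existing ILogger calls flow to Application Insights.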

Related

Migrate from Azure Function app to Azure Container app

I have an ASP.NET Core REST API that puts input data onto an Azure Queue, and an Azure Function App with a trigger on the Azure Queue service. Whenever an entry is added to the queue, the Azure Function app is triggered, executes certain business functionality, and returns the response.
After going through this article: https://endjin.com/blog/2022/09/bye-bye-azure-functions-hello-azure-container-apps-part-2-migrating-from-azure-functions-to-asp-net-core, I am planning to migrate the Azure Function app to an Azure Container app with gRPC-based services.
I tried to explore a few samples (https://learn.microsoft.com/en-us/azure/container-apps/samples) but did not come across any good reference.
My challenge here is how to trigger the gRPC C# services whenever an entry is added to the Azure Queue service.
Can anyone help me here by providing some guidance?
Container Apps are built on top of KEDA, so you can use any of the auto-scalers it supports (storage queues is one of them) to scale your app, but you lose bindings when moving away from Azure Functions.
Since bindings are no longer available, you must use the Azure Storage Queues SDK directly in your code and call your gRPC service as you dequeue messages (a rough sketch follows below).
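Here is a rough sketch of that "SDK instead of bindings" approach: a background loop that drains an Azure Storage queue and calls a gRPC service for each message. The queue name, the service address, and the OrderProcessor.OrderProcessorClient / ProcessRequest types are hypothetical placeholders; the gRPC client would be generated from your own .proto contract.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;      // Azure.Storage.Queues NuGet package
using Grpc.Net.Client;           // Grpc.Net.Client NuGet package

public class QueueWorker
{
    public static async Task Main()
    {
        var queueClient = new QueueClient(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"),
            "input-queue"); // hypothetical queue name

        using var channel = GrpcChannel.ForAddress("https://my-grpc-service"); // hypothetical address
        var processor = new OrderProcessor.OrderProcessorClient(channel);      // hypothetical client generated from your .proto

        while (true)
        {
            // Pull up to 10 messages; they stay invisible for a minute while being processed.
            var messages = await queueClient.ReceiveMessagesAsync(
                maxMessages: 10, visibilityTimeout: TimeSpan.FromMinutes(1));

            foreach (var message in messages.Value)
            {
                // Hand the payload to the gRPC service (ProcessRequest is a hypothetical message type).
                await processor.ProcessAsync(new ProcessRequest { Payload = message.Body.ToString() });

                // Only delete the message once it has been processed successfully.
                await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }

            if (messages.Value.Length == 0)
            {
                await Task.Delay(TimeSpan.FromSeconds(5)); // back off while the queue is empty
            }
        }
    }
}
```

KEDA's azure-queue scaler can then watch the same queue, so Container Apps spins up more replicas as the backlog grows and scales back down when it drains.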
Container Apps are useful for HTTP-triggered functions, since you can use HTTP frameworks like ASP.NET Core and leverage their complete feature set: built-in authentication, middleware, and so on.
For other bindings, unless you have lots of custom code that needs to run beyond the limits of Azure Functions, or you are trying to convert an existing non-Azure-Functions app to run serverless, you are likely better off using Azure Functions, since most of the service-level binding code is taken care of for you, reducing maintenance effort.
Obviously, if there is no binding support for your auxiliary service like IBM MQ or ActiveMQ, then you would want to use Container Apps instead.

Google Cloud Tracing in a .net 6 (or at least .net core 3+) Console App

I've been trying to implement Cloud Trace in a .net 6 Console App that works as a listener for events coming through a Publisher Subscriber pattern.
I have been following Google's docs for their Diagnostics NuGet (https://cloud.google.com/dotnet/docs/reference/Google.Cloud.Diagnostics.AspNetCore/latest) but with no success. When I apply this in a Web API project it works like a charm, but using it in the Console App posts nothing to GCP Cloud Trace.
I'm trying to use the IManagedTracer.StartSpan() method to force the code to start a trace span and send something to GCP but nothing happens. No error and no trace.
I've also tried doing this using https://cloud.google.com/dotnet/docs/reference/Google.Cloud.Diagnostics.Common/latest, which, from what I understood by looking at the code, is basically the underlying library of Google.Cloud.Diagnostics.AspNetCore, which wraps it and sets up some extra pieces through IoC.
Does anyone know of any sample projects using Google Cloud Trace and .NET? Or am I clearly missing something basic?
Just for context: I'm running this locally, connected to the GCP project with the projectId hardcoded, my account authenticated on the machine, and the Owner role assigned to it, so permissions should not be an issue. My machine can post traces, because I've successfully pushed traces from a sample Web API I built to test this out first.
https://github.com/googleapis/google-cloud-dotnet/issues/6367#issuecomment-903852079
This GitHub thread pretty much answers my question and provides a solution.
The Google packages do not support console apps out of the box, so we have to do some work to set up the tracing manually in code.

Azure Bot logging

I have a Bot configured for Teams and hosted in Azure. A couple of Graph API requests are sent from the Bot. I would like to log the time between request and response, along with any related exceptions thrown. Where is this log best kept for an Azure-hosted Bot?
This can be done automatically when you choose to integrate Application Insights. See this guide:
Application Insights helps you get actionable insights through application performance management (APM) and instant analytics. Out of the box you get rich performance monitoring, powerful alerting, and easy-to-consume dashboards to help ensure your Bot is available and performing as you expect. You can quickly see if you have a problem, then perform a root cause analysis to find and fix the issue.
You can also keep track of bot messages analytics using Application Insights (source):
Analytics is an extension of Application Insights. Application Insights provides service-level and instrumentation data like traffic, latency, and integrations. Analytics provides conversation-level reporting on user, message, and channel data.
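As a concrete illustration of the kind of manual tracking you can layer on top of the automatic telemetry, here is a hedged sketch that times a Graph request and reports it to Application Insights as a dependency. The HttpClient call and the injected TelemetryClient are assumptions about your bot's setup, not prescribed Bot Framework wiring:

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;

public class GraphLogger
{
    private readonly TelemetryClient _telemetry;
    private readonly HttpClient _http;

    public GraphLogger(TelemetryClient telemetry, HttpClient http)
    {
        _telemetry = telemetry;
        _http = http;
    }

    public async Task<string> GetUserAsync()
    {
        var start = DateTimeOffset.UtcNow;
        var timer = Stopwatch.StartNew();
        try
        {
            var response = await _http.GetAsync("https://graph.microsoft.com/v1.0/me");
            timer.Stop();

            // Records the round-trip time of the Graph request as a dependency in Application Insights.
            _telemetry.TrackDependency("HTTP", "graph.microsoft.com", "GET /v1.0/me",
                start, timer.Elapsed, response.IsSuccessStatusCode);

            return await response.Content.ReadAsStringAsync();
        }
        catch (Exception ex)
        {
            timer.Stop();
            _telemetry.TrackException(ex); // exceptions show up alongside the dependency data
            throw;
        }
    }
}
```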
The easiest way (if you're hosting it on Azure App Service) is to enable application logging and then use the log streaming service. Of course, in your app you would need to use the Trace class and log the events manually (see the sketch below).
Check: https://learn.microsoft.com/en-us/azure/app-service/web-sites-enable-diagnostic-log
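And a tiny sketch of that manual Trace approach (the callGraphAsync delegate is a hypothetical stand-in for your actual Graph request): time the call with a Stopwatch and write the result with System.Diagnostics.Trace so it shows up in the streamed logs once application logging is enabled.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public class GraphCallLogger
{
    public async Task CallGraphWithLoggingAsync(Func<Task> callGraphAsync) // wraps your actual Graph request
    {
        var timer = Stopwatch.StartNew();
        try
        {
            await callGraphAsync();
            timer.Stop();
            Trace.TraceInformation($"Graph call completed in {timer.ElapsedMilliseconds} ms");
        }
        catch (Exception ex)
        {
            timer.Stop();
            Trace.TraceError($"Graph call failed after {timer.ElapsedMilliseconds} ms: {ex}");
            throw;
        }
    }
}
```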

Is uploading / updating Web.config a good way to change trace level of System.Diagnostic tracing?

Generally, I would inject TraceListeners and adjust the trace level through app.config and Web.config. I understand that IIS restarts the web app after Web.config is updated: the last HTTP request completes, and new HTTP requests are held until the new instance is created. I have been doing this for years with no problem.
However, if I deploy the web app to Azure managed services, or I have many (clustered) instances of the web app, I am not sure whether updating/uploading Web.config to each instance is still good practice. Is there an alternative or better method to change the trace level for System.Diagnostics TraceListeners?
And what if I deploy to AWS or a similar provider for clustered services?
You got it right: updating/uploading Web.config to each instance is not wrong, but it is a tedious and error-prone approach.
Instead, I would recommend going with Application Insights, an extensible analytics service that monitors your live web application.
Just install a small instrumentation package in your application and set up an Application Insights resource in the Microsoft Azure portal.
The performance impact is minimal, as tracking calls are non-blocking, batched, and sent on a separate thread.
Telemetry types such as exception traces from both server and client, diagnostic log traces, and many more help you understand how your app is performing and how it's being used.
You can also perform diagnostic search on instances of requests, exceptions, custom events, log traces, page views, dependencies, and AJAX calls.
For more information, read: Application Insights - introduction
Thanks,
Kasam Shaikh
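If you would rather keep the existing System.Diagnostics tracing while adopting Application Insights, one hedged option is the Microsoft.ApplicationInsights.TraceListener package: add its listener in code (the bootstrap class below is illustrative, and TelemetryConfiguration.Active is the classic, non-Core setup) and the traces flow to the portal without per-instance Web.config edits.

```csharp
using System.Diagnostics;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.TraceListener; // Microsoft.ApplicationInsights.TraceListener NuGet package

public static class TracingBootstrap
{
    public static void Configure(string instrumentationKey)
    {
        // Classic (non-Core) setup: the active configuration is shared process-wide.
        TelemetryConfiguration.Active.InstrumentationKey = instrumentationKey;

        // Existing Trace.TraceInformation/TraceError calls now also flow to Application Insights,
        // so the verbosity you see in the portal no longer depends on per-instance Web.config edits.
        Trace.Listeners.Add(new ApplicationInsightsTraceListener());
    }
}
```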

How do I deploy an app with multiple components to Azure?

So my application is composed of a handful of separate .NET components that all run in Azure. To give you an idea of what's involved:
A main ASP.NET MVC5/Web API 2 REST service that runs as an Azure website (I think they renamed these to web apps?).
A SQL database that the main REST service uses.
Another internal Web API REST service that the main REST service talks to that runs as an Azure website.
An Azure storage table that the internal Web API REST service uses.
Three scheduled jobs (just .NET exes) that do work in the background and also talk to the main SQL database.
All that's running great in Azure right now. My problem is automating the deployment and configuration.
Right now it's all manual. I right-click and publish both web apps from Visual Studio. I build and FTP up the web jobs. The database and Azure storage already exist so I don't have to re-set them up.
But say something bad happens - a datacenter goes down or something. I'd like to be able to spin up a new version of my app (with all those components) that is ready to go with minimal effort.
I'm pretty new to the world of Azure so I'm not sure where to start. What are my options?
You are looking to automate deployments in Azure. I recommend using ElasticBox to solve this.
To achieve the automation you will need to create a box for every service or component you need to deploy (a box is the abstraction unit ElasticBox uses to define the installation and configuration of a service or application deployment in any cloud).
You can also create boxes based on VM instances, VM roles, or worker roles, and automate the deployment of Microsoft SQL Server; nearly every option offered by Azure is covered.
Then, with those boxes completed (they can be customized and can reuse the scripts from your previous manual installation), you can deploy the multiple VMs with almost no manual intervention: just one click, or a command with some parameters.
A box includes the variables necessary for your deployment (you can set default values for them) and your existing scripts (in this case probably PowerShell, but they could be Bash, Python, Perl, Java, or any other language).
When you deploy your boxes, ElasticBox:
Creates a Cloud Service or VM in the location you choose and with the Azure configuration you set up; it takes care of provisioning the VM in Azure, or in nearly any other cloud provider on the market.
Installs your components, configures files with the variables you specified, and starts the SQL or web services you have defined.
Other ways to interact with the service:
A Jenkins plugin can be used to build a CI environment that connects code updates or pull requests to automated deployments in Azure or any other public cloud.
A command-line tool lets you deploy your boxes to VMs and also manage the deployed VM instances.
Azure Resource Manager (ARM) is intended to solve exactly the issues you described.
The basic idea is that you use a JSON template to describe all your services. You can then give that template to ARM and it will create the services as defined in the template. If you want to make a change, instead of doing it imperatively (via powershell or manually in the portal) just update your template, pass it to ARM and it will make whatever changes are necessary to make the services match your template.
Some resources:
ARM talk at MS Ignite 2015
ARM template language reference
Quickstart templates on GitHub
Azure Resource Explorer - view ARM templates of existing resources
Resource Group Deployment Projects in Visual Studio
I think you're looking for something to help you handle deployments to your Azure servers. If that is the case, I recommend looking into Jenkins CI. There are many resources available online on making Jenkins and Azure work together.
