Can't Seem to See Custom Telemetry in App Insights - c#

I tried adding custom telemetry per the docs (https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-telemetry?view=azure-bot-service-4.0).
I am missing something because I cannot find my custom event in the App Insights logs.
I tried interacting with the bot and searching the App Insights logs for "VeryImportantProperty" and "VeryImportantValue".
I wrote this class:
public class TelemetryMiddleware : TelemetryLoggerMiddleware
{
    public TelemetryMiddleware(IBotTelemetryClient telemetryClient, bool logPersonalInformation)
        : base(telemetryClient, logPersonalInformation)
    {
    }

    protected override async Task OnReceiveActivityAsync(Activity activity, CancellationToken cancellation)
    {
        Dictionary<string, string> propertyItems = new Dictionary<string, string>
        {
            { "VeryImportantProperty", "VeryImportantValue" }
        };
        var properties = await FillReceiveEventPropertiesAsync(activity, propertyItems);
        TelemetryClient.TrackEvent(TelemetryLoggerConstants.BotMsgReceiveEvent, properties);
    }
}
I added it in startup.cs as a service available for injection:
services.AddSingleton<IMiddleware, TelemetryMiddleware>();
I also added all the other items named in the article required as injectable services.
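For reference, the telemetry-related registrations that article describes look roughly like this in Startup.ConfigureServices. This is only a sketch based on the Bot Framework 4.x Application Insights packages; check the linked article for the exact list for your SDK version.

public void ConfigureServices(IServiceCollection services)
{
    // Telemetry-related registrations (class names from the Bot Framework 4.x Application Insights packages).
    services.AddApplicationInsightsTelemetry();
    services.AddSingleton<IBotTelemetryClient, BotTelemetryClient>();
    services.AddSingleton<ITelemetryInitializer, OperationCorrelationTelemetryInitializer>();
    services.AddSingleton<ITelemetryInitializer, TelemetryBotIdInitializer>();
    services.AddSingleton<TelemetryInitializerMiddleware>();
    services.AddSingleton<IMiddleware, TelemetryMiddleware>();  // the custom middleware from above

    // ...plus the usual adapter/bot registrations (IBotFrameworkHttpAdapter, IBot, and so on).
}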
I deployed the bot and interacted with it, but I cannot find my VeryImportantValue or property even after a full search of my App Insights logs.
I’m sure I am missing something, but from the docs, I cannot determine what it is.
Any ideas or pointers in the right direction?

You should be able to see these events by going to Azure Portal > All Resources > Application Insights Resource > Overview page > Logs (Analytics) which is along the top, above the details for the Application Insights resource.
Then if you enter the following for your query:
customEvents
| where name == "BotMessageReceived"
and click Run (you may have to select the query text you entered before clicking Run).
Your VeryImportantProperty data should show under the customDimensions column.
The getting started information is available here.
Edit
If you still cannot see the log entries then you will need to debug where the issue is. The steps I would recommend are:
Get the latest version of the Bot Framework Emulator.
Update your TelemetryMiddleware class to add the following field: private IBotTelemetryClient _telemetryClient;
Update your TelemetryMiddleware constructor to assign the value from the telemetryClient parameter to your new _telemetryClient field.
Update the call inside OnReceiveActivityAsync to use the new _telemetryClient field instead of TelemetryClient, so that TrackEvent definitely goes through the injected client instance (see the sketch after these steps).
Run your bot locally using the Bot Framework Emulator.
Add a breakpoint on the line where you call TrackEvent
Create the scenario which should trigger OnReceiveActivityAsync (send a message to the bot).
Use F10 to step over the TrackEvent line and ensure that it is called successfully.
At this stage I would also inspect your variables to ensure they have the values that you expect.
Wait for the event to flow through to App Insights (might take up to 5 minutes).
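A minimal sketch of the field, constructor, and TrackEvent changes described above, applied to the class from the question:

public class TelemetryMiddleware : TelemetryLoggerMiddleware
{
    // Keep a reference to the injected client so TrackEvent explicitly goes through it.
    private readonly IBotTelemetryClient _telemetryClient;

    public TelemetryMiddleware(IBotTelemetryClient telemetryClient, bool logPersonalInformation)
        : base(telemetryClient, logPersonalInformation)
    {
        _telemetryClient = telemetryClient;
    }

    protected override async Task OnReceiveActivityAsync(Activity activity, CancellationToken cancellation)
    {
        var propertyItems = new Dictionary<string, string>
        {
            { "VeryImportantProperty", "VeryImportantValue" }
        };
        var properties = await FillReceiveEventPropertiesAsync(activity, propertyItems);
        _telemetryClient.TrackEvent(TelemetryLoggerConstants.BotMsgReceiveEvent, properties); // put the breakpoint here
    }
}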
If this still does not work, I would create a new Application Insights API key and update the following places with the new value:
For local testing:
Your appsettings.json file so that you can test locally.
For production:
The Application Insights Instrumentation key under the Settings tab of your Web App Bot in Azure.
Also check that the value for Application Insights Application Id under the Settings tab of your Web App Bot in Azure matches the Application Id value under the API Access tab of your Application Insights resource.
Follow the steps above to test locally using the emulator.
Once the logs are flowing through locally, use the Test in Web Chat functionality to ensure that it is working in production.

Related

Azure: How to write and read custom log messages in ASP.NET Core application in Azure?

I want to achieve the following:
Have custom log statements in my ASP.NET Core web service application.
Deploy my application to Azure (in my case using Pulumi).
Call the webservice so it triggers the logging code.
Read the logged messages, either programmatically or via the Azure browser-based GUI.
I am targeting .NET 5.0.
In my code I do something like this:
public class MyController : ControllerBase
{
    private readonly ILogger<MyController> _logger;

    public MyController(ILogger<MyController> logger) => _logger = logger;

    public async Task<ActionResult<something>> DoStuff()
    {
        _logger.LogInformation("Hello, World!");
        ...
    }
}
My Pulumi code contains this:
var app = new AppService(
    "kmsApp",
    new AppServiceArgs
    {
        Logs = new AppServiceLogsArgs
        {
            ApplicationLogs = new AppServiceLogsApplicationLogsArgs { FileSystemLevel = "Error" },
            DetailedErrorMessagesEnabled = true,
            FailedRequestTracingEnabled = true,
            HttpLogs = new AppServiceLogsHttpLogsArgs
            {
                FileSystem = new AppServiceLogsHttpLogsFileSystemArgs { RetentionInDays = 1, RetentionInMb = 35 }
            }
        }
    },
    ...);
With the above, when I am running my application in debug mode in Visual Studio, I can see the log messages in the Output pane. So the logging code definitely gets triggered. But when I deploy my application to Azure, I don't know how to get the log messages, and I find the Azure GUI confusing.
What I am struggling with is this:
What configuration do I need to do in my code - e.g. NuGet packages or stuff in my Program and Startup classes?
What configuration do I need to do in Azure?
Where in the Azure browser-based GUI do I go to see these log messages?
How can I fetch these logs programmatically (either via Pulumi or the raw Azure API)?
I have looked for documentation, of course, but I find the documentation labyrinthine. Most of it seems to be about diagnostics such as response time. I just want to view my own custom log messages from my code...
Posts like this one give some hints, but after reading the thread it is still nebulous to me how to read the logs: ASP.NET Core trace logging on Azure with Application Insights
There probably exists good documentation and guides. Please help me find them.
Thanks in advance!
You have several options:
Use the az webapp log config command (Azure CLI, which you can run from PowerShell) to configure your application to write logs to the file system
Use the az webapp log tail command to stream the logs in real time
Configure the application logging manually from the Azure Portal
Enable Application Insights for the App Service (a minimal code-side setup is sketched after this list)
To download the logs, use the az webapp log download command or connect to the log directory over FTP
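For the Application Insights option, the code-side setup is roughly the following. This is a sketch assuming the Microsoft.ApplicationInsights.AspNetCore NuGet package and an instrumentation key or connection string supplied through configuration; the names are illustrative, not taken from your project.

// Startup.cs (ASP.NET Core / .NET 5)
public void ConfigureServices(IServiceCollection services)
{
    // Registers Application Insights and hooks it into ILogger<T>.
    // The instrumentation key / connection string is read from configuration,
    // e.g. the APPINSIGHTS_INSTRUMENTATIONKEY app setting on the App Service.
    services.AddApplicationInsightsTelemetry();
    services.AddControllers();
}

Note that the Application Insights logger provider only forwards Warning and above by default; to see LogInformation calls in the traces table, raise the "Logging:ApplicationInsights:LogLevel:Default" value to "Information" in appsettings.json.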
I think I figured out what was missing. I did two things.
First thing was to change my error level in Pulumi from "Error" to "Verbose":
ApplicationLogs = new AppServiceLogsApplicationLogsArgs { FileSystemLevel = "Verbose" },
The other thing was to install a site extension:
Go to app service in Azure.
In the left-hand menu under Monitoring go to App Service logs.
Click the banner that says Click here to install the ASP.NET Core site extensions to enable Application Logging.
After this I was able to see logs by running az webapp log tail as suggested by Igor.
Now I just need to figure out how to do this programmatically with Pulumi.

Azure error: DefaultAzureCredential authentication failed

I am working on the official Azure sample: Getting started - Managing Compute Resources using Azure .NET SDK. I am getting the following error on the line resourceGroup = await resourceGroups.CreateOrUpdateAsync(resourceGroupName, resourceGroup); of the code below, where the app is trying to create a resource group. I have followed the instructions for registering an app from the link provided by the sample, and have assigned a role to the app as follows:
Error:
Azure.Identity.AuthenticationFailedException
HResult=0x80131500
Message=DefaultAzureCredential authentication failed.
Source=Azure.Identity
Inner Exception 2:
MsalServiceException: AADSTS70002: The client does not exist or is not enabled for consumers. If you are the application developer, configure a new application through the App Registrations in the Azure Portal
static async Task Main(string[] args)
{
    var subscriptionId = Environment.GetEnvironmentVariable("AZURE_SUBSCRIPTION_ID");
    var resourceClient = new ResourcesManagementClient(subscriptionId, new DefaultAzureCredential());

    // Create Resource Group
    Console.WriteLine("--------Start create group--------");
    var resourceGroups = resourceClient.ResourceGroups;
    var location = "westus2";
    var resourceGroupName = "QuickStartRG";
    var resourceGroup = new ResourceGroup(location);
    resourceGroup = await resourceGroups.CreateOrUpdateAsync(resourceGroupName, resourceGroup);
    Console.WriteLine("--------Finish create group--------");

    // Create a Virtual Machine
    await Program.CreateVmAsync(subscriptionId, "QuickStartRG", location, "quickstartvm");

    // Delete resource group if necessary
    //Console.WriteLine("--------Start delete group--------");
    //await (await resourceGroups.StartDeleteAsync(resourceGroupName)).WaitForCompletionAsync();
    //Console.WriteLine("--------Finish delete group--------");
    //Console.ReadKey();
}
UPDATE:
As per the instructions in the sample, the following is how I used the portal to create an Azure AD application and service principal that can access resources. I may not have done something right here; please let me know what I am missing:
Role Assignment for the registered app in Access Control (IAM):
Authentication and Redirect URI:
API Permissions for the Registered App:
UPDATE-2:
Working with @JoyWan, I was able to resolve the issue (thank you, Joy). Below is a screenshot of the successful creation of all required compute resources, including the VM.
I tested the code and it works fine on my side. The steps you mentioned are also correct.
In this sample, DefaultAzureCredential() effectively uses EnvironmentCredential() when running locally, so if you run the code locally, make sure you have set the environment variables with the AD app's Client ID, Client Secret, and Tenant ID.
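For reference, these are the variables EnvironmentCredential looks for (placeholder values shown; setting them in code like this only affects the current process, so in practice define them at user or machine level, or in launchSettings.json, before starting the app):

// Environment variables read by Azure.Identity's EnvironmentCredential
// (one of the credentials DefaultAzureCredential tries). Values are placeholders.
Environment.SetEnvironmentVariable("AZURE_TENANT_ID", "<your-tenant-id>");
Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", "<your-app-registration-client-id>");
Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", "<your-client-secret>");
// The sample's Main method above also reads this one:
Environment.SetEnvironmentVariable("AZURE_SUBSCRIPTION_ID", "<your-subscription-id>");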
Update:
From @nam's comment, the issue was that the environment variables had not been refreshed: he had shut down the machine yesterday and, after restarting it today, the environment variables were picked up and the app started working.

Detect when a Windows service has been deleted

Is there a way to detect when a Windows service has been deleted? I've checked the event log, but it doesn't pick up delete actions, only additions.
I believe there may be a way using audit logs, but I'm unsure how to do this.
Any help is much appreciated.
Thanks
While there is no trace of service deletion in the Event or Audit logs, what you can do is create a small console app that detects whether a service exists, and attach it to the Windows Task Scheduler, scheduled to run on a frequency or trigger you customize to your requirements, so that you receive an alert when a service has been added or removed.
The console app is designed such that on the first run it logs all the services on the system, and on subsequent runs it tracks changes made to the services via servicesRemoved and servicesAdded; with this we can decide what action to take when a service has been modified.
Console App: ServiceDetector.exe
// Requires: using System; using System.IO; using System.Linq; using System.ServiceProcess;
static void Main(string[] args)
{
    var path = @"C:\AdminLocation\ServicesLog.txt";
    var currentServiceCollection = ServiceController.GetServices().Select(s => s.ServiceName).ToList(); // Queries the current services on the machine

    if (!File.Exists(path)) // Creates a log file with the current services if not present, usually means the first run
    {
        // Assumption made is that this is the first run
        using (var text = File.AppendText(path))
        {
            currentServiceCollection.ForEach((s) => text.WriteLine(s));
        }
        return;
    }

    // Fetches the recorded services from the log
    var existingServiceCollection = File.ReadAllLines(path).ToList();
    var servicesRemoved = existingServiceCollection.Except(currentServiceCollection).ToList();
    var servicesAdded = currentServiceCollection.Except(existingServiceCollection).ToList();

    if (!servicesAdded.Any() && !servicesRemoved.Any())
    {
        Console.WriteLine("No services have been added or removed");
        return;
    }

    // If any services have been added
    if (servicesAdded.Any())
    {
        Console.WriteLine("One or more services have been added");
        using (var text = File.AppendText(path))
        {
            servicesAdded.ForEach((s) => text.WriteLine(s));
        }
        return;
    }

    // Service(s) may have been deleted; you can choose to record it or not based on your requirements
    Console.WriteLine("One or more services have been removed");
}
Scheduling Task
Windows Start > Task Scheduler > Create Basic Task > Set Trigger > Attach your exe > Finish
You're right that deleting a Windows service doesn't cause an event to be added to the System event log (source: https://superuser.com/questions/1238311/how-can-we-detect-if-a-windows-service-is-deleted-is-there-an-event-log-id-for-i).
AFAIK there's no audit policy to audit the deletion of a service, and if there were, I think it would be listed here: https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/basic-audit-process-tracking
I assume polling ServiceController.GetServices() is out of the question because your program might not be running when the service is uninstalled?
There are lots of ways to build instrumentation, until you learn what constitutes good instrumentation. My how-to is essentially taken directly from the Wikipedia entry https://en.wikipedia.org/wiki/Instrumentation.
Instrumentation How-to
http://www.powersemantics.com/e.html
Non-integrated
Primary data only
Pull not push
Organized by process
Never offline
The solution to the problem of measuring indicators exists, but you're stuck conceptualizing how to also have "push-based" instrumentation signal another system. As my E article explains, instruments should always pull data never push it. Event-driven signalling is a potential point of failure you don't need.
To clear up any indecisiveness or doubts you may have about building a separate application, monitors are normally independent (non-integrated as Wikipedia says) processes. So saying your monitor "might not be running" means you have not chosen to build a real non-integrated monitor, one which is always on. Your consumer system doesn't correctly model instrumentation, because it integrates the check in its own process.
Separate these responsibilities and proceed. Decide how often the instrument should reasonably poll for deleted services and poll the data with a timer. If you use the API call simon-pearson suggested, you can also detect when services have been added. Of course, the monitor needs to locally cache a copy of the service list so that indicators can infer what's been added or removed.
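A minimal sketch of such a poller, with a hypothetical cache path and interval (ServiceController lives in System.ServiceProcess; on .NET Core/.NET 5+ it comes from the System.ServiceProcess.ServiceController package):

using System;
using System.IO;
using System.Linq;
using System.ServiceProcess;
using System.Threading;

class ServiceMonitor
{
    static void Main()
    {
        var cachePath = @"C:\AdminLocation\ServicesLog.txt";   // hypothetical cache location
        var interval = TimeSpan.FromMinutes(5);                // polling frequency is up to you

        while (true)
        {
            var current = ServiceController.GetServices().Select(s => s.ServiceName).ToList();
            if (File.Exists(cachePath))
            {
                var previous = File.ReadAllLines(cachePath).ToList();
                previous.Except(current).ToList().ForEach(s => Console.WriteLine($"Service removed: {s}"));
                current.Except(previous).ToList().ForEach(s => Console.WriteLine($"Service added: {s}"));
            }
            File.WriteAllLines(cachePath, current);            // refresh the cached snapshot
            Thread.Sleep(interval);
        }
    }
}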

Authorization exception with Azure using C# and Microsoft.Azure.Management.Fluent

I am a student and I am currently trying to learn the Azure platform and how to use the C# libraries to manage it.
I was able to create and delete blobs and files with no problem using the WindowsAzure.Storage package.
Then I wanted to list VMs using this tutorial: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/csharp
This is my code:
var credentials = SdkContext.AzureCredentialsFactory.FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION"));

var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();

Console.WriteLine(azure.VirtualMachines.List().Count());
My connection file looks like this:
subscription=********-****-****-****-************
client=********-****-****-****-************
key=qeFkWjPm0YHn5xw8UMS2ytLhf9Oi0rEMxZVOTpk3aMQ=
tenant=********-****-****-****-************
managementURI=https://management.core.windows.net/
baseURL=https://management.azure.com/
authURL=https://login.windows.net/
graphURL=https://graph.windows.net/
But I get this error:
Unhandled Exception: Microsoft.Rest.Azure.CloudException: The client '********-****-****-****-************' with object id '********-****-****-****-************' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/********-****-****-****-************'.
There are two weird things I noticed:
1 - In the exception message, the client ID and object ID are the same, and I don't know where they come from.
2 - When I create a new application registration in Azure AD, I can't see it in "My apps" but only in "All apps", and I can't add myself as an owner.
I have searched for two days and I can't figure out why it's not working.
(This is a simple .NET Core 2 console project; I am on Linux, in case that helps.)
Thank you in advance.
Edit #1 :
Thanks to @juunas, it's working now.
Help link: https://learn.microsoft.com/en-US/azure/azure-resource-manager/resource-group-create-service-principal-portal#assign-application-to-role
To read details about a VM, the application should have the Reader role on the VM, its resource group, or the subscription. If you need to modify things, Contributor allows all modifications. You should add the application to a role via the Access control (IAM) tab on the subscription/resource group/resource.

Azure Application Insights custom response metric

I need some help to find a good pattern for a custom application insights metric.
Environment
I have a custom Windows Service running on multiple Azure VMs.
I can successfully add events to my monitoring instance on Azure.
Goal
I want to create a custom metric that allows me to monitor whether my Windows services are running and responding, per instance. It would be perfect if it acted like the response timeout in the website metric.
Each service instance has a custom machine-related identifier, like:
TelemetryClient telemetry = new TelemetryClient();
telemetry.Context.Device.Id = FingerPrint.Instance;
Now I want to create an alert if one of my service instances (Context.Device.Id) is not running or responding.
Question
How can I achieve this?
Is it even possible or useful to monitor multiple instances of one service type inside a single Application Insights resource, or must I create a separate Application Insights resource per instance?
Can anybody help me?
Response to Paul's answer
Track Metric Use TrackMetric to send metrics that are not attached to particular events. For example, you could monitor a queue length at regular intervals.
If I do so, what happens if my server restarts (for an update or something) and my service doesn't start up? Then the service never sends a TrackMetric to Application Insights, and no alert is raised because the value doesn't drop below 1, but the service is still not running.
Regards Steffen
I found a good working solution, with only a few simple steps.
1) Implement an HttpListener instance in the service on a port (for example 8181) that returns a simple text response, "200: OK"
2) Add a matching endpoint to the Azure VM instance
3) Create a default web test on "myVM.cloudapp.net:8181" that checks for the response text
Works great so far and matches all my needs! :) A sketch of the listener from step 1 is below.
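A minimal sketch of such a listener (the port and response text are just the values from the steps above; running it usually requires a URL ACL reservation or administrator rights):

using System;
using System.Net;
using System.Text;

class HealthEndpoint
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8181/");            // port from step 1
        listener.Start();

        while (true)
        {
            var context = listener.GetContext();            // blocks until a request arrives
            var payload = Encoding.UTF8.GetBytes("200: OK"); // response text the web test checks for
            context.Response.StatusCode = 200;
            context.Response.ContentLength64 = payload.Length;
            context.Response.OutputStream.Write(payload, 0, payload.Length);
            context.Response.OutputStream.Close();
        }
    }
}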
Per the documentation on Azure portal:
https://azure.microsoft.com/en-us/documentation/articles/app-insights-api-custom-events-metrics/#track-metric
Track Metric
Use TrackMetric to send metrics that are not attached to particular events. For example, you could monitor a queue length at regular intervals.
Metrics are displayed as statistical charts in metric explorer, but unlike events, you can't search for individual occurrences in diagnostic search.
Metric values should be >= 0 to be correctly displayed.
The C# code looks like this:
private void Run()
{
    var appInsights = new TelemetryClient();
    while (true)
    {
        Thread.Sleep(60000);
        appInsights.TrackMetric("Queue", queue.Length);
    }
}
I don't think there is currently a good way to accomplish this. What you're actually looking for is a way to detect a "stale heartbeat." For example, if your service was sending up an event "Service Health is okay", you'd want an alert that you haven't received one of those events in a certain amount of time. There aren't any date/time conditional operators in AI's alert system.
Microsoft might explain that this scenario is not intended to be satisfied by AI, as this is more of a "health checking" system's responsibility, like SCOM or Operation Insights or something else entirely.
I agree this is something that needs a solution, and using AI for it would be wonderful (I've already attempted to accomplish the same thing with no luck); I just think "they" will say it's not a scenario in the realm of responsibility for AI.
