How to read logs directly from Elasticsearch without using Kibana - C#

I have an ASP.NET Core Web API written in C# with docker-compose, Elasticsearch, Serilog, and Kibana. I plan on removing Kibana from the docker-compose.yml file. Serilog generates the log files, and I have configured an Elasticsearch sink so the logs are written where Elasticsearch can read them. How do I go about reading the logs that are now in Elasticsearch without having to go to Kibana to view and read them?
Are there any recommendations for documentation and/or a package for this, or is this something that needs to be programmed from scratch?
Suggestion attempt:
I went to download Kafka, then opened PowerShell as an administrator and did a wget (url). After it downloaded, I ran tar -xzf kafka_2.13-2.8.0.tgz and cd kafka_2.13-2.8.0. I then followed what you advised to activate the Zookeeper broker and Kafka and then create the topic. However, for each step you told me to do, nothing happened. When I tried to activate Zookeeper, it would ask me how I want to open the file, so I would just hit ESC and run the other commands, but the same thing would come up. Should it be doing that?

You can use one of the two official Elasticsearch clients for .NET.
There is a low-level and a high-level client; you can read more about the difference and how to use each one in the official documentation.
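As a minimal sketch with the high-level client (NEST), assuming the Serilog Elasticsearch sink writes to indices matching logstash-* with the usual @timestamp, level, and message fields (adjust the index pattern, field names, and URL to your setup), you could read the most recent error logs like this:

using System;
using Nest;

class LogEvent
{
    [PropertyName("@timestamp")]
    public DateTime Timestamp { get; set; }
    public string Level { get; set; }
    public string Message { get; set; }
}

class Program
{
    static void Main()
    {
        // Connect to the Elasticsearch instance from docker-compose.
        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .DefaultIndex("logstash-*");
        var client = new ElasticClient(settings);

        // Fetch the 50 most recent events with level "Error".
        var response = client.Search<LogEvent>(s => s
            .Size(50)
            .Sort(ss => ss.Descending("@timestamp"))
            .Query(q => q.Match(m => m.Field(f => f.Level).Query("Error"))));

        foreach (var doc in response.Documents)
            Console.WriteLine($"{doc.Timestamp:o} [{doc.Level}] {doc.Message}");
    }
}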

Make use of log4net as the log provider together with its Kafka_Appender. The appender produces your application logs, at every level, to a Kafka topic; consumers read from that topic, and Logstash then ingests the logs into your Elasticsearch index as its output.
There are several advantages to this roadmap. You get a very powerful stream processor in Apache Kafka, whose queue-based messaging helps you trace every log that is produced, and you get Logstash, where you can add further stream processing and filters such as grok, define multiple outputs, and even store your logs as CSV or on the file system.
First activate Zookeeper and the Kafka broker and create a topic (with a consumer on it), using the bin directory of the downloaded Kafka distribution:
Activating Zookeeper
./zookeeper-server-start.sh ../config/zookeeper.properties
Activating the Kafka broker
./kafka-server-start.sh ../config/server.properties
Creating the topic
./kafka-topics.sh --create --topic test-topic --zookeeper localhost:2181 --replication-factor 1 --partitions 4
Starting a console consumer on the created topic (to verify that messages arrive)
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning
Then configure the log appender to produce to the created topic (this part is up to you), and after that create a Logstash pipeline such as the configuration below:
input {
  kafka {
    group_id => "35834"
    topics => ["yourtopicname"]
    bootstrap_servers => "localhost:9092"
    codec => json
  }
}
filter {
}
output {
  file {
    path => "C:\somedirectory"
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    document_type => "_doc"
    index => "yourindexname"
  }
  stdout { codec => rubydebug }
}
Then run it from the bin directory of Logstash with the usual command:
./logstash -f yourconfigurationfile.conf
Please note that you should create the index before starting Logstash. You do not, however, need to design a mapping for your output index: as soon as the first document is inserted, Elasticsearch will create a mapping for all the relevant fields in your index.
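If you want to sanity-check the pipeline from C# before wiring up the log appender, a small sketch using the Confluent.Kafka NuGet package (an assumption of mine, not something the appender itself requires) can publish a test JSON message to the topic; you should then see it arrive in the Elasticsearch index via Logstash:

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class Program
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        using var producer = new ProducerBuilder<Null, string>(config).Build();

        // The json codec in the Logstash kafka input expects a JSON payload.
        var message = "{\"level\":\"INFO\",\"message\":\"pipeline test\"}";
        var result = await producer.ProduceAsync("test-topic",
            new Message<Null, string> { Value = message });

        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}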

Related

Rabbit MQ Error: unable to perform an operation on node 'rabbit@USERNAME'

Error: unable to perform an operation on node 'rabbit@YASHODIP-PC'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@YASHODIP-PC
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
attempted to contact: ['rabbit@YASHODIP-PC']
rabbit@YASHODIP-PC:
* connected to epmd (port 4369) on YASHODIP-PC
* epmd reports: node 'rabbit' not running at all
  no other nodes on YASHODIP-PC
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-17388-rabbit@YASHODIP-PC'
* effective user's home directory: C:\Users\yasho
* Erlang cookie hash: 96Pe9121Rb1vncp1IqXA6Q==
I am not able to view the status of the RabbitMQ service installed on my local machine. Please suggest a resolution.
This error can occur because:
Clustering between the nodes is not set up properly
The Erlang cookie does not match
If you are not working with clustering, then the second one is likely your problem. Please study "RabbitMQ, erlang: how to 'make sure the erlang cookies are the same'".
This type of error can also happen if your hostname is too long. That is why the message explains:
If target node is configured to use long node names, don't forget to use --longnames with CLI tools
Solution
Edit RabbitMQ's config file (for RHEL-type systems it's /etc/rabbitmq/rabbitmq.conf). In the networking section:
#IPv4
listeners.tcp.local = 127.0.0.1:5672
#...
## write your IP and not your hostname
listeners.tcp.other_ip = 164.81.0.0:5672
Old topic, but I stumbled across it while looking for causes, so I will write what I found.
In the file /etc/rabbitmq/rabbitmq-env.conf I replaced
export RABBITMQ_NODENAME=rabbit@domain.com
with
export RABBITMQ_NODENAME=rabbit@localhost

Microsoft Access-friendly Serilog sink

I am using SEQ, file, and JSON as Serilog sinks:
Log.Logger = new LoggerConfiguration()
    .Enrich.With(new ThreadIdEnricher())
    //.Enrich.FromLogContext()
    .WriteTo.RollingFile(@"C:\QRT\Logs\QRT-LOG.txt", LogEventLevel.Information)
    .WriteTo.Seq("http://localhost:5341")
    .WriteTo.Console(restrictedToMinimumLevel: LogEventLevel.Information)
    .WriteTo.File(new CompactJsonFormatter(), "C:/QRT/Logs/log.clef")
    .CreateLogger();
SEQ is for me because it looks like it would be really useful.
JSON I may do away with... I was attempting to write a file that I could import into Access. The point is that I need my non-developer friend to be able to see the logs, and Access is a tool I believe he can use to easily filter on items such as Customer ID, etc. I have not been able to find much documentation on the Serilog sinks other than their names. Can someone either suggest a mechanism to sink to something that can be imported into Access, or another sink that a user-friendly tool can read?
I am currently using NLog and GamutLogViewer which is awesome because it can color entries based on regular expressions!
Any suggestions would be most welcome. The idea is my friend is not looking at the logs to debug. He will be looking at the "Information" contained in the logs.
This is using C# on a console app in Windows.
Thanks
-Ed
Serilog has a sink called Serilog.Sinks.NLog which adapts Serilog to write events through your existing NLog infrastructure, which means you can effectively use Serilog throughout your app, but output log files in the NLog format, which would be readable by the GamutLogViewer (or YALV! as an alternative).
Another approach I can think of is to use the sink Serilog.Sinks.MSSqlServer where you write your logs to a SQL Server table (could even be a SQL Server Express instance on the user's machine, if you don't want/have a shared SQL Server) and then use Microsoft Access to query these logs via linked tables in Access.
Ultimately, you could develop your own sink that writes directly to a .csv file or even directly to an Access .accdb file, for example. Developing Sinks for Serilog is super easy and there are tons of examples you can use as a base for your custom sink.
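For the CSV route, a minimal sketch of a custom sink is below. It is an illustration rather than a ready-made package: the CsvSink class name, the CustomerId property, and the file path are assumptions you would adapt, and a production version would want buffering and more careful escaping.

using System;
using System.IO;
using Serilog.Core;
using Serilog.Events;

// Writes one CSV row per event so the file can be imported into Access
// (or Excel) and filtered on columns such as CustomerId.
public class CsvSink : ILogEventSink
{
    private readonly string _path;
    private static readonly object Sync = new object();

    public CsvSink(string path)
    {
        _path = path;
        if (!File.Exists(_path))
            File.AppendAllText(_path, "Timestamp,Level,CustomerId,Message" + Environment.NewLine);
    }

    public void Emit(LogEvent logEvent)
    {
        // Pull a structured property out of the event if it is present.
        logEvent.Properties.TryGetValue("CustomerId", out var customerId);

        var line = string.Join(",",
            logEvent.Timestamp.ToString("o"),
            logEvent.Level,
            Escape(customerId?.ToString() ?? ""),
            Escape(logEvent.RenderMessage()));

        lock (Sync) File.AppendAllText(_path, line + Environment.NewLine);
    }

    private static string Escape(string value) =>
        "\"" + value.Replace("\"", "\"\"") + "\"";
}

// Usage:
// Log.Logger = new LoggerConfiguration()
//     .WriteTo.Sink(new CsvSink(@"C:\QRT\Logs\qrt-log.csv"))
//     .CreateLogger();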

How To Get Sharepoint Online Migration API Logs (using C#)

Using the Sharepoint.Client version 16 package, we are trying to create a MigrationJob in C# and then subsequently want to see the status and logs of that migration job. We managed to provision the containers and queue using the ProvisionMigrationContainers and ProvisionMigrationQueue methods on the Site object, and we managed to upload some files and manifest XMLs. These XMLs still contain some errors in the ids and structure, so we expect the job to fail. However, we still expect the job to be created and to output some messages and logs. Unfortunately, the message queue seems to be empty and the logs are nowhere to be found (at least we can't find them). The Guid of the created migration job is the null guid: 00000000-0000-0000-0000-000000000000
According to https://learn.microsoft.com/en-us/sharepoint/dev/apis/migration-api-overview the logs should be saved in the manifest container as a blob. But how would you actually find the name of the log file? The problem is that everything has to be encrypted and it is not allowed to list the blobs in the blob storage (trying this leads to a 403 error).
So the main question is: how are we supposed to access the log files? And the bonus question: assuming that the command to create the migration job is correct, why are we getting the null guid? And last one: why is the queue empty? I could speculate that the migration job is never created and that's why the guid is all zeroes, but how are we supposed to know what is preventing the job from being created?
Here is the code that creates the Migration Job:
public ClientResult<Guid> CreateMigrationJob()
{
    var encryption = new EncryptionOption
    {
        AES256CBCKey = encryptionProvider.Key
    };
    return context.Site.CreateMigrationJobEncrypted(
        context.Web.Id,
        dataContainer.Uri.ToString(),
        metadataContainer.Uri.ToString(),
        migrationQueue.Uri.ToString(),
        encryption
    );
}
context, dataContainer, metadataContainer have all been properly instantiated as members and have been used in other methods successfully. migrationQueue and encryption also look fine, but have not been used elsewhere. The encryption key has been used to upload and download files though and works perfectly fine there.
For completeness sake, here is the code we tried to use to check if there is anything in the queue:
public void GetMigrationLog()
{
    // Debug code; this should be done asynchronously.
    // Note: if migrationQueue is a classic-SDK CloudQueue, ApproximateMessageCount
    // is only populated after calling migrationQueue.FetchAttributes().
    while (migrationQueue.ApproximateMessageCount > 0)
    {
        Console.WriteLine(migrationQueue.GetMessage().AsString);
    }
}
It outputs nothing, because the queue is empty. We would expect there to be at least an error message or a message that the logs were created (including the name of the log file).
PS: we realise that it should be possible to download the logs using DownloadToFileEncrypted(encryptionProvider, targetFile.ToString(), System.IO.FileMode.Create) but only if you already know the file name, which you cannot find, so that seems a bit silly.
When you call context.Site.CreateMigrationJobEncrypted in your code, it returns a Guid. The name of the log file will be Import-TheGuidThatWasReturned-ANumberThatStartsAt1ButIncrements.log
So the first log file might be called.
Import-AE9525D9-3CF7-4D1A-A9E0-8AB0DF4F09B2-1.log
Using encryption should not stop you from reading the queue. You will only be unable to read the queue if you have configured your queue this way or you are using the tenancy default rather than your own.
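Putting that naming convention together with the question's code, a rough sketch is below. It is an assumption-laden illustration: it presumes metadataContainer is a CloudBlobContainer, reuses the DownloadToFileEncrypted helper mentioned in the question, and uses a placeholder local path. Also note that with CSOM a ClientResult<Guid> is only populated after ExecuteQuery runs, so reading the job id before that call would show up as the all-zero Guid.

var jobId = CreateMigrationJob();   // the method from the question, returns ClientResult<Guid>
context.ExecuteQuery();             // jobId.Value is only populated after this call

// Build the log blob name from the Import-{jobId}-{n}.log convention described above.
// The GUID's casing may need adjusting, since blob names are case sensitive.
var logBlob = metadataContainer.GetBlockBlobReference($"Import-{jobId.Value}-1.log");
logBlob.DownloadToFileEncrypted(encryptionProvider, @"C:\temp\migration-1.log", System.IO.FileMode.Create);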

Update to Azure Application Insights 1.1, now data is not sent

I'm tracking metrics from a WPF application. I have updated the Application Insights DLLs from 0.17 to 1.1, which meant removing the old DLLs and adding the SDK via NuGet. Now I don't see my metrics/events in the portal, and I see no activity in the debugger output window.
Activating DeveloperMode doesn't seem to do anything.
TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;
I can see that the AI DLLs are placed correctly in the output folder, and I get no error messages when sending events. But no data seems to come through any more.
I have tried to check traffic with Fiddler, but no data seems to be sent. I have already tried to do what is suggested here:
https://azure.microsoft.com/en-us/documentation/articles/app-insights-troubleshoot-faq/#how-do-i-upgrade-from-older-sdk-versions
Any suggestions to what could be the problem?
Solution:
Make sure the ApplicationInsights.config file's "Copy to Output Directory" property is set to
"Copy always"
or
"Copy if newer"
Bonus:
How to configure 1.1
https://azure.microsoft.com/en-us/documentation/articles/app-insights-configuration-with-applicationinsights-config/
In the newer 1.1 SDK, setting up should be simpler. You can simply new up a TelemetryClient to send events. You shouldn't need any additional config file or additional code.
var tc = new TelemetryClient();
tc.InstrumentationKey = "GET YOUR KEY FROM THE PORTAL";
tc.TrackEvent("SampleEvent");
Some additional details about getting setup for a WPF app can be found here.
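One extra thing worth checking in a desktop app (my addition, not part of the answer above): telemetry is batched in memory, so if the WPF app exits quickly the buffered events may never leave the process. Flushing on shutdown rules that out:

// Call on application exit so buffered telemetry is sent before the process ends.
tc.Flush();
// Flush is not instantaneous; a short delay gives the channel time to transmit.
System.Threading.Thread.Sleep(1000);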

How can I upload a static HTML site to a Windows Azure Website programmatically?

I am currently building a local static site generator in C#. It compiles a bunch of templates together into a hierarchy of plain old HTML files. I want to upload the resulting files to my Windows Azure Website and have the changes reflected live, and I want to be able to do this programmatically via my script.
As it stands, I'm having to upload the generated files manually using WebMatrix, as I haven't been able to find an API or SDK that lets me directly upload HTML to a Windows Azure Website.
Surely there must be a way to do this from code other than just using an sFTP library (which doesn't use the WebMatrix/IIS protocol, which I think sends zipped diffs, so it would be slow and would mean out-of-sync data during the upload while some files have been updated and others haven't). I'd also rather not have to commit my generated site to source control if I can avoid it. It seems conceptually wrong to me to put something into source control merely as an implementation detail of deployment.
Update: WebMatrix internally uses Web Deploy (MSDeploy). Theoretically you should be able to build the deployment package yourself using the API, but 99% of the examples I can find are using the command-line tool or the GUI tools in Visual Studio. I need to build the package and deploy it programmatically from within C#. Any ideas or guidance on how to go about this? The docs on MSDN don't really show any examples for this kind of scenario.
OK, so I worked out what to do with help from a couple of friendly folks at Microsoft. (See David Ebbo's response to my forum question, and this very helpful info from Sayed Hashimi showing how to do exactly what I wanted to do from the msdeploy.exe console app.)
Just grab your PublishSettings file from the Azure web portal. Open it in a text editor to get the values to paste into the below code.
var destinationOptions = new DeploymentBaseOptions()
{
    // userName from Azure Websites PublishSettings file
    UserName = "$msdeploytest",
    // pw from PublishSettings file
    Password = "ThisIsNotMyPassword",
    // publishUrl from PublishSettings file using https: protocol prefix rather than 443 port
    // and adding "/msdeploy.axd?site={msdeploySite-variable-from-PublishSettings}"
    ComputerName = "https://waws-prod-blu-003.publish.azurewebsites.windows.net/msdeploy.axd?site=msdeploytest",
    AuthenticationType = "Basic"
};

// This option says we're giving it a directory to deploy
using (var deploymentObject = DeploymentManager.CreateObject(DeploymentWellKnownProvider.ContentPath,
    // path to root directory of source files
    @"C:\Users\ryan_000\Downloads\dummysite"))
{
    var syncOptions = new DeploymentSyncOptions();
    syncOptions.WhatIf = false;

    // "msdeploySite" variable from PublishSettings file
    var changes = deploymentObject.SyncTo(DeploymentWellKnownProvider.ContentPath, "msdeploytest", destinationOptions, syncOptions);

    Console.WriteLine("BytesCopied: " + changes.BytesCopied.ToString());
    Console.WriteLine("Added: " + changes.ObjectsAdded.ToString());
    Console.WriteLine("Updated: " + changes.ObjectsUpdated.ToString());
    Console.WriteLine("Deleted: " + changes.ObjectsDeleted.ToString());
    Console.WriteLine("Errors: " + changes.Errors.ToString());
    Console.WriteLine("Warnings: " + changes.Warnings.ToString());
    Console.WriteLine("ParametersChanged: " + changes.ParameterChanges.ToString());
    Console.WriteLine("TotalChanges: " + changes.TotalChanges.ToString());
}
You might also be able to stumble your way through the obscure documentation on MSDN. There is a lot of passing around of oddly-named options classes, but with a bit of squinting of one's eyes and flailing about in the docs it's possible to see how the command-line options (of which it is much easier to find examples online) map to API calls.
The easiest way is probably to set up Git publishing for your website and programmatically do a git commit followed by a git push. You can think of it as a deployment mechanism instead of source control, given that Azure websites natively support a backing Git repository that doesn't have to have anything to do with your chosen SCM solution.
WebMatrix uses WebDeploy to upload the files to Windows Azure Web Sites.
An alternative is to use the VFS REST API (https://github.com/projectkudu/kudu/wiki/REST-API#wiki-vfs). The diagnostic console uses this to work with the file system today.
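For the VFS route, a hedged sketch of pushing a single generated file with HttpClient is below; the site name, credentials (which come from the site's publish profile), and target path are placeholders you would replace with your own values:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class VfsUploadSample
{
    static async Task Main()
    {
        // Deployment credentials from the site's publish profile (placeholders).
        var userName = "$mysite";
        var password = "publish-profile-password";
        var basicAuth = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{userName}:{password}"));

        using var http = new HttpClient { BaseAddress = new Uri("https://mysite.scm.azurewebsites.net/") };
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basicAuth);

        // PUT the generated file into wwwroot; If-Match: * overwrites an existing file.
        var request = new HttpRequestMessage(HttpMethod.Put, "api/vfs/site/wwwroot/index.html")
        {
            Content = new StringContent("<html><body>Hello</body></html>", Encoding.UTF8, "text/html")
        };
        request.Headers.IfMatch.Add(EntityTagHeaderValue.Any);

        var response = await http.SendAsync(request);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}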
