I'm using the Log4net ElasticSearchAppender in my C# Web API with a BufferSize of 10 and Lossy set to true to preserve performance, as seen here:
https://github.com/bruno-garcia/log4net.ElasticSearch/wiki/02-Appender-Settings
<lossy value="false"/>

Log4net.ElasticSearch uses a buffer to collect events and then flush them to the Elasticsearch server on a background thread. Setting this value to true will cause log4net.Elasticsearch to begin discarding events if the buffer is full and has not been flushed. This could happen if the Elasticsearch server becomes unresponsive or goes offline.
I also set the evaluator to ERROR, which forces the buffer to flush whenever an ERROR occurs.
Here's the associated config file:
<?xml version="1.0"?>
<log4net>
  <appender name="ElasticSearchAppender" type="log4net.ElasticSearch.ElasticSearchAppender, log4net.ElasticSearch">
    <threshold value="ALL" />
    <layout type="log4net.Layout.PatternLayout,log4net">
      <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p %c{1}:%L - %m%n" />
    </layout>
    <connectionString value="Server=my-elasticsearch-server;Index=foobar;Port=80;rolling=true;mode=tcp"/>
    <lossy value="true" />
    <bufferSize value="10" />
    <evaluator type="log4net.Core.LevelEvaluator">
      <threshold value="ERROR" />
    </evaluator>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="ElasticSearchAppender" />
  </root>
</log4net>
Here's the behaviour I get:
The flushing triggered by an ERROR (evaluator) works fine, but INFO or DEBUG messages alone are never flushed to Elastic, even if there are 10, 20, or 100 of them.
The buffer never flushes when full in this configuration; it just keeps discarding DEBUG or INFO logs until an ERROR comes along, even though Elastic is online and perfectly responsive.
Note: I tried setting lossy to false, and the buffer does flush when full. But I'm afraid this would hurt my application's responsiveness too much.
Am I getting something wrong?
Is there a better way to log while minimizing performance impact?
After testing the behaviour, here's what I found:
The buffer becoming full never triggers a flush when lossy is true.
Bruno Garcia's article was quite misleading about the Lossy property, especially this sentence:
Setting this value to true will cause (...) to begin discarding events if the buffer is full (...). This could happen if the Elasticsearch server becomes unresponsive or goes offline.
In fact it has nothing to do with the appender/Elastic being unresponsive: in a lossy configuration, only evaluators trigger the flushing of the buffer:
Level evaluator will flush if an event of a certain level occurs (e.g. FATAL or ERROR), giving the context of the crash (= the last logs occurring before the crash).
<evaluator type="log4net.Core.LevelEvaluator">
<threshold value="ERROR"/>
</evaluator>
Time evaluator will flush if a certain time interval has elapsed
<evaluator type="log4net.Core.TimeEvaluator">
<interval value="300"/>
</evaluator>
For my purposes I finally decided to configure a TimeEvaluator with a 5-minute interval.
This way, as long as there are no more than 200 logs (my buffer size) per 5 minutes, no log is discarded and the impact on performance stays low.
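For reference, the relevant part of the appender then looks roughly like this (a sketch based on the config above, with the buffer size raised to 200 and the LevelEvaluator swapped for a TimeEvaluator):

<lossy value="true" />
<bufferSize value="200" />
<!-- flush whatever is buffered at least every 300 seconds (5 minutes) -->
<evaluator type="log4net.Core.TimeEvaluator">
  <interval value="300"/>
</evaluator>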
Related
I've been having an issue on an Umbraco 7.5.6 site hosted in an App Service on Azure where the indexes seem to be dropped after an unspecified amount of time.
We're storing information, including some custom fields, on published news articles in the External Examine index to query stories from the index. This is consumed by our client-facing search API.
Initially, we thought that this might be caused by Azure swapping servers, so we removed the {computerName} parameter from the path under ExamineSettings.config. However, that didn't appear to have any effect.
Our current index path is ~/App_Data/TEMP/ExamineIndexes/External/
The ExamineSettings.config file is as follows:
<Examine>
  <ExamineIndexProviders>
    <providers>
      <add name="InternalIndexer" type="UmbracoExamine.UmbracoContentIndexer, UmbracoExamine"
           supportUnpublished="true"
           supportProtected="true"
           analyzer="Lucene.Net.Analysis.WhitespaceAnalyzer, Lucene.Net"/>
      <add name="InternalMemberIndexer" type="UmbracoExamine.UmbracoMemberIndexer, UmbracoExamine"
           supportUnpublished="true"
           supportProtected="true"
           analyzer="Lucene.Net.Analysis.Standard.StandardAnalyzer, Lucene.Net"/>
      <!-- default external indexer, which excludes protected and unpublished pages -->
      <add name="ExternalIndexer" type="UmbracoExamine.UmbracoContentIndexer, UmbracoExamine"/>
    </providers>
  </ExamineIndexProviders>
  <ExamineSearchProviders defaultProvider="ExternalSearcher">
    <providers>
      <add name="InternalSearcher" type="UmbracoExamine.UmbracoExamineSearcher, UmbracoExamine"
           analyzer="Lucene.Net.Analysis.WhitespaceAnalyzer, Lucene.Net"/>
      <add name="ExternalSearcher" type="UmbracoExamine.UmbracoExamineSearcher, UmbracoExamine" />
      <add name="InternalMemberSearcher" type="UmbracoExamine.UmbracoExamineSearcher, UmbracoExamine"
           analyzer="Lucene.Net.Analysis.Standard.StandardAnalyzer, Lucene.Net" enableLeadingWildcard="true"/>
    </providers>
  </ExamineSearchProviders>
</Examine>
Due to the unpredictable nature of this issue, short of writing a WebJob to republish the articles on a regular basis, I'm unsure of what to try next.
The first thing to do is update your Examine config.
The filesystem attached to web apps is actually a UNC share, which can suffer from IO latency issues that in turn can cause Umbraco to flip out a little bit.
Try updating your ExamineSettings.config as per the following and add this to the indexer(s):
directoryFactory="Examine.LuceneEngine.Directories.SyncTempEnvDirectoryFactory,Examine"
The SyncTempEnvDirectoryFactory enables Examine to sync indexes
between the remote file system and the local environment temporary
storage directory, the indexes will be accessed from the temporary
storage directory. This setting is required due to the nature of
Lucene files and IO latency on Azure Web Apps.
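For example, the ExternalIndexer entry from the config above would then look something like this (only the directoryFactory attribute is new):

<add name="ExternalIndexer" type="UmbracoExamine.UmbracoContentIndexer, UmbracoExamine"
     directoryFactory="Examine.LuceneEngine.Directories.SyncTempEnvDirectoryFactory,Examine"/>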
This should take performance issues out of the equation.
Then, debugging
Indexing issues should be picked up in Umbraco's logs (some at Info level, some at Debug). If you're not already capturing Umbraco's logs then use something like Papertrail or Application Insights to collect the logs and see if you can identify what's causing the deletion (you may need to drop logging level to Debug to catch it).
N.B. if you do push logs to an external service, wrap it in the Async/Parallel provider from Umbraco Core; here's an example config:
<log4net>
  <root>
    <priority value="Info"/>
    <appender-ref ref="AsynchronousLog4NetAppender" />
  </root>
  <appender name="AsynchronousLog4NetAppender" type="Umbraco.Core.Logging.ParallelForwardingAppender,Umbraco.Core">
    <appender-ref ref="PapertrailRemoteSyslogAppender"/>
  </appender>
  <appender name="PapertrailRemoteSyslogAppender" type="log4net.Appender.RemoteSyslogAppender">
    <facility value="Local6" />
    <identity value="%date{yyyy-MM-ddTHH:mm:ss.ffffffzzz} your-site-name %P{log4net:HostName}" />
    <layout type="log4net.Layout.PatternLayout" value="%level - %message%newline" />
    <remoteAddress value="logsN.papertrailapp.com" />
    <remotePort value="XXXXX" />
  </appender>
  <!-- Here you can change the way logging works for certain namespaces -->
  <logger name="NHibernate">
    <level value="WARN" />
  </logger>
</log4net>
I have two loggers for the same database with different levels. I would like to have a different bufferSize for each logger.
One way is to have two appenders pointing at the same database whose only difference is the bufferSize element, but that's copy-paste.
Is it possible to extend an already defined appender and change its bufferSize property?
For example:
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
<bufferSize value="20" />
...other elements
</appender>
<appender name="AdoNetAppenderChild" extends="AdoNetAppender">
<bufferSize value="1" />
</appender>
<logger name="Fatal" additivity="false">
<level value="FATAL"/>
<appender-ref ref="AdoNetAppenderChild" />
</logger>
<logger name="Common" additivity="false">
<level value="INFO"/>
<appender-ref ref="AdoNetAppender" />
</logger>
What I want to avoid is having two appenders with the same elements and properties where the only differing value is bufferSize.
You can make one appender and use an evaluator so the buffer is flushed as soon as an error message arrives:
<evaluator type="log4net.Core.LevelEvaluator">
<threshold value="ERROR"/>
</evaluator>
The Evaluator is a pluggable object that is used by the
BufferingAppenderSkeleton to determine if a logging event should not
be buffered, but instead written/sent immediately. If the Evaluator
decides that the event is important then the whole contents of the
current buffer will be sent along with the event. Typically an
SmtpAppender will be setup to buffer events before sending as the cost
of sending an email may be relatively high. If an important event
arrives, say an ERROR, we would like this to be delivered immediately
rather than waiting for the buffer to become full. This is where the
Evaluator comes in as it allows us to say: "when an important event
arrives don't worry about buffering, just send over everything you
have right now".
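Putting those two pieces together, a single appender could look roughly like this (a sketch; the other AdoNetAppender elements are unchanged from the question):

<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <bufferSize value="20" />
  <!-- an ERROR (or worse) flushes the whole buffer immediately, which behaves like a bufferSize of 1 for errors -->
  <evaluator type="log4net.Core.LevelEvaluator">
    <threshold value="ERROR"/>
  </evaluator>
  ...other elements
</appender>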
I have a client-side application that uses log4net's RollingFileAppender and can be instantiated multiple times. Initially I wrote all my logs into a single file; however, I soon realized that log4net locks the file while writing, and even with a less restrictive locking mode I would still end up with a lot of mess in my log files.
I've decided to incorporate the process id into the file name, like so:
<appender name="HumanRollingLog" type="log4net.Appender.RollingFileAppender">
<file type="log4net.Util.PatternString" value="Log\TestLog[%processid].txt"/>
<param name="DatePattern" value="dd.MM.yyyy'.log'"/>
<appendToFile value="true"/>
<rollingStyle value="Size"/>
<staticLogFileName value="true" />
<maxSizeRollBackups value="10"/>
<maximumFileSize value="1KB"/>
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%type] [%thread] %-5level %logger - %message%newline%exception%"/>
</layout>
</appender>
That worked. However, it completely messed up the rolling features: since every process now spawns its own log file, the actual rolling would only happen once process ids start repeating. E.g., starting my application 3 times results in the following logs being created:
TestLog[5396].txt
TestLog[5396].txt.1
TestLog[5396].txt.10
TestLog[5396].txt.2
TestLog[5396].txt.3
TestLog[5396].txt.4
TestLog[5396].txt.5
TestLog[5396].txt.6
TestLog[5396].txt.7
TestLog[5396].txt.8
TestLog[5396].txt.9
TestLog[5976].txt
TestLog[5976].txt.1
TestLog[5976].txt.10
TestLog[5976].txt.2
TestLog[5976].txt.3
TestLog[5976].txt.4
TestLog[5976].txt.5
TestLog[5976].txt.6
TestLog[5976].txt.7
TestLog[5976].txt.8
TestLog[5976].txt.9
TestLog[6860].txt
TestLog[6860].txt.1
TestLog[6860].txt.10
TestLog[6860].txt.2
TestLog[6860].txt.3
TestLog[6860].txt.4
TestLog[6860].txt.5
TestLog[6860].txt.6
TestLog[6860].txt.7
TestLog[6860].txt.8
TestLog[6860].txt.9
Does anyone have an idea what I can do to resolve this issue? I'd like each process to have its own file, but I can't allow the rolling to be reused among ALL the processes.
Thanks!
If you insist on using a process identifier in the name of the log file, then the built-in rolling patterns will never work. I would like to explore your requirements: what does "I would still end up with a lot of mess in my log files" really mean? What answers are you trying to get from your log files?
A solution to a different problem is to append the process id to the log messages and filter/search with one of the many tools available (log4net dashboard, log4net viewer, Apache Chainsaw, Microsoft LogParser, or Kiwi LogViewer).
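If you go that route, one way to stamp every message with the process id (an assumption on my part, not something the tools above require) is to push it into log4net's GlobalContext at startup and reference it from the layout:

// requires: using System.Diagnostics;
// run once at application startup, before any logging; the property name "pid" is arbitrary
log4net.GlobalContext.Properties["pid"] = Process.GetCurrentProcess().Id;

The conversion pattern can then include %property{pid}, e.g. %date [%property{pid}] [%thread] %-5level %logger - %message%newline, and a log viewer can filter or group messages by that value.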
I have a rather large program that has some odd behaviour once in a while. After it has been deployed at a customer it's not possible to do debugging. But it is permissible to use log files, so this is what I have created. Something like this:
TextWriter tw = new StreamWriter(@"C:\AS-log.txt", true);
tw.WriteLine("ValidateMetaData");
tw.Close();
Three lines like this have been inserted into the code in many places and give excellent log information. There are 2 problems with this approach, however:
The code looks very messy when there are more lines regarding logging than actual code.
I would like to be able to switch logging on and off via a configuration file.
Any suggestions for a way of logging that can do this and still be simple?
Maybe you could try the Enterprise Library from Microsoft. It has a Logging Application Block which works quite nicely:
http://msdn.microsoft.com/en-us/library/ff648951.aspx
Log4net is a simple framework that you can utilize.
http://logging.apache.org/log4net/
I would suggest using log4net. It has a huge amount of potential that you probably don't need, but it gives you easy predefined formatting of log entries.
Before using it you should configure it in your application's .config file.
This is just an example of how to do it; you can easily find others on the internet:
<log4net debug="true">
  <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="Logs\\TestLog.txt" />
    <appendToFile value="true" />
    <rollingStyle value="Date" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="10MB" />
    <staticLogFileName value="false" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%-5p %d %5rms %-22.22c{1} %-18.18M - %m%n" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="RollingLogFileAppender" />
  </root>
</log4net>
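In code you then load this configuration once and ask for a logger wherever you need one; a minimal sketch (assuming the log4net section is registered under configSections in the .config file, and typeof(Program) is whatever class you call this from):

// call once at startup to read the <log4net> section of the .config file
log4net.Config.XmlConfigurator.Configure();

// get a logger and write to it instead of a hand-rolled StreamWriter
var log = log4net.LogManager.GetLogger(typeof(Program));
log.Debug("ValidateMetaData");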
Regards.
You can use a listener on System.Diagnostics.Debug and System.Diagnostics.Trace.
Normally these are controlled by compile options, but you can attach the listener depending on your config option.
System.Diagnostics.Trace.WriteLine("ValidateMetaData");
This also allows you to watch live with DebugView, etc.
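A minimal sketch of attaching the listener based on a config switch (the appSettings key "EnableLogging" and the file path are placeholders):

// requires: using System.Configuration; using System.Diagnostics;
if (ConfigurationManager.AppSettings["EnableLogging"] == "true")
{
    // send Trace output to a file only when the config switch is on
    Trace.Listeners.Add(new TextWriterTraceListener(@"C:\AS-log.txt"));
    Trace.AutoFlush = true;
}

Trace.WriteLine("ValidateMetaData"); // goes only to the debugger output when no file listener is attached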
A simple solution would be to wrap the log writing in a class. On application startup it opens the file for writing, and there is a simple Write method. You can then simply use Log.Write("ValidateMetaData"), which reduces the amount of inline logging code and stops you having to open and close the file every time. You can also add checks depending on configuration (the easiest way to do that would be with application settings).
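A rough sketch of such a wrapper, with the on/off switch read from appSettings (the key name and path are illustrative):

using System;
using System.Configuration;
using System.IO;

public static class Log
{
    // hypothetical appSettings key used to switch logging on or off
    private static readonly bool Enabled =
        ConfigurationManager.AppSettings["LoggingEnabled"] == "true";

    // opened once, wrapped for thread safety; AutoFlush so lines survive a crash
    private static readonly TextWriter Writer = Enabled
        ? TextWriter.Synchronized(new StreamWriter(@"C:\AS-log.txt", true) { AutoFlush = true })
        : null;

    public static void Write(string message)
    {
        if (Enabled)
            Writer.WriteLine(DateTime.Now.ToString("o") + " " + message);
    }
}

// usage: Log.Write("ValidateMetaData");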
Try log4net or nlog (I prefer nlog)
http://logging.apache.org/log4net/
http://nlog-project.org/
If you want something built-in and configurable from the config file, see http://msdn.microsoft.com/en-us/library/ms228993.aspx
I am trying to configure an SMTP appender in the log4net.config file that I have. The problem is that I have looked all over the internet and cannot find how to send an email when an error occurs, with all the other log information included (info, debug, error, fatal), and only when the application ends (NOT every time an ERROR occurs).
So I only want to receive this email when:
The application ends +
With all the log information (DEBUG, INFO, ERROR, FATAL) +
Only if an ERROR has occurred.
Elaborating some more: this is because of the way I handle my exceptions in C#, with handling at multiple levels all over the place, so no matter how many times an error occurs I only want to receive one email. Also, I do not want to use multiple logs, but rather just one in root.
Thanks.
SmtpAppender cannot accomplish this on its own. So what I did was add another appender, of type MemoryAppender. I set a threshold on this appender so it only records messages that should trigger the SmtpAppender, e.g. ERROR. We use it later to determine whether we want to send the email, which has more levels logged.
We don't actually care about the messages in the MemoryAppender; we just care whether it contains any messages at the end. The messages we get via email actually come from the SmtpAppender.
At the end of my program I check whether the memory appender's GetEvents() contains any events. If it does, I let the SmtpAppender run normally; if not, I disable it.
Log4Net configs for both appenders:
<appender name="ErrorHolder" type="log4net.Appender.MemoryAppender" >
<onlyFixPartialEventData value="true" />
<!-- if *any* message is logged with this level, the email appender will
be used with its own level -->
<threshold value="ERROR" />
</appender>
<appender name="Email" type="log4net.Appender.SmtpAppender">
<!-- the level you want to see in the email IF ErrorHolder finds anything -->
<threshold value="INFO"/>
<bufferSize value="512" />
<lossy value="false" /> <!-- important! -->
<to value="name#domain.com" />
<from value="name#domain.com" />
<subject value="ERROR: subject" />
<smtpHost value="smtpserver" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%newline%date [%thread] %-5level %logger [%property{NDC}] - %message%newline%newline%newline" />
</layout>
</appender>
<root>
<level value="ALL" />
<appender-ref ref="ErrorHolder" />
<appender-ref ref="Email" />
</root>
At the end of the app run this to disable the SmtpAppender if the ErrorHolder appender is empty:
// requires: using System.Linq; using log4net; using log4net.Appender;
//           using log4net.Core; using log4net.Repository.Hierarchy;

// disable the SmtpAppender if no errors were captured by the MemoryAppender:
var memoryAppender = ((Hierarchy)LogManager.GetRepository())
    .Root.Appenders.OfType<MemoryAppender>().FirstOrDefault();
if (memoryAppender != null && memoryAppender.GetEvents().Length == 0)
{
    // there was no error so don't email anything
    var smtpAppender = ((Hierarchy)LogManager.GetRepository())
        .Root.Appenders.OfType<SmtpAppender>().FirstOrDefault();
    if (smtpAppender != null)
    {
        smtpAppender.Threshold = Level.Off;
        smtpAppender.ActivateOptions();
    }
}
This sounds like an application configuration issue rather than a log4net configuration issue. I would suggest putting a method in at the close of your application that emails you the log file if it detects that there is an error inside it. You could either detect this error by flipping a global variable from false to true in every place where you log errors or you could wait until the end of your application and then read the log file to see if it contains errors. The first method would be quicker at shutdown but it means modifying your code in multiple places. The latter would allow you to just add one method but it might take longer in a large file.
A third option would be to send errors to a second log file (so they go two places) using log4net. Then, when your application is closing and you are checking to see if you should email the log, just check for the existence of the error-only file. If it exists, delete it (so it isn't there next time) and email the full log.
Change the threshold to ERROR. Also, log4net only sends the email when the app is closed; you can send it earlier, but then you first need to copy it (because the original file is still in use) and then you can email the copy.
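For illustration, a rough sketch of that copy-then-email step (paths, addresses, and the SMTP host are placeholders, not taken from the question):

// requires: using System.IO; using System.Net.Mail;

// the appender still holds a lock on the original, so work on a copy
File.Copy(@"C:\Logs\app.log", @"C:\Logs\app-copy.log", true);

using (var client = new SmtpClient("smtpserver"))
using (var message = new MailMessage("from@domain.com", "to@domain.com",
                                     "ERROR: application log", "Log attached."))
{
    message.Attachments.Add(new Attachment(@"C:\Logs\app-copy.log"));
    client.Send(message);
}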