I have a project that uses System.Diagnostics for logging, and it creates lots of new log files, each one starting with a GUID, even when the last log file was very small.
I want to set up a rule that controls the creation of new log files.
Where can I configure this?
And a second question:
Where can I set the log to write non-UTC time?
Thanks
See the following link for a discussion about rollover trace listeners:
What's the best rollover log file TraceListener for .NET?
The accepted answer recommends the FileLogTraceListener:
http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.logging.filelogtracelistener.aspx
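If you do go with FileLogTraceListener, a minimal programmatic setup might look like the sketch below (it can equally be wired up in app.config); the file name, location, and size cap here are illustrative, not prescriptive:

    using System.Diagnostics;
    using Microsoft.VisualBasic.Logging;

    // One listener, rolled over daily, capped at ~5 MB, appending within the day.
    var listener = new FileLogTraceListener("fileLog")
    {
        Location = LogFileLocation.Custom,
        CustomLocation = @"C:\Logs",
        BaseFileName = "MyApp",
        LogFileCreationSchedule = LogFileCreationScheduleOption.Daily,
        MaxFileSize = 5000000,
        Append = true,
        AutoFlush = true
    };
    Trace.Listeners.Add(listener);
    Trace.TraceInformation("Hello from FileLogTraceListener");

This gives you one date-stamped file per day instead of a new GUID-prefixed file per run.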
I would encourage you to also look at Ukadc.Diagnostics as a way to add flexibility (and formatting) to System.Diagnostics tracing/logging:
http://ukadcdiagnostics.codeplex.com/
To answer your final question about logging in something other than UTC, I think the only answer is to write your own TraceListener (or use someone else's, such as Ukadc.Diagnostics).
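If you do write your own, a rough sketch of a local-time variant of TextWriterTraceListener could look like this (the format string is just an example):

    using System;
    using System.Diagnostics;

    public class LocalTimeTraceListener : TextWriterTraceListener
    {
        public LocalTimeTraceListener(string fileName) : base(fileName) { }

        public override void TraceEvent(TraceEventCache eventCache, string source,
            TraceEventType eventType, int id, string message)
        {
            // Use DateTime.Now rather than eventCache.DateTime, which is UTC.
            WriteLine(string.Format("{0:yyyy-MM-dd HH:mm:ss.fff} [{1}] {2}: {3}",
                DateTime.Now, eventType, source, message));
        }
    }

(The other TraceEvent/TraceData overloads would need the same treatment for complete coverage.)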
It goes without saying that logging frameworks like NLog and log4net are very popular for a reason: they provide extremely powerful and flexible logging solutions, allowing you to focus on your application's functionality, not on solving logging problems.
I have also faced both of these issues (file-size rollover and non-UTC timestamps for events) with the standard TraceListener implementation, and I didn't want to use a third-party tool.
I found this solution, which requires minimal effort:
http://www.geekzilla.co.uk/View2C5161FE-783B-4AB7-90EF-C249CB291746.htm
I wonder how to set up Serilog.Sinks.File to produce this:
log.txt <-- current log
log20200704.txt <-- rolled over yesterday log
log20200703.txt
instead of:
log20200705.txt <-- current log
log20200704.txt <-- rolled over yesterday log
log20200703.txt
I have been used to this behavior since my log4net days.
This is currently not supported by Serilog.Sinks.File, and there are no plans to support it in the short term. You can see a long discussion around this at the link below:
Fixed filename with rolling archive files #40
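For reference, the standard rolling setup below is exactly what produces the date-stamped current file shown in the question; WriteTo.File has no option to keep the current file named log.txt:

    using Serilog;

    var log = new LoggerConfiguration()
        .WriteTo.File("log.txt", rollingInterval: RollingInterval.Day)
        .CreateLogger();

    // Writes to log20200705.txt today and rolls to log20200706.txt tomorrow.
    log.Information("Hello");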
You can see an initial attempt to add this feature as a separate package (though it's still early days and has known limitations) on this repository: https://github.com/dfacto-lab/serilog-sinks-file
Of course, you can always roll your own version of Serilog.Sinks.File that adds the behavior you're looking for.
Other, related links:
Allow option to keep root log file as most recent #115
Properly rotate log files #667
Serilog does not support this feature yet.
You can find the discussion here. There is also a workaround in place :)
https://github.com/serilog/serilog-sinks-file/issues/40
I'm currently getting into security topics and trying to build a basic "access limiter" which should register when a file or folder in a specific directory (including subdirectories) is created, changed, deleted, etc.
I already figured out how to do it after the action has taken place, using FileSystemWatcher, but I want to catch the request/event before it happens in order to process it. I have already searched for a while but haven't really found a solution yet.
If something like this is possible I would be grateful for tips or short samples / references.
Thanks in advance.
As you've stated, you can already see after the fact that something has been done to a file. To my knowledge, you can't preempt file system access from .NET, as this happens at a much lower layer (right above storage). If you are trying to ensure security, you are better off focusing on NTFS/share-level security (standard permissions, like Blorgbeard said).
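For context, the after-the-fact monitoring you already have presumably looks something like this sketch (the path is illustrative); the point is that every event fires once the operation has already completed:

    using System;
    using System.IO;

    var watcher = new FileSystemWatcher(@"C:\Watched")
    {
        IncludeSubdirectories = true,
        NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName | NotifyFilters.LastWrite
    };
    watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
    watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
    watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);
    watcher.EnableRaisingEvents = true;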
If you really want to intercept file system calls, a file system filter driver may be what you're after but it looks very difficult.
Intro to the concept: http://msdn.microsoft.com/en-us/library/windows/hardware/dn641617
Another SO answer restating the same thing (which I based the above link on): How to intercept the access to a file in a .NET Program
Alternatively, you can tailor your security app to enumerate the security settings on files/folders as a preventative measure to ensure the proper file system security settings are in place, or take corrective action if they are not. You can use the FileSecurity class to get an idea of what is available to you (quite a bit!): http://msdn.microsoft.com/en-us/library/system.security.accesscontrol.filesecurity(v=vs.110).aspx
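As a small sketch of that enumeration approach (the path is illustrative), you can dump the access rules for a file and then decide whether they match your policy:

    using System;
    using System.IO;
    using System.Security.AccessControl;
    using System.Security.Principal;

    FileSecurity security = File.GetAccessControl(@"C:\Data\secret.txt");
    foreach (FileSystemAccessRule rule in
             security.GetAccessRules(true, true, typeof(NTAccount)))
    {
        Console.WriteLine("{0} {1} {2}",
            rule.IdentityReference, rule.AccessControlType, rule.FileSystemRights);
    }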
If you want to go the audit route and get stuck on using the FileSecurity and/or DirectorySecurity class, I would post a separate question and we can tackle that.
EDIT:
Some actual code on writing the filter driver, if you are so inclined (beware: no .NET):
http://www.codeproject.com/Articles/43586/File-System-Filter-Driver-Tutorial
I have a number of applications running on top of ASP.NET I want to monitor. The main things I care about are:
Exceptions: We currently have some custom code which emails us when an exception occurs. If the application is failing hard, it will crash our Outlook... I know (and use) elmah, which partly solves the problem; however, it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups them, alerts when new ones occur, tells me which common ones I should fix, etc.)
Logging: We currently log to files which are then accessible via a shared folder that devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions.
Performance: Request times, memory usage, CPU, etc. - whatever stats I can get.
I'm guessing this is probably going to be solved by a number of tools; has anyone got any suggestions?
You should take a look at Gibraltar. I haven't used it myself, but it looks very good! It also works with NLog and log4net, so if you use those you are in luck!
Well, we have exactly the same current solution. Emails upon emails litter my inbox and mostly get ignored. Over the holidays an error caused everyone in dev to hit their inbox limit ;)
So where are we headed with a solution?
We already generate classes for all our exceptions; they rarely get used from more than one place. This was essentially step one, but we started the code base this way.
We modified the generator to create unique HRESULT values for all exceptions.
Then we added a generator to create a .MC message resource file for every exception.
Now every exception can write itself to the Windows Event Log, so we removed all the emails etc. and rely on the event log.
Now with an event log full of information, including unique message codes and parameters for each exception, we can use off-the-shelf tools to aggregate, monitor, and alert.
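To illustrate the underlying pattern only (this is not the generated code; the source name and message are placeholders, and creating an event source requires elevation, so it belongs in your installer):

    using System.Diagnostics;

    if (!EventLog.SourceExists("HelloWorld"))
        EventLog.CreateEventSource("HelloWorld", "Application");

    // The unique message code doubles as the event ID.
    EventLog.WriteEntry("HelloWorld",
        "MyCustomException: Some exception text",
        EventLogEntryType.Error,
        1);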
The exception generator (before modifications above) is located here:
http://csharptest.net/browse/src/Tools/Generators
It integrates with visual studio by replacing the ResX generator with this:
http://csharptest.net/browse/src/Tools/CmdTool
I have not yet published the MC generator, or the HRESULT generation; however, it will be available in the above locations when I get the time.
-- UPDATE --
All the tools and source are now available online for this. So where do I go from here?
Download the source or binaries from: http://code.google.com/p/csharptest-net/
Take a look at the help for CmdTool.exe Visual Studio Integration
Then review the help on Generators for ResX and MC files; there are several ways to generate MC files or complete message DLLs from ResX files. Pick the approach that fits you best.
Run CmdTool.exe REGISTER to register the tool with Visual Studio
Create a ResX file as normal, then change the custom tool to CmdTool
You will need to add some entries to the resx file. At minimum, create the following:
".AutoLog" = true
".NextMessageId" = 1
".EventSource" = "HelloWorld"
"MyCustomException" = "Some exception text"
Other settings are demonstrated by the NUnit tests: http://csharptest.net/browse/src/Tools/Generators/Test/TestResXAutoLog.cs#80
Now you should have an exception class being generated that writes log events. You will need to build the MC file as a pre/post build action with something like:
CSharpTest.Net.Generators.exe RESXTOMESSAGEDLL /output=MyCustomMessages.dll /input=TheProjectWithAResX.csproj
Lastly, you need to register it: run the framework's InstallUtil.exe MyCustomMessages.dll.
That should get you started until I get time to document it all.
One suggestion from Ryans Roberts that I really like is Exceptioneer, which seems to solve my exception woes at least.
I would first go for log4net. The SmtpAppender can wait for N exceptions to accumulate before sending an email, which avoids crashing Outlook. And log4net also logs to log files that can be stored on network drives, read with cat and grep, etc.
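A minimal programmatic sketch of that appender (addresses and host are placeholders; the same settings work in the XML config). BufferSize is what batches the N events per email:

    using log4net;
    using log4net.Appender;
    using log4net.Config;
    using log4net.Core;
    using log4net.Layout;

    var appender = new SmtpAppender
    {
        To = "devteam@example.com",
        From = "app@example.com",
        Subject = "Application errors",
        SmtpHost = "smtp.example.com",
        BufferSize = 20,           // wait for 20 events before mailing
        Lossy = false,
        Evaluator = new LevelEvaluator(Level.Error),
        Layout = new PatternLayout("%date %-5level %logger - %message%newline")
    };
    appender.ActivateOptions();
    BasicConfigurator.Configure(appender);

    LogManager.GetLogger("Demo").Error("Something failed");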
About stats: you can perform health/performance logging with the same tools, i.e. spawn a thread that logs CPU usage etc. every minute.
I don't have a concrete answer for the first part of the question, since it implies automated log analysis and coalescing. At university, we made a tool designed to do part of these things, but it doesn't apply to your scenario (though it is two-way integrated with log4net).
In terms of handled exceptions or just typical logging, l4ndash is worth a look. I always set up our log4net to not only write out text files but also append to the database. That way l4ndash can analyse it easily. It'll group your errors and let you see where bad things are occurring a lot. You get one free dev license.
With Elmah we just pull down the logs periodically. It can export as CSV, and then we use Excel to filter/group the data. It's not ideal, but it works. It would be nice to write a script to automate this a bit more. I've not seen much else out there for Elmah.
You can get some metrics on request times (and anything else that's saved) by running LogParser over the IIS logs.
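For example, something along these lines (hedging on your exact log paths and field set; time-taken must be enabled in IIS logging):

    LogParser.exe -i:IISW3C -o:CSV "SELECT cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs FROM ex*.log GROUP BY cs-uri-stem ORDER BY AvgMs DESC"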
We have built a simple monitoring app that sits on the desktop and flashes up red when there is either an exception written to the event log from one of the apps on the server or it writes an error to the error log table on the database. It also monitors the database health, checking fragmentation and the like.
The only problem we have with this is that it can be a little intrusive on the desktop, as it keeps popping up with a red message box if there is a problem. However, it does encourage you to fix things ASAP.
We currently have this running on several of the developers' machines. The improvement we are thinking of making is to have one monitoring app running on a server that then publishes an RSS feed, so that checking happens once in one place, but we can consume the information from anywhere using whichever method we choose at the time (such as through our phones when we aren't in the office).
You can have an RSS feed select from your Exceptions table (and other things).
Then you can subscribe to the RSS feed in MS Outlook or on any smart phone. I use an RSS feed reader called NewsRob because it alerts me when there is something new.
I blog about how to do this HERE.
As a related step, I found a way to notify myself when something DIDN'T happen. That blog is HERE.
In Windows, how can I programmatically determine which user account last changed or deleted a file?
I know that setting up object access auditing may be an option, but if I use that I then have the problem of trying to match up audit log entries to specific files... sounds complex and messy! I can't think of any other way, so does anyone either have any tips for this approach or any alternatives?
You can divide your problem into two parts:
Write to a log whenever a file is accessed.
Parse, filter and present the relevant information of the log.
Of those two, part 1, writing to the log, is a built-in function through auditing, as you mention. Reinventing that would be hard, and it would probably never be as good as the built-in functionality.
I would use the built in functionality for logging by setting up an audit ACL on those files. Then I would focus my efforts on providing a good interface that reads the event log, filters out relevant events and presents them in a way that is suitable and relevant for your users.
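A sketch of the reading half (event ID 4663, "an attempt was made to access an object", is the usual suspect; reading the Security log requires administrative rights, and events only appear once the audit policy and SACLs are in place):

    using System;
    using System.Diagnostics.Eventing.Reader;

    var query = new EventLogQuery("Security", PathType.LogName,
        "*[System[(EventID=4663)]]");
    using (var reader = new EventLogReader(query))
    {
        for (EventRecord record = reader.ReadEvent(); record != null;
             record = reader.ReadEvent())
        {
            Console.WriteLine(record.FormatDescription());
            record.Dispose();
        }
    }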
You could always create a file system filter. This might be overkill, but it depends on your purposes. You can have it load at boot, and it sits behind pretty much every file access (it's what virus scanners usually use to scan files as they are accessed).
You would simply need to log the "owner" of the application that is writing to the file.
Also see the MSDN documentation
The only way I know of to do this is to set up a FileSystemWatcher and keep it running. Oh, and if it's watching a network drive, it may randomly lose its connection, so it may be good to force a disconnect/reconnect every few hours just to make sure it has a fresh connection.
I was reading the following article:
http://odetocode.com/articles/294.aspx
This article raised a lot of questions for me regarding logs.
(I don't know if I should have made these separate questions… but I don't want to spam stackoverflow.com with my questions.)
The first one is whether I should store the log in a .txt or .xml file… or even in a table inside the database.
Saving to a .txt file will probably be better in terms of performance. But when someone needs to find something in the .txt file, it may become a pain in the... neck.
So… which one should I use, and why?
The second one: is there any specific class for dealing with logging?
I have read several threads about this subject, and I didn’t find the answers to my questions.
Thanks in advance.
The easiest approach I've taken in the past is using log4net. That way you can configure the logging in the config file. If you need it to go to a database, set it up as such. If you want to be notified when a major error occurs, set it up that way.
As far as sorting through the logs goes, it really depends on the approach you want to take and how much you plan on logging. Normally I log to a flat text file, as I don't enable a lot of logging in my applications, so parsing through them isn't a big deal.
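The bootstrap itself is tiny; a minimal sketch, assuming an app.config containing a <log4net> section that points at a file appender (the logger name is arbitrary):

    using log4net;
    using log4net.Config;

    XmlConfigurator.Configure();
    ILog log = LogManager.GetLogger("MyApp");
    log.Info("Application started");
    log.Error("Something went wrong");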
Unless you want to write a system for education purposes, I honestly think that you'd be best off sticking with log4net or nlog.
And further, you would probably be better off studying the code to those systems instead of writing your own.
As to your question, I would stick to a text file and buffer the messages before spitting them to disk.
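A bare-bones sketch of that buffering idea (one background task owns the file, producers just enqueue; not production-hardened):

    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    var buffer = new BlockingCollection<string>();
    var writerTask = Task.Run(() =>
    {
        using (var writer = new StreamWriter("app.log", true))
        {
            foreach (var line in buffer.GetConsumingEnumerable())
                writer.WriteLine(line);   // StreamWriter buffers internally too
        }
    });

    buffer.Add(DateTime.Now.ToString("o") + " INFO Application started");
    buffer.CompleteAdding();   // lets the writer drain and exit
    writerTask.Wait();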
Why bother reinventing the wheel? You can check out the MS Enterprise Library Logging Application Block.
Definitely not XML.
With XML, you will need to read the whole file, parse it, add whatever you're logging, then generate the whole XML again and write it back to disk - every single time you log something.
Unless, of course, you append the nodes to the XML file manually, in which case you lose most of XML's advantages.
Warnings through fatal errors - whatever will help you debug the application if it crashes - those logs I would store in a txt file.
Append a new line for every entry.
This way you can also ask your user to check it out (if you are assisting them over the phone).
If it's not a meta log such as the one mentioned above - in other words, if it's anything related to the program itself that you may need to analyze - keep it in the DB.
Regarding file vs database, it's up to you to choose.
File logs give greater performance, but at the cost of painful access.
If the logs are there just to rarely provide information (e.g. the app crashes and you need to know why), you're better off storing the logs in a file.
If you want to give access to those logs, analyze them, etc, you should store them in a database.
.NET is really not my area, but there are lots of reasons why you should use the framework's logging classes.
For my apps I have chosen to write to the DB. It's easier (for me) to read the logs this way. However, I do not go log-crazy as some people do; I only log what I need to log and nothing else.
I gave log4net a shot not too long ago and did not like it at all. It was a whole lot of junk just to write to a DB and send an email. I ended up writing a custom logging class; it was only ~200 lines and took just a few hours. It works great, I don't have another dependency, and it can be easily changed.
If you're dealing with ASP.NET, ELMAH is another good logging tool. It's apparently what Microsoft's Scott Hanselman uses.
It does need some additional code to get it to work with ASP.NET MVC's HandleError attribute, though.
NLog and log4net both provide a rich logging API, but neither addresses the challenges you face in managing and analyzing all the data in your log files.
If you're willing to consider a commercial tool, take a look at Gibraltar - it works with NLog and log4net and also collects useful performance metrics. Most importantly, Gibraltar provides great tools for managing and analyzing logs.