I have a number of applications running on top of ASP.NET I want to monitor. The main things I care about are:
Exceptions: We currently have some custom code that emails us when an exception occurs. If the application is failing hard it will crash our Outlook... I know of (and use) ELMAH, which partly solves the problem, but it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups them, alerts when new ones occur, tells me which common ones I should fix, etc.)
Logging: We currently log to files which are then accessible via a shared folder which devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions.
Performance: Request times, memory usage, CPU, and whatever other stats I can get.
I'm guessing this is probably going to be solved by a number of tools; has anyone got any suggestions?
You should take a look at Gibraltar. I've not used it myself, but it looks very good! It also works with NLog and log4net, so if you use those you're in luck!
Well, we have exactly the same current solution. Emails upon emails litter my inbox and mostly get ignored. Over the holidays an error caused everyone in dev to hit their inbox limit ;)
So where are we headed with a solution?
We already generate classes for all our exceptions; they rarely get used from more than one place. This was essentially step one, but we started the code base this way.
We modified the generator to create unique HRESULT values for all exceptions.
Again we added a generator to create a .MC message resource file for every exception.
Now every exception can write itself to the Windows Event Log, and thus we removed all emails etc, and rely on the event log.
Now with an event log full of information, including unique message codes and parameters for each exception, we can use off-the-shelf tools to aggregate, monitor, and alert.
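For illustration, here is a rough sketch of the kind of event-log write involved (the "HelloWorld" source and the 0x1001 event ID are placeholders, not the generator's actual output):

// minimal sketch: an exception recording itself in the Windows Event Log
using System;
using System.Diagnostics;

try { /* ... */ }
catch (Exception ex)
{
    if (!EventLog.SourceExists("HelloWorld"))               // one-time registration (needs admin)
        EventLog.CreateEventSource("HelloWorld", "Application");
    EventLog.WriteEntry("HelloWorld", ex.ToString(),
                        EventLogEntryType.Error, 0x1001);   // 0x1001 stands in for the unique message code
    throw;
}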
The exception generator (before modifications above) is located here:
http://csharptest.net/browse/src/Tools/Generators
It integrates with Visual Studio by replacing the ResX generator with this:
http://csharptest.net/browse/src/Tools/CmdTool
I have not yet published the MC generator, or the HRESULT generation; however, it will be available in the above locations when I get the time.
-- UPDATE --
All the tools and source are now available online for this. So where do I go from here?
Download the source or binaries from: http://code.google.com/p/csharptest-net/
Take a look at the help for CmdTool.exe Visual Studio Integration
Then review the help on Generators for ResX and MC files, there are several ways to generate MC files or complete message DLLs from ResX files. Pick the approach that fits you best.
Run CmdTool.exe REGISTER to register the tool with Visual Studio
Create a ResX file as normal, then change the custom tool to CmdTool
You will need to add some entries to the ResX file. At a minimum, create the following:
".AutoLog" = true
".NextMessageId" = 1
".EventSource" = "HelloWorld"
"MyCustomException" = "Some exception text"
Other settings are demonstrated in the NUnit test: http://csharptest.net/browse/src/Tools/Generators/Test/TestResXAutoLog.cs#80
Now you should have an exception class being generated that writes log events. You will need to build the MC file as a pre/post-build action with something like:
CSharpTest.Net.Generators.exe RESXTOMESSAGEDLL /output=MyCustomMessages.dll /input=TheProjectWithAResX.csproj
Lastly, you need to register it: run the framework's InstallUtil.exe MyCustomMessages.dll
That should get you started until I get time to document it all.
One suggestion from Ryan Roberts that I really like is Exceptioneer, which seems to solve my exception woes at least.
I would first go for log4net. The SmtpAppender can wait for N exceptions to accumulate before sending an email, which avoids crashing Outlook. And log4net also logs to files that can be stored on network drives, read with cat and grep, etc.
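For example (a minimal sketch in code; the addresses and host are placeholders, and the same settings are more commonly put in the XML config):

// sketch: email only when an ERROR arrives, batching up to 50 buffered events per mail
using log4net.Appender;
using log4net.Core;
using log4net.Layout;

var layout = new PatternLayout("%date %-5level %logger - %message%newline");
layout.ActivateOptions();
var smtp = new SmtpAppender
{
    To = "devteam@example.com",          // placeholder
    From = "app@example.com",            // placeholder
    Subject = "Application errors",
    SmtpHost = "smtp.example.com",       // placeholder
    BufferSize = 50,                     // hold up to 50 events before mailing
    Lossy = true,                        // only flush when the evaluator triggers
    Evaluator = new LevelEvaluator(Level.Error),
    Layout = layout
};
smtp.ActivateOptions();
log4net.Config.BasicConfigurator.Configure(smtp);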
About stats, you can perform health/performance logging with the same tools, i.e. spawn a thread that logs CPU usage etc. every minute.
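A minimal sketch of such a health-logging loop using PerformanceCounter (the counter's first NextValue() always returns 0, hence the priming read):

using System;
using System.Diagnostics;
using System.Threading;

var log = log4net.LogManager.GetLogger("Health");
var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
cpu.NextValue(); // first read is always 0; prime the counter
// keep a reference to the timer so it isn't garbage collected
var timer = new Timer(_ =>
{
    long workingSetMb = Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);
    log.InfoFormat("CPU {0:F1}%  WorkingSet {1} MB", cpu.NextValue(), workingSetMb);
}, null, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));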
I don't have a concrete answer for the first part of the question, since it implies automated log analysis and coalescence. At university we made a tool designed to do some of these things, but it doesn't apply to your scenario (though it is two-way integrated with log4net).
In terms of handled exceptions or just typical logging, l4ndash is worth a look. I always set up our log4net to not only write out text files but also append to a database. That way l4ndash can analyse it easily. It'll group your errors and let you see where bad things are occurring a lot. You get one free dev license.
With Elmah we just pull down the logs periodically. It can export as CSV, and then we use Excel to filter/group the data. It's not ideal, but it works. It would be nice to write a script to automate this a bit more. I've not seen much else out there for Elmah.
You can get some metrics on request times (and anything else that's saved) by running LogParser over the IIS logs.
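For example, something along these lines gives average request times per URL (a sketch; it assumes the default W3C log format with the time-taken field enabled):

LogParser.exe -i:IISW3C "SELECT TOP 20 cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs FROM ex*.log GROUP BY cs-uri-stem ORDER BY AVG(time-taken) DESC"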
We have built a simple monitoring app that sits on the desktop and flashes up red when there is either an exception written to the event log from one of the apps on the server or it writes an error to the error log table on the database. It also monitors the database health, checking fragmentation and the like.
The only problem we have with this is that it can be a little intrusive on the desktop, as it keeps popping up with a red message box if there is a problem. However, it does encourage you to fix it asap.
We currently have this running on several of the developers' machines. The improvement we are thinking of making is to have one monitoring app running on a server that publishes an RSS feed. That way the checking happens once, in one place, but we can consume the information from anywhere using whichever method we choose at the time (such as through our phones when we aren't in the office).
You can have an RSS feed select from your Exceptions table (and other things).
Then you can subscribe to the RSS feed in MS Outlook or on any smart phone. I use an RSS feed reader called NewsRob because it alerts me when there is something new.
I blog about how to do this HERE.
As a related step, I found a way to notify myself when something DIDN'T happen. That blog is HERE.
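If you prefer to roll the feed yourself rather than follow the blog posts, a rough sketch using System.ServiceModel.Syndication could look like this (LoadRecentExceptions and its fields are hypothetical stand-ins for a query against your Exceptions table):

// sketch of a Generic Handler (.ashx) that serves recent exceptions as RSS
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Web;
using System.Xml;

public class ExceptionFeedHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var items = new List<SyndicationItem>();
        foreach (var e in LoadRecentExceptions()) // hypothetical helper: SELECT from your Exceptions table
            items.Add(new SyndicationItem(e.Message, e.Detail, null, e.Id.ToString(), e.LoggedAtUtc));
        var feed = new SyndicationFeed("Application errors", "Latest logged exceptions", null, items);
        context.Response.ContentType = "application/rss+xml";
        using (var writer = XmlWriter.Create(context.Response.OutputStream))
            new Rss20FeedFormatter(feed).WriteTo(writer);
    }
    public bool IsReusable { get { return true; } }
}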
I'm making a C# (WinForms) app that I want users to be able to execute only a defined number of times (say 100 times). I know it's possible to add an XML or text file to store a count, but that would be easy to edit or "crack"... Is there any way to embed the counter in the code, or any other way that might not be easy to crack? And one that also makes it easy later to "update" the membership for another period of 100 executions?
Thanks in advance
There are lots of ways to store a variable. As you've noted, you can write it to a text or xml file. You could write it to the Registry. You could encrypt it and write it in a file somewhere.
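As an illustration of the encrypt-and-write option, here is a sketch using the Windows DPAPI (the file path is up to you; the entropy bytes are arbitrary):

// sketch: keep the run counter encrypted per-user via DPAPI (reference System.Security.dll)
using System;
using System.IO;
using System.Security.Cryptography;

static class RunCounter
{
    static readonly byte[] Entropy = { 12, 34, 56, 78 }; // arbitrary extra secret baked into the app

    public static int Read(string path)
    {
        if (!File.Exists(path)) return 0;
        byte[] plain = ProtectedData.Unprotect(File.ReadAllBytes(path), Entropy, DataProtectionScope.CurrentUser);
        return BitConverter.ToInt32(plain, 0);
    }

    public static void Write(string path, int count)
    {
        byte[] blob = ProtectedData.Protect(BitConverter.GetBytes(count), Entropy, DataProtectionScope.CurrentUser);
        File.WriteAllBytes(path, blob);
    }
}

Note that a determined user can still delete the file to reset the counter, which is one reason the server-side check below is stronger.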
Probably the most secure method is to write it on a server and have the application "call home" whenever it wants to run.
Preventing copying is a difficult balancing act - treat your legitimate customers too much like criminals and they'll leave you.
If you're talking about memberships, your application may be web connected. If that is the case, you could verify the instance against a web service on your server that holds and increments the count and issues an "OK/not OK to run" reply.
If you don't want to do this, I have heard of an application that uses steganography to hide relevant details in certain files - you could hide your count in some of your image resources.
Create multiple files containing the counter or the number of times your app will run. Give these files different names and store them in different locations so that they are hard for the user to locate, delete, or crack. The reason it is not just one file is that if the user finds one of your files and alters or deletes it, you still have other files containing the valid information about your app.
If your application is a commercial product, it might be worth having a look at security products from commercial vendors like SafeNet, for example.
A few years ago I used the HASP HL hardlock for a project, which worked just fine.
They offer hardware dongles for software protection as well as software based protection (using authentication services over the internet), and combinations of both.
Their products allow for very fine grained control of what you want to allow your users, e.g. how many times an application may be started before it expires (which would be just what you want) or time-expiration, or feature packages, or any combination of it all.
The downside is, that they have very "healthy" licensing prices.
Whether this is worth it will depend on the size and price of your own application.
thank you for reading my question :)
My goal is to modify the log message before a commit is sent to the SVN server.
I already have start-commit and pre-commit hooks (C#), both of which are called when I try to commit something. I also have a working SharpSvn library. But unfortunately I'm not getting anywhere; I have no idea how to fill in this message.
So is it possible and if yes, how? I'm glad about every little hint :)
Commit hooks don't talk to the client except on a failure. Then, they send whatever was sent to Standard Error to the user. Hook scripts cannot change any aspect of the commit. (This isn't entirely true: A post-commit hook could modify the commit message, the author, and the commit time, but that's normally a really bad idea.)
What you can do is ensure your commit message is in the correct format, and then fail the commit if it isn't. It would be up to the user to resubmit their commit with the correct format.
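Since you already have a C# pre-commit hook, a minimal format check might look like this (a sketch; it assumes svnlook.exe is on the PATH, and the 10-character minimum is arbitrary):

// pre-commit.exe: Subversion calls it with args[0] = repository path, args[1] = transaction name
using System;
using System.Diagnostics;

class PreCommit
{
    static int Main(string[] args)
    {
        string repos = args[0];
        string txn = args[1];
        var psi = new ProcessStartInfo("svnlook", "log -t " + txn + " \"" + repos + "\"")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (Process p = Process.Start(psi))
        {
            string message = p.StandardOutput.ReadToEnd().Trim();
            p.WaitForExit();
            if (message.Length < 10)
            {
                // anything written to stderr is shown to the committing user
                Console.Error.WriteLine("Commit rejected: log message must be at least 10 characters.");
                return 1; // non-zero exit code fails the commit
            }
        }
        return 0;
    }
}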
I would not recommend using the built in TortoiseSVN client hooks. These are done on a machine-by-machine basis, so a user could opt out of them, and they don't work if the user uses a different client (such as the VisualStudio AnkhSVN plugin).
You didn't mention what you're trying to do or why.
You're more than welcome to use my hook written in Perl. The kitchen sink hook does several tasks, and one of them makes sure that the commit message is in the correct format. For example, you can require that it contains a defect tracking ID, or be at least ...say... 10 characters long.
However, if the user's commit message doesn't match the criteria, the hook will fail the commit, and the user will have to try again. The trick is to put a good error message, so the user knows what they did wrong. Believe me, after one or two tries, the users get the hang of what a good commit message needs to be.
However, you might want to try a slightly different route: Use a continuous integration system such as Jenkins.
What I've noticed is that as soon as I get a project up and running on Jenkins, the commit messages automatically improve. Each Subversion commit is turned into a build. But, Jenkins also shows you all the changes between the previous and current build and the commit message.
This mainly has to do with visibility. Before Jenkins, the commit messages are fairly hidden. You only see them if you did an svn log and few people ever do that on a regular basis. However, in Jenkins, you see it right there on each build and commit. You can see the history, who changed it, and what they changed via a few clicks on a webpage.
That might be the best way to handle what you want.
Okay, I found an answer :)
It was wrong from me to try it via a hook-script. The better solution is building a custom issue tracker plugin:
http://tortoisesvn.googlecode.com/svn/trunk/contrib/issue-tracker-plugins/issue-tracker-plugins.txt
With this I can read my issues from the DB, let the developer select the ones he needs, and put them into the log-message window.
I am not a software engineer, as you will see if you continue reading, but I managed to write a very valuable application that saves our company lots of money. I am not paid to write software, I was not paid for writing this application, and my job title is not software engineer, so I would like to have total control over who uses this application if I ever had to leave, since as far as I can tell it is not legally theirs (I did not write it during company hours either).
This may sound childish, but I've put much, much time into this and I've been maintaining it almost on a daily basis, so I feel that I should have some control over it, or at least be able to sell it to my company if they ever had to let me go or I wanted to move on.
My current protection scheme on this application looks something like this:
string version;
// "http://MyWebSiteURL/..." stands in for a licence text file that says either 'expired' or 'not expired'
WebRequest request = WebRequest.Create("http://MyWebSiteURL/...");
using (WebResponse response = request.GetResponse())
using (StreamReader stream = new StreamReader(response.GetResponseStream()))
{
    version = stream.ReadToEnd();
}
if (version != "not expired")
{
    MessageBox.Show(Environment.NewLine + "application expired etc etc", "Version Control");
}
It checks my server for "not expired" (in plain text), and if the web request comes back as anything but "not expired", it pops up another form stating that the application has expired and allows you to type in a passcode for the day, which is a multiplication of some predetermined numbers by the current date, to create "day passes" if ever needed (I think Alan Turing just rolled over in his grave).
Not the best security scheme, but having no experience in software security I thought it was pretty clever. I had, however, heard of hex editing to get around security, so I did a little test for science and found this area of my compiled EXE:
"System.Net.WebRequest.", which I filled in with zeros to look like this: System.Net000000000
That was all it took to make the application hiccup during the server check, which allowed me to click "continue", completely bypass all my "security", and go along using the program without it ever expiring.
Now would a normal person go to this length (hex editing) to get past my protection scheme? Not likely. However, just as a learning experience, what could I do as an added step to make hex editing or any other common workaround fail unless performed by a "professional" cracker?
Again I'm not paranoid, I'm just eager to learn more about security of applications. I was both proud of myself and ashamed at the same time for creating and breaking my own protection.
If commenting, please be kind, since I know this is probably a humorous post to those more informed than I, as I really have little experience in writing software and have never taken any type of course, etc. Thanks for reading!
Another way to bypass the license check is to redirect the checking URL to localhost, always returning the desired text...
A better way is to make a call to a function doing the same thing, but make your server's response a signed XML document that includes the server's response timestamp, which you can additionally check against the system date-time (use UTC dates on both sides). It is also a good idea to throw exceptions whenever something is not the way you expect it, and to control the flow of your program with exception handling.
Check the following how-to guides to get a clue:
How to: Sign XML Documents with Digital Signatures
How to: Verify the Digital Signatures of XML Documents
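Roughly, the verification side could look like this (a sketch based on those how-tos; the Timestamp element and the five-minute window are assumptions about your own reply format, not part of the SignedXml API):

// sketch: verify the server's signed XML reply, then sanity-check its timestamp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.Xml; // System.Security.dll
using System.Xml;

static bool LicenseReplyIsValid(XmlDocument reply, RSA serverPublicKey)
{
    var signedXml = new SignedXml(reply);
    var signature = (XmlElement)reply.GetElementsByTagName("Signature", SignedXml.XmlDsigNamespaceUrl)[0];
    signedXml.LoadXml(signature);
    if (!signedXml.CheckSignature(serverPublicKey))
        return false;

    // "Timestamp" is an assumed element your server puts in the reply
    var stamp = DateTime.Parse(reply.SelectSingleNode("//Timestamp").InnerText).ToUniversalTime();
    return (DateTime.UtcNow - stamp).Duration() < TimeSpan.FromMinutes(5);
}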
Now would a normal person go to this length (hex editing) to try to
get past my protection scheme?
Well, I guess that depends on how useful the application is to that "normal person", and how determined they are to make it work.
Most .NET applications, unless obfuscated, can easily be decompiled to source code using tools like Telerik JustDecompile, or you can simply use ildasm to see the IL code. I've heard there are tools to decompile even obfuscated .NET libraries, although I haven't used or found any.
With my little experience, I can suggest two approaches:
Enforcing licensing, and cracking it, in an application that runs entirely on the user's machine is a cat-and-mouse game. You can add some extra protection to your code by moving part of the application's functionality to the server and exposing it as a web service which your client consumes. The part you move to the server must be important for the application to work and should be something that is hard to simulate.
The other approach is to add an auto-update feature to your application that checks the server for the latest version and, whenever it finds a new one, overwrites the old one, thus overriding any cracked copy. This can easily be disabled, but disabling it also stops any bug fixes you might release.
I have tried both approaches, but they are only useful to some extent, and you have to decide whether the enforcement is worth the effort.
In Windows, how can I programmatically determine which user account last changed or deleted a file?
I know that setting up object access auditing may be an option, but if I use that I then have the problem of trying to match up audit log entries to specific files... sounds complex and messy! I can't think of any other way, so does anyone either have any tips for this approach or any alternatives?
You can divide your problem into two parts:
Write to a log whenever a file is accessed.
Parse, filter and present the relevant information of the log.
Of those two, part 1, writing to the log, is a built-in function through auditing, as you mention. Reinventing that would be hard and would probably never be as good as the built-in functionality.
I would use the built in functionality for logging by setting up an audit ACL on those files. Then I would focus my efforts on providing a good interface that reads the event log, filters out relevant events and presents them in a way that is suitable and relevant for your users.
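For instance, once object-access auditing is enabled on those files, the relevant Security-log entries (event ID 4663, "an attempt was made to access an object", on Vista/Server 2008 and later) can be read like this (a sketch; requires administrative rights):

// sketch: pull file-access audit events from the Security log
using System;
using System.Diagnostics.Eventing.Reader; // .NET 3.5+

var query = new EventLogQuery("Security", PathType.LogName, "*[System[(EventID=4663)]]");
using (var reader = new EventLogReader(query))
{
    for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
    {
        // FormatDescription() contains the account name and the object (file) name
        Console.WriteLine("{0:u}  {1}", record.TimeCreated, record.FormatDescription());
        record.Dispose();
    }
}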
You could always create a file system filter driver. This might be overkill, but it depends on your purposes. You can have it load at boot, and it sits behind pretty much every file access (it's what virus scanners usually use to scan files as they are accessed).
It would simply need to log the "owner" of the application that is writing to the file.
Also see the MSDN documentation
The only way I know of to do this is to set up a FileSystemWatcher and keep it running. Oh, and if it's watching a network drive, it may randomly lose the connection, so it may be good to force a disconnect/reconnect every few hours just to make sure it has a fresh connection.
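A minimal sketch (the share path is a placeholder; note that FileSystemWatcher tells you what changed, not which user account changed it, so it only answers part of the question):

using System;
using System.IO;

var watcher = new FileSystemWatcher(@"\\server\share") // placeholder path
{
    IncludeSubdirectories = true,
    NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
};
watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);
watcher.Error   += (s, e) => Console.WriteLine("Watcher lost its connection; recreate it here.");
watcher.EnableRaisingEvents = true;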
I was reading the following article:
http://odetocode.com/articles/294.aspx
This article raised a lot of questions for me regarding logs.
(I don't know if I should have made these separate questions... but I don't want to spam stackoverflow.com with questions of mine.)
The first one is whether I should store logs in a .txt or .xml file, or even in a table inside the database.
Saving to .txt will probably be better for performance. But when someone needs to find something in the .txt file, it may become a pain in the... neck.
So… which one should I use, and why?
The second one: is there any specific class to deal with the "log" thing?
I have read several threads about this subject, and I didn’t find the answers to my questions.
Thanks in advance.
The easiest approach I've taken in the past is using log4net. That way you can configure the logging in the config file. If you need it to go to a database, set it up as such. If you want to be notified when a major error occurs, set it up that way.
As far as sorting through the logs, it really depends on the approach you want to take, and how much you plan on logging. Normally I log to a flat text file as I don't enable a lot of logging in my applications. So parsing through them isn't a big deal.
Unless you want to write a system for education purposes, I honestly think that you'd be best off sticking with log4net or nlog.
And further, you would probably be better off studying the code to those systems instead of writing your own.
As to your question, I would stick to a text file and buffer the messages before spitting them to disk.
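In log4net terms, that buffering can be done by wrapping a FileAppender in a BufferingForwardingAppender, roughly like this (a sketch; the names and buffer size are arbitrary):

using log4net.Appender;
using log4net.Config;
using log4net.Layout;

var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
layout.ActivateOptions();
var file = new FileAppender { File = "app.log", AppendToFile = true, Layout = layout };
file.ActivateOptions();
// hold 100 events in memory before writing them to disk in one go
var buffer = new BufferingForwardingAppender { BufferSize = 100 };
buffer.AddAppender(file);
buffer.ActivateOptions();
BasicConfigurator.Configure(buffer);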
Why bother reinventing the wheel? You can check out the MS Enterprise Library Logging Application Block.
Definitely not XML.
With XML, you will need to read the whole file, parse it, add whatever, then generate the whole XML again and write it back to disk. Every single time you log something.
Unless, of course, you append the nodes to the XML file manually, in which case you lose most of XML's advantages.
Warnings through fatal errors (whatever will help you debug the application if it crashes) I would store in a .txt file,
appending a new line for every entry.
This way you can also ask your user to check it out (if you assist them via the phone).
If it's not a meta log such as the one mentioned above, in other words if it's anything related to the program itself that you may need to analyze, keep it in the DB.
Regarding file vs database, it's up to you to choose.
File logs give greater performance, but at the price of harder access.
If the logs are there just to rarely provide information (e.g. the app crashes and you need to know why), you're better off storing the logs in a file.
If you want to give access to those logs, analyze them, etc, you should store them in a database.
.NET is really not my zone, but there are lots of reasons why you should use the framework's logging classes.
For my apps I have chosen to write to a DB. It's easier (for me) to read the logs this way. However, I do not go log-crazy as some people do; I only log what I need to log and nothing else.
I gave log4net a shot not too long ago and did not like it at all. It was a whole lot of junk just to write to a DB and send an email. I ended up writing a custom logging class; it was only ~200 lines and took just a few hours. It works great, I don't have another dependency, and it can easily be changed.
If you're dealing with ASP.NET, ELMAH is another good logging tool. It's apparently what Microsoft's Scott Hanselman uses.
It does need some additional code to get it to work with ASP.NET MVC's HandleError attribute, though.
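The workaround commonly used is a HandleErrorAttribute subclass that signals ELMAH when MVC marks an exception as handled, along these lines (a sketch of that approach, not official ELMAH code):

using System.Web.Mvc;
using Elmah;

public class ElmahHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext context)
    {
        base.OnException(context);
        // HandleError swallows the exception, so ELMAH never sees it; signal it explicitly
        if (context.ExceptionHandled)
            ErrorSignal.FromCurrentContext().Raise(context.Exception);
    }
}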
NLog and log4net both provide a rich logging API, but neither addresses the challenges you face in managing and analyzing all the data in your log files.
If you're willing to consider a commercial tool, take a look at Gibraltar: it works with NLog and log4net and also collects useful performance metrics. Most importantly, Gibraltar provides great tools for managing and analyzing logs.