I'm writing an app (C#) and at times I will need to log to the Windows event log. So the first thing that comes to mind is to write a function in my one and only class and call it when I need it. Something like this:
private void Write_Event_Log(string log, string source, string message, EventLogEntryType type, int eventid)
{
if (!EventLog.SourceExists(source))
EventLog.CreateEventSource(source, log);
EventLog.WriteEntry(source, message, type, eventid);
}
A colleague of mine asked, "why didn't you just create a new class for your event log writer?" So my question is, why would I? And what would this class even look like? And why would I need it when my function works nicely? ok that's 3 questions but you get the point :)
why would I?
To encapsulate the logging functionality into its own class. Why? The Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). By mixing it into your class you are making that class responsible for at least two things: whatever it does, and logging.
And what would this class even look like?
public class LogWriter
{
public static void Log(string log, string source, string message, EventLogEntryType type, int eventid)
{
if (!EventLog.SourceExists(source))
EventLog.CreateEventSource(source, log);
EventLog.WriteEntry(source, message, type, eventid);
}
}
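Calling code then reduces to a single static call; the log name, source, and event id below are only illustrative:

LogWriter.Log("Application", "MyApp", "Service started.", EventLogEntryType.Information, 1000);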
And why would I need it when my function works nicely?
Think about when you are no longer responsible for the code. Think ahead to when the code grows. Eventually, in addition to logging it might have a host of other very helpful functions included in it. The next programmer would be much happier not having to refactor your work because the design precedent has been set.
This is a very general question about OO design. Your colleague is referring to separation of responsibilities; he doesn't think that an event log writer fits into the abstraction of the class you put it in, and believes it deserves its own.
If this is all you are ever going to use it for (this one method) and the program is simple enough that you are implementing it in one class, there is no need to use another class to interact with your event writer. If you can conceive of your event writer being used in a different way, or by a different class, in the future, then yes, absolutely make it its own class so that you avoid future problems where you have to change the code that uses it.
The function you've written is a small function that doesn't keep state, so another class is not really necessary unless it's to avoid future problems.
Simple: what if you want to use this method everywhere else in your code base? You'd copy and paste it again. Instead, have a helper or add-in class that you just instantiate and keep calling.
Plus, if it's in a class, you can add more properties and provide more customization methods for logging data.
Also, see if you can make use of the built-in EventLog/Trace facilities.
If it's a small application (which with one class it must be) then it probably doesn't matter.
But design wise in a larger application, you probably would want to consider having the logging functionality in a class by itself in order to keep each class as narrowly focused as possible.
For the same reason that someone put SourceExists(source) and CreateEventSource(source, log) into their own class: so that you can call them just by referencing the assembly that defines that class and writing
EventLog.SourceExists(source);
or
EventLog.CreateEventSource(source, log);
So if you will never ever need to write to the event log in any other application you ever write, then what you are doing is fine... but if you might ever need this again, then .....
I think you should have a separate class, because if you are going to create more classes in your application you can use the same logging for all of them. See the example below:
using System;
using System.IO;

public static class Logger
{
    // Base directory for the log files; adjust as needed.
    private static string filePath = AppDomain.CurrentDomain.BaseDirectory;
    private static string logFilePath = string.Empty;

    public static void Log(string logMessage, TextWriter w)
    {
        w.Write(logMessage);
        w.Flush();
    }

    public static void Log(string textLog)
    {
        string directoryString = Path.Combine(filePath, "Logging");
        Directory.CreateDirectory(directoryString);
        logFilePath = Path.Combine(directoryString,
            DateTime.Now.ToShortDateString().Replace("/", "") + ".txt");

        // Create the file on first use, then append to it.
        StreamWriter sw = null;
        if (!File.Exists(logFilePath))
        {
            try
            {
                sw = File.CreateText(logFilePath);
            }
            finally
            {
                if (sw != null) sw.Dispose();
            }
        }
        using (StreamWriter w = File.AppendText(logFilePath))
        {
            Log(textLog, w);
        }
    }
}
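Any class in the application can then log with one call, e.g. (the message text is illustrative):

Logger.Log("Payment module initialized.");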
I agree that you shouldn't create a new class for writing directly to the event log, but for another reason. That class already exists!
Consider using the built-in debug/tracing mechanisms in System.Diagnostics:
Debug output
Trace output
These are standard classes that dump information to a collection of TraceListener objects, of which many useful types already exist:
DefaultTraceListener - Dumps output to standard debug out, I believe via OutputDebugString().
EventLogTraceListener - Dumps output to the windows event log.
So this changes your output mechanism from a programmatic question into a configuration question. (Yes, if you're working in a straight-up managed app, you can populate your TraceListener collection via your app.config.) That means that everywhere else you simply use the appropriate Trace.Write() or Debug.Write() call (depending on whether you want the output in a release build), and the configuration determines where the output goes.
Of course, you can also populate your TraceListener collection programmatically; it's fun and simple.
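For example, a minimal programmatic setup might look like this (the source name is a placeholder, not something from the original post):

using System.Diagnostics;

class TraceSetup
{
    static void Main()
    {
        // Route Trace output to the Windows event log under a placeholder
        // source name; the source must already exist or be creatable.
        Trace.Listeners.Add(new EventLogTraceListener("MyAppSource"));
        Trace.AutoFlush = true;

        Trace.WriteLine("Application started.");  // reaches every registered listener
    }
}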
And this way you don't have to build up your own home-grown logging infrastructure. It's all built-in! Use it in good health! :D
If, on the other hand, you insist on rolling your own (a bad idea, I think), your colleague is right. It's a separate responsibility and belongs in a separate class. I would expect static methods for output, because there's probably no concept of instances of your debug log. In fact, I'd expect an interface very similar to System.Diagnostics.Debug, so yeah, just use that one instead.
Depending on your approach, you may run into a subtle gotcha that's in the docs but not immediately obvious without a careful reading. I found an answer for it elsewhere.
Related
I am looking for a way not to use Message Files, as I don't want the mess that comes with them.
I would like to be able to write events using a method similar to
public void WriteEvent(EventLogEntryType type, string description, int eventId, int categoryId)
And specify those categories in the same class I register my EventSource, in some enum.
Thanks!
Unfortunately this is not possible.
Even though the API has changed a bit since the blog post I mentioned, the principle has stayed the same.
See the documentation + samples:
https://msdn.microsoft.com/en-us/library/650k61tw(v=vs.100).aspx
https://msdn.microsoft.com/en-us/library/system.diagnostics.eventinstance.categoryid(v=vs.100).aspx
https://msdn.microsoft.com/en-us/library/system.diagnostics.eventloginstaller.categoryresourcefile(v=vs.100).aspx
I found an acceptable workaround for this: use a different source id instead of categoryId. It is simpler and can be done with the simple API.
Example:
Manage event sources on your own and create an event source per category type. Use some lazy creation logic, e.g. by running:
// Fields assumed by the snippet (the log name here is hypothetical):
private static readonly object _eventSourceCreationLock = new object();
private const string _logName = "MyAppLog";

// Double-checked so only one thread creates a missing source.
if (!EventLog.SourceExists(sourceName))
{
    lock (_eventSourceCreationLock)
    {
        if (!EventLog.SourceExists(sourceName))
        {
            EventLog.CreateEventSource(sourceName, _logName);
        }
    }
}
And then, use this to write each log entry per source:
EventLog.WriteEntry(sourceName, description, type, id);
These samples are thread safe as well, as the static calls create a new internal event log.
Instance methods of EventLog aren't guaranteed to be thread safe.
I am attempting to build (for learning purposes) my own event logger; I am not interested in hearing about using non-.NET frameworks instead of building my own, as I am doing this to better understand .NET.
The idea is to have an event system that I can write out to a log file and/or pull from while inside the program. To do this I am creating a LogEvent class that will be stored inside of a Queue<LogEvent>.
I am planning on using the following fields in my LogEvent class:
private EventLogEntryType _eventType //enum: error, info, warning...
private string _eventMessage
private Exception _exception
private DateTime _eventTime
What I am not sure is the best way to capture the object that caused the event to be called. I thought about just doing a private Object _eventObject; but I am thinking that is not thread safe or secure.
Any advice on how to best store the object that called the event would be appreciated. I am also open to any other suggestions you may have.
Thanks, Tony
First off, there's nothing wrong with writing your own. There are some good frameworks out there, but sometimes you reach the point where some bizarre requirement gets you rolling your own; I've been there anyway...
I don't think you should be using free-text messages. After doing this type of logging in several projects, I have come to the conclusion that the best approach is to have a set of event types (integer IDs) with some kind of extra-information field.
You should have an enum of LogEventTypes that looks something like this:
public enum LogEventTypes
{
//1xxx WS Errors
ThisOrThatWebServiceError = 1001,
//2xxx DB access error
//etc...
}
This, from my experience, will make your life much easier when trying to make use of the information you logged. You can also add an ExtraInformation field in order to provide event-instance-specific information.
As for the object that caused the event, I would just use something like typeof(YourClass).ToString();. If this is a custom class you created, you can also implement a ToString override that will make sense in your logging context.
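Putting those pieces together, a rough sketch of the event record might look like this (the member names are illustrative, not from the question):

using System;

public class LogEvent
{
    public LogEventTypes EventType { get; private set; }  // integer-ID enum from above
    public string SourceTypeName { get; private set; }    // typeof(...).ToString() of the caller
    public string ExtraInformation { get; private set; }
    public DateTime EventTime { get; private set; }

    public LogEvent(LogEventTypes eventType, string sourceTypeName, string extraInformation)
    {
        EventType = eventType;
        SourceTypeName = sourceTypeName;
        ExtraInformation = extraInformation;
        EventTime = DateTime.UtcNow;
    }
}

// Usage: store the type name, not a reference to the live object.
// queue.Enqueue(new LogEvent(LogEventTypes.ThisOrThatWebServiceError,
//     typeof(MyService).ToString(), "timeout after 30 seconds"));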
Edit: I am adding several details I wrote about in the comments, since I think they are important. Passing objects which are not immutable by reference to service methods is generally not a good idea. You might reassign the same variable in a loop (for example) and create a bug that is near impossible to find. Also, I would recommend doing some extra work now to decouple the logging infrastructure from the implementation details of the application, since doing so later will cause a lot of pain. I am saying this from my own very painful experience.
I'm working on a fork of the Divan CouchDB library, and ran into a need to set some configuration parameters on the HttpWebRequest that's used behind the scenes. At first I started threading the parameters through all the layers of constructors and method calls involved, but then decided: why not pass in a configuration delegate?
so in a more generic scenario,
given :
class Foo {
private parm1, parm2, ... , parmN
public Foo(parm1, parm2, ... , parmN) {
this.parm1 = parm1;
this.parm2 = parm2;
...
this.parmN = parmN;
}
public Bar DoWork() {
var r = new externallyKnownResource();
r.parm1 = parm1;
r.parm2 = parm2;
...
r.parmN = parmN;
r.doStuff();
}
}
do:
class Foo {
private Action<externallyKnownResource> configurator;
public Foo(Action<externallyKnownResource> configurator) {
this.configurator = configurator;
}
public Bar DoWork() {
var r = new externallyKnownResource();
configurator(r);
r.doStuff();
}
}
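In use, the caller supplies the configuration inline; a sketch against the pseudocode above (someValue stands in for whatever the caller wants to apply):

var foo = new Foo(r => r.parm1 = someValue);
var result = foo.DoWork();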
The latter seems a lot cleaner to me, but it does expose to the outside world that class Foo uses externallyKnownResource.
Thoughts?
This can lead to cleaner looking code, but has a huge disadvantage.
If you use a delegate for your configuration, you lose a lot of control over how the objects get configured. The problem is that the delegate can do anything - you can't control what happens here. You're letting a third party run arbitrary code inside of your constructors, and trusting them to do the "right thing." This usually means you end up having to write a lot of code to make sure that everything was setup properly by the delegate, or you can wind up with very brittle, easy to break classes.
It becomes much more difficult to verify that the delegate properly sets up each requirement, especially as you go deeper into the tree. Usually, the verification code ends up much messier than the original code would have been, passing parameters through the hierarchy.
I may be missing something here, but it seems like a big disadvantage to create the externallyKnownResource object down in DoWork(). This precludes easy substitution of an alternate implementation.
Why not:
public Bar DoWork( IExternallyKnownResource r ) { ... }
IMO, you're best off accepting a configuration object as a single parameter to your Foo constructor, rather than a dozen (or so) separate parameters.
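For instance, a rough sketch of the configuration-object idea (the option names are invented for illustration):

using System;

// Hypothetical options bag; Foo copies and validates what it needs.
public class FooOptions
{
    public int TimeoutMilliseconds { get; set; }
    public string UserAgent { get; set; }
}

public class Foo
{
    private readonly FooOptions options;

    public Foo(FooOptions options)
    {
        // Validate once, here, instead of trusting an arbitrary delegate.
        if (options == null) throw new ArgumentNullException("options");
        this.options = options;
    }

    // DoWork() would map the validated options onto the underlying
    // resource itself, keeping that resource hidden from callers.
}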
Edit:
there's no one-size-fits-all solution, no. but the question is fairly simple. i'm writing something that consumes an externally known entity (httpwebrequest) that's already self-validating and has a ton of potentially necessary parameters. my options, really, are to re-create almost all of the configuration parameters this has, and shuttle them in every time, or put the onus on the consumer to configure it as they see fit. – kolosy
The problem with your request is that in general it is poor class design to make the user of the class configure an external resource, even if it's a well-known or commonly used resource. It is better class design to have your class hide all of that from the user of your class. That means more work in your class, yes, passing configuration information to your external resource, but that's the point of having a separate class. Otherwise why not just have the caller of your class do all the work on your external resource? Why bother with a separate class in the first place?
Now, if this is an internal class doing some simple utility work for another class that you will always control, then you're fine. But don't expose this type of paradigm publicly.
I am running into a design disagreement with a co-worker and would like people's opinion on object constructor design. In brief, which object construction method would you prefer and why?
public class myClass
{
    Application m_App;

    public myClass(Application app)
    {
        m_App = app;
    }

    public void DoSomething()
    {
        m_App.Method1();
        m_App.Object.Method();
    }
}
Or
public class myClass
{
    Object m_someObject;
    Object2 m_someOtherObject;

    public myClass(Object instance, Object2 instance2)
    {
        m_someObject = instance;
        m_someOtherObject = instance2;
    }

    public void DoSomething()
    {
        m_someObject.Method();
        m_someOtherObject.Method();
    }
}
The back story is that I ran into what appears to be a fundamentally different view on constructing objects today. Currently, objects are constructed using an Application class which contains all of the current settings for the application (Event log destination, database strings, etc...) So the constructor for every object looks like:
public Object(Application)
Many classes hold the reference to this Application class individually. Inside each class, the values of the application are referenced as needed. E.g.
Application.ConfigurationStrings.String1 or Application.ConfigSettings.EventLog.Destination
Initially I thought you could use both methods. The problem is that at the bottom of the call stack you call the parameterized constructor; then, higher up the stack, when a newly constructed object expects the reference to the Application object to be there, we ran into a lot of null reference errors and saw the design flaw.
My feeling on using an application object to set every class is that it breaks encapsulation of each object and allows the Application class to become a god class which holds information for everything. I run into problems when thinking of the downsides to this method.
I wanted to change the objects constructor to accept only the arguments it needs so that public object(Application) would change to public object(classmember1, classmember2 etc...). I feel currently that this makes it more testable, isolates change, and doesn't obfuscate the necessary parameters to pass.
Currently, another programmer does not see the difference and I am having trouble finding examples or good reasons to change the design, and saying it's my instinct and just goes against the OO principles I know is not a compelling argument. Am I off base in my design thoughts? Does anyone have any points to add in favor of one or the other?
Hell, why not just make one giant class called "Do" and one method on it called "It" and pass the whole universe into the It method?
Do.It(universe)
Keep things as small as possible. Discrete means easier to debug when things inevitably break.
My view is that you give the class the smallest set of "stuff" it needs to do its job. The "Application" approach is easier up front but, as you've seen already, it will lead to maintenance issues.
I think Steve McConnell put it very succinctly. He states:
"The difference between the
'convenience' philosophy and the
'intellectual manageability'
philosophy boils down to a difference
in emphasis between writing programs
and reading them. Maximizing scope
may indeed make programs easy to
write, but a program in which any
routine can use any variable at any
time is harder to understand than a
program that uses well-factored
routines. In such a program you can't
understand only one routine; you have
to understand all the other routines
with which that routine shares global
data. Such programs are hard to read,
hard to debug, and hard to modify." [McConnell 2004]
I wouldn't go so far as to call the Application object a "god" class; it really seems like a utility class. Is there a reason it isn't a public static class (or, better yet, a set of classes) that the other classes can use at will?
I need advice on how to have my C# console application display text to the user through the standard output while still being able access it later on. The actual feature I would like to implement is to dump the entire output buffer to a text file at the end of program execution.
The workaround I use while I don't find a cleaner approach is to subclass TextWriter overriding the writing methods so they would both write to a file and call the original stdout writer. Something like this:
using System;
using System.IO;
using System.Text;

public class DirtyWorkaround {
    private class DirtyWriter : TextWriter {
        private TextWriter stdoutWriter;
        private StreamWriter fileWriter;

        public DirtyWriter(string path, TextWriter stdoutWriter) {
            this.stdoutWriter = stdoutWriter;
            this.fileWriter = new StreamWriter(path);
        }

        // TextWriter requires this; delegate to the console writer's encoding.
        public override Encoding Encoding {
            get { return stdoutWriter.Encoding; }
        }

        public override void Write(string s) {
            stdoutWriter.Write(s);
            fileWriter.Write(s);
            fileWriter.Flush();
        }

        // Same as above for WriteLine() and WriteLine(string).

        protected override void Dispose(bool disposing) {
            if (disposing) fileWriter.Dispose();
            base.Dispose(disposing);
        }
    }

    public static void Main(string[] args) {
        using (DirtyWriter dw = new DirtyWriter("path", Console.Out)) {
            Console.SetOut(dw);
            // Teh codez
        }
    }
}
See that it writes to and flushes the file all the time. I'd love to do it only at the end of the execution, but I couldn't find any way to access the output buffer.
Also, excuse inaccuracies with the above code (had to write it ad hoc, sorry ;).
The perfect solution for this is to use log4net with a console appender and a file appender. There are many other appenders available as well. It also allows you to turn the different appenders off and on at runtime.
I don't think there's anything wrong with your approach.
If you wanted reusable code, consider implementing a class called MultiWriter or some such that takes as input two (or N) TextWriter streams and distributes all writes, flushes, etc. to those streams. Then you can do this file/console thing, but just as easily you can split any output stream. Useful!
Probably not what you want, but just in case... Apparently, PowerShell implements a version of the venerable tee command. Which is pretty much intended for exactly this purpose. So... smoke 'em if you got 'em.
I would say mimic the diagnostics that .NET itself uses (Trace and Debug).
Create a "output" class that can have different classes that adhere to a text output interface. You report to the output class, it automatically sends the output given to the classes you have added (ConsoleOutput, TextFileOutput, WhateverOutput).. And so on.. This also leaves you open to add other "output" types (such as xml/xslt to get a nicely formatted report?).
Check out the Trace Listeners Collection to see what I mean.
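As a rough sketch of that idea using the built-in listeners (the file name is arbitrary):

using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Fan output out to the console and a log file at once.
        Trace.Listeners.Add(new ConsoleTraceListener());
        Trace.Listeners.Add(new TextWriterTraceListener("run.log"));
        Trace.AutoFlush = true;

        Trace.WriteLine("This appears on the console and in run.log.");
    }
}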
Consider refactoring your application to separate the user-interaction portions from the business logic. In my experience, such a separation is quite beneficial to the structure of your program.
For the particular problem you're trying to solve here, it becomes straightforward for the user-interaction part to change its behavior from Console.WriteLine to file I/O.
I'm working on implementing a similar feature to capture output sent to the Console and save it to a log while still passing the output in real time to the normal Console, so it doesn't break the application (e.g. if it's a console application!).
If you're still trying to do this in your own code by saving the console output (as opposed to using a logging system to save just the information you really care about), I think you can avoid the flush after each write, as long as you also override Flush() and make sure it flushes the original stdoutWriter you saved as well as your fileWriter. You want to do this in case the application is trying to flush a partial line to the console for immediate display (such as an input prompt, a progress indicator, etc), to override the normal line-buffering.
If that approach has problems with your console output being buffered too long, you might need to make sure that WriteLine() flushes stdoutWriter (but probably doesn't need to flush fileWriter except when your Flush() override is called). But I would think that the original Console.Out (actually going to the console) would automatically flush its buffer upon a newline, so you shouldn't have to force it.
You might also want to override Close() to (flush and) close your fileWriter (and probably stdoutWriter as well), but I'm not sure if that's really needed or if a Close() in the base TextWriter would issue a Flush() (which you would already override) and you might rely on application exit to close your file. You should probably test that it gets flushed on exit, to be sure. And be aware that an abnormal exit (crash) likely won't flush buffered output. If that's an issue, flushing fileWriter on newline may be desirable, but that's another tricky can of worms to work out.
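For reference, a sketch of those extra overrides against the DirtyWriter fields from the question:

// Forward Flush to both writers so partial lines (prompts, progress
// indicators) reach the console immediately.
public override void Flush()
{
    stdoutWriter.Flush();
    fileWriter.Flush();
}

// Make sure buffered log output reaches the file even if Dispose is skipped.
public override void Close()
{
    fileWriter.Close();
    base.Close();
}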