I use a log property in my class that is intended only for debugging purposes.
Note: I do not use any existing logging package, as I manage large lists of objects, each of which has its own (!) log.
As it is not used in release mode, it is enclosed in a preprocessor directive:
#if DEBUG
public List<LogItem> DebugLog { get; }
#endif
Unfortunately, I need to initialize and copy this property a few times, leading to messy code like this:
public MyClass(object parameterA, object parameterB, ...
#if DEBUG
    , List<LogItem> debugLog
#endif
    )
{
    throw new NotImplementedException();
}
Whilst for the actual logging I wrote a [Conditional("DEBUG")] method, I am not aware of any way to avoid these ugly, idiom-violating directives for declaring arguments and parameters. The ConditionalAttribute appears to be applicable only to methods and attribute classes.
I am wondering whether there is a design pattern for this problem that enables better readability. I am looking forward to your ideas!
Have a look at log4net, which is probably the most popular open-source logger for .NET. One of the many benefits you'll see is that you can configure the logger once, in your config file, and have different configurations for debug and release. That way, switching to release is done automatically when you publish the web site or app, you don't have conditional directives in your code, and you don't need to change the code to account for different situations.
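For illustration, here is a minimal sketch of the usual log4net pattern (the class name and message are placeholders): you fetch a logger once, and the active configuration alone decides what gets written.
using log4net;

public class MyClass
{
    // One logger per type is the common log4net convention; per-object
    // logs could instead be obtained via LogManager.GetLogger(name).
    private static readonly ILog Log = LogManager.GetLogger(typeof(MyClass));

    public void DoWork()
    {
        // Whether this line is written anywhere is decided by the config
        // file (e.g. level DEBUG vs. WARN), not by #if directives.
        Log.Debug("Entering DoWork");
    }
}
At startup you call log4net.Config.XmlConfigurator.Configure() once (or use the assembly-level XmlConfigurator attribute) so the configuration file is read.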
How can I inject C# preprocessor directives into an interface via reflection?
Example:
I want to inject #if SILVERLIGHT into any WCF service contract interface.
Short answer: you can't.
Slightly longer answer: your question doesn't even make sense in the first place.
Preprocessor directives are processed before compilation. The result of that processing is the new, modified, source code. That source code then gets compiled.
For example, if the SILVERLIGHT symbol is not defined at the time of compilation, then the whole code between #if SILVERLIGHT and #endif will be completely ignored by the compiler, as if it wasn't even there.
That's not possible. As per the name, preprocessor directives exist only just before compile time. Nowhere else.
Is it possible in C# to set up a condition such that if the condition is true, one file is compiled, and if it is false, another file is compiled?
Sort of like
#ifdef DEBUG
#include Class1.cs
#else
#include Class2.cs
#endif
Or possibly set it up in project properties.
No, it isn't.
However, you can wrap both entire files in #if blocks.
You might also want to look at the [Conditional] attribute.
I wouldn't recommend it. I don't like the idea of Debug and Release having such wildly different code that you need to have two totally separate files to make sense of the differences. #if DEBUG at all is a pretty big code smell IMO...
However, you could do it like this:
// Class1.cs
#if DEBUG
...
#endif
// Class2.cs
#if !DEBUG
...
#endif
In C# we don't have file includes, but you can use conditional methods.
For instance, say I'm developing a game and I'm using a shared code base for my input class, but I want one method to be called if I'm on an Xbox, and a different method to be called if I'm on a Zune. It's still going to return the same class of input data, but it's going to take a very different route to get it.
You can learn more about conditional methods here:
http://msdn.microsoft.com/en-us/library/aa288458(v=VS.71).aspx
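As a rough sketch of that idea (the type and symbol names are hypothetical; note that [Conditional] methods must return void, so the result is stored in a field rather than returned):
using System.Diagnostics;

public class InputPoller
{
    private InputData _current = new InputData();

    public InputData Poll()
    {
        // Only the call whose symbol is defined in the current platform
        // build survives compilation; the other call site is removed
        // entirely by the compiler.
        ReadXboxInput();
        ReadZuneInput();
        return _current;
    }

    [Conditional("XBOX")]
    private void ReadXboxInput() { /* Xbox-specific polling fills _current */ }

    [Conditional("ZUNE")]
    private void ReadZuneInput() { /* Zune-specific polling fills _current */ }
}

public class InputData { /* shared input representation */ }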
Thankfully no, not in the preprocessor like that. You can, however, exclude the implementation of a whole file within the file itself. Or you can set up a build process with MSBuild or NAnt that switches around your files.
Having recently learned of the DebuggerDisplay attribute, I've found it quite useful. However, one thing that surprises me is that it doesn't have a [ConditionalAttribute("DEBUG")] attribute attached to it. Is there some way to force this or is it a bad idea to try? Or does it not matter for some other reason?
The [ConditionalAttribute("DEBUG")] is only used for optimising out method calls.
If you really want to remove these from your builds you can use #ifdef so that the code is only compiled in release mode.
One thing to bear in mind is that you can still debug binaries in release mode, as long as you have the pdb files it shouldn't matter. Release mode just clears up variables sooner and applies some compiler optimisations
As I often have to debug things in Release configuration builds that don't have the DEBUG directive, I would not want these hints to the debugger to be removed.
However, if you have some proprietary or confidential information in the way you display things when debugging that you don't want to make it into your release build, you may want to consider using the ConditionalAttribute or #if/#elif/#endif preprocessor directives to control what is emitted into your release builds.
For example, you could do:
#if DEBUG
[DebuggerDisplay...]
#endif
public class MyAwesomeClass
{
}
This would ensure the attribute is only emitted when the DEBUG directive is given.
I'll share a pattern that I've come to appreciate, using partial classes.
public partial class MyClass
{
    // class details here
}
And then elsewhere:
#if DEBUG
[DebuggerDisplay("DebuggerValue")]
public partial class MyClass
{
    // anything needed for debugging purposes
}
#endif
This gives you the ability to use DebuggerDisplay or other attributes without cluttering up the base class.
I've been using a handful of files, all wrapped in #if DEBUG, to hold these debug partials. It helps keep the core classes cleaner, and I don't have to remember to start/end compiler directives for each attribute.
I would think it would be a bad idea, because a lot of times the thing you're attaching the attribute to has some other use besides just showing it in the debugger, IMO.
I believe that the usage of preprocessor directives like #if UsingNetwork is bad OO practice; other coworkers do not.
I think that, when using an IoC container (e.g. Spring), components can be easily configured if programmed accordingly. In this context, either a property IsUsingNetwork can be set by the IoC container or, if the "using network" implementation behaves differently, another implementation of that interface should be implemented and injected (e.g. IService, ServiceImplementation, NetworkingServiceImplementation; see the sketch at the end of this question).
Can somebody please provide citations of OO gurus or references in books that basically read "Preprocessor usage is bad OO practice if you try to configure behaviour which should be configured via an IoC container"?
I need these citations to convince coworkers to refactor...
Edit: I do know and agree that using preprocessor directives to change target-platform-specific code during compilation is fine, and that is what preprocessor directives are made for. However, I think that runtime configuration should be used rather than compile-time configuration to get well-designed and testable classes and components. In other words: using #defines and #ifs beyond what they are meant for will lead to code that is difficult to test and to badly designed classes.
Has anybody read something along these lines that I can refer to?
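To make the second option concrete, here is a minimal sketch using the names above (the Execute method is just a hypothetical placeholder):
public interface IService
{
    void Execute();
}

// Plain implementation, used when networking is disabled.
public class ServiceImplementation : IService
{
    public void Execute() { /* local behaviour */ }
}

// Networking variant; the IoC container decides which one gets injected.
public class NetworkingServiceImplementation : IService
{
    public void Execute() { /* networked behaviour */ }
}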
Henry Spencer wrote a paper called #ifdef Considered Harmful.
Also, Bjarne Stroustrup himself, in chapter 18 of his book The Design and Evolution of C++, frowns on the use of the preprocessor and wishes to eliminate it completely. However, Stroustrup also recognizes the necessity of the #ifdef directive and conditional compilation, and goes on to illustrate that there is no good alternative for it in C++.
Finally, Pete Goodliffe, in chapter 13 of his book Code Craft: The Practice of Writing Excellent Code, gives an example of how, even when used for its original purpose, #ifdef can make a mess of your code.
Hope this helps. However, if your co-workers won't listen to reasonable arguments in the first place, I doubt book quotes will help convince them ;)
Preprocessor directives in C# have very clearly defined and practical use cases. The ones you're specifically talking about, called conditional directives, are used to control which parts of the code are compiled and which aren't.
There is a very important difference between not compiling parts of code and controlling how your object graph is wired via IoC. Let's look at a real-world example: XNA. When you're developing XNA games that you plan to deploy on both Windows and Xbox 360, your solution will typically have at least two platforms that you can switch between in your IDE. There will be several differences between them, but one of those differences will be that the Xbox 360 platform will define a conditional symbol XBOX360, which you can use in your source code with the following idiom:
#if (XBOX360)
// some XBOX360-specific code here
#else
// some Windows-specific code here
#endif
You could, of course, factor out these differences using a Strategy design pattern and control via IoC which one gets instantiated (a sketch follows the list below), but conditional compilation offers at least three major advantages:
You don't ship code you don't need.
You can see the differences between platform-specific code for both platforms in the rightful context of that code.
There's no indirection overhead. The appropriate code is compiled, the other isn't and that's it.
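For comparison, here is roughly what that Strategy alternative would look like (all names are hypothetical). Note that both implementations ship in the assembly and the choice moves to runtime, which is exactly the dead weight and indirection the list above is about:
public interface IPlatformInput
{
    void Poll();
}

public class Xbox360Input : IPlatformInput
{
    public void Poll() { /* some XBOX360-specific code here */ }
}

public class WindowsInput : IPlatformInput
{
    public void Poll() { /* some Windows-specific code here */ }
}

// An IoC container (or a factory) picks the implementation at runtime,
// so both branches are compiled and shipped on every platform.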
IMHO, you talk about C and C++, not about OO practice in general. And C is not Object-oriented. In both languages the preprocessor is actually useful. Just use it correctly.
I think this answer belongs in the C++ FAQ: [29.8] Are you saying that the preprocessor is evil?
Yes, that's exactly what I'm saying: the preprocessor is evil.
Every #define macro effectively creates a new keyword in every source file and every scope until that symbol is #undefd. The preprocessor lets you create a #define symbol that is always replaced independent of the {...} scope where that symbol appears.
Sometimes we need the preprocessor, such as the #ifndef/#define wrapper within each header file, but it should be avoided when you can. "Evil" doesn't mean "never use." You will use evil things sometimes, particularly when they are "the lesser of two evils." But they're still evil :-)
I hope this source is authoritative enough :-)
"The preprocessor is the incarnation of evil, and the cause of all pain on earth" -Robert (OO Guru)
IMO it is important to differentiate between #if and #define. Both can be useful and both can be overused. My experience is that #define is more likely to be overused than #if.
I spent 10+ years doing C and C++ programming. In the projects I worked on (commercially available software for DOS / Unix / Macintosh / Windows) we used #if and #define primarily to deal with code portability issues.
I spent enough time working with C++ / MFC to learn to detest #define when it is overused - which I believe to be the case in MFC circa 1996.
I then spent 7+ years working on Java projects. I cannot say that I missed the preprocessor (although I most certainly did miss things like enumerated types and templates / generics which Java did not have at the time).
I've been working in C# since 2003. We have made heavy use of #if and [Conditional("DEBUG")] for our debug builds - but #if is just a more convenient, and slightly more efficient way of doing the same things we did in Java.
Moving forward, we have started to prepare our core engine for Silverlight. While everything we are doing could be done without #if, it is less work with #if, which means we can spend more time adding features that our customers are asking for. For example, we have a value class which encapsulates a system color for storage in our core engine. Below are the first few lines of code. Because of the similarity between System.Drawing.Color and System.Windows.Media.Color, the #if at the top gets us a lot of functionality in normal .NET and in Silverlight without duplicating code:
using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics;
#if SILVERLIGHT
using SystemColor = System.Windows.Media.Color;
#else
using SystemColor = System.Drawing.Color;
#endif
namespace SpreadsheetGear.Drawing
{
    /// <summary>
    /// Represents a Color in the SpreadsheetGear API and provides implicit conversion operators to and from System.Drawing.Color and / or System.Windows.Media.Color.
    /// </summary>
    public struct Color
    {
        public override string ToString()
        {
            //return string.Format("Color({0}, {1}, {2})", R, G, B);
            return _color.ToString();
        }

        public override bool Equals(object obj)
        {
            return (obj is Color && (this == (Color)obj))
                || (obj is SystemColor && (_color == (SystemColor)obj));
        }
        ...
The bottom line for me is that there are many language features which can be overused, but this is not a good enough reason to leave these features out or to make strict rules prohibiting their use. I must say that moving to C# after programming in Java for so long helps me appreciate this, because Microsoft (Anders Hejlsberg) has been more willing to provide features which might not appeal to a college professor, but which make me more productive in my job and ultimately enable me to build a better widget in the limited time anybody with a ship date has.
One problem with the preprocessor's #ifdefs is that they effectively double the number of compiled versions that, in theory, you should test thoroughly so that you can say your delivered code is correct.
#ifdef DEBUG
//...
#else
//...
#endif
OK, now I can produce the "Debug" version and the "Release" version. This is fine for me; I always do it, because I have assertions and debug traces which are only executed in the debug version.
If someone comes and writes (real life example)
#ifdef MANUALLY_MANAGED_MEMORY
...
and propagates this pet optimization to four or five different classes, then suddenly you have FOUR possible ways to compile your code.
If you have just one more piece of #ifdef-dependent code, you'll have EIGHT possible versions to generate and, what's more disturbing, FOUR of them will be possible release versions.
Of course runtime if()s, like loops and whatever, create branches that you have to test too - but I find it much more difficult to guarantee that every compile-time variation of the configuration remains correct.
This is the reason why I think that, as a policy, all #ifdefs except the Debug/Release one should be temporary, i.e. you're doing an experiment in development code and you'll decide, soon, whether it stays one way or the other.
Bjarne Stroustrup provides his answer (in general, not specific to IoC) here:
So, what's wrong with using macros?
(excerpt)
Macros do not obey the C++ scope and type rules. This is often the cause of subtle and not-so-subtle problems. Consequently, C++ provides alternatives that fit better with the rest of C++, such as inline functions, templates, and namespaces.
The support for preprocessing in C# is minimal... verging on useless. Is that evil?
Does the preprocessor have anything to do with OO? Surely it's for build configuration.
For instance, I have a lite version and a pro version of my app. I might want to exclude some code in the lite version without having to resort to building very similar versions of the code.
I might not want to ship a lite version which is just the pro version with different runtime flags.
Tony
Using an #if instead of IoC or some other mechanism for controlling different functionality based on configuration is probably a violation of the Single Responsibility Principle, which is key to 'modern' OO design. Here is an extensive series of articles about OO design principles.
Since the parts in the different sections of the #if by definition concern themselves with different aspects of the system, you are now coupling the implementation details of at least two different components into the dependency chain of your code that uses the #if.
By refactoring those concerns out, you have created a class that, assuming it is finished and tested, will no longer need to be cracked open unless the common code is broken.
In your original case, you'll need to remember the existence of the #if and take it into account any time any of the components involved changes, with respect to possible side effects of a breaking change.
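As a rough illustration of that refactoring (all names here are hypothetical):
public interface ITransport
{
    void Send(byte[] data);
}

public class NetworkTransport : ITransport
{
    public void Send(byte[] data) { /* network-specific details */ }
}

public class LocalTransport : ITransport
{
    public void Send(byte[] data) { /* local, non-network details */ }
}

// The consumer depends only on the abstraction and never needs to be
// cracked open when either transport implementation changes.
public class Uploader
{
    private readonly ITransport _transport;

    public Uploader(ITransport transport)
    {
        _transport = transport;
    }

    public void Upload(byte[] data)
    {
        _transport.Send(data);
    }
}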
In C#/VB.NET I would not say it's evil.
For example, when debugging Windows services, I put the following in Main so that when in Debug mode, I can run the service as an application.
<MTAThread()> _
<System.Diagnostics.DebuggerNonUserCode()> _
Shared Sub Main()
#If DEBUG Then
    'Starts this up as an application.
    Dim _service As New DispatchService()
    _service.ServiceStartupMethod(Nothing)
    System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite)
#Else
    'Runs as a service.
    Dim ServicesToRun() As System.ServiceProcess.ServiceBase
    ServicesToRun = New System.ServiceProcess.ServiceBase() {New DispatchService}
    System.ServiceProcess.ServiceBase.Run(ServicesToRun)
#End If
End Sub
This is configuring the behavior of the application, and is certainly not evil. At the very least, it's not as evil as trying to debug a service startup routine.
Please correct me if I read your OP wrong, but it seems that you are complaining about others using a preprocessor when a simple boolean would suffice. If that's the case, don't damn the preprocessor; damn those using it in such fashion.
EDIT:
Re: first comment. I don't get how that example ties in here. The problem is that the preprocessor is being misused, not that it is evil.
I'll give you another example. We have an application that does version checking between client and server on startup. In development, we often have different versions and don't want to do a version check. Is this evil?
I guess what I am trying to say is that the preprocessor is not evil, even when changing program behavior. The problem is that someone is misusing it. What is wrong with saying that? Why are you trying to dismiss a language feature?
Much later EDIT: FWIW, I haven't used preprocessor directives for this in a few years. I now use Environment.UserInteractive with a specific arg set ("-c" = Console), and a neat trick I picked up from somewhere here on SO to not block the application but still wait for user input.
Shared Sub Main(args() As String)
    If Environment.UserInteractive Then
        'Started from a console: run as an application.
        If args.Length > 0 AndAlso args(0).Equals("-c") Then 'usually a "DeriveCommand" function that returns an enum or something similar
            Dim isRunning As Boolean = True
            Dim _service As New DispatchService()
            _service.ServiceStartupMethod(Nothing)
            Console.WriteLine("Press ESC to stop running")
            While isRunning
                'Poll without blocking until ESC is pressed.
                While Not (Console.KeyAvailable AndAlso Console.ReadKey(True).Key = ConsoleKey.Escape)
                    System.Threading.Thread.Sleep(1)
                End While
                Console.WriteLine()
                Console.WriteLine("Press ESC to continue running or any other key to continue shutdown.")
                Dim key = Console.ReadKey()
                If key.Key = ConsoleKey.Escape Then
                    Console.WriteLine("Press ESC to load shutdown menu.")
                    Continue While
                End If
                isRunning = False
            End While
            _service.ServiceStopMethod()
        Else
            Throw New ArgumentException("Dont be a double clicker, this needs a console for reporting info and errors")
        End If
    Else
        'Runs as a service.
        Dim servicesToRun() As System.ServiceProcess.ServiceBase
        servicesToRun = New System.ServiceProcess.ServiceBase() {New DispatchService()}
        System.ServiceProcess.ServiceBase.Run(servicesToRun)
    End If
End Sub
Preprocessor code injection is to the compiler what triggers are to the database. And it's pretty easy to find such assertions about triggers.
I mainly think of #define being used to inline a short expression because it saves the overhead of a function call. In other words, it's premature optimization.
One quick point to tell your coworkers is this: the preprocessor breaks operator precedence in mathematical statements if symbols are used in such statements. For example, in C a macro like #define DOUBLE(x) x+x expands DOUBLE(1)*2 to 1+1*2, which evaluates to 3 rather than the expected 4.
I have no guru statement regarding the usage of preprocessor directives in mind and cannot add a reference to a famous one, but I want to give you a link to a simple sample found in Microsoft's MSDN.
#define A
#undef B

class C
{
#if A
    void F() {}
#else
    void G() {}
#endif
#if B
    void H() {}
#else
    void I() {}
#endif
}
This will result in the simple
class C
{
    void F() {}
    void I() {}
}
and I think it is not very easy to read, because you have to look at the top to see what exactly is defined at this point. It gets more complex if the symbol is defined elsewhere.
To me it looks much simpler to create different implementations and inject them into a caller instead of switching defines to create "new" class definitions (and because of this I understand why you compare the usage of preprocessor definitions with the usage of IoC). Besides the horrible readability of code using preprocessor instructions, I have rarely used preprocessor definitions because they increase the complexity of testing your code: they result in multiple code paths (but this is also a problem when multiple implementations are injected by an external IoC container).
Microsoft itself has used a lot of preprocessor definitions within the Win32 API, and you might know/remember the ugly switching between char and wchar_t method calls.
Maybe you should not say "Don't use it". Tell them "how to use it" and "when to use it" instead. I think everyone will agree with you if you come up with good (more understandable) alternatives and can describe the risks of using preprocessor defines/macros.
No need for a guru... just be a guru. ;-)
I wanted to ask a new question, but it looks like it fits here. I agree that having a full-blown preprocessor may be too much for Java. There is one clear need that is covered by the preprocessor in the C world and not covered at all in the Java world:
I want debug printouts to be completely ignored by the compiler depending on the debug level. Right now we rely on "good practice", but in practice this is hard to enforce, and some redundant CPU load remains.
In Java style, that could be solved by having some designated methods like debug(), warning(), etc., for which the calling code is generated conditionally.
Actually, that would amount to a bit of integration of Log4J into the language.
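For what it's worth, this is exactly what C#'s [Conditional] attribute provides; here is a minimal sketch (the DebugLog class name is made up for illustration):
using System;
using System.Diagnostics;

public static class DebugLog
{
    // When DEBUG is not defined at the call site, the compiler removes
    // calls to this method entirely, including the evaluation of their
    // arguments, so release builds carry no residual CPU load.
    [Conditional("DEBUG")]
    public static void Debug(string message)
    {
        Console.WriteLine(message);
    }
}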