I believe that the use of preprocessor directives like #if UsingNetwork is bad OO practice - my coworkers do not.
I think that when using an IoC container (e.g. Spring), components can be easily configured if programmed accordingly. In this context, either a property IsUsingNetwork can be set by the IoC container or, if the "using network" implementation behaves differently, another implementation of that interface should be written and injected (e.g. IService, ServiceImplementation, NetworkingServiceImplementation).
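For illustration, here is a minimal sketch of what I mean, using the interface names from my example above (C# for brevity; the Send method is just a placeholder and the actual wiring would be done by the container):

public interface IService
{
    void Send(string message);
}

// Plain implementation, used when no network is involved.
public class ServiceImplementation : IService
{
    public void Send(string message)
    {
        // handle the message locally
    }
}

// Networking variant; the IoC container decides which implementation gets injected.
public class NetworkingServiceImplementation : IService
{
    public void Send(string message)
    {
        // talk to the network here
    }
}

public class Consumer
{
    private readonly IService _service;

    // Constructor injection: no #if UsingNetwork anywhere in this class,
    // and both variants can be unit tested within the same build.
    public Consumer(IService service)
    {
        _service = service;
    }

    public void DoWork()
    {
        _service.Send("hello");
    }
}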
Can somebody please provide citations from OO gurus or references in books which basically read "Preprocessor usage is bad OO practice if you try to configure behaviour which should be configured via an IoC container"?
I need these citations to convince my coworkers to refactor...
Edit: I do know and agree that using preprocessor directives to change target-platform-specific code during compilation is fine and that is what preprocessor directives are made for. However, I think that runtime configuration should be used rather than compile-time configuration to get well-designed and testable classes and components. In other words: using #defines and #if's beyond what they are meant for will lead to code that is difficult to test and to badly designed classes.
Has anybody read something along these lines and can give me something I can refer to?
Henry Spencer wrote a paper called #ifdef Considered Harmful.
Also, Bjarne Stroustrup himself, in chapter 18 of his book The Design and Evolution of C++, frowns on the use of the preprocessor and wishes to eliminate it completely. However, Stroustrup also recognizes the necessity of the #ifdef directive and conditional compilation, and goes on to illustrate that there is no good alternative for it in C++.
Finally, Pete Goodliffe, in chapter 13 of his book Code Craft: The Practice of Writing Excellent Code, gives an example of how, even when used for its original purpose, #ifdef can make a mess of your code.
Hope this helps. However, if your co-workers won't listen to reasonable arguments in the first place, I doubt book quotes will help convince them ;)
Preprocessor directives in C# have very clearly defined and practical use cases. The ones you're specifically talking about, called conditional directives, are used to control which parts of the code are compiled and which aren't.
There is a very important difference between not compiling parts of code and controlling how your object graph is wired via IoC. Let's look at a real-world example: XNA. When you're developing XNA games that you plan to deploy on both Windows and Xbox 360, your solution will typically have at least two platforms that you can switch between in your IDE. There will be several differences between them, but one of those differences will be that the Xbox 360 platform will define a conditional symbol XBOX360 which you can use in your source code with the following idiom:
#if (XBOX360)
// some XBOX360-specific code here
#else
// some Windows-specific code here
#endif
You could, of course, factor out these differences using a Strategy design pattern and control via IoC which one gets instantiated, but the conditional compilation offers at least three major advantages:
You don't ship code you don't need.
You can see the differences between platform-specific code for both platforms in the rightful context of that code.
There's no indirection overhead. The appropriate code is compiled, the other isn't and that's it.
IMHO, you talk about C and C++, not about OO practice in general. And C is not Object-oriented. In both languages the preprocessor is actually useful. Just use it correctly.
I think this answer belongs to the C++ FAQ: [29.8] Are you saying that the preprocessor is evil?
Yes, that's exactly what I'm saying: the preprocessor is evil.
Every #define macro effectively creates a new keyword in every source file and every scope until that symbol is #undefd. The preprocessor lets you create a #define symbol that is always replaced independent of the {...} scope where that symbol appears.
Sometimes we need the preprocessor, such as the #ifndef/#define wrapper within each header file, but it should be avoided when you can. "Evil" doesn't mean "never use." You will use evil things sometimes, particularly when they are "the lesser of two evils." But they're still evil :-)
I hope this source is authoritative enough :-)
"The preprocessor is the incarnation of evil, and the cause of all pain on earth" -Robert (OO Guru)
IMO it is important to differentiate between #if and #define. Both can be useful and both can be overused. My experience is that #define is more likely to be overused than #if.
I spent 10+ years doing C and C++ programming. In the projects I worked on (commercially available software for DOS / Unix / Macintosh / Windows) we used #if and #define primarily to deal with code portability issues.
I spent enough time working with C++ / MFC to learn to detest #define when it is overused - which I believe to be the case in MFC circa 1996.
I then spent 7+ years working on Java projects. I cannot say that I missed the preprocessor (although I most certainly did miss things like enumerated types and templates / generics which Java did not have at the time).
I've been working in C# since 2003. We have made heavy use of #if and [Conditional("DEBUG")] for our debug builds - but #if is just a more convenient, and slightly more efficient way of doing the same things we did in Java.
Moving forward, we have started to prepare our core engine for Silverlight. While everything we are doing could be done without #if, it is less work with #if, which means we can spend more time adding features that our customers are asking for. For example, we have a value class which encapsulates a system color for storage in our core engine. Below are the first few lines of code. Because of the similarity between System.Drawing.Color and System.Windows.Media.Color, the #if and using alias at the top get us a lot of functionality in normal .NET and in Silverlight without duplicating code:
using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics;

#if SILVERLIGHT
using SystemColor = System.Windows.Media.Color;
#else
using SystemColor = System.Drawing.Color;
#endif

namespace SpreadsheetGear.Drawing
{
    /// <summary>
    /// Represents a Color in the SpreadsheetGear API and provides implicit conversion operators to and from System.Drawing.Color and / or System.Windows.Media.Color.
    /// </summary>
    public struct Color
    {
        public override string ToString()
        {
            //return string.Format("Color({0}, {1}, {2})", R, G, B);
            return _color.ToString();
        }

        public override bool Equals(object obj)
        {
            return (obj is Color && (this == (Color)obj))
                || (obj is SystemColor && (_color == (SystemColor)obj));
        }
        ...
The bottom line for me is that there are many language features which can be overused, but this is not a good enough reason to leave these features out or to make strict rules prohibiting their use. I must say that moving to C# after programming in Java for so long helps me to appreciate this because Microsoft (Anders Hejlsberg) has been more willing to provide features which might not appeal to a college professor, but which make me more productive in my job and ultimately enable me to build a better widget in the limited time anybody with a ship date has.
One problem with preprocessor #ifdef's is that each one effectively doubles the number of compiled versions that, in theory, you should test thoroughly so that you can say your delivered code is correct.
#ifdef DEBUG
//...
#else
//...
#endif
Ok, now I can produce the "Debug" version and the "Release" version. This is ok for me, I always do it, because I have assertions and debug traces which are only executed in the debug version.
If someone comes and writes (real life example)
#ifdef MANUALLY_MANAGED_MEMORY
...
And they write a pet optimization which they propagate to four or five different classes, then suddenly you have FOUR possible ways to compile your code.
If you have just one more piece of #ifdef-dependent code, you'll have EIGHT possible versions to generate and, what's more disturbing, FOUR of them will be possible release versions.
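To make the combinatorial growth concrete, here is a sketch with two made-up symbols (C# syntax for brevity; the same applies to #ifdef in C or C++):

static class MemoryPolicy
{
    public static string Describe()
    {
        // Two independent symbols already produce four distinct binaries,
        // each of which in principle needs its own testing.
#if MANUALLY_MANAGED_MEMORY && DEBUG
        return "manual memory management, debug build";   // variant 1
#elif MANUALLY_MANAGED_MEMORY
        return "manual memory management, release build"; // variant 2
#elif DEBUG
        return "garbage collected, debug build";          // variant 3
#else
        return "garbage collected, release build";        // variant 4
#endif
    }
}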
Of course runtime if()'s, like loops and whatever, create branches that you have to test - but I find it much more difficult to guarantee that every compile-time variation of the configuration remains correct.
This is the reason why I think, as a policy, all #ifdef's except the one for the Debug/Release version should be temporary, i.e. you're doing an experiment in development code and you'll decide, soon, whether it stays one way or the other.
Bjarne Stroustrup provides his answer (in general, not specific to IoC) here
So, what's wrong with using macros?
(excerpt)
Macros do not obey the C++ scope and type rules. This is often the cause of subtle and not-so-subtle problems. Consequently, C++ provides alternatives that fit better with the rest of C++, such as inline functions, templates, and namespaces.
The support for preprocessing in C# is minimal... verging on useless. Is that evil?
Does the preprocessor have anything to do with OO? Surely it's for build configuration.
For instance, I have a lite version and a pro version of my app. I might want to exclude some code in the lite version without having to resort to building very similar versions of the code.
I might not want to ship a lite version which is the pro version with different runtime flags.
Tony
Using a #if instead of an IoC container or some other mechanism for controlling different functionality based on configuration is probably a violation of the Single Responsibility Principle, which is key to 'modern' OO design. Here is an extensive series of articles about OO design principles.
Since the parts in the different sections of the #if by definition concern themselves with different aspects of the system, you are now coupling the implementation details of at least two different components into the dependency chain of your code that uses the #if.
By refactoring those concerns out, you have created a class that, assuming it is finished and tested, will no longer need to be cracked open unless the common code is broken.
In your original case, you'll need to remember the existence of the #if and take it into account any time any of the three components change with respect to possible side-effects of a breaking change.
In C# / VB.NET I would not say it's evil.
For example, when debugging Windows services, I put the following in Main so that when in Debug mode, I can run the service as an application.
<MTAThread()> _
<System.Diagnostics.DebuggerNonUserCode()> _
Shared Sub Main()
#If DEBUG Then
    'Starts this up as an application.
    Dim _service As New DispatchService()
    _service.ServiceStartupMethod(Nothing)
    System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite)
#Else
    'Runs as a service.
    Dim ServicesToRun() As System.ServiceProcess.ServiceBase
    ServicesToRun = New System.ServiceProcess.ServiceBase() {New DispatchService}
    System.ServiceProcess.ServiceBase.Run(ServicesToRun)
#End If
End Sub
This is configuring the behavior of the application, and is certainly not evil. At the very least, it's not as evil as trying to debug a service startup routine.
Please correct me if I read your OP wrong, but it seems that you are complaining about others using a preprocessor when a simple boolean would suffice. If that's the case, don't damn the preprocessor; damn those using it in such a fashion.
EDIT:
Re: first comment. I don't get how that example ties in here. The problem is that the preprocessor is being misused, not that it is evil.
I'll give you another example. We have an application that does version checking between client and server on startup. In development, we often have different versions and don't want to do a version check. Is this evil?
I guess what I am trying to say is that the preprocessor is not evil, even when changing program behavior. The problem is that someone is misusing it. What is wrong with saying that? Why are you trying to dismiss a language feature?
Much later EDIT: FWIW, I haven't used preprocessor directives for this in a few years. I do use Environment.UserInteractive with a specific arg set ("-c" = Console), and a neat trick I picked up from somewhere here on SO to not block the application but still wait for user input.
Shared Sub Main(args As String())
    If Environment.UserInteractive Then
        'Starts this up as an application.
        If args.Length > 0 AndAlso args(0).Equals("-c") Then 'usually a "DeriveCommand" function that returns an enum or something similar
            Dim isRunning As Boolean = True
            Dim _service As New DispatchService()
            _service.ServiceStartupMethod(Nothing)
            Console.WriteLine("Press ESC to stop running")
            While isRunning
                'Wait without blocking until ESC (or R) is pressed; read each key only once.
                Dim pressed As ConsoleKey
                Do
                    While Not Console.KeyAvailable
                        System.Threading.Thread.Sleep(1)
                    End While
                    pressed = Console.ReadKey(True).Key
                Loop Until pressed = ConsoleKey.Escape OrElse pressed = ConsoleKey.R
                Console.WriteLine()
                Console.WriteLine("Press ESC to continue running or any other key to continue shutdown.")
                Dim key = Console.ReadKey()
                If key.Key = ConsoleKey.Escape Then
                    Console.WriteLine("Press ESC to load shutdown menu.")
                    Continue While
                End If
                isRunning = False
            End While
            _service.ServiceStopMethod()
        Else
            Throw New ArgumentException("Don't be a double clicker, this needs a console for reporting info and errors")
        End If
    Else
        'Runs as a service.
        Dim ServicesToRun() As System.ServiceProcess.ServiceBase
        ServicesToRun = New System.ServiceProcess.ServiceBase() {New DispatchService}
        System.ServiceProcess.ServiceBase.Run(ServicesToRun)
    End If
End Sub
Preprocessor code injection is to the compiler what triggers are to the database. And it's pretty easy to find such assertions about triggers.
I mainly think of #define being used to inline a short expression because it saves the overhead of a function call. In other words, it's premature optimization.
One quick point to tell your coworkers: macro expansion can break operator precedence in mathematical expressions if the macro's arguments or replacement text are not fully parenthesized.
I have no guru statement regarding the usage of preprocessor directives in mind and cannot add a reference to a famous one. But I want to point you to a simple sample found on Microsoft's MSDN.
#define A
#undef B

class C
{
#if A
    void F() {}
#else
    void G() {}
#endif
#if B
    void H() {}
#else
    void I() {}
#endif
}
This will result in the simple
class C
{
    void F() {}
    void I() {}
}
and I think it is not very easy to read, because you have to look at the top to see what exactly is defined at that point. This gets more complex if the symbols are defined somewhere else.
To me it looks much simpler to create different implementations and inject them into a caller instead of switching defines to create "new" class definitions (and because of this I understand why you compare the usage of preprocessor definitions with the usage of IoC). Besides the poor readability of code using preprocessor instructions, I have rarely used preprocessor definitions because they increase the complexity of testing your code, since they result in multiple paths (though this is also a problem when multiple implementations are injected by an external IoC container).
Microsoft itself has used a lot of preprocessor definitions within the Win32 API, and you might know/remember the ugly switching between the char and wchar_t versions of method calls.
Maybe you should not say "Don't use it". Tell them "how to use it" and "when to use it" instead. I think everyone will agree with you if you come up with good (better understandable) alternatives and can describe the risks of using preprocessor defines/macros.
No need for a guru... just be a guru. ;-)
I wanted to ask a new question, but it looks like it fits here. I agree that having a full-blown preprocessor may be too much for Java. There is one clear need that is covered by the preprocessor in the C world and not covered at all in the Java world:
I want debug printouts to be completely ignored by the compiler depending on the debug level. Now we rely on "good practice", but in practice this is hard to enforce and some redundant CPU load remains.
In Java style, that could be solved by having some designated methods like debug(), warning(), etc., for which call code is generated conditionally.
Actually, that would amount to a bit of integration of Log4J into the language.
Related
Inspired by this article, I asked myself how I can emit compiler errors, or at least stop the build, if a feature is not implemented.
The only quick solution I came up with, is by creating a custom attribute:
public class NoReleaseAttribute : System.Attribute
{
    public NoReleaseAttribute()
    {
#if RELEASE
        .
#endif
    }
}
The idea is to have a syntactic error somewhere, but only for Release. I used an attribute because the IDE will help me quickly find the references to the attribute, and thus all the places that I marked as needing to be completed or fixed before going into production.
I don't like this solution because I want to emit a compiler error in each place that needs attention, not a single global error elsewhere.
Perhaps there is a more fitted solution.
You may use the #error directive in your code; it will cause an error at compile time.
Example usage:
static void Main(string[] args)
{
    DoSomeStuff();
}

#error Needs to be implemented
static void DoSomeStuff()
{
    //This needs some work
}
There is no better-fitted solution. For your goal you can use:
a) ObsoleteAttribute (may not fit your goal)
b) throw NotImplementedException in every place that you need to:
#if RELEASE
throw new NotImplementedException();
#endif
c) Wrap throwing NotImplementedException in an attribute as you described.
Basically, it is unnecessary to add code that will not be used anywhere - Visual Studio even marks such code with 0 usages, and it will not be included in the CIL. So you have to ask yourself - do I really need useless code to exist in the project? It is better to track unreleased features in a tracking system like YouTrack than to search for them in your code.
I would suggest using a build step that checks the code for TODO comments and generates warnings for them (a rough sketch of such a check follows below). A quick google suggests the Warn About TODOs extension might do just that, but it is not something I have used. You should also configure your build to fail on most warnings. A possible risk here is that people will just avoid using TODO comments, since they will fail the build.
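Here is a rough sketch of what such a check could look like, assuming a small console tool run as a build step (the TODO001 warning code and the file layout are only placeholders):

using System;
using System.IO;

class TodoChecker
{
    static int Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        int count = 0;
        foreach (string file in Directory.EnumerateFiles(root, "*.cs", SearchOption.AllDirectories))
        {
            string[] lines = File.ReadAllLines(file);
            for (int i = 0; i < lines.Length; i++)
            {
                if (lines[i].Contains("TODO"))
                {
                    // "file(line): warning code: message" is a format MSBuild and
                    // Visual Studio typically recognize as a warning in build output.
                    Console.WriteLine("{0}({1}): warning TODO001: unresolved TODO comment", file, i + 1);
                    count++;
                }
            }
        }
        // Return non-zero to fail the build if that is the policy you want.
        return count > 0 ? 1 : 0;
    }
}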
But needing such a check suggests you do not have sufficient testing of your code. I find it a good practice to at least do some level of testing before committing any code, and just about any testing should reveal if entire components are not implemented. Reviewing your own code before committing is another practice that could be useful.
Automated unit testing can be a great way to deal with things like this. If you write unit tests for all your components the tests should find things like this, especially if you throw NotImplementedException in placeholders. This might require you to write your tests before the components, but there are certainly arguments for such a development approach.
The next line of defense should be code review. Even a quick review should find methods that are not implemented.
The final line of defense should be independent testing. If someone else tests the new features against the specification, any significant missing functionality should be found.
This is a general software design question.
Is it a good idea to use the #if compiler directive to support multiple platforms? For example, I have three files:
IScreen.cs
public interface IScreen {
...
}
ScreenWin.cs
#if WIN
public class Screen : IScreen {
...
}
#endif
ScreenMac.cs
#if MAC
public class Screen : IScreen {
...
}
#endif
What seems cool about this is that I can use BDD and have something like:
Given a Screen exists with width: 1920 and height: 1080
and the correct Screen would be used based on the compiler directive.
This seems like you would also get better performance than trying to use an abstract factory.
What are some drawbacks?
As @Oded has answered, the downside of using compile-time directives is that you have to compile a specific binary for each platform. The upside is that you don't have a bloated one-size-fits-all application that is larger and slower on every platform just because it is platform-independent.
Another down-side of #if is that a lot of Visual Studio tools only consider the "currently active" code. For example, intellisense and refactoring tools - if you wished to rename IScreen into IDisplay, then you'd find refactoring worked and all your PC code was updated perfectly but all your Mac code would be badly broken as the refactoring tools simply wouldn't "see" the other code branch. Similarly, it's easy to make a change to the code on the PC branch and forget to make the corresponding change to the Mac branch, again breaking it.
The benefit of a factory instead of "#if" is that a single binary image can potentially be run on every platform, and you only need to check the host platform once when creating the concrete implementation (so it's still very efficient at runtime, but will be larger as you have to ship all the platform-variants within one distribution)
However, it is possible to get the best of both worlds by using a more modular design pattern instead of #if: use a satellite assembly for the platform-specific implementations. In this approach, the main application code would specify the IScreen interface, and then you would have concrete Screen (Mac) and Screen (PC) classes in separate platform-specific variants of a ScreenImplementation.dll assembly. You can also ship a single installer that covers all platforms (by simply deploying the appropriate satellite dll for the host platform). This gives you the marginal efficiency/size improvements of using #if but without actually having to mess up your code with conditionals. (Of course, this modular approach makes sense with both the #if and factory designs - they become more of a deployment option rather than the core implementation architecture once you split the platform-specific implementations into separate assemblies.)
Drawbacks include having to compile multiple binaries - one per platform.
If not careful, you may end up cluttering your code with preprocessor directives in many parts of your code. See this SO question about reversing one such codebase.
If you can detect platform within code, it is a better approach.
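For comparison, a rough sketch of the runtime-detection alternative (IScreen is from the question; the WinScreen/MacScreen classes and the factory are made up, and the platform check is deliberately simplistic):

using System;

public interface IScreen
{
    int Width { get; }
    int Height { get; }
}

// The platform-specific classes would normally live in their own files or satellite assemblies.
public class WinScreen : IScreen
{
    public int Width { get { return 1920; } }
    public int Height { get { return 1080; } }
}

public class MacScreen : IScreen
{
    public int Width { get { return 2560; } }
    public int Height { get { return 1600; } }
}

public static class ScreenFactory
{
    // The host platform is checked once, at creation time, so the per-call overhead
    // compared with the #if approach is negligible. Note that some runtimes report
    // macOS as Unix, so a real check would need to be more robust than this.
    public static IScreen Create()
    {
        switch (Environment.OSVersion.Platform)
        {
            case PlatformID.MacOSX:
                return new MacScreen();
            default:
                return new WinScreen();
        }
    }
}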
I have a pretty common scenario, namely a self-implemented ILogger interface. It contains several methods like _logger.Debug("Some stuff") and so on. The implementation is provided by a LoggingService and used in classes the normal way.
Now I have a question regarding performance, I am writing for Windows Phone 7, and because of the limited power of these devices, little things may matter.
I do not want to:
Include a preprocessor directive on each line, like #if DEBUG
Use a condition like log4net's, e.g. _logger.DebugEnabled
The way I see it, in the release version, I just return NullLoggers, which contain an empty implementation of the interface, doing nothing.
The question is: does the compiler recognize such things? (That may be hard - it can't know at compile time which logger I assign.) Is there any way to give .NET a hint for that?
The reason for my question: I know entering an empty function will not cause a big delay, no problem there. But there are a lot of strings in the source code of my application, and if they are never used, they do not really need to be part of my application...
Or am I overthinking a tiny problem (perhaps the "string to code" ratio just looks awful in my code editor, and it's no big deal anyway)?
Thanks for tips,
Chris
Use the Conditional attribute:
[Conditional("DEBUG")]
public void Debug(string message) { /* ... */ }
The compiler will remove all calls to this method for any build configurations that don't match the string in the conditional attribute. Note that this attribute is applied to the method not the call site. Also note that it is the call site instruction that is removed, not the method itself.
It is probably a very small concern to have logging code in your application that does not "run". The overhead of the "null" logger or conditionals is likely to be very small in the scheme of things. The strings will incur memory overhead which could be worrying for a constrained device, but as it is WP7 the minimum specs are not that constrained in reality.
I understand that logging code looks fugly though. :)
If you really want to strip that logging code out...
In .Net you can use the ConditionalAttribute to mark methods for conditional compilation. You could leverage this feature to ensure that all logging calls are removed from compilation for specified build configurations. As long as methods that you have decorated with the conditional attributes follows a few rules, the compiler will literally strip the call chain out.
However, if you wanted to use this approach then you would have to forgo your interface design as the conditional attribute cannot be applied to interface members, and you cannot implement interfaces with conditional members.
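For illustration, a minimal sketch of that approach, assuming a hypothetical static Log class in place of the ILogger interface:

using System;
using System.Diagnostics;

static class Log
{
    // Calls to this method (including evaluation of their arguments at the call site)
    // are stripped by the compiler for build configurations where DEBUG is not defined.
    [Conditional("DEBUG")]
    public static void Debug(string message)
    {
        Console.WriteLine("[DEBUG] " + message);
    }
}

class Worker
{
    public void DoWork()
    {
        // In a release build this call is removed entirely, so neither the string
        // concatenation nor the DateTime.Now call happens at runtime.
        Log.Debug("Starting work at " + DateTime.Now);
    }
}

The obvious trade-off, as noted above, is that you give up the interface-based design and the ability to swap logger implementations at runtime.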
For C# developers that are starting out to learn Java, are there any big underlying differences between the two languages that should be pointed out?
Maybe some people assume things to be the same, but there are some important aspects that shouldn't be overlooked (or you can really screw up!).
Maybe in terms of OOP constructs, the way GC works, references, deployment related, etc.
A few gotchas off the top of my head:
Java doesn't have custom value types (structs) so don't bother looking for them
Java enums are very different to the "named numbers" approach of C#; they're more OO. They can be used to great effect, if you're careful.
byte is signed in Java (unfortunately)
In C#, instance variable initializers run before the base class constructor does; in Java they run after it does (i.e. just before the constructor body in "this" class) - see the sketch after this list
In C# methods are sealed by default. In Java they're virtual by default.
The default access modifier in C# is always "the most restrictive access available in the current context"; in Java it's "package" access. (It's worth reading up on the particular access modifiers in Java.)
Nested types in Java and C# work somewhat differently; in particular they have different access restrictions, and unless you declare the nested type to be static it will have an implicit reference to an instance of the containing class.
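A small sketch of the initializer-order point above (C# shown; the class names are made up, and the Java output is what the equivalent Java code would print):

using System;

class Base
{
    public Base() { Console.WriteLine("Base constructor"); }
}

class Derived : Base
{
    // In C#, this field initializer runs before the Base constructor body;
    // in Java, the equivalent initializer would run after super().
    private readonly string field = Init();

    private static string Init()
    {
        Console.WriteLine("Field initializer");
        return "x";
    }

    public Derived() { Console.WriteLine("Derived constructor, field = " + field); }
}

class Program
{
    static void Main()
    {
        new Derived();
        // C# output:   Field initializer, Base constructor, Derived constructor
        // Java output: Base constructor, Field initializer, Derived constructor
    }
}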
Here is a very comprehensive comparison of the two languages:
http://www.25hoursaday.com/CsharpVsJava.html
Added: http://en.wikipedia.org/wiki/Comparison_of_Java_and_C_Sharp
I am surprised that no one has mentioned properties, something quite fundamental in C# but absent in Java. C# 3 and above has automatically implemented properties as well. In Java you have to use getX/setX type methods.
Another obvious difference is LINQ and lambda expressions, present in C# 3 but absent in Java.
There are a few other simple but useful things missing from Java as well, like verbatim strings (@""), operator overloading, iterators using yield, and the preprocessor.
One of my personal favourites in C# is that namespace names don't have to follow the physical directory structure. I really like this flexibility.
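To illustrate the first two points from the answer above, a small snippet (the Person class is made up):

using System;
using System.Collections.Generic;
using System.Linq;

class Person
{
    // Auto-implemented properties: no getName()/setName() boilerplate as in Java.
    public string Name { get; set; }
    public int Age { get; set; }
}

class Demo
{
    static void Main()
    {
        var people = new List<Person>
        {
            new Person { Name = "Ann", Age = 34 },
            new Person { Name = "Bob", Age = 27 }
        };

        // LINQ plus a lambda expression - neither existed in Java at the time this was written.
        var names = people.Where(p => p.Age > 30).Select(p => p.Name);
        Console.WriteLine(string.Join(", ", names)); // prints "Ann"
    }
}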
There are a lot of differences, but these come to mind for me:
Lack of operator overloading in Java. Watch your instance.Equals(instance2) versus instance == instance2 (especially w/strings).
Get used to interfaces NOT being prefixed with an I. Often you see namespaces or classes suffixed with Impl instead.
Double checked locking doesn't work because of the Java memory model.
You can import static methods without prefixing them with the class name, which is very useful in certain cases (DSLs).
Switch statements in Java don't require a default, and you can't use strings as case labels (IIRC).
Java generics will anger you. Java generics don't exist at runtime (at least in 1.5), they're a compiler trick, which causes problems if you want to do reflection on the generic types.
.NET has reified generics; Java has erased generics.
The difference is this: if you have an ArrayList<String> object, in .NET, you can tell (at runtime) that the object has type ArrayList<String>, whereas in Java, at runtime, the object is of type ArrayList; the String part is lost. If you put in non-String objects into the ArrayList, the system can't enforce that, and you'll only know about it after you try to extract the item out, and the cast fails.
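A quick way to see the difference from the C# side (illustrative only):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        object list = new List<string>();

        // In .NET the type argument survives at runtime ("reified" generics):
        Console.WriteLine(list.GetType());       // System.Collections.Generic.List`1[System.String]
        Console.WriteLine(list is List<string>); // True
        Console.WriteLine(list is List<int>);    // False

        // In Java, the corresponding check "list instanceof ArrayList<String>" does not
        // even compile, because the String part has been erased by runtime.
    }
}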
One thing I miss in C# coming from Java is the forced handling of checked exceptions. In C# it is far too common that one is unaware of the exceptions a method may throw, and you're at the mercy of the documentation or testing to discover them. Not so in Java with checked exceptions.
Java has autoboxing for primitives rather than value types, so although System.Int32[] is an array of values in C#, Integer[] is an array of references to Integer objects, and as such not suitable for higher performance calculations.
No delegates or events - you have to use interfaces. Fortunately, you can create classes and interface implementations inline, so this isn't such a big deal
The built-in date/calendar functionality in Java is horrible compared to System.DateTime. There is a lot of info about this here: What's wrong with Java Date & Time API?
Some of these can be gotchas for a C# developer:
The Java Date class is mutable which can make returning and passing dates around dangerous.
Most of the java.util.Date constructors are deprecated. Simply instantiating a date is pretty verbose.
I have never gotten the java.util.Date class to interoperate well with web services. In most cases the dates on either side were wildly transformed into some other date & time.
Additionally, Java doesn't have all the same features that the GAC and strongly-named assemblies bring. Jar Hell is the term for what can go wrong when linking/referencing external libraries.
As far as packaging/deployment is concerned:
it can be difficult to package up web applications in an EAR/WAR format that actually install and run in several different application servers (Glassfish, Websphere, etc).
deploying your Java app as a Windows service takes a lot more effort than in C#. Most of the recommendations I got for this involved a non-free 3rd party library
application configuration isn't nearly as easy as including an app.config file in your project. There is a java.util.Properties class, but it isn't as robust and finding the right spot to drop your .properties file can be confusing
There are no delegates in Java. Therefore, aside from all the benefits that delegates bring to the table, events work differently too. Instead of just hooking up a method, you need to implement an interface and attach that instead.
One thing that jumps out (because it's on my interview list) is that Java has no analogue of C#'s "new" keyword for method hiding, and therefore no compiler warning telling you "you should put new here" - because instance methods in Java are always virtual. The difference between hiding and overriding is a classic source of bugs when moving between the two languages.
(edit for example)
Example: B derives from A (C# syntax below). Does A's foo get called, or B's foo? In C#, where foo is non-virtual, A's foo gets called because B's foo merely hides it - probably surprising the dev who implemented B. In Java, where instance methods are virtual by default, the equivalent code calls B's foo.
using System;

class A
{
    public void foo() { Console.WriteLine("A.foo"); }
}

class B : A
{
    // No "override" is possible here (A.foo isn't virtual); this declaration hides A.foo,
    // and the C# compiler warns that you should add the "new" keyword.
    public void foo() { Console.WriteLine("B.foo"); }
}

class Program
{
    static void Main()
    {
        A a = new B(); // variable's type is declared as A, but assigned to an object of B.
        a.foo();       // prints "A.foo" in C#; the equivalent Java code prints "B.foo".
    }
}
Java doesn't have LINQ, and the documentation is hell. User interfaces in Java are a pain to develop; you lose all the good things Microsoft gave us (WPF, WCF, etc.) and instead get hard-to-use, hardly documented APIs.
The most annoying difference to me when I switch to Java is the string declaration:
in C#: string (most of the time)
in Java: String
It's pretty simple, but trust me, it makes you lose a lot of time when you're in the habit of typing s rather than S!
The one issue I've run into so far when working with Java coming from C# is that Exceptions and Errors are different.
For example you cannot catch an out of memory error using catch(Exception e).
See the following for more details:
why-is-java-lang-outofmemoryerror-java-heap-space-not-caught
It's been so long since I've been in Java, but the things I noticed right off the bat in application development were the C# event model, C# drag-and-drop versus using layout managers in Swing (if you're doing app development), and exception handling - Java makes sure you catch checked exceptions, while C# does not require it.
In response to your very direct question in your title:
"C# developers learning Java, what are the biggest differences one may overlook?"
A: The fact that Java is considerably slower on Windows.
I'm wondering if there are any reasons (apart from tidying up source code) why developers use the "Remove Unused Usings" feature in Visual Studio 2008?
There are a few reasons you'd want to take them out.
It's pointless. They add no value.
It's confusing. What is being used from that namespace?
If you don't, then you'll gradually accumulate pointless using statements as your code changes over time.
Static analysis is slower.
Code compilation is slower.
On the other hand, there aren't many reasons to leave them in. I suppose you save yourself the effort of having to delete them. But if you're that lazy, you've got bigger problems!
I would say quite the contrary - it's extremely helpful to remove unneeded using statements.
Imagine you have to go back to your code in 3, 6, 9 months - or someone else has to take over your code and maintain it.
If you have a huge laundry list of using statements that aren't really needed, looking at the code can be quite confusing. Why is that using in there, if nothing is used from that namespace??
I guess in terms of long-term maintainability in a professional environment, I'd strongly suggest to keep your code as clean as possible - and that includes dumping unnecessary stuff from it. Less clutter equals less confusion and thus higher maintainability.
Marc
In addition to the reasons already given, it prevents unnecessary naming conflicts. Consider this file:
using System.IO;
using System.Windows.Shapes;

namespace LicenseTester
{
    public static class Example
    {
        private static string temporaryPath = Path.GetTempFileName();
    }
}
This code doesn't compile because both the namespaces System.IO and System.Windows.Shapes each contain a class called Path. We could fix it by using the full class path,
private static string temporaryPath = System.IO.Path.GetTempFileName();
or we could simply remove the line using System.Windows.Shapes;.
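A third option, for completeness, is a using alias directive, which keeps the short name while pulling in only the one type you actually need:

using Path = System.IO.Path; // alias only the type we want; the alias wins over namespace imports
using System.Windows.Shapes;

namespace LicenseTester
{
    public static class Example
    {
        private static string temporaryPath = Path.GetTempFileName();
    }
}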
This seems to me to be a very sensible question, which is being treated in quite a flippant way by the people responding.
I'd say that any change to source code needs to be justified. These changes can have hidden costs, and the person posing the question wanted to be made aware of this. They didn't ask to be called "lazy", as one person intimated.
I have just started using ReSharper, and it is starting to give warnings and style hints on the project I am responsible for. Amongst them is the removal of redundant using directives, but also redundant qualifiers, capitalisation and many more. My gut instinct is to tidy the code and resolve all hints, but my business head warns me against unjustified changes.
We use an automated build process, and therefore any change to our SVN repository would generate changes that we couldn't link to projects/bugs/issues, and would trigger automated builds and releases which delivered no functional change to previous versions.
If we look at the removal of redundant qualifiers, this could possibly cause confusion to developers as classes in our Domain and Data layers are only differentiated by the qualifiers.
If I look at the proper capitalisation of acronyms (e.g. ABCD -> Abcd), then I have to take into account that ReSharper doesn't refactor any of the XML files we use that reference class names.
So, following these hints is not as straightforward as it appears, and they should be treated with respect.
Fewer options in the IntelliSense popup (particularly if the namespaces contain lots of extension methods).
Theoretically IntelliSense should be faster too.
Remove them. Less code to look at and wonder about saves time and confusion. I wish more people would KEEP THINGS SIMPLE, NEAT and TIDY.
It's like having dirty shirts and pants in your room. It's ugly and you have to wonder why it's there.
Code compiles quicker.
Recently I got another reason why deleting unused imports is quite helpful and important.
Imagine you have two assemblies, where one references the other (let's call the first one A and the referenced one B). Now when you have code in A that depends on B, everything is fine. However, at some stage in your development process you notice that you don't actually need that code any more, but you leave the using statement where it was. Now you not only have a meaningless using directive but also an assembly reference to B which is not used anywhere except in the obsolete directive. This firstly increases the amount of time needed for compiling A, as B has to be loaded as well.
So this is not only an issue of cleaner and easier-to-read code, but also of maintaining assembly references in production code, where not all of those referenced assemblies even need to exist.
Finally, in our example we had to ship B and A together although B is not used anywhere in A except in the using section. This will also affect the runtime performance of A when the assembly is loaded.
It also helps prevent false circular dependencies, assuming you are also able to remove some dll/project references from your project after removing the unused usings.
At least in theory, if you were given a C# .cs file (or any single program source code file), you should be able to look at the code and create an environment that simulates everything it needs. With some compiling/parsing technique, you may even create a tool to do it automatically. If you do this, at least in your mind, you can ensure you understand everything that code file says.
Now consider: if you were given a .cs file with 1000 using directives, of which only 10 were actually used, then whenever you look at a newly introduced symbol that references the outside world, you would have to go through those 1000 lines to figure out what it is. This obviously slows down the above procedure. So if you can reduce them to 10, it will help!
In my opinion, the C# using directive is very weak, since you cannot alias a single generic type without its genericity being lost, and you cannot use a using alias directive to bring extension methods into scope. This is not the case in other languages like Java, Python and Haskell; in those languages you are able to specify (almost) exactly what you want from the outside world. But even then, I would suggest using a using alias whenever possible.