I want to separate platform-independent logic of my C# program into a shared project. Now I would like to hide repositories, service classes and such from my platform-specific projects. What access modifier can I use? internal doesn't seem to work, as they are compiled into the same executable (I think) and I don't want to go tag all my classes with InternalsVisibleToAttribute.
Is there a way to make classes in my shared project invisible to my platform-specific code?
There's only one place where you need to know the real type you're trying to instantiate - the platform provider. Everyone else should just use the interfaces that are platform-invariant.
All the platform-specific implementations can then be private or internal for all you care - you just need to ensure the provider has access. Your application will use the platform-specific provider to get the platform-specific instances, while only ever using the platform-invariant interfaces.
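A minimal sketch of that arrangement (all type names here are invented for illustration):

// In the shared/contract assembly: the platform-invariant interface.
public interface IFileStore
{
    void Save(string name, byte[] data);
}

// In the platform-specific assembly: the implementation stays internal.
internal sealed class WindowsFileStore : IFileStore
{
    public void Save(string name, byte[] data) { /* Windows-specific I/O */ }
}

// The provider is the only public type that knows the real class.
public static class WindowsPlatformProvider
{
    public static IFileStore CreateFileStore()
    {
        return new WindowsFileStore();
    }
}

The application asks the provider for an IFileStore and never sees WindowsFileStore.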
As for "being compiled into a single executable", that's not really important. Most likely you care entirely about compile-time checking, and that's still present regardless of how the final executable is packaged. There's some restrictions on reflection in a partial trust environment, but by that point you shouldn't care - you're only in it for the compile checks, not the runtime safety.
No, there is no such feature in C#. If marking your shared assembly with InternalsVisibleToAttribute for every other project counts as an option, that would do the trick.
If possible, you could split off those other files (repositories, service files) to another assembly, which is not included in your shared project.
I joined a new project where they use C#.
I noticed that several DLLs were added in the references.
From my knowledge and the e-learning that I have done, after building a class (which has some methods and data), a DLL is generated.
Now, in a new project, the class that just got compiled into a DLL is added as a reference so that the functions defined in it can be called.
So, now my question is:
1) What is the need for converting the class file into a DLL file? Even if it were a class file, I could still call the functions defined in it by adding its namespace at the top of the code.
2) If, after adding the reference to the DLL, I deleted the entire contents of the project, leaving only the DLL untouched (and in the same place), would the class using this DLL still work?
Separating your code into different projects (each of which will create a separate assembly) has various benefits:
It makes the structure of your code clear. For example, it can separate your storage layer from your business logic, and also from your user interface.
It allows reuse: two different user interfaces can refer to the same assembly containing the business logic, for example.
It allows greater encapsulation: classes which are only needed within their own assemblies can be declared as internal (which is the default for top-level classes in C# anyway) which means code in other assemblies won't even know about them. If all your code is in a single assembly, all those classes will "know about" each other.
Now choosing just how many projects to have is a balancing act - I've certainly seen applications where this has gone much too far, with lots of assemblies containing just a single class. If you have a large number of assemblies, that becomes a headache in terms of project and reference management. However, having too few assemblies makes it harder to reuse that code cleanly.
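As a toy illustration of the encapsulation point (names invented): a UI project that references the business-logic assembly can call InvoiceService, but cannot even name TaxCalculator.

// In the BusinessLogic project (compiled to BusinessLogic.dll):
namespace BusinessLogic
{
    public class InvoiceService          // usable from any referencing assembly
    {
        public decimal TotalWithTax(decimal net)
        {
            return new TaxCalculator().Apply(net);
        }
    }

    internal class TaxCalculator         // invisible outside BusinessLogic.dll
    {
        public decimal Apply(decimal net)
        {
            return net * 1.2m;           // placeholder rate
        }
    }
}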
In addition to Jon Skeet's answer, I'd like to add "updatability" as well. For me, this has two benefits:
one is that the build time becomes smaller if only one project needs to be rebuilt,
and second, pushing a release could be limited to a few DLLs instead of one big .exe.
The first might not be a big deal in C#, since projects build pretty fast, but in C++ the impact would be big, since C++ code takes a long time to compile.
The benefit of separating is that it lets you change the internal implementation without breaking client code. It doesn't protect you if you decide that you need to change the interface to your code, but that's a different matter.
They can reuse their code. But if they shared plain class files instead, they would need to pull those classes into every project (at best by copying and pasting all the code).
When they use DLLs instead of class files, they can update all projects easily by updating one or more DLLs, whereas if you share a class across multiple projects you have to modify that class in every project.
I might add that a class is a language construct while an assembly is a deployment package.
Already in UML those are two totally different things.
http://en.wikipedia.org/wiki/Package_(UML)
When approaching the new idea of subdividing a solution, projects may be seen as "places" in which to put namespaces (i.e. folders) and classes (i.e. files).
It will take some time until you realize that a project best fits the concept of a stratum (or layer), which is an architectural separation of a system.
When stratifying a system, you'll realize that the most crucial problem to tackle are the dependencies between strata (which would be the references to projects or dlls).
There cannot be loops, but more importantly, you should study the Open-Closed Principle (OCP), the Interface Segregation Principle (ISP) and the Dependency Inversion Principle (DIP) from SOLID:
http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
At that point a new question will emerge: how can you know which classes depend on each other and which do not? You may draw class diagrams, but there is a conceptual approach to the problem. Over the years it becomes a "practice" of designing systems. The concepts are described for educational purposes in GRASP:
http://en.wikipedia.org/wiki/GRASP_(object-oriented_design)
The most important parts of GRASP for stratification are "Low Coupling" and "High Cohesion". In other words, you should group functionally similar classes in one stratum and use the stratification to separate classes that are not closely related functionally.
I'm writing an app which plays host to a series of plug-ins. Those plug-ins generally use two libraries .Common and .UI which contain the interfaces that the plug-ins need to implement etc.
I am now at the point where I'm adding the capability for plug-ins to be subject to licensing. I have modified my host application such that it will only load plug-ins which define an interface instance (ILicenseInfoProvider) and export it through MEF. That bit is fine.
We have a selected provider of licensing code, and their licensing system involves use of a library. Now, I don't want to force each plug-in to be licensed through that system and, by extension, require a reference to that system's assembly. So, I am planning on putting the code that references the third-party library in its own assembly (something like .Licensing.Vendor). This way plug-ins can simply add a reference to that assembly and include a class that looks somewhat like this:
[Export(typeof(ILicenseInfoProvider))]
class MyAssemblyLicenseInfoProvider : BaseVendorLicenseInfoProvider
{
    public MyAssemblyLicenseInfoProvider() : base("My Assembly's Product Name")
    {
    }
}
I'm reasonably happy with that set-up, apart from one niggling thing - which is that the .Licensing.Vendor assembly will only contain a single class, which is the BaseVendorLicenseInfoProvider relating to the specific licensing system in use.
So, after all that, my question is pretty simple:
Does it seem overkill to put that class in its own assembly, or is the benefit of not forcing all plug-ins to hold a reference to the third-party library worth it?
At the moment there's a suitable purpose for the assembly - a publicly visible assembly for third parties to provide a means to interact via licensing. Seems perfectly reasonable to me:
even if there is only the one class currently, there may be more in the future
it's publicly visible, so you want to provide only that which is necessary
it encapsulates a reasonable level of responsibility, namely licensing, without forcing specific implementations
I vote no, it's not overkill - some plugins may not need a license, some may.
It depends on what you are trying to achieve. Assemblies are a way of physically separating code whereas namespaces are a way of logically separating code.
Given that there can be a slight performance hit of loading too many assemblies (by which I mean a significant number, not just a few) then I suppose you could consider if it is possible to group as much as you can into one assembly but separate them by namespaces. But if you feel that it really does make sense to keep BaseVendorLicenseInfoProvider completely separate from everything else then I also do not see that as an issue.
At the end of the day it is all about what you feel is right. Everyone has their own opinion, of course, but as long as what you have works for you, I don't see a problem.
I know how to branch the code based on Mono (Type.GetType("Mono.Runtime") != null) but even when the Mono code path is taken, Mono is attempting to load assemblies that would be required by the non-Mono code path. This is not all that surprising, but how do I get around the problem? I have tried putting the call to the non-Mono assembly in a different class, but that didn't help.
The only option to do it directly is Reflection all the way, so far as I can see.
I'd suggest a more roundabout approach: refactor all your code that is dependent on Mono or .NET into separate assemblies, one for each platform - let's call them MA and NA. Make sure that the entire API surface of your classes there is covered by common interfaces, which should be in the 3rd assembly, IA. After that, your main application references IA for interfaces, and uses Reflection just once to load either MA or NA depending on whether it's running on Mono or .NET, and obtain the instance of "top-level factory class". Once there, it just uses normal calls via IA interfaces to instantiate all other objects via that factory and work with them.
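A rough sketch of that loading step (the assembly names MA and NA come from the answer; the factory type names are placeholders):

using System;
using System.Reflection;

// In the IA assembly: platform-invariant interfaces only.
public interface IFileSystem
{
    string ReadAllText(string path);
}

public interface IPlatformFactory
{
    IFileSystem CreateFileSystem();
}

// In the main application: reflect exactly once to pick the platform assembly.
public static class PlatformLoader
{
    public static IPlatformFactory Load()
    {
        bool onMono = Type.GetType("Mono.Runtime") != null;
        Assembly platform = Assembly.Load(onMono ? "MA" : "NA");
        Type factoryType = platform.GetType(onMono ? "MA.MonoFactory" : "NA.NetFactory");
        return (IPlatformFactory)Activator.CreateInstance(factoryType);
    }
}

From there on, everything goes through the IA interfaces, so neither MA nor NA is ever referenced at compile time.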
Expanding on Pavel's answer, you can use a plugin framework to help with the conditionality of loading bits of code that are specific to a platform. You can use Mono.Addins or MS' own open-sourced Managed Extensibility Framework, known as MEF (http://www.codeplex.com/MEF).
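As a very rough sketch of the MEF route (the interface and directory names are invented), the host imports whatever platform parts happen to be deployed next to it:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlatformPart
{
    void Initialize();
}

public class Host
{
    [ImportMany]
    public IEnumerable<IPlatformPart> Parts { get; set; }

    public void Compose(string partsDirectory)
    {
        // Only assemblies actually present in the directory are loaded,
        // so the .NET-only parts can simply be left out of a Mono deployment.
        var catalog = new DirectoryCatalog(partsDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);   // fills Parts via [ImportMany]
    }
}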
Don't add the reference in the command-line compiler options. If you are using a high level IDE tool then you might have to play with its project settings to effect the same thing.
There are other files that come into play too, like AssemblyInfo.cs, which might contain instructions about the assemblies you are considering. Also, the program might be using types from App.config (configuration file) or Web.config (ASP.NET), or dynamic type loading.
For your dependencies, don't rely on the fact that your code is JITted and that only called code is JITted.
It is best to always assume that whatever is referenced will be loaded and has to be available.
Your user might choose to use AOT, which is Mono's counterpart of NGEN.
Also, subtle differences in how newer runtime versions handle things like serialization, remoting, security, reflection, etc. can lead to your references being loaded even if your code does not use anything from them directly. (The serializer, for example, might have pulled in all types, which then loaded other assemblies.)
Use interfaces or classic inheritance, or maybe events or other means of indirection, to load the .NET parts only when they are appropriate. And by that I mean an assembly that you don't reference but load dynamically.
I believe that the usage of preprocessor directives like #if UsingNetwork is bad OO practice - other coworkers do not.
I think that when using an IoC container (e.g. Spring), components can be easily configured if programmed accordingly. In this context, either a property IsUsingNetwork can be set by the IoC container or, if the "using network" implementation behaves differently, another implementation of that interface should be implemented and injected (e.g.: IService, ServiceImplementation, NetworkingServiceImplementation), as sketched below.
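To make the intended design concrete, a minimal sketch using constructor injection rather than any particular container (the method bodies are placeholders):

public interface IService
{
    void Send(string message);
}

public class ServiceImplementation : IService
{
    public void Send(string message) { /* local, non-networked behaviour */ }
}

public class NetworkingServiceImplementation : IService
{
    public void Send(string message) { /* talks to the network */ }
}

// The consumer never knows which implementation it got; the container decides.
public class Consumer
{
    private readonly IService _service;

    public Consumer(IService service)
    {
        _service = service;
    }

    public void Notify()
    {
        _service.Send("hello");
    }
}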
Can somebody please provide citations of OO gurus or references in books which basically read "preprocessor usage is bad OO practice if you try to configure behaviour which should be configured via an IoC container"?
I need these citations to convince coworkers to refactor...
Edit: I do know and agree that using preprocessor directives to change target-platform-specific code during compilation is fine, and that is what preprocessor directives are made for. However, I think that runtime configuration should be used rather than compile-time configuration to get well-designed and testable classes and components. In other words: using #defines and #if's beyond what they are meant for will lead to code that is difficult to test and to badly designed classes.
Has anybody read something along these lines and can give me a reference I can point to?
Henry Spencer wrote a paper called #ifdef Considered Harmful.
Also, Bjarne Stroustrup himself, in chapter 18 of his book The Design and Evolution of C++, frowns on the use of the preprocessor and wishes to eliminate it completely. However, Stroustrup also recognizes the necessity of the #ifdef directive and conditional compilation, and goes on to illustrate that there is no good alternative for it in C++.
Finally, Pete Goodliffe, in chapter 13 of his book Code Craft: The Practice of Writing Excellent Code, gives an example of how, even when used for its original purpose, #ifdef can make a mess out of your code.
Hope this helps. However, if your co-workers won't listen to reasonable arguments in the first place, I doubt book quotes will help convince them ;)
Preprocessor directives in C# have very clearly defined and practical use cases. The ones you're specifically talking about, called conditional directives, are used to control which parts of the code are compiled and which aren't.
There is a very important difference between not compiling parts of code and controlling how your object graph is wired via IoC. Let's look at a real-world example: XNA. When you're developing XNA games that you plan to deploy on both Windows and Xbox 360, your solution will typically have at least two platforms that you can switch between in your IDE. There will be several differences between them, but one of those differences will be that the Xbox 360 platform will define a conditional symbol XBOX360, which you can use in your source code with the following idiom:
#if (XBOX360)
// some XBOX360-specific code here
#else
// some Windows-specific code here
#endif
You could, of course, factor out these differences using a Strategy design pattern and control via IoC which one gets instantiated, but the conditional compilation offers at least three major advantages:
You don't ship code you don't need.
You can see the differences between platform-specific code for both platforms in the rightful context of that code.
There's no indirection overhead. The appropriate code is compiled, the other isn't and that's it.
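For contrast, a rough sketch of the Strategy alternative mentioned above (type names are illustrative, bodies are placeholders):

public interface IInputStrategy
{
    float ReadHorizontalAxis();
}

public class GamePadInputStrategy : IInputStrategy
{
    public float ReadHorizontalAxis() { return 0f; /* gamepad-based input in a real game */ }
}

public class KeyboardInputStrategy : IInputStrategy
{
    public float ReadHorizontalAxis() { return 0f; /* keyboard-based input in a real game */ }
}

With IoC, both implementations ship in every build and every call goes through the interface - exactly the shipping and indirection overhead the three points above avoid.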
IMHO, you talk about C and C++, not about OO practice in general. And C is not Object-oriented. In both languages the preprocessor is actually useful. Just use it correctly.
I think this is answered by the C++ FAQ: [29.8] Are you saying that the preprocessor is evil?
Yes, that's exactly what I'm saying: the preprocessor is evil.
Every #define macro effectively creates a new keyword in every source file and every scope until that symbol is #undefd. The preprocessor lets you create a #define symbol that is always replaced independent of the {...} scope where that symbol appears.
Sometimes we need the preprocessor, such as the #ifndef/#define wrapper within each header file, but it should be avoided when you can. "Evil" doesn't mean "never use." You will use evil things sometimes, particularly when they are "the lesser of two evils." But they're still evil :-)
I hope this source is authoritative enough :-)
"The preprocessor is the incarnation of evil, and the cause of all pain on earth" -Robert (OO Guru)
IMO it is important to differentiate between #if and #define. Both can be useful and both can be overused. My experience is that #define is more likely to be overused than #if.
I spent 10+ years doing C and C++ programming. In the projects I worked on (commercially available software for DOS / Unix / Macintosh / Windows) we used #if and #define primarily to deal with code portability issues.
I spent enough time working with C++ / MFC to learn to detest #define when it is overused - which I believe to be the case in MFC circa 1996.
I then spent 7+ years working on Java projects. I cannot say that I missed the preprocessor (although I most certainly did miss things like enumerated types and templates / generics which Java did not have at the time).
I've been working in C# since 2003. We have made heavy use of #if and [Conditional("DEBUG")] for our debug builds - but #if is just a more convenient, and slightly more efficient way of doing the same things we did in Java.
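As a minimal sketch of the [Conditional("DEBUG")] technique (the DebugLog helper is a made-up name, not from our code base) - call sites, including argument evaluation, are removed entirely when DEBUG is not defined:

using System;
using System.Diagnostics;

static class DebugLog
{
    [Conditional("DEBUG")]
    public static void Write(string message)
    {
        Console.WriteLine(message);   // only ever reached in DEBUG builds
    }
}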
Moving forward, we have started to prepare our core engine for Silverlight. While everything we are doing could be done without #if, it is less work with #if, which means we can spend more time adding features that our customers are asking for. For example, we have a value class which encapsulates a system color for storage in our core engine. Below are the first few lines of code. Because of the similarity between System.Drawing.Color and System.Windows.Media.Color, the conditional using alias at the top gets us a lot of functionality in normal .NET and in Silverlight without duplicating code:
using System;
using System.Collections.Generic;
using System.Text;
using System.Diagnostics;
#if SILVERLIGHT
using SystemColor = System.Windows.Media.Color;
#else
using SystemColor = System.Drawing.Color;
#endif
namespace SpreadsheetGear.Drawing
{
/// <summary>
/// Represents a Color in the SpreadsheetGear API and provides implicit conversion operators to and from System.Drawing.Color and / or System.Windows.Media.Color.
/// </summary>
public struct Color
{
public override string ToString()
{
//return string.Format("Color({0}, {1}, {2})", R, G, B);
return _color.ToString();
}
public override bool Equals(object obj)
{
return (obj is Color && (this == (Color)obj))
|| (obj is SystemColor && (_color == (SystemColor)obj));
}
...
The bottom line for me is that there are many language features which can be overused, but this is not a good enough reason to leave these features out or to make strict rules prohibiting their use. I must say that moving to C# after programming in Java for so long helps me to appreciate this because Microsoft (Anders Hejlsberg) has been more willing to provide features which might not appeal to a college professor, but which make me more productive in my job and ultimately enable me to build a better widget in the limited time anybody with a ship date has.
One problem with the preprocessor's #ifdef's is that each one effectively doubles the number of compiled versions that, in theory, you should test thoroughly so that you can say that your delivered code is correct.
#ifdef DEBUG
//...
#else
//...
#endif
Ok, now I can produce the "Debug" version and the "Release" version. This is ok for me, I always do it, because I have assertions and debug traces which are only executed in the debug version.
If someone comes and writes (real life example)
#ifdef MANUALLY_MANAGED_MEMORY
...
And they write a pet optimization which they propagate to four or five different classes, then suddenly you have FOUR possible ways to compile your code.
If you add just one more #ifdef-dependent piece of code, you'll have EIGHT possible versions to generate, and, what's more disturbing, FOUR of them will be possible release versions.
Of course runtime if()'s, like loops and whatever, create branches that you have to test - but I find it much more difficult to guarantee that every compile-time variation of the configuration remains correct.
This is the reason why I think, as a policy, all #ifdef's except the one for the Debug/Release version should be temporary, i.e. you're doing an experiment in development code and you'll decide, soon, whether it stays one way or the other.
Bjarne Stroustrup provides his answer (in general, not specific to IoC) here
So, what's wrong with using macros?
(excerpt)
Macros do not obey the C++ scope and type rules. This is often the cause of subtle and not-so-subtle problems. Consequently, C++ provides alternatives that fit better with the rest of C++, such as inline functions, templates, and namespaces.
The support for preprocessing in C# is minimal... verging on useless. Is that evil?
Is the preprocessor anything to do with OO? Surely it's for build configuration.
For instance, I have a lite version and a pro version of my app. I might want to exclude some code in the lite version without having to resort to building very similar versions of the code.
I might not want to ship a lite version which is just the pro version with different runtime flags.
Tony
Using a #if instead of an IoC or some other mechanism for controlling different functionality based on configuration is probably a violation of the Single Responsibility Principle, which is the key for 'modern' OO designs. Here is an extensive series of articles about OO design principles.
Since the parts in the different sections of the #if by definition concern themselves with different aspects of the system, you are now coupling the implementation details of at least two different components into the dependency chain of your code that uses the #if.
By refactoring those concerns out, you have created a class that, assuming it is finished and tested, will no longer need to be cracked open unless the common code is broken.
In your original case, you'll need to remember the existence of the #if and take it into account any time any of the three components change with respect to possible side-effects of a breaking change.
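A compressed sketch of that refactoring (all type names invented), turning the #if UsingNetwork split into two classes behind one interface:

public class Document { /* payload elided */ }

public interface IDocumentStore
{
    void Save(Document doc);
}

// Each class now has a single reason to change.
public class NetworkDocumentStore : IDocumentStore
{
    public void Save(Document doc) { /* upload to the server */ }
}

public class LocalDocumentStore : IDocumentStore
{
    public void Save(Document doc) { /* write to local disk */ }
}

The consumer asks for an IDocumentStore and no longer needs to be cracked open when either implementation changes.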
In C# / VB.NET I would not say it's evil.
For example, when debugging Windows services, I put the following in Main so that when in Debug mode, I can run the service as an application.
<MTAThread()> _
<System.Diagnostics.DebuggerNonUserCode()> _
Shared Sub Main()
#If DEBUG Then
    'Starts this up as an application.
    Dim _service As New DispatchService()
    _service.ServiceStartupMethod(Nothing)
    System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite)
#Else
    'Runs as a service.
    Dim ServicesToRun() As System.ServiceProcess.ServiceBase
    ServicesToRun = New System.ServiceProcess.ServiceBase() {New DispatchService}
    System.ServiceProcess.ServiceBase.Run(ServicesToRun)
#End If
End Sub
This is configuring the behavior of the application, and is certainly not evil. At the very least, it's not as evil as trying to debug a service startup routine.
Please correct me if I read your OP wrong, but it seems that you are complaining about others using a preprocessor when a simple boolean would suffice. If that's the case, don't damn the preprocessor; damn those using it in such fashion.
EDIT:
Re: first comment. I don't get how that example ties in here. The problem is that the preprocessor is being misused, not that it is evil.
I'll give you another example. We have an application that does version checking between client and server on startup. In development, we often have different versions and don't want to do a version check. Is this evil?
I guess what I am trying to say is that the preprocessor is not evil, even when it changes program behavior. The problem is that someone is misusing it. What is wrong with saying that? Why are you trying to dismiss a language feature?
Much later EDIT: FWIW, I haven't used preprocessor directives for this in a few years. I now use Environment.UserInteractive with a specific arg set ("-c" = Console), and a neat trick I picked up from somewhere here on SO to not block the application but still wait for user input.
Shared Sub Main(args() As String)
    If Environment.UserInteractive Then
        'Starts this up as an application.
        If args.Length > 0 AndAlso args(0).Equals("-c") Then 'usually a "DeriveCommand" function that returns an enum or something similar
            Dim isRunning As Boolean = True
            Dim _service As New DispatchService()
            _service.ServiceStartupMethod(Nothing)
            Console.WriteLine("Press ESC to stop running")
            While isRunning
                'Poll without blocking the service until ESC is pressed.
                While Not (Console.KeyAvailable AndAlso Console.ReadKey(True).Key = ConsoleKey.Escape)
                    Threading.Thread.Sleep(1)
                End While
                Console.WriteLine()
                Console.WriteLine("Press ESC to continue running or any other key to continue shutdown.")
                Dim key = Console.ReadKey(True)
                If key.Key = ConsoleKey.Escape Then
                    Console.WriteLine("Press ESC to load shutdown menu.")
                    Continue While
                End If
                isRunning = False
            End While
            _service.ServiceStopMethod()
        Else
            Throw New ArgumentException("Dont be a double clicker, this needs a console for reporting info and errors")
        End If
    Else
        'Runs as a service.
        Dim ServicesToRun() As System.ServiceProcess.ServiceBase
        ServicesToRun = New System.ServiceProcess.ServiceBase() {New DispatchService}
        System.ServiceProcess.ServiceBase.Run(ServicesToRun)
    End If
End Sub
Preprocessor code injection is to the compiler what triggers are to the database. And it's pretty easy to find such assertions about triggers.
I mainly think of #define being used to inline a short expression because it saves the overhead of a function call. In other words, it's premature optimization.
One quick point to tell your coworkers is this: the preprocessor breaks operator precedence in mathematical statements if symbols are used in such statements. The classic example: with #define SQUARE(x) x*x, the expression SQUARE(a+b) expands to a+b*a+b, which is not the square of a+b.
I have no guru statement regarding the usage of preprocessor directives in mind and cannot add a reference to a famous one. But I want to give you a link to a simple sample found in Microsoft's MSDN.
#define A
#undef B
class C
{
#if A
void F() {}
#else
void G() {}
#endif
#if B
void H() {}
#else
void I() {}
#endif
}
This will result in the simple
class C
{
void F() {}
void I() {}
}
and I think it is not very easy to read, because you have to look at the top to see what exactly is defined at this point. It gets even more complex if the symbols are defined somewhere else.
For me it looks much simpler to create different implementations and inject them into a caller instead of switching defines to create "new" class definitions (and because of this I understand why you compare the usage of preprocessor definitions with the usage of IoC). Besides the poor readability of code using preprocessor instructions, I rarely use preprocessor definitions, because they increase the complexity of testing your code by creating multiple code paths (though that is also a problem of having multiple implementations injected by an external IoC container).
Microsoft itself has used a lot of preprocessor definitions within the Win32 API, and you might know/remember the ugly switching between char and wchar_t method calls.
Maybe you should not say "don't use it". Tell them "how to use it" and "when to use it" instead. I think everyone will agree with you if you come up with good (more understandable) alternatives and can describe the risks of using preprocessor defines/macros.
No need for a guru... just be a guru. ;-)
I wanted to ask a new question, but it looks like it fits here. I agree that having a full-blown preprocessor may be too much for Java. There is one clear need that is covered by the preprocessor in the C world and not covered at all in the Java world:
I want debug printouts to be completely ignored by the compiler depending on the debug level. Right now we rely on "good practice", but in practice this is hard to enforce, and some redundant CPU load remains.
In Java style, that could be solved by having some designated methods like debug(), warning(), etc., for which the calls are generated conditionally.
Actually, that would amount to integrating a bit of Log4J into the language.