My current C# application consists of a single executable (EXE) and a few DLLs (so multiple binaries), and I want to create a logger: a simple custom class that writes to a single text file no matter where it is called from, available in all the binaries (the EXE and the DLLs). Note that it is a single-threaded application.
Right now I have a DLL (Logger) containing a class (Log) with a Trace method that does the logging. Each project adds a reference to this DLL, creates its own Log instance (passing in a file name), and then calls .Trace(...) - this works fine.
But I would rather not create many different trace files (at minimum one per project), and having to create a new Logger instance each time seems repetitive. So I was looking into either creating a static logger class or using a singleton; I am just not sure which is best and why.
I was hoping someone could point me in the right direction: what is the best way to create a logger class (in its own DLL) that is referenced by many projects, all of which write to the same log file?
Any help would be much appreciated.
Thanks,
You choose between a singleton and a static logger the same way you always choose between the two: do you want to be able to override methods (or program against a logging interface)? With a singleton you have the opportunity to override methods to change functionality, and you can even abstract the behavior away behind an interface.
With a static class you are now and forever tied to that class, and any changes affect everyone.
When dealing with my own systems, I have moved towards singleton instances. They give a level of flexibility that is not available with static classes.
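To make that concrete, here is a minimal sketch of the singleton approach; the FileLogger name, the ILog interface shape, and the hard-coded app.log path are placeholders, not something your existing Logger DLL already has:

using System;
using System.IO;

public interface ILog
{
    void Trace(string message);
}

public sealed class FileLogger : ILog
{
    // Lazy<T> creates the instance on first use; more than is strictly
    // needed for a single-threaded app, but it costs nothing.
    private static readonly Lazy<FileLogger> _instance =
        new Lazy<FileLogger>(() => new FileLogger("app.log"));

    public static ILog Instance => _instance.Value;

    private readonly string _filePath;

    private FileLogger(string filePath)
    {
        _filePath = filePath;
    }

    public void Trace(string message)
    {
        File.AppendAllText(_filePath, $"{DateTime.Now:O} {message}{Environment.NewLine}");
    }
}

Every project that references the Logger DLL then calls FileLogger.Instance.Trace(...), and all output lands in the same file. Because callers only see ILog, swapping the implementation (or substituting a fake in tests) only requires changing what Instance returns.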
I have the following structure:
PdfGenerationService (umbrella)
PdfGenerationDataService (gets data for umbrella)
PdfGenerationFunctionService (calls Azure function "microservice" to gen PDF)
PdfAzureSaveService (saves PDF to storage)
The problem is that other devs and I tend to use one of the "supporting" services outside of the "umbrella" service, even though they have no stand-alone functionality. We remember this too late, and by the time we do, we need to refactor.
For example, we call the supporting PdfAzureSaveService from a controller and only then remember that PdfGenerationService already does all of this for us without exposing the details, so we need to refactor.
I want to limit the helper services to only be usable within the umbrella PdfGenerationService. Access levels don't seem to help with that, unless I make an arbitrary parent class that all four inherit from and mark the helper services protected. The other alternative is to move all the helpers into private methods on the "umbrella" service, but that really violates DRY, in my opinion.
tl;dr: Is there some way to mark methods as only accessible from one service?
Edit: JHBonarius made a good point that I should mention: these services ARE exposed via standard .NET Core DI. There is no real reason they couldn't be static (avoiding DI entirely), but that still leaves the problem of other services/controllers importing the namespace and using the static services where they shouldn't.
So your scenario is that you have a class which requires three other classes:
public PdfGenerationService(
    PdfGenerationDataService s1,
    PdfGenerationFunctionService s2,
    PdfAzureSaveService s3
)
And you register those classes with DI:
services.AddTransient<PdfGenerationService>();
services.AddTransient<PdfGenerationDataService>();
services.AddTransient<PdfGenerationFunctionService>();
services.AddTransient<PdfAzureSaveService>();
And now you want to prevent developers from writing, in their own code:
public Foo(PdfGenerationFunctionService s1)
Because they're supposed to use PdfGenerationService as a dependency?
Then move all those classes into their own library, and make the three dependencies of the first service internal. Now other code can't refer to them by their name, so it can't ask for them to be injected.
Or write an analyzer that checks that other code doesn't use those classes. Or mark them obsolete, suppressing the warning in PdfGenerationService's constructor. Or throw an exception in the three other classes' methods if the calling frame doesn't originate from PdfGenerationService (but don't).
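One possible shape for the first option, assuming the default Microsoft.Extensions.DependencyInjection container; the project split and the AddPdfGeneration extension method are illustrative, not something your code already has:

using Microsoft.Extensions.DependencyInjection;

// All four classes live in their own class library.
public class PdfGenerationService
{
    private readonly PdfGenerationDataService _data;
    private readonly PdfGenerationFunctionService _function;
    private readonly PdfAzureSaveService _save;

    // Internal constructor: only this assembly can construct the umbrella
    // directly, and its parameters may therefore use internal types.
    internal PdfGenerationService(
        PdfGenerationDataService data,
        PdfGenerationFunctionService function,
        PdfAzureSaveService save)
    {
        _data = data;
        _function = function;
        _save = save;
    }
}

// Internal: invisible to other assemblies, so controllers cannot inject them.
internal class PdfGenerationDataService { }
internal class PdfGenerationFunctionService { }
internal class PdfAzureSaveService { }

public static class PdfGenerationRegistrationExtensions
{
    public static IServiceCollection AddPdfGeneration(this IServiceCollection services)
    {
        services.AddTransient<PdfGenerationDataService>();
        services.AddTransient<PdfGenerationFunctionService>();
        services.AddTransient<PdfAzureSaveService>();

        // The default container only looks at public constructors, so the
        // umbrella is wired up via a factory that runs inside this assembly.
        services.AddTransient(sp => new PdfGenerationService(
            sp.GetRequiredService<PdfGenerationDataService>(),
            sp.GetRequiredService<PdfGenerationFunctionService>(),
            sp.GetRequiredService<PdfAzureSaveService>()));

        return services;
    }
}

The consuming app then just calls services.AddPdfGeneration() and takes a dependency on PdfGenerationService; the three supporting services no longer show up in IntelliSense outside the library.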
We have the following scenario where we encounter an issue with Serilog due to .NET's AppDomain isolation:
|App| --> |3rd| --> |Class| --> Log.Debug("...") --> [NOK]
`-------------> |Class| --> Log.Debug("...") --> [OK]
(1) (2) (3) (4) (5)
We create a global Logger object in our app's Main() method and assign it to Log.Logger.
We invoke a third-party tool that internally creates a new AppDomain. (We have no control over this and cannot change this behavior.)
Instances of Class are created both within our app and by the third-party tool.
Class executes code for logging using Serilog's static Log class.
Log.Logger references the correctly configured Logger when accessed from the Class object created by our app, but is just the default SilentLogger in the Class object created via the third-party tool.
This, I guess, is the expected behavior, since the two Class objects belong to two different AppDomains and static variables are isolated per AppDomain.
Is there some way to work around this? Can we somehow make use of Serilog's convenient static Log class? Is there something else in the Serilog framework other than the Log class solving this kind of issue, that we might have missed? I found a similar issue in #380 but without a final solution, as far as I could see.
We considered just creating a new Logger object in the new AppDomain. However, that means the properties that we pushed to the Logger somewhere between (1)-(2) and (1)-(3) are not included in the new Logger's log events.
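For reference, that workaround would look roughly like the sketch below, assuming the configuration is repeated in code that runs inside the new AppDomain (for example a static constructor of Class). The file sink and path are placeholders, and as noted above, enrichment added in the original AppDomain would still be missing:

using System;
using Serilog;

public class Class
{
    static Class()
    {
        // Runs once per AppDomain, so the copy of Log.Logger inside the
        // third-party tool's AppDomain also gets a real logger instead of
        // the default SilentLogger. Note it also re-runs in our own
        // AppDomain and would overwrite what Main() configured there.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.File("app.log", shared: true) // Serilog.Sinks.File; shared lets both domains append to one file
            .CreateLogger();
    }

    public void DoWork()
    {
        Log.Debug("Working in AppDomain {Domain}", AppDomain.CurrentDomain.FriendlyName);
    }
}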
When implementing the facade pattern in Java, I can easily hide the subsystem of the facade by using the package-private modifier. As a result, there is only a small interface accessible from outside the facade/package, other classes of the sub-system are not visible.
As you already know, there is no package-private modifier in C#, but a similar one called internal. According to the docs, classes defined as internal are only accessible within the same assembly.
From what I understand, I have to create at least two assemblies (in practice, two .exe/.dll files) in order to hide the facade's subsystem physically. By physically I mean that the classes a) cannot be instantiated from outside and b) are not shown by IntelliSense outside the facade.
Do I really have to split my small project into one .exe and one .dll (for the facade) just so that the internal keyword has an effect? My facade's subsystem consists of only two classes; a dedicated .dll seems like overkill.
If yes, what is the best practice way in Visual Studio to outsource my facade to its own assembly?
Don't get me wrong, I have no real need to split my program into several assemblies. I just want to hide some classes behind my facade from IntelliSense and prevent instantiation from outside. But if I'm not wrong, there is no easier way than that.
Using a separate project is the generally preferred approach. In fact, you often have interfaces or facades in a third assembly that both the implementation and UI assemblies reference.
That said, you can accomplish this in a single assembly using a private nested class:
public interface IMyService {}

public static class MyServiceBuilder
{
    public static IMyService GetMyService()
    {
        // Most likely your real implementation has the service stored somewhere
        return new MyService();
    }

    private sealed class MyService : IMyService
    {
        //...
    }
}
The outer class effectively becomes your 'package' for privacy scoping purposes. You probably wouldn't want to do this for large 'packages'; in those cases, it's cleaner to move the code to a separate assembly and use internal.
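A consumer of this assembly then only ever sees the interface; a quick usage sketch:

IMyService service = MyServiceBuilder.GetMyService();
// MyService itself is private to the builder, so code outside cannot
// instantiate it directly; the following would not compile:
// var direct = new MyServiceBuilder.MyService();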
Note that if your primary objection to multiple assemblies is deployment, you can merge multiple assemblies afterwards to produce a single executable or library. That way you retain the insulation benefits of multiple projects/assemblies without the headache of multiple files that could be distributed or versioned independently.
I'm developing a Class Library / API and I need to store some global parameters that will be used by some classes. I thought about two main ways to do so (ignoring configuration files which I'd prefer not using in this case):
1) Specifying the parameters in a static class, like this:
// Stores and validates settings
ApiConfiguration.SetConfiguration("some values or class here");
var methods1 = new MyFirstApiMethods();
methods1.DoStuff(); // Internally uses static ApiConfiguration.
var methods2 = new MySecondApiMethods();
methods2.DoOtherStuff(); // Internally uses static ApiConfiguration.
2) Creating an instance of the configuration class and passing it to the other classes, like this:
// Create an instance of the configuration class
var config = new ApiConfiguration();
config.ServerName = "some-server-name";
var methods1 = new MyFirstApiMethods(config);
methods1.DoStuff(); // Uses the supplied ApiConfiguration instance.
var methods2 = new MySecondApiMethods(config);
methods2.DoOtherStuff(); // Uses the supplied ApiConfiguration instance.
The first option feels more natural to me, but I can think of some possible downsides (for example, if the configuration is set in two places with different values).
I want to know the possible downsides of each implementation and what is the most common way to do this in known projects of this nature.
I'd say #1 is the most common. I know you said that you didn't want to use configuration files, but if you look at how .NET uses app.config I think you will see that a similar approach to #1 is taken. You don't see instances of app.config settings being passed around to every method/function that needs to read a setting. I normally do VB.NET, for which there is a static My.Settings class that basically achieves the same thing as your #1.
The biggest disadvantage I see to #2 (and probably why it is less common) is that the config class can get passed around a lot. If only a small number of methods actually need to read the config it might be ok, but if many methods need to read the config it starts to become a headache. In my opinion it also clutters up the method signatures. Imagine a class deep in the library that needs to read the config; you may have to pass the config through several higher level classes just to pass it through to the class that needs it.
I'd recommend at least considering using app.config or web.config as either one of these already have built in functionality for this type of thing.
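For completeness, reading a setting from app.config with the classic ConfigurationManager API looks roughly like this (the key name is hypothetical):

// app.config in the consuming application:
// <configuration>
//   <appSettings>
//     <add key="ServerName" value="some-server-name" />
//   </appSettings>
// </configuration>

using System.Configuration; // requires a reference to System.Configuration

public static class ApiConfigReader
{
    public static string ServerName =>
        ConfigurationManager.AppSettings["ServerName"];
}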
EDIT
I was waiting for Brannon to respond with an example, but since he hasn't, I'll go ahead and chime in. IoC containers are great tools to help with dependency injection, but I wouldn't dream of introducing one just for a settings class. If you were already using one, that might be a different story. Let's suppose that you were already using an IoC container and wanted to use it for your config class. That means you still have method signatures that look like:
Function Add (FirstNumber, SecondNumber, Config)
Admittedly that example is a stretch, but you get the idea. The IOC container will resolve your Config dependency (it will create the config class for you), but you still have the config as a parameter to each method/constructor that needs it.
To be honest some of it comes down to personal preference. Keep in mind that VS/.NET uses #1 out of the box when you use app.config. I know that static classes are frequently frowned upon and rightfully so in many cases, but I think that settings/config classes are exceptions to the rule.
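To make option #1 concrete, a minimal sketch of what a static ApiConfiguration might look like; the property name and validation are placeholders:

using System;

// Stores and validates the global settings.
public static class ApiConfiguration
{
    public static string ServerName { get; private set; }

    public static void SetConfiguration(string serverName)
    {
        if (string.IsNullOrWhiteSpace(serverName))
            throw new ArgumentException("Server name is required.", nameof(serverName));

        ServerName = serverName;
    }
}

// Library classes read the settings without having them passed in:
public class MyFirstApiMethods
{
    public void DoStuff()
    {
        var server = ApiConfiguration.ServerName;
        // ...
    }
}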
I like to keep a file full of custom functions I have written, which I may reuse in another project. I don't fully understand how to go about this in C#; in a language like PHP you'd just create the file and then call include("cust_lib.php") or whatever the file is called.
Now I think that the process involves the library having its own namespace, then either go using custom_lib; or custom_lib:: within the script (I don't want to get into a discussion over which is the best way to go here).
Is this right? Or should I create the library and compile it to a .dll? If so, how do I go about that, and what sort of syntax does a DLL have inside it?
However, if it's just a file within one project, then I don't need to go down that route, do I? I can just create the namespace and use that?
This is what I'm working toward at the moment; I thought it would be something like this:
namespace Custom_Lib
{
    // functions to go here
}
However, the functions have to exist within a class, don't they? So that becomes something like:
namespace Custom_Lib
{
    class custom_lib
    {
        public string function1(string input)
        {
            return input;
        }
    }
}
So some help, pointers, or examples would be appreciated so I can wrap my head around this.
Thanks,
Psy.
(Yes I call them functions, that just comes from a long php/js etc background)
The normal approach would be to create a Class Library project, put your classes and methods in that project, making sure that those you want to expose are public. Then you add a reference to the resulting dll file in the client projects and you will have the functionality from the class library available to you.
Even if you decide to put it all into one single file, I would still recommend you to make it a class library since I imagine that will make it easier to maintain. For instance, consider the following scenarios:
You decide to put it in a file and include a copy of that file in all projects where you want to use it. Later you find a bug in the code. Now you will have a number of copies of the file in which to correct the bug.
You decide to put it in a file and include that same file in all projects. Now, if you want to change some behaviour in it, you will alter the behavior for all projects using it.
In those two cases, keeping it as a separate project will facilitate things for you:
You will have only one copy of the code to maintain
You can decide whether or not to update the dll used by a certain project when you make updates to the class library.
Regarding the syntax issues: yes, all methods must exist within a class. However, if the class is merely a container for the methods, you can make it (and the methods) static:
public static class CustomLib
{
    public static string GetSomethingInteresting(int input)
    {
        // your code here, for example:
        return $"The answer is {input}";
    }
}
That way you will not need to create an instance of CustomLib, but can just call the method:
string meaningOfLife = CustomLib.GetSomethingInteresting(42);
In addition to Fredrik Mörk's well-written and spot-on response, I'd add this:
Avoid creating a single class that is a kitchen-sink collection of functions/methods.
Instead, group related methods into smaller classes so that it's easier for you and consumers of your library to find the functionality they want. Also, if your library makes use of class-level variables, you can limit their scope.
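For example, a hypothetical grouping (class and method names are only illustrative):

using System;

// Instead of one kitchen-sink CustomLib, group related helpers:
public static class StringHelpers
{
    public static string Truncate(string value, int maxLength)
    {
        if (value == null) return null;
        return value.Length <= maxLength ? value : value.Substring(0, maxLength);
    }
}

public static class MathHelpers
{
    public static int Clamp(int value, int min, int max)
    {
        return Math.Max(min, Math.Min(max, value));
    }
}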
Further, if you decide later on to add threading capabilities to your library, or if your library is used in a multi-threaded application, static methods that rely on shared static state will likely become a nightmare for you. This is a serious concern and shouldn't be overlooked.
There is no real question here; you answered it yourself. Yes, you have to create a class to hold all the helper methods. And yes, you can either compile it to a DLL if you want to reuse it in multiple projects, or just add the source file to the project.
Usually I declare the helper class and all its functions as static to avoid instantiating the class each time I use it.