What is a preferred approach for logging part-specific errors with imported parts? E.g., if you have the following contract:
public interface IDoStuff
{
void DoYourStuff();
}
with multiple implementations:
[Export(typeof(IDoStuff))]
public class DoStuffCorrectly : IDoStuff
{
// implement DoYourStuff
}
[Export(typeof(IDoStuff))]
public class DoStuffWithExceptions : IDoStuff
{
// implement DoYourStuff and throw an exception
}
and you have a type that uses MEF to compose the parts:
public class DoStuffRunner
{
[ImportMany(typeof(IDoStuff))]
public IEnumerable<IDoStuff> DoStuffParts { get; set; }
// some method that loops through the IEnumerable and calls DoYourStuff
public void Run()
{
foreach(IDoStuff doit in DoStuffParts)
{
doit.DoYourStuff();
}
}
}
In the executing assembly with the importer I am using the Enterprise Library Exception Handling and Logging Application Blocks. The Logging Application Block is configured to send general error messages to the team. Some of the information that I would like to be able to include is which part failed, and possibly which group gets the email.
This is simple enough to configure statically in the app.config, but it would lead to a 1:1 configuration entry for each part that is added, which kind of defeats the purpose of dropping the DLL in the bin. It would be neat if you could control the configuration in the part's assembly.
So, what are some possible approaches that would allow an imported part to expose logging configuration information in a way that jibes with the MEF ideology?
The MEF way is to import a logging interface into your plugins and/or export whatever metadata you need to configure your logger by using custom attributes on your exported class.
I'm not really familiar enough with that logging library (we use log4net) to know what metadata you need or how logging would be actuated in a unified manner if you didn't import a logging interface.
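For illustration, here is a rough sketch of the metadata route. The ILoggingMetadata view, its keys, and the email group are made up for this example; only the MEF attributes themselves are the real mechanism:
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;

// Hypothetical metadata view; property names must match the ExportMetadata keys.
public interface ILoggingMetadata
{
    string PartName { get; }
    string NotifyGroup { get; }
}

[Export(typeof(IDoStuff))]
[ExportMetadata("PartName", "DoStuffWithExceptions")]
[ExportMetadata("NotifyGroup", "team-b@example.com")]
public class DoStuffWithExceptions : IDoStuff
{
    public void DoYourStuff() { throw new InvalidOperationException("boom"); }
}

public class DoStuffRunner
{
    [ImportMany]
    public IEnumerable<Lazy<IDoStuff, ILoggingMetadata>> DoStuffParts { get; set; }

    public void Run()
    {
        foreach (var part in DoStuffParts)
        {
            try
            {
                part.Value.DoYourStuff();
            }
            catch (Exception ex)
            {
                // part.Metadata.PartName and part.Metadata.NotifyGroup can feed the
                // logging call (category, recipients, etc.) along with ex.
            }
        }
    }
}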
I am (as something of a novice) implementing my own custom logger for use in ASP.NET Core MVC apps. I have this logger working functionally in every regard. But I cheated a little so far, namely I implemented the ILogger.IsEnabled method as follows:
public bool IsEnabled(LogLevel logLevel)
{
return true;
}
Functionally, this works fine, since the framework ensures that the Log() method is only invoked if the log level is at or higher than the one specified. So the correct "things" are being logged and the lower-level "things" are not being logged as expected.
However, I also want to support the following kind of situation in my code, where _logger is typed as ILogger and is properly injected in my controller:
if (_logger.IsEnabled(LogLevel.Debug))
{
_logger.LogDebug("This is an expensive message to generate: " +
JsonConvert.SerializeObject(request));
}
To make this effective, my IsEnabled() method should be able to know what the log level IS for the instance of the logger that was created with my LoggerProvider, but I don't know how to get that information directly, or how to pass it properly to the injected instance of the logger I am working with.
Complex examples and tutorials I have been able to find seem to be constructed in every case for console app types, not network app types, and so far I have been unsuccessful at figuring out how to do this through the templated Startup class in ASP.NET MVC.
What is the simplest and most effective way to stop cheating at my custom IsEnabled() method in order to avoid the unnecessary serialization (in my example) if none of the registered loggers in the injected instance are handling the Debug log level? Or do you have a favorite example or tutorial in the ASP.NET core setting you can point me to?
You can take a look at built-in loggers source code and see how they implement it.
In short, they only check that logLevel != LogLevel.None, but depending on the logger logic, you might also want to check some other configuration. For example, the DebugLogger also checks the Debugger.IsAttached property, and the EventLogLogger checks EventLogSettings.Filter (supplied via its constructor).
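For instance, here is a minimal sketch of a level check driven by configuration. MyLoggerOptions and its MinLevel property are assumptions for this example, not framework types:
using System;
using Microsoft.Extensions.Logging;

public class MyLoggerOptions
{
    public LogLevel MinLevel { get; set; } = LogLevel.Information;
}

public class MyLogger : ILogger
{
    private readonly string _name;
    private readonly MyLoggerOptions _options;

    public MyLogger(string name, MyLoggerOptions options)
    {
        _name = name;
        _options = options;
    }

    public IDisposable BeginScope<TState>(TState state) => null;

    public bool IsEnabled(LogLevel logLevel)
    {
        // Never log None, and only log at or above the configured minimum.
        return logLevel != LogLevel.None && logLevel >= _options.MinLevel;
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
            return;

        // Write the formatted message to this logger's target (console here, as a stand-in).
        Console.WriteLine($"[{logLevel}] {_name}: {formatter(state, exception)}");
    }
}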
Update
To make this effective, my IsEnabled() method should be able to know what the log level IS for the instance of the logger that was created with my LoggerProvider, but I don't know how to get that information directly, or how to pass it properly to the injected instance of the logger I am working with.
You can create an implementation of ILoggerProvider which in turn can make use of dependency injection to get the configuration you want. If you want to use the options pattern to configure it, you must do something along the lines of:
public class MyLoggerProvider : ILoggerProvider
{
    private readonly IOptions<MyLoggerOptions> _options;

    public MyLoggerProvider(IOptions<MyLoggerOptions> options)
    {
        _options = options;
    }

    public ILogger CreateLogger(string name)
    {
        return new MyLogger(name, _options.Value);
    }

    // ILoggerProvider derives from IDisposable, so this member is required.
    public void Dispose()
    {
    }
}
And optionally add an extension method to make registration easier:
public static class MyLoggerExtensions
{
    public static ILoggingBuilder AddMyLogger(this ILoggingBuilder builder, Action<MyLoggerOptions> configure)
    {
        builder.Services.TryAddEnumerable(ServiceDescriptor.Singleton<ILoggerProvider, MyLoggerProvider>());
        LoggerProviderOptions.RegisterProviderOptions<MyLoggerOptions, MyLoggerProvider>(builder.Services);
        builder.Services.Configure(configure);
        return builder;
    }
}
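Registration in the host is then a one-liner. A minimal sketch with the .NET 6+ hosting model is below (with the older Startup pattern, the same call goes inside ConfigureServices via services.AddLogging(...)); MinLevel is the assumed option from the sketch above:
var builder = WebApplication.CreateBuilder(args);

// Wire up the custom provider and configure its options in one place.
builder.Logging.AddMyLogger(options =>
{
    options.MinLevel = LogLevel.Debug;
});

var app = builder.Build();
app.Run();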
In principle this looks like a simple job, but I wonder if anyone can take me through the basic steps?
I have an application API, implemented as a C# class library project in the application solution. People can thus write their own conventional .NET applications using this API by referencing the DLL directly.
I now need to make exactly the same functionality available as a web service so applications can be written to remotely access the same API over HTTP. Ideally I would just like to tag the API classes and methods with appropriate web service attributes, but I suspect there is more to it than that. The API DLL must also continue to work as an API for desktop applications as it does at present.
Is this do-able? If so, what are the steps I need to take?
The web service can be composed mostly of wrapper methods. Take the simple case...
If your API method in the assembly is
public void DoFoo(string bar)
Then your web API method (your choice of implementation, such as WebAPI, ASMX web service, etc) will look like
public void DoFoo(string bar) {
// ... initialization or validation
try {
refToDll.DoFoo(bar);
} catch (Exception e) {
// implementation specific return of error.
}
}
If you have mostly static methods or methods taking primitive types, this is easier. If your API has its own types, it becomes harder: you will need to change the type signatures and reimplement methods. Without your API it would be difficult to make specific suggestions. However, there are several options. If you had
public class BazClass {
    private readonly List<int> scores = new List<int>();

    public string GetScore() {
        return scores.Sum().ToString();
    }
}
You basically need to ensure that the remote side (the web API) can reconstruct the context from your client side. You have to pass in a serializable instance or other representation of BazClass and let the remote API work on it. It just doesn't exist otherwise. You could also create a bunch of methods that store state on the server and you work with a "handle" on the client side, or object reference, but that will have to be a design decision (just look at interop with native libraries, and handles, and translate to cross network). Example:
public string BazGetScore(Transport.BazClass baz) {
// Depending on the framework and class (all public getters/setters)?
// your framework may allow for transparent serialization
BazClass bazReal = bazFactory(baz);
string score = bazReal.GetScore();
return score;
}
How much of your source API is based on interfaces? This may make the creation of a Proxy class much more transparent to your end user. If you have
public class Baz : IBaz { ... }
Then you can create a Proxy class that acts just like an IBaz but calls the remote API instead of acting locally. Depending on your framework and tooling, this may be able to be facilitated by the tools.
namespace RemoteAPIProxy {
public class Baz : IBaz {
public string GetScore() {
// initialization of network, API, etc
Transport.Baz baz = Transport.Baz.From(this);
string score = CallRemoteAPI("BazGetScore", baz);
return score;
}
}
}
In summary, you may have to make some intermediate classes depending on if you need to support state, non-public methods, or full scope. The "how" can mostly be considered just another wrapper, but you need to be conscious of how you get your local state over the wire and into the context of the remote API. Use interfaces, serialization helpers, and lightweight transport objects for state to help with the "glue". Remember, the only "I" in "API" is for "Interface", so you might want to make sure you have some. Good luck!
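As a rough illustration of such a transport object, the Transport type used above could be nothing more than a serializable state bag plus a mapping helper that plays the role of the bazFactory call earlier. All names here, and the domain constructor, are placeholders:
using System.Collections.Generic;

namespace Transport
{
    // Plain data-transfer object: public getters/setters only and no behavior,
    // so the web framework can serialize it across the wire.
    public class BazClass
    {
        public List<int> Scores { get; set; } = new List<int>();
    }
}

public static class BazMapper
{
    // Rebuild a "real" domain object on the server side from the transported state.
    // Assumes the domain BazClass exposes a constructor taking its scores.
    public static MyApi.BazClass ToDomain(Transport.BazClass dto)
    {
        return new MyApi.BazClass(dto.Scores);
    }
}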
Summary :
I have a DLL that hosts a class library. The library is used by an ASP.NET website. I need some code (initialization) to be run when the library is used. I have placed the code in the static constructor of one of the classes, which will most likely be used. It runs right now, but I was wondering:
is there a better place to put this code? Some sort of DLL init method?
are there any downfalls? If the class is never used, will the code run anyways?
Details:
I have a DLL that hosts a class library that implements ECommerce to be used on ASP.NET websites. It contains controls and logic objects specific to my client. As part of it, it contains an HTTP handler that handles AJAX calls to the library. The URL associated with the handler has to be registered. I have done this in the static constructor of one of the classes.
using System.Web.Routing;
class CMyClass {
static CMyClass() {
RouteTable.Routes.Insert(0, new Route("myapi/{*pathinfo}", new CMyHTTPHandlerRouter()));
}
}
This works right now. The site that uses the DLL does not have to register the route, which is very convenient. I was wondering, though:
is there a better place to register routes from a DLL? Or a better way to associate a handler with a URL, directly from the DLL, so it is always registered when the DLL is used.
are there any downfalls? If CMyClass is never used, will the code run anyways?
I can answer your second question: the static constructor will only run if you somehow interact with CMyClass. In other words, it's run on demand, not eagerly when the DLL is loaded.
Routes are to be construed as "application code": once the application is "compiled" you cannot make changes to them. This is by design. Application_Start is the place where routes are normally registered.
I would normally abide by this convention. But my reusable logic (i.e. inside any publicly exposed method in the DLL) should ensure that the routes are registered, or else throw an error. That is how the end developers know they aren't using your component right. And if "it" knows the routes are registered, it can safely go and execute the actual stuff.
I'd use a static boolean variable to accomplish that.
public class MyMvcSolution
{
public static bool Registered {get; set; }
static MyMvcSolution(){ Registered = false; }
public static void DoSomethingImportant()
{
if(Registered)
{
//do important stuff
}
else
throw new InvalidOperationException("Whoa, routes are not registered!");
}
//this should be called in the Application_Start
public static void Init()
{
RouteTable.Routes.Insert(0, new Route("myapi/{*pathinfo}", new CMyHTTPHandlerRouter()));
Registered = true;
}
}
I believe the above solution will kind of do.
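For completeness, the consuming site would then call the init method once at startup, typically from Global.asax. A minimal sketch:
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register the library's routes exactly once, before any request is handled.
        MyMvcSolution.Init();
    }
}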
There is an alternative strategy: adding routes "dynamically". The idea is to force the BuildManager to register routes that you declare in a .cs file. This file isn't "compiled" as part of the application; there will be a *.cs file in your application somewhere. You make an assembly out of it on the fly, and from that force the BuildManager to register the routes. There is also a mechanism to "edit" the routes once that file changes. I'll leave it to you to explore this. Deep but interesting stuff.
I have a DLL with some classes and methods. And two applications using it.
One admin application that needs almost every method, and a client application that only needs parts of the stuff. But big parts of it are used by both of them. Now I want to make a DLL with the admin stuff and one with the client stuff.
Duplicating the DLL and editing things manually every time is horrible.
Maybe conditional compilation helps me, but I don't know how to compile the DLL twice with different conditions in one solution with the three projects.
Is there a better approach for this issue than having two different DLLs and manually editing on every change?
In general, you probably don't want admin code exposed on the client side. Since it's a DLL, that code is just waiting to be exploited, because those methods are, by necessity, public. Not to mention decompiling a .NET DLL is trivial and may expose inner workings of your admin program that you really don't want a non-administrator to see.
The best, though not necessarily the "easiest" thing to do, if you want to minimize code duplication, is to have 3 DLLs:
A common library that contains ONLY functions that BOTH applications use
A library that ONLY the admin application will use (or else compile it straight into the application if nothing else uses those functions at all)
A library that ONLY the client application will use (with same caveat as above)
A project that consists of a server, client, and admin client should likely have 3-4 libraries:
Common library, used by all 3
Client library, used by client and server
Admin library, used by server and admin client
Server library, used only by server (or else compile the methods directly into the application)
Have you considered using dependency injection on the common library, with some form of constructor injection to determine the rules that need to be applied during execution?
Here's a very simple example:
public interface IWorkerRule
{
string FormatText(string input);
}
internal class AdminRules : IWorkerRule
{
public string FormatText(string input)
{
return input.Replace("!", "?");
}
}
internal class UserRules : IWorkerRule
{
public string FormatText(string input)
{
return input.Replace("!", ".");
}
}
public class Worker
{
private IWorkerRule Rule { get; set; }
public Worker(IWorkerRule rule)
{
Rule = rule;
}
public string FormatText(string text)
{
//generic shared formatting applied to any consumer
text = text.Replace("#", "*");
//here we apply the injected logic
text = Rule.FormatText(text);
return text;
}
}
class Program
{
//injecting admin functions
static void Main()
{
const string sampleText = "This message is #Important# please do something about it!";
//inject the admin rules.
var worker = new Worker(new AdminRules());
Console.WriteLine(worker.FormatText(sampleText));
//inject the user rules
worker = new Worker(new UserRules());
Console.WriteLine(worker.FormatText(sampleText));
Console.ReadLine();
}
}
When run, it produces this output:
This message is *Important* please do something about it?
This message is *Important* please do something about it.
I am working with MEF to get a plug-in architecture going. I want to design in some extensibility. I want to extend initialization.
What I have is a "driver" which repeatedly collects data from some source. These are my plugins. Each of these plugins needs to be initialized. Right now I have an interface that these plugins are required to implement.
interface IDriverLiveCollection
{
ILiveCollection GetCollection(ILog logger, IDriverConfig config);
}
This interface basically creates an instance of an ILiveCollection from the plugin. For better understanding, ILiveCollection looks like this:
interface ILiveCollection
{
void GetData(Parameter param, DataWriter writer);
void ShutDown();
}
And also the initialization loop:
foreach(IDriverConfig config in DriverConfigs)
{
//Use MEF to load correct driver
var collector = this.DriverLoader(config.DriverId).GetCollection(new Logger(), config);
// someTimer is an IObservable<Parameter> that triggers to tell when to collect data.
someTimer.Subscribe((param)=> collector.GetData(param, new DataWriter(param)));
}
The problem is that some drivers may require more information than their configuration in order to initialize. For example, some drivers would like a set of parameters given to them during initialization.
I could easily extend the interface to now look like:
interface IDriverLiveCollection
{
ILiveCollection GetCollection(ILog logger, IDriverConfig config, IEnumerable<Parameter> parameters);
}
The downside to this approach is that the public interface has changed, and now I need to recompile EVERY driver even though none of them have needed this parameter list in order to function before. I intend to have a LOT of drivers, and I will also not have any control over who writes drivers.
I thought up another solution. I could create interfaces, and inside my loop, between when I call GetCollection and before I subscribe to the timer, I could check whether the ILiveCollection object also implements one of these interfaces:
interface InitWithParameters
{
void InitParams(IEnumerable<Parameter> parameters);
}
in my loop:
foreach(IDriverConfig config in DriverConfigs)
{
//Use MEF to load correct driver
var collector = this.DriverLoader(config.DriverId).GetCollection(new Logger(), config);
// Check to see if this thing cares about params.
if(collector is InitWithParameters)
{
((InitWithParameters)collector).InitParams(ListOfParams);
}
// Continue with newly added interfaces.
// someTimer is an IObservable<Parameter> that triggers to tell when to collect data.
someTimer.Subscribe((param)=> collector.GetData(param, new DataWriter(param)));
}
The difference here is that I will not need to recompile every driver in order to get this to work. Old drivers will simply not be of type InitWithParameters and will not be called that way, while new drivers will be able to take advantage of the new interface. If an old driver wants to take advantage, it can simply implement that interface and be recompiled. The bottom line: I will not need to recompile drivers UNLESS they want the functionality.
The downsides that I have recognized are: I will obviously need to recompile whichever program contains this loop; there is a versioning issue when a new driver is used with an old version of that program, which could result in some issues; and finally, I have to hold a huge list of every possible type in that program as these things grow.
Is there a better way to do this?
Edit: Additional info:
I am attempting to use MEF on the IDriverLiveCollection, not on ILiveCollection since IDriverLiveCollection allows me to construct a specific ILiveCollection with custom initialization parameters.
It is possible to have 2 ILiveCollections of the same type (2 FooLiveCollections) each with a different ILog and IDriverConfig and potentially IEnumerable. I would like to be able to specify these during the "initialization loop" and not at the time of composition of the plugins.
If you just use [ImportMany] directly on your ILiveCollection interface, you could handle this entire infrastructure via the [ImportingConstructor] attribute.
This allows your plugins to specify exactly what they need to be able to be constructed, without having to provide and construct the types later.
Effectively, your host application would need nothing but:
// This gets all plugins
[ImportMany]
IEnumerable<ILiveCollection> LiveCollections { get; set; }
Each plugin would then have its own type that exports this, i.e.:
[Export(typeof(ILiveCollection))]
public class FooLiveCollection : ILiveCollection
{
[ImportingConstructor]
public FooLiveCollection(ILog logger, IDriverConfig config)
{
// ...
Or, alternatively, a plugin could leave off one constructor argument, or add more later (without affecting previous plugins), i.e.:
[ImportingConstructor]
public BarLiveCollection(ILog logger, IDriverConfig config, IBarSpecificValue barParam)
{
// ...
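On the host side, a minimal composition sketch that satisfies those constructor imports might look like the following. The plugin folder name, Logger, and LoadConfig are placeholders; note that this supplies one shared ILog and IDriverConfig, so per-driver configuration would still need export metadata or a factory import:
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public class CollectionHost
{
    [ImportMany]
    public IEnumerable<ILiveCollection> LiveCollections { get; set; }

    public void Compose()
    {
        // Pick up plugin assemblies dropped into a folder.
        var catalog = new DirectoryCatalog("plugins");
        var container = new CompositionContainer(catalog);

        // Supply the dependencies the plugins' [ImportingConstructor]s ask for.
        container.ComposeExportedValue<ILog>(new Logger());
        container.ComposeExportedValue<IDriverConfig>(LoadConfig());

        // Populate LiveCollections on this instance.
        container.ComposeParts(this);
    }

    private IDriverConfig LoadConfig()
    {
        // Placeholder for however the host loads driver configuration.
        throw new System.NotImplementedException();
    }
}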