I have a DLL with some classes and methods, and two applications using it.
One is an admin application that needs almost every method, and the other is a client application that only needs part of the functionality, but large parts are used by both. Now I want to make one DLL with the admin parts and one with the client parts.
Duplicating the DLL and editing things manually every time is horrible.
Maybe conditional compilation could help, but I don't know how to compile the DLL twice with different conditions in one solution with the three projects.
Is there a better approach for this issue than having two different DLLs and manually editing on every change?
In general, you probably don't want admin code exposed on the client side. Since it's a DLL, that code is just waiting to be exploited, because those methods are, by necessity, public. Not to mention decompiling a .NET DLL is trivial and may expose inner-workings of your admin program you really don't want a non-administrator to see.
The best, though not necessarily the "easiest", thing to do if you want to minimize code duplication is to have 3 DLLs (a minimal sketch follows the lists below):
A common library that contains ONLY functions that BOTH applications use
A library that ONLY the admin application will use (or else compile it straight into the application if nothing else uses those functions at all)
A library that ONLY the client application will use (with same caveat as above)
A project that consists of a server, client, and admin client should likely have 3-4 libraries:
Common library, used by all 3
Client library, used by client and server
Admin library, used by server and admin client
Server library, used only by server (or else compile the methods directly into the application)
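For the original two-application case, a minimal sketch of that split might look like the following. All names here are illustrative, not taken from your code:

// Common.dll - referenced by both the admin and the client application
namespace Company.Common
{
    public static class OrderService
    {
        public static decimal GetOrderTotal(int orderId)
        {
            // shared logic used by both applications
            return 0m;
        }
    }
}

// Admin.dll - referenced only by the admin application
namespace Company.Admin
{
    public static class UserAdministration
    {
        public static void ResetPassword(int userId)
        {
            // admin-only logic; may call into Company.Common
        }
    }
}

// Client.dll - referenced only by the client application
namespace Company.Client
{
    public static class OrderHistory
    {
        public static decimal ShowTotalForOrder(int orderId)
        {
            // client-only logic built on the shared library
            return Company.Common.OrderService.GetOrderTotal(orderId);
        }
    }
}

Because the admin-only code never ships with the client application, there is nothing sensitive for a client-side decompiler to find.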
Have you considered using dependency injection in the common library? Some form of constructor injection could determine the rules that need to be applied during execution.
Here's a very simple example:
using System;

public interface IWorkerRule
{
    string FormatText(string input);
}

internal class AdminRules : IWorkerRule
{
    public string FormatText(string input)
    {
        return input.Replace("!", "?");
    }
}

internal class UserRules : IWorkerRule
{
    public string FormatText(string input)
    {
        return input.Replace("!", ".");
    }
}

public class Worker
{
    private IWorkerRule Rule { get; set; }

    public Worker(IWorkerRule rule)
    {
        Rule = rule;
    }

    public string FormatText(string text)
    {
        //generic shared formatting applied to any consumer
        text = text.Replace("#", "*");
        //here we apply the injected logic
        text = Rule.FormatText(text);
        return text;
    }
}

class Program
{
    //injecting admin functions
    static void Main()
    {
        const string sampleText = "This message is #Important# please do something about it!";

        //inject the admin rules.
        var worker = new Worker(new AdminRules());
        Console.WriteLine(worker.FormatText(sampleText));

        //inject the user rules
        worker = new Worker(new UserRules());
        Console.WriteLine(worker.FormatText(sampleText));

        Console.ReadLine();
    }
}
When run, it produces this output:
This message is *Important* please do something about it?
This message is *Important* please do something about it.
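If you later move to a DI container, each application can register its own rule against the shared IWorkerRule contract, so the common library never needs to reference admin- or client-specific code. A minimal sketch, assuming the Microsoft.Extensions.DependencyInjection package and that the rule classes are visible to the registering application:

using System;
using Microsoft.Extensions.DependencyInjection;

// In the admin application's composition root:
var adminProvider = new ServiceCollection()
    .AddSingleton<IWorkerRule, AdminRules>()   // the admin app picks the admin rules
    .AddTransient<Worker>()
    .BuildServiceProvider();

var worker = adminProvider.GetRequiredService<Worker>();
Console.WriteLine(worker.FormatText("This is #Important#!"));

// The client application would do the same, but register UserRules instead:
// .AddSingleton<IWorkerRule, UserRules>()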
This is my first topic here and I didn't find any similar topics, so I'll try to describe my problem as well as I can:
My company asked me to create a modular C# program to assist our software developers with background tasks. The program consists of a Windows Forms application with a user interface that calls external DLLs which do the actual work. All these DLLs are written by me as well and follow certain rules to make them compatible with the main app. That way I can easily add new functions to the program just by putting a DLL into a predefined folder; plug-and-run, so to speak.
The main program contains a ListBox that shows all available plug-ins. When one is selected and the "start" button is clicked, the main program calls the corresponding DLL and invokes the method "Programm", which starts the DLL's actual function. Furthermore, the main program contains a method "Output" that is supposed to write the result of every plug-in into a tab of my TabControl, so the results of every plug-in running in separate threads can be viewed independently. Access to the tab already goes through a delegate to make it thread-safe. The information is gathered by invoking the plug-in's own "returnOutput" method, which simply returns a list of strings containing the results to the main program.
My problem now is: how can I implement a kind of callback in my plug-in DLLs so they can tell the main program to gather the results at any time?
My first idea was to simply return the results from the "Programm" method itself, but that would make the information available only at the end of the run, and some of the tasks require a "live update" during runtime.
My second idea was to pass the delegate for the control as a parameter to the plug-in so the plug-in DLL could access the control on its own. This idea failed because the DLL doesn't "know" the main program and can't access its methods or the delegate's instance, so I am always missing a reference.
Is there a way to solve my problem? If necessary I can provide code snippets, but the program already has around 800 lines of code and each plug-in adds a few hundred more.
Thanks in advance for every answer, and sorry for my non-native English :D
Best Regards
Gerrit "Raketenmaulwurf" M.
Edit: I am using SharpDevelop 5.1
Code Snippet for the DLL call:
PlugIn = PlugIns.SelectedItem.ToString();
Assembly PlugInDLL = Assembly.LoadFile(PlugInOrdner + "\\" + PlugIn + ".dll");
Object Objekt = PlugInDLL.CreateInstance("DLL.PlugIn");
MethodInfo Info1 = Objekt.GetType().GetMethod("Programm");
Info1.Invoke(Objekt, new Object[] { Projekt, TIAInstanz });
It basically looks for a DLL file that has the same name as the highlighted item in the ListBox.
There are many different ways to do this. Some of the suggestions in the comments are really good and implementing them would make a robust and extendable solution.
If you are looking for a quick and easy way to get messages from your plugins, though, then you can pass your callback directly to the plugin as an Action:
public class PluginRunner
{
    public class PluginMessageEventArgs
    {
        public string Text { get; set; }
    }

    public event EventHandler<PluginMessageEventArgs> PluginMessage;

    public void Run(string pluginPath)
    {
        Assembly PlugInDLL = Assembly.LoadFile(pluginPath);
        Object Objekt = PlugInDLL.CreateInstance("DLL.PlugIn");
        MethodInfo Info1 = Objekt.GetType().GetMethod("Programm");
        // Projekt and TIAInstanz come from the host application, as in the question's snippet.
        Info1.Invoke(Objekt, new Object[] { Projekt, TIAInstanz, new Action<string>(Log) });
    }

    private void Log(string s)
    {
        PluginMessage?.Invoke(this, new PluginMessageEventArgs { Text = s });
    }
}
so you can use it like:
var path =
    Path.Combine(
        Path.GetDirectoryName(Assembly.GetEntryAssembly().Location),
        "Plugins",
        "MyAwesomePlugin.dll");

var pr = new PluginRunner();
// be aware that your event delegate might be invoked on a plugin's thread, not the application's UI thread!
pr.PluginMessage += (s, e) => Console.WriteLine("LOG: " + e.Text);
pr.Run(path);
then your plugin's Programm method writes its logs:
public void Programm(ProjektClass p0, TIAClass p1, Action<string> log)
{
    Task.Run(() =>
    {
        // do something
        log.Invoke("here am I!");
        // do something else
        log.Invoke("here am I again!");
        // do something more
    });
}
I must admit that this is not the ideal way to deal with messaging. There are far better (and, unfortunately, more complicated to implement) solutions out there. This one is fairly simple, though. Just don't forget that you receive the message on the same thread that sent it, so take care to avoid deadlocks.
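One note on that threading caveat: if the handler updates a WinForms control, marshal the call back onto the UI thread first. A minimal sketch, assuming outputTextBox is a hypothetical TextBox on your main form:

pr.PluginMessage += (s, e) =>
{
    if (outputTextBox.InvokeRequired)
    {
        // We are on the plugin's worker thread, so hop back onto the UI thread.
        outputTextBox.BeginInvoke(new Action(() =>
            outputTextBox.AppendText(e.Text + Environment.NewLine)));
    }
    else
    {
        outputTextBox.AppendText(e.Text + Environment.NewLine);
    }
};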
On an Azure Mobile App Services server side app using MVC 5, Web API 2.0, and EF Core 1.0, controllers can be decorated like so to implement token based authentication:
// Server-side EF Core 1.0 / Web API 2 REST API
[Authorize]
public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        DomainManager = new EntityDomainManager<TodoItem>(context, Request);
    }

    // GET tables/TodoItem
    public IQueryable<TodoItem> GetAllTodoItems()
    {
        return Query();
    }

    ...
}
I want to be able to do something similar on the client side, where I decorate a method with something like the [Authorize] attribute above, perhaps as a [Secured] decoration, shown below:
public class TodoItem
{
    string id;
    string name;
    bool done;

    [JsonProperty(PropertyName = "id")]
    public string Id
    {
        get { return id; }
        set { id = value; }
    }

    [JsonProperty(PropertyName = "text")]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    [JsonProperty(PropertyName = "complete")]
    public bool Done
    {
        get { return done; }
        set { done = value; }
    }

    [Version]
    public string Version { get; set; }
}
// Client side code calling GetAllTodoItems from above
[Secured]
public async Task<ObservableCollection<TodoItem>> GetTodoItemsAsync()
{
    try
    {
        IEnumerable<TodoItem> items = await todoTable
            .Where(todoItem => !todoItem.Done)
            .ToEnumerableAsync();

        return new ObservableCollection<TodoItem>(items);
    }
    catch (MobileServiceInvalidOperationException msioe)
    {
        Debug.WriteLine(@"Invalid sync operation: {0}", msioe.Message);
    }
    catch (Exception e)
    {
        Debug.WriteLine(@"Sync error: {0}", e.Message);
    }

    return null;
}
Where [Secured] might be defined something like this:
public class SecuredFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Check if user is logged in, if not, redirect to the login page.
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Check some globally accessible member to see if user is logged out.
    }
}
Unfortunately, the above code only works in Controllers in MVC 1.0 applications and above according to the Microsoft article on "Creating Custom Action Filters": https://msdn.microsoft.com/en-us/library/dd381609(v=vs.100).aspx
How do I implement something like a "Custom Action Filter" that allows me to use the "[Secured]" decoration in a Mobile App Service client instead of the server? The answer will help me create custom authentication from the client side and keep the code in one location without complicating the implementation, i.e., it is a cross-cutting concern like performance metrics, custom execution plans for repeated attempts, logging, etc.
Complicating the scenario, the client also uses Xamarin.Forms for iOS and has to work with Ahead-of-Time (AOT) compilation due to iOS's requirement for native code; JIT is not possible.
The reason attributes work in the scenarios you describe is because other code is responsible for actually invoking the methods or reading the properties, and this other code will look for the attributes and modify behaviour accordingly. When you are just running C# code, you don't normally get that; there isn't a native way to, say, execute the code in an attribute before a method is executed.
From what you are describing, it sounds like you are after Aspect Oriented Programming. See What is the best implementation for AOP in .Net? for a list of frameworks.
In essence, using an appropriate AOP framework, you can add attributes or other markers and have code executed or inserted at compile time. There are many approaches to it, hence why I am not being very specific, sorry.
You do need to understand that the AOP approach is different from how things like ASP.NET MVC work, as AOP will typically modify your code at build or run time (in my understanding anyway, and I'm sure there are variations on that as well).
As to whether AOP is really the way to go will depend on your requirements, but I would proceed with caution - it's not for the faint of heart.
One completely alternative solution to this problem is to look at something like Mediatr or similar to break your logic into a set of commands, which you can call via a message bus. The reason that helps is that you can decorate your message bus (or pipeline) with various types of logic, including authorization logic. That solution is very different from what you are asking for - but may be preferable anyway.
Or just add a single-line authorisation call as the first line inside each method instead of doing it as an attribute...
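To illustrate that last, low-tech option, here is a minimal sketch of the "one line at the top of each method" approach. EnsureLoggedIn is a hypothetical helper you would write yourself, for example checking the mobile client's current user and starting the login flow (or throwing) when there is none:

public async Task<ObservableCollection<TodoItem>> GetTodoItemsAsync()
{
    // Hypothetical helper: verify there is an authenticated user before doing any work.
    // Explicit and simple, but it has to be repeated in every method you want secured.
    EnsureLoggedIn();

    IEnumerable<TodoItem> items = await todoTable
        .Where(todoItem => !todoItem.Done)
        .ToEnumerableAsync();
    return new ObservableCollection<TodoItem>(items);
}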
What you are describing more generally is known by a few different names/terms. The first that comes to mind is "Aspect Oriented Programming" (or AOP for short). It deals with what are known as cross-cutting concerns. I'm willing to bet you want to do one of two things:
Log exceptions/messages in a standardized meaningful way
Record times/performance of areas of your system
And in the general sense, yes, C# is able to do such things. There are countless online tutorials on how to do so; it is much too broad to answer fully here.
However, the authors of ASP.NET MVC have very much thought about these things and supply many attributes just as you describe, which can be extended as you please and provide easy access to the pipeline, giving the developer all the information they need (such as the current route, any parameters, any exception, any authorization/authentication request, etc.).
This would be a good place to start: http://www.strathweb.com/2015/06/action-filters-service-filters-type-filters-asp-net-5-mvc-6/
This also looks good: http://www.dotnetcurry.com/aspnet-mvc/976/aspnet-mvc-custom-action-filter
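As a small illustration of the second bullet point above (recording times), a custom action filter along the lines of those articles might look like the sketch below. This assumes classic ASP.NET MVC (System.Web.Mvc); the "ActionStopwatch" key name is arbitrary:

using System.Diagnostics;
using System.Web.Mvc;

public class TimedActionFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Stash a stopwatch for this request.
        filterContext.HttpContext.Items["ActionStopwatch"] = Stopwatch.StartNew();
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.HttpContext.Items["ActionStopwatch"] is Stopwatch stopwatch)
        {
            stopwatch.Stop();
            Debug.WriteLine("Action took {0} ms", stopwatch.ElapsedMilliseconds);
        }
    }
}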
We have a typical N-Layer .NET application which sits in between our database and Web API service layer. This application consists of Business Layer, Data Repository/Access along with the related DTOs and Business Objects.
We have solutions in place to version our stored procedures and our Web API endpoints. The issue is finding a solution to version this middle layer: the actual class methods and schema objects. All Google searches come up with results for versioning source code in a source control system or for versioning via assembly info; neither of these is what we are referring to, so results are limited.
So for example, we have two endpoints:
...api/v1/tax/charges
...api/v2/tax/charges
v1 must hit one version of the method CalculateTaxPctgs and v2 must hit another version with updated business logic. On top of that, they need to use different versions of the POCOs Tax and TaxItems, because we changed the name of one field in v2.
The easy to develop but hard to manage and very rigid/static solution would be to create two different methods, CalculateTaxPctgs_V1 and CalculateTaxPctgs_V2. This doesn't seem like a good idea.
It's hard to find best practices or even alternative solutions for this dilemma. This is an enterprise application which handles millions of requests every day, so performance is extremely important, but so are code management and reliability.
Instead of different methods I'd use object inheritance. This way if a method stays the same between different versions you don't need to change the implementation in any way. You could then use a factory of some sort to create the instance required. For example:
public abstract class TaxCalculatorBase {
    public virtual ICollection<TaxPercentage> CalculateTaxPercentages() {
        return DefaultImplementation();
    }
}

public sealed class TaxCalculatorV1 : TaxCalculatorBase {
    //Same implementation so no need to override
}

public sealed class TaxCalculatorV2 : TaxCalculatorBase {
    //Same implementation but with a bit extra
    public override ICollection<TaxPercentage> CalculateTaxPercentages() {
        var percentages = base.CalculateTaxPercentages();
        ExtraStuff();
        return percentages;
    }
}

public sealed class TaxCalculatorV3 : TaxCalculatorBase {
    //Different implementation
    public override ICollection<TaxPercentage> CalculateTaxPercentages() {
        return NewImplementation();
    }
}

public static class TaxCalculatorFactory {
    public static TaxCalculatorBase Create(int version) {
        switch (version) {
            case 1: return new TaxCalculatorV1();
            case 2: return new TaxCalculatorV2();
            case 3: return new TaxCalculatorV3();
            default: throw new InvalidOperationException();
        }
    }
}

public class CallingClass {
    public void CallingMethod(int versionFromURL) {
        var calculator = TaxCalculatorFactory.Create(versionFromURL);
        var percentages = calculator.CalculateTaxPercentages();
        percentages.DoStuffWithThem();
    }
}
If the API implements an entirely new version each time, the factory can be more generic, something like:
public static class MyFactory {
    public static TaxCalculatorBase CreateTaxCalculator(int version) {
        switch (version) {
            case 1: return new TaxCalculatorV1();
            case 2: return new TaxCalculatorV2();
            case 3: return new TaxCalculatorV3();
            default: throw new InvalidOperationException();
        }
    }

    //various other methods to create classes which depend on version
}
Obviously this depends on exactly how your solution is put together, but would redirecting assembly versions be something you could leverage?
https://msdn.microsoft.com/en-us/library/7wd6ex19%28v=vs.110%29.aspx
You can redirect your app to use a different version of an assembly in a number of ways: through publisher policy, through an app configuration file, or through the machine configuration file.
To solve this problem we have implemented dynamic loading of assemblies, which handles over 80 different versions. It works well. We don't change deployed software (unless there is a serious flaw), since it's part of a production system that we can't afford to break once it works.
We also have some critical changes over time, like using several different versions of .NET. To handle this we route requests to different application deployments.
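For anyone wondering what "dynamic loading of assemblies" can look like in practice, here is a minimal, hypothetical sketch. The assembly naming convention, path layout, and type name are assumptions for illustration, not our actual production code:

using System;
using System.IO;
using System.Reflection;

public static class VersionedCalculatorLoader
{
    // Assumes version-specific assemblies are deployed next to the host,
    // e.g. "TaxLogic.v1.dll", "TaxLogic.v2.dll", ...
    public static object CreateCalculator(int version, string baseDirectory)
    {
        string path = Path.Combine(baseDirectory, string.Format("TaxLogic.v{0}.dll", version));
        Assembly assembly = Assembly.LoadFrom(path);

        // Each versioned assembly exposes a class with the same name and a
        // compatible public surface; the type name here is illustrative.
        Type type = assembly.GetType("TaxLogic.TaxCalculator", throwOnError: true);
        return Activator.CreateInstance(type);
    }
}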
In principle this looks like a simple job, but I wonder if anyone can take me through the basic steps?
I have an application API, implemented as a C# class library project in the application solution. People can thus write their own conventional .Net applications using this API by referencing the dll directly.
I now need to make exactly the same functionality available as a web service so applications can be written to remotely access the same API over http. Ideally I would just like to tag the API classes and methods with appropriate web service attributes, but I suspect there is more to it than that. I also must have the API dll continue to work as an API for desktop applications as it does at present.
Is this do-able? If so, what are the steps I need to take?
The web service can be composed mostly of wrapper methods. Take the simple case...
If your API method in the assembly is
public void DoFoo(string bar)
Then your web API method (your choice of implementation, such as WebAPI, ASMX web service, etc) will look like
public void DoFoo(string bar) {
    // ... initialization or validation
    try {
        refToDll.DoFoo(bar);
    } catch (Exception e) {
        // implementation specific return of error.
    }
}
If you have mostly static methods or methods taking primitive types, this becomes easier. If your API defines its own types, it becomes harder: you will need to change the type signatures and reimplement methods. Without seeing your API it is difficult to make specific suggestions; however, there are several options. If you had
public class BazClass {
    public string GetScore() {
        return scores.Sum();
    }
}
You basically need to ensure that the remote side (the web API) can reconstruct the context from your client side. You have to pass in a serializable instance or other representation of BazClass and let the remote API work on it. It just doesn't exist otherwise. You could also create a bunch of methods that store state on the server and you work with a "handle" on the client side, or object reference, but that will have to be a design decision (just look at interop with native libraries, and handles, and translate to cross network). Example:
public string BazGetScore(Transport.BazClass baz) {
    // Depending on the framework and class (all public getters/setters)?
    // your framework may allow for transparent serialization
    BazClass bazReal = bazFactory(baz);
    string score = bazReal.GetScore();
    return score;
}
How much of your source API is based on interfaces? This may make the creation of a Proxy class much more transparent to your end user. If you have
public class Baz : IBaz { ... }
Then you can create a Proxy class that acts just like an IBaz but calls the remote API instead of acting locally. Depending on your framework and tooling, this may be able to be facilitated by the tools.
namespace RemoteAPIProxy {
    public class Baz : IBaz {
        public string GetScore() {
            // initialization of network, API, etc
            Transport.Baz baz = Transport.Baz.From(this);
            string score = CallRemoteAPI("BazGetScore", baz);
            return score;
        }
    }
}
In summary, you may have to make some intermediate classes depending on if you need to support state, non-public methods, or full scope. The "how" can mostly be considered just another wrapper, but you need to be conscious of how you get your local state over the wire and into the context of the remote API. Use interfaces, serialization helpers, and lightweight transport objects for state to help with the "glue". Remember, the only "I" in "API" is for "Interface", so you might want to make sure you have some. Good luck!
Summary:
I have a DLL that hosts a class library. The library is used by an ASP.NET website. I need some code (initialization) to be run when the library is used. I have placed the code in the static constructor of one of the classes, which will most likely be used. It works right now, but I was wondering:
is there a better place to put this code? Some sort of DLL init method?
are there any downfalls? If the class is never used, will the code run anyways?
Details:
I have a DLL that hosts a class library implementing e-commerce functionality for ASP.NET websites. It contains controls and logic objects specific to my client. As part of it, it contains an HTTP handler that handles AJAX calls to the library. The URL associated with the handler has to be registered, and I have done this in the static constructor of one of the classes.
using System.Web.Routing;

class CMyClass {
    static CMyClass() {
        RouteTable.Routes.Insert(0, new Route("myapi/{*pathinfo}", new CMyHTTPHandlerRouter()));
    }
}
This works right now. The site that uses the DLL does not have to register the route, which is very convenient. I was wondering, though:
is there a better place to register routes from a DLL? Or a better way to associate a handler with a URL, directly from the DLL, so it is always registered whenever the DLL is used?
are there any downfalls? If CMyClass is never used, will the code run anyways?
I can answer your second question: the static constructor will only run if you somehow interact with CMyClass. In other words, it's run on demand, not eagerly when you e.g. access the DLL.
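A tiny sketch that demonstrates this lazy behaviour (a hypothetical console app, not part of your library):

using System;

class CMyClass
{
    static CMyClass()
    {
        Console.WriteLine("static constructor ran");
    }

    public static void Touch() { }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine("before any use of CMyClass"); // the static constructor has NOT run yet
        CMyClass.Touch();                                // first use triggers the static constructor
    }
}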
Routes are to be construed as "application code", meaning that once the application is "compiled" you cannot make changes to them. This is by design. Application_Start is the place where routes are normally registered.
I would normally abide by this convention, but my reusable logic (i.e. any publicly exposed method in the DLL) should ensure that the routes are registered, or else throw an error. This is how the end developers know that they aren't using your component correctly. And if "it" knows the routes are registered, it can safely go and execute the actual work.
I'd use a static boolean variable to accomplish that.
public class MyMvcSolution
{
    public static bool Registered { get; set; }

    static MyMvcSolution() { Registered = false; }

    public static void DoSomethingImportant()
    {
        if (Registered)
        {
            //do important stuff
        }
        else
            throw new InvalidOperationException("Whoa, routes are not registered!");
    }

    //this should be called in the Application_Start
    public static void Init()
    {
        RouteTable.Routes.Insert(0, new Route("myapi/{*pathinfo}", new CMyHTTPHandlerRouter()));
        Registered = true;
    }
}
I believe the above solution will kind of do.
There is an alternative strategy if we want to add routes "dynamically": forcing the BuildManager to register routes you specify in a .cs file. This file isn't "compiled" as part of the application; there will be a *.cs file somewhere in your application. You make an assembly out of it on the fly, and from that force the BuildManager to register the routes. There is also a mechanism to "edit" the routes once that file changes. I'll leave it to you to explore this. Deep but interesting stuff.