C# plugin architecture and references to user-configurable database settings

I have a database application that is configurable by the user - some of these options involve selecting from different external plugin systems.
I have a base plugin type, and my database schema has a matching Plugin record type with the same fields. A PluginManager loads plugins (via an IoC container) at application start and links them to the database (essentially copying the fields from the plugin on disk to the database).
public interface IPlugin
{
    Guid Id { get; }
    Version Version { get; }
    string Name { get; }
    string Description { get; }
}
Plugins can then be retrieved using PluginManager.GetPlugin(Guid pluginId, Guid userId), where the user ID identifies which of the system's multiple users the plugin action is being performed for.
A set of known interfaces is declared by the application in advance, each specific to a certain function (formatter, external data, data sender, etc.); if a plugin implements a service interface that is not known, it is ignored:
public interface IAccountsPlugin : IPlugin
{
    IEnumerable<SyncDto> GetData();
    bool Init();
    bool Shutdown();
}
Plugins can also carry metadata via attributes: a PluginSettingAttribute marks properties defined per user in the multi-user system - these properties are set when a plugin is retrieved for a specific user - and a PluginPropertyAttribute marks properties that are common across all users and written once, read-only, by the plugin when it is registered at application startup.
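The attribute classes themselves are not shown in the question; here is a minimal sketch of what they might look like, with names and constructor parameters inferred from the usage below:

using System;

// Hypothetical definitions, inferred from the usage in ExternalDataConnector.
[AttributeUsage(AttributeTargets.Property)]
public class PluginSettingAttribute : Attribute
{
    // Identifies the row in the per-user settings table.
    public string SettingId { get; private set; }
    public PluginSettingAttribute(string settingId) { SettingId = settingId; }
}

[AttributeUsage(AttributeTargets.Property)]
public class PluginPropertyAttribute : Attribute
{
    // Identifies the row in the attributes table shared by all users.
    public string PropertyId { get; private set; }
    public PluginPropertyAttribute(string propertyId) { PropertyId = propertyId; }
}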
public class ExternalDataConnector : IAccountsPlugin
{
    public IEnumerable<SyncDto> GetData() { return null; }
    public bool Init() { return true; }
    public bool Shutdown() { return true; }

    private string externalSystemUsername;

    // PluginSettingAttribute will create a row in the settings table; settingId
    // will be set to the provided constructor parameter. This field is written
    // when the plugin is retrieved by the plugin manager, with the value for
    // the requesting user as read from the database.
    [PluginSetting("ExternalSystemUsernameSettingName")]
    public string ExternalSystemUsername
    {
        get { return externalSystemUsername; }
        set { externalSystemUsername = value; }
    }

    // PluginPropertyAttribute will create a row in the attributes table common to all users
    [PluginProperty("ShortCodeName")]
    public string ShortCode
    {
        get { return "externaldata"; }
    }

    // Required by IPlugin; the value here is a placeholder.
    public Guid Id
    {
        get { return new Guid("00000000-0000-0000-0000-000000000001"); }
    }

    public Version Version
    {
        get { return new Version(1, 0, 0, 0); }
    }

    public string Name
    {
        get { return "Data connector"; }
    }

    public string Description
    {
        get { return "Connector for collecting data"; }
    }
}
Here are my questions and areas I am seeking guidance on:
With the above abstraction linking plugins in an IoC container to the database, the user can set the database field Customer.ExternalAccountsPlugin = idOfExternalPlugin. This feels heavy - is there a simpler way that other systems achieve this (SharePoint, for instance, has lots of plugins that are referenced by the user database)?
My application dictates at compile time the interfaces that it supports and ignores all others. I have seen some systems claim to be fully expandable with open plugins, which I presume means lots of loosely typed interfaces and casting. Is there a halfway ground between the two that would allow future updates to be issued without a recompile but still use concrete interfaces?
My plugins can contain metadata (PluginProperty or PluginSetting) and I am unsure of the best place to store it: either in a plugin metadata table (which would make LINQ queries more complex) or directly in the plugin database record row (which allows easy LINQ queries such as PluginManager.GetPluginsOfType<IAccountsPlugin>().Where(x => x.ShortCode == "externaldata").FirstOrDefault()). Which is considered best practice?
Since plugin capabilities and interfaces rely so heavily on the database schema, what is the recommended way to limit a plugin to a specific schema revision? Would I keep this schema revision as a single row in a settings table and update it manually after each release? Should the plugin declare a maximum supported schema version, or should the application keep a list of known plugin versions?

1) I'm sorry, but I don't know for sure. However, I'm fairly sure that software with data created or handled by custom plugins handles plugins the way you described. The idea being: if a user loads the data but is missing that specific plugin, the data doesn't become corrupted and the user isn't allowed to modify it. (An example that comes to mind is 3D software in general.)
2) Only exposing a very strict interface does, of course, highly restrict plugin creation. (Ex.: Excel - I can't create a new cell type.) That's neither bad nor good in itself; it depends entirely on what you want, it's a choice. If you want plugin creators to access the data only through some very specific pipes, and to limit the types of data they can create, then your design fits. Otherwise, if your goal is to open your software to improvement, then you should also expose some classes and methods you judge safe enough to be used externally. (Ex.: Maya - I can create a new entity type that derives from a base class, not just an interface.)
3) Well, it does depend on a lot of things, no? When serializing your data, you could create a wrapper that contains all the information for a specific plugin: ID, metadata and whatever else you judge is needed. I would go that way, as it would be easier to retrieve, but is it the best way for what you need? Hard to say without more information.
4) A good example of this is Firefox. A small version increment doesn't change plugin compatibility. A medium version increment checks against a database whether each plugin is still valid given what it implements; if the plugin doesn't implement anything that changed, it remains valid. A major version increment requires a recompile of all plugins against the new definitions. From my point of view it's a nice middle ground that allows devs to not always recompile, but it makes development of the main software slightly more tricky, as changes must be planned ahead. The idea is to balance the PitA (Pain in the Ass) factor between the software dev and the plugin dev.
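For question 4, a minimal sketch of the "plugin declares the schema range it supports" option (all names here are assumptions, not part of the asker's design):

using System;

// Assumed addition to the plugin contract: the schema range it was built for.
public interface ISchemaAware
{
    Version MinSchemaVersion { get; }
    Version MaxSchemaVersion { get; }
}

public class SchemaCompatibilityChecker
{
    // Schema revision read once from a single-row settings table.
    private readonly Version currentSchema;

    public SchemaCompatibilityChecker(Version currentSchema)
    {
        this.currentSchema = currentSchema;
    }

    // A plugin is loadable only if the current schema falls in its range.
    public bool IsCompatible(ISchemaAware plugin)
    {
        return plugin.MinSchemaVersion <= currentSchema
            && currentSchema <= plugin.MaxSchemaVersion;
    }
}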
Well... that was my long collection of 2 cents.

Related

Best pattern for labels on website based on client profile

I have a requirement to make our current web application configurable based on a client profile - essentially allowing the application to scale and customize itself based on who the customer is. My starting requirement is simple: make text within the web application configurable. At the moment there will be two possible profiles, and based on which profile is selected (either through AppSettings or the database), all labels need to render accordingly. I can think of many ways of doing this. One thing I don't want to do is store the label values in a database table, because at the moment there is no requirement to modify the labels through an interface - so I was thinking perhaps resource files?
Also, my next requirement will be to allow features within the website to be turned on/off based on profile, so I need to keep this in consideration. Sometimes a feature will share 90% of the logic, so it wouldn't be feasible to duplicate the feature, make the 10% of changes for that profile, and then have two copies of the same feature with minimal differences. So I'm looking for a solution for this as well - perhaps an overall design that would cover both requirements?
Any advice will be highly appreciated.
Thanks
As I understand it, you need to update labels in the website and enable certain features based on the selected profile.
To do this, I would:
First, implement the MVC pattern, where the website is the View, the Profile class is the Model, and the Controller hosts all the business logic.
If we don't want to use a database, we can serialize the Profile object to a file; the Profile can hold a reference to a config file that stores the names of the features available to that profile.
At run time, read all the features available for that profile and populate the view (website) accordingly. This can be done using the Inversion of Control pattern, like this:
public interface IFeatures { ... }
public class Feature1 implements IFeatures { ... }
public class Feature2 implements IFeatures { ... }

public class Profile {
    private String name;
    private String pwd;
    private File configFile;
    ...
}

public class Controller {
    public List<IFeatures> getFeaturesForProfile(Profile p) throws Exception {
        List<IFeatures> features = new ArrayList<IFeatures>();
        List<String> featureNames = scanConfigFileForFeatures(p.getConfigFile());
        for (String featureName : featureNames)
            features.add((IFeatures) Class.forName(featureName).newInstance());
        return features;
    }
}
Since I'm only familiar with Java, I have written the syntax in Java.
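Since the question concerns a C# application, here is a rough C# equivalent of the same reflection-based loading. The IFeature interface, the config file format (one assembly-qualified type name per line), and the class names are assumptions for illustration:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public interface IFeature { }

public class FeatureLoader
{
    // Reads one assembly-qualified type name per line from the profile's
    // config file and instantiates each feature via reflection.
    public List<IFeature> GetFeaturesForProfile(string configFilePath)
    {
        return File.ReadAllLines(configFilePath)
                   .Where(line => !string.IsNullOrWhiteSpace(line))
                   .Select(typeName => (IFeature)Activator.CreateInstance(
                       Type.GetType(typeName, throwOnError: true)))
                   .ToList();
    }
}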

Should I use a factory pattern?

I currently have a factory-like class named ConfigProfile, and it contains methods for, say, a default profile, current settings, etc. This class gets used internally by my profile service. I was thinking that it would be better to make this a true factory and just create the appropriate profile service for each of the products we are configuring.
public string GetDefaultProfile(string product)
{
    if (string.IsNullOrEmpty(product))
    {
        throw new ArgumentNullException("product");
    }

    string profile = null;
    // IndexOf is used here because string.Contains has no case-insensitive
    // overload in the .NET Framework.
    if (product.IndexOf("Product 1", StringComparison.CurrentCultureIgnoreCase) >= 0 ||
        product.IndexOf("product1", StringComparison.CurrentCultureIgnoreCase) >= 0)
    {
        profile = Resources.product1DefaultProfile;
    }
    return profile;
}
That is only one product; we have several more, which means I will have to add more if statements for each one. The profile service already has an interface and is what gets used for most of my program. There are also several methods that use this same approach. So would a factory that returns the appropriate profile service based on product name be a better solution, or is there something else I could do?
Edit: This is one of the simpler methods in this class. The more complex one retrieves the current system settings from the required places - all products have IIS settings, but some will have theme support while others will have database configuration to do.
Factory is a very good solution. It allows you to hide the configuration complexity behind a simple interface.
If you need to be able to configure it at run-time/start-up, combine with Strategy.
Both solutions - static factory or Strategy - can be combined with Prototype. Prototype would be useful as an optimization, if you often use the same profile, and it's read-only.
EDIT: You are probably using Prototype already. Your sample code looks like you are copying/referencing a profile rather than building it as a complex product.
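To make the factory suggestion concrete, here is a hedged sketch (all type names are assumed) of a registry-based factory that returns the appropriate profile service per product, replacing a growing chain of if statements:

using System;
using System.Collections.Generic;

public interface IProfileService
{
    string GetDefaultProfile();
}

// Stub implementations standing in for the real per-product services.
public class Product1ProfileService : IProfileService
{
    public string GetDefaultProfile() { return "product1 default profile"; }
}

public class Product2ProfileService : IProfileService
{
    public string GetDefaultProfile() { return "product2 default profile"; }
}

public static class ProfileServiceFactory
{
    // One registration per product; adding a product means adding one line.
    private static readonly Dictionary<string, Func<IProfileService>> registry =
        new Dictionary<string, Func<IProfileService>>(StringComparer.OrdinalIgnoreCase)
        {
            { "Product 1", () => new Product1ProfileService() },
            { "Product 2", () => new Product2ProfileService() },
        };

    public static IProfileService Create(string product)
    {
        if (string.IsNullOrEmpty(product))
            throw new ArgumentNullException("product");

        Func<IProfileService> creator;
        if (!registry.TryGetValue(product, out creator))
            throw new ArgumentException("Unknown product: " + product);
        return creator();
    }
}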

Single class with two databases

I have a two part application. One part is a web application (C# 4.0) which runs on a hosted machine with a hosted MSSQL database. That's nice and standard. The other part is a Windows Application that runs locally on our network and accesses both our main database (Advantage) and the web database. The website has no way to access the Advantage database.
Currently this setup works just fine (provided the network is working), but we're now in the process of rebuilding the website and upgrading it from a Web Forms / .NET 2.0 / VB site to an MVC3 / .NET 4.0 / C# site. As part of the rebuild, we're adding a number of new tables where the internal database has all the data, and the web database has a subset thereof.
In the internal application, tables in the database are represented by classes which use reflection and attribute flags to populate themselves. For example:
[AdvantageTable("warranty")]
public class Warranty : AdvantageTable
{
[Advantage("id", IsKey = true)]
public int programID;
[Advantage("w_cost")]
public decimal cost;
[Advantage("w_price")]
public decimal price;
public Warranty(int id)
{
this.programID = id;
Initialize();
}
}
The AdvantageTable class's Initialize() method uses reflection to build a query based on all the keys and their values, and then populates each field from the database column specified. Updates work similarly - we call AdvantageTable.Update() on whichever object, and it handles all the database writes. It works quite well, hides all the standard CRUD, and lets us rapidly create new classes when we add a new table. We'd rather not change it, but I won't entirely rule it out if there's a solution that would require it.
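For readers unfamiliar with this pattern, here is a stripped-down guess at what such an attribute-driven Initialize() looks like - the asker's real implementation is not shown, and the actual Advantage database call is elided:

using System;
using System.Reflection;

// Hypothetical reconstruction of the attributes used above.
[AttributeUsage(AttributeTargets.Class)]
public class AdvantageTableAttribute : Attribute
{
    public string TableName { get; private set; }
    public AdvantageTableAttribute(string tableName) { TableName = tableName; }
}

[AttributeUsage(AttributeTargets.Field)]
public class AdvantageAttribute : Attribute
{
    public string Column { get; private set; }
    public bool IsKey { get; set; }
    public AdvantageAttribute(string column) { Column = column; }
}

public abstract class AdvantageTable
{
    protected void Initialize()
    {
        foreach (FieldInfo field in GetType().GetFields())
        {
            var attr = (AdvantageAttribute)Attribute.GetCustomAttribute(
                field, typeof(AdvantageAttribute));
            if (attr == null)
                continue;
            // A real implementation would build a WHERE clause from the IsKey
            // fields, query the table named by AdvantageTableAttribute, then:
            // field.SetValue(this, valueFromColumn(attr.Column));
        }
    }
}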
The web database needs to have this table, but doesn't have a need for the cost data. I could create a separate class that's backed by the web database (via stored procedures, reflection, LINQ-TO-SQL, ADO data objects, etc), but there may be other functionality in the Warranty object which I want to behave the same way regardless of whether it's called from the website or the internal app, without the need to maintain two sets of code. For example, we might change the logic of how we decide which warranty applies to a product - I want to need to create and test that in only one place, not two.
So my question is: Can anyone think of a good way to allow this class to sometimes be populated from the Advantage database and sometimes the web database? It's not just a matter of connection strings, because they have two very different methods of access (even aside from the reflection). I considered adding [Web("id")] type tags to the Advantage tags, and only putting them on the fields which exist in the web database to designate its columns, then having a switch of some kind to control which set of logic is used for reading/writing, but I have the feeling that that would get painful (Is this method web-safe? How do I set the flag before instantiating it?). So I have no ideas I like and suspect there's a solution I'm not even aware exists. Any input?
I think the fundamental issue is that you want to put business logic in the Warranty object, which is a data layer object. What you really want to do is have a common data contract (could be an interface in this case) that both data sources support, with logic encapsulated in a separate class/layer that can operate with either data source. This side-steps the issue of having a single data class attempt to operate with two different data sources by establishing a common data contract that your business layer can use, regardless of how the data is pulled.
So, with your example, you might have an AdvantageWarranty and WebWarranty, both of which implement IWarranty. You have a separate WarrantyValidator class that can operate on any IWarranty to tell you whether the warranty is still valid for given conditions. Incidentally, this gives you a nice way to stub out your data if you want to unit test your business logic in the WarrantyValidator class.
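A minimal sketch of that shape (the member names and validity rule are assumed, since the answer only describes the structure):

using System;

public interface IWarranty
{
    int ProgramID { get; }
    decimal Price { get; }
    DateTime ExpiryDate { get; }
}

// Business logic lives here, once, and works against either data source -
// AdvantageWarranty or WebWarranty - as long as both implement IWarranty.
public class WarrantyValidator
{
    public bool IsValid(IWarranty warranty, DateTime asOf)
    {
        // Hypothetical rule: a warranty is valid until its expiry date.
        return warranty.ExpiryDate >= asOf;
    }
}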
The solution I eventually came up with was two-fold. First, I used LINQ to SQL to generate objects for each web table. Then I derived a new class from AdvantageTable called AdvantageWebTable<TABLEOBJECT>, which contains the web-specific code, and added web-specific attributes. So now the class looks like this:
[AdvantageTable("warranty")]
public class Warranty : AdvantageWebTable<WebObjs.Warranty>
{
[Advantage("id", IsKey = true)][Web("ID", IsKey = true)]
public int programID;
[Advantage("w_cost")][Web("Cost")]
public decimal cost;
[Advantage("w_price")][Web("Price")]
public decimal price;
public Warranty(int id)
{
this.programID = id;
Initialize();
}
}
There are also hooks for populating web-only fields right before saving to the web database, and there will be (but isn't yet, since I haven't needed it) a LoadFromWeb() function that uses reflection to populate the fields.

How do I get the application's directory from my WPF application, at design time?

How do I get the application's directory from my WPF application, at design time? I need to access a resource in my application's current directory at design time, while my XAML is being displayed in the designer. I'm not able to use the solution specified in this question as at design time both System.IO.Path.GetDirectoryName(Process.GetCurrentProcess().MainModule.FileName) and System.Reflection.Assembly.GetExecutingAssembly().Location point to the IDE's location (Visual Studio... Common7 or something).
Upon request to further clarify my goals: I want to access a database table at design time and display a graphic of that data. The design is done in Visual Studio 2008, so what I need is a very specific solution to a very specific problem, and that is getting the assembly directory for my app.
From your description it sounds like your code is actually running inside the WPF Designer within Visual Studio, for example it is part of a custom control library that is being used for design.
In this case, Assembly.GetEntryAssembly() returns null, but the following code gets the path to the application directory:
string applicationDirectory = (
    from assembly in AppDomain.CurrentDomain.GetAssemblies()
    where assembly.CodeBase.EndsWith(".exe")
    select System.IO.Path.GetDirectoryName(assembly.CodeBase.Replace("file:///", ""))
).FirstOrDefault();
The following steps can be used to demonstrate this works inside VS.NET 2008's WPF Designer tool:
Place this code inside a "WPF Custom Control Library" or "Class Library" project
Add whatever code is necessary to read the database and return the data for display (in my case I just returned the application directory itself as a string)
Reference the library project from the project you are designing
Use the custom controls or classes from a XAML file to populate your DataContext or otherwise supply data to your UI (in my case I bound DataContext using x:Static)
Edit that XAML file with the "Windows Presentation Foundation Designer", which can be done by just double-clicking unless you have changed your default editor, in which case use "Open With..."
When you follow these steps, the object you are looking at will be populated with data from your database the same way both at run time and design time.
There are other scenarios in which this same technique works just as well, and there are other solutions available depending on your needs. Please let us know if your needs are different from those I assumed above. For example, if you are writing a VS.NET add-in, you are in a completely different ball game.
Are you trying to support a designer (such as the Visual Studio designer or Blend)?
If so, then there are various ways to approach this problem. You typically don't want to rely on a relative path from the executable, because the XAML can be hosted in various design tools (VS, Expression Blend, etc.).
Maybe you can explain more fully the problem you are trying to solve so we can provide a better answer?
I don't think this is possible - you're asking for the location of an assembly that potentially hasn't even been built yet. Your design-time code does not run inside your application and would have to make some assumptions about the IDE. This feels wrong and brittle to me - consider these questions:
Has the project been built yet?
If not, there is no executable to get the path of, so what then?
Would the other files be present if it hasn't been built, or are they build artefacts?
If it has been built, where was it built to?
Do you need to consider other IDEs?
In this situation you should probably ask the user, at design time, to provide or browse for a path by adding a property on your object for them to edit. Your design time code can then use the value of the property to find what it needs.
If you are doing extensive work on WPF designers using adorners etc., use the "Context" property/type.
Details:
At design time you have an instance of modelItem (I assume you know this); if not, you can capture it in an override of the Activate method:
// in DesignAdorner class
public class DesignAdorner : PrimarySelectionAdornerProvider
{
    private ModelItem modelItem;

    protected override void Activate(ModelItem item)
    {
        modelItem = item;
    }
}
Now you can access the current application path using the following single line of code:
string applicationPathDir = System.IO.Directory.GetParent(modelItem.Context.ToString()).FullName;
Let me know if it does not help you.
OK, given the further clarification, here is what I would do.
Staying in line with the concern raised by GraemeF: doing what you want is brittle and prone to breaking at best.
Because of this, the general practice is to treat design-time data support as a wholly different concern from runtime data support. Very simply, the coupling you are creating between your design-time environment and this DB is a bad idea.
To provide design-time data for visualization, I prefer to use a mock class that adheres to the same interface as the runtime class. This gives me a way to show data that I can ensure is of the right type and conforms to the same contract as my runtime object, yet it is a wholly different class used only for design-time support (and often for unit testing).
So, for example, if I had a runtime class that needs to show person details such as first name, last name and email:
public class Person
{
    public String FirstName { get; set; }
    public String LastName { get; set; }
    public Email EmailAddress { get; set; }
}
...and I was populating this object from a DB at runtime but also needed to provide a design-time visualization, I would introduce an IPerson interface that defines the contract to adhere to, namely enforcing that the property getters exist:
public interface IPerson
{
    String FirstName { get; }
    String LastName { get; }
    Email EmailAddress { get; }
}
Then I would update my runtime Person class to implement the interface:
public class Person : IPerson
{
    public String FirstName { get; set; }
    public String LastName { get; set; }
    public Email EmailAddress { get; set; }
}
Then I would create a mock class that implements the same interface and provides sensible values for design-time use:
public class MockPerson : IPerson
{
    public String FirstName { get { return "John"; } }
    public String LastName { get { return "Smith"; } }
    public Email EmailAddress { get { return new Email("john@smith.com"); } }
}
Then I would implement a mechanism to provide the MockPerson object at design time and the real Person object at runtime. Something like this or this. This provides design-time data support without a hard dependency between the runtime and design-time environments.
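One common mechanism (my assumption here - the original answer linked to external examples) is to branch on WPF's design-mode flag:

using System.ComponentModel;
using System.Windows;

public static class PersonProvider
{
    public static IPerson GetPerson()
    {
        // GetIsInDesignMode is true when this code runs inside the
        // Visual Studio or Blend designer, so the mock is served there
        // and the real, database-backed object at runtime.
        bool designMode = DesignerProperties.GetIsInDesignMode(new DependencyObject());
        return designMode ? (IPerson)new MockPerson() : new Person();
    }
}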
This pattern is much more flexible and will allow you to provide consistent design time data support throughout your application.

How do you add sample (dummy) data to your unit tests?

In bigger projects my unit tests usually require some "dummy" (sample) data to run with - some default customers, users, etc. I was wondering what your setup looks like.
How do you organize/maintain this data?
How do you apply it to your unit tests (any automation tool)?
Do you actually require test data or do you think it's useless?
My current solution:
I differentiate between master data and sample data, where the former will be available when the system goes into production (installed for the first time) and the latter is the typical use-case data I require for my tests to run (and to play with during development).
I store all of this in an Excel file (because it's so damn easy to maintain), where each worksheet contains a specific entity (e.g. users, customers, etc.) and is flagged either master or sample.
I have 2 test cases which I (mis)use to import the necessary data:
InitForDevelopment (Create Schema, Import Master data, Import Sample data)
InitForProduction (Create Schema, Import Master data)
I use the repository pattern and have a dummy repository that's instantiated by the unit tests in question; it provides a known set of data that encompasses examples that are both within and out of range for various fields.
This means that I can test my code unchanged, by supplying the instantiated repository from the test unit for testing, or the production repository at runtime (via dependency injection (Castle)).
I don't know of a good web reference for this, but I learnt much from Steven Sanderson's Professional ASP.NET MVC 1.0 book published by Apress. The MVC approach naturally provides the separation of concerns that's necessary to allow your testing to operate with fewer dependencies.
The basic elements are that your repository implements an interface for data access, and that same interface is then implemented by a fake repository that you construct in your test project.
In my current project I have an interface thus:
namespace myProject.Abstract
{
    public interface ISeriesRepository
    {
        IQueryable<Series> Series { get; }
    }
}
This is implemented as both my live data repository (using LINQ to SQL) and also as a fake repository, thus:
namespace myProject.Tests.Repository
{
    class FakeRepository : ISeriesRepository
    {
        private static IQueryable<Series> fakeSeries = new List<Series> {
            new Series { id = 1, name = "Series1", openingDate = new DateTime(2001,1,1) },
            new Series { id = 2, name = "Series2", openingDate = new DateTime(2002,1,30) },
            // ...
            new Series { id = 10, name = "Series10", openingDate = new DateTime(2001,5,5) }
        }.AsQueryable();

        public IQueryable<Series> Series
        {
            get { return fakeSeries; }
        }
    }
}
Then the class that consumes the data is instantiated, passing the repository reference to the constructor:
namespace myProject
{
    public class SeriesProcessor
    {
        private ISeriesRepository seriesRepository;

        public SeriesProcessor(ISeriesRepository seriesRepository)
        {
            this.seriesRepository = seriesRepository;
        }

        public IQueryable<Series> GetCurrentSeries()
        {
            return from s in seriesRepository.Series
                   where s.openingDate.Date <= DateTime.Now.Date
                   select s;
        }
    }
}
Then in my tests I can approach it thus:
namespace myProject.Tests
{
    [TestClass]
    public class SeriesTests
    {
        [TestMethod]
        public void Meaningful_Test_Name()
        {
            // Arrange
            SeriesProcessor processor = new SeriesProcessor(new FakeRepository());

            // Act
            IQueryable<Series> currentSeries = processor.GetCurrentSeries();

            // Assert - the expected value comes first in MSTest
            Assert.AreEqual(10, currentSeries.Count());
        }
    }
}
Then look at Castle Windsor for the inversion of control approach for your live project, to allow your production code to automatically instantiate your live repository through dependency injection. That should get you closer to where you need to be.
In our company we have discussed exactly this problem a great deal over recent weeks and months.
To follow the guidelines of unit testing:
Each test must be atomic and must not relate to the others (no data sharing); that means each test must have its own data at the beginning and must clear that data at the end.
Our product is so complex (5 years of development, over 100 tables in the database) that it's nearly impossible to maintain this in an acceptable way.
We tried database scripts which create and delete the data before/after each test (there are automatic methods which call them).
I would say you are on a good path with Excel files.
Ideas from me to improve it a little:
If you have a database behind your software, google for "NDbUnit". It's a framework for inserting and deleting data in databases for unit tests.
If you have no database, XML is maybe a little more flexible than Excel on such systems.
Not directly answering the question, but one way to limit the number of tests that need dummy data is to use a mocking framework to create mocked objects that you can use to fake the behavior of any dependencies you have in a class.
I find that by using mocked objects rather than a specific concrete implementation you can drastically reduce the amount of real data you need, as mocks don't process the data you pass into them - they just perform exactly as you want them to.
I'm still sure you probably need dummy data in a lot of instances, so apologies if you're already using or aware of mocking frameworks.
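For instance, with a mocking framework such as Moq (reusing the ISeriesRepository and SeriesProcessor from the earlier answer, and assuming MSTest), the dummy data shrinks to just what the test asserts on - a sketch:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class SeriesProcessorMockTests
{
    [TestMethod]
    public void Current_Series_Come_From_Mocked_Repository()
    {
        // Arrange: the mock returns only the one row this test needs.
        var repository = new Mock<ISeriesRepository>();
        repository.Setup(r => r.Series).Returns(new List<Series>
        {
            new Series { id = 1, name = "Series1", openingDate = new DateTime(2001, 1, 1) }
        }.AsQueryable());

        var processor = new SeriesProcessor(repository.Object);

        // Act & Assert
        Assert.AreEqual(1, processor.GetCurrentSeries().Count());
    }
}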
Just to be clear, you need to differentiate between UNIT testing (testing a module with no implied dependencies on other modules) and app testing (testing parts of the application).
For the former, you need a mocking framework (I'm only familiar with Perl ones, but I'm sure they exist in Java/C#). A sign of a good framework is the ability to take a running app, RECORD all the method calls/returns, and then mock the selected methods (e.g. the ones you are not testing in this specific unit test) using the recorded data.
For good unit tests you MUST mock every external dependency - e.g., no calls to the filesystem, no calls to the DB or other data access layers unless that is what you are testing, etc.
For the latter, the same mocking framework is useful, plus the ability to create test data sets (that can be reset for each test). The data to be loaded for the tests can reside in any offline storage that you can load from - BCP files for Sybase DB data, XML, whatever tickles your fancy. We use both BCP and XML.
Please note that this sort of "load test data into the DB" testing is SIGNIFICANTLY easier if your overall company framework allows - or rather enforces - a "what is the real DB table name for this table alias" API. That way, you can have your application look at cloned "test" DB tables instead of the real ones during testing - on top of such a table-aliasing API's main purpose of enabling you to move DB tables from one database to another.
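A minimal sketch of such a table-aliasing API (the names here are entirely hypothetical):

public interface ITableNameResolver
{
    // Maps a logical alias like "warranty" to the physical table name.
    string Resolve(string alias);
}

// Production would map aliases to their real tables; under test, the same
// aliases point at cloned "test" tables instead.
public class TestTableNameResolver : ITableNameResolver
{
    public string Resolve(string alias) { return "test_" + alias; }
}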
