I have a two part application. One part is a web application (C# 4.0) which runs on a hosted machine with a hosted MSSQL database. That's nice and standard. The other part is a Windows Application that runs locally on our network and accesses both our main database (Advantage) and the web database. The website has no way to access the Advantage database.
Currently this setup works just fine (provided the network is working), but we're now in the process of rebuilding the website and upgrading it from a Web Forms / .NET 2.0 / VB site to an MVC 3 / .NET 4.0 / C# site. As part of the rebuild, we're adding a number of new tables where the internal database has all the data and the web database has a subset thereof.
In the internal application, tables in the database are represented by classes which use reflection and attribute flags to populate themselves. For example:
[AdvantageTable("warranty")]
public class Warranty : AdvantageTable
{
[Advantage("id", IsKey = true)]
public int programID;
[Advantage("w_cost")]
public decimal cost;
[Advantage("w_price")]
public decimal price;
public Warranty(int id)
{
this.programID = id;
Initialize();
}
}
The AdvantageTable class's Initialize() method uses reflection to build a query based on all the keys and their values, and then populates each field from the database column specified. Updates work similarly: we call AdvantageTable.Update() on whichever object, and it handles all the database writes. It works quite well, hides all the standard CRUD, and lets us rapidly create new classes when we add a new table. We'd rather not change it, but I'm not going to entirely rule that out if there's a solution that would require it.
The web database needs to have this table but has no need for the cost data. I could create a separate class that's backed by the web database (via stored procedures, reflection, LINQ to SQL, ADO data objects, etc.), but there may be other functionality in the Warranty object which I want to behave the same way regardless of whether it's called from the website or the internal app, without the need to maintain two sets of code. For example, we might change the logic of how we decide which warranty applies to a product - I want to create and test that in only one place, not two.
So my question is: can anyone think of a good way to allow this class to sometimes be populated from the Advantage database and sometimes from the web database? It's not just a matter of connection strings, because the two have very different methods of access (even aside from the reflection). I considered adding [Web("id")]-style tags alongside the Advantage tags, putting them only on the fields which exist in the web database to designate its columns, then having a switch of some kind to control which set of logic is used for reading and writing, but I have the feeling this would get painful (is this method web-safe? how do I set the flag before instantiating the object?). So I have no ideas I like, and I suspect there's a solution I'm not even aware exists. Any input?
I think the fundamental issue is that you want to put business logic in the Warranty object, which is a data layer object. What you really want to do is have a common data contract (could be an interface in this case) that both data sources support, with logic encapsulated in a separate class/layer that can operate with either data source. This side-steps the issue of having a single data class attempt to operate with two different data sources by establishing a common data contract that your business layer can use, regardless of how the data is pulled.
So, with your example, you might have an AdvantageWarranty and WebWarranty, both of which implement IWarranty. You have a separate WarrantyValidator class that can operate on any IWarranty to tell you whether the warranty is still valid for given conditions. Incidentally, this gives you a nice way to stub out your data if you want to unit test your business logic in the WarrantyValidator class.
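A minimal sketch of that shape, with illustrative names and a made-up validity rule:

    using System;

    // A rough sketch of the suggested contract; all names are illustrative.
    public interface IWarranty
    {
        int ProgramID { get; }
        decimal Price { get; }
        // cost is deliberately absent: only members both sources share belong here
    }

    // Each data source gets its own implementation of the contract.
    public class AdvantageWarranty : IWarranty
    {
        public int ProgramID { get; set; }   // populated via the AdvantageTable reflection
        public decimal Price { get; set; }
    }

    public class WebWarranty : IWarranty
    {
        public int ProgramID { get; set; }   // populated from the web database
        public decimal Price { get; set; }
    }

    // Business logic lives once, written against the contract.
    public class WarrantyValidator
    {
        public bool IsStillValid(IWarranty warranty, DateTime purchaseDate)
        {
            // Hypothetical rule; the point is that it cannot tell the sources apart.
            return (DateTime.Now - purchaseDate).TotalDays < 365 && warranty.Price > 0m;
        }
    }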
The solution I eventually came up with was two-fold. First, I used LINQ to SQL to generate objects for each web table. Then, I derived a new class from AdvantageTable called AdvantageWebTable&lt;TABLEOBJECT&gt;, which contains the web-specific code, and added web-specific attributes. So now the class looks like this:
[AdvantageTable("warranty")]
public class Warranty : AdvantageWebTable<WebObjs.Warranty>
{
[Advantage("id", IsKey = true)][Web("ID", IsKey = true)]
public int programID;
[Advantage("w_cost")][Web("Cost")]
public decimal cost;
[Advantage("w_price")][Web("Price")]
public decimal price;
public Warranty(int id)
{
this.programID = id;
Initialize();
}
}
There are also hooks for populating web-only fields right before saving to the web database, and there will be (but isn't yet, since I haven't needed it) a LoadFromWeb() function that uses reflection to populate the fields.
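The AdvantageWebTable&lt;TABLEOBJECT&gt; base class is omitted here, but as a rough sketch (assuming the [Web] attribute exposes its constructor argument as a Column property), the web-save half looks something like this:

    using System;
    using System.Linq;
    using System.Reflection;

    // Sketch only: the real base class and WebAttribute details aren't shown above.
    public abstract class AdvantageWebTable<TTableObject> : AdvantageTable
        where TTableObject : class, new()
    {
        // Hook: derived classes can fill web-only fields right before the web save.
        protected virtual void OnBeforeWebSave(TTableObject row) { }

        // Copies every [Web]-tagged field onto the generated LINQ to SQL object.
        public TTableObject ToWebObject()
        {
            var row = new TTableObject();
            foreach (FieldInfo field in GetType().GetFields())
            {
                var web = field.GetCustomAttributes(typeof(WebAttribute), true)
                               .Cast<WebAttribute>()
                               .FirstOrDefault();
                if (web == null) continue;   // field exists only in Advantage

                PropertyInfo column = typeof(TTableObject).GetProperty(web.Column);
                column.SetValue(row, field.GetValue(this), null);
            }
            OnBeforeWebSave(row);
            return row;
        }
    }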
Related
I am currently using EF 6 to do the following: execute a stored procedure, then bring in the data I need to use. The data is usually 30-40 rows per application run.
I then iterate over the result (var, object, table, whatever you would like to call it), performing similar (sometimes different) tasks on each row. It works great. I am able to create an Entity object, expose its various complex functions, and then create a var to iterate over.
Like:
foreach (var result in StoredProcedureResult)
{
    string strFirstName = result.FirstName;
    string strLastName = result.LastName;
    // more logic goes here using those variables and interacting with another app
}
I recently thought it would be cool to have a class solely for accessing the data. That way I could just reference that class, toss the corresponding connection string into my app.config, and keep the two sets of logic separate. But when attempting to do the above in that structure, I hit a wall: a method can't return a var, and when I instead declare the return type as object, the result of executing the stored procedure comes back as object, which I can't iterate over.
So my question is: how does one get to the above example, with the var result being returned from this data access class?
If I am missing something, or it's not possible because I am doing this incorrectly, do let me know. It seemed right in my head.
I'm not going to describe the architecture in full. But based on your comments you can do the following (this is neither the definitive nor the only way to do it):
in your data access project you keep the DbContext class, all the code for the stored procedure call, and also the class that defines the result of the SP call - let's call it class A;
in your shared layer project - I would suggest calling it the service layer - you create a XYService class with a method, e.g. GetListOfX, that connects to the DB and calls the procedure; if needed this method can also perform some logic, but more importantly it doesn't return class A - it returns a new class B (defined in the service layer, or possibly in yet another project that might be the true shared/common project; as it would be just a definition of common structures, it isn't really a layer);
in your application layer you work only with the GetListOfX method of the XYService and the class B; that way you don't need a reference to the data access project
In a trivial case the class B has the same properties as the class A. But depending on your needs, the class B can have additional properties or functionality, it can ignore some properties of A, or even combine multiple properties into one: e.g. combining the FirstName and LastName as one property called simply Name.
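A minimal sketch of that layering, with illustrative names (the DbContext and stored procedure are placeholders, not from your project):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    public class MyDbContext : DbContext { }

    // "Class A" - lives in the data access project, mirrors the SP result columns.
    public class PersonSpResult
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // "Class B" - what the application layer works with.
    public class PersonDto
    {
        public string Name { get; set; }   // FirstName and LastName combined
    }

    // The "XYService" in the service layer project.
    public class PersonService
    {
        public List<PersonDto> GetListOfPersons()
        {
            using (var context = new MyDbContext())
            {
                return context.Database
                    .SqlQuery<PersonSpResult>("EXEC dbo.GetPeople")   // assumed SP name
                    .ToList()   // materialize class A here, inside the service
                    .Select(a => new PersonDto { Name = a.FirstName + " " + a.LastName })
                    .ToList();
            }
        }
    }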
Basically what you are looking for is a multi-tier application architecture (usually 3-4 tiers). The full extent of such an approach (which includes heavy use of concepts like interfaces and dependency injection) might not be suitable or needed for your goals; e.g. if you are building just a small application for yourself with a couple of functions, or you know there won't be any reuse of the components of the final solution, then this approach is too wasteful and you can work faster with everything in one project. You should still apply principles like SOLID, DRY, and separation of concerns.
I was given a few dozen legacy SQL statements that are each hundred(s) of lines long. Each SQL is mapped to code with its own unique POCO in a shared Models project.
For example, the SQL Select Name, Birthday From People has an equivalent POCO in the Models project:
public class BirthdayPerson : SqlResultBase
{
    public string Name { get; set; }
    public DateTime Birthday { get; set; }

    // SqlResultBase abstraction:
    public string HardcodedSql
    {
        get { return "Select Name, Birthday From People"; }
    }
}
In my DAL, I have a single generic SQL runner whose <T> represents the POCO for the SQL. So my business logic can call GetSqlResult<BirthdayPerson>():
public IEnumerable<T> GetSqlResult<T>() where T : SqlResultBase, new()
{
    return context.Database.SqlQuery<T>((new T()).HardcodedSql);
}
The problem is that my Models library is used across the application, and I don't want SQL exposed across the application in that HardcodedSql property.
This is the architecture I'm using: [architecture diagram omitted]
First you have to separate your model (i.e. the POCOs) from the SQL, which really belongs to the DAL. Inversion of Control is the right way to do this: instead of a generic SQL runner, it is better to register mappings in the IoC container from abstract repositories (e.g. IRepository&lt;MyPOCO&gt;) to implementations that contain the SQLs.
EDIT: To be more concrete, a possible solution:
Place all the SQL in separate files inside the DAL, for example in a set of embedded resource files with a naming convention, e.g. Legacy-{0}.sql where {0} is the name of the POCO.
Create a generic implementation of the legacy repository that uses the POCO name as a key and picks the corresponding Legacy-{0}.sql file from the resource set (sketched below). Note that there may be other implementations as well that use other data access techniques, like an ORM.
In the composition root, explicitly register all mappings from the legacy POCOs to the legacy implementation: IRepository&lt;MyPOCO1&gt; => LegacyRepo&lt;MyPOCO1&gt;, IRepository&lt;MyPOCO2&gt; => LegacyRepo&lt;MyPOCO2&gt;, etc. Moreover, you may register other mappings from non-legacy entities to other implementations of repository.
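A sketch of step 2 under those conventions (SqlResources stands in for the generated resource class; the registration syntax in step 3 depends on your container):

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    public interface IRepository<T>
    {
        IEnumerable<T> GetAll();
    }

    public class LegacyRepo<T> : IRepository<T> where T : class
    {
        private readonly DbContext context;

        public LegacyRepo(DbContext context)
        {
            this.context = context;
        }

        public IEnumerable<T> GetAll()
        {
            // Convention from step 1: the resource key is Legacy-{POCO name}.
            string sql = SqlResources.ResourceManager.GetString(
                string.Format("Legacy-{0}", typeof(T).Name));
            return context.Database.SqlQuery<T>(sql).ToList();
        }
    }

    // Step 3, in the composition root (exact syntax depends on the container):
    //   container.Register<IRepository<BirthdayPerson>, LegacyRepo<BirthdayPerson>>();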
The simplest solution would be to make HardcodedSql internal instead of public, so it isn't visible outside the Models project. If the DAL is a separate project, you can use InternalsVisibleTo to expose the Models internals to it. This assumes you can configure your project structure accordingly.
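A sketch of how that could look, with the DAL assembly name assumed:

    using System;

    public abstract class SqlResultBase
    {
        // internal instead of public: only Models (and the DAL, via the
        // attribute below) can see the SQL.
        internal abstract string HardcodedSql { get; }
    }

    public class BirthdayPerson : SqlResultBase
    {
        public string Name { get; set; }
        public DateTime Birthday { get; set; }

        internal override string HardcodedSql
        {
            get { return "Select Name, Birthday From People"; }
        }
    }

    // In the Models project's AssemblyInfo.cs (assembly name is illustrative):
    //   [assembly: InternalsVisibleTo("MyApp.DataAccess")]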
I suggest perhaps two possible ways of dealing with the question.
As for the first method, I would change how the SQL is accessed and wrap the call in a method on the class itself. The class might have a function like public IEnumerable GetFromSql() - you could pass in a context or create a new one; I am not sure how you have set up EF in your project. This way you never publicly expose the raw SQL: make it a private field or local constant and simply access it from within the function.
As a second option (I have actually done this, and it turned out pretty great), I moved all the SQL into views and used EF to access them. That way there was no SQL pollution in my code.
Seeing that the models already exist, the results from calling the views would match the types you already have.
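As an illustration of that approach (the view name and key choice here are assumed, not from the original):

    // Run once against the database:
    //   CREATE VIEW dbo.vw_BirthdayPeople AS
    //   SELECT Name, Birthday FROM People;

    using System;
    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    // EF 6 treats the view like a read-only table; no raw SQL left in the code.
    [Table("vw_BirthdayPeople")]
    public class BirthdayPersonView
    {
        [Key]   // EF needs a key; using Name here is only for illustration
        public string Name { get; set; }
        public DateTime Birthday { get; set; }
    }

    // Queried like any other entity set:
    //   var people = context.Set<BirthdayPersonView>().ToList();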
I have a database that contains "widgets", let's say. Widgets have properties like Length and Width, for example. The original lower-level API for creating widgets is a mess, so I'm writing a higher-level set of functions to make things easier for callers. The database is strange, and I don't have good control over the timing of the creation of a widget object. Specifically, it can't be created until the later stages of processing, after certain other things have happened first. But I'd like my callers to think that a widget object has been created at an earlier stage, so that they can get/set its properties from the outset.
So, I implemented a "ProxyWidget" object that my callers can play with. It has private fields like private_Length and private_Width that can store the desired values. Then, it also has public properties Length and Width, that my callers can access. If the caller tells me to set the value of the Width property, the logic is:
If the corresponding widget object already exists in the database, then set its Width property.
If not, store the given width value in the private_Width field for later use.
At some later stage, when I'm sure that the widget object has been created in the database, I copy all the values: copy from private_Width to the database Width field, and so on (one field/property at a time, unfortunately).
This works OK for one type of widget. But I have about 50 types, each with about 20 different fields/properties, and this leads to an unmaintainable mess. I'm wondering if there is a smarter approach. Perhaps I could use reflection to create the "proxy" objects and copy field/property data in a generic way, rather than writing reams of repetitive code? Factor out common code somehow? Can I learn anything from "data binding" patterns? I'm a mathematician, not a programmer, and I have an uneasy feeling that my current approach is just plain dumb. My code is in C#.
First, in my experience, manually coding a data access layer can feel like a lot of repetitive work (putting an ORM in place, such as NHibernate or Entity Framework, might somewhat alleviate this issue), and updating a legacy data access layer is awful work, especially when it consists of many parts.
Some things are unclear in your question, but I suppose it is still possible to give a high-level answer. These are meant to give you some ideas:
You can build ProxyWidget either as an alternative implementation for Widget (or whatever the widget class from the existing low-level API is called), or you can implement it "on top of", or as a "wrapper around", Widget. This is the Adapter design pattern.
public sealed class ExistingTerribleWidget { … }

public sealed class ShinyWidget // the wrapper that sits on top of the above
{
    public ShinyWidget(ExistingTerribleWidget underlying) { … }

    private ExistingTerribleWidget underlying;

    … // perform all real work by delegating to `underlying` as appropriate
}
I would recommend that (at least while there is still code using the existing low-level API) you use this pattern instead of creating a completely separate Widget implementation, because if there is ever a database schema change you would otherwise have to update two different APIs. If you build your new ShinyWidget class as a wrapper on top of the existing API, it can remain unchanged and only the underlying implementation has to be updated.
You describe ProxyWidget as having two functions: (1) allowing modifications to an already-persisted widget; and (2) buffering values for a new widget, which will be added to the database later.
You could perhaps simplify your design with one common base type and two sub-types: one for new widgets that haven't been persisted yet, and one for already-persisted widgets. The latter sub-type possibly has an additional database ID property so that the existing widget can be identified, loaded, modified, and updated in the database:
interface IWidget { /* define all the properties required for a widget */ }

interface IWidgetTemplate : IWidget
{
    IPersistedWidget Create();
    bool TryLoadFrom(IWidgetRepository repository, out IPersistedWidget matching);
}

interface IPersistedWidget : IWidget
{
    Guid Id { get; }
    void SaveChanges();
}
This is one example of the Builder design pattern.
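Hypothetical calling code for that split (IWidgetRepository is assumed to be whatever abstraction wraps the database):

    // Sketch only; names are illustrative.
    public static class WidgetWorkflow
    {
        public static IPersistedWidget EnsurePersisted(
            IWidgetTemplate template, IWidgetRepository repository)
        {
            IPersistedWidget widget;

            // Reuse the already-persisted widget when one matches the template...
            if (template.TryLoadFrom(repository, out widget))
                return widget;

            // ...otherwise persist the buffered values as a new widget.
            return template.Create();
        }
    }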
If you need to write similar code for many classes (for example, your 50+ database object types), you could consider using T4 text templates. This just makes writing the code less repetitive; you will still have to define your 50+ objects somewhere.
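As for the reflection idea raised in the question, the generic copying itself is only a few lines; a minimal sketch:

    using System.Reflection;

    // Copies every readable/writable property that source and target share by name.
    public static class PropertyCopier
    {
        public static void CopyShared(object source, object target)
        {
            foreach (PropertyInfo from in source.GetType().GetProperties())
            {
                PropertyInfo to = target.GetType().GetProperty(from.Name);
                if (to != null && from.CanRead && to.CanWrite
                    && to.PropertyType.IsAssignableFrom(from.PropertyType))
                {
                    to.SetValue(target, from.GetValue(source, null), null);
                }
            }
        }
    }

The later "copy everything across" stage then becomes a single PropertyCopier.CopyShared(proxy, realWidget) call per widget, regardless of which of the 50 types it is.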
I'm developing an app for Windows Phone with SQLite and have a lot of custom SQL queries, like:
string query = "SELECT distinct(destinations.name) as Destinations " +
               "FROM destinations, flights " +
               "WHERE destinations.d_ID = flights.d_ID " +
               "AND flights.Date = #" + date.ToShortDateString() + "#";
then run:
var result = (Application.Current as App).db.Query(query);
For working with SQLite I'm using http://dotnetslackers.com/articles/silverlight/Windows-Phone-7-Native-Database-Programming-via-Sqlite-Client-for-Windows-Phone.aspx#s2-introduction-to-sqlite-client-for-windows-phone and their DBHelper.
I want all the queries to be in one place so I can change them quickly.
I wanted to ask: what is the correct way to do this?
create one static class
create an Enum or Dictionary holding the collection of queries
create an XML or similar file with the collection
Thanks for any advice.
I don't think any of the approaches are valid, for the following reasons:
Create one static class
This is a God object and is considered an anti-pattern; it's best to stay away from it. It's just going to be a nightmare to maintain.
create Enum or Dictionary with queries collection
Instead of having a God object now, you have a God collection, and are really just implementing the same anti-pattern in a different way.
Additionally, you'll have string keys (or enum keys) and there's not a strong link between the two (what if the dictionary doesn't populate for some reason?).
create some XML or similar file with collection
It could be argued that you're doing the same thing you would be doing with a dictionary; you'd have to key the query somehow and then look it up. It's a very brittle approach.
Possible solution
I recommend that you first abstract out your data layer into logical units. Create a class for data operations which are related.
For example, if you have a few queries and operations that are related to destinations, create an interface that exposes those operations:
public interface IDestinationDataOperations
{
    // Get destinations by date.
    IEnumerable<string> GetDestinationsByDate(DateTime asOf);
}
Then, create a class specific to SQLite that implements this interface (a sketch follows below). Wherever you make the calls, declare the variable as the interface type.
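A sketch of such an implementation; note that IQueryRunner here is a stand-in for whatever raw query helper you already have (e.g. the db.Query call above), not the SQLite client's actual API:

    using System;
    using System.Collections.Generic;

    public interface IQueryRunner
    {
        IEnumerable<IDictionary<string, object>> Query(string sql);
    }

    public class SqliteDestinationDataOperations : IDestinationDataOperations
    {
        private readonly IQueryRunner db;

        public SqliteDestinationDataOperations(IQueryRunner db)
        {
            this.db = db;
        }

        public IEnumerable<string> GetDestinationsByDate(DateTime asOf)
        {
            // Mirrors the query from the question; parameterizing the date
            // would be safer than string concatenation.
            string sql = "SELECT distinct(destinations.name) as Destinations " +
                         "FROM destinations, flights " +
                         "WHERE destinations.d_ID = flights.d_ID " +
                         "AND flights.Date = #" + asOf.ToShortDateString() + "#";

            foreach (var row in db.Query(sql))
                yield return (string)row["Destinations"];
        }
    }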
The benefits of this are:
If you change the implementation from SQLite to some other underlying data store (a web service call, a JSON REST call, whatever), you only have to change where you populate the interface variable (this is where dependency injection begins to be of use), as all of your calls are made against the abstraction
The interface is more easily testable:
You can test the direct implementation against any test data you want
For items that rely on the interface, you can mock the interface any way you like and not have an actual database underneath for testing.
Then, for other data operations, you can wash, rinse, and repeat.
For bonus points, you can separate the interface into a unit of work for writes and a repository for reads, depending on what best suits your needs.
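A minimal illustration of that split, with hypothetical names:

    using System;
    using System.Collections.Generic;

    // Reads behind a repository...
    public interface IDestinationReader
    {
        IEnumerable<string> GetDestinationsByDate(DateTime asOf);
    }

    // ...writes behind a unit of work.
    public interface IDestinationWriter
    {
        void AddDestination(string name);   // queued...
        void Commit();                      // ...and flushed as one unit of work
    }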
I have a database application that is configurable by the user - some of these options are selecting from different external plugin systems.
I have a base Plugin type; my database schema has the same Plugin record type with the same fields. I have a PluginManager that loads plugins (via an IoC container) at application start and links them to the database (essentially copying the fields from the plugin on disk to the database).
public interface IPlugin
{
    Guid Id { get; }
    Version Version { get; }
    string Name { get; }
    string Description { get; }
}
Plugins can then be retrieved using PluginManager.GetPlugin(Guid pluginId, Guid userId), where the user ID identifies which of the multiple users the plugin action is being called for.
A set of known interfaces has been declared by the application in advance, each specific to a certain function (formatter, external data, data sender, etc.); if a plugin implements a service interface which is not known, it will be ignored:
public interface IAccountsPlugin : IPlugin
{
    IEnumerable<SyncDto> GetData();
    bool Init();
    bool Shutdown();
}
Plugins can also have settings and attributes: a PluginSettingAttribute marks properties defined per user in the multi-user system - these properties are set when a plugin is retrieved for a specific user - while a PluginPropertyAttribute marks read-only properties which are common across all users and are captured from the plugin once, when it is registered at application startup.
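(For illustration, the attribute classes might be defined roughly like this, with the constructor argument carrying the setting/property key; the real definitions aren't included here:)

    using System;

    [AttributeUsage(AttributeTargets.Property)]
    public sealed class PluginSettingAttribute : Attribute
    {
        public string SettingId { get; private set; }
        public PluginSettingAttribute(string settingId) { SettingId = settingId; }
    }

    [AttributeUsage(AttributeTargets.Property)]
    public sealed class PluginPropertyAttribute : Attribute
    {
        public string PropertyId { get; private set; }
        public PluginPropertyAttribute(string propertyId) { PropertyId = propertyId; }
    }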
public class ExternalDataConnector : IAccountsPlugin
{
    public IEnumerable<SyncDto> GetData() { return null; }
    public bool Init() { return true; }
    public bool Shutdown() { return true; }

    private string externalSystemUsername;

    // PluginSettingAttribute will create a row in the settings table; settingId
    // will be set to the provided constructor parameter. This field is written to
    // when a plugin is retrieved by the plugin manager, with the value for the
    // requesting user that was retrieved from the database.
    [PluginSetting("ExternalSystemUsernameSettingName")]
    public string ExternalSystemUsername
    {
        get { return externalSystemUsername; }
        set { externalSystemUsername = value; }
    }

    // PluginPropertyAttribute will create a row in the attributes table common for all users
    [PluginProperty("ShortCodeName")]
    public string ShortCode
    {
        get { return "externaldata"; }
    }

    public Guid Id
    {
        get { return new Guid("00000000-0000-0000-0000-000000000001"); } // value illustrative
    }

    public Version Version
    {
        get { return new Version(1, 0, 0, 0); }
    }

    public string Name
    {
        get { return "Data connector"; }
    }

    public string Description
    {
        get { return "Connector for collecting data"; }
    }
}
Here are my questions and areas I am seeking guidance on:
With the above abstraction linking plugins in an IoC container to the database, the user can select a plugin via a database field, e.g. Customer.ExternalAccountsPlugin = idOfExternalPlugin. This feels heavy - is there a simpler way that other systems achieve this (SharePoint, for instance, has lots of plugins that are referenced by the user database)?
My application dictates at compile time the interfaces it supports and ignores all others. I have seen some systems claim to be fully expandable with open plugins, which I presume means lots of loosely typed interfaces and casting. Is there a halfway ground between the two that would allow future updates to be issued without a recompile but still use concrete interfaces?
My plugins can contain metadata (PluginProperty or PluginSetting) and I am unsure of the best place to store it: in a plugin metadata table (which would make LINQ queries more complex) or directly in the plugin database record row (which allows easy LINQ queries such as PluginManager.GetPluginsOfType&lt;IAccounts&gt;().Where(x =&gt; x.ShortCode == "externaldata").FirstOrDefault()). Which is considered best practice?
Since plugin capabilities and interfaces rely so heavily on the database schema, what is the recommended way to limit a plugin to a specific schema revision? Would I keep this schema revision as a single row in a settings table in the database and update it manually after each release? Should the plugin declare a maximum supported schema version, or should the application keep a list of known plugin versions?
1) I'm sorry, but I don't know for sure. However, I'm fairly sure that software with data created or handled by custom plugins handles them the way you described. The idea being: if a user loads the data but is missing the specific plugin, the data doesn't become corrupted and the user isn't allowed to modify it. (An example that comes to mind is 3D software in general.)
2) Giving only a very strict interface, of course, highly restricts plugin creation. (Ex.: in Excel, I can't create a new cell type.) It's not bad or good; it highly depends on what you want from it - it's a choice. If you want the plugin creator to access the data only through some very specific pipes, limiting the types of data he can create, then it goes with your design. Otherwise, if your goal is to open your software to improvement, then you should also expose some classes and methods you judge safe enough to be used externally. (Ex.: in Maya, I can create a new entity type that derives from a base class, not just an interface.)
3) Well, that depends on a lot of things, no? When serializing your data, you could create a wrapper that contains all the information for a specific plugin: ID, metadata, and whatever else you judge necessary. I would go that way, as it would be easier to retrieve, but is it the best way for what you need? Hard to say without more information.
4) A good example of that is Firefox. A smaller version increment doesn't change plugin compatibility. A medium version increment checks against a database whether each plugin is still valid considering what it implements; if a plugin isn't implementing something that changed, it is still valid. A major version increment requires a recompile of all plugins against the new definitions. From my point of view, it's a nice middle ground that allows devs to not always recompile, but it makes development of the main software slightly more tricky, as changes must be planned ahead. The idea is to balance the PitA (Pain in the Ass) factor between the software dev and the plugin devs.
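Purely as an illustration of point 4, a compatibility check along those lines might look like this (the rule and all names are hypothetical):

    using System;

    public static class PluginCompatibility
    {
        public static bool IsCompatible(Version app, Version plugin)
        {
            if (app.Major != plugin.Major)
                return false;                        // breaking change: recompile needed
            if (app.Minor != plugin.Minor)
                return StillValidPerDatabase(plugin); // consult the compatibility table
            return true;                             // build/revision changes are safe
        }

        private static bool StillValidPerDatabase(Version plugin)
        {
            // Hypothetical lookup against a compatibility table in the database.
            return true;
        }
    }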
Well... that was my long collection of 2 cents.