I have a base class that declares a private non-static reference to the Database Handler instance (DBH).
DBH is a class we use to simplify database operations in derived classes. It contains the usual methods, like ExecuteScalar and StartTransaction, and it provides additional benefits in the application context, such as caching and zero configuration.
Instances of derived classes use DBH to read and save their state in the database, and since these operations are not atomic, all derived classes wrap them in a transaction. Everything happens in one place: a virtual method called InsertUpdate() declared in the base class.
Next, I have a collection (called Book) of instances of derived classes. I want to perform collection updates as a single transaction.
I want to achieve something similar to this:
DatabaseHandler dbh = new DatabaseHandler();
Transaction t = dbh.StartTransaction();
foreach (T o in book) // book is a Book<T> of derived-class instances
{
    o.Prop1 = ...;
    o.Prop2 = ...;
    o.Method1();
    o.InsertUpdate(t); // currently uses its own DatabaseHandler instance and starts its own transaction
}
dbh.EndTransaction(t);
Currently the InsertUpdate method is parameterless. I guess I'll have to introduce an overloaded version that accepts a transaction object.
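Something like this, perhaps (a sketch only; Transaction here stands for whatever type DBH's StartTransaction returns):
public virtual void InsertUpdate()
{
    // parameterless version keeps its current behavior: its own transaction
    Transaction t = dbh.StartTransaction();
    InsertUpdate(t);
    dbh.EndTransaction(t);
}

public virtual void InsertUpdate(Transaction t)
{
    // perform all inserts/updates using the caller-supplied transaction
}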
Besides solving my current issue, are there any design issues I should be aware of? How can I improve this design or move to a better one?
Personally, I usually go with "my own" implementation of a TransactionScope-like object that whacks data onto thread-local storage (TLS), with the added benefit of having a factory that allows for easy profiling and logging.
To me your current design sounds fairly complex. Decoupling your raw database access code from your classes will reduce duplication (and avoid requiring all your data access classes to inherit from a base class). Defining an object, as opposed to a set of static methods, for DB access will also ease testing, since you can substitute a mock class.
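For example, a hedged sketch of such an abstraction (the interface name and members are assumptions modeled on the DBH described in the question):
using System.Data;

// An abstraction over the hypothetical DatabaseHandler, so classes depend on
// the interface and tests can substitute a mock implementation.
public interface IDatabaseHandler
{
    object ExecuteScalar(string sql);
    IDbTransaction StartTransaction();
    void EndTransaction(IDbTransaction transaction);
}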
Have you looked at the System.Transactions namespace? Unless you have already discounted it for some reason, you may be able to leverage the built-in nested transaction support provided there, e.g.:
using (var scope = new TransactionScope())
{
// call a method here that uses a nested transaction
someObject.SomeMethodThatAlsoUsesATransactionScope();
scope.Complete();
}
If the updates all happen on the same database connection, the nested transactions will work as expected: each InsertUpdate() can run inside its own nested scope, which enlists in the ambient transaction, and the overall transaction can still roll back the entire thing.
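Applied to the question's Book loop, the pattern might look like this (a sketch: by default, a nested TransactionScope joins the ambient transaction rather than committing independently):
using System.Transactions;

// Outer scope wraps the whole collection update; each InsertUpdate() can open
// its own TransactionScope internally, which joins this ambient transaction.
using (var scope = new TransactionScope())
{
    foreach (var o in book) // book is the question's collection of derived-class instances
    {
        o.InsertUpdate(); // internally: using (var inner = new TransactionScope()) { ...; inner.Complete(); }
    }

    // If any InsertUpdate() throws before this line, the whole batch rolls back.
    scope.Complete();
}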
I am currently using EF 6 to do the following: execute a stored procedure, then bring in the data I need to use. The data is usually 30-40 rows per application run.
I then iterate over the var/object/table (whatever you would like to call it), performing similar (sometimes different) tasks on each row. It works great: I can create an Entity object, expose its different complex functions, and then create a var to iterate over.
Like:
foreach (var result in StoredProcedureResult)
{
    string strFirstName = result.FirstName;
    string strLastName = result.LastName;
    // more logic goes here using those variables and interacting with another app
}
I recently thought it would be cool to have a class solely for accessing the data. That way, I could just reference that class, toss the corresponding connection string into my app.config, and keep the two sets of logic separate. But when attempting to do the above in that structure, I hit the point where you can't return a var, and when I attempt to match the return type, the result of executing a stored procedure is object (which I can't iterate over).
So my question is: how does one get to the above example, except with the var result returned from this data access class?
If I am missing something, or it's not possible because I am doing this incorrectly, do let me know. It seemed right in my head.
I'm not going to describe the architecture in full, but based on your comments you can do the following (this is neither the definitive nor the only way to do it):
- In your data access project, keep the DbContext class, all the code for the stored procedure call, and also the class that defines the result of the SP call; let's call it class A.
- In your shared layer project (I would suggest calling it the service layer), create an XYService class that has a method, e.g. GetListOfX, that connects to the DB and calls the procedure. If needed, this method can also perform some logic, but more importantly: it doesn't return class A, it returns a new class B. (B is defined in the service layer, or can be defined in yet another project, which might be the true shared/common project; as it would be just a definition of common structures, it isn't really a layer.)
- In your application layer, work only with the GetListOfX method of the XYService and with class B; that way you don't need a reference to the data access project.
In the trivial case, class B has the same properties as class A. But depending on your needs, class B can have additional properties or functionality, can ignore some properties of A, or can even combine multiple properties into one: e.g. combining FirstName and LastName into one property simply called Name.
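A minimal sketch of that layering, assuming EF 6 (the names A, B, XYService, the context class, and the procedure name all mirror the hypothetical names above):
using System.Collections.Generic;
using System.Linq;

// Data access project: the raw shape returned by the stored procedure.
public class A
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Shared/common project: the shape the application layer consumes.
public class B
{
    public string Name { get; set; }
}

// Service layer: calls the procedure and maps A to B, so the application
// layer never needs a reference to the data access project.
public class XYService
{
    public List<B> GetListOfX()
    {
        using (var context = new MyDbContext()) // hypothetical EF 6 DbContext
        {
            List<A> rows = context.Database
                .SqlQuery<A>("EXEC dbo.GetListOfX") // hypothetical procedure name
                .ToList();

            return rows
                .Select(a => new B { Name = a.FirstName + " " + a.LastName })
                .ToList();
        }
    }
}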
Basically, what you are looking for is a multi-tier application architecture (usually 3-4 tiers). The full extent of such an approach (which includes heavy usage of concepts like interfaces and dependency injection) might not be suitable or needed for your goals. For example, if you are building just a small application for yourself with a couple of functions, or you know there won't be any reuse of the components of the final solution, then this approach is too wasteful and you can work faster with everything in one project. You should still apply principles like SOLID, DRY, and separation of concerns.
I have a project structured like this:
WebSite --> Services --> Repositories --> Domain Objects
I use Entity Framework 6 and Autofac.
I've been told I should remove all construction logic from my domain objects so they remain as POCO as possible. The thing is, I have properties that should be initialized when a new object is created, such as CreationDate and UserIdCreatedBy.
As an example, if I have a Client object, I would use the following logic in the Client class constructor:
public class Client
{
    public Client()
    {
        this.CreationDate = DateTime.UtcNow;

        if (Thread.CurrentPrincipal is CustomPrincipal
            && ((CustomPrincipal)Thread.CurrentPrincipal).CustomIdentity != null
            && ((CustomPrincipal)Thread.CurrentPrincipal).CustomIdentity.User != null)
        {
            this.UserIdCreatedBy = ((CustomPrincipal)Thread.CurrentPrincipal).CustomIdentity.User.UserId;
        }
    }

    // ... properties and such
}
So now, I would like to take this constructor logic out of the domain object into a factory for that object. How can I do it gracefully so that Entity Framework uses it when I call MyContext.Clients.Create()? Is that even possible? I know calling the Thread's CurrentPrincipal is not all that good either, but it's for the example to show the logic could be more complex than a plain default value.
Thanks a lot
Assuming that you use DB storage to store the items (not for manipulating them), I think you could use a separate class for instantiating objects (some kind of factory, as you described).
For example, in my apps I often have a UserManager class. This class does all the work related to user creation, depending on the login method (email + password, social ID, etc.). This class might also contain methods for changing passwords, etc.
UPD:
I use the data layer as something that knows how to create/update/read/delete objects from/to the database. In addition, the class that works with the DB can have methods like selectByThis, selectByThat, etc. So you never need to write anything DB-specific anywhere in your code except the DB layer. (I mean that you never need to write something like .Where(a => a.SomeProp == true); you just use a special method for this, so if you change the DB you will only have to change your DB layer, not the whole project.)
So yes, when I need some special logic for initializing object I use separate class. (Like some kind of manager.) That class does all the work and then just tells the db layer: “hey, I did all the work, so just save this object for me!”
It simplifies maintenance for you. It is also how you follow the single responsibility principle: one class initializes and does some work, and the other class saves.
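Applied to the Client example from the question, such a factory might look like this (a sketch; ClientFactory and the repository interface are hypothetical names):
using System;
using System.Threading;

// Hypothetical factory: owns the creation-time logic the constructor used to
// have, so Client itself stays a plain POCO.
public class ClientFactory
{
    public Client Create()
    {
        var client = new Client { CreationDate = DateTime.UtcNow };

        var principal = Thread.CurrentPrincipal as CustomPrincipal;
        if (principal != null
            && principal.CustomIdentity != null
            && principal.CustomIdentity.User != null)
        {
            client.UserIdCreatedBy = principal.CustomIdentity.User.UserId;
        }

        return client;
    }
}

// Hypothetical DB-layer interface: the factory builds the object, this saves it.
public interface IClientRepository
{
    void Save(Client client);
}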
I have a database that contains "widgets", let's say. Widgets have properties like Length and Width, for example. The original lower-level API for creating widgets is a mess, so I'm writing a higher-level set of functions to make things easier for callers. The database is strange, and I don't have good control over the timing of the creation of a widget object. Specifically, it can't be created until the later stages of processing, after certain other things have happened first. But I'd like my callers to think that a widget object has been created at an earlier stage, so that they can get/set its properties from the outset.
So, I implemented a "ProxyWidget" object that my callers can play with. It has private fields like private_Length and private_Width that can store the desired values. Then, it also has public properties Length and Width, that my callers can access. If the caller tells me to set the value of the Width property, the logic is:
- If the corresponding widget object already exists in the database, then set its Width property.
- If not, store the given width value in the private_Width field for later use.
At some later stage, when I'm sure that the widget object has been created in the database, I copy all the values: copy from private_Width to the database Width field, and so on (one field/property at a time, unfortunately).
This works OK for one type of widget. But I have about 50 types, each with about 20 different fields/properties, and this leads to an unmaintainable mess. I'm wondering if there is a smarter approach. Perhaps I could use reflection to create the "proxy" objects and copy field/property data in a generic way, rather than writing reams of repetitive code? Factor out common code somehow? Can I learn anything from "data binding" patterns? I'm a mathematician, not a programmer, and I have an uneasy feeling that my current approach is just plain dumb. My code is in C#.
First, in my experience, manually coding a data access layer can feel like a lot of repetitive work (putting an ORM in place, such as NHibernate or Entity Framework, might somewhat alleviate this issue), and updating a legacy data access layer is awful work, especially when it consists of many parts.
Some things are unclear in your question, but I suppose it is still possible to give a high-level answer. These are meant to give you some ideas:
You can build ProxyWidget either as an alternative implementation for Widget (or whatever the widget class from the existing low-level API is called), or you can implement it "on top of", or as a "wrapper around", Widget. This is the Adapter design pattern.
public sealed class ExistingTerribleWidget { … }
public sealed class ShinyWidget // this is the wrapper that sits on top of the above
{
public ShinyWidget(ExistingTerribleWidget underlying) { … }
private ExistingTerribleWidget underlying;
… // perform all real work by delegating to `underlying` as appropriate
}
I would recommend that (at least while there is still code using the existing low-level API) you use this pattern instead of creating a completely separate Widget implementation, because if ever there is a database schema change, you would otherwise have to update two different APIs. If you build your new ShinyWidget class as a wrapper on top of the existing API, its interface could remain unchanged and only the underlying implementation would have to be updated.
You describe ProxyWidget as having two functions: (1) allowing modifications to an already persisted widget; and (2) buffering a new widget, which will be added to the database later.
You could perhaps simplify your design if you have one common base type and two sub-classes: One for new widgets that haven't been persisted yet, and one for already persisted widgets. The latter subtype possibly has an additional database ID property so that the existing widget can be identified, loaded, modified, and updated in the database:
interface IWidget { /* define all the properties required for a widget */ }
interface IWidgetTemplate : IWidget
{
IPersistedWidget Create();
bool TryLoadFrom(IWidgetRepository repository, out IPersistedWidget matching);
}
interface IPersistedWidget : IWidget
{
Guid Id { get; }
void SaveChanges();
}
This is one example of the Builder design pattern.
If you need to write similar code for many classes (for example, your 50+ database object types) you could consider using T4 text templates. This just makes writing code less repetitive; but you will still have to define your 50+ objects somewhere.
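Regarding the reflection idea from the question: the repetitive "copy each buffered value into the database object" step can indeed be written once, generically. A hedged sketch (it assumes matching public property names and assignable types between the proxy and the real widget):
using System.Reflection;

// Copies every readable public property of `source` to a writable public
// property of the same name and compatible type on `target`.
public static class PropertyCopier
{
    public static void CopyTo(object source, object target)
    {
        foreach (PropertyInfo sp in source.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!sp.CanRead)
                continue;

            PropertyInfo tp = target.GetType().GetProperty(sp.Name, BindingFlags.Public | BindingFlags.Instance);
            if (tp == null || !tp.CanWrite || !tp.PropertyType.IsAssignableFrom(sp.PropertyType))
                continue;

            tp.SetValue(target, sp.GetValue(source, null), null);
        }
    }
}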
I actually have 2 questions related to each other:
I have an object (class) called, say, MyClass, which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way, and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone could tell me why). Anyway, the data may change in the database from outside my program, so I have to update the data every so often. To update the list of MyClass instances I have a method, called say Update, written in another class, which accepts a list of MyClass and updates all the instances in it.
However, would it be better instead to encapsulate the Update() method inside the MyClass object, so that I would say:
foreach(MyClass obj in MyClassList) {
obj.update();
}
What is a better implementation and why?
The update method requires an XML reader. I have written an XML reader class, which is basically a wrapper over the standard XML reader the language natively provides, and which performs application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object, i.e. should MyClass inherit from the XML reader because it uses a few of its methods? I can't see why it should. I also don't like the idea of declaring an instance of the XML reader class inside MyClass: a MyClass object is meant to be a simple "record" from the database, and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XmlReader isn't static?
Any comments would be greatly appreciated.
For your first question, I would suggest putting an update method in MyClass. It sounds like you may be instantiating multiple copies of the same object, and perhaps a better solution would be to update the original MyClass objects directly through their update methods.
This would also give you the added advantage of being able to update individual objects in the future and should be more maintainable.
For your second question, it sounds like MyClass contains data from a database, making it an entity object. Entity objects shouldn't contain business logic, so I think you'd be okay having a Service class use the XMLReader to perform operations on the data and then use the getters/setters to manipulate the data in the object. Same as before, this has the advantage of keeping your code loosely coupled and more maintainable.
Do not include Update() within the class. I know it seems tempting because it makes the update call "easier", but that would create dependencies. (Presumably) MyClass contains DB data because it is a domain object which represents the state of some real-world "unit" (tangible, conceptual, or otherwise). If you include an update() method, your domain object is no longer only responsible for representing the state of some logical "thing"; it is also responsible for persistence logic (save, load, new, delete). You'd be better off creating a service which handles those responsibilities. This relates to the design principle of high cohesion, i.e. each class has only one responsibility (or one type of responsibility, at least), e.g. persistenceService.saveUser(myUser);
This is basically the same question, except now you are talking about making your class directly dependent (as a descendant, in this case) on a specific type of persistence (writing to an XML file), which is even worse than having your class depend on persistence in a more generalized way.
Think about it like this when making design decisions: plan on change (instability, chaos, or whatever you would like to call it). What if a month from now you need to swap out the XML persistence for a database? Or what if all of a sudden you have to deal with MyClassVariantA, MyClassVariantB, MyClassVariantC? By minimizing dependencies, when you do have to change something it won't necessitate a cascade of changes throughout every other part of your application.
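A minimal sketch of that separation, with all names (IMyClassReader, MyClassUpdateService) hypothetical: MyClass stays a plain record, and a service owns the refresh logic.
using System.Collections.Generic;

// The domain object stays a simple "record" with no persistence logic.
public class MyClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Abstraction over the application-specific XML reader, so MyClass never
// depends on it directly (and a DB-backed implementation could replace it).
public interface IMyClassReader
{
    MyClass Read(int id);
}

// The service owns the update responsibility.
public class MyClassUpdateService
{
    private readonly IMyClassReader reader;

    public MyClassUpdateService(IMyClassReader reader)
    {
        this.reader = reader;
    }

    public void UpdateAll(List<MyClass> items)
    {
        foreach (MyClass item in items)
        {
            MyClass latest = reader.Read(item.Id);
            item.Name = latest.Name; // copy refreshed state onto the cached instance
        }
    }
}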
I'm currently writing a data access layer for an application. The access layer makes extensive use of LINQ classes to return data. Currently, in order to reflect data back to the database, I've added a private data context member and a public Save method. The code looks something like this:
private DataContext myDb;

public static MyClass GetMyClassById(int id)
{
    DataContext db = new DataContext();
    MyClass result = (from item in db.MyClasses
                      where item.id == id
                      select item).Single();
    result.myDb = db;
    return result;
}

public void Save()
{
    myDb.SubmitChanges();
}
That's a gross oversimplification, but it gives the general idea. Is there a better way to handle that sort of pattern? Should I be instantiating a new data context every time I want to visit the DB?
It actually doesn't matter too much. I asked Matt Warren from the LINQ to SQL team about this a while ago, and here's the reply:
There are a few reasons we implemented IDisposable:

If application logic needs to hold onto an entity beyond when the DataContext is expected to be used or valid you can enforce that contract by calling Dispose. Deferred loaders in that entity will still be referencing the DataContext and will try to use it if any code attempts to navigate the deferred properties. These attempts will fail. Dispose also forces the DataContext to dump its cache of materialized entities so that a single cached entity will not accidentally keep alive all entities materialized through that DataContext, which would otherwise cause what appears to be a memory leak.

The logic that automatically closes the DataContext connection can be tricked into leaving the connection open. The DataContext relies on the application code enumerating all results of a query since getting to the end of a resultset triggers the connection to close. If the application uses IEnumerable's MoveNext method instead of a foreach statement in C# or VB, you can exit the enumeration prematurely. If your application experiences problems with connections not closing and you suspect the automatic closing behavior is not working you can use the Dispose pattern as a work around.
But basically you don't really need to dispose of them in most cases - and that's by design. I personally prefer to do so anyway, as it's easier to follow the rule of "dispose of everything which implements IDisposable" than to remember a load of exceptions to it - but you're unlikely to leak a resource if you do forget to dispose of it.
Treat your data context as a resource, and the rule for using resources says: "acquire a resource as late as possible, release it as soon as it's safe".
DataContext is pretty lightweight and is intended for unit-of-work usage, the way you are using it. I don't think I would keep the DataContext in my object, however. You might want to look at the repository pattern if you aren't going to use the designer-generated code to manage your business objects. The repository pattern will allow you to work with your objects detached from the data context, then reattach them before doing updates, etc.
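For illustration, a hedged sketch of the detach/reattach step with LINQ to SQL (MyClass is the question's entity; note that Attach with asModified set to true generally requires a version/timestamp column or UpdateCheck.Never on the mapped columns):
using System.Data.Linq;

// Work with a detached entity, then reattach it to a fresh context for the update.
public void SaveDetached(MyClass detached)
{
    using (var db = new DataContext(connectionString)) // connectionString: assumed available
    {
        db.GetTable<MyClass>().Attach(detached, true); // true = treat as modified
        db.SubmitChanges();
    }
}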
Personally, I'm able to live with the DBML designer-generated code for the most part, with partial class implementations for my business and validation logic. I also make the designer-generated data context abstract and inherit from it, which allows me to intercept things like stored procedure and table-valued function methods that are added directly to the data context and apply business logic there.
A pattern that I've been using in ASP.NET MVC is to inject a factory class that creates appropriate data contexts as needed for units of work. Using the factory allows me to mock out the data context reasonably easily by (1) using a wrapper around the existing data context class so that it's mockable (mock the wrapper, since DataContext itself is not easily mockable) and (2) creating fake/mock contexts and factories to create them. Being able to create them at will from a factory means I don't have to keep one around for long periods of time.
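A minimal sketch of that factory arrangement (all interface and class names here are illustrative, not from a library):
using System;

// Wrapper surface over the designer-generated DataContext, exposing only
// what callers need; tests mock this instead of DataContext itself.
public interface IMyDataContext : IDisposable
{
    void SubmitChanges();
    // ... table accessors as needed
}

// The factory that gets injected; tests supply one that returns fakes/mocks.
public interface IDataContextFactory
{
    IMyDataContext Create();
}

public class DataContextFactory : IDataContextFactory
{
    public IMyDataContext Create()
    {
        // MyDataContext: the designer-generated context; the wrapper adapts it.
        return new MyDataContextWrapper(new MyDataContext());
    }
}

// A unit of work then acquires a context late and releases it early:
// using (IMyDataContext db = factory.Create())
// {
//     // ... perform the unit of work ...
//     db.SubmitChanges();
// }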