FYI: the verbose preamble is to help explain why I am using Activator.CreateInstance. I have a number of entities (objects corresponding to database column information) that are "contained" in multiple databases, each of which has a different table/column setup. So I am able to retrieve an entity from each database, but the way I retrieve it is different per database. The database type is not known till runtime and could vary throughout execution. I have created the following setup:
First, define the query operations each entity should support; each entity reader should support these operations.
public abstract class Operations<T> {
    public delegate T findDelegate(int id);
    public findDelegate find;
}
// there are many of these N=1,2,..., but here is one
// use abstract class since implementation of queries should be done by handlers
public class EntityNReader : Operations<EntityN> {
    public EntityNReader() { }
}
Define an interface for "Handler" classes, i.e. these classes implement the query operations listed above.
public interface IHandler<T> {
    T find(int id);
}
// there are many of these N,M=1,2..., but here is one
// use of interface is to force handlers to implement all query types
public class EntityNHandlerForDbTypeM : IHandler<EntityN> {
    public EntityN find(int id) { /* whatever */ }
}
This allows the developers to create a single class for handling EntityN query operations for DbTypeM. Now, create a Database class that contains the reader objects and binds the handler methods to the reader delegates.
public class Database {
    // there are many of these, but here is one
    public EntityNReader EntitiesN;

    public Database(string dbType) {
        // this is called for each EntityNReader
        bindHandlers<EntityNReader, EntityN>(EntitiesN, dbType);
        // ...
        // nullreferenceexception
        EntitiesN.find(0);
    }

    // this is a factory that also performs late binding
    private void bindHandlers<T, K>(T reader, string dbTypeM)
        where T : Operations<K>, new()
    {
        // create instance of EntityNReader
        T r = (T)Activator.CreateInstance(typeof(T));
        // r != null

        // create instance of handler
        IHandler<K> h = (IHandler<K>)(Activator.CreateInstance(
            Type.GetType("namespace.to.EntityNHandlerForDbTypeM"),
            new object[] { this }
        ));

        // bind delegates
        r.find = h.find;
    }
}
As you can see in the Database constructor, the way the code is written now, I get a NullReferenceException, even though the instance r of EntityNReader is created inside bindHandlers and is (verified to be) not null.
However, if I instantiate EntitiesN where it is declared in Database instead of within bindHandlers, the code compiles and everything works. The reason I don't just do this is that (subsequently) I would like to conditionally create readers/handlers inside of bindHandlers at the time the Database class is instantiated.
What is happening here? Link to actual code if necessary.
P.S. I am relatively new to programming, so I am open to hearing how an experienced developer might design this component (especially if I am heading down the wrong path).
I realize your code was just samples, but I spotted this right off the bat...
if (supports[typeof(Entity1).Name]) { bindHandlers<Entity1Reader, Entity1>(Entities1, CamaDbType); }
if (supports[typeof(Entity2).Name]) { bindHandlers<Entity1Reader, Entity1>(Entities1, CamaDbType); }
Is it possible that you have a simple copy/paste mistake? Notice that Entities1 is passed in for both bindHandlers calls.
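Presumably the second call was meant to read:
if (supports[typeof(Entity2).Name]) { bindHandlers<Entity2Reader, Entity2>(Entities2, CamaDbType); }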
I've written a method:
class CopyableFloatCommand : FloatCommand
{
    public CopyableFloatCommand DeepCopy(LocationHeaderDTO locHeader, string commandId,
        List<FloatProductDetailsDTO> recountProducts)
    {
        var newCommand = (CopyableFloatCommand)MemberwiseClone();
        newCommand.Location = locHeader ?? newCommand.Location;
        newCommand.CommandId = commandId ?? newCommand.CommandId;
        newCommand.RecountProducts = recountProducts ?? newCommand.RecountProducts;
        return newCommand;
    }
}
And am then calling it via:
_tCheckinCommand = _pTCommand.DeepCopy(stagingLocHeadDto, SCICommand,
    new List<FloatProductDetailsDTO>(_pTCommand.MoveProducts));
In order to deepcopy an object of type FloatCommand.
As MemberwiseClone() is a protected method, it has to be called the way you see above - one cannot pass a FloatCommand into the method parameter and call it via fc.MemberwiseClone(), for example. As my method ought to work on a FloatCommand type, I've created a new nested class CopyableFloatCommand which inherits from FloatCommand. The DeepCopy method then shallow-clones the FloatCommand, casts to the child type and changes some properties as/when needed.
Creating a new class specifically for this purpose seems a bit clunky and I didn't see a more obvious way of writing it at the time. In terms of lines-of-code, would there be a simpler way of employing a deep copy such as the above? What about if another class, UserCommand, attempted to deep-copy a User object? UserCommand would be a sibling to FloatCommand such that they both inherit from Command. The method would have different parameters passed for the different types (although I can just remove the parameters altogether and use the instance variables if need be) as the different sub-types have slightly different properties.
In light of this is there a more generic method of writing the DeepCopy method, to be available for access for all the Command types in order to avoid some code duplication, given the above constraints?
Thanks!
I think you're suspecting that the responsibility for cloning the object and the responsibility for mutating its state after it is cloned should be separated, since you're now facing a similar task again (I mean UserCommand).
I would do the following in this situation:
Create a mutation interface:
public interface ICopyCommandMutation
{
    void Mutate(Command target);
}
For the sake of extensibility I would create a default mutate implementation:
public class NoMutation : ICopyCommandMutation
{
    public void Mutate(Command target) {}
}
Create the CopyableCommand class and move the DeepCopy() method there (you should also inherit FloatCommand from CopyableCommand):
public class CopyableCommand : Command
{
    public CopyableCommand DeepCopy(ICopyCommandMutation commandMutation = null)
    {
        var newCommand = (CopyableCommand)MemberwiseClone();
        if (commandMutation == null) commandMutation = new NoMutation();
        commandMutation.Mutate(newCommand);
        return newCommand;
    }
}
Now all the CopyableCommand inheritors can be copied with 'mutations' - you just need to implement the class. For example the FloatCommand 'mutations' from your question:
public class ChangeLocationRecountProducts : ICopyCommandMutation
{
    // these fields should be initialized some way (constructor or getters/setters - you decide)
    LocationHeaderDTO locHeader;
    string commandId;
    List<FloatProductDetailsDTO> recountProducts;

    public void Mutate(Command floatCommand)
    {
        var fc = floatCommand as FloatCommand;
        if (fc == null) { /* handle problems here */ }
        fc.Location = locHeader ?? fc.Location;
        fc.CommandId = commandId ?? fc.CommandId;
        fc.RecountProducts = recountProducts ?? fc.RecountProducts;
    }
}
Here is the usage:
var clrp = new ChangeLocationRecountProducts();
// ... setting up clrp
_tCheckinCommand = _pTCommand.DeepCopy(clrp);
Now if you need to 'mutate' the UserCommand, you can write a separate mutation class for it and keep the mutation logic there. The ability to apply different mutations in different situations (just by defining separate mutation classes) comes for free.
The only problem I can see here is that you probably cannot create CopyableCommand and inherit the other commands from it (3rd-party library?). The solution would be to use Castle DynamicProxy.
I haven't used AutoMapper, but I suspect it is doing something similar.
The solution is not 'lines-of-code optimal', but you would benefit from it if you have to mutate a large number of command classes when copying instances.
I have the following command handler. The handler takes a command object and uses its properties to either create or update an entity.
It decides this by the Id property on the command object which is nullable. If null, then create, if not, then update.
public class SaveCategoryCommandHandler : ICommandHandler<SaveCategoryCommand>
{
    public SaveCategoryCommandHandler(
        ICategoryRepository<Category> categoryRepository,
        ITracker<User> tracker,
        IMapProcessor mapProcessor,
        IUnitOfWork unitOfWork,
        IPostCommitRegistrator registrator)
    {
        // Private fields are set up. The definitions for the fields have been removed for brevity.
    }

    public void Handle(SaveCategoryCommand command)
    {
        // The only thing here that is important to the question is the below ternary operator.
        var category = command.Id.HasValue ? GetForUpdate(command) : Create(command);

        // Below code is not important to the question. It is common to both create and update operations though.
        MapProcessor.Map(command, category);
        UnitOfWork.Commit();
        Registrator.Committed += () =>
        {
            command.Id = category.Id;
        };
    }

    private Category GetForUpdate(SaveCategoryCommand command)
    {
        // Category is retrieved and tracking information added
    }

    private Category Create(SaveCategoryCommand command)
    {
        // Category is created via the ICategoryRepository and some other stuff happens too.
    }
}
I used to have two handlers, one for creating and one for updating, along with two commands for creating and updating. Everything was wired up using IoC.
After refactoring into one class to reduce the amount of code duplication I ended up with the above handler class. Another motivation for refactoring was to avoid having two commands (UpdateCategoryCommand and CreateCategoryCommand) which was leading to more duplication with validation and similar.
One example of this was having to have two validation decorators for what were effectively the same command (as they differed by only having an Id property). The decorators did implement inheritance but it is still a pain when there are a lot of commands to deal with.
There are a few things that bug me about the refactored handler.
One is the number of dependencies being injected. Another is that there is a lot going on in the class. The ternary also bothers me - it seems like a bit of a code smell.
One option is to inject some sort of helper class into the handler. This could implement some sort of ICategoryHelper interface with concrete Create and Update implementations. This would mean the ICategoryRepository and ITracker dependencies could be replaced with a single dependency on ICategoryHelper.
The only potential issue is that this would require some sort of conditional injection from the IoC container based on whether the Id field on the Command was null or not.
I am using SimpleInjector and am unsure of the syntax of how to do this or even if it can be done at all.
Is doing this via IoC also a smell, or should it be the handler's responsibility to do this?
Are there any other patterns or approaches for solving this problem? I had thought a decorator could possibly be used but I can't really think of how to approach doing it that way.
My experience is that having two separate commands (SaveCategoryCommand and UpdateCategoryCommand) with one command handler gives the best results (although two separate command handlers might sometimes be okay as well).
The commands should not inherit from a CategoryCommandBase base class, but instead the data that both commands share should be extracted to a DTO class that is exposed as a property on both classes (composition over inheritance). The command handler should implement both interfaces and this allows it to contain shared functionality.
[Permission(Permissions.CreateCategories)]
class SaveCategory {
    [Required, ValidateObject]
    public CategoryData Data;

    // Assuming name can't be changed after creation
    [Required, StringLength(50)]
    public string Name;
}

[Permission(Permissions.ManageCategories)]
class UpdateCategory {
    [NonEmptyGuid]
    public Guid CategoryId;

    [Required, ValidateObject]
    public CategoryData Data;
}

class CategoryData {
    [NonEmptyGuid]
    public Guid CategoryTypeId;

    [Required, StringLength(250)]
    public string Description;
}
Having two commands works best because, when every action has its own command, it is easier to log them and to give them different permissions (using attributes, for instance, as shown above). Having a shared data object works best because it allows you to pass it around in the command handler and lets the view bind to it. And inheritance is almost always ugly.
class CategoryCommandHandler :
    ICommandHandler<SaveCategory>,
    ICommandHandler<UpdateCategory> {

    public CategoryCommandHandler() { }

    public void Handle(SaveCategory command) {
        var c = new Category { Name = command.Name };
        UpdateCategory(c, command.Data);
    }

    public void Handle(UpdateCategory command) {
        var c = this.repository.GetById(command.CategoryId);
        UpdateCategory(c, command.Data);
    }

    private void UpdateCategory(Category cat, CategoryData data) {
        cat.CategoryTypeId = data.CategoryTypeId;
        cat.Description = data.Description;
    }
}
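Since the question mentions SimpleInjector: one handler class implementing both interfaces can simply be registered once per closed interface. A minimal sketch, assuming SimpleInjector's standard Register<TService, TImplementation>() overload:

var container = new Container();
// the same implementation serves both closed generic interfaces
container.Register<ICommandHandler<SaveCategory>, CategoryCommandHandler>();
container.Register<ICommandHandler<UpdateCategory>, CategoryCommandHandler>();
container.Verify();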
Do note that CRUDy operations will always result in solutions that seem less clean than task-based operations. That's one of the many reasons I push developers and requirement engineers to think about the tasks they want to perform. This results in better UI, greater UX, more expressive audit trails, more pleasant design, and better overall software. But some parts of your application will always be CRUDy, no matter what you do.
I think you can separate this command into two well-defined commands, e.g. CreateCategory and UpdateCategory (of course you should choose the most appropriate names). Also, design both command handlers via the Template Method design pattern. In the base class, define a protected abstract method for loading the category; the 'Handle' method calls this protected method and then processes the remaining logic of the original 'Handle' method:
public abstract class %YOUR_NAME%CategoryBaseCommandHandler<T> : ICommandHandler<T>
{
    public void Handle(T command)
    {
        var category = LoadCategory(command);

        MapProcessor.Map(command, category);
        UnitOfWork.Commit();
        Registrator.Committed += () =>
        {
            command.Id = category.Id;
        };
    }

    protected abstract Category LoadCategory(T command);
}
In derived classes you just override the LoadCategory method.
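For example (a sketch, with the placeholder base name filled in as CategoryBaseCommandHandler; the CreateCategoryCommand/UpdateCategoryCommand types and the categoryRepository field are hypothetical stand-ins for the question's own types):

public class CreateCategoryCommandHandler : CategoryBaseCommandHandler<CreateCategoryCommand>
{
    protected override Category LoadCategory(CreateCategoryCommand command)
    {
        // Category is created, as in the original Create method
        return this.categoryRepository.Create();
    }
}

public class UpdateCategoryCommandHandler : CategoryBaseCommandHandler<UpdateCategoryCommand>
{
    protected override Category LoadCategory(UpdateCategoryCommand command)
    {
        // Category is retrieved and tracking information added, as in the original GetForUpdate
        return this.categoryRepository.GetById(command.Id);
    }
}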
I was asked to create a series of reports for an application and as always, I'm looking for ways to reduce the amount of code written. I've started trying to come up with the easiest way to request a single report. Here's what I imagined:
var response = ReportGenerator.Generate(Reports.Report1);
//Reports would be an enum type with all of the available reports.
As soon as I tried to design that, the problems appeared. Every report has a different input and output. The input being the entity (or entities) on which the report is based and the output being the DTO holding the processed data.
Backing this up, I created this:
// The interface for every report
public interface IReport<INPUT, OUTPUT>
{
    OUTPUT GenerateReport(INPUT input);
}

// A base class for every report to share a few methods
public abstract class BaseReport<INPUT, OUTPUT> : IReport<INPUT, OUTPUT>
{
    // The method required by the IReport interface
    public OUTPUT GenerateReport(INPUT input)
    {
        return Process(input);
    }

    // An abstract method to be implemented by every concrete report
    protected abstract OUTPUT Process(INPUT input);
}

public class ConcreteReport : BaseReport<SomeEntity, SomeDto>
{
    protected override SomeDto Process(SomeEntity input)
    {
        return default(SomeDto);
    }
}
At first I considered having every concrete report specify the logic responsible for determining its own input. I quickly saw that this would make the class less testable. By having the report request an instance of the INPUT generic type, I can mock that object and test the report.
So, what I need is some kind of class to tie a report (one of the enum values) to a concrete report class responsible for its generation. I'm trying to use an approach similar to a dependency injection container. This is the class I'm having trouble to write.
I'll write below what I have, with comments explaining the problems I've found (it's not supposed to be syntactically correct - it's just a stub, since my problem is exactly the implementation of this class):
public class ReportGenerator
{
    // This would be the dictionary responsible for tying an enum value from Reports with one of the concrete reports.
    // My first problem is that I need to make sure that the types associated with the enum values are instances of the BaseReport class.
    private readonly Dictionary<Reports, ?> registeredReports;

    public ReportGenerator()
    {
        // On the constructor the dictionary would be instantiated...
        registeredReports = new Dictionary<Reports, ?>();

        // and the types would be registered as if in a dependency injection container.
        // Register(Reports.Report1, ConcreteReport);
        // Register(Reports.Report2, ConcreteReport2);
    }

    // Below is the most basic version of the registration method I could come up with before arriving at the problems within the method GenerateReport.
    // T repository - this would be the type of the class responsible for obtaining the input to generate the report
    // Func<T, INPUT> expression - this would be the expression that should be used to obtain the input object
    public void Register<T, INPUT>(Reports report, Type reportConcreteType, T repository, Func<T, INPUT> expression)
    {
        // This would basically add the data into the dictionary, but I'm not sure about the syntax
        // because I'm not sure how to hold that information so that it can be used later to generate the report.
        // Also, I should point out that I prefer to hold the types and not instances of the report and repository classes.
        // My plan is to use reflection to instantiate them on demand.
    }

    // Based on the registration, I would then need a generic way to obtain a report.
    // This would be the method that I imagined at first to be called like this:
    // var response = ReportGenerator.Generate(Reports.Report1);
    public OUTPUT Generate(Reports report)
    {
        // This surely does not work. There is no way to have this method signature request only the enum value
        // and return a generic type. But how can I do it? How can I tie all these things together and make it work?
    }
}
I can see it is not tied to the report interface or abstract class, but I can't figure out the implementation.
I am not sure that it is possible to achieve such behaviour with an enum, so I can propose the following solution:
Use some generic identifier class (interface) in place of the enum values. To use it as a dictionary key you will also need a non-generic base for this class.
Have some static class with the aforementioned identifier classes as specific static properties.
Use the values of the static class properties as keys in the ReportGenerator class.
Here are the required interfaces:
public interface IReportIdentifier
{
}

public interface IReportIdentifier<TInput, TOutput> : IReportIdentifier
{
}

public interface IReport<TInput, TOutput>
{
    TOutput Generate(TInput input);
}
Here is the static "enum" class. Note that each property must return a real singleton instance - if they all returned null, every dictionary lookup would use the same (null) key:
public static class Reports
{
    private class ReportIdentifier<TInput, TOutput> : IReportIdentifier<TInput, TOutput> { }

    private static readonly IReportIdentifier<String, Int32> a = new ReportIdentifier<String, Int32>();
    private static readonly IReportIdentifier<Object, Guid> b = new ReportIdentifier<Object, Guid>();

    public static IReportIdentifier<String, Int32> A
    {
        get { return a; }
    }

    public static IReportIdentifier<Object, Guid> B
    {
        get { return b; }
    }
}
And here is the ReportGenerator class:
public class ReportGenerator
{
    IDictionary<IReportIdentifier, Object> reportProducers = new Dictionary<IReportIdentifier, Object>();

    public void Register<TInput, TOutput>(IReportIdentifier<TInput, TOutput> identifier, IReport<TInput, TOutput> reportProducer)
    {
        reportProducers.Add(identifier, reportProducer);
    }

    public TOutput Generate<TInput, TOutput>(IReportIdentifier<TInput, TOutput> identifier, TInput input)
    {
        // Safely cast because it is this class's invariant.
        var producer = (IReport<TInput, TOutput>)reportProducers[identifier];
        return producer.Generate(input);
    }
}
As you can see, we use a cast, but it is hidden inside the Generate method, and if our Register method is the only access point to the reportProducers dictionary, this cast will not fail.
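Usage would then look something like this (StringLengthReport is a hypothetical producer matching the IReportIdentifier<String, Int32> identifier A above):

// hypothetical report that returns the length of its input string
public class StringLengthReport : IReport<String, Int32>
{
    public Int32 Generate(String input) { return input.Length; }
}

var generator = new ReportGenerator();
generator.Register(Reports.A, new StringLengthReport());
Int32 length = generator.Generate(Reports.A, "hello"); // 5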
And also, as #CoderDennis pointed out:
Then you could always use T4 to generate that static class and its static properties, and could even create an extension method that returns the proper IReportIdentifier from your enum.
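A minimal sketch of such an extension method, assuming a hypothetical ReportKey enum mirroring the static properties:

public enum ReportKey { A, B }

public static class ReportKeyExtensions
{
    public static IReportIdentifier Identifier(this ReportKey key)
    {
        switch (key)
        {
            case ReportKey.A: return Reports.A;
            case ReportKey.B: return Reports.B;
            default: throw new ArgumentOutOfRangeException("key");
        }
    }
}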
It seems to me that you may want to rethink the design.
You essentially have methods that take objects in and spit objects out. Granted, you use generics, but that doesn't mean much, since there are no constraints on input/output and thus no way to commonly process them in calling code.
In fact, I think the use of generics is potentially a hindrance with the given approach, because passing in the wrong combination of generic types will result in an error, and it's not clear to the caller what is valid and what is not.
Given the approach, it's unclear what benefit all of the extra classes give over non-abstractions like:
int r1Output = Report1StaticClass.GetOutput(string input);
string r2Output = Report2StaticClass.GetOtherOutput(int input);
double r3Output = Report3StaticClass.GetWhatever(double input);
A different approach might be to encapsulate input/output in something similar to this, but adjusted to your needs. This isn't meant to be an exact approach, just something to demonstrate what I'm suggesting. Also, I haven't actually tested/compiled this. Consider it pseudo-code:
//something generic that can be easily mocked and processed in a generic way
//your implementation almost certainly won't look exactly like this...
//but the point is that you should look for a common pattern with the input
interface IInput
{
    ReportTypeEnum EntityType { get; set; }
    int EntityId { get; set; }
}

interface IReportTemplate
{
    //return something that can be bound to/handled generically.
    //for instance, a DataSet can be easily and dynamically bound to grid controls.
    //I'm not necessarily advocating for DataSet, just saying it's generic
    //NOTE: the guts of this can use a dynamically assigned
    //      data source for unit testing
    DataSet GetData(int entityId);
}

//maybe associate report types with the enum something like this.
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false)]
class ReportTypeAttribute : Attribute
{
    public Type ReportType { get; set; }

    //maybe throw an exception if it's not an IReportTemplate
    public ReportTypeAttribute(Type reportType) { ReportType = reportType; }
}

//it should be easy for devs to recognize that if they add an enum value,
//they also need to assign a ReportType, thus your code is less likely to
//break vs. having a disconnect between enum and the place where an associated
//concrete type is assigned to each value
enum ReportTypeEnum
{
    [ReportType(typeof(ConcreteReportTemplate1))]
    ReportType1,

    [ReportType(typeof(ConcreteReportTemplate2))]
    ReportType2
}

static class ReportUtility
{
    public static DataSet GetReportData(IInput input)
    {
        var report = GetReportTemplate(input.EntityType);
        return report.GetData(input.EntityId);
    }

    private static IReportTemplate GetReportTemplate(ReportTypeEnum entityType)
    {
        //spin up report by reflecting on ReportTypeEnum and
        //figuring out which concrete class to instantiate
        //based on the associated ReportTypeAttribute
    }
}
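The reflection inside GetReportTemplate might be filled in roughly like this (an untested sketch, relying only on the ReportTypeAttribute shown above):

private static IReportTemplate GetReportTemplate(ReportTypeEnum entityType)
{
    // each enum value is a static field; read its ReportTypeAttribute
    var field = typeof(ReportTypeEnum).GetField(entityType.ToString());
    var attr = (ReportTypeAttribute)Attribute.GetCustomAttribute(field, typeof(ReportTypeAttribute));
    if (attr == null)
        throw new InvalidOperationException("No ReportTypeAttribute on " + entityType);
    return (IReportTemplate)Activator.CreateInstance(attr.ReportType);
}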
I have a series of classes which initialize themselves when created based on using reflection to read a custom attribute on each property/field. The logic for all that is contained in an Initialize() method which they all call, which exists on the base class they inherit from.
I want to add usages of Lazy<T> to these classes, but I don't want to specify the function(s) in the constructor for each class, because they are "thin" constructors and the heavy lifting is in Initialize(). Conversely, I want to keep type-safety and such so I can't just provide a string of the code to use to initialize the Lazy<T>. The problem is that any usage which refers to the specific properties of the object can't be used in a static context.
Specifically, this is what I want my code to look like in an ideal world:
public class Data : Base
{
    public Data(int ID) { Initialize(ID); }

    [DataAttr("catId")] // This tells reflection how to initialize this field.
    private int categoryID;

    [LazyDataAttr((Data d) => new Category(d.categoryID))] // This would tell reflection how to create the Lazy<T> signature
    private Lazy<Category> _category;

    public Category Category { get { return _category.Value; } }
}

public abstract class Base
{
    protected void Initialize(int ID)
    {
        // Use reflection to look up `ID` and populate all the fields correctly.
        // This is where `categoryID` gets its initial value.
        // *** This is where _category should be assigned the correct function to use ***
    }
}
I would then access this the same way I would if Category were an automatic property (or an explicitly lazy-loaded one with a _category == null check):
var data = new Data(1);
var cat = data.Category;
Is there any way I can pass the type information so that the compiler can check that new Category(d.categoryID) is a valid function? It doesn't have to be via an Attribute, but it needs to be something I can see via reflection and plug in to anything that has a Lazy<T> signature.
As an alternative, I will accept a way to do
private Lazy<Category> _category = (Data d) => new Category(d.categoryID);
This could either avoid reflection altogether, or use it to transform from this form to a form that Lazy<T> can handle.
I ended up using a solution inspired by #Servy's suggestion to get this working. The base class's Initialize() method now ends with:
protected void Initialize()
{
    // Rest of code...
    InitializeLazyVars();

    // Instantiating the object already gave every field its default value (null for
    // the Lazy<T> fields), so all that is left is to verify that each one was set up.
    foreach (var fi in GetLazyFields())
    {
        if (fi.GetValue(this) == null)
            throw new NotImplementedException("No initialization found for Lazy<T> " + fi.Name + " in class " + this.GetType());
    }
}
InitializeLazyVars() is a virtual method that does nothing in the base class, but will need to be overridden in the child classes. If someone introduces a new Lazy<T> and doesn't add it to that method, we'll generate an exception any time we try to initialize the class, which means we'll catch it quickly. And there's only one place they need to be added, no matter how many constructors there are.
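For the Data class from the question, the override might look like this (a sketch, assuming Base declares protected virtual void InitializeLazyVars() { } and the fields from the original example):

public class Data : Base
{
    [DataAttr("catId")]
    private int categoryID;

    private Lazy<Category> _category;
    public Category Category { get { return _category.Value; } }

    protected override void InitializeLazyVars()
    {
        // unlike an attribute argument, this lambda may legally capture instance state
        _category = new Lazy<Category>(() => new Category(categoryID));
    }
}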
I'm not sure exactly how to describe this question, but here goes. I've got a class hierarchy of objects that are mapped in a SQLite database. I've already got all the non-trivial code written that communicates between the .NET objects and the database.
I've got a base interface as follows:
public interface IBackendObject
{
    void Read(int id);
    void Refresh();
    void Save();
    void Delete();
}
This is the basic CRUD operations on any object. I've then implemented a base class that encapsulates much of the functionality.
public abstract class ABackendObject : IBackendObject
{
    protected ABackendObject() { }                 // constructor used to instantiate new objects
    protected ABackendObject(int id) { Read(id); } // constructor used to load object

    public void Read(int id) { ... }               // implemented here is the DB code
}
Now, finally, I have my concrete child objects, each of which have their own tables in the database:
public class ChildObject : ABackendObject
{
    public ChildObject() : base() { }
    public ChildObject(int id) : base(id) { }
}
This works fine for all my purposes so far. The child has several callback methods that are used by the base class to instantiate the data properly.
I now want to make this slightly more efficient. For example, in the following code:
public void SomeFunction1()
{
    ChildObject obj = new ChildObject(1);
    obj.Property1 = "blah!";
    obj.Save();
}

public void SomeFunction2()
{
    ChildObject obj = new ChildObject(1);
    obj.Property2 = "blah!";
    obj.Save();
}
In this case, I'll be constructing two completely new in-memory instantiations, and depending on the order in which SomeFunction1 and SomeFunction2 are called, either Property1 or Property2 may not be saved. What I want to achieve is a way for both of these instantiations to somehow point to the same memory location - I don't think that will be possible if I'm using the "new" keyword, so I was looking for hints as to how to proceed.
Ideally, I'd want to store a cache of all loaded objects in my ABackendObject class and return memory references to the already loaded objects when requested, or load the object from memory if it doesn't already exist and add it to the cache. I've got a lot of code that is already using this framework, so I'm of course going to have to change a lot of stuff to get this working, but I just wanted some tips as to how to proceed.
Thanks!
If you want to store a "cache" of loaded objects, you could easily just have each type maintain a Dictionary<int, IBackendObject> which holds loaded objects, keyed by their ID.
Instead of using a constructor, build a factory method that checks the cache:
public abstract class ABackendObject<T> where T : class
{
    public T LoadFromDB(int id) {
        T obj = this.CheckCache(id);
        if (obj == null)
        {
            obj = this.Read(id); // Load the object
            this.SaveToCache(id, obj);
        }
        return obj;
    }
}
If you make your base class generic, and Read virtual, you should be able to provide most of this functionality without much code duplication.
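The CheckCache/SaveToCache plumbing could be as simple as this (a sketch; one static dictionary per closed generic type, and not thread-safe as written):

private static readonly Dictionary<int, T> cache = new Dictionary<int, T>();

private T CheckCache(int id)
{
    T obj;
    cache.TryGetValue(id, out obj);
    return obj; // null when the object has not been loaded yet
}

private void SaveToCache(int id, T obj)
{
    cache[id] = obj;
}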
What you want is an object factory. Make the ChildObject constructor private, then write a static method ChildObject.Create(int index) which returns a ChildObject, but which internally ensures that different calls with the same index return the same object. For simple cases, a simple static hash of index => object will be sufficient.
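A sketch of that factory on ChildObject (the "hash" here is just a static dictionary; error handling omitted):

public class ChildObject : ABackendObject
{
    private static readonly Dictionary<int, ChildObject> instances =
        new Dictionary<int, ChildObject>();

    private ChildObject(int id) : base(id) { }

    public static ChildObject Create(int index)
    {
        // different calls with the same index return the same object
        ChildObject obj;
        if (!instances.TryGetValue(index, out obj))
        {
            obj = new ChildObject(index);
            instances[index] = obj;
        }
        return obj;
    }
}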
If you're using .NET Framework 4, you may want to have a look at the System.Runtime.Caching namespace, which gives you a pretty powerful cache architecture.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
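For example, a fetch-or-load lookup through MemoryCache might look like this (a sketch; the key format and expiration policy are arbitrary choices):

// fetch a ChildObject through MemoryCache (System.Runtime.Caching)
static ChildObject GetChildObject(int id)
{
    var cache = MemoryCache.Default;
    string key = "ChildObject:" + id; // hypothetical key format

    var obj = (ChildObject)cache.Get(key);
    if (obj == null)
    {
        obj = new ChildObject(id);
        // evict entries that haven't been touched for ten minutes (arbitrary policy)
        cache.Add(key, obj, new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
    }
    return obj;
}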
Sounds perfect for a reference count like this...
#region Begin/End Update
int refcount = 0;
ChildObject record;

protected ChildObject ActiveRecord
{
    get
    {
        return record;
    }
    set
    {
        record = value;
    }
}

public void BeginUpdate()
{
    if (refcount == 0)
    {
        ActiveRecord = new ChildObject(1);
    }
    Interlocked.Increment(ref refcount);
}

public void EndUpdate()
{
    int count = Interlocked.Decrement(ref refcount);
    if (count == 0)
    {
        ActiveRecord.Save();
    }
}
#endregion

#region operations
public void SomeFunction1()
{
    BeginUpdate();
    try
    {
        ActiveRecord.Property1 = "blah!";
    }
    finally
    {
        EndUpdate();
    }
}

public void SomeFunction2()
{
    BeginUpdate();
    try
    {
        ActiveRecord.Property2 = "blah!";
    }
    finally
    {
        EndUpdate();
    }
}

public void SomeFunction3()
{
    BeginUpdate();
    try
    {
        SomeFunction1();
        SomeFunction2();
    }
    finally
    {
        EndUpdate();
    }
}
#endregion
I think you're on the right track, more or less. You can either create a factory which creates your child objects (and can track "live" instances), or you can keep track of instances which have been saved, so that when you call your Save method it recognizes that your first instance of ChildObject is the same as your second instance of ChildObject and does a deep copy of the data from the second instance over to the first. Both of these are fairly non-trivial from a coding standpoint, and both probably involve overriding the equality methods on your entities. I tend to think the first approach would be less likely to cause errors.
One additional option would be to use an existing object-relational mapping package like NHibernate or Entity Framework to do your mapping between objects and your database. I know NHibernate supports SQLite, and in my experience it tends to be the one that requires the least amount of change to your entity structures. Going that route you get the benefit of the ORM layer tracking instances for you (and generating SQL for you), plus you would probably get some more advanced features your current data access code may not have. The downside is that these frameworks tend to have a learning curve, and depending on which one you choose there could be a not-insignificant impact on the rest of your code. So it would be worth weighing the benefits against the cost of learning the framework and converting your code to use its API.