Efficiently determine type - C#

I'm faced with the prospect of using a factory method like the one below to effect setter-based DI (because Entity Framework prevents me from using constructor-based DI). So when an object is materialized in EF, I call the method below to do the setter injection.
public void AttachKeyHolder(ILocalizableEntity localizableEntity)
{
    var ft = (localizableEntity as FeeType);
    if (ft != null)
    {
        localizableEntity.KeyHolder = new FeeTypeKeyHolder();
        return;
    }
    var other = (localizableEntity as OtherType);
    if (other != null)
    {
        localizableEntity.KeyHolder = new OtherTypeKeyHolder();
        return;
    }
    // And on and on and on for every applicable type
}
I don't like this because it becomes an O(n) problem: if I have 20 types that need this setter injection, whichever one is type-checked in 20th place takes 20 times as long as the first type check, and since I'm doing this every single time an object is materialized in EF, it will likely not scale.
So I'm looking for a better algorithm. The previous "solution" was to just assign the appropriate KeyHolder in the constructor of the associated object, as below:
public FeeType()
{
    this.KeyHolder = new FeeTypeKeyHolder();
}
This still seems like the most runtime-efficient solution, and the one I'm still leaning strongly towards regardless of testability, given the potentially massive performance effect the factory method above would likely have. I'd really like to decouple these classes, but not at the expense of this web application's scalability.

You can mark properties that need to be set via DI with your own attribute, like [Inject], and then in the desired place (such as the constructor) call a helper method like MyHelper.InjectProperties(this). In the helper you can find the properties marked with [Inject] and resolve their values from the container.
Most IoC/DI frameworks support property injection in a performance-effective way, so in the best of worlds you won't need to implement it yourself.
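A minimal sketch of that idea, using plain reflection. The attribute, the helper, and the simple type registry below are hypothetical stand-ins for whatever container you actually use:

using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class InjectAttribute : Attribute { }

public static class MyHelper
{
    // Hypothetical registry mapping a property type to a factory;
    // a real container (Unity, Autofac, ...) would take this role.
    private static readonly Dictionary<Type, Func<object>> Registry =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<object> factory)
    {
        Registry[typeof(T)] = factory;
    }

    public static void InjectProperties(object target)
    {
        foreach (PropertyInfo prop in target.GetType().GetProperties())
        {
            Func<object> factory;
            if (prop.IsDefined(typeof(InjectAttribute), false) && prop.CanWrite
                && Registry.TryGetValue(prop.PropertyType, out factory))
            {
                prop.SetValue(target, factory(), null);
            }
        }
    }
}

An entity would then call MyHelper.InjectProperties(this) from its constructor (or from EF's materialization hook), keeping the wiring in one place.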

First of all, are you sure this is going to have such a bad performance impact? I doubt it.
Anyway, you can use the Visitor pattern to implement double dispatch: instead of 20 tests for the type of the object, you invoke two virtual methods, that is, two lookups in virtual method tables, which might be faster than 20 tests. However, there is the overhead of two indirect function calls, stack frame allocation for each of them, etc. But again, do we really want to go this deep in a C# project with Entity Framework?
Example:
class Visitor {
    public virtual void Visit(FeeType f) {}
    public virtual void Visit(OtherType t) {}
}

abstract class BaseType { public abstract void Accept(Visitor v); }

class FeeType : BaseType {
    public override void Accept(Visitor v) { v.Visit(this); }
}

class OtherType : BaseType {
    public override void Accept(Visitor v) { v.Visit(this); }
}

class DIVisitor : Visitor {
    public override void Visit(FeeType f) { f.KeyHolder = new FeeTypeKeyHolder(); }
    public override void Visit(OtherType t) { t.KeyHolder = new OtherTypeKeyHolder(); }
}

public void AttachKeyHolder(ILocalizableEntity localizableEntity)
{
    var ft = (localizableEntity as BaseType);
    ft.Accept(new DIVisitor());
}
You can also implement the lookup for the correct method based on type through hashing:
Dictionary<Type, Action<BaseType>> actions = new Dictionary<Type, Action<BaseType>>();
actions.Add(typeof(FeeType), x => { ((FeeType)x).KeyHolder = new FeeTypeKeyHolder(); });
...
actions[ft.GetType()].Invoke(ft);
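Putting that together for the question's AttachKeyHolder, a sketch might look like this (ILocalizableEntity, FeeType, OtherType and the KeyHolder classes come from the question; the rest is illustrative). One caveat: EF dynamic proxies subclass your entity types, so GetType() may return a proxy type that needs unwrapping first.

using System;
using System.Collections.Generic;

public static class KeyHolderAttacher
{
    // Registered once; lookup afterwards is O(1) no matter how many types exist.
    private static readonly Dictionary<Type, Action<ILocalizableEntity>> Actions =
        new Dictionary<Type, Action<ILocalizableEntity>>
        {
            { typeof(FeeType),   e => e.KeyHolder = new FeeTypeKeyHolder() },
            { typeof(OtherType), e => e.KeyHolder = new OtherTypeKeyHolder() }
        };

    public static void AttachKeyHolder(ILocalizableEntity entity)
    {
        Action<ILocalizableEntity> attach;
        if (Actions.TryGetValue(entity.GetType(), out attach))
        {
            attach(entity);
        }
    }
}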

More concise way to pass, as constructor parameter, an Action that references private data?

This is stripped down from a more complex situation.
The goal is to construct several instances of class SubAction, each of which uses an action to alter how it uses its internal data.
Consider:
public class SubAction
{
    private Action<SubAction> _DoIt;

    public SubAction(Action<SubAction> doIt)
    {
        _DoIt = doIt;
    }

    public void DoIt()
    {
        _DoIt(this);
    }

    public static Action<SubAction> GetAction1 => (it) => it.DoSomething(it._Data.Value1);
    public static Action<SubAction> GetAction2 => (it) => it.DoSomething(it._Data.Value2);

    private void DoSomething(string value)
    {
        // ...
    }

    // This gets set by code not shown.
    protected Data _Data;
}
public class Data
{
    public string Value1;
    public string Value2;
}

public class SubActionTests
{
    static SubActionTests()
    {
        var actions = new List<SubAction>
        {
            new SubAction(SubAction.GetAction1),
            new SubAction(SubAction.GetAction2),
        };
        // ... code not shown that calls a method to update each instance's _Data ...
        foreach (var subAction in actions)
        {
            subAction.DoIt();
        }
    }
}
This works, but it seems cumbersome. Specifically:
public Action<SubAction> _DoIt { get; set; }
...
static public Action<SubAction> GetAction1 => (it) => it.DoSomething(it._Data.Value1);
...
new SubAction(SubAction.GetAction1)
If I set DoIt AFTER constructing the object, it could simply be:
public Action DoIt { get; set; }
...
public Action GetAction1 => () => DoSomething(_Data.Value1);
...
var it = new SubAction();
it.DoIt = it.GetAction1;
Which has simpler action declarations:
The actions don't need <SubAction>.
The GetAction1, GetAction2, ... declarations are much simpler.
But instance initialization is more verbose, because access to "it" is needed to set DoIt.
Unfortunately it isn't possible to refer to "it" inside an object initializer, so there doesn't seem to be any way to have BOTH the simpler initialization syntax AND the simpler action-declaration syntax.
Am I overlooking some solution?
ALTERNATIVE: factory method
NOTE: This could be approached quite differently, by using an enum to select between the different actions. But that is a different sort of complication; I'm looking for a way to describe these Actions themselves more succinctly.
Specifically, I'm aware there could be a factory method that takes an enum, to hide the complexity:
public enum WhichAction
{
    Action1,
    Action2
}
...
public static SubAction CreateSubAction(WhichAction which)
{
    var it = new SubAction();
    switch (which)
    {
        case WhichAction.Action1:
            it.DoIt = it.GetAction1;
            break;
        case WhichAction.Action2:
            it.DoIt = it.GetAction2;
            break;
    }
    return it;
}
The downside of this is that each added action requires editing in multiple places.
ALTERNATIVE: sub-classes
Another alternative is to create multiple sub-classes.
That is what I was doing originally, but that was even more verbose - multiple lines for each new action.
And it felt like "overkill".
After all, the approach I've got isn't terrible - it's a single line for each new GetAction. It just felt like each of those lines "ought" to be much simpler.
Sadly, from what I understand, I don't think you can make the complexity disappear. You probably need to choose one of the approaches you suggested (or another solution, like the strategy pattern).
Advice
When confronted with a design choice like this, I suggest you optimize for the consumer's side of things. In other words, design your classes to make them simple to use.
In your scenario, that would mean opting for your initial solution or one of the more complex solutions (factory method, sub-classes, strategy pattern, etc.).
The problem with the second solution is that your object can be in a limbo state when initializing it.
var it = new SubAction();
// Before you set DoIt, the object is not fully initialized.
it.DoIt = it.GetAction1;
Consumers can also forget to set DoIt. When possible, you should probably avoid designs that allow such mistakes.
I'm still curious whether there are syntax alternatives that would streamline what I showed, so I'll accept an answer that shows a simpler syntax. But it turns out that, in my situation, I can easily avoid the need for those actions.
Discussing it with a colleague, they pointed out that my current actions all follow a similar pattern: get a string, pass it to SubAction.DoSomething.
Therefore I can simplify those actions down to a property that gets the appropriate string:
public abstract string CurrentValue { get; }
...
public virtual void DoIt()
{
    DoSomething(CurrentValue);
}
Given the above, subclasses become so simple they no longer feel like "overkill":
public class SubAction1 : SubAction
{
    protected override string CurrentValue => _Data.Value1;
}
...
// usage
new SubAction1()
That is straightforward and highly readable, and trivial to extend when additional conditions are needed.
There will be more complicated situations that do need to override DoSomething. In those, the "real work" dwarfs what I've shown, so it's appropriate to subclass those anyway.

General design guidance C#: finding I'm unnecessarily passing objects between methods

Sorry, it's a bit vague perhaps, but it's been bugging me for weeks. I find that with each project I tackle, I end up making what I think is a design mistake, and I'm pretty sure there's a better way.
Say I define a class that's deserialized from an event source, like a simple JSON doc definition - let's call it a Keys class with various defined integers, bools and strings. I have multiple methods that make use of it, and I find that I constantly need to pass this class as an object by means of an overload. So method A calls method B; method B doesn't need these objects, but it calls method C, which does... In doing this bad practice, I'm passing these 'keys' objects to method B for the sole purpose of method C's accessibility.
I'm probably missing one major OOP fundamental :) any guidance or reading would be appreciated, as I'm googled out!!
public class Keys
{
    public child Detail { get; set; }
}

public class child
{
    public string instance { get; set; }
}

// my main entry point
public void FunctionHandler(Keys input, ILambdaContext context)
{
    methodA(input);
}

static void methodA(Keys input)
{
    // some other logic or test that doesn't need the Keys object/class:
    // if (foo == bar) { proceed = true; }
    string foo = methodB(input);
}

static string methodB(Keys input)
{
    // here I need Keys to do stuff, and I return a string in this example
    return input.Detail.instance;
}
What you do is not necessarily bad or wrong. Remember that in C# what you actually pass are references, not objects proper, so the overhead of parameter passing is really small.
The main downside of long call chains is that the program logic is perhaps more complicated than it needs to be, with the usual maintainability issues.
Sometimes you can use the C# type system to let the compiler or the run time choose the proper function.
The compiler is employed when you overload method() for two different types instead of defining methodA() and methodB(). But they are distinguished by the parameter type, so you need different Key types which may be (but don't have to be) related:
public class KeyA {/*...*/}
public class KeyB {/*...*/}
void method(KeyA kA) { /* do something with kA */ }
void method(KeyB kB) { /* do something with kB */ }
This is of limited benefit; that the functions have the same name is just syntactic sugar which makes it clear that they serve the same purpose.
The other, perhaps more elegant and versatile technique is to create an inheritance hierarchy of Keys which each "know" what a method should do.
You'll need a base class with a virtual method which is overridden by the inheriting classes. Often the base is an interface just declaring that there is some method(), and the various implementing types implement a method() which suits them. Here is a somewhat lengthy example which uses a virtual Output() method so that we see something on the console.
It's noteworthy that each Key calls a method of an OutputterI, passing itself to it as a parameter; the outputter class then in turn calls back a method of the calling object. That's called "Double Dispatch" and combines run-time polymorphism with compile-time function overloading. At compile time the object and its concrete type are not known; in fact, they can be implemented later (e.g. by inventing another Key). But each object knows what to do when its callback function (here: GetData()) is called.
using System;
using System.Collections.Generic;

namespace DoubleDispatch
{
    interface KeyI
    {   // They actually delegate that to an outputter
        void Output();
    }

    interface OutputterI
    {
        void Output(KeyA kA);
        void Output(KeyExtra kE);
        void Output(KeyI k); // whatever this does.
    }

    class KeyBase : KeyI
    {
        protected OutputterI o;
        public KeyBase(OutputterI oArg) { o = oArg; }
        // This will call Output(KeyI)
        public virtual void Output() { o.Output(this); }
    }

    class KeyA : KeyBase
    {
        public KeyA(OutputterI oArg) : base(oArg) { }
        public string GetAData() { return "KeyA Data"; }
        // This will compile to call Output(KeyA kA) because
        // we pass this, which is known here to be of type KeyA.
        public override void Output() { o.Output(this); }
    }

    class KeyExtra : KeyBase
    {
        public string GetEData() { return "KeyB Data"; }
        public KeyExtra(OutputterI oArg) : base(oArg) { }
        /** Some extra data which needs to be handled during output. */
        public string GetExtraInfo() { return "KeyB Extra Data"; }
        // This will, as is desired,
        // compile to call o.Output(KeyExtra).
        public override void Output() { o.Output(this); }
    }

    class KeyConsolePrinter : OutputterI
    {
        // Note: No way to print KeyBase.
        public void Output(KeyA kA) { Console.WriteLine(kA.GetAData()); }
        public void Output(KeyExtra kE)
        {
            Console.Write(kE.GetEData() + ", ");
            Console.WriteLine(kE.GetExtraInfo());
        }
        // default method for other KeyI
        public void Output(KeyI otherKey) { Console.WriteLine("Got an unknown key type"); }
    }

    // similar for class KeyScreenDisplayer {...} etc.

    class DoubleDispatch
    {
        static void Main(string[] args)
        {
            KeyConsolePrinter kp = new KeyConsolePrinter();
            KeyBase b = new KeyBase(kp);
            KeyBase a = new KeyA(kp);
            KeyBase e = new KeyExtra(kp);

            // Uninteresting, direct case: We know at compile time
            // what each object is and could simply call kp.Output(a) etc.
            Console.Write("base:\t\t");
            b.Output();
            Console.Write("KeyA:\t\t");
            a.Output();
            Console.Write("KeyExtra:\t");
            e.Output();

            List<KeyI> list = new List<KeyI>() { b, a, e };
            Console.WriteLine("\nb,a,e through KeyI:");

            // Interesting case: We would normally not know which
            // type each element in the list has. But each type's specific
            // Output() method is called -- and we know it must have
            // one because that's part of the interface signature.
            // Inside each type's Output() method in turn, the correct
            // OutputterI::Output() for the given real type was
            // chosen at compile time depending on the type of the respective
            // "this" argument.
            foreach (var k in list) { k.Output(); }
        }
    }
}
Sample output:
base: Got an unknown key type
KeyA: KeyA Data
KeyExtra: KeyB Data, KeyB Extra Data
b,a,e through KeyI:
Got an unknown key type
KeyA Data
KeyB Data, KeyB Extra Data

Strategy Pattern with each algorithm having a different method signature

I am refactoring some existing code.
We have a list of investors with amounts assigned to each. The total of the amounts should be equal to another total, but sometimes there are a couple of cents of difference, so we use different algorithms to assign these differences to the investors.
The current code is something like this:
public void Round(IList<Investors> investors, Enum algorithm, [here goes a list of many parameters]) {
    // some checks and logic here - OMITTED FOR BREVITY

    // pick method given algorithm Enum
    if (algorithm == Enum.Algorithm1) {
        SomeStaticClass.Algorithm1(investors, remainders, someParameter1, someParameter2, someParameter3, someParameter4);
    } else if (algorithm == Enum.Algorithm2) {
        SomeStaticClass.Algorithm2(investors, remainders, someParameter3);
    }
}
So far we only have two algorithms, and I have to implement the third one. I was given the opportunity to refactor both existing implementations as well as write some generic code so this works for future algorithms, perhaps custom to each client.
My first thought was "ok, this is a strategy pattern". But the problem I see is that each algorithm receives a different parameter list (beyond the first two parameters), and future algorithms can receive a different list of parameters as well. The only things in "common" are the investor list and the remainders.
How can I design this so I have a cleaner interface?
I thought of:
- Establishing an interface with ALL possible parameters, and sharing it among all implementations.
- Using an object with all possible parameters as properties, and using that generic object as part of the interface. I would have 3 parameters: the list of investors, the remainders object, and a "parameters" object. But in this case I have a similar problem: how each object is instantiated and which properties must be filled depends on the algorithm (unless I set all of them). I would have to use a factory (or something) to instantiate it, using all parameters in the interface, am I right? I would just be moving the too-many-parameters problem to that "factory" or whatever.
- Using a dynamic object instead of a statically typed object. This still presents the same instantiation problems as before.
I also thought of using the Visitor pattern, but as I understand it, that would apply if I had different algorithms for different entities, like another class of investors. So I don't think it is the right approach.
So far the one that convinces me the most is the second, although I am still a bit reticent about it.
Any ideas?
Thanks
Strategy has different implementations. It's straightforward when all alternate concrete strategies require the same type signature. But when concrete implementations start asking for different data from the Context, we have to gracefully take a step back by relaxing encapsulation ("breaking encapsulation" is a known drawback of Strategy): we can pass the Context to the strategies either in the method signature or in the constructor, depending upon how much is needed.
By using interfaces and breaking big object trees into smaller containments, we can restrict access to most of the Context state.
The following code demonstrates passing through a method parameter.
public class Context {
    private String name;
    private int id;
    private double salary;

    Strategy strategy;

    void contextInterface() {
        strategy.algorithmInterface(this);
    }

    public String getName() {
        return name;
    }
    public int getId() {
        return id;
    }
    public double getSalary() {
        return salary;
    }
}

public interface Strategy {
    // WE CAN NOT DECIDE ON A COMMON SIGNATURE HERE,
    // AS ALL IMPLEMENTATIONS REQUIRE DIFFERENT PARAMS
    void algorithmInterface(Context context);
}

public class StrategyA implements Strategy {
    @Override
    public void algorithmInterface(Context context) {
        // OBSERVE HERE THE BREAKING OF ENCAPSULATION
        // BY OPERATING ON SOMEBODY ELSE'S DATA
        context.getName();
        context.getId();
    }
}

public class StrategyB implements Strategy {
    @Override
    public void algorithmInterface(Context context) {
        // OBSERVE HERE THE BREAKING OF ENCAPSULATION
        // BY OPERATING ON SOMEBODY ELSE'S DATA
        context.getSalary();
        context.getId();
    }
}
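The constructor-passing variant mentioned above can be sketched as follows - in C#, matching the question's language; all names here are illustrative:

public class Context
{
    public string Name { get; set; }
    public int Id { get; set; }
    public double Salary { get; set; }
}

public interface IStrategy
{
    void Algorithm();
}

public class StrategyA : IStrategy
{
    private readonly Context _context;

    // The context (or a narrower interface over it) is supplied once,
    // so the Algorithm signature stays uniform across strategies.
    public StrategyA(Context context)
    {
        _context = context;
    }

    public void Algorithm()
    {
        // Uses only the parts of the context it needs.
        string name = _context.Name;
        int id = _context.Id;
        // ... operate on name and id ...
    }
}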
Okay, I might be going in the wrong direction... but it seems kinda weird that you're passing in arguments to all the algorithms, plus the identifier of which algorithm to actually use. Shouldn't the Round() function ideally just get what it needs to operate?
I'm imagining the function that invokes Round() to look something like:
if (something)
    algToUse = Enum.Algorithm1;
else if (otherthing)
    algToUse = Enum.Algorithm2;
else
    algToUse = Enum.Algorithm3;

Round(investors, remainder, algToUse, dayOfMonth, lunarCycle, numberOfGoblinsFound, etc);
... what if, instead, you did something like this:
public abstract class RoundingAlgorithm
{
    public abstract void PerformRounding(IList<Investors> investors, int remainders);
}

public class RoundingRandomly : RoundingAlgorithm
{
    private int someNum;
    private DateTime anotherParam;

    public RoundingRandomly(int someNum, DateTime anotherParam)
    {
        this.someNum = someNum;
        this.anotherParam = anotherParam;
    }

    public override void PerformRounding(IList<Investors> investors, int remainder)
    {
        // ... code ...
    }
}

// ... and other subclasses of RoundingAlgorithm

// ... later on:
public void Round(IList<Investors> investors, RoundingAlgorithm roundingMethodToUse)
{
    // ...your other code (checks, etc)...
    roundingMethodToUse.PerformRounding(investors, remainders);
}
... and then your earlier function simply looks like:
RoundingAlgorithm roundingMethod;
if (something)
    roundingMethod = new RoundingByStreetNum(1, "asdf", DateTime.Now);
else if (otherthing)
    roundingMethod = new RoundingWithPrejudice(null);
else
    roundingMethod = new RoundingDefault(1000);

Round(investors, roundingMethod);
... basically, instead of populating that Enum value, just create a RoundingAlgorithm object and pass that in to Round() instead.

Using an IoC container at runtime within a factory to determine class initialization

Is this a good pattern? It has a code smell to me, with the factory class being aware of the IUnityContainer...
My basic need was to resolve an ICalculationRuleProcess at runtime depending on the Id of a class. It could be based on something other than the Id, I am aware of that... basically I have a known set of Ids that I need to deal with, because I bootstrapped the records into the database manually and there is no way to edit the records. With each Id I have a related class. I also have a varying number of constructor parameters within each class that implements ICalculationRuleProcess, so using an IoC container is extremely helpful versus some crazy switch statement and variable constructor arguments using Activator.CreateInstance.
Here is what I did:
Registered the IUnityContainer instance within the container itself. I wasn't sure if this was even possible, but it worked.
Registered all of the ICalculationRuleProcess classes with a unique identifier within the registration (basically just the Id.ToString() of each possible DistributionRule)
Created a factory to determine the correct ICalculationRuleProcess, and had it use the IoC container to figure out the correct class to load.
Registered the factory class (ICalculationRuleProcessFactory) to the IoC container
Wherever the ICalculationRuleProcess needed to be used, I had the class take an ICalculationRuleProcessFactory in its constructor and have it call the Create method to figure out which ICalculationRuleProcess to use.
The code for the factory is here:
public interface ICalculationRuleProcessFactory
{
    ICalculationRuleProcess Create( DistributionRule distributionRule );
}

public class CalculationRuleProcessFactory : ICalculationRuleProcessFactory
{
    private readonly IBatchStatusWriter _batchStatusWriter;
    private readonly IUnityContainer _iocContainer;

    public CalculationRuleProcessFactory(
        IUnityContainer iocContainer,
        IBatchStatusWriter batchStatusWriter )
    {
        _batchStatusWriter = batchStatusWriter;
        _iocContainer = iocContainer;
    }

    public ICalculationRuleProcess Create( DistributionRule distributionRule )
    {
        _batchStatusWriter.WriteBatchStatusMessage(
            string.Format( "Applying {0} Rule", distributionRule.Descr ) );

        return _iocContainer.Resolve<ICalculationRuleProcess>(
            distributionRule.Id.ToString() );
    }
}
This seems okay to me, given the constraints you described. The most important thing is that all of your rules implement ICalculationRuleProcess and that all consumers of those rules only know about that interface.
It isn't inherently bad that your factory takes the container dependency, especially as an interface. Consider that if you ever had to change container implementations, you could create an IUnityContainer implementation that doesn't use Unity at all (just forward all the members of the interface to their corresponding methods in the replacement container).
If it really bothers you, you can add yet another layer of indirection by creating an agnostic IoC interface with the requisite Register, Resolve, etc. methods and create an implementation that forwards these to Unity.
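For example, such an agnostic layer might look like this sketch (the interface and adapter names are made up; only Resolve is shown, forwarding to Unity's named-registration Resolve, and it assumes the usual Unity namespace is imported):

public interface IDependencyResolver
{
    T Resolve<T>(string name);
}

public class UnityDependencyResolver : IDependencyResolver
{
    private readonly IUnityContainer _container;

    public UnityDependencyResolver(IUnityContainer container)
    {
        _container = container;
    }

    public T Resolve<T>(string name)
    {
        // Forwards to Unity; swapping containers later only means
        // writing one new adapter against this interface.
        return _container.Resolve<T>(name);
    }
}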
There is another way you can achieve this without the factory taking a dependency on IUnityContainer (which is not inherently bad in and of itself). This is just a different way to think about the problem.
The flow is as follows:
Register all different instances of ICalculationRuleProcess.
Get all registered ICalculationRuleProcess and create a creation lambda for each one.
Register ICalculationRuleProcessFactory with a list of ICalculationRuleProcess creation lambdas.
In ICalculationRuleProcessFactory.Create return the right process.
Now the tricky part of this is to preserve the Ids that the registrations were made under. One solution is to simply keep the Id on the ICalculationRuleProcess interface, but it might not semantically belong there. This is where this solution slips into ugliness (which is more a case of missing functionality in Unity). But, with an extension method and a small extra type, it looks nice when it runs.
So what we do here is create an extension method that returns all registrations with their names.
public class Registration<T> where T : class {
    public string Name { get; set; }
    public Func<T> CreateLambda { get; set; }

    public override bool Equals(object obj) {
        var other = obj as Registration<T>;
        if (other == null) {
            return false;
        }
        return this.Name == other.Name && this.CreateLambda == other.CreateLambda;
    }

    public override int GetHashCode() {
        int hash = 17;
        hash = hash * 23 + (Name != null ? Name.GetHashCode() : string.Empty.GetHashCode());
        hash = hash * 23 + (CreateLambda != null ? CreateLambda.GetHashCode() : 0);
        return hash;
    }
}

public static class UnityExtensions {
    public static IEnumerable<Registration<T>> ResolveWithName<T>(this UnityContainer container) where T : class {
        return container.Registrations
            .Where(r => r.RegisteredType == typeof(T))
            .Select(r => new Registration<T> { Name = r.Name, CreateLambda = () => container.Resolve<T>(r.Name) });
    }
}
public class CalculationRuleProcessFactory : ICalculationRuleProcessFactory
{
    private readonly IBatchStatusWriter _batchStatusWriter;
    private readonly IEnumerable<Registration<ICalculationRuleProcess>> _registrations;

    public CalculationRuleProcessFactory(
        IEnumerable<Registration<ICalculationRuleProcess>> registrations,
        IBatchStatusWriter batchStatusWriter )
    {
        _batchStatusWriter = batchStatusWriter;
        _registrations = registrations;
    }

    public ICalculationRuleProcess Create( DistributionRule distributionRule )
    {
        _batchStatusWriter.WriteBatchStatusMessage(
            string.Format( "Applying {0} Rule", distributionRule.Descr ) );

        // will crash if the registration is not present
        return _registrations
            .FirstOrDefault(r => r.Name == distributionRule.Id.ToString())
            .CreateLambda();
    }
}
// Registrations
var registrations = container.ResolveWithName<ICalculationRuleProcess>();
container.RegisterInstance<IEnumerable<Registration<ICalculationRuleProcess>>>(registrations);
After I wrote this I realised that this is more creative lambda douchebaggery than an architecturally pretty solution. But in any case, feel free to take ideas from it.
Hey Rob, I'm intending to use essentially the same pattern. I've got multiple types of shopping cart item that need to be associated with their own specific set of validator instances of varying class.
I think there is a smell about this pattern, and it's not that the factory has a reference to the IoC container; it's that, typically, an IoC container is configured in the application root, which is typically the UI layer. If a crazy custom factory was created just to handle these associations, then possibly it should be in the domain.
In short, these associations are possibly not part of the overall program structure that's set up before the application runs, and so shouldn't be defined in the application root.

Initializing constructor from stored cache in C#

I'm not sure exactly how to describe this question, but here goes. I've got a class hierarchy of objects that are mapped in a SQLite database. I've already got all the non-trivial code written that communicates between the .NET objects and the database.
I've got a base interface as follows:
public interface IBackendObject
{
    void Read(int id);
    void Refresh();
    void Save();
    void Delete();
}
These are the basic CRUD operations on any object. I've then implemented a base class that encapsulates much of the functionality.
public abstract class ABackendObject : IBackendObject
{
    protected ABackendObject() { }                 // constructor used to instantiate new objects
    protected ABackendObject(int id) { Read(id); } // constructor used to load an object

    public void Read(int id) { ... }               // implemented here is the DB code
}
Now, finally, I have my concrete child objects, each of which has its own table in the database:
public class ChildObject : ABackendObject
{
    public ChildObject() : base() { }
    public ChildObject(int id) : base(id) { }
}
This works fine for all my purposes so far. The child has several callback methods that are used by the base class to instantiate the data properly.
I now want to make this slightly more efficient. For example, consider the following code:
public void SomeFunction1()
{
    ChildObject obj = new ChildObject(1);
    obj.Property1 = "blah!";
    obj.Save();
}

public void SomeFunction2()
{
    ChildObject obj = new ChildObject(1);
    obj.Property2 = "blah!";
    obj.Save();
}
In this case, I'll be constructing two completely separate in-memory instances, and depending on the order in which SomeFunction1 and SomeFunction2 are called, either Property1 or Property2 may not be saved. What I want to achieve is a way for both these instantiations to somehow point to the same memory location. I don't think that will be possible if I'm using the "new" keyword, so I was looking for hints as to how to proceed.
Ideally, I'd want to store a cache of all loaded objects in my ABackendObject class and return memory references to the already loaded objects when requested, or load the object from memory if it doesn't already exist and add it to the cache. I've got a lot of code that is already using this framework, so I'm of course going to have to change a lot of stuff to get this working, but I just wanted some tips as to how to proceed.
Thanks!
If you want to store a "cache" of loaded objects, you could easily just have each type maintain a Dictionary<int, IBackendObject> which holds loaded objects, keyed by their ID.
Instead of using a constructor, build a factory method that checks the cache:
public abstract class ABackendObject<T> where T : class
{
    // These hooks are assumed: a cache lookup, the DB read, and a cache
    // store. They could be implemented once here against a Dictionary.
    protected abstract T CheckCache(int id);
    protected abstract T Read(int id);
    protected abstract void SaveToCache(int id, T obj);

    public T LoadFromDB(int id)
    {
        T obj = this.CheckCache(id);
        if (obj == null)
        {
            obj = this.Read(id); // Load the object from the database
            this.SaveToCache(id, obj);
        }
        return obj;
    }
}
If you make your base class generic, and Read virtual, you should be able to provide most of this functionality without much code duplication.
What you want is an object factory. Make the ChildObject constructor private, then write a static method ChildObject.Create(int index) which returns a ChildObject, but which internally ensures that different calls with the same index return the same object. For simple cases, a simple static hash of index => object will be sufficient.
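A minimal sketch of that factory, building on the question's ABackendObject (the simple static dictionary is illustrative and not thread-safe; a real version might lock or use ConcurrentDictionary):

using System.Collections.Generic;

public class ChildObject : ABackendObject
{
    // Simple static hash of index => object, as described above.
    private static readonly Dictionary<int, ChildObject> Instances =
        new Dictionary<int, ChildObject>();

    private ChildObject(int id) : base(id) { }

    public static ChildObject Create(int id)
    {
        ChildObject obj;
        if (!Instances.TryGetValue(id, out obj))
        {
            obj = new ChildObject(id); // loads from the DB via the base constructor
            Instances[id] = obj;
        }
        return obj;
    }
}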
If you're using .NET Framework 4, you may want to have a look at the System.Runtime.Caching namespace, which gives you a pretty powerful cache architecture.
http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx
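For example, a MemoryCache-based lookup might be sketched like this (requires a reference to the System.Runtime.Caching assembly; the key scheme and expiration policy are illustrative):

using System;
using System.Runtime.Caching;

public static class ChildObjectCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static ChildObject Get(int id)
    {
        string key = "ChildObject:" + id;
        ChildObject obj = (ChildObject)Cache.Get(key);
        if (obj == null)
        {
            obj = new ChildObject(id); // load from the database as before
            Cache.Add(key, obj, new CacheItemPolicy
            {
                // evict entries that haven't been touched for a while
                SlidingExpiration = TimeSpan.FromMinutes(10)
            });
        }
        return obj;
    }
}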
Sounds perfect for a reference count like this...
#region Begin/End Update
int refcount = 0;
ChildObject record;

protected ChildObject ActiveRecord
{
    get { return record; }
    set { record = value; }
}

public void BeginUpdate()
{
    if (refcount == 0)
    {
        ActiveRecord = new ChildObject(1);
    }
    Interlocked.Increment(ref refcount);
}

public void EndUpdate()
{
    int count = Interlocked.Decrement(ref refcount);
    if (count == 0)
    {
        ActiveRecord.Save();
    }
}
#endregion

#region operations
public void SomeFunction1()
{
    BeginUpdate();
    try
    {
        ActiveRecord.Property1 = "blah!";
    }
    finally
    {
        EndUpdate();
    }
}

public void SomeFunction2()
{
    BeginUpdate();
    try
    {
        ActiveRecord.Property2 = "blah!";
    }
    finally
    {
        EndUpdate();
    }
}

public void SomeFunction3()
{
    BeginUpdate();
    try
    {
        SomeFunction1();
        SomeFunction2();
    }
    finally
    {
        EndUpdate();
    }
}
#endregion
I think you're on the right track, more or less. You can either create a factory which creates your child objects (and can track "live" instances), or you can keep track of instances which have been saved, so that when you call your Save method it recognizes that your first instance of ChildObject is the same as your second instance and does a deep copy of the data from the second instance over to the first. Both of these are fairly non-trivial from a coding standpoint, and both probably involve overriding the equality methods on your entities. I tend to think that using the first approach would be less likely to cause errors.
One additional option would be to use an existing Object-Relational Mapping package like NHibernate or Entity Framework to do your mapping between objects and your database. I know NHibernate supports SQLite, and in my experience it tends to be the one that requires the least amount of change to your entity structures. Going that route, you get the benefit of the ORM layer tracking instances for you (and generating SQL for you), plus you would probably get some more advanced features your current data access code may not have. The downside is that these frameworks tend to have a learning curve, and depending on which one you choose, there could be a not-insignificant impact on the rest of your code. So it would be worth weighing the benefits against the cost of learning the framework and converting your code to use the API.
