I have a summary object whose responsibility is to combine a lot of things together and create a summary report, which is later serialized into XML.
In this object I have a lot of structures like this:
public class SummaryVisit : Visit, IMappable
{
    public int SummaryId { get; set; }
    public int PatientId { get; set; }
    public int LocationId { get; set; }

    public IMappable Patient
    {
        get
        {
            return new SummaryPatient(PatientBusinessService.FindPatient(this.PatientId));
        }
    }

    public IMappable Location
    {
        get
        {
            return new SummaryLocation(LocationBusinessService.FindLocation(this.LocationId));
        }
    }

    public IEnumerable<IMappable> Comments
    {
        get
        {
            return new SummaryComments(CommentBusinessService.FindComments(this.SummaryId, Location));
        }
    }

    // ... can be a lot of these structures
    // ... using different business services and summary objects

    public IEnumerable<IMappable> Tasks
    {
        get
        {
            return new SummaryTasks(TaskBusinessService.FindTasks(this));
        }
    }
}
PatientBusinessService, LocationBusinessService etc. are static classes.
And each of these SummaryPatient, SummaryLocation etc. has the same kind of structure inside.
What is the best approach to refactoring and unit testing this?
I tried to replace the static calls with calls through interfaced proxies (or to refactor the statics into non-static classes with interfaces), but the class then accumulated a lot of these interfaces as constructor-injection parameters and started to look really greedy. On top of that, each of these interfaces ends up with only one used method inside (if I create it just for this summary's needs).
And since this is a summary object, each of these static services is typically used just once for the whole structure, to fetch the appropriate properties for output.
You could change your tests to be more integration-oriented (testing more than one class at a time). You could also try to make your services more universal, so they can take data from different sources (e.g. a TestDataProvider alongside your current data provider), as sketched below.
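A minimal sketch of that provider idea, assuming the Patient type and PatientBusinessService from the question (the interface and class names here are hypothetical):

using System.Collections.Generic;

// Hypothetical seam: one narrow provider interface per data source.
public interface IPatientDataProvider
{
    Patient FindPatient(int patientId);
}

// Production implementation delegates to the existing static service.
public class BusinessServicePatientProvider : IPatientDataProvider
{
    public Patient FindPatient(int patientId)
    {
        return PatientBusinessService.FindPatient(patientId);
    }
}

// Test implementation serves canned data instead of hitting the database.
public class TestDataProvider : IPatientDataProvider
{
    private readonly Dictionary<int, Patient> _patients = new Dictionary<int, Patient>();

    public void Add(int id, Patient patient) { _patients[id] = patient; }

    public Patient FindPatient(int patientId) { return _patients[patientId]; }
}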
A better solution, I think, is to modify the classes you want to test:
Use strong typing for the properties and gain all its benefits. I think you should return more specific types instead of IMappable.
It looks like some of your data is stored inside the class (the ids) while some is not (the IMappable object references). I would refactor this so the class holds references to the objects themselves:
private SummaryPatient _patient;

public SummaryPatient Patient
{
    get
    {
        if (_patient == null)
            _patient = new SummaryPatient(PatientBusinessService.FindPatient(this.PatientId));
        return _patient;
    }
}
Then you can assign your test data in the constructor, or create a static method CreateDummy(...) just for unit tests. That method should in turn use CreateDummy for the child objects. You can then use it in your unit tests, as sketched below.
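A rough sketch of such a factory, assuming the lazy _patient backing field shown above plus a matching _location field (the literal id values are placeholders):

public class SummaryVisit : Visit, IMappable
{
    // ... members as above, including the _patient / _location backing fields ...

    // Test-only factory: pre-fills the lazy fields, so the getters
    // never call the static business services during a unit test.
    public static SummaryVisit CreateDummy(SummaryPatient patient, SummaryLocation location)
    {
        return new SummaryVisit
        {
            SummaryId = 1,   // placeholder values
            PatientId = 42,
            LocationId = 7,
            _patient = patient,
            _location = location
        };
    }
}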
Related
I have a legacy C# library (a set of interrelated algorithms) in which there is a global god object which is passed to all classes. This god object (simply called Manager :D ) has a Parameters member, and an ObjectCollection member (among lots of others).
public class Manager
{
    public Parameters Parameters { get; private set; }
    public ObjectCollection ObjectCollection { get; private set; }
    // ...
}
I am unable to test the algorithms because everything takes the Manager as a dependency, and initializing that means I have to initialize everything. So I want to refactor this design.
Parameters has more than 100 fields in it, whose values control the different algorithms. The ObjectCollection has the entities required for the overall execution of the engine, stored by Id, by Name, etc.
The following are the approaches I've thought of, but I'm not satisfied with them:
Pass Parameters and ObjectCollection (or IParameters and IObjectCollection) instead of the Manager, but I don't think this solves any issue; I still wouldn't know which of the parameters each algorithm depends on.
Splitting the parameters class into smaller ones is also difficult, as one parameter may affect many algorithms, so a logical separation is hard to find. Plus the dependencies for each algorithm may still end up being many.
A singleton pattern, as is usually done for a Logger, but that too is not testable.
Some of the parameters control the algorithm logic, some are just required by the algorithm. I'm thinking of making each algorithm a separate class implementing an interface and, at application start, deciding which algorithm to instantiate based on the parameters. But I might end up splitting the current set of algorithm classes into many more, and I'm afraid I'll complicate it further and lose the structure of the algorithms.
Is there any standard way to deal with this, or is splitting big classes into smaller ones and passing dependencies by constructor the only general advice?
In order to allow yourself to make small steps I'd start with a single algorithm and identify the parameters it requires. These can then be exposed in an interface so...
public interface IAmTheParametersForAlgorithm1
{
    int OneThing { get; }
    int AnotherThing { get; }
}
Then you can alter Manager so that it implements that interface and as in #marcel's answer expose those parameters directly on Manager.
Now you can test Algorithm1 with a very small mock or self-shunt because you don't need to initialise a gigantic Manager in order to run your test. And Algorithm1 no longer knows it takes a Manager object.
public class Manager : IAmTheParametersForAlgorithm1 { }

public class Algorithm1
{
    public Algorithm1(IAmTheParametersForAlgorithm1 parameters) { }
}
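For illustration, the test-side fake can then be tiny (a sketch; FakeAlgorithm1Parameters is a hypothetical name, and the [Test] attribute is NUnit-style shorthand for whichever test framework you use):

// Hand-rolled fake: the entire test dependency fits in a few lines.
class FakeAlgorithm1Parameters : IAmTheParametersForAlgorithm1
{
    public int OneThing { get; set; }
    public int AnotherThing { get; set; }
}

[Test]
public void Algorithm1_runs_without_a_manager()
{
    var algorithm = new Algorithm1(new FakeAlgorithm1Parameters { OneThing = 1, AnotherThing = 2 });
    // ... act on algorithm and assert on its observable output ...
}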
Bit by bit you can continue expanding this to each of the sets of parameters and dealing with small, specific interfaces will allow you to identify where different algorithms have common parameters.
public class Manager :
    IAmTheParametersForAlgorithm1,
    IAmTheParametersForAlgorithm2,
    IAmTheParametersForAlgorithm3,
    IAmTheParametersForAlgorithm4 { }
It also means that as you identify algorithms whose parameters are no longer accessed outside of their interface you can stop injecting Manager into those algorithms, take the parameters out of Manager, and create a new class which only provides those parameters.
This means you can keep your application running the whole time you're making this change, if you aren't able to dedicate time to make one gigantic breaking change.
For the Parameters, I would go with something like this:
public class Parameters
{
    public int MyProperty1 { get; set; }
    public int MyProperty2 { get; set; }
    public int MyProperty3 { get; set; }
}

public class AlgorithmParameters1
{
    private Parameters parameters;

    public int MyProperty1 { get { return parameters.MyProperty1; } }
    public int MyProperty3 { get { return parameters.MyProperty3; } }

    public AlgorithmParameters1(Parameters parameters)
    {
        this.parameters = parameters;
    }
}

public class Algorithm1
{
    public void Run(AlgorithmParameters1 parameters)
    {
        // Access only MyProperty1 and MyProperty3...
    }
}
Usage would look like:
var parameters = new Parameters()
{
    MyProperty1 = 4,
    MyProperty2 = 5,
    MyProperty3 = 6,
};

new Algorithm1().Run(new AlgorithmParameters1(parameters));
By the way, I don't see how you distinguish between parameters that control an algorithm and parameters that are merely required by it. By "control", do you mean they are used to decide which algorithm to run?
I have an entity called "Set" which contains Cards. Sometimes I want to see the entire card and its contents (card view), while at other times I just want to know how many cards are in the Set (table views). In my effort to keep things DRY, I decided to try to re-use my SetDto class with multiple constructors, like this:
public class SetDto
{
    public SetDto()
    {
        Cards = new List<CardDto>();
    }

    // Called via new SetDto(set, "thin")
    public SetDto(Set set, string isThin) : this()
    {
        SetId = set.SetId;
        Title = set.Title;
        Details = set.Details;
        Stage = set.Stage;
        CardCount = set.Cards.Count;
    }

    // Called via new SetDto(set)
    public SetDto(Set set)
    {
        SetId = set.SetId;
        UserId = set.UserId;
        Title = set.Title;
        Details = set.Details;
        FolderId = set.FolderId;
        Stage = set.Stage;
        IsArchived = set.IsArchived;
        Cards = new List<CardDto>();
        foreach (Card card in set.Cards)
        {
            Cards.Add(new CardDto(card));
        }
    }

    // ... property definitions
}
I originally had two different DTOs for sets - ThinSetDto and FullSetDto - but this seemed messy and tougher to test. Does the above solution seem ok, or am I breaking a known best-practice? Thank you for your time!
I would create three methods in the SetManager class (a class handling CRUD operations), not in the DTO.
The DTO should have no such logic inside. Anyway, I agree with you that the duplication is useless (and evil).
public class BaseSetDTO
{
    public BaseSetDTO()
    {
        Set();
    }

    internal virtual void Set()
    {
        // Do your base set here with base properties
    }
}

public class SetDTO : BaseSetDTO
{
    internal override void Set()
    {
        // Do a full set here
    }
}
Create a base class, then let your types handle what they are supposed to set. Create a new one for your ThinSetDTO and override again, as sketched below.
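Following that suggestion, the thin variant might look like this (a sketch; the property names are assumed from the question):

public class ThinSetDTO : BaseSetDTO
{
    internal override void Set()
    {
        // Populate only the summary fields here
        // (e.g. SetId, Title, Stage, CardCount) and skip the Cards collection.
    }
}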
Instead, I would prefer an extension method: declare all properties in the Set class and modify them by passing the required parameters. Alternatively, initialize a base DTO and build the various versions by adding the required properties, calling an extension method to create the required version of the DTO and return the base DTO.
public static Set SetDto(this Set set, bool isThin)
{
    Set objSet = set;
    if (isThin)
    {
        // ... populate only the thin subset of properties ...
    }
    return objSet;
}
A common solution to this is to have the repository (or equivalent) return the 'flavor' of the DTO/entity you want, either by having different access methods (i.e. Get() ... GetSet()), or by enumerating your 'flavors' of the entity in question and passing that to your Get (or equivalent) method, i.e.:
enum ContactCollectionFlavors { Full, CountOnly, CountWithNames /* ... */ }
...
foo = ContactRepository.GetByLastName("Jones", ContactCollectionFlavors.CountWithNames);
This can get a little messy; from experience the entity in question should have some way of knowing what 'flavor' it is, which smells bad since it breaks encapsulation and separation of concerns. But in my opinion it's better to hold your nose and keep some out-of-band data, so that later you can have lazy loading of the entity, allowing you to turn 'light flavors' into fully populated entities.
I have a situation where I am querying a RESTful web service (using .NET) that returns data as XML. I have written wrapper functions around the API so that instead of returning raw XML I return full .NET objects that mirror the structure of the XML. The XML can be quite complicated so these objects can be pretty large and heavily nested (i.e. contain collections that in turn may house other collections, etc.).
The REST API has an option to return a full result or a basic result. The basic result returns a small subset of the data that the full result does. Currently I am dealing with the two types of response by returning the same .NET object for both types of request - but in the basic request some of the properties are not populated. This is best shown by a (very oversimplified) example of the code:
public class PersonResponse
{
    public string Name { get; set; }
    public string Age { get; set; }
    public IList<HistoryDetails> LifeHistory { get; set; }
}

public class PersonRequest
{
    public PersonResponse GetBasicResponse()
    {
        return new PersonResponse()
        {
            Name = "John Doe",
            Age = "50",
            LifeHistory = null
        };
    }

    public PersonResponse GetFullResponse()
    {
        return new PersonResponse()
        {
            Name = "John Doe",
            Age = "50",
            LifeHistory = PopulateHistoryUsingExpensiveXmlParsing()
        };
    }
}
As you can see the PersonRequest class has two methods that both return a PersonResponse object. However the GetBasicResponse method is a "lite" version - it doesn't populate all the properties (in the example it doesn't populate the LifeHistory collection as this is an 'expensive' operation). Note this is a very simplified version of what actually happens.
However, to me this has a definite smell to it (since the caller of the GetBasicResponse method needs to understand which properties will not be populated).
I was thinking a more OOP methodology would be to have two PersonResponse objects - a BasicPersonResponse object and a FullPersonResponse with the latter inheriting from the former. Something like:
public class BasicPersonResponse
{
    public string Name { get; set; }
    public string Age { get; set; }
}

public class FullPersonResponse : BasicPersonResponse
{
    public IList<HistoryDetails> LifeHistory { get; set; }
}

public class PersonRequest
{
    public BasicPersonResponse GetBasicResponse()
    {
        return new BasicPersonResponse()
        {
            // ...
        };
    }

    public FullPersonResponse GetFullResponse()
    {
        return new FullPersonResponse()
        {
            // ...
        };
    }
}
However, this still doesn't quite "feel" right - for reasons I'm not entirely sure of!
Is there a better design pattern to deal with this situation? I feel like I'm missing something more elegant? Thanks!
In my opinion you have described a proxy pattern. See details here: Illustrated GOF Design Patterns in C#
I also have a nagging bad feeling about using inheritance to add on 'extra data', rather than adding/modifying behavior. The main advantage of this is that your methods can specify which level of detail they require in their argument types.
In this particular example, I would be inclined to use the first approach for the data transfer object (the Response object), but then immediately consume this data transfer object to create data model objects, the exact nature of which depends heavily on your specific application. The data transfer object should be internal (as the presence or absence of the data field is an implementation detail) and the public objects or interfaces should provide a view that's more suitable to the consuming code.
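A minimal sketch of that separation (PersonResponseDto, PersonSummary and PersonDetail are hypothetical names; HistoryDetails comes from the question):

using System.Collections.Generic;

// Internal DTO mirroring the wire format; LifeHistory may be null on basic requests.
internal class PersonResponseDto
{
    public string Name { get; set; }
    public string Age { get; set; }
    public IList<HistoryDetails> LifeHistory { get; set; }
}

// Public views expose only what was actually fetched.
public class PersonSummary
{
    internal PersonSummary(PersonResponseDto dto)
    {
        Name = dto.Name;
        Age = dto.Age;
    }

    public string Name { get; private set; }
    public string Age { get; private set; }
}

public class PersonDetail : PersonSummary
{
    internal PersonDetail(PersonResponseDto dto) : base(dto)
    {
        History = new List<HistoryDetails>(dto.LifeHistory);
    }

    public IList<HistoryDetails> History { get; private set; }
}

A caller holding a PersonSummary simply cannot ask for history, so the "which properties are populated" question disappears at compile time.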
I want to add metadata to my object graph for non-domain type data that will be associated to my objects but is not essential to the problem set of that domain. For example, I need to store sort settings for my objects so that the order in which they appear in the UI is configurable by the user. The sort indices should be serializable so that the objects remember their positions. That's just one among a few other metadata items I need to persist for my objects. My first thought is to solve this by having a MetadataItem and a MetadataItemCollection where the base Entity class will have a "Meta" property of type MetadataItemCollection. E.g.:
public class MetadataItem
{
    public string Name;
    public object Data;
}

public class MetadataItemCollection
{
    /* All normal collection operations here. */

    // Implementation-specific interesting ones ...
    public object Get(string name);
    public MetadataItem GetItem(string name);

    // Strongly-typed getters ...
    public bool GetAsBool(string name);
    public string GetAsString(string name);

    // ... or could be typed via generics ...
    public T Get<T>(string name);
}

public class Entity
{
    public MetadataItemCollection Meta { get; }
}
A few concerns I can think of are:
Serialization - should the database have a single table of EntityID | Name | Value, where Value is a string and all types are serialized to strings?
Future proofing - what if a metadata item's type (unlikely) or name needs to be changed?
Refactorability - should the keys come from a static list, via an enum or a class with static string properties, or should free-form strings be allowed:
var i = entity.Meta["SortIndex"];
vs.
public enum Metadatas { SortIndex };
var i = entity.Meta[Metadatas.SortIndex];
vs.
public static class Metadatas
{
    public static string SortIndex = "SortIndex";
}
var i = entity.Meta[Metadatas.SortIndex];
Anything else?
Thoughts, ideas, gotchas???
Thanks for your time.
Solution:
Following #Mark's lead, and after watching the Udi video Mark linked to, I created two new interfaces: IUiPresentation and IUiPresentationDataPersistor. It's important to note that none of the objects in my Entity object model have any awareness of these interfaces; the interfaces are in a separate assembly and never referenced by my Entity object model. The magic is then done via IoC in the presentation models. It would be something like the following:
public class PhoneViewModel
{
    IUiPresentationDataPersistor<Phone> _uiData;
    IUiPresentation<Phone> _presenter;

    // Let IoC resolve the dependency via ctor injection.
    public PhoneViewModel(Phone phone, IUiPresentationDataPersistor<Phone> uiData)
    {
        _uiData = uiData;
        _presenter = uiData.Get(phone); // Does a simple lookup on the phone's ID.
    }

    public int SortIndex
    {
        get { return _presenter.SortIndex; }
        set { _presenter.SortIndex = value; }
    }

    public void Save()
    {
        _uiData.Save();
    }
}
It's a little more complicated in that the ViewModel implements INotifyPropertyChanged to get all the goodness that it provides, but this should convey the general idea.
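For illustration, the change-notification wiring on that property might look roughly like this (a standard INotifyPropertyChanged sketch, not the exact production code):

using System.ComponentModel;

public class PhoneViewModel : INotifyPropertyChanged
{
    // ... fields and constructor as above ...

    public event PropertyChangedEventHandler PropertyChanged;

    public int SortIndex
    {
        get { return _presenter.SortIndex; }
        set
        {
            if (_presenter.SortIndex == value)
                return; // avoid redundant notifications
            _presenter.SortIndex = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("SortIndex"));
        }
    }
}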
Metadata literally means data about data, but what you seem to be asking for is a way to control and change behavior of your objects.
I think such a concern is much better addressed with a Role Interface - see e.g. Udi Dahan's talk about Making Roles Explicit. More specifically, the Strategy design pattern is used to define loosely coupled behavior. I'd look for a way to combine those two concepts.
As we already know from .NET, the use of static, weakly typed attributes severely limits our options for recomposing components, so I wouldn't go in that direction.
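As a rough illustration of combining a role interface with a strategy (all names here are hypothetical):

using System.Collections.Generic;
using System.Linq;

// Role interface: sortability is an explicit, narrow role an entity can play.
public interface IHaveSortIndex
{
    int SortIndex { get; set; }
}

// Strategy: the ordering behaviour is swappable without touching the entities.
public interface ISortStrategy<T> where T : IHaveSortIndex
{
    IEnumerable<T> Order(IEnumerable<T> items);
}

public class ExplicitIndexSort<T> : ISortStrategy<T> where T : IHaveSortIndex
{
    public IEnumerable<T> Order(IEnumerable<T> items)
    {
        return items.OrderBy(i => i.SortIndex);
    }
}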
I have a Report Interface which has a Run method.
There are different types of reports which implement this interface and each run their own kind of report, getting data from different tables.
Each report, using its own data context, gets data which is then used to populate business objects, and at the moment these are returned as an Array (I would like to at least return something like a list, but because you have to define the list's element type it makes it a bit more difficult).
Reflection is then used to find out the properties of the returned data.
I hope I have explained this well enough!
Is there a better way of doing this?
By request:
public interface IReport
{
    int CustomerID { get; set; }
    Array Run();
}

public class BasicReport : IReport
{
    public int CustomerID { get; set; }

    public virtual Array Run()
    {
        Array result = null;
        using (BasicReportsDataContext brdc = new BasicReportsDataContext())
        {
            var queryResult = from j in brdc.Jobs
                              where j.CustomerID == CustomerID
                              select new JobRecord
                              {
                                  JobNumber = j.JobNumber,
                                  // assuming a Jobs -> Customer association;
                                  // the original snippet referenced an undefined "c"
                                  CustomerName = j.Customer.CustomerName
                              };
            result = queryResult.ToArray();
        }
        return result; // the original snippet was missing this return
    }
}
The other class then does a foreach over the data, and uses reflection to find out the field names and values and puts that into an XML file.
As it stands everything works - I just can't help thinking there is a better way of doing it - that perhaps my limited understanding of C# doesn't allow me to see yet.
Personally, I would first ask myself if I really need an interface. That would be the case if the implementing classes are really different by nature (not only by report kind).
If not, i.e. all the implementing classes are basically "Reporters", then yes, there is a more convenient way to do this (sketched below), which is:
Writing a parent abstract Report class
Having a virtual Run method and the CustomerID accessor on it
Inheriting your "Reporter" classes from it
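A minimal sketch of that base class, using the members from the question's IReport; the generic variant at the end is only a suggestion for getting a typed list instead of an Array, which the answer itself doesn't spell out:

using System;
using System.Collections.Generic;

public abstract class Report
{
    public int CustomerID { get; set; }

    // Virtual so each concrete "Reporter" can supply its own query.
    public virtual Array Run()
    {
        return new object[0];
    }
}

// Hypothetical generic variant: callers get a typed list instead of an Array.
public abstract class Report<TRow>
{
    public int CustomerID { get; set; }

    public abstract IList<TRow> Run();
}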