I have two objects. One, the parent, references a Locale. This locale is from a list of locales. When that locale is deleted, I want it to clean up any references to itself from all referencing types (setting the relevant value to null).
Right now, I have a system that walks across all entities that NHibernate is mapping and, by using their class metadata, determines which types reference the locale type. Then, I build a query (using ICriteria) for that referencing type where the property of type Locale equals the locale's Id that I'm trying to delete. Any objects that come back, I set that property to null and then update them.
Question: Is there a better way - hopefully using something built into NHibernate - to instruct an object to remove all references to itself on delete?
Objects:
public class Parent
{
public virtual Guid Id { get; set; }
public virtual Locale Loc { get; set; }
}
public class Locale
{
public virtual Guid Id { get; set; }
}
Mappings:
public class ParentMapping : ClassMap<Parent>
{
    public ParentMapping()
    {
        Id(x => x.Id).GeneratedBy.Guid();
        References(x => x.Loc).Nullable();
    }
}
public class LocaleMapping : ClassMap<Locale>
{
    public LocaleMapping()
    {
        Id(x => x.Id).GeneratedBy.Guid();
    }
}
As requested, here's how I wound up dealing with this problem. I actually used a suggestion originally given by @Fran to come up with a solution.
Solution
This solution is very specific to my type of application and involves using a number of parts of the application working together to achieve my desired result. Specifically, my application is a RESTful web service, powered by WCF, JSON.NET, and NHibernate.
First, I added a reference to all parents in the locale and used a HasMany mapping, so that the locale knew all of the parents that reference it:
public virtual IList<Parent> Parents { get; set; }
and
HasMany(x => x.Parents);
It's also important to point out here that I use lazy loading throughout the application.
While this allowed me to easily delete the locale by using the proper cascade behaviors, it posed a problem in loading/GET scenarios: when I passed the locale into JSON.NET (on its way out the door to the client), JSON.NET would walk the Parents collection and serialize the whole thing. Obviously, this is undesirable, as we're feeding the client much more than they asked for. This is the problem I alluded to in my comment in the OP.
As @Fran mentioned, I could use projections; however, all of my reference lists are accessed through a common endpoint in order to abstract their CRUD operations and reduce the amount of repeated code: all of my reference lists derive from an abstract class called ReferenceListBase. Anyway, I wanted a solution in which the implementing class itself was able to decide how much of it should be sent to the client (serialized).
My solution was to put a [JsonIgnore] attribute on the Parents collection, which, in conjunction with lazy loading, means that JSON.NET never looks at the property and therefore the relationship never gets loaded.
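Putting the pieces together, the Locale ended up looking roughly like this (a sketch assembled from the snippets above; anything beyond what's shown is an assumption):

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

public class Locale
{
    public virtual Guid Id { get; set; }

    // JSON.NET skips this property entirely, so the lazy collection is never
    // touched during serialization and the parents are not sent to the client.
    [JsonIgnore]
    public virtual IList<Parent> Parents { get; set; }
}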
This solution has always kind of felt like a hack, but it has achieved all of the results I want and made adding new reference lists very easy. I hope this helps you; if it doesn't, post a new question, link it here, and I'll try to help you out. :)
Related
So, I've got an aggregate (Project) that has a collection of entities (ProjectVariables) in it. The variables do not have Ids on them because they have no identity outside of the Project Aggregate Root.
public class Project
{
public Guid Id { get; set; }
public string Name { get; set; }
public List<ProjectVariable> ProjectVariables { get; set; }
}
public class ProjectVariable
{
public string Key { get; set; }
public string Value { get; set; }
public List<string> Scopes { get; set; }
}
The user interface for the project is an Angular web app. A user visits the details for the project, and can add/remove/edit the project variables. He can change the name. No changes persist to the database until the user clicks save and the web app posts some json to the backend, which in turns passes it down to the domain.
In accordance with DDD, it's proper practice to have small, succinct methods on the Aggregate roots that make atomic changes to them. An example in this domain could be a method Project.AddProjectVariable(projectVariable).
To keep with this practice, the front-end app needs to track changes and submit something like this:
public class SaveProjectCommand
{
public string NewName { get; set; }
public List<ProjectVariable> AddedProjectVariables { get; set; }
public List<ProjectVariable> RemovedProjectVariables { get; set; }
public List<ProjectVariable> EditedProjectVariables { get; set; }
}
I suppose it's also possible to post the now edited Project, retrieve the original Project from the repo, and diff them, but that seems a little ridiculous.
This object would get translated into Service Layer methods, which would call methods on the Aggregate root to accomplish the intended behaviors.
So, here's where my questions come...
ProjectVariables have no Id. They are transient objects. If I need to remove them, as passed in from the UI tracking changes, how do I identify the ones that need to be removed on the Aggregate? Again, they have no identity. I could add surrogate Ids to the ProjectVariables entity, but that seems wrong and dirty.
Does change tracking in my UI seem like it's making the UI do too much?
Are there alternative mechanisms? One thought was to just replace all of the ProjectVariables in the Project Aggregate Root every time it's saved. Wouldn't that have me adding a Project.ClearVariables() and then using Project.AddProjectVariable() to replace them? Project.ReplaceProjectVariables(List) seems very "CRUDish".
Am I missing a key component? It seems to me that DDD atomic methods don't mesh well with a pattern where you can make a number of different changes to an entity before committing it.
In accordance with DDD, it's proper practice to have small, succinct methods on the Aggregate roots that make atomic changes to them.
I wouldn't phrase it that way. The methods should, as much as possible, reflect cohesive operations that have a domain meaning and correspond with a verb or noun in the ubiquitous language. But the state transitions that happen as a consequence are not necessarily small; they can change vast swaths of Aggregate data.
I agree that it is not always feasible though. Sometimes, you'll just want to change some entities field by field. If it happens too much, maybe it's time to consider changing from a rich domain model approach to a CRUD one.
ProjectVariables have no Id. They are transient objects.
So they are probably Value Objects instead of Entities.
You usually don't modify Value Objects but replace them (especially if they're immutable). Project.ReplaceProjectVariables(List) or some equivalent is probably your best option here. I don't see it as being too CRUDish. Pure CRUD here would mean that you only have a setter on the Variables property and aren't even allowed to create a method and name it as you want.
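To illustrate, a replace-style method on the aggregate might look something like the sketch below (the field and method shapes are mine, not from the question):

using System;
using System.Collections.Generic;

public class Project
{
    private readonly List<ProjectVariable> _projectVariables = new List<ProjectVariable>();

    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public IReadOnlyList<ProjectVariable> ProjectVariables => _projectVariables;

    // Variables are treated as value objects: the whole set is replaced at once
    // rather than being edited item by item.
    public void ReplaceProjectVariables(IEnumerable<ProjectVariable> variables)
    {
        _projectVariables.Clear();
        _projectVariables.AddRange(variables);
    }
}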
I selected ServiceStack OrmLite for my project, which is a purely data-oriented application. I want to allow the end user to create his own object types, defined in an XML format, that will be used to generate classes at runtime using CodeDOM.
I will also be defining some "system" objects required by the application (e.g. User), but I cannot foresee all the properties the end user will use and therefore I am looking for a way to allow extending the classes I create at design time. Sample below:
public class User
{
public Guid Uid { get; set; }
public String Username { get; set; }
public String Password { get; set; }
}
The end user wants to have an Email and an Address. He should be able to add those two properties to the class above, and the whole class would become the following (which can still be used by OrmLite, since it allows overwriting):
public class User
{
public Guid Uid { get; set; }
public String Username { get; set; }
public String Password { get; set; }
public String Email{ get; set; }
public String Address { get; set; }
}
I know that there is a risk that doing so could crash the system (if the class is already instantiated), so I am looking for the best way to avoid this issue and achieve what I need.
It seems that there are two parts to what you're doing here. You need to create types dynamically to support the additional properties. You also need to ensure that you never end up with duplicate types in your AppDomain, i.e. two different definitions of User.
Runtime type generation
The various suggestions already given handle how to create the types. In one project, we had something similar. We created a base class that had the core properties and a dictionary to store the 'extension' properties. Then we used Reflection.Emit to create a derived type that had the desired properties. Each property definition simply read from or wrote to the dictionary in the base class. Since Reflection.Emit entails writing low-level IL code, it seems complex at first. We wrote some sample derived classes in another class library and compiled them. These were examples of what we'd actually need to achieve at runtime. Then we used ildasm.exe to see what code the compiler produced. This made it quite easy to work out how we could generate the same code at runtime.
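As a rough illustration of that pattern applied to the User example from the question (names here are invented; the real derived type is emitted at runtime rather than written by hand), the hand-written "template" class looked conceptually like this:

using System;
using System.Collections.Generic;

// Base class compiled at design time: core properties plus a bag for extras.
public class ExtensibleUser
{
    protected readonly Dictionary<string, object> Extensions = new Dictionary<string, object>();

    public Guid Uid { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}

// What the emitted type needs to look like: each extra property simply
// reads from or writes to the dictionary on the base class.
public class ExtendedUser : ExtensibleUser
{
    public string Email
    {
        get { object v; return Extensions.TryGetValue("Email", out v) ? (string)v : null; }
        set { Extensions["Email"] = value; }
    }

    public string Address
    {
        get { object v; return Extensions.TryGetValue("Address", out v) ? (string)v : null; }
        set { Extensions["Address"] = value; }
    }
}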
Avoiding namespace collisions
Your second challenge is to avoid having duplicate type names. We appended a guid (with invalid characters removed) to the name of each generated type to make sure this never happened. Easy fix, though I don't know whether you could get away with that with your ORM.
If this is server code, you also need to consider the fact that assemblies are never unloaded in .NET. So if you're repeatedly generating new types at runtime, your process will continue to grow. The same will happen in client code, but this may be less of an issue if you don't expect the process to run for an extended period of time.
I said assemblies are not unloaded; however, you can unload an entire AppDomain. So if this is server code you could have the entire operation run in its own appdomain, then tear it down afterwards to ensure that the dynamically created types are unloaded.
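A minimal sketch of that idea (TypeGenerationWorker is a placeholder for whatever does the emitting):

// Generate and use the dynamic types inside a throwaway AppDomain so the
// emitted assemblies can be reclaimed when the domain is unloaded.
var domain = AppDomain.CreateDomain("DynamicTypeWork");
try
{
    // The worker must derive from MarshalByRefObject so only a proxy
    // crosses the domain boundary.
    var worker = (TypeGenerationWorker)domain.CreateInstanceAndUnwrap(
        typeof(TypeGenerationWorker).Assembly.FullName,
        typeof(TypeGenerationWorker).FullName);
    worker.Run();
}
finally
{
    AppDomain.Unload(domain); // the emitted assemblies go away with the domain
}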
Check out the ExpandoObject, which provides dynamic language support for doing something like this. You can use it to add additional properties to your POCOs at runtime. Here's a link on using .NET's DLR features: http://msdn.microsoft.com/en-us/library/system.dynamic.expandoobject%28v=vs.100%29.aspx
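A small sketch of the ExpandoObject approach:

using System;
using System.Collections.Generic;
using System.Dynamic;

dynamic user = new ExpandoObject();
user.Username = "jdoe";
user.Password = "secret";

// Properties can be added at runtime...
user.Email = "jdoe@example.com";

// ...and the same object can be read as a dictionary of property names to values.
var asDictionary = (IDictionary<string, object>)user;
asDictionary["Address"] = "1 Main St";
Console.WriteLine(asDictionary["Email"]); // jdoe@example.com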
Why not use key-value pairs for all its properties, or at least the dynamic ones?
http://msdn.microsoft.com/en-us/library/system.collections.hashtable.aspx
You can do it the way you're describing with Reflection, but it will take a performance hit; this way also allows removal of properties.
The project I'm currently working on has a similar requirement. We have a system already in production and had a client request additional fields.
We solved this by simply adding a CustomFields property to our model.
public class Model: IHasId<Guid>
{
[PrimaryKey]
[Index(Unique = true)]
public Guid Id { get; set; }
// Other Fields...
/// <summary>
/// A store of extra fields not required by the data model.
/// </summary>
public Dictionary<string, object> CustomFields { get; set; }
}
We've been using this for a few weeks with no issues.
An additional benefit we found from this was that each row could have its own custom fields so we could handle them on a per record basis instead of requiring them for every record.
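Usage looks roughly like this (a sketch; db is assumed to be an open OrmLite connection, and OrmLite stores complex-typed properties such as the dictionary as a serialized text column by default):

using System;
using System.Collections.Generic;
using ServiceStack.OrmLite;

var record = new Model
{
    Id = Guid.NewGuid(),
    CustomFields = new Dictionary<string, object>
    {
        ["Email"] = "user@example.com",
        ["Address"] = "1 Main St"
    }
};

db.Save(record);

var loaded = db.SingleById<Model>(record.Id);
var email = (string)loaded.CustomFields["Email"];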
I'm using AutoFixture to generate data for a structure involving a parent object and complex child objects, like this:
public class Parent
{
public int Id { get; set; }
public string Name { get; set; }
public Child[] Children { get; set; }
}
public class Child
{
public string Name { get; set; }
public int ParentId { get; set; }
}
Is there a way to automatically set the property ParentId of the generated Child object to the id assigned to the parent? Right now my solution looks like this, which isn't very pretty:
var parent = fixture.Build<Parent>().Without(p => p.Children).CreateAnonymous();
parent.Children = fixture.CreateMany<Child>(10).ToArray();
foreach (var i in parent.Children)
{
i.ParentId = parent.Id;
}
It feels like there's a better way to do this that I'm missing. I looked into creating a custom ISpecimenBuilder but didn't manage to solve it that way either.
AutoFixture is based on a set of rules and assumptions about the API it may be asked to work with. Consider that it's been created and compiled without any prior knowledge of the Child and Parent classes, or any other types in a given API. All it has to work with is the public API.
Think of AutoFixture as a very dim programmer who doesn't even understand your language (not even English). The more fool-proof you can make your API, the easier it will be to use AutoFixture with it.
The problem with circular references like the Parent/Child relationship described here is that it breaks encapsulation. You'll need to create at least one of the class instances initially in an invalid state. That it's difficult to make AutoFixture work with such an API should mainly be taken as a warning sign that the API might benefit from refactoring.
Additionally, the .NET Framework Design Guidelines recommend against exposing arrays as properties - particularly writable properties. Thus, with a better encapsulated design, the API might be much easier to work with, both for AutoFixture and for yourself and your colleagues.
Given the API above, I don't see any way this can be made much easier to work with. Consider how to remove the circular reference and make collection properties read-only, and it will be much easier.
For the record, I haven't written an API with a circular reference for years, so it's quite possible to avoid those Parent/Child relations.
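For what it's worth, a better-encapsulated shape along those lines might look like the following sketch (mine, not part of the original question):

using System.Collections.Generic;

public class Parent
{
    private readonly List<Child> _children = new List<Child>();

    public Parent(int id, string name)
    {
        Id = id;
        Name = name;
    }

    public int Id { get; private set; }
    public string Name { get; private set; }
    public IReadOnlyCollection<Child> Children => _children;

    // The parent wires up the back-reference itself, so a Child can never
    // exist with a ParentId that doesn't match its actual parent.
    public Child AddChild(string name)
    {
        var child = new Child(name, Id);
        _children.Add(child);
        return child;
    }
}

public class Child
{
    public Child(string name, int parentId)
    {
        Name = name;
        ParentId = parentId;
    }

    public string Name { get; private set; }
    public int ParentId { get; private set; }
}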
My model looks something like this:
public class Product
{
public string Name {get; set;}
public string Description {get; set;}
public double Price {get; set;}
public List<string> Features {get; set;}
}
I want my database table to be flat - the List should be stored as a delimited string:
Feature one|Feature two|Feature three, for example.
When retrieved from the db, each of those items should be placed back into the List.
Is this possible?
I'm doing the very same in my current project, only I'm persisting a collection of enums as pipe-delimited numbers. It works the same way.
public class Product
{
    protected string _features; // this is where we'll store the pipe-delimited string

    public List<string> Features
    {
        get
        {
            if (string.IsNullOrEmpty(_features))
                return new List<string>();
            return _features.Split(new[] { "|" }, StringSplitOptions.None).ToList();
        }
        set
        {
            _features = string.Join("|", value);
        }
    }
}
public class ProductMapping : ClassMap<Product>
{
    public ProductMapping()
    {
        Map(x => x.Features).CustomType(typeof(string)).Access.CamelCaseField(Prefix.Underscore);
    }
}
I implemented something similar for the MySQL set data type, which is a comma-separated list in the db but a list of strings in the entity model. It involved using a custom data type in NHibernate, based on the PrimitiveType class. You wire this in using the mappings and the .CustomType<CustomType>() method on a map.
If you want, I can send you a code snippet for the custom class.
I also implemented something similar for a Point3D struct. As cdmdotnet said, you basically want to implement an IUserType that will pack/unpack Features into a single string via the NullSafeSet/NullSafeGet methods.
You may also need to implement the Equals() method, which is a little subtle. The reason why is best illustrated by an example:
Product p = session.Load<Product>(...);
p.Features.Add("extra feature");
session.Save(p);
The thing is, NHibernate upon hydration stores a reference to p.Features, and compares it to the value of p.Features upon a save request. For immutable property types this is fine, but in the above example, these references are identical, so the effective comparison is
var x = p.Features;
var changed = !Equals(x, x);
With a standard Equals implementation, changed will obviously always be false here, so the modification is never detected.
How should one deal with this? I have no idea what the best practice is, but solutions are:
Make IUserType.Equals(object x, object y) always return false. This will force the packed string to be rebuilt and a database call to be made every single time the Product is saved, regardless of whether the Product has been semantically changed or not. Whether or not this is an issue depends on any number of factors (size/count of Feature objects, whether Product objects are saved when not changed, how many Product objects you have, etc.).
Make Features an IList<string> and implement a ChangeAwareList<T> : IList<T> which is able to track changes (or keeps a copy of its original). Implement IUserType.Equals(object x, object y) to check whether x/y are ChangeAwareList instances and implement the necessary logic to see if the list really has changed. This is the solution I went with in the end (a rough sketch appears below).
Maybe you could reuse code from the NHibernate GenericListType type. At the time I implemented the previous solution I didn't have enough experience to have a go at this.
If you have some prior experience with NHibernate, I hope this helps get you started. If not, let me know and I will try to put together a more verbose solution.
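Here is a very rough sketch of the ChangeAwareList idea (the original implements IList<T> from scratch; deriving from List<T> here just keeps the example short):

using System.Collections.Generic;
using System.Linq;

public class ChangeAwareList<T> : List<T>
{
    private readonly List<T> _original;

    public ChangeAwareList(IEnumerable<T> items) : base(items)
    {
        // Keep a snapshot of the original contents for later comparison.
        _original = new List<T>(this);
    }

    // IUserType.Equals can call this to decide whether the list is dirty
    // instead of relying on reference equality.
    public bool HasChanges
    {
        get { return Count != _original.Count || !this.SequenceEqual(_original); }
    }
}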
What approach should I take when serializing an object to the database, keeping in mind that the objects properties will change in time?
At first, I thought of making sure all my objects implement an interface, but what happens when an interface loses a property and gains another? What happens to the existing serialized data in the database upon restoration?
Should I use abstract classes or interfaces? What is the best approach people have used?
Thank you
The partial implementation below may give you some ideas for flexibly storing and retrieving properties. Managing property addition is not a problem: you just start storing values for the new property. When you drop a property, you could disallow new values for it going forward while preserving the existing data, or remove all data associated with that property.
/// <summary>
/// The data storage could look something like this
/// create table PersistedObject (ObjectId int )
/// create table PersistedProperty (PropertyId int , PropertyName varchar(50) )
/// create table Data (ValueId int, PropertyId int, SerializedValue image )
/// </summary>
interface IFlexiblePersistence
{
object this[string propertyName] { get; set;}
void Persist();
}
class Person : IFlexiblePersistence
{
Dictionary<string, object> data;
public Person(int personId)
{
data = PopulatePersonData(personId);
}
public object this[string propertyName]
{
get { return data[propertyName]; }
set
{
data[propertyName] = value;
Persist();
}
}
public void Persist()
{
LoopThroughAllValuesAndSaveToDB();
}
}
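Usage would then look something like this:

var person = new Person(42);

// Reading and writing go through the indexer; each write persists immediately.
person["Email"] = "someone@example.com";
var email = (string)person["Email"];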
What will the lifetime of the objects be? If it is only short-term, you could use DataSets and update the properties on the fly.
That's a time-honored question with no one correct answer.
It depends greatly on how the data will be used in your application, the amount of RAM you want to cap usage at, and the anticipated largest size of your database. All of those factors will push you toward or away from various approaches.
This article discusses datatables and their use within that specific app.
This article discusses the use of data transfer objects.
This forum discussion gives many opinions on data transfer objects vs domain modeling.
Approaches 2 and 3 above are fully compatible with David Silva Smith's IFlexiblePersistence approach listed in his reply. However, you potentially use more memory and lose much of the performance gains from doing the "typical" DTO. If memory and speed aren't a concern, his approach will likely be very simple.
It depends what you mean by "serialization" (it is an overloaded term). If you mean your existing object as columns, then an ORM: EF, LINQ-to-SQL, NHibernate, etc. If you mean your object as a single varbinary(max) / varchar(max):
Whatever else you do, don't use BinaryFormatter - this will break if you change the fields (for example, changing a regular property into an auto-property). You need something contract-based, most of which will be fine with changes. Suggestions:
XmlSerializer / DataContractSerializer / Json.NET (text)
protobuf-net (binary, uses google's "protocol buffers" wire format)
With the above you can generally use pretty standard DTOs; for example (compatible with DataContractSerializer and protobuf-net):
[DataContract]
public class Customer {
    [DataMember(Order=1)]
    public int Id {get;set;}
    [DataMember(Order=2)]
    public string Name {get;set;}
}
I wouldn't use NetDataContractSerializer for similar reasons to BinaryFormatter - fine for short-lived requests, but not good for storage. Note that most serializers use concrete classes, since very few include the type metadata in the output (and hence would need additional configuration to tie ICustomer to your Customer class).
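For completeness, a round trip with DataContractSerializer to a byte[] (suitable for a varbinary(max) column) looks roughly like this:

using System.IO;
using System.Runtime.Serialization;

var serializer = new DataContractSerializer(typeof(Customer));

// Serialize the DTO to a byte array that can be stored in the database.
byte[] blob;
using (var ms = new MemoryStream())
{
    serializer.WriteObject(ms, new Customer { Id = 123, Name = "Fred" });
    blob = ms.ToArray();
}

// Later, restore the object from the stored bytes.
Customer restored;
using (var ms = new MemoryStream(blob))
{
    restored = (Customer)serializer.ReadObject(ms);
}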