Has anyone figured out the best way to return an object hierarchy with
a multiple record set stored procedure?
At the moment I am pulling out each of the record sets and mapping
them individually into their types and then constructing the main
type.
Example: an account object with roles and phone numbers. A stored proc
returns 3 record sets: one with the account, one with the phone
numbers for the account, and the last with the account's roles.
Is there a better way of combining the record sets into one so that I can cast
directly?
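For concreteness, the per-result-set approach described in the question can be sketched with plain ADO.NET; the type, column, and procedure names below are assumptions for illustration, not from the original:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical types, for illustration only.
public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<string> PhoneNumbers { get; } = new List<string>();
    public List<string> Roles { get; } = new List<string>();
}

public static class AccountLoader
{
    public static Account Load(SqlConnection conn, int accountId)
    {
        using (var cmd = new SqlCommand("GetAccountGraph", conn)) // assumed proc name
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@AccountId", accountId);
            using (var reader = cmd.ExecuteReader())
            {
                // Result set 1: the account itself.
                reader.Read();
                var account = new Account
                {
                    Id = reader.GetInt32(reader.GetOrdinal("Id")),
                    Name = reader.GetString(reader.GetOrdinal("Name"))
                };

                // Result set 2: the account's phone numbers.
                reader.NextResult();
                while (reader.Read())
                    account.PhoneNumbers.Add(reader.GetString(reader.GetOrdinal("Number")));

                // Result set 3: the account's roles.
                reader.NextResult();
                while (reader.Read())
                    account.Roles.Add(reader.GetString(reader.GetOrdinal("RoleName")));

                return account;
            }
        }
    }
}
```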
The next feature in line (for version 0.14, planned for release in the next couple of weeks) is the With clause for queries, which will populate an object hierarchy. Having seen this question, I will think about how to expose enough of the implementation to make it usable in a stored-procedure scenario.
New to NHibernate and C#.
I have these two classes:
public class User // simplified version
{
    private long _id;
    private String _username; // unique
    private ISet<Role> _roles;
    // Properties
}
and
public class Role
{
    private long _id;
    private String _name;
    // Properties
}
Is it better to store a reference to the Role class (as done above) or just store the IDs of the Role class (so: private ISet<long> _roles)? Why?
Any pros and cons I should be aware of?
Well, firstly, NHibernate is an ORM.
... In object-oriented programming, data management tasks act on object-oriented (OO) objects that are almost always non-scalar values. For example, consider an address book entry that represents a single person along with zero or more phone numbers and zero or more addresses. This could be modeled in an object-oriented implementation by a "Person object" with attributes/fields to hold each data item that the entry comprises: the person's name, a list of phone numbers, and a list of addresses. The list of phone numbers would itself contain "PhoneNumber objects" and so on. The address book entry is treated as a single object by the programming language (it can be referenced by a single variable containing a pointer to the object, for instance). Various methods can be associated with the object, such as a method to return the preferred phone number, the home address, and so on....
Secondly, whether it is better to do A or B depends largely on the use case.
But I can say, (based on my experience) that if:
there are two objects in our domain, e.g. User and Role
we can represent them as `one-to-many` and `many-to-one` (bidirectional mapping),
I will always map them via references, because there is no benefit in mapping them as `long ReferenceId` and `ISet<long> ReferenceIds`.
The only use case I can imagine for mapping just IDs would be using a stateless session to fetch some huge amount of data. But even in that scenario, we can use projections.
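For illustration, a reference-based mapping of the two classes above might look like this with Fluent NHibernate (a sketch: the join-table and column names are assumptions, and the fields are assumed to be exposed as properties):

```csharp
using System.Collections.Generic;
using FluentNHibernate.Mapping;

// Property-based versions of the question's classes, for mapping purposes.
public class User
{
    public virtual long Id { get; set; }
    public virtual string Username { get; set; }
    public virtual ISet<Role> Roles { get; set; } = new HashSet<Role>();
}

public class Role
{
    public virtual long Id { get; set; }
    public virtual string Name { get; set; }
}

public class UserMap : ClassMap<User>
{
    public UserMap()
    {
        Id(x => x.Id);
        Map(x => x.Username).Unique();
        // Many-to-many through an assumed UserRole join table.
        HasManyToMany(x => x.Roles)
            .Table("UserRole")
            .ParentKeyColumn("UserId")
            .ChildKeyColumn("RoleId");
    }
}

public class RoleMap : ClassMap<Role>
{
    public RoleMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);
    }
}
```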
"Storing" the IDs doesn't sound like a good idea to me. In fact, the database schema would look the same either way, so the difference is not in how you store the data, only in how you design your classes. And IDs aren't very useful compared to actual objects.
Here some pros and cons anyway:
Advantage of mapping IDs:
You could serialize your entity more easily, because the object graph ends here and you wouldn't end up in serializing too many objects. (Note that serializing entities has some other issues and is not recommended in many cases.)
Advantages of mapping objects:
You can easily navigate to the objects without DB interaction, thus taking full advantage of using an ORM (maintainability).
You can make use of batch-size, which avoids the N+1 problem without optimizing data access in your problem domain (performance and maintainability).
When you build the domain model, it should use proper references rather than ID values.
Advantages:
You can have a proper domain model, so programming becomes easier (if you want to get the list of role names per user, it's pretty straightforward with the domain model, while with an ID list you would first have to load each Role by its ID).
Easy to query (using either QueryOver / Linq or HQL)
Efficient SQL (if you want to load the user and roles, you can use Future to load both in a single round trip if you use references, but if you use IDs you have to issue multiple queries).
I don't see any disadvantages of using references as long as mapping is correct.
However, I'd rather store the ID of the entity, or a DTO, if the requirement is to keep an object across multiple sessions. For example, if you want to keep the user in the web Session object, I would not store any domain objects there; rather, I'd store the ID or a DTO object.
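As a sketch of the Future point above (QueryOver API; the class shapes follow the question, the helper name is assumed):

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;

// Minimal property-based versions of the question's classes.
public class Role { public virtual long Id { get; set; } public virtual string Name { get; set; } }
public class User
{
    public virtual long Id { get; set; }
    public virtual string Username { get; set; }
    public virtual ISet<Role> Roles { get; set; }
}

public static class UserQueries
{
    // Loads a user together with its roles without a second query for the
    // collection: the join fetch is deferred via Future and sent in the
    // same round trip as any other pending future queries on the session.
    public static User LoadWithRoles(ISession session, string username)
    {
        return session.QueryOver<User>()
            .Where(u => u.Username == username)
            .Fetch(u => u.Roles).Eager   // eager join-fetch of the collection
            .Future()                    // deferred execution, batched
            .FirstOrDefault();
    }
}
```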
I'm working with RavenDB, and I'm trying to do something. I have a base type, Principal, from which two classes are derived: User and ApplicationInstance. In my application, instances of User and ApplicationInstance must be created and stored frequently. The thing is, though, that I need to also be able to query all of the Principal objects stored in the database at once, determine whether a given Principal is a User or an ApplicationInstance, and then query for the entire User or ApplicationInstance object.
How can I do this?
You can define a Multi Map Index that uses both the User and ApplicationInstance collections as its source.
If you define your index using C# code (by implementing AbstractMultiMapIndexCreationTask<>), you'll have to call AddMap twice to achieve that (as illustrated in the blog post linked above).
If you define the index using Raven Studio, simply click the Add Map button and you'll get a new text area which allows you to define an additional Map.
Note, however, that the output structure of both maps must have the same properties (pretty much as you would do with UNION in SQL).
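A minimal multi-map index along those lines (a sketch: the Name property and the document class shapes are assumptions):

```csharp
using System.Linq;
using Raven.Client.Indexes;

// Assumed document types, for illustration.
public class User { public string Name { get; set; } }
public class ApplicationInstance { public string Name { get; set; } }

public class Principals_All : AbstractMultiMapIndexCreationTask<Principals_All.Result>
{
    // Both maps must emit the same shape (like a UNION in SQL).
    public class Result
    {
        public string Name { get; set; }
        public string Kind { get; set; }
    }

    public Principals_All()
    {
        AddMap<User>(users =>
            from u in users
            select new { Name = u.Name, Kind = "User" });

        AddMap<ApplicationInstance>(apps =>
            from a in apps
            select new { Name = a.Name, Kind = "ApplicationInstance" });
    }
}
```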
What are you trying to do? If the end result is to query all Principals, then load the entire User or AppInstance, why not just go straight for querying all Users or all AppInstances?
Raven won't store base classes; it will only store the most derived type (User or AppInstance, in your case). So you'll have a collection of Users and a collection of AppInstances.
If you really need to query both at once, you can use Multi Map indexes. You can also change the entity name and store both in a single Principal collection.
But it's difficult to recommend a solution without knowing what you're trying to do. Explain what you're trying to do and we'll tell you the proper way to get there.
I have a question about how my classes should be mapped into Azure Table Storage entities. Say I have two entities, BikeRider and BikeRace. In my C# code, I have 2 classes, and each one has a property that is a collection of the other. So the BikeRider class has a List property, and vice versa. If that's not ideal here, feel free to opine, but that's what I've got today.
How does that map into an Azure Table? I've found questions on SO here that discuss how to store a many to many relationship, but my question is more specifically how a List property should be handled when you save the object containing that property.
thanks!
UPDATE
While I was waiting for a reply to this (and thinking I might not get any), I came up with this solution:
The C# BikeRider and BikeRace types both have a List property of the type of the other, as I described above. Each has an instance method to Add an instance of the other to their list (i.e. BikeRace has a method called "AddBikeRider" that adds an instance of BikeRider to the BikeRace.BikeRiders list, and vice versa).
However, when I save to Azure Table Storage, there are 3 tables with the following information:
BikeRider Table
PartitionKey, RowKey
BikeRider info

RiderRace Table
PartitionKey: BikeRider RowKey
RowKey: BikeRace RowKey
Other info

BikeRace Table
PartitionKey, RowKey
A separate value for the ID of every BikeRider in the race (the BikeRider RowKey - there could be many).
This way, if you are querying from the BikeRider and trying to get the BikeRaces he's been in, you have the partition key, and all of the rows in that partition are the BikeRaces. If you're querying from the BikeRace and trying to get all the BikeRiders in a race, you have the BikeRider IDs to use in querying the BikeRider table. You're not duplicating data beyond storing those keys in multiple places, so you don't have to update multiple storage locations if your data changes.
The challenge here is what Ming has posted - if the Table Service save method throws an exception for a type with a List property, can you create a simpler interface with its own preliminary save method that handles the list type and turns it into a save-able data type first, to persist it to the Azure Tables in the way I described? Is this too complicated, or too brittle?
This is not supported. Table Storage does not support list properties. If you use a list property in your CLR data model, WCF Data Services will try to serialize it, resulting in an exception.
To simulate a relationship, you can create a property on the BikeRace entity, such as BikeRiderID.
If you need a read only collection, you can create a method (instead of a property) in the BikeRider class, such as GetBikeRaces. Inside the implementation, you query the BikeRace table to find all entities whose BikeRiderID is the current bike rider's ID. If you want to update the list of bike races, you also need to update the BikeRider table using a separate request.
You can refer to http://blog.smarx.com/posts/one-to-many-relationships-in-windows-azure-tables for a sample.
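A sketch of that one-to-many shape with the classic storage SDK (the entity and table names are assumptions):

```csharp
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage.Table;

// One row per (rider, race) pair instead of a List property on either entity.
public class RaceEntry : TableEntity
{
    public RaceEntry() { }

    public RaceEntry(string bikeRiderId, string bikeRaceId)
    {
        PartitionKey = bikeRiderId; // all of a rider's races share a partition
        RowKey = bikeRaceId;
    }
}

public static class RaceEntryQueries
{
    // All races a given rider has entered: a single-partition scan.
    public static IEnumerable<RaceEntry> GetBikeRaces(CloudTable entries, string riderId)
    {
        var filter = TableQuery.GenerateFilterCondition(
            "PartitionKey", QueryComparisons.Equal, riderId);
        return entries.ExecuteQuery(new TableQuery<RaceEntry>().Where(filter));
    }
}
```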
I have a table in which some of the columns are unknown at compile time. Such columns could be either of an integer value or some Enum value. There is a table that holds all the names of such dynamic columns and also holds each column's type. This "metatable" has the following columns:
DynamicColumnId (Pk)
Name
TypeId (Integer / Enum, as Fk from a separate table)
Integer columns have the Name from this table, whereas Enum columns are Fk columns from a table that has that Name, with some modification (e.g. a "DynamicTable" prefix).
The only solution I could think of for this situation is using Reflection.Emit to dynamically create an entity class and a corresponding mapping class. Admittedly, I'm new to NHibernate / Fluent NHibernate, and it seems like a relatively simple hierarchy between the tables, so I wanted to verify my solution isn't as ugly as it initially appears...
I would also welcome solutions that completely disregard my table hierarchy in order to effectively achieve the same results (that is, to enumerate the rows of the dynamic table, going over all the columns, with knowledge of whether they are Enums and, if they are, their possible values as well).
(Edit: Additional information re problem domain)
I initially included minimal details, as to avoid Too-Much-Info related confusion.
This description is much more complex, but it unravels the motives behind this design.
The application involved is designed to automate log/dump analysis. Analysis scenarios are frequently provided by the log/dump experts, so, in order to streamline the typical requirements => implementation => verification cycle, such analysis scenarios are implemented by the experts directly as IronPython code snippets, with some domain-specific constructs injected into the snippets' scope. Each snippet has a "context" for which it is relevant. Examples of a "context" could be "product," "version," etc. So, the snippet itself is only invoked in certain contexts - this helps simplify the Python code by eliminating branching (you could view it as aspect-oriented programming, to some extent). A non-expert could use the application, with a given code-context database, to analyze a log/dump after choosing values for the various contexts.
When an expert decides that a new context is required for cataloging a certain code snippet, he could add a context, indicating the possible values it could have. Once a new context is added to the database, a non-expert that runs an analysis will be given the option to choose a value for the newly-added context.
The "dynamic table" is the table that associates a code snippet with values of the various contexts (columns) that existed when the snippet was issued, plus default values for the columns that did not exist at that time.
I won't claim to fully understand your scenario, but it seems to me that you'd be better off using a key-value store such as Redis or a schema-less database like CouchDB instead of SQL. This doesn't seem like a problem for a relational database, but if you really need to use an RDBMS, I'd map NHibernate as closely as possible to the real schema (DynamicColumnId, Name, TypeId) and then build whatever data structure you need on top of that.
I'm thinking of building an ecommerce application with an extensible data model using NHibernate and Fluent NHibernate. By having an extensible data model, I have the ability to define a Product entity and allow a user of the application to extend it with new fields/properties of different data types, including custom data types.
Example:
Product can have additional fields like:
Size - int
Color - string
Price - decimal
Collection of ColoredImage - name, image (e.g. "Red", red.jpg (binary file))
An additional requirement is to be able to filter the products by these additional/extended fields. How should I implement this?
Thanks in advance.
I think this link describes kind of what you want...
http://ayende.com/Blog/archive/2009/04/11/nhibernate-mapping-ltdynamic-componentgt.aspx
More info on dynamic-component:
http://www.mattfreeman.co.uk/2009/01/nhibernate-mapping-with-dynamic-component/
http://bartreyserhove.blogspot.com/2008/02/dynamic-domain-mode-using-nhibernate.html
The idea behind dynamic-component is that you can build your data model without having a one-to-one mapping of database columns to properties. Instead you have a single dictionary property that can hold the data of as many properties as you like. This way, when you fetch the entity, the dictionary gets the data of all the columns configured to belong in there. You can extend the database table's schema to include more columns, and that will be reflected in the data model if you update the mapping file accordingly (manually or through code at application start).
To be honest, I do not know how you can query such an entity using the "attributes" property, but if I had to guess I would use an IN statement on it.
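For reference, the entity side of a dynamic-component mapping might look like this (a sketch; the mapping element names follow NHibernate's hbm.xml vocabulary, but the column names are assumptions):

```csharp
using System.Collections;

// The Attributes dictionary is mapped with <dynamic-component name="Attributes">
// in the hbm.xml; each <property> element inside it maps one extra column.
public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    // Keys are the column names declared in the mapping; values are typed
    // according to the mapping (int, string, decimal, ...).
    public virtual IDictionary Attributes { get; set; } = new Hashtable();
}

// Usage sketch (attribute names assumed):
//   product.Attributes["Size"] = 42;
//   product.Attributes["Color"] = "Red";
```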
One of the options is EAV model (Entity-Attribute-Value).
This model is a good fit if you have a single class in your domain whose table representation would result in a wide table (a large number of columns, many null values).
It was originally designed for the medical domain, where objects may have thousands of columns (symptoms).
Basically you have
Entity (Id) (for example your Product table)
Attribute(Id, ColumnName)
Value(EntityId, AttributeId, value)
You can have some additional metadata tables.
Value is better split into multiple tables, one per type.
For example:
ShortStringValue(EntityId, AttributeId, Value nvarchar(50));
LongStringValue(EntityId, AttributeId, Value nvarchar(2048));
MemoValue(EntityId, AttributeId, Value nvarchar(max));
IntValue(EntityId, AttributeId, Value int);
or even a complex type:
ColorComponentsValue(EntityId, AttributeId, R int, G int, B int );
One of the things from my experience is that you should not have EAV for everything. Just have EAV for a single class, Product for example.
If you have to use extensibility for different base classes, let it be a separate set of EAV tables.
Another thing is that you have to devise a smart materialization strategy for your objects.
Do not pivot these values into a wide row set; pivot just the small number of columns your query criteria need, then return a narrow collection of Value rows for each of the selected objects. Otherwise the pivoting would involve a massive join.
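In C# terms, the narrow Value rows described above could be materialized into a per-entity dictionary rather than a wide pivoted row (a sketch; the class and property names are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;

// A narrow EAV value row, as loaded from e.g. ShortStringValue joined
// with the Attribute table to resolve the attribute name.
public class AttributeValue
{
    public int EntityId { get; set; }
    public string AttributeName { get; set; }
    public object Value { get; set; }
}

public static class EavMaterializer
{
    // Groups the narrow rows into one attribute dictionary per entity,
    // avoiding a wide pivot and its massive joins.
    public static Dictionary<int, Dictionary<string, object>> Materialize(
        IEnumerable<AttributeValue> rows)
    {
        return rows
            .GroupBy(r => r.EntityId)
            .ToDictionary(
                g => g.Key,
                g => g.ToDictionary(r => r.AttributeName, r => r.Value));
    }
}
```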
There are some points to consider:
- Each value takes storage space for foreign keys.
- Row-level locking, for example, will behave differently for such queries, which may result in performance degradation.
- It may result in larger index sizes.
Actually, in a shallow hello-world test, my EAV solution outperformed its static counterpart on a 20-column table, in a query with 4 columns involved in the criteria.
Possible option would be to store all extra fields in an XML structure and use XPath/XQuery to retrieve them from the database.
Each extensible entity in your application will have an XML field, like ExtendedData, which will contain all extra properties.
Another option is to use non-relational databases, which are typically well suited to this kind of thing.
NoSQL databases (CouchDB, MongoDB, Cassandra...) let you define your property fields dynamically; you could add fields to your Product class whenever you want.
I'm searching for a similar thing and just found N2 CMS (http://n2cms.com), which implements domain extensibility in quite a usable way. It also supports querying over extension fields, which is important. The only downside I found is that it's implemented using HQL, so it would take some time to reimplement it to be able to query using QueryOver/Linq, but the main idea and mappings are there. Take a look at the ContentItem, DetailCollection, ContentDetail classes, their mappings, and QueryBuilder/DetailCriteria.