I posted this question at DevExpress support, but I know I might get answers more quickly here. This is a huge issue for me and I have been pulling my hair out for two days without any success. Hopefully people here have experience with this framework.
My problem is related to the association table that is created by default when an M-N relation exists between two Business Objects.
The idea is this:
Let's assume I have these objects -> UNITS and USERS.
For each USER within a UNIT I need to store a STATUS, so I created a new attribute in SQL called STATUS_IN_UNIT.
The problem arises when I want to access this property programmatically, which of course is impossible since I do not have any object associated with this table. The only way is to query the database directly.
What I want to achieve is to show this table as a ListView inside the UNIT DetailView. This is MANDATORY, since USERS attached to a UNIT must be enabled and disabled. I have seen these threads:
https://www.devexpress.com/Support/Center/Example/Details/E2334
https://www.devexpress.com/Support/Center/Question/Details/T500887
but I am looking for a simpler solution; otherwise it would be really time consuming to create the views and controllers myself to handle this. So my questions are as follows:
1. How do I create a class that references THIS table?
2. How do I show THIS table inside the UNITS DetailView and access its properties from there?
It would be greatly appreciated if you would answer this question.
Thank you in advance!
I assume you have created a User class with a collection of Unit objects in it, and a collection of User objects inside the Unit class. In that case, XAF will create an auto-generated intermediate table called Users_Units which holds both primary keys, and you cannot add any attributes to it. If you want to add attributes or properties to the intermediate class, you should create the class explicitly. Here's the code:
public class User : BaseObject
{
    // ... your code here
    [Association("User-UserUnits")]
    public XPCollection<UserUnit> UserUnits
    {
        get { return GetCollection<UserUnit>("UserUnits"); }
    }
}

public class Unit : BaseObject
{
    // ... your code here
    [Association("Unit-UserUnits")]
    public XPCollection<UserUnit> UserUnits
    {
        get { return GetCollection<UserUnit>("UserUnits"); }
    }
}
public class UserUnit : BaseObject
{
    User user;
    [Association("User-UserUnits")]
    public User User
    {
        get { return user; }
        set { SetPropertyValue("User", ref user, value); }
    }

    Unit unit;
    [Association("Unit-UserUnits")]
    public Unit Unit
    {
        get { return unit; }
        set { SetPropertyValue("Unit", ref unit, value); }
    }

    int status;
    public int Status
    {
        get { return status; }
        set { SetPropertyValue("Status", ref status, value); }
    }
}
But of course, with the above code you cannot link/unlink User and Unit directly. Instead, you should add the detail record manually, since the relationship now behaves like a normal master-detail (one-to-many) relationship on each side.
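For example, a sketch of enabling a user in a unit by creating the intermediate record yourself (this assumes the usual XPO Session constructor on UserUnit; session, existingUser, and existingUnit are assumed to come from your unit of work or XAF object space):

```csharp
// Hypothetical usage: create the link row explicitly instead of
// calling Users.Add(...) / Units.Add(...).
var link = new UserUnit(session)
{
    User = existingUser,
    Unit = existingUnit,
    Status = 1 // e.g. 1 = enabled in this unit
};
session.CommitChanges(); // or ObjectSpace.CommitChanges() in XAF
```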
Right now I'm working on a big project, so there's a lot of data and tables involved.
As a best practice I'm creating a class for every table and object.
It goes like this:
public class Employee
{
    private string name;
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public Employee(int employeeId)
    {
        /*
         * GET THE DATA ROW AND ASSIGN IT TO EVERY PROPERTY, e.g.:
         * Name = row["name"];
         * AND DO THIS FOR EVERY PROPERTY!
         */
    }
}
So what's happening here is that I have to assign every property from a query in the class constructor.
But imagine a table with 50+ columns: I have to do this 50+ times, and it takes a lot of time.
Is there a way to automate the creation of the 50+ properties and the assignment of their values without it taking so long?
I just want a way to create a class that automates the property assignment from a DataRow instead of writing out every column-to-property mapping by hand. Something like Entity Framework, but done by me.
Greetings and thanks.
There are heaps of examples online of generating C# classes from DB tables and stored procedures; research that and POCOs, e.g.:
Generate class from database table
http://www.codeproject.com/Articles/8397/C-Code-Generator-for-Stored-Procedures
You're not the first to encounter this; it's best to do a quick Google search next time.
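For the specific "assign every column to a property" chore, a small reflection helper is often enough. Here is a minimal sketch (RowMapper is a hypothetical name, and matching column names to property names is an assumption of this sketch, not an existing API):

```csharp
using System;
using System.Data;
using System.Reflection;

// Maps a DataRow onto any class whose writable property names match the
// column names, so the 50+ assignments are written once instead of per class.
public static class RowMapper
{
    public static T Map<T>(DataRow row) where T : new()
    {
        var item = new T();
        foreach (PropertyInfo prop in typeof(T).GetProperties())
        {
            // Skip read-only properties, missing columns, and NULLs.
            if (prop.CanWrite
                && row.Table.Columns.Contains(prop.Name)
                && row[prop.Name] != DBNull.Value)
            {
                prop.SetValue(item, Convert.ChangeType(row[prop.Name], prop.PropertyType));
            }
        }
        return item;
    }
}
```

You could then call `var employee = RowMapper.Map<Employee>(row);` inside the constructor or from a factory method, regardless of how many columns the table has.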
You would be better off using the Entity Framework Code First approach; it works even if your database has already been created.
Use a tool to generate the class model, and create a context like:
public class MyContext : DbContext
{
    public MyContext() { }
    public DbSet<MyModel> MyModel { get; set; }
}
In my OData service I have to create a custom primary key in the OnPreInsert event handler.
I know I can't assign the key through @event.Id because that property does not expose a setter.
I used reflection to set the value of this property, as shown below:
public bool OnPreInsert(PreInsertEvent @event)
{
    if (@event.Entity is MyEnity)
    {
        var myEntity = @event.Entity as MyEnity;
        string newKey = GetCustomKey(...);
        myEntity.myId = newKey;
        var property = typeof(AbstractPreDatabaseOperationEvent).GetProperty("Id");
        if (property != null)
        {
            property.SetValue(@event, newKey);
        }
    }
    return false;
}
While debugging I can see that the value of @event.Id is set properly; however, the key saved in the database is not the one I generated in the OnPreInsert event handler.
What am I doing wrong here?
Please try checking this recent Q&A:
NHibernate IPreUpdateEventListener, IPreInsertEventListener not saving to DB
The point is that, as described here:
NHibernate IPreUpdateEventListener & IPreInsertEventListener
...Here comes the subtlety, however. We cannot just update the entity state. The reason for that is quite simple: the entity state was extracted from the entity and placed in the state array, so any change that we make to the entity state would not be reflected in the entity itself. That may cause the database row and the entity instance to go out of sync, and may cause a whole bunch of really nasty problems that you wouldn't know where to begin debugging.
You have to update both the entity and the entity state in these two event listeners (this is not necessarily the case in other listeners, by the way). Here is a simple example of using these event listeners...
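Following that advice, the pattern for an ordinary mapped property looks like this (a sketch; `CreatedAt` and the `Set` helper are hypothetical, not part of NHibernate). Note that it applies to properties carried in the State array, not to the identifier itself; for a primary key, a generator-based mapping is the appropriate fix:

```csharp
public bool OnPreInsert(PreInsertEvent @event)
{
    var entity = @event.Entity as MyEnity;
    if (entity != null)
    {
        var time = DateTime.UtcNow;
        // 1. update the state array that NHibernate actually writes to the row
        Set(@event.Persister, @event.State, "CreatedAt", time);
        // 2. update the entity itself so the object and the row stay in sync
        entity.CreatedAt = time;
    }
    return false;
}

private static void Set(IEntityPersister persister, object[] state, string propertyName, object value)
{
    // The state array is ordered the same way as Persister.PropertyNames.
    int index = Array.IndexOf(persister.PropertyNames, propertyName);
    if (index == -1)
        return;
    state[index] = value;
}
```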
I couldn't find a way to use reflection to achieve what I described in my question above. I tried reflection because I didn't know about the generators available in NHibernate (I am new to NHibernate).
I have a table named sys_params which holds the next key values for different tables. My target was to fetch the next key for my my_entity table, assign it to the primary key of the new record, increment the next key value in sys_params, and save the new record into the database.
To achieve this, I first defined the following classes.
public class NextIdGenerator : TableGenerator
{
}

public class NextIdGeneratorDef : IGeneratorDef
{
    public string Class
    {
        get { return typeof(NextIdGenerator).AssemblyQualifiedName; }
    }

    public object Params
    {
        get { return null; }
    }

    public Type DefaultReturnType
    {
        get { return typeof(int); }
    }

    public bool SupportedAsCollectionElementId
    {
        get { return true; }
    }
}
Then, in my mapping class, I defined the generator as below:
public class MyEnityMap : ClassMapping<MyEnity>
{
    public MyEnityMap()
    {
        Table("my_entity");
        Id(p => p.myId,
           m =>
           {
               m.Column("my_id");
               m.Generator(new NextIdGeneratorDef(), g => g.Params(new
               {
                   table = "sys_params",
                   column = "param_nextvalue",
                   where = "table_name = 'my_entity'"
               }));
           });
        // ...
    }
}
Hope this will help someone else. Improvements to this solution are highly appreciated.
I'm using EF5 database first with partial classes. There's a property in my partial class which contains an object that is stored as a column in my database containing XML data. I want to handle the serialization/deserialization of this object with a custom getter/setter when EF reads/writes it.
Is it possible to expose the column in my partial class and map it using EF, without auto-generating a property for it?
i.e.:
public SomeObject BigComplexObject { get; set; } // forms etc. in my app use this

public string BigComplexObjectString // when EF reads/writes the column, my custom getter/setter kicks in
{
    get { return this.BigComplexObject.ToXmlString(); }
    set { this.BigComplexObject = new BigComplexObject(value); }
}
At present, EF auto-generates a member for the column, so I'm left with two.
Try changing the logic. Leave the EF-generated property that will be populated with the XML string from the database:
public string BigComplexObjectString { get; set; }
Then do the following:
[NotMapped]
public SomeObject BigComplexObject
{
    get { return new SomeObject(this.BigComplexObjectString); }
    set { this.BigComplexObjectString = value.ToXmlString(); }
}
Don't forget to add [NotMapped] to instruct EF to ignore this property.
Well, we use a little trick for a quite similar case...
We use the property panel (in the edmx file) of our properties and add something to the "Documentation" line (Summary or Long Description; probably not the best place, but anyway). This can be accessed by your T4 file.
So you could write something like "useXml" in the property panel, then modify your .tt to generate the desired code, for example to read the info in the .tt file:
if (edmProperty.Documentation != null && edmProperty.Documentation.Summary == "useXml")
    // generate something special
It would be great to have a better place for custom info in the edmx, but we didn't find anything better for the moment.
I have business objects that are stored across two data storages. A part of the object is stored in Azure Table Storage and the other part in Azure SQL. Basically the SQL part is used in queries while the Table Storage is used for properties that take a lot of space.
Most of the time, only the SQL part of the object is used (in SQL queries). The Table Storage properties are only needed when someone explicitly asks for that object. What I am trying to achieve is a design that will hide the fact that there are two data sources behind the business object, lazy load the Table Storage properties (since they are not needed when performing SQL queries), and still keep the code testable.
My current design has some POCOs that are created by a unit of work. I don't want to create two POCOs, one for Table Storage and one for SQL, so I was thinking about the following design:
// Make the properties virtual
public class Customer
{
    public virtual string Name { get; set; }    // stored in SQL
    public virtual string Age { get; set; }     // stored in SQL
    public virtual string Details { get; set; } // this prop is stored in Table Storage
}

// Create a derived internal POCO that can notify when a property is asked for
internal class CustomerWithMultipleStorage : Customer
{
    public event EventHandler OnDetailsGet;

    public override string Details
    {
        get { if (OnDetailsGet != null) OnDetailsGet( ... ); /* rest of the code */ }
        set { /* code */ }
    }
}
All my data layer code will work with CustomerWithMultipleStorage while all the "external" code, outside the DL, will use Customer and the events will not be exposed. Now, when the unit of work returns a Customer, it will load only the SQL properties and subscribe to the Get events. If someone using the Customer needs the rest of the properties, the event will be triggered and the Table Storage properties will be loaded.
What do you think about this design? Is it the correct approach? Do you know of a better way of doing this?
You could use Lazy<T> with dependency injection. Note this is just to give you some ideas.
internal class CustomerWithMultipleStorage : Customer
{
    private readonly ISqlDataLayer _sqlDataLayer;
    private readonly ITableStorageDataLayer _tableStorageDataLayer;
    private readonly Lazy<string> _details;
    private string _detailsValue;

    public CustomerWithMultipleStorage(ISqlDataLayer sqlDataLayer, ITableStorageDataLayer tableStorageDataLayer)
    {
        _sqlDataLayer = sqlDataLayer;
        _tableStorageDataLayer = tableStorageDataLayer;
        _details = new Lazy<string>(() => (string)_tableStorageDataLayer.GetValue<Customer>(this, "Details"));
    }

    public override string Details
    {
        get
        {
            return _detailsValue ?? (_detailsValue = _details.Value);
        }
        set
        {
            _detailsValue = value;
            _tableStorageDataLayer.SetValue<Customer>(this, _detailsValue);
        }
    }
}
public interface ITableStorageDataLayer
{
    object GetValue<T>(T item, [CallerMemberName] string property = "");
    void SetValue<T>(T item, object value, [CallerMemberName] string property = "");
}
You could also just use a data layer with mapping data for each object (I will provide examples later).
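The deferred loading that the Lazy<string> field provides can be seen in isolation with plain BCL types. In this small standalone demo (DetailsSource is a stand-in for the Table Storage call, not part of the design above), the loader runs exactly once, on first access to .Value:

```csharp
using System;

// Stand-in for the slow Table Storage round-trip.
public class DetailsSource
{
    public static int LoadCount = 0; // counts hits on the slow store

    public static string LoadDetails()
    {
        LoadCount++;
        return "big details blob";
    }
}

public class Program
{
    public static void Main()
    {
        var details = new Lazy<string>(DetailsSource.LoadDetails);
        Console.WriteLine(DetailsSource.LoadCount); // nothing loaded yet: 0
        string first = details.Value;               // triggers the single load
        string second = details.Value;              // served from the cached value
        Console.WriteLine(DetailsSource.LoadCount); // still 1
    }
}
```

This is why the `_detailsValue ?? (_detailsValue = _details.Value)` getter above only ever pays the Table Storage cost once per object.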
This is a very weird architecture. Please bear with me.
We have an existing tiered application (data, logic/service, client).
The latest requirement is that the service layer should access two data sources (there is no way around it)!
These two data sources have the same DB schema.
As with most tiered architectures, we have read and write methods like:
IEnumerable<Product> GetAllProducts(),
Product GetProductById(ProductKey id),
IEnumerable<Product> FindProductsByName(string name)
the product DTOs are:
class Product
{
    public ProductKey Key { get; set; }
    ...
}

class ProductKey
{
    public long ID { get; }
}
We narrowed it down to two possible solutions:
Alternative 1:
Add a parameter into the read methods so that the service knows what DB to use like so:
Product GetProductById(ProductKey id, DataSource dataSource)
DataSource is an enumeration.
Alternative 2 (my solution):
Add a DataSource property to the key classes. This will be set by Entity Framework when the object is retrieved, and will not be persisted to the database.
class ProductKey
{
    public long ID { get; }
    public DataSource Source { get; } // enum
}
The advantage is that the change will have minimal impact on the client.
However, people don't like this solution because:
1. The DataSource doesn't add business value. (My response: the ID doesn't add business value either; it's a surrogate key whose purpose is tracking persistence.)
2. The children in the object graph will also contain a DataSource, which is redundant.
Which solution is more sound? Do you have other alternatives?
Note: these services are used everywhere.
What I would suggest is door number 3:
[||||||||||||||]
[|||||||||s! ]
[||||nerics! ]
[ Generics! ]
I use a "dynamic repository" (or at least that is what I have called it). It is set up to connect to any data context or DbSet while staying inside the same using block (i.e. without re-instantiation).
Here is a snippet of how I use it:
using (var dr = new DynamicRepo())
{
    dr.Add<House>(model.House);
    foreach (var rs in model.Rooms)
    {
        rs.HouseId = model.House.HouseId;
        dr.Add<Room>(rs);
    }
}
This uses the "default" DbContext that is defined. Each context must be declared in the repository, but is not instantiated up front. Here is the constructor I use:
public DynamicRepo(bool Main = true, bool Archive = false)
{
    if (Main)
    {
        this.context = new MainDbContext();
    }
    if (Archive)
    {
        this.context = new ArchiveDbContext();
    }
}
This is a simplified version where there are only two contexts. A more in depth selection method can be implemented to choose which context to use.
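One way to make that selection explicit is to replace the two bool flags, which can both be true at once, with a single value (a sketch; the DataSource enum is an assumption, and MainDbContext/ArchiveDbContext are the contexts from the constructor above):

```csharp
public enum DataSource { Main, Archive }

public DynamicRepo(DataSource source = DataSource.Main)
{
    // Exactly one context is chosen; the two-bool version could
    // silently satisfy both conditions and overwrite the context.
    switch (source)
    {
        case DataSource.Archive:
            this.context = new ArchiveDbContext();
            break;
        default:
            this.context = new MainDbContext();
            break;
    }
}
```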
And then once initialized, here would be how the Add works:
public void Add<T>(T te) where T : class
{
    DbSet<T> dbSet = context.Set<T>();
    dbSet.Add(te);
    context.SaveChanges();
}
A nice advantage of this is that there is only one spot to maintain the code for interacting with the database. All the other logic can be abstracted away into different classes. It definitely saved me a lot of time to use a generic repository in this fashion - even if I spent some time modifying it at first.
I hope I didn't misunderstand what you were looking for, but if you are trying to have one repository for multiple data sources, I believe this is a good approach.