Should I test a Model? - c#

Using different languages (PHP, .NET) and frameworks (ZF2), I fetch data from a database and store it in a model class. Every property of this class maps to a column in the database.
So if I have a table: tbl_user: user_id, user_name.
I would have a class: +User: +string user_id, +string user_name.
One of the TDD principles says: "Write some code that causes the test to pass."
Do I need to test the model too? It looks to me like a really redundant test.

No. If the class only contains properties/fields and doesn't contain any logic, there is no need to test it. If you're concerned about code coverage, these classes will be 'tested' by the tests for whichever class consumes them.
For example:
public class DomainObject
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BusinessLogic
{
    public void DoSomethingBusinessLike(DomainObject domainObject)
    {
        // stuff happens
    }
}
It is not necessary to test DomainObject directly; it is implicitly tested when you create tests for BusinessLogic.
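As an illustration, a hypothetical MSTest-style test for BusinessLogic (the assertion is a placeholder for whatever DoSomethingBusinessLike is actually expected to do) already exercises the DomainObject properties:

[TestClass]
public class BusinessLogicTests
{
    [TestMethod]
    public void DoSomethingBusinessLike_ConsumesTheDomainObject()
    {
        // Arrange: the DomainObject is only input data here.
        var input = new DomainObject { Id = 1, Name = "test" };
        var logic = new BusinessLogic();

        // Act
        logic.DoSomethingBusinessLike(input);

        // Assert: whatever behaviour you expect from BusinessLogic;
        // the getters/setters of DomainObject get covered along the way.
        Assert.AreEqual("test", input.Name);
    }
}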

Related

Manage several almost identical client databases using Entity framework (or other ORM?)

I'm prototyping an ASP.NET Web API that needs to talk to several databases which are almost identical. Each of our customers has their own instance of our database structure, but some are specialized to integrate with other systems they have. So, for example, in one database the Client table might have the column AbcID to reference a table in another system, but other databases won't have this column. Other than that, the two tables are identical in name and columns. The columns can also have different lengths, varchar(50) instead of varchar(40) for example. And in some databases there can be one extra table. I have focused on solving the different-columns problem first.
I was hoping to use an ORM to handle the data access layer of the API, and right now I'm experimenting with Entity framework. I already solved how to dynamically connect to the different databases from an API-call, but right now they have to be completely identical in structure.
I have tried setting up two .edmx models with a database-first approach, but this causes conflicting class names between the models. So instead I tried code-first and came up with this (which isn't working).
DbContext extension:
In the constructor I check which database is being accessed, and if it is one of the special ones I flag it for the model configuration.
public partial class MK_DatabaseEntities : DbContext
{
    private string _dbType = "dbTypeDefault";

    public DbSet<Client> Client { get; set; }
    public DbSet<Resource> Resource { get; set; }

    public MK_DatabaseEntities(string _companycode)
        : base(GetConnectionString(_companycode))
    {
        if (_companycode == "Foo")
            this._dbType = "dbType1";
    }

    // Add model configurations
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
        modelBuilder.Configurations
            .Add(new ClientConfiguration(_dbType))
            .Add(new ResourceConfiguration());
    }

    public static string GetConnectionString(string _companycode)
    {
        string _dbName = "MK_" + _companycode;

        // Start out by creating the SQL Server connection string
        SqlConnectionStringBuilder sqlBuilder = new SqlConnectionStringBuilder();
        sqlBuilder.DataSource = Properties.Settings.Default.ServerName;
        sqlBuilder.UserID = Properties.Settings.Default.ServerUserName;
        sqlBuilder.Password = Properties.Settings.Default.ServerPassword;
        // The name of the database on the server
        sqlBuilder.InitialCatalog = _dbName;
        sqlBuilder.IntegratedSecurity = false;
        sqlBuilder.ApplicationName = "EntityFramework";
        sqlBuilder.MultipleActiveResultSets = true;

        string sbstr = sqlBuilder.ToString();
        return sbstr;
    }
}
ClientConfiguration:
In the configuration for Client I check the flag before mapping properties to database columns. This however does not seem to work.
public class ClientConfiguration : EntityTypeConfiguration<Client>
{
    public ClientConfiguration(string _dbType)
    {
        HasKey(k => k.Id);
        Property(p => p.Id)
            .HasColumnName("ID")
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);

        if (_dbType == "dbType1")
        {
            Property(p => p.AbcId).HasColumnName("AbcID");
        }

        Property(p => p.FirstName).HasColumnName("FirstName");
        Property(p => p.LastName).HasColumnName("LastName");
    }
}
Client class:
This is what my Client class looks like; nothing weird here.
public class Client : IIdentifiable
{
    public int Id { get; set; }
    public string AbcId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public interface IIdentifiable
{
    int Id { get; }
}
My backup solution is to use raw SQL queries to deal with the offending tables and the ORM for the rest, but it would be awesome if there is some way to do this that I have not thought of. Right now I'm trying Entity Framework, but I am not opposed to trying some other ORM if it can do this better.
Code First supports this scenario:
1) Common entities for both models:
public class Table1
{
    public int Id { get; set; }
    public string Name { get; set; }
}
2) Base version of table 2
public class Table2A
{
    public int Id { get; set; }
    public int Name2 { get; set; }
    public Table1 Table1 { get; set; }
}
3) "Extended" version of table 2, inherits version A, and adds an extra column
public class Table2B : Table2A
{
    public int Fk { get; set; }
}
4) Base context, including only the common entities. Note that there is a constructor which accepts a connection string, so there is no parameterless constructor. This forces inheriting contexts to provide their particular connection string.
public class CommonDbContext : DbContext
{
    public CommonDbContext(string connectionString)
        : base(connectionString)
    {
    }

    public IDbSet<Table1> Tables1 { get; set; }
}
5) Context A inherits the common context, adds Table2A, and ignores Table2B
public class DbContextA : CommonDbContext
{
    public DbContextA() : base("SimilarA") { } // connection for A

    public IDbSet<Table2A> Tables2A { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Ignore<Table2B>(); // Ignore Table B
    }
}
6) Context B inherits the common context and includes Table2B
public class DbContextB : CommonDbContext
{
    public DbContextB() : base("SimilarB") { } // Connection for B

    public IDbSet<Table2B> Tables2B { get; set; }
}
With this setup, you can instantiate either DbContextA or DbContextB. One advantage is that both inherit from CommonDbContext, so you can use a variable of this base class to access the common entities, no matter whether the concrete implementation is version A or B. You only need to use the concrete type to access the specific entities of A or B (Table2A or Table2B in this sample).
You can use a factory, DI, or whatever else to get the required context depending on the DB. For example, this could be your factory implementation:
public class CommonDbContextFactory
{
    public static CommonDbContext GetDbContext(string contextVersion)
    {
        switch (contextVersion)
        {
            case "A":
                return new DbContextA();
            case "B":
                return new DbContextB();
            default:
                throw new ArgumentException("Missing DbContext", "contextVersion");
        }
    }
}
NOTE: this is working sample code. You can of course adapt it to your particular case. I wanted to keep it simple to show how it works. For your case you'll probably need to change the factory implementation, expose the connection string in the A and B context constructors, and provide it in the factory method.
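A hedged sketch of that adaptation might look like this, assuming the DbContextA/DbContextB constructors are changed to accept a connection string and reusing GetConnectionString from the question's MK_DatabaseEntities class; mapping the "Foo" company code to the extended context is my assumption, not part of the answer:

public class CommonDbContextFactory
{
    public static CommonDbContext GetDbContext(string companyCode)
    {
        // Build the per-customer connection string the same way the question does.
        string connectionString = MK_DatabaseEntities.GetConnectionString(companyCode);

        // Assumption: "Foo" is the customer whose database has the extra column,
        // so it gets the extended context; everyone else gets the base one.
        return companyCode == "Foo"
            ? (CommonDbContext)new DbContextB(connectionString)
            : new DbContextA(connectionString);
    }
}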
Handling the different classes of your entities
The easiest way to handle the different entities of each DbContext is to use polymorphism and/or generics.
If you use polymorphism, you need to implement methods which use the type of the base class (as parameter and as return type). These parameters and variables will hold entities of either the base or the derived class (Table2A or Table2B). In this case, each context will receive an entity of the right type, and it will work directly without trouble.
The problem arises when your app is multi-layered, uses services, or is a web app. In those cases, when you use the base class, the polymorphic behavior can be lost and you'll need to handle entities of the base class. (For example, if you let the user edit an entity of the derived class in a web app form, the form can only take care of the properties of the base class, and when it's posted back, the properties of the derived class will be lost.) In this case, you need to handle it intelligently (see the note below):
For reading purposes, if you have a Table2B, you have a direct cast to Table2A. You can implement functionality for Table2A and use it directly, i.e. you can return collections or individual values of the base class (in many cases an implicit cast will be enough). No more worries.
For inserting/updating, you have to take extra steps, but it's not too difficult. You need to implement methods that receive/return Table2A parameters in your contexts, or in another layer, depending on your architecture. For example, you can make the base context abstract and define abstract or virtual methods for this (see the example below). Then you need to provide the right implementation for each particular case.
If you receive a Table2A but need to insert it into Table2B, simply map entity A onto entity B with AutoMapper or ValueInjecter and fill the remaining properties with default values (beware of AutoMapper and EF dynamic proxies: it won't work).
If you receive a Table2A and need to update a Table2B, simply read the existing entity from the DB and repeat the mapping procedure (ValueInjecter will be less troublesome than AutoMapper for this case too).
This is a very simple example of what can be done; you need to adapt it to your particular case:
Inside the CommonDbContext class, declare abstract (or virtual) methods for the base type, like this:
public abstract Table2A GetTable2AById(int id);
public abstract void InsertTable2A(Table2A table);
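A possible override in DbContextB might then look like the sketch below; this is my illustration rather than code from the answer, and Fk = 0 is just a placeholder default:

public class DbContextB : CommonDbContext
{
    // ... DbSet and constructor as shown earlier ...

    public override Table2A GetTable2AById(int id)
    {
        // Table2B is-a Table2A, so returning the derived entity satisfies the base signature.
        return Tables2B.Find(id);
    }

    public override void InsertTable2A(Table2A table)
    {
        // Map the incoming base entity onto the derived type and default the extra column.
        var entity = new Table2B { Name2 = table.Name2, Table1 = table.Table1, Fk = 0 };
        Tables2B.Add(entity);
        SaveChanges();
    }
}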
You can also use generic interfaces/methods instead of an abstract class with abstract methods, like this:
public T GetTable2AById<T>(int id)
{
    // The implementation
}
In this case you should add the necessary constraints to the T type, like where T : Table2A, or whichever ones you need (class, new()).
NOTE: it's not exactly accurate to say that polymorphism is lost in these cases, because you can make polymorphic web services with WCF or Web API, adapt your UI to the real class of your entity (with templates for each case), and so on. That depends on what you need or want to achieve.
Been there, done that.
In all seriousness: dump EF in this specific case; it will bring a lot of pain and suffering for no benefit.
What you'll eventually end up doing (putting my fortune-teller hat on) is ripping out all the EF-based code, creating an abstract object model, and then writing a series of backends that map the various database structures back and forth to that clean abstract object model. And you'll be using either raw SQL or something lightweight like Dapper or BLToolkit.
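For context, here is a minimal, hypothetical Dapper sketch of that kind of lightweight mapping for the Client table from the question; connectionString and hasAbcColumn are assumed to come from your per-customer configuration:

using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

// Plain SQL per customer, mapped straight onto the existing Client POCO.
using (var connection = new SqlConnection(connectionString))
{
    string sql = hasAbcColumn
        ? "SELECT ID AS Id, AbcID AS AbcId, FirstName, LastName FROM Client"
        : "SELECT ID AS Id, FirstName, LastName FROM Client";

    IEnumerable<Client> clients = connection.Query<Client>(sql);
}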

Migrate from CRUD to DDD

Now I want to try to start from the objects of my model according to DDD principles, but I have some difficulty understanding how to migrate my thought patterns, because I cannot map the examples I find lying around to my specific case.
My main concept is the activity. Each activity has an identifying code, a description, a status that changes over time, and a Result for each quarter.
Users want to be able to see the history of all the statuses the activities have had, with the dates on which the changes were made. In addition, they also want to be able to create new statuses, change the description of existing ones, and possibly prevent the use of some of them while keeping their value for previous activities.
Each quarter, users want to be able to insert a Result that contains an outcome and recommendations, a rating, and the date the outcome was formulated.
The ratings must be a list freely maintainable by users.
Thinking in my old way, I would create classes like this:
public class Activity
{
    public int ID;
    public string Desc;
    public IList<ActivityStatus> ActivityStatusList;
    public IList<Result> ResultList;
}

public class ActivityStatus
{
    public Activity Activity;
    public Status Status;
    public DateTime StartDate;
    public DateTime EndDate;
}

public class Status
{
    public int ID;
    public string Desc;
    public bool Valid;
}

public class Result
{
    public Activity Activity;
    public int Quarter;
    public string Outcome;
    public string Recommendations;
    public Rating Rating;
}

public class Rating
{
    public int ID;
    public string Desc;
    public bool Valid;
}
Then I would implement a data access layer mapping these classes to a new database (created from them) with NHibernate, and add repositories to give users CRUD operations on all of these objects.
According to DDD, are there better ways?
I'd recommend reading the book or at least the Wikipedia article.
DDD is about focusing on domain logic and modelling it first, in an object-oriented way. Persistence is a technical concern, which should not be the starting point of your design and should (usually) not determine how you design your domain classes.
If you're eager to code and believe you understand the domain well, I would suggest a BDD test-first approach. Use tools like SpecFlow to describe your business processes in plain English, then gradually fill in the steps and functionality as you go, using mocks, design patterns, inversion of control, etc.
Background reading is a must if you're unfamiliar with DDD. Read the book that EagleBeak suggests, get clued up on SOLID principles and experiment yourself.
I can't tell if there are better ways, but what you said would be one way to solve this problem in a DDD fashion.
In my data access layer I typically use an abstract factory of repositories. This way I can plug in a specific implementation for data access such as NHibernate.
public interface IRepositoryFactory
{
    T Repository<T>();
}

public class NHibernateRepositoryFactory : IRepositoryFactory
{
    public T Repository<T>()
    {
        // ... find a class that implements T in the loaded assemblies via reflection
        return repository;
    }
}

public static class Persistence
{
    public static IRepositoryFactory Factory { get; set; }
}
This way you can call your repository without referencing any specific implementation:
User user = Persistence.Factory.Repository<IUserRepository>().FindByEmail("john@tt.com");
user.Name = "James";
Persistence.Factory.Repository<IUserRepository>().Save(user);
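The IUserRepository used above isn't defined in the answer; a minimal sketch of what it might look like:

public interface IUserRepository
{
    User FindByEmail(string email);
    void Save(User user);
}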
Another advantage of using abstract factories for repositories as above is that you can test your code by plugging in a fake implementation of the repository.
public class FakeRepositoryFactory : IRepositoryFactory
{
    public T Repository<T>()
    {
        // ... find a class that implements T in the assemblies of fake repositories
        return repository;
    }
}

public class FakeUserRepository : IUserRepository
{
    public User FindByEmail(string email)
    {
        // create a mocked user for testing purposes ...
        return userMock;
    }
}
Your code will not, and should not, know where the user data is coming from when you use abstract factories for persistence. This way, switching from one implementation to another can be done transparently.

Where do derived or inferred properties belong in an application?

I'm building an app using code first and generating the DB.
I can no longer modify the DB, so I can't add or change columns and tables. But the domain model (not sure if I'm using the term correctly) requires new properties (that are part of the domain) which can be inferred from the database data but do not exist explicitly.
My database stores sales info for houses. So I have two tables, Houses and Sales. The tables are related by houseID. Now I want houses to have a property called LastSaleDate, but I can't change the underlying database.
So, how would I properly construct this new property and add it to the appropriate layer? Here is what my POCOs/entities look like, just pseudo-coded...
[I am trying to learn all I can about the tools and methods I use. I may be completely wrong in all my assumptions, and maybe I should add it to my POCOs. If that is the case, please explain how that would work.]
[Table("HOUSE_TABLE")]
public class house {
//some properties
public int HouseID {get;set;}
}
[Table("SALE_TABLE")
public class sale {
//some properties
public int HouseID {get;set;
public int SaleID {get;set;}
public datetime SaleDate {get;set;}
public virtual House House {get;set;}
}
I almost feel like this would create 2 levels of mapping. Though, I don't believe I've ever seen this done in any code I've seen online.
poco -> AutoMapper?? -> entities -> Automapper -> viewModels
This logic most likely belongs on the entity. Entities should have both data and behaviour. What you seem to be describing is behaviour that is exposed as a property. So, you should add a property for the derived value to your entity. By default, if the property only has a getter, EF will not try to map it to the database.
For example:
[Table("HOUSE_TABLE")]
public class house
{
//some properties
public int HouseID {get;set;}
public virtual ICollection<Sale> Sales { get; set; }
public DateTime LastSaleDate
{
get
{
return this.Sales.OrderByDescending(s => s.SaleDate).First();
}
}
}

Entity Framework TDD: how to unit-test a model for a required field

I am starting to use TDD for the following class using Entity Framework 4.1:
public class Agent
{
    // Primary key
    public int ID { get; set; }

    [Required]
    public string Name { get; set; }

    public string Address { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
    public string Phone1 { get; set; }
}
My assertion will fail:
/// <summary>
/// A test for the Agent constructor: an Agent needs to have a name
/// </summary>
[TestMethod()]
public void AgentConstructorTest()
{
    Agent target = new Agent();
    Assert.IsNull(target);
}
When I look at the generated target object, it is created with ID = 0. How can I test that Name is required, then?
And if the Name field is required, how can I still create an Agent object? When will the real ID be assigned? To test the model itself, do I need to create/mock a DbContext to be able to assign an ID?
Keep in mind that you are just dealing with POCO classes here; there is no magic going on that would allow the construction of the Agent class to fail just because you have put a custom attribute on one of its properties.
Entity Framework checks for custom attributes during its validation and data mapping. In this case it will check for the Required attribute and only declare the entity "valid" if the corresponding string property is not null; it will also map Name to a non-nullable column in the database.
To mirror that, you could write a custom validation routine in your unit test that performs the same checks, i.e. makes sure that all properties decorated with the Required attribute indeed have a value. Something like this:
[TestMethod()]
public void AgentWithNoNameIsInvalid()
{
    Agent target = new Agent();
    Assert.IsFalse(IsValid(target));
}
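The IsValid helper is not spelled out above; a minimal sketch using the Validator class from System.ComponentModel.DataAnnotations (which checks [Required] and the other data-annotation attributes in memory, without EF) could be:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

private static bool IsValid(object entity)
{
    // Runs all data-annotation attributes (including [Required]) against the
    // entity in memory, with no DbContext or database involved.
    var context = new ValidationContext(entity, serviceProvider: null, items: null);
    var results = new List<ValidationResult>();
    return Validator.TryValidateObject(entity, context, results, validateAllProperties: true);
}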
This does feel like you are testing EF now though, not your code.
Since the ID is your primary key, it will only be assigned when the entity has been committed to the database. So yes, for full testing you will have to mock a unit of work and a context that does this for you as well. There are many pitfalls though, and subtle (and not so subtle) differences between IQueryable<T> and IEnumerable<T>, that make this approach very fragile.
Personally I would recommend you do integration testing with EF against a separate test database with known content, and write your unit tests and expected results based on this test database. This might not be true TDD, but I have found it is the only way to be sure that I am testing the right thing.
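For example, an integration-style test along these lines (AgentContext and the test connection name are assumptions; EF 4.1 throws DbEntityValidationException from SaveChanges when a [Required] property is missing):

using System.Data.Entity.Validation;

[TestMethod()]
[ExpectedException(typeof(DbEntityValidationException))]
public void SavingAgentWithoutNameFailsValidation()
{
    using (var context = new AgentContext("name=AgentTestDb"))
    {
        context.Agents.Add(new Agent { City = "Testville" });
        context.SaveChanges(); // EF validation rejects the missing [Required] Name
    }
}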

c# DAL class and business layer class

Hi,
Can you tell me if this is possible?
public class Person
{
    public string Name { get; set; }
    public int ID { get; set; }
}
I want to populate a class, say Person, which is in an assembly called Entities, with the population code being done in a different assembly called DataAccessLayer (so Person and the place where it is populated are not in the same assembly).
// The code below would be reading from a DataReader etc., but I have just done this to make it easy to explain.
Person p = new Person();
p.Name = "tom";
p.ID = 10;
The Person class is now to be made accessible to another system to allow it to access Person. What I would like is to prevent the other system from being able to change the ID: it should be able to read it but not write it. Do I need to create another class to allow this and only expose that class to the other system (i.e. a business object, i.e. an ORM)?
I know a lot of people are going to say just make the ID read-only, i.e.
public int ID { get; }
but if I do this then I cannot populate the ID from code similar to the above, because in my DataAccessLayer I will not be able to set the ID as it is read-only.
Thanks,
Niall
You can create an internal constructor for the object that you can pass the ID into, then set the flag for the Entities DLL that allows another DLL (DataAccessLayer) to see and use the internal members of this DLL (the InternalsVisibleTo attribute).
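A rough sketch of that approach (the assembly names are assumptions based on the question):

// In the Entities assembly (e.g. AssemblyInfo.cs): let the data access
// assembly see internal members.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("DataAccessLayer")]

public class Person
{
    public string Name { get; set; }

    // Readable everywhere; only settable from inside this class.
    public int ID { get; private set; }

    public Person() { }

    // Internal constructor the DataAccessLayer can use to set the ID.
    internal Person(int id)
    {
        ID = id;
    }
}

// In the DataAccessLayer assembly:
// Person p = new Person(10) { Name = "tom" };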
Look toward ORM tools, which will assign the ID of the entity for you; your ID property would then look like this:
public class MyEntity
{
    public virtual int ID { get; protected set; }

    // other properties
}
If you choose this approach, you don't need to worry about assigning properties and casting types.
