Moq a data source for a web service - C#

I'm working on a fairly complex multi-tiered application and I'd like to mock the data source for one of the layers, as much of the time it's very difficult to get to the database. (Some of it doesn't even exist yet.)
What I'd like to be able to do is set a flag in one of the web services to have it use the mocked data source instead of the database connection. I'm just going to put data in XML files. I've successfully used Moq in unit tests, but it seems I can't make the mental leap to replacing the injected database with the mock at run time.
The Stack:
VS 2013
.Net 4.5.1
Ninject
Entity Framework 5?
SQL Server 2012
Several attached databases that are called via stored procedures in SQL Server
Moq 2.x
WCF
Web API 2

Rather than set a flag, why not use an interface and pass that in? E.g. IDataSource.
Your web service, for example, takes an IDataSource as part of its construction.
Then, your moq can implement the interface and you can pass that in, rather than a real implementation of IDataSource. Similarly, your real database would implement the interface, too...
public class MoqDataSource : IDataSource
{
    ...
}

public class RealDatabase : IDataSource
{
    ...
}
As for replacing the real data source, at run time, you could use some kind of factory class that returns an IDataSource, and then use any number of methods to decide what the factory returns.
E.g. the factory reads some config file, and depending on what you've set there, it either returns a real data source, or the moq...
public class DataSourceFactory
{
    public static IDataSource CreateDataSource()
    {
        if (/* are we using the real data source? */)
        {
            return new RealDatabase();
        }
        else
        {
            return new MoqDataSource();
        }
    }
}
It doesn't matter whether you call it a factory or something else... it's just one way of encapsulating the creation of an IDataSource. Only the factory class needs to be concerned with what type of IDataSource you want to create, the rest of the application doesn't have to worry about it.
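For example, the flag check inside the factory could read an appSettings value; here is a minimal sketch, assuming a hypothetical "UseRealDataSource" key in App.config/Web.config:

using System.Configuration;

public class DataSourceFactory
{
    public static IDataSource CreateDataSource()
    {
        // Hypothetical appSettings key; any config mechanism works here.
        bool useReal = ConfigurationManager.AppSettings["UseRealDataSource"] == "true";
        return useReal
            ? (IDataSource) new RealDatabase()
            : new MoqDataSource();
    }
}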

IMHO, the easiest way might be to use a configuration file that holds the dependency information. That way, you can switch between the Moq and the actual data source with a simple XML file change (you could even drive that using different build targets, etc., but that's going beyond the scope of your original question).
In order to integrate an XML file for Ninject to consume, you will need an extension:
https://github.com/ninject/ninject.extensions.xml
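For comparison, the same binding expressed in code with a standard Ninject module looks like this (a sketch; the XML extension simply moves this decision out of compiled code and into an editable file):

using Ninject.Modules;

public class DataSourceModule : NinjectModule
{
    public override void Load()
    {
        // Fixed at compile time when done in code; the XML extension
        // lets you express the same binding in an xml file instead.
        Bind<IDataSource>().To<RealDatabase>();
    }
}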
HTH...

Related

Effort - FirstOrDefault returns null when Faking Database

I'm trying to create some unit tests for my project. After much digging around I found Effort. The idea is great: it mocks the database instead of making you fake the DbContext, which by the way is really hard to get right when using a complex schema.
However, I'm trying to get the Email of a user after specifically adding it to the in-memory database created by Effort. Here is the code:
MyContext contextx = new MyContext(Effort.DbConnectionFactory.CreateTransient());
var client = new Client
{
    ClientId = 2,
    PersonId = 3,
    Person = new Person
    {
        PersonId = 3,
        EMail = "xxxxx@gmail.com"
    }
};
contextx.Client.Add(client); // <-- client got added, I checked it and is there
var email = contextx.Client.Select(c => c.Person.EMail).FirstOrDefault();
In the last line above I can't make it return the email xxxxx@gmail.com; instead it always returns null.
Any ideas?
Answering Your Direct Question
For the specific question you asked, I would suggest two things:
Take a look at contextx.Client.ToArray() and see how many members you really have in that collection. It could be that the Client collection is actually empty, in which case you'll indeed get null. Or, it could be that the first element in the Client collection has a null value for EMail.
How does the behavior change if you call contextx.SaveChanges() before querying the Client collection on the DbContext? I'm curious to see if calling SaveChanges will cause the newly inserted value to exist in the collection. This really shouldn't be required, but there might be some strange interaction between Effort and the DbContext.
EDIT: SaveChanges() turns out to be the answer.
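In other words, flushing the context before querying makes the inserted row visible:

contextx.Client.Add(client);
contextx.SaveChanges(); // flush the pending insert into Effort's in-memory store

var email = contextx.Client.Select(c => c.Person.EMail).FirstOrDefault();
// email is now "xxxxx@gmail.com"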
General Testing Suggestions
Since you tagged this question "unit-testing", I'll offer some general unit testing advice based on my ten years as a unit testing practitioner and coach. Unit testing is about testing various small parts of your application in isolation. Typically this means that unit tests only interact with a few classes at once. It also means that unit tests should not depend on external libraries or dependencies (such as the database). Conversely, an integration test exercises more parts of the system at once and may have external dependencies on things like databases.
While this may seem like a quibble over terminology, the terms are important for conveying the actual intent of your tests to other members of your team.
In this case, either you really want to unit test some piece of functionality that happens to depend on DbContext, or you are attempting to test your data access layer. If you're trying to write an isolated unit test of something that depends on the DbContext directly, then you need to break the dependency on the DbContext. I'll explain this in Breaking the Dependency on DbContext below.
Otherwise, you're really trying to integration test your DbContext, including how your entities are mapped. In this case, I've always found it best to isolate these tests and use a real (local) database. You probably want a locally installed database of the same variety you're using in production; often, SqlExpress works just fine. Point your tests at an instance of the database that the tests can completely trash, and let them remove any existing data before running each test. Then they can set up whatever data they need without concern that existing data will conflict.
Breaking the Dependency on DbContext
So then, how do you write good unit tests when your business logic depends on accessing DbContext? You don't.
In my applications that use Entity Framework for data persistence, I make sure access to the DbContext is contained within a separate data access project. Typically, I will create classes that implement the Repository pattern and those classes are allowed to take a dependency on DbContext. So, in this case, I would create a ClientRepository that implements an IClientRepository interface. The interface would look something like this:
public interface IClientRepository {
    Client GetClientByEMail(string email);
}
Then, any classes that need access to the method can be unit tested using a basic stub / mock / whatever. Nothing has to worry about mocking out DbContext. Your data access layer is contained, and you can test it thoroughly using a real database. For some suggestions on how to test your data access layer, see above.
As an added benefit, the implementation of this interface defines what it means to find a Client by email address in a single, unified place. The IClientRepository interface allows you to quickly answer the question, "How do we query for Client entities in our system?"
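For instance, a class that consumes the repository can be tested with a plain Moq stub; a minimal sketch (ClientService is a hypothetical consumer, invented for illustration):

using Moq;

var repository = new Mock<IClientRepository>();
repository.Setup(r => r.GetClientByEMail("xxxxx@gmail.com"))
          .Returns(new Client { ClientId = 2, PersonId = 3 });

// The class under test receives the stub; no DbContext anywhere in sight.
var service = new ClientService(repository.Object);
// ... exercise the service and assert on its behavior.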
Taking a dependency on DbContext is roughly as large a testing problem as allowing domain classes to take a dependency on the connection string and having ADO.NET code everywhere. It means that you have to create a real data store (even with a fake db) with real data in it. But if you contain your access to the DbContext within a specific data access assembly, you'll find that your unit tests are much easier to write.
As far as project organization, I typically only allow my data access project to take a reference to Entity Framework. I'll have a separate Core project in which I define the entities. I'll also define the data access interfaces in the Core project. Then, the concrete interface implementations get put into the data access project. Most of the projects in your solution can then simply take a dependency on the Core project, and only the top level executable or web project really needs to depend on the data access project.

Repository Pattern with Dynamic Connection String Based on Database Values

I have been looking into updating an existing codebase to better follow design patterns and principles, handle unit testing, separate concerns, etc. I am new to implementing a lot of these concepts, so I am still doing a lot of research and trying to see how they could be applied to the current codebase.
Currently each business entity has its own vb file. Within that vb file are the entity, entity collection, and entity dalc classes for that entity. If you want to perform a database operation on the entity, you call Entity.Save, Entity.Delete, etc. These methods on the entity class create the entity dalc object and then call the Save, Delete, etc. method on the entity dalc object. The dalc then calls a Save, Delete, etc. stored procedure through a SqlHelper class that handles the low-level stuff.
Each entity class requires a Location object to be passed into its constructor. This object is used to know what database the user is logged into as well as to create the appropriate connection string to the database. The databases all have the same schema; they just have different names and can live on different SQL instances. Basically each client has their own database, and the Location object hits a shared database to find out what SQL instance the client needs to connect to, based on the client's name which is stored in a cookie.
I have been looking into a more Model/Repository/Service approach, but the Location object is throwing me off, especially since it too needs to access the database to get the information it needs to create the correct connection string. The repository objects need the connection string, but all of the examples I have seen have it hardcoded in the class. I am thinking the repository objects will need to take in an interface of the Location object, but I'm not sure if the MVC project would do that directly or pass it into the service objects and let them handle it. And since the Location object itself needs to access the database in order to create the connection string, at what point does it get created?
I am also not clear on how the MVC project would interact with the Service and Repository layers. It seems like everything should run through the service objects, but for testing you would want them to take in an interface for the repository. Doing this would mean the MVC project would need to pass in the repository object, but it doesn't seem like the MVC project should know about the repository objects. However, if you are just doing basic CRUD it seems like it would be simpler to have the MVC project directly call those methods on the repository objects instead of running them through a service object.
Here is an example of what I am currently looking into. The plan is to use ADO.NET and SQL Server for now but possibly switch to an ORM or even a different SQL backend in the future. I am hoping the Model/Repository/Service approach will make it easy to make those changes in the future so if not feel free to offer advice on that as well.
Project.Model
public class Person
{
    public int Id;
    public string Name;
}
Project.Repository
public class PersonRepository
{
    public Person FindById(int id)
    {
        // Connect to the database based on the Location's connection string
    }
}
Project.Service
public class PersonService
{
    private IPersonRepository _personRepository;

    // Should this even take in the repository object?
    public PersonService(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    // Should the MVC project call this directly on the repository object?
    public Person FindById(int id)
    {
        return _personRepository.FindById(id);
    }
}
Project.MVC
// I think the Location object needs to come from here, as the client name is
// in the cookie. I'm not sure how the controllers should interact with the
// service and repository classes.
I second @Christian's advice. Using an ORM will greatly simplify your interactions with the underlying data store; and NHibernate is a great choice.
However, in your example, the common way to interact with the data layer from the presentation (aka ASP.NET MVC project) is to inject the service as a dependency for your controller.
There are several ways to do this; the simplest and most straightforward is to use a dependency injection framework (like Unity) to instantiate your services as specified in the controller's constructor:
public class PersonController : Controller
{
    private readonly IPersonService personService;

    public PersonController(IPersonService personService)
    {
        this.personService = personService;
    }
}
Another way is to implement your own ControllerFactory and inject the required services as needed. It is a lot more work, but if you have the time, you can learn a ton about the overall flow of ASP.NET MVC routing and a bit about DI itself.
In a DI framework you (mostly) register interfaces with concrete class implementations, basically saying that when an instance of IPersonRepository is required, a new instance of PersonRepositoryImpl should be used. With these registration rules in place, the DI framework will then recursively instantiate each dependency as it appears in the class constructor.*
In other words, when you request an instance of PersonController, the DI framework will then try to create an instance of type PersonController; when it sees that the constructor required an argument of type IPersonService, it first tries to instantiate one based on the same rules. Thus, the process starts again until all dependencies have been resolved and injected into the constructor for PersonController,
resolve PersonController
-> construct PersonController(IPersonService personService)
-> resolve IPersonService with PersonService
-> construct PersonService(IPersonRepository personRepository)
-> resolve IPersonRepository with PersonRepository
-> construct PersonRepository() <- this one has no dependencies
And back up the stack until a new instance of PersonController is returned.
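With Unity, the registration rules driving this resolution chain would look something like the following sketch (type names follow the example above):

using Microsoft.Practices.Unity;

var container = new UnityContainer();
container.RegisterType<IPersonRepository, PersonRepository>();
container.RegisterType<IPersonService, PersonService>();

// Resolving the controller walks the chain shown above:
// PersonRepository -> PersonService -> PersonController.
var controller = container.Resolve<PersonController>();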
*For this to work you must have only one public constructor for the given class, where each argument is a dependency that needs to be resolved (your examples nailed this down). If the type of a dependency is not registered with the framework, the resolution will fail. If there are multiple public constructors, the resolution will also fail (there is no sure way to determine which one to use), unless you register which constructor should be used (usually via attributes, but it depends on the DI framework in place). Some DI frameworks (like Unity) might allow you to have no constructor at all (which defaults to an empty, parameterless constructor) and have dependencies as public properties marked with a dependency attribute. I suggest not using this method, as it provides no way for a consuming class to know what dependencies the class needs (without using Reflection to inspect all properties and see which ones are marked as dependencies), which will in turn cause a myriad of NullReferenceExceptions.

NHibernate Unit Testing Mocking/In Memory Database

I am trying to learn how to mock my generic repository so I can unit test all my services.
I'm using Fluent NHibernate to handle data access and Ninject for dependency injection (I'm not interested in testing that).
My repository interface looks like:
public interface IRepository<TEntity> where TEntity : class
{
    IQueryable<TEntity> GetAll();
    TEntity Get(int key);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(int id);
}
And the actual repository looks like:
public class GenerRepository<TEntity> : IRepository<TEntity> where TEntity : Entity
{
    protected ISession Session { get { return NHibernateHelper.OpenSession(); } }

    public IQueryable<TEntity> GetAll() { return Session.Query<TEntity>(); }
    public TEntity Get(int key) { return Session.Get<TEntity>(key); }
    public void Insert(TEntity entity) { Session.Save(entity); }
    public void Update(TEntity entity) { Session.Update(entity); }
    public void Delete(int id) { Session.Delete(Session.Load<TEntity>(id)); }
}
All my services take the created repository in and use it.
I've read so many articles on how to do this, but none are simple or well explained, so any advice on creating a test generic repository, or on mocking it, is welcome. I would also be interested in creating an in-memory database, but how do I set up the Fluent NHibernate configuration in my test project without editing code in my real project?
Is it possible just to make the generic repository hit a list of TEntity rather than the database or an in-memory database?
Thanks for reading and I look forward to the advice.
My answer should/could be a comment, maybe, because I would like to tell you: do not do it. Do not waste your time creating a fake of the data to be returned from persistence, and do not invest your time taking data from a client and putting it into some virtual DB in memory.
You need to be sure that your services (consuming the repository) can really serialize/render the real data, and deserialize/persist the changed data. And that really requires real data.
Rather, spend some time creating scripts which will populate the test data: the data which you can expect in your tests when doing business validation, service data serialization...
Also take a look here: Ayende: NHibernate Unit Testing. An extract:
When using NHibernate we generally want to test only three things,
that properties are persisted, that cascade works as expected and that
queries return the correct result. In order to do all of those, we
generally have to talk to a real database, trying to fake any of those
at this level is futile and going to be very complicated.
A note: some time ago, we used to wrap all the tests in Transaction Begin() and Rollback(), which looked good. But we realized that a lot of stuff, because of a missing Flush() call, was not tested all the way down (e.g. setting not-null).
I have to agree with Radim that unit testing NHibernate code by mocking the NHibernate functionality is, in most cases, not what you want to do.
If you want to test complex business logic which is based on data you retrieve via NHibernate, then mocking is perfectly fine.
But to test whether your mappings, data retrieval and persistence work, you have to test against a real database.
If you target MSSQL Server, I would not use another type of database. Instead there is SQL Server Express, which offers the same core engine as the full server.
SQL Server Express can optionally be installed with LocalDB. This allows you to attach .mdf files via the connection string, which more or less instantiates an instance of SQL Server...
I used that for integration testing and it works really nice.
Create a database file in your unit test project
Depending on your model (code first/db first), let NHibernate create the schema, or otherwise simply populate the schema into that database file
Add the file to the deployment items of your test settings so that the file gets copied to the test target directory
Generate a connection string which uses the copied database file.
Example connection string: Data Source=(LocalDB)\v11.0;AttachDbFileName=[whateverthepathis]\DatabaseFileName.mdf;Initial Catalog=DatabaseName;Integrated Security=True;MultipleActiveResultSets=True
Run your tests
This way your tests will run with an empty database every time, and you will have reproducible integration tests without the need for a real server where you would have to create or reset the DB every time...
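To answer the original question about configuring Fluent NHibernate in the test project without editing the real project: the test assembly can build its own session factory against the attached database file. A minimal sketch, assuming your entities and mappings live in one assembly (the type passed to AddFromAssemblyOf and the file path placeholder are illustrative):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Tool.hbm2ddl;

ISessionFactory sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008.ConnectionString(
        @"Data Source=(LocalDB)\v11.0;AttachDbFileName=[whateverthepathis]\DatabaseFileName.mdf;Integrated Security=True"))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Entity>()) // any type from your mapping assembly
    .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(false, true)) // create the schema in the empty test DB
    .BuildSessionFactory();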
There are a couple of ways to achieve this:
Use a real database for testing, with scripts to set up and revert the database. With this approach it takes time and effort to create and maintain those scripts whenever the database changes.
Use a real database, and wrap each test in a transaction scope (start the transaction, persist and run the test, and once all is done roll the transaction back). This is a really good approach and I use it on a large-scale project. One problem with it, however, is that running the tests takes a lot of time (I have around 3500 tests and it takes a total of 40 minutes to run them all).
Use fake repositories (holding an internal list of entities) for business logic tests, and use the actual repositories to verify the mappings. This approach requires additional effort to create and maintain the fake repositories, but the same tests executed against the actual repositories can be executed against the fakes to verify that the fakes work, and test execution is much faster. A sketch of such a fake repository follows.
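A fake generic repository really can just wrap a list; a minimal sketch against the IRepository<TEntity> interface above, assuming the Entity base class exposes an int Id property:

using System.Collections.Generic;
using System.Linq;

public class FakeRepository<TEntity> : IRepository<TEntity> where TEntity : Entity
{
    private readonly List<TEntity> _items = new List<TEntity>();

    public IQueryable<TEntity> GetAll() { return _items.AsQueryable(); }
    public TEntity Get(int key) { return _items.FirstOrDefault(e => e.Id == key); }
    public void Insert(TEntity entity) { _items.Add(entity); }
    public void Update(TEntity entity) { /* the list already holds the same reference */ }
    public void Delete(int id) { _items.RemoveAll(e => e.Id == id); }
}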

How to use Breeze with Generic Unit of Work and Repositories?

Using this:
https://genericunitofworkandrepositories.codeplex.com/
and the following set of blog posts:
http://blog.longle.net/2013/05/11/genericizing-the-unit-of-work-pattern-repository-pattern-with-entity-framework-in-mvc/
We are trying to use those repositories with Breeze, since it handles client-side JavaScript and OData very well.
I was wondering how we could use these with Breeze to handle overriding the BeforeSaveEntity correctly.
We have quite a bit of business logic that needs to happen during the save (modifying properties like ModifiedBy, ModifiedTime, CreatedBy, etc.), but when we change those they aren't updated by Breeze, so we have to re-query after the save (we've tried manually mapping the changes back, but that requires us to duplicate all of the business logic).
Our second option was to check the type of each entity, request the correct repository for it, handle the save internally, and then do a new get request on the client to get the updated information. That is chatty, though, so we were hoping there is a better way. What would be the correct way of updating these objects while bypassing Breeze's save, without returning an error or having to re-get the data afterward?
Any examples of Breeze with Business Logic during the save would be very helpful, especially if it happens in a service, repository or something else besides directly in the BeforeSaveEntity method.
This is many questions rolled into one and each is a big topic. The best I can do is point you in some directions.
Before I get rolling, let me explain why you're not seeing the effects of setting properties like ModifiedBy, ModifiedTime, CreatedBy, etc. The EFContextProvider does not update every property of the modified entities, but rather only those properties mentioned in the EntityInfo.OriginalValuesMap, a dictionary of the property names and original values of just the properties that have changed. If you want to save a property that is only set on the server, just add it to the original values map:
var map = entityInfo.OriginalValuesMap; // the EntityInfo passed to BeforeSaveEntity
map["ModifiedBy"] = null;   // the original value does not matter
map["ModifiedTime"] = null;
Now Breeze knows to save these properties as well and their new values will be returned to the client.
Let's return to the bigger picture.
Breeze is first and foremost a client-side JavaScript library. You can do pretty much whatever you want on the server side and make Breeze happy about it, as long as your server speaks HTTP and JSON.
Writing a server that provides all the capabilities you need is not trivial no matter what technology you favor. The authors of Breeze offer some .NET components out of the box to make your job easier, especially when you choose the Web API, EF and SQL Server stacks.
Our .NET demos typically throw everything into one web application. That's not how we roll in practice. In real life we would never instantiate a Breeze EFContextProvider in our Web API controller. That controller (or multiple controllers) would delegate to an external class that is responsible for business logic and data access, perhaps a repository or unit-of-work (UoW) class.
Repository pattern with Breeze .NET components
We tend to create separate projects for the model (POCOs usually), data access (ORM), and web (Web API plus client assets). You'll see this kind of separation in the DocCode Sample and also in John Papa's Code Camper sample, the companion to his Pluralsight course "Building Apps with Angular and Breeze".
Those samples also demonstrate an implementation of the repository pattern that blends the responsibilities of multiple repositories and UoW in one class. This makes sense for the small models in these samples. There is nothing to stop you from refactoring the repositories into separate classes.
We keep our repository class in the same project as the EF data access material as we see no particular value in creating yet another project for this small purpose. It's not difficult to refactor into a separate project if you're determined to do so.
Both the Breeze and Code Camper samples concentrate on Breeze client development. They are thin on server-side logic. That said, you will find valuable clues for applying custom business logic in the BeforeSaveEntities extension point in the "NorthwindRepository.cs" and "NorthwindEntitySaveGuard.cs" files in the DocCode sample. You'll see how to restrict saves to certain types and certain records of those types based on the user who is making the request.
The logic can be overwhelming if you try to channel all save changes requests through a single endpoint. You don't have to do that. You could have several save endpoints, each dedicated to a particular business operation that is limited to insert/updating/deleting entities of just a few types in a highly specific manner. You can be as granular as you please. See "Named Saves" in the "Saving Entities" topic.
Have it your way
Now there are a gazillion ways to implement repository and UoW patterns.
You could go the way set forth by the post you cited. In that case, you don't need the Breeze .NET components. It's pretty trivial to wire up your Web API query methods (IQueryable or not) to repository methods that return IQueryable (or just objects). The Web API doesn't have to know if you've got a Breeze EFContextProvider behind the scenes or something completely different.
Handling the Breeze client's SaveChanges request is a bit trickier. Maybe you can derive from ContextProvider or EFContextProvider; maybe not. Study the "ContextProvider.cs" documentation and the source code, especially the SaveChanges method, and you'll see what you need to do to keep Breeze client happy and interface with however you want to handle change-set saves with your UoW.
Assuming you change nothing on the client-side (that's an assumption, not a given ... you can change the save protocol if you want), your SaveChanges needs to do only two things:
Interpret the "saveBundle" from the client.
Return something structurally similar to the SaveResult
The saveBundle is a JSON package that you probably don't want to unpack yourself. Fortunately, you can derive a class from ContextProvider that you use simply to turn the saveBundle into a "SaveMap", a dictionary of EntityInfo objects that's pretty much what anyone would want to work with when analyzing a change-set for validation and save.
The following might do the trick:
using System;
using System.Collections.Generic;
using System.Data;
using Breeze.ContextProvider;
using Newtonsoft.Json.Linq;

public class SaveBundleToSaveMap : ContextProvider
{
    // Never create a public instance
    private SaveBundleToSaveMap() { }

    /// <summary>
    /// Convert a saveBundle into a SaveMap
    /// </summary>
    public static Dictionary<Type, List<EntityInfo>> Convert(JObject saveBundle)
    {
        var dynSaveBundle = (dynamic) saveBundle;
        var entitiesArray = (JArray) dynSaveBundle.entities;
        var provider = new SaveBundleToSaveMap();
        var saveWorkState = new SaveWorkState(provider, entitiesArray);
        return saveWorkState.SaveMap;
    }

    // override abstract members but DO NOT USE ANY OF THEM
}
Then it's up to you how you make use of the "SaveMap" and dispatch to your business logic.
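For example, a Web API save endpoint could unpack the bundle and hand the map to your own business logic; a minimal sketch (the controller method and the _unitOfWork.ValidateAndSave call are hypothetical names, not Breeze API):

[HttpPost]
public SaveResult SaveWithValidation(JObject saveBundle)
{
    // Turn the raw JSON change-set into a SaveMap keyed by entity type...
    var saveMap = SaveBundleToSaveMap.Convert(saveBundle);

    // ...then dispatch to your own validation / repositories / unit-of-work.
    return _unitOfWork.ValidateAndSave(saveMap); // hypothetical
}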
The SaveResult is a simple structure:
public class SaveResult {
    public List<Object> Entities; // each of the entity types you serialize to the client
    public List<KeyMapping> KeyMappings;
    public List<Object> Errors;
}

public class KeyMapping {
    public String EntityTypeName;
    public Object TempValue;
    public Object RealValue;
}
Use these classes as is or construct your own. Breeze client cares about the JSON, not these types.

Data Access Layer

How can we create a generic data access layer that can be used by any ASP.NET application with different data source providers or web services?
Can we create a data access layer for an application that consumes a web service?
You might look into the Repository Pattern. Repository is a facade that presents persisted objects as though they are in a collection in memory. Whichever provider you choose to get data is hidden behind the Repository interface.
IRepository with LINQ to SQL
Code Project Tutorial
A sample by Fredrik Kalseth
You have plenty of options! :-)
You mention you want to use the data access layer (DAL) from asp.net and web services. No problem.
Basically, what you need to figure out is a basic design you want to follow, and encapsulate the DAL into its own assembly which can be used / referenced from various consumers.
There are numerous ways of doing this:
create a Linq-to-SQL mapping for your tables, if you're using SQL Server as your backend, and use the Linq-to-SQL entities and methods
create a repository pattern (for each "entity", you have an "EntityRepository" class which can be used to retrieve entities, e.g. EntityRepository.GetByID(int id), or EntityRepository.GetByForeignKey(string fk), or whatever)
use some other means of accessing the data (NHibernate, your own ADO.NET based mapper)
you could actually also use webservice calls as your data providers
Your biggest challenge is to define a standard way of doing things, and sticking to it.
See some articles - maybe they'll give you an idea:
Creating a Data Access Layer in .NET - Part 1
Building a DAL using Strongly Typed TableAdapters and DataTables in VS 2005 and ASP.NET 2.0
Try the tutorials at www.asp.net:
DataAccess
Whoa, there are a ton of resources out there. The best advice to start is to find a pattern that you feel comfortable with and stick to it for the project; there is nothing worse than changing your mind three-quarters of the way in.
Two that I have found and like to use are the Repository and Provider patterns. The Repository pattern just makes sure you have standard access to your repositories, like your store catalog or CMS system. You create an interface that, in my case, exposes sets of IQueryable; the objects in the data model are just standard C# classes with no extra fluff: POCOs (Plain Old CLR Objects).
public interface ICMSRepository {
    IQueryable<ContentSection> GetContentSections();
    void SaveContentSection(ContentSection obj);
}
Then just implement the interface for your different providers, like a LINQ to SQL context, making sure to return the POCO objects as queryable. The nice thing about this is that you can then make extension methods off of the IQueryable to get what you need easily. Like:
public static IQueryable<ContentSection> WithID(this IQueryable<ContentSection> qry, int ID) {
    // filter the queryable down to the requested ID
    return from c in qry where c.ID == ID select c;
}

// Allows you to chain repository and filter to delay SQL execution
ICMSRepository _rep = new SqlCMSRepository();
var sec = _rep.GetContentSections().WithID(1).SingleOrDefault();
The nice thing about using the interface approach is the ability to test and mock it, or to dependency-inject your preferred storage at run time.
Another method I have used, and one used a lot in the ASP.NET framework itself, is the provider model. This is similar, except that instead of an interface you create a singleton abstract class; your implementations on top of the abstract class then define the storage access means (XML, flat file, SQL, MySql, etc.). The base abstract class is also responsible for creating its singleton based on configuration. Take a look at this for more info on the provider model.
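A rough sketch of that idea, with illustrative names and without the full System.Configuration.Provider plumbing:

using System.Linq;

public abstract class CMSProvider
{
    private static CMSProvider _instance;

    public static CMSProvider Instance
    {
        get
        {
            // In the real pattern the concrete type is read from configuration;
            // it is hard-coded here for brevity.
            return _instance ?? (_instance = new SqlCMSProvider());
        }
    }

    public abstract IQueryable<ContentSection> GetContentSections();
}

public class SqlCMSProvider : CMSProvider
{
    public override IQueryable<ContentSection> GetContentSections()
    {
        // talk to SQL Server here
        throw new System.NotImplementedException();
    }
}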
Otherwise you can look at this similar question.
IRepository of T is a generally good approach. This interface should expose GetAll, GetByID, Insert, Delete and Submit methods.
But it's important to keep your business logic out of the Repository class, and put it into a custom logic/service class.
Also, if you return IQueryable from your GetAll method, which you often see in various implementations of IRepository, the UI layer can also query against that IQueryable interface. But querying of the object graph should remain in the Repository layer. There's a lot of debate around this issue; a sketch of the interface is below.
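A sketch of that interface, with the method set described above (Submit commits pending changes):

using System.Linq;

public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T GetByID(int id);
    void Insert(T entity);
    void Delete(T entity);
    void Submit(); // commit pending changes to the underlying store
}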
Look at using the Interfaces:
IDbConnection
IDbCommand
IDbDataAdapter
IDataReader
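Coding against these interfaces, rather than the concrete SqlClient types, keeps the data access layer provider-agnostic and easy to mock. A minimal sketch (the Person entity and table are illustrative):

using System.Data;

public static Person FindPersonById(IDbConnection connection, int id)
{
    // Only the abstract ADO.NET interfaces are used, so any provider
    // (or a test double) can be passed in; the connection is assumed open.
    using (IDbCommand cmd = connection.CreateCommand())
    {
        cmd.CommandText = "SELECT Id, Name FROM Person WHERE Id = @id";

        IDbDataParameter p = cmd.CreateParameter();
        p.ParameterName = "@id";
        p.Value = id;
        cmd.Parameters.Add(p);

        using (IDataReader reader = cmd.ExecuteReader())
        {
            if (!reader.Read()) return null;
            return new Person { Id = reader.GetInt32(0), Name = reader.GetString(1) };
        }
    }
}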
