How does the Session Per Request pattern work? - c#

We're working on a project using ASP.NET MVC4. In one of our team's meetings, the idea came up of using the Session per Request pattern.
I did a little searching and found some questions here on SO saying, in general, that this pattern (if it may be called that) is suited to ORM frameworks.
A little example:
//GET Controller/Test
public ActionResult Test()
{
    //open database connection
    var model = new TestViewModel
    {
        Clients = _clientService.GetClients(),
        Products = _productService.GetProducts()
    };
    //close database connection
    return View(model);
}
Without session per request:
//GET Controller/Test
public ActionResult Test()
{
    var model = new TestViewModel
    {
        Clients = _clientService.GetClients(), // Open and close database connection
        Products = _productService.GetProducts() // Open and close database connection
    };
    return View(model);
}
Doubts
To give some context: how does session per request work?
Is it a good solution?
What is the best way to implement it? Opening the connection at the start of the request, in the web layer?
Is it recommended for projects with complex queries / operations?
Can it cause concurrency problems when transactions are involved?

It looks like you mean "DB context per request". You can achieve it with the Unit of Work pattern.
You can see a simple implementation in this article by Radu Pascal: https://www.codeproject.com/Articles/243914/Entity-Framework-context-per-request
Another, more complex implementation (for Entity Framework and NHibernate) can be found in ASP.NET Boilerplate: http://www.aspnetboilerplate.com/Pages/Documents/Unit-Of-Work
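For illustration, the core of the context-per-request idea can be sketched with an IHttpModule that creates one context when a request starts and disposes it when the request ends (a minimal sketch, not the article's exact code; MyDbContext and the module wiring are assumptions):

using System.Web;

// Register this module in web.config (<httpModules> or <modules>).
public class DbContextPerRequestModule : IHttpModule
{
    private const string ContextKey = "__dbContextPerRequest";

    public void Init(HttpApplication app)
    {
        // One context created at the start of every request...
        app.BeginRequest += (s, e) =>
        {
            HttpContext.Current.Items[ContextKey] = new MyDbContext();
        };

        // ...and disposed at the end of that same request.
        app.EndRequest += (s, e) =>
        {
            var ctx = HttpContext.Current.Items[ContextKey] as MyDbContext;
            if (ctx != null) ctx.Dispose();
        };
    }

    // Repositories and services resolve the same instance during the request.
    public static MyDbContext Current
    {
        get { return (MyDbContext)HttpContext.Current.Items[ContextKey]; }
    }

    public void Dispose() { }
}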

In a web environment (web application, WCF, ASP.NET Web API) it is a good idea to use one DB context per request. Why? Because requests are short-lived, at least in principle (otherwise your application will have slow response times), so there is no point in creating many DB contexts.
For example, if you are using EF as your ORM and you call the Find method, EF will first search for whatever you are asking for in the local cache of the DB context. If it is found, it is simply returned. If not, EF goes to the database, pulls it out, and keeps it in the cache. This can be really beneficial in scenarios where you query for the same items multiple times while your web application fulfils the request. If you create a context, query something, and close the context, there is a possibility you will make many trips to the database that could have been avoided.
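As a small illustration (a sketch; the ShopContext type and the key value are invented):

using (var context = new ShopContext()) // an assumed EF DbContext
{
    var first = context.Customers.Find(42);  // goes to the database
    var second = context.Customers.Find(42); // served from the context's local cache
    // first and second are the same tracked instance; no second query is issued.
}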
To elaborate further, imagine you create many new records: a customer record, an order record, then after some work you create some discount records for the customer based on whatever criteria, then some other records, and then some order-item records. With the single-context-per-request approach, you can keep adding them and call SaveChanges at the end. EF will do this in one transaction: either they all succeed or everything is rolled back. This is great because you get transactional behaviour without even creating transactions. Without a single context per request, you need to take care of such things yourself.
That does not mean everything has to happen in one transaction with the single-context approach: you can call SaveChanges as many times as you want within the same HTTP request.
Consider other possibilities, where you pull a record, later decide to edit it, and then edit it some more: with a single context, the edits are all applied to the same tracked object and saved in one shot.
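A rough sketch of that scenario (the context and entity names are invented for illustration):

using (var context = new ShopContext()) // in practice, the per-request context
{
    var customer = new Customer { Name = "Alice" };
    context.Customers.Add(customer);

    var order = new Order { Customer = customer };
    context.Orders.Add(order);

    // ...later, based on whatever criteria...
    context.Discounts.Add(new Discount { Customer = customer, Percent = 10 });
    context.OrderItems.Add(new OrderItem { Order = order, ProductId = 7, Quantity = 2 });

    // One SaveChanges call: EF wraps all the inserts in a single database
    // transaction, so either they all succeed or everything is rolled back.
    context.SaveChanges();
}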
In addition to the above, if you still want to read more then you may find this helpful. Also, if you search for Single Context Per-Request, you will find many articles.

Related

What is the best way to use the DbContext when I have a lot of operations (former stored procedures) running async?

I migrated a lot of stored procedures (we want to get rid of them), coding them in LINQ using Entity Framework Core and SQL Server. First let me explain the 2 projects we have in the backend solution:
Repository: we use the repository pattern with UnitOfWork for simple operations and, of course, CRUD.
Manager: we use Manager to store all the more complicated queries; you could say the real business logic is there.
So far it's okay, but I only use one instance of my DbContext in both projects, so I'm wondering if it's better for us to do something like this in each operation instead:
using (var context = new DBContext())
{
    // Perform data access using the context
}
My goal is to make sure we don't get performance issues from using the same context for too long, and I don't want to keep tracking modifications to data across all operations. Also, say an operation contains a lot of modifications: if an error/exception is thrown, I don't want to keep anything; I want a complete "rollback". We are working in async, by the way. First time posting here, thanks in advance.
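For reference, the per-operation pattern being weighed here looks roughly like this in EF Core (a sketch reusing the question's DBContext name; the entities and method are placeholders):

public async Task ApplyDiscountAsync(int customerId)
{
    // One short-lived context per operation: change tracking begins and
    // ends inside this block, so nothing accumulates across operations.
    using (var context = new DBContext())
    {
        var customer = await context.Customers.FindAsync(customerId);
        customer.Discount = 0.1m;

        // A single SaveChangesAsync runs in a single transaction: if it
        // throws, nothing is persisted, and the tracked changes vanish
        // when the context is disposed, which is the "complete rollback"
        // the question asks for.
        await context.SaveChangesAsync();
    }
}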

Handling transaction data in ASP.NET Web API

I want to build a server for a Point of Sale system on top of ASP.NET Web API.
My plan is to have one controller for bills.
For handling bills from the Web API post to SQL Server I am planning to use the micro ORM PetaPoco.
One bill is going to be written to the database across three tables.
PetaPoco pushes me to have three "pocos", one for each table.
I want to write these three pocos to the database inside a transaction.
How should I design my controller and classes so that they look nice and also work nicely? Should I:
Make my controller accept three (3) classes as parameters? Is this possible in ASP.NET Web API at all? Can I deserialize three different classes from one request?
Make my controller accept one class, and on the server side build from it the three pocos that get written to the database server? Can someone post what that class, which is going to be split into three parts, would look like?
Make my controller have three methods for posting the separate pieces of data (bills-header, bills-payment, bills-articles) one by one? Perhaps it will be hard in this case to have one transaction spanning three separate calls.
Any other approach?
I would definitely go with option 2, since your web client should be agnostic of the implementation details; whether you are persisting to one table or 3 tables shouldn't really matter to the client.
The controller or service method would look like this (obviously the naming is not great; you'll have to modify it according to your domain lingo):
public void AddBill(BillDTO bill)
{
    // Map the DTO to your entities
    var bill1 = mapper1.Map(bill);
    var bill2 = mapper2.Map(bill);
    var bill3 = mapper3.Map(bill);
    // Open the transaction (PetaPoco's GetTransaction)
    using (var scope = db.GetTransaction())
    {
        // Do transacted updates here
        db.Save(bill1);
        db.Save(bill2);
        db.Save(bill3);
        // Commit
        scope.Complete();
    }
}
You should read about the DTO pattern; it answers some of your questions:
1. Web API supports it.
2. That sounds like a DTO, so it is a good solution, as you hide your persistence model from the consumer.
3. There's no point forcing the consumer to make three calls; each call has its own 'infrastructure' cost, so it is better to pay that cost once instead of three times.
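To make option 2 concrete, the single class the client posts could look something like this (a sketch; the property names are guesses at the domain, not the asker's schema):

// One DTO posted by the client; the server maps it to the three
// PetaPoco pocos (bills-header, bills-payment, bills-articles).
public class BillDTO
{
    // Header fields
    public string BillNumber { get; set; }
    public DateTime IssuedAt { get; set; }

    // Payment fields
    public decimal AmountPaid { get; set; }
    public string PaymentMethod { get; set; }

    // Article lines, one row per item in the bills-articles table
    public List<BillArticleDTO> Articles { get; set; }
}

public class BillArticleDTO
{
    public int ArticleId { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}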

Best practices to avoid cross-thread/process database access issues

I am developing an MVC web application, which means I am creating views, models, and view models. I use LINQ to SQL as the database provider; I have implemented a custom Unit of Work pattern, and also the Data Source, Repository and Service patterns, which look perfect (in my implementation) and are completely separated from direct SQL code. Actually, separated from any database: the design allows unit testing, so I can test my application with in-memory data sources with no effect on the real database.
And eventually I am stuck with one problem: I have no protection against cross-thread (or cross-process, because as far as I know IIS can create more than one app domain for a single web app) operations.
For example, I have a table with some records. Every now and again a web request happens which (inside a controller, then a service, then a repository) picks the row with the maximum of, let's say, the TicketId column, and then inserts another row into that table with (that column value + 1).
If two or more threads or processes do the same thing, duplicated values can appear in the database. Some time ago, when my web app was somewhat smaller, I used direct SQL code and a simple UPDLOCK in SELECT statements, which, inside a TransactionScope using block, locked the record I was modifying (or anything else), making all other database clients wait until I finished.
With all these patterns I forgot one thing:
How do I actually implement database multi-access protection?
Without any direct SQL code.
How do I actually implement database multi-access protection?
It's the database engine's job to do that. It's your job to ensure your app fails gracefully should there be any issues reported back. See Locking in the Database Engine.
For example, I have a table with some records. And every now and again a web request happens which (inside controller and then service and then repository) picks the SQL table's row with the maximum of, let's say, the TicketId column and then inserts in that table another row with (that column value + 1).
I get the impression here that you don't have much faith in your database, considering you are trying to replicate its behaviour. Set your field to auto-increment and that should solve your issue. If you do have to implement your own manual auto-increment then you need some form of locking, because what you essentially have is a race condition, e.g.
private static object lockObject = new Object();

lock (lockObject)
{
    // ... DB stuff
}
For cross-process locking, you would need to look at possibly a named Mutex.
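A minimal sketch of that (the Mutex class lives in System.Threading; the mutex name is arbitrary, and the "Global\" prefix makes it visible to every process on the machine, which covers multiple IIS worker processes but not a web farm):

using (var mutex = new Mutex(false, @"Global\TicketCounterLock"))
{
    mutex.WaitOne();
    try
    {
        // ... read max TicketId and insert the next value ...
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}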
Why not use a single table to represent the ticket number, and then use a transaction with the Serializable transaction isolation level?
int GetTicketNumber(bool shouldIncrement)
{
    // Serializable isolation: concurrent callers block instead of reading
    // the same counter value. (The TicketCounter table/column and the
    // connectionString are assumed; needs System.Transactions and
    // System.Data.SqlClient.)
    var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var sql = shouldIncrement
            ? "UPDATE TicketCounter SET Value = Value + 1; SELECT Value FROM TicketCounter;" // select and update
            : "SELECT Value FROM TicketCounter;"; // just select
        int ticket = (int)new SqlCommand(sql, conn).ExecuteScalar();
        scope.Complete();
        return ticket;
    }
}

Cache Entity Framework data and track its changes?

What I want is pretty simple conceptually, but I can't figure out the best way to implement it.
In my web application I have services which access repositories which access EF, which interacts with the SQL Server database. All of these are instantiated once per web request.
I want an extra layer between the repositories and EF (or between the services and the repositories?) which statically keeps track of objects being pulled from and pushed to the database.
The goal, assuming DB access only happens through the application, is to know for a fact that unless some repository accesses EF and commits a change, the object set didn't really change.
An example would be:
A repository invokes the method GetAllCarrots();
GetAllCarrots() performs a query on SQL Server retrieving a List<Carrot>; if nothing else happens in between, I would like to prevent this query from actually being made against SQL Server each time (regardless of whether it comes from a different web request; I want to be able to handle that scenario).
Now, if a call to BuyCarrot() adds a Carrot to the table, I want that to invalidate the static cache for Carrots, which would make GetAllCarrots() query the database once again.
What are some good resources on database caching?
You can use LinqToCache for this.
It allows you to use the following code inside your repository:
var queryTags = from t in ctx.Tags select t;
var tags = queryTags.AsCached("Tags");
foreach (Tag t in tags)
{
    ...
}
The idea is to use SqlDependency to be notified when the result of a query changes. As long as the result doesn't change, you can cache it.
LinqToCache keeps track of your queries and returns the cached data when queried. When a notification is received from SQL Server, the cache is reset.
I recommend reading http://rusanu.com/2010/08/04/sqldependency-based-caching-of-linq-queries/ .
I had a similar challenge, and due to EF's usage and restrictions, I decided to implement the cache as an additional service between the client and the server's service, using an IoC container, monitoring all service methods that could affect the cached data.
Of course this is not a perfect solution when you have a farm of servers running the services; if the goal is to support multiple servers, I would implement it using SqlDependency.
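For the single-server case in the question, the same idea can also be sketched without LinqToCache, using MemoryCache with explicit invalidation (the repository and entity types are placeholders taken from the question):

using System.Collections.Generic;
using System.Runtime.Caching;

public class CachedCarrotRepository
{
    private static readonly MemoryCache Cache = MemoryCache.Default;
    private readonly CarrotRepository _inner; // the real EF-backed repository

    public CachedCarrotRepository(CarrotRepository inner) { _inner = inner; }

    public List<Carrot> GetAllCarrots()
    {
        // The static cache outlives the per-request repositories.
        var carrots = (List<Carrot>)Cache.Get("Carrots");
        if (carrots == null)
        {
            carrots = _inner.GetAllCarrots(); // the actual SQL query
            Cache.Set("Carrots", carrots, new CacheItemPolicy());
        }
        return carrots;
    }

    public void BuyCarrot(Carrot carrot)
    {
        _inner.BuyCarrot(carrot);
        Cache.Remove("Carrots"); // any write invalidates the cached set
    }
}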

How to manage session/transaction lifetime for processing many entities

In the project my team is working on, there is a Windows service which iterates through all the entities in a certain table and updates some of their fields based on rules we defined. We use NHibernate as our ORM tool. Currently, we open one session and one transaction for the entire process, which means the transaction is committed after all the entities have been processed. I think this approach isn't good, and I wanted to hear some more opinions:
Should we keep our current way of managing the session, or should we move to a different approach?
One option I thought about is opening a transaction per entity; another suggestion was to open a new session for each entity.
Which approach do you think will work best?
There isn't a single way to do it; it all depends on the specific case.
In the app I'm working on, I have examples of all three approaches, and there's a reason for choosing each one. For example:
The whole process must have transactional atomicity: use a single session and a single transaction.
The process has a lot of common data, but each record in the "master" table can be considered a unit of work: use a single session, multiple transactions.
Processing each record in the master table should be independent of the others (including error handling): use a session per record (see the sketch below).
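For that third case, a session and transaction per record might look like this with NHibernate (a sketch; the entity type and the rule method are placeholders):

foreach (var id in entityIds)
{
    // Each record gets its own session and transaction, so a failure on
    // one record neither rolls back the others nor bloats a shared session.
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        try
        {
            var entity = session.Get<MyEntity>(id);
            ApplyRules(entity); // the service's field-update rules
            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            // log and continue with the next record
        }
    }
}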
