Log EF query time to database - C#

I want to log EF query (or transaction) times back to the database so that I can monitor performance in a live system. Note that I know more than one query can happen within my UOW. That is fine; I just want to be able to tell which areas of the system are slowing down, etc.
What is the best way for me to do this? My initial thought was to do it in UnitOfWork.Dispose() so that every query would automatically be logged once completed. But this smells a bit to me, because I'd have to call Save() to persist the information, but what if the caller didn't want to save?
Is there another, better way I can automatically log the query time to the database?
protected virtual void Dispose(bool disposing)
{
    if (this.logQueryToDatabase)
    {
        var timeTaken = DateTime.UtcNow - this.start;
        LogPerformance(this.callingFnName, timeTaken.TotalMilliseconds);
    }
    this.ctx.Dispose();
}

If you know which actions to measure, you can create a simple class to handle this. All you need to do is wrap it around the action.
The profiler class:
public class Profiler : IDisposable
{
    private readonly string _msg;
    private readonly Stopwatch _sw;

    public Profiler(string msg)
    {
        _msg = msg;
        _sw = Stopwatch.StartNew();
    }

    public void Dispose()
    {
        _sw.Stop();
        LogPerformance(_msg, _sw.ElapsedMilliseconds);
    }
}
The usage:
using (new Profiler("Saving products"))
{
    db.SaveChanges();
}
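LogPerformance itself isn't shown above; one possible sketch, assuming a dedicated LoggingContext and a PerformanceLog entity (both names are illustrative, not from the original code), writes the timing row through its own short-lived context, so it is persisted regardless of whether the caller ever saves their own unit of work:
public static void LogPerformance(string operation, double elapsedMs)
{
    // Separate, short-lived context: the log row is committed on its own,
    // independent of the caller's unit of work and its Save() decision.
    using (var logCtx = new LoggingContext())
    {
        logCtx.PerformanceLogs.Add(new PerformanceLog
        {
            Operation = operation,
            ElapsedMilliseconds = elapsedMs,
            LoggedAtUtc = DateTime.UtcNow
        });
        logCtx.SaveChanges();
    }
}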

Related

Does Apache Ignite offer transactions across multiple CacheStores?

I'm working on an application using an Ignite.Net cache in front of an Oracle database.
I read that I can write to multiple caches at once safely using Ignite transactions (https://apacheignite-net.readme.io/v1.5/docs/transactions#Cross-cache transactions).
I also read that each cache can have its own CacheStore that writes to the underlying database, but I've yet to find any documentation that explains how I should implement the CacheStore classes so that the database writes are safe across the whole Ignite transaction.
I've seen information on SessionEnd and CacheStoreSession (https://apacheignite-net.readme.io/v2.6/docs/persistent-store#section-sessionend-), but these don't mention multiple CacheStores.
The following article explains how transactions are handled for 3rd-party persistence, but again it only talks of a single Cache/CacheStore: https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-transaction-handling-level-3rd-party
Can anyone advise how this works (assuming it does), or point me to further examples/documentation?
For a definitive answer (appreciate your time, @alamar), I've spoken with one of the nice people at GridGain and can confirm it is possible to safely perform transactions across multiple CacheStores, where all stores write to the same database, without data inconsistency. It's not done via a mechanism specifically coded into Ignite, as I had wondered, but can be implemented safely via a simple shared database connection.
For this to work you need to:
Make your caches transactional (AtomicityMode = CacheAtomicityMode.Transactional, WriteThrough = true).
Share a single database connection between the data stores (either inject it via the CacheStoreFactory or use a singleton).
In all write operations on the CacheStores, write through the shared database session but do not commit. Mark the session as requiring a commit (your own boolean flag).
Implement SessionEnd (https://apacheignite-net.readme.io/docs/persistent-store#section-sessionend-) in each of your CacheStores. The implementation should call Commit on your shared database connection if it has not already been called (check the boolean flag from the previous step, and reset it after committing). You could always encapsulate that logic in your database connection class.
A simplified code example:
public class SharedDatabaseSession
{
    // Singleton instance, implied by GetInstance() below.
    private static readonly SharedDatabaseSession instance = new SharedDatabaseSession();

    private bool commitRequired;
    private DatabaseConnection databaseConnection;

    // ....

    public void Write( /*xyz*/)
    {
        databaseConnection.Write( /*xyz*/);
        commitRequired = true;
    }

    public void Commit()
    {
        if (commitRequired)
        {
            databaseConnection.Commit();
            commitRequired = false;
        }
    }

    public static SharedDatabaseSession GetInstance()
    {
        return instance;
    }
}
public class FirstCacheStore : CacheStoreAdapter<int, int>
{
    private SharedDatabaseSession database = SharedDatabaseSession.GetInstance();

    /// ......

    public override void Write(int key, int val)
    {
        database.Write( /*xyz*/);
    }

    public override void SessionEnd(bool commit)
    {
        if (commit)
        {
            database.Commit();
        }
    }
}
public class SecondCacheStore : CacheStoreAdapter<int, int>
{
    private SharedDatabaseSession database = SharedDatabaseSession.GetInstance();

    /// ......

    public override void Write(int key, int val)
    {
        database.Write( /*xyz*/);
    }

    public override void SessionEnd(bool commit)
    {
        if (commit)
        {
            database.Commit();
        }
    }
}
Have you tried it?
My expectation is that it should technically be supported; but since a Cache Store uses two-phase commit, multiple Cache Stores would effectively need a "three-phase commit", and there is no such thing, so you can expect data inconsistency in edge cases.
The happy path should, however, work OK.

Is this a bad way of using DbContext for EF CodeFirst?

I have one base controller class which has the following DbContext. Instead of using a using statement for each piece of database work, can I rely on this? So far the app runs as expected. I am not sure if the Dispose part is really needed.
private static Context _database;

public static Context Db
{
    get
    {
        if (_database == null)
        {
            _database = new Context();
        }
        return _database;
    }
}

protected override void Dispose(bool disposing)
{
    if (_database != null)
    {
        _database.Dispose();
    }
    base.Dispose(disposing);
}
No. It's not clear what class is being disposed of here, but if the context ever got disposed it would crash every time after that: you have to set _database = null after calling Dispose, or you'll be using it in a disposed state. If you want to do any kind of multithreading this won't work either, and if more than one user hits it at the same time it will crash. You also won't be able to unit test the database access with a static. And the context will cache all the data you read, so you'll get stale data after a while unless you are the only user.
Automatic disposal per request is very convenient; don't make it a manual task.
Just don't use a static context in any way; use a using statement for every request.
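A minimal sketch of the per-request pattern (the controller, action, and Products set are illustrative, not from the original code):
public class ProductsController : Controller
{
    public ActionResult Index()
    {
        // A fresh context per request: no shared state across users,
        // no stale cached entities, and disposal is automatic.
        using (var db = new Context())
        {
            var products = db.Products.ToList();
            return View(products);
        }
    }
}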

Keeping connections to web services open the entire application time

I just have a question related to "best practice". If this question isn't constructive, then just vote it down :)
I had a discussion with a colleague a few days ago; we have two completely different philosophies when it comes to best practice regarding open connections to a web service, COM object, database, etc.
I prefer wrapping the connection code in a class that implements IDisposable and letting that class handle everything related to the connection. A short example:
public class APIWrapper : IDisposable
{
    public APIWrapper(...)
    {
        DataHolder = new DataHolder(...);
        /// Other init methods
        Connect();
    }

    public [Webservice/Database/COM/etc.] DataHolder { get; private set; }

    public void Connect()
    {
        DataHolder.Connect(...);
    }

    public void Disconnect()
    {
        DataHolder.Disconnect();
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (disposing)
        {
            if (DataHolder != null)
            {
                Disconnect();
                DataHolder = null;
            }
        }
    }
}
And I will use it like this in my data controller:
public class UserController
{
    ....

    public IList<User> getUsers()
    {
        using (APIWrapper datalayer = new APIWrapper(...))
        {
            var users = datalayer.DataHolder.GetUsers();
            /// Map users to my entity of users
            return mappedUsers;
        }
    }
}
And my colleague's version would look like this:
public class UserController
{
    protected [Webservice/Database/COM/etc.] DataHolder { get; set; }

    public UserController()
    {
        DataHolder = new DataHolder(...);
        DataHolder.Connect(...);
    }

    public IList<User> getUsers()
    {
        var users = DataHolder.GetUsers();
        /// Map users to my entity of users
        return mappedUsers;
    }

    /// Who should call this?
    public void Dispose()
    {
        DataHolder.Disconnect();
    }
}
The code above is just a simplified example (written in the Stack Overflow text editor), but I think it shows the two philosophies.
So, the first example will open and close the connection on each call. The second example will hold the connection open for a longer amount of time.
What is "generally" best practice, in your opinion?
I would recommend disposing of any unmanaged resources as soon as possible, as you do in your example. Garbage collection will get there eventually, but why wait? There's a comprehensive answer here: Proper use of the IDisposable interface.
Specifically regarding SQL Server: the connection pool has an upper limit on connections, either a default or one defined in your connection string. If you don't close the connections you open, you will exhaust the pool.
I agree with James Thorpe's comment that this might depend on the cost of creating the resource, but the examples you mention (DB connection, web service call) shouldn't be particularly expensive.
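On the cost point: with ADO.NET specifically, opening and closing per operation is cheap because connections are pooled; Dispose returns the physical connection to the pool rather than tearing it down. A minimal sketch (the connection string and query are illustrative only):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users", conn))
{
    // Open late, close early: Dispose hands the connection back to the pool,
    // so this does not create a new physical connection on every call.
    conn.Open();
    var userCount = (int)cmd.ExecuteScalar();
}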

unit of work - I don't need to use transactions?

If I use Microsoft's implementation of the unit of work pattern from this tutorial:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
public class UnitOfWork : IDisposable
{
    private SchoolContext context = new SchoolContext();
    private GenericRepository<Department> departmentRepository;
    private GenericRepository<Course> courseRepository;

    public GenericRepository<Department> DepartmentRepository
    {
        get
        {
            if (this.departmentRepository == null)
            {
                this.departmentRepository = new GenericRepository<Department>(context);
            }
            return departmentRepository;
        }
    }

    public GenericRepository<Course> CourseRepository
    {
        get
        {
            if (this.courseRepository == null)
            {
                this.courseRepository = new GenericRepository<Course>(context);
            }
            return courseRepository;
        }
    }

    public void Save()
    {
        context.SaveChanges();
    }

    //......
}
I don't need to use transactions when I add related items? For example, when I add an order and its order positions to the database, I don't need to start a transaction, because if something goes wrong then Save() won't persist anything, yes? Am I right?
_unitOfWork.OrdersRepository.Insert(order);
_unitOfWork.OrderPositionsRepository.Insert(orderPosition);
_unitOfWork.Save();
SaveChanges itself is transactional. Nothing happens at the database level when you call Insert, which, based on the tutorial, merely calls Add on the DbSet. Only once SaveChanges is called on the context does the database get hit, and everything that happened up to that point is sent in one transaction.
You need explicit transactions if you have multiple SaveChanges calls in one method, or a chain of method calls using the same context. Then you can roll back across those multiple SaveChanges calls when your final update fails.
An example would be multiple repositories, each wrapping CRUD for an entity under the unit of work (i.e. a generic class). You may have many functions inserting and saving in each repository, yet at the end you may find an issue which forces you to roll back the previous saves, e.g. in a service layer that needs to hit many repositories and execute a complex operation.
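A minimal sketch of that multiple-SaveChanges case, using TransactionScope (from System.Transactions) around the question's own unit of work; the split into two Save() calls is purely illustrative:
using (var scope = new TransactionScope())
{
    _unitOfWork.OrdersRepository.Insert(order);
    _unitOfWork.Save();

    _unitOfWork.OrderPositionsRepository.Insert(orderPosition);
    _unitOfWork.Save();

    // Without Complete(), disposing the scope rolls back both saves.
    scope.Complete();
}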

How to ensure that a system level operation is atomic? any pattern?

I have a method which internally performs different sub-operations in order, and on failure of any sub-operation I want to roll back the entire operation.
My issue is that the sub-operations are not all database operations. They are mainly system-level changes, like adding something to the Windows registry, creating a folder at a specified path and setting permissions, etc. The sub-operations can be more than this.
I want to do something like this:
CreateUser()
{
    CreateUserFtpAccount();
    CreateUserFolder();
    SetUserPermission();
    CreateVirtualDirectoryForUser();
    ....
    ....
    ....
    // and many more
}
If the last operation fails, I want to roll back all of the previous operations.
So, what is the standard way to do this? Is there a design pattern to handle it?
Note: I'm using C#/.NET.
Here's one way to do it:
Using the command pattern, you can create undoable actions. With each operation, you register the related commands, so that you can undo the executed commands when a failure condition occurs.
For example, this might all belong in a transaction-like context object that implements IDisposable and is put in a using block. The undoable actions would be registered on this context object. On dispose, if the transaction was not committed, "undo" is carried out for all registered commands. Hope it helps. The downside is that you may have to convert some methods to classes, but that might be a necessary evil.
Code sample:
using (var txn = new MyTransaction())
{
    txn.RegisterCommand(new CreateUserFtpAccountCommand());
    txn.RegisterCommand(new CreateUserFolderCommand());
    txn.RegisterCommand(new SetUserPermissionCommand());
    txn.RegisterCommand(new CreateVirtualDirectoryForUserCommand());
    txn.Commit();
}

class MyTransaction : IDisposable
{
    public void RegisterCommand(UndoableCommand command) { /**/ }
    public void Commit() { /* Runs all registered commands */ }
    public void Dispose() { /* Executes undo for all registered commands if not committed */ }
}

class UndoableCommand
{
    public UndoableCommand(Action action) { /**/ }
    public void Execute() { /**/ }
    public void Undo() { /**/ }
}
Update:
You mentioned that you have hundreds of such reversible operations. In that case, you can take a more functional approach and get rid of UndoableCommand completely, registering delegates instead, like this:
using (var txn = new MyTransaction())
{
    txn.Register(() => ftpManager.CreateUserAccount(user),
                 () => ftpManager.DeleteUserAccount(user));
    txn.Register(() => ftpManager.CreateUserFolder(user, folder),
                 () => ftpManager.DeleteUserFolder(user, folder));
    /* ... */
    txn.Commit();
}

class MyTransaction : IDisposable
{
    public void Register(Action operation, Action undoOperation) { /**/ }
    public void Commit() { /* Runs all registered operations */ }
    public void Dispose() { /* Executes undo for all registered and attempted operations */ }
}
As a note, you'd need to be careful with closures with this approach.
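The method bodies above are left as stubs; below is a minimal sketch of how MyTransaction could be filled in, under my own assumptions (not from the original answer): operations run when Commit is called, and Dispose undoes the ones that completed, in reverse order, if Commit never finished. It needs using System; and using System.Collections.Generic;.
class MyTransaction : IDisposable
{
    private readonly List<Tuple<Action, Action>> steps = new List<Tuple<Action, Action>>();
    private int completed;  // how many operations ran successfully
    private bool committed;

    public void Register(Action operation, Action undoOperation)
    {
        steps.Add(Tuple.Create(operation, undoOperation));
    }

    public void Commit()
    {
        foreach (var step in steps)
        {
            step.Item1(); // run the operation; an exception aborts the loop
            completed++;
        }
        committed = true;
    }

    public void Dispose()
    {
        if (committed)
            return;
        // Undo only the operations that actually ran, most recent first.
        for (var i = completed - 1; i >= 0; i--)
        {
            steps[i].Item2();
        }
    }
}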
I think your best bet would be to encapsulate the execution and reversal of each step of the process. It makes for much more readable code than nested try-catch blocks. Something like:
public interface IReversableStep
{
    void DoWork();
    void ReverseWork();
}

public void DoEverything()
{
    var steps = new List<IReversableStep>()
    {
        new CreateUserFTPAccount(),
        new CreateUserFolder(),
        ...
    };

    var completed = new List<IReversableStep>();
    try
    {
        foreach (var step in steps)
        {
            step.DoWork();
            completed.Add(step);
        }
    }
    catch (Exception)
    {
        // if it is necessary to undo the most recent actions first,
        // just reverse the list:
        completed.Reverse();
        completed.ForEach(x => x.ReverseWork());
    }
}
Both NTFS and the registry support enlistment in KTM and MS DTC transactions (and, by extension, TransactionScope). However, the transactional file system has been deprecated due to complexity, and may not be present in some future version of Windows.
Example of using transactional file system and registry from C#
Transactional NTFS
If not everything fits in a transaction, I would certainly look to the command history patterns presented in other answers to this question.
I'm not aware of any standard pattern for this type of thing, but I'd probably do it with nested try/catch blocks myself, with appropriate rollback code for the non-database operations in the catch blocks. Use a TransactionScope to ensure all the database operations are transactional.
For example:
using (var scope = new TransactionScope())
{
    try
    {
        DoOperationOne();
        try
        {
            DoOperationTwo();
            DoDataBaseOperationOne(); // no need for try/catch here: TransactionScope covers the database work
            try
            {
                DoOperationThree();
            }
            catch
            {
                RollBackOperationThree();
                throw;
            }
        }
        catch
        {
            RollBackOperationTwo();
            throw;
        }
    }
    catch
    {
        RollbackOperationOne();
        throw;
    }
    scope.Complete(); // the last thing that happens, and only if there are no errors!
}
