I have a class that uses LINQ to SQL to access the database. Some methods call others. For example:
class UserManager
{
public User[] getList()
{
using(var db = new MyContext())
{
// materialize the results before the context is disposed
return db.Users.Where(item => item.Active == false).ToArray();
}
}
public User[] addUser(string name)
{
using(var db = new MyContext())
{
db.Users.InsertOnSubmit(new User() { Id = Guid.NewGuid(), Name = name, Active = false /* ... */ });
db.SubmitChanges();
}
return getList();
}
...
In the call to addUser I am required to return the new list. (I know as it stands it isn't a great design, but I have eliminated detail for simplicity.) However, the call to getList creates a second data context.
I could pad this out with extra methods, viz:
public User[] getList()
{
using(var db = new MyContext())
return getList(db);
}
public User[] getList(MyContext db)
{
...
}
Then replace my call in addUser so as to keep the same data context.
I seem to see this type of thing a lot in my code, and I am concerned about the cost of creating and releasing all these data contexts. Is it worthwhile putting in the extra work to eliminate the creation and disposal of these contexts?
Microsoft's advice is not to reuse DataContext instances: http://msdn.microsoft.com/en-us/library/bb386929.aspx
Frequently Asked Questions (LINQ to SQL)
Connection Pooling
Q. Is there a construct that can help with DataContext pooling?
A. Do not try to reuse instances of DataContext. Each DataContext maintains state (including an identity cache) for one particular edit/query session. To obtain new instances based on the current state of the database, use a new DataContext.
You can still use underlying ADO.NET connection pooling. For more information, see SQL Server Connection Pooling (ADO.NET).
It is OK to reuse a context for different parts of the same logical operation (perhaps by passing the data context in as an argument), but you shouldn't reuse it much beyond that:
it caches objects; this will grow too big very quickly
you shouldn't share it between threads
once you've hit an exception, it is very unwise to keep reusing it
Etc. So: atomic operations, fine; a long-lived app-wide context, bad.
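As a sketch, reuse within one logical operation could look like this, using the getList(MyContext) overload proposed earlier in the question (names mirror the question's code; SubmitChanges is assumed to be how the insert is persisted):

```csharp
// one logical operation: insert a user, then return the fresh list,
// sharing a single DataContext throughout
public User[] addUser(string name)
{
    using (var db = new MyContext())
    {
        db.Users.InsertOnSubmit(new User { Id = Guid.NewGuid(), Name = name, Active = false });
        db.SubmitChanges();
        return getList(db); // overload that accepts the already-open context
    }
}
```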
What I usually do is create a class that you could call something like DataManager, with all data functions as members. This class creates an instance of MyContext in its constructor.
class DataManager : IDisposable
{
private MyContext db;
public DataManager() {
db = new MyContext();
}
public User[] getList()
{
return db.Users.Where(item => item.Active == false).ToArray();
}
public User[] addUser(string name)
{
db.Users.InsertOnSubmit(new User() { Id = Guid.NewGuid(), Name = name, Active = false /* ... */ });
db.SubmitChanges();
return getList();
}
public void Dispose()
{
db.Dispose();
}
}
You create an instance of this class whenever you are doing a set of operations. In a Controller, for instance, you could have this class as a member. Just don't make a global variable out of it; instantiate it and dispose of it when you are done.
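If DataManager is also made to implement IDisposable (disposing its inner context), scoping it to one set of operations could look like this sketch:

```csharp
// instantiate per set of operations, dispose when done
using (var manager = new DataManager())
{
    var users = manager.addUser("alice");
    // ... further operations sharing the same context ...
} // the underlying MyContext is disposed here
```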
Related
I have one ASP.NET WebForms application for 3 companies. Every company has its own database and its own EDMX model. The structure of the databases is the same. Based on a parameter I check which company it is, and I want to have one DbContext for all the models.
I am very new to Entity Framework and don't know how to make one context for several models.
I have tried to make a class with a method that gives me a DbContext, in which I want to assign one of the model contexts.
I am trying:
public static DbContext holeDbContextWsvWsbSvs()
{
DbContext context = new DbContext();
string verbandKürzel = config.holeVerbandKürzel();
if (verbandKürzel == "wsv")
{
context = new wsvEntities();
}
else if (verbandKürzel == "wsb")
{
context = new wsbEntities();
}
else if (verbandKürzel == "svs")
{
context = new svsEntities();
}
return context;
}
But in Entity Framework 6.0 it seems an empty constructor is not possible!
Is it possible to initialize a null (fake, pseudo) DbContext or something and then change it to the model context?
Can anyone give me a pointer on how I can realize this?
EDIT:
Okay, I achieved changing the context to a model inside the method by giving the constructor some string. But although the context is assigned a specific model inside the method, I am returning a plain DbContext and cannot use the model's properties after that.
Any suggestions? Or do I really have to write IF-ELSEs at every position where I work with the data model, repeating the same logic for each model?
If the structure of the DbContexts is identical and you just want to point at one of three databases, you don't need to declare 3 separate DbContexts; just declare one, then use the constructor argument to nominate the appropriate connection string.
public class AppDbContext : DbContext
{
// Define DbSets etc.
public AppDbContext(string connection)
: base (connection)
{ }
}
Then in your initialization code:
public AppDbContext holeDbContextWsvWsbSvs()
{
string verbandKürzel = config.holeVerbandKürzel();
switch (verbandKürzel)
{
case "wsv":
return new AppDbContext("wsvEntities");
case "wsb":
return new AppDbContext("wsbEntities");
case "svs":
return new AppDbContext("svsEntities");
default:
throw new InvalidOperationException("The configured client is not supported.");
}
}
This assumes you have connection strings called "wsvEntities", "wsbEntities", and "svsEntities" respectively.
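For reference, the matching entries might look like this in Web.config (a sketch; the server and catalog names are placeholders):

```xml
<connectionStrings>
  <add name="wsvEntities"
       connectionString="Data Source=.;Initial Catalog=wsv;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <add name="wsbEntities"
       connectionString="Data Source=.;Initial Catalog=wsb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <add name="svsEntities"
       connectionString="Data Source=.;Initial Catalog=svs;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```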
If the DbContexts are not identical in structure then honestly you likely will have much bigger problems that won't be solved by exposing them via the base DbContext.
I found out this way of creating DbContext instances a few years ago and only updated it slightly.
My code works, but I am wondering if it will cause any problems in the future.
My question is, should I use the "using" statement for my context calls or leave it as is?
This is for RAGEMP, a GTAV modification. Server syncs players and makes calls to the MySQL database when needed.
public class DefaultDbContext : DbContext
{
public DefaultDbContext(DbContextOptions options) : base(options)
{
}
// Accounts table
public DbSet<Account> Accounts { get; set; }
}
public class ContextFactory : IDesignTimeDbContextFactory<DefaultDbContext>
{
private static DefaultDbContext _instance;
public DefaultDbContext CreateDbContext(string[] args)
{
var builder = new DbContextOptionsBuilder<DefaultDbContext>();
builder.UseMySql(@"Server=localhost;
database=efcore;
uid=root;
pwd=;",
optionsBuilder => optionsBuilder.MigrationsAssembly(typeof(DefaultDbContext).GetTypeInfo().Assembly.GetName().Name));
return new DefaultDbContext(builder.Options);
}
public static DefaultDbContext Instance
{
get
{
if (_instance != null) return _instance;
return _instance = new ContextFactory().CreateDbContext(new string[] { });
}
private set { }
}
}
// somewhere else
// create a new Account object
var account = new Account
{
Username = "test",
Password = "test"
};
// Add this account data to the current context
ContextFactory.Instance.Accounts.Add(account);
// And finally insert the data into the database
ContextFactory.Instance.SaveChanges();
There is nothing wrong with this approach if you are keeping your DbContext short lived and not trying to cache them or overly reuse an instance.
However, personally I find this a little verbose. For in-house applications, I tend to keep setup and connection strings in app.config and just use the using statement.
using(var db = new MyContext())
{
var lotsOfStuff = db.SomeTable.Where(x => x.IsAwesome);
//
}
That said, there are really only a few rules you need to abide by (without this being an opinionated answer):
Don't try to overly reuse a DbContext. They are internally cached, and there is little overhead in creating and closing them.
Don't try to hide everything behind layers of abstractions unnecessarily.
Always code for readability and maintainability first, unless you have a need to code for performance.
Update
Maybe I am misunderstanding something but if I am saving changes to
the database more often than not, is my approach then bad? Little
things get updated when something is changed, not big chunk of data
here and there
It depends on how long you are keeping your DefaultDbContext open; if it's only for a couple of queries, yeah, that's fine.
Contexts are designed to be opened and closed fairly quickly; they are not designed to stay open and alive for long periods of time. Doing so will sometimes cause you more issues than not.
Saving to the database often, while it makes perfect sense, is not really the issue here.
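As a sketch of the short-lived alternative for the question's code, you could reuse the question's ContextFactory.CreateDbContext but scope each context to one unit of work instead of going through the static Instance:

```csharp
// create a fresh, short-lived context per unit of work
using (var db = new ContextFactory().CreateDbContext(new string[0]))
{
    var account = new Account { Username = "test", Password = "test" };
    db.Accounts.Add(account);
    db.SaveChanges();
} // disposed here; the next operation builds its own context
```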
I'm trying to create a common function in a C# Webapi project to connect to one of two databases based on the value of an input flag. Each database has the same structure, but different data and each invocation would use the same database throughout.
The rest of my code is database agnostic, so it needs to be able to use a common db object, rather than making the decision every time a call to the database is done.
This is the code I thought would do the trick:
public static dynamic GetDb(string scope = "dev") {
dynamic db;
if (Globals.db == null) {
switch (scope.ToLower()) {
case "tns":
db = new TnsDb();
break;
case "sng":
db = new SngDb();
break;
default:
db = new TnsDb();
break;
}
Globals.db = db;
} else {
db = Globals.db;
}
return db;
}
I'm using the Entity Framework, which has created an access class for each database I've connected to the project and contains a method for each stored procedure I need to access. What I would like to do is to get this common method to return an object representing the appropriate database class.
The problem I'm having is that the method isn't returning a usable object. I've tried using a return type of dynamic (as above), a plain object and DbContext (the parent class of the Entity Framework db class) and a few other things out of desperation, but each time the calling statement receives back an object that has none of the methods present in the Entity Framework database class.
Any ideas of how to dynamically select and return the appropriate object that does retain all the methods will be gratefully received, as I will be able to save at least some of my hair.
You mention that each database has the same structure, yet your function uses two different DbContext types. You cannot solve that just by returning a dynamic object.
Since your databases have the same structure, you could just change the connection string when you initialize the DbContext. If you really want to use separate DbContexts, then you should use an interface which your function returns:
interface IMyDatabase
{
DbSet<Model1> Model1 { get; set; }
DbSet<Model2> Model2 { get; set; }
}
class TnsDb : DbContext, IMyDatabase
{
public DbSet<Model1> Model1 { get; set; }
public DbSet<Model2> Model2 { get; set; }
}
class SngDb : DbContext, IMyDatabase
{
public DbSet<Model1> Model1 { get; set; }
public DbSet<Model2> Model2 { get; set; }
}
Both of your contexts need to implement this interface; then your function could look like this:
public static IMyDatabase GetDb(string scope = "dev") {
switch (scope.ToLower()) {
case "tns":
return new TnsDb();
case "sng":
return new SngDb();
default:
return new TnsDb();
}
}
But you shouldn't be using two separate DbContexts; just one would be enough in your case, with different connection strings.
Also, using static functions for this purpose is not really good; you could use the repository pattern with dependency injection instead.
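A sketch of the single-context alternative: one DbContext class whose constructor picks the connection string by scope (the class name and connection string names here are hypothetical):

```csharp
public class AppDb : DbContext
{
    public DbSet<Model1> Model1 { get; set; }
    public DbSet<Model2> Model2 { get; set; }

    // "name=TnsDb" / "name=SngDb" are assumed connection string names in config
    public AppDb(string scope)
        : base(scope.ToLower() == "sng" ? "name=SngDb" : "name=TnsDb")
    { }
}
```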
Success! I ended up creating an interface that had the same methods as the db class and returned that from the common method.
Here's a snippet from the interface:
public interface IDatabase {
ObjectResult<Method_1_Result> Method_1(Nullable<int> id, string username);
ObjectResult<Method_2_Result> Method_2(string username, string password);
}
and here are the equivalent methods from the db classes:
public partial class TnsDb : DbContext, IDatabase {
public TnsDb()
: base("name=TnsDb")
{
}
public virtual ObjectResult<Method_1_Result> Method_1(Nullable<int> id, string username)
{
return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction<Method_1_Result>("Method_1", idParameter, usernameParameter);
}
public virtual ObjectResult<Method_2_Result> Method_2(string username, string password)
{
return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction<Method_2_Result>("Method_2", usernameParameter, passwordParameter);
}
}
public partial class SngDb : DbContext, IDatabase {
public SngDb()
: base("name=SngDb")
{
}
public virtual ObjectResult<Method_1_Result> Method_1(Nullable<int> id, string username)
{
return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction<Method_1_Result>("Method_1", idParameter, usernameParameter);
}
public virtual ObjectResult<Method_2_Result> Method_2(string username, string password)
{
return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction<Method_2_Result>("Method_2", usernameParameter, passwordParameter);
}
}
Note that I had to delete the get and set bodies from the interface method, as they weren't needed and kept generating errors.
I also suspect that this solution is specific to my situation of needing to connect to two databases with exactly the same schema and wouldn't work if the schemas were slightly different, as they couldn't share the same interface.
The technique also requires you to remember to re-add the reference to the interface every time you re-generate the db class (in this case, the TnsDb class) and to keep it up to date whenever you change any methods in the db class.
Anyway, I hope that helps anyone with the same problem as I had. Thanks to all that helped me solve this.
I'm currently testing an Entity Framework's DbContext using the In-Memory Database.
In order to make tests as atomic as possible, the DbContext is unique per test-method, and it's populated with initial data needed by each test.
To set the initial state of the DbContext, I've created a void SetupData method that fills the context with some entities that I will use in the tests.
The problem with this approach is that the objects that are created during the setup cannot be accessed by the test, because Entity Framework will assign the Ids itself, that are unknown until run-time.
To overcome this problem, I've thought that my SetupData method could become something like this:
public Fixture SetupData(MyContext context)
{
var fixture = new Fixture();
fixture.CreatedUser = new User();
context.Users.Add(fixture.CreatedUser);
context.SaveChanges();
return fixture;
}
public class Fixture
{
public User CreatedUser { get; set;}
}
As you see, it's returning an instance of what I called "Fixture". (I don't know if the name fits well).
This way, SetupData will return an object (Fixture) holding references to the entities, so the test can use the created objects. Otherwise, the objects would be impossible to identify, since the Ids aren't created until SaveChanges is called.
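A test could then consume the returned fixture along these lines (a sketch; _context, _repository, and the NUnit-style assertion are assumptions, not part of the code above):

```csharp
[Test]
public void GetUser_ById_ReturnsTheCreatedUser()
{
    // arrange: seed the in-memory context and capture the created entities
    var fixture = SetupData(_context);

    // act: the EF-generated Id is now available through the fixture
    var result = _repository.GetUser(fixture.CreatedUser.Id);

    // assert
    Assert.That(result, Is.Not.Null);
}
```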
My question is: Is this a bad practice? Is there a better way to reference initial data?
I prefer this approach:
public void SetupData(MyContext context)
{
var user = new User() { Id = Fixture.TEST_USER1_ID, UserName = Fixture.TEST_USER1_NAME };
context.Users.Add(user);
context.SaveChanges();
}
public class Fixture
{
public const int TEST_USER1_ID = 123;
public const string TEST_USER1_NAME = "testuser";
}
Your approach is probably fine, too, but you will probably want to know the user ID somewhere in your tests, and this makes it very easy to specify it in a single known location, so that it won't change if, for instance, you later change your test data and the order in which you add users.
This is not a bad practice. In fact it is a good approach to create readable Given-When-Then tests. If you consider:
splitting your SetupData method
renaming it
possibly changing it to an extension method
public static class MyContextExtensions
{
public static User Given(this MyContext @this, User user)
{
@this.Users.Add(user);
@this.SaveChanges();
return user;
}
public static OtherEntity Given(this MyContext @this, OtherEntity otherEntity)
{
// ...
}
// ...
}
you can then write (a conceptual example, details need to be reworked to match your implementation):
[Test]
public void GivenAUser_WhenSearchingById_ReturnsTheUser()
{
var expectedUsername = "username";
var user = _context.Given(AUser.WithName(expectedUsername));
var result = _repository.GetUser(user.Id);
Assert.That(result.Name, Is.EqualTo(expectedUsername));
}
... and similarly for other entities.
I'm using NHibernate + Fluent to handle my database, and I've got a problem querying for data which references other data. My simple question: do I need to define "BelongsTo" etc. in the mappings, or is it sufficient to define references on one side (see the mapping sample below)? If so, how? If not, please keep reading. Have a look at this simplified example, starting with two model classes:
public class Foo
{
private IList<Bar> _bars = new List<Bar>();
public int Id { get; set; }
public string Name { get; set; }
public IList<Bar> Bars
{
get { return _bars; }
set { _bars = value; }
}
}
public class Bar
{
public int Id { get; set; }
public string Name { get; set; }
}
I have created mappings for these classes. This is really where I'm wondering whether I got it right. Do I need to define a binding back to Foo from Bar ("BelongsTo" etc), or is one way sufficient? Or do I need to define the relation from Foo to Bar in the model class too, etc? Here are the mappings:
public class FooMapping : ClassMap<Foo>
{
public FooMapping()
{
Not.LazyLoad();
Id(c => c.Id).GeneratedBy.HiLo("1");
Map(c => c.Name).Not.Nullable().Length(100);
HasMany(x => x.Bars).Cascade.All();
}
}
public class BarMapping : ClassMap<Bar>
{
public BarMapping()
{
Not.LazyLoad();
Id(c => c.Id).GeneratedBy.HiLo("1");
Map(c => c.Name).Not.Nullable().Length(100);
}
}
And I have a function for querying for Foo's, like follows:
public IList<Foo> SearchForFoos(string name)
{
using (var session = _sessionFactory.OpenSession())
{
using (var tx= session.BeginTransaction())
{
var result = session.CreateQuery("from Foo where Name=:name").SetString("name", name).List<Foo>();
tx.Commit();
return result;
}
}
}
Now, this is where it fails. The return from this function initially looks all fine, with the result found and all. But there is a problem - the list of Bar's has the following exception shown in debugger:
base {NHibernate.HibernateException} = {"Initializing[MyNamespace.Foo#14]-failed to lazily initialize a collection of role: MyNamespace.Foo.Bars, no session or session was closed"}
What went wrong? I'm not using lazy loading, so how could there be something wrong in the lazy loading? Shouldn't the Bars be loaded together with the Foos? What's interesting to me is that the generated query doesn't ask for Bars:
select foo0_.Id as Id4_, foo0_.Name as Name4_ from "Foo" foo0_ where foo0_.Name=#p0;#p0 = 'one'
What's even more odd to me is that if I'm debugging the code, stepping through each line, then I don't get the error. My theory is that it somehow gets time to fetch the Bars during the same session because things are moving more slowly, but I don't know. Do I need to tell it to fetch the Bars too, explicitly? I've tried various solutions now, but it feels like I'm missing something basic here.
This is a typical problem. When using NHibernate or Fluent NHibernate, every class that maps to your data is decorated with a lot of stuff (which is why the members need to be virtual). This all happens at runtime.
Your code clearly shows the opening and closing of a session in a using statement. When debugging, the debugger is so nice (or not) as to keep the session open past the end of the using statement (the clean-up code runs after you stop stepping through). When running normally (not stepping through), your session is correctly closed.
The session is vital in NH. While you are passing on information (the result set), the session must still be open. A normal programming pattern with NH is to open a session at the beginning of the request and close it at the end (with ASP.NET), or to keep it open for a longer period.
To fix your code, either move the opening/closing of the session to a singleton or a wrapper which can take care of that, or move it to the calling code (but after a while this gets messy). Several general patterns exist to fix this; you can look up the NHibernate Best Practices article, which covers them all.
EDIT: Taken to another extreme: the S#arp architecture (download) takes care of these best practices and many other NH issues for you, totally obscuring the NH intricacies for the end-user/programmer. It has a bit of a steep learning curve (includes MVC etc) but once you get the hang of it... you cannot do without anymore. Not sure if it is easily mixed with FluentNH though.
Using FluentNH and a simple Dao wrapper
See the comments for why I added this extra "chapter". Here's an example of a very simple, but reusable and expandable, Dao wrapper for your DAL classes. I assume you have set up your FluentNH configuration and your typical POCOs and relations.
The following wrapper is what I use for simple projects. It uses some of the patterns described above, but obviously not all of them, to keep it simple. This method is also usable with other ORMs, in case you were wondering. The idea is to create a singleton for the session, but still keep the ability to close the session (to save resources) without worrying about having to reopen it. I left out the code for closing the session, but that will only be a couple of lines. The idea is as follows:
// the thread-safe singleton
public sealed class SessionManager
{
ISession session;
SessionManager()
{
ISessionFactory factory = Setup.CreateSessionFactory();
session = factory.OpenSession();
}
internal ISession GetSession()
{
return session;
}
public static SessionManager Instance
{
get
{
return Nested.instance;
}
}
class Nested
{
// Explicit static constructor to tell C# compiler
// not to mark type as beforefieldinit
static Nested()
{
}
internal static readonly SessionManager instance = new SessionManager();
}
}
// the generic Dao that works with your POCO's
public class Dao<T>
where T : class
{
ISession m_session = null;
private ISession Session
{
get
{
// lazy init, only create when needed
return m_session ?? (m_session = SessionManager.Instance.GetSession());
}
}
public Dao() { }
// retrieve by Id
public T Get(int Id)
{
return Session.Get<T>(Id);
}
// get all of your POCO type T
public IList<T> GetAll()
{
return Session.CreateCriteria<T>().List<T>();
}
// get the entities of type T with matching Ids
public IList<T> GetAll(int[] ids)
{
return Session.CreateCriteria<T>().
Add(Expression.In("Id", ids)).
List<T>();
}
// save your POCO changes
public T Save(T entity)
{
using (var tran = Session.BeginTransaction())
{
Session.SaveOrUpdate(entity);
tran.Commit();
Session.Refresh(entity);
return entity;
}
}
public void Delete(T entity)
{
using (var tran = Session.BeginTransaction())
{
Session.Delete(entity);
tran.Commit();
}
}
// if you have caching enabled, but want to ignore it
public IList<T> ListUncached()
{
return Session.CreateCriteria<T>()
.SetCacheMode(CacheMode.Ignore)
.SetCacheable(false)
.List<T>();
}
// etc, like:
public T Renew(T entity);
public T GetByName(T entity, string name);
public T GetByCriteria(T entity, ICriteria criteria);
}
Then, in your calling code, it looks something like this:
Dao<Foo> daoFoo = new Dao<Foo>();
Foo newFoo = new Foo();
newFoo.Name = "Johnson";
daoFoo.Save(newFoo); // if no session, it creates it here (lazy init)
// or:
Dao<Bar> barDao = new Dao<Bar>();
IList<Bar> allBars = barDao.GetAll();
Pretty simple, isn't it? The next step with this idea is to create specific Daos for each POCO, which inherit from the above general Dao class, and use an accessor class to get them. That makes it easier to add tasks that are specific to each POCO, and that's basically what NH Best Practices was about (in a nutshell, because I left out interfaces, inheritance relations, and static vs. dynamic tables).
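Such a POCO-specific Dao could look like this sketch (it assumes Dao&lt;T&gt; exposes its Session to subclasses, e.g. via a protected property, rather than the private field shown above):

```csharp
// a Foo-specific Dao inheriting the generic plumbing
public class FooDao : Dao<Foo>
{
    public IList<Foo> GetByName(string name)
    {
        // assumes Session is protected in Dao<T>
        return Session.CreateCriteria<Foo>()
            .Add(Expression.Eq("Name", name))
            .List<Foo>();
    }
}
```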