I'm building a Visual Studio .NET application in C# with two databases and Entity Framework.
At first I thought it would be better to load the most-used data into static Lists when the app is launched, but that took too long, so now I'm trying to use Tasks. I have one method per table I want to load, and I call them all from a more general method:
public static void fillConstantes()
{
    new Task(FillFiches).Start();
    new Task(FillCustomers).Start();
    new Task(FillEmployees).Start();
    new Task(FillContact1).Start();
    new Task(FillServiceItems).Start();
    new Task(FillServiceLine).Start();
}
The problem is, if I try to access the data from another class before the task has finished, it crashes, obviously.
So I was wondering: is there a good way to check whether the tasks are done, or do I just have to check that the load worked every time I try to retrieve data?
Thanks for your help !
Task t = new Task(() => { });
t.Start();
You can use t.IsCompleted for your purpose.
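A minimal sketch of how this could look in the asker's scenario (the class name Constantes and the Fill methods are placeholders): keep the Task handles returned by Task.Run instead of discarding them, so other code can check IsCompleted or block until the loads finish.

```csharp
using System;
using System.Threading.Tasks;

public static class Constantes
{
    // Combined handle for all load tasks, so callers can wait on it.
    public static Task LoadTask { get; private set; }

    public static void FillConstantes()
    {
        // Task.Run both starts the work and returns the Task,
        // so we keep a handle instead of firing and forgetting.
        LoadTask = Task.WhenAll(
            Task.Run(() => FillFiches()),
            Task.Run(() => FillCustomers()));
    }

    // Stand-ins for the real table-loading methods.
    static void FillFiches() { }
    static void FillCustomers() { }
}
```

A consumer can then call `Constantes.LoadTask.Wait()` (or `await` it) before touching the lists, or test `Constantes.LoadTask.IsCompleted` to decide whether the data is ready yet.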
I'm new to ASP.NET Core, so maybe I'm doing something wrong. Here is my issue:
What I need: after the start page loads, start loading application-specific data from Azure Storage in the background and show its status on the front end.
What I did to achieve this:
public ViewResult Index()
{
    var homeModel = new HomeModel();

    Task.Run(() =>
    {
        // Some specific work to do...
        System.Threading.Thread.Sleep(4000);
    });

    return View(homeModel);
}
When I run this code, the front-end view isn't displayed until the Task has finished, even though it runs on a different managed thread (I also tried the same thing with new Thread(...).Start()).
My question is: what is the reason for this behavior? As I said, I'm new to ASP.NET Core and I haven't worked with previous versions of ASP.NET.
I have some ideas for working around this behavior, but I want to know the right way to do it and understand the reason :)
P.S. I'm using SignalR for updating views from back-end.
Your approach to application startup jobs is wrong. Have a look at the IApplicationLifetime interface. Here is a question about it.
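A hedged sketch of the IApplicationLifetime idea (ASP.NET Core 2.x style; the method LoadDataFromAzureStorage is made up, and later versions use IHostApplicationLifetime instead): register the background load to start once the host is up, rather than inside a controller action.

```csharp
public class Startup
{
    public void Configure(IApplicationBuilder app, IApplicationLifetime lifetime)
    {
        lifetime.ApplicationStarted.Register(() =>
        {
            // Kick off the long-running load in the background once
            // the application has started; requests are not blocked.
            Task.Run(() => LoadDataFromAzureStorage());
        });

        app.UseMvc();
    }

    private void LoadDataFromAzureStorage()
    {
        // Stand-in for the real Azure Storage work; push progress
        // updates to the front end via SignalR as it proceeds.
        System.Threading.Thread.Sleep(4000);
    }
}
```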
ThreadStart starter = delegate { SomeVoid(somevar); };
Thread tr = new Thread(starter);
tr.IsBackground = true;
tr.Start();
tr.Join();
If you need more threads, use a list of threads and loop over them. Sorry if this doesn't help you.
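The "list of threads" idea above can be sketched like this (the worker body is a placeholder; here it just counts completions so the effect is observable):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

static class Workers
{
    // Starts n threads, waits for all of them, and returns how many ran.
    public static int RunAll(int n)
    {
        int completed = 0;
        var threads = new List<Thread>();

        for (int i = 0; i < n; i++)
        {
            // Each thread does its work, then atomically bumps the counter.
            var t = new Thread(() => Interlocked.Increment(ref completed));
            t.IsBackground = true;
            t.Start();
            threads.Add(t);
        }

        // Join blocks until each thread finishes, so after this loop
        // all workers are guaranteed to be done.
        foreach (var t in threads)
            t.Join();

        return completed;
    }
}
```

Calling `Workers.RunAll(4)` starts four workers and returns 4 once they have all finished.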
I think there are a couple of questions embedded in here.
First, all documentation on calling a function from EF Core seems to assume I'm waiting for a result set, e.g.:
_context.Set<MyEntity>().FromSql<MyEntity>("select myDbFunction({0})", myParam);
However, I have a background job I want to run after inserting or updating certain entities (it catalogs some json for faster searching).
This is the approach I was planning to take, or something like it:
private void AsyncQueryCallback(Task QueryTask)
{
    if (QueryTask.Exception != null)
    {
        System.Diagnostics.Debug.WriteLine(QueryTask.Exception.ToString());
    }
}

public void UpdateAccessionAttributes(int AccessionId)
{
    var cancelToken = new System.Threading.CancellationToken();
    var idParam = new Npgsql.NpgsqlParameter("accessionId", NpgsqlTypes.NpgsqlDbType.Integer);
    idParam.Value = AccessionId;
    _context.Database.ExecuteSqlCommandAsync("generate_accession_attributes #accessionId", cancelToken, new[] { idParam }).ContinueWith(AsyncQueryCallback);
}
But, of course, Postgres doesn't support named parameters like this.
I'm also fumbling a bit with error handling. I don't want or need to await; I just need a callback function to log any errors...
Help appreciated, as always!
The easiest way to accomplish what you're trying to do is probably the following:
_context.Database.ExecuteSqlCommand("SELECT generate_accession_attributes({0})", AccessionId);
If you don't want to use async, then call ExecuteSqlCommand (the sync version) instead of ExecuteSqlCommandAsync. You can place the logging line right after that (instead of a callback), and any issue will generate an exception as usual.
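If the call should stay asynchronous, an async/await method with a try/catch is usually cleaner than a ContinueWith callback. A sketch under the question's assumptions (EF6-style `ExecuteSqlCommandAsync` on a Postgres database via Npgsql; `_context` is the existing DbContext field):

```csharp
public async Task UpdateAccessionAttributesAsync(int accessionId)
{
    try
    {
        // Positional {0} placeholders are translated to real parameters,
        // so no named-parameter syntax is needed for Postgres.
        await _context.Database.ExecuteSqlCommandAsync(
            "SELECT generate_accession_attributes({0})", accessionId);
    }
    catch (Exception ex)
    {
        // Fire-and-forget callers still get their errors logged here,
        // which replaces the AsyncQueryCallback from the question.
        System.Diagnostics.Debug.WriteLine(ex.ToString());
    }
}
```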
As the title suggests, I'm having a problem with the first query against a SQL Server database using Entity Framework. I have tried looking for an answer, but no one seems to actually have a solution to this.
The tests were done in Visual Studio 2012 using Entity Framework 6, and I used the T4 views template to pre-compile the views. The database was SQL Server 2008. We have about 400 POCOs (400 mapping files) and only about 100 rows of data in the table.
Below are my test code and results.
static void Main(string[] args)
{
    Stopwatch st = new Stopwatch();
    st.Start();
    new TestDbContext().Set<Table1>().FirstOrDefault();
    st.Stop();
    Console.WriteLine("First Time " + st.ElapsedMilliseconds + " milliseconds");

    st.Reset();
    st.Start();
    new TestDbContext().Set<Table1>().FirstOrDefault();
    st.Stop();
    Console.WriteLine("Second Time " + st.ElapsedMilliseconds + " milliseconds");
}
Test results
First Time 15480 milliseconds
Second Time 10 milliseconds
On the first query EF compiles the model. This can take some serious time for a model this large.
Here are 3 suggestions: http://www.fusonic.net/en/blog/2014/07/09/three-steps-for-fast-entityframework-6.1-first-query-performance/
A summary:
Use a cached DB model store
Generate pre-compiled views
Generate a pre-compiled version of Entity Framework using NGen to avoid JITting
I would also make sure that I compile the application in release mode when doing the benchmarks.
Another solution is to look at splitting the DbContext. 400 entities is a lot, and it would be nicer to work with smaller chunks. I haven't tried it, but I assume it would be possible to build the models one by one, meaning no single load takes 15 s. See this post by Julie Lerman: https://msdn.microsoft.com/en-us/magazine/jj883952.aspx
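The split could look something like this (a sketch; the context and entity names are invented): several small contexts over the same database, each mapping only the subset of the 400 entities one area needs, so no single first query pays the full model-compilation cost.

```csharp
using System.Data.Entity;

// Each bounded context maps only the entities its feature area uses.
public class SalesContext : DbContext
{
    public SalesContext() : base("name=MyDb") { }

    public DbSet<Order> Orders { get; set; }
    public DbSet<Customer> Customers { get; set; }
}

public class HrContext : DbContext
{
    public HrContext() : base("name=MyDb") { }

    public DbSet<Employee> Employees { get; set; }
}
```

Each context compiles its own (much smaller) model on first use, so the startup cost is spread out instead of hitting one 15-second wall.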
With EF Core, you can cheat and load the model early, right after you call services.AddDbContext (you can probably do something similar with EF6 too, but I haven't tested it).
services.AddDbContext<MyDbContext>(options => ...);
var options = services.BuildServiceProvider()
.GetRequiredService<DbContextOptions<MyDbContext>>();
Task.Run(() =>
{
using(var dbContext = new MyDbContext(options))
{
var model = dbContext.Model; //force the model creation
}
});
This creates the DbContext's model on another thread while the rest of the application initializes (and perhaps other warmups run), before the first request, so it is ready sooner. If you need it before it's done, EF Core will simply wait for the model to finish building. The model is shared across all DbContext instances, so it is fine to fire and forget this dummy context.
You can try something like this (it worked for me):
protected void Application_Start()
{
    Start(() =>
    {
        using (EF.DMEntities context = new EF.DMEntities())
        {
            context.DMUsers.FirstOrDefault();
        }
    });
}

private void Start(Action a)
{
    a.BeginInvoke(null, null);
}
This worked for me:
using (MyEntities db = new MyEntities())
{
    db.Configuration.AutoDetectChangesEnabled = false; // <----- trick
    db.Configuration.LazyLoadingEnabled = false;       // <----- trick

    DateTime Created = DateTime.Now;
    var obj = from tbl in db.MyTable
              where DateTime.Compare(tbl.Created, Created) == 0
              select tbl;

    dataGrid1.ItemsSource = obj.ToList();
    dataGrid1.Items.Refresh();
}
If you have many tables that are not being used in C#, exclude them.
Add a partial class with the following code and reference this function in OnModelCreating:
void ExcludedTables(DbModelBuilder modelBuilder)
{
    modelBuilder.Ignore<Table1>();
    modelBuilder.Ignore<Table2>();
    // And so on
}
For me, just using AsParallel() in the first query solved the problem. This runs the query on multiple processor cores (apparently). All my subsequent queries are unchanged; it was only the first one causing the delay.
I also tried pre-generated mapping views (https://learn.microsoft.com/en-us/ef/ef6/fundamentals/performance/pre-generated-views), but this did not improve startup time by much.
I don't think that is a very good solution; plain ADO.NET looks like it performs much better. However, that is just my opinion.
Alternatively, look at these:
https://msdn.microsoft.com/tr-tr/data/dn582034
https://msdn.microsoft.com/en-us/library/cc853327(v=vs.100).aspx
I have a WP8 app with multiple threads (at times, up to 40) that have to get data from a web service and then commit it to the local database.
I have implemented an AutoResetEvent-based pattern where each repository method looks somewhat like this:
public class MySuperAppRepository
{
    public static AutoResetEvent DataAccess = new AutoResetEvent(true);

    public MyFancyObject CreateMyFancyObject(string path, int something)
    {
        DataAccess.WaitOne();
        try
        {
            using (var dbContext = new MySuperAppDataContext(MySuperAppDataContext.DbConnectionString))
            {
                var mfo = new MyFancyObject();
                dbContext.MyFancyObjects.InsertOnSubmit(mfo);
                mfo.Path = path;
                mfo.Something = something;
                dbContext.SubmitChanges();
                return mfo;
            }
        }
        finally
        {
            DataAccess.Set();
        }
    }
}
This is all nice and clean, but as soon as I get multiple threads (as mentioned above), the performance is pathetic. Lots of requests come down and then all sit waiting for the database to be free.
Is there a better alternative? Would using lock(object) improve the performance?
Can you try not creating a new DataContext on every data operation?
Also try some of the best practices mentioned here, in particular:
Enabling fast updates with a version column
One of the easiest ways to optimize the performance of an update operation on a table is to add a version column. This optimization is specific to LINQ to SQL for Windows Phone. For example, in an entity, add the following code.
[Column(IsVersion=true)]
private Binary _version;
I have created a singleton-patterned class that contains some instance variables (Dictionaries) which are very expensive to fill.
This class is used in an ASP.NET MVC 4 project, and the key point is that the data provided by the dictionaries in this singleton is nice to have but not required for the web app to run.
In other words, when we process a web request, the request is enhanced with information from the dictionaries if it's available; if it's not available, that's fine.
So what I would like to do is find the best way to load the data into these dictionaries within the singleton, without blocking web activity while they are filled.
I would normally do this with multithreading, but in the past I have read about and run into problems using multithreaded techniques within ASP.NET. Have things changed in .NET 4 / MVC 4? How should I approach this?
UPDATE
Based on the feedback below and more research, what I am doing now is shown below, and it seems to work fine. Does anyone see any potential problems? In my testing, no matter how many times I call LazySingleton.Instance, the constructor only runs once and returns instantly. I can access LazySingleton.EXPENSIVE_CACHE immediately, although it may not yet contain the values I am looking for (which I test for in my app with a .Contains() call). So it seems to be working...
If I'm only ever writing to the EXPENSIVE_CACHE dictionary from a single thread (the LazySingleton constructor), do I need to worry about thread safety when reading from it in my web app?
public class LazySingleton
{
    public ConcurrentDictionary<string, string> EXPENSIVE_CACHE = new ConcurrentDictionary<string, string>(1, 80000); // writing to cache in only one thread

    private static readonly Lazy<LazySingleton> instance = new Lazy<LazySingleton>(() => new LazySingleton());

    private LazySingleton()
    {
        Task.Factory.StartNew(() => expensiveLoad());
    }

    public static LazySingleton Instance
    {
        get { return instance.Value; }
    }

    private void expensiveLoad()
    {
        // load data into EXPENSIVE_CACHE
    }
}
You may fill your cache repository in any of your web application events, such as Application_Start or Session_Start. Something like this:
<%@ Application Language="C#" %>
<script runat="server">
    void Application_Start(object sender, EventArgs e)
    {
        SingletonCache.LoadStaticCache();
    }
</script>
Hope this is useful.