I am working on a few applications which connect to an MS Access database backend (.mdb) to read/insert/update records.
Everything is working fine, but I noticed that my db operations were quite slow. The backend is accessed by other users, but I still get the issue when querying a copy of the access file which no one else connects to.
I managed to narrow this down so that I can now see the offending code is the line
connection.Close()
Called on an open OleDbConnection that has just executed a query, e.g.:
var con = new OleDbConnection(connectionString);
con.Open();
var query = "SELECT * FROM subGRCReceived WHERE GRVNo=#grv";
var args = new DynamicParameters();
args.Add("#grv", grvNumber);
// Using Dapper
var pallets = (List<Pallet>)con.Query<Pallet>(query, args);
con.Close(); // This is often taking between 7-10 seconds
I can confirm that this occurs whether I use a using block, con.Close(), or con.Dispose(), and using or not using Dapper makes no difference.
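For reference, this is the shape of the using-block variant (a sketch based on the snippet above; note that Dapper will open and close the connection itself when handed a closed one, and OLE DB only supports positional parameters, which Dapper maps from its `?name?` pseudo-positional markers):

```csharp
using System.Collections.Generic;
using System.Data.OleDb;
using System.Linq;
using Dapper;

// Sketch: connectionString and grvNumber come from the surrounding code.
static List<Pallet> GetPallets(string connectionString, string grvNumber)
{
    using (var con = new OleDbConnection(connectionString))
    {
        return con.Query<Pallet>(
            "SELECT * FROM subGRCReceived WHERE GRVNo = ?grv?",
            new { grv = grvNumber }).ToList();
        // Dispose() runs here and still pays the same Close() cost.
    }
}
```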
I did notice that this only seems to happen with web-based projects (ASP.NET MVC or a WCF SOAP service) and not console applications. The issue is intermittent, but occurs frequently enough to be a pain for the user (especially when navigating to a page involves 2-3 DB queries, as the page can then take as long as 20 seconds to load).
The problem does not lie with the code itself, as I am able to host the same application on my laptop on the same network as the server and the speed is perfect (~200ms per request). See the specs of the 2 machines below:
Laptop Details:
Processor : Intel Core i7-6700HQ CPU @ 2.60GHz
RAM : 16.0 GB
OS : Windows 10 x64
Server Details:
Processor : Intel Xeon CPU E5-2603 v4 @ 1.70GHz (2 processors)
RAM : 32.0 GB
OS : Windows Server 2012 x64
Setup
32-bit MS Access 2016
Microsoft ACE OLEDB Engine 2016
64-bit OS
What I have tried
Moved the database to the same folder as the application
Disabled the antivirus (ESET)
Increased the MaxBufferSize key value from 0 in the Access Connectivity Engine registry settings (does this need a restart?)
Measured the time it takes to run GC.Collect() before calling Close(), to rule out the garbage collector
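The narrowing-down described above can be done with a Stopwatch around each stage (a sketch, assuming the same connection string as before; the query step is elided):

```csharp
using System;
using System.Data.OleDb;
using System.Diagnostics;

static void TimeStages(string connectionString)
{
    var sw = Stopwatch.StartNew();
    var con = new OleDbConnection(connectionString);
    con.Open();
    Console.WriteLine("Open:  " + sw.ElapsedMilliseconds + " ms");

    // ... run the query here ...

    sw.Restart();
    con.Close();   // the suspect stage: 7-10 s on the server
    Console.WriteLine("Close: " + sw.ElapsedMilliseconds + " ms");
    con.Dispose();
}
```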
Workarounds
Threading
I tried calling Close() from a new thread, which seemed to work for the first request, but when I access the application again I get unhandled Win32 errors on the server (even though I wrapped the thread and connection.Close() calls in try/catch). I suspect this fails because the thread may take 7 seconds to close the connection, but the IIS worker process is terminated before that, so resources that Close() needs may already be gone. It would be nice to get this working, but I understand that this is bad practice in MVC, and it does not actually solve the issue anyway.
Persistent Connection
I could also keep a single OleDbConnection open throughout the session. I did this with the WCF service (one connection per request) and it works fine; however, I get the feeling it won't work quite as well with ASP.NET MVC, and after doing a bit of research it looks like this is not a good idea.
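A possible middle ground (my suggestion, not something from the question): OLE DB session pooling, enabled via the `OLE DB Services` connection-string keyword, keeps the underlying session alive after Close() so that reopening is cheap, while the code keeps the open-late/close-early pattern. The file path below is a placeholder:

```csharp
// "OLE DB Services=-1" asks the OLE DB service components to enable
// all services, including session pooling, for this provider.
var connectionString =
    @"Provider=Microsoft.ACE.OLEDB.16.0;" +
    @"Data Source=C:\data\backend.mdb;" +   // placeholder path
    @"OLE DB Services=-1;";
```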
I have been struggling with this for a week now and it's driving me crazy. Does anyone have any advice at all?
Related
I have a .NET CF 3.5 C# console application running on Windows CE 6.0 R3 with an ARM4i processor and 64 MB RAM, with a SQL Server CE 3.5 database on the device.
I am using a 3-tier architecture:
My project
BLL
DAL
In the DAL, I only create a SqlCeConnection object, call Open() and Close() on it, and call Dispose() in the finally block, without making any transaction or executing any queries. See the code snippet below.
try
{
    lock (_executeScalar)
    {
        using (_myConnection = new SqlCeConnection("Getting Connection String from App.config"))
        {
            _myConnection.Open();
            _myConnection.Close();
        }
    }
}
catch (...) { }
finally
{
    // Note: the using block above already closes and disposes the
    // connection, so this finally block is redundant.
    if (_myConnection.State != ConnectionState.Closed)
        _myConnection.Close();
    _myConnection.Dispose();
}
From my app I call the above snippet via BLL to DAL in an infinite while loop to check for a memory leak. I also used the devhealth60 tool to take memory snapshots and observed that every 2-2.5 minutes my application's heap memory grows by about 5 KB (without executing any query such as SELECT or INSERT), as does physical memory, while available virtual memory does not increase.
Kindly suggest how to deal with a SQL Server CE 3.5 database in a .NET CF application without a memory leak. The .sdf database file is used very frequently by 3 different apps in my project, and all the apps run continuously, so after 1-2 days the device stops functioning and requires a hard reboot.
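One commonly recommended pattern for SQL Server CE (a suggestion, not verified against this app) is to keep one extra connection open for the lifetime of the process, because the CE engine is loaded on the first Open() and unloaded on the last Close(), which is expensive to repeat in a tight loop:

```csharp
using System.Data.SqlServerCe;

static class CeEngineHolder
{
    // Opened once at startup and held until shutdown; its only job is
    // to keep the SQL CE engine loaded between real data operations.
    private static SqlCeConnection _keepAlive;

    public static void Startup(string connectionString)
    {
        _keepAlive = new SqlCeConnection(connectionString);
        _keepAlive.Open();
    }

    public static void Shutdown()
    {
        if (_keepAlive != null)
            _keepAlive.Dispose();
    }
}
```

Real data access still uses short-lived connections as before; only this one holder connection stays open.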
Any help will be appreciated.
Thanks in advance,
Vijay
I have developed an ASP.NET MVC web application and deployed it on an IIS 8 server. In the application I use a list to store online users and display them on a page, using the following code:
if (HttpRuntime.Cache["LoggedInUsers"] != null)
{
    List<string> loggedInUsers = (List<string>)HttpRuntime.Cache["LoggedInUsers"];
    if (!loggedInUsers.Contains(model.UserName))
    {
        loggedInUsers.Add(model.UserName);
        HttpRuntime.Cache["LoggedInUsers"] = loggedInUsers;
    }
}
For some reason, the list gets cleared every night, and when I look for the active users it is empty.
Is this something that has to be dealt with in IIS 8, or is there a better way to implement online-user tracking, perhaps using a database table?
IIS can recycle your application pool (for several reasons, including idling with no requests, too much memory use, etc.). At that point your application is unloaded and then loaded again later, hence your cached values are gone.
Second, do you have any code that at some point prunes old entries from the cache? If not, you have a memory leak, as the cache will continue to grow indefinitely (and thereby trigger an application pool recycle).
If you do have pruning code (so the cache is actively managed to avoid indefinite growth) and you need its contents to survive pool restarts, you have a few options:
Use database. Simply have a table of active users and add/delete there.
Pro: survives even unexpected crashes of app, iis, and even machine itself.
Con: Slow due to db access and db contention point possibilities.
Put code in your application start / end event handlers to serialize contents to a file on end, and deserialize on start.
Pro: faster than db. works during graceful shutdowns.
Con: will not survive an unexpected crash.
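A sketch of the second option, assuming the cache holds the List<string> of user names from the question (the file path is a placeholder; HostingEnvironment.MapPath is used because no request context exists during Application_End):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    private const string FilePath = "~/App_Data/loggedInUsers.txt"; // placeholder

    protected void Application_Start()
    {
        // Restore the user list saved by the previous app-domain shutdown.
        var path = HostingEnvironment.MapPath(FilePath);
        if (File.Exists(path))
            HttpRuntime.Cache["LoggedInUsers"] =
                new List<string>(File.ReadAllLines(path));
    }

    protected void Application_End()
    {
        // Persist the user list before the app domain is torn down.
        var users = HttpRuntime.Cache["LoggedInUsers"] as List<string>;
        if (users != null)
            File.WriteAllLines(HostingEnvironment.MapPath(FilePath),
                               users.ToArray());
    }
}
```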
Your site probably shuts down after a certain amount of time when there is no activity. Look at the IIS settings for application pools (more specifically, search for "Set idle-timeout to never in IIS").
I have an ASP.NET MVC 4 site running on a Xeon E7540 @ 2 GHz (2 cores) with 8 GB RAM, on 64-bit Windows Server 2008 R2 Standard.
The application pool is using .NET 4.0 with Integrated Managed Pipeline Mode.
I've decorated my Index function with OutputCaching:
[OutputCache(CacheProfile = "ContentPageController.Index")]
and the following in the Web.config
<add name="ContentPageController.Index" duration="60" varyByParam="none" location="Server" />
The Index action accesses an MS SQL database. I have SQL Profiler running on the DB server, and I can see that if I refresh the same page from a browser within a 60-second period, the database is hit only once, which indicates that the OutputCache is working.
However when I load test the site generating a rush of 1-50 users within 60 seconds (graph attached) the OutputCache fails to work almost immediately after the test starts, I can see in the SQL Profiler the queries hitting the database while I would have expected only the first request to hit the database.
About 15 seconds into the test, the web server CPU started fluctuating between 85% and 100%, while the private memory of the worker process did not increase, staying below the 149 MB mark after starting at 147 MB before the test.
I'm completely clueless, any idea what I should be looking for?
Many thanks!
If you see multiple DB hits and want to prevent this (i.e. run the query only once and cache the result), you have to synchronize access to this method (lock the threads).
This ensures that only the first request hits the DB, while all following requests use the cache.
In C#, roughly:

private static readonly object _cacheLock = new object();
private static object _cache;

if (_cache == null)
    lock (_cacheLock)
    {
        // re-check: hit when a 2nd thread passed the first if
        if (_cache == null)
            _cache = AccessDb();
    }
return _cache;
If this doesn't solve your issue, you will have to investigate a little further...
What is the returned output of the site when the response times drop?
It seems to return an error?
You also have several timeouts...
What is the code that hits the database?
I'm working on a team project that reads data from a MSSQL server. We are using an asynchronous call to fetch the data.
// using blocks added: the original snippet never disposed the
// connection, command, or reader.
using (SqlConnection conn = new SqlConnection(
    System.Configuration.ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString))
using (SqlCommand cmdData = new SqlCommand("get_report", conn))
{
    cmdData.CommandType = CommandType.StoredProcedure;
    conn.Open();
    IAsyncResult asyResult = cmdData.BeginExecuteReader();
    // EndExecuteReader blocks until the result set is available
    using (SqlDataReader drResult = cmdData.EndExecuteReader(asyResult))
    {
        table.Load(drResult);
    }
}
The project itself uses TFS source control with gated check-ins, and we have verified that both computers are running the exact same version of the project. We are also using the same user login and executing the stored procedure with the exact same parameters (which are not listed for brevity).
The stored procedure itself takes 1:54 to return 42000 rows under SQL Server Management Studio. While running on Windows 7 x86, the .NET code takes roughly the same amount of time to execute as on SSMS, and the code snippet above executes perfectly. However, on my computer running Windows 7 x64, the code above encounters an error at the EndExecuteReader line at the 0:40 mark. The error returned is "Invalid Operation. The connection has been closed."
Adding cmdData.CommandTimeout = 600 allows the execution to proceed, but the data takes over 4 minutes to be returned, and we are at a loss as to explain what might be going on.
Some things we considered: my computer has .NET 4.5 Framework installed, is running 64-bit OS against 32-bit assemblies, may be storing information in the local project file that isn't being synchronized to the TFS server. But we can't seem to figure out exactly what might actually be causing the disparity in times.
Anyone have any ideas as to why this disparity exists or can give me suggestions of where to look to isolate the problem?
The "Invalid Operation" error is raised when EndExecuteReader is called more than once for a single command execution, or when the End method is mismatched with its Begin method.
The title pretty much says it all. Some caveats are:
I need to be able to do it in C#
It needs to be possible from a remote server (i.e., running on one server, checking IIS on another)
Needs to be close to real-time (within 1 second)
Can use WMI calls
I've tried watching the log file, but it turns out that isn't nearly close enough to real-time.
Thanks!
EDIT: I put this in a comment on Tom's answer, but it's more visible here:
I was able to look for changes using this counter:
var perf = new PerformanceCounter("ASP.NET Apps v2.0.50727", "Requests Total", "_LM_W3SVC_[IIS-Site-ID]_ROOT", "[Server-Name]");
How about reading the ASP.NET requests/sec performance counter on the remote machine?
The System.Diagnostics.PerformanceCounter class has a constructor which takes a machine name.
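A polling sketch along those lines, assuming the counter category from the snippet above exists on the target machine (the instance and machine names are placeholders, and reading remote counters requires the appropriate permissions):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class RequestWatcher
{
    static void Main()
    {
        // Placeholder instance/machine names: adjust for your IIS
        // version and site ID.
        var perf = new PerformanceCounter(
            "ASP.NET Apps v2.0.50727", "Requests Total",
            "_LM_W3SVC_1_ROOT", "MyRemoteServer");

        float last = perf.NextValue();
        while (true)
        {
            Thread.Sleep(500);                 // poll twice per second
            float current = perf.NextValue();
            if (current != last)
                Console.WriteLine((current - last) + " new request(s)");
            last = current;
        }
    }
}
```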