I have a user reporting this error when they're using my application.
The application is a .NET Winforms application running on Windows XP Embedded, using SQL Server CE 3.5 SP1, and Linq-To-SQL as the ORM. The database itself is located in a subdirectory my application creates in the My Documents folder. The user account is an administrator account on the system. There are no other applications or processes connecting to the database.
For the most part, the application seems to run fine. It starts up, and can load data from and save data to the database. The user accesses the database through the application maybe a couple hundred times a day, and they get this error only intermittently, maybe 3-4 times a day.
In the code itself, all of the calls to the database are using a Linq-To-SQL data context that's wrapped in a using clause. So in other words:
using(MyDataContext db = new MyDataContext(ConnectionString))
{
List<blah> someList = db.SomeTable.Where(/* selection criteria */).ToList();
return someList;
}
That's what pretty much all of the calls to the database look like (with the exception that the ones that save data obviously aren't selecting and returning anything). As I mentioned before, they have no issue 99% of the time but only get the shared memory error a few times a day.
My current "fix" is on application startup I simply read all of the data out of the database (there's not a lot) and cache it in memory and converted my database calls to read from the in-memory lists. So far, this seems to have fixed the problem. For a day and a half now they've reported no problems. But this is still bugging me, because I don't know what would cause the error in the first place.
While the application is accessing the database a few hundred times a day, it's typically not in rapid-fire succession. It's usually once every few minutes at the least. However, there is one use-case where there might be two calls one right after the other, as fast as possible. In other words, something like:
//user makes a selection on the screen
DatabaseCall1();
DatabaseCall2();
Both of those would follow the pattern in the code block above where they create a new context, do work, and then return. But these calls aren't asynchronous, so I would expect the connection would be closed and disposed of before DatabaseCall2 is invoked. However, could it be that something on the SQL Server CE end isn't closing the connection fast enough? It might explain why it's intermittent since maybe most of the time it doesn't have a problem? I should also mention that this exact program without the fix is installed on a few other systems with the exact same hardware and software (they're clones of each other), and users of the other systems have not reported any errors.
I'm stuck scratching my head because I can't reproduce this error on my development machine or a test machine, and answers to questions about this exception here and in other places typically revolve around insufficient user permissions or the database being on a shared network folder.
Check this previous post, I think you will find your answer:
SQL Server CE - Internal error: Cannot open the shared memory region
I have a C# .NET based program which runs 24/7 checking for and processing data in SQL Server. This program runs fine all day long, but a little after 2AM each morning, something happens on my customer's server which causes SQL to report slow I/O. SQL reports something like the following in its logs:
SQL Server has encountered 129 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database id 2. The OS file handle is 0x0000000000000D9C. The offset of the latest long I/O is: 0x00000007ce0000
There are usually several of these messages logged each morning, one after another.
Sometimes my program loses the connection to SQL server and exits, after which it is automatically restarted by a service. Other times my program stops responding and I have to manually kill and restart the process in the morning.
My customer's IT has not been able to identify the cause of the slowdown, so I'm trying to change my program or connection settings to reliably detect the problem and exit, or overcome it in some other way.
I've searched, but can't find anything online related to this sort of issue.
Any help would be greatly appreciated.
I would suggest a slightly different approach, since you said any help would be appreciated.
As I understand it, you do not know what is causing the problem, and in my opinion you should focus on finding out what it is.
In your situation, the best thing would be to ask your customer to run SQL Server Profiler during the time window suspected of causing the problems. This is the tool that captures all SQL Server activity; I have used it many times at work.
As for how to set up SQL Server Profiler, you will easily find the answer on Google. It is quite easy and intuitive.
Be prepared to capture lots of queries, but you can filter them easily too.
Based on the SQL Server Profiler logs you should be able to see what is causing the problem and fix it in your application.
Let me know if this helped.
We have a WCF service developed in C# running in a production environment where it crashes every few hours with no observable pattern. Memory usage will hover at ~250 MB for a while, then all of a sudden memory usage starts going up until it crashes with an OutOfMemoryException at 4 GB (it's a 32-bit process).
We are having a hard time identifying the problem; the exceptions we log come from different places in the code, presumably from whichever request happens to be trying to allocate memory when the limit is hit.
We have taken a memory dump when the process is at 4 GB, and a list of ~750k database objects is in memory when the crash occurs. We have looked at the queries for those objects but can't pinpoint the one that loads the entire table. The service makes calls to the database using EF6.
Another thing to note: this problem never occurred in our preproduction environment. The data in our preproduction database would be sufficient for this to occur there too, if the entire table were being loaded. It's probably a specific call with a specific parameter that triggers this issue, but we can't pinpoint it.
I am out of ideas about what to try next to solve our issue. Is there a tool that can help us in this situation?
Thanks
If you want to capture all your SQL and are using Entity Framework, you can print out queries like this:
Context.Database.Log = s => Debug.Print(s);
If you mess around with that a bit you can get it to output to a variable and save the result to a text file or DB. You would have to wrap it around all DB calls - not sure how big your project is?
Context.Database.Log = null;
turns it off
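If you'd rather persist the output than just print it, one quick and dirty option is to append each statement to a text file. A minimal sketch (the context name and file path are just placeholders):
using System.IO;
using (var context = new MyContext())
{
    // Write every SQL statement EF generates for this context to a log file.
    context.Database.Log = s => File.AppendAllText(@"C:\logs\ef-sql.log", s);
    // ... run your queries here; each statement gets appended to the file ...
}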
I have a C# Winforms application. Part of the application pulls the contents of a SQLite table and displays it to the screen on a datagridview. I seem to have a problem where multiple users/computers are using the application.
When the program loads, it opens a single connection to the SQLite DB engine, which remains open until the user exits the program. On load, it refreshes the table in question and continues to do so at regular intervals. The table correctly updates when one user is using it or if that one user has more than one instance of the program open. If, however, more than one person uses it, the table doesn't seem to reflect changes made by other users until the program is closed and reopened.
An example - the first user (user A) logs in and the table has 5 entries. They add one to it, so there are now 6 entries. User B now logs in and sees 6 entries. User A enters another record, for a total of 7. User B still sees 6 even after the automatic refresh, and won't see 7 until they close and reopen the program. User A sees 7 without any issue.
Any idea what could be causing this problem? It has to be something related to the DB engine for SQLite as I'm 100% sure my auto refresh is working properly. I suspect it has something to do with the write-ahead-logging feature or the connection pooling (which I have enabled). I disabled both to see what would happen, and the same issue occurs.
It could be a file lock issue - the first application may be taking an exclusive write lock, blocking out the other application instances. If this is true, then SQLite may simply be waiting until the lock is released before updating the data file, which is not ideal behaviour, but then again using SQLite for multi-user applications is also not ideal.
I have found hints that SHARED locks can (or should in the most recent version) be used. This may be a simple solution, but documentation of this is not easy to find (probably because this bends the specification of SQLite too far?)
Despite this, however, it may be better to serialize file access yourself; exactly how you should best approach such a feature depends on your precise system architecture.
Your system architecture is not clear from your description. You speak of "multiple users/computers" accessing the SQLite file.
If the multiple-computers requirement is implemented using a network share of the SQLite file, then this is indeed going to be a growing problem. A better architecture or another RDBMS would be advisable.
If multiple computers are accessing the data through a server process (or multiple server processes on the same machine?), then a simple Monitor lock (the lock keyword) or a ReaderWriterLock will help (in the case of multiple server processes, an OS mutex would be required).
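For the single-server-process case, a minimal sketch of what I mean, assuming the System.Data.SQLite provider (the connection string, table, and class names are made up):
using System.Data.SQLite;
public class PeopleRepository
{
    // One lock shared by all requests, so only one thread touches the file at a time.
    private static readonly object DbLock = new object();

    public void InsertPerson(string name)
    {
        lock (DbLock)
        {
            using (var conn = new SQLiteConnection("Data Source=app.db"))
            using (var cmd = new SQLiteCommand("INSERT INTO People(Name) VALUES (@name)", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }
}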
Update
Given your original question, the above still holds true. However, given your situation and looking at your root problem (no access to your business's RDBMS), I have some other suggestions:
MariaDB / MySQL / PostgreSQL on your PC - of course this would require your PC to be kept on.
Some sort of database and/or service layer hosted in a datacentre (there are many options here, such as a VPS, Azure DB, shared hosting DB, etc., of course all incurring a cost [perhaps there are some small free ones out there]).
SQLite across network file systems is not a great solution. You'll find that the FAQ and Appropriate Uses pages gently steer you away from using SQLite as a concurrently accessed database across an NFS.
While in theory it could work, the implementation and latency of network file systems dramatically increase the chance of locking conflicts occurring during write actions.
I should point out that reading the database creates a read-only lock, which is fine for concurrent access.
Need some help from Oracle app developers out there:
I have a C# .NET 4.0 application which updates and inserts into a table using the DDTek.Oracle library. My app runs every day for about 12 hours, and this exception has occurred exactly twice, 15 days apart, and never before. On those days, it had been running fine for hours (it did both inserts and updates during this period), and then this exception came. I have read that this exception could be from a bad connection string, but as I said before, the app has been running fine for a while. Could this be a DB or network issue, or could it be something else?
System.InvalidOperationException: Connection must be valid and open
at DDTek.Oracle.OracleConnection.get_Session()
at DDTek.Oracle.OracleConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at DDTek.Oracle.OracleConnection.BeginTransaction()
FYI (if this could be the cause), I have two connections on two threads. Each thread updates a different table.
PS: If anyone knows good documentation for DDTek, please reply with a link.
From what you describe I can only speculate - there are several possibilities:
most providers offer built-in pooling; sometimes a connection in the pool becomes invalid and you get some strange behaviour (see the sketch after this list)
sometimes the network settings/firewalls/IDS... limit how long a TCP connection can stay open
sometimes a subtle bug (accessing same connection from 2 different threads) leads to strange problems
sometimes the DB server (or a DB firewall) limits how long a session can stay connected
sometimes memory problems cause such behaviour, almost every Oracle provider uses OCI under the hood which requires using unmanaged heap etc.
I had one provider leaking unmanaged memory (diagnosed via a memory profiler and fixed by the vendor rather fast)
sometimes when connected to a RAC one listener/node goes down and/or some failover happens leaving current connections invalid
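If the pooling possibility from the first point is the culprit, a defensive workaround is to check the connection state and retry once on a fresh connection before starting the transaction. This is only a sketch against the generic ADO.NET interfaces (I don't have the DDTek-specific API at hand), so treat the details as assumptions:
using System;
using System.Data;
// Make sure the connection is really open before starting a transaction,
// and retry once on a brand-new connection if the first attempt fails.
// connectionFactory would be something like () => new OracleConnection(connStr).
static IDbTransaction BeginTransactionSafely(Func<IDbConnection> connectionFactory)
{
    for (int attempt = 0; ; attempt++)
    {
        IDbConnection conn = connectionFactory();
        try
        {
            if (conn.State != ConnectionState.Open)
                conn.Open();
            return conn.BeginTransaction();
        }
        catch (InvalidOperationException)
        {
            conn.Dispose();           // a pooled connection may have gone stale
            if (attempt >= 1) throw;  // give up after the second attempt
        }
    }
}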
As for a link to comprehensive documentation of DDTek.Oracle see here and here.
I have a Silverlight 4 application that works a lot with a WCF service. The application has normally run fine, with fast response times for even some hefty queries. Recently, however, it's gotten quite slow, and I'm having a hard time troubleshooting why.
My database is hosted on a remote server. The application is hosted on the same server. Here's what I've noted:
When I run the application locally, using the ASP.NET development server instead of IIS, and hit the website via localhost (which hits the remote database), speeds are fast.
When I run the application locally, but use the remote WCF service rather than the local service, things are slow.
When I run the application over the web, (i.e. the remote application which is, again, on the same server as the database, so they're local to one another) the application is slow. This is pretty much what the production environment is...
When I log on to the server and hit the website from within the server, things are fast.
The queries to the database are fast. Manually running the queries on the database themselves, yields the results in a split second.
Using the WCFTestClient and hitting the remote WCF service is also really fast, and has virtually immediate turn around.
Lastly, when I'm using the expected setup of my local machine hitting the website over the web, which hits the database, etc:
Not all queries react the same way. Some of the heavier queries which result in large data sets actually have a quick response time. Some of the light queries - straight SELECT statements with no JOINs, generating only a kilobyte of data - take a lot longer, about 30 seconds. There are a few queries that are sometimes fast, sometimes slow, but the ones that are always slow are the worst.
About the server:
The server is a dedicated server; I've monitored the CPU and it's not being taxed by anything. I'm hosting with IIS 7, on Windows Server 2008, and SQL Server 2008. The only things that have changed in the past few weeks are some Windows updates, and I've been told by one person that they made some firewall changes. That's my current theory on the cause, but I don't know what else to try at this point, or how to show that it is the firewall.
Any thoughts?
It's hard to tell the reason from what you've described; I think you should start by profiling your application, logging the database time, WCF request processing time, etc.
Once you get the data, you can find the real reason. This is what we have been doing on our products.
If I had to guess, you're experiencing a combination of network latency and a less-than-optimal database design. Your description of "small" queries taking longer than queries yielding large result sets is a classic indicator that you need to evaluate your query plans, and ensure that they are using the right indexes (you are using indexes, right?).
I suspect that sorting out your database issues will solve a great deal of the slowness you're experiencing; caching query results in memcached or something like it will solve most of the rest.
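As a trivial example of the caching idea, here is a sketch using System.Runtime.Caching.MemoryCache purely as a stand-in for memcached (the method, key, and delegate are made up):
using System;
using System.Runtime.Caching;
// Cache a query result for a few minutes so repeated service calls
// don't hit the database every time.
static T GetCached<T>(string key, Func<T> runQuery) where T : class
{
    var cache = MemoryCache.Default;
    var result = cache.Get(key) as T;
    if (result == null)
    {
        result = runQuery();   // e.g. the LINQ query that loads the data
        cache.Set(key, result, DateTimeOffset.Now.AddMinutes(5));
    }
    return result;
}
// Usage (hypothetical): var customers = GetCached("customers", () => LoadCustomersFromDb());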
Generally, WCF is the last place I look for performance problems - every time I've gone that direction in the past, the trouble ended up being our code; WCF performs admirably well for its size.
I'm sorry that I can't be more specific, but performance questions are quite application-specific and we don't have much information here to go on.
Fiddler. Fiddler was the answer (as it usually turns out to be.)
If you've experienced similar issues, hopefully what I've learned can be of help.
Here's what I saw:
First, when using both the Chrome and IE profilers, it became clear that the Request itself was causing the lag, while the Response was quite quick.
This led me down two paths of possibilities: either the server was causing lag in the requests due to some specific configuration that I wouldn't see when running via localhost, or there was something wrong with the request itself.
After using Fiddler to get a full view of the request, it became apparent that it was the request I was sending. One of the objects I was passing as a parameter to my WCF service had a property that, when serialized, amounted to about 1 megabyte's worth of data - and that was with gzip enabled. Initially this object was a rather small object, but as the application grew, so did this particular object, resulting in the sudden slow down.
The reason why it happened for certain calls and not others was purely determined by whichever call had this object as a parameter.
The reason why it happens when going over the web, vs. going through the localhost is that over the web, you inevitably face your provider's Upload limit, as well as a number of hops until you hit your server, vs. the direct connection from your localhost to your database.
The lesson: Always transmit the least amount of information you can get away with.
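In my case that meant replacing the heavyweight parameter with a slim DTO carrying only the fields the service call actually needs, something like this (the class and its members are simplified stand-ins for my real types):
using System.Runtime.Serialization;
// Sent to the WCF service instead of the full domain object, so the
// huge serialized property never goes over the wire.
[DataContract]
public class OrderSummary
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string CustomerName { get; set; }
    // ...only the handful of fields the call actually uses...
}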