I get this error:
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
When running this code:
namespace ProjectInterface
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new Form1()); // <- unhandled exception here!
        }
    }
}
It happens when I search for a subject name from the database.
Here is my textbox code:
private void txtSubject_TextChanged(object sender, EventArgs e)
{
    con.Open();
    MySqlCommand cmd = new MySqlCommand("SELECT SubjectName FROM databse.subject WHERE SubjectName LIKE @name", con);
    cmd.Parameters.Add(new MySqlParameter("@name", "%" + txtSubject.Text + "%"));
    MySqlDataReader dr = cmd.ExecuteReader();
    AutoCompleteStringCollection subjectColl = new AutoCompleteStringCollection();
    while (dr.Read())
    {
        subjectColl.Add(dr.GetString(0));
    }
    txtSubject.AutoCompleteCustomSource = subjectColl;
    con.Close();
}
Sometimes it runs OK, but it often shows this error. How do I solve it?
Issues with the code
The only issue I can identify in the code as you have provided it (some information is lacking) is that every time you type a character, you run a database query, and you do so on the UI thread. If the query took two seconds to complete, your typing would stall for two seconds per character. That is a very bad experience for the user.
For the MySQL statement itself, there is nothing inherently wrong with it, although I do believe the MySQL side is the cause of your issue.
Fixing the UI thread
Firstly, to understand why you have the thread issue, you should first understand threads. I've made videos on the topic here: https://www.youtube.com/watch?v=XXg9g56FS0k and here: https://www.youtube.com/watch?v=S0jPzb9kk3o
Once you understand the issue, you should be able to safely drop the call onto another thread. However, you will then have another issue: if each call takes two seconds and the user types 10 characters in that time, you will get 10 database calls running at once. As I am sure that is not the issue in this question, I do not want to muddy the waters with a long post on solving an unrelated problem.
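To illustrate, here is a minimal sketch of the handler doing its lookup off the UI thread, assuming .NET 4.5+ with async/await and the MySql.Data package; connectionString and LoadSuggestions are illustrative names, not from the question:
// requires: using System.Collections.Generic; using System.Threading;
//           using System.Threading.Tasks; using MySql.Data.MySqlClient;
private CancellationTokenSource cts;

private async void txtSubject_TextChanged(object sender, EventArgs e)
{
    if (cts != null) cts.Cancel();   // mark any lookup still in flight as stale
    cts = new CancellationTokenSource();
    CancellationToken token = cts.Token;
    string text = txtSubject.Text;

    List<string> names = await Task.Run(() => LoadSuggestions(text));
    if (token.IsCancellationRequested) return;  // a newer keystroke superseded us

    AutoCompleteStringCollection coll = new AutoCompleteStringCollection();
    coll.AddRange(names.ToArray());
    txtSubject.AutoCompleteCustomSource = coll;
}

private List<string> LoadSuggestions(string text)
{
    List<string> results = new List<string>();
    using (MySqlConnection con = new MySqlConnection(connectionString)) // fresh connection per call
    using (MySqlCommand cmd = new MySqlCommand(
        "SELECT SubjectName FROM databse.subject WHERE SubjectName LIKE @name", con))
    {
        cmd.Parameters.AddWithValue("@name", "%" + text + "%");
        con.Open();
        using (MySqlDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read())
                results.Add(dr.GetString(0));
        }
    }
    return results;
}
Cancelling the token on each keystroke also means a stale result never overwrites a newer one, which loosely mitigates the pile-up problem mentioned above.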
The underlying problem and solution
So from the amount of code and detail given, I can pretty safely say (99.95%) that the memory exception must come from the MySQL calls. As you do not say which library/DLL/reference you are using for MySQL, I can only presume you may be using an old or buggy version.
An example of a MySQL driver causing this same issue is here: Asp.net Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
What I would recommend you use, if you are not already, and which will hopefully fix your issue, is this maintained library for working with MySQL: https://www.nuget.org/packages/Mysql.Data/
If you still get issues while using that library, I would start to suspect the database itself, or that the network is intermittent and causing the error.
One important note: you should enable breaking on all exceptions in Visual Studio, so press Ctrl + Alt + E and check every box. Then, when you get this error, the debugger should take you directly to the exact line of code that fails, and we can investigate further.
Related
I'm using the .NET Connector to access a MySQL database from my C# program. All my queries are done with MySqlCommand.BeginExecuteReader, with the IAsyncResults held in a list so I can check them periodically and invoke appropriate callbacks whenever they finish, fetching the data via MySqlCommand.EndExecuteReader. I am careful never to hold one of these readers open while attempting to read results from something else.
This mostly works fine. But I find that if I start two queries at the same time, then I get the dreaded MySqlException: There is already an open DataReader associated with this Connection which must be closed first exception in EndExecuteReader. And this is happening the first time I invoke EndExecuteReader. So the error message is full of baloney; there is no other open DataReader at that point, unless the connector has somehow opened one behind the scenes without me calling EndExecuteReader. So what's going on?
Here's my update loop, including copious logging:
for (int i = queries.Count - 1; i >= 0; i--) {
    Debug.Log("Checking query: " + queries[i].command.CommandText);
    if (!queries[i].operation.IsCompleted) continue;
    var q = queries[i];
    queries.RemoveAt(i);
    Debug.Log("Finished, opening Reader for " + q.command.CommandText);
    using (var reader = q.command.EndExecuteReader(q.operation)) {
        try {
            q.callback(reader, null);
        } catch (System.Exception ex) {
            Logging.LogError("Exception while processing: " + q.command.CommandText);
            Logging.LogError(ex.ToString());
            q.callback(null, ex.ToString());
        }
    }
    Debug.Log("And done with callback for: " + q.command.CommandText);
}
And here's the log (screenshot omitted):
As you can see, I start both queries in rapid succession. (This is the first thing my program does after opening the DB connection, just to pin down what's happening.) Then the first one I check says it's done, so I call EndExecuteReader on it, and boom -- already it claims there's another open one. This happens immediately, before it even gets to my callback method. How can that be?
Is it not valid to have two open queries at once, even if I only call EndExecuteReader on one at a time?
When you run two queries concurrently, you must have two Connection objects. Why? Each Connection can only handle one query at a time. It looks like your code got into some kind of race condition where some of your concurrent queries worked and then a pair of them collided and failed.
At any rate your system will be more resilient in production if you can keep your startup sequences simple. If I were you I'd run one query after another rather than trying to run them all at once. (Obvs if that causes real performance problems you'll have to run them concurrently. But keep it simple until you need it to be complex.)
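To make that concrete, here is a minimal sketch of the one-connection-per-query rule against the same Connector/NET async API; the PendingQuery wrapper is an illustrative name, not the poster's type:
// requires: using System; using MySql.Data.MySqlClient;
class PendingQuery
{
    public MySqlCommand Command;
    public IAsyncResult Operation;

    public static PendingQuery Start(string connStr, string sql)
    {
        var conn = new MySqlConnection(connStr); // a private connection for this query
        conn.Open();
        var cmd = new MySqlCommand(sql, conn);
        return new PendingQuery { Command = cmd, Operation = cmd.BeginExecuteReader() };
    }

    public void Finish(Action<MySqlDataReader> callback)
    {
        using (var reader = Command.EndExecuteReader(Operation))
        {
            callback(reader);
        }
        Command.Connection.Close(); // returns the underlying connection to the pool
    }
}
Because connections are pooled by default, opening one per in-flight query is cheaper than it looks; the pool hands back an existing physical connection when one is free.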
I've been looking at this problem for the last two days and I'm all out of ideas, so I'm hoping someone might have some insight into what exactly is going on.
In brief, the program loads a CSV file (with 240K lines, so not a ton) into a DataGridView, does a bunch of different operations on the data, and then dumps it into an Excel file for easy viewing and graphing and whatnot. However, as I iterate over the data in the DataGridView, RAM usage constantly increases. I trimmed the code down to its simplest form and it still happened, and I don't quite understand why.
try
{
    string conStr = @"Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=C:\Downloads;Extensions=csv,txt";
    OdbcConnection conn = new OdbcConnection(conStr);
    //OdbcDataAdapter da = new OdbcDataAdapter("Select * from bragg2.csv", conn);
    OdbcDataAdapter da = new OdbcDataAdapter("Select * from bragg4.csv", conn);
    DataTable dt = new DataTable("DudeYa");
    da.Fill(dt);
    dgvCsvFile.DataSource = dt;
    da.Dispose();
    conn.Close();
    conn.Dispose();
}
catch (Exception e) { }

string sDeviceCon;
for (int i = 0; i < dgvCsvFile.Rows.Count; i++)
{
    if (dgvCsvFile.Rows[i].Cells[48].Value != DBNull.Value)
        sDeviceCon = (string)dgvCsvFile.Rows[i].Cells[48].Value;
    else
        sDeviceCon = "";

    if ((sDeviceCon == null) || (sDeviceCon == ""))
        continue;
}
When I enter that for loop, the program is using 230 megs. When I finish it, I'm using 3 gigs. At this point I'm not even doing anything with the data, I'm literally just looking at it and then moving on.
Furthermore, if I add these lines after it:
dgvCsvFile.DataSource = null;
dgvCsvFile.Rows.Clear();
dgvCsvFile.Columns.Clear();
dgvCsvFile.Dispose();
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
The RAM usage stays the same. I can't figure out how to actually get the memory back without closing the program.
I've downloaded various memory profiling tools to try to figure out exactly what's going on:
Using VMMap from Sysinternals to take periodic snapshots, I could see garbage collector heap segments filling up. Each later snapshot I took usually had more garbage collector segments, with all the previous ones using ~260MB of RAM.
Using .NET Memory Profiler, I was able to get a bit more detailed information. According to that, I have approximately 4 million instances of PropertyStore, DataGridViewTextBoxCell and PropertyStore.IntegerEntry[]. All of them appear to have one reference and are sitting uncollected on the garbage collector's heap.
I ran the program in x86 mode and it caused an out-of-memory exception (not terribly unexpected). It throws the error inside DataGridViewTextBoxCell.Clone(), which is in turn inside DataGridViewRow.CloneCells() and DataGridViewRow.Clone(). So obviously it's doing a whole bunch of cloning to access the cell data... but why is it never cleaned up? What could possibly be holding onto references to them?
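(For comparison, here is a sketch of the same scan reading the bound DataTable directly, which never touches the DataGridView cell path; dt is the table bound to the grid above:)
// Sketch: read the bound DataTable instead of the grid's cells,
// sidestepping whatever cloning DataGridView does on cell access.
string sDeviceCon;
for (int i = 0; i < dt.Rows.Count; i++)
{
    object val = dt.Rows[i][48];          // column 48, as in the grid loop
    sDeviceCon = (val == DBNull.Value) ? "" : (string)val;
    if (sDeviceCon == "")
        continue;
}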
I also tried getting the data into the DataSource a different way, using the CachedCsvReader from LumenWorks.Framework.IO, with no change in behavior.
Is there something that I'm missing? Is there a setting I should be changing somewhere? Why aren't any of these things being disposed of?
Any help would be greatly appreciated, thanks!
I am getting the error
80004005 There is a file sharing violation. A different process might be using the file.
when trying to open a SqlCeConnection.
Is there a way to close a SQL Server CE database programmatically, to try to nip that problem in the bud? Something like (pseudocode):
SqlCeDatabase SQLCeDb = "\My Documents\HHSDB003.sdf";
if (SQLCeDb.IsOpen)
{
    SQLCeDb.Close();
}
?
Or a way to set the connection so that it doesn't care if the database is open elsewhere, such as:
SqlCeConnection conn = new SqlCeConnection(@"Data Source=\My Documents\HHSDB003.sdf;File Mode = 'shared read'");
...or:
SqlCeConnection conn = new SqlCeConnection(@"Data Source=\My Documents\HHSDB003.sdf;File Mode = 'read write'");
I can't test these at present, because I'm back to getting
Cannot copy HHS.exe The device has either stopped responding or has been disconnected
when I attempt to copy over a new version of the .exe to the handheld.
If there's something more frustrating to program against (and "against" is the correct word here, I think) than the prehistoric versions of Windows CE / Compact Framework / .NET, I'm not at all sure I want to know what it is.
UPDATE
Adding to my frustrusion (haywire combination of confusion and frustration), I found the following at http://www.pocketpcfaq.com/faqs/activesync/exchange_errors.php:
0x80004005 N/A Synchronization failed due to a device software error. Contact your network administrator.
1. Obtain the latest Pocket PC End User Update from your service provider.
UPDATE 2
Is this possibly problematic (in that all but the first setting is blank)?
UPDATE 3
With this code:
private void menuItemTestSendingXML_Click(object sender, System.EventArgs e)
{
    string connStr = "Data Source=My Documents\\HHSDB003.SDF";
    SqlCeConnection conn = null;
    try
    {
        try
        {
            conn = new SqlCeConnection(connStr);
            conn.Open();
            MessageBox.Show("it must have opened okay");
        }
        finally
        {
            conn.Close();
        }
    }
    catch (Exception ex)
    {
        if (null == ex.InnerException)
        {
            MessageBox.Show("inner Ex is null");
        }
        MessageBox.Show(String.Format("msg is {0}", ex.Message));
    }
}
...I am now seeing "it must have opened okay". That's a good thing, but why it's now working I have no idea, because the code has not changed since I last ran it and it failed. Something beyond the code must have been at play.
The only thing I can think of that MAY have had a bearing on this change is that, thinking there might be a rogue instance of either the .exe or its ancillary DLL in memory on the handheld device, I wrote a quick-and-dirty utility that looped through the running processes looking for them and, on finding them, killing them. But they were not there, so the utility really did "nothing" (maybe the Hawthorne effect?).
That is the way working with this combination of tools and technologies seems to go, though: everything is working fine one minute and the next, BAM! It no longer is. Then the reverse can also happen: for no apparent reason it seems to "heal itself".
In the interests of "full disclosure," here is the utility code:
// Got this from http://www.codeproject.com/Articles/36841/Compact-Framework-Process-class-that-supports-full
private void btnKillRogue_Click(object sender, EventArgs e)
{
    ProcessInfo[] list = ProcessCE.GetProcesses();
    foreach (ProcessInfo item in list)
    {
        MessageBox.Show("Process item: " + item.FullPath);
        if (item.FullPath == @"\Windows\iexplore.exe") item.Kill(); // <= this was the example search; it probably could be a problem, so I'll use it, too
        if (item.FullPath.EndsWith("HHS.exe"))
        {
            MessageBox.Show("about to kill hhs.exe");
            item.Kill();
        }
        if (item.FullPath.EndsWith("HUtilCE.dll"))
        {
            MessageBox.Show("about to kill hutilce.dll");
            item.Kill();
        }
    }
}
Maybe there was an instance of iexplore.exe resident in memory that was problematic (I'm not showing a messagebox if that is what is found)...?
As an attempt to claim unused bounty ... do not, however, feel obligated to pass around free points on my behalf ...
Aside from the forced killing of possible tasks, had you rebooted the system amidst your search for an answer? If your tool did not return the message, it is certainly possible that a reboot did the very thing you attempted with the kill utility; or possibly iexplore.exe had something to do with it ... the lack of the additional message box may leave you never knowing, unless this issue occurs again.
If no rebooting occurred, then perhaps whatever program/DLL was held in memory by some other process concluded its task and released its hold.
There are several scenarios that might have occurred; it is certainly hard to determine with absolute certainty, hence the lack of answers. I would be interested, though, if this problem occurred again.
I'm hosting the WCF service as a managed Windows service, and I keep getting an AccessViolationException when the consumer/client invokes its method for a second, third or fourth time. The crashes are completely random, so sometimes it might not crash until several more invocations later.
Here's the code with syntax highlighting for easier reading: http://pastebin.com/Z3Z06944
See the comments around the private method "CheckUser", since that's where the exception might be occurring.
I had a look at the code you posted, and I don't see what this has to do with WCF. You say that when you comment out the code invoking FirebirdSql (FbCommand?), the AV goes away. Clearly the problem is with FirebirdSql. Try updating to the latest version, or send a crash report to the developers. An AV (access violation) typically indicates a problem in the P/Invoke unmanaged-code interop layer. It sounds like some kind of multithreading problem that a WCF scenario would bring out.
(Update: edited the OP's question title to include the FbSql reference.)
In your code you are not explicitly closing the connection.
Since you are using the using statement it will get closed, but there may be a delay.
If there is a maximum number of connections and the requests are coming in quickly, you could get an exception when that maximum is reached.
This would explain the random nature of the errors.
Edit
Your code is vulnerable to a SQL injection attack; you should fix that.
Your problem could also be a locking error. Do you have an index on user and password? If not, every lookup does a table scan, which locks the table.
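If the index is missing, a one-time piece of DDL would add it. A sketch, where the index name is made up and the table and column names are taken from the code below:
// requires: using System.Configuration; using FirebirdSql.Data.FirebirdClient;
using (FbConnection con = new FbConnection(ConfigurationManager.ConnectionStrings["DBi"].ConnectionString))
using (FbCommand cmd = new FbCommand("CREATE INDEX IX_USERS_NAME_PASS ON users (name, pass)", con))
{
    con.Open();
    cmd.ExecuteNonQuery(); // one-time: index users(name, pass) so lookups don't table-scan
}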
I'm thinking there are better Role/Membership provider systems available, but based on your code, you could improve it with try/finally constructs via the using statement:
public Boolean AddUser(string user, string pass)
{
    using (FbConnection con = new FbConnection(ConfigurationManager.ConnectionStrings["DBi"].ConnectionString))
    {
        // Parameterized per the injection warning above, rather than concatenating user input.
        using (FbCommand fbComm = new FbCommand("INSERT INTO users (name, pass) VALUES (@name, @pass)", con))
        {
            fbComm.Parameters.AddWithValue("@name", user);
            fbComm.Parameters.AddWithValue("@pass", pass);
            fbComm.Connection.Open();
            if (CheckUser(user, pass, con) == 0)
            {
                fbComm.ExecuteNonQuery();
                return true;
            }
        } // the using blocks close the command and connection for us
    }
    return false;
}
Have a great day!
I'm performing a large number of INSERTS to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then when I deem appropriate, I loop over all of that data and perform the INSERTS. The code for this is shown below:
public void Commit()
{
    using (SQLiteConnection conn = new SQLiteConnection(this.connString))
    {
        conn.Open();
        using (SQLiteTransaction trans = conn.BeginTransaction())
        {
            using (SQLiteCommand command = conn.CreateCommand())
            {
                command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
                command.Parameters.Add(this.col1Param);
                command.Parameters.Add(this.col2Param);
                foreach (Data o in this.dataTemp)
                {
                    this.col1Param.Value = o.Col1Prop;
                    this.col2Param.Value = o.Col2Prop;
                    command.ExecuteNonQuery();
                }
            }
            this.TryHandleCommit(trans);
        }
        conn.Close();
    }
}
I now employ the following gimmick to get the thing to eventually work:
private void TryHandleCommit(SQLiteTransaction trans)
{
    try
    {
        trans.Commit();
    }
    catch (Exception e)
    {
        Console.WriteLine("Trying again...");
        this.TryHandleCommit(trans);
    }
}
I create my DB like so:
public DataBase(String path)
{
    // build connection string
    SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder();
    connString.DataSource = path;
    connString.Version = 3;
    connString.DefaultTimeout = 5;
    connString.JournalMode = SQLiteJournalModeEnum.Persist;
    connString.UseUTF16Encoding = true;
    using (connection = new SQLiteConnection(connString.ToString()))
    {
        // check for existence of db
        FileInfo f = new FileInfo(path);
        if (!f.Exists) // build new blank db
        {
            SQLiteConnection.CreateFile(path);
            connection.Open();
            using (SQLiteTransaction trans = connection.BeginTransaction())
            {
                using (SQLiteCommand command = connection.CreateCommand())
                {
                    command.CommandText = DataBase.CREATE_MATCHES;
                    command.ExecuteNonQuery();
                    command.CommandText = DataBase.CREATE_STRING_DATA;
                    command.ExecuteNonQuery();
                    // TODO add logging
                }
                trans.Commit();
            }
            connection.Close();
        }
    }
}
I then export the connection string and use it to obtain new connections in different parts of the program.
At seemingly random intervals, though at far too great a rate to ignore or otherwise work around, I get an unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur before then. This does not always happen; sometimes the whole thing runs without a hitch.
No reads are being performed on these files before the commits finish.
I have the very latest SQLite binary.
I'm compiling for .NET 2.0.
I'm using VS 2008.
The db is a local file.
All of this activity is encapsulated within one thread / process.
Virus protection is off (though I think that was only relevant if you were connecting over a network?).
As per Scotsman's post I have implemented the following changes:
Journal Mode set to Persist
DB files stored in C:\Docs + Settings\ApplicationData via System.Windows.Forms.Application.AppData windows call
No inner exception
Witnessed on two distinct machines (albeit very similar hardware and software)
Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code...
Does anyone have any idea what's going on here?
I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question!
brian
UPDATES:
Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer...however...
The code above technically works, but it is non-deterministic! It is not guaranteed to do anything aside from spinning in neutral forever. In practice it seems to work somewhere between the 1st and 10th iteration. If I batch my commits at a reasonable interval, the damage will be mitigated, but I really do not want to leave things in this state...
More suggestions welcome!
It looks like you failed to link the command with the transaction you've created.
Instead of:
using (SQLiteCommand command = conn.CreateCommand())
You should use:
using (SQLiteCommand command = new SQLiteCommand("<INSERT statement here>", conn, trans))
Or you can set its Transaction property after its construction.
While we are at it - your handling of failures is incorrect:
The command's ExecuteNonQuery method can also fail and you are not really protected. You should change the code to something like:
public void Commit()
{
    using (SQLiteConnection conn = new SQLiteConnection(this.connString))
    {
        conn.Open();
        SQLiteTransaction trans = conn.BeginTransaction();
        try
        {
            using (SQLiteCommand command = conn.CreateCommand())
            {
                command.Transaction = trans; // Now the command is linked to the transaction and won't try to create a new one (which is probably why your database gets locked)
                command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
                command.Parameters.Add(this.col1Param);
                command.Parameters.Add(this.col2Param);
                foreach (Data o in this.dataTemp)
                {
                    this.col1Param.Value = o.Col1Prop;
                    this.col2Param.Value = o.Col2Prop;
                    command.ExecuteNonQuery();
                }
            }
            trans.Commit();
        }
        catch (SQLiteException)
        {
            // You need to roll back in case something went wrong in command.ExecuteNonQuery() ...
            trans.Rollback();
            throw;
        }
    }
}
Another thing: you don't need to cache anything in memory. You can depend on SQLite's journaling mechanism to store incomplete transaction state.
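For example, something along these lines; a sketch that assumes rows can be written as they arrive, reusing the connString and Data names from the question:
// requires: using System; using System.Data.SQLite;
public class StreamingWriter : IDisposable
{
    private readonly SQLiteConnection conn;
    private SQLiteTransaction trans;

    public StreamingWriter(string connString)
    {
        conn = new SQLiteConnection(connString);
        conn.Open();
        trans = conn.BeginTransaction(); // journal holds the incomplete state
    }

    public void Write(Data o) // call as each row arrives; no in-memory cache
    {
        using (SQLiteCommand cmd = conn.CreateCommand())
        {
            cmd.Transaction = trans;
            cmd.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)";
            cmd.Parameters.Add(new SQLiteParameter { Value = o.Col1Prop });
            cmd.Parameters.Add(new SQLiteParameter { Value = o.Col2Prop });
            cmd.ExecuteNonQuery();
        }
    }

    public void Commit() // call at each batch boundary
    {
        trans.Commit();
        trans = conn.BeginTransaction();
    }

    public void Dispose()
    {
        trans.Dispose();
        conn.Dispose();
    }
}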
Run Sysinternals Process Monitor and filter on the filename while running your program, to rule out any other process doing something to it and to see exactly what your program is doing to the file. Long shot, but it might give a clue.
We had a very similar problem using nested transactions with the TransactionScope class. We thought all database actions occurred on the same thread... however, we were caught out by the transaction mechanism, more specifically the ambient transaction.
Basically, there was a transaction higher up the chain which, by the magic of ADO, the connection automatically enlisted in. The result was that, even though we thought we were writing to the database on a single thread, the write didn't really happen until the topmost transaction was committed. At this 'indeterminate' point the database was written to, causing it to be locked outside of our control.
The solution was to ensure that the SQLite database did not directly take part in the ambient transaction, by ensuring we used something like:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    ...
    scope.Complete();
}
Things to watch for:
don't use connections across multiple threads/processes.
I've seen it happen when a virus scanner would detect changes to the file and try to scan it. It would lock the file for a short interval and cause havoc.
I started facing this same problem today. I'm studying ASP.NET MVC, building my first application completely from scratch. Sometimes, when I'd write to the database, I'd get the same exception, saying the database file was locked.
I found it really strange, since I was completely sure that there was just one connection open at that time (based on process explorer's listing of active file handles).
I've also built the whole data access layer from scratch, using System.Data.SQLite .Net provider, and, when I planned it, I took special care with connections and transactions, in order to ensure no connection or transaction was left hanging around.
The tricky part was that setting a breakpoint on ExecuteNonQuery() command and running the application in debug mode would make the error disappear!
Googling, I found something interesting on this site: http://www.softperfect.com/board/read.php?8,5775. There, someone replied to the thread suggesting that the author put the database path on the anti-virus ignore list.
I added the database file to the ignore list of my anti-virus (Microsoft Security Essentials) and it solved my problem. No more database locked errors!
Is your database file on the same machine as the app or is it stored on a server?
You should create a new connection in every thread. I would simplify connection creation and use the same form everywhere: connection = new SQLiteConnection(connString.ToString());
and use a database file on the same machine as the app and test again.
Why the two different ways of creating a connection?
These guys were having similar problems (mostly, it appears, with the journaling file being locked, maybe TortoiseSVN interactions ... check the referenced articles).
They came up with a set of recommendations (correct directories, changing journaling types from delete to persist, etc). http://sqlite.phxsoftware.com/forums/p/689/5445.aspx#5445
The journal mode options are discussed here: http://www.sqlite.org/pragma.html . You could try TRUNCATE.
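Switching the mode is a one-line PRAGMA; a sketch using System.Data.SQLite, assuming the question's connection string:
// requires: using System.Data.SQLite;
using (SQLiteConnection conn = new SQLiteConnection(connString))
using (SQLiteCommand cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = "PRAGMA journal_mode=TRUNCATE;";
    object newMode = cmd.ExecuteScalar(); // the PRAGMA reports the mode now in effect
}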
Is there a stack trace from inside SQLite when the exception occurs?
You indicate you "batch my commits at a reasonable interval". What is the interval?
I would always use a Connection, Transaction and Command in a using clause. In your first code listing you did, but in your third (creating the tables) you didn't. I suggest you do that too, because (who knows?) maybe the commands that create the tables somehow continue to lock the file. A long shot... but worth a shot?
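Concretely, the table-creation block could look like this; the same statements as in the question, just with the transaction and command also in using blocks:
// requires: using System.Data.SQLite;
using (SQLiteConnection connection = new SQLiteConnection(connString.ToString()))
{
    connection.Open();
    using (SQLiteTransaction trans = connection.BeginTransaction())
    using (SQLiteCommand command = connection.CreateCommand())
    {
        command.Transaction = trans; // link the command to the transaction explicitly
        command.CommandText = DataBase.CREATE_MATCHES;
        command.ExecuteNonQuery();
        command.CommandText = DataBase.CREATE_STRING_DATA;
        command.ExecuteNonQuery();
        trans.Commit();
    }
}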
Do you have Google Desktop Search (or another file indexer) running? As previously mentioned, Sysinternals Process Monitor can help you track it down.
Also, what is the filename of the database? From PerformanceTuningWindows:
Be VERY, VERY careful what you name your database, especially the extension
For example, if you give all your databases the extension .sdb (SQLite Database, nice name, hey? I thought so when I chose it, anyway...) you discover that the SDB extension is already associated with APPFIX PACKAGES.
Now, here is the cute part, APPFIX is an executable/package that Windows XP recognizes, and it will, (emphasis mine) ADD THE DATABASE TO THE SYSTEM RESTORE FUNCTIONALITY
This means, stay with me here, every time you write ANYTHING to the database, the Windows XP system thinks a bloody executable has changed and copies your ENTIRE 800 meg database to the system restore directory....
I recommend something like DB or DAT.
While the lock is reported on the COMMIT, the lock is on the INSERT/UPDATE command. Check for record locks not being released earlier in your code.