G'Day everybody.
I need help finalizing the answer to a similar question. Unfortunately, since I do not have enough reputation points, I cannot ask the people involved for the final solution. The original question is here: "[Passing SQL Server exception to .net][1]"
So,
PROBLEM:
A TRY/CATCH statement in SQL Server 2012 swallows the exception that I want to pass or re-throw back to EF 5.0.
As long as the exception is raised outside a TRY/CATCH block, the .NET code receives it without problems.
Attempts to solve:
1. On the SQL Server side: we have tried THROW and RAISERROR, raising the exception both inside and outside the TRY/CATCH block, and recording the original error and re-throwing it.
2. On the .NET side: we tried the planned overloaded "ExecuteSprocAccessor" and also changed back to basic code:
"...
SqlDataAdapter dataadap = new SqlDataAdapter(command);
dataadap.Fill(dt);
"
Nothing worked
3. Another option (a guess) is that some SQL Server setting might be blocking it, but we do not know where to look yet.
Solution:
We are looking for help and advice on how the TRY/CATCH manages to suppress the exception.
Thanks!
We have spent another day trying to get to the bottom of it and, by accident, found a temporary solution.
Since there has been no response yet, I am posting this as the answer, though it is more of a workaround.
Using "db.ExecuteDataSet(dbc)" worked fine and gave us proper handling of the exceptions passed from the SQL Server 2012 CATCH block. To us this shows there are tricks or problems we do not yet understand in EL 6.0 and/or the new version of ADO.NET.
using (DbCommand dbc = db.GetStoredProcCommand("[Personnel].[uspWebLogin]"))
{
    db.AddInParameter(dbc, "UserLogin", DbType.String, user.UserLogin);
    db.AddInParameter(dbc, "UserPassword", DbType.String, user.Password);
    DataSet ds = db.ExecuteDataSet(dbc);
    DataTableReader dtr = ds.CreateDataReader();
    // string count is just to check that the core query actually worked when there is no exception
    string count = dtr.FieldCount.ToString();
    ......
    return new WebUser();
}
The remaining problem is how best to map the returned set onto our objects without writing heaps of code, which is what "ExecuteSprocAccessor" and "IRowMapper" used to do for us.
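In the meantime, a minimal hand-mapping sketch of the direction we are considering (assuming WebUser exposes settable UserID and UserLogin properties and that the first result table has columns with those names; both are guesses, not the real schema):

// requires System.Data, System.Data.DataSetExtensions (for row.Field<T>()) and System.Collections.Generic
DataSet ds = db.ExecuteDataSet(dbc);
var users = new List<WebUser>();
foreach (DataRow row in ds.Tables[0].Rows)
{
    users.Add(new WebUser
    {
        UserID = row.Field<int>("UserID"),          // hypothetical column / property
        UserLogin = row.Field<string>("UserLogin")  // hypothetical column / property
    });
}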
Thanks for reading. Any further comments or suggestions are still welcome. Cheers.
Related
I have a problem in my program, which is supposed to store projects given by the user in a database. I'm stuck on the edit project button. After entering new values in the program and hitting the button to save them, everything runs successfully with no errors. The message box that says "project edited" appears, I hit OK, but the database stays the same. There is no error in the code, and the SQL that gets sent to update the database values is also correct, but it doesn't work. Can anyone help with this, because I am lost.
Here is the method that creates and executes the SQL code to update the database.
[screenshot of the update method]
Wow man, that code is wrong in a lot of ways according to the most popular coding standards and principles :) but that is not directly what the question is about, so to get you past being lost we have to start with the basics, to be honest:
Suggestions
When you catch that exception, if it comes, show it in a message box too; you can even add an error icon as part of the .Show call, it's built in.
Move the connection.Close into a finally block instead of duplicating it (or better, use using blocks; see the sketch after this list).
Consider making a SQL stored procedure instead and just passing the parameters into it; the code as posted is open to SQL injection.
Or skip the procedure and familiarize yourself with Entity Framework; it will make your life so much easier.
Do not concatenate the SQL like that; use string interpolation or string.Format at a minimum. Every + allocates yet another intermediate string, which is inefficient (and parameters are safer anyway).
When you write that the code works and the SQL is correct, consider that the outcome is not the desired one, so technically it doesn't ;) The best and the worst thing about computers is that they do exactly what you ask.
Don't write your DAL code in the form at all.
Consider checking your parameters for default values.
You don't have the data to say 'project was updated', only 'values were saved'; you never check the old values in the code.
Still, besides all that, I don't see why what you wrote wouldn't work, provided the resulting SQL is valid for the database you use; but I suspect that if you do some of these things the error will present itself.
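To make a few of these concrete, here is a rough sketch of how the edit method could look. The column names, the method signature and the connectionString variable are guesses (the real method is only visible in a screenshot), so treat it as a shape to copy, not a drop-in replacement:

// assumes: using MySql.Data.MySqlClient; using System.Windows.Forms;
public static bool EditProject(int projId, string name, string finishdate)
{
    const string sql =
        "UPDATE `b1c`.`projects` SET `name` = @name, `finishdate` = @finishdate " +
        "WHERE `projectid` = @id;";

    using (var connection = new MySqlConnection(connectionString))   // connectionString assumed to exist
    using (var cmd = new MySqlCommand(sql, connection))
    {
        cmd.Parameters.AddWithValue("@name", name);
        cmd.Parameters.AddWithValue("@finishdate", finishdate);
        cmd.Parameters.AddWithValue("@id", projId);

        connection.Open();
        int affected = cmd.ExecuteNonQuery();   // the using blocks close the connection even if this throws

        if (affected == 0)
            MessageBox.Show("No row was updated - check the project id.", "Edit project",
                MessageBoxButtons.OK, MessageBoxIcon.Warning);   // built-in icon, as suggested above

        return affected > 0;
    }
}

The rows-affected check is the piece of information the current "project edited" message box does not have: it tells you whether the UPDATE actually matched a row.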
I don't think it's a connection problem because I have a function that updates only the finishdate of the project and that works completely fine.
Here is the function:
public static MySqlCommand FinishProject(int projID, string finishdate) {
try {
if (connection != null) {
connection.Open();
cmd = connection.CreateCommand();
cmd.CommandType = CommandType.Text;
cmd.Parameters.AddWithValue("#value", projID);
cmd.Parameters.AddWithValue("#finishdate", finishdate);
cmd.CommandText = "UPDATE `b1c`.`projects` SET `finishdate` = (#finishdate) WHERE (`projectid` = (#value));";
int i = cmd.ExecuteNonQuery();
connection.Close();
if (i != 0) {
MessageBox.Show("Project finalized.");
i = 0;
}
}
} catch (Exception ex) {
MessageBox.Show("Catch");
connection.Close();
}
return cmd;
}
You can see it's basically the same; the only difference is the values.
So it shouldn't be a connection thing, because this one works fine, I think.
I also don't think it's a problem in the SQL database, because all the problems I've had up until now that had anything to do with the database have shown up as errors in Visual Studio.
If anyone can help I will provide screenshots of anything you need and thank you all once again for trying to help.
Here is the screenshot of the function I've pasted previously so it's easier to look at.
[screenshot: FinishProject function]
I have a long set of SQL scripts. They are all update statements. It's for an access database. I want to validate the script before I run it. Firstly, I'd like to make sure that the query can be parsed. I.e. that the SQL is at least syntactically correct. Secondly, I'd like to make sure that the query is valid in terms of database structure - i.e. there are no missing columns or the columns are of the wrong type etc. But, I don't want the query to be actually executed. The aim of this is to do a quick validation before the process kicks off because the process takes several hours and one syntactical error can waste a day of someone's time.
I will probably write the tool in C# with .NET, but if there's a pre-built tool that would be even better. I will probably use the Access API. In SQL Server this is very straightforward: you can just validate the query in SQL Server Management Studio before running it, and it will give you a good indication of whether the SQL will complete or not.
How would I go about doing this?
Edit: an answer below solves the issue of checking syntax. However, I'd still like to be able to validate that the semantic content of the query is OK. I think this might be impossible in Access without actually running the query, though. Please tell me I'm wrong.
I'm not 100% sure if Access works the same way as a traditional database, but with a mainstream RDBMS there are actually three distinct steps that happen when you run a query:
Prepare
Execute
Fetch
Most are oblivious to the distinction because they just hit "run" and see results come back.
It's the "Execute" that actually compiles the statement before going off and pulling data.
When you use ADO, you can actually see the three events as three separate calls to the database. What this means is you can trap the execute step to see if it fails, and if it succeeds, there is nothing requiring you to actually get the results.
OleDbConnection conn = new OleDbConnection();
conn.ConnectionString = String.Format("{0}{1}",
    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=", @"c:\Access\MyDb.accdb");
conn.Open();
bool valid;
using (OleDbCommand cmd = new OleDbCommand("select [Bad Field] from [Table]", conn))
{
try
{
OleDbDataReader reader = cmd.ExecuteReader();
valid = true;
reader.Close(); // Did not ever call reader.Read()
}
catch (Exception ex)
{
valid = false;
}
}
And now valid indicates whether or not the statement compiled.
If you want to get really fancy, you can parse the exception results to find out why the command failed.
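If you do want to dig into the failure reason, one way (just a sketch; TryValidate is a made-up helper name) is to catch OleDbException specifically and read its Errors collection, which carries the ACE/Jet messages explaining why the statement failed to compile:

// assumes: using System.Data.OleDb; using System.Linq;
// hypothetical helper, not part of any library
static bool TryValidate(OleDbConnection conn, string sql, out string error)
{
    error = null;
    using (var cmd = new OleDbCommand(sql, conn))
    {
        try
        {
            using (var reader = cmd.ExecuteReader()) { }   // never call Read(), so no data is fetched
            return true;
        }
        catch (OleDbException ex)
        {
            // each OleDbError describes one provider-level problem (bad syntax, missing column, ...)
            error = string.Join("; ", ex.Errors.Cast<OleDbError>().Select(e => e.Message));
            return false;
        }
    }
}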
Access supports transactions on its Connection object. Try to execute your SQL statement inside a transaction and always call Rollback. Wrap the whole attempt in a Try/Catch block to assess whether the statement executed successfully or not.
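A minimal sketch of that idea (updateSql is a placeholder for one of your update statements, and the connection string is reused from the answer above): execute the statement for real inside a transaction, then always roll it back so the data is never actually changed.

// assumes: using System; using System.Data.OleDb;
using (var conn = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=c:\Access\MyDb.accdb"))
{
    conn.Open();
    using (OleDbTransaction tx = conn.BeginTransaction())
    {
        bool valid;
        try
        {
            // updateSql: one of your UPDATE statements (placeholder)
            using (var cmd = new OleDbCommand(updateSql, conn, tx))
            {
                cmd.ExecuteNonQuery();   // throws on syntax errors, missing columns, type mismatches
            }
            valid = true;
        }
        catch (OleDbException)
        {
            valid = false;
        }
        tx.Rollback();                   // never commit - we only wanted to know whether it runs
        Console.WriteLine(valid ? "statement is valid" : "statement failed");
    }
}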
This is going to top some of the weirdest things I've seen. I've tried looking up "simple t-sql delete causing timeout", but all the titles are misleading: they say simple but are not. They deal with deleting millions of records or have complex relationships set up. I do not.
I have four tables:
tblInterchangeControl,
tblFunctionalGroup,
tblTransactionSet,
tblSegment
The latter three all associate to tblInterchangeControl via InterchangeControlID. There are no relationships set up; like I said, it is as simple as one could get.
The procedure runs a delete statement on all 4 tables like so...
DELETE FROM tblSegment
WHERE (ID_InterchangeControlID = @InterchangeControlID)
DELETE FROM tblTransactionSet
WHERE (ID_InterchangeControlID = @InterchangeControlID)
DELETE FROM tblFunctionalGroup
WHERE (ID_InterchangeControlID = @InterchangeControlID)
DELETE FROM tblInterchangeControl
WHERE (InterchangeControlID = @InterchangeControlID)
The weird part is that if I leave these in the procedure it times out, and if I remove them it does not. I've pinned it down to these delete statements as the cause. But why?!
I included C# because I'm calling this procedure from a C# application. I don't think that is the issue, but maybe. I only say so because my code works just fine when I remove the delete statements from the stored procedure. Then, if I put them back, an exception is thrown saying it has timed out.
In case my comment is the answer.
Most likely you have some locks holding those deletes up.
If you run a query from a command-line SQL tool or from SQL Server Management Studio, it will take whatever time it needs to complete the query. So yes, most likely it's a client-side issue and, because you mentioned C#, it's probably the ADO.NET command timeout.
Also, I'd suggest profiling the queries by inspecting their execution plans. If you don't have indexes (primary/unique key constraints) on the columns you filter on, the deletes will do full scans, i.e. O(n) operations you don't want.
Update:
OK, it looks like it's an ADO.NET issue. In your code, just prior to executing the command, increase the timeout:
var myCommand = new SqlCommand("EXEC ..."); // you create it something like this
....
myCommand.CommandTimeout = 300; // 5 minutes
myCommand.ExecuteNonQuery(); // assuming your SP doesn't return anything
I've inherited an application with a lot of ADO work in it, but the insert/update helper method that was written returns void. We've also been experiencing a lot of issues with data updates/inserts not actually happening. My goal is to update all of them to check rows affected and depending on the results, act accordingly, but for the time being of finding what may be causing the issue, I wanted to log SQL statements that are called against the server and the number of rows affected by the statement.
This is the statement I'm attempting:
SqlCommand com = new SqlCommand(String.Format(
    "'INSERT INTO SqlUpdateInsertHistory(Statement, AffectedRows) VALUES (''{0}'', {1});'",
    statement.Replace("'", "''"), rows), con);
but it seems to constantly break somewhere in the SQL that is being passed in (in some cases on single quotes, but I imagine there are other characters that could cause it as well).
Is there a safe way to prep a statement string to be inserted?
I just can't rightly propose a solution to this question without totally modifying what you're doing. You're currently wide open to SQL Injection. Even if this is a local application, practice how you want to play.
using (SqlCommand com = new SqlCommand("INSERT INTO SqlUpdateInsertHistory(Statement, AffectedRows) VALUES (#Statement, #AffectedRows)", con))
{
com.Parameters.AddWithValue("#Statement", statement);
com.Parameters.AddWithValue("#AffectedRows", rows);
com.ExecuteNonQuery();
}
Have you tried SQL Server Profiler? It's already been written and logs queries, etc.
Someone else tried this and got a lot of decent answers here.
I have a simple query which returns 25,026 rows:
MySqlCommand cmd = new MySqlCommand("SELECT ID FROM People", DB);
MySqlDataReader reader = cmd.ExecuteReader();
(ID is an int.) If I just do this:
int i = 0;
while (reader.Read()) i++;
i will equal 25026. However, I need to do some processing on each ID in my loop; each iteration ends up taking somewhere in the hundreds of milliseconds.
int i = 0;
MySqlCommand updater = new MySqlCommand("INSERT INTO OtherTable (...)", anotherConnection);
updater.Prepare();
while (reader.Read()) {
int id = reader.GetInt32(0);
// do stuff, then
updater.ExecuteNonQuery();
i++;
}
However, after about 4:15 of processing, reader.Read() simply returns false. In most of my test runs, i equaled 14896, but it also sometimes stops at 11920. The DataReader quitting after the same number of records is suspicious, and the times it stops after a different number of rows seems even stranger.
Why is reader.Read() returning false when there's definitely more rows? There are no exceptions being thrown – not even first chance exceptions.
Update: I mentioned in my response to Shaun's answer that I was becoming convinced that MySqlDataReader.Read() is swallowing an exception, so I downloaded Connector/Net's source code (bzr branch lp:connectornet/6.2 C:/local/path) and added the project to my solution. Sure enough, after 6:15 of processing, an exception!
The call to resultSet.NextRow() throws a MySqlException with a message of "Reading from the stream has failed." The InnerException is a SocketException:
{ Message: "An existing connection was forcibly closed by the remote host",
ErrorCode: 10054,
SocketErrorCode: ConnectionReset }
10054 means the TCP socket was aborted with a RST instead of the normal disconnection handshake (FIN, FIN ACK, ACK), which tells me something screwy is happening to the network connection.
In my.ini, I cranked interactive_timeout and wait_timeout to 1814400 (seconds) to no avail.
So... why is my connection getting torn down after reading for 6:15 (375 sec)?
(Also, why is this exception getting swallowed when I use the official binary? It looks like it should bubble up to my application code.)
Perhaps you have a corrupted table - this guy's problem sounds very similar to yours:
http://forums.asp.net/t/1507319.aspx?PageIndex=2 - repair the table and see what happens.
If that doesn't work, read on:
My guess is that you are hitting some type of deadlock, especially considering you are reading and writing. This would explain why it works with the simple loop but doesn't work when you do updates. It would also explain why it happens around the same row / time each time.
There was a weird bug in SqlDataReader that squelched exceptions (http://support.microsoft.com/kb/316667). There might be something similar in MySqlDataReader - after your final .Read() call, try calling .NextResult(). Even if it's not a deadlock, it might help you diagnose the problem. In these types of situations you want to lean more towards "trust but verify": yes, the documentation says that an exception will be thrown on timeout, but sometimes (very rarely) the documentation lies :) This is especially true for third-party vendors - e.g. see http://bugs.mysql.com/bug.php?id=53439 - the MySQL .NET library has had a couple of problems like the one you are having in the past.
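A tiny sketch of that diagnostic, assuming the reader and loop from the question: after Read() gives up, poke the reader once more and see whether an error pops out.

while (reader.Read())
{
    // ... normal per-row processing ...
}

try
{
    reader.NextResult();   // may surface an exception the last Read() swallowed
}
catch (MySqlException ex)
{
    Console.WriteLine("Hidden failure: " + ex.Message);
}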
Another idea would be to watch what's happening in your database - make sure data is constantly being fetched up to the row where your code exits.
Failing that, I would just read all the data in, cache it, and then do your modifications. By batching the modifications, the code would be less chatty and execute faster.
Alternatively, reset the reader every 1000 or so rows (and keep track of which row ID you were up to).
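A compact sketch of that last idea (assuming ID is an ascending key, and reusing the DB connection and updater command from the question), so each reader only lives for one short page:

int lastId = 0;                             // assumes ID is an ascending key
while (true)
{
    var page = new List<int>();
    using (var pageCmd = new MySqlCommand(
        "SELECT ID FROM People WHERE ID > @lastId ORDER BY ID LIMIT 1000", DB))
    {
        pageCmd.Parameters.AddWithValue("@lastId", lastId);
        using (var r = pageCmd.ExecuteReader())
            while (r.Read())
                page.Add(r.GetInt32(0));
    }                                       // reader and command are disposed before the slow work starts

    if (page.Count == 0) break;             // no more rows
    foreach (int id in page)
    {
        // do stuff, then
        updater.ExecuteNonQuery();
        lastId = id;
    }
}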
Hope something here helps you with your frustration! :)
Since I'm just reading ints, I ultimately just read the entire result set into a List<int>, closed the reader, and then did my processing. This is fine for ints, since even a million of them take up less than 100 MB of RAM, but I'm still disappointed that the root issue isn't resolved - if I were reading more than a single int per row, memory would become a very large problem with a large dataset.
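For anyone who wants the shape of that workaround, a minimal sketch reusing cmd and updater from the question:

var ids = new List<int>();
using (MySqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
        ids.Add(reader.GetInt32(0));        // ~25k ints is a trivial amount of memory
}                                           // the reader (and its network stream) is closed here

foreach (int id in ids)
{
    // do stuff, then
    updater.ExecuteNonQuery();
}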
Try setting a longer connection timeout.
There are two issues that make things a bit more confusing than they should be:
The first, as has been mentioned in another post, is that older versions of the MySQL .NET connector were swallowing a timeout exception. I was using mysql.data.dll version 6.1.x, and after upgrading to 6.3.6 the exception was properly thrown.
The second is the default MySQL server timeouts, specifically net_read_timeout and net_write_timeout (which default to 30 and 60 seconds respectively).
With older versions of mysql.data.dll, if you were performing an action on the data inside the data-reader loop and exceeded the 60-second default timeout, it would just sit there and not do anything. Newer versions properly throw a timeout exception, which helps diagnose the problem.
Hope this helps someone who stumbles upon this; in my case the solution was to use a different approach, not an actual cause/fix.
TLDR: The fix is to increase net_read_timeout and net_write_timeout on the MySQL server in my.ini, although upgrading mysql.data.dll is a good idea too.
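If you can't touch my.ini, the same variables can usually be raised per session right after opening the connection; a sketch (600 seconds is an arbitrary example, and DB is the connection object from the question):

using (var readTune = new MySqlCommand("SET SESSION net_read_timeout = 600", DB))    // 600 s: example value
    readTune.ExecuteNonQuery();
using (var writeTune = new MySqlCommand("SET SESSION net_write_timeout = 600", DB))
    writeTune.ExecuteNonQuery();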
Maybe this is a timeout on the server side?
Try this
1. Add a reference to System.Transactions and add using System.Transactions; at the top of the file.
using (TransactionScope scope = new TransactionScope())
{
    // initialize the connection
    // execute the command
    // ...
    scope.Complete();
}
Write your entire insert/update logic inside the scope's using block. This will definitely help you.
Add the following after creating your Command.
cmd.CommandTimeout = 0;
This will set the CommandTimeout to indefinite. The reason you're getting a timeout is probably because the connection, though executed, is still in the 'command' phase because of the reader.
Either try setting CommandTimeout = 0, or read everything first and then run the subsequent functions on the results. Otherwise the only other issue I could possibly see is that the SQL server is dropping the result set against the specified process ID due to a timeout on the server itself.
I've found an article here:
http://corengen.wordpress.com/2010/06/09/mysql-connectornet-hangs-on-mysqldatareader-read/
What this guy experienced was something similar: a hang on the Read method at exactly the same moment, while reading the same record (which I guess is the same thing you experience).
In his case he called another web service during the Read() loop, and that one timed out, causing Read() to hang without an exception.
Could it be the same on your machine - that an update in the Read() loop times out (I think that update uses the default 30-second timeout) and causes the same effect?
Maybe a long shot, but reading the two stories they sounded a lot alike.
Try setting the timeout on the MySqlCommand and not the MySqlConnection on both "cmd" and "updater".
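i.e. something like this (300 seconds is just an illustrative value):

cmd.CommandTimeout = 300;       // the SELECT command feeding the reader
updater.CommandTimeout = 300;   // the INSERT command executed inside the loop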
What happens if you limit the query, e.g. SELECT ID FROM People LIMIT 100 (the MySQL equivalent of TOP 100)?