SQL Server timeout exception when extending LINQ partial methods - C#

In .NET 4.0 and LINQ to SQL, I am trying to use a partial class to "trigger" changes from within an update method (an existing DBML method). For simplicity, imagine a table Things with columns Id and Value.
The auto-generated DBML contains a method OnValueChanged. I'll extend that and, as an exercise, try to change one value in one other row:
public partial class Things
{
    partial void OnValueChanged()
    {
        MyAppDataContext dc = new MyAppDataContext();
        var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
        foreach (Things o in q)
        {
            o.Value = "1"; // try to change some other row
        }
        try
        {
            dc.SubmitChanges();
        }
        catch (Exception)
        {
            // SQL timeout occurs
        }
    }
}
A SQL timeout error occurs. I suspect that the DataContext is getting confused trying to SubmitChanges() before the current OnValueChanged() method has disposed of its DataContext, but I am not sure.
Mostly, I cannot find an example of a good pattern for triggering updates against the database from within an existing DBML-generated method.
Can anyone provide any pointers on why this doesn't work and how I can accomplish something that does? (I realize I could use a trigger in the SQL database, but I don't want to take that route.)
Thanks!

First, you aren't disposing of the DataContext at all in your function. Wrap it in a using statement.
The actual issue is coming from the fact that you're recursively calling yourself by setting the Value property on the retrieved values. You're just running into the timeout before you can hit a StackOverflowException.
It's unclear what you're trying to do here; if you're trying to allow different behavior between when you set the Value property here versus anywhere else, then it's simple enough to use a flag. In your partial class, declare an internal instance boolean auto property called UpdatingValue, and set it to true on each item inside your foreach block before you update the value, then set it to false after you update the value. Then, as the first line in OnValueChanged, check to ensure that UpdatingValue is false.
Like this:
public partial class Things
{
    internal bool UpdatingValue { get; set; }

    partial void OnValueChanged()
    {
        if (UpdatingValue) return;

        using (MyAppDataContext dc = new MyAppDataContext())
        {
            var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
            foreach (Things o in q)
            {
                o.UpdatingValue = true;
                o.Value = "1"; // try to change some other row
                o.UpdatingValue = false;
            }
            dc.SubmitChanges();
        }
    }
}

I would suspect that you may have introduced infinite recursion by changing the values of Things in the OnValueChanged event handler of Things.
To me, a cleaner solution to your problem is not to generate your class from a DBML file, but instead to use LINQ to SQL attributes on a class you create yourself. That way you can do your "trigger" modifications in the setters of your properties/columns, as sketched below.
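For illustration, a minimal sketch of that attribute-mapped approach, assuming the same Things table with Id and Value columns (the Thing class and OnValueSet method are hypothetical names, not the poster's code):

using System.Data.Linq.Mapping;

[Table(Name = "Things")]
public class Thing
{
    private string _value;

    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    // LINQ to SQL materializes rows through the Storage field, so this setter
    // only runs when your own code assigns Value.
    [Column(Storage = "_value")]
    public string Value
    {
        get { return _value; }
        set
        {
            _value = value;
            OnValueSet(); // run your "trigger" logic here instead of in a partial method
        }
    }

    private void OnValueSet()
    {
        // side effects go here; be careful not to re-enter this setter for the same row
    }
}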

I had a similar issue. I don't think it is a bug in your code; I'm leaning toward a bug in how SqlDependency works. I did the same thing as you, but I tested it incrementally. If the select statement returned 1-100 rows, it worked fine. If the select statement returned 1000 rows, I would get the SqlException (timeout).
It is not a stack overflow issue (at least not in this client code). Putting a break point at the OnValueChanged event handler reveals that it does not get called again while the SubmitChanges call is hanging.
It is possible that there is a requirement that the OnValueChanged call must return before you can call SubmitChanges. Maybe calling SubmitChanges on a different thread might help.
My solution was to wrap the code in a big try/catch block to catch the SqlException. If it happens, I perform the same query, but without a SqlDependency attached to the command. This no longer hangs the SubmitChanges call. Then, right after that, I recreate the SqlDependency and make the query again, to re-register the dependency.
This is not ideal, but at least it will process all the rows eventually. The problem only occurs if there are a lot of rows to be selected, and if the program is working smoothly, this should not happen as it is constantly catching up.
public Constructor(string connString, CogTrkDBLog logWriter0)
{
    connectionString = connString;
    logWriter = logWriter0;

    using (SqlConnection conn = new SqlConnection(connString))
    {
        conn.Open();
        using (SqlCommand cmd = new SqlCommand("SELECT is_broker_enabled FROM sys.databases WHERE name = 'cogtrk'", conn))
        {
            bool r = (bool)cmd.ExecuteScalar();
            if (!r)
            {
                throw new Exception("is_broker_enabled was false");
            }
        }
    }

    if (!CanRequestNotifications())
    {
        throw new Exception("Not enough permission to run");
    }

    // Remove any existing dependency connection, then create a new one.
    SqlDependency.Stop(connectionString);
    SqlDependency.Start(connectionString);

    if (connection == null)
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
    }
    if (command == null)
    {
        command = new SqlCommand(GetSQL(), connection);
    }

    GetData(false);
    GetData(true);
}

private string GetSQL()
{
    return "SELECT id, command, state, value " +
           " FROM dbo.commandqueue WHERE state = 0 ORDER BY id";
}

void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
    // Remove the handler, since it is only good
    // for a single notification.
    SqlDependency dependency = (SqlDependency)sender;
    dependency.OnChange -= dependency_OnChange;
    GetData(true);
}

void GetData(bool withDependency)
{
    lock (this)
    {
        bool repeat = false;
        do
        {
            repeat = false;
            try
            {
                GetDataRetry(withDependency);
            }
            catch (SqlException)
            {
                if (withDependency)
                {
                    GetDataRetry(false);
                    repeat = true;
                }
            }
        } while (repeat);
    }
}

private void GetDataRetry(bool withDependency)
{
    // Make sure the command object does not already have
    // a notification object associated with it.
    command.Notification = null;

    // Create and bind the SqlDependency object
    // to the command object.
    if (withDependency)
    {
        SqlDependency dependency = new SqlDependency(command);
        dependency.OnChange += dependency_OnChange;
    }

    Console.WriteLine("Getting a batch of commands");

    // Execute the command.
    using (SqlDataReader reader = command.ExecuteReader())
    {
        using (CommandQueueDb db = new CommandQueueDb(connectionString))
        {
            foreach (CommandEntry c in db.Translate<CommandEntry>(reader))
            {
                Console.WriteLine("id:" + c.id);
                c.state = 1;
                db.SubmitChanges();
            }
        }
    }
}

Avoid "The SqlParameter is already contained by another SqlParameterCollection" exception when use retryPolicy with SqlAzureExecutionStrategy

I've read various questions/suggestions about this exception. However, what am I supposed to do to avoid it when I use a retry policy? Could the connection end up not being closed, so that the parameters cannot be reused?
public class ReliableSqlCommand
{
    public List<ResultType> ExecuteReader<ResultType>() where ResultType : new()
    {
        var list = new List<ResultType>();
        var retryPolicy = new DWSqlAzureExecutionStrategy(SqlMaxRetryCount, SqlMaxDelay);
        retryPolicy.Execute(() =>
        {
            list = new List<ResultType>();
            using (var sqlConnection = new SqlConnection(ConnectionString))
            {
                using (var sqlCommand = new SqlCommand(CommandText, sqlConnection))
                {
                    sqlCommand.CommandTimeout = CommandTimeout;
                    sqlCommand.CommandType = CommandType;
                    sqlCommand.Parameters.AddRange(Parameters.ToArray());
                    sqlCommand.Connection = sqlConnection;
                    sqlConnection.Open();
                    using (SqlDataReader dataReader = sqlCommand.ExecuteReader())
                    {
                        while (dataReader.Read())
                        {
                            if (typeof(ResultType).BaseType == typeof(System.ValueType))
                            {
                                var sqlValue = dataReader.GetValue(0);
                                if (sqlValue == DBNull.Value)
                                    list.Add(default);
                                else
                                    list.Add((ResultType)ChangeType(sqlValue, typeof(ResultType)));
                            }
                            else
                            {
                                // handle complex types (objects)
                                ResultType item = new ResultType();
                                Type itemType = item.GetType();
                                for (int columnNr = 0; columnNr < dataReader.FieldCount; columnNr++)
                                {
                                    PropertyInfo prop = itemType.GetProperty(dataReader.GetName(columnNr));
                                    if (prop == null) continue;
                                    var value = dataReader.GetValue(columnNr);
                                    if (value == null || value == DBNull.Value)
                                    {
                                        prop.SetValue(item, null);
                                    }
                                    else
                                    {
                                        prop.SetValue(item, value);
                                    }
                                }
                                list.Add(item);
                            }
                        }
                        sqlConnection.Close();
                    }
                    sqlCommand.Parameters.Clear();
                }
            }
        });
        return list;
    }
}
ReliableSqlCommand contains this as a property:
public List<SqlParameter> Parameters { get; } = new List<SqlParameter>();
After reviewing your code, I could imagine the following. (Note that I haven't tested it.)
You pass a function to retryPolicy.Execute(), which seems to correctly handle your database actions, disposing all connections, commands, datareaders, etc.
However, I assume that the retryPolicy can already start executing a new run of that function while a previous run is still active/running (or at least not yet fully completed). In that case, the parameters in ReliableSqlCommand.Parameters will be added to a new instance of SqlCommand, which is clearly not allowed when those parameters are still "alive" in a previous running function call in the background (which is still waiting for a database timeout exception, perhaps).
I do not see a straightforward stable/reliable fix for this.
Within the function, you could try to make new copies/instances of the parameter objects and assign those copies to the SqlCommand instance (see the sketch below). But if you have output parameters, you will have to update the ReliableSqlCommand.Parameters collection afterwards. With multiple running/overlapping function calls, that might be tricky as well.
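For illustration, a rough sketch of that copy approach, reusing the CommandText and Parameters members from the question (everything else is assumed). SqlParameter implements ICloneable, so each parameter can be cloned before it is attached to the new command:

using (var sqlCommand = new SqlCommand(CommandText, sqlConnection))
{
    foreach (SqlParameter p in Parameters)
    {
        // Clone() gives an unattached copy, so the original stays reusable for the next retry.
        sqlCommand.Parameters.Add((SqlParameter)((ICloneable)p).Clone());
    }
    // ... open the connection and execute as before ...
    // For output parameters, copy the values back into the shared Parameters list here.
}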
I think what you need to do is either ensure the parameters are removed from the old command, or cache the command.
If I understand correctly, the Execute function retries the lambda, and swallows any exceptions along the way. It does not execute multiple times concurrently.
Unfortunately, SqlCommand.Dispose does not remove the parameters from the command.
So option 1 is:
using (var sqlCommand = new SqlCommand(CommandText, sqlConnection))
{
    try
    {
        .......
    }
    finally
    {
        sqlCommand.Parameters.Clear();
    }
}
A better option in my opinion, given that a parameter is supposed to be used with only one command, is to cache the command also.
There is nothing wrong with this, as long as the connection is changed each time.
public class ReliableSqlCommand
{
    public SqlCommand Command { get; set; }
Then instead of using (var sqlCommand = new SqlCommand...), just use the existing cached command (here a backing field _command) and assign it the new connection each time:
_command.Connection = sqlConnection;
If you don't want to expose your command object directly, you could make a wrapper that adds and deletes the parameters; a rough sketch follows below.
It's not strictly necessary to dispose of SqlCommand, because its Dispose does nothing. But for consistency's sake, you may want to make ReliableSqlCommand disposable as well.
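For illustration, a minimal sketch of such a wrapper, assuming the Parameters list from the question (the ParameterScope name and shape are hypothetical):

public sealed class ParameterScope : IDisposable
{
    private readonly SqlCommand _command;

    public ParameterScope(SqlCommand command, IEnumerable<SqlParameter> parameters)
    {
        _command = command;
        foreach (var p in parameters)
        {
            // Attach the shared parameter instances to this attempt's command.
            _command.Parameters.Add(p);
        }
    }

    public void Dispose()
    {
        // Detach them again so the same instances can be attached to the next command.
        _command.Parameters.Clear();
    }
}

Usage inside the retried lambda would then look like using (new ParameterScope(sqlCommand, Parameters)) { ... } wrapped around the execution.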

Thread.Abort doesn't release a file

I wrote code that creates a .sqlite database. It all works, but I want to be sure that when the user starts the application for the first time, the database population is completed. If the user aborts the database population, the database must be deleted (the application can't work with an incomplete resource). I execute the method that creates the database on a thread, and I've declared the thread variable as a field of the class, like:
Thread t = new Thread(() => Database.createDB());
The Database.createDB() method creates the DB. Everything works perfectly and the DB is created correctly. Now I handle the closing of the window that is creating the DB like this:
protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
    MessageBoxResult result = MessageBox.Show(
        @"Sure?",
        "Attention", MessageBoxButton.YesNo, MessageBoxImage.Question);
    try
    {
        if (result == MessageBoxResult.Yes)
        {
            t.Abort();
            if (File.Exists("Database.sqlite"))
            {
                File.Delete("SoccerForecast.sqlite");
                Process.GetCurrentProcess().Kill();
            } ....
The event fires correctly and the thread stops, but when execution reaches the condition if (File.Exists("Database.sqlite")) I get:
Can't delete file - in use by another process.
But I've stopped the thread, so why does this exception appear? What am I doing wrong?
UPDATE:
In the CreateDb() method I also call other methods of a different class; one of them is structured like this:
public void setSoccer()
{
    Database.m_dbConnection.Open();
    string requestUrl = "...";
    string responseText = Parser.Request(requestUrl);
    List<SoccerSeason.RootObject> obj = JsonConvert.DeserializeObject<List<SoccerSeason.RootObject>>(responseText);
    foreach (var championships in obj)
    {
        string sql = "string content";
        SQLiteCommand command = new SQLiteCommand(sql, Database.m_dbConnection);
        try
        {
            command.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }
    string query = "select * from SoccerSeason";
    SQLiteCommand input = new SQLiteCommand(query, Database.m_dbConnection);
    SQLiteDataReader reader = input.ExecuteReader();
    int i = 0;
    while (reader.Read())
    {
        // reading data previously inserted in the database
    }
    Database.m_dbConnection.Close();
}
I was wondering where I should put the flag variable, because this code has several different loops inside.
It could be that when you're aborting the thread it's not cleanly closing the database connections, hence the error you're seeing.
Might I suggest a slight redesign because using Thread.Abort is not ideal.
Instead use a variable as a cancel flag to notify the thread to shut down.
Then when the thread detects that this cancel flag is set it can properly close connections and handle the database delete itself.
Update:
A brief example to illustrate what I mean; it ain't pretty and it won't compile but it gives the general idea.
public class Database
{
    public volatile bool Stop = false;

    public void CreateDb()
    {
        if (!Stop)
        {
            // Create database
        }
        if (!Stop)
        {
            // Open database
            // Do stuff with database
        }
        // blah blah ...
        if (Stop)
        {
            // Close your connections
            // Delete your database
        }
    }
}
...
protected override void OnClosing(CancelEventArgs e)
{
    Database.Stop = true;
}
And now that you know roughly what you're looking for, I heartily recommend searching for posts on thread cancellation by people who know what they're talking about, so you can see how to do it right.
These might be reasonable starting points:
How to: Create and Terminate Threads
.NET 4.0+ actually has a CancellationToken object with this very purpose in mind: Cancellation in Managed Threads. A rough sketch of that approach follows.
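For illustration, a minimal sketch of the cooperative-cancellation approach; the DatabaseBuilder class, the step names, and the file name are placeholders, not taken from the question:

using System.IO;
using System.Threading;

class DatabaseBuilder
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private Thread _worker;

    public void Start()
    {
        // Run the population on a worker thread, passing the token down.
        _worker = new Thread(() => CreateDb(_cts.Token));
        _worker.Start();
    }

    public void Cancel()
    {
        // Call this from OnClosing instead of t.Abort().
        _cts.Cancel();
    }

    private void CreateDb(CancellationToken token)
    {
        foreach (var step in new[] { "schema", "seed data", "indexes" })
        {
            if (token.IsCancellationRequested)
            {
                // Close connections cleanly first, then remove the partial database.
                File.Delete("Database.sqlite");
                return;
            }
            // ... do the work for this step ...
        }
    }
}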

Update only works in debug mode

I'm new to using Entity Framework as a data layer between MVC and SQL Server, so I apologize up front if what I'm doing is bad practice.
Let me start by sharing the code that is handling the update.
Update Delivery:
public bool One(Delivery toUpdate)
{
    using (var dbContext = new FDb())
    {
        try
        {
            var deliveryInDb = this.dbTable(dbContext).Single(x => x.DeliveryId == toUpdate.DeliveryId);
            dbContext.Entry(deliveryInDb).CurrentValues.SetValues(toUpdate);

            // removal first
            List<DeliveryDay> currentDays = FEngineCore.DeliveryDay.Get.ForValue((x => x.DeliveryId), toUpdate.DeliveryId);
            List<DeliveryTime> currentTimes = FEngineCore.DeliveryTime.Get.ForValue((x => x.DeliveryId), toUpdate.DeliveryId);

            // remove delivery days that are not needed
            foreach (var curDay in currentDays)
            {
                if (!toUpdate.DeliveryDays.Select(x => x.DeliveryDayId).Contains(curDay.DeliveryDayId))
                {
                    FEngineCore.DeliveryDay.Delete.One((x => x.DeliveryDayId), curDay.DeliveryDayId);
                    deliveryInDb.DeliveryDays.Remove(curDay);
                }
            }

            // remove delivery times that are not needed
            foreach (var curTime in currentTimes)
            {
                if (!toUpdate.DeliveryTimes.Select(x => x.DeliveryTimeId).Contains(curTime.DeliveryTimeId))
                {
                    FEngineCore.DeliveryTime.Delete.One((x => x.DeliveryTimeId), curTime.DeliveryTimeId);
                    deliveryInDb.DeliveryTimes.Remove(curTime);
                }
            }

            foreach (var day in toUpdate.DeliveryDays)
            {
                if (day.DeliveryDayId == 0)
                {
                    dbContext.DeliveryDays.Add(day);
                }
                else
                {
                    if (dbContext.DeliveryDays.Local.Any(e => e.DeliveryDayId == day.DeliveryDayId))
                    {
                        dbContext.Entry(dbContext.DeliveryDays.Local.First(e => e.DeliveryDayId == day.DeliveryDayId)).CurrentValues.SetValues(day);
                        dbContext.Entry(dbContext.DeliveryDays.Local.First(e => e.DeliveryDayId == day.DeliveryDayId)).State = EntityState.Modified;
                    }
                    else
                    {
                        DeliveryDay modDay = new DeliveryDay
                        {
                            DayOfWeek = day.DayOfWeek,
                            DeliveryDayId = day.DeliveryDayId,
                            DeliveryId = day.DeliveryId,
                            Interval = day.Interval
                        };
                        dbContext.DeliveryDays.Attach(modDay);
                        dbContext.Entry(modDay).State = EntityState.Modified;
                    }
                    deliveryInDb.DeliveryDays.Add(day);
                }
            }

            foreach (var time in toUpdate.DeliveryTimes)
            {
                if (time.DeliveryTimeId == 0)
                {
                    dbContext.DeliveryTimes.Add(time);
                }
                else
                {
                    if (dbContext.DeliveryTimes.Local.Any(e => e.DeliveryTimeId == time.DeliveryTimeId))
                    {
                        dbContext.Entry(dbContext.DeliveryTimes.Local.First(e => e.DeliveryTimeId == time.DeliveryTimeId)).CurrentValues.SetValues(time);
                        dbContext.Entry(dbContext.DeliveryTimes.Local.First(e => e.DeliveryTimeId == time.DeliveryTimeId)).State = EntityState.Modified;
                    }
                    else
                    {
                        DeliveryTime modTime = new DeliveryTime
                        {
                            DeliveryId = time.DeliveryId,
                            DeliveryLocationId = time.DeliveryLocationId,
                            DeliveryTimeId = time.DeliveryTimeId,
                            DropoffTime = time.DropoffTime
                        };
                        dbContext.DeliveryTimes.Attach(modTime);
                        dbContext.Entry(modTime).State = EntityState.Modified;
                    }
                    deliveryInDb.DeliveryTimes.Add(time);
                }
            }

            dbContext.SaveChanges();
            dbContext.Entry(deliveryInDb).State = EntityState.Detached;
            return true;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.InnerException);
            return false;
        }
    }
}
Let me continue by explaining that the delivery object has two children: DeliveryTime and DeliveryDay. The issue arises when I try to remove one deliveryTime and modify nothing else. The end result of running the code normally (not in debug) is that the deliveryTime is in fact not removed. Here's the interesting thing: when I debug it and step through the breakpoints, everything works as expected!
Let me continue by posting the code that is running behind the removal method of the deliveryTime (actually all entity objects in my system).
public bool One<V>(Expression<Func<T, V>> property, V value) where V : IComparable
{
    using (var dbContext = new FoodsbyDb())
    {
        try
        {
            T toDelete;
            // get the body as a property that represents the property of the entity object
            MemberExpression entityPropertyExpression = property.Body as MemberExpression;
            // get the parameter that is representing the entity object
            ParameterExpression entityObjectExpression = (ParameterExpression)entityPropertyExpression.Expression;
            // represent the value being checked against as an expression constant
            Expression valueAsExpression = Expression.Constant(value);
            // check the equality of the property and the value
            Expression equalsExpression = Expression.Equal(entityPropertyExpression, valueAsExpression);
            // create an expression that takes the entity object as a parameter, and checks the equality using the equalsExpression variable
            Expression<Func<T, bool>> filterLambda = Expression.Lambda<Func<T, bool>>(equalsExpression, entityObjectExpression);
            toDelete = this.dbTable(dbContext)
                .SingleOrDefault(filterLambda);
            if (toDelete != null)
            {
                this.dbTable(dbContext)
                    .Remove(toDelete);
                dbContext.SaveChanges();
                return true;
            }
            return false;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.InnerException);
            return false;
        }
    }
}
The code above is obviously generic, and it handles all my entity objects. I have tested it in and out and know for sure the problem does not lie in there. I thought it would be helpful to post it so you all can have a full understanding of what's going on.
Here's my best guess as to what's going on:
The reference to the removed deliveryTime still exists when the database context is saved, but when I debug, the system has enough time to remove the context.
Here was one of my attempted solutions:
Remove all references to the children objects immediately after setting currentDays and currentTimes and then proceeding to add them back to deliveryInDb as you enumerate through them.
Because I am new to all of this, if you see some bad practice along with the solution, I wouldn't mind constructive criticism to improve my programming method.
I actually encountered this issue in a project at work. The project is an older MVC4 project using EF 6.1.
In our situation, a simple update attempting to set a related entity property to null was failing to actually set it to null while running the web app normally (in debug mode). When setting a break point on the line of code that sets the property to null the database would be updated as expected, though. So, the update was working when a break point was in place but not working when allowed to run normally.
Using an EF interceptor, we could see that, with the break point in place, the update query was going through as expected.
Now, in our situation the related entity was using the virtual keyword to allow for lazy loading. I think this is the root of the issue. When a break point is present, EF has enough time to both lazily load that related entity and evaluate whatever it needs to evaluate and finally set it to null. When running without a break point, I think EF gets caught up trying to lazily load that entity and therefore fails to think it needs to be updated. To be clear, I was both accessing the related entity property for the first time and setting it null using a one-liner of code.
foo.Bar = null;
I resolved this issue, in our scenario, by accessing that property at least once prior to setting it to null, so that EF is forced to load it. With it loaded, setting it to null seems to work as intended. So again, to be clear, I think the issue is a combination of lazy loading and a one-liner of code that both accesses the property for the first time and assigns null to it. A small sketch of the workaround is below.
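For illustration, a minimal sketch of that workaround, using the hypothetical foo.Bar from above (the context variable is assumed):

// Touch the lazily loaded navigation property first so EF materializes it...
var current = foo.Bar;

// ...then the change from a loaded value to null is tracked and persisted as usual.
foo.Bar = null;
dbContext.SaveChanges();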
It appears that you're using multiple instances of your DbContext, and they are not synchronized.
The solution would be to use a single instance and pass that instance between your methods, roughly as sketched below.
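For illustration, a minimal sketch of that idea; FDb and Delivery are from the question, while the helper methods are invented placeholders:

public bool One(Delivery toUpdate)
{
    using (var dbContext = new FDb())
    {
        // One context for the whole unit of work: every helper receives it as an argument.
        RemoveUnneededChildren(dbContext, toUpdate);
        ApplyParentChanges(dbContext, toUpdate);
        dbContext.SaveChanges();
        return true;
    }
}

private void RemoveUnneededChildren(FDb dbContext, Delivery toUpdate)
{
    // query and delete child rows through the shared dbContext,
    // instead of newing up a second context inside a generic Delete helper
}

private void ApplyParentChanges(FDb dbContext, Delivery toUpdate)
{
    var deliveryInDb = dbContext.Set<Delivery>().Single(x => x.DeliveryId == toUpdate.DeliveryId);
    dbContext.Entry(deliveryInDb).CurrentValues.SetValues(toUpdate);
}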

How can I notify my program when the database has been updated?

I have a C# program that queries the SQL Server database for some values.
Currently the application queries the database every minute to make sure that the table is up to date.
What I would like to be able to do is that the query is only done when the database has been changed / updated. How do I notify my program when something has been updated in the database?
Thanks
Polling the database is not a very elegant solution.
SqlDependency from ADO.NET will be useful in your case. It does not use polling but a notification mechanism. The notifications are provided by Service Broker in your database, so you will need to enable that service in your database. The OnChange event is raised when the specified table changes (update, delete, insert, ...).
Here is an example of how to use SqlDependency:
void Initialization()
{
    // Create a dependency connection.
    SqlDependency.Start(connectionString, queueName);
}

void SomeMethod()
{
    // Assume connection is an open SqlConnection.
    // Create a new SqlCommand object.
    using (SqlCommand command = new SqlCommand(
        "SELECT ShipperID, CompanyName, Phone FROM dbo.Shippers",
        connection))
    {
        // Create a dependency and associate it with the SqlCommand.
        SqlDependency dependency = new SqlDependency(command);
        // Maintain the reference in a class member.

        // Subscribe to the SqlDependency event.
        dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);

        // Execute the command.
        using (SqlDataReader reader = command.ExecuteReader())
        {
            // Process the DataReader.
        }
    }
}

// Handler method
void OnDependencyChange(object sender, SqlNotificationEventArgs e)
{
    // Handle the event (for example, invalidate this cache entry).
}

void Termination()
{
    // Release the dependency.
    SqlDependency.Stop(connectionString, queueName);
}
from http://msdn.microsoft.com/en-us/library/62xk7953.aspx
Here is how to enable Service Broker (note that you need exclusive access to the database to do that, so it's best done right after a restart of SQL Server):
http://blogs.sftsrc.com/stuart/archive/2007/06/13/42.aspx (broken link)
Possible alternative link: http://technet.microsoft.com/en-us/library/ms166086(v=sql.105).aspx
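For reference, a minimal sketch of enabling it from code (the connection string and database name are placeholders; WITH ROLLBACK IMMEDIATE forcibly disconnects other sessions):

using (var conn = new SqlConnection("Data Source=.;Initial Catalog=master;Integrated Security=True"))
{
    conn.Open();
    using (var cmd = new SqlCommand(
        "ALTER DATABASE [YourDatabase] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE", conn))
    {
        cmd.ExecuteNonQuery();
    }
    // Verify afterwards with:
    // SELECT is_broker_enabled FROM sys.databases WHERE name = 'YourDatabase'
}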
If you are on SQL Server 2005 and above, you can consider using the SqlDependency object.
It represents a query notification dependency between an application and an instance of SQL Server 2005.
An application can create a SqlDependency object and register to receive notifications via the OnChangeEventHandler event handler.
Refer to this link on MSDN for more information.
However, do note the caveat that Microsoft puts against its use: it is advised to have a caching layer and then use SqlDependency in coordination with that layer.
SqlDependency was designed to be used in ASP.NET or middle-tier services where there is a relatively small number of servers having dependencies active against the database. It was not designed for use in client applications, where hundreds or thousands of client computers would have SqlDependency objects set up for a single database server.
To get a notification when a record is updated, without having the application query the table, you can use the TableDependency component (in your specific case, SqlTableDependency). Here is an example:
public partial class Window1 : Window
{
    private IList<Stock> _stocks;
    private readonly string _connectionString =
        "data source=.;initial catalog=myDB;integrated security=True";
    private readonly SqlTableDependency<Stock> _dependency;

    public Window1()
    {
        this.InitializeComponent();
        this.McDataGrid.ItemsSource = LoadCollectionData();
        this.Closing += Window1_Closing;

        var mapper = new ModelToTableMapper<Stock>();
        mapper.AddMapping(model => model.Symbol, "Code");

        _dependency = new SqlTableDependency<Stock>(_connectionString, "Stocks", mapper);
        _dependency.OnChanged += _dependency_OnChanged;
        _dependency.OnError += _dependency_OnError;
        _dependency.Start();
    }

    private void Window1_Closing(object sender, System.ComponentModel.CancelEventArgs e)
    {
        _dependency.Stop();
    }

    private void _dependency_OnError(object sender, TableDependency.EventArgs.ErrorEventArgs e)
    {
        throw e.Error;
    }

    private void _dependency_OnChanged(
        object sender,
        TableDependency.EventArgs.RecordChangedEventArgs<Stock> e)
    {
        if (_stocks != null)
        {
            if (e.ChangeType != ChangeType.None)
            {
                switch (e.ChangeType)
                {
                    case ChangeType.Delete:
                        _stocks.Remove(_stocks.FirstOrDefault(c => c.Symbol == e.Entity.Symbol));
                        break;
                    case ChangeType.Insert:
                        _stocks.Add(e.Entity);
                        break;
                    case ChangeType.Update:
                        var customerIndex = _stocks.IndexOf(
                            _stocks.FirstOrDefault(c => c.Symbol == e.Entity.Symbol));
                        if (customerIndex >= 0) _stocks[customerIndex] = e.Entity;
                        break;
                }

                this.McDataGrid.Dispatcher.Invoke(DispatcherPriority.Background, new Action(() =>
                {
                    this.McDataGrid.Items.Refresh();
                }));
            }
        }
    }

    private IEnumerable<Stock> LoadCollectionData()
    {
        _stocks = new List<Stock>();
        using (var sqlConnection = new SqlConnection(_connectionString))
        {
            sqlConnection.Open();
            using (var sqlCommand = sqlConnection.CreateCommand())
            {
                sqlCommand.CommandText = "SELECT * FROM [Stocks]";
                using (var sqlDataReader = sqlCommand.ExecuteReader())
                {
                    while (sqlDataReader.Read())
                    {
                        var code = sqlDataReader
                            .GetString(sqlDataReader.GetOrdinal("Code"));
                        var name = sqlDataReader
                            .GetString(sqlDataReader.GetOrdinal("Name"));
                        var price = sqlDataReader
                            .GetDecimal(sqlDataReader.GetOrdinal("Price"));
                        _stocks.Add(new Stock { Symbol = code, Name = name, Price = price });
                    }
                }
            }
        }
        return _stocks;
    }
}
The event handler is triggered for every INSERT, UPDATE or DELETE operation done on the table, reporting the modified values to you. So, in case you are interested in keeping your C# DataTable up to date, you can simply get the fresh data from the event handler.
What I would like to be able to do is that the query is only done when the database has been changed/updated. How do I notify my program when something is updated in the database?
There isn't any means of the database pushing notifications to the application. The application needs to poll the database to check for updates, and then deal with the updates appropriately.
If by "updates to the database" you mean any update by any application, you're out of luck: it's not doable.
If, however, you mean changes made by your app, it's easy: every time you update the DB, raise an event and have handlers respond to it (see the sketch below).
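For illustration, a minimal sketch of that idea (the DbNotifier name and its members are hypothetical):

public static class DbNotifier
{
    // Raised by your own data-access code after each successful write.
    public static event EventHandler DatabaseChanged;

    public static void NotifyChanged()
    {
        var handler = DatabaseChanged;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

// After any insert/update/delete performed by the app:
//     DbNotifier.NotifyChanged();
// Consumers subscribe once and re-query only when notified:
//     DbNotifier.DatabaseChanged += (s, e) => RefreshFromDatabase();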

TransactionScope not rolling back transaction

Here is the current architecture of my transaction scope source code. The third insert throws a .NET exception (not a SqlException), and the two previous insert statements are not rolled back. What am I doing wrong?
EDIT: I removed the try/catch from Insert2 and Insert3. I also removed the exception handling utility from the Insert1 try/catch and put "throw ex" instead. It still does not roll back the transaction.
EDIT 2: I added the try/catch back on the Insert3 method and just put a "throw" in the catch block. It still does not roll back the transaction.
UPDATE: Based on the feedback I received, the "SqlHelper" class uses a SqlConnection object to establish a connection to the database, then creates a SqlCommand object, sets the CommandType property to "StoredProcedure" and calls ExecuteNonQuery on the SqlCommand.
I also did not add Transaction Binding=Explicit Unbind to the current connection string. I will add that during my next test.
public void InsertStuff()
{
    try
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // perform insert 1
            using (SqlHelper sh = new SqlHelper())
            {
                SqlParameter[] sp = { /* create parameters for first insert */ };
                sh.Insert("MyInsert1", sp);
            }

            // perform insert 2
            this.Insert2();

            // perform insert 3 - breaks here!!!!!
            this.Insert3();

            ts.Complete();
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}

public void Insert2()
{
    // perform insert 2
    using (SqlHelper sh = new SqlHelper())
    {
        SqlParameter[] sp = { /* create parameters for second insert */ };
        sh.Insert("MyInsert2", sp);
    }
}

public void Insert3()
{
    // perform insert 3
    using (SqlHelper sh = new SqlHelper())
    {
        SqlParameter[] sp = { /* create parameters for third insert */ };
        sh.Insert("MyInsert3", sp);
    }
}
I have also run into a similar issue. My problem occurred because the SqlConnection I used in my SqlCommands was already open before the TransactionScope was created, so it never got enlisted in the TransactionScope as a transaction.
Is it possible that the SqlHelper class is reusing an instance of SqlConnection that is open before you enter your TransactionScope block?
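For context, a minimal sketch of the enlistment rule (the connection string and stored procedure name are placeholders): a SqlConnection opened inside the TransactionScope enlists in the ambient transaction automatically, while one opened beforehand does not unless you enlist it explicitly.

using (var ts = new TransactionScope())
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // opened inside the scope, so it enlists in the ambient transaction

        using (var cmd = new SqlCommand("MyInsert1", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.ExecuteNonQuery();
        }
    }
    ts.Complete();
}

// A connection opened before the scope would instead need:
//     existingConnection.EnlistTransaction(Transaction.Current);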
It looks like you are catching the exception in Insert3(), so your code continues after the call. If you want it to roll back, you'll need to let the exception bubble up to the try/catch block in the main routine so that the ts.Complete() statement never gets called.
An implicit rollback only occurs if the using block is exited without calling ts.Complete(). Because you are handling the exception in Insert3(), the exception never causes the using statement to exit.
Either rethrow the exception or notify the caller that a rollback is needed (perhaps change the signature of Insert3() to bool Insert3()?).
(based on the edited version that doesn't swallow exceptions)
How long do the operations take? If any of them are very long-running, it is possible that the Transaction Binding bug/feature has bitten you - i.e. the connection has become detached. Try adding Transaction Binding=Explicit Unbind to the connection string (see below).
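For illustration, a minimal sketch of where the keyword goes (server and database names are placeholders):

// Same connection string as before, plus the Transaction Binding keyword.
var connectionString =
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;" +
    "Transaction Binding=Explicit Unbind";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    // commands enlisted in a long-running TransactionScope now stay explicitly bound
}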
I don't see your helper class, but TransactionScope rolls back if you don't call Complete, even when the error comes from .NET code rather than SQL. I copied one example for you; you may be doing something wrong while debugging. This example throws an error in .NET code and has a catch block similar to yours.
private static readonly string _connectionString = ConnectionString.GetDbConnection();
private const string inserttStr = @"INSERT INTO dbo.testTable (col1) VALUES(@test);";

/// <summary>
/// Execute command on DBMS.
/// </summary>
/// <param name="command">Command to execute.</param>
private void ExecuteNonQuery(IDbCommand command)
{
    if (command == null)
        throw new ArgumentNullException("Parameter 'command' can't be null!");

    using (IDbConnection connection = new SqlConnection(_connectionString))
    {
        command.Connection = connection;
        connection.Open();
        command.ExecuteNonQuery();
    }
}

public void FirstMethod()
{
    IDbCommand command = new SqlCommand(inserttStr);
    command.Parameters.Add(new SqlParameter("@test", "Hello1"));
    ExecuteNonQuery(command);
}

public void SecondMethod()
{
    IDbCommand command = new SqlCommand(inserttStr);
    command.Parameters.Add(new SqlParameter("@test", "Hello2"));
    ExecuteNonQuery(command);
}

public void ThirdMethodCauseNetException()
{
    IDbCommand command = new SqlCommand(inserttStr);
    command.Parameters.Add(new SqlParameter("@test", "Hello3"));
    ExecuteNonQuery(command);
    int a = 0;
    int b = 1 / a;
}

public void MainWrap()
{
    TransactionOptions tso = new TransactionOptions();
    tso.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
    // TransactionScopeOption.Required, tso

    try
    {
        using (TransactionScope sc = new TransactionScope())
        {
            FirstMethod();
            SecondMethod();
            ThirdMethodCauseNetException();
            sc.Complete();
        }
    }
    catch (Exception ex)
    {
        logger.ErrorException("eee ", ex);
    }
}
If you want to debug your transactions, you can use this script to see locks and waiting status etc.
SELECT
    request_session_id AS spid,
    CASE transaction_isolation_level
        WHEN 0 THEN 'Unspecified'
        WHEN 1 THEN 'ReadUncommitted'
        WHEN 2 THEN 'ReadCommitted'
        WHEN 3 THEN 'Repeatable'
        WHEN 4 THEN 'Serializable'
        WHEN 5 THEN 'Snapshot'
    END AS TRANSACTION_ISOLATION_LEVEL,
    resource_type AS restype,
    resource_database_id AS dbid,
    DB_NAME(resource_database_id) AS DBNAME,
    resource_description AS res,
    resource_associated_entity_id AS resid,
    CASE
        WHEN resource_type = 'OBJECT' THEN OBJECT_NAME(resource_associated_entity_id)
        ELSE 'N/A'
    END AS ObjectName,
    request_mode AS mode,
    request_status AS status
FROM sys.dm_tran_locks l
LEFT JOIN sys.dm_exec_sessions s ON l.request_session_id = s.session_id
WHERE resource_database_id = 24
ORDER BY spid, restype, dbname;
You will see one SPID for the two method calls made before the exception method is called.
The default isolation level is Serializable. You can read more about locks and transactions here.
I ran into a similar issue when I had a call to a WCF service operation in TransactionScope.
I noticed transaction flow was not allowed due to the 'TransactionFlow' attribute in the service interface. Therefore, the WCF service operation was not using the transaction used by the outer transaction scope. Changing it to allow transaction flow as shown below fixed my problem.
[TransactionFlow(TransactionFlowOption.NotAllowed)]
to
[TransactionFlow(TransactionFlowOption.Allowed)]
