Using TransactionScope in Service Layer for UnitOfWork operations - c#

Is my approach right: bundling all three dataprovider.GetXXX methods in a TransactionScope in the service layer as a UnitOfWork?
Would you do something different?
From where does the TransactionScope ts know the concrete connection string?
Should I get the Transaction object from my connection and pass that Transaction object to the constructor of the TransactionScope?
Service layer, e.g. AdministrationService.cs:
private List<Schoolclass> GetAdministrationData()
{
    List<Schoolclass> schoolclasses = null;
    using (TransactionScope ts = new TransactionScope())
    {
        schoolclasses = _adminDataProvider.GetSchoolclasses();
        foreach (var s in schoolclasses)
        {
            List<Pupil> pupils = _adminDataProvider.GetPupils(s.Id);
            s.Pupils = pupils;
            foreach (var p in pupils)
            {
                List<Document> documents = _documentDataProvider.GetDocuments(p.Id);
                p.Documents = documents;
            }
        }
        ts.Complete();
    }
    return schoolclasses;
}
Here is a sample of how any of those three methods in the DataProvider could look:
public List<Schoolclass> GetSchoolclassList()
{
    // used that formerly without TransactionScope => using (var trans = DataAccess.ConnectionManager.BeginTransaction())
    using (var com = new SQLiteCommand(DataAccess.ConnectionManager))
    {
        com.CommandText = "SELECT * FROM SCHOOLCLASS";
        var schoolclasses = new List<Schoolclass>();
        using (var reader = com.ExecuteReader())
        {
            Schoolclass schoolclass = null;
            while (reader.Read())
            {
                schoolclass = new Schoolclass();
                schoolclass.SchoolclassId = Convert.ToInt32(reader["schoolclassId"]);
                schoolclass.SchoolclassCode = reader["schoolclasscode"].ToString();
                schoolclasses.Add(schoolclass);
            }
        }
        // Used that formerly without TransactionScope => trans.Commit();
        return schoolclasses;
    }
}

This looks fine - that's what TransactionScope is there for, to provide transaction control in your code (and this is a common pattern for UoW).
From where does the TransactionScope ts know the concrete connection string?
It doesn't. The connection string belongs to your data access layer and doesn't really mean anything to TransactionScope. What TransactionScope does is create an ambient transaction (by default a lightweight one) that connections opened inside the scope enlist in; if your data access spans several databases, the transaction is automatically escalated to a distributed transaction. It uses MSDTC under the hood.
Should I get the Transaction object from my connection and pass that Transaction object to the constructor of the TransactionScope?
No, no, no. See the above. Just keep doing what you are doing now. There is no harm in nesting TransactionScopes.
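To make the connection-string question concrete, here is a minimal sketch, assuming System.Data.SQLite and a hypothetical database path: the scope never sees a connection string; a connection opened while the scope is active enlists in Transaction.Current automatically (providers typically expose a connection-string keyword to turn automatic enlistment off).
using System.Transactions;
using System.Data.SQLite;

// Minimal sketch: TransactionScope only establishes the ambient transaction.
using (var ts = new TransactionScope())
using (var con = new SQLiteConnection(@"Data Source=school.db"))   // hypothetical path
{
    con.Open();   // enlists in Transaction.Current automatically

    using (var com = new SQLiteCommand("SELECT COUNT(*) FROM SCHOOLCLASS", con))
    {
        var count = (long)com.ExecuteScalar();
    }

    ts.Complete();   // without this, the ambient transaction rolls back on Dispose
}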

Related

Why instantiate new DbContext for each step of test

In the Entity Framework Core documentation on Testing with SQLite, the sample code instantiates a new DbContext for each step of a test. Is there a reason for doing this?
// Copied from the docs:
[Fact]
public void Add_writes_to_database()
{
    // In-memory database only exists while the connection is open
    var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();
    try
    {
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseSqlite(connection)
            .Options;

        // Create the schema in the database
        using (var context = new BloggingContext(options))
        {
            context.Database.EnsureCreated();
        }

        // Run the test against one instance of the context
        using (var context = new BloggingContext(options))
        {
            var service = new BlogService(context);
            service.Add("http://sample.com");
            context.SaveChanges();
        }

        // Use a separate instance of the context to verify correct data was saved to database
        using (var context = new BloggingContext(options))
        {
            Assert.Equal(1, context.Blogs.Count());
            Assert.Equal("http://sample.com", context.Blogs.Single().Url);
        }
    }
    finally
    {
        connection.Close();
    }
}
// Why not do this instead:
[Fact]
public void Add_writes_to_database()
{
    // In-memory database only exists while the connection is open
    var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();
    try
    {
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseSqlite(connection)
            .Options;

        // Create the schema in the database
        using (var context = new BloggingContext(options))
        {
            context.Database.EnsureCreated();
            var service = new BlogService(context);
            service.Add("http://sample.com");
            context.SaveChanges();
            Assert.Equal(1, context.Blogs.Count());
            Assert.Equal("http://sample.com", context.Blogs.Single().Url);
        }
    }
    finally
    {
        connection.Close();
    }
}
Why not instantiate the context once, and use that instance throughout the entire test method, as shown in the second code sample?
Because that's how contexts are meant to be used: created per unit of work (per request in a web app) and then disposed.
One practical reason is to ensure that each step actually goes back to the data source, instead of just observing state the context is already tracking in memory.
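A concrete illustration of that last point, as a hedged sketch reusing the BloggingContext and Blog types from the docs sample: because the change tracker performs identity resolution, a query through the same context instance hands back the tracked in-memory entity, which can hide the fact that a change was never saved.
[Fact]
public void Same_context_can_mask_unsaved_changes()
{
    var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();
    try
    {
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseSqlite(connection)
            .Options;

        using (var context = new BloggingContext(options))
        {
            context.Database.EnsureCreated();
            context.Blogs.Add(new Blog { Url = "http://sample.com" });
            context.SaveChanges();

            var blog = context.Blogs.Single();
            blog.Url = "http://changed.com";   // bug: SaveChanges() is never called again

            // The query hits the database, but identity resolution returns the
            // tracked instance, so this assert passes despite the missing save.
            Assert.Equal("http://changed.com", context.Blogs.Single().Url);
        }

        // A fresh context reads what was actually persisted and exposes the bug.
        using (var context = new BloggingContext(options))
        {
            Assert.Equal("http://sample.com", context.Blogs.Single().Url);
        }
    }
    finally
    {
        connection.Close();
    }
}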

Connecting to a SQLite db with SqlBulkCopy

I'm writing a unit test in C# for a function that is responsible for using System.Data.SqlClient.SqlBulkCopy to copy a DataTable to a database server.
I use SQLite for unit tests, and wanted to connect to my SQLite in-memory database with SqlBulkCopy, and then bulk copy the test data into the SQLite db.
However, I can't seem to get the connection string right.
I originally tried
var bcp = new SqlBulkCopy("FullUri=file::memory:?cache=shared")
Then
var bcp = new SqlBulkCopy("Data Source=:memory:;Cache=Shared")
Which didn't recognize Cache
So then I tried
var bcp = new SqlBulkCopy("Data Source=:memory:")
out of desperation, which simply timed out when attempting to connect to the database.
Is what I'm trying to accomplish here possible? If it is, can someone please help me with the connection string?
The answer to this was that you cannot connect SqlBulkCopy to a SQLite instance.
What I did to solve my problem (unit test a part of the code that used SqlBulkCopy) was to create a wrapper around SqlBulkCopy that is implemented using SqlBulkCopy for production code, and with a mock bulk copy in test code. Effectively decoupling the dependency on SqlBulkCopy itself.
Specifically, I created
public interface IBulkCopy : IDisposable {
    string DestinationTableName { get; set; }
    void CreateColumnMapping(string from, string to);
    Task WriteToServerAsync(DataTable dt);
}
Then, I implemented this as
public class SQLBulkCopy : IBulkCopy {
    private SqlBulkCopy _sbc;

    public SQLBulkCopy(IDBContext ctx) {
        _sbc = new SqlBulkCopy((SqlConnection)ctx.GetConnection());
    }

    public string DestinationTableName {
        get { return _sbc.DestinationTableName; }
        set { _sbc.DestinationTableName = value; }
    }

    public void CreateColumnMapping(string from, string to) {
        _sbc.ColumnMappings.Add(new SqlBulkCopyColumnMapping(from, to));
    }

    public Task WriteToServerAsync(DataTable dt) {
        return _sbc.WriteToServerAsync(dt);
    }

    public void Dispose() {
        // Needed because IBulkCopy : IDisposable
        _sbc.Close();
    }
}
And in my test utilities I mocked out "bulk copy" with just inserts:
class MockBulkCopy : IBulkCopy {
    private IDBContext _context;

    public MockBulkCopy(IDBContext context) {
        _context = context;
    }

    public string DestinationTableName { get; set; }

    public void CreateColumnMapping(string fromName, string toName) {
        // We don't need a column mapping for raw SQL INSERT statements.
        return;
    }

    public virtual Task WriteToServerAsync(DataTable dt) {
        return Task.Run(() => {
            using (var cn = _context.GetConnection()) {
                using (var cmd = cn.CreateCommand()) {
                    cmd.CommandText = $"INSERT INTO {DestinationTableName}({GetCsvColumnList(dt)}) VALUES {GetCsvValueList(dt)}";
                    cmd.ExecuteNonQuery();
                }
            }
        });
    }

    public void Dispose() {
        // Nothing to clean up in the mock.
    }
}
where GetCsvColumnList and GetCsvValueList are helper functions I implemented.
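The production code then depends only on the interface, so tests can inject the mock. A hedged usage sketch (IDBContext and the bulk copy classes come from the snippets above; the importer class and table/column names are hypothetical):
// Hypothetical consumer: it only knows about IBulkCopy.
public class PupilImporter {
    private readonly IBulkCopy _bulkCopy;

    public PupilImporter(IBulkCopy bulkCopy) {
        _bulkCopy = bulkCopy;
    }

    public Task ImportAsync(DataTable pupils) {
        _bulkCopy.DestinationTableName = "Pupil";
        _bulkCopy.CreateColumnMapping("Name", "Name");
        return _bulkCopy.WriteToServerAsync(pupils);
    }
}

// Production: new PupilImporter(new SQLBulkCopy(ctx));
// Test:       new PupilImporter(new MockBulkCopy(testCtx));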
You cannot use SqlBulkCopy with SQLite; SqlBulkCopy is built specifically for SQL Server.
Normally the trick to dramatically improve insert performance with SQLite is to make sure the inserts run inside a single transaction.
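A minimal sketch of that transaction trick, assuming System.Data.SQLite and a hypothetical table and database file (one commit for the whole batch avoids a journal flush per row):
using (var con = new SQLiteConnection("Data Source=test.db"))   // hypothetical db
{
    con.Open();
    using (var trans = con.BeginTransaction())
    using (var cmd = new SQLiteCommand("INSERT INTO Pupil(Name) VALUES (@name)", con, trans))
    {
        var p = cmd.Parameters.Add("@name", DbType.String);
        foreach (DataRow row in dt.Rows)      // dt is the DataTable to copy
        {
            p.Value = row["Name"];
            cmd.ExecuteNonQuery();
        }
        trans.Commit();                       // one commit for the whole batch
    }
}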
Disclaimer: I'm the owner of .NET Bulk Operations
This library is not free but allows you to easily perform and customize all bulk operations:
Bulk Insert
Bulk Delete
Bulk Update
Bulk Merge
Example
// Easy to use
var bulk = new BulkOperation(connection);
bulk.BulkInsert(dt);
bulk.BulkUpdate(dt);
bulk.BulkDelete(dt);
bulk.BulkMerge(dt);
// Easy to customize
var bulk = new BulkOperation<Customer>(connection);
bulk.BatchSize = 1000;
bulk.ColumnInputExpression = c => new { c.Name, c.FirstName };
bulk.ColumnOutputExpression = c => c.CustomerID;
bulk.ColumnPrimaryKeyExpression = c => c.Code;
bulk.BulkMerge(customers);
EDIT: Answering a comment:
I want to load a data table from SQLite then "bulk copy" it into other databases
This is possible, but it requires two connections:
DbConnection sourceConnection = // connection from the source
DbConnection destinationConnection = // connection from the destination
// Fill the DataTable using the sourceConnection
dt = ...;
// BulkInsert using the destinationConnection
var bulk = new BulkOperation(destinationConnection);
bulk.BulkInsert(dt);
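For comparison, the same two-connection pattern can also be sketched with plain ADO.NET when the destination is SQL Server: fill the DataTable from SQLite, then hand it to SqlBulkCopy on the destination connection (a hedged sketch; table names and connection strings are hypothetical):
// Source: SQLite
var dt = new DataTable();
using (var source = new SQLiteConnection("Data Source=test.db"))
using (var adapter = new SQLiteDataAdapter("SELECT * FROM Pupil", source))
{
    adapter.Fill(dt);
}

// Destination: SQL Server
using (var destination = new SqlConnection("Server=.;Database=School;Integrated Security=true"))
{
    destination.Open();
    using (var bcp = new SqlBulkCopy(destination))
    {
        bcp.DestinationTableName = "Pupil";
        bcp.WriteToServer(dt);
    }
}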

Pooling MySQL Connections with Microsoft Enterprise Library

My setup is MySql.Data.MySqlClient v6.9.8.0 and Microsoft.Practices.EnterpriseLibrary.Data v6.0.0.
The program is a long running program that runs continuously listening for tasks and then performs the job with some form of database action (depending on what the request was.) Sometimes the requests will be one after the other, sometimes there will be several hours between them.
I've tried using Pooling=true in the connection string but it causes me a lot of problems (not all the time - these are intermittent problems.)
Here is an example:
[MySqlException (0x80004005): Authentication to host 'localhost' for user 'root' using method 'mysql_native_password' failed with message: Reading from the stream has failed.]
Turning off pooling fixes the problem but at the same time it makes the queries slower because we can't reuse connections. I've searched online and a lot of people have this same issue and the only fix/workaround I've found is Pooling=false which I'd rather avoid if possible.
Here is an example of my query code:
Database db = this.GetDatabase(databaseName);
List<dynamic> results = new List<dynamic>();

// Run the sql query
using (DbCommand dbCommand = db.GetSqlStringCommand(query))
{
    foreach (var parameter in inParameters)
    {
        db.AddInParameter(dbCommand, parameter.Key, parameter.Value.Item1, parameter.Value.Item2);
    }
    foreach (var parameter in outParameters)
    {
        db.AddOutParameter(dbCommand, parameter.Key, parameter.Value.Item1, parameter.Value.Item2);
    }

    using (IDataReader dataReader = db.ExecuteReader(dbCommand))
    {
        IDictionary<string, object> instance;
        do
        {
            // Read each row
            while (dataReader.Read())
            {
                instance = new ExpandoObject() as IDictionary<string, object>;

                // Populate the object on the fly with the data
                for (int i = 0; i < dataReader.FieldCount; i++)
                {
                    instance.Add(dataReader.GetName(i), dataReader[i]);
                }

                // Add the object to the results list
                results.Add(instance);
            }
        } while (dataReader.NextResult());
    }

    return results;
}
Any ideas?
Can you try this? I know, I know: using "using" should mean I don't have to call dataReader.Close() myself, but I still do it. I also slightly altered the dataReader.Read block.
This article talks about it:
http://www.joseguay.com/uncategorized/ensure-proper-closure-disposal-of-a-datareader
I know, you shouldn't have to, but even when using the Enterprise Library I do an extra .Close() step to try and make sure.
Database db = this.GetDatabase(databaseName);
List<dynamic> results = new List<dynamic>();

// Run the sql query
using (DbCommand dbCommand = db.GetSqlStringCommand(query))
{
    foreach (var parameter in inParameters)
    {
        db.AddInParameter(dbCommand, parameter.Key, parameter.Value.Item1, parameter.Value.Item2);
    }
    foreach (var parameter in outParameters)
    {
        db.AddOutParameter(dbCommand, parameter.Key, parameter.Value.Item1, parameter.Value.Item2);
    }

    using (IDataReader dataReader = db.ExecuteReader(dbCommand))
    {
        IDictionary<string, object> instance;
        while (dataReader.Read())
        {
            instance = new ExpandoObject() as IDictionary<string, object>;

            // Populate the object on the fly with the data
            for (int i = 0; i < dataReader.FieldCount; i++)
            {
                instance.Add(dataReader.GetName(i), dataReader[i]);
            }

            // Add the object to the results list
            results.Add(instance);
        }

        if (dataReader != null)
        {
            try
            {
                dataReader.Close();
            }
            catch { }
        }
    }

    return results;
}

Nested Transactions with TransactionScope

If you have something like this:
IBinaryAssetStructureRepository rep = new BinaryAssetStructureRepository();
var userDto = new UserDto { id = 3345 };
var dto = new BinaryAssetBranchNodeDto("name", userDto, userDto);

using (var scope1 = new TransactionScope())
{
    using (var scope2 = new TransactionScope())
    {
        // Persist to database
        rep.CreateRoot(dto, 1, false);
        scope2.Complete();
    }
    scope1.Dispose();
}

dto = rep.GetByKey(dto.id, -1, false);
Will the inner TransactionScope scope2 also be rolled back?
Yes.
The inner transaction is enlisted in the same ambient transaction as the outer scope, so the whole thing will roll back. That is because you didn't create the inner scope as a new transaction using TransactionScopeOption.RequiresNew.
See here for an explanation of this subject: http://web.archive.org/web/20091012162649/http://www.pluralsight.com/community/blogs/jimjohn/archive/2005/06/18/11451.aspx.
Also, note that the scope1.Dispose() call is redundant, since scope1 is automatically disposed at the end of the using block that declares it.
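If you actually want the inner work to commit independently of the outer scope, create the inner scope with TransactionScopeOption.RequiresNew. A minimal sketch reusing the repository and dto from the question:
using (var scope1 = new TransactionScope())
{
    // RequiresNew gives scope2 its own transaction, so completing it commits
    // immediately, even if scope1 is later abandoned without Complete().
    using (var scope2 = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        rep.CreateRoot(dto, 1, false);
        scope2.Complete();
    }

    // scope1 is never completed here, so only the outer work rolls back.
}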

Storing reader information in C#

I know that what I'm asking might not make a lot of sense to C# experts, but I'll explain what I want to do and then you can suggest a better way of doing it if you want, ok?
I have a C# class called DatabaseManager that deals with different MySQL queries (using the ADO.NET MySQL connector, not LINQ or any kind of ActiveRecord-ish library).
I am doing something like
categories = db_manager.getCategories();
The list of categories is quite small (10 items) so I'd like to know what's the best way of accessing the retrieved information without a lot of additional code.
Right now I'm using a Struct to store the information but I'm sure there's a better way of doing this.
Here's my code:
public struct Category
{
    public string name;
}

internal ArrayList getCategories()
{
    ArrayList categories = new ArrayList();
    MySqlDataReader reader;
    Category category_info;
    try
    {
        conn.Open();
        reader = category_query.ExecuteReader();
        while (reader.Read())
        {
            category_info = new Category();
            category_info.name = reader["name"].ToString();
            categories.Add(category_info);
        }
        reader.Close();
        conn.Close();
    }
    catch (MySqlException e)
    {
        Console.WriteLine("ERROR " + e.ToString());
    }
    return categories;
}
Example:
public IEnumerable<Category> GetCategories()
{
    using (var connection = new MySqlConnection("CONNECTION STRING"))
    using (var command = new MySqlCommand("SELECT name FROM categories", connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return new Category { name = reader.GetString(0) };
            }
        }
    }
}
Remarks:
Let ADO.NET connection pooling do the right work for you (avoid storing connections in static fields, etc...)
Always make sure to properly dispose unmanaged resources (using "using" in C#)
Always return the lowest interface in the hierarchy from your public methods (in this case IEnumerable<Category>).
Leave the callers handle exceptions and logging. These are crosscutting concerns and should not be mixed with your DB access code.
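A short usage sketch for the iterator above (hedged; db_manager is the instance from the question): because GetCategories uses yield return, the connection is only opened when enumeration starts, and the using blocks dispose it when enumeration finishes.
// Deferred execution: nothing is opened until the foreach starts iterating.
foreach (var category in db_manager.GetCategories())
{
    Console.WriteLine(category.name);
}

// If the caller needs the data after the connection is gone, materialize it (requires System.Linq):
List<Category> categories = db_manager.GetCategories().ToList();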
The first thing I would do is replace your use of ArrayList with List<Category>, which provides compile-time type checking for your use of the category list (so you will not have to cast it when using it in your code).
There's nothing wrong with returning them in a list like this. However, a few things stand out:
Your catch block logs the error but then returns either an empty or a partially populated list. That probably isn't a good idea.
If an exception is thrown in the try block you won't close the connection or dispose of the reader. Consider the using() statement.
You should use the generic types (List<>) instead of ArrayList.
From your code I guess you are using .NET 1.1, because you are not using the power of generics.
1) Using a struct that only contains a string is overkill. Just create a list of strings (with generics, a List<string>).
2) When an exception occurs in your try block, you leave your connection and reader open... Use this instead:
try
{
    conn.Open();
    // more code
}
catch (MySqlException e)
{
    // code
}
finally
{
    conn.Close();
    if (reader != null)
        reader.Close();
}
