Variables in SQLite - C#

I was wondering if there is a known technique for saving and using variables in an SQLite database.
I am looking for something like the $something variables one can find under Oracle.

I didn't find any built-in solution for this, so I solved it with a global table of Key/Value pairs.
Here is the C# class I made to wrap this nicely:
public class SQLiteVariable
{
public SQLiteVariable() : this (null, string.Empty)
{}
public SQLiteVariable(SQLiteConnection connection) : this(connection, string.Empty)
{}
public SQLiteVariable(string name) : this(null, name)
{}
public SQLiteVariable(SQLiteConnection connection, string name)
{
Connection = connection;
Name = name;
}
/// <summary>
/// The table name used for storing the database variables
/// </summary>
private const string VariablesTable = "__GlobalDatabaseVariablesTable";
/// <summary>
/// Gets or sets the SQLite database connection.
/// </summary>
/// <value>The connection.</value>
public SQLiteConnection Connection { get; set; }
/// <summary>
/// Gets or sets the SQLite variable name.
/// </summary>
/// <value>The name.</value>
public string Name { get; set; }
/// <summary>
/// Gets or sets the SQLite variable value.
/// </summary>
/// <value>The value.</value>
public string Value
{
get
{
CheckEnvironment();
var cmd = new SQLiteCommand(Connection)
{
CommandText = "SELECT Value FROM " + VariablesTable + " WHERE Key=@VarName"
};
cmd.Parameters.Add(new SQLiteParameter("@VarName", Name));
var returnValue = cmd.ExecuteScalar();
return returnValue as string;
}
set
{
CheckEnvironment();
// INSERT OR REPLACE inserts the row, or replaces it if the key already exists
var cmd = new SQLiteCommand(Connection)
{
CommandText = "INSERT OR REPLACE INTO " + VariablesTable + " (Key, Value) VALUES(@VarName, @Value)"
};
cmd.Parameters.Add(new SQLiteParameter("@Value", value));
cmd.Parameters.Add(new SQLiteParameter("@VarName", Name));
cmd.ExecuteNonQuery();
}
}
private void CheckEnvironment()
{
if (Connection == null) throw new ArgumentException("Connection was not initialized");
var cmd = new SQLiteCommand(Connection)
{
CommandText = "CREATE TABLE IF NOT EXISTS "+VariablesTable+" (Key VARCHAR(30) PRIMARY KEY, Value VARCHAR(256));"
};
cmd.ExecuteNonQuery();
}
}
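For illustration, the same key-value pattern can be exercised against an in-memory database. Here is a minimal sketch using Python's built-in sqlite3 module (the table and column names follow the class above; SQLite also accepts @-prefixed parameters, though the :name style is used here):

```python
import sqlite3

# In-memory database stands in for the application's SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS __GlobalDatabaseVariablesTable "
    "(Key VARCHAR(30) PRIMARY KEY, Value VARCHAR(256))"
)

def set_variable(name, value):
    # INSERT OR REPLACE inserts the row, or replaces it when the
    # primary key (Key) already exists -- no separate UPDATE needed.
    conn.execute(
        "INSERT OR REPLACE INTO __GlobalDatabaseVariablesTable (Key, Value) "
        "VALUES (:VarName, :Value)",
        {"VarName": name, "Value": value},
    )

def get_variable(name):
    row = conn.execute(
        "SELECT Value FROM __GlobalDatabaseVariablesTable WHERE Key = :VarName",
        {"VarName": name},
    ).fetchone()
    return row[0] if row else None

set_variable("schema_version", "1")
set_variable("schema_version", "2")  # replaces the earlier row
print(get_variable("schema_version"))  # 2
```

The upsert on the primary key is what lets the C# setter above use a single statement instead of probing for the row first.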

Related

How can I create and delete a database at the first and at the end of a Fact in XUnit testing?

In integration tests for services that connect to the database, I need to create a single database for each Fact and delete it once the Fact finishes. xUnit runs tests in parallel, so they can affect each other: for example, a Fact that edits a user can fail because an earlier Fact deleted that user. So I need a dedicated database per Fact, disposed when the Fact is done.
How can I do this?
Use a Collection Fixture. This addresses your needs by:
only letting one test that needs the resource use it at a time
allowing you to do a single spin up/down per overall test run
There are different solutions to this issue. But they all boil down to removing the shared resource.
Remove parallelization for xUnit: you can do that by adding an xunit.runner.json file and setting parallelizeTestCollections in it, as described in the documentation. You can use Respawn along with that to restore the database to a checkpoint after each test. If you have a lot of tests this may be slow, but it can still be faster than firing up a database each time. (This is not advisable; see @RubenBartelink's answer below.)
If there is no relation between the two users of each test, then you can use a different identifier for each user and make the tests independent of each other.
If the test is not about integration with the db, then you can use an in-memory database.
And last, you can use a docker image of the db, perhaps varying one of the connection parameters in order to make each test target an individual database or schema etc.
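For reference, the runner configuration the first option refers to is a small JSON file (xunit.runner.json, copied to the test output directory); a minimal sketch, with property names as documented by xUnit:

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "parallelizeTestCollections": false
}
```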
I found myself in a similar situation and made a helper class for unit tests requiring database access; it creates a new schema in the database for the test and removes it when the object is disposed. You can add any tables or views you want to the schema.
I use it in the test fixture so that it only creates the schema once for each collection, but you could do it for each Fact, although you'll probably start to get performance issues if there are a large number of them.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.Text;
namespace UnitTestHelpers
{
/// <summary>
/// Class supports creation of temporary schemas in an existing database to be
/// able to use them in unit tests.<br/>
/// The schemas are deleted on disposal of the object.
/// </summary>
public class DataBaseUnitTestHelper : IDisposable
{
private bool schemaCreated = false;
private bool disposedValue;
/// <summary>
/// Public constructor requires naming the data source and database.
/// </summary>
/// <param name="dataSource_">The data source, i.e. server name of the database server.</param>
/// <param name="catalog_">The database name where the temporary schemas will be created.</param>
public DataBaseUnitTestHelper(string dataSource_, string catalog_)
{
if (string.IsNullOrEmpty(dataSource_))
{
throw new ArgumentException($"{nameof(dataSource_)} is null or empty.", nameof(dataSource_));
}
this.dataSource = dataSource_;
if (string.IsNullOrEmpty(catalog_))
{
throw new ArgumentException($"{nameof(catalog_)} is null or empty.", nameof(catalog_));
}
this.catalog = catalog_;
}
public string dataSource { get; private set; }
public string catalog { get; private set; }
public string schema { get; private set; }
/// <summary>
/// Builds a connect string that can be used to connect to the database,
/// for example in <see cref="SqlConnection.SqlConnection(string)"/>.
/// </summary>
public string connectString
{
get
{
if (disposedValue) throw new ObjectDisposedException(this.ToString());
if (connectString_ == null)
{
var csb = new SqlConnectionStringBuilder();
csb.DataSource = dataSource;
csb.IntegratedSecurity = true;
csb.InitialCatalog = catalog;
connectString_ = csb.ConnectionString;
}
return connectString_;
}
private set
{
if (disposedValue) throw new ObjectDisposedException(this.ToString());
connectString_ = value;
}
}
private string connectString_ = null;
/// <summary>
/// Returns a (normally unopened) connection to the database.
/// </summary>
/// <returns></returns>
public SqlConnection getConnection()
{
if (disposedValue) throw new ObjectDisposedException(this.ToString());
return new SqlConnection(connectString);
}
/// <summary>
/// Creates a new uniquely named schema in the given database and returns its name.
/// </summary>
/// <returns>The name of the schema that was created.</returns>
public string createNewTestSchema()
{
if (disposedValue) throw new ObjectDisposedException(this.ToString());
if (schemaCreated)
{
throw new InvalidOperationException("Object can only be used to create one test schema.");
}
using (SqlConnection connection = getConnection())
{
connection.Open();
string localSchema = "Test" + Guid.NewGuid().ToString("N").Substring(0, 16);
string sql = $"CREATE SCHEMA {localSchema};";
using (SqlCommand command = new SqlCommand(sql, connection))
{
int res = command.ExecuteNonQuery();
schema = localSchema;
schemaCreated = true;
}
return schema;
}
}
/// <summary>
/// Deletes the temporary database schema created by this object, first clearing all its elements
/// </summary>
private void deleteSchema()
{
if (disposedValue) throw new ObjectDisposedException(this.ToString());
if (!schemaCreated) return;
// Determine all the objects in the schema
List<Tuple<string, string>> list = new List<Tuple<string, string>>();
using (SqlConnection connection = getConnection())
{
connection.Open();
using (SqlCommand selectTablesCmd = connection.CreateCommand())
{
selectTablesCmd.CommandText = "SELECT * FROM [INFORMATION_SCHEMA].[TABLES] WHERE [TABLE_CATALOG] = @tableCatalog AND [TABLE_SCHEMA] = @tableSchema";
selectTablesCmd.Parameters.AddWithValue("@tableCatalog", catalog);
selectTablesCmd.Parameters.AddWithValue("@tableSchema", schema);
using (SqlDataReader reader = selectTablesCmd.ExecuteReader())
{
while (reader.Read())
{
string tableName = reader["TABLE_NAME"].ToString();
string tableType = reader["TABLE_TYPE"].ToString();
list.Add(new Tuple<string, string>(tableName, tableType));
}
}
}
// Delete all the objects in the Schema
if (list.Count > 0)
{
using (SqlCommand deleteTableCmd = connection.CreateCommand())
using (SqlCommand deleteViewCmd = connection.CreateCommand())
{
foreach (Tuple<string, string> item in list)
{
switch (item.Item2)
{
case "BASE TABLE":
deleteTableCmd.CommandText = $"DROP TABLE [{catalog}].[{schema}].[{item.Item1}]";
deleteTableCmd.ExecuteNonQuery();
break;
case "VIEW":
deleteViewCmd.CommandText = $"DROP VIEW [{catalog}].[{schema}].[{item.Item1}]";
deleteViewCmd.ExecuteNonQuery();
break;
default:
throw new InvalidDataException($"Found table type '{item.Item2}' in [INFORMATION_SCHEMA].[TABLES] for" +
$" [{catalog}].[{schema}].[{item.Item1}], expected 'BASE TABLE' or 'VIEW'.");
}
}
}
}
// Delete the schema itself
using (SqlCommand dropSchemaCmd = connection.CreateCommand())
{
dropSchemaCmd.CommandText = $"DROP SCHEMA {schema}";
dropSchemaCmd.ExecuteNonQuery();
}
schema = null;
schemaCreated = false;
return;
}
}
protected virtual void Dispose(bool disposing)
{
if (!disposedValue)
{
if (disposing)
{
// TODO: dispose managed state (managed objects)
}
// Free unmanaged resources (unmanaged objects) and override finalizer
deleteSchema();
disposedValue = true;
}
}
~DataBaseUnitTestHelper()
{
// Do not change this code. Put cleanup code in 'Dispose(bool disposing)' method
Dispose(disposing: false);
}
public void Dispose()
{
// Do not change this code. Put cleanup code in 'Dispose(bool disposing)' method
Dispose(disposing: true);
GC.SuppressFinalize(this);
}
}
}
You can use it like this:
[Fact]
public void testSchemaIsReallyCreated()
{
string schema;
string connectString;
using (DataBaseUnitTestHelper dbhelper = new DataBaseUnitTestHelper(defaultDataSource, defaultInitialCatalog))
{
connectString = dbhelper.connectString;
schema = dbhelper.createNewTestSchema();
bool schemaExists;
using (SqlConnection connection = new SqlConnection(connectString))
{
connection.Open();
SqlCommand cmd = connection.CreateCommand();
cmd.CommandText = "SELECT COUNT(*) FROM sys.schemas WHERE name = @schema";
cmd.Parameters.AddWithValue("@schema", schema);
schemaExists = (int)cmd.ExecuteScalar() > 0;
}
Assert.True(schemaExists, $"Schema {schema} doesn't exist although method {nameof(dbhelper.createNewTestSchema)} was executed and this schema name was returned.");
}
}

Is this an efficient way of bulk inserting using Dapper?

Is this an efficient way of bulk inserting using Dapper?
Also, is this more efficient than creating a stored procedure and passing models to it?
category = new Category
{
Name = "category",
Description = "description",
Created = null,
LastModified = null,
CategoryPictures = new CategoryPicture[]
{
new CategoryPicture
{
CategoryId = 3,
PictureId = 2,
Picture = new Picture
{
Url = "newUrl"
}
},
new CategoryPicture
{
CategoryId = 3,
PictureId = 2,
Picture = new Picture
{
Url = "url"
}
}
}
};
string sql = @"INSERT INTO Categories(Name, Description, Created, LastModified)
VALUES(@Name, @Description, @Created, @LastModified)";
await conn.ExecuteAsync(sql, new
{
category.Name,
category.Description,
category.Created,
category.LastModified
});
string catPicInsert = @"INSERT INTO CategoryPictures(fk_CategoryId, fk_PictureId)
VALUES(@CategoryId, @PictureId)";
await conn.ExecuteAsync(catPicInsert, category.CategoryPictures);
string picInsert = @"INSERT INTO Pictures(Url)
VALUES(@Url)";
await conn.ExecuteAsync(picInsert, category.CategoryPictures.Select(x => x.Picture).ToList());
It won't be hugely slow, but it won't be anywhere near as fast as a bulk copy. Options, assuming SQL Server:
it is possible to use TVPs with Dapper, but the only convenient way to do this is by packing your input data into a DataTable; there are examples of TVP usage in the Dapper repo, or I can knock one out, but they're inconvenient because you need to declare the parameter type at the server
you can use SqlBulkCopy to throw data into the database independently of Dapper; FastMember has ObjectReader, which can construct an IDataReader over a typed sequence, suitable for use with SqlBulkCopy
If you're not using SQL Server, you'll need to look at vendor-specific options for your RDBMS.
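As a rough, database-agnostic illustration of why batching wins (Python's built-in sqlite3 standing in for the server here; the table name is made up), sending rows through one prepared statement as a batch is the same idea, at a much smaller scale, as a bulk copy:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Pictures (Url TEXT)")

rows = [("url%d" % i,) for i in range(1000)]

# Naive approach: one INSERT statement execution per row.
for row in rows[:500]:
    conn.execute("INSERT INTO Pictures (Url) VALUES (?)", row)

# Batched approach: bind and step a single prepared statement for
# the whole sequence -- far fewer round trips on a real server.
conn.executemany("INSERT INTO Pictures (Url) VALUES (?)", rows[500:])

count = conn.execute("SELECT COUNT(*) FROM Pictures").fetchone()[0]
print(count)  # 1000
```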
I've changed it to use a separate stored procedure for each table.
public async Task<bool> CreateCategoryAsync(Category category)
{
category = new Category
{
Name = "category",
Description = "description",
Created = null,
LastModified = null,
CategoryPictures = new CategoryPicture[]
{
new CategoryPicture
{
CategoryId = 1,
PictureId = 2,
Picture = new Picture
{
Url = "newUrl"
}
},
new CategoryPicture
{
CategoryId = 2,
PictureId = 2,
Picture = new Picture
{
Url = "url"
}
}
}
};
string sql = @"EXEC Categories_Insert @Categories;
EXEC CategoryPictures_Insert @CategoryPictures;
EXEC Pictures_Insert @Pictures";
var spParams = new DynamicParameters(new
{
Categories = category.ToDataTable().AsTableValuedParameter("CategoriesType"),
CategoryPictures = category.CategoryPictures.ListToDataTable()
.AsTableValuedParameter("CategoryPictureType"),
Pictures = category.CategoryPictures.Select(x => x.Picture)
.ListToDataTable()
.AsTableValuedParameter("PicturesType")
});
using (var conn = new SqlConnection(connectionString))
{
using (var res = await conn.QueryMultipleAsync(sql, spParams))
{
return true;
}
}
}
This is the extension class with the generic methods I created for mapping an object, or a list of objects, into a DataTable:
public static class DataTableExtensions
{
/// <summary>
/// Convert an IEnumerable into a Datatable
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="listToDataTable"></param>
/// <returns></returns>
public static DataTable ListToDataTable<T>(this IEnumerable<T> listToDataTable)
{
DataTable dataTable = new DataTable();
AddToDataTableColumns<T>(dataTable);
foreach (var item in listToDataTable)
{
AddDataTableRows(dataTable, item);
}
return dataTable;
}
/// <summary>
/// Convert an instance of a class to a DataTable
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="obj"></param>
/// <returns></returns>
public static DataTable ToDataTable<T>(this T obj) where T : class
{
DataTable dataTable = new DataTable();
AddToDataTableColumns<T>(dataTable);
AddDataTableRows(dataTable, obj);
return dataTable;
}
/// <summary>
/// Add columns for the properties of Type T to a given DataTable
/// </summary>
/// <typeparam name="T">Extract values from</typeparam>
/// <param name="dataTable">The datatable to add column values into</param>
/// <returns></returns>
private static DataTable AddToDataTableColumns<T>(DataTable dataTable)
{
try
{
PropertyInfo[] FilteredProps = GetFilteredProperties(typeof(T));
for (int i = 0; i < FilteredProps.Length; i++)
{
PropertyInfo prop = FilteredProps[i];
Type type = prop.PropertyType;
dataTable.Columns.Add(prop.Name, Nullable.GetUnderlyingType(type) ?? type);
}
}
catch (Exception ex)
{
throw new InfrastructuredException(ex.StackTrace);
}
return dataTable;
}
/// <summary>
/// Add the property values of an object of Type T to a given DataTable as a row
/// </summary>
/// <typeparam name="T">Extract values from</typeparam>
/// <param name="dataTable">The datatable to add Row values into</param>
/// <returns></returns>
private static DataTable AddDataTableRows<T>(DataTable dataTable, T obj)
{
try
{
PropertyInfo[] FilteredProps = GetFilteredProperties(typeof(T));
object[] values = new object[FilteredProps.Length];
for (int i = 0; i < values.Length; i++)
{
values[i] = FilteredProps[i].GetValue(obj);
}
dataTable.Rows.Add(values);
}
catch (Exception ex)
{
throw new InfrastructuredException(ex.StackTrace);
}
return dataTable;
}
/// <summary>
/// Return an array of filtered properties of a Type
/// </summary>
/// <param name="type"></param>
/// <returns>Properties that are filtered by Type</returns>
private static PropertyInfo[] GetFilteredProperties(Type type)
{
return type.GetProperties()
.Where(p => p.Name != "Id" && !p.PropertyType.IsSubclassOf(typeof(BaseEntity)) && !p.PropertyType.IsInterface)
.ToArray();
}
}
I would suggest that you use SQL Server's bulk insert functionality.
It can be combined with Dapper as well.
I was looking for an example like this for a long time, and finally found the pieces of the solution and combined them in one place.
You can see the example in my repo:
https://github.com/ayrat162/BulkInsert
It uses Dapper.Contrib, FastMember, and SqlBulkCopy for uploading large chunks of data to MS SQL Server.

Generic approach to dealing with multiple result sets from EF stored procedure

EF 6, .NET 4.51
I am trying to build a generic helper class that will help me "translate" each of the result sets into a type-safe class as described here: Handle multiple result from a stored procedure with SqlQuery
For my solution I want to pass the following to my helper class (MultiResultsetsHelper):
Generic Return Type
ObjectContext
DataReader
List of class types in order of the result sets coming back
and then have the helper class do the heavy lifting of populating item 1 (the generic return type). Below is the code so far:
Classes For Result
public class Set1ReturnDto
{
public int CruiseCount { get; set; }
public string DisplayText { get; set; }
public int DisplayValue { get; set; }
}
public class Set2ReturnDto
{
public string DepartingFrom { get; set; }
public string Port_Code { get; set; }
}
public class DummyReturnDto
{
public DummyReturnDto()
{
Set1 = new List<Set1ReturnDto>();
Set2 = new List<Set2ReturnDto>();
}
public List<Set1ReturnDto> Set1 { get; set; }
public List<Set2ReturnDto> Set2 { get; set; }
}
Low Level Database Call
public static DummyReturnDto DonoUspGetSideBarList(DbContext aDbContext, out int aProcResult)
{
SqlParameter procResultParam = new SqlParameter { ParameterName = "@procResult", SqlDbType = SqlDbType.Int, Direction = ParameterDirection.Output };
DbCommand dbCommand = aDbContext.Database.Connection.CreateCommand();
dbCommand.Parameters.Add(procResultParam);
dbCommand.CommandText = "EXEC @procResult = [dbo].[usp_GetSideBarList]";
dbCommand.Transaction = aDbContext.Database.CurrentTransaction.UnderlyingTransaction;
DbDataReader reader = dbCommand.ExecuteReader();
aProcResult = -1;
// Drop down to the wrapped `ObjectContext` to get access to the `Translate` method
ObjectContext objectContext = ((IObjectContextAdapter)aDbContext).ObjectContext;
List<Type> containedDtos = new List<Type>
{
typeof (List<Set1ReturnDto>),
typeof (List<Set2ReturnDto>)
};
return MultiResultsetsHelper.Process<DummyReturnDto>(reader, objectContext, containedDtos);
}
The stored procedure returns two result sets, one for each of the DTO types above.
Helper Class
public static class MultiResultsetsHelper
{
/// <summary>
/// Given a data reader that contains multiple result sets, use the supplied object context to serialise the
/// rows of data in the result set into our property.
/// </summary>
/// <typeparam name="T">Type of the containing object that contains all the various result sets.</typeparam>
/// <param name="aDbReader">Database reader that contains all the result sets returned from the database.</param>
/// <param name="aObjectContext">Data context associated with the data reader.</param>
/// <param name="aContainedDataSetReturnedTypes">
/// List of types, in the order in which the result sets are contained within the
/// data reader. We will serialize sequentially each result set the data reader contains.
/// </param>
/// <returns>Returns an object representing all the result sets returned by the data reader.</returns>
public static T Process<T>(DbDataReader aDbReader, ObjectContext aObjectContext, List<Type> aContainedDataSetReturnedTypes) where T : new()
{
//What we will be returning
T result = new T();
for (int datasetNdx = 0; datasetNdx < aContainedDataSetReturnedTypes.Count; datasetNdx++)
{
//Advance the reader if we are not looking at the first dataset
if (datasetNdx != 0)
aDbReader.NextResult();
//Get the property we are going to be updating based on the type of the class we will be filling
PropertyInfo propertyInfo = typeof (T).GetProperties().Single(p => p.PropertyType == aContainedDataSetReturnedTypes[datasetNdx]);
//Now get the object context to deserialize what is in the resultset into our type
var valueForProperty = aObjectContext.Translate <aContainedDataSetReturnedTypes[datasetNdx]> (aDbReader);
//Finally we update the property with the type safe information
propertyInfo.SetValue(result, valueForProperty, null);
}
return result;
}
}
However currently I cannot get this to compile.
Error 2 Operator '<' cannot be applied to operands of type 'method
group' and 'System.Type'
Can someone help out? Ultimately it has to do with how we use reflection and the passed in aContainedDataSetReturnedTypes. I am happy to change things around as long as it is still easy to call MultiResultsetsHelper.Process<>()
This code:
aObjectContext.Translate<aContainedDataSetReturnedTypes[datasetNdx]>
will not work, because generic type parameters are always resolved at compile-time. You can't pass a Type instance at runtime.
You can still call the generic method, but you'll have to use reflection.
With the help of all of the above I came up with the following (which can still be improved):
public static class MultiResultsetsHelper
{
/// <summary>
/// Given a data reader that contains multiple result sets, use the supplied object context to serialise the
/// rows of data in the result set into our property.
/// </summary>
/// <typeparam name="T">Type of the containing object that contains all the various result sets.</typeparam>
/// <param name="aDbReader">Database reader that contains all the result sets returned from the database.</param>
/// <param name="aDbContext">Data context associated with the data reader.</param>
/// <param name="aDataSetTypes">Type for each type to use when we call Translate() on the current result in the data reader.</param>
/// <param name="aContainedDataSetReturnedTypes">
/// List of types, in the order in which the result sets are contained within the
/// data reader. We will serialize sequentially each result set the data reader contains.
/// </param>
/// <returns>Returns an object representing all the result sets returned by the data reader.</returns>
public static T Process<T>(DbDataReader aDbReader, DbContext aDbContext, List<Type> aDataSetTypes, List<Type> aContainedDataSetReturnedTypes) where T : new()
{
//What we will be returning
T result = new T();
// Drop down to the wrapped `ObjectContext` to get access to the `Translate` method
ObjectContext objectContext = ((IObjectContextAdapter) aDbContext).ObjectContext;
//Iterate the passed in dataset types as they are in the same order as what the reader contains
for (int datasetNdx = 0; datasetNdx < aContainedDataSetReturnedTypes.Count; datasetNdx++)
{
//Advance the reader if we are not looking at the first dataset
if (datasetNdx != 0)
aDbReader.NextResult();
//Get the property we are going to be updating based on the type of the class we will be filling
PropertyInfo propertyInfo = typeof (T).GetProperties().Single(p => p.PropertyType == aContainedDataSetReturnedTypes[datasetNdx]);
//Now get the object context to deserialize what is in the resultset into our type
MethodInfo method = GetTranslateOverload(typeof (ObjectContext));
MethodInfo generic = method.MakeGenericMethod(aDataSetTypes[datasetNdx]);
//Invoke the generic method which we have constructed for Translate
object valueForProperty = generic.Invoke(objectContext, new object[] {aDbReader});
//Finally we update the property with the type safe information
propertyInfo.SetValue(result, valueForProperty);
}
return result;
}
/// <summary>
/// Internal helper method to get the necessary translate overload we need:
/// ObjectContext.Translate<T>(DbReader)
/// </summary>
/// <param name="aType">ObjectContext.GetType()</param>
/// <returns>Returns the method we require, null on error.</returns>
private static MethodInfo GetTranslateOverload(Type aType)
{
MethodInfo myMethod = aType
.GetMethods()
.Where(m => m.Name == "Translate")
.Select(m => new
{
Method = m,
Params = m.GetParameters(),
Args = m.GetGenericArguments()
})
.Where(x => x.Params.Length == 1
&& x.Args.Length == 1
&& x.Params[0].ParameterType == typeof (DbDataReader)
// && x.Params[0].ParameterType == x.Args[0]
)
.Select(x => x.Method)
.First();
return myMethod;
}
}
So assuming you have:
public class UspGetSideBarListReturnDto
{
public List<Set1ReturnDto> Dummy1 { get; set; }
public List<Set2ReturnDto> Dummy2 { get; set; }
}
public class Set1ReturnDto
{
public Int32 CruiseCount { get; set; }
public string DisplayText { get; set; }
public Int64 DisplayValue { get; set; }
}
public class Set2ReturnDto
{
public string DepartingFrom { get; set; }
public string Port_Code { get; set; }
}
You can call it as:
DbDataReader reader = dbCommand.ExecuteReader();
return MultiResultsetsHelper.Process<UspGetSideBarListReturnDto>(reader, myDbContext, new List<Type>{typeof(Set1ReturnDto), typeof(Set2ReturnDto)}, new List<Type>{typeof(List<Set1ReturnDto>), typeof(List<Set2ReturnDto>)});
The order of the aDataSetTypes needs to correspond to the list of result sets in the aDbReader.
Improvements would be:
Only pass the list of dataset types. (And have the List properties automatically determined)
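The shape of the helper (walk the result sets in a declared order, pair each with a row type, materialize the rows, and hang each list off the container) is not specific to EF. A small Python sketch of the same idea, using one query per type because sqlite3 returns one result set per statement (all names here are illustrative):

```python
import sqlite3
from dataclasses import dataclass, fields

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE Set1 (CruiseCount INT, DisplayText TEXT);"
    "INSERT INTO Set1 VALUES (3, 'Caribbean');"
    "CREATE TABLE Set2 (DepartingFrom TEXT, Port_Code TEXT);"
    "INSERT INTO Set2 VALUES ('Miami', 'MIA');"
)

@dataclass
class Set1Row:
    CruiseCount: int
    DisplayText: str

@dataclass
class Set2Row:
    DepartingFrom: str
    Port_Code: str

def process(queries_and_types):
    # Mirror of Process<T>: for each (query, row type) pair, in order,
    # translate the raw rows into typed objects and collect them per type.
    result = {}
    for sql, row_type in queries_and_types:
        cols = [f.name for f in fields(row_type)]
        rows = conn.execute(sql).fetchall()
        result[row_type.__name__] = [row_type(**dict(zip(cols, r))) for r in rows]
    return result

sets = process([
    ("SELECT CruiseCount, DisplayText FROM Set1", Set1Row),
    ("SELECT DepartingFrom, Port_Code FROM Set2", Set2Row),
])
print(sets["Set1Row"][0].DisplayText)  # Caribbean
```

The dataclass field list plays the role the reflected property list plays in the C# helper: it tells the mapper which columns feed which typed members.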

How to force eager load at runtime?

I'm using Fluent NHibernate. I have an object at runtime with lazy collections/properties that may or may not have been populated. I plan on serializing that object and need all the collections/properties to be populated before I do so. How can I "eager-load" my object at runtime?
If you already have the relationships set in your mappings, you do not need to specify how to join in your query; you can simply use Fetch (even a deep fetch) to specify the path to be loaded eagerly:
session.QueryOver<MasterEnt>()
.Where(x => x.Id == 2)
.Fetch(x => x.DetailEntList)
.Eager().List();
You can use ICriteria and manipulate the load through NHibernate.ICriteria.SetFetchMode(string, NHibernate.FetchMode).
Example:
DetailEnt.cs:
using System;
using System.Collections.Generic;
using System.Text;
namespace FetchTest
{
public class DetailEnt
{
private Int32? id;
/// <summary>
/// Entity key
/// </summary>
public virtual Int32? Id
{
get { return id; }
set { id = value; }
}
private String description;
/// <summary>
/// Description
/// </summary>
public virtual String Description
{
get { return description; }
set { description = value; }
}
private MasterEnt rIMaster;
/// <summary>
/// Gets or sets the RI master.
/// </summary>
/// <value>
/// The RI master.
/// </value>
public virtual MasterEnt RIMaster
{
get { return rIMaster; }
set { rIMaster = value; }
}
}
}
MasterEnt.cs:
using System;
using System.Collections.Generic;
using System.Text;
namespace FetchTest
{
public class MasterEnt
{
private Int32? id;
/// <summary>
/// Entity key
/// </summary>
public virtual Int32? Id
{
get { return id; }
set { id = value; }
}
private String description;
/// <summary>
/// Description
/// </summary>
public virtual String Description
{
get { return description; }
set { description = value; }
}
private ICollection<DetailEnt> detailEntList;
/// <summary>
/// <see cref="RIDetailEnt"/> one-to-many relationship.
/// </summary>
public virtual ICollection<DetailEnt> DetailEntList
{
get { return detailEntList; }
set { detailEntList = value; }
}
}
}
Forcing eager load at runtime:
NHibernate.ISession ss = GetSessionFromSomeWhere();
NHibernate.ICriteria crt = ss.CreateCriteria<MasterEnt>();
crt
.Add(NHibernate.Criterion.Expression.IdEq(17))
//here is "force eager load at runtime"
.SetFetchMode("DetailEntList", NHibernate.FetchMode.Join);
MasterEnt mEnt = crt.UniqueResult<MasterEnt>();
In this case I used "hbm". But the logic should be the same.
EDITED:
With "NHibernate 2.1.2" and "NHibernate.Linq"
INHibernateQueryable<MasterEnt> nhq = null;
IList<MasterEnt> masterList = null;
nhq = (INHibernateQueryable<MasterEnt>)(
from master in session.Linq<MasterEnt>()
where master.Id == 2
select master);
nhq.Expand("DetailEntList");
masterList = nhq.ToList<MasterEnt>();
With QueryOver<T> and Left.JoinQueryOver from NHibernate 3:
IQueryOver<MasterEnt> query = session.QueryOver<MasterEnt>()
.Left.JoinQueryOver<DetailEnt>(m => m.DetailEntList)
.Where(m => m.Id == 2);
masterList = query.List<MasterEnt>();
These queries work this way independently if using "FluentNHibernate" or "hbm".
I made some code for it, soon I'll post the links to the files.
EDITED 2:
I've posted the code on q_10303345_1350308.7z (runnable by NUnit). There are explanations about the dependencies in the "dependencies \ readme.txt". The dll dependencies are loaded by NuGet.

Performing Inserts and Updates with Dapper

I am interested in using Dapper - but from what I can tell it only supports Query and Execute. I do not see that Dapper includes a way of Inserting and Updating objects.
Given that our project (most projects?) need to do inserts and updates, what is the best practice for doing Inserts and Updates alongside dapper?
Preferably we would not have to resort to the ADO.NET method of parameter building, etc.
The best answer I can come up with at this point is to use LinqToSQL for inserts and updates. Is there a better answer?
We are looking at building a few helpers, still deciding on APIs and if this goes in core or not. See: https://code.google.com/archive/p/dapper-dot-net/issues/6 for progress.
In the mean time you can do the following
val = "my value";
cnn.Execute("insert into Table(val) values (@val)", new {val});
cnn.Execute("update Table set val = @val where Id = @id", new {val, id = 1});
etcetera
See also my blog post: That annoying INSERT problem
Update
As pointed out in the comments, there are now several extensions available in the Dapper.Contrib project in the form of these IDbConnection extension methods:
T Get<T>(id);
IEnumerable<T> GetAll<T>();
int Insert<T>(T obj);
int Insert<T>(IEnumerable<T> list);
bool Update<T>(T obj);
bool Update<T>(IEnumerable<T> list);
bool Delete<T>(T obj);
bool Delete<T>(IEnumerable<T> list);
bool DeleteAll<T>();
Performing CRUD operations using Dapper is an easy task. The examples below should help you with each operation.
Code for Create:
Method #1: This method is used when you are inserting values from different entities.
using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["myDbConnection"].ConnectionString))
{
string insertQuery = @"INSERT INTO [dbo].[Customer]([FirstName], [LastName], [State], [City], [IsActive], [CreatedOn]) VALUES (@FirstName, @LastName, @State, @City, @IsActive, @CreatedOn)";
var result = db.Execute(insertQuery, new
{
customerModel.FirstName,
customerModel.LastName,
StateModel.State,
CityModel.City,
isActive,
CreatedOn = DateTime.Now
});
}
Method #2: This method is used when your entity properties have the same names as the SQL columns, so Dapper, acting as an ORM, maps the entity properties to the matching SQL columns.
using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["myDbConnection"].ConnectionString))
{
string insertQuery = @"INSERT INTO [dbo].[Customer]([FirstName], [LastName], [State], [City], [IsActive], [CreatedOn]) VALUES (@FirstName, @LastName, @State, @City, @IsActive, @CreatedOn)";
var result = db.Execute(insertQuery, customerViewModel);
}
Code for Read:
using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["myDbConnection"].ConnectionString))
{
string selectQuery = @"SELECT * FROM [dbo].[Customer] WHERE FirstName = @FirstName";
var result = db.Query(selectQuery, new
{
customerModel.FirstName
});
}
Code for Update:
using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["myDbConnection"].ConnectionString))
{
string updateQuery = @"UPDATE [dbo].[Customer] SET IsActive = @IsActive WHERE FirstName = @FirstName AND LastName = @LastName";
var result = db.Execute(updateQuery, new
{
isActive,
customerModel.FirstName,
customerModel.LastName
});
}
Delete:
using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["myDbConnection"].ConnectionString))
{
string deleteQuery = @"DELETE FROM [dbo].[Customer] WHERE FirstName = @FirstName AND LastName = @LastName";
var result = db.Execute(deleteQuery, new
{
customerModel.FirstName,
customerModel.LastName
});
}
You can do it this way:
sqlConnection.Open();
string sqlQuery = "INSERT INTO [dbo].[Customer]([FirstName],[LastName],[Address],[City]) VALUES (@FirstName,@LastName,@Address,@City)";
sqlConnection.Execute(sqlQuery,
new
{
customerEntity.FirstName,
customerEntity.LastName,
customerEntity.Address,
customerEntity.City
});
Edit added by Caius:
Note that it's not necessary to open/close the connection in this "immediately before/after the operation" way: if your connection is closed, Dapper opens it. If your connection is open, Dapper leaves it open.
Open the connection yourself if you e.g. have many operations to perform/you're using a transaction. Leave Dapper to do it if all you'll do is open/execute/close.
Also, it's unnecessary to make an anonymous type; just make your parameter names match the property names in whatever type holds your data, and pass that type rather than unpacking it into an anonymous type.
The code above can be written thus:
string sqlQuery = "INSERT INTO [dbo].[Customer]([FirstName],[LastName],[Address],[City]) VALUES (@FirstName,@LastName,@Address,@City)";
using(var sqlConnection = ...){
sqlConnection.Execute(sqlQuery, customerEntity);
}
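Following the note above about opening the connection yourself: when several statements must succeed or fail together, you would open the connection explicitly and pass a transaction to each call. A sketch (the `Customer` table, `connectionString`, and the two entity variables are the same hypothetical names used above):

```csharp
using System.Data.SqlClient;
using Dapper;

// Open explicitly because multiple operations share one transaction.
using (var sqlConnection = new SqlConnection(connectionString))
{
    sqlConnection.Open();
    using (var transaction = sqlConnection.BeginTransaction())
    {
        string sqlQuery = "INSERT INTO [dbo].[Customer]([FirstName],[LastName],[Address],[City]) VALUES (@FirstName,@LastName,@Address,@City)";
        // Pass the transaction to every Execute that belongs to it.
        sqlConnection.Execute(sqlQuery, customerEntity, transaction);
        sqlConnection.Execute(sqlQuery, anotherCustomerEntity, transaction);
        transaction.Commit(); // nothing is persisted until this point
    }
}
```

If the `using` block exits without `Commit()` (for example, because an exception was thrown), the transaction is rolled back on dispose.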
Using Dapper.Contrib it is as simple as this:
Insert list:
public long Insert(IEnumerable<YourClass> yourClass)
{
using (SqlConnection conn = new SqlConnection(ConnectionString))
{
return conn.Insert(yourClass);
}
}
Insert single:
public long Insert(YourClass yourClass)
{
using (SqlConnection conn = new SqlConnection(ConnectionString))
{
return conn.Insert(yourClass);
}
}
Update list:
public bool Update(IEnumerable<YourClass> yourClass)
{
using (SqlConnection conn = new SqlConnection(ConnectionString))
{
return conn.Update(yourClass);
}
}
Update single:
public bool Update(YourClass yourClass)
{
using (SqlConnection conn = new SqlConnection(ConnectionString))
{
return conn.Update(yourClass);
}
}
Source: https://github.com/StackExchange/Dapper/tree/master/Dapper.Contrib
You can also use Dapper with stored procedures in a generic way that keeps everything easily manageable.
Define your connection:
public class Connection: IDisposable
{
private static SqlConnectionStringBuilder ConnectionString(string dbName)
{
return new SqlConnectionStringBuilder
{
ApplicationName = "Application Name",
DataSource = @"Your source",
IntegratedSecurity = false,
InitialCatalog = dbName,
Password = "Your Password",
PersistSecurityInfo = false,
UserID = "User Id",
Pooling = true
};
}
protected static IDbConnection LiveConnection(string dbName)
{
var connection = OpenConnection(ConnectionString(dbName));
connection.Open();
return connection;
}
private static IDbConnection OpenConnection(DbConnectionStringBuilder connectionString)
{
return new SqlConnection(connectionString.ConnectionString);
}
protected static bool CloseConnection(IDbConnection connection)
{
if (connection.State != ConnectionState.Closed)
{
connection.Close();
// connection.Dispose();
}
return true;
}
private static void ClearPool()
{
SqlConnection.ClearAllPools();
}
public void Dispose()
{
ClearPool();
}
}
Create an interface to define the Dapper methods you actually need:
public interface IDatabaseHub
{
long Execute<TModel>(string storedProcedureName, TModel model, string dbName);
/// <summary>
/// This method is used to execute a stored procedure with a model object as its parameters. This is the generic version of the method.
/// </summary>
/// <param name="storedProcedureName">Stored Procedure's name. Expected to be a verbatim string, e.g. @"[Schema].[Stored-Procedure-Name]"</param>
/// <typeparam name="TModel">The type of POCO class whose properties are passed as the Stored Procedure's parameters. For more info, refer to https://msdn.microsoft.com/en-us/library/vstudio/dd456872(v=vs.100).aspx.</typeparam>
/// <param name="model">The model object containing all the values passed as the Stored Procedure's parameters.</param>
/// <returns>Returns how many rows have been affected.</returns>
Task<long> ExecuteAsync<TModel>(string storedProcedureName, TModel model, string dbName);
/// <summary>
/// This method is used to execute the stored procedures with parameter. This is the generic version of the method.
/// </summary>
/// <param name="storedProcedureName">Stored Procedure's name. Expected to be a verbatim string, e.g. @"[Schema].[Stored-Procedure-Name]"</param>
/// <param name="parameters">Parameter required for executing Stored Procedure.</param>
/// <returns>Returns how many rows have been affected.</returns>
long Execute(string storedProcedureName, DynamicParameters parameters, string dbName);
/// <summary>
/// Asynchronous version of the above: executes a stored procedure with DynamicParameters.
/// </summary>
/// <param name="storedProcedureName">Stored Procedure's name. Expected to be a verbatim string, e.g. @"[Schema].[Stored-Procedure-Name]"</param>
/// <param name="parameters">Parameters required for executing the Stored Procedure.</param>
/// <returns>Returns how many rows have been affected.</returns>
Task<long> ExecuteAsync(string storedProcedureName, DynamicParameters parameters, string dbName);
}
Implement the interface:
public class DatabaseHub : Connection, IDatabaseHub
{
/// <summary>
/// This function is used for validating if the Stored Procedure's name is correct.
/// </summary>
/// <param name="storedProcedureName">Stored Procedure's name. Expected to be a verbatim string, e.g. @"[Schema].[Stored-Procedure-Name]"</param>
/// <returns>Returns true if the name is not empty and matches the naming pattern, otherwise returns false.</returns>
private static bool IsStoredProcedureNameCorrect(string storedProcedureName)
{
if (string.IsNullOrEmpty(storedProcedureName))
{
return false;
}
if (storedProcedureName.StartsWith("[") && storedProcedureName.EndsWith("]"))
{
return Regex.IsMatch(storedProcedureName,
@"^[\[]{1}[A-Za-z0-9_]+[\]]{1}[\.]{1}[\[]{1}[A-Za-z0-9_]+[\]]{1}$");
}
return Regex.IsMatch(storedProcedureName, @"^[A-Za-z0-9]+[\.]{1}[A-Za-z0-9]+$");
}
/// <summary>
/// This method is used to execute a stored procedure with a model object as its parameters.
/// </summary>
/// <param name="storedProcedureName">Stored Procedure's name. Expected to be a verbatim string, e.g. @"[Schema].[Stored-Procedure-Name]"</param>
/// <param name="model">The model object containing all the values passed as the Stored Procedure's parameters.</param>
/// <typeparam name="TModel">The type of POCO class whose properties are passed as the Stored Procedure's parameters. For more info, refer to https://msdn.microsoft.com/en-us/library/vstudio/dd456872(v=vs.100).aspx.</typeparam>
/// <returns>Returns how many rows have been affected.</returns>
public long Execute<TModel>(string storedProcedureName, TModel model, string dbName)
{
if (!IsStoredProcedureNameCorrect(storedProcedureName))
{
return 0;
}
using (var connection = LiveConnection(dbName))
{
try
{
return connection.Execute(
sql: storedProcedureName,
param: model,
commandTimeout: null,
commandType: CommandType.StoredProcedure
);
}
catch (Exception)
{
// rethrow without resetting the stack trace
throw;
}
finally
{
CloseConnection(connection);
}
}
}
public async Task<long> ExecuteAsync<TModel>(string storedProcedureName, TModel model, string dbName)
{
if (!IsStoredProcedureNameCorrect(storedProcedureName))
{
return 0;
}
using (var connection = LiveConnection(dbName))
{
try
{
return await connection.ExecuteAsync(
sql: storedProcedureName,
param: model,
commandTimeout: null,
commandType: CommandType.StoredProcedure
);
}
catch (Exception)
{
// rethrow without resetting the stack trace
throw;
}
finally
{
CloseConnection(connection);
}
}
}
/// <summary>
/// This method is used to execute the stored procedures with parameter. This is the generic version of the method.
/// </summary>
/// <param name="storedProcedureName">Stored Procedure's name. Expected to be a verbatim string, e.g. @"[Schema].[Stored-Procedure-Name]"</param>
/// <param name="parameters">Parameter required for executing Stored Procedure.</param>
/// <returns>Returns how many rows have been affected.</returns>
public long Execute(string storedProcedureName, DynamicParameters parameters, string dbName)
{
if (!IsStoredProcedureNameCorrect(storedProcedureName))
{
return 0;
}
using (var connection = LiveConnection(dbName))
{
try
{
return connection.Execute(
sql: storedProcedureName,
param: parameters,
commandTimeout: null,
commandType: CommandType.StoredProcedure
);
}
catch (Exception)
{
// rethrow without resetting the stack trace
throw;
}
finally
{
CloseConnection(connection);
}
}
}
public async Task<long> ExecuteAsync(string storedProcedureName, DynamicParameters parameters, string dbName)
{
if (!IsStoredProcedureNameCorrect(storedProcedureName))
{
return 0;
}
using (var connection = LiveConnection(dbName))
{
try
{
return await connection.ExecuteAsync(
sql: storedProcedureName,
param: parameters,
commandTimeout: null,
commandType: CommandType.StoredProcedure
);
}
catch (Exception)
{
// rethrow without resetting the stack trace
throw;
}
finally
{
CloseConnection(connection);
}
}
}
}
You can now call it from your model as needed:
public class DeviceDriverModel : Base
{
public class DeviceDriverSaveUpdate
{
public string DeviceVehicleId { get; set; }
public string DeviceId { get; set; }
public string DriverId { get; set; }
public string PhoneNo { get; set; }
public bool IsActive { get; set; }
public string UserId { get; set; }
public string HostIP { get; set; }
}
public Task<long> DeviceDriver_SaveUpdate(DeviceDriverSaveUpdate obj)
{
return DatabaseHub.ExecuteAsync(
storedProcedureName: "[dbo].[sp_SaveUpdate_DeviceDriver]", model: obj, dbName: AMSDB);//Database name defined in Base Class.
}
}
You can also pass parameters explicitly:
public Task<long> DeleteFuelPriceEntryByID(string FuelPriceId, string UserId)
{
var parameters = new DynamicParameters();
parameters.Add(name: "@FuelPriceId", value: FuelPriceId, dbType: DbType.Int32, direction: ParameterDirection.Input);
parameters.Add(name: "@UserId", value: UserId, dbType: DbType.String, direction: ParameterDirection.Input);
return DatabaseHub.ExecuteAsync(
storedProcedureName: @"[dbo].[sp_Delete_FuelPriceEntryByID]", parameters: parameters, dbName: AMSDB);
}
Now call it from your controllers:
var queryData = await new DeviceDriverModel().DeviceDriver_SaveUpdate(obj);
Hopefully this prevents code repetition and improves security.
Instead of using a third-party library for query operations, I would rather suggest writing the queries yourself, because relying on another package takes away the main advantage of using Dapper: the flexibility to write your own queries.
Now, there is a problem with writing an INSERT or UPDATE query for an entire object. For this, one can simply create helpers like the ones below:
InsertQueryBuilder:
public static string InsertQueryBuilder(IEnumerable<string> fields)
{
StringBuilder columns = new StringBuilder();
StringBuilder values = new StringBuilder();
foreach (string columnName in fields)
{
columns.Append($"{columnName}, ");
values.Append($"@{columnName}, ");
}
string insertQuery = $"({columns.ToString().TrimEnd(',', ' ')}) VALUES ({values.ToString().TrimEnd(',', ' ')})";
return insertQuery;
}
Now, by simply passing the name of the columns to insert, the whole query will be created automatically, like below:
List<string> columns = new List<string> {
"UserName",
"City"
};
// QueryBuilderUtil is the class containing InsertQueryBuilder()
string insertQueryValues = QueryBuilderUtil.InsertQueryBuilder(columns);
string insertQuery = $"INSERT INTO UserDetails {insertQueryValues} RETURNING UserId";
Guid insertedId = await _connection.ExecuteScalarAsync<Guid>(insertQuery, userObj);
You can also modify the function to return the entire INSERT statement by passing the TableName parameter.
Make sure the class property names match the field names in the database; only then can you pass the entire object (like userObj in our case) and have the values mapped automatically.
In the same way, you can have the helper function for UPDATE query as well:
public static string UpdateQueryBuilder(List<string> fields)
{
StringBuilder updateQueryBuilder = new StringBuilder();
foreach (string columnName in fields)
{
updateQueryBuilder.AppendFormat("{0}=@{0}, ", columnName);
}
return updateQueryBuilder.ToString().TrimEnd(',', ' ');
}
And use it like:
List<string> columns = new List<string> {
"UserName",
"City"
};
// QueryBuilderUtil is the class containing UpdateQueryBuilder()
string updateQueryValues = QueryBuilderUtil.UpdateQueryBuilder(columns);
string updateQuery = $"UPDATE UserDetails SET {updateQueryValues} WHERE UserId=@UserId";
await _connection.ExecuteAsync(updateQuery, userObj);
Even with these helper functions you still need to pass the names of the fields you want to insert or update, but at least you keep full control over the query and can include different WHERE clauses as required.
These helper functions save you from writing lines like the following by hand:
For Insert Query:
$"INSERT INTO UserDetails (UserName,City) VALUES (@UserName,@City) RETURNING UserId";
For Update Query:
$"UPDATE UserDetails SET UserName=@UserName, City=@City WHERE UserId=@UserId";
This may look like a difference of only a few lines, but when inserting into or updating a table with more than 10 fields, you can feel the difference.
You can use the nameof operator to pass the field names to the function and avoid typos.
Instead of:
List<string> columns = new List<string> {
"UserName",
"City"
};
You can write:
List<string> columns = new List<string> {
nameof(UserEntity.UserName),
nameof(UserEntity.City),
};
The stored procedure + Dapper approach and the SQL insert statement + Dapper approach both do the work, but neither perfectly fulfils the ORM concept of dynamically mapping the data model to the SQL table columns: with either approach, you still need to hard-code some column names in your stored procedure parameters or SQL insert statement.
To minimize code modification, you can use Dapper.Contrib to support SQL inserts. Here is the official guide, and below is a sample setup and code.
Step 1
Set up your class model in C# using Dapper.Contrib.Extensions:
The [Table] attribute points to the desired table name in your SQL box, and the [ExplicitKey] attribute tells Dapper that this model property is a primary key in your SQL table.
[Table("MySQLTableName")]
public class UserModel
{
[ExplicitKey]
public string UserId { get; set; }
public string Name { get; set; }
public string Sex { get; set; }
}
Step 2
Set up your SQL database/table to match the model. (A screenshot of the table appeared here in the original post.)
Step 3
Now build your C# code as shown below. You need these namespaces:
using Dapper.Contrib.Extensions;
using System.Data;
Code:
string connectionString = "Server=localhost;Database=SampleSQL_DB;Integrated Security=True";
UserModel objUser1 = new UserModel { UserId = "user0000001" , Name = "Jack", Sex = "Male" };
UserModel objUser2 = new UserModel { UserId = "user0000002", Name = "Marry", Sex = "female" };
UserModel objUser3 = new UserModel { UserId = "user0000003", Name = "Joe", Sex = "male" };
List<UserModel> LstUsers = new List<UserModel>();
LstUsers.Add(objUser2);
LstUsers.Add(objUser3);
try
{
using (IDbConnection connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
connection.Open();
using (var trans = connection.BeginTransaction())
{
try
{
// insert single record with custom data model
connection.Insert(objUser1, transaction: trans);
// insert multiple record with List<Type>
connection.Insert(LstUsers, transaction: trans);
// Only save to the SQL database if all required SQL operations completed successfully
trans.Commit();
}
catch (Exception e)
{
// If any SQL operation fails, roll back the whole transaction
trans.Rollback();
}
}
}
}
catch (Exception e) { }
You can try this:
string sql = "UPDATE Customer SET City = @City WHERE CustomerId = @CustomerId";
conn.Execute(sql, customerEntity);
Here is a simple example with the Repository Pattern:
public interface IUserRepository
{
Task<bool> CreateUser(User user);
Task<bool> UpdateUser(User user);
}
And in UserRepository:
public class UserRepository: IUserRepository
{
private readonly IConfiguration _configuration;
public UserRepository(IConfiguration configuration)
{
_configuration = configuration;
}
public async Task<bool> CreateUser(User user)
{
using var connection = new NpgsqlConnection(_configuration.GetValue<string>("DatabaseSettings:ConnectionString"));
var affected =
await connection.ExecuteAsync
("INSERT INTO User (Name, Email, Mobile) VALUES (@Name, @Email, @Mobile)",
new { Name = user.Name, Email = user.Email, Mobile = user.Mobile });
if (affected == 0)
return false;
return true;
}
public async Task<bool> UpdateUser(User user)
{
using var connection = new NpgsqlConnection(_configuration.GetValue<string>("DatabaseSettings:ConnectionString"));
var affected = await connection.ExecuteAsync
("UPDATE User SET Name = @Name, Email = @Email, Mobile = @Mobile WHERE Id = @Id",
new { Name = user.Name, Email = user.Email, Mobile = user.Mobile, Id = user.Id });
if (affected == 0)
return false;
return true;
}
}
Note: NpgsqlConnection is used here because this repository targets a PostgreSQL database.
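To use the repository, you would typically register it with the dependency-injection container and depend on the interface. A sketch, assuming ASP.NET Core (the `UserService` consumer class is illustrative):

```csharp
// In Program.cs (ASP.NET Core): register the repository so that
// IUserRepository can be constructor-injected anywhere.
builder.Services.AddScoped<IUserRepository, UserRepository>();

// Then in a consumer such as a controller or service:
public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public Task<bool> Register(User user) => _userRepository.CreateUser(user);
}
```

Because `UserRepository` only depends on `IConfiguration`, the container can construct it directly, and consumers stay testable against the interface.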
