.NET Core / EF Core unit test - C#

In my application, I have a repository that accesses a SQL Server database, and I am trying to write unit tests for one of its methods. For that, I use SQLite in-memory.
The method works fine against SQL Server, but not against SQLite, so the unit test fails:
var result = await (from command in _context.Commandes
                    where command.Numero == numero
                    select new CommandeDto
                    {
                        Numero = command.Numero,
                        Produits = command.Produits.Select(s => new ProduittDto()
                        {
                            Id = s.Id,
                            Libelle = s.Name,
                        }).Distinct().ToList()
                    }).FirstOrDefaultAsync();
System.InvalidOperationException : Translating this query requires the SQL APPLY operation, which is not supported on SQLite.
If I remove the Distinct, my test passes.

Your question is essentially answered in this GitHub issue for EF Core. I think your only option is to perform the .Distinct() in memory afterwards using LINQ to Objects, not as part of your EF Core expression.
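A minimal sketch of that approach, reusing the DTO names from the question (the exact projection and the Id-based deduplication key are assumptions):

```csharp
// Keep the EF Core expression free of Distinct so SQLite can translate it,
// then deduplicate after the data has been materialized in memory.
var commande = await (from command in _context.Commandes
                      where command.Numero == numero
                      select new CommandeDto
                      {
                          Numero = command.Numero,
                          Produits = command.Produits
                              .Select(s => new ProduittDto { Id = s.Id, Libelle = s.Name })
                              .ToList()
                      }).FirstOrDefaultAsync();

// LINQ to Objects: DistinctBy needs .NET 6+; on older targets use
// GroupBy(p => p.Id).Select(g => g.First()) instead.
if (commande != null)
{
    commande.Produits = commande.Produits.DistinctBy(p => p.Id).ToList();
}
```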


EF Core Repository items not tracked after insert with ExecuteSqlRaw(commandText)

Because of the performance benefits, I am inserting data by executing a raw SQL command.
All data is properly inserted into the database, but the repository is not aware of the inserts. Basically, usersBeforeInsert and usersAfterInsert contain the same 3 records.
Also, after an application restart, the repository is still not aware of the users inserted with ExecuteSqlRaw(), retrieving only the users that existed prior to ExecuteSqlRaw().
Does anybody know how to make _userRepository retrieve all data from the DB? Btw, I am using the ASP.NET Boilerplate project.
Here is the code sample:
var usersBeforeInsert = await _userRepository.GetAllListAsync(); // 3
var commandText = GenerateInsertUsersSqlScript(users);
var context = _userRepository.GetDbContext();
var rowsAffected = context.Database.ExecuteSqlRaw(commandText); // 85000
var usersAfterInsert = await _userRepository.GetAllListAsync(); // 3
Your culprit is this line:
var usersBeforeInsert = await _userRepository.GetAllListAsync();
Because the framework assumes nothing has changed between that line and this one:
var usersAfterInsert = await _userRepository.GetAllListAsync();
You fix this by not materializing the list until after the raw insert, or by clearing the change tracker.
The easiest way to clear the change tracker would be to new up a fresh _userRepository, but I guess it is most likely injected.
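A sketch of the change-tracker route, building on the question's GetDbContext() call (assumes EF Core; ChangeTracker.Clear() exists only in EF Core 5.0 and later):

```csharp
var context = _userRepository.GetDbContext();
var rowsAffected = context.Database.ExecuteSqlRaw(commandText);

// EF Core 5.0+: drop everything the context is tracking, so subsequent
// queries resolve against the database rather than the stale identity map.
context.ChangeTracker.Clear();

// On earlier EF Core versions, detach the tracked entries manually instead:
// foreach (var entry in context.ChangeTracker.Entries().ToList())
//     entry.State = EntityState.Detached;

var usersAfterInsert = await _userRepository.GetAllListAsync();
```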

Dapper.Contrib and MiniProfiler (for MySql) integration issues

I'm trying to use MiniProfiler.Integrations.MySql along with Dapper.Contrib extensions to profile the sql queries sent to MySql server. I'm using my own ConnectionFactory:
public IDbConnection GetConnection()
{
    var connection = (DbConnection)new MySqlConnection(_connectionString);
    return new ProfiledDbConnection(connection, CustomDbProfiler.Current);
}
Dapper.Contrib makes inserting a new record as simple as:
public async Task AddAsync(TEntity sample)
{
    using (var connection = _connectionFactory.GetConnection())
    {
        await connection.InsertAsync(sample);
    }
}
But ProfiledDbConnection is interpreted as a SqlConnection, so Dapper.Contrib produces SQL Server syntax that is incompatible with MySQL:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[Id], [CreatedAt], [AndSoOn]' at line 1
Looking for your advice on how to solve the issue and get MiniProfiler working.
I'm using (all from Nuget):
Dapper: 1.50.5
Dapper.Contrib: 1.50.5
MiniProfiler: 3.2.0
MiniProfiler.Integrations.MySql: 1.0.1
As a workaround, in the current version you can override the type-based name resolution in Dapper.Contrib like this:
SqlMapperExtensions.GetDatabaseType = conn => "MySqlConnection";
This will override the default connection.GetType()-based name behavior. Still, that's not awesome, and I'll take a look if we can improve this in the next Dapper release.
Looks like I've found a workaround: the Insert() and InsertAsync() methods accept an ISqlAdapter as an optional parameter, which fixes the issue (but I can't use this approach for Update()/UpdateAsync(), which don't take an adapter).
This is due to the fact that using MiniProfiler with MySQL and Dapper.Contrib requires wrapping the MySqlConnection, which leads Dapper.Contrib to pick the default (wrong) ISqlAdapter from the connection's type name.
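Both workarounds side by side, assuming the connection factory from the question (the MySqlAdapter class and the sqlAdapter parameter are part of Dapper.Contrib):

```csharp
// Workaround 1 (global): force Dapper.Contrib's type-name resolution to
// report MySQL even though the connection type is ProfiledDbConnection.
SqlMapperExtensions.GetDatabaseType = conn => "MySqlConnection";

// Workaround 2 (per call): pass the adapter explicitly; this only works for
// Insert()/InsertAsync(), which expose the optional ISqlAdapter parameter.
public async Task AddAsync(TEntity sample)
{
    using (var connection = _connectionFactory.GetConnection())
    {
        await connection.InsertAsync(sample, sqlAdapter: new MySqlAdapter());
    }
}
```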

SQL Server 2014 Query Notifications in C++ with ADO.NET libraries

I have a C++ application that has to react to status changes that are kept in a SQL Server database. It has to react in real time, and currently this is done by polling; however, the polling rate I need for real-time response is overloading the database.
I have read about Query Notifications, introduced in SQL Server 2005, but in truth I don't really understand the different SQL connection methods available. I currently use ADO.NET to facilitate database communication, but the only sample code I have found for Query Notifications of this nature is in C# and VB, and I am still struggling to understand the concept. Here is a sample of how I currently run SQL with ADO.NET:
hr = link.CreateInstance(__uuidof(Connection));
if (link) { // Execute the query on the open connection using the existing command object
    pCommand->ActiveConnection = link;
    pCommand->CommandType = ado20CommandType;
    pCommand->CommandText = strQuery.GetBuffer(0);
    pCommand->NamedParameters = true;
    if (pRecordSet = pCommand->Execute(&vRecordsAffected, NULL, NULL)) { // Execute the final query and set the object-level recordset
        lRecordsAffected = vRecordsAffected.lVal;
        _RecordsetPtr pfuRecordSet = link->Execute(_T("SELECT @@IDENTITY as pmIDENT;"), NULL, adCmdText);
        CString strID = pfuRecordSet->GetFields()->GetItem(COleVariant("pmIDENT"))->Value.bstrVal;
        int nID = atoi(strID);
        if (nID > 0) {
            nLastInsertedID = nID;
            return true;
        }
    }
}
I was wondering if someone could provide me with resources and/or sample code that would help me set this up in C++? Thanks in advance.
The fastest way to get data out of a SQL Server database is using the SqlDataReader, which is forward-only and read-only.
Take a look at this C++ code and give that a try.

Is it possible to connect to SQL Server without specifying a database?

I'm trying to write some unit tests for code that uses SQL Server for persistence. I would like to be able to run the tests by just pointing them at a SQL Server instance and letting each test create its own database to run in. After each test the database is dropped, and the setup for the next test recreates it, so I know no legacy data or structures are left over from a previous test to affect the next one.
In brief: no, you cannot do that. You can leave the database out of the connection string, but in that case the connection is made to the configured default database of the login that's connecting to SQL Server (and that default database must exist at the time the connection is made).
If you want to have this scenario, you need to:
first connect to your instance's master database and create your new testdb (or whatever it's called)
in your tests, connect to the instance and the testdb database
Better yet: use a mocking framework of some sort so you don't even need an actual database in your testing scenario!
I use the following class to facilitate the OP's scenario:
public class MsSqlDatabaseCreator
{
    public void Create(string connectionstring)
    {
        if (DatabaseExists(connectionstring))
            DropDatabase(connectionstring);
        CreateDatabase(connectionstring);
    }

    private static void CreateDatabase(string connectionString)
    {
        var sqlConnectionStringBuilder = new SqlConnectionStringBuilder(connectionString);
        var databaseName = sqlConnectionStringBuilder.InitialCatalog;
        sqlConnectionStringBuilder.InitialCatalog = "master";
        using (var sqlConnection = new SqlConnection(sqlConnectionStringBuilder.ConnectionString))
        using (var sqlCommand = sqlConnection.CreateCommand())
        {
            sqlConnection.Open();
            sqlCommand.CommandText = $"CREATE DATABASE {databaseName}";
            sqlCommand.ExecuteNonQuery();
        }
    }

    private static bool DatabaseExists(string connectionString)
    {
        var sqlConnectionStringBuilder = new SqlConnectionStringBuilder(connectionString);
        var databaseName = sqlConnectionStringBuilder.InitialCatalog;
        sqlConnectionStringBuilder.InitialCatalog = "master";
        using (var sqlConnection = new SqlConnection(sqlConnectionStringBuilder.ConnectionString))
        using (var command = sqlConnection.CreateCommand())
        {
            sqlConnection.Open();
            command.CommandText = $"SELECT db_id('{databaseName}')";
            return command.ExecuteScalar() != DBNull.Value;
        }
    }

    private static void DropDatabase(string connectionString)
    {
        var sqlConnectionStringBuilder = new SqlConnectionStringBuilder(connectionString);
        var databaseName = sqlConnectionStringBuilder.InitialCatalog;
        sqlConnectionStringBuilder.InitialCatalog = "master";
        using (var sqlConnection = new SqlConnection(sqlConnectionStringBuilder.ConnectionString))
        using (var sqlCommand = sqlConnection.CreateCommand())
        {
            sqlConnection.Open();
            sqlCommand.CommandText = $@"DROP DATABASE [{databaseName}]";
            sqlCommand.ExecuteNonQuery();
        }
    }
}
The important part is the switching of the database name (initial catalog) to master. This way you can get by with just one connection string.
What you want to accomplish is possible using a mocking framework, in which case you don't even have to "connect to a database", you simply mock the return values that the database should return in order for you to test your "db handler" implementation.
There are several to choose from when it comes to C#; I can recommend Rhino Mocks and Moq, to name two. Here's a question detailing a bit more: https://stackoverflow.com/questions/37359/what-c-sharp-mocking-framework-to-use
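A minimal Moq sketch of the idea; the IUserRepository interface, GetAllNames method, and GreetingService are hypothetical, purely to show the shape:

```csharp
// Hypothetical repository interface and consumer, for illustration only.
public interface IUserRepository
{
    IList<string> GetAllNames();
}

public class GreetingService
{
    private readonly IUserRepository _repo;
    public GreetingService(IUserRepository repo) => _repo = repo;
    public string GreetFirst() => "Hello " + _repo.GetAllNames().First();
}

// In the test: no database needed, the repository's return value is mocked.
var mock = new Moq.Mock<IUserRepository>();
mock.Setup(r => r.GetAllNames()).Returns(new[] { "Alice", "Bob" });
var service = new GreetingService(mock.Object);
// service.GreetFirst() can now be asserted without touching SQL Server.
```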
Why not have a dedicated, consistently named database for tests, and drop and recreate it every time? That way you won't need to mess about with connection strings; it is always the same.
And yet, there is a better solution: within each test, start a transaction, do your test, in which your data gets messed up, and once the test has passed (or failed), roll the transaction back. That way you don't need to drop and recreate the database for every test, because the data is never actually changed.
But you'll need to make sure the schema in the test database is always up to date, so you'll still need to drop and recreate the test database whenever your schema changes.
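The transaction-per-test idea can be sketched like this (a hypothetical NUnit-style fixture; the key point is disposing the TransactionScope without completing it):

```csharp
using System.Transactions;

[TestFixture]
public class RepositoryTests
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        // Everything the test does against the database joins this
        // ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        // Dispose without calling Complete() => rollback: the database
        // is left exactly as it was before the test.
        _scope.Dispose();
    }
}
```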
I've blogged about database tests and how we deal with Entity Framework migrations. This might not be completely applicable to your situation, but might help with ideas.
Regarding using mocks in your tests: yes, this is an absolutely valid suggestion and should be followed most of the time. Unless you are trying to test the database layer itself. In that case no mock will save you; you just have to go to the DB. Many times over I have tried to mock DbContext in EF, but never managed to simulate realistic DB behavior, so going to the DB was easier for me than simulating a DB mock.
I'd use SQL Server Management Objects (SMO) for the task. Its Server and Database APIs don't necessarily need a connection string, but I think you might still need to specify a database; you can use master for that. (Check jeroenh's answer about creating the database via the SMO API as well.)
By the way, if you are using .NET 4.0.2 and up you can use LocalDB as well, which is even better.
Edit: Note that LocalDB is actually a SQL Server 2012 feature; however, you still need .NET Framework ≥ 4.0.2 to be able to use it.
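For reference, a typical LocalDB connection string looks like the following (the MSSQLLocalDB instance name is the default for SQL Server 2014 and later; the original 2012 release used v11.0):

```
Server=(localdb)\MSSQLLocalDB;Integrated Security=true;
```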

Entity Framework Bulk Load Too Slow Add Seed

protected override void Seed(Fitlife.Domain.Concrete.EFDBContext context)
{
    List<List<string>> foodweights = GetLines(basePath + "FoodWeights.txt");
    int counter = 0;
    foodweights.ForEach(line =>
    {
        FoodWeights newVal = new FoodWeights()
        {
            FoodCode = int.Parse(line[0]),
            PortionCode = int.Parse(line[1]),
            PortionWeight = decimal.Parse(line[2])
        };
        context.FoodWeights.AddOrUpdate(newVal); // reconstructed: upsert each row
        if (++counter == 1000) { counter = 0; context.SaveChanges(); } // batch the saves
    });
    context.SaveChanges();
}
The method above is used to populate my database, but it takes 50 seconds per 1000 entries and I have a file with 470k entries. How can I improve performance? I am using Entity Framework, and this method is called when I do
PM> update-database
with the Package Manager Console. I need similar functionality; I am very new to ASP.NET and Entity Framework, so any guidance will be appreciated. Thanks.
PS: Is it OK to take 50 seconds for 1000 entries, or am I doing something wrong?
The Seed method runs every time the application starts, so the way you have coded it will attempt to add the FoodWeights over and over again. EF provides AddOrUpdate as a convenient method to prevent that, but it is really not appropriate for bulk inserts.
You could use SQL directly on the database, and if you are using SQL Server that SQL could be BULK INSERT.
I would put the sql in an Up migration because you probably only want to run the insert once from a known state, and it avoids having to worry about the efficiency of the context and tracking changes etc.
There is example code and more information here: how to seed data using sql files
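A sketch of that idea for an EF6 code-first migration; the migration name, file path, and table/column layout are assumptions based on the question:

```csharp
using System.Data.Entity.Migrations;

public partial class SeedFoodWeights : DbMigration
{
    public override void Up()
    {
        // Runs once, from a known state; assumes FoodWeights.txt is a
        // tab-delimited file whose columns match the table's layout.
        Sql(@"BULK INSERT dbo.FoodWeights
              FROM 'C:\Data\FoodWeights.txt'
              WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');");
    }

    public override void Down()
    {
        Sql("DELETE FROM dbo.FoodWeights;");
    }
}
```

Note that the path in BULK INSERT is read by the SQL Server service, so the file must be accessible from the server machine, not just from the machine running update-database.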