I would like to reduce load on Azure Cosmos DB SQL-API, which is called from a .NET Core Web API with dependency injection.
In App Insights, I have noticed that every call to the Web API results in GetDatabase and GetCollection calls to Cosmos which can take 5s to run when Cosmos is under heavy load.
I have made CosmosClient a singleton (e.g. the advice here - https://learn.microsoft.com/en-us/azure/cosmos-db/performance-tips-dotnet-sdk-v3-sql).
However, I could not find any advice on whether the Database or Container objects can also be singletons, so at the moment these are created for each request to the Web API.
I check for the existence of the database and collection (e.g. following advice here - https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.cosmosclient.getdatabase?view=azure-dotnet#remarks and https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.cosmosclient.getcontainer?view=azure-dotnet#remarks)
This means that for every request to the Web API, the following code is run
var databaseResponse = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(
    this.databaseConfiguration.DatabaseName,
    throughput: this.databaseConfiguration.DatabaseLevelThroughput);
var database = databaseResponse.Database;

var containerResponse = await database.CreateContainerIfNotExistsAsync(containerId, partitionKey);
var container = containerResponse.Container;
Can I make Database and Container singletons and add them to the DI to be injected like CosmosClient in order to reduce the number of calls to GetDatabase and GetCollection seen in App Insights?
According to the latest Microsoft documentation, you create a singleton CosmosClient service that owns the Container instances you will be working with, in effect making the Containers singletons as well.
First, make your interface contract:
public interface ICosmosDbService
{
    // identify the database CRUD operations you need
}
Second, define your Service based on the contract:
public class CosmosDbService : ICosmosDbService
{
    private Container _container;

    public CosmosDbService(CosmosClient dbClient, string databaseName, string containerName)
    {
        this._container = dbClient.GetContainer(databaseName, containerName);
    }

    // your database CRUD operations go here using the Container field(s)
}
Third, create a method in your Startup class that initializes the client and returns your service:
private static async Task<CosmosDbService> InitializeCosmosClientAsync(IConfigurationSection cosmosConfig)
{
    var databaseName = cosmosConfig.GetSection("DatabaseName").Value;
    var containerName = cosmosConfig.GetSection("ContainerName").Value;
    var account = cosmosConfig.GetSection("Account").Value;
    var key = cosmosConfig.GetSection("Key").Value;

    var client = new Microsoft.Azure.Cosmos.CosmosClient(account, key);
    var database = await client.CreateDatabaseIfNotExistsAsync(databaseName);
    await database.Database.CreateContainerIfNotExistsAsync(containerName, "/id");

    return new CosmosDbService(client, databaseName, containerName);
}
Finally, add the service (which wraps the CosmosClient) to the ServiceCollection:
public void ConfigureServices(IServiceCollection services)
{
    var cosmosConfig = this.Configuration.GetSection("CosmosDb");
    var cosmosDbService = InitializeCosmosClientAsync(cosmosConfig).GetAwaiter().GetResult();
    services.AddSingleton<ICosmosDbService>(cosmosDbService);
}
Now your CosmosClient has only been created once (with all the Containers), and it will be reused each time you get it through Dependency Injection.
You don't need to call CreateDatabaseIfNotExistsAsync/CreateContainerIfNotExistsAsync every time. If you know the database and container exist, you can use CosmosClient.GetContainer(dbName, containerName), which returns a lightweight proxy object.
Unless you are expecting the database and containers to be deleted dynamically at some point?
CreateDatabaseIfNotExistsAsync should only be called once as it is just a setup step for DB configuration.
It is better to create a DbService that holds the Container object, and inject that DbService into each service instead of the DB client.
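Putting these suggestions together, a minimal sketch of registering the Container itself as a singleton might look like this (the database and container names are placeholders):
// Sketch: CosmosClient is already a singleton; Container is a lightweight
// proxy, so it is safe to register it as a singleton too.
services.AddSingleton(sp =>
{
    var client = sp.GetRequiredService<CosmosClient>();
    return client.GetContainer("MyDatabase", "MyContainer"); // placeholder names
});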
In previous versions of MassTransit, when writing unit tests, I used to seed initial state of a saga by calling the InMemorySagaRepository.Add() method:
private InMemorySagaRepository<TState> Repository { get; }

protected async Task SeedSaga<TState>(TState seedState)
    where TState : class, SagaStateMachineInstance
{
    await Repository.Add(new SagaInstance<TState>(seedState), default);
}
This method would add an entry for the saga in the desired state in the repository, which could be used as a starting point for my tests.
Now I have upgraded to version 7.2.1 of MassTransit, and this method is no longer available.
Is it possible to seed data in the current version of the InMemorySagaRepository? If not, what approach could be taken to achieve similar results?
Thanks.
There was a discussion on this late last year. In summary, you need to be using the container-based in-memory test harness.
var services = new ServiceCollection();
services.AddMassTransitInMemoryTestHarness(x =>
{
    x.AddSagaStateMachineTestHarness<MySagaStateMachine, MySagaState>();
    x.AddSagaStateMachine<MySagaStateMachine, MySagaState>()
        .InMemoryRepository();
});

var provider = services.BuildServiceProvider();
Get and start the test harness (be sure to stop it in finally):
var harness = provider.GetRequiredService<InMemoryTestHarness>();
await harness.Start();
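A typical shape for the test, then, is a sketch like this:
await harness.Start();
try
{
    // seed the saga (next step) and exercise it here
}
finally
{
    await harness.Stop();
}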
You can obtain the saga dictionary and add your seeded saga instance:
var dictionary = provider.GetRequiredService<IndexedSagaDictionary<MySagaState>>();
dictionary.Add(new SagaInstance<MySagaState>(new MySagaState
{
    CorrelationId = ...,
    OtherProperty = ...
}));
The linked discussion has some useful extension methods as well.
I created several microservices using ASP.NET Core Web API.
One of these microservices returns the exact addresses of the other microservices.
How can I update the address of any of these microservices without restarting, if the address changes?
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpClient("MainMicroservice", x =>
        {
            x.BaseAddress = new Uri("http://mainmicroservice.com");
        });

        services.AddHttpClient("Microservice1", x =>
        {
            x.BaseAddress = new Uri("http://microservice1.com");
        });

        services.AddHttpClient("Microservice2", x =>
        {
            x.BaseAddress = new Uri("http://microservice2.com");
        });

        services.AddHttpClient("Microservice3", x =>
        {
            x.BaseAddress = new Uri("http://microservice3.com");
        });
    }
}
public class Test
{
    private readonly IHttpClientFactory _client;

    public Test(IHttpClientFactory client)
    {
        _client = client;
    }

    // renamed from Test(): a member cannot have the same name as its enclosing type
    public async Task<string> TestAsync()
    {
        var repeat = false;
        do
        {
            try
            {
                return await _client
                    .CreateClient("Microservice1")
                    .GetStringAsync("Test")
                    .ConfigureAwait(false);
            }
            catch (HttpRequestException e) when (e.StatusCode == HttpStatusCode.NotFound)
            {
                var newAddress = await _client
                    .CreateClient("MainMicroservice")
                    .GetStringAsync("Microservice1")
                    .ConfigureAwait(false);
                //todo change address of microservice1
                repeat = true;
            }
        } while (repeat);

        return null; // unreachable in practice, but required so all code paths return a value
    }
}
If you are building a microservice-based solution, sooner or later (rather sooner) you will encounter a situation where one service needs to talk to another. In order to do this, the caller must know the exact location of the target microservice in the network they operate in.
You must somehow provide an IP address and port where the target microservice listens for requests. You can do this using configuration files or environment variables, but this approach has some drawbacks and limitations.
The first is that you have to maintain and properly deploy configuration files for all your environments: local development, test, pre-production, and production. Forgetting to update any of these configurations when adding a new service or moving an existing one to a different node will result in errors discovered at runtime.
The second, more important issue is that it works only in a static environment, meaning you cannot dynamically add/remove nodes, and therefore you won't be able to dynamically scale your system. The ability to scale and deploy a given microservice autonomously is one of the key advantages of microservice-based architecture, and we do not want to lose this ability.
Therefore we need to introduce service discovery. Service discovery is a mechanism that allows services to find each other's network location. There are many possible implementations of this pattern.
There are two types of service discovery: client-side and server-side, and for both you can find good NuGet packages to handle this in ASP.NET projects.
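As a rough sketch of the client-side variant applied to the code in the question (all names here are hypothetical), you could cache the addresses resolved from MainMicroservice and re-resolve an entry whenever a call fails:
// Hypothetical client-side discovery helper: caches addresses resolved from
// MainMicroservice and re-resolves an entry on demand (e.g. after a 404).
public class ServiceAddressRegistry
{
    private readonly IHttpClientFactory _factory;
    private readonly ConcurrentDictionary<string, Uri> _cache = new ConcurrentDictionary<string, Uri>();

    public ServiceAddressRegistry(IHttpClientFactory factory)
    {
        _factory = factory;
    }

    public async Task<Uri> GetAddressAsync(string serviceName, bool forceRefresh = false)
    {
        if (!forceRefresh && _cache.TryGetValue(serviceName, out var cached))
            return cached;

        // MainMicroservice returns the current address of the named service.
        var address = await _factory
            .CreateClient("MainMicroservice")
            .GetStringAsync(serviceName)
            .ConfigureAwait(false);

        var uri = new Uri(address);
        _cache[serviceName] = uri;
        return uri;
    }
}
Registered as a singleton, this lets the calling code build absolute request URIs (new Uri(baseAddress, "Test")) instead of relying on a fixed BaseAddress, and pass forceRefresh: true after a NotFound.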
I have a small point of confusion here.
I'm not sure if I am handling my DbContext properly throughout the Web API.
I have some controllers that perform operations on my DB (inserts/updates with EF), and after doing these actions I trigger an event.
In my EventArgs (I have a custom class which inherits from EventArgs) I pass my DbContext and I use it in the event handler to log these operations (basically I just log authenticated user API requests).
In the event handler, when I am trying to commit my changes (await SaveChangesAsync) I get an error, "Using a disposed object... etc.", basically telling me that at the first await in my async void (fire-and-forget) method, the caller disposes the DbContext object.
Not using async works, and the only workaround that I've managed to come up with is creating another instance of DbContext using the SQL connection string of the DbContext passed in the EventArgs.
Before posting, I did some research on my issue:
Entity Framework disposing with async controllers in Web api/MVC
This is how I pass parameters to my OnRequestCompleted event:
OnRequestCompleted(dbContext: dbContext, requestJson: JsonConvert.SerializeObject(requestFiscalView));
This is the OnRequestCompleted() declaration
protected virtual void OnRequestCompleted(int typeOfQuery, PartnerFiscalNumberContext dbContext, string requestJson, string appId)
{
    RequestCompleted?.Invoke(this, new MiningResultEventArgs()
    {
        TypeOfQuery = typeOfQuery,
        DbContext = dbContext,
        RequestJson = requestJson,
        AppId = appId
    });
}
And this is how I process and use my dbContext
var appId = miningResultEventArgs.AppId;
var requestJson = miningResultEventArgs.RequestJson;
var typeOfQuery = miningResultEventArgs.TypeOfQuery;

var requestType = miningResultEventArgs.DbContext.RequestType.FirstAsync(x => x.Id == typeOfQuery).Result;
var apiUserRequester = miningResultEventArgs.DbContext.ApiUsers.FirstAsync(x => x.AppId == appId).Result;

var apiRequest = new ApiUserRequest()
{
    ApiUser = apiUserRequester,
    RequestJson = requestJson,
    RequestType = requestType
};

miningResultEventArgs.DbContext.ApiUserRequests.Add(apiRequest);
await miningResultEventArgs.DbContext.SaveChangesAsync();
By using SaveChanges instead of SaveChangesAsync everything works.
My only idea is to create another dbContext by passing the previous DbContext's SQL connection string
var dbOptions = new DbContextOptionsBuilder<PartnerFiscalNumberContext>();
dbOptions.UseSqlServer(miningResultEventArgs.DbContext.Database.GetDbConnection().ConnectionString);

using (var dbContext = new PartnerFiscalNumberContext(dbOptions.Options))
{
    var appId = miningResultEventArgs.AppId;
    var requestJson = miningResultEventArgs.RequestJson;
    var typeOfQuery = miningResultEventArgs.TypeOfQuery;

    var requestType = await dbContext.RequestType.FirstAsync(x => x.Id == typeOfQuery);
    var apiUserRequester = await dbContext.ApiUsers.FirstAsync(x => x.AppId == appId);

    var apiRequest = new ApiUserRequest()
    {
        ApiUser = apiUserRequester,
        RequestJson = requestJson,
        RequestType = requestType
    };

    dbContext.ApiUserRequests.Add(apiRequest);
    await dbContext.SaveChangesAsync();
}
The latter code excerpt is just a small test to check my supposition: basically, I should pass the SQL connection string instead of the DbContext object.
I am not sure (in terms of best practice) whether I should pass a connection string and create a new DbContext object (disposing it with a using block), or whether I should approach this issue with a different mindset.
From what I know, a DbContext should be used for a limited set of operations and not for multiple purposes.
EDIT 01
I'm going to detail more thoroughly what I've been doing, down below.
I think I have an idea of why this error happens.
I have 2 controllers:
one that receives a JSON and, after de-serializing it, returns a JSON to the caller, and another that gets a JSON encapsulating a list of objects that I iterate over in an async way, returning an Ok() status.
The controllers are declared as async Task<IActionResult> and both feature an async execution of 2 similar methods.
The first one that returns a JSON executes this method
await ProcessFiscalNo(requestFiscalView.FiscalNo, dbContext);
The second one (the one that triggers this error)
foreach (string t in requestFiscalBulkView.FiscalNoList)
    await ProcessFiscalNo(t, dbContext);
Both methods (the ones defined previously) raise an OnRequestCompleted() event.
Within that method I execute the code from my post's beginning.
Within the ProcessFiscalNo method I DO NOT use any using contexts nor do I dispose the dbContext variable.
Within this method I only commit 2 major actions: either updating an existing SQL row or inserting one.
For updates, I select the row and tag it as modified by doing this:
dbContext.Entry(partnerFiscalNumber).State = EntityState.Modified;
or I insert the row:
dbContext.FiscalNumbers.Add(partnerFiscalNumber);
and finally I execute an await dbContext.SaveChangesAsync();
The error always gets triggered within the event handler (the one detailed at the beginning of this post) during the await dbContext.SaveChangesAsync() call,
which is pretty weird, since two lines before that I do awaited reads on my DB with EF:
var requestType = await dbContext.RequestType.FirstAsync(x => x.Id == typeOfQuery);
var apiUserRequester = await dbContext.ApiUsers.FirstAsync(x => x.AppId == appId);
dbContext.ApiUserRequests.Add(new ApiUserRequest() { ApiUser = apiUserRequester, RequestJson = requestJson, RequestType = requestType });
//this throws the error
await dbContext.SaveChangesAsync();
For some reason, calling await within the event handler lets the caller dispose the DbContext object.
Also, by re-creating the DbContext and not re-using the old one, I see a huge improvement in access speed.
Somehow, when I use the first controller and return the info, the DbContext object appears to get flagged for disposal, but for some unknown reason it still functions.
EDIT 02
Sorry for the bulk-ish content that follows, but I've included all of the areas where I use the dbContext.
This is how I'm propagating my dbContext to all my controllers that request it.
public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMemoryCache();
    services.AddOptions();

    var connection = @"Server=.;Database=CrawlerSbDb;Trusted_Connection=True;";
    services.AddDbContext<PartnerFiscalNumberContext>(options => options.UseSqlServer(connection));

    services.AddMvc();

    services.AddAuthorization(options =>
    {
        options.AddPolicy("PowerUser",
            policy => policy.Requirements.Add(new UserRequirement(isPowerUser: true)));
    });

    services.TryAddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    services.AddSingleton<IAuthorizationHandler, UserTypeHandler>();
}
In Configure, I'm using the dbContext for my custom middleware:
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    var context = app.ApplicationServices.GetService<PartnerFiscalNumberContext>();
    app.UseHmacAuthentication(new HmacOptions(), context);

    app.UseMvc();
}
In the custom middleware, I'm only using it for a query:
public HmacHandler(IHttpContextAccessor httpContextAccessor, IMemoryCache memoryCache, PartnerFiscalNumberContext partnerFiscalNumberContext)
{
    _httpContextAccessor = httpContextAccessor;
    _memoryCache = memoryCache;
    _partnerFiscalNumberContext = partnerFiscalNumberContext;

    AllowedApps.AddRange(
        _partnerFiscalNumberContext.ApiUsers
            .Where(x => x.Blocked == false)
            .Where(x => !AllowedApps.ContainsKey(x.AppId))
            .Select(x => new KeyValuePair<string, string>(x.AppId, x.ApiHash)));
}
In my controller's constructor I'm receiving the dbContext:
public FiscalNumberController(PartnerFiscalNumberContext partnerContext)
{
    _partnerContext = partnerContext;
}
This is my Post action:
[HttpPost]
[Produces("application/json", Type = typeof(PartnerFiscalNumber))]
[Consumes("application/json")]
public async Task<IActionResult> Post([FromBody]RequestFiscalView value)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState);

    var partnerFiscalNo = await _fiscalNoProcessor.ProcessFiscalNoSingle(value, _partnerContext);
    return Ok(partnerFiscalNo); // assumed: the original excerpt omitted the return statement
}
Within the ProcessFiscalNoSingle method I have the following usage: if the partner exists, I grab it; if not, I create and return it.
internal async Task<PartnerFiscalNumber> ProcessFiscalNoSingle(RequestFiscalView requestFiscalView, PartnerFiscalNumberContext dbContext)
{
    var queriedFiscalNumber = await dbContext.FiscalNumbers.FirstOrDefaultAsync(x => x.FiscalNo == requestFiscalView.FiscalNo && requestFiscalView.ForceRefresh == false) ??
                              await ProcessFiscalNo(requestFiscalView.FiscalNo, dbContext, TypeOfQuery.Single);

    OnRequestCompleted(typeOfQuery: (int)TypeOfQuery.Single, dbContextConnString: dbContext.Database.GetDbConnection().ConnectionString, requestJson: JsonConvert.SerializeObject(requestFiscalView), appId: requestFiscalView.RequesterAppId);

    return queriedFiscalNumber;
}
Further down in the code, there's the ProcessFiscalNo method where I use the dbContext
var existingItem =
    dbContext.FiscalNumbers.FirstOrDefault(x => x.FiscalNo == partnerFiscalNumber.FiscalNo);

if (existingItem != null)
{
    var existingGuid = existingItem.Id;
    partnerFiscalNumber = existingItem;
    partnerFiscalNumber.Id = existingGuid;
    partnerFiscalNumber.ChangeDate = DateTime.Now;
    dbContext.Entry(partnerFiscalNumber).State = EntityState.Modified;
}
else
{
    dbContext.FiscalNumbers.Add(partnerFiscalNumber);
}

// this always gets executed at the end of this method
await dbContext.SaveChangesAsync();
I also have an event called OnRequestCompleted() to which I pass my actual dbContext's connection string (after SaveChangesAsync() has run, if I updated/created an entity).
This is how I raise the event:
RequestCompleted?.Invoke(this, new MiningResultEventArgs()
{
    TypeOfQuery = typeOfQuery,
    DbContextConnStr = dbContextConnString,
    RequestJson = requestJson,
    AppId = appId
});
This is the notifier class (where the error occurs)
internal class RequestNotifier : ISbMineCompletionNotify
{
    public async void UploadRequestStatus(object source, MiningResultEventArgs miningResultArgs)
    {
        await RequestUploader(miningResultArgs);
    }

    /// <summary>
    /// API request results to DB
    /// </summary>
    /// <param name="miningResultEventArgs">EventArgs-derived class that contains requester info (see the MiningResultEventArgs class)</param>
    /// <returns></returns>
    private async Task RequestUploader(MiningResultEventArgs miningResultEventArgs)
    {
        //ToDo - fix the following bug: not being able to re-use the initial DbContext (the one used in the pipeline middleware and controller area),
        //ToDo - basically the bug forces me to re-create the DbContext object
        var dbOptions = new DbContextOptionsBuilder<PartnerFiscalNumberContext>();
        dbOptions.UseSqlServer(miningResultEventArgs.DbContextConnStr);

        using (var dbContext = new PartnerFiscalNumberContext(dbOptions.Options))
        {
            var appId = miningResultEventArgs.AppId;
            var requestJson = miningResultEventArgs.RequestJson;
            var typeOfQuery = miningResultEventArgs.TypeOfQuery;

            var requestType = await dbContext.RequestType.FirstAsync(x => x.Id == typeOfQuery);
            var apiUserRequester = await dbContext.ApiUsers.FirstAsync(x => x.AppId == appId);

            var apiRequest = new ApiUserRequest()
            {
                ApiUser = apiUserRequester,
                RequestJson = requestJson,
                RequestType = requestType
            };

            dbContext.ApiUserRequests.Add(apiRequest);
            await dbContext.SaveChangesAsync();
        }
    }
}
Somehow, when the dbContext reaches the event handler, it gets disposed (because I'm using await?).
Without recreating the object, I was seeing huge lag when I wanted to use it.
While writing this I had an idea: I upgraded my solution to 1.1.0 and I'm going to see if it behaves similarly.
Concerning why you get the error:
As pointed out in the comments by @set-fu, DbContext is not thread safe.
In addition to that, since there is no explicit lifetime management of your DbContext, your DbContext is going to get disposed whenever its owner sees fit.
Judging from your context, and your mention of a request-scoped DbContext, I suppose you inject your DbContext into your controller's constructor.
And since your DbContext is request scoped, it is going to be disposed as soon as your request is over, BUT since you have fired and forgotten your OnRequestCompleted events, there is no guarantee that your DbContext won't have been disposed by the time they run.
From there on, the fact that one of your methods succeeds and the other fails is, I think, sheer luck.
One method might be faster than the other and complete before the DbContext is disposed.
What you can do about this is to change the signature of your event handlers from async void to async Task.
This way you can await the RequestCompleted work within your controller, and that will guarantee that your controller/DbContext will not get disposed until the RequestCompleted task is finished.
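Since a plain .NET event cannot be awaited by the raiser, one way to act on this advice (a sketch, not the poster's actual code) is to declare the event with a Task-returning delegate and await every subscriber:
// Sketch: a Task-returning "event" that the raiser can await, so the request
// (and its DbContext) stays alive until all handlers have finished.
public event Func<MiningResultEventArgs, Task> RequestCompleted;

protected async Task OnRequestCompletedAsync(MiningResultEventArgs args)
{
    var handler = RequestCompleted;
    if (handler == null)
        return;

    // Await each subscriber; awaiting the multicast delegate directly
    // would only await the last one.
    foreach (Func<MiningResultEventArgs, Task> subscriber in handler.GetInvocationList())
        await subscriber(args);
}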
Concerning properly handling DbContexts:
There are two contradicting recommendations here from Microsoft, and many people use DbContexts in completely divergent manners.
One recommendation is to "dispose DbContexts as soon as possible", because having a DbContext alive occupies valuable resources like DB connections, etc.
The other states that one DbContext per request is highly recommended.
Those contradict each other, because if your request is doing a lot of things unrelated to the DB, then your DbContext is kept alive for no reason.
Thus it is a waste to keep your DbContext alive while your request is just waiting for random stuff to get done.
So many people who follow the first recommendation keep their DbContexts inside their repository pattern and create a new instance per database query:
public User GetUser(int id)
{
    User usr = null;
    using (Context db = new Context())
    {
        usr = db.Users.Find(id);
    }
    return usr;
}
They just get their data and dispose the context ASAP.
This is considered by MANY people an acceptable practice.
While it has the benefit of occupying your DB resources for the minimum time, it clearly sacrifices all the unit-of-work and "caching" candy EF has to offer.
So Microsoft's recommendation of one DbContext per request is clearly based on the fact that your unit of work is scoped to one request.
But in many cases, and I believe your case also, this is not true.
I consider logging a separate unit of work, so having a new DbContext for your post-request logging is completely acceptable (and that's the practice I also use).
As an example, in my project I have 3 DbContexts in 1 request for 3 units of work:
1. Do work
2. Write logs
3. Send emails to administrators
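On ASP.NET Core's built-in DI there is also an alternative to re-building a context from a connection string: open a new service scope for the logging unit of work. A minimal sketch, assuming an IServiceScopeFactory has been injected into the notifier as _scopeFactory:
// Sketch: resolve a fresh, independently disposed DbContext for the
// logging unit of work, instead of re-creating one from a connection string.
using (var scope = _scopeFactory.CreateScope())
{
    var dbContext = scope.ServiceProvider.GetRequiredService<PartnerFiscalNumberContext>();
    dbContext.ApiUserRequests.Add(apiRequest);
    await dbContext.SaveChangesAsync();
}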
I am new to both Ninject and the Vita ORM (vita docs) and am just having issues coming up with a binding strategy for dependency injection and would really appreciate any help.
Firstly, my application is split into 3 layers, namely the data layer (using the Vita ORM, so in essence a single EntityApp class similar to EF's DbContext), then a service layer with well-defined interfaces (and existing implementations using my current repository pattern with Dapper.NET), and finally a simple MVC web application with controllers calling off to the service layer.
Currently I am using Ninject constructor injection to inject the repo into the service and the service into the controller, scoping them accordingly. This works great because everything is interface-driven and there is no shared context between services/repositories.
But now, with the introduction of an "entity context", I need to create the context once in the app (so singleton scope), and then I would in essence like to open a new session per request and pass that to any service-layer objects that require it. Over and above this, the ORM needs to be initialized at application startup (or I suppose it can be done lazily, but at app start would be better).
Here is some code generated by Vita showing how to initialize the ORM:
class Program {
    public static MyEntityApp App;

    static void Main(string[] args) {
        Console.WriteLine(" Sample application for VITA-generated model. ");
        Init();

        //Open session and run query
        var session = App.OpenSession();
        var query = from ent in session.EntitySet<IConnections>() // just a random entity
                    // where ?condition?
                    select ent;
        var entities = query.Take(5).ToList();
        Console.WriteLine("Loaded " + entities.Count + " entities.");
        foreach(var ent in entities)
            Console.WriteLine(" Entity: " + ent.ToString()); // change to something more meaningful

        Console.WriteLine("Press any key ...");
        Console.ReadKey();
    }

    private static void Init() {
        App = new MyEntityApp();
        App.CacheSettings.AddCachedTypes(CacheType.FullSet /* , <fully cached entity types> */ );
        App.CacheSettings.AddCachedTypes(CacheType.Sparse /* , <sparsely cached entity types> */ );
        var connString = @".......";
        var driver = new Vita.Data.MsSql.MsSqlDbDriver();
        App.LogPath = "_appLog.log";
        var dbSettings = new DbSettings(driver, DbOptions.Default, connString, upgradeMode: DbUpgradeMode.Always);
        App.ConnectTo(dbSettings);
    }
}
As you can see, they initialize and set a static variable which holds a reference to the context container. As can also be seen above, you need to call var session = App.OpenSession(); to work with the context, so I was hoping to create one session per request and then inject that session into the service objects' constructors.
So here is what I have done so far
/*Map the Vita ORM session into the request scope*/
kernel.Bind<MyEntities>().ToSelf().InSingletonScope();
I am assuming that will call the constructor, and inside there I have initialized the context correctly (this should also only be called once).
Then in the service objects I want to do something like this. Here is the service implementation:
private IEntitySession _Session { get; set; }

public VitaSettingsServiceImpl(IEntitySession Session)
{
    _Session = Session;
}
And here is my attempt at injecting that session object...
kernel.Bind<ISettingsService>().To<VitaSettingsServiceImpl>().InRequestScope().WithConstructorArgument("Session", [WANT TO CALL MYENTITIES.OpenSession() HERE]);
As you can see, it is that last binding that is stumping me: how do I bind a constructor parameter to a method call on an existing singleton-bound object?
Like I said in the beginning, I am VERY green at this and maybe I am going about it all wrong, but I have scoured the web and can't find any info about these two technologies being used together, so any help would be greatly appreciated.
So, for all the lonely souls trolling the web with similar issues, here is what I eventually came up with:
/*Map the Vita ORM session into the request scope*/
kernel.Bind<SorbetEntities>().ToSelf().InSingletonScope();
/*Map all the services to their respective implementations*/
kernel.Bind<ISettingsService>().To<sorbet.Vita.VitaSettingsServiceImpl>().InRequestScope().WithConstructorArgument("Session", CreateVitaSession);
and then a static factory method that the binding calls to get the connection context:
private static object CreateVitaSession(IContext context)
{
    return context.Kernel.Get<SorbetEntities>().OpenSession();
}
And that's it. Now I get a new connection context per request and I am a happy camper.
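As a footnote, I believe the same wiring can be expressed without WithConstructorArgument by binding IEntitySession itself to that factory method (an equivalent sketch, not tested against Vita):
/* Equivalent sketch: bind IEntitySession directly, so services can take it
   in their constructors without WithConstructorArgument. */
kernel.Bind<IEntitySession>()
      .ToMethod(ctx => ctx.Kernel.Get<SorbetEntities>().OpenSession())
      .InRequestScope();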
In an attempt to do some test-driven development, I've created the most basic, buildable method:
public class NoteService : INoteService
{
    public IEnumerable<Annotation> GetNotes(ODataQueryOptions oDataQueryOptions)
    {
        return new List<Annotation>();
    }
}
When trying to unit test it, it seems impossible to create an instance of ODataQueryOptions:
[TestFixture]
public class NoteServiceTests
{
    [Test]
    public void GetNotes_Returns_IEnumerable_Of_Notes()
    {
        var sut = new NoteService();
        var queryOptions = new ODataQueryOptions(new ODataQueryContext(new EdmCoreModel(), new EdmCollectionType())); // new new new etc??

        Assert.That(() => sut.GetNotes(queryOptions), Is.InstanceOf<IEnumerable<Annotation>>());
    }
}
How do you create a simple instance of the object ODataQueryOptions in order to inject it for unit tests?
Will this work?
var request = new HttpRequestMessage(HttpMethod.Get, "");
var context = new ODataQueryContext(EdmCoreModel.Instance, typeof(int));
var options = new ODataQueryOptions(context, request);
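Dropped into the question's test, that would look roughly like this (a sketch; it relies on the non-generic ODataQueryOptions constructor that takes an HttpRequestMessage, from the ASP.NET Web API OData package):
[Test]
public void GetNotes_Returns_IEnumerable_Of_Notes()
{
    var sut = new NoteService();

    // "http://localhost" is an arbitrary placeholder URI for the fake request.
    var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost");
    var context = new ODataQueryContext(EdmCoreModel.Instance, typeof(int));
    var queryOptions = new ODataQueryOptions(context, request);

    Assert.That(() => sut.GetNotes(queryOptions), Is.InstanceOf<IEnumerable<Annotation>>());
}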
I wanted to do something similar and found a workable solution:
My use case
I have a web service that passes OData query parameters down to our CosmosDB document client which translates them into a CosmosDB SQL query.
I wanted a way to write integration tests directly on the CosmosDB client without having to make outgoing calls to other downstream services.
Approaches
Tried mocking ODataQueryParameters using Moq, but because it's a class and not an interface, Moq can't properly instantiate all of the properties that I need.
Tried instantiating one directly, but outside of an MVC application it is extremely difficult to build the required EdmModel.
Wondered if there was a way to do it without constructing an EdmModel.
I finally figured out a way to do it. This may not solve every use case, but here's the solution that I landed on for my needs:
// TEntryPoint is the web app's entry point (e.g. Startup); the class must be
// generic so that the type parameter is in scope.
public static class ODataQueryOptionsBuilder<TEntryPoint>
    where TEntryPoint : class
{
    private static WebApplicationFactory<TEntryPoint> _app =
        new WebApplicationFactory<TEntryPoint>();

    public static ODataQueryOptions<T> Build<T>(string queryString)
        where T : class
    {
        var httpContext = new DefaultHttpContext();
        httpContext.Request.QueryString = new QueryString(queryString);
        httpContext.RequestServices = _app.Services;

        var modelBuilder = new ODataConventionModelBuilder();
        modelBuilder.EntityType<T>();
        var model = modelBuilder.GetEdmModel();

        var context = new ODataQueryContext(
            model,
            typeof(T),
            new Microsoft.AspNet.OData.Routing.ODataPath());

        return new ODataQueryOptions<T>(context, httpContext.Request);
    }
}
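Usage in a test would then look something like this (Startup and the query string are assumptions for illustration):
// Hypothetical usage; Startup is the web application's entry point class.
var queryOptions = ODataQueryOptionsBuilder<Startup>.Build<Annotation>("?$top=5");
var notes = new NoteService().GetNotes(queryOptions);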
Conclusion
I realized that all I needed to get the ODataQueryOptions was a test server that was set up exactly like my web service. Once I realized that, the solution was simple: use WebApplicationFactory to create the test server, and then let OData use the IServiceProvider from that application to build the ODataQueryOptions that I needed.
Drawbacks
Be aware that this solution is only as performant as your TEntryPoint, so if your application takes a long time to start up, unit/integration tests will take a long time to run.