I've taken over a code base from someone else. This is a web application built on Angular 8 (client) and .NET Core 3.0 (server).
Brief description of the application:
Frequent notifications are stored in a database, with an SqlTableDependency attached to it for detecting new notifications.
When new notifications occur, the server prompts all clients to request an updated list based on their custom filters. These client-to-server requests happen over HttpPost with the filter as a parameter.
The problem occurs when too many notifications arrive at once. Say, when 10 new notifications arrive, the server sends 10 update prompts to the client at the same time, causing the client to immediately send 10 HttpPost requests to the API.
The API takes the filter from the POST, uses it to query the database, and returns the filtered result to the calling client. However, when 10 of these arrive at the same time, it causes a DbContext error; more specifically:
A second operation started on this context before a previous operation
completed. This is usually caused by different threads using the same
instance of DbContext.
public class AlarmController : Controller
{
private readonly IAlarmRepository alarmRepo;
private readonly ISiteRepository siteRepo;
public AlarmController(IAlarmRepository alarmRepo, ISiteRepository siteRepo)
{
this.alarmRepo = alarmRepo;
this.siteRepo = siteRepo;
}
[HttpPost("filter")]
public async Task<IActionResult> FilterAlarm([FromBody] AlarmRequest alarmRequest)
{
var snmpReceiverList = await this.alarmRepo.GetFilteredSNMPReceiverHistory(alarmRequest.FromDate, alarmRequest.ToDate);
var siteList = await this.siteRepo.GetSiteListFiltered(int.Parse(alarmRequest.Filter), alarmRequest.SiteName);
return Ok(await SNMPHistoryMapping.DoMapping(siteList, snmpReceiverList));
}
}
This HttpPost returns an Ok() with a list of the data requested, in which some mapping is done:
IEnumerable<Site> sites = siteList;
IEnumerable<SnmpreceiverHistory> histories = snmpReceiverList;
IEnumerable<SNMPHistoryResponse> data = (from s in sites
join rh in histories on s.Address equals rh.Ipaddress
where (priority <= 0 || s.SitePriority == priority)
&& (string.IsNullOrEmpty(trap) || rh.AlarmDescription.Contains(trap))
select new SNMPHistoryResponse()
{
AlarmDescription = rh.AlarmDescription,
EventType = rh.EventType,
OnOffStatus = rh.OnOffStatus,
ParentSiteName = TraceFullParentDescription(s.Parent),
ReceiveTime = rh.ReceiveTime,
RepeatCount = rh.RepeatCount,
SiteName = s.Description,
SitePriority = s.SitePriority,
Status = AlarmStatus.GetStatusDescription(rh.EventType),
Value = rh.Value
});
When multiple of these [HttpPost("filter")] requests arrive at the same time, each one is handled on its own thread, but they all use the same DbContext instance, so the next query starts before the previous one has completed.
I can solve it by putting delays between each request from the client, but I want a more robust server-side solution that effectively processes these specific requests sequentially.
Note that this is EF Core and .NET Core 3.0, which does not have a SynchronizationContext.
I believe the comment posted by Panagiotis Kanavos is correct:
In this case a single DbContext is created by dependency injection, don't do that. All examples and tutorials show that the DbContexts are Scoped.
This catches me often, and it actually just did again. I wasn't using dependency injection but was sharing the DbContext around because I was being lazy. It's best to properly set up dependency injection and do it the right way, e.g.:
IHostBuilder host = CreateHostBuilder(args);
host.ConfigureServices(services => {
services.AddSingleton(service);
// other stuff...
// Then the context:
services.AddScoped<DataContext>(x => {
DbContextOptionsBuilder<DataContext> dbBuilder =
new DbContextOptionsBuilder<DataContext>();
dbBuilder.UseNpgsql(connstr);
return new DataContext(dbBuilder.Options);
});
});
// Start the host
host.Build().Run();
The documentation for AddScoped is here, and in true Microsoft form, it is impossible to read or digest. Stack Overflow does a better job of explaining it.
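For EF Core specifically, there is also a framework helper that registers the context with a Scoped lifetime and wires up the options for you, which is a more idiomatic equivalent of the manual AddScoped registration above (a sketch, assuming the same DataContext and connstr):

```csharp
using Microsoft.EntityFrameworkCore;

// AddDbContext registers DataContext with a Scoped lifetime by default,
// so each HTTP request gets its own instance and concurrent requests
// never share one context.
services.AddDbContext<DataContext>(options => options.UseNpgsql(connstr));
```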
Related
We use a microservice architecture with .NET Core 6. One service gets its configs from another API instead of from the appsettings.json file.
The problem is the latency on every request and on every event handled from RabbitMQ.
We also saw these logs:
11 Feb 2023 17:41:50.406
Executed controller factory for controller Basket.API.Controllers.MonitoringController (Basket.API)
11 Feb 2023 17:41:49.305
Executing controller factory for controller Basket.API.Controllers.MonitoringController (Basket.API)
showing that there was more than a second of delay in creating a controller.
This is where there is a possibility of latency (my repository class constructor):
public RedisBasketRepository(ILoggerFactory loggerFactory,
IOptions<BasketSettings> settings,
IConnectionMultiplexer redis,
IOptionsSnapshot<Dictionary<string, BrandsConfigurations>> brandsConfigurations,
IInternalAPICallerService internalAPICallerService,
IHttpClientFactory clientFactory)
{
_logger = loggerFactory.CreateLogger<RedisBasketRepository>();
_redis = redis;
_database = redis.GetDatabase();
_settings = settings?.Value ?? throw new ArgumentNullException(nameof(settings));
_brandsConfigurations = brandsConfigurations.Value;
_internalAPICallerService = internalAPICallerService;
_clientFactory = clientFactory;
}
There were no errors in the logs.
Because we also use Redis, and there was a possibility of thread starvation in the library used to connect to it, we also used the following config to set the minimum number of thread-pool threads, but the problem still remained:
ThreadPool.SetMinThreads(100, 100);
We searched and tried a lot to solve this problem.
IOptionsSnapshot<Dictionary<string, BrandsConfigurations>> brandsConfigurations,
Using the above parameter results in the following behavior:
Options are computed once per request when accessed and cached for the
lifetime of the request.
If there are many configs received through the API at app startup, and IOptionsSnapshot is used in multiple constructors, there will be a lot of delay due to this recomputation on every request.
If you use IOptions instead of IOptionsSnapshot, no recomputation is done per request and no delay is introduced.
The trade-off is that you have to update the config manually if it changes.
for more info use this link
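If you still need the configuration to refresh without per-request recomputation, IOptionsMonitor<T> is a middle ground: it is a singleton whose value is recomputed only when the underlying configuration source reports a change, not on every request. A minimal sketch (the BrandsConfigurations dictionary mirrors the constructor parameter above; the rest is illustrative):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Options;

public class BrandsConfigurations { /* config properties elided */ }

public class BrandsConfigProvider
{
    private readonly IOptionsMonitor<Dictionary<string, BrandsConfigurations>> _brands;

    public BrandsConfigProvider(
        IOptionsMonitor<Dictionary<string, BrandsConfigurations>> brands)
    {
        // No per-request computation: the monitor caches the current value
        // and refreshes it only when the configuration source changes.
        _brands = brands;

        // Optional: react to configuration changes instead of polling.
        _brands.OnChange(updated =>
        {
            // e.g. invalidate any caches derived from the old config
        });
    }

    public BrandsConfigurations Get(string key) => _brands.CurrentValue[key];
}
```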
I'm trying to create a Microsoft Graph subscription inside of Startup.ConfigureServices(), but this requires an async operation.
After doing some research it turns out that .NET does not and probably will never support async operations inside of Startup. At this point I'm just willing to block the thread to create the subscription.
However, I have run into another issue. The controllers are not set up inside of Startup.ConfigureServices, even though I am creating the subscription after calling services.AddControllers(). This means the Graph API never receives the 200 from my controller that it needs to register the subscription, since (I assume) the controller hasn't been set up yet.
Is what I am trying to accomplish even possible with the current structure of the .NET framework?
The only workaround I've found is to create the Subscription object without calling the Graph API, call services.AddSingleton<Subscription>(provider => subscription), dependency-inject the Subscription object into my controller, and then create the subscription by calling the Graph API via an API endpoint inside the controller. Then I must update all of the fields of the dependency-injected subscription, since I can't overwrite the object itself with subscription = await _gsc.Subscriptions.Request().AddAsync(_sub); (_gsc being an instance of GraphServiceClient).
This is what I would like to do:
Startup.ConfigureServices()
Task<Subscription> subscriptionTask = gsc.Subscriptions.Request().AddAsync(subscription);
subscription = subscriptionTask.GetAwaiter().GetResult();
services.AddSingleton<GraphServiceClient>(provider => gsc);
services.AddSingleton<Subscription>(provider => subscription);
This is my current workaround:
Startup.ConfigureServices()
services.AddSingleton<GraphServiceClient>(provider => gsc);
services.AddSingleton<Subscription>(provider => subscription);
One of my controller endpoints
public async Task<IActionResult> CreateMSGraphSubscription()
{
try
{
Subscription newSubscription = await _gsc.Subscriptions.Request().AddAsync(_sub);
// Update our singleton instance.
_sub.Id = newSubscription.Id;
_sub.AdditionalData = newSubscription.AdditionalData;
_sub.ApplicationId = newSubscription.ApplicationId;
_sub.CreatorId = newSubscription.CreatorId;
_graphRenewalSettings.IsActive = true;
}
catch (Exception e)
{
return new BadRequestObjectResult(e);
}
return new OkObjectResult(_sub);
}
My problem is that I have to call one of my API endpoints after the application has started, rather than having the graph subscription ready when the app has started.
UPDATE
I figured out that I need to inject into Startup.Configure() IHostApplicationLifetime lifetime and then use
lifetime.ApplicationStarted.Register(async () =>
{
Subscription newSubscription = await gsc.Subscriptions.Request().AddAsync(sub);
// Update our singleton instance.
sub.Id = newSubscription.Id;
sub.AdditionalData = newSubscription.AdditionalData;
sub.ApplicationId = newSubscription.ApplicationId;
sub.CreatorId = newSubscription.CreatorId;
graphRenewalSettings.IsActive = true;
});
at the end of Startup.Configure() to register the graph subscription with the Graph API once the app has started up. (sub and gsc are my Subscription and GraphServiceClient objects, respectively, and I inject these into Startup.Configure() also)
This is not ideal since I would like to inject an already registered Subscription object instead of updating all of the fields like I do above, but I could not find a better way to do this.
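A tidier home for the same logic is a hosted service, which also avoids the async lambda inside ApplicationStarted.Register. This is only a sketch under the same assumptions as the update above (Subscription and GraphServiceClient registered as singletons; the class name is hypothetical); note it still has to defer until ApplicationStarted, because Graph validates the notification URL when the subscription is created, so the server must already be listening:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Graph;

public class GraphSubscriptionInitializer : IHostedService
{
    private readonly GraphServiceClient _gsc;
    private readonly Subscription _sub;
    private readonly IHostApplicationLifetime _lifetime;

    public GraphSubscriptionInitializer(
        GraphServiceClient gsc, Subscription sub, IHostApplicationLifetime lifetime)
    {
        _gsc = gsc;
        _sub = sub;
        _lifetime = lifetime;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Defer until the server is accepting requests, so the Graph
        // validation callback can reach our controller endpoint.
        _lifetime.ApplicationStarted.Register(() => _ = RegisterAsync());
        return Task.CompletedTask;
    }

    private async Task RegisterAsync()
    {
        Subscription created = await _gsc.Subscriptions.Request().AddAsync(_sub);
        // Update the singleton instance, as in the original workaround.
        _sub.Id = created.Id;
        _sub.AdditionalData = created.AdditionalData;
        _sub.ApplicationId = created.ApplicationId;
        _sub.CreatorId = created.CreatorId;
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
```

Registered in ConfigureServices with services.AddHostedService<GraphSubscriptionInitializer>();.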
There are extension functions for submitting queries asynchronously in Gremlin.Net; some take a string, which is not recommended, and others use RequestMessage, which is not as readable as using GraphTraversal functions.
Is there a way to submit a query like the one below asynchronously, without submitting a string or RequestMessage?
var res = _graphTraversalSource.V().Has("name", "Armin").Out().Values<string>("name").ToList();
A little more context
I'm writing an API that queries AWS Neptune. Here's how I get the GraphTraversalSource in the constructor of a singleton service (not sure if I should make the RemoteConnection a singleton and generate a GraphTraversalSource for each query or this is the right approach):
private readonly GraphTraversalSource _graphTraversalSource;
public NeptuneHandler(string endpoint, int port)
{
var gremlinClient = new GremlinClient(new GremlinServer(endpoint, port));
var remoteConnection = new DriverRemoteConnection(gremlinClient);
_graphTraversalSource = AnonymousTraversalSource.Traversal().WithRemote(remoteConnection);
}
You can execute a traversal asynchronously with the Promise() terminator step
var names = await g.V().Has("name", "Armin").Out().Values<string>("name").Promise(t => t.ToList());
Promise() takes a callback as its argument that calls the usual terminator step you want to be executed for your traversal, which is ToList in your case. If you only want a single result back, you can replace ToList() with Next().
Note that I renamed the variable for the graph traversal source to g as that is the usual naming convention for Gremlin.
As I already mentioned in my comment, it is recommended to reuse this graph traversal source g across your application as it can contain configuration that applies to all traversals you want to execute. It also contains the DriverRemoteConnection which uses a connection pool for the communication with the server. By reusing g, you also use the same connection pool for all traversals in your application.
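To address the "not sure if I should make the RemoteConnection a singleton" part of the question: registering one GraphTraversalSource for the whole application is the usual approach. A hypothetical DI registration sketch (the endpoint and port are placeholders; the construction mirrors the question's constructor):

```csharp
using Gremlin.Net.Driver;
using Gremlin.Net.Driver.Remote;
using Gremlin.Net.Process.Traversal;
using Microsoft.Extensions.DependencyInjection;

// Register a single GraphTraversalSource for the whole application so all
// traversals share one DriverRemoteConnection and its connection pool.
services.AddSingleton<GraphTraversalSource>(_ =>
{
    var gremlinClient = new GremlinClient(new GremlinServer("my-neptune-endpoint", 8182));
    var remoteConnection = new DriverRemoteConnection(gremlinClient);
    return AnonymousTraversalSource.Traversal().WithRemote(remoteConnection);
});
```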
How can I allow SignalR to push updates from a SQL Server database to the browser using Entity Framework 6?
Here's my action method:
public ActionResult Index()
{
var currentGates = _ctx.Transactions
.GroupBy(item => item.SubGateId)
.SelectMany(group => group.OrderByDescending(x => x.TransactionDateTime)
.Take(1))
.Include(g => g.Card)
.Include(g => g.Student)
.Include(g => g.Student.Faculty)
.Include(g => g.Student.Department)
.Include(g => g.SubGate)
.ToList();
return View(currentGates);
}
After a lot of searching, the only result I got is this:
ASP.NET MVC 5 SignalR, SqlDependency and EntityFramework 6
I have tried the suggested way but it didn't work. In addition to that, I found a very important security issue concerning storing sensitive data in a hidden field!
My question is: How can I update my view according to any Insert on Transaction table?
So basically what you need to do is override the SaveChanges() method and invoke your SignalR function:
public class ApplicationDbContext : IdentityDbContext<IdentityUser>
{
public override int SaveChanges()
{
var entities = ChangeTracker.Entries().Where(x => x.Entity is Transactions && x.State == EntityState.Added);
IHubContext hubContext = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
foreach (var entity in entities)
{
hubContext.Clients.All.notifyClients(entity.Entity);
}
return base.SaveChanges();
}
}
Normally I would plug into either the service creating the Transaction or the SaveChanges event, as #Vince suggested. However, because of your new requirement:
the problem is: I don't have this control over the sdk that pushes the transactions of students to db, so I hope that there's some way to work directly over the db table.
In your case you can just watch the table using SqlTableDependency:
Note: Be careful using SqlDependency class - it has problems with memory leaks.
// Keep a reference alive for the lifetime of the application --
// disposing the dependency stops the monitoring, so don't wrap it
// in a short-lived using block.
var tableDependency = new SqlTableDependency<Transaction>(conString);
tableDependency.OnChanged += TableDependency_Changed;
tableDependency.Start();
void TableDependency_Changed(object sender, RecordChangedEventArgs<Transaction> e)
{
if (e.ChangeType != ChangeType.None)
{
var changedEntity = e.Entity;
//You'll need to change this logic to send only to the people you want to
IHubContext hubContext = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
hubContext.Clients.All.notifyClients(changedEntity);
}
}
Edit
It seems you may have other dependencies e.g include you want with your results.
So what you can do is resolve the Id from the entity and then call Get using EntityFramework.
if (e.ChangeType != ChangeType.None)
{
var changedEntity = e.Entity;
var id = GetPrimaryKey(changedEntity);
var fullEntity = _ctx.Transactions.Find(id);
}
Edit
Other Methods of approach.
If your entities have a last-updated field, you can scan the table for changes on a timer. The client keeps a timer; when it elapses, the client sends the last time it checked for changes, and the server returns all entities whose last-updated timestamp is greater than the time passed in.
Although you don't have access to the EF, you probably have access to the web API requests. Whenever someone calls an update or create method, just grab the id of the changed entity and push all the changed ids onto a background service which will send the SignalR notifications.
The only issue now is obtaining the primary key. That can be done with a manual mapping of the types to their id property, or by using the metadata from the model, but that's a bit more complicated.
Note you probably can modify this so it works generically on all tables
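The "metadata from the model" route mentioned above can be sketched like this for EF6. GetPrimaryKey here is a hypothetical helper (matching the name used in the snippet above); it reads the key column name from the model metadata and pulls the value off the entity via reflection, so it works even for entities that aren't attached to the context:

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public static class KeyHelper
{
    // Resolve a Transaction's primary key through the EF6 model metadata.
    // Single-column keys only; composite keys would need all KeyMembers.
    // A fully generic version (any entity type) needs reflection over
    // CreateObjectSet<T>, which is the "more complicated" part.
    public static object GetPrimaryKey(DbContext ctx, Transaction entity)
    {
        var objectContext = ((IObjectContextAdapter)ctx).ObjectContext;
        var entitySet = objectContext.CreateObjectSet<Transaction>().EntitySet;
        var keyName = entitySet.ElementType.KeyMembers[0].Name;
        return entity.GetType().GetProperty(keyName).GetValue(entity);
    }
}
```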
I understand you have the following dataflow:
6x card reader devices
a 'manager' receives the card reader data and pushes it to
a SQL Server 'transaction' table
All of this is 3rd party. Your goal is to have a custom ASP.NET web page that displays the most recent record received by a card reader.
So the issue basically is, that your ASP.NET service needs to be notified of changes in a monitored database table.
Quick-and-dirty approach:
Implement a database trigger on your big 'transaction' table that updates a lightweight 'state' table
CREATE TABLE state (
GateId INT NOT NULL,
UserName VARCHAR (20) NOT NULL,
PRIMARY KEY (GateId)
);
This table is intended to have 1 record per gate only. 6 gates -> 6 records in this table.
Change your ASP.NET service to poll the 'state' table in an interval of e.g. 200ms. When doing such a hack, make sure the table you are polling does not have many records! Once you detect a change, notify your SignalR clients.
IMHO, a dataflow via a database is a bad design decision (or limitation). It would be better if your 'manager' not only pushed data to the database but subsequently notified all SignalR clients. This dataflow is basically what #Vince's answer assumes.
Edit:
As you have posted the actual device you are using, I'd encourage you to double-check whether you can connect to the card reader directly. It seems there are ways to register some kind of callback once the device has read a student card. Do this with the sole goal of achieving a straight dataflow / architecture like this:
-> connect 6x card readers to your
-> ASP.Net service which at a single point in your code:
-> updates the database
-> updates the Signal R clients
http://kb.supremainc.com/bs2sdk/doku.php?id=en:getting_started
https://github.com/supremainc/BioStar2_device_SDK
You might ask the vendor for support, since the C# documentation is rather sparse :|
I'm using TPL to send emails to end-users without delaying the API response. I'm not sure which method should be used, since I'm dealing with the db context here. I went with method 2 because I wasn't sure the db context would still be available by the time the task runs, so I created a new EF object; or maybe I'm doing it all wrong.
public class OrdersController : ApiController {
private AllegroDMContainer db = new AllegroDMContainer();
public HttpResponseMessage PostOrder(Order order) {
// Creating a new EF object and adding it to the database
Models.Order _order = new Models.Order{ Name = order.Name };
db.Orders.Add(_order);
db.SaveChanges(); // persist so the generated ID is available to Find below
/* Method 1 */
Task.Factory.StartNew(() => {
_order.SendEmail();
});
/* Method 2 */
Task.Factory.StartNew(() => {
Models.Order rOrder = db.Orders.Find(_order.ID);
rOrder.SendEmail();
});
return Request.CreateResponse(HttpStatusCode.Created);
}
}
Both methods are wrong, because you're starting a fire-and-forget operation on a pool thread inside the ASP.NET process.
The problem is, an ASP.NET host is not guaranteed to stay alive between handling HTTP responses. E.g., it can be automatically recycled, manually restarted or taken out of the farm. In which case, the send-mail operation would never get completed and you wouldn't get notified about it.
If you need to speed up the response delivery, consider outsourcing the send-mail operation to a separate WCF or Web API service. A related question: Fire and forget async method in asp.net mvc.
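If the mail must be sent from within the same ASP.NET (System.Web) process anyway, HostingEnvironment.QueueBackgroundWorkItem (available from .NET 4.5.2) at least registers the work with the runtime, which then tries to delay shutdown until it completes. It is still not a durability guarantee, just less fragile than a bare Task.Factory.StartNew. A sketch along the lines of the question's Method 2 (fresh context per task; AllegroDMContainer and _order are from the question):

```csharp
using System.Web.Hosting;

// Queue the send-mail work so ASP.NET knows about it and will try to
// delay recycling until it finishes (not a durability guarantee).
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    // Use a fresh context: the request-scoped one may be disposed by now.
    using (var db = new AllegroDMContainer())
    {
        Models.Order rOrder = await db.Orders.FindAsync(cancellationToken, _order.ID);
        rOrder.SendEmail();
    }
});
```

For anything that genuinely must not be lost, a durable queue plus a separate worker process remains the robust option, as the answer says.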