I'm in a situation where two simultaneous calls write to the session of an ASP.NET Core application (running on the full .NET Framework), and one of the session variables gets overwritten.
Given the following controller code, assume the long session call arrives first, the short session call arrives 200 ms later, and 800 ms after that (once the long call is done) the result of both sessions is requested.
[HttpPost("[action]")]
public async Task<IActionResult> TestLongSession() {
    HttpContext.Session.SetString("testb", "true");
    // If we do this delay BEFORE the session ("testb") is set, then all is fine.
    await Task.Delay(1000);
    return Ok();
}

[HttpPost("[action]")]
public async Task<IActionResult> TestShortSession() {
    HttpContext.Session.SetString("testa", "true");
    return Ok();
}

[HttpGet("[action]")]
public async Task<IActionResult> TestResultOfBothSessions() {
    string a = HttpContext.Session.GetString("testa");
    string b = HttpContext.Session.GetString("testb");
    return Ok($"A: {a}, B: {b}");
}
The result of the final call (TestResultOfBothSessions) is "A: , B: true".
The question, then, is: did I miss something that would make the session work as expected (i.e., return "A: true, B: true")?
Obviously I could remove the delay and all would be fine, but in the real application there's a call that can potentially take some time, and I'd prefer not to write the session variable at a later point (I guess I could, with a bit of custom error handling, but the problem would remain that I could no longer trust the ASP.NET session to work with simultaneous calls).
Edit: The typescript code that calls these endpoints from the browser:
this.service.testLongSession().subscribe(() => {
    this.service.testBothSessions().subscribe((result: string) => {
        console.log(result);
    });
});
setTimeout(() => {
    this.service.testShortSession().subscribe();
}, 200);
I believe the behavior you observe is what the ASP.NET authors intended. Looking at the interfaces that session stores need to implement, namely ISession and ISessionStore, I see no synchronization mechanism that would prevent data from being overwritten by simultaneous requests.
The benefit of such a simple interface is that it's much easier to implement, and can be easily implemented by a variety of caches and databases.
ASP.NET 4 had a much more complex session store base class SessionStateStoreProviderBase that included locking logic, but it was really challenging to implement.
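To see why last-write-wins falls out of such a simple interface, here is a minimal console sketch (my own illustration, not the actual SessionMiddleware code) of two overlapping requests that each load a snapshot of the session at request start, modify it, and commit the whole dictionary back at request end:

```csharp
using System;
using System.Collections.Generic;

class SessionRaceDemo
{
    // The committed session state, as a backing store would hold it.
    static Dictionary<string, string> store = new Dictionary<string, string>();

    static void Main()
    {
        // The long request loads its snapshot first.
        var longSnapshot = new Dictionary<string, string>(store);
        longSnapshot["testb"] = "true";

        // 200 ms later, the short request loads its own snapshot (which does
        // not contain "testb" yet), sets its key, and commits immediately.
        var shortSnapshot = new Dictionary<string, string>(store);
        shortSnapshot["testa"] = "true";
        store = shortSnapshot;

        // 800 ms later, the long request finishes and commits its stale
        // snapshot, silently discarding "testa": last write wins.
        store = longSnapshot;

        Console.WriteLine($"A: {store.GetValueOrDefault("testa")}, B: {store.GetValueOrDefault("testb")}");
        // Prints: A: , B: true
    }
}
```

The same shape of race occurs with the real distributed-cache-backed session, which loads the session blob when the request starts and writes the whole blob back when the request completes.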
Related
I have a controller where the required functionality is to do two things in one call: first we take the input and make a call to an external application, then we respond to the caller with an OK ("we are working on it") and release them. When the external application responds, we take its response and save it to the db. I am using a Task.Delay, as follows:
Part 1
[HttpPost]
public async Task<IActionResult> ProcessTransaction(Transactions transactions)
{
    // do some processing
    TransactionResults results = new TransactionResults();
    Notify(transactions, results);
    return Ok("We are working on it, you will get a notification");
}
The delayed task
private void Notify(Transactions transactions, TransactionResults results)
{
    Task.Delay(10000).ContinueWith(t => SendNotification(transactions, results));
}
on the SendNotification I am attempting to save the results
private void SendNotification(Transactions transactions, TransactionResults results)
{
    // some processing
    _context.Add(results); // this gives an error: context has already been disposed
    _context.SaveChanges();
}
Is there a better way to do this, or a way to re-instantiate the context?
I managed to work around the problem I was facing: I created an endpoint that I call once the notification results come back, so the data is saved in that callback rather than at the original event. Once the controller has responded with an Ok, it is disposed, and it's difficult to re-instantiate it. The callback workaround works for now; I will update if I find another way to do it.
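If the continuation has to stay in-process, one common ASP.NET Core pattern is to resolve a fresh context from a new DI scope inside the delayed task instead of capturing the request-scoped one. This is only a sketch: IServiceScopeFactory is the real DI type, but MyDbContext and the surrounding wiring are assumptions about your setup:

```csharp
// Sketch, assuming ASP.NET Core DI with an EF Core context registered as MyDbContext.
// _scopeFactory would be injected through the controller's constructor.
private readonly IServiceScopeFactory _scopeFactory;

private void Notify(Transactions transactions, TransactionResults results)
{
    Task.Delay(10000).ContinueWith(_ =>
    {
        // A new scope yields a context that is not tied to the (already
        // completed and disposed) request that started the work.
        using (var scope = _scopeFactory.CreateScope())
        {
            var context = scope.ServiceProvider.GetRequiredService<MyDbContext>();
            context.Add(results);
            context.SaveChanges();
        }
    });
}
```

Note that this is still fire-and-forget: if the app pool recycles during the delay, the work is lost, which is why the callback endpoint is the more robust option.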
I have a very simple piece of code, but what it does is completely weird. It is a simple cache abstraction and goes like this:
public class CacheAbstraction
{
    private MemoryCache _cache;

    public CacheAbstraction()
    {
        _cache = new MemoryCache(new MemoryCacheOptions { });
    }

    public async Task<T> GetItemAsync<T>(TimeSpan duration, Func<Task<T>> factory,
        [CallerMemberName] string identifier = null) where T : class
    {
        return await _cache.GetOrCreateAsync<T>(identifier, async x =>
        {
            x.SetAbsoluteExpiration(DateTime.UtcNow.Add(duration));
            T result = null;
            result = await factory();
            return result;
        });
    }
}
Now the fun part: I'm passing expiration durations of 1h - 1d
If I run it in a test suite, everything is fine.
If I run it as a .net core app, the expiration is always set to "now" and the item expires on the next cache check. WTF!?
I know it's been two years, but I ran across this same problem (cache items seeming to expire instantly) recently and found a possible cause. Two essentially undocumented features in MemoryCache: linked cache entries and options propagation.
This allows a child cache entry to passively propagate its options up to a parent cache entry when the child goes out of scope. This is done via IDisposable, which ICacheEntry implements and which is used internally by MemoryCache in extension methods like Set() and GetOrCreate/Async(). What this means is that if you have "nested" cache operations, the inner ones will propagate their cache entry options to the outer ones, including cancellation tokens, expiration callbacks, and expiration times.
In my case, we were using GetOrCreateAsync() and a factory method that made use of a library which did its own caching using the same injected IMemoryCache. For example:
public async Task<Foo> GetFooAsync() {
    return await _cache.GetOrCreateAsync("cacheKey", async c => {
        c.AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(1);
        return await _library.DoSomething();
    });
}
The library uses IMemoryCache internally (the same instance, injected via DI) to cache results for a few seconds, essentially doing this:
_cache.Set(queryKey, queryResult, TimeSpan.FromSeconds(5));
Because GetOrCreateAsync() is implemented by creating a CacheEntry inside a using block, the effect is that the 5-second expiration used by the library propagates up to the parent cache entry in GetFooAsync(), so the Foo object is only ever cached for 5 seconds instead of 1 hour, effectively expiring it immediately.
DotNet Fiddle showing this behavior: https://dotnetfiddle.net/fo14BT
You can avoid this propagation behavior in a few ways:
(1) Use TryGetValue() and Set() instead of GetOrCreateAsync()
if (_cache.TryGetValue("cacheKey", out Foo result))
    return result;

result = await _library.DoSomething();
return _cache.Set("cacheKey", result, TimeSpan.FromHours(1));
(2) Assign the cache entry options after invoking the other code that may also use the cache
return await _cache.GetOrCreateAsync("cacheKey", async c => {
    var result = await _library.DoSomething();
    // set expiration *after*
    c.AbsoluteExpiration = DateTime.Now.AddHours(1);
    return result;
});
(and since GetOrCreate/Async() does not prevent reentrancy, the two are effectively the same from a concurrency standpoint).
Warning: even then it's easy to get wrong. If you try to use AbsoluteExpirationRelativeToNow in option (2), it won't work: setting that property doesn't clear an existing AbsoluteExpiration value, so both properties end up set on the CacheEntry, and AbsoluteExpiration is honored before the relative one.
For the future, Microsoft has added a feature to control this behavior via a new property MemoryCacheOptions.TrackLinkedCacheEntries, but it won't be available until .NET 7. Without this future feature, I haven't been able to think of a way for libraries to prevent propagation, aside from using a different MemoryCache instance.
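For reference, once that property ships, opting out of the linking behavior would presumably look like the following (a sketch based on the announced MemoryCacheOptions.TrackLinkedCacheEntries name):

```csharp
// Sketch: disable linked-entry option propagation when constructing the cache.
var cache = new MemoryCache(new MemoryCacheOptions
{
    TrackLinkedCacheEntries = false
});
```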
I have a controller with one action. In this action method, I have an async method that I call and that is it. This is the code that I am using:
[HttpGet]
public Task<MyObject> Get()
{
    return _task.GetMyObject();
}
This serializes correctly into the JSON I expect from it. Now my manager insists that the signature should be changed to the following:
[HttpGet]
public async Task<IActionResult> Get()
{
    var data = await _task.GetMyObject();
    return Ok(data);
}
I'm of the belief that there is no reason for the code to await in the controller; it can just return the Task, because nothing afterwards depends on the result. Apart from the extra code generation (creation of the state machine, etc.) done for the await, are there any implications, from a Web API point of view, to these approaches? To clarify: I want to know whether returning an IActionResult is better than just returning Task<MyObject>, even though the results seem to be the same.
Task<T>
Pro
Unit tests do not require any casting:
Product product = await controller.Get();
The big advantage is that your unit tests become truly independent of the underlying HTTP stack.
Swagger does not need any extra attribute to generate the response schema, as it can easily detect the result type.
Another big advantage is that you can reuse your controller from some other controller when the logic remains the same.
Avoiding an await before the return also gives a slight performance improvement, as that part of the code does not need the Task state machine. I think a future C# version may elide a single await as a compiler optimization.
Con
Returning an error status code requires throwing an exception:
throw new HttpStatusException(404, "File not found");
throw new HttpStatusException(401, "Unauthorized");
Task<IActionResult>
Pro
You can return an HTTP status code, such as:
return NotFound(); // (Status Code = 404)
return Unauthorized(); // (Status Code = 401)
Con
Unit testing requires extra casting:
Product productResult = ((await controller.Get()) as OkObjectResult).Value as Product;
Because of such casting, it becomes difficult to reuse your controllers from other controllers, leading to duplication of logic.
The Swagger generator requires an extra attribute to generate the response schema:
[ProducesResponseType(typeof(Product), 200)]
This approach is only recommended when you are dealing with logic that is not covered by unit tests and is not part of your business logic, such as OAuth integration with third-party services, where you want to focus on IActionResult-based results such as Challenge, Redirect, etc.
Actions can return anything, but most commonly they return an instance of IActionResult (or Task<IActionResult> for async methods), which produces a response. The action method is responsible for choosing what kind of response it returns, and the action result does the responding.
If an action returns an IActionResult implementation and the controller inherits from Controller, developers have many helper methods corresponding to many of the choices. Results from actions that return objects that are not IActionResult types are serialized using the appropriate IOutputFormatter implementation.
For non-trivial actions with multiple return types or options (for example, different HTTP status codes based on the result of operations performed), prefer IActionResult as the return type.
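Worth noting as a middle ground (assuming ASP.NET Core 2.1 or later, which is not stated in the question): ActionResult<T> keeps the declared payload type visible to Swagger and unit tests while still allowing the status-code helpers, via implicit conversions from both T and IActionResult:

```csharp
[HttpGet]
public async Task<ActionResult<MyObject>> Get()
{
    var data = await _task.GetMyObject();
    if (data == null)
        return NotFound(); // implicit conversion from NotFoundResult

    return data; // implicit conversion from MyObject
}
```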
ASP.NET MVC is a conventions over configuration framework. This means any future maintainer of your code, including your future self, will expect code to be written a certain way in order to reduce the number of class files you have to inspect to make changes or additions.
While the result may be the same from your two technically different options, the conventional approach is to async/await your results. Anything other than that convention will potentially cause confusion for future maintainers. Additionally, future releases of MVC may break your code in unknown ways as you did not follow the convention.
Good leadership of software development teams includes instilling a desire to reduce overall manpower needs for the organization by simplifying potential future maintenance of the code. Your manager may be trying to promote this concept.
The ASP.NET Core team, while unifying MVC and Web API (Controller and ApiController), introduced IActionResult as the abstraction for a robust response-handling mechanism.
Throwing exceptions for control flow in action methods is an anti-pattern.
[HttpGet]
public async Task<IActionResult> Get()
{
    var data = await _task.GetMyObject();
    if (data == null)
    {
        return NotFound(); // Or, return Request.CreateResponse(HttpStatusCode.NotFound)
        // Versus: throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
    }
    return Ok(data);
}
Note: NotFound(), Ok(), etc. date back to IHttpActionResult in the Web API 2 era; they are not new to ASP.NET Core.
Let's say I have the following ApiController in my ASP.NET WebAPI Application:
public class MyApiController : ApiController
{
    private string str;

    [HttpGet]
    public string SetStr(string str)
    {
        this.str = str;
        MaybeSleep(); // Some executions take longer, some don't.
        return this.str;
    }
}
(Reality is a bit more complicated, but this should be all that matters)
This is running fine in my environment and certain others, always returning the input value, even under heavy server load.
In two environments, however, str sometimes "magically" changes between set and return, even without too much server load. However, it always changes to values that were sent to the server around that time, just not always the ones sent in this request.
So, my questions are:
Is ApiController reuse a behaviour that I just have to expect, or should a new ApiController be created, used and destroyed for every single request the server processes?
Is this behaviour depending on ASP.NET version, IIS version and/or a Web.config setting?
Is there documentation about the behaviour of private ApiController variables available from Microsoft?
Or is this possibly a known bug in a certain .NET or ASP.NET version?
A request should not modify the state of a controller. From the entry method to any other methods called, you can pass parameters as needed, so there's no need to modify the controller object itself according to the request.
If there's some state that you need to maintain throughout the request that you can't pass through parameters to other methods, the best place to do that is on the HttpContext since that is always specific to the request. (Even then that scenario probably isn't too common.)
Instead of this:
public string SetStr(string str)
{
    this.str = str;
    MaybeSleep(); // Some executions take longer, some don't.
    return this.str;
}
this:
public string SetStr(string str)
{
    HttpContext.Items["str"] = str; // I'd declare a constant for "str"
    MaybeSleep();
    return (string)HttpContext.Items["str"]; // Items stores object, so cast back
}
Is ApiController reuse a behaviour that I just have to expect, or
should a new ApiController be created, used and destroyed for every
single request the server processes?
When a request is received, a new controller instance is created by ControllerFactory or DependencyResolver.
Basically, the main thread creates the controller instance, and then that same instance is shared between multiple threads until the request completes.
The rest of the question is no longer relevant, since the first assumption is not correct.
Ideally, if you execute a long-running process, you want to use a scheduler so that it will not freeze the UI.
You can read more at Scott Hanselman's blog - How to run Background Tasks in ASP.NET
I'm using TPL to send emails to end users without delaying the API response, and I'm not sure which method should be used, since I'm dealing with the DB context here. I went with method 2 because I wasn't sure the DB context would still be available by the time the task gets to run, so I created a new EF object. Or maybe I'm doing it all wrong.
public class OrdersController : ApiController {
    private AllegroDMContainer db = new AllegroDMContainer();

    public HttpResponseMessage PostOrder(Order order) {
        // Creating a new EF object and adding it to the database
        Models.Order _order = new Models.Order { Name = order.Name };
        db.Orders.Add(_order);

        /* Method 1 */
        Task.Factory.StartNew(() => {
            _order.SendEmail();
        });

        /* Method 2 */
        Task.Factory.StartNew(() => {
            Models.Order rOrder = db.Orders.Find(_order.ID);
            rOrder.SendEmail();
        });

        return Request.CreateResponse(HttpStatusCode.Created);
    }
}
Both methods are wrong, because you're starting a fire-and-forget operation on a pool thread inside the ASP.NET process.
The problem is that an ASP.NET host is not guaranteed to stay alive between handling HTTP requests. For example, it can be automatically recycled, manually restarted, or taken out of the farm, in which case the send-mail operation would never complete and you would never be notified about it.
If you need to speed up the response delivery, consider outsourcing the send-mail operation to a separate WCF or Web API service. A related question: Fire and forget async method in asp.net mvc.
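If standing up a separate service is too heavy, a lighter-weight alternative is an "outbox": persist the pending work in the same unit of work as the order and let a separate worker process poll the table and send the mail. This sketch uses illustrative names (PendingEmails, PendingEmail) that are not part of the question's model:

```csharp
public HttpResponseMessage PostOrder(Order order) {
    Models.Order _order = new Models.Order { Name = order.Name };
    db.Orders.Add(_order);

    // Record the intent to send an email in the same unit of work,
    // instead of starting a background thread.
    db.PendingEmails.Add(new PendingEmail { OrderId = _order.ID });
    db.SaveChanges(); // order and outbox row commit together

    // A separate worker (Windows service, scheduled task, etc.) polls
    // PendingEmails, sends each mail, and marks the row as done.
    return Request.CreateResponse(HttpStatusCode.Created);
}
```

Because the outbox row survives an app-pool recycle, the email is sent at most a polling interval late but is never silently lost.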