Let's say I have the following ApiController in my ASP.NET WebAPI Application:
class MyApiController : ApiController
{
    private string str;

    [HttpGet]
    public string SetStr(string str)
    {
        this.str = str;
        MaybeSleep(); // Some executions take longer, some don't.
        return this.str;
    }
}
(Reality is a bit more complicated, but this should be all that matters)
This is running fine in my environment and certain others, always returning the input value, even under heavy server load.
In two environments, however, str sometimes "magically" changes between set and return, even without too much server load. However, it always changes to values that were sent to the server around that time, just not always the ones sent in this request.
So, my questions are:
Is ApiController reuse a behaviour that I just have to expect, or should a new ApiController be created, used and destroyed for every single request the server processes?
Is this behaviour depending on ASP.NET version, IIS version and/or a Web.config setting?
Is there documentation about the behaviour of private ApiController variables available from Microsoft?
Or is this possibly a known bug in a certain .NET or ASP.NET version?
A request should not modify the state of a controller. From the entry method to any other methods called, you can pass parameters as needed, so there's no need to modify the controller object itself according to the request.
If there's some state that you need to maintain throughout the request that you can't pass through parameters to other methods, the best place to do that is on the HttpContext since that is always specific to the request. (Even then that scenario probably isn't too common.)
Instead of this:
public string SetStr(string str)
{
    this.str = str;
    MaybeSleep(); // Some executions take longer, some don't.
    return this.str;
}
this:
public string SetStr(string str)
{
    HttpContext.Items["str"] = str; // I'd declare a constant for "str"
    MaybeSleep();
    return (string)HttpContext.Items["str"]; // Items stores object, so cast back to string
}
Is ApiController reuse a behaviour that I just have to expect, or should a new ApiController be created, used and destroyed for every single request the server processes?
When a request is received, a new controller instance is created by the ControllerFactory or the DependencyResolver.
Basically, the main thread creates a controller instance, and that same instance is then shared between multiple threads until the request completes.
The rest of the question is no longer relevant, since its premise is incorrect.
Ideally, if you execute a long-running process, you want to hand it off to a scheduler so that it does not tie up the request.
You can read more at Scott Hanselman's blog - How to run Background Tasks in ASP.NET
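For reference, the approach described there boils down to handing the work to the runtime instead of the request thread. A minimal sketch (assuming ASP.NET on .NET 4.5.2+; DoLongRunningWork is a hypothetical worker method):

using System.Threading;
using System.Web.Hosting;

// Somewhere in your request-handling code: queue work that may outlive
// the request; the token signals app-domain shutdown.
HostingEnvironment.QueueBackgroundWorkItem((CancellationToken token) =>
{
    DoLongRunningWork(token); // hypothetical worker method
});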
Related
I'd like to set up a page that continuously receives values from the backend, so that the rendered data set grows over time until the backend says it's done (which may never happen). Something like in this article, but with a backend based on .NET Core.
So, the Angular service looks like this at first.
@Injectable()
export class TheService {
  constructor(private httpClient: HttpClient) {}

  getStuff(): Observable<Thing[]> {
    return this.httpClient.get<Thing[]>('http://localhost:3000/stuff');
  }

  getThing(): Observable<Thing> {
    return this.httpClient.get<Thing>('http://localhost:3000/thing');
  }
}
The issue I'm having is on the backend side. When I return the set of things, I finish off by returning Ok(), signaling that the operation has completed successfully with status code 200. What I'd like to achieve instead is to return a single thing at a time (or an array of things, as long as it's not the final set of all things to be served). Basically, I'd like to emit values from the .NET API without finalizing the connection. For simplicity, we can even work with responses of type String instead of Thing.
Is it possible at all using "the usuals" in .NET? I'm thinking of the default GET methods, like so.
[HttpGet("stuff")]
public ActionResult<IEnumerable<Thing>> GetThing()
{
IEnumerable<Thing> output = ...;
return Ok(output);
}
[HttpGet("thing")]
public ActionResult<Thing> GetThing()
{
Thing output = ...;
return Ok(output);
}
I've googled the matter but found nothing of relevance. There are a lot of resources dealing with the Angular side: observables, RxJS, etc. All the examples connecting .NET and Angular present a serve-and-finalize type of connection. The best one I've found is linked at the top and doesn't use .NET on the back end. Somehow, I'm getting the suspicion that it's either extremely simple or nearly not doable.
If you need a long-lived connection between client and server, you could look into using WebSockets through something like Pusher, which has a .NET library available: https://github.com/pusher/pusher-http-dotnet
Alternatively, you could use long polling, although that is much less efficient because you're intermittently querying the server for updates. You'd basically set up an interval observable to make a request every N seconds to check for updates on the server.
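If you control the backend, one way to emit values without finalizing the connection is to write to the response body incrementally and flush after each item. A minimal sketch (my illustration, not a tested solution; ProduceThings is a hypothetical source of Things, and Response.WriteAsync comes from Microsoft.AspNetCore.Http):

[HttpGet("stuff-stream")]
public async Task GetStuffStream()
{
    // Newline-delimited JSON: each item is pushed to the client as soon
    // as it's produced, instead of waiting for the full set.
    Response.ContentType = "application/x-ndjson";
    foreach (var thing in ProduceThings()) // hypothetical source
    {
        await Response.WriteAsync(JsonConvert.SerializeObject(thing) + "\n");
        await Response.Body.FlushAsync();
    }
}

On the Angular side you'd then need something that can consume a progressive response (progress events, SSE, or a WebSocket) rather than HttpClient.get, which only resolves once the response completes.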
I'm in a situation where two simultaneous calls write to the session (of an ASP.NET Core application running on the full .NET Framework), and one of the session variables gets overwritten.
Given the following controller code, assume that the long session gets called first, 200 ms later the short session gets called, and 800 ms later (when the long session is done) the result of both sessions gets requested.
[HttpPost("[action]")]
public async Task<IActionResult> TestLongSession() {
HttpContext.Session.SetString("testb", "true");
// If we do this delay BEFORE the session ("testb") is set, then all is fine.
await Task.Delay(1000);
return Ok();
}
[HttpPost("[action]")]
public async Task<IActionResult> TestShortSession() {
HttpContext.Session.SetString("testa", "true");
return Ok();
}
[HttpGet("[action]")]
public async Task<IActionResult> TestResultOfBothSessions() {
string a = HttpContext.Session.GetString("testa");
string b = HttpContext.Session.GetString("testb");
return Ok($"A: {a}, B: {b}");
}
The result of the final call (TestResultOfBothSessions) is "A: , B: true".
The question is then: is there something I missed to make the session work (i.e., return "A: true, B: true")?
Obviously, I could remove the delay and all would be fine, but in the real application there's a call that can potentially take some time, and I'd prefer not to write the session variable at a later point. (I guess I could with a bit of custom error handling, but then the problem still remains that I can no longer trust the ASP.NET session to work with simultaneous calls.)
Edit: The TypeScript code that calls these endpoints from the browser:
this.service.testLongSession().subscribe(() => {
  this.service.testBothSessions().subscribe((result: string) => {
    console.log(result);
  });
});

setTimeout(() => {
  this.service.testShortSession().subscribe();
}, 200);
I believe the behavior you observe is what the ASP.NET authors intended. Looking at the interfaces that session stores need to implement, namely ISession and ISessionStore, I see no synchronization mechanism to prevent the overwriting of data during simultaneous requests.
The benefit of such a simple interface is that it's much easier to implement and can be backed by a variety of caches and databases.
ASP.NET 4 had a much more complex session store base class SessionStateStoreProviderBase that included locking logic, but it was really challenging to implement.
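For reference, the ISession surface the answer refers to looks roughly like this (paraphrased from Microsoft.AspNetCore.Http from memory; note there is no lock, version stamp, or compare-and-swap anywhere):

public interface ISession
{
    bool IsAvailable { get; }
    string Id { get; }
    IEnumerable<string> Keys { get; }
    Task LoadAsync(CancellationToken cancellationToken = default(CancellationToken));
    Task CommitAsync(CancellationToken cancellationToken = default(CancellationToken));
    bool TryGetValue(string key, out byte[] value);
    void Set(string key, byte[] value);
    void Remove(string key);
    void Clear();
}

Each request loads the whole session, mutates its own copy, and commits it back, so the last commit wins; that is exactly the overwrite observed in the question.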
We are planning to develop an Azure Function whose input trigger is a Service Bus message and whose output will be blob storage. The Service Bus message will contain an image URL, and the function will resize the image to a predefined resolution and upload it to Azure Blob Storage.
The resolution to which the image should be resized is stored in the database, and the Azure Function needs to call the database to find out which resolution to use for the image in the input message. The resolution is effectively master data, configured based on the source of the input message.
A database call would be an expensive operation, as it would have to hit the database on every invocation. Is there any way to cache the data and use it without calling the database, like in-memory caching?
You are free to use the usual approaches that you would use in other .NET applications:
You can cache it in memory. The easiest way is just to declare a static dictionary and put the database values inside (use a concurrent dictionary if needed). The cached values will be reused for all subsequent Function executions that run on the same instance. If an instance gets idle for 5 minutes, or if the app scales out to an extra instance, you will have to read the database again (see the sketch after this list);
You can use distributed cache, e.g. Redis, by using its SDK from Function code. Might be a bit nicer, since you keep the stateless nature of Functions, but might cost a bit more. Table Storage is a viable alternative to Redis, but with more limited API.
There's no "caching" feature of Azure Functions themselves, that would be ready to use without any extra code.
You can use the Azure Cache service (https://azure.microsoft.com/en-us/services/cache/) to cache your data. Basically, in your Azure Function, instead of calling the database every time, call the cache and use the value if it hasn't expired; if it has expired or was never set, call the database to get the value and populate the cache with appropriate expiry logic (a timeout after a fixed time or some other custom logic).
You could use Durable Functions and make the database call via an activity or sub-orchestration; the return value is essentially cached for you then and will be returned without making the underlying call again each time the function replays.
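A minimal sketch of that shape (assuming the Durable Functions v2 API, i.e. Microsoft.Azure.WebJobs.Extensions.DurableTask; "GetResolution" is a hypothetical activity):

[FunctionName("ResizeOrchestrator")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // The activity result is recorded in the orchestration history, so
    // replays of this orchestrator reuse it instead of re-querying the database.
    int resolution = await context.CallActivityAsync<int>("GetResolution", "sourceA");
    // ... resize using 'resolution' ...
}

Note the caching here is per orchestration instance (its history), not shared across instances.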
Redis is an in-memory cache, and there is a custom output binding that you can use to keep your function clean:
[FunctionName("SetPoco")]
public static async Task<IActionResult> SetPoco(
[HttpTrigger("POST", Route = "poco/{key}")] HttpRequest request,
[Redis(Key = "{key}")] IAsyncCollector<CustomObject> collector)
{
string requestBody;
using (var reader = new StreamReader(request.Body))
{
requestBody = reader.ReadToEnd();
var value = JsonConvert.DeserializeObject<CustomObject>(requestBody);
await collector.AddAsync(value);
}
return new OkObjectResult(requestBody);
}
Link to the project: https://github.com/daulet/Indigo.Functions#redis
However, if by in-memory cache you mean the memory of the function itself, I'd strongly recommend against it: functions are meant to be stateless, and you won't be able to share that memory across the multiple hosts running your function. This is also discouraged in the Azure Functions best practices.
Here's a little class I built to simplify the task of storing and re-using objects in the running instance's memory whilst it remains alive. Of course this means each new instance will need to populate itself but this can provide some useful optimisations.
// A simple light-weight cache, used for storing data in the memory of each running instance of the Azure Function.
// If an instance gets idle (for 5 minutes or whatever the latest time period is) or if the Function App scales out to an extra instance then the cache is re-populated.
// To use, create a static readonly instance of this class in the Azure Function class, in the constructor pass a function which populates the object to cache.
// Then simply reference the Data object. It will be populated on the first call and re-used on future calls whilst the same instance remains alive.
public class FunctionInstanceCache<T>
{
    public FunctionInstanceCache(Func<T> populate)
    {
        Populate = populate;
        IsInit = false;
    }

    public Func<T> Populate { get; set; }

    public bool IsInit { get; set; }

    private T data;

    public T Data
    {
        get
        {
            if (IsInit == false)
            {
                Init();
            }

            return data;
        }
    }

    public void Init()
    {
        data = Populate();
        IsInit = true;
    }
}
Then in your Azure Function instance implementation create a static readonly instance of this, passing in a Populate method:
private static readonly FunctionInstanceCache<string[]> Fic = new FunctionInstanceCache<string[]>(PopulateCache);
Then implement this:
private static string[] PopulateCache()
{
    return DOSOMETHING HERE;
}
Then simply call Fic.Data when needed - it will be populated on first use and then re-used whilst the instance remains alive.
I am currently developing an application in ASP.NET Core 2.0.
The following is the action inside my controller that gets executed when the user clicks the submit button.
The following is the function that gets called by the action.
As a measure to prevent duplicates inside the database I have the function IsSignedInJob(). The function works.
My Problem:
Sometimes, when the internet connection is slow or the server is not responding right away, it is possible to click the submit button more than once. When the connection is re-established, the browser (in my case Chrome) sends multiple HttpPost requests to the server. In that case the functions (the same function from different instances) are executed so close together in time that, before one instance's change to the database is made, the other instances are making the same change without being aware of each other.
Is there a way to solve this problem on the server side without being too "hacky"?
Thank you
As suggested in the comments (and this is my preferred approach), you can simply disable the button once it is clicked the first time.
Another solution would be to add something to a dictionary indicating that the job has already been registered. This will probably require a lock, since you need to make sure that only one thread can read and write at a time; a concurrent collection alone won't do the trick, because the problem is not whether a single operation is thread-safe. The IsSignedInJob method you have can do this behind the scenes, but I wouldn't check the database for this, as the latency could be too high; adding/removing a key in a dictionary should be a lot faster.
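A minimal sketch of that idea (my illustration; the job id is whatever uniquely identifies a submission):

using System.Collections.Generic;

// The first request for a job id claims the key; duplicates get 'false' and bail out.
private static readonly object JobsLock = new object();
private static readonly HashSet<string> JobsInFlight = new HashSet<string>();

private static bool TryClaimJob(string jobId)
{
    lock (JobsLock)
    {
        return JobsInFlight.Add(jobId); // false if already claimed
    }
}

private static void ReleaseJob(string jobId)
{
    lock (JobsLock)
    {
        JobsInFlight.Remove(jobId); // call once the work is durably recorded
    }
}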
Icarus's answer is great for the user experience and should be implemented. If you also need to make sure the request is only handled once on the server side, you have a few options. Here is one using the ReaderWriterLockSlim class.
// Static so all requests share the same lock (a new controller is created per request).
private static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();

[HttpPost]
public async Task<IActionResult> SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }

    return Ok();
}
This will prevent overlapping DoWork code. It does not prevent DoWork from finishing completely and then another post executing that causes DoWork to run again.
If you want to prevent the post from happening twice, implement the AntiForgeryToken and then store the token in session. Something like this (I haven't used session in forever, so it may not compile, but you should get the idea):
private const string SomeMethodTokenName = "SomeMethodToken";

[HttpPost]
public async Task<IActionResult> SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            var token = Request.Form["__RequestVerificationToken"].ToString();
            var lastToken = HttpContext.Session.GetString(SomeMethodTokenName);
            if (token == lastToken) return Ok(); // same token seen before: duplicate post
            HttpContext.Session.SetString(SomeMethodTokenName, token);
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }

    return Ok();
}
Not exactly perfect: two different requests could still alternate over and over, so you could store in session the list of all tokens used in this session. There is no perfect way, because even then someone could technically cause an OutOfMemoryException if they wanted to (too many tokens stored in session), but you get the idea.
Try not to use asynchronous processing. Remove Task, await and async.
I have a WCF service (hosted in IIS) that is set up to use sessions. It seems to work: when Application_PostAcquireRequestState is called, I have a session ID.
I end up using it like this (in my Global.asax):
if (Context.Handler is IRequiresSessionState)
{
    log4net.ThreadContext.Properties["sessionId"] = Session.SessionID;
}
That seems to work fine; the value is stored off into my log4net property.
But when my service operation begins (my actual WCF service code), the log4net property is null again.
Since the property is stored per thread (ThreadContext), I can only assume this means that the session is set up on one thread and the operation then executes on another thread. Am I right?
Is there any way to get my log4net property set on the correct thread (without having to remember to make the above call at the start of every single service operation)?
Yes, IIS may use multiple threads to service multiple WCF requests. See http://msdn.microsoft.com/en-us/library/cc512374.aspx for more detail.
You might consider using different instances of a logger for each WCF request.
There are multiple scenarios where WCF might change threads on you:
The Global.asax thread is not guaranteed to be used for a service call (in fact it's unlikely).
If there are multiple calls during the same session, the thread may also change between calls to the same service instance.
In theory, state information like this should be stored in an OperationContext object. However, because log4net uses thread-local storage, that becomes an awkward solution.
Is there any way to get my log4net property set on the correct thread (without having to remember to make the above call at the start of every single service operation)?
Yes. Create a custom IOperationInvoker. The best example I know of is Carlos Figueira's blog. If you apply this as a service behavior, your log4net property should always be defined for the service code.
One warning: when adding to thread-local storage, be sure to clean up. That's why log4net.ThreadContext.Stacks[].Push() returns an IDisposable. In other words, your Invoke method should look like this (incomplete and untested):
public object Invoke(object instance, object[] inputs, out object[] outputs)
{
    using (log4net.ThreadContext.Stacks[key].Push(value))
    {
        return this.originalInvoker.Invoke(instance, inputs, out outputs);
    }
}
See Carlos's blog to understand why you are calling the originalInvoker. Note that if you want to support async operations, you need to implement additional methods.
Custom properties do not need to be strings. So you could store an instance of the following class in the global context:
public class SessionIdProperty
{
    public override string ToString()
    {
        // error handling omitted; HttpContext.Current may be null outside a request
        return HttpContext.Current.Session.SessionID;
    }
}
This way log4net can access the Session object directly when it logs a message. Log4net calls the ToString() method on non-string properties.
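For completeness, registering such a property once at startup might look like this (a sketch; the property name is up to you):

// Evaluated lazily: log4net calls ToString() at log time, on whichever thread logs.
log4net.GlobalContext.Properties["sessionId"] = new SessionIdProperty();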