Save changes to db async in a synchronous function - C#

I want to make use of SaveChangesAsync in a synchronous function. For example, this is the situation I want to use it in:
public string getName(int id)
{
    var db = new dbContext();
    string name = db.names.Find(id);
    db.log.Add(new Log("accessed name"));
    db.SaveChangesAsync();
    return name;
}
So basically I don't care when the log is actually saved to the database; I just don't want it to slow down my getName function. I want getName to return, and the log can be saved to the database at any time during or after that.
How would I go about achieving this? Nothing depends on when the log is submitted, so it can take 2 minutes for all I care.
I have come up with another solution:
private async void UpdateLastComms(string _id)
{
int id = Int32.Parse(_id);
using (var db = new dbContext())
{
db.Devices.Where(x => x.UserId == id).FirstOrDefault().LastComms = DateTime.Now;
await db.SaveChangesAsync();
}
}
I can then call this function like so: UpdateLastComms("5");
How does this compare to the first approach, and will it execute as I think it will?

The problem with "fire and forget" methods like this is error handling. If there is an error saving the log to the database, is that something you want to know about?
If you want to silently ignore errors, then you can just ignore the returned task, as in your first example. Your second example uses async void, which is dangerous: if a database write error occurs there, the default behavior is to crash the application.
If you want to handle errors by taking some action, then put a try/catch around the body of the method in your second example.
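For example, a minimal sketch of that second method with a try/catch around its body (the error handling shown is just a placeholder):
private async void UpdateLastComms(string _id)
{
    try
    {
        int id = Int32.Parse(_id);
        using (var db = new dbContext())
        {
            var device = db.Devices.Where(x => x.UserId == id).FirstOrDefault();
            if (device != null)
            {
                device.LastComms = DateTime.Now;
                await db.SaveChangesAsync();
            }
        }
    }
    catch (Exception ex)
    {
        // Placeholder: log or otherwise record the failure instead of letting it crash the process.
        Console.Error.WriteLine($"Failed to update LastComms: {ex}");
    }
}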

Adding more detail to await Task<T> in C# .net

The Task object created in C# contains the details of the completion of the action ascribed to the task. So even if the action fails, the task still reports IsCompleted, unless the task throws an exception or is cancelled.
What I am trying to do is to create a Task-Like class (easy, done that bit) that can store more information about the job that was done.
So, for example, I want an API to get some accounts. The call may throw an exception, say a SqlConnection exception; fine. But the results may also be empty, and I want to know why: is the database empty, or has a filter restricted the view to nothing?
So I could do:
var t = AccountRepository.GetAccountsAsync(filter, token);
var results = await t;
if (t.Success)
{
    return Results.Ok(results);
}
else
{
    return Results.Problem(t.ErrorMessage);
}
I was heading down the path of using a custom task-like object, something like:
public async ServiceTask<IEnumerable<Account>?> GetAccountsAsync(Filter filter, CancellationToken token)
{
// Get accounts with filtering
if(Accounts.Any())
{
return Accounts;
}
else if(filter != null)
{
return ServiceTask.FromError("Filter has restricted all Accounts");
}
else
{
return ServiceTask.FromError("Database doesn't contain any Accounts");
}
}
I can have ServiceTask.FromError return a default T, but there's no way (that I can see) to access the ServiceTask that's returned by the method in order to add in the details.
Alternatively, I figured I could have ServiceTask always return a generic Response class and then work with the properties inside ServiceTask to apply the results or messages, but I can't figure out how to do that: how do I restrict ServiceTask so that T is always a ServiceResponse?
I don't like the idea of throwing exceptions when an exception hasn't happened. It's not a code exception when the database is empty, or the filter has removed all accounts.
Currently, my GetAccountsAsync returns a Task<Response>, and the Response has Success, ErrorMessage and Results properties. It becomes a bit cumbersome to work around this to get at the results of the awaited call. I'm hoping there's a simple way to code this.
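For reference, a minimal sketch of that current wrapper shape (property names as described above; Response is shown generic here purely for illustration):
// A plain result wrapper returned inside an ordinary Task<Response<T>>.
public class Response<T>
{
    public bool Success { get; set; }
    public string ErrorMessage { get; set; }
    public T Results { get; set; }
}

// Caller: await the task, then unwrap and branch on Success.
var response = await AccountRepository.GetAccountsAsync(filter, token);
return response.Success
    ? Results.Ok(response.Results)
    : Results.Problem(response.ErrorMessage);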

Block Controller Method while already running

I have a controller which returns a large JSON object. If this object does not exist, the controller generates it and then returns it. The generation takes about 5 seconds, and if the client sends the request multiple times, the object gets generated with x times as many children. So my question is: is there a way to block the second request until the first one has finished, independent of who sent the request?
Normally I would do this with a singleton, but because I am using scoped services, a singleton does not work here.
Warning: this is very opinionated and maybe not suitable for Stack Overflow, but here it is anyway.
Although I'll provide no code... when things take a while to generate, you don't usually spend that time directly in controller code, but do something like "start a background task to generate the result, and provide a task id which can be queried in a different call".
So, my preferred course of action for this would be having two different controller actions:
Generate, which creates the background job, assigns it some id, and returns the id
GetResult, to which you pass the task id, and returns either different error codes for "job id doesn't exist", "job id isn't finished", or a 200 with the result.
This way your clients will need to call both; however, in Generate you can check whether the job is already being created and return the existing job id.
This of course moves the need to "retry and check" to your client: in exchange, you don't leave the connection to the server open during those 5 seconds (which could potentially be multiplied by a number of clients), and you return fast.
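For illustration only, here is a rough sketch of that two-action shape; every name in it (the job store, the controller, the routes) is an assumption for the example, and a production version would want a real background-job mechanism:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// A naive in-memory job store; results are lost on application restart.
public class ReportJobStore
{
    private readonly ConcurrentDictionary<Guid, string> _results = new ConcurrentDictionary<Guid, string>();
    private readonly object _gate = new object();
    private Guid? _currentJobId;

    public Guid StartOrGetExisting(Func<string> generate)
    {
        lock (_gate)
        {
            // If a job has started but has no result yet, hand back its id instead of starting another.
            if (_currentJobId is Guid existing && !_results.ContainsKey(existing))
                return existing;

            var jobId = Guid.NewGuid();
            _currentJobId = jobId;
            Task.Run(() => _results[jobId] = generate()); // fire the background generation
            return jobId;
        }
    }

    public bool TryGetResult(Guid jobId, out string result) => _results.TryGetValue(jobId, out result);
}

[ApiController]
[Route("api/report")]
public class ReportController : ControllerBase
{
    private readonly ReportJobStore _store; // registered as a singleton so all requests share it

    public ReportController(ReportJobStore store) => _store = store;

    // Generate: start (or reuse) the background job and return its id immediately.
    [HttpPost("generate")]
    public IActionResult Generate()
        => Ok(new { jobId = _store.StartOrGetExisting(GenerateLargeJson) });

    // GetResult: return the result if it is ready, otherwise a 202 to signal "not finished yet".
    [HttpGet("{jobId}")]
    public IActionResult GetResult(Guid jobId)
        => _store.TryGetResult(jobId, out var result)
            ? (IActionResult)Content(result, "application/json")
            : Accepted();

    private static string GenerateLargeJson() => "{ }"; // placeholder for the 5-second generation
}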
Otherwise, if you don't care about having your clients wait for a response during those 5 seconds, you could do a simple:
if(resultDoesntExist) {
resultDoesntExist = false; // You can use locks for the boolean setters or Interlocked instead of just setting a member
resultIsBeingGenerated = true;
generateResult(); // <-- this is what takes 5 seconds
resultIsBeingGenerated = false;
}
while(resultIsBeingGenerated) { await Task.Delay(10); } // <-- other clients will wait here
var result = getResult(); // <-- this should be fast once the result is already created
return result;
Note: those booleans and the actual loop could live in the controller, in the service, or wherever you see fit; just be wary of making them thread-safe in whatever way you find appropriate.
So you basically make other clients wait until the first one generates the result, with "almost" no CPU load on the server... however, with a connection open and a thread from the thread pool in use, so I just DO NOT recommend this :-)
PS: @Leaky's solution above is also good, but it also shifts the responsibility to retry onto the client, and if you are going to do that, I'd probably go directly with a "background job id" instead of having the first call (the one that generates the result) take 5 seconds. IMO, if it can be avoided, no API action should ever take 5 seconds to return :-)
Do you have an example for Interlocked.CompareExchange?
Sure. I'm definitely not the most knowledgeable person when it comes to multi-threading stuff, but this is quite simple (as you might know, Interlocked has no support for bool, so it's customary to represent it with an integral type):
public class QueryStatus
{
private static int _flag;
// Returns false if the query has already started.
public bool TrySetStarted()
=> Interlocked.CompareExchange(ref _flag, 1, 0) == 0;
public void SetFinished()
=> Interlocked.Exchange(ref _flag, 0);
}
I think it's safest if you use it like this, with a 'Try' method that tries to set the value and tells you, in an atomic way, whether it was already set.
Besides simply adding this (I mean just the field and the methods) to your existing component, you can also use it as a separate component, injected from the IOC container as scoped. Or even injected as a singleton, and then you don't have to use a static field.
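In ASP.NET Core, for example, that registration could be as simple as this (assuming the QueryStatus class above):
// Register the flag holder as a singleton so its field no longer needs to be static.
services.AddSingleton<QueryStatus>();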
Storing state like this should be good for as long as the application is running, but if the hosted application is recycled due to inactivity, it's obviously lost. Though, that won't happen while a request is still processing, and definitely won't happen in 5 seconds.
(And if you wanted to synchronize between app service instances, you could 'quickly' save a flag to the database, in a transaction with proper isolation level set. Or use e.g. Azure Redis Cache.)
Example solution
As Kit rightly noted, I didn't provide a full solution above.
So, a crude implementation could go like this:
public class SomeQueryService : ISomeQueryService
{
    private static int _hasStartedFlag;

    private static bool TrySetStarted()
        => Interlocked.CompareExchange(ref _hasStartedFlag, 1, 0) == 0;

    private static void SetFinished()
        => Interlocked.Exchange(ref _hasStartedFlag, 0);

    public async Task<(bool couldExecute, object result)> TryExecute()
    {
        if (!TrySetStarted())
            return (couldExecute: false, result: null);

        try
        {
            // Safely execute the long query here; this placeholder stands in for the real work.
            object result = await Task.FromResult(new object());
            return (couldExecute: true, result: result);
        }
        finally
        {
            // Reset the flag even if the query throws.
            SetFinished();
        }
    }
}
// In the controller, obviously
[HttpGet()]
public async Task<IActionResult> DoLongQuery([FromServices] ISomeQueryService someQueryService)
{
var (couldExecute, result) = await someQueryService.TryExecute();
if (!couldExecute)
{
return new ObjectResult(new ProblemDetails
{
Status = StatusCodes.Status503ServiceUnavailable,
Title = "Another request has already started. Try again later.",
Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
})
{ StatusCode = StatusCodes.Status503ServiceUnavailable };
}
return Ok(result);
}
Of course, you might want to extract the 'blocking' logic from the controller action into somewhere else, for example an action filter. In that case the flag should also go into a separate component that could be shared between the query service and the filter.
General use action filter
I felt bad about my inelegant solution above, and I realized that this problem can be generalized into basically a limiter on the number of concurrent connections to an endpoint.
I wrote this small action filter, which can be applied to any endpoint (or to multiple endpoints) and accepts the number of allowed connections:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ConcurrencyLimiterAttribute : ActionFilterAttribute
{
private readonly int _allowedConnections;
private static readonly ConcurrentDictionary<string, int> _connections = new ConcurrentDictionary<string, int>();
public ConcurrencyLimiterAttribute(int allowedConnections = 1)
=> _allowedConnections = allowedConnections;
public override async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
{
var key = context.HttpContext.Request.Path;
if (_connections.AddOrUpdate(key, 1, (k, v) => ++v) > _allowedConnections)
{
Close(withError: true);
return;
}
try
{
await next();
}
finally
{
Close();
}
void Close(bool withError = false)
{
if (withError)
{
context.Result = new ObjectResult(new ProblemDetails
{
Status = StatusCodes.Status503ServiceUnavailable,
Title = $"Maximum {_allowedConnections} simultaneous connections are allowed. Try again later.",
Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
})
{ StatusCode = StatusCodes.Status503ServiceUnavailable };
}
_connections.AddOrUpdate(key, 0, (k, v) => --v);
}
}
}
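Applying it is then just a matter of decorating the endpoint; a minimal usage sketch (the action body and the long-running call are illustrative):
[HttpGet]
[ConcurrencyLimiter(allowedConnections: 1)]
public async Task<IActionResult> DoLongQuery()
{
    // The filter rejects concurrent callers with a 503, so the action can just do its work.
    var result = await GenerateLargeResultAsync(); // hypothetical long-running call
    return Ok(result);
}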

Data in memory is re-used instead of executing a new SQL query

I have a WCF service FooService.
My service implements a method LoginAsync which takes a User object as a parameter.
public async Task<Token> LoginAsync(User user)
{
var result = await _userManager.GetModelAsync(user.Uname, user.Pword);
if (result != null && result.Id > 0)
return await _tokenManager.GetModelAsync(result);
return null;
}
Inside this method we call _userManager.GetModelAsync(string, string) which is implemented as follows:
public async Task<User> GetModelAsync(string username, string password)
{
var result =
(from m in await _userRepo.GetModelsAsync()
where m.Uname.Equals(username, StringComparison.InvariantCulture)
&& m.Pword.Equals(password, StringComparison.InvariantCulture)
select m).ToList();
if (result.Any() && result.Count == 1)
{
var user = result.First();
user.Pword = null;
return user;
}
return null;
}
To mention it again: this is all server-side code.
I never want my service to send back the Pword field, even though it is not clear text. I just don't want that information to be in my client-side code.
This is why I'm setting this property to null when I find a User by comparing username and password.
Here's how _userRepo.GetModelsAsync() is implemented (don't confuse _userManager with _userRepo):
public async Task<IList<User>> GetModelsAsync()
{
return await MediaPlayer.GetModelsAsync<User>(_getQuery);
}
private readonly string _getQuery = "SELECT ID, Uname, DateCreated, Pword FROM dbo.[User] WITH(READUNCOMMITTED)";
And here is MediaPlayer.GetModelsAsync<T>(string, params DbParameter[]):
public static async Task<IList<T>> GetModelsAsync<T>(string query, params DbParameter[] parameters)
{
IList<T> models;
using (SqlConnection con = new SqlConnection(Builder.ConnectionString))
using (SqlCommand command = Db.GetCommand(query, CommandType.Text, parameters))
{
await con.OpenAsync();
command.Connection = con;
using (SqlDataReader dr = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess))
models = ReadModels<T>(dr);
}
return models;
}
This code works fine the first time it executes after publishing or restarting the service (the service is consumed by a WPF application).
But when FooService.LoginAsync(User) is called a second time, without publishing or restarting the service again, it throws a NullReferenceException in the LINQ query in _userManager.GetModelAsync, because Pword is null.
This is really strange to me because, as you can see, there is no logic that explicitly stores my data in memory.
Normally my code should execute a SQL query every time this method is called, but somehow it doesn't. It seems as though WCF does not get its data from my database but instead re-uses it from memory.
Can someone explain this behavior to me, and tell me what I can do about it?
Edit 26.09.2018
To add some more details:
The method _userManager.GetModelAsync(string, string) always gets called, and the same goes for _userRepo.GetModelsAsync. I did some file logging at different points in my server-side code. I also took the result of _userRepo.GetModelsAsync, iterated through every object in it, and logged Uname and Pword. Only Pword was null (I did this logging before running my LINQ query).
I also logged the parameters that _userManager.GetModelAsync(user.Uname, user.Pword) receives. user.Uname and user.Pword are not null.
I just noticed that this question was reposted. My diagnosis is the same:
What I am thinking right now, is that my service keeps my IList with the cleared Pword in memory and uses it the next time without performing a new sql query.
LINQ to SQL (and EF) reuse the same entity objects keyed on primary key. This is a very important feature.
Translate will give you preexisting objects if you use it to query an entity type. You can avoid that by querying with a DTO type (e.g. class UserDTO { public string UserName; }).
It is best practice to treat entity objects as a synchronized mirror of the database. Do not make temporary edits to them.
Make sure that your DataContext has the right scope. Normally, you want one context per HTTP request. All code inside one request should share one context and no context should ever be shared across requests.
So maybe there are two issues: you are modifying entities, and you are reusing a DataContext across requests.
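As a rough sketch of the DTO approach mentioned above (assuming a LINQ-queryable context; the names are illustrative):
// Project into a DTO so the identity map never hands back a cached, previously
// modified entity instance, and the password never leaves the query.
public class UserDto
{
    public int Id { get; set; }
    public string Uname { get; set; }
    public DateTime DateCreated { get; set; }
}

var users =
    (from u in context.Users
     where u.Uname == username && u.Pword == password
     select new UserDto
     {
         Id = u.Id,
         Uname = u.Uname,
         DateCreated = u.DateCreated
     }).ToList();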

How best to return async tasks to the caller

I'm inserting rows into Cassandra, although my question is not so much about Cassandra as about how best to use an async API in general. In a nutshell, I generate an id for a row in C# code, do the insert asynchronously, and want to return the id to the caller. The sample code below shows what I've tried so far, all of which works, but which is best? Are there alternatives or better solutions?
Say I have some class and all the Cassandra gubbins set up. This could equally be a file I'm serializing the class to; this isn't a Cassandra question.
class MyClass
{
public Guid Id { get; set; }
public string Text { get; set; }
}
ISession cassandraSession;
PreparedStatement preparedStatement;
My first attempt was to use async + await:
async Task<Guid> Insert(MyClass someObject)
{
someObject.Id = TimeUuid.NewId();
BoundStatement insert = preparedStatement.Bind(someObject.Id, someObject.Text);
await cassandraSession.ExecuteAsync(insert);
return someObject.Id;
}
After some research, my understanding is that this is sub-optimal because it will block on the await and create a whole new state machine, etc.
Next I tried this:
Task<Guid> Insert(MyClass someObject)
{
someObject.Id = TimeUuid.NewId();
BoundStatement insert = preparedStatement.Bind(someObject.Id, someObject.Text);
cassandraSession.ExecuteAsync(insert);
return Task.FromResult(someObject.Id);
}
Seems okay, no blocking, but could the caller receive the id and query before the insert has completed and not find the row? I tried a continuation:
Task<Guid> Insert(MyClass someObject)
{
someObject.Id = TimeUuid.NewId();
BoundStatement insert = preparedStatement.Bind(someObject.Id, someObject.Text);
return cassandraSession
.ExecuteAsync(insert)
.ContinueWith(t => someObject.Id);
}
That also seems to work, and it means the task the caller awaits includes the insert, so it should ensure the insert has completed.
Have I missed or misunderstood anything?
await doesn't block, so your statement is technically inaccurate. But I believe what you mean is "it does not continue execution", which is correct.
create a whole new state machine
Yes it does, but it is quite cheap. The await+async here is responsible for 4 small object allocations if I count correctly. That is not that significant. In particular it disappears in the noise due to the highly expensive database call.
Your attempt 2 has no chance of working as you recognized.
Attempt 3 works, but the code is of worse quality. Prefer 1.

AspNetCacheProfile attribute in inner method

I have an API method that gets a lot of data at application start-up.
90% of the data is relevant for all application users, and the other 10% needs to change based on the user id, the user's environment, and the app version.
To avoid calling the first method every time a user connects and making start-up slower, I added the AspNetCacheProfile attribute.
But in that situation I can't use the user id, environment, and version in this method, because the cache holds the data of the first user who called the method.
So I added a new method (the second one), set the AspNetCacheProfile attribute on it, and called it from the first one.
This way I can cache the general data and update the other 10% on each call.
I just wanted to know: will it work?
I was not sure, because the second method is not called directly; it is called from the first one.
[WebGet(UriTemplate = "/GetData")]
public APIResponse GetData()
{
    try
    {
        MyCachedResponse data = GetMyCachedResponse();
        foreach (var info in data.info)
        {
            if (Something)
            {
                // Update some specific values inside
            }
        }
        AppState.CurrentResponse.Data = data;
    }
    catch (Exception ex)
    {
        // Handle or log the error (ex) here
    }
    return AppState.CurrentResponse;
}
[AspNetCacheProfile("OneMinuteCaching")]
private MyCachedResponse GetMyCachedResponse()
{
return new MyCachedResponse(categories);
}
