C# - Json Deserialization performance in Blazor WebAssembly is slow

I have a Blazor WebAssembly application that pulls 5,000 rows. Fetching the data takes less than a second, but deserialization takes 4-6 seconds.
protected override async Task OnInitializedAsync()
{
    Http.DefaultRequestHeaders.Authorization =
        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", tokenInfo.Token);
    try
    {
        List<Foo> fooList = await Http.GetFromJsonAsync<List<Foo>>($"Foo");
    }
    catch (Exception ex)
    {
    }
}
I've tried running it in Release mode.
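No answer for this question appears on this page. One commonly suggested mitigation (a sketch under assumptions, not a verified fix for this exact case) is System.Text.Json source generation, which sidesteps the reflection-based deserialization that is particularly slow under the WebAssembly interpreter. FooJsonContext below is a hypothetical name; Foo is the question's own type, and .NET 6+ is assumed:

using System.Text.Json;
using System.Text.Json.Serialization;

// The source generator emits serialization metadata for List<Foo> at compile time.
[JsonSerializable(typeof(List<Foo>))]
public partial class FooJsonContext : JsonSerializerContext
{
}

// In OnInitializedAsync, deserialize from the response stream with the
// generated metadata instead of the reflection-based GetFromJsonAsync path:
using var response = await Http.GetAsync("Foo");
using var stream = await response.Content.ReadAsStreamAsync();
List<Foo> fooList = await JsonSerializer.DeserializeAsync(
    stream, FooJsonContext.Default.ListFoo);

Enabling ahead-of-time (AOT) compilation for the Blazor WASM project is another commonly cited option, at the cost of a larger download.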

Related

Hangfire job cancellation doesn't happen

I have a .NET Core 3.1 Web API project. There is a method that calls a separate API to get some values.
This is my code. First, define an instance variable:
CancellationTokenSource tokenSource = new CancellationTokenSource();
Then loop through the values. This happens in a separate method that the user calls whenever he wants:
foreach (var no in nos)
{
    BackgroundJob.Enqueue(() => webServiceCall(no, tokenSource)); // loop and hand jobs to Hangfire
}
The webServiceCall implementation:
public async Task webServiceCall(int no, CancellationTokenSource tokenSource)
{
    using (HttpClient client = new HttpClient())
    {
        try
        {
            var result = await client.GetAsync("https://localhost:44359/api/process/" + no, tokenSource.Token);
            if (result.IsSuccessStatusCode)
            {
                var val = await result.Content.ReadAsStringAsync();
                System.Diagnostics.Debug.WriteLine("Returned Value : " + val);
                // logic code
            }
        }
        catch (TaskCanceledException ex)
        {
            System.Diagnostics.Debug.WriteLine("HTTP Get request canceled." + ex.Message);
        }
    }
}
This code works, but I need to cancel the above batch from a separate API. I tried something like this, and it doesn't work:
[HttpGet("CancellingJob")]
public async Task HagfireCancel()
{
    tokenSource.Cancel();
}
But when I call this, all the jobs run as usual and nothing is cancelled... Can anyone help me cancel Hangfire jobs?
I'm using Hangfire 1.7.0.
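No answer is recorded for this question. Two observations, offered as assumptions rather than a confirmed diagnosis: controllers are created per request, so the tokenSource cancelled in HagfireCancel is not the instance the jobs captured; and Hangfire serializes job arguments to JSON, so a CancellationTokenSource does not round-trip to the worker anyway. A sketch of Hangfire's built-in mechanism (deleting a job signals a CancellationToken parameter; supported since Hangfire 1.7):

using System.Collections.Concurrent;
using System.Threading;
using Hangfire;

// Hypothetical shared store for the job ids; in a real app this would live
// in some shared state, not a static field on a controller.
static readonly ConcurrentBag<string> JobIds = new ConcurrentBag<string>();

foreach (var no in nos)
{
    // CancellationToken.None is a placeholder; Hangfire injects a live token.
    JobIds.Add(BackgroundJob.Enqueue(() => webServiceCall(no, CancellationToken.None)));
}

// The job method takes a CancellationToken instead of a CancellationTokenSource:
public async Task webServiceCall(int no, CancellationToken token)
{
    using (HttpClient client = new HttpClient())
    {
        var result = await client.GetAsync("https://localhost:44359/api/process/" + no, token);
        // ... as in the question
    }
}

// Deleting the queued jobs signals the injected tokens:
[HttpGet("CancellingJob")]
public IActionResult HangfireCancel()
{
    foreach (var id in JobIds)
        BackgroundJob.Delete(id);
    return Ok();
}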

HttpClient.GetAsync fails after running for some time with TaskCanceledException "A task was cancelled" [duplicate]

It works fine when we have one or two tasks, but it throws "A task was cancelled" when we have more tasks listed.
List<Task> allTasks = new List<Task>();
allTasks.Add(....);
allTasks.Add(....);
Task.WaitAll(allTasks.ToArray(), configuration.CancellationToken);
private static Task<T> HttpClientSendAsync<T>(string url, object data, HttpMethod method, string contentType, CancellationToken token)
{
    HttpRequestMessage httpRequestMessage = new HttpRequestMessage(method, url);
    HttpClient httpClient = new HttpClient();
    httpClient.Timeout = new TimeSpan(Constants.TimeOut);
    if (data != null)
    {
        byte[] byteArray = Encoding.ASCII.GetBytes(Helper.ToJSON(data));
        MemoryStream memoryStream = new MemoryStream(byteArray);
        httpRequestMessage.Content = new StringContent(new StreamReader(memoryStream).ReadToEnd(), Encoding.UTF8, contentType);
    }
    return httpClient.SendAsync(httpRequestMessage).ContinueWith(task =>
    {
        var response = task.Result;
        return response.Content.ReadAsStringAsync().ContinueWith(stringTask =>
        {
            var json = stringTask.Result;
            return Helper.FromJSON<T>(json);
        });
    }).Unwrap();
}
There are two likely reasons that a TaskCanceledException would be thrown:
Something called Cancel() on the CancellationTokenSource associated with the cancellation token before the task completed.
The request timed out, i.e. didn't complete within the timespan you specified on HttpClient.Timeout.
My guess is it was a timeout. (If it was an explicit cancellation, you probably would have figured that out.) You can be more certain by inspecting the exception:
try
{
    var response = task.Result;
}
catch (TaskCanceledException ex)
{
    // Check ex.CancellationToken.IsCancellationRequested here.
    // If false, it's pretty safe to assume it was a timeout.
}
I ran into this issue because my Main() method wasn't waiting for the task to complete before returning, so the Task<HttpResponseMessage> was being cancelled when my console program exited.
C# ≥ 7.1
You can make the main method asynchronous and await the task.
public static async Task Main()
{
    Task<HttpResponseMessage> myTask = sendRequest(); // however you create the Task
    HttpResponseMessage response = await myTask;
    // process the response
}
C# < 7.1
The solution was to call myTask.GetAwaiter().GetResult() in Main() (from this answer).
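For illustration, a sketch assuming the same sendRequest as in the example above:

public static void Main()
{
    // GetAwaiter().GetResult() blocks until the task completes and rethrows
    // any exception directly instead of wrapping it in an AggregateException.
    HttpResponseMessage response = sendRequest().GetAwaiter().GetResult();
    // process the response
}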
var clientHttp = new HttpClient();
clientHttp.Timeout = TimeSpan.FromMinutes(30);
The above is the best approach for waiting on a large request. The 30 minutes is an arbitrary value; you can specify whatever timeout you want. The request will not wait the full 30 minutes if it gets a result sooner; the setting only means the request is allowed up to 30 minutes of processing time. We ran into the "Task was cancelled" error when requesting large amounts of data.
Another possibility is that the result is not awaited on the client side. This can happen if any one method on the call stack does not use the await keyword to wait for the call to be completed.
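A minimal illustration of that failure mode (hypothetical code, not from the original answer):

// Buggy: the task is never awaited, so the caller cannot observe completion
// and the in-flight request may be cancelled when the enclosing scope ends.
void Buggy(HttpClient client)
{
    client.GetAsync("https://example.com/"); // fire-and-forget
}

// Fixed: await at every level of the call stack.
async Task FixedAsync(HttpClient client)
{
    HttpResponseMessage response = await client.GetAsync("https://example.com/");
    // use the response
}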
Promoting #JobaDiniz's comment to an answer:
Do not do the obvious thing and dispose the HttpClient instance, even though the code "looks right":
Task<HttpResponseMessage> Method()
{
    using (var client = new HttpClient())
        return client.GetAsync(request); // client is disposed before the request completes
}
Disposing the HttpClient instance can cause subsequent HTTP requests, started by other instances of HttpClient, to be cancelled!
The same happens with C#'s new using-declaration (RAII-style) syntax; slightly less obvious:
Task<HttpResponseMessage> Method()
{
    using var client = new HttpClient();
    return client.GetAsync(request); // client is disposed before the request completes
}
Instead, the correct approach is to cache a static instance of HttpClient for your app or library, and reuse it:
static HttpClient client = new HttpClient();

Task<HttpResponseMessage> Method()
{
    return client.GetAsync(request);
}
(The Async() request methods are all thread safe.)
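In ASP.NET Core apps, a related alternative (not part of the original answer) is IHttpClientFactory, which pools message handlers so clients can be created freely without socket exhaustion or stale-DNS problems. A minimal sketch, assuming a standard Startup.ConfigureServices:

// Registration (requires the Microsoft.Extensions.Http package):
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient();
}

// Consumption: inject the factory and create short-lived clients; the
// underlying handlers are pooled and reused.
public class MyService
{
    private readonly IHttpClientFactory _factory;

    public MyService(IHttpClientFactory factory) => _factory = factory;

    public Task<HttpResponseMessage> GetAsync(string url)
        => _factory.CreateClient().GetAsync(url);
}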
In my .NET Core 3.1 applications I am hitting two problems whose inner cause was a timeout exception:
1. In one case I get an AggregateException whose inner exception is a TimeoutException.
2. The other case is a TaskCanceledException.
My solution is:
catch (Exception ex)
{
    if (ex.InnerException is TimeoutException)
    {
        ex = ex.InnerException;
    }
    else if (ex is TaskCanceledException tce)
    {
        // CancellationToken is a struct and can never be null; a token that
        // was never cancelled points to a timeout rather than a user cancel.
        if (!tce.CancellationToken.IsCancellationRequested)
        {
            ex = new TimeoutException("Timeout occurred");
        }
    }
    Logger.Fatal(string.Format("Exception at calling {0} :{1}", url, ex.Message), ex);
}
In my situation, the controller method was not marked async, while the method it called was.
So I guess it's important to use async/await all the way up to the top level to avoid issues like these.
I was making a simple synchronous call instead of an asynchronous one. As soon as I added await and made the method async, it started working fine.
public async Task<T> ExecuteScalarAsync<T>(string query, object parameter = null, CommandType commandType = CommandType.Text) where T : IConvertible
{
    using (IDbConnection db = new SqlConnection(_con))
    {
        return await db.ExecuteScalarAsync<T>(query, parameter, null, null, commandType);
    }
}
Another reason can be that you are running the service (API) with a breakpoint set, and the service code is stopped at that breakpoint (the Visual Studio solution shows Debugging instead of Running) when the client hits the API. If the service is paused at a breakpoint, just press F5 in VS.

.NET Core API seems to deadlock

We have a .NET Core API running in production which can run stable for days or even weeks and then suddenly freezes. Such a freeze can even happen multiple times a day, completely random. What happens: the code seems to be frozen and doesn't accept any new requests. No new requests are logged, the thread count rises sky-high and the memory rises steadily until it's maxed out.
I created a memory dump to analyze. It tells me that most threads are waiting for a lock to be released at a specific function, looking like a deadlock. I analysed this function and cannot see why this would cause issues. Can someone help me out? Obviously I suspect AsParallel() to be thread unsafe, but the internet says no, it is thread safe.
public async Task<bool> TryStorePropertiesAsync(string sessionId, Dictionary<string, string> keyValuePairs, int ttl = 1500)
{
    try
    {
        await Task.WhenAll(keyValuePairs.AsParallel().Select(async item =>
        {
            var doc = await _cosmosDbRepository.GetItemByKeyAsync(GetId(sessionId, item.Key), sessionId) ?? new Document();
            doc.SetPropertyValue("_partitionKey", sessionId);
            doc.SetPropertyValue("key", GetId(sessionId, item.Key));
            doc.SetPropertyValue("name", item.Key.ToLowerInvariant());
            doc.SetPropertyValue("value", item.Value);
            doc.TimeToLive = ttl;
            await _cosmosDbRepository.UpsertDocumentAsync(doc, "_partitionKey");
        }));
        return true;
    }
    catch (Exception ex)
    {
        ApplicationInsightsLogger.TrackException(ex, new Dictionary<string, string>
        {
            { "sessionID", sessionId },
            { "action", "TryStoreItems" }
        });
        return false;
    }
}
The code has serious issues. For, say, 100 items it fires off 100 concurrent operations, 4/8 at a time. The code inside the loop reads a document from CosmosDB, sets all its properties, then calls a method named similarly to DocumentClient.UpsertDocumentAsync, which doesn't require pre-loading anything. Without knowing what _cosmosDbRepository is and what its methods do, one can only guess. It's possible it creates extra conflicts, though, by trying to lock stuff while the (probably useless) load/update cycle takes place.
For starters, AsParallel() is only meant for data parallelism: partition some data in memory and use as many workers as there are cores to crunch each partition. There's no data here though, just calls to async operations. That's why, for 100 items, this code will fire off 100 concurrent tasks.
That could hit any number of CosmosDB throttling limits, even if it doesn't cause concurrency conflicts. It could also lead to networking issues, as the same cable is used for all those concurrent connections.
Not taking CosmosDB into account, the correct way to make lots of calls to a remote service is to queue them and execute them with a limited number of workers. This is very easy to do with .NET's ActionBlock. The code could change to something like this:
class Payload
{
    public string SessionId { get; set; }
    public string Key { get; set; }
    public string Name { get; set; }
    public string Value { get; set; }
    public int TTL { get; set; }
}

// Allow only 10 concurrent upserts
var options = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 10
};
var upsertBlock = new ActionBlock<Payload>(myPosterAsync, options);
foreach (var payload in payloads)
{
    upsertBlock.Post(payload);
}
// Tell the block we're done
upsertBlock.Complete();
// Await for all queued operations to complete
await upsertBlock.Completion;
Where myPosterAsync contains the posting code:
async Task myPosterAsync(Payload item)
{
    try
    {
        var doc = await _cosmosDbRepository.GetItemByKeyAsync(GetId(item.SessionId, item.Key), item.SessionId)
                  ?? new Document();
        doc.SetPropertyValue("_partitionKey", item.SessionId);
        doc.SetPropertyValue("key", GetId(item.SessionId, item.Key));
        doc.SetPropertyValue("name", item.Name);
        doc.SetPropertyValue("value", item.Value);
        doc.TimeToLive = item.TTL;
        await _cosmosDbRepository.UpsertDocumentAsync(doc, "_partitionKey");
    }
    catch (Exception ex)
    {
        // Handle the error in some way, e.g. log it
        ApplicationInsightsLogger.TrackException(ex, new Dictionary<string, string>
        {
            { "sessionID", item.SessionId },
            { "action", "TryStoreItems" }
        });
    }
}

Asynchronous WCF Timeout in WPF App

I've got a WPF application that needs to process a list of items asynchronously. As of right now, I loop through the list of items, and for each one, I await the operation. Everything seems to be fine 90% of the time. The other 10% of the time, I'm getting timeout errors (has exceeded the allotted timeout of 00:01:00). What's weirding me out is that the operations seem to have completed on the backend. Everything looks fine in the database. No errors on the server side. I'm only getting this timeout error on my WPF client.
I'm awaiting my WCF call on each one, so it isn't timing out on the individual call. Each individual call doesn't take long (maybe a few seconds), however the entire operation could take up to a minute or so.
private async void MyClickEvent(object sender, RoutedEventArgs e)
{
    await HandleProcessing(MyListOfIds);
}

private async Task HandleProcessing(List<int> Ids)
{
    foreach (var id in Ids)
    {
        try
        {
            // This function makes 2 WCF calls; both are awaited
            int i = await MyFunctionThatMakesWCFCalls(id);
        }
        catch (Exception e)
        {
            // log error
        }
    }
}

private async Task<int> MyFunctionThatMakesWCFCalls(int id)
{
    bool success = await _serviceClient.MyCall(id);
    if (success)
    {
        return await _serviceClient.MySecondCall(id);
    }
    return 0;
}
So, I suppose each iteration in the loop could potentially take a few seconds (sometimes up to 10). Like I said, even though I get the exception in my log, the data in the backend seems to all be there. I'm not sure how to get around this false positive error. Any help would be greatly appreciated.
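No answer is recorded for this question. Since the quoted message names the default one-minute WCF timeout, one avenue worth checking (an assumption, not a confirmed fix) is the client-side channel timeout; a sketch, assuming _serviceClient is a generated ClientBase<T> proxy as in the question:

// Raise the per-operation timeout on the proxy's channel. Ten minutes is an
// arbitrary illustrative value, not a recommendation.
_serviceClient.InnerChannel.OperationTimeout = TimeSpan.FromMinutes(10);

// Or, when the binding is built in code, raise its timeouts instead:
var binding = new System.ServiceModel.BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(10),   // covers the full send/receive round trip
    ReceiveTimeout = TimeSpan.FromMinutes(10)
};

If the operations genuinely complete on the backend, it may also be worth confirming that the replies are not simply arriving after the channel has already timed out.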

Parallel.ForEach and blocking thread

I created a Windows Service application with the Quartz.NET library to schedule jobs for reporting purposes. The main part of the application fetches data from databases at different locations (~260 of them), so I decided to use Parallel.ForEach to fetch the data in parallel and store it in a central location.
In a Quartz.NET job I run a static method from my utility class that does the parallel processing.
Utility class:
public class Helper
{
    public static ConcurrentQueue<Exception> KolekcijaGresaka = new ConcurrentQueue<Exception>(); // Thread-safe

    public static void Start()
    {
        List<KeyValuePair<string, string>> podaci = Aktivne(); // List of data for processing (260 items)
        ParallelOptions opcije = new ParallelOptions { MaxDegreeOfParallelism = 50 };
        Parallel.ForEach(podaci, opcije, p =>
        {
            UzmiPodatke(p.Key, p.Value, 2000);
        });
    }

    public static void UzmiPodatke(string oznaka, string ipAdresa, int pingTimeout)
    {
        string datumTrenutneString = DateTime.Now.ToString("d.M.yyyy");
        string datumPrethodneString = DatumPrethodneGodineString();
        string sati = DateTime.Now.ToString("HH");
        // Ping:
        Ping ping = new Ping();
        PingReply reply = ping.Send(ipAdresa, pingTimeout);
        // If it is online, call the method that copies the data:
        if (reply.Status == IPStatus.Success)
        {
            KopirajPodatke(oznaka, ipAdresa, datumTrenutneString, datumPrethodneString, sati, "TBL_DATA");
        }
    }
    public static void KopirajPodatke(string oznaka, string ipAdresa, string datumTrenutneString, string datumPrethodneString, string sati, string tabelaDestinacija)
    {
        string lanString = "Database=" + ipAdresa + "://DBS//custdb.gdb; User=*******; Password=*******; Dialect=3;";
        IDbConnection lanKonekcija = new FbConnection(lanString);
        IDbCommand lanCmd = lanKonekcija.CreateCommand();
        try
        {
            lanKonekcija.Open();
            lanCmd.CommandText = "query ...";
            DataTable podaciTabela = new DataTable();
            // Get data from the remote location:
            try
            {
                podaciTabela.Load(lanCmd.ExecuteReader());
            }
            catch (Exception)
            {
                throw; // rethrow without resetting the stack trace
            }
            // Save data:
            if (podaciTabela.Rows.Count > 0)
            {
                using (SqlConnection sqlKonekcija = new SqlConnection(Konekcije.DB("Podaci")))
                {
                    sqlKonekcija.Open();
                    using (SqlBulkCopy bulkcopy = new SqlBulkCopy(sqlKonekcija))
                    {
                        bulkcopy.DestinationTableName = tabelaDestinacija;
                        bulkcopy.BulkCopyTimeout = 5; // seconds
                        bulkcopy.ColumnMappings.Add("A", "A");
                        bulkcopy.ColumnMappings.Add("B", "B");
                        bulkcopy.ColumnMappings.Add("C", "C");
                        bulkcopy.ColumnMappings.Add("D", "D");
                        try
                        {
                            bulkcopy.WriteToServer(podaciTabela);
                        }
                        catch (Exception)
                        {
                            throw; // rethrow without resetting the stack trace
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            KolekcijaGresaka.Enqueue(ex);
        }
        finally
        {
            lanCmd.Dispose();
            lanKonekcija.Close();
            lanKonekcija.Dispose();
        }
    }
}
The application works most of the time (the job executes 4 times per day), but sometimes it gets stuck and hangs (usually when ~200 items have been processed in parallel), blocking the main thread so it never finishes. It seems like one of the threads from the parallel processing gets blocked and prevents the main thread from completing. Can this be caused by deadlocks?
How can I ensure that no single thread blocks the application's execution (even when fetching data fails)? What can go wrong with the code above?
Parallel.ForEach is not asynchronous; it only executes each iteration in parallel, so it waits for every operation to finish before proceeding. If you truly do not care about waiting for all operations to finish before returning to the caller, then try using the Task factory to schedule these; it uses the thread pool by default.
i.e.
foreach (var p in podaci)
{
    Task.Factory.StartNew(() => UzmiPodatke(p.Key, p.Value, 2000));
}
Or use ThreadPool.QueueUserWorkItem or BackgroundWorker, whatever you're familiar with and is applicable to the behavior you want.
This probably won't solve all your problems, just the unresponsive program. Most likely, if there is actually a problem with your code, one of your Tasks will eventually throw an exception which will crash your program if unhandled. Or worse yet, you will have "stuck" tasks just sitting there hogging resources if the Task(s) never finish. However, it may just be the case that occasionally one of these takes extremely long. In this case, you can handle this however you want (cancellation of long task, make sure all previously scheduled tasks complete before scheduling more, etc.), and the Task Parallel Library can support all these cases with some minor modifications.
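As one concrete illustration of the "limit the concurrency and await everything" option mentioned above (a sketch, not part of the original answer), a SemaphoreSlim can throttle the work while still letting the caller observe completion and failures:

// Throttle to 50 concurrent items, mirroring MaxDegreeOfParallelism = 50
// from the question, but with awaitable tasks instead of Parallel.ForEach.
var throttle = new SemaphoreSlim(50);
var tasks = podaci.Select(async p =>
{
    await throttle.WaitAsync();
    try
    {
        await Task.Run(() => UzmiPodatke(p.Key, p.Value, 2000));
    }
    finally
    {
        throttle.Release();
    }
}).ToList();
await Task.WhenAll(tasks); // exceptions surface here instead of silently blocking a worker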
