Why is my gRPC so much slower than my Minimal API? - C#

Hey there, a bit of an odd question.
I've downloaded the code for this :
https://medium.com/geekculture/build-high-performant-microservices-using-grpc-and-net-6-adde158c5ac
Actual Code:
https://github.com/csehammad/gRPCDemoUsingNET6
Basically it streams 1 line at a time to a client; it takes about 1 minute to stream 1.5 million lines.
I tried replicating the same behavior with a Minimal API, and I'm not sure how,
but it takes 13 seconds to read and stream all 5 million records.
Shouldn't it be slower?
I think my code is badly messed up and probably has rotten logic all over the place 😄.
Any help would be swell.
My UI code:
`using HttpClient client = new();
using HttpResponseMessage response = await client.GetAsync(
    "http://localhost:5247/test",
    HttpCompletionOption.ResponseHeadersRead
).ConfigureAwait(false);
IAsyncEnumerable<Sales> sales = await response.Content.ReadFromJsonAsync<IAsyncEnumerable<Sales>>().ConfigureAwait(false);
var count = 0;
var watch = System.Diagnostics.Stopwatch.StartNew();
await foreach (var each in sales)
{
    Console.WriteLine(String.Format("New Order Received from {0}-{1}, Order ID = {2}", each.Country, each.Region, each.TotalRevenue));
}`
The Backend:
`app.MapGet("/test", () => MakeHttpCall());

IEnumerable<Sales> MakeHttpCall()
{
    var watch = System.Diagnostics.Stopwatch.StartNew();
    int count = 0;
    using (var reader = new StreamReader("path"))
    {
        while (watch.Elapsed < TimeSpan.FromSeconds(60) && !reader.EndOfStream)
        {
            var line = reader.ReadLine();
            var pieces = line.Split(',');
            var _model = new Sales();
            _model.Region = pieces[0];
            _model.Country = pieces[1];
            yield return _model;
        }
    }
}`
Also, the UI seems to start printing the text only when it's all done, meaning it's not really streaming.
(I removed some code regarding the model, watch, and model building -- non-relevant stuff.)
I was expecting it to work after hours of debugging, but I'm getting nowhere...
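The whole-response buffering is likely the `ReadFromJsonAsync<IAsyncEnumerable<Sales>>` call: it deserializes the entire body before handing anything back. `JsonSerializer.DeserializeAsyncEnumerable` is the System.Text.Json API that yields items as they are parsed. A minimal sketch of the idea; the `Sales` record shape and the sample payload are stand-ins, and in the real client the stream would come from `response.Content.ReadAsStreamAsync()` after a `GetAsync(..., HttpCompletionOption.ResponseHeadersRead)` call:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public record Sales(string Region, string Country);

public static class StreamingClient
{
    // Yields each Sales item as soon as it is parsed from the stream,
    // instead of waiting for the whole JSON array to download first.
    public static IAsyncEnumerable<Sales?> ReadSalesAsync(Stream json) =>
        JsonSerializer.DeserializeAsyncEnumerable<Sales>(
            json,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
}
```

With the real endpoint you would pass the response stream and `await foreach` over the result, printing each order as it arrives rather than after the download completes.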

Related

Cosmos DB: what will happen if I update some items while querying with a SkipToken?

If I query items from Cosmos DB with a SkipToken, like this pseudocode:
do {
    var page = Query();
    foreach (var item in page)
    {
        Update(item);
    }
} while (HasNextPage());
The pages I get may not be complete, which means I will miss some items.
But if I wait a moment after updating, like:
do {
    var page = Query();
    foreach (var item in page)
    {
        Update(item);
    }
    // difference here:
    WaitAMoment();
} while (HasNextPage());
then the error does not happen, and I get complete pages with everything I need.
So what is happening in this process?
You don't have to wait in code as such; this functionality is handled by Cosmos DB internally. Check out pagination in the Cosmos DB SDKs. For example's sake, I am adding code for handling server-side pagination in C# below (to give a gist of how it works):
private static async Task QueryPartitionedContainerInParallelAsync(Container container)
{
    List<Family> familiesSerial = new List<Family>();
    string queryText = "SELECT * FROM Families";

    // 0 maximum parallel tasks, effectively serial execution
    QueryRequestOptions options = new QueryRequestOptions() { MaxBufferedItemCount = 100 };
    options.MaxConcurrency = 0;
    using (FeedIterator<Family> query = container.GetItemQueryIterator<Family>(
        queryText,
        requestOptions: options))
    {
        while (query.HasMoreResults)
        {
            foreach (Family family in await query.ReadNextAsync())
            {
                familiesSerial.Add(family);
            }
        }
    }
    Assert("Parallel Query expected two families", familiesSerial.ToList().Count == 2);

    // 1 maximum parallel task, 1 dedicated asynchronous task to continuously make REST calls
    List<Family> familiesParallel1 = new List<Family>();
    options.MaxConcurrency = 1;
    using (FeedIterator<Family> query = container.GetItemQueryIterator<Family>(
        queryText,
        requestOptions: options))
    {
        while (query.HasMoreResults)
        {
            foreach (Family family in await query.ReadNextAsync())
            {
                familiesParallel1.Add(family);
            }
        }
    }
    Assert("Parallel Query expected two families", familiesParallel1.ToList().Count == 2);
    AssertSequenceEqual("Parallel query returns result out of order compared to serial execution", familiesSerial, familiesParallel1);

    // 10 maximum parallel tasks, a maximum of 10 dedicated asynchronous tasks to continuously make REST calls
    List<Family> familiesParallel10 = new List<Family>();
    options.MaxConcurrency = 10;
    using (FeedIterator<Family> query = container.GetItemQueryIterator<Family>(
        queryText,
        requestOptions: options))
    {
        while (query.HasMoreResults)
        {
            foreach (Family family in await query.ReadNextAsync())
            {
                familiesParallel10.Add(family);
            }
        }
    }
    Assert("Parallel Query expected two families", familiesParallel10.ToList().Count == 2);
    AssertSequenceEqual("Parallel query returns result out of order compared to serial execution", familiesSerial, familiesParallel10);
}
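As a complement: if you genuinely need to update documents while enumerating them, a common pattern is to snapshot the keys first and only then issue the updates, so the writes cannot shift the pages out from under the query. A minimal in-memory sketch of the idea; `pages` and `update` are hypothetical stand-ins, not Cosmos SDK calls:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SnapshotThenUpdate
{
    // Phase 1: page through the source and collect ids only (the snapshot).
    // Phase 2: update by id. Because enumeration finished before any write
    // happened, the updates cannot reshuffle pages and cause skipped items.
    public static int UpdateAll(IEnumerable<IReadOnlyList<int>> pages, Action<int> update)
    {
        var ids = pages.SelectMany(p => p).ToList(); // snapshot of all keys
        foreach (var id in ids)
            update(id);
        return ids.Count;
    }
}
```

The same two-phase shape works with `FeedIterator<T>`: drain the iterator into a list of ids first, then loop over that list doing the writes.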

Why is my .NET Core API cancelling requests?

I have an async method that is looped:
private Task<HttpResponseMessage> GetResponseMessage(Region region, DateTime startDate, DateTime endDate)
{
    var longLatString = $"q={region.LongLat.Lat},{region.LongLat.Long}";
    var startDateString = $"{startDateQueryParam}={ConvertDateTimeToApixuQueryString(startDate)}";
    var endDateString = $"{endDateQueryParam}={ConvertDateTimeToApixuQueryString(endDate)}";
    var url = $"http://api?key={Config.Key}&{longLatString}&{startDateString}&{endDateString}";
    return Client.GetAsync(url);
}
I then take the response and save it to my EF Core database; however, in some instances I get this exception message: The operation was canceled.
I really don't understand that. Is this a TCP handshake issue?
Edit:
For context, I am making many of these calls and passing each response to the method that writes to the DB (which is also so slow it's unbelievable):
private async Task<int> WriteResult(Response apiResponse, Region region)
{
    // Since the context is not thread safe, we ensure we have a new one for each insert:
    // a .NET Core app can insert data at the same time from different users, so using
    // separate instances of the context keeps this thread safe.
    using (var context = new DalContext(ContextOptions))
    {
        var batch = new List<HistoricalWeather>();
        foreach (var forecast in apiResponse.Forecast.Forecastday)
        {
            // avoid inserting duplicates
            var existingRecord = context.HistoricalWeather
                .FirstOrDefault(x => x.RegionId == region.Id &&
                                     IsOnSameDate(x.Date.UtcDateTime, forecast.Date));
            if (existingRecord != null)
            {
                continue;
            }
            var newHistoricalWeather = new HistoricalWeather
            {
                RegionId = region.Id,
                CelsiusMin = forecast.Day.Mintemp_c,
                CelsiusMax = forecast.Day.Maxtemp_c,
                CelsiusAverage = forecast.Day.Avgtemp_c,
                MaxWindMph = forecast.Day.Maxwind_mph,
                PrecipitationMillimeters = forecast.Day.Totalprecip_mm,
                AverageHumidity = forecast.Day.Avghumidity,
                AverageVisibilityMph = forecast.Day.Avgvis_miles,
                UvIndex = forecast.Day.Uv,
                Date = new DateTimeOffset(forecast.Date),
                Condition = forecast.Day.Condition.Text
            };
            batch.Add(newHistoricalWeather);
        }
        context.HistoricalWeather.AddRange(batch);
        var inserts = await context.SaveChangesAsync();
        return inserts;
    }
}
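On the slow write path: the `FirstOrDefault` duplicate check above issues one database query per forecast row. The usual fix is to fetch the existing `(RegionId, Date)` keys once up front and filter the batch in memory. A sketch with plain collections standing in for the `DbSet` (the names are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BatchFilter
{
    // Instead of querying the database once per forecast row, load the
    // (regionId, date) keys that already exist in one query, then filter
    // the incoming batch against a HashSet in memory. HashSet.Add also
    // deduplicates repeats within the incoming batch itself.
    public static List<(int RegionId, DateTime Date)> NewRowsOnly(
        IEnumerable<(int RegionId, DateTime Date)> incoming,
        IEnumerable<(int RegionId, DateTime Date)> existing)
    {
        var seen = new HashSet<(int, DateTime)>(existing);
        return incoming.Where(r => seen.Add((r.RegionId, r.Date))).ToList();
    }
}
```

With EF Core the `existing` set would come from a single projection query over `context.HistoricalWeather`, and only the filtered rows go into `AddRange`.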
Edit: I am making 150,000 calls. I know this is questionable, since it all goes into memory before even doing a save, but this is where I got to in trying to make this run faster... only I guess my actual writing code is blocking :/
var dbInserts = await Task.WhenAll(
    getTasks // the list of all API GET requests
        .Select(async x => {
            // parsed can be null if the GET failed
            var parsed = await ParseApixuResponse(x.Item1); // ReadContentAsync and just return the deserialized JSON
            return new Tuple<ApiResult, Region>(parsed, x.Item2);
        })
        .Select(async x => {
            var finishedGet = await x;
            if (finishedGet.Item1 == null)
            {
                return 0;
            }
            return await WriteResult(finishedGet.Item1, finishedGet.Item2);
        })
);
.NET Core has a DefaultConnectionLimit setting, as answered in the comments.
It limits outgoing connections to a specific domain, to ensure all ports are not used up.
I did my parallel work incorrectly, causing it to go over the limit (which everything I read says should not be 2 on .NET Core, but it was), and that caused connections to close before receiving responses.
I made it greater, did the parallel work correctly, then lowered it again.
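One way to "do the parallel work correctly" here is to gate the in-flight requests with a `SemaphoreSlim`, so no more connections are open at once than the limit allows. A minimal sketch; the `work` delegates are placeholders for the real `GetResponseMessage` calls, and `maxConcurrency` would be tuned alongside the connection limit:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledCalls
{
    // Caps the number of concurrently running operations. Each task must
    // acquire the semaphore before starting its work and releases it when
    // done, so at most maxConcurrency operations are in flight at a time.
    public static async Task<int[]> RunThrottledAsync(
        IEnumerable<Func<Task<int>>> work, int maxConcurrency)
    {
        using var gate = new SemaphoreSlim(maxConcurrency);
        var tasks = work.Select(async w =>
        {
            await gate.WaitAsync();
            try { return await w(); }
            finally { gate.Release(); }
        });
        return await Task.WhenAll(tasks);
    }
}
```

Results come back in the same order as the input sequence, so pairing them with their regions afterwards stays straightforward.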

Forcing an asynchronous parallel loop to stop immediately - C#

Here is my method for requesting information from Azure's Face API.
I finally realized that in order to make my application work best, a bunch of my security camera frames must be grabbed in parallel, then sent away to the neural network to be analyzed.
(Note: it's an Alexa custom skill, so it times out around 8-10 seconds.)
Because I grab a bunch of frames in parallel and asynchronously, there is no way to know which of the images will return a decent face detection. However, when a good detection is found and the request returns face data, there is no way to stop the rest of the information from returning.
This happens because the security camera images were sent in parallel; they are gone, and the info is coming back no matter what.
You'll see that I'm able to use "thread local variables" to capture information and send it back to the function-scoped "imageAnalysis" variable to serialize, which allows Alexa to describe people in the security image. But because the loop is parallel, it doesn't break right away.
It may only take a second or two, but I'm on a time limit thanks to Alexa's strict time-out policies.
There doesn't seem to be a way to break the parallel loop immediately...
Or is there?
The more time spent collecting the "imageAnalysis" data, the longer Alexa has to wait for a response. She doesn't wait long, so it's important to send as many images for analysis as possible before Alexa times out, while also staying under the Azure Face API limits.
public static async Task<List<Person>> DetectPersonAsync()
{
    ConfigurationDto config = Configuration.Configuration.GetSettings();
    string imageAnalysis;
    using (var cameraApi = new SecurityCameraApi())
    {
        byte[] imageData = cameraApi.GetImageAsByte(config.SecurityCameraUrl +
                                                    config.SecurityCameraStaticImage +
                                                    DateTime.Now);
        // Unable to get an image from the security camera
        if (!imageData.Any())
        {
            Logger.LogInfo(Logger.LogType.Info, "Unable to acquire image from Security Camera \n");
            return null;
        }
        Logger.LogInfo(Logger.LogType.Info, "Attempting Image Face Detection...\n");
        Func<string, bool> isEmptyOrErrorAnalysis = s => s.Equals("[]") || s.Contains("error");
        imageAnalysis = "[]";
        List<byte[]> savedImageList = cameraApi.GetListOfSavedImagesAsByte();
        if (savedImageList.Any())
        {
            Parallel.ForEach(savedImageList, new ParallelOptions
            {
                MaxDegreeOfParallelism = 50
            },
            async (image, loopState) =>
            {
                string threadLocalImageAnalysis = "[]";
                if (!loopState.IsStopped)
                    threadLocalImageAnalysis = await GetImageAnalysisAsync(image, config);
                if (!isEmptyOrErrorAnalysis(threadLocalImageAnalysis))
                {
                    imageAnalysis = threadLocalImageAnalysis;
                    loopState.Break();
                }
            });
        }
        // Don't do too many image analyses - or Alexa will time out.
        Func<List<byte[]>, int> detectionCount =
            savedImageListDetections => savedImageListDetections.Count > 5 ? 0 : 16;
        // Continue with detections of current camera image frames
        if (isEmptyOrErrorAnalysis(imageAnalysis))
        {
            Parallel.For(0, detectionCount(savedImageList), new ParallelOptions
            {
                MaxDegreeOfParallelism = 50
            },
            async (i, loopState) =>
            {
                imageData = cameraApi.GetImageAsByte(config.SecurityCameraUrl +
                                                     config.SecurityCameraStaticImage +
                                                     DateTime.Now);
                string threadLocalImageAnalysis = "[]";
                if (!loopState.IsStopped)
                    threadLocalImageAnalysis = await GetImageAnalysisAsync(imageData, config);
                if (!isEmptyOrErrorAnalysis(threadLocalImageAnalysis))
                {
                    imageAnalysis = threadLocalImageAnalysis;
                    loopState.Break();
                }
            });
        }
    }
    try
    {
        // Cognitive sense has found elements (people) in the image
        return new NewtonsoftJsonSerializer().DeserializeFromString<List<Person>>(imageAnalysis);
    }
    catch (Exception ex)
    {
        // No elements (people) detected from the camera stream
        Logger.LogInfo(Logger.LogType.Info,
            string.Format(
                "Microsoft Cognitive Sense Face Api Reports: \n{0} \nNo people in the CCTV Camera Image.\n",
                ex.Message));
        return new List<Person>(); // Empty
    }
}

private static async Task<string> GetImageAnalysisAsync(byte[] image, ConfigurationDto config)
{
    string json;
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key",
            config.MicrosoftCognitiveSenseApiKey);
        // Request parameters.
        const string queryString =
            "returnFaceId=true" +
            "&returnFaceLandmarks=false" +
            "&returnFaceAttributes=age,gender,accessories,hair";
        const string uri =
            "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?" + queryString;
        using (var content = new ByteArrayContent(image))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            HttpResponseMessage response = await client.PostAsync(uri, content);
            json = await response.Content.ReadAsStringAsync();
            try
            {
                Logger.LogInfo(Logger.LogType.Info, json);
            }
            catch
            {
            }
        }
    }
    return json;
}
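You cannot abort requests that are already in flight, but you can stop waiting for them: start every call with a shared `CancellationToken`, race the tasks with `Task.WhenAny`, and cancel the rest the moment one result is usable. A sketch of that shape; the `analyses` delegates are hypothetical stand-ins for `GetImageAnalysisAsync` (which would need to accept and honor the token, e.g. by passing it to `PostAsync`):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class FirstGoodResult
{
    // Starts all analyses at once and returns as soon as one of them
    // produces a usable result, cancelling the remaining calls cooperatively.
    public static async Task<string?> FirstMatchAsync(
        IEnumerable<Func<CancellationToken, Task<string>>> analyses,
        Func<string, bool> isGood)
    {
        using var cts = new CancellationTokenSource();
        var pending = analyses.Select(a => a(cts.Token)).ToList();
        while (pending.Count > 0)
        {
            var done = await Task.WhenAny(pending);
            pending.Remove(done);
            string result;
            try { result = await done; }
            catch (OperationCanceledException) { continue; } // a cancelled sibling
            if (isGood(result))
            {
                cts.Cancel(); // tell the remaining calls to stop
                return result;
            }
        }
        return null; // nothing usable came back
    }
}
```

Unlike `loopState.Break()`, this returns immediately on the first good detection instead of letting the remaining iterations drain, which is what the Alexa time budget needs.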

PostAsync() and WaitAll() break instantly

I'm working on an API that enables C# to communicate with and manage GNS projects easily. Now I'm looking for an efficient way to run and stop the projects. You can do it pretty easily with simple synchronous POST requests. However, this process takes some time, so I'm trying to make it asynchronous, since PostAsync lets you do that if I'm not mistaken.
Sadly, when I try to run my code it breaks badly. This is the part where the problem comes up:
// Change the status of the project (start or stop)
private void ChangeProjectStatus(string status)
{
    // First part of the URL
    string URLHeader = $"http://{host}:{port}/v2/projects/{projectID}/nodes";
    // Number of nodes
    int numNodes = nodes.Length;
    // Pack the content we will send
    string content = JsonConvert.SerializeObject(new Dictionary<string, string> { { "-d", "{}" } });
    ByteArrayContent byteContent = new ByteArrayContent(System.Text.Encoding.UTF8.GetBytes(content));
    byteContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    if (status.Equals("start"))
        Console.WriteLine("Activating all the nodes in the project...");
    else
        Console.WriteLine("Deactivating all the nodes in the project...");
    Task<System.Net.Http.HttpResponseMessage>[] tasks = new Task<System.Net.Http.HttpResponseMessage>[nodes.Length];
    for (ushort i = 0; i < nodes.Length; i++)
    {
        try
        {
            tasks[i] = HTTPclient.PostAsync($"{URLHeader}/{nodes[i].ID}/{status}", byteContent);
        }
        catch (Exception err)
        {
            Console.Error.WriteLine("Impossible to {2} node {0}: {1}", nodes[i].Name, err.Message, status);
        }
    }
    Task.WaitAll(tasks);
    Console.WriteLine("...ok");
}
The error I get (surfaced from the WaitAll() call, actually) is:
An item with the same key has already been added. Key: Content-Length
Any idea on how to fix it?

"hill climbing, change max number of threads 5" when debugging on my Android device in Xamarin.Forms when I load data

I load data from my DB, and when I push buttons to receive different data I get "hill climbing, change max number of threads 5" in the log, and the app is slow when I try to gather the data.
Any idea how to resolve this? Or is this even making the app slower? It sure seems like something is spooky, because it takes a few extra seconds to load data on an Android device compared to iOS.
This is my code:
static public async Task<JObject> getContacts()
{
    var httpClientRequest = new HttpClient();
    try
    {
        var result = await httpClientRequest.GetAsync("http://address.com");
        var resultString = await result.Content.ReadAsStringAsync();
        var jsonResult = JObject.Parse(resultString);
        return jsonResult;
    }
    catch
    {
        return null;
    }
}
And how I use it:
async void createData(object s, EventArgs a)
{
    var getContacts = await parseAPI.getContacts();
    if (getContacts != null)
    {
        listview.ItemsSource = null;
        theList = new List<items>();
        foreach (var items in getContacts["results"])
        {
            theList.Add(new items()
            {
                Name = items["Name"].ToString(),
                Number = items["Number"].ToString()
            });
        }
    }
    listview.ItemsSource = theList;
}
"Hill climbing" is fairly common within the Mono runtime, as the adaptive thread pool count increases (or decreases) based upon current demand.
Personally, I doubt that a thread count of 5 is causing any problems within your app. Seeing 30, 50, 100+ could/would be a problem, as the thrashing of context switching can/will bring an app (and OS) to its knees.
In terms of OS "speed", the difference between the iOS simulator and an Android emulator is huge. Instrument and perf-test on actual devices.
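One concrete thing worth checking in the posted code: getContacts creates a new HttpClient on every call. A single shared instance avoids repeatedly spinning up handlers (and their sockets and worker threads), which is one common source of thread pool churn on mobile. A sketch of the same null-on-failure contract with a shared client (class and method names are hypothetical, and the URL is whatever your endpoint is):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ContactsApi
{
    // Shared for the app's lifetime; HttpClient is designed to be reused
    // rather than created per request.
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string?> GetContactsJsonAsync(string url)
    {
        try
        {
            var response = await Client.GetAsync(url);
            return await response.Content.ReadAsStringAsync();
        }
        catch
        {
            // Same null-on-failure contract as the original getContacts
            return null;
        }
    }
}
```

The caller would then parse the returned string with `JObject.Parse` exactly as before.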
The Mono Heuristic thread pool reference:
https://github.com/mono/mono/blob/master/mono/metadata/threadpool-ms.c
hill_climbing_change_thread_count (gint16 new_thread_count, ThreadPoolHeuristicStateTransition transition)
{
    ThreadPoolHillClimbing *hc;
    g_assert (threadpool);
    hc = &threadpool->heuristic_hill_climbing;
    mono_trace (G_LOG_LEVEL_INFO, MONO_TRACE_THREADPOOL, "[%p] hill climbing, change max number of threads %d", mono_native_thread_id_get (), new_thread_count);
    hc->last_thread_count = new_thread_count;
    hc->current_sample_interval = rand_next (&hc->random_interval_generator, hc->sample_interval_low, hc->sample_interval_high);
    hc->elapsed_since_last_change = 0;
    hc->completions_since_last_change = 0;
}
