Throttle parallel requests to remote API - C#

I'm working on an ASP.NET MVC application that uses the Google Maps Geocoding API. In a single batch there may be up to 1000 queries to submit to the Geocoding API, so I'm trying to use a parallel processing approach to improve performance. The method responsible for starting a process for each core is:
public void GeoCode(Queue<Job> qJobs, bool bolKeepTrying, bool bolSpellCheck, Action<Job, bool, bool> aWorker)
{
    // Get the number of processors, initialize the number of remaining
    // threads, and set the starting point for the iteration.
    int intCoreCount = Environment.ProcessorCount;
    int intRemainingWorkItems = intCoreCount;
    using (ManualResetEvent mreController = new ManualResetEvent(false))
    {
        // Create each of the work items.
        for (int i = 0; i < intCoreCount; i++)
        {
            ThreadPool.QueueUserWorkItem(delegate
            {
                Job jCurrent = null;
                while (qJobs.Count > 0)
                {
                    lock (qJobs)
                    {
                        if (qJobs.Count > 0)
                        {
                            jCurrent = qJobs.Dequeue();
                        }
                        else
                        {
                            jCurrent = null;
                        }
                    }
                    // Guard against the queue emptying between the while
                    // check and the lock.
                    if (jCurrent == null)
                    {
                        break;
                    }
                    aWorker(jCurrent, bolKeepTrying, bolSpellCheck);
                }
                if (Interlocked.Decrement(ref intRemainingWorkItems) == 0)
                {
                    mreController.Set();
                }
            });
        }
        // Wait for all threads to complete.
        mreController.WaitOne();
    }
}
This is based on a patterns document I found on Microsoft's parallel computing web site.
The problem is that the Google API has a limit of 10 QPS (enterprise customer), which I'm hitting, and then I get HTTP 403 errors. Is there a way I can benefit from parallel processing but limit the requests I'm making? I've tried using Thread.Sleep, but it doesn't solve the problem. Any help or suggestions would be very much appreciated.

It sounds like you're missing some sort of max-in-flight parameter. Rather than just looping while there are jobs in the queue, you need to throttle your submissions based on jobs finishing.
Seems like your algorithm should be something like the following:
Submit N jobs (where N is your max in flight).
Wait for a job to complete, and if the queue is not empty, submit the next job.
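A minimal sketch of that pattern, using SemaphoreSlim to cap the number of in-flight requests (maxInFlight and the GeocodeJob call are stand-ins for your own limit and aWorker delegate):
// Max-in-flight throttle: at most maxInFlight jobs run at once.
// GeocodeJob is a hypothetical stand-in for your aWorker call.
int maxInFlight = 10;
using (SemaphoreSlim throttle = new SemaphoreSlim(maxInFlight))
using (CountdownEvent done = new CountdownEvent(qJobs.Count))
{
    while (qJobs.Count > 0)
    {
        Job job = qJobs.Dequeue();
        throttle.Wait(); // blocks once maxInFlight jobs are in progress
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                GeocodeJob(job); // your aWorker call goes here
            }
            finally
            {
                throttle.Release(); // frees a slot so the next job can start
                done.Signal();
            }
        });
    }
    done.Wait(); // wait for every job to finish
}
Note that capping in-flight requests is not quite the same as capping queries per second; if each request completes in well under a second, you may still need a short delay or a timer-based token bucket to stay under the 10 QPS limit.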


.NET/C# multithreading for better performance [closed]

I have a web application that performs industrial scheduling calculations. I'm trying to log events to an Azure Cosmos DB table after every update that happens to the schedule, without affecting the application's performance (screen loading time).
That means I want to fire the log method in the background, so the end user will not notice it (no freeze or extra loading time) and the UI does not have to wait for the operation to finish.
I added the following C# code just after the whole calculation finishes:
private List<JPIEventEntity> batch = new List<JPIEventEntity>();
private List<List<JPIEventEntity>> batchesList = new List<List<JPIEventEntity>>();

Thread newThread = new Thread(() => myJPIEventHandler.checkForJPIEventsToSend(customer, author, model));
newThread.Start();

/*
 * Check if there are any changes or updates in the calculations and log their events.
 */
internal void checkForJPIEventsToSend(JPIBaseCustomer customer, JPIBaseUser author, SchedulingModel afterModel)
{
    myCustomer = customer;
    myUser = author;

    // Looking for deleted Jobs
    foreach (Job beforeJob in myBeforeModel.Jobs)
    {
        if (!afterModel.Jobs.Any(x => x.Guid == beforeJob.Guid))
        {
            // Looking for deleted Tasks and log the deletion
            foreach (Operation beforeOp in beforeJob.Operations)
            {
                //if (!afterJob.Operations.Any(x => x.Guid == beforeOp.Guid))
                logTaskEvent(EventType.Delete, beforeOp, "", "");
            }
            // Log Job deletion
            logJobEvent(EventType.Delete, beforeJob, "", "");
        }
    }

    // Comparison
    foreach (Job afterJob in afterModel.Jobs)
    {
        if (myBeforeModel.Jobs.Any(x => x.Guid == afterJob.Guid))
        {
            Job beforeJob = myBeforeModel.Jobs.First(x => x.Guid == afterJob.Guid);
            if (beforeJob.Name != afterJob.Name)
                logJobEvent(EventType.NameChanged, afterJob, beforeJob.Name, afterJob.Name);
            if (beforeJob.ReleaseDate != afterJob.ReleaseDate)
                logJobEvent(EventType.ReleaseDateChanged, afterJob, beforeJob.ReleaseDate, afterJob.ReleaseDate);
            if (beforeJob.DueDate != afterJob.DueDate)
                logJobEvent(EventType.DueDateChanged, afterJob, beforeJob.DueDate, afterJob.DueDate);
            if (beforeJob.IsDueDateExceeded != afterJob.IsDueDateExceeded)
                logJobEvent(EventType.DueDateExceededChanged, afterJob, beforeJob.IsDueDateExceeded.ToString(), afterJob.IsDueDateExceeded.ToString());
            if (beforeJob.ProcessingState != afterJob.ProcessingState)
            {
                logJobEvent(EventType.StatusChanged, afterJob,
                    convertProcessingStateToStatus(beforeJob.ProcessingState.ToString()), convertProcessingStateToStatus(afterJob.ProcessingState.ToString()));
            }
            if (beforeJob.SequenceNumber != afterJob.SequenceNumber && afterJob.ProcessingState != JobProcessingState.Finished)
                logJobEvent(EventType.SequenceNumberChanged, afterJob, beforeJob.SequenceNumber, afterJob.SequenceNumber);
            if (beforeJob.CustomQuantity != afterJob.CustomQuantity)
                logJobEvent(EventType.QuantityChanged, afterJob, beforeJob.CustomQuantity, afterJob.CustomQuantity);
            DateTime? beforeStart = beforeJob.ProcessingStart != null ? beforeJob.ProcessingStart : beforeJob.PlannedStart;
            DateTime? afterStart = afterJob.ProcessingStart != null ? afterJob.ProcessingStart : afterJob.PlannedStart;
            if (beforeStart != afterStart)
                logJobEvent(EventType.StartDateChanged, afterJob, beforeStart, afterStart);
            DateTime? beforeEnd = beforeJob.ProcessingEnd != null ? beforeJob.ProcessingEnd : beforeJob.PlannedEnd;
            DateTime? afterEnd = afterJob.ProcessingEnd != null ? afterJob.ProcessingEnd : afterJob.PlannedEnd;
            if (beforeEnd != afterEnd)
                logJobEvent(EventType.EndDateChanged, afterJob, beforeEnd, afterEnd);
            TimeSpan? beforeBuffer = beforeJob.DueDate != null ? (beforeJob.DueDate - beforeEnd) : new TimeSpan(0L);
            TimeSpan? afterBuffer = afterJob.DueDate != null ? (afterJob.DueDate - afterEnd) : new TimeSpan(0L);
            if (beforeBuffer != afterBuffer)
                logJobEvent(EventType.BufferChanged, afterJob, beforeBuffer, afterBuffer);
        }
    }

    // Collect the batches in one list of batches
    CollectBatches();
    // Log all the batches
    LogBatches(batchesList);
}
/*
 * Collects the events in one batch
 */
private void logJobEvent(EventType evtType, Job afterJob, string oldValue, string newValue)
{
    var eventGuid = Guid.NewGuid();
    JPIEventEntity evt = new JPIEventEntity();
    evt.Value = newValue;
    evt.PrevValue = oldValue;
    evt.ObjectGuid = afterJob.Guid.ToString();
    evt.PartitionKey = myCustomer.ID; // customer guid
    evt.RowKey = eventGuid.ToString();
    evt.EventType = evtType;
    evt.CustomerName = myCustomer.Name;
    evt.User = myUser.Email;
    evt.ParentName = null;
    evt.ObjectType = JOB;
    evt.ObjectName = afterJob.Name;
    evt.CreatedAt = DateTime.Now;
    batch.Add(evt);
}
/*
 * Collects the event lists into an enumerable of batches (the max capacity of a single batch insertion is 100).
 */
private void CollectBatches()
{
    //batchesList = new List<List<JPIEventEntity>>();
    if (batch.Count > 0)
    {
        int rest = batch.Count;
        var nextBatch = new List<JPIEventEntity>();
        if (batch.Count > MaxBatchSize) // MaxBatchSize = 100
        {
            foreach (var item in batch)
            {
                nextBatch.Add(item);
                rest = rest - 1; // rest = rest - (MaxBatchSize * hundreds);
                if (rest < MaxBatchSize && nextBatch.Count == (batch.Count % MaxBatchSize))
                {
                    batchesList.Add(nextBatch);
                }
                else if (nextBatch.Count == MaxBatchSize)
                {
                    batchesList.Add(nextBatch);
                    nextBatch = new List<JPIEventEntity>();
                }
            }
        }
        else
        {
            batchesList.Add(batch);
        }
    }
}
private void LogBatches(List<List<JPIEventEntity>> batchesList)
{
    if (batchesList.Count > 0)
    {
        JPIEventHandler.LogBatches(batchesList);
    }
}

/*
 * Insert events into the database
 */
public static void LogBatches(List<List<JPIEventEntity>> batchesList)
{
    foreach (var batch in batchesList)
    {
        var batchOperationObj = new TableBatchOperation();
        // Iterating through each batch's entities
        foreach (var Event in batch)
        {
            batchOperationObj.InsertOrReplace(Event);
        }
        var res = table.ExecuteBatch(batchOperationObj);
    }
}
Inside the 'checkForJPIEventsToSend' method, I check whether there are any changes or updates in the calculations and insert the events (hundreds or even thousands of rows) into the Cosmos DB table as batches.
After putting the method in a separate thread (as shown above), I still get an extra loading duration of 2 to 4 seconds after every operation, which is critical and bad for us.
Am I using multi-threading correctly?
Thank you in advance.
As I understand your situation, you have a front-end application, such as a desktop app or a website, that creates requests. For each request you:
Perform some calculations
Write some data to storage (Cosmos DB)
It is unclear whether you must display some result to the front end after these steps are complete. Your options depend on this question.
Scenario 1: The front end is waiting for the results of the calculations or database changes
The front end requires some result from the calculations or database changes, so the user is forced to wait for this to complete. However, you want to avoid freezing your front end whilst you perform the long-running tasks.
The solution most developers reach for here is to perform the work in a background thread. Your main thread waits for this background thread to complete and return a result, at which point the main/UI thread will update the front end with the result. This is because the main thread is often the only thread allowed to update the front end.
How you offload the work to a background thread depends on the type of workload you have. If the work is mostly I/O, such as file I/O or network I/O (writing to Azure Cosmos DB), then you want to use the async methods of the Cosmos API together with async/await.
See https://stackoverflow.com/a/18033198/6662878
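For example, here is a minimal sketch of what the batch insert from the question might look like with the async table API (assuming the Microsoft.Azure.Cosmos.Table SDK, whose CloudTable exposes ExecuteBatchAsync; the names mirror the question's code):
/*
 * Async variant of the batch logger. ExecuteBatchAsync frees the calling
 * thread while the network I/O is in progress instead of blocking it.
 */
public static async Task LogBatchesAsync(List<List<JPIEventEntity>> batchesList)
{
    foreach (var batch in batchesList)
    {
        var batchOperationObj = new TableBatchOperation();
        foreach (var evt in batch)
        {
            batchOperationObj.InsertOrReplace(evt);
        }
        // Await the network I/O instead of blocking on ExecuteBatch
        await table.ExecuteBatchAsync(batchOperationObj);
    }
}
The caller can then await LogBatchesAsync(batchesList) from an async method rather than spinning up a raw Thread.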
If the work you are doing is mostly CPU based, then threads will only speed up the processing if the problem can be broken into parts and run in parallel across multiple background threads. If the problem cannot be broken down and parallelised then running the work on a single background thread has a small cost associated with thread switching, but in turn this frees up the main/UI thread whilst the CPU based work is in progress in the background.
See https://learn.microsoft.com/en-us/dotnet/standard/asynchronous-programming-patterns/consuming-the-task-based-asynchronous-pattern
You will need to think about how you handle exceptions that occur on background threads, and how the code you run in your background thread will respond to a request to stop processing.
See https://learn.microsoft.com/en-us/dotnet/standard/threading/canceling-threads-cooperatively
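As a rough sketch of cooperative cancellation (CancellationTokenSource is the standard mechanism; the work items here are placeholders):
// Cooperative cancellation: the background work polls the token
// and stops cleanly when cancellation is requested.
var cts = new CancellationTokenSource();
var worker = Task.Factory.StartNew(() =>
{
    foreach (var job in pendingJobs) // pendingJobs is a hypothetical work queue
    {
        // Throws OperationCanceledException if cancellation was requested
        cts.Token.ThrowIfCancellationRequested();
        Process(job); // hypothetical unit of work
    }
}, cts.Token);

// Later, e.g. when the user navigates away or the app shuts down:
cts.Cancel();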
Caveat: if any thread is carrying out very CPU intensive work (such as compressing or encrypting large amounts of data, encoding audio or video etc) this can often cause the front end to freeze, stop responding, drop network requests etc. If you have some processor intensive work to complete you need to think about how the work is spread over CPU cores, or CPUs.
Scenario 2: The front end does not need to display any specific result for the request
In this scenario you have more flexibility about how and when you perform your background work, because you can simply respond to the front-end request with an acknowledgement that the request has been received and will be processed in the (near) future. For example, a Web API may respond with a 202 ACCEPTED HTTP response code to signal this.
You now want to queue the requests and process them somewhere other than your main/UI thread. There are a number of options, background threads being one of them, though not the simplest. You may also consider using a framework like https://www.hangfire.io/.
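One simple in-process shape for such a queue, sketched with BlockingCollection (the LogRequest type and WriteToCosmos call are placeholders):
// Producer/consumer queue: the request path only enqueues, and a single
// long-running consumer drains the queue in the background.
var queue = new BlockingCollection<LogRequest>(); // LogRequest is a hypothetical payload type

// Consumer: started once, drains the queue for the lifetime of the app
Task.Factory.StartNew(() =>
{
    foreach (var request in queue.GetConsumingEnumerable())
    {
        WriteToCosmos(request); // placeholder for the actual database write
    }
}, TaskCreationOptions.LongRunning);

// Producer: called from the request path; Add returns immediately
queue.Add(request);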
Another popular approach is to create a completely separate service or microservice that is responsible for picking up requests from a queue and performing the work.
See https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication
Multithreading should come with a big warning message. Sometimes it is unavoidable, but it is always difficult and troublesome to get right. The C# APIs have evolved over time, so there's a lot to learn and a lot of ground to cover. Converting an application to be multithreaded is often seen as a quick option, but you should be wary of this. Although the more complex architectures discussed in the links above may seem overly burdensome, they invariably force you to think through the issues that come up when you begin to split your application into threads, processes, or services.

Insert multiple records into AWS Keyspaces using C#

Hello, I'm new to Cassandra and not very familiar with it yet. Can you please help me with the error below?
I am trying to insert 16000 records using the following code:
public async Task AddSprintsStories(List<SprintStories> sprintStories)
{
    var tasks = new List<Task>();
    try
    {
        if (sprintStories.Count > 0)
        {
            foreach (var item in sprintStories)
            {
                SprintStories sprintStoryData = new SprintStories();
                sprintStoryData.Id = item.Id;
                sprintStoryData.ProjectId = item.ProjectId;
                sprintStoryData.SprintId = item.SprintId;
                tasks.Add(mapper.InsertAsync<SprintStories>(sprintStoryData, new CqlQueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum)));
            }
            await Task.WhenAll(tasks);
        }
    }
    catch (Exception e)
    {
    }
}
but I'm facing this error:
Server timeout during write query at consistency LOCALQUORUM (0 peer(s) acknowledged the write over 2 required)
Can anyone please help me out here?
How does the Cassandra cluster look while this is running? Is CPU or disk I/O maxed out? Without knowing that, my guess is that those 16000 writes are happening faster than your cluster can process them, creating write back-pressure. Eventually it just can't keep up, so the writes start failing.
For a possible solution, try limiting the number of active threads. Something like this should do it.
int maxActiveThreads = 20;
int activeThreads = 0;

foreach (var item in sprintStories)
{
    ...
    tasks.Add(mapper.InsertAsync<SprintStories>(sprintStoryData, new CqlQueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum)));
    activeThreads++;

    if (activeThreads >= maxActiveThreads)
    {
        await Task.WhenAll(tasks);
        activeThreads = 0;
    }
}
await Task.WhenAll(tasks);
With this code, only 20 writes will be competing for Cassandra cluster resources at any given time. Note that I'm just using 20 as an example; adjust that number to something that meets your requirements for performance and stability.
Ryan Svihla wrote a great blog post on this topic: Cassandra: Batch Loading Without the BATCH - The Nuanced Edition.
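As a variation on the same idea, a SemaphoreSlim can enforce the cap per request rather than in waves, so a new insert starts as soon as any one of the 20 finishes (a sketch, reusing the mapper call from the question):
// Cap concurrent inserts at 20; each new insert waits for a free slot
// instead of waiting for the whole wave to drain.
var throttle = new SemaphoreSlim(20);
var tasks = new List<Task>();

foreach (var item in sprintStories)
{
    await throttle.WaitAsync();
    tasks.Add(Task.Run(async () =>
    {
        try
        {
            await mapper.InsertAsync<SprintStories>(item, new CqlQueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum));
        }
        finally
        {
            throttle.Release(); // free the slot for the next insert
        }
    }));
}
await Task.WhenAll(tasks);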

C# - Parallel.ForEach() for Service Call

I have a console application which performs the 3 steps mentioned below:
Get pending notification records from db
Call service to send email (service returns email address as response)
After getting service response, update the db
For step 2, I am using Parallel.ForEach() and it is working far better than a plain foreach().
I have gone through a lot of articles and threads on Stack Overflow, which has caused more confusion on this topic.
I have a few questions:
I am running this on a server; does it affect performance, and should I limit the number of threads? (The email count can be from 0-500 or 1000.)
I ran into one issue where, in step 2, the service returned an email address as a response, but it was not available while updating the db (the email count here was 400).
I suspect the issue could be because of using Parallel.ForEach, and that the item did not get added to notifList.
If this is the case, can I add Thread.Sleep(1000) after the Parallel.ForEach() loop ends? Does that fix the issue?
In case of any exception, should I explicitly cancel the threads?
Appreciate your time and effort on helping me with this. Thank you!
public void notificationMethod()
{
    List<notify> notifList = new List<notify>();
    //step 1
    List<orders> orderList = GetNotifs();
    try
    {
        if (orderList.Count > 0)
        {
            Parallel.ForEach(orderList, (orderItem) =>
            {
                //step 2
                SendNotifs(orderItem);
                notifList.Add(new notify()
                {
                    //building list by adding email address along with other information
                });
            });
            if (notifList.Count > 0)
            {
                int index = 0;
                int rows = 10;
                int skipRows = index * rows;
                int updatedRows = 0;
                while (skipRows < notifList.Count)
                {
                    //pagination
                    List<notify> subitem = notifList.Skip(index * rows).Take(rows).ToList<notify>();
                    updatedRows += subitem.Count;
                    //step 3
                    UpdateDatabase(subitem);
                    index++;
                    skipRows = index * rows;
                }
            }
        }
    }
    catch (ApplicationException ex)
    {
    }
}
I also had a similar scenario and wondered whether using Parallel.ForEach() would improve performance. But the video below from Microsoft gave me the idea that Parallel.ForEach() is the right choice only for CPU-intensive workloads.
Your scenario falls into the I/O-intensive category and would be handled better by async/await.
https://channel9.msdn.com/Series/Three-Essential-Tips-for-Async/Tip-2-Distinguish-CPU-Bound-work-from-IO-bound-work
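To illustrate, here is a minimal sketch of step 2 rewritten with async/await; SendNotifsAsync is a hypothetical async counterpart of your SendNotifs. Note also that List<T>.Add is not thread-safe, which may explain the entry missing from notifList in the Parallel.ForEach version, so the sketch collects results in a ConcurrentBag:
// Sketch: I/O-bound fan-out with async/await instead of Parallel.ForEach.
// SendNotifsAsync is a hypothetical async version of SendNotifs.
var notifBag = new ConcurrentBag<notify>(); // thread-safe, unlike List<T>

var tasks = orderList.Select(async orderItem =>
{
    var email = await SendNotifsAsync(orderItem);
    notifBag.Add(new notify
    {
        // build the entry from email and the other information
    });
});
await Task.WhenAll(tasks);

List<notify> notifList = notifBag.ToList(); // then paginate and UpdateDatabase as before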

Maximizing usage of Parallel.For or Parallel.Foreach loops

I have a structure of nested Parallel.For and PLINQ statements in my small console app that performs network-bound operations (HTTP requests), like the following:
A list of users is filled from the DB, and then I do the following:
Parallel.For(0, users.Count(), index =>
{
    // here I try to perform HTTP requests for multiple users
});
Then inside this for loop I perform a PLINQ statement to fetch each user's info via HTTP requests.
So in total I now have two nested loops, like the following:
Parallel.For(0, users.Count(), index =>
{
    // Some stuff is done before the PLINQ statement is called...
    newFilteredList.AsParallel().WithDegreeOfParallelism(60).ForAll(qqmethod =>
    {
        var xdocic = new XmlDocument();
        xdocic.LoadXml(SendXMLRequestToEbay(null, null, qqmethod.ItemID, true, TotalDaysSinceLastUpdate.ToString(), null));
        int TotalPages = 0;
        if (xdocic.GetElementsByTagName("TotalNumberOfPages").Item(0) != null)
        {
            TotalPages = Convert.ToInt32(xdocic.GetElementsByTagName("TotalNumberOfPages").Item(0).InnerText);
        }
        if (TotalPages > 1)
        {
            for (int i = 1; i < TotalPages + 1; i++)
            {
                Products.Add(SendXMLRequestToEbay(null, null, qqmethod.ItemID, false, TotalDaysSinceLastUpdate.ToString(), i.ToString()));
            }
        }
        else
        {
            Products.Add(SendXMLRequestToEbay(null, null, qqmethod.ItemID, false, TotalDaysSinceLastUpdate.ToString(), "1"));
        }
    });
});
I tried using the outer loop as a regular, sequential for loop, and I noticed that it performed much faster and better that way than like this.
What worries me most here is that when I checked CPU utilization while running the console app, it was always around 0.5-3% of total CPU power...
So the way I'm trying to perform HTTP requests is like this:
15 users at a time * amount of HTTP requests for those 15 users.
What am I doing wrong here?

Define Next Start Point When Number of Items Unknown

I have a web service I need to query, and it takes a value that supports pagination for its data. Due to the amount of data I need to fetch and how that service is implemented, I intended to do a series of concurrent HTTP web requests to accumulate this data.
Say I have a number of threads and a page size; how could I assign each thread to pick a starting point that doesn't overlap with the other threads? It's been a long time since I took parallel programming, and I'm floundering a bit. I know I could find my start point with something like start = N/numThreads * threadNum, however I don't know N. Right now I just spin up X threads and each loops until it gets no more data. The problem is they tend to overlap and I end up with duplicate data. I need unique data and not to waste requests.
Right now I have code that looks something like this. This is one of many attempts, and I see why this is wrong, but it's better to show something. The goal is to collect pages of data from a web service in parallel:
int limit = pageSize;
data = new List<RequestStuff>();
List<Task> tasks = new List<Task>();
for (int i = 0; i < numThreads; i++)
{
    tasks.Add(Task.Factory.StartNew(() =>
    {
        try
        {
            List<RequestStuff> someData;
            do
            {
                int start;
                lock (myLock)
                {
                    start = data.Count;
                }
                someData = GetDataFromService(start, limit);
                lock (myLock)
                {
                    if (someData != null && someData.Count > 0)
                    {
                        data.AddRange(someData);
                    }
                }
            } while (hasData); // hasData is never assigned in this attempt
        }
        catch (AggregateException ex)
        {
            //Exception things
        }
    }));
}
Task.WaitAll(tasks.ToArray());
Any inspiration to solve this without race conditions? I need to stick to .NET 4 if that matters.
I'm not sure there's a way to do this without wasting some requests unless you know the actual limit. The code below might help eliminate the duplicate data, as you will only query each index once:
private int _index = -1; // -1 so the first request starts at 0
private bool _shouldContinue = true;

public IEnumerable<RequestStuff> GetAllData()
{
    var tasks = new List<Task<RequestStuff>>();
    while (_shouldContinue)
    {
        // StartNew actually runs the task; a Task created with "new" never
        // starts on its own, and WaitAll would block forever.
        tasks.Add(Task.Factory.StartNew(() => GetDataFromService(GetNextIndex())));
    }
    Task.WaitAll(tasks.ToArray());
    return tasks.Select(t => t.Result).ToList();
}

private RequestStuff GetDataFromService(int id)
{
    // Get the data.
    // If there's no data returned, set _shouldContinue to false.
    // Return the RequestStuff.
}

private int GetNextIndex()
{
    return Interlocked.Increment(ref _index);
}
It could also be improved by adding cancellation tokens to cancel any indexes you know to be wasteful, i.e., if index 4 returns nothing you can cancel all queries on indexes above 4 that are still active.
Or if you could make a reasonable guess at the max index you might be able to implement an algorithm to pinpoint the exact limit before retrieving any data. This would probably only be more efficient if your guess was fairly accurate though.
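A sketch of that guessing approach, assuming the service can cheaply report whether a page index has data (HasData here is a hypothetical probe, e.g. a request with a page size of 1): probe upward exponentially to bracket the limit, then binary-search inside the bracket.
// Find the last index that has data without fetching full pages.
// HasData(i) is a hypothetical cheap probe against the service.
private int FindLastIndex()
{
    int low = 0;
    int high = 1;
    // Exponential probing: double until we overshoot the end of the data
    while (HasData(high))
    {
        low = high;
        high *= 2;
    }
    // Binary search between the last known-good and first known-empty index
    while (low + 1 < high)
    {
        int mid = low + (high - low) / 2;
        if (HasData(mid))
            low = mid;
        else
            high = mid;
    }
    return low; // last index that returned data
}
This costs O(log N) probe requests before the real fetches begin, after which each thread can be assigned a disjoint, known range.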
Are you attempting to force parallelism on the part of the remote service by issuing multiple concurrent requests? Paging is generally used to limit the amount of data returned to only what is needed, but if you need all of the data, then attempting to first page through it and then reconstruct it later seems like a poor design. Your code becomes needlessly complex and difficult to maintain, you'll likely just move the bottleneck from code you control to somewhere else, and now you've introduced data integrity issues (what happens if these threads access different versions of the data you are trying to query?). By increasing the complexity and the number of calls, you are also increasing the likelihood of problems occurring (e.g. one of the connections gets dropped).
Can you state the problem you are attempting to solve so perhaps instead we can help architect a better solution?
