I have a web service I need to query; it takes a paging value for its data. Due to the amount of data I need to fetch and how that service is implemented, I intend to make a series of concurrent HTTP requests to accumulate this data.
Say I have the number of threads and the page size: how could I assign each thread a starting point that doesn't overlap with the other threads? It's been a long time since I took parallel programming and I'm floundering a bit. I know I could find my start point with something like start = N/numThreads * threadNum, but I don't know N. Right now I just spin up X threads and each loops until it gets no more data. The problem is they tend to overlap and I end up with duplicate data. I need unique data and not to waste requests.
Right now I have code that looks something like this. This is one of many attempts, and I see why it's wrong, but it's better to show something. The goal is to collect pages of data from a web service in parallel:
int limit = pageSize;
var data = new List&lt;RequestStuff&gt;();
var tasks = new List&lt;Task&gt;();
for (int i = 0; i &lt; numThreads; i++)
{
    tasks.Add(Task.Factory.StartNew(() =>
    {
        try
        {
            bool hasData = true;
            do
            {
                int start;
                lock (myLock)
                {
                    start = data.Count; // two threads can read the same count here, hence the duplicates
                }
                List&lt;RequestStuff&gt; someData = GetDataFromService(start, limit);
                lock (myLock)
                {
                    if (someData != null && someData.Count > 0)
                    {
                        data.AddRange(someData);
                    }
                    else
                    {
                        hasData = false;
                    }
                }
            } while (hasData);
        }
        catch (AggregateException ex)
        {
            // Exception things
        }
    }));
}
Task.WaitAll(tasks.ToArray());
Any inspiration to solve this without race conditions? I need to stick to .NET 4 if that matters.
I'm not sure there's a way to do this without wasting some requests unless you know the actual limit. The code below should eliminate the duplicate data, as each index is only queried once:
private int _index = -1; // -1 so the first request starts at 0
private volatile bool _shouldContinue = true; // volatile so every thread sees the update
public IEnumerable&lt;RequestStuff&gt; GetAllData()
{
    var tasks = new List&lt;Task&lt;RequestStuff&gt;&gt;();
    while (_shouldContinue)
    {
        // Use StartNew so the task actually runs; a task created with
        // "new Task(...)" is never started, and Task.WaitAll would block forever.
        tasks.Add(Task.Factory.StartNew(() => GetDataFromService(GetNextIndex())));
    }
Task.WaitAll(tasks.ToArray());
return tasks.Select(t => t.Result).ToList();
}
private RequestStuff GetDataFromService(int id)
{
// Get the data
// If there's no data returned set _shouldContinue to false
// return the RequestStuff;
}
private int GetNextIndex()
{
return Interlocked.Increment(ref _index);
}
It could also be improved by adding cancellation tokens to cancel any indexes you know to be wasteful, i.e., if index 4 returns nothing you can cancel all queries on indexes above 4 that are still active.
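A rough sketch of that idea, not wired into the code above (CallService stands in for the actual HTTP request):

private readonly CancellationTokenSource _cts = new CancellationTokenSource();

private RequestStuff GetDataFromService(int id)
{
    // Bail out if another index already found the end of the data.
    _cts.Token.ThrowIfCancellationRequested();

    var result = CallService(id); // placeholder for the real request
    if (result == null)
    {
        _cts.Cancel(); // empty page: cancel anything still waiting to run
    }
    return result;
}

You'd also pass _cts.Token as the second argument to Task.Factory.StartNew so queued-but-unstarted tasks never run, and catch the resulting cancellation exceptions around Task.WaitAll.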
Or, if you could make a reasonable guess at the max index, you might be able to implement an algorithm to pinpoint the exact limit before retrieving any data. This would probably only be more efficient if your guess was fairly accurate, though.
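For instance, something like this finds the last populated index in O(log n) probes. PageHasData is a hypothetical cheap "is there anything at index i?" call, and it assumes index 0 always has data:

private int FindLastIndex()
{
    // Grow exponentially until we fall off the end of the data.
    int hi = 1;
    while (PageHasData(hi))
    {
        hi *= 2;
    }

    // Binary search between the last known-good index and the first empty one.
    int lo = hi / 2;
    while (lo &lt; hi - 1)
    {
        int mid = lo + (hi - lo) / 2;
        if (PageHasData(mid))
            lo = mid;
        else
            hi = mid;
    }
    return lo; // last index that returned data
}

Whether the probes are cheaper than just fetching pages depends entirely on the service, so this is only worth it if empty requests are cheap.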
Are you attempting to force parallelism on the part of the remote service by issuing multiple concurrent requests? Paging is generally used to limit the amount of data returned to only what is needed, but if you need all of the data, then paging it first and reconstructing it later seems like a poor design. Your code becomes needlessly complex and difficult to maintain, you'll likely just move the bottleneck from code you control to somewhere else, and now you've introduced data integrity issues (what happens if these threads access different versions of the data you are trying to query?). By increasing the complexity and the number of calls, you are also increasing the likelihood of problems occurring (e.g. one of the connections gets dropped).
Can you state the problem you are actually attempting to solve, so that perhaps we can help architect a better solution instead?
Related
So I'm running a Parallel.ForEach that basically generates a bunch of data which is ultimately going to be saved to a database. However, since the collection of data can get quite large, I need to be able to occasionally save/clear the collection so as to not run into an OutOfMemoryException.
I'm new to using Parallel.ForEach, concurrent collections, and locks, so I'm a little fuzzy on what exactly needs to be done to make sure everything works correctly (i.e. we don't get any records added to the collection between the Save and Clear operations).
Currently, if the record count is above a certain threshold, I save the data from the current collection within a lock block.
ConcurrentStack<OutRecord> OutRecs = new ConcurrentStack<OutRecord>();
object StackLock = new object();
Parallel.ForEach(inputrecords, input =>
{
lock(StackLock)
{
if (OutRecs.Count >= 50000)
{
Save(OutRecs);
OutRecs.Clear();
}
}
OutRecs.Push(CreateOutputRecord(input));
});
if (OutRecs.Count > 0) Save(OutRecs);
I'm not 100% certain whether or not this works the way I think it does. Does the lock stop other instances of the loop from writing to output collection? If not is there a better way to do this?
Your lock will work correctly, but it will not be very efficient because all your worker threads will be forced to pause for the entire duration of each save operation. Also, locks tend to be (relatively) expensive, so taking a lock in each iteration of each thread is a bit wasteful.
One of your comments mentioned giving each worker thread its own data storage: yes, you can do this. Here's an example that you could tailor to your needs:
Parallel.ForEach(
// collection of objects to iterate over
inputrecords,
// delegate to initialize thread-local data
() => new List<OutRecord>(),
// body of loop
(inputrecord, loopstate, localstorage) =>
{
localstorage.Add(CreateOutputRecord(inputrecord));
if (localstorage.Count > 1000)
{
// Save() must be thread-safe, or you'll need to wrap it in a lock
Save(localstorage);
localstorage.Clear();
}
return localstorage;
},
// finally block gets executed after each thread exits
localstorage =>
{
if (localstorage.Count > 0)
{
// Save() must be thread-safe, or you'll need to wrap it in a lock
Save(localstorage);
localstorage.Clear();
}
});
One approach is to define an abstraction that represents the destination for your data. It could be something like this:
public interface IRecordWriter<T> // perhaps come up with a better name.
{
void WriteRecord(T record);
void Flush();
}
Your class that processes the records in parallel doesn't need to worry about how those records are handled or what happens when there's too many of them. The implementation of IRecordWriter handles all those details, making your other class easier to test.
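For illustration, the consuming side might look something like this (RecordProcessor and InRecord are made-up names, not from the original question):

public class RecordProcessor
{
    private readonly IRecordWriter&lt;OutRecord&gt; _writer;

    public RecordProcessor(IRecordWriter&lt;OutRecord&gt; writer)
    {
        _writer = writer;
    }

    public void Process(IEnumerable&lt;InRecord&gt; inputRecords)
    {
        // The processor never deals with buffering or saving; it just writes.
        Parallel.ForEach(inputRecords, input =>
            _writer.WriteRecord(CreateOutputRecord(input)));

        _writer.Flush(); // push out whatever is left in the buffer
    }

    private OutRecord CreateOutputRecord(InRecord input)
    {
        // ... whatever transformation the loop body was doing
        return new OutRecord();
    }
}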
An implementation of IRecordWriter could look something like this:
public abstract class BufferedRecordWriter<T> : IRecordWriter<T>
{
private readonly ConcurrentQueue<T> _buffer = new ConcurrentQueue<T>();
private readonly int _maxCapacity;
private bool _flushing;
protected BufferedRecordWriter(int maxCapacity = 100)
{
_maxCapacity = maxCapacity;
}
public void WriteRecord(T record)
{
_buffer.Enqueue(record);
if (_buffer.Count >= _maxCapacity && !_flushing)
Flush();
}
public void Flush()
{
_flushing = true;
try
{
var recordsToWrite = new List<T>();
while (_buffer.TryDequeue(out T dequeued))
{
recordsToWrite.Add(dequeued);
}
if(recordsToWrite.Any())
WriteRecords(recordsToWrite);
}
finally
{
_flushing = false;
}
}
protected abstract void WriteRecords(IEnumerable<T> records);
}
When the buffer reaches the maximum size, all the records in it are sent to WriteRecords. Because _buffer is a ConcurrentQueue, other threads can keep adding records even while Flush is draining it.
That Flush method could be anything specific to how you write your records. Instead of this being an abstract class the actual output to a database or file could be yet another dependency that gets injected into this one. You can make decisions like that, refactor, and change your mind because the very first class isn't affected by those changes. All it knows about is the IRecordWriter interface which doesn't change.
You might notice that I haven't made absolutely certain that Flush won't execute concurrently on different threads. I could put more locking around this, but it really doesn't matter: the _flushing flag avoids most concurrent executions, and it's okay if two concurrent executions both read from the ConcurrentQueue.
This is just a rough outline, but it shows how all of the steps become simpler and easier to test if we separate them. One class converts inputs to outputs. Another class buffers the outputs and writes them. That second class can even be split into two - one as a buffer, and another as the "final" writer that sends them to a database or file or some other destination.
I'm having some performance issues when starting my Windows service. The first round my lstSps is long (about 130 stored procedures). Is there any way to speed this up (other than speeding up the stored procedures themselves)?
When the foreach is over and moves on to the second round it goes faster, because not as many return true from TimeToRun(). But my concern is the first time, when there are a lot more stored procedures to run.
I have thought about using an array and a for loop, since I read that it's faster, but I believe the real problem is that the procedures take too long. Could I build this in a better way? Maybe use multiple threads (one for each execute) or something like that?
Would really appreciate some tips :)
EDIT: Just to clarify, it's the HasResult() method that executes the SPs and makes this look so slow.
lock (lstSpsToSend)
{
lock (lstSps)
{
foreach (var sp in lstSps.Where(sp => sp.TimeToRun()).Where(sp => sp.HasResult()))
{
lstSpsToSend.Add(sp);
}
}
}
while (lstSpsToSend.Count > 0)
{
//Take the first watchdog in list and then remove it
Sp sp;
lock (lstSpsToSend)
{
sp = lstSpsToSend[0];
lstSpsToSend.RemoveAt(0);
}
try
{
//Send the results
}
catch (Exception e)
{
Thread.Sleep(30000);
}
}
What I would do is something like this:
int openThreads = 0;
ConcurrentQueue&lt;Sp&gt; queue = new ConcurrentQueue&lt;Sp&gt;();
foreach (var sp in lstSps)
{
    var current = sp; // copy for the closure (in C# 4 the foreach variable is shared across iterations)
    // increment before starting the thread so the wait loop below can't pass early
    Interlocked.Increment(ref openThreads);
    Thread worker = new Thread(() =>
    {
        if (current.TimeToRun() && current.HasResult())
        {
            queue.Enqueue(current);
        }
        Interlocked.Decrement(ref openThreads);
    }) { Priority = ThreadPriority.AboveNormal, IsBackground = false };
    worker.Start();
}
// Wait for all threads to be finished
while (Thread.VolatileRead(ref openThreads) > 0)
{
    Thread.Sleep(500);
}
// And here move the items from queue to lstSpsToSend
while (lstSpsToSend.Count > 0)
{
//Take the first watchdog in list and then remove it
Sp sp;
lock (lstSpsToSend)
{
sp = lstSpsToSend[0];
lstSpsToSend.RemoveAt(0);
}
try
{
//Send the results
}
catch (Exception e)
{
Thread.Sleep(30000);
}
}
The best approach would rely heavily on what these stored procedures are actually doing. If they return the same kind of result set, or no result for that matter, it would definitely be beneficial to send them to SQL Server all at once instead of one at a time.
The reason for this is network latency. If your SQL Server sits in a data center somewhere that you are accessing over a WAN, your latency could be anywhere from 200 ms up. So if you are calling 130 stored procedures sequentially, the "cost" would be 200 ms × 130. That's 26 seconds just running back and forth over a network connection, not actually executing the logic in your procs.
If you can combine all the procedures into a single call, you pay the 200ms cost only once.
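For what it's worth, a rough sketch of a single-round-trip batch using plain ADO.NET. It assumes parameterless procedures and that you read each result set in turn; dbo.Proc1 and so on are placeholder names:

using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    // One command, one network round trip, several procedures.
    cmd.CommandText = "EXEC dbo.Proc1; EXEC dbo.Proc2; EXEC dbo.Proc3;";
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        do
        {
            while (reader.Read())
            {
                // consume the current procedure's rows
            }
        } while (reader.NextResult()); // move to the next procedure's result set
    }
}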
Executing them on multiple concurrent threads is also a reasonable approach, but as before it would depend on what your procedures are doing and returning back to you.
Using an array over a list would not really give you any performance increases.
Hope this helps, good luck!
After doing some research, I'm asking for feedback on how to effectively remove two items from a concurrent collection. My situation involves incoming messages over UDP which are currently being placed into a BlockingCollection. Once there are two Users in the collection, I need to safely Take two Users and process them. I've seen several different techniques, including some ideas listed below. My current implementation is below, but I'm thinking there's a cleaner way to do this while ensuring that Users are processed in groups of two. That's the only restriction in this scenario.
Current Implementation:
private int userQueueCount = 0;
public BlockingCollection<User> UserQueue = new BlockingCollection<User>();
public void JoinQueue(User u)
{
UserQueue.Add(u);
Interlocked.Increment(ref userQueueCount);
if (userQueueCount > 1)
{
IEnumerable<User> users = UserQueue.Take(2);
        if (users.Count() == 2)
        {
            Interlocked.Decrement(ref userQueueCount);
            Interlocked.Decrement(ref userQueueCount);
            // ... do some work with users, but if only one
            // is removed I'll run into problems
        }
}
}
What I would like to do is something like this, but I cannot currently test it in a production situation to ensure its integrity.
Parallel.ForEach(UserQueue.Take(2), (u) => { ... });
Or better yet:
public void JoinQueue(User u)
{
UserQueue.Add(u);
    Interlocked.Increment(ref userQueueCount); // if needed?
UserQueue.CompleteAdding();
}
Then implement this somewhere:
Task.Factory.StartNew(() =>
{
    while (userQueueCount > 1) // or (UserQueue.Count > 1), if that's safe?
{
IEnumerable<User> users = UserQueue.Take(2);
... do stuff
}
});
The problem with this is that I'm not sure I can guarantee that, between the condition (Count > 1) and the Take(2), the UserQueue still has at least two items to process. Incoming UDP messages are processed in parallel, so I need a way to safely pull items off of the blocking/concurrent collection in pairs of two.
Is there a better/safer way to do this?
Revised Comments:
The intended goal of this question is really just to achieve a stable, thread-safe method of processing items off of a concurrent collection in .NET 4.0. It doesn't have to be pretty; it just has to be stable in the task of processing items in unordered pairs in a parallel environment.
Here is what I'd do, in rough code:
private ConcurrentQueue&lt;User&gt; gameQueue = new ConcurrentQueue&lt;User&gt;(); // could use a BlockingCollection too (it's just a blocking ConcurrentQueue by default anyway)
public void OnUserStartedGame(User joiningUser)
{
User waitingUser;
if (this.gameQueue.TryDequeue(out waitingUser)) //if there's someone waiting, we'll get him
this.MatchUsers(waitingUser, joiningUser);
else
this.QueueUser(joiningUser); //it doesn't matter if there's already someone in the queue by now because, well, we are using a queue and it will sort itself out.
}
private void QueueUser(User user)
{
this.gameQueue.Enqueue(user);
}
private void MatchUsers(User first, User second)
{
//not sure what you do here
}
The basic idea being that if someone wants to start a game and there's someone in your queue, you match them and start a game; if there's no one, add them to the queue.
At best you'll only have one user in the queue at a time, but if not, well, that's not too bad either, because as other users start games the waiting ones will gradually be removed and no new ones will be added until the queue is empty again.
If I could not put pairs of users into the collection for some reason, I would use a ConcurrentQueue and TryDequeue two items at a time; if I can get only one, put it back. Wait as necessary.
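A rough sketch of that idea (note that re-enqueuing puts the lone user at the back of the queue, so strict arrival order is lost):

User first, second;
if (queue.TryDequeue(out first))
{
    if (queue.TryDequeue(out second))
    {
        Process(first, second);
    }
    else
    {
        queue.Enqueue(first); // no partner yet; put the user back and try later
    }
}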
I think the easiest solution here is to use locking: you will have one lock for all consumers (producers won't use any locks), which will make sure you always take the users in the correct order:
User firstUser;
User secondUser;
lock (consumerLock)
{
firstUser = userQueue.Take();
secondUser = userQueue.Take();
}
Process(firstUser, secondUser);
Another option, would be to have two queues: one for single users and one for pairs of users and have a process that transfers them from the first queue to the second one.
If you don't mind wasting another thread, you can do this with two BlockingCollections:
while (true)
{
var firstUser = incomingUsers.Take();
var secondUser = incomingUsers.Take();
userPairs.Add(Tuple.Create(firstUser, secondUser));
}
You don't have to worry about locking here, because the queue for single users will have only one consumer, and the consumers of pairs can now use simple Take() safely.
If you do care about wasting a thread and can use TPL Dataflow, you can use BatchBlock<T>, which combines incoming items into batches of n items, where n is configured at the time of creation of the block, so you can set it to 2.
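A minimal sketch of the BatchBlock approach (this needs the System.Threading.Tasks.Dataflow package; Process, JoinQueue, and StartMatching are placeholder names):

private readonly BatchBlock&lt;User&gt; pairs = new BatchBlock&lt;User&gt;(2); // emits User[] arrays of exactly two

// Producer side: call this from the UDP handler.
public void JoinQueue(User u)
{
    pairs.Post(u);
}

// Consumer side: Receive() blocks until a full batch of two arrives.
public void StartMatching()
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            User[] pair = pairs.Receive();
            Process(pair[0], pair[1]); // placeholder for the real matching logic
        }
    });
}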
Maybe this can help:
public static IList<T> TakeMulti<T>(this BlockingCollection<T> me, int count = 100) where T : class
{
T last = null;
if (me.Count == 0)
{
last = me.Take(); // blocking when queue is empty
}
var result = new List<T>(count);
if (last != null)
{
result.Add(last);
}
//if you want to take more item on this time.
//if (me.Count < count / 2)
//{
// Thread.Sleep(1000);
//}
while (me.Count > 0 && result.Count < count) // cap the batch at count items
{
result.Add(me.Take());
}
return result;
}
I wanted to parallelize a piece of code, but the code actually got slower, probably because of the overhead of Barrier and BlockingCollection. There would be two threads, where the first would find pieces of work which the second one would operate on. Neither operation is much work, so the overhead of synchronizing safely would quickly outweigh the benefit of two threads.
So I thought I would try to write some code myself to be as lean as possible, without using Barrier etc. It does not behave consistently, however. Sometimes it works, sometimes it does not, and I can't figure out why.
This code is just the mechanism I use to try to synchronize the two threads. It doesn't do anything useful, just the minimum amount of code you need to reproduce the bug.
So here's the code:
// node in a linked list of work items
class WorkItem {
public int Value;
public WorkItem Next;
}
static void Test() {
WorkItem fst = null; // first element
Action create = () => {
WorkItem cur=null;
for (int i = 0; i < 1000; i++) {
WorkItem tmp = new WorkItem { Value = i }; // create a new work item
if (fst == null) fst = tmp; // if it's the first add it there
else cur.Next = tmp; // else add to back of list
cur = tmp; // this is the current one
}
cur.Next = new WorkItem { Value = -1 }; // -1 means stop element
#if VERBOSE
Console.WriteLine("Create is done");
#endif
};
Action consume = () => {
//Thread.Sleep(1); // this also seems to cure it
#if VERBOSE
Console.WriteLine("Consume starts"); // especially this one seems to matter
#endif
WorkItem cur = null;
int tot = 0;
while (fst == null) { } // busy wait for first one
cur = fst;
#if VERBOSE
Console.WriteLine("Consume found first");
#endif
while (true) {
if (cur.Value == -1) break; // if stop element break;
tot += cur.Value;
while (cur.Next == null) { } // busy wait for next to be set
cur = cur.Next; // move to next
}
Console.WriteLine(tot);
};
try { Parallel.Invoke(create, consume); }
catch (AggregateException e) {
Console.WriteLine(e.Message);
foreach (var ie in e.InnerExceptions) Console.WriteLine(ie.Message);
}
Console.WriteLine("Consume done..");
Console.ReadKey();
}
The idea is to have a linked list of work items. One thread adds items to the back of that list, and another thread reads them, does something, and polls the Next field to see if it is set. As soon as it is set it moves to the new item and processes it. It polls the Next field in a tight busy loop because it should be set very quickly. Going to sleep, context switching etc. would kill the benefit of parallelizing the code.
The time it takes to create a workitem would be quite comparable to executing it, so the cycles wasted should be quite small.
When I run the code in release mode, sometimes it works, sometimes it does nothing. The problem seems to be in the 'Consumer' thread, the 'Create' thread always seems to finish. (You can check by fiddling with the Console.WriteLines).
It has always worked in debug mode. In release it's about 50% hit and miss. Adding a few Console.WriteLines helps the success ratio, but even then it's not 100% (the #define VERBOSE stuff).
When I add the Thread.Sleep(1) in the 'Consumer' thread it also seems to fix it. But not being able to reproduce a bug is not the same thing as knowing for sure it's fixed.
Does anyone here have a clue as to what goes wrong here? Is it some optimization that creates a local copy or something that does not get updated? Something like that?
There's no such thing as a partial update, right? Like a data race, but where one thread is half done writing and the other thread reads the partially written memory? Just checking..
Looking at it I think it should just work. I guess once every few times the threads arrive in a different order and that makes it fail, but I don't get how. And how could I fix this without slowing it down?
Thanks in advance for any tips,
Gert-Jan
I do my damn best to avoid the utter minefield of closure/stack interaction at all costs.
This is PROBABLY a (language-level) race condition, but without reflecting Parallel.Invoke I can't be sure. Basically, sometimes fst is being changed by create() and sometimes not. Ideally, it should NEVER be changed (if C# had good closure behaviour). It could be due to which thread Parallel.Invoke chooses to run create() and consume() on. If create() runs on the main thread, it might change fst before consume() takes a copy of it. Or create() might be running on a separate thread and taking a copy of fst. Basically, as much as I love C#, it is an utter pain in this regard, so just work around it and treat all variables involved in a closure as immutable.
To get it working:
//Replace
WorkItem fst = null
//with
WorkItem fst = WorkItem.GetSpecialBlankFirstItem();
//And
if (fst == null) fst = tmp;
//with
if (fst.Next == null) fst.Next = tmp;
A thread is allowed by the spec to cache a value indefinitely.
see Can a C# thread really cache a value and ignore changes to that value on other threads? and also http://www.yoda.arachsys.com/csharp/threads/volatility.shtml
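Applying that to the question's code, one minimal fix is to make the fields the two threads communicate through volatile, so the busy-wait reads can't be cached (sketch only; fst would also have to move from a captured local to a field, since locals can't be volatile):

class WorkItem {
    public int Value;
    public volatile WorkItem Next; // the consumer spins on Next == null
}

// fst becomes a field so it can be volatile too:
static volatile WorkItem fst;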
I'm working on an ASP.NET MVC application that uses the Google Maps Geocoding API. In a single batch there may be up to 1000 queries to submit to the Geocoding API, so I'm trying to use a parallel processing approach to improve performance. The method responsible for starting a process for each core is:
public void GeoCode(Queue<Job> qJobs, bool bolKeepTrying, bool bolSpellCheck, Action<Job, bool, bool> aWorker)
{
// Get the number of processors, initialize the number of remaining
// threads, and set the starting point for the iteration.
int intCoreCount = Environment.ProcessorCount;
int intRemainingWorkItems = intCoreCount;
using(ManualResetEvent mreController = new ManualResetEvent(false))
{
// Create each of the work items.
for(int i = 0; i < intCoreCount; i++)
{
ThreadPool.QueueUserWorkItem(delegate
{
            Job jCurrent = null;
            while(qJobs.Count > 0)
            {
                lock(qJobs)
                {
                    if(qJobs.Count > 0)
                    {
                        jCurrent = qJobs.Dequeue();
                    }
                    else
                    {
                        // another worker drained the queue between the check and the lock
                        jCurrent = null;
                    }
                }
                if(jCurrent != null)
                {
                    aWorker(jCurrent, bolKeepTrying, bolSpellCheck);
                }
            }
if(Interlocked.Decrement(ref intRemainingWorkItems) == 0)
{
mreController.Set();
}
});
}
// Wait for all threads to complete.
mreController.WaitOne();
}
}
This is based on patterns document I found on Microsoft's parallel computing web site.
The problem is that the Google API has a limit of 10 QPS (enterprise customer), which I'm hitting, and then I get HTTP 403 errors. Is there a way I can benefit from parallel processing but limit the requests I'm making? I've tried using Thread.Sleep, but it doesn't solve the problem. Any help or suggestions would be very much appreciated.
It sounds like you're missing some sort of max-in-flight parameter. Rather than just looping while there are jobs in the queue, you need to throttle your submissions based on jobs finishing.
Seems like your algorithm should be something like the following:
submit N jobs (where N is your max in flight)
wait for a job to complete, and if the queue is not empty, submit the next job
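A sketch of that max-in-flight idea using SemaphoreSlim (available in .NET 4); maxInFlight, jobs, and SubmitJob are assumptions, not names from the original code:

int maxInFlight = 5; // tune so you stay under the 10 QPS limit
var throttle = new SemaphoreSlim(maxInFlight, maxInFlight);
var pending = new List&lt;Task&gt;();

foreach (var job in jobs)
{
    throttle.Wait(); // blocks while maxInFlight requests are outstanding
    var current = job; // copy for the closure
    pending.Add(Task.Factory.StartNew(() =>
    {
        try
        {
            SubmitJob(current); // the actual geocoding request
        }
        finally
        {
            throttle.Release(); // free a slot for the next job
        }
    }));
}
Task.WaitAll(pending.ToArray());

Note that this caps concurrency, not requests per second; if requests complete in well under a second each, you could still exceed 10 QPS, so you may also need to pace submissions with a timestamp check.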