How to request an item from an IObservable? - C#

The original post contained a problem that I managed to solve, but my solution introduced a lot of issues with shared mutable state. Now I'm wondering whether it can be done in a purely functional way.
Requests can be processed in various orders.
For each order i there is an effectiveness E(i).
Processing requests should satisfy three conditions:
There should be no delay between acquiring the first request and processing it.
There should be no delay between processing one request and processing the next.
When there are several possible orders of processing requests, the one with the highest effectiveness should be chosen.
Concrete example:
For an infinite list of integers, print them so that prime numbers generally come earlier than non-prime numbers.
The effectiveness of an ordering is inversely related to the number of times a prime was waiting in the queue while a non-prime was printed.
My first solution in C# (not for primes, obviously) used some classes sharing mutable state in the form of a concurrent priority queue. It was ugly, because I had to manually subscribe classes to events and unsubscribe them, check that the queue was not exhausted by one intermediate consumer before another consumer processed it, etc.
To refactor it, I chose the Reactive Extensions library, which seemed to address the issues with state. Then I understood that I couldn't use it in the following case:
The source function accepts nothing and returns IObservable<Request>
The process function accepts IObservable<Request> and returns nothing
I have to write a reorder function, which reorders requests on their way from source to process.
Internally, reorder has a ConcurrentPriorityQueue of orders. It should handle two scenarios:
While process is busy processing, reorder finds better orderings and updates the queue.
When process requests a new item, reorder returns the first element of the queue.
The problem was that if reorder returned IObservable<Request>, it was unaware whether items had been requested from it or not.
If reorder called OnNext immediately upon receiving an item, it did not reorder anything and violated condition 3.
If it waited to ensure that it had found the best ordering, it violated conditions 1 and 2, because process could become idle.
If reorder returned ISubject<Request>, it exposed the option to call OnError and OnCompleted to the consumer.
If reorder had returned the queue itself, I would have been back where I started.
The problem was that the cold observable built with Observable.Create was not lazy enough: it started exhausting the queue with all requests as soon as a subscription to it was made, but only the results of the first few were used.
The solution I came up with is to return an observable of requests, i.e. IObservable<Func<Task<int>>> instead of IObservable<int>.
It works when there is only one subscriber, but if more requests are consumed than there are numbers generated by source, they will be awaited forever.
This issue can probably be solved by introducing caching, but then a consumer which drains the queue quickly will have side effects on all other consumers, because it will freeze the queue in a less effective ordering than it would have reached after some waiting.
So I will post a solution to the original question, but it's not really a valuable answer, because it introduces a lot of problems.
This demonstrates why functional reactive programming and side effects don't mix well. On the other hand, it seems I now have an example of a practical problem impossible to solve in a purely functional way. Or do I? If the Order function accepted an optimizationLevel as a parameter, it would be pure. Can we somehow implicitly convert time to an optimizationLevel to make this pure as well?
I'd very much like to see such a solution, in C# or any other language.
Problematic solution. Uses ConcurrentPriorityQueue from this repo.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Reactive.Linq;
using DataStructures;
using System.Threading;

namespace LazyObservable
{
    class Program
    {
        /// <summary>
        /// Compares tuples by second element, then by first in reverse
        /// </summary>
        class PriorityComparer<TElement, TPriority> : IComparer<Tuple<TElement, TPriority>>
            where TPriority : IComparable<TPriority>
        {
            Func<TElement, TElement, int> fallbackComparer;

            public PriorityComparer(IComparer<TElement> comparer = null)
            {
                if (comparer != null)
                {
                    fallbackComparer = comparer.Compare;
                }
                else if (typeof(IComparable<TElement>).IsAssignableFrom(typeof(TElement))
                    || typeof(IComparable).IsAssignableFrom(typeof(TElement)))
                {
                    fallbackComparer = (a, b) => -Comparer<TElement>.Default.Compare(a, b);
                }
                else
                {
                    fallbackComparer = (_1, _2) => 0;
                }
            }

            public int Compare(Tuple<TElement, TPriority> x, Tuple<TElement, TPriority> y)
            {
                if (x == null && y == null)
                {
                    return 0;
                }
                if (x == null || y == null)
                {
                    return x == null ? -1 : 1;
                }
                int res = x.Item2.CompareTo(y.Item2);
                if (res == 0)
                {
                    res = fallbackComparer(x.Item1, y.Item1);
                }
                return res;
            }
        };

        const int N = 100;

        static IObservable<int> Source()
        {
            return Observable.Interval(TimeSpan.FromMilliseconds(1))
                .Select(x => (int)x)
                .Where(x => x <= N);
        }

        static bool IsPrime(int x)
        {
            if (x <= 1)
            {
                return false;
            }
            if (x == 2)
            {
                return true;
            }
            int limit = ((int)Math.Sqrt(x)) + 1;
            for (int i = 2; i < limit; ++i)
            {
                if (x % i == 0)
                {
                    return false;
                }
            }
            return true;
        }

        static IObservable<Func<Task<int>>> Order(IObservable<int> numbers)
        {
            ConcurrentPriorityQueue<Tuple<int, int>> queue =
                new ConcurrentPriorityQueue<Tuple<int, int>>(new PriorityComparer<int, int>());
            // every incoming number enters the queue with priority 0
            numbers.Subscribe(x =>
            {
                queue.Add(new Tuple<int, int>(x, 0));
            });
            // concurrently, primes are re-queued with a higher priority
            numbers
                .ForEachAsync(x =>
                {
                    Console.WriteLine("Testing {0}", x);
                    if (IsPrime(x))
                    {
                        if (queue.Remove(new Tuple<int, int>(x, 0)))
                        {
                            Console.WriteLine("Accelerated {0}", x);
                            queue.Add(new Tuple<int, int>(x, 1));
                        }
                    }
                });
            // each request lazily takes the current best element,
            // polling until the queue is non-empty
            Func<Task<int>> requestElement = async () =>
            {
                while (queue.Count == 0)
                {
                    await Task.Delay(30);
                }
                return queue.Take().Item1;
            };
            return numbers.Select(_ => requestElement);
        }

        static void Process(IObservable<Func<Task<int>>> numbers)
        {
            numbers
                .Subscribe(async x =>
                {
                    await Task.Delay(1000);
                    Console.WriteLine(await x());
                });
        }

        static void Main(string[] args)
        {
            Console.WriteLine("init");
            Process(Order(Source()));
            //Process(Source());
            Console.WriteLine("called");
            Console.ReadLine();
        }
    }
}

To summarize (conceptually):
You have requests that come in irregularly (from source), and a single processor (function process) that can handle them.
The processor should have no downtime.
You're implicitly going to need some sort of queue-ish collection to manage the case where the requests come in faster than the processor can process.
In the event that there are multiple requests queued up, ideally, you should order them by some effectiveness function, however the re-ordering shouldn't be the cause of downtime. (Function reorder).
Is all this correct?
Assuming it is, the source can be of type IObservable<Request>; that sounds fine. reorder, though, sounds like it should really return an IEnumerable<Request>: process wants to work on a pull basis. It wants to pull the highest-priority request once it frees up, wait for the next request if the queue is empty, but start immediately otherwise. That sounds like a task for IEnumerable, not IObservable. A sketch of such a bridge follows the signatures below.
public IObservable<Request> requester();
public IEnumerable<Request> reorder(IObservable<Request> requester);
public void process(IEnumerable<Request> requestEnumerable);
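Here is that bridge as a minimal sketch (mine, not part of the original answer). It assumes a Request type with an Effectiveness property, uses the Rx Subscribe(Action<T>) extension, and omits disposal and error/completion handling for brevity:

public static IEnumerable<Request> Reorder(IObservable<Request> requester)
{
    var pending = new List<Request>();    // buffer of not-yet-processed requests
    var available = new SemaphoreSlim(0); // one Release per buffered request

    requester.Subscribe(r =>
    {
        lock (pending) { pending.Add(r); }
        available.Release();
    });

    while (true)
    {
        available.Wait(); // blocks only while the buffer is empty (conditions 1 and 2)
        Request best;
        lock (pending)
        {
            // everything that accumulated while the consumer was busy
            // is reconsidered here (condition 3)
            pending.Sort((a, b) => b.Effectiveness.CompareTo(a.Effectiveness));
            best = pending[0];
            pending.RemoveAt(0);
        }
        yield return best;
    }
}

Sorting only at the moment the consumer pulls keeps the processor busy whenever requests exist, while whatever queued up in the meantime gets reordered for free.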

Related

Define Next Start Point When Number of Items Unknown

I have a web service I need to query, and it takes a value that supports pagination for its data. Due to the amount of data I need to fetch and how that service is implemented, I intended to do a series of concurrent HTTP web requests to accumulate this data.
Say I have a number of threads and a page size; how could I assign each thread a starting point that doesn't overlap with the other threads? It's been a long time since I took parallel programming and I'm floundering a bit. I know I could find my start point with something like start = N/numThreads * threadNum, but I don't know N. Right now I just spin up X threads and each loops until it gets no more data. The problem is they tend to overlap and I end up with duplicate data. I need unique data and not to waste requests.
Right now I have code that looks something like this. This is one of many attempts, and I see why this is wrong, but it's better to show something. The goal is to collect pages of data from a web service in parallel:
int limit = pageSize;
data = new List<RequestStuff>();
List<Task> tasks = new List<Task>();
for (int i = 0; i < numThreads; i++)
{
    tasks.Add(Task.Factory.StartNew(() =>
    {
        try
        {
            List<RequestStuff> someData;
            do
            {
                int start;
                lock (myLock)
                {
                    start = data.Count;
                }
                someData = GetDataFromService(start, limit);
                lock (myLock)
                {
                    if (someData != null && someData.Count > 0)
                    {
                        data.AddRange(someData);
                    }
                }
            } while (someData != null && someData.Count > 0);
        }
        catch (AggregateException ex)
        {
            //Exception things
        }
    }));
}
Task.WaitAll(tasks.ToArray());
Any inspiration to solve this without race conditions? I need to stick to .NET 4 if that matters.
I'm not sure there's a way to do this without wasting some requests unless you know the actual limit. The code below might help eliminate the duplicate data, as you will only query each index once:
private int _index = -1; // -1 so the first request starts at 0
private bool _shouldContinue = true;

public IEnumerable<RequestStuff> GetAllData()
{
    var tasks = new List<Task<RequestStuff>>();
    while (_shouldContinue)
    {
        // start the task; a task that is constructed but never started
        // would make Task.WaitAll block forever
        tasks.Add(Task.Factory.StartNew(() => GetDataFromService(GetNextIndex())));
    }
    Task.WaitAll(tasks.ToArray());
    return tasks.Select(t => t.Result).ToList();
}

private RequestStuff GetDataFromService(int id)
{
    // Get the data.
    // If there's no data returned, set _shouldContinue to false.
    // Return the RequestStuff.
}

private int GetNextIndex()
{
    return Interlocked.Increment(ref _index);
}
It could also be improved by adding cancellation tokens to cancel any indexes you know to be wasteful, i.e., if index 4 returns nothing you can cancel all queries on indexes above 4 that are still active.
Or if you could make a reasonable guess at the max index you might be able to implement an algorithm to pinpoint the exact limit before retrieving any data. This would probably only be more efficient if your guess was fairly accurate though.
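That pinpointing could be done with a gallop-then-binary-search over page indexes (a hypothetical sketch; pageHasData stands for one cheap probe request, e.g. with limit = 1, and assumes pages are contiguous):

private static int FindLastPageWithData(Func<int, bool> pageHasData)
{
    int hi = 1;
    while (pageHasData(hi)) hi *= 2; // gallop until we overshoot the limit
    int lo = hi / 2;                 // last index known (or assumed) to have data
    while (lo + 1 < hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (pageHasData(mid)) lo = mid;
        else hi = mid;
    }
    return lo;                       // highest page index that still has data
}

With the limit known, each thread can then be assigned a disjoint, fixed range of pages up front.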
Are you attempting to force parallelism on the part of the remote service by issuing multiple concurrent requests? Paging is generally used to limit the amount of data returned to only that which is needed, but if you need all of the data, then attempting to first page it and then reconstruct it later seems like a poor design. Your code becomes needlessly complex and difficult to maintain, you'll likely just move the bottleneck from code you control to somewhere else, and now you've introduced data integrity issues (what happens if these threads access different versions of the data you are trying to query?). By increasing the complexity and the number of calls, you are also increasing the likelihood of problems occurring (e.g. one of the connections gets dropped).
Can you state the problem you are attempting to solve so perhaps instead we can help architect a better solution?

Adding Sample(TimeSpan span) to Reactive Extensions pipeline causes threading issues

Using Reactive Extensions, I have created a rolling buffer of values that caches a small history of recent values in a data stream for use in a plotting application. Since the values arrive much faster than I am interested in displaying them, I would like to use the Sample(TimeSpan span) method in my Reactive pipeline to slow things down. However, adding it as in the sample below causes an exception ("collection was modified") to be thrown after a bit in the WriteEnumerable method. This is obviously a threading issue related to Sample, but I'm stumped on how exactly to alleviate it. I've tried setting the scheduler to use in the Sample method, to no avail.
Any advice?
class Program
{
    static void Main(string[] args)
    {
        Observable.Interval(TimeSpan.FromSeconds(0.1))
            .Take(500)
            .TimedRollingBuffer(TimeSpan.FromSeconds(10))
            .Sample(TimeSpan.FromSeconds(0.5))
            .Subscribe(frame => WriteEnumerable(frame));

        var input = "";
        while (input != "exit")
        {
            input = Console.ReadLine();
        }
    }

    private static void WriteEnumerable<T>(IEnumerable<T> enumerable)
    {
        foreach (T thing in enumerable)
            Console.WriteLine(thing + " " + DateTime.UtcNow);
        Console.WriteLine(Environment.NewLine);
    }
}

public static class Extensions
{
    public static IObservable<IEnumerable<Timestamped<T>>> TimedRollingBuffer<T>(this IObservable<T> observable, TimeSpan timeRange)
    {
        return Observable.Create<IEnumerable<Timestamped<T>>>(
            o =>
            {
                var queue = new Queue<Timestamped<T>>();
                return observable.Timestamp().Subscribe(
                    tx =>
                    {
                        queue.Enqueue(tx);
                        DateTime now = DateTime.Now;
                        while (queue.Peek().Timestamp < now.Subtract(timeRange))
                            queue.Dequeue();
                        o.OnNext(queue);
                    },
                    ex => o.OnError(ex),
                    () => o.OnCompleted()
                );
            });
    }
}
credit where credit is due: reactive extensions sliding time window
The exception is due to the fact you are modifying the queue whilst enumerating it.
The RollingBuffer implementation you cited by @Enigmativity does the right thing: you will notice in his implementation that OnNext is invoked with a ToArray(), ensuring a copy of the list as it stands is dispatched to observers rather than the mutating original.
In your case, the introduction of Sample introduces concurrency (which is fine - that's what it is supposed to do) - however, you are passing the queue itself, which will mutate whilst enumeration is occurring due to this introduced concurrency. This is a bug.
In your TimedRollingBuffer, if you were to use o.OnNext(queue.ToArray()) instead of o.OnNext(queue) you wouldn't have this problem.
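Applied to the extension method from the question, only the OnNext line changes (same code otherwise):

public static IObservable<IEnumerable<Timestamped<T>>> TimedRollingBuffer<T>(this IObservable<T> observable, TimeSpan timeRange)
{
    return Observable.Create<IEnumerable<Timestamped<T>>>(
        o =>
        {
            var queue = new Queue<Timestamped<T>>();
            return observable.Timestamp().Subscribe(
                tx =>
                {
                    queue.Enqueue(tx);
                    DateTime now = DateTime.Now;
                    while (queue.Peek().Timestamp < now.Subtract(timeRange))
                        queue.Dequeue();
                    o.OnNext(queue.ToArray()); // snapshot: observers never see the live, mutating queue
                },
                ex => o.OnError(ex),
                () => o.OnCompleted()
            );
        });
}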

Creating generated sequence of events as a cold sequence

FWIW - I'm scrapping the previous version of this question in favor of a different one along the same lines, after asking for advice on meta.
I have a webservice that contains configuration data. I would like to call it at regular intervals Tok in order to refresh the configuration data in the application that uses it. If the service is in error (timeout, down, etc.), I want to keep the data from the previous call and call the service again after a different time interval Tnotok. Finally, I want the behavior to be testable.
Since managing time sequences and testability seem like strong points of the Reactive Extensions, I started using an Observable that will be fed by a generated sequence. Here is how I create the sequence:
Observable.Generate<DataProviderResult, DataProviderResult>(
    // we start with some empty data
    new DataProviderResult()
    {
        Failures = 0,
        Informations = new List<Information>()
    },
    // never stop
    (r) => true,
    // there is no iteration
    (r) => r,
    // we get the next value from a call to the webservice
    (r) => FetchNextResults(r),
    // we select the time for the next msg depending on the current failures
    (r) => r.Failures > 0 ? tnotok : tok,
    // we pass a TestScheduler
    scheduler)
    .Subscribe(r => HandleResults(r));
I have two problems currently:
It looks like I am creating a hot observable. Even trying to use Publish/Connect, the subscribed action misses the first event. How can I create it as a cold observable?
myObservable = myObservable.Publish();
myObservable.Subscribe(r => HandleResults(r));
myObservable.Connect(); // doesn't call OnNext for the first element in the sequence
When I subscribe, the ordering of the subscription and the generation seems off: for any frame, the subscription method is fired before the FetchNextResults method. Is that normal? I would expect the sequence to call the method for frame f, not f+1.
Here is the code I'm using for fetching and subscribing:
private DataProviderResult FetchNextResults(DataProviderResult previousResult)
{
    Console.WriteLine(string.Format("Fetching at {0:hh:mm:ss:fff}", scheduler.Now));
    try
    {
        return new DataProviderResult() { Informations = dataProvider.GetInformation().ToList(), Failures = 0 };
    }
    catch (Exception)
    {
    }
    previousResult.Failures++;
    return previousResult;
}

private void HandleResults(DataProviderResult result)
{
    Console.WriteLine(string.Format("Managing at {0:hh:mm:ss:fff}", scheduler.Now));
    dataResult = result;
}
Here is what I'm seeing that prompted me articulating these questions:
Starting at 12:00:00:000
Fetching at 12:00:00:000 < no managing the result that has been fetched here
Managing at 12:00:01:000 < managing before fetching for frame f
Fetching at 12:00:01:000
Managing at 12:00:02:000
Fetching at 12:00:02:000
EDIT: Here is a bare bones copy-pastable program that illustrates the problem.
/*using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using Microsoft.Reactive.Testing;*/

private static int fetchData(int i, IScheduler scheduler)
{
    writeTime("fetching " + (i + 1).ToString(), scheduler);
    return i + 1;
}

private static void manageData(int i, IScheduler scheduler)
{
    writeTime("managing " + i.ToString(), scheduler);
}

private static void writeTime(string msg, IScheduler scheduler)
{
    Console.WriteLine(string.Format("{0:mm:ss:fff} {1}", scheduler.Now, msg));
}

private static void Main(string[] args)
{
    var scheduler = new TestScheduler();
    writeTime("start", scheduler);
    var datas = Observable.Generate<int, int>(
        fetchData(0, scheduler),
        (d) => true,
        (d) => fetchData(d, scheduler),
        (d) => d,
        (d) => TimeSpan.FromMilliseconds(1000),
        scheduler)
        .Subscribe(i => manageData(i, scheduler));
    scheduler.AdvanceBy(TimeSpan.FromMilliseconds(3000).Ticks);
}
This outputs the following:
00:00:000 start
00:00:000 fetching 1
00:01:000 managing 1
00:01:000 fetching 2
00:02:000 managing 2
00:02:000 fetching 3
I don't understand why the first element is not managed immediately after it is fetched. There is one second between the sequence effectively pulling the data and the data being handed to the observer. Am I missing something here, or is this expected behavior? If so, is there a way to have the observer react immediately to the new value?
You are misunderstanding the purpose of the timeSelector parameter. It is called each time a value is generated and it returns a time which indicates how long to delay before delivering that value to observers and then generating the next value.
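A compact illustration of those semantics (a sketch; scheduler is whatever scheduler you pass in):

// Each generated value is held for the selected interval *before* it is
// delivered; only then is the next value generated. So the value produced
// at t=0 reaches observers at t=1s.
var xs = Observable.Generate(
    0,                            // initial state, produced immediately
    _ => true,                    // never stop
    i => i + 1,                   // iterate
    i => i,                       // result selector
    _ => TimeSpan.FromSeconds(1), // delay before delivering each value
    scheduler);
// timeline: 0 delivered at 00:01, 1 at 00:02, 2 at 00:03, ...

This matches the log in the question: each value is fetched at frame f but only handed to the observer at frame f+1.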
Here's a non-Generate way to tackle your problem.
private DataProviderResult FetchNextResult()
{
    // let exceptions throw
    return new DataProviderResult { Informations = dataProvider.GetInformation().ToList(), Failures = 0 };
}

private IObservable<DataProviderResult> CreateObservable(IScheduler scheduler)
{
    // an observable that produces a single result then completes
    var fetch = Observable.Defer(
        () => Observable.Return(FetchNextResult()));
    // concatenate this observable with one that will pause
    // for "tok" time before completing.
    // This observable will send the result
    // then pause before completing.
    var fetchThenPause = fetch.Concat(Observable
        .Empty<DataProviderResult>()
        .Delay(tok, scheduler));
    // Now, if fetchThenPause fails, we want to consume/ignore the exception
    // and then pause for tnotok time before completing with no results
    var fetchPauseOnErrors = fetchThenPause.Catch(Observable
        .Empty<DataProviderResult>()
        .Delay(tnotok, scheduler));
    // Now, whenever our observable completes (after its pause), start it again.
    var fetchLoop = fetchPauseOnErrors.Repeat();
    // Now use Publish(initialValue) so that we remember the most recent value
    var fetchLoopWithMemory = fetchLoop.Publish((DataProviderResult)null);
    // YMMV from here on. Let's use RefCount() to start the
    // connection the first time someone subscribes
    var fetchLoopAuto = fetchLoopWithMemory.RefCount();
    // And let's filter out that first null that will arrive before
    // we ever get the first result from the data provider
    return fetchLoopAuto.Where(t => t != null);
}

public MyClass()
{
    // scheduler choice here is an assumption; the original omitted the argument
    Information = CreateObservable(Scheduler.Default);
}

public IObservable<DataProviderResult> Information { get; private set; }
Generate produces cold observable sequences, so that is my first alarm bell.
I tried to pull your code into LINQPad* and run it, and changed it a bit to focus on the problem. It seems to me that you have the iterator and result-selector functions confused. These are back-to-front. When you iterate, you should take the value from your last iteration and use it to produce your next value. The result selector is used to pick off (Select) the value from the instance you are iterating on.
So in your case, the type you are iterating on is the type you want to produce values of. Therefore keep your result selector as just the identity function x => x, and your iterator function should be the one that makes the web service call.
Observable.Generate<DataProviderResult, DataProviderResult>(
    // we start with some empty data
    new DataProviderResult()
    {
        Failures = 0,
        Informations = new List<Information>()
    },
    // never stop
    (r) => true,
    // we get the next value (iterate) by making a call to the webservice
    (r) => FetchNextResults(r),
    // there is no projection
    (r) => r,
    // we select the time for the next msg depending on the current failures
    (r) => r.Failures > 0 ? tnotok : tok,
    // we pass a TestScheduler
    scheduler)
    .Subscribe(r => HandleResults(r));
As a side note, try to prefer immutable types instead of mutating values as you iterate.
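For example, a minimal immutable shape for the state (a hypothetical sketch, not from the original post) could look like this:

// each Generate step produces a new instance instead of mutating the old one
public sealed class DataProviderResult
{
    public DataProviderResult(IList<Information> informations, int failures)
    {
        Informations = informations;
        Failures = failures;
    }

    public IList<Information> Informations { get; private set; }
    public int Failures { get; private set; }

    // record a failed fetch without touching the previous instance
    public DataProviderResult WithFailure()
    {
        return new DataProviderResult(Informations, Failures + 1);
    }
}

FetchNextResults would then return previousResult.WithFailure() in its catch path instead of incrementing a counter on a shared instance.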
*Please provide an autonomous working snippet of code so people can better answer your question. :-)

Unstable unit test result - mock message queue behavior and parallel loop

I am building a class that uses a parallel loop to pull messages from a message queue. To explain my issue, I created a simplified version of the code:
public class Worker
{
    private IMessageQueue mq;

    public Worker(IMessageQueue mq)
    {
        this.mq = mq;
    }

    public int Concurrency
    {
        get
        {
            return 5;
        }
    }

    public void DoWork()
    {
        int totalFoundMessage = 0;
        do
        {
            // reset for every loop
            totalFoundMessage = 0;
            Parallel.For<int>(
                0,
                this.Concurrency,
                () => 0,
                (i, loopState, localState) =>
                {
                    Message data = this.mq.GetFromMessageQueue("MessageQueueName");
                    if (data != null)
                    {
                        return localState + 1;
                    }
                    else
                    {
                        return localState + 0;
                    }
                },
                localState =>
                {
                    Interlocked.Add(ref totalFoundMessage, localState);
                });
        }
        while (totalFoundMessage >= this.Concurrency);
    }
}
The idea is to give the worker class a concurrency value to control the parallel loop. If, after a loop, the number of messages retrieved from the queue equals the concurrency value, I assume there are potentially more messages in the queue and continue fetching until the message count is smaller than the concurrency. The TPL code is also inspired by the TPL Data Parallelism Issue post.
I have an interface for the message queue and a message object.
public interface IMessageQueue
{
    Message GetFromMessageQueue(string queueName);
}

public class Message
{
}
I then created my unit test code and used Moq to mock the IMessageQueue interface:
[TestMethod()]
public void DoWorkTest()
{
    Mock<IMessageQueue> mqMock = new Mock<IMessageQueue>();
    Message data = new Message();
    Worker w = new Worker(mqMock.Object);
    int callCounter = 0;
    int messageNumber = 11;
    mqMock.Setup(x => x.GetFromMessageQueue("MessageQueueName")).Returns(() =>
    {
        callCounter++;
        if (callCounter < messageNumber)
        {
            return data;
        }
        else
        {
            // simulate MSMQ's behavior: the last call to an empty queue returns null
            return (Message)null;
        }
    });
    w.DoWork();
    int expectedCallTimes = w.Concurrency * (messageNumber / w.Concurrency);
    if (messageNumber % w.Concurrency > 0)
    {
        expectedCallTimes += w.Concurrency;
    }
    mqMock.Verify(x => x.GetFromMessageQueue("MessageQueueName"), Times.Exactly(expectedCallTimes));
}
I used the idea from Moq of setting up a return value based on the number of previous calls to build the call-count-based response.
During unit testing I noticed the test result is unstable: if you run it multiple times, in most cases the test passes, but occasionally it fails for various reasons.
I have no clue what causes this and am looking for some input. Thanks.
The problem is that your mocked GetFromMessageQueue() is not thread-safe, but you're calling it from multiple threads at the same time. ++ is an inherently thread-unsafe operation.
Instead, you should use locking or Interlocked.Increment().
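For instance, the mock callback from the question becomes thread-safe with one change (a sketch; only the counter handling differs):

mqMock.Setup(x => x.GetFromMessageQueue("MessageQueueName")).Returns(() =>
{
    // atomically reserve this call's position in the sequence
    int call = Interlocked.Increment(ref callCounter);
    // calls beyond the message count simulate the empty queue returning null
    return call < messageNumber ? data : (Message)null;
});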
Also, in your code, you're likely not going to benefit from parallelism, because starting and stopping the parallel loop has some overhead. A better way would be to have a while (or do-while) inside the Parallel.For(), not the other way around.
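With the question's Worker fields, that inversion could look roughly like this (a sketch, not a drop-in replacement):

// one parallel iteration per degree of concurrency; each worker drains
// the queue in its own loop, so the parallel-loop overhead is paid once
// rather than once per batch of Concurrency messages
Parallel.For(0, this.Concurrency, _ =>
{
    Message data;
    while ((data = this.mq.GetFromMessageQueue("MessageQueueName")) != null)
    {
        // process data
    }
});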
My approach would be to restructure. When testing things like timing or concurrency, it is usually prudent to abstract your calls (in this case, use of PLINQ) into a separate class that accepts a number of delegates. You can then test the correct calls are being made to the new class. Then, because the new class is a lot simpler (only a single PLINQ call) and contains no logic, you can leave it untested.
I advocate not testing in this case because unless you are working on something super-critical (life support systems, airplanes, etc), it becomes more trouble than it's worth to test. Trust the framework will execute the PLINQ query as expected. You should only be testing those things which make sense to test, and that provide value to your project or client.

.Net BlockingCollection.Take(2) : Safely removing two items at a time

After doing some research, I'm asking for feedback on how to effectively remove two items at a time from a concurrent collection. My situation involves incoming messages over UDP which are currently being placed into a BlockingCollection. Once there are two Users in the collection, I need to safely Take two Users and process them. I've seen several different techniques, including some ideas listed below. My current implementation is below, but I'm thinking there's a cleaner way to do this while ensuring that Users are processed in groups of two. That's the only restriction in this scenario.
Current Implementation:
private int userQueueCount = 0;
public BlockingCollection<User> UserQueue = new BlockingCollection<User>();

public void JoinQueue(User u)
{
    UserQueue.Add(u);
    Interlocked.Increment(ref userQueueCount);
    if (userQueueCount > 1)
    {
        IEnumerable<User> users = UserQueue.Take(2);
        if (users.Count() == 2)
        {
            Interlocked.Decrement(ref userQueueCount);
            Interlocked.Decrement(ref userQueueCount);
            // ... do some work with users, but if only one
            // is removed I'll run into problems
        }
    }
}
What I would like to do is something like this, but I cannot currently test it in a production situation to ensure integrity:
Parallel.ForEach(UserQueue.Take(2), (u) => { ... });
Or better yet:
public void JoinQueue(User u)
{
    UserQueue.Add(u);
    // if needed? increment
    Interlocked.Increment(ref userQueueCount);
    UserQueue.CompleteAdding();
}
Then implement this somewhere:
Task.Factory.StartNew(() =>
{
    while (userQueueCount > 1) // OR (UserQueue.Count > 1), if that's safe?
    {
        IEnumerable<User> users = UserQueue.Take(2);
        // ... do stuff
    }
});
The problem with this is that I'm not sure I can guarantee that between the condition check (Count > 1) and the Take(2) the UserQueue still has at least two items to process. Incoming UDP messages are processed in parallel, so I need a way to safely pull items off the blocking/concurrent collection in pairs of two.
Is there a better/safer way to do this?
Revised Comments:
The intended goal of this question is really just to achieve a stable, thread-safe method of processing items off a concurrent collection in .NET 4.0. It doesn't have to be pretty; it just has to be stable in the task of processing items in unordered pairs of two in a parallel environment.
Here is what I'd do, in rough code:
// can use a BlockingCollection too (as it's just a blocking ConcurrentQueue by default anyway)
private ConcurrentQueue<User> gameQueue = new ConcurrentQueue<User>();

public void OnUserStartedGame(User joiningUser)
{
    User waitingUser;
    if (this.gameQueue.TryDequeue(out waitingUser)) // if there's someone waiting, we'll get him
        this.MatchUsers(waitingUser, joiningUser);
    else
        this.QueueUser(joiningUser); // it doesn't matter if there's already someone in the
                                     // queue by now because, well, we are using a queue
                                     // and it will sort itself out
}

private void QueueUser(User user)
{
    this.gameQueue.Enqueue(user);
}

private void MatchUsers(User first, User second)
{
    // not sure what you do here
}
The basic idea is that if someone wants to start a game and there's someone in your queue, you match them and start a game; if there's no one, you add them to the queue.
At best you'll only have one user in the queue at a time, but if not, that's not too bad either, because as other users start games the waiting ones will gradually be removed, and no new ones will be added until the queue is empty again.
If I could not put pairs of users into the collection for some reason, I would use a ConcurrentQueue and TryDequeue two items at a time; if I can get only one, put it back. Wait as necessary.
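That idea in rough code (a sketch; the put-back sends a user to the tail of the queue, which is acceptable here since pairs are unordered):

User first, second;
if (queue.TryDequeue(out first))
{
    if (queue.TryDequeue(out second))
    {
        Process(first, second); // got a pair
    }
    else
    {
        queue.Enqueue(first);   // only one available: put it back, retry later
    }
}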
I think the easiest solution here is to use locking: you will have one lock for all consumers (producers won't use any locks), which will make sure you always take the users in the correct order:
User firstUser;
User secondUser;
lock (consumerLock)
{
    firstUser = userQueue.Take();
    secondUser = userQueue.Take();
}
Process(firstUser, secondUser);
Another option would be to have two queues: one for single users and one for pairs of users, and a process that transfers them from the first queue to the second.
If you don't mind wasting another thread, you can do this with two BlockingCollections:
while (true)
{
    var firstUser = incomingUsers.Take();
    var secondUser = incomingUsers.Take();
    userPairs.Add(Tuple.Create(firstUser, secondUser));
}
You don't have to worry about locking here, because the queue for single users will have only one consumer, and the consumers of pairs can now use simple Take() safely.
If you do care about wasting a thread and can use TPL Dataflow, you can use BatchBlock<T>, which combines incoming items into batches of n items, where n is configured at the time of creation of the block, so you can set it to 2.
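A minimal sketch of that approach (requires the TPL Dataflow package, System.Threading.Tasks.Dataflow; Process is a placeholder):

var pairs = new BatchBlock<User>(2); // groups posted users into arrays of two
var processPairs = new ActionBlock<User[]>(pair =>
{
    // pair.Length == 2, except for a possible trailing partial batch
    // when the block is completed
    Process(pair[0], pair[1]);
});
pairs.LinkTo(processPairs, new DataflowLinkOptions { PropagateCompletion = true });

// producers simply post each incoming user:
pairs.Post(incomingUser);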
Maybe this can help:
public static IList<T> TakeMulti<T>(this BlockingCollection<T> me, int count = 100) where T : class
{
    T last = null;
    if (me.Count == 0)
    {
        last = me.Take(); // blocks when the queue is empty
    }
    var result = new List<T>(count);
    if (last != null)
    {
        result.Add(last);
    }
    // if you want to wait and take more items in one batch:
    //if (me.Count < count / 2)
    //{
    //    Thread.Sleep(1000);
    //}
    while (me.Count > 0 && result.Count < count)
    {
        result.Add(me.Take());
    }
    return result;
}
