SemaphoreSlim with dynamic maxCount - c#

I'm facing a problem where I need to limit the number of concurrent calls to another web server. The limit will vary because the server is shared, and its available capacity may grow or shrink.
I was thinking about using the SemaphoreSlim class, but it exposes no public property to change the max count.
Should I wrap SemaphoreSlim in another class that handles the max count? Is there any better approach?
EDIT:
Here's what I'm trying:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace Semaphore
{
    class Program
    {
        static SemaphoreSlim _sem = new SemaphoreSlim(10, 10000);

        static void Main(string[] args)
        {
            int max = 15;
            for (int i = 1; i <= 50; i++)
            {
                new Thread(Enter).Start(new int[] { i, max });
            }
            Console.ReadLine();

            max = 11;
            for (int i = 1; i <= 50; i++)
            {
                new Thread(Enter).Start(new int[] { i, max });
            }
        }

        static void Enter(object param)
        {
            int[] arr = (int[])param;
            int id = arr[0];
            int max = arr[1];
            try
            {
                Console.WriteLine(_sem.CurrentCount);
                if (_sem.CurrentCount <= max)
                    _sem.Release(1);
                else
                {
                    _sem.Wait(1000);
                    Console.WriteLine(id + " wants to enter");
                    Thread.Sleep((1000 * id) / 2); // only a few threads can be here at a time
                    Console.WriteLine(id + " is in!");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Oops: " + id);
                Console.WriteLine(ex.Message);
            }
            finally
            {
                _sem.Release();
            }
        }
    }
}
Questions:
1 - _sem.Wait(1000) should cancel the execution of threads that run for more than 1000 ms, shouldn't it?
2 - Did I get the idea of using Release / Wait right?

You can't change the max count, but you can create a SemaphoreSlim that has a very high maximum count and reserve some of the slots. See this constructor.
So let's say that the absolute maximum number of concurrent calls is 100, but initially you want it to be 25. You initialize your semaphore:
SemaphoreSlim sem = new SemaphoreSlim(25, 100);
So 25 is the number of requests that can be serviced concurrently. You have reserved the other 75.
If you then want to increase the number allowed, just call Release(num). If you called Release(10), then the number would go to 35.
Now, if you want to reduce the number of available requests, you have to call Wait multiple times (SemaphoreSlim uses Wait; WaitOne belongs to the full Semaphore class). For example, if you want to remove 10 from the available count:
for (var i = 0; i < 10; ++i)
{
    sem.Wait();
}
This has the potential of blocking until other clients release the semaphore. That is, if you allow 35 concurrent requests and you want to reduce the number to 25, but there are already 35 clients with active requests, that Wait will block until a client calls Release, and the loop won't terminate until 10 clients have released.
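If blocking the adjusting thread is a concern, the shrink can also be done asynchronously. A minimal sketch, assuming the semaphore was created with reserved capacity as described above (AdjustConcurrencyAsync is an illustrative name, not part of the answer):
static async Task AdjustConcurrencyAsync(SemaphoreSlim sem, int delta)
{
    if (delta > 0)
    {
        sem.Release(delta);        // grow: hand back reserved slots
    }
    else
    {
        for (var i = 0; i < -delta; ++i)
        {
            await sem.WaitAsync(); // shrink: each await retires one slot as it frees up
        }
    }
}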

Get a semaphore.
Set the capacity to something quite a bit higher than you need it to be.
Set the initial capacity to what you want your actual maximum capacity to be.
Give out the semaphore to others to use.
At this point you can then wait on the semaphore however much you want (without a corresponding release call) to lower the capacity. You can release the semaphore a number of times (without a corresponding wait call) to increase the effective capacity.
If this is something you're doing enough of, you can create your own semaphore class that composes a SemaphoreSlim and encapsulates this logic; see the sketch below. This composition is also essential if you have code that already releases a semaphore without first waiting on it; with your own class you can ensure that such releases are no-ops. (That said, you should avoid putting yourself in that position to begin with, really.)
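A minimal sketch of such a composing class, using only the reserve-and-release trick described above (the names are illustrative):
using System.Threading;
public sealed class ResizableSemaphore
{
    private readonly SemaphoreSlim _sem;
    private readonly object _gate = new object();
    private int _currentLimit;
    public ResizableSemaphore(int initialLimit, int absoluteMax)
    {
        _sem = new SemaphoreSlim(initialLimit, absoluteMax);
        _currentLimit = initialLimit;
    }
    public void Wait() => _sem.Wait();
    public void Release() => _sem.Release();
    // Grow or shrink the effective limit; shrinking blocks until enough slots drain.
    public void SetLimit(int newLimit)
    {
        lock (_gate)
        {
            if (newLimit > _currentLimit)
                _sem.Release(newLimit - _currentLimit);
            else
                for (int i = 0; i < _currentLimit - newLimit; i++)
                    _sem.Wait();
            _currentLimit = newLimit;
        }
    }
}
Note that SetLimit holds a private lock while shrinking, so concurrent resize calls serialize; a client's Release() goes straight to the inner semaphore, so shrinking cannot deadlock against active clients.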

Here is how I solved this situation: I created a custom semaphore slim class that allows me to increase and decrease the number of slots. This class also allows me to set a maximum number of slots so I never exceed a "reasonable" number and also to set a minimum number of slots so I don't go below a "reasonable" threshold.
using Picton.Messaging.Logging;
using System;
using System.Threading;
namespace Picton.Messaging.Utils
{
/// <summary>
/// An improvement over System.Threading.SemaphoreSlim that allows you to dynamically increase and
/// decrease the number of threads that can access a resource or pool of resources concurrently.
/// </summary>
/// <seealso cref="System.Threading.SemaphoreSlim" />
public class SemaphoreSlimDynamic : SemaphoreSlim
{
#region FIELDS
private static readonly ILog _logger = LogProvider.GetLogger(typeof(SemaphoreSlimDynamic));
private readonly ReaderWriterLockSlim _lock;
#endregion
#region PROPERTIES
/// <summary>
/// Gets the minimum number of slots.
/// </summary>
/// <value>
/// The minimum slots count.
/// </value>
public int MinimumSlotsCount { get; private set; }
/// <summary>
/// Gets the number of slots currently available.
/// </summary>
/// <value>
/// The available slots count.
/// </value>
public int AvailableSlotsCount { get; private set; }
/// <summary>
/// Gets the maximum number of slots.
/// </summary>
/// <value>
/// The maximum slots count.
/// </value>
public int MaximumSlotsCount { get; private set; }
#endregion
#region CONSTRUCTOR
/// <summary>
/// Initializes a new instance of the <see cref="SemaphoreSlimDynamic"/> class.
/// </summary>
/// <param name="minCount">The minimum number of slots.</param>
/// <param name="initialCount">The initial number of slots.</param>
/// <param name="maxCount">The maximum number of slots.</param>
public SemaphoreSlimDynamic(int minCount, int initialCount, int maxCount)
: base(initialCount, maxCount)
{
_lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
this.MinimumSlotsCount = minCount;
this.AvailableSlotsCount = initialCount;
this.MaximumSlotsCount = maxCount;
}
#endregion
#region PUBLIC METHODS
/// <summary>
/// Attempts to increase the number of slots
/// </summary>
/// <param name="millisecondsTimeout">The timeout in milliseconds.</param>
/// <param name="increaseCount">The number of slots to add</param>
/// <returns>true if the attempt was successful; otherwise, false.</returns>
public bool TryIncrease(int millisecondsTimeout = 500, int increaseCount = 1)
{
return TryIncrease(TimeSpan.FromMilliseconds(millisecondsTimeout), increaseCount);
}
/// <summary>
/// Attempts to increase the number of slots
/// </summary>
/// <param name="timeout">The timeout.</param>
/// <param name="increaseCount">The number of slots to add</param>
/// <returns>true if the attempt was successful; otherwise, false.</returns>
public bool TryIncrease(TimeSpan timeout, int increaseCount = 1)
{
if (increaseCount < 0) throw new ArgumentOutOfRangeException(nameof(increaseCount));
else if (increaseCount == 0) return false;
var increased = false;
try
{
if (this.AvailableSlotsCount < this.MaximumSlotsCount)
{
var lockAcquired = _lock.TryEnterWriteLock(timeout);
if (lockAcquired)
{
for (int i = 0; i < increaseCount; i++)
{
if (this.AvailableSlotsCount < this.MaximumSlotsCount)
{
Release();
this.AvailableSlotsCount++;
increased = true;
}
}
if (increased) _logger.Trace($"Semaphore slots increased: {this.AvailableSlotsCount}");
_lock.ExitWriteLock();
}
}
}
catch (SemaphoreFullException)
{
// An exception is thrown if we attempt to exceed the max number of concurrent tasks
// It's safe to ignore this exception
}
return increased;
}
/// <summary>
/// Attempts to decrease the number of slots
/// </summary>
/// <param name="millisecondsTimeout">The timeout in milliseconds.</param>
/// <param name="decreaseCount">The number of slots to add</param>
/// <returns>true if the attempt was successfully; otherwise, false.</returns>
public bool TryDecrease(int millisecondsTimeout = 500, int decreaseCount = 1)
{
return TryDecrease(TimeSpan.FromMilliseconds(millisecondsTimeout), decreaseCount);
}
/// <summary>
/// Attempts to decrease the number of slots
/// </summary>
/// <param name="timeout">The timeout.</param>
/// <param name="decreaseCount">The number of slots to add</param>
/// <returns>true if the attempt was successfully; otherwise, false.</returns>
public bool TryDecrease(TimeSpan timeout, int decreaseCount = 1)
{
if (decreaseCount < 0) throw new ArgumentOutOfRangeException(nameof(decreaseCount));
else if (decreaseCount == 0) return false;
var decreased = false;
if (this.AvailableSlotsCount > this.MinimumSlotsCount)
{
var lockAcquired = _lock.TryEnterWriteLock(timeout);
if (lockAcquired)
{
for (int i = 0; i < decreaseCount; i++)
{
if (this.AvailableSlotsCount > this.MinimumSlotsCount)
{
if (Wait(timeout))
{
this.AvailableSlotsCount--;
decreased = true;
}
}
}
if (decreased) _logger.Trace($"Semaphore slots decreased: {this.AvailableSlotsCount}");
_lock.ExitWriteLock();
}
}
return decreased;
}
#endregion
}
}
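Usage might look like this (a sketch; the numbers are arbitrary, and Wait/WaitAsync/Release are inherited from the SemaphoreSlim base class):
var throttle = new SemaphoreSlimDynamic(minCount: 1, initialCount: 5, maxCount: 25);
// a worker acquires and releases slots as usual
await throttle.WaitAsync();
try
{
    // call the shared server
}
finally
{
    throttle.Release();
}
// elsewhere, react to the server's changing capacity
throttle.TryIncrease(increaseCount: 5); // grows the limit, capped at MaximumSlotsCount
throttle.TryDecrease(decreaseCount: 3); // shrinks the limit, floored at MinimumSlotsCount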

OK, I was able to solve my problem by looking at the Mono project source.
// SemaphoreSlim.cs
//
// Copyright (c) 2008 Jérémie "Garuma" Laval
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//
using System;
using System.Diagnostics;
using System.Threading.Tasks;
namespace System.Threading
{
public class SemaphoreSlimCustom : IDisposable
{
const int spinCount = 10;
const int deepSleepTime = 20;
private object _sync = new object();
int maxCount;
int currCount;
bool isDisposed;
public int MaxCount
{
get { lock (_sync) { return maxCount; } }
set
{
lock (_sync)
{
maxCount = value;
}
}
}
EventWaitHandle handle;
public SemaphoreSlimCustom (int initialCount) : this (initialCount, int.MaxValue)
{
}
public SemaphoreSlimCustom (int initialCount, int maxCount)
{
if (initialCount < 0 || initialCount > maxCount || maxCount < 0)
throw new ArgumentOutOfRangeException ("The initialCount argument is negative, initialCount is greater than maxCount, or maxCount is not positive.");
this.maxCount = maxCount;
this.currCount = initialCount;
this.handle = new ManualResetEvent (initialCount > 0);
}
public void Dispose ()
{
Dispose(true);
}
protected virtual void Dispose (bool disposing)
{
isDisposed = true;
}
void CheckState ()
{
if (isDisposed)
throw new ObjectDisposedException ("The SemaphoreSlim has been disposed.");
}
public int CurrentCount {
get {
return currCount;
}
}
public int Release ()
{
return Release(1);
}
public int Release (int releaseCount)
{
CheckState ();
if (releaseCount < 1)
throw new ArgumentOutOfRangeException ("releaseCount", "releaseCount is less than 1");
// As we have to take care of the max limit we resort to CAS
int oldValue, newValue;
do {
oldValue = currCount;
newValue = (currCount + releaseCount);
newValue = newValue > maxCount ? maxCount : newValue;
} while (Interlocked.CompareExchange (ref currCount, newValue, oldValue) != oldValue);
handle.Set ();
return oldValue;
}
public void Wait ()
{
Wait (CancellationToken.None);
}
public bool Wait (TimeSpan timeout)
{
return Wait ((int)timeout.TotalMilliseconds, CancellationToken.None);
}
public bool Wait (int millisecondsTimeout)
{
return Wait (millisecondsTimeout, CancellationToken.None);
}
public void Wait (CancellationToken cancellationToken)
{
Wait (-1, cancellationToken);
}
public bool Wait (TimeSpan timeout, CancellationToken cancellationToken)
{
CheckState();
return Wait ((int)timeout.TotalMilliseconds, cancellationToken);
}
public bool Wait (int millisecondsTimeout, CancellationToken cancellationToken)
{
CheckState ();
if (millisecondsTimeout < -1)
throw new ArgumentOutOfRangeException ("millisecondsTimeout",
"millisecondsTimeout is a negative number other than -1");
Stopwatch sw = Stopwatch.StartNew();
Func<bool> stopCondition = () => millisecondsTimeout >= 0 && sw.ElapsedMilliseconds > millisecondsTimeout;
do {
bool shouldWait;
int result;
do {
cancellationToken.ThrowIfCancellationRequested ();
if (stopCondition ())
return false;
shouldWait = true;
result = currCount;
if (result > 0)
shouldWait = false;
else
break;
} while (Interlocked.CompareExchange (ref currCount, result - 1, result) != result);
if (!shouldWait) {
if (result == 1)
handle.Reset ();
break;
}
SpinWait wait = new SpinWait ();
while (Thread.VolatileRead (ref currCount) <= 0) {
cancellationToken.ThrowIfCancellationRequested ();
if (stopCondition ())
return false;
if (wait.Count > spinCount) {
int diff = millisecondsTimeout - (int)sw.ElapsedMilliseconds;
int timeout = millisecondsTimeout < 0 ? deepSleepTime :
Math.Min (Math.Max (diff, 1), deepSleepTime);
handle.WaitOne (timeout);
} else
wait.SpinOnce ();
}
} while (true);
return true;
}
public WaitHandle AvailableWaitHandle {
get {
return handle;
}
}
public Task WaitAsync ()
{
return Task.Factory.StartNew (() => Wait ());
}
public Task WaitAsync (CancellationToken cancellationToken)
{
return Task.Factory.StartNew (() => Wait (cancellationToken), cancellationToken);
}
public Task<bool> WaitAsync (int millisecondsTimeout)
{
return Task.Factory.StartNew (() => Wait (millisecondsTimeout));
}
public Task<bool> WaitAsync (TimeSpan timeout)
{
return Task.Factory.StartNew (() => Wait (timeout));
}
public Task<bool> WaitAsync (int millisecondsTimeout, CancellationToken cancellationToken)
{
return Task.Factory.StartNew (() => Wait (millisecondsTimeout, cancellationToken), cancellationToken);
}
public Task<bool> WaitAsync (TimeSpan timeout, CancellationToken cancellationToken)
{
return Task.Factory.StartNew (() => Wait (timeout, cancellationToken), cancellationToken);
}
}
}

Updated .NET 5 answer:
Let's say I want a lock with a maximum of 10 requests, but most of the time I only want 1.
private readonly static SemaphoreSlim semLock = new(1, 10);
Now when I want to release some resources I can do:
semLock.Release(Math.Min(9, requiredAmount));
Note that 9 is one less than 10, because we already hold one count from the initial count of 1.
Once I want to restrict the available resources again I can call:
while (semLock.CurrentCount > 1)
{
    await semLock.WaitAsync();
}
which will await until the available count is brought back down to 1.
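Put together as a pair of helpers, this might look like the following sketch (Grow and ShrinkToOneAsync are illustrative names; note the CurrentCount check is racy if other callers release concurrently, so treat the shrink as best-effort):
private static readonly SemaphoreSlim semLock = new(1, 10);
// grow the effective limit, never releasing more than the 9 spare slots
static void Grow(int requiredAmount) =>
    semLock.Release(Math.Min(9, requiredAmount));
// shrink back down to a single slot, retiring surplus slots as they free up
static async Task ShrinkToOneAsync()
{
    while (semLock.CurrentCount > 1)
    {
        await semLock.WaitAsync();
    }
}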

Related

Executing code on a schedule, and skipping a task

Definition
The code must run at certain times, for example at 12:00, 15:00, 19:00 and 20:00 every day.
The task being performed can be anything, for example copying a folder.
My implementation
Here is the code for running tasks on a schedule:
public class TaskScheduler
{
private static TaskScheduler _instance;
private List<Timer> _timers = new List<Timer>();
private TaskScheduler() { }
public static TaskScheduler Instance => _instance ?? (_instance = new TaskScheduler());
/// <summary>
/// Create new Task
/// </summary>
/// <code>
/// TaskScheduler.Instance.ScheduleTask(
/// ()=>
/// {
///
/// }, new TimeSpan(0, 20, 0));
///
/// TaskScheduler.Instance.ScheduleTask(Action, TimeSpan);
///
/// </code>
/// <param name="task">Action</param>
/// <param name="time"></param>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="ArgumentOutOfRangeException"></exception>
/// <exception cref="OverflowException"></exception>
public void ScheduleTask(Action task, TimeSpan time)
{
ScheduleTask(task, time.Hours, time.Minutes, time.Seconds);
}
/// <summary>
/// Create new Task
/// </summary>
/// <code>
/// TaskScheduler.Instance.ScheduleTask(
/// ()=>
/// {
///
/// }, 1, 20, 0);
///
/// TaskScheduler.Instance.ScheduleTask(Action, hour, minute, second);
///
/// </code>
/// <param name="task">Action</param>
/// <param name="hour">Hour</param>
/// <param name="min">Minute</param>
/// <param name="second">Second</param>
/// <param name="intervalInHour">Interval in hours</param>
/// <exception cref="ArgumentNullException"></exception>
/// <exception cref="ArgumentOutOfRangeException"></exception>
/// <exception cref="OverflowException"></exception>
public void ScheduleTask(Action task, int hour = 0, int min = 0, int second = 1, double intervalInHour = 24d)
{
var now = DateTime.Now;
var firstRun = new DateTime(now.Year, now.Month, now.Day, hour, min, second);
if (now > firstRun) firstRun = firstRun.AddDays(1);
var timeToGo = firstRun - now;
if (timeToGo <= TimeSpan.Zero) timeToGo = TimeSpan.Zero;
var timer = new Timer(x =>
{
task.Invoke();
}, null, timeToGo, TimeSpan.FromHours(intervalInHour));
_timers.Add(timer);
}
}
It works great; tasks run and do their work.
There is one nuance: there is code that copies one directory into another.
public static class FolderCopper
{
public static void CopyAll(DirectoryInfo source, DirectoryInfo target, CancellationToken token)
{
if (!source.Exists) return;
if (!target.Exists) target.Create();
var po = new ParallelOptions
{
CancellationToken = token
};
Parallel.ForEach(source.GetDirectories(), po, (sourceChildDirectory) =>
{
if (po.CancellationToken.IsCancellationRequested)
{
po.CancellationToken.ThrowIfCancellationRequested();
}
CopyAll(sourceChildDirectory, new DirectoryInfo(Path.Combine(target.FullName, sourceChildDirectory.Name)), token);
});
Parallel.ForEach(source.GetFiles(), po, sourceFile =>
{
if (po.CancellationToken.IsCancellationRequested)
{
po.CancellationToken.ThrowIfCancellationRequested();
}
var file = new FileInfo(Path.Combine(target.FullName, sourceFile.Name));
switch (file.Exists)
{
case false:
sourceFile.CopyTo(file.FullName);
break;
case true when file.LastWriteTimeUtc < sourceFile.LastWriteTimeUtc:
sourceFile.CopyTo(file.FullName, true);
break;
}
});
}
}
It must be run, for example, once every 10 or 20 minutes.
I call it this way:
Utils.TaskScheduler.Instance.ScheduleTask(() => {}, 0, 20, 0, 0.33); // Task, hours, minutes, seconds, interval in hours (an interval of 20 minutes is 0.33 hours).
Or:
Utils.TaskScheduler.Instance.ScheduleTask(() => { /* copy */ }, 1, 20, 30, 24); // Task, hours, minutes, seconds, interval in hours.
The same task is always the one that gets started.
Consider running the code at 12:00 and 13:00 to copy the directory. The directory weighs 1670 GB, so the copy does not finish in time and a second instance of the task starts.
Question
How can I rewrite the code so that it does not start another instance while the previous one is still running, and simply skips that run?
I solved it with a locking mechanism.
Here is the class that runs the task itself:
public class TimedTask
{
private Timer _timer;
private readonly Action _actionToInvoke;
private TimeSpan _intervalTime;
private TimeSpan _startTime;
private const int WAITING_TIME_IN_MINUTES = 1;
private object _locker = new object();
public TimedTask(Action toInvoke, TimeSpan startTime, TimeSpan interval)
{
this._actionToInvoke = toInvoke;
this._startTime = startTime;
this._intervalTime = interval;
}
public void Start()
{
this.Stop();
TimeSpan timeToGo = CalculateTheFirstStartTime();
_timer = new Timer(Callback, null, timeToGo, _intervalTime);
}
public void Stop()
{
var local = _timer;
_timer = null;
local?.Dispose();
}
private TimeSpan CalculateTheFirstStartTime()
{
var now = DateTime.Now;
var firstRun = new DateTime(now.Year, now.Month, now.Day, _startTime.Hours, _startTime.Minutes, _startTime.Seconds);
if (now > firstRun) firstRun = firstRun.AddDays(1);
var timeToGo = firstRun - now;
if (timeToGo <= TimeSpan.Zero) timeToGo = TimeSpan.Zero;
return timeToGo;
}
private void Callback(object state)
{
var timeout = TimeSpan.FromMinutes(WAITING_TIME_IN_MINUTES);
Console.WriteLine($"{DateTime.Now.ToLongTimeString()} Попытка запуска");
if (Monitor.TryEnter(_locker, timeout))
{
try
{
Console.WriteLine($"{DateTime.Now.ToLongTimeString()} Попытка успешна");
Console.WriteLine($"{DateTime.Now.ToLongTimeString()} Запускаю задачу");
this._actionToInvoke();
Console.WriteLine($"{DateTime.Now.ToLongTimeString()} Задача выполнена");
}
finally
{
Monitor.Exit(_locker);
}
}
else
{
Console.WriteLine($"{DateTime.Now.ToLongTimeString()} Запуск не осуществлён");
}
}
}
And a class that is a collection of tasks:
public class Scheduler
{
private List<TimedTask> _tasks = new List<TimedTask>();
public void ScheduleJob(Action action, TimeSpan firstRun)
{
ScheduleJob(action, firstRun, new TimeSpan(24, 0, 0));
}
public void ScheduleJob(Action action, TimeSpan firstRun, TimeSpan interval)
{
var task = new TimedTask(action, firstRun, interval);
_tasks.Add(task);
task.Start();
}
public void Stop()
{
_tasks.ForEach(t => t.Stop());
_tasks.Clear();
}
}
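Usage of the two classes together might look like this (a sketch; source and target are assumed to be DirectoryInfo instances defined elsewhere):
var scheduler = new Scheduler();
// first run at 12:00, then every 20 minutes; overlapping runs are
// skipped by the Monitor.TryEnter inside TimedTask.Callback
scheduler.ScheduleJob(
    () => FolderCopper.CopyAll(source, target, CancellationToken.None),
    new TimeSpan(12, 0, 0),
    TimeSpan.FromMinutes(20));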
You can use a variable (such as a bool flag) to save the state. Ensure it can only be accessed under a lock, and query it: if the state says a run is still in progress, return; otherwise do your work.
class Foo
{
    private bool Running = false;
    private static object lockObject = new object();

    public void Run()
    {
        lock (lockObject)
        {
            if (Running) return;
            Running = true;
        }
        try
        {
            // do your stuff
        }
        finally
        {
            lock (lockObject)
            {
                Running = false;
            }
        }
    }
}
The above is simplified code to demonstrate the idea.
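A slightly leaner variation on the same idea, using Interlocked instead of two lock blocks (my sketch, not part of the answer above):
using System.Threading;
class Foo
{
    private int running; // 0 = idle, 1 = running
    public void Run()
    {
        // atomically flip 0 -> 1; if it was already 1, another run is active, so skip
        if (Interlocked.CompareExchange(ref running, 1, 0) == 1) return;
        try
        {
            // do your stuff
        }
        finally
        {
            Interlocked.Exchange(ref running, 0);
        }
    }
}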

Make a C# implementation of a LinkedRingBuffer Thread Safe

I have three questions:
What do you generally think about my approach to solve the given problem?
What do you think I could further improve performance wise?
The most important one: How do I make my implementation really thread safe?
At first the simplified scenario I'm in:
I am communicating via a messaging system with different devices. I am receiving and sending many thousands of messages in a rather short time period, inside a multithreading environment where a lot of different tasks send and expect messages. For message reception, an event-driven approach got us into a lot of trouble when it came to making it thread safe.
I have a few Receiver tasks which get messages from outside and have to deliver these messages to a lot of consumer tasks.
So I came up with a different approach:
Why not have a history of a few thousand messages, where every new message is enqueued and the consumer tasks can search backwards from the newest item to the last item they processed in order to get all newly arrived messages? Of course this has to be fast and thread safe.
I came up with the idea of a linked ring buffer and implemented the following:
public class LinkedRingBuffer<T>
{
private LinkedRingBufferNode<T> firstNode;
private LinkedRingBufferNode<T> lastNode;
public LinkedRingBuffer(int capacity)
{
Capacity = capacity;
Count = 0;
}
/// <summary>
/// Maximum count of items inside the buffer
/// </summary>
public int Capacity { get; }
/// <summary>
/// Actual count of items inside the buffer
/// </summary>
public int Count { get; private set; }
/// <summary>
/// Get value of the oldest buffer entry
/// </summary>
/// <returns></returns>
public T GetFirst()
{
return firstNode.Item;
}
/// <summary>
/// Get value of the newest buffer entry
/// </summary>
/// <returns></returns>
public T GetLast()
{
return lastNode.Item;
}
/// <summary>
/// Add item at the end of the buffer.
/// If capacity is reached the link to the oldest item is deleted.
/// </summary>
public void Add(T item)
{
/* create node and set to last one */
var node = new LinkedRingBufferNode<T>(lastNode, item);
lastNode = node;
/* if it is the first node, the created is also the first */
if (firstNode == null)
firstNode = node;
/* check for capacity reach */
Count++;
if(Count > Capacity)
{/* deleted all links to the current first so that its eventually gc collected */
Count = Capacity;
firstNode = firstNode.NextNode;
firstNode.PreviousNode = null;
}
}
/// <summary>
/// Iterate through the buffer from the newest to the oldest item
/// </summary>
public IEnumerable<T> LastToFirst()
{
var current = lastNode;
while(current != null)
{
yield return current.Item;
current = current.PreviousNode;
}
}
/// <summary>
/// Iterate through the buffer from the oldest to the newest item
/// </summary>
public IEnumerable<T> FirstToLast()
{
var current = firstNode;
while (current != null)
{
yield return current.Item;
current = current.NextNode;
}
}
/// <summary>
/// Iterate through the buffer from the newest to the given item.
/// If the item doesn't exist it iterates until it reaches the oldest
/// </summary>
public IEnumerable<T> LastToReference(T item)
{
var current = lastNode;
while (current != null)
{
yield return current.Item;
if (current.Item.Equals(item))
break;
current = current.PreviousNode;
}
}
/// <summary>
/// Iterate through the buffer from the oldest to the given item.
/// If the item doesn't exist it iterates until it reaches the newest
/// </summary>
public IEnumerable<T> FirstToReference(T item)
{
var current = firstNode;
while (current != null)
{
yield return current.Item;
if (current.Item.Equals(item))
break;
current = current.NextNode;
}
}
/// <summary>
/// Represents a linked node inside the buffer and holds the data
/// </summary>
private class LinkedRingBufferNode<A>
{
public LinkedRingBufferNode(LinkedRingBufferNode<A> previousNode, A item)
{
Item = item;
NextNode = null;
PreviousNode = previousNode;
if(previousNode != null)
previousNode.NextNode = this;
}
internal A Item { get; }
internal LinkedRingBufferNode<A> PreviousNode { get; set; }
internal LinkedRingBufferNode<A> NextNode { get; private set; }
}
}
But unfortunately I'm kind of new to the multithreading environment, so how would I make this buffer thread safe for multiple reads and writes?
Thanks!
I think the simplest way would be to have a synchronization object which you would lock on, whenever performing thread-critical code. The code within a lock block is called the critical section, and can only be accessed by one thread at a time. Any other thread wishing to access it will wait, until the lock is released.
Definition and initialization:
private object Synchro;
public LinkedRingBuffer(int capacity)
{
Synchro = new object();
// Other constructor code
}
Usage:
public T GetFirst()
{
lock(Synchro)
{
return firstNode.Item;
}
}
When writing thread-safe code, locking some parts may seem obvious. But if you're not sure whether or not to lock a statement or block of code, for both read and write safety you need to consider:
Whether or not this code can influence the behavior or result of any other locked critical sections.
Whether or not any other locked critical sections can influence this code's behavior or result.
You will also need to rewrite some of your auto-implemented properties to have a backing field. It should be pretty straightforward, however...
Your usage of yield return, while being pretty smart and efficient in a single-thread context, will cause trouble in a multi-threaded context. This is because yield return doesn't release a lock statement (and it shouldn't). You will have to perform materialization in a wrapper, wherever you use yield return.
Your thread-safe code looks like this:
public class LinkedRingBuffer<T>
{
private LinkedRingBufferNode<T> firstNode;
private LinkedRingBufferNode<T> lastNode;
private object Synchro;
public LinkedRingBuffer(int capacity)
{
Synchro = new object();
Capacity = capacity;
Count = 0;
}
/// <summary>
/// Maximum count of items inside the buffer
/// </summary>
public int Capacity { get; }
/// <summary>
/// Actual count of items inside the buffer
/// </summary>
public int Count
{
get
{
lock (Synchro)
{
return _count;
}
}
private set
{
_count = value;
}
}
private int _count;
/// <summary>
/// Get value of the oldest buffer entry
/// </summary>
/// <returns></returns>
public T GetFirst()
{
lock (Synchro)
{
return firstNode.Item;
}
}
/// <summary>
/// Get value of the newest buffer entry
/// </summary>
/// <returns></returns>
public T GetLast()
{
lock (Synchro)
{
return lastNode.Item;
}
}
/// <summary>
/// Add item at the end of the buffer.
/// If capacity is reached the link to the oldest item is deleted.
/// </summary>
public void Add(T item)
{
lock (Synchro)
{
/* create node and set to last one */
var node = new LinkedRingBufferNode<T>(lastNode, item);
lastNode = node;
/* if it is the first node, the created is also the first */
if (firstNode == null)
firstNode = node;
/* check for capacity reach */
Count++;
if (Count > Capacity)
{
/* deleted all links to the current first so that its eventually gc collected */
Count = Capacity;
firstNode = firstNode.NextNode;
firstNode.PreviousNode = null;
}
}
}
/// <summary>
/// Iterate through the buffer from the newest to the oldest item
/// </summary>
public IEnumerable<T> LastToFirst()
{
lock (Synchro)
{
var materialized = LastToFirstInner().ToList();
return materialized;
}
}
private IEnumerable<T> LastToFirstInner()
{
var current = lastNode;
while (current != null)
{
yield return current.Item;
current = current.PreviousNode;
}
}
/// <summary>
/// Iterate through the buffer from the oldest to the newest item
/// </summary>
public IEnumerable<T> FirstToLast()
{
lock (Synchro)
{
var materialized = FirstToLastInner().ToList();
return materialized;
}
}
private IEnumerable<T> FirstToLastInner()
{
var current = firstNode;
while (current != null)
{
yield return current.Item;
current = current.NextNode;
}
}
/// <summary>
/// Iterate through the buffer from the newest to the given item.
/// If the item doesn't exist it iterates until it reaches the oldest
/// </summary>
public IEnumerable<T> LastToReference(T item)
{
lock (Synchro)
{
var materialized = LastToReferenceInner(item).ToList();
return materialized;
}
}
private IEnumerable<T> LastToReferenceInner(T item)
{
var current = lastNode;
while (current != null)
{
yield return current.Item;
if (current.Item.Equals(item))
break;
current = current.PreviousNode;
}
}
/// <summary>
/// Iterate through the buffer from the oldest to the given item.
/// If the item doesn't exist it iterates until it reaches the newest
/// </summary>
public IEnumerable<T> FirstToReference(T item)
{
lock (Synchro)
{
var materialized = FirstToReferenceInner(item).ToList();
return materialized;
}
}
private IEnumerable<T> FirstToReferenceInner(T item)
{
var current = firstNode;
while (current != null)
{
yield return current.Item;
if (current.Item.Equals(item))
break;
current = current.NextNode;
}
}
/// <summary>
/// Represents a linked node inside the buffer and holds the data
/// </summary>
private class LinkedRingBufferNode<A>
{
public LinkedRingBufferNode(LinkedRingBufferNode<A> previousNode, A item)
{
Item = item;
NextNode = null;
PreviousNode = previousNode;
if (previousNode != null)
previousNode.NextNode = this;
}
internal A Item { get; }
internal LinkedRingBufferNode<A> PreviousNode { get; set; }
internal LinkedRingBufferNode<A> NextNode { get; private set; }
}
}
There are some optimizations that could be done; for example, you don't need to create the LinkedRingBufferNode objects inside the critical section, though you would then have to copy the lastNode value to a local variable inside a critical section before creating the object.
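Usage from multiple threads might then look like this (a sketch):
var buffer = new LinkedRingBuffer<string>(1000);
// several producer tasks can add concurrently
Task.Run(() => buffer.Add("message from receiver 1"));
Task.Run(() => buffer.Add("message from receiver 2"));
// a consumer gets a materialized snapshot, safe to enumerate outside the lock
foreach (var message in buffer.FirstToLast())
{
    Console.WriteLine(message);
}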

C# - finite list or limited list?

I am wondering about a certain functionality in C#...
I would like to have a List<Object> MyList, to which I could .Add(new Object()) only a finite number of times. Let's say I added 5 items; if I added a sixth, the oldest item would be destroyed and the sixth element would be put on top of the list.
Is there any built-in mechanism in C# that does that?
There is no built-in collection in the Framework that does this, as Servy said. However, you can make a CircularBuffer like this:
namespace DataStructures
{
class Program
{
static void Main(string[] args)
{
var buffer = new CircularBuffer<int>(capacity: 3);
while (true)
{
int value;
var input = Console.ReadLine();
if (int.TryParse(input, out value))
{
buffer.Write(value);
continue;
}
break;
}
Console.WriteLine("Buffer: ");
while (!buffer.IsEmpty)
{
Console.WriteLine(buffer.Read());
}
Console.ReadLine();
}
}
}
namespace DataStructures
{
public class CircularBuffer<T>
{
private T[] _buffer;
private int _start;
private int _end;
public CircularBuffer()
: this(capacity: 3)
{
}
public CircularBuffer(int capacity)
{
_buffer = new T[capacity + 1];
_start = 0;
_end = 0;
}
public void Write(T value)
{
_buffer[_end] = value;
_end = (_end + 1) % _buffer.Length;
if (_end == _start)
{
_start = (_start + 1) % _buffer.Length;
}
}
public T Read()
{
T result = _buffer[_start];
_start = (_start + 1) % _buffer.Length;
return result;
}
public int Capacity
{
get { return _buffer.Length; }
}
public bool IsEmpty
{
get { return _end == _start; }
}
public bool IsFull
{
get { return (_end + 1) % _buffer.Length == _start; }
}
}
}
The above code is from Scott Allen's C# Generics course on Pluralsight.
None of the built-in collections will do this, but you can easily make your own class that has an internal list and applies this behavior when adding an item. It's not particularly difficult, but writing out all of the methods a standard list would use and implementing all of the interfaces List does could be a bit tedious.
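A minimal sketch of that idea (my illustration, not a framework type), keeping just the behavior the question asks for:
using System.Collections.Generic;
public class BoundedList<T>
{
    private readonly List<T> items = new List<T>();
    private readonly int capacity;
    public BoundedList(int capacity)
    {
        this.capacity = capacity;
    }
    public void Add(T item)
    {
        if (items.Count >= capacity)
        {
            items.RemoveAt(0); // discard the oldest entry to make room
        }
        items.Add(item);
    }
    public IReadOnlyList<T> Items
    {
        get { return items; }
    }
}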
In my core library, I have something called a LimitedQueue<T>. This is probably similar to what you're after (you could easily modify it to be a List<T> instead). (Source on GitHub)
using System.Collections.Generic;
namespace Molten.Core
{
/// <summary>
/// Represents a limited set of first-in, first-out objects.
/// </summary>
/// <typeparam name="T">The type of each object to store.</typeparam>
public class LimitedQueue<T> : Queue<T>
{
/// <summary>
/// Stores the local limit instance.
/// </summary>
private int limit = -1;
/// <summary>
/// Sets the limit of this LimitedQueue. If the new limit is greater than the count of items in the queue, the queue will be trimmed.
/// </summary>
public int Limit
{
get
{
return limit;
}
set
{
limit = value;
while (Count > limit)
{
Dequeue();
}
}
}
/// <summary>
/// Initializes a new instance of the LimitedQueue class.
/// </summary>
/// <param name="limit">The maximum number of items to store.</param>
public LimitedQueue(int limit)
: base(limit)
{
this.Limit = limit;
}
/// <summary>
/// Adds a new item to the queue. After adding the item, if the count of items is greater than the limit, the first item in the queue is removed.
/// </summary>
/// <param name="item">The item to add.</param>
public new void Enqueue(T item)
{
while (Count >= limit)
{
Dequeue();
}
base.Enqueue(item);
}
}
}
You would use it like this:
LimitedQueue<int> numbers = new LimitedQueue<int>(5);
numbers.Enqueue(1);
numbers.Enqueue(2);
numbers.Enqueue(3);
numbers.Enqueue(4);
numbers.Enqueue(5);
numbers.Enqueue(6); // This will remove "1" from the list
// Here, "numbers" contains 2, 3, 4, 5, 6 (but not 1).
You can use a Queue<T> with a fixed size and just call .ToList() afterwards. See: Fixed size queue which automatically dequeues old values upon new enqueues

What is wrong with my custom thread pool?

I've created a custom thread pool utility, but there seems to be a problem that I cannot find.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;
namespace iWallpaper.S3Uploader
{
public class QueueManager<T>
{
private readonly Queue queue = Queue.Synchronized(new Queue());
private readonly AutoResetEvent res = new AutoResetEvent(true);
private readonly AutoResetEvent res_thr = new AutoResetEvent(true);
private readonly Semaphore sem = new Semaphore(1, 4);
private readonly Thread thread;
private Action<T> DoWork;
private int Num_Of_Threads;
private QueueManager()
{
Num_Of_Threads = 0;
maxThread = 5;
thread = new Thread(Worker) {Name = "S3Uploader EventRegisterer"};
thread.Start();
// log.Info(String.Format("{0} [QUEUE] FileUploadQueueManager created", DateTime.Now.ToLongTimeString()));
}
public int maxThread { get; set; }
public static QueueManager<T> Instance
{
get { return Nested.instance; }
}
/// <summary>
/// Executes a multithreaded operation on the given items
/// </summary>
/// <param name="list">List of items to process</param>
/// <param name="action">Action applied to each item</param>
/// <param name="MaxThreads">Maximum number of threads</param>
public void Execute(IEnumerable<T> list, Action<T> action, int MaxThreads)
{
maxThread = MaxThreads;
DoWork = action;
foreach (T item in list)
{
Add(item);
}
}
public void ExecuteNoThread(IEnumerable<T> list, Action<T> action)
{
ExecuteNoThread(list, action, 0);
}
public void ExecuteNoThread(IEnumerable<T> list, Action<T> action, int MaxThreads)
{
foreach (T wallpaper in list)
{
action(wallpaper);
}
}
/// <summary>
/// Default 10 threads
/// </summary>
/// <param name="list"></param>
/// <param name="action"></param>
public void Execute(IEnumerable<T> list, Action<T> action)
{
Execute(list, action, 10);
}
private void Add(T item)
{
lock (queue)
{
queue.Enqueue(item);
}
res.Set();
}
private void Worker()
{
while (true)
{
if (queue.Count == 0)
{
res.WaitOne();
}
if (Num_Of_Threads < maxThread)
{
var t = new Thread(Proceed);
t.Start();
}
else
{
res_thr.WaitOne();
}
}
}
private void Proceed()
{
Interlocked.Increment(ref Num_Of_Threads);
if (queue.Count > 0)
{
var item = (T) queue.Dequeue();
sem.WaitOne();
ProceedItem(item);
sem.Release();
}
res_thr.Set();
Interlocked.Decrement(ref Num_Of_Threads);
}
private void ProceedItem(T activity)
{
if (DoWork != null)
DoWork(activity);
lock (Instance)
{
Console.Title = string.Format("ThrId:{0}/{4}, {1}, Activity({2} left):{3}",
thread.ManagedThreadId, DateTime.Now, queue.Count, activity,
Num_Of_Threads);
}
}
#region Nested type: Nested
protected class Nested
{
// Explicit static constructor to tell C# compiler
// not to mark type as beforefieldinit
internal static readonly QueueManager<T> instance = new QueueManager<T>();
}
#endregion
}
}
Problem is here:
Console.Title = string.Format("ThrId:{0}/{4}, {1}, Activity({2} left):{3}",
thread.ManagedThreadId, DateTime.Now, queue.Count, activity,
Num_Of_Threads);
There is always ONE thread id in title. And program seems to be working in one thread.
Sample usage:
var i_list = new int[] { 1, 2, 4, 5, 6, 7, 8, 6 };
QueueManager<int>.Instance.Execute(i_list,
    i =>
    {
        Console.WriteLine("Some action under element number {0}", i);
    }, 5);
P.S.: it's pretty messy, but I'm still working on it.
I looked through your code and here are a couple of issues I saw.
You lock the queue object even though it is a synchronized queue. This is unnecessary.
You inconsistently lock the queue object. It should either be locked for every access, or not locked at all, relying on the Synchronized behavior.
The Proceed method is not thread safe. These two lines are the issue:
if (queue.Count > 0) {
var item = (T)queue.Dequeue();
...
}
Using a synchronized queue only guarantees that individual accesses are safe. So neither the .Count property nor the .Dequeue method will corrupt the internal structure of the queue. However, imagine the scenario where two threads run these lines of code at the same time with a queue of count 1:
Thread1: if (...) -> true
Thread2: if (...) -> true
Thread1: dequeue -> succeeds
Thread2: dequeue -> fails because the queue is empty
There is a race condition between Worker and Proceed that can lead to deadlock. The following two lines of code should be switched.
Code:
res_thr.Set()
Interlocked.Decrement(ref Num_Of_Threads);
The first line will unblock the Worker method. If it runs quickly enough it will go back through the loop, notice that Num_Of_Threads < maxThread, and go right back into res_thr.WaitOne(). If no other threads are currently running, this will lead to a deadlock in your code. This is very easy to hit with a low maximum thread count (say 1). Inverting these two lines of code should fix the issue.
The maxThread count property does not seem to be useful beyond 4. The sem object is initialized to accept only 4 maximum concurrent entries. All code that actually executes an item must go through this semaphore. So you've effectively limited the maximum number of concurrent items to 4 regardless of how high maxThread is set.
Writing robust threaded code is not trivial. There are numerous thread-pools around that you might look at for reference, but also note that Parallel Extensions (available as CTP, or later in .NET 4.0) includes a lot of additional threading constructs out-of-the-box (in the TPL/CCR). For example, Parallel.For / Parallel.ForEach, which deal with work-stealing, and handling the available cores effectively.
For an example of a pre-rolled thread-pool, see Jon Skeet's CustomThreadPool here.
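For comparison, the question's sample usage needs no custom pool at all with the TPL; a sketch using Parallel.ForEach with a capped degree of parallelism:
using System;
using System.Threading.Tasks;
var i_list = new int[] { 1, 2, 4, 5, 6, 7, 8, 6 };
Parallel.ForEach(
    i_list,
    new ParallelOptions { MaxDegreeOfParallelism = 5 },
    i => Console.WriteLine("Some action under element number {0}", i));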
I think you can simplify things considerably.
Here is a modified form (I didn't test the modifications) of the thread pool I use:
The only sync. primitive you need is a Monitor, locked on the thread pool. You don't need a semaphore, or the reset events.
internal class ThreadPool
{
private readonly Thread[] m_threads;
private readonly Queue<Action> m_queue;
private bool m_shutdown;
private object m_lockObj;
public event Action OnShuttingDown;
public ThreadPool(int numberOfThreads)
{
Util.Assume(numberOfThreads > 0, "Invalid thread count!");
m_queue = new Queue<Action>();
m_threads = new Thread[numberOfThreads];
m_lockObj = new object();
lock (m_lockObj)
{
for (int i = 0; i < numberOfThreads; ++i)
{
m_threads[i] = new Thread(ThreadLoop);
m_threads[i].Start();
}
}
}
public void Shutdown()
{
lock (m_lockObj)
{
m_shutdown = true;
Monitor.PulseAll(m_lockObj);
if (OnShuttingDown != null)
{
OnShuttingDown();
}
}
foreach (var thread in m_threads)
{
thread.Join();
}
}
public void Enqueue(Action a)
{
lock (m_lockObj)
{
m_queue.Enqueue(a);
Monitor.Pulse(m_lockObj);
}
}
private void ThreadLoop()
{
Monitor.Enter(m_lockObj);
while (!m_shutdown)
{
if (m_queue.Count == 0)
{
Monitor.Wait(m_lockObj);
}
else
{
var a = m_queue.Dequeue();
Monitor.Pulse(m_lockObj);
Monitor.Exit(m_lockObj);
try
{
a();
}
catch (Exception ex)
{
Console.WriteLine("An unhandled exception occured!\n:{0}", ex.Message, null);
}
Monitor.Enter(m_lockObj);
}
}
Monitor.Exit(m_lockObj);
}
}
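Usage of this pool might look like the following sketch (note that items still queued when Shutdown is called may not run, since ThreadLoop checks the shutdown flag before draining the queue):
var pool = new ThreadPool(4);
for (int i = 0; i < 100; i++)
{
    int n = i; // capture a copy so each closure sees its own value
    pool.Enqueue(() => Console.WriteLine("Processing item {0}", n));
}
pool.Shutdown(); // signals the worker threads and joins them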
You should probably use the built-in thread pool. When running your code I noticed that you're spinning up a bunch of threads, but since the queue count is < 1 they just exit; this continues until the queue is actually populated, and then your next thread processes everything. This is a very expensive process. You should only spin up threads when you have something to do.

Silverlight ReaderWriterLock Implementation Good/Bad?

I have an adapted implementation of a simple (no upgrades or timeouts) ReaderWriterLock for Silverlight, and I was wondering whether anyone with the right expertise can validate whether it is good or bad by design. To me it looks pretty alright and it works as advertised, but I have limited experience with multi-threaded code.
public sealed class ReaderWriterLock
{
private readonly object syncRoot = new object(); // Internal lock.
private int i = 0; // 0 or greater means readers can pass; -1 is active writer.
private int readWaiters = 0; // Readers waiting for writer to exit.
private int writeWaiters = 0; // Writers waiting for writer lock.
private ConditionVariable conditionVar; // Condition variable.
public ReaderWriterLock()
{
conditionVar = new ConditionVariable(syncRoot);
}
/// <summary>
/// Gets a value indicating if a reader lock is held.
/// </summary>
public bool IsReaderLockHeld
{
get
{
lock ( syncRoot )
{
if ( i > 0 )
return true;
return false;
}
}
}
/// <summary>
/// Gets a value indicating if the writer lock is held.
/// </summary>
public bool IsWriterLockHeld
{
get
{
lock ( syncRoot )
{
if ( i < 0 )
return true;
return false;
}
}
}
/// <summary>
/// Acquires the writer lock.
/// </summary>
public void AcquireWriterLock()
{
lock ( syncRoot )
{
writeWaiters++;
while ( i != 0 )
conditionVar.Wait(); // Wait until existing writer frees the lock.
writeWaiters--;
i = -1; // Thread has writer lock.
}
}
/// <summary>
/// Acquires a reader lock.
/// </summary>
public void AcquireReaderLock()
{
lock ( syncRoot )
{
readWaiters++;
// Defer to a writer (one time only) if one is waiting to prevent writer starvation.
if ( writeWaiters > 0 )
{
conditionVar.Pulse();
Monitor.Wait(syncRoot);
}
while ( i < 0 )
Monitor.Wait(syncRoot);
readWaiters--;
i++;
}
}
/// <summary>
/// Releases the writer lock.
/// </summary>
public void ReleaseWriterLock()
{
bool doPulse = false;
lock ( syncRoot )
{
i = 0;
// Decide if we pulse a writer or readers.
if ( readWaiters > 0 )
{
Monitor.PulseAll(syncRoot); // If multiple readers waiting, pulse them all.
}
else
{
doPulse = true;
}
}
if ( doPulse )
conditionVar.Pulse(); // Pulse one writer if one waiting.
}
/// <summary>
/// Releases a reader lock.
/// </summary>
public void ReleaseReaderLock()
{
bool doPulse = false;
lock ( syncRoot )
{
i--;
if ( i == 0 )
doPulse = true;
}
if ( doPulse )
conditionVar.Pulse(); // Pulse one writer if one waiting.
}
/// <summary>
/// Condition Variable (CV) class.
/// </summary>
public class ConditionVariable
{
private readonly object syncLock = new object(); // Internal lock.
private readonly object m; // The lock associated with this CV.
public ConditionVariable(object m)
{
lock (syncLock)
{
this.m = m;
}
}
public void Wait()
{
bool enter = false;
try
{
lock (syncLock)
{
Monitor.Exit(m);
enter = true;
Monitor.Wait(syncLock);
}
}
finally
{
if (enter)
Monitor.Enter(m);
}
}
public void Pulse()
{
lock (syncLock)
{
Monitor.Pulse(syncLock);
}
}
public void PulseAll()
{
lock (syncLock)
{
Monitor.PulseAll(syncLock);
}
}
}
}
If it is good, it might be helpful to others too as Silverlight currently lacks a reader-writer type of lock. Thanks.
I go in depth on explaining Vance Morrison's ReaderWriterLock (which became ReaderWriterLockSlim in .NET 3.5) on my blog (down to the x86 level). This might be helpful in your design, especially understanding how things really work.
Both of your IsReaderLockHeld and IsWriterLockHeld properties are flawed at a conceptual level. While it is possible to determine that at a given point in time a particular lock is or is not held, there is absolutely nothing you can safely do with this information unless you continue to hold the lock (which is not the case in your code).
These properties would be more accurately named WasReadLockHeldInThePast and WasWriterLockHeldInThePast. Once you rename them to a more accurate representation of what they do, it becomes clearer that they are not very useful.
This class seems simpler to me and provides the same functionality. It may be slightly less performant since it always calls PulseAll(), but the logic is much simpler to understand, and I doubt the performance hit is that great.
public sealed class ReaderWriterLock
{
private readonly object internalLock = new object();
private int activeReaders = 0;
private bool activeWriter = false;
public void AcquireReaderLock()
{
lock (internalLock)
{
while (activeWriter)
Monitor.Wait(internalLock);
++activeReaders;
}
}
public void ReleaseReaderLock()
{
lock (internalLock)
{
// if activeReaders <= 0 do some error handling
--activeReaders;
Monitor.PulseAll(internalLock);
}
}
public void AcquireWriterLock()
{
lock (internalLock)
{
// first wait for any writers to clear
// This assumes writers have a higher priority than readers
// as it will force the readers to wait until all writers are done.
// you can change the conditionals in here to change that behavior.
while (activeWriter)
Monitor.Wait(internalLock);
// There are no more writers, set this to true to block further readers from acquiring the lock
activeWriter = true;
// Now wait till all readers have completed.
while (activeReaders > 0)
Monitor.Wait(internalLock);
// The writer now has the lock
}
}
public void ReleaseWriterLock()
{
lock (internalLock)
{
// if activeWriter != true handle the error
activeWriter = false;
Monitor.PulseAll(internalLock);
}
}
}
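Usage follows the usual acquire/release-in-finally pattern (a sketch):
var rwLock = new ReaderWriterLock();
// reader
rwLock.AcquireReaderLock();
try
{
    // read shared state
}
finally
{
    rwLock.ReleaseReaderLock();
}
// writer
rwLock.AcquireWriterLock();
try
{
    // mutate shared state
}
finally
{
    rwLock.ReleaseWriterLock();
}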
