I have an application that runs quite slowly and I'm trying to speed it up.
I am quite new to concurrent systems, so I'm a bit stuck here.
In short, I can present the system as the following classes:
Some resource that is being processed
public class Resource
{
public int Capacity { get; set; } = 1000;
}
A consumer
public class Consumer
{
private readonly int _sleep;
public Consumer(int sleep)
{
_sleep = sleep;
}
public void ConsumeResource(Resource resource)
{
var capture = resource.Capacity;
Thread.Sleep(_sleep); // some calculations and stuff
if (resource.Capacity != capture)
throw new SystemException("Something went wrong");
resource.Capacity -= 1;
}
}
And a resource manager that does the job
public class ResourceManager
{
private readonly List<Consumer> _consumers;
private readonly Resource _resource;
public ResourceManager(List<Consumer> consumers)
{
_consumers = consumers;
_resource = new Resource();
}
public void Process()
{
Parallel.For(0, _consumers.Count, i =>
{
var consumer = _consumers[i];
consumer.ConsumeResource(_resource);
});
}
}
So as you can see, Consumer relies on the Resource state. If you run this simulation with the following code
static void Main(string[] args)
{
var consumers = new List<Consumer>
{
new Consumer(1000),
new Consumer(900),
new Consumer(800),
new Consumer(700),
new Consumer(600),
};
var resourceManager = new ResourceManager(consumers);
resourceManager.Process();
}
you will see that when the capacity of the resource changes while a consumer is working, everything breaks.
I couldn't come up with a better example, and it lacks a couple of details.
First, there are many instances of the Resource class, so locking access
to each one would not discard all efforts to make the code concurrent.
Second, in the real application this problem is very rare, so I can
sacrifice a bit of performance there.
I'm guessing this issue can be fixed with properly placed locks, but I fail to place them correctly.
As I understand the concept of locks, a lock prevents the locked code from being executed by different threads simultaneously. Placing a lock in Consumer::ConsumeResource would not help, just as placing one inside the Resource::Capacity setter would not. I need to somehow lock modification of a resource while a consumer is doing its job with that resource.
I hope I explained my problem clearly. This is all quite new to me, so I'll try to make things more concrete if needed.
After thinking long and hard, I conjured a somewhat sloppy solution.
I decided to lock the Resource property to a consumer using the consumer's id, and manually wait for the next consumer's turn:
public class Resource
{
private int Capacity { get; set; } = 1000;
private Guid? _currentConsumer;
public int GetCapacity(Guid? id)
{
while (id.HasValue && _currentConsumer.HasValue && id != _currentConsumer)
{
Thread.Sleep(5);
}
_currentConsumer = id;
return Capacity;
}
public void SetCapacity(int cap, Guid id)
{
if (_currentConsumer.HasValue && id != _currentConsumer)
return;
Capacity = cap;
_currentConsumer = null;
}
}
public class Consumer
{
private readonly int _sleep;
private Guid _id = Guid.NewGuid();
public Consumer(int sleep)
{
_sleep = sleep;
}
public void ConsumeResource(Resource resource)
{
var capture = resource.GetCapacity(_id);
Thread.Sleep(_sleep); // some calculations and stuff
if (resource.GetCapacity(_id) != capture)
throw new SystemException("Something went wrong");
resource.SetCapacity(resource.GetCapacity(_id) - 1, _id);
}
}
This way it works as expected, but I get a feeling that it can also be implemented with locks.
After some research about locks and related topics, I wrote this little helper class:
public class ConcurrentAccessProvider<TObject>
{
private readonly Func<TObject> _getter;
private readonly Action<TObject> _setter;
private readonly object _lock = new object();
public ConcurrentAccessProvider(Func<TObject> getter, Action<TObject> setter)
{
_getter = getter;
_setter = setter;
}
public TObject Get()
{
lock (_lock)
{
return _getter();
}
}
public void Set(TObject value)
{
lock (_lock)
{
_setter(value);
}
}
public void Access(Action accessAction)
{
lock (_lock)
{
accessAction();
}
}
}
With that, I rewrote Resource and Consumer in order to make it thread-safe:
public class Resource
{
public ConcurrentAccessProvider<int> CapacityAccessProvider { get; }
private int _capacity;
public Resource()
{
CapacityAccessProvider = new ConcurrentAccessProvider<int>(() => _capacity, val => _capacity = val);
}
public int Capacity
{
get => CapacityAccessProvider.Get();
set => CapacityAccessProvider.Set(value);
}
}
public class Consumer
{
private readonly int _sleep;
public Consumer(int sleep)
{
_sleep = sleep;
}
public void ConsumeResource(Resource resource)
{
resource.CapacityAccessProvider.Access(() =>
{
var capture = resource.Capacity;
Thread.Sleep(_sleep); // some calculations and stuff
if (resource.Capacity != capture)
throw new SystemException("Something went wrong");
resource.Capacity -= 1;
Console.WriteLine(resource.Capacity);
});
}
}
In the provided example these manipulations effectively kill any possible gains from concurrency, but that is only because there is a single Resource instance. In the real-world application, where there are thousands of resources and only a few conflicting cases, this works just fine.
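To make that last point concrete, here is a rough sketch (hypothetical names, not production code) of the many-resources case: each resource carries its own lock through its ConcurrentAccessProvider, so consumers working on different resources never block each other, and only the rare conflicting pair is serialized.
public class ResourceManager
{
    private readonly List<Consumer> _consumers;
    private readonly List<Resource> _resources;
    public ResourceManager(List<Consumer> consumers, List<Resource> resources)
    {
        _consumers = consumers;
        _resources = resources;
    }
    public void Process()
    {
        // hypothetical pairing: consumer i works on resource i % count,
        // so contention only occurs when two consumers share a resource
        Parallel.For(0, _consumers.Count, i =>
        {
            var resource = _resources[i % _resources.Count];
            _consumers[i].ConsumeResource(resource);
        });
    }
}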
Related
I have this method:
public static async Task OpenPageAsync(string route)
{
await Shell.Current.GoToAsync(route, true);
}
If the method is called more than once in 5 seconds I would like the second call to be ignored. Has anyone come across a way to deal with this need?
Note that, if it helps, I do have access to create properties at the App level like this, etc.
public partial class App : Application
{
public static int LastTapTime;
public static int TapTime;
In our project, we have created a 'MaxFrequencyUpdater' for exactly that purpose.
The only difference: if a new call comes in within 5 seconds, it is delayed and executed after the 5-second interval.
namespace Utils
{
public class MaxFrequencyUpdater
{
private readonly WinformsExceptionHandler _exceptionHandler;
private readonly string _name;
private readonly int _millis;
private MethodInvoker _currentMethod;
private DateTime _lastExecuted = DateTime.MinValue;
private readonly object _updaterLockObject = new object();
public MaxFrequencyUpdater(string name, int maxFrequencyInMillis, WinformsExceptionHandler exceptionHandler)
{
_name = name;
_exceptionHandler = exceptionHandler;
_millis = maxFrequencyInMillis;
}
public void Update(MethodInvoker method)
{
lock (_updaterLockObject)
{
_currentMethod = method;
}
Task.Run(HandleWork);
}
private void HandleWork()
{
lock (_updaterLockObject)
{
// No longer bother, someone else handled it already
if (_currentMethod == null) return;
var now = DateTime.Now;
var delay = (int)(_millis - now.Subtract(_lastExecuted).TotalMilliseconds);
// Post-pone if too soon
if (delay > 0)
{
Task.Delay(delay).ContinueWith(_ => HandleWork());
}
else
{
try
{
_currentMethod.Invoke();
}
catch (Exception e)
{
_exceptionHandler.HandleException(e);
}
_lastExecuted = now;
_currentMethod = null;
}
}
}
}
}
usage:
_maxFrequencyUpdater.Update(() =>
{
doSomething();
});
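If you really want the second call to be ignored rather than delayed, which is what the question asks for, a much smaller sketch will do: remember the time of the last accepted call and bail out if the next one arrives inside the window. This is only a sketch under the question's assumptions (a 5-second window; the wrapper name PageOpener is hypothetical):
public static class PageOpener
{
    private static readonly TimeSpan Window = TimeSpan.FromSeconds(5); // ignore repeats inside this window
    private static readonly object _gate = new object();
    private static DateTime _lastCallUtc = DateTime.MinValue;
    public static async Task OpenPageAsync(string route)
    {
        lock (_gate)
        {
            var now = DateTime.UtcNow;
            if (now - _lastCallUtc < Window)
                return; // second call within 5 seconds: ignored
            _lastCallUtc = now;
        }
        await Shell.Current.GoToAsync(route, true);
    }
}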
I need a thread-safe cache to store instances of a disposable class.
It will be used with .NET 4.0
The cache should be aware of whether a stored instance is being used or not.
When an instance is requested from the cache, it should look at the stored available instances and hand one out; if there is none available, it should create a new instance and store it.
If the cache has not been used for a period of time, it should dispose of the stored instances that are not being used and clear them.
This is the solution I wrote:
private class cache<T> where T : IDisposable
{
Func<T> _createFunc;
long livingTicks;
int livingMillisecs;
public cache(Func<T> createFunc, int livingTimeInSec)
{
this.livingTicks = livingTimeInSec * 10000000;
this.livingMillisecs = livingTimeInSec * 1000;
this._createFunc = createFunc;
}
Stack<T> st = new Stack<T>();
public IDisposable BeginUseBlock(out T item)
{
this.actionOccured();
if (st.Count == 0)
item = _createFunc();
else
lock (st)
if (st.Count == 0)
item = _createFunc();
else
item = st.Pop();
return new blockDisposer(this, item);
}
long _lastTicks;
bool _called;
private void actionOccured()
{
if (!_called)
lock (st)
if (!_called)
{
_called = true;
System.Threading.Timer timer = null;
timer = new System.Threading.Timer((obj) =>
{
if ((DateTime.UtcNow.Ticks - _lastTicks) > livingTicks)
{
timer.Dispose();
this.free();
}
},
null, livingMillisecs, livingMillisecs);
}
_lastTicks = DateTime.UtcNow.Ticks;
}
private void free()
{
lock (st)
{
while (st.Count > 0)
st.Pop().Dispose();
_called = false;
}
}
private class blockDisposer : IDisposable
{
T _item;
cache<T> _c;
public blockDisposer(cache<T> c, T item)
{
this._c = c;
this._item = item;
}
public void Dispose()
{
this._c.actionOccured();
lock (this._c.st)
this._c.st.Push(_item);
}
}
}
This is a sample use:
class MyClass:IDisposable
{
public MyClass()
{
//expensive work
}
public void Dispose()
{
//free
}
public void DoSomething(int i)
{
}
}
private static Lazy<cache<MyClass>> myCache = new Lazy<cache<MyClass>>(() => new cache<MyClass>(() => new MyClass(), 60), true);//free 60sec. after last call
private static void test()
{
Parallel.For(0, 100000, (i) =>
{
MyClass cl;
using (myCache.Value.BeginUseBlock(out cl))
cl.DoSomething(i);
});
}
My questions:
Is there a faster way of doing this? (I've searched for MemoryCache examples, but couldn't figure out how I could use it for my requirements. And it requires a key lookup; I figured Stack.Pop would be faster than a key search, and for my problem performance is very important.)
In order to dispose of the instances after a while (60 sec. in the example code) I had to use a Timer. I just need a delayed function call that is re-delayed on each action that happens on the cache. Is there a way to do that without using a timer?
Edit:
I've tried @mjwills's comment. The performance is better with this:
ConcurrentStack<T> st = new ConcurrentStack<T>();
public IDisposable BeginUseBlock(out T item)
{
this.actionOccured();
if (!st.TryPop(out item))
item = _createFunc();
return new blockDisposer(this, item);
}
Edit2:
In my case it's not required, but if we need to control the size of the stack and dispose of the unused objects, using a separate counter that is incremented/decremented with Interlocked.Increment/Interlocked.Decrement will be faster (@mjwills).
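A rough sketch of that counter idea (hypothetical field names and cap; only the changed members of the cache class are shown):
ConcurrentStack<T> st = new ConcurrentStack<T>();
int _pooledCount;          // how many instances are currently parked in the stack
const int MaxPooled = 32;  // hypothetical upper bound on pooled instances
public IDisposable BeginUseBlock(out T item)
{
    this.actionOccured();
    if (st.TryPop(out item))
        Interlocked.Decrement(ref _pooledCount);
    else
        item = _createFunc();
    return new blockDisposer(this, item);
}
// called from blockDisposer.Dispose instead of pushing unconditionally
private void ReturnToPool(T item)
{
    if (Interlocked.Increment(ref _pooledCount) <= MaxPooled)
    {
        st.Push(item);
    }
    else
    {
        Interlocked.Decrement(ref _pooledCount); // over the cap: don't pool it
        item.Dispose();
    }
}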
On my site, I call a third-party API. To avoid hitting its rate limit, I need to define a global variable to enqueue requests. (I'm using RateLimiter; is there a better solution?)
namespace MySite.App_Start
{
public static class Global
{
public static int MaxCount { get; set; } = 30;
public static TimeSpan Interval { get; set; } = TimeSpan.FromSeconds(1);
private static TimeLimiter rateLimiter;
public static TimeLimiter RateLimiter
{
get
{
if (rateLimiter == null)
rateLimiter = TimeLimiter.GetFromMaxCountByInterval(MaxCount, Interval);
return rateLimiter;
}
}
}
}
Then I'll use the RateLimiter property. But I've read in many places that having a global variable is not a good idea. Considering my site handles a lot of requests per second, is my code safe to use? Thanks.
Your code isn't 100% safe, since it could create multiple instances of TimeLimiter at startup, and depending on the surrounding code that could be a problem. I'm guessing it wouldn't be a big problem, but it's better to write the code properly to begin with.
This is something an IoC container handles nicely, but if you don't want to use one, you could use Lazy:
private static readonly Lazy<TimeLimiter> rateLimiter = new Lazy<TimeLimiter>(() =>
    TimeLimiter.GetFromMaxCountByInterval(MaxCount, Interval));
public static TimeLimiter RateLimiter => rateLimiter.Value;
Alternatively, you can make it thread-safe by using a lock statement.
public static class Global
{
public static int MaxCount { get; set; } = 30;
public static TimeSpan Interval { get; set; } = TimeSpan.FromSeconds(1);
private static object _lockObject = new object();
private static TimeLimiter rateLimiter;
public static TimeLimiter RateLimiter
{
get
{
lock (_lockObject)
{
if (rateLimiter == null)
rateLimiter = TimeLimiter.GetFromMaxCountByInterval(MaxCount, Interval);
return rateLimiter;
}
}
}
}
Your code is not thread-safe.
Try this:
public class Singleton
{
protected Singleton() { }
private sealed class SingletonCreator
{
private static readonly Singleton instance = new Singleton();
public static Singleton Instance { get { return instance; } }
}
public static Singleton Instance
{
get { return SingletonCreator.Instance; }
}
}
Or use your favorite IoC container and register the object as a single instance.
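For reference, the same lazy, thread-safe initialization can be written with Lazy<T>, which is a bit less ceremony than the nested-class trick (a minimal sketch):
public sealed class Singleton
{
    // Lazy<T> defaults to ExecutionAndPublication, so the factory runs exactly once
    private static readonly Lazy<Singleton> instance =
        new Lazy<Singleton>(() => new Singleton());
    private Singleton() { }
    public static Singleton Instance => instance.Value;
}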
[Edit: It looks like the original question involved a double and not an integer. So I think this question stands if we change the integer to a double.]
I have a rare issue where reading integer properties from a class used on multiple threads sometimes returns a zero value. The values are not changed after initialization.
This question addresses that. The consensus is that even though I'm accessing an integer, I need to synchronize the properties. (Some of the original answers have been deleted.) I haven't chosen an answer there because I have not resolved my issue yet.
So I've done some research on this and I'm not sure which of .NET 4's locking mechanisms to use, or whether the locks should live outside the class itself.
This is what I thought about using:
public class ConfigInfo
{
private readonly object TimerIntervalLocker = new object();
private int _TimerInterval;
public int TimerInterval
{
get
{
lock (TimerIntervalLocker) {
return _TimerInterval;
}
}
}
private int _Factor1;
public int Factor1
{
set
{
lock (TimerIntervalLocker) {
_Factor1 = value;
_TimerInterval = _Factor1 * _Factor2;
}
}
get
{
lock (TimerIntervalLocker) {
return _Factor1;
}
}
}
private int _Factor2;
public int Factor2
{
set
{
lock (TimerIntervalLocker) {
_Factor2 = value;
_TimerInterval = _Factor1 * _Factor2;
}
}
get
{
lock (TimerIntervalLocker) {
return _Factor2;
}
}
}
}
But I’ve read that this is horribly slow.
Another alternative is to lock the instance of ConfigInfo on the caller side, but that seems to be a lot of work. Another alternative I've seen is Monitor.Enter and Monitor.Exit, but I think lock is the same thing with less syntax.
So what is a best practice for making a class's properties thread
safe?
a. Using lock can be slow since it uses operating system resources; if the properties' complexity is low, then a spin lock (or Interlocked.CompareExchange) will be faster (see the sketch after this list).
b. You have to make sure that a thread won't enter a lock and then, via a call from one property to another, get locked out. If this can happen (it is not currently an issue in your code), you'll need to make the lock thread- or task-aware.
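As a rough illustration of point (a), here is what one of the properties could look like guarded by a SpinLock instead of a Monitor. This is only a sketch; whether it actually beats lock needs measuring and depends on how contended the lock is:
public class ConfigInfo
{
    // SpinLock is a struct: keep the field non-readonly, or Enter/Exit would act on a copy
    private SpinLock _spinLock = new SpinLock(enableThreadOwnerTracking: false);
    private int _factor1;
    private int _factor2;
    private int _timerInterval;
    public int TimerInterval
    {
        get
        {
            bool taken = false;
            try
            {
                _spinLock.Enter(ref taken);
                return _timerInterval;
            }
            finally
            {
                if (taken) _spinLock.Exit();
            }
        }
    }
    public int Factor1
    {
        set
        {
            bool taken = false;
            try
            {
                _spinLock.Enter(ref taken);
                _factor1 = value;
                _timerInterval = _factor1 * _factor2;
            }
            finally
            {
                if (taken) _spinLock.Exit();
            }
        }
        // the getter and Factor2 follow the same pattern
    }
}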
Edit:
If the object is supposed to be set during initialization and never changed, make it immutable (like .NET strings are). Remove all the public setters and provide a constructor with parameters for defining the initial state and perhaps additional methods/operators for creating a new instance with a modified state (e.g. var newString = "Old string" + " was modified.";).
If the values never change, it would be easier to just make a copy of that instance and pass each thread an instance of its own. No locking required at all.
I think you should rewrite your ConfigInfo class to look like this; then you can't get overflow or threading problems:
public sealed class ConfigInfo
{
public ConfigInfo(int factor1, int factor2)
{
if (factor1 <= 0)
throw new ArgumentOutOfRangeException("factor1");
if (factor2 <= 0)
throw new ArgumentOutOfRangeException("factor2");
_factor1 = factor1;
_factor2 = factor2;
checked
{
_timerInterval = _factor1*_factor2;
}
}
public int TimerInterval
{
get
{
return _timerInterval;
}
}
public int Factor1
{
get
{
return _factor1;
}
}
public int Factor2
{
get
{
return _factor2;
}
}
private readonly int _factor1;
private readonly int _factor2;
private readonly int _timerInterval;
}
Note that I'm using checked to detect overflow problems.
Otherwise some values will give incorrect results.
For example, 57344 * 524288 will give zero in unchecked integer arithmetic (and there's very many other pairs of values that will give zero, and even more that will give a negative result or a positive value that "seems" correct).
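A quick way to see both behaviors side by side (a tiny sketch):
int a = 57344;
int b = 524288;
Console.WriteLine(unchecked(a * b)); // prints 0: the true product 30,064,771,072 loses its high bits
Console.WriteLine(checked(a * b));   // throws OverflowException instead of silently returning 0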
It is best, as mentioned in the comments, to make the properties readonly. I thought about the following possibility:
public class ConfigInfo
{
private class IntervalHolder
{
public static readonly IntervalHolder Empty = new IntervalHolder();
private readonly int _factor1;
private readonly int _factor2;
private readonly int _interval;
private IntervalHolder()
{
}
private IntervalHolder(int factor1, int factor2)
{
_factor1 = factor1;
_factor2 = factor2;
_interval = _factor1*_factor2;
}
public IntervalHolder WithFactor1(int factor1)
{
return new IntervalHolder(factor1, _factor2);
}
public IntervalHolder WithFactor2(int factor2)
{
return new IntervalHolder(_factor1, factor2);
}
public int Factor1
{
get { return _factor1; }
}
public int Factor2
{
get { return _factor2; }
}
public int Interval
{
get { return _interval; }
}
public override bool Equals(object obj)
{
var otherHolder = obj as IntervalHolder;
return
otherHolder != null &&
otherHolder._factor1 == _factor1 &&
otherHolder._factor2 == _factor2;
}
}
private IntervalHolder _intervalHolder = IntervalHolder.Empty;
public int TimerInterval
{
get { return _intervalHolder.Interval; }
}
private void UpdateHolder(Func<IntervalHolder, IntervalHolder> update)
{
IntervalHolder oldValue, newValue;
do
{
oldValue = _intervalHolder;
newValue = update(oldValue);
} while (!oldValue.Equals(Interlocked.CompareExchange(ref _intervalHolder, newValue, oldValue)));
}
public int Factor1
{
set { UpdateHolder(holder => holder.WithFactor1(value)); }
get { return _intervalHolder.Factor1; }
}
public int Factor2
{
set { UpdateHolder(holder => holder.WithFactor2(value)); }
get { return _intervalHolder.Factor2; }
}
}
This way, your TimerInterval value is always in sync with its factors. The only remaining problem is when one thread reads one of the properties while another writes them from outside ConfigInfo; the reader could get a stale value, and I don't see a way to solve that without introducing a single lock root. The question is whether the read operations are critical.
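If readers do need all three values to be mutually consistent at one point in time, one option under the same design is to hand out a snapshot taken from a single read of the holder reference (the method name here is hypothetical):
// inside ConfigInfo
public void GetSnapshot(out int factor1, out int factor2, out int timerInterval)
{
    // a single read of the reference yields a mutually consistent triple,
    // because IntervalHolder is immutable once constructed
    var holder = _intervalHolder;
    factor1 = holder.Factor1;
    factor2 = holder.Factor2;
    timerInterval = holder.Interval;
}
Declare _intervalHolder as volatile if the snapshot must always observe the most recent write.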
For a "log information for support" type of function I'd like to enumerate and dump active thread information.
I'm well aware of the fact that race conditions can make this information semi-inaccurate, but I'd like to try to get the best possible result, even if it isn't 100% accurate.
I looked at Process.Threads, but it returns ProcessThread objects; I'd like to have a collection of Thread objects, so that I can log their names and whether they're background threads or not.
Is there such a collection available, even if it is just a snapshot of the active threads when I call it?
ie.
Thread[] activeThreads = ??
Note, to be clear, I am not asking about Process.Threads; this collection gives me a lot, but not all of what I want. I want to know how much time specific named threads in our application are currently using (which means I will have to look at connecting the two types of objects later, but the names are more important than the CPU time to begin with).
If you're willing to replace your application's Thread creations with another wrapper class, said wrapper class can track the active and inactive Threads for you. Here's a minimal workable shell of such a wrapper:
namespace ThreadTracker
{
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Threading;
public class TrackedThread
{
private static readonly IList<Thread> threadList = new List<Thread>();
private readonly Thread thread;
private readonly ParameterizedThreadStart start1;
private readonly ThreadStart start2;
public TrackedThread(ParameterizedThreadStart start)
{
this.start1 = start;
this.thread = new Thread(this.StartThreadParameterized);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public TrackedThread(ThreadStart start)
{
this.start2 = start;
this.thread = new Thread(this.StartThread);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public TrackedThread(ParameterizedThreadStart start, int maxStackSize)
{
this.start1 = start;
this.thread = new Thread(this.StartThreadParameterized, maxStackSize);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public TrackedThread(ThreadStart start, int maxStackSize)
{
this.start2 = start;
this.thread = new Thread(this.StartThread, maxStackSize);
lock (threadList)
{
threadList.Add(this.thread);
}
}
public static int Count
{
get
{
lock (threadList)
{
return threadList.Count;
}
}
}
public static IEnumerable<Thread> ThreadList
{
get
{
lock (threadList)
{
return new ReadOnlyCollection<Thread>(threadList);
}
}
}
// either: (a) expose the thread object itself via a property or,
// (b) expose the other Thread public methods you need to replicate.
// This example uses (a).
public Thread Thread
{
get
{
return this.thread;
}
}
private void StartThreadParameterized(object obj)
{
try
{
this.start1(obj);
}
finally
{
lock (threadList)
{
threadList.Remove(this.thread);
}
}
}
private void StartThread()
{
try
{
this.start2();
}
finally
{
lock (threadList)
{
threadList.Remove(this.thread);
}
}
}
}
}
and a quick test driver of it (note I do not iterate over the list of threads, merely get the count in the list):
namespace ThreadTracker
{
using System;
using System.Threading;
internal static class Program
{
private static void Main()
{
var thread1 = new TrackedThread(DoNothingForFiveSeconds);
var thread2 = new TrackedThread(DoNothingForTenSeconds);
var thread3 = new TrackedThread(DoNothingForSomeTime);
thread1.Thread.Start();
thread2.Thread.Start();
thread3.Thread.Start(15);
while (TrackedThread.Count > 0)
{
Console.WriteLine(TrackedThread.Count);
}
Console.ReadLine();
}
private static void DoNothingForFiveSeconds()
{
Thread.Sleep(5000);
}
private static void DoNothingForTenSeconds()
{
Thread.Sleep(10000);
}
private static void DoNothingForSomeTime(object seconds)
{
Thread.Sleep(1000 * (int)seconds);
}
}
}
Not sure if you can go such a route, but it will accomplish the goal if you're able to incorporate it at an early stage of development.
Is it feasible for you to store thread information in a lookup as you create each thread in your application?
As each thread starts, you can get its ID using AppDomain.GetCurrentThreadId(). Later, you can use this to cross reference with the data returned from Process.Threads.
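A rough sketch of that lookup idea (hypothetical class and method names): each thread registers itself as it starts, and the dump joins the registry against Process.Threads by OS thread id. Note that AppDomain.GetCurrentThreadId() is marked obsolete, but it returns the id that matches ProcessThread.Id:
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
public static class ThreadRegistry
{
    // OS thread id -> managed Thread, filled in by each thread as it starts
    private static readonly ConcurrentDictionary<int, Thread> Threads =
        new ConcurrentDictionary<int, Thread>();
    public static void RegisterCurrentThread()
    {
#pragma warning disable 618 // AppDomain.GetCurrentThreadId is obsolete but gives the OS thread id
        Threads[AppDomain.GetCurrentThreadId()] = Thread.CurrentThread;
#pragma warning restore 618
    }
    public static void DumpForSupport()
    {
        foreach (ProcessThread pt in Process.GetCurrentProcess().Threads)
        {
            Thread managed;
            if (Threads.TryGetValue(pt.Id, out managed))
            {
                Console.WriteLine("{0} (background: {1}) CPU: {2}",
                    managed.Name, managed.IsBackground, pt.TotalProcessorTime);
            }
        }
    }
}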