I have a method running in an infinite loop in a task started from the ctor. The method does the equivalent of sending a ping and, after a certain number of failed pings, prevents other commands from being sent. The method is started from the ctor with Task.Factory.StartNew(() => InitPingBackgroundTask(pingCts.Token));
public async Task InitPingBackgroundTask(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        try
        {
            Ping();
            // handle logic
            _pingsFailed = 0;
            _canSend = true;
        }
        catch (TimeoutException)
        {
            // handle logic
            if (++_pingsFailed >= settings.MaxPingLossNoConnection)
                _canSend = false;
        }
        finally
        {
            await Task.Delay(settings.PingInterval);
        }
    }
}
I have a second method DoCmd like so:
public void DoCmd()
{
    if (_canSend)
    {
        // handle logic
    }
}
In my test, using Moq, I can set up the settings with PingInterval = TimeSpan.FromSeconds(1) and MaxPingLossNoConnection = 2.
After setting up the sut and everything else, I want to call DoCmd() and verify that the command is not actually sent. (I have mocked its dependencies and can verify that the logic in the method was never called.)
One way to achieve this is to add a sleep or delay within the test before calling sut.DoCmd(), but this makes the test take longer. Is there a way to simulate time passing (like tick(desiredTime) in Angular, or something similar) without having to actually wait that long?
Any leads would be appreciated :-)
EDIT
Added the test code:
public void NotAllowSendIfTooManyPingsFailed()
{
    var settings = new Settings()
    {
        PingInterval = TimeSpan.FromSeconds(1),
        MaxPingLossNoConnection = 0 // ended up changing to 0 to fail immediately
    };
    // set up code, so that Ping messages fail
    Thread.Sleep(100); // necessary so Ping has a chance to fail before the method is called
    sut.DoCmd();
    // assert logic, verifying that DoCmd did not go through
}
Ideally, I would like to be able to play with MaxPingLossNoConnection and set it to different values to test different circumstances. That would also require adding sleeps to let the time pass. In this test I would want to remove the Sleep altogether, as well as in other similar tests where the sleep would be longer.
In order to deal with time concerns in my tests, I found a little Clock class a while ago and replaced all the calls to DateTime.Now in my systems with Clock.Now. I tweaked it a little bit to have the option to set the time and either freeze it or keep it running during the tests.
Clock.cs
using System;

namespace ComumLib
{
    public class Clock : IDisposable
    {
        private static DateTime? _startTime;
        private static DateTime? _nowForTest;

        public static DateTime Now
        {
            get
            {
                if (_nowForTest == null)
                {
                    return DateTime.Now;
                }
                else
                {
                    // freeze time
                    if (_startTime == null)
                    {
                        return _nowForTest.GetValueOrDefault();
                    }
                    // keep running
                    else
                    {
                        TimeSpan elapsedTime = DateTime.Now.Subtract(_startTime.GetValueOrDefault());
                        return _nowForTest.GetValueOrDefault().Add(elapsedTime);
                    }
                }
            }
        }

        public static IDisposable NowIs(DateTime dateTime, bool keepTimeRunning = false)
        {
            _nowForTest = dateTime;
            if (keepTimeRunning)
            {
                _startTime = DateTime.Now;
            }
            return new Clock();
        }

        public static IDisposable ResetNowIs()
        {
            _startTime = null;
            _nowForTest = null;
            return new Clock();
        }

        public void Dispose()
        {
            _startTime = null;
            _nowForTest = null;
        }
    }
}
In my tests, I either call
DateTime dt = new DateTime(2022,01,01,22,12,44,2);
Clock.NowIs(dt,true); //time keeps running during the test
or
Clock.NowIs(dt); //time remains the same during the test
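For example, a test can wrap its body in the disposable returned by NowIs so that the override is always reset even if the test fails; the SUT and the assertion below are just placeholders:
public void ReportsFrozenTime()
{
    var frozen = new DateTime(2022, 01, 01, 22, 12, 44, 2);
    // Freeze Clock.Now at a known instant for the duration of the test.
    using (Clock.NowIs(frozen))
    {
        var sut = new SomethingTimeSensitive(); // hypothetical class that reads Clock.Now
        sut.DoWork();
        Assert.AreEqual(frozen, sut.LastRunAt);
    }
    // Dispose() has cleared the override, so other tests see the real DateTime.Now again.
}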
Related
Let's say it's the 15th and my function runs; if that function is called again on the 15th, it will not run. But if the date is the 16th and the function is called, then it is allowed to run.
How could I achieve this?
P.S. The code below is in Visual Basic; however, C# is fine for an answer.
Private Sub CreateLogger()
    Logger = New CustomLogger(Sub(value, Type)
                                  Dim tempFileLogPath = IO.Path.Combine("C:\temp", $"FileAttributer_{Today:yyyyMMdd}.txt")
                                  Dim consoleLogger = New ConsoleLogger()
                                  Dim textFileLogger = New TextFileLogger(tempFileLogPath)
                                  Dim compositeLogger = New CompositeLogger(consoleLogger, textFileLogger)
                                  value = $"{DateTime.Now:dd/MM/yyyy HH:mm:ss} - {value}"
                                  compositeLogger.Write(value, Type)
                              End Sub)
End Sub
I suppose this has to do with the file, so in the end you do not end up with multiple files per day.
You could do a check on the file to see whether it has already been created today, for example:
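A sketch of that idea in C#, reusing the file name pattern from the question:
// Skip the work if a log file named for today's date already exists.
var todaysLogPath = Path.Combine(@"C:\temp", $"FileAttributer_{DateTime.Today:yyyyMMdd}.txt");
if (File.Exists(todaysLogPath))
{
    return; // already ran today
}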
I would store the last called time in a variable and update it every time you call the function. It doesn't need to include the time of day, just the date. Every time you call the function, check whether the last called date is equal to the current date, and if it is, return or throw an error to stop the rest of the function.
You have to save the last run time somewhere and compare the day every time.
This is a very simple example of how to do it:
using System;
using System.Threading;

public class Program
{
    private static DateTime lastRunTime;

    public static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            DoSomething();
            Thread.Sleep(1000);
        }
    }

    private static void DoSomething()
    {
        // Compare the dates (not just the day number, which would also match
        // the same day of a different month) to decide whether to run again.
        if (DateTime.Now.Date != lastRunTime.Date)
        {
            lastRunTime = DateTime.Now;
            Console.WriteLine($"Run: {lastRunTime}");
        }
    }
}
But I guess Radu Hatos is right: you should explain why you want your function to behave this way.
You could create a simple class like the one below to manage your calls.
(It is just a very simple example)
public abstract class ExecutionManager
{
    protected string ActionKey { get; }

    public ExecutionManager(string actionKey)
    {
        ActionKey = actionKey;
    }

    protected abstract DateTime GetLastExecution();
    protected abstract void SetLastExecution(DateTime dateTime);

    public void OncePerDay(Action action)
    {
        if (GetLastExecution().Date != DateTime.Now.Date)
        {
            SetLastExecution(DateTime.Now);
            action();
        }
    }
}
You would then implement this abstract class according to your needs: if you run the app on a single machine (a console app, for instance), you can implement a registry version or a file version to store the last call dates/times.
If you run several instances on several servers, you will have to store the information in a database, for instance, or in a shared folder (there might be complex mutual exclusion issues to manage here); a rough sketch of the database option follows below.
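As a rough sketch of the database variant (the table name, schema and the Microsoft.Data.SqlClient provider below are assumptions, not part of this answer), the same abstract class could be implemented like this:
// Assumes a table LastExecutions(ActionKey NVARCHAR(100) PRIMARY KEY, LastRun DATETIME2).
public class DbExecutionManager : ExecutionManager
{
    private readonly string _connectionString;

    public DbExecutionManager(string actionKey, string connectionString) : base(actionKey)
    {
        _connectionString = connectionString;
    }

    protected override DateTime GetLastExecution()
    {
        using var cn = new Microsoft.Data.SqlClient.SqlConnection(_connectionString);
        cn.Open();
        using var cmd = new Microsoft.Data.SqlClient.SqlCommand(
            "SELECT LastRun FROM LastExecutions WHERE ActionKey = @key", cn);
        cmd.Parameters.AddWithValue("@key", ActionKey);
        var result = cmd.ExecuteScalar();
        return result is DateTime lastRun ? lastRun : DateTime.MinValue;
    }

    protected override void SetLastExecution(DateTime dateTime)
    {
        using var cn = new Microsoft.Data.SqlClient.SqlConnection(_connectionString);
        cn.Open();
        using var cmd = new Microsoft.Data.SqlClient.SqlCommand(
            @"UPDATE LastExecutions SET LastRun = @now WHERE ActionKey = @key;
              IF @@ROWCOUNT = 0 INSERT INTO LastExecutions (ActionKey, LastRun) VALUES (@key, @now);", cn);
        cmd.Parameters.AddWithValue("@key", ActionKey);
        cmd.Parameters.AddWithValue("@now", dateTime);
        cmd.ExecuteNonQuery();
    }
}
A real implementation would also need to make the check-and-set atomic (a transaction or an application lock), which is exactly the kind of mutual exclusion issue mentioned above.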
You will find below a very simple implementation with a file storage for the single computer scenario:
public class LADExecutionManager : ExecutionManager
{
    private static string FilePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "MyApplicationName");

    public LADExecutionManager(string actionKey) : base(actionKey)
    {
        // FilePath is actually a directory; make sure it exists.
        if (!Directory.Exists(FilePath))
            Directory.CreateDirectory(FilePath);
    }

    protected string FileName => Path.Combine(FilePath, $"{ActionKey}.txt");

    protected override DateTime GetLastExecution()
    {
        try
        {
            var sDate = File.ReadAllLines(FileName).First();
            return DateTime.Parse(sDate);
        }
        catch
        {
            // No file yet (or unreadable content): treat as "never executed".
            return DateTime.MinValue;
        }
    }

    protected override void SetLastExecution(DateTime dateTime)
    {
        File.WriteAllLines(FileName, new string[] { dateTime.ToLongDateString() });
    }
}
To use these classes, you would code something like
internal class Program
{
    private static void Main(string[] args)
    {
        var em = new LADExecutionManager("MyActionKeyName");
        em.OncePerDay(() => MyAction());
    }

    public static void MyAction()
    {
        // Action...
        Debug.WriteLine("Executing Action");
    }
}
You just have to set an action key name in the constructor for each method. (You could use reflection to get the method name, but I think that is off topic here.)
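If typing the key by hand feels error-prone, one option (my own addition, not something required by the classes above) is to let the compiler supply the calling member's name via [CallerMemberName]:
using System.Runtime.CompilerServices;

internal static class ExecutionManagerFactory
{
    // The caller's method name becomes the action key unless one is passed explicitly.
    public static LADExecutionManager ForCaller([CallerMemberName] string actionKey = "")
        => new LADExecutionManager(actionKey);
}

// Usage: inside a method named ImportDailyFiles, the key is "ImportDailyFiles".
// var em = ExecutionManagerFactory.ForCaller();
// em.OncePerDay(() => MyAction());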
I want to use IAsyncEnumerable as a Source for Akka streams, but I have not found how to do it.
There is no suitable method in the Source class for this code:
using System.Collections.Generic;
using System.Threading.Tasks;
using Akka.Streams.Dsl;

namespace ConsoleApp1
{
    class Program
    {
        static async Task Main(string[] args)
        {
            Source.From(await AsyncEnumerable())
                .Via(/*some action*/)
                //.....
        }

        private static async IAsyncEnumerable<int> AsyncEnumerable()
        {
            //some async enumerable
        }
    }
}
How can I use IAsyncEnumerable as a Source?
This has been done in the past as a part of the Akka.NET Streams contrib package, but since I don't see it there anymore, let's go through how to implement such a source. The topic can be quite long, as:
Akka.NET Streams is really about graph processing - we're talking about many-inputs/many-outputs configurations (in Akka.NET they're called inlets and outlets) with support for cycles in graphs.
Akka.NET is not built on top of .NET async/await or even on top of the standard .NET thread pool - both are pluggable, which means that at the lowest level we're basically working with callbacks and encoding by hand what the C# compiler sometimes does for us.
Akka.NET Streams is capable of both pushing and pulling values between stages/operators. IAsyncEnumerable<T> can only pull data while IObservable<T> can only push it, so we get more expressive power here, but it comes at a cost.
The basics of low level API used to implement custom stages can be found in the docs.
The starter boilerplate looks like this:
public static class AsyncEnumerableExtensions {
    // Helper method to change IAsyncEnumerable into an Akka.NET Source.
    public static Source<T, NotUsed> AsSource<T>(this IAsyncEnumerable<T> source) =>
        Source.FromGraph(new AsyncEnumerableSource<T>(source));
}

// A source stage is a description of a part of the graph that doesn't consume
// any data, only produces it using a single output channel.
public sealed class AsyncEnumerableSource<T> : GraphStage<SourceShape<T>>
{
    private readonly IAsyncEnumerable<T> _enumerable;

    public AsyncEnumerableSource(IAsyncEnumerable<T> enumerable)
    {
        _enumerable = enumerable;
        Outlet = new Outlet<T>("asyncenumerable.out");
        Shape = new SourceShape<T>(Outlet);
    }

    public Outlet<T> Outlet { get; }
    public override SourceShape<T> Shape { get; }

    /// Logic is to a graph stage what an enumerator is to an enumerable.
    protected override GraphStageLogic CreateLogic(Attributes inheritedAttributes) =>
        new Logic(this);

    sealed class Logic : OutGraphStageLogic
    {
        public override void OnPull()
        {
            // method called whenever a consumer asks for new data
        }

        public override void OnDownstreamFinish()
        {
            // method called whenever a consumer stage finishes, used for disposals
        }
    }
}
As mentioned, we don't use async/await straight away here; even more, calling Logic methods from an asynchronous context is unsafe. To make it safe we need to register the methods that may be called from other threads using GetAsyncCallback<T> and call them via the returned wrappers. This ensures that no data races happen when executing asynchronous code.
sealed class Logic : OutGraphStageLogic
{
    private readonly Outlet<T> _outlet;
    // enumerator we'll call for MoveNextAsync, and eventually dispose
    private readonly IAsyncEnumerator<T> _enumerator;
    // callback called whenever _enumerator.MoveNextAsync completes asynchronously
    private readonly Action<Task<bool>> _onMoveNext;
    // callback called whenever _enumerator.DisposeAsync completes asynchronously
    private readonly Action<Task> _onDisposed;
    // cache used for errors thrown by _enumerator.MoveNextAsync, that
    // should be rethrown after _enumerator.DisposeAsync
    private Exception? _failReason = null;

    public Logic(AsyncEnumerableSource<T> source) : base(source.Shape)
    {
        _outlet = source.Outlet;
        _enumerator = source._enumerable.GetAsyncEnumerator();
        _onMoveNext = GetAsyncCallback<Task<bool>>(OnMoveNext);
        _onDisposed = GetAsyncCallback<Task>(OnDisposed);
    }

    // ... other methods
}
The last part to do is the methods overridden on Logic:
OnPull: used whenever the downstream stage asks for new data. Here we need to request the next element of the async enumerator sequence.
OnDownstreamFinish: called whenever the downstream stage has finished and will not ask for any new data. It's the place for us to dispose our enumerator.
The thing is, these methods are not async/await, while their enumerator equivalents are. What we basically need to do here is:
Call the corresponding async methods of the underlying enumerator (OnPull → MoveNextAsync and OnDownstreamFinish → DisposeAsync).
See if we can take their results immediately - this is the important part that the C# compiler usually does for us in async/await calls.
If not, and we need to wait for the results - call ContinueWith to register our callback wrappers to be called once the async methods are done.
sealed class Logic : OutGraphStageLogic
{
    // ... constructor and fields

    public override void OnPull()
    {
        var hasNext = _enumerator.MoveNextAsync();
        if (hasNext.IsCompletedSuccessfully)
        {
            // first try the short path: the value is returned immediately
            if (hasNext.Result)
                // there was a next value, push it downstream
                Push(_outlet, _enumerator.Current);
            else
                // if there was none, we reached the end of the async enumerable
                // and we can dispose it
                DisposeAndComplete();
        }
        else
            // we need to wait for the result
            hasNext.AsTask().ContinueWith(_onMoveNext);
    }

    // This method is called when the downstream stage has been completed
    public override void OnDownstreamFinish() =>
        // dispose enumerator on downstream finish
        DisposeAndComplete();

    private void DisposeAndComplete()
    {
        var disposed = _enumerator.DisposeAsync();
        if (disposed.IsCompletedSuccessfully)
        {
            // enumerator disposal completed immediately
            if (_failReason is not null)
                // if we close this stream as a result of an error in MoveNextAsync,
                // fail the stage
                FailStage(_failReason);
            else
                // we can close the stage with no issues
                CompleteStage();
        }
        else
            // we need to wait for the enumerator to be disposed
            disposed.AsTask().ContinueWith(_onDisposed);
    }

    private void OnMoveNext(Task<bool> task)
    {
        // since this is a callback, the task will always be completed; we just need
        // to check for exceptions
        if (task.IsCompletedSuccessfully)
        {
            if (task.Result)
                // if the task returns true, it means we read a value
                Push(_outlet, _enumerator.Current);
            else
                // otherwise there are no more values to read and we can close the source
                DisposeAndComplete();
        }
        else
        {
            // the task either failed or has been cancelled
            _failReason = task.Exception as Exception ?? new TaskCanceledException(task);
            FailStage(_failReason);
        }
    }

    private void OnDisposed(Task task)
    {
        if (task.IsCompletedSuccessfully) CompleteStage();
        else
        {
            var reason = task.Exception as Exception
                ?? _failReason
                ?? new TaskCanceledException(task);
            FailStage(reason);
        }
    }
}
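With the AsSource extension method from the top of this answer, wiring the new source into a running graph could look roughly like this (the async iterator is just a placeholder, and the snippet assumes a small console app with top-level statements):
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Streams;
using Akka.Streams.Dsl;

using var system = ActorSystem.Create("demo");
using var materializer = system.Materializer();

// Hypothetical producer used only to demonstrate the extension method.
static async IAsyncEnumerable<int> Numbers()
{
    for (var i = 0; i < 10; i++)
    {
        await Task.Delay(100);
        yield return i;
    }
}

await Numbers()
    .AsSource()                                   // IAsyncEnumerable<int> -> Source<int, NotUsed>
    .RunForeach(Console.WriteLine, materializer); // consume the stream element by element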
As of Akka.NET v1.4.30 this is now natively supported inside Akka.Streams via the RunAsAsyncEnumerable method:
var input = Enumerable.Range(1, 6).ToList();
var cts = new CancellationTokenSource();
var token = cts.Token;

var asyncEnumerable = Source.From(input).RunAsAsyncEnumerable(Materializer);

var output = input.ToArray();
bool caught = false;
try
{
    await foreach (var a in asyncEnumerable.WithCancellation(token))
    {
        cts.Cancel();
    }
}
catch (OperationCanceledException e)
{
    caught = true;
}

caught.ShouldBeTrue();
I copied that sample from the Akka.NET test suite, in case you're wondering.
You can also use an existing primitive for streaming large collections of data. Here is an example of using Source.UnfoldAsync to stream pages of data - in this case GitHub repositories using Octokit - until there are no more.
var source = Source.UnfoldAsync<int, RepositoryPage>(startPage, pageNumber =>
{
    var pageTask = client.GetRepositoriesAsync(pageNumber, pageSize);
    var next = pageTask.ContinueWith(task =>
    {
        var page = task.Result;
        if (page.PageNumber * pageSize > page.Total)
            return Option<(int, RepositoryPage)>.None;
        else
            return new Option<(int, RepositoryPage)>((page.PageNumber + 1, page));
    });
    return next;
});
To run
using var sys = ActorSystem.Create("system");
using var mat = sys.Materializer();
int startPage = 1;
int pageSize = 50;
var client = new GitHubClient(new ProductHeaderValue("github-search-app"));
var source = ...
var sink = Sink.ForEach<RepositoryPage>(Console.WriteLine);
var result = source.RunWith(sink, mat);
await result.ContinueWith(_ => sys.Terminate());
class Page<T>
{
    public Page(IReadOnlyList<T> contents, int page, long total)
    {
        Contents = contents;
        PageNumber = page;
        Total = total;
    }

    public IReadOnlyList<T> Contents { get; set; } = new List<T>();
    public int PageNumber { get; set; }
    public long Total { get; set; }
}

class RepositoryPage : Page<Repository>
{
    public RepositoryPage(IReadOnlyList<Repository> contents, int page, long total)
        : base(contents, page, total)
    {
    }

    public override string ToString() =>
        $"Page {PageNumber}\n{string.Join("", Contents.Select(x => x.Name + "\n"))}";
}

static class GitHubClientExtensions
{
    public static async Task<RepositoryPage> GetRepositoriesAsync(this GitHubClient client, int page, int size)
    {
        // specify a search term here
        var request = new SearchRepositoriesRequest("bootstrap")
        {
            Page = page,
            PerPage = size
        };
        var result = await client.Search.SearchRepo(request);
        return new RepositoryPage(result.Items, page, result.TotalCount);
    }
}
I've been attempting to see how long functions take to execute in my code, as practice to see where I can optimize. Right now I use a helper class that is essentially a stopwatch with a message to check these. The goal is that I should be able to wrap whatever method call I want in the helper and get its duration.
public class StopwatcherData
{
    public long Time { get; set; }
    public string Message { get; set; }

    public StopwatcherData(long time, string message)
    {
        Time = time;
        Message = message;
    }
}

public class Stopwatcher
{
    public delegate void CompletedCallBack(string result);

    public static List<StopwatcherData> Data { get; set; }
    private static Stopwatch stopwatch { get; set; }

    public Stopwatcher()
    {
        Data = new List<StopwatcherData>();
        stopwatch = new Stopwatch();
        stopwatch.Start();
    }

    public static void Click(string message)
    {
        Data.Add(new StopwatcherData(stopwatch.ElapsedMilliseconds, message));
    }

    public static void Reset()
    {
        stopwatch.Reset();
        stopwatch.Start();
    }
}
Right now, to use this, I have to call Reset before the function I want to time so that the timer is restarted, and then call Click after it.
Stopwatcher.Reset();
MyFunction();
Stopwatcher.Click("MyFunction");
I've read a bit about delegates and actions, but I'm unsure of how to apply them to this situation. Ideally, I would pass the function as part of the Stopwatcher call.
//End Goal:
Stopwatcher.Track(MyFunction(), "MyFunction Time");
Any help is welcome.
It's not really a good idea to profile your application like that, but if you insist, you can at least make some improvements.
First, don't reuse the Stopwatch; just create a new one every time you need it.
Second, you need to handle two cases - one where the delegate you pass returns a value and one where it does not.
Since your Track method is static, it's common practice to make it thread safe; non-thread-safe static methods are quite a bad idea. For that you can store your messages in a thread-safe collection like ConcurrentBag, or just take a lock every time you add an item to your list.
In the end you can have something like this:
public class Stopwatcher {
    private static readonly ConcurrentBag<StopwatcherData> _data = new ConcurrentBag<StopwatcherData>();

    public static void Track(Action action, string message) {
        var w = Stopwatch.StartNew();
        try {
            action();
        }
        finally {
            w.Stop();
            _data.Add(new StopwatcherData(w.ElapsedMilliseconds, message));
        }
    }

    public static T Track<T>(Func<T> func, string message) {
        var w = Stopwatch.StartNew();
        try {
            return func();
        }
        finally {
            w.Stop();
            _data.Add(new StopwatcherData(w.ElapsedMilliseconds, message));
        }
    }
}
And use it like this:
Stopwatcher.Track(() => SomeAction(param1), "test");
bool result = Stopwatcher.Track(() => SomeFunc(param2), "test");
If you are going to use this with async delegates (which return Task or Task<T>), you need to add two more overloads for that case, for example:
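A sketch of what those overloads could look like, following the same pattern and added inside the same Stopwatcher class:
public static async Task TrackAsync(Func<Task> func, string message) {
    var w = Stopwatch.StartNew();
    try {
        await func().ConfigureAwait(false);
    }
    finally {
        w.Stop();
        _data.Add(new StopwatcherData(w.ElapsedMilliseconds, message));
    }
}

public static async Task<T> TrackAsync<T>(Func<Task<T>> func, string message) {
    var w = Stopwatch.StartNew();
    try {
        return await func().ConfigureAwait(false);
    }
    finally {
        w.Stop();
        _data.Add(new StopwatcherData(w.ElapsedMilliseconds, message));
    }
}

// Usage (SomeFuncAsync is any Task-returning method):
// await Stopwatcher.TrackAsync(() => SomeFuncAsync(param1), "test");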
Yes, you can create a timer function that accepts any action as a delegate. Try this block:
public static long TimeAction(Action action)
{
    var timer = new Stopwatch();
    timer.Start();
    action();
    timer.Stop();
    return timer.ElapsedMilliseconds;
}
This can be used like this:
var elapsedMilliseconds = TimeAction(() => MyFunc(param1, param2));
This is a bit more awkward if your wrapped function returns a value, but you can deal with it by assigning a variable from within the closure, like this:
bool isSuccess;
var elapsedMilliseconds = TimeAction(() => {
    isSuccess = MyFunc(param1, param2);
});
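Alternatively (my own variation, not part of the snippet above), a generic overload can return both the result and the elapsed time as a tuple, which avoids the closure assignment:
public static (T Result, long ElapsedMilliseconds) TimeFunc<T>(Func<T> func)
{
    var timer = Stopwatch.StartNew();
    var result = func();
    timer.Stop();
    return (result, timer.ElapsedMilliseconds);
}

// Usage:
// var (isSuccess, elapsed) = TimeFunc(() => MyFunc(param1, param2));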
I've had this problem a while ago as well and was always afraid that I'd leave errors behind when I changed Stopwatcher.Track(() => SomeFunc(), "test") (see Evk's answer) back to SomeFunc(). So I thought about something that wraps the code without changing it!
I came up with a using block, which is for sure not the intended purpose.
public class OneTimeStopwatch : IDisposable
{
    private string _logPath = "C:\\Temp\\OneTimeStopwatch.log";
    private readonly string _itemname;
    private System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();

    public OneTimeStopwatch(string itemname)
    {
        _itemname = itemname;
        sw.Start();
    }

    public void Dispose()
    {
        sw.Stop();
        System.IO.File.AppendAllText(_logPath, $"{_itemname}: {sw.ElapsedMilliseconds}ms{Environment.NewLine}");
    }
}
It can be used in an easy way:
using (new OneTimeStopwatch("test"))
{
    //some sensible code not to touch
    System.Threading.Thread.Sleep(1000);
}
//logfile with line "test: 1000ms"
I only need to remove 2 lines (and auto-format) to make it normal again.
Plus, I can easily wrap multiple lines here, which isn't possible without defining new functions in the other approach.
Again, this is not recommended when you are measuring just a few milliseconds.
I have the following async code that gets called from so many places in my project:
public async Task<HttpResponseMessage> MakeRequestAsync(HttpRequestMessage request)
{
    var client = new HttpClient();
    return await client.SendAsync(request).ConfigureAwait(false);
}
An example of how the above method gets called:
var tasks = items.Select(async i =>
{
    var response = await MakeRequestAsync(i.Url);
    //do something with response
});
The ZenDesk API that I'm hitting allows about 200 requests per minute, after which I get a 429 error. I need to do some sort of Thread.Sleep if I encounter the 429 error, but with async/await there may be many requests in parallel waiting to process, and I am not sure how I can make all of them sleep for 5 seconds or so and then resume again.
What's the correct way to approach this problem? I'd like to hear quick solutions as well as good-design solutions.
I do not think that this is a duplicate, as marked recently. The other SO poster does not need a time-based sliding window (or time-based throttling), and the answer there does not cover this situation; it works only when you want to set a hard limit on outgoing requests.
Anyway, a quasi-quick solution is to do the throttling in the MakeRequestAsync method. Something like this:
public async Task<HttpResponseMessage> MakeRequestAsync(HttpRequestMessage request)
{
    // Wait while the limit has been reached.
    while (!_throttlingHelper.RequestAllowed)
    {
        await Task.Delay(1000);
    }

    var client = new HttpClient();

    _throttlingHelper.StartRequest();
    var result = await client.SendAsync(request).ConfigureAwait(false);
    _throttlingHelper.EndRequest();

    return result;
}
The class ThrottlingHelper is just something I made now, so you may need to debug it a bit (read: it may not work out of the box).
It tries to be a timestamp sliding window.
public class ThrottlingHelper : IDisposable
{
    // Holds time stamps for all started requests
    private readonly List<long> _requestsTx;
    private readonly ReaderWriterLockSlim _lock;
    private readonly int _maxLimit;
    private TimeSpan _interval;

    public ThrottlingHelper(int maxLimit, TimeSpan interval)
    {
        _requestsTx = new List<long>();
        _maxLimit = maxLimit;
        _interval = interval;
        _lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
    }

    public bool RequestAllowed
    {
        get
        {
            _lock.EnterReadLock();
            try
            {
                var nowTx = DateTime.Now.Ticks;
                return _requestsTx.Count(tx => nowTx - tx < _interval.Ticks) < _maxLimit;
            }
            finally
            {
                _lock.ExitReadLock();
            }
        }
    }

    public void StartRequest()
    {
        _lock.EnterWriteLock();
        try
        {
            _requestsTx.Add(DateTime.Now.Ticks);
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    public void EndRequest()
    {
        _lock.EnterWriteLock();
        try
        {
            var nowTx = DateTime.Now.Ticks;
            _requestsTx.RemoveAll(tx => nowTx - tx >= _interval.Ticks);
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    public void Dispose()
    {
        _lock.Dispose();
    }
}
You would use it as a member in the class that makes the requests, and instantiate it like this:
_throttlingHelper = new ThrottlingHelper(200, TimeSpan.FromMinutes(1));
Don't forget to dispose it when you're done with it.
A bit of documentation about ThrottlingHelper:
Constructor params are the maximum number of requests you want to be able to do in a certain interval, and the interval itself as a time span. So, 200 and 1 minute means that you want no more than 200 requests/minute.
Property RequestAllowed lets you know if you are able to do a request with the current throttling settings.
Methods StartRequest & EndRequest register/unregister a request by using the current date/time.
EDIT/Pitfalls
As indicated by #PhilipABarnes, EndRequest can potentially remove requests that are still in progress. As far as I can see, this can happen in two situations:
The interval is small, such that requests do not get to complete in good time.
Requests actually take more than the interval to execute.
The proposed solution involves matching EndRequest calls to StartRequest calls by means of a GUID or something similar, as sketched below.
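A sketch of that idea (the field and method shapes here are my guesses, not code from the comments): StartRequest hands back an id, EndRequest only prunes entries that have both completed and aged out of the window, and RequestAllowed would then count the timestamps in _started instead of _requestsTx.
private readonly Dictionary<Guid, long> _started = new Dictionary<Guid, long>();
private readonly HashSet<Guid> _completed = new HashSet<Guid>();

public Guid StartRequest()
{
    _lock.EnterWriteLock();
    try
    {
        var id = Guid.NewGuid();
        _started[id] = DateTime.Now.Ticks;
        return id;
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}

public void EndRequest(Guid id)
{
    _lock.EnterWriteLock();
    try
    {
        _completed.Add(id);
        var nowTx = DateTime.Now.Ticks;
        // Only entries that are both finished and older than the interval are removed,
        // so a request that is still in progress can never be evicted.
        var expired = _started
            .Where(kv => _completed.Contains(kv.Key) && nowTx - kv.Value >= _interval.Ticks)
            .Select(kv => kv.Key)
            .ToList();
        foreach (var oldId in expired)
        {
            _started.Remove(oldId);
            _completed.Remove(oldId);
        }
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}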
If there are multiple requests waiting in the while loop for RequestAllowed, some of them might start at the same time. How about a simple StartRequestIfAllowed?
public class ThrottlingHelper : DisposeBase
{
    // Holds time stamps for all started requests
    private readonly List<long> _requestsTx;
    private readonly Mutex _mutex = new Mutex();
    private readonly int _maxLimit;
    private readonly TimeSpan _interval;

    public ThrottlingHelper(int maxLimit, TimeSpan interval)
    {
        _requestsTx = new List<long>();
        _maxLimit = maxLimit;
        _interval = interval;
    }

    public bool StartRequestIfAllowed
    {
        get
        {
            _mutex.WaitOne();
            try
            {
                var nowTx = DateTime.Now.Ticks;
                if (_requestsTx.Count(tx => nowTx - tx < _interval.Ticks) < _maxLimit)
                {
                    _requestsTx.Add(DateTime.Now.Ticks);
                    return true;
                }
                else
                {
                    return false;
                }
            }
            finally
            {
                _mutex.ReleaseMutex();
            }
        }
    }

    public void EndRequest()
    {
        _mutex.WaitOne();
        try
        {
            var nowTx = DateTime.Now.Ticks;
            _requestsTx.RemoveAll(tx => nowTx - tx >= _interval.Ticks);
        }
        finally
        {
            _mutex.ReleaseMutex();
        }
    }

    protected override void DisposeResources()
    {
        _mutex.Dispose();
    }
}
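With that change, MakeRequestAsync from the earlier answer would loop on StartRequestIfAllowed instead of checking RequestAllowed and then calling StartRequest separately, roughly like this:
public async Task<HttpResponseMessage> MakeRequestAsync(HttpRequestMessage request)
{
    // Loop (with a delay) until this call wins a slot in the throttling window.
    while (!_throttlingHelper.StartRequestIfAllowed)
    {
        await Task.Delay(1000);
    }

    var client = new HttpClient();
    try
    {
        return await client.SendAsync(request).ConfigureAwait(false);
    }
    finally
    {
        _throttlingHelper.EndRequest();
    }
}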
We have a weird problem when using Rhino Mocks and threads. I've tried to isolate the problem, but now I'm stuck with this:
[TestClass]
public class FoolTests
{
    [TestMethod]
    public void TestMethod_Scenario_Result()
    {
        for (int i = 0; i < 5; i++)
        {
            var fool = MockRepository.GenerateStub<IFool>();
            fool.Stub(x => x.AmIFool).Return(false);
            new Fool(fool);
        }
    }
}

public class Fool
{
    private readonly IFool _fool;
    private readonly Thread _thread;

    public Fool(IFool fool)
    {
        _fool = fool;
        _thread = new Thread(Foolish);
        _thread.Start();
    }

    private void Foolish()
    {
        while (true)
        {
            var foo = _fool.Foolness;
        }
    }
}

public interface IFool
{
    bool AmIFool { get; }
    bool Foolness { get; set; }
}
Nearly all the time when running this test, I get "Test method FoolTests.TestMethod_Scenario_Result threw exception: System.InvalidOperationException: This action is invalid when the mock object is in replay state." on line "fool.Stub(x => x.AmIFool).Return(false);".
I have no idea what could be wrong here. Does anyone have an idea, or do I have to dig into the Rhino Mocks code?
Not sure if this is a complete answer, but this page has an interesting note on multi-threading:
http://www.ayende.com/projects/rhino-mocks/api/files/MockRepository-cs.html
MockRepository is capable of verifying in multiply threads, but recording in multiply threads is not recommended.
The first thing I'd try is setting up all mocks and expectations, then executing your constructors. This seems to be working for me:
[TestMethod]
public void TestMethod_Scenario_Result()
{
    var stubs = new IFool[5];
    for (int i = 0; i < stubs.Length; ++i)
    {
        var fool = MockRepository.GenerateStub<IFool>();
        fool.Stub(x => x.AmIFool).Return(false);
        stubs[i] = fool;
    }

    foreach (var stub in stubs)
        new Fool(stub);
}
Since this code works, I imagine the problem is that you are doing playback in one thread while recording (for a different stub) in another thread. Playing back and recording at the same time seems like it isn't thread safe, even if you're operating on different stubs.