I'm looking for implementations of the active object pattern, but haven't found much so far. This is what I came up with:
http://geekswithblogs.net/dbose/archive/2009/10/17/c-activeobject-runnable.aspx
I need something a little bit more involved, preferably for .NET version <= 3.5.
A simple implementation that uses System.Threading.Tasks.Task:
class ActiveObject : IDisposable
{
    private Task _lastTask = Task.Factory.StartNew(() => { });

    public void Dispose()
    {
        if (_lastTask == null)
            return;
        _lastTask.Wait();
        _lastTask = null;
    }

    public void InvokeAsync(Action action)
    {
        if (_lastTask == null)
            throw new ObjectDisposedException(GetType().FullName);
        _lastTask = _lastTask.ContinueWith(t => action());
    }
}
InvokeAsync isn't thread-safe; wrap the _lastTask = _lastTask.ContinueWith(...) assignment in a lock if you need it.
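For example, a minimal thread-safe variant might look like this inside the ActiveObject class above (a sketch, not part of the original answer; it uses a dedicated lock object rather than locking on the field that gets replaced):

// Sketch only: _sync is an addition, not in the answer above.
private readonly object _sync = new object();

public void InvokeAsync(Action action)
{
    lock (_sync)
    {
        if (_lastTask == null)
            throw new ObjectDisposedException(GetType().FullName);
        _lastTask = _lastTask.ContinueWith(t => action());
    }
}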
See System.Threading.Tasks.Task.
I haven't looked at the code, but this seems to be an implementation of the active object pattern.
http://www.codeproject.com/KB/architecture/LongRunningActiveObject.aspx
Adding to Anton Tykhyy's answer, there is a version of System.Threading.Tasks.Task for .NET 3.5 available as a part of Reactive Extensions. Note that this version does not have official support from Microsoft.
I have various interfaces and I need to be able to call methods on them. Here is the base class:
public class MyActorBase<TChild> : ActorBase where TChild : MyActorBase<TChild>
{
    public MyActorBase()
    {
        var actors = ChildClass
            .GetInterfaces()
            .Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IActorMessageHandler<>))
            .Select(x => (arguments: x.GetGenericArguments(), definition: x))
            .ToImmutableList();

        if (actors.Any())
        {
            var ty = actors.First();
            var obj = Activator.CreateInstance(ty.definition, true);
            // how to call method implementation
        }
    }

    protected sealed override bool Receive(object message) => false;

    private Type ChildClass => ((this as TChild) ?? new object()).GetType();
}
public interface IActorMessageHandler<in T>
{
    Task Handle(T msg);
}
I read these blog posts:
Don't use Activator.CreateInstance
Linq Expressions
Creating objects performance implications
The writers already knew the type at compile time and hence were able to cast correctly. I don't know anything at compile time, so I cannot use a generic method or cast with the () operator or the as operator.
UPDATE: I think people are not getting the idea of what I want to achieve, so consider this. I made a NuGet package which anyone can depend upon. Somewhere in the world, someone writes this code:
public class MyMessage
{
    public int Number { get; }

    public MyMessage(int number) => Number = number;
}

public class MyNewActor : MyActorBase<MyNewActor>, IActorMessageHandler<MyMessage>
{
    public Task Handle(MyMessage msg)
    {
        return Task.CompletedTask;
    }
}
I want that, for any class that implements IActorMessageHandler<T>, I should be able to call its Handle(T msg) method. So, given that I was able to instantiate it (considering that I'm not using any dependency injection), how can I call the method in the most efficient way?
Is there any alternative to reflection?
You should not use Activator.CreateInstance; it's quite expensive. Instead, you can use Expression.Lambda to create objects in an efficient way:
var obj = Expression.Lambda<Func<IActorMessageHandler<TChild>>>(
        Expression.New(ty.definition.GetConstructor(Type.EmptyTypes)
            ?? throw new Exception("Failed to create object")))
    .Compile()();
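The same Expression-tree approach can also be used to invoke Handle without paying reflection cost on every message. Here is a minimal sketch (the helper name and caching strategy are mine, not part of the answer); you would build one delegate per closed IActorMessageHandler<T> type and cache it:

using System;
using System.Linq.Expressions;
using System.Reflection;
using System.Threading.Tasks;

static class HandlerInvokerFactory
{
    // Builds (handler, message) => ((IActorMessageHandler<T>)handler).Handle((T)message)
    // for the given closed interface type. Compile once and cache per message type.
    public static Func<object, object, Task> BuildHandleInvoker(Type handlerInterface)
    {
        MethodInfo handleMethod = handlerInterface.GetMethod("Handle");
        Type messageType = handlerInterface.GetGenericArguments()[0];

        ParameterExpression handlerParam = Expression.Parameter(typeof(object), "handler");
        ParameterExpression messageParam = Expression.Parameter(typeof(object), "message");

        MethodCallExpression call = Expression.Call(
            Expression.Convert(handlerParam, handlerInterface),
            handleMethod,
            Expression.Convert(messageParam, messageType));

        return Expression.Lambda<Func<object, object, Task>>(call, handlerParam, messageParam).Compile();
    }
}

Cached this way, invoking the compiled delegate should be close to the cost of a direct interface call, whereas MethodInfo.Invoke pays reflection overhead on every message.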
What about using the dynamic keyword? This is basically optimized reflection nicely wrapped for you:
dynamic obj = Activator.CreateInstance(ty.definition, true);
Task t = obj.Handle(msg); //need to define msg before
It bypasses compile-time checks and defers method lookup to run time.
Note that it will fail at run-time if no resolution for the Handle method can be performed.
This blog post concludes that dynamic ends up being much quicker than reflection when called fairly often because of caching optimizations.
I'm using Polly .NET for wrapping my methods with retry behavior.
Polly makes it quite easy and elegant but I'm trying to take it to the next level.
Please see this Python example (it might have a few mistakes, but that's not the point here):
@retry(wait_exponential_multiplier=250,
       wait_exponential_max=4500,
       stop_max_attempt_number=8,
       retry_on_result=lambda failures_count: failures_count > 0)
def put():
    global non_delivered_tweets
    logger.info("Executing Firehose put_batch command on {} tweets".format(len(non_delivered_tweets)))
    response = firehose.put_record_batch(DeliveryStreamName=firehose_stream_name, Records=non_delivered_tweets)
    failures_count = response["FailedPutCount"]
    failures_list = []
    if failures_count > 0:
        for index, request_response in enumerate(response["RequestResponses"]):
            if "ErrorCode" in request_response:
                failures_list.append(non_delivered_tweets[index])
    non_delivered_tweets = failures_list
    return failures_count
The benefits of writing code like the above:
You read the core logic
You see that the core logic is retried in the specified cases
Since the two are not mixed, the code is much more readable in my opinion.
I would like to achieve this syntax with Polly on C#, using attributes.
I have minimal knowledge of C# attributes, and from what I've read, it seems that this is not possible.
I would be happy to have something like this:
class Program
{
    static void Main(string[] args)
    {
        var someClassInstance = new SomeClass();
        someClassInstance.DoSomething();
    }
}

class Retry : Attribute
{
    private static readonly Policy DefaultRetryPolicy = Policy
        .Handle<Exception>()
        .WaitAndRetry(3, retryAttempt => TimeSpan.FromSeconds(5));

    public void Wrapper(Action action)
    {
        DefaultRetryPolicy.Execute(action);
    }
}

class SomeClass
{
    [Retry]
    public void DoSomething()
    {
        // core logic
    }
}
As you can see, in my example - the [Retry] attribute wraps the DoSomething method with retry logic.
If that is possible, I would be very happy to learn how to implement it.
Thanks a lot for the help!
Of course, that is possible. However, it is far more complicated than in Python. Unlike Python, where decorators are executable code that may exchange the decorated object, attributes in C# are pure metadata: they do not have access to the object they are decorating but rather stand on their own.
Therefore, you have to connect the attribute and the method yourself, and in particular replace the method yourself (i.e. replace the function containing the core logic with one that also includes the retries etc.). The latter does not happen implicitly in C#; you have to do it explicitly.
It should work similarly to this:
class RetryExecutor
{
    public static void Call(Action action)
    {
        // Requires 'using System.Reflection;' for GetCustomAttribute.
        var attribute = action.Method.GetCustomAttribute(typeof(Retry));
        if (attribute != null)
        {
            ((Retry)attribute).Wrapper(action);
        }
        else
        {
            action();
        }
    }
}
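Usage could then look roughly like this (a sketch; note that the action has to be passed as a method group so that action.Method refers to the decorated DoSomething method rather than a lambda):

var someClassInstance = new SomeClass();

// Passing a method group, so action.Method is SomeClass.DoSomething and the
// [Retry] attribute can be found on it.
RetryExecutor.Call(someClassInstance.DoSomething);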
// Something that might need to be invoked
private void MightnInvoke()
{
    // Invoke if we need to.
    if (this.InvokeRequired) this.Invoke(new Action(this.MightnInvoke));

    // Do stuff here.
}
Is this the best way to invoke something on the fly in C#?
Basically I'm trying to avoid having extra code blocks where I don't need them.
Or is it better to use a synchronization context?
public void SyncContext(object state)
{
    try
    {
        int id = Thread.CurrentThread.ManagedThreadId;
        Console.WriteLine("Run thread: " + id);

        SynchronizationContext CommandContext = state as SynchronizationContext;

        // Do stuff here and then use the CommandContext.
        var Somestate = "Connected";
        CommandContext.Send(Sometask, Somestate.ToString());
        Thread.Sleep(250);
    }
    catch (System.ComponentModel.InvalidAsynchronousStateException)
    {
    }
}

public void Sometask(object state)
{
    // We can work in here and be on the same thread we came from.
    string Target = state as string;
    if (Target == "Connected")
    { }
}
UPDATE:
Coming back to this: after profiling thread concurrency, it turns out the sync-context method I gave as an example is indeed wrong. Don't use it unless you intend on changing it slightly to be thread safe.
In the official MSDN docs for SynchronizationContext it says;
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Which would lead me to believe that your implementation may not be thread safe (though if it's working, then maybe I've misinterpreted something).
Personally, it looks a little verbose for my liking.
I use the following in production and have never had an issue with threading.
public delegate void ActionCallback();

public static void AsyncUpdate(this Control ctrl, ActionCallback action)
{
    if (ctrl != null && !ctrl.IsDisposed && !ctrl.Disposing)
    {
        if (!ctrl.IsHandleCreated)
            ctrl.CreateControl();

        AsyncInvoke(ctrl, action);
    }
}

private static void AsyncInvoke(Control ctrl, ActionCallback action)
{
    if (ctrl.InvokeRequired)
        ctrl.BeginInvoke(action);
    else action();
}
Used as follows;
myTextBox.AsyncUpdate(() => myTextBox.Text = "Test");
In the code below, due to the interface, the class LazyBar must return a Task from its method (and for argument's sake can't be changed). If LazyBar's implementation is unusual in that it happens to run quickly and synchronously, what is the best way to return a no-operation Task from the method?
I have gone with Task.Delay(0) below; however, I would like to know if this has any performance side-effects if the function is called a lot (for argument's sake, say hundreds of times a second):
Does this syntactic sugar un-wind to something big?
Does it start clogging up my application's thread pool?
Is the compiler clever enough to deal with Delay(0) differently?
Would return Task.Run(() => { }); be any different?
Is there a better way?
using System.Threading.Tasks;

namespace MyAsyncTest
{
    internal interface IFooFace
    {
        Task WillBeLongRunningAsyncInTheMajorityOfImplementations();
    }

    /// <summary>
    /// An implementation, that unlike most cases, will not have a long-running
    /// operation in 'WillBeLongRunningAsyncInTheMajorityOfImplementations'
    /// </summary>
    internal class LazyBar : IFooFace
    {
        #region IFooFace Members

        public Task WillBeLongRunningAsyncInTheMajorityOfImplementations()
        {
            // First, do something really quick
            var x = 1;

            // Can't return 'null' here! Does 'Task.Delay(0)' have any performance considerations?
            // Is it a real no-op, or if I call this a lot, will it adversely affect the
            // underlying thread-pool? Better way?
            return Task.Delay(0);

            // Any different?
            // return Task.Run(() => { });

            // If my task returned something, I would do:
            // return Task.FromResult<int>(12345);
        }

        #endregion
    }

    internal class Program
    {
        private static void Main(string[] args)
        {
            Test();
        }

        private static async void Test()
        {
            IFooFace foo = FactoryCreate();
            await foo.WillBeLongRunningAsyncInTheMajorityOfImplementations();
            return;
        }

        private static IFooFace FactoryCreate()
        {
            return new LazyBar();
        }
    }
}
Today, I would recommend using Task.CompletedTask to accomplish this.
Pre-.NET 4.6:
Using Task.FromResult(0) or Task.FromResult<object>(null) will incur less overhead than creating a Task with a no-op expression. When creating a Task with a result pre-determined, there is no scheduling overhead involved.
To add to Reed Copsey's answer about using Task.FromResult, you can improve performance even more if you cache the already completed task since all instances of completed tasks are the same:
public static class TaskExtensions
{
    public static readonly Task CompletedTask = Task.FromResult(false);
}
With TaskExtensions.CompletedTask you can use the same instance throughout the entire app domain.
The latest version of the .NET Framework (v4.6) adds just that with the Task.CompletedTask static property:
Task completedTask = Task.CompletedTask;
Task.Delay(0) as in the accepted answer was a good approach, as it is a cached copy of a completed Task.
As of 4.6 there's now Task.CompletedTask which is more explicit in its purpose, but not only does Task.Delay(0) still return a single cached instance, it returns the same single cached instance as does Task.CompletedTask.
The cached nature of neither is guaranteed to remain constant, but since these are implementation-dependent optimisations only (that is, the code would still work correctly if the implementation changed to something else that is still valid), the use of Task.Delay(0) was better than the accepted answer.
return Task.CompletedTask; // this will make the compiler happy
Recently encountered this and kept getting warnings/errors about the method being void.
We're in the business of placating the compiler and this clears it up:
public async Task MyVoidAsyncMethod()
{
    await Task.CompletedTask;
}
This brings together the best of all the advice here so far. No return statement is necessary unless you're actually doing something in the method.
When you must return specified type:
Task.FromResult<MyClass>(null);
I prefer the Task completedTask = Task.CompletedTask; solution of .NET 4.6, but another approach is to mark the method async and simply not return anything:
public async Task WillBeLongRunningAsyncInTheMajorityOfImplementations()
{
}
You'll get a warning (CS1998 - Async function without await expression), but this is safe to ignore in this context.
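If the warning is a nuisance, one option (my suggestion, not part of the original answer) is to suppress it locally:

#pragma warning disable CS1998 // Async method lacks 'await' operators - intentional here
public async Task WillBeLongRunningAsyncInTheMajorityOfImplementations()
{
}
#pragma warning restore CS1998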
If you are using generics, all the answers above will give a compile error. You can use return default(T);. The sample below explains further.
public async Task<T> GetItemAsync<T>(string id)
{
    try
    {
        var response = await this._container.ReadItemAsync<T>(id, new PartitionKey(id));
        return response.Resource;
    }
    catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
    {
        return default(T);
    }
}
return await Task.FromResult(new MyClass());
I don't want to write my own because I'm afraid I might miss something and/or rip off other people's work, so is there an ObjectPool (or similar) class existing in a library for .NET?
By object pool, I mean a class that helps cache objects that take a long time to create, generally used to improve performance.
In the upcoming version of .NET (4.0), there's a ConcurrentBag<T> class which can easily be utilized in an ObjectPool<T> implementation; in fact there's an article on MSDN that shows you how to do precisely this.
If you don't have access to the latest .NET framework, you can get the System.Collections.Concurrent namespace (which has ConcurrentBag<T>) in .NET 3.5 from Microsoft's Reactive Extensions (Rx) library (in System.Threading.dll).
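For illustration, a minimal ConcurrentBag-backed pool along the lines of that MSDN article might look like this (a sketch, not a copy of the article's code):

using System;
using System.Collections.Concurrent;

public class ObjectPool<T>
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _generator;

    public ObjectPool(Func<T> generator)
    {
        if (generator == null) throw new ArgumentNullException("generator");
        _generator = generator;
    }

    // Take an existing instance if one is available, otherwise create a new one.
    public T GetObject()
    {
        T item;
        return _items.TryTake(out item) ? item : _generator();
    }

    // Return an instance to the pool for later reuse.
    public void PutObject(T item)
    {
        _items.Add(item);
    }
}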
UPDATE:
I'd also put forward BufferBlock<T> from TPL Dataflow. IIRC it's part of .NET now. The great thing about BufferBlock<T> is that you can wait asynchronously for items to become available using the Post<T> and ReceiveAsync<T> extension methods. Pretty handy in an async/await world.
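As a rough sketch of what that looks like (assuming the System.Threading.Tasks.Dataflow package is referenced; the names here are mine, not from the answer):

using System.IO;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class BufferBlockPoolDemo
{
    static async Task Demo()
    {
        var pool = new BufferBlock<MemoryStream>();

        // Seed the pool with a couple of instances.
        pool.Post(new MemoryStream(2048));
        pool.Post(new MemoryStream(2048));

        // Borrow an item, waiting asynchronously if none is available yet.
        MemoryStream buffer = await pool.ReceiveAsync();

        // ... use the buffer ...

        // Return it to the pool.
        pool.Post(buffer);
    }
}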
ORIGINAL ANSWER
A while back I faced this problem and came up with a lightweight (rough'n'ready) threadsafe (I hope) pool that has proved very useful, reusable and robust:
public class Pool<T> where T : class
{
    private readonly Queue<AsyncResult<T>> asyncQueue = new Queue<AsyncResult<T>>();
    private readonly Func<T> createFunction;
    private readonly HashSet<T> pool;
    private readonly Action<T> resetFunction;

    public Pool(Func<T> createFunction, Action<T> resetFunction, int poolCapacity)
    {
        this.createFunction = createFunction;
        this.resetFunction = resetFunction;
        pool = new HashSet<T>();

        CreatePoolItems(poolCapacity);
    }

    public Pool(Func<T> createFunction, int poolCapacity) : this(createFunction, null, poolCapacity)
    {
    }

    public int Count
    {
        get
        {
            return pool.Count;
        }
    }

    private void CreatePoolItems(int numItems)
    {
        for (var i = 0; i < numItems; i++)
        {
            var item = createFunction();
            pool.Add(item);
        }
    }

    public void Push(T item)
    {
        if (item == null)
        {
            Console.WriteLine("Push-ing null item. ERROR");
            throw new ArgumentNullException();
        }

        if (resetFunction != null)
        {
            resetFunction(item);
        }

        lock (asyncQueue)
        {
            if (asyncQueue.Count > 0)
            {
                var result = asyncQueue.Dequeue();
                result.SetAsCompletedAsync(item);
                return;
            }
        }

        lock (pool)
        {
            pool.Add(item);
        }
    }

    public T Pop()
    {
        T item;
        lock (pool)
        {
            if (pool.Count == 0)
            {
                return null;
            }

            item = pool.First();
            pool.Remove(item);
        }

        return item;
    }

    public IAsyncResult BeginPop(AsyncCallback callback)
    {
        var result = new AsyncResult<T>();
        result.AsyncCallback = callback;

        lock (pool)
        {
            if (pool.Count == 0)
            {
                lock (asyncQueue)
                {
                    asyncQueue.Enqueue(result);
                    return result;
                }
            }

            var poppedItem = pool.First();
            pool.Remove(poppedItem);
            result.SetAsCompleted(poppedItem);
            return result;
        }
    }

    public T EndPop(IAsyncResult asyncResult)
    {
        var result = (AsyncResult<T>) asyncResult;
        return result.EndInvoke();
    }
}
In order to avoid any interface requirements on the pooled objects, both the creation and resetting of the objects are performed by user-supplied delegates, e.g.:
Pool<MemoryStream> msPool = new Pool<MemoryStream>(() => new MemoryStream(2048), pms => {
    pms.Position = 0;
    pms.SetLength(0);
}, 500);
In the case that the pool is empty, the BeginPop/EndPop pair provide an APM (ish) means of retrieving the object asynchronously when one becomes available (using Jeff Richter's excellent AsyncResult<TResult> implementation).
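For example, a hypothetical use of that pair against the msPool above might be:

// Asynchronously borrow a stream; the callback runs when one becomes available.
msPool.BeginPop(ar =>
{
    MemoryStream ms = msPool.EndPop(ar);

    // ... use the stream ...

    // Return it to the pool when finished.
    msPool.Push(ms);
});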
I can't quite remember why it is constrained to T : class... there's probably no reason.
CodeProject has a sample ObjectPool implementation. Have a look here. Alternatively, there are some implementations here, here, and here.
How about System.Collections.Generic.Dictionary?
Sounds like you need a Factory pattern with caching.
You can try using .NET Reflector to look at the ThreadPool implementation.