C# Singleton Pattern with new Task

I have a listener class with methods that listen to an MSMQ queue, receive prices, and distribute them depending on some conditions. Prices arrive very quickly, and performance is critical at that point. While reading the code I realized that each item received from MSMQ creates a new Task to do some processing and distribute the price.
Here is the issue: when a price is received, the next price arrives before the first one has been distributed. To work around that, each price is handled on its own task, but the price distribution (Distributer.DistributePrice) is implemented as a singleton, and I cannot be sure whether each new task creates its own singleton or whether every task uses the same singleton instance.
If they all share the same instance, the code fails completely. Please let me know. Thank you.
My MSMQ listener is below:
private static void PriceQueuePeekCompleted(Object source, PeekCompletedEventArgs asyncResult)
{
    var mq = (MessageQueue)source;
    var recvMessage = mq.Receive();
    if (recvMessage != null)
    {
        var price = JsonConvert.DeserializeObject<FinancialInstrumentPrice>((string)recvMessage.Body);
        // Hand each received price off to its own task so the listener can keep draining the queue.
        Task.Factory.StartNew(() => Distributer.Instance.DistributePrice(price));
    }
    mq.BeginPeek();
}
And Distributer.Instance distributes the prices:
internal class Distributer
{
    private static Distributer _instance;

    public List<IOrderService> _PriceListeners { get; set; }

    private Distributer()
    {
    }

    public static Distributer Instance
    {
        get
        {
            // Lazy initialization without any synchronization.
            if (_instance == null)
            {
                _instance = new Distributer();
            }
            return _instance;
        }
    }

    public void DistributePrice(FinancialInstrumentPrice _priceToDistribute)
    {
        foreach (IOrderService orderService in _PriceListeners)
        {
            orderService.GetPrice(_priceToDistribute);
        }
    }
}
public class FinancialInstrumentPrice
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}
public interface IOrderService
{
    void GetPrice(FinancialInstrumentPrice _priceToDistribute);
}
Prices arrive from MSMQ so quickly that the second price comes in before the first one has been distributed. To deal with that I use Task.Factory to create a new task for each received price; however, as you can see above, the Distributer class implements the singleton pattern. As far as I know, a singleton means there is only one instance in memory. So what happens in this situation? I am creating a new task for each price, while the different tasks all try to use the singleton object. Does each new task create its own Distributer instance in memory, or do all tasks use the same Distributer instance?
If they all use the same Distributer instance, the design fails.
Thank you.
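For reference, a thread-safe variant of the lazy singleton above could use Lazy<T> (a sketch of mine, not part of the original code; Lazy<T> is available from .NET 4 onwards):
// Sketch only: thread-safe lazy initialization via Lazy<T>.
internal class Distributer
{
    private static readonly Lazy<Distributer> _instance =
        new Lazy<Distributer>(() => new Distributer());

    private Distributer() { }

    // Every task and every thread sees the same single instance.
    public static Distributer Instance
    {
        get { return _instance.Value; }
    }
}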

Related

Is there a design pattern for one input passed to n methods that each return the input for the next method

I'd like to know if there is a design pattern for this problem:
One input is used to construct an object (via constructor or via method return, I don't care); that object is fed to the next method or constructor. This is repeated over a user-specified set of processors, obviously throwing exceptions if there is a break in the chain of required inputs for the processors.
The output of all, some or none of the implemented processors is the same object.
I've got about 6 processors planned, possibly more in future.
Composition:
I'm not sure I like the composition design pattern, because not every object is intended to be an output of this process, and I can't see how to avoid outputting null values without the user knowing a value is going to be null.
Chain of responsibility:
Chain of responsibility is the way to go according to what I've heard, however I'm not sure I understand it. Is this design pattern suggesting to pass n function pointers to a function that runs through each? If so, I'm not sure how to set up the function that gets passed n function pointers.
my attempt:
I've got two interfaces that are implemented by n classes, i.e. (FirstProcessor, FirstInput, FirstOutput, SecondProcessor, SecondOutput, ThirdProcessor, ..., NProcessor, NOutput):
public interface IChainedOutput
{
    IChainedOutput Input { get; }
    FinalOutputObj GetFinalOutput();
}
public interface IChainedProcessor
{
    IChainedOutput Run(IChainedOutput previousOutput);
}
used like this:
IChainedOutput previous = new FirstProcessor().Run(originalInput);
foreach (IChainedProcessor processor in processorList.Skip(1))
{
    IChainedOutput current = processor.Run(previous);
    previous = current;
}
FinalOutputObj output = previous.GetFinalOutput();
Problems:
FinalOutputObj is coupled to all the processor implementations, which is bad. It isn't composed of all the IChainedOutput child class members, but it uses a good subset of them to calculate other values.
FinalOutputObj is being composed badly, and I don't see how I can avoid outputting null values if the list of processors does not contain every processor implemented.
There is a lot of downcasting in the constructors of the child classes, which is a red flag in OOP. However, the inputs for each block of processing logic are completely different: the first input is a couple of vectors, the second input is the output of the first (which includes a handful of custom types and more vectors), and so on.
Each IChainedOutput contains a reference to the inputs used to create it. Currently there is a one-to-one mapping from input to processor, but I'm not sure that will hold in the future. This involves more bad downcasting.
I'd also like not to have to organise the list of processors perfectly, since that makes it too easy for other developers to make mistakes here; the next processor selected should be the one that has the correct constructor.
You could try a decorator approach like this:
public interface IChainProcessor
{
IChainOutput Run(IChainOutput previousOutput);
}
public interface IChainOutput
{
string Value { get; }
}
public class OutputExample : IChainOutput
{
public string Value { get; }
public OutputExample(string value)
{
this.Value = value;
}
}
public abstract class Processor : IChainProcessor
{
protected IChainProcessor nextProcessor;
public IChainOutput Run(IChainOutput previousOutput)
{
var myOutput = this.MyLogic(previousOutput);
return this.nextProcessor == null ? myOutput : this.nextProcessor.Run(myOutput);
}
protected abstract IChainOutput MyLogic(IChainOutput input);
}
public class ProcessorA : Processor
{
public ProcessorA() { }
public ProcessorA(ProcessorB nextProcessor)
{
this.nextProcessor = nextProcessor;
}
protected override IChainOutput MyLogic(IChainOutput input)
{
return new OutputExample($"{input.Value} + Processor_A_Output");
}
}
public class ProcessorB : ProcessorA
{
public ProcessorB() { }
public ProcessorB(ProcessorC nextProcessor)
{
this.nextProcessor = nextProcessor;
}
protected override IChainOutput MyLogic(IChainOutput input)
{
return new OutputExample($"{input.Value} + Processor_B_Output");
}
}
public class ProcessorC : ProcessorB
{
protected override IChainOutput MyLogic(IChainOutput input)
{
return new OutputExample($"{input.Value} + Processor_C_Output");
}
}
The usage would be something like the below:
private static int Main(string[] args)
{
var chain = new ProcessorA(new ProcessorB(new ProcessorC()));
var simpleChain = new ProcessorA(new ProcessorC());
var verySimpleChain = new ProcessorA();
var initialInput = new OutputExample("Start");
Console.WriteLine(chain.Run(initialInput).Value);
Console.WriteLine(simpleChain.Run(initialInput).Value);
Console.WriteLine(verySimpleChain.Run(initialInput).Value);
return 0;
}
The output of this example is:
Start + Processor_A_Output + Processor_B_Output + Processor_C_Output
Start + Processor_A_Output + Processor_C_Output
Start + Processor_A_Output
The abstract Processor class provides a template method that you can implement in subclasses, so every ProcessorX class only defines MyLogic(IChainOutput input).
The Processors extend each other to enforce compile time preservation of processor order. So it is impossible to build a chain where ProcessorB comes before ProcessorA. It is possible though to build a chain that omits some processors as in the above example.
The example I provide here does not cater for the final output, which I know is one of your main concerns. To deal with this issue I would rather build a mapping class to convert IChainOutput into the final format (I don't know the real structure of your data, so maybe this is not possible).
in some of my cases it would make sense to have the output of one processor be the input for multiple other processors
Using this pattern it would also be possible to construct a processor 'tree' rather than a chain, by allowing the Processor class to have a list of next steps. Your usage would then become something like this:
var chain = new ProcessorA(new ProcessorB(new ProcessorC()), new ProcessorB(new ProcessorD()));
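A rough sketch of what such a branching processor could look like (my own illustration, not from the original answer; note it gives up the compile-time ordering guarantee, because it accepts any IChainProcessor as a next step):
// Sketch: a processor that fans its output out to a list of next steps, turning the chain into a tree.
public abstract class BranchingProcessor : IChainProcessor
{
    private readonly List<IChainProcessor> nextSteps = new List<IChainProcessor>();

    protected BranchingProcessor(params IChainProcessor[] next)
    {
        nextSteps.AddRange(next);
    }

    public IChainOutput Run(IChainOutput previousOutput)
    {
        var myOutput = this.MyLogic(previousOutput);
        foreach (var step in nextSteps)
        {
            step.Run(myOutput); // each branch receives this node's output
        }
        return myOutput;
    }

    protected abstract IChainOutput MyLogic(IChainOutput input);
}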
I hope this can help you.
If I understood your explanation correctly, you can use delegates to overcome your problem. One of the important points about delegates is that they can be chained together, so that you can call any number of methods in a single invocation.
Each processor transforms a specific input into a specific output. Therefore the processor implementation should know only those two types.
public interface IStepProcessor<TInput, TOutput>
{
TOutput Process(TInput input);
}
Ideally the client code should know only two types of data: the input data and the final product. The client code doesn't care whether there were intermediary steps in the middle; the client uses the conveyor as a black box.
public delegate TOutput Conveyor<TInput, TOutput>(TInput input);
Still, some external code should understand how the whole transformation is done. That code should know all the intermediate data types and have access to all the intermediate processors. This is best done with dependency injection.
public class Factory
{
private readonly IStepProcessor<IDataInput, IDataStep1> m_Step1;
private readonly IStepProcessor<IDataStep1, IDataStep2> m_Task2;
private readonly IStepProcessor<IDataStep2, IDataStep3> m_Task3;
private readonly IStepProcessor<IDataStep3, IDataStepN> m_TaskN;
private readonly IStepProcessor<IDataStepN, IDataOutput> m_FinalTask;
public Factory(
IStepProcessor<IDataInput, IDataStep1> task1,
IStepProcessor<IDataStep1, IDataStep2> task2,
IStepProcessor<IDataStep2, IDataStep3> task3,
IStepProcessor<IDataStep3, IDataStepN> taskN,
IStepProcessor<IDataStepN, IDataOutput> finalTask
)
{
m_Step1 = task1;
m_Task2 = task2;
m_Task3 = task3;
m_TaskN = taskN;
m_FinalTask = finalTask;
}
public Conveyor<IDataInput, IDataOutput> BuildConveyor()
{
return (input) =>
{
return m_FinalTask.Process(
m_TaskN.Process(
m_Task3.Process(
m_Task2.Process(
m_Step1.Process(input)))));
};
}
}
Here goes my offer
public interface IDataInput { }
public interface IDataStep1 { }
public interface IDataStep2 { }
public interface IDataStep3 { }
public interface IDataStepN { }
public interface IDataOutput { }
public interface IStepProcessor<TInput, TOutput>
{
TOutput Process(TInput input);
}
public delegate TOutput Conveyor<TInput, TOutput>(TInput input);
public class Factory
{
private readonly IStepProcessor<IDataInput, IDataStep1> m_Step1;
private readonly IStepProcessor<IDataStep1, IDataStep2> m_Task2;
private readonly IStepProcessor<IDataStep2, IDataStep3> m_Task3;
private readonly IStepProcessor<IDataStep3, IDataStepN> m_TaskN;
private readonly IStepProcessor<IDataStepN, IDataOutput> m_FinalTask;
public Factory(
IStepProcessor<IDataInput, IDataStep1> task1,
IStepProcessor<IDataStep1, IDataStep2> task2,
IStepProcessor<IDataStep2, IDataStep3> task3,
IStepProcessor<IDataStep3, IDataStepN> taskN,
IStepProcessor<IDataStepN, IDataOutput> finalTask
)
{
m_Step1 = task1;
m_Task2 = task2;
m_Task3 = task3;
m_TaskN = taskN;
m_FinalTask = finalTask;
}
public Conveyor<IDataInput, IDataOutput> BuildConveyor()
{
return (input) =>
{
return m_FinalTask.Process(
m_TaskN.Process(
m_Task3.Process(
m_Task2.Process(
m_Step1.Process(input)))));
};
}
}
public class Client
{
private readonly Conveyor<IDataInput, IDataOutput> m_Conveyor;
public Client(Conveyor<IDataInput, IDataOutput> conveyor)
{
m_Conveyor = conveyor;
}
public void DealWithInputAfterTransformingIt(IDataInput input)
{
var output = m_Conveyor(input);
Console.Write($"Mind your business here {typeof(IDataOutput).IsAssignableFrom(output.GetType())}");
}
}
public class Program {
public void StartingPoint(IServiceProvider serviceProvider)
{
// ISomeDIContainer and CreateDI() are placeholders for whatever DI container you use.
ISomeDIContainer container = CreateDI();
container.Register<IStepProcessor<IDataInput, IDataStep1>, Step1Imp>();
container.Register<IStepProcessor<IDataStep1, IDataStep2>, Step2Imp>();
container.Register<IStepProcessor<IDataStep2, IDataStep3>, Step3Imp>();
container.Register<IStepProcessor<IDataStep3, IDataStepN>, StepNImp>();
container.Register<IStepProcessor<IDataStepN, IDataOutput>, StepOImp>();
container.Register<Factory>();
Factory factory = container.Resolve<Factory>();
var conveyor = factory.BuildConveyor();
var client = new Client(conveyor);
}
}
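If you don't have a DI container handy, the same wiring can also be done by hand (a sketch; Step1Imp through StepOImp are the hypothetical step implementations registered above):
// Manual composition without a DI container, using the same hypothetical step implementations.
var factory = new Factory(
    new Step1Imp(),  // IDataInput -> IDataStep1
    new Step2Imp(),  // IDataStep1 -> IDataStep2
    new Step3Imp(),  // IDataStep2 -> IDataStep3
    new StepNImp(),  // IDataStep3 -> IDataStepN
    new StepOImp()); // IDataStepN -> IDataOutput
var client = new Client(factory.BuildConveyor());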

Locking and thread safe around streaming API

Probably a fairly simple question but I want to know the best practice in making the code thread safe.
I am using an external non-thread-safe API within a multi-threaded environment.
It returns an IEnumerable<ApiDto>.
I then map each of the ApiDto to our application's DTO: MyDto.
How do I ensure that the code is thread-safe?
For example:
This is my class that gets items from the API:
public class ApiRepo
{
private IApi api;
public ApiRepo()
{
api=new Api("url");
}
public IEnumerable<MyDto> GetItems()
{
var apiDtos = api.GetNonThreadSafeItems();
foreach(var apiDto in apiDtos)
{
var myDto = new MyDto(apiDto.Name);
yield return myDto;
}
}
}
This is my client application.
Multiple instances of Client are created and data is retrieved from the API.
public class Client
{
public void GetData()
{
var items = new ApiRepo().GetItems().ToList();
Console.WriteLine(items.Count);
}
}
Should I put a lock within Client.GetData() or is there any better way to make the code thread-safe?
An API being 'not thread safe' means it operates on some global resource without a synchronization mechanism. So, in order to get correct results from the API, you need to make sure only one thread calls it at a time. Based on your sample, the easiest way to do that is something like:
public class ApiRepo
{
static private object theLock = new object();
private IApi api;
public ApiRepo()
{
api=new Api("url");
}
public IEnumerable<MyDto> GetItems()
{
IEnumerable<ApiDto> apiDtos = null;
lock(theLock)
{
apiDtos = api.GetNonThreadSafeItems();
}
foreach(var apiDto in apiDtos)
{
var myDto = new MyDto(apiDto.Name);
yield return myDto;
}
}
}
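One caveat worth noting (my addition, not part of the answer above): if GetNonThreadSafeItems returns a lazily evaluated sequence, the actual enumeration of apiDtos still happens outside the lock. A variant that materializes the results while the lock is held could look like this:
public IEnumerable<MyDto> GetItems()
{
    List<ApiDto> apiDtos;
    lock (theLock)
    {
        // ToList() forces the enumeration to finish while the lock is held,
        // so the non-thread-safe API is never touched by two threads at once.
        apiDtos = api.GetNonThreadSafeItems().ToList();
    }
    foreach (var apiDto in apiDtos)
    {
        yield return new MyDto(apiDto.Name);
    }
}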

C# lock to simultaneously read/write and display results

here's my question:
Say I have this program (I'll try to simplify as much as I can):
receiveResultsThread waits for results from different network clients, while displayResultToUIThread updates the UI with all the results received.
class Program
{
private static Tests TestHolder;
static void Main(string[] args)
{
TestHolder = new Tests();
Thread receiveResultsThread = new Thread(ReceiveResult);
receiveResultsThread.Start();
Thread displayResultToUIThread = new Thread(DisplayResults);
displayResultToUIThread.Start();
Console.ReadKey();
}
public static void ReceiveResult()
{
while (true)
{
if (IsNewTestResultReceivedFromNetwork())
{
lock (Tests.testLock)
TestHolder.ExecutedTests.Add(new Test { Result = "OK" });
}
Thread.Sleep(200);
}
}
private static void DisplayResults(object obj)
{
while (true)
{
lock (Tests.testLock)
{
UIManager.DisplayAllResultInUIGrid(TestHolder.ExecutedTests);
}
Thread.Sleep(200);
}
}
}
class Test
{
public string Result { get; set; }
}
class Tests
{
public static readonly object testLock = new object();
public List<Test> ExecutedTests;
public Tests()
{
ExecutedTests = new List<Test>();
}
}
class UIManager
{
public static void DisplayAllResultInUIGrid(List<Test> list)
{
//Code to update UI.
}
}
Considering that the goal is not to update the UI while the other thread is adding tests to the list, is it safe to use:
lock (Tests.testLock)
or should I use:
lock (TestHolder.testLock)
(changing testLock from a static to an instance member)?
Do you think this is a good way to write this kind of program or can you suggest a better pattern?
Thank you for your help!
Public (not talking about public static) lock objects tend to be dangerous. Please see here
The reason it's bad practice to lock on a public object is that you can never be sure who ELSE is locking on that object.
Furthermore just having a List<T> and adding objects from an outer scope could be a smell, too.
In my opinion it'd be a better idea to have a method AddTest in Tests
class Tests
{
private static readonly object testLock = new object();
private List<Test> executedTests;
public Tests()
{
executedTests = new List<Test>();
}
public void AddTest(Test t)
{
lock(testLock)
{
executedTests.Add(t);
}
}
public IEnumerable<Test> GetTests()
{
lock(testLock)
{
return executedTests.ToArray();
}
}
[...]
}
Clients of your Tests class do not have to worry about using the lock object correctly. More precisely, they don't have to worry about any of the internals of your class.
You could also rename your class to ConcurrentTestsCollection or the like, so that users of the class know that it's thread safe to some extent.
While you can use Tasks and the async/await keywords to do this less verbosely, I don't think it will fully solve your question.
I will assume that ExecutedTests is a List (or similar) that you want to be thread safe, which is why you are taking a lock while accessing it.
I would make the list, itself, thread safe, rather than the operations against it. This will remove the need for a lock or a lock object.
You could implement this yourself or use something in the System.Collections.Concurrent namespace.
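For example, a minimal sketch using ConcurrentQueue<Test> (my choice of collection; the answer above doesn't name a specific one):
using System.Collections.Concurrent;
using System.Linq;

class Tests
{
    // A thread-safe collection removes the need for explicit locking around writes and reads.
    public readonly ConcurrentQueue<Test> ExecutedTests = new ConcurrentQueue<Test>();
}

// Receiving thread: no explicit lock needed.
// TestHolder.ExecutedTests.Enqueue(new Test { Result = "OK" });

// UI thread: take a snapshot of the current contents for display.
// UIManager.DisplayAllResultInUIGrid(TestHolder.ExecutedTests.ToList());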
P.S.
If the threads are meant to be closed (aborted) when the process exits, you should set each Thread's IsBackground property to true.

Does C# have a "ThreadLocal" analog (for data members) to the "ThreadStatic" attribute?

I've found the "ThreadStatic" attribute to be extremely useful recently, but makes me now want a "ThreadLocal" type attribute that lets me have non-static data members on a per-thread basis.
Now I'm aware that this would have some non-trivial implications, but:
Does such a thing already exist built into C#/.NET? Or, since so far it appears the answer is no (for .NET < 4.0), is there a commonly used implementation out there?
I can think of a reasonable way to implement it myself, but would just use something that already existed if it were available.
Straw-man example of what I'm looking for, in case it doesn't already exist:
class Foo
{
    // Note: the field initializer runs only for the first thread that touches the type;
    // other threads would see a null dictionary unless it is created lazily.
    [ThreadStatic]
    static Dictionary<Object, int> threadLocalValues = new Dictionary<Object, int>();

    int defaultValue = 0;

    int ThreadLocalMember
    {
        get
        {
            int value;
            if (!threadLocalValues.TryGetValue(this, out value))
            {
                value = defaultValue;
                threadLocalValues[this] = value;
            }
            return value;
        }
        set { threadLocalValues[this] = value; }
    }
}
Please forgive any C# ignorance. I'm a C++ developer that has only recently been getting into the more interesting features of C# and .net
I'm limited to .net 3.0 and maybe 3.5 (project has/will soon move to 3.5).
Specific use-case is callback lists that are thread specific (using imaginary [ThreadLocal] attribute) a la:
class NonSingletonSharedThing
{
[ThreadLocal] List<Callback> callbacks;
public void ThreadLocalRegisterCallback( Callback somecallback )
{
callbacks.Add(somecallback);
}
public void ThreadLocalDoCallbacks()
{
foreach( var callback in callbacks )
callback.invoke();
}
}
Enter .NET 4.0 and System.Threading.ThreadLocal<T>!
If you're stuck in 3.5 (or earlier), there are some functions you should look at, like Thread.AllocateDataSlot, which should do what you want.
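A minimal sketch of both options (my own illustration; ThreadLocal<T> on .NET 4, the thread data-slot APIs on 3.5 and earlier):
// .NET 4.0+: System.Threading.ThreadLocal<T> gives each thread its own value.
var perThreadCounter = new ThreadLocal<int>(() => 0);
perThreadCounter.Value++;               // affects only the current thread's copy

// .NET 3.5 and earlier: data slots on the Thread class.
LocalDataStoreSlot slot = Thread.AllocateDataSlot();
Thread.SetData(slot, 42);               // stored per thread
int value = (int)Thread.GetData(slot);  // reads the current thread's copy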
You should think about this twice: you are essentially creating a memory leak. Every object created by the thread stays referenced and can't be garbage collected until the thread ends.
If you are looking to store unique data on a per-thread basis you could use Thread.SetData. Be sure to read up on the pros and cons http://msdn.microsoft.com/en-us/library/6sby1byh.aspx as this has performance implications.
Consider:
Rather than try to give each member variable in an object a thread-specific value, give each thread its own object instance. -- pass the object to the threadstart as state, or make the threadstart method a member of the object that the thread will "own", and create a new instance for each thread that you spawn.
Edit
(in response to Catskul's remark):
Here's an example of encapsulating the struct:
public class TheStructWorkerClass
{
private StructData TheStruct;
public TheStructWorkerClass(StructData yourStruct)
{
this.TheStruct = yourStruct;
}
public void ExecuteAsync()
{
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorkerMethod);
}
private void TheWorkerMethod(object state)
{
// your processing logic here
// you can access your structure as this.TheStruct;
// only this thread has access to the struct (as long as you don't pass the struct
// to another worker class)
}
}
// now the code that launches the async process does this:
var worker = new TheStructWorkerClass(yourStruct);
worker.ExecuteAsync();
Now here's option 2 (pass the struct as state)
{
// (from somewhere in your existing code)
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorker, myStruct);
}
private void TheWorker(object state)
{
StructData yourStruct = (StructData)state;
// now do stuff with your struct
// works fine as long as you never pass the same instance of your struct to 2 different threads.
}
I ended up implementing and testing a version of what I had originally suggested:
public class ThreadLocal<T>
{
[ThreadStatic] private static Dictionary<object, T> _lookupTable;
private Dictionary<object, T> LookupTable
{
get
{
if ( _lookupTable == null)
_lookupTable = new Dictionary<object, T>();
return _lookupTable;
}
}
private object key = new object(); //lazy hash key creation handles replacement
private T originalValue;
public ThreadLocal( T value )
{
originalValue = value;
}
~ThreadLocal()
{
LookupTable.Remove(key);
}
public void Set( T value)
{
LookupTable[key] = value;
}
public T Get()
{
T returnValue;
if (!LookupTable.TryGetValue(key, out returnValue))
{
// Not set on this thread yet: seed with the original value.
returnValue = originalValue;
Set(returnValue);
}
return returnValue;
}
}
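Hypothetical usage of the class above, just to show the intent (each thread that touches the instance sees its own copy, seeded with the constructor value):
var perThread = new ThreadLocal<int>(5);
perThread.Set(perThread.Get() + 1);
Console.WriteLine(perThread.Get()); // 6 on this thread; a different thread would still read 5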
Although I am still not sure about when your use case would make sense (see my comment on the question itself), I would like to contribute a working example that is in my opinion more readable than thread-local storage (whether static or instance). The example is using .NET 3.5:
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Linq;
namespace SimulatedThreadLocal
{
public sealed class Notifier
{
public void Register(Func<string> callback)
{
var id = Thread.CurrentThread.ManagedThreadId;
lock (this._callbacks)
{
List<Func<string>> list;
if (!this._callbacks.TryGetValue(id, out list))
{
this._callbacks[id] = list = new List<Func<string>>();
}
list.Add(callback);
}
}
public void Execute()
{
var id = Thread.CurrentThread.ManagedThreadId;
IEnumerable<Func<string>> threadCallbacks;
string status;
lock (this._callbacks)
{
status = string.Format("Notifier has callbacks from {0} threads, total {1} callbacks{2}Executing on thread {3}",
this._callbacks.Count,
this._callbacks.SelectMany(d => d.Value).Count(),
Environment.NewLine,
Thread.CurrentThread.ManagedThreadId);
threadCallbacks = this._callbacks[id]; // we can use the original collection, as only this thread can add to it and we're not going to be adding right now
}
var b = new StringBuilder();
foreach (var callback in threadCallbacks)
{
b.AppendLine(callback());
}
Console.ForegroundColor = ConsoleColor.DarkYellow;
Console.WriteLine(status);
Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine(b.ToString());
}
private readonly Dictionary<int, List<Func<string>>> _callbacks = new Dictionary<int, List<Func<string>>>();
}
public static class Program
{
public static void Main(string[] args)
{
try
{
var notifier = new Notifier();
var syncMainThread = new ManualResetEvent(false);
var syncWorkerThread = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(delegate // will create closure to see notifier and sync* events
{
notifier.Register(() => string.Format("Worker thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
notifier.Register(() => string.Format("Worker thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
syncMainThread.Set();
});
notifier.Register(() => string.Format("Main thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.WaitOne(); // wait for worker thread to add its notification
syncMainThread.Reset();
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
notifier.Register(() => string.Format("Main thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
}
finally
{
Console.ResetColor();
}
}
}
}
When you compile and run the above program, you should get output like this:
(Screenshot of the program's console output: http://img695.imageshack.us/img695/991/threadlocal.png)
Based on your use-case I assume this is what you're trying to achieve. The example first adds two callbacks from two different contexts, main and worker threads. Then the example runs notification first from main and then from worker threads. The callbacks that are executed are effectively filtered by current thread ID. Just to show things are working as expected, the example adds two more callbacks (for a total of 4) and again runs the notification from the context of main and worker threads.
Note that Notifier class is a regular instance that can have state, multiple instances, etc (again, as per your question's use-case). No static or thread-static or thread-local is used by the example.
I would appreciate if you could look at the code and let me know if I misunderstood what you're trying to achieve or if a technique like this would meet your needs.
I'm not sure how you're spawning your threads in the first place, but there are ways to give each thread its own thread-local storage, without using hackish workarounds like the code you posted in your question.
public void SpawnSomeThreads(int threads)
{
for (int i = 0; i < threads; i++)
{
Thread t = new Thread(WorkerThread);
WorkerThreadContext context = new WorkerThreadContext
{
// whatever data the thread needs passed into it
};
t.Start(context);
}
}
private class WorkerThreadContext
{
public string Data { get; set; }
public int OtherData { get; set; }
}
private void WorkerThread(object parameter)
{
WorkerThreadContext context = (WorkerThreadContext) parameter;
// do work here
}
This obviously ignores waiting on the threads to finish their work, making sure accesses to any shared state is thread-safe across all the worker threads, but you get the idea.
Whilst the posted solution looks elegant, it leaks objects. The finalizer (LookupTable.Remove(key)) runs only in the context of the GC finalizer thread, so it is likely just creating more garbage by building another lookup table there.
You need to remove the object from the lookup table of every thread that has accessed the ThreadLocal. The only elegant way I can think of to solve this is via a weak-keyed dictionary, a data structure which is strangely lacking from C#.

Publishing to multiple subscribes in RX

I am investigating how to develop a plugin framework for a project, and Rx seems like a good fit for what I am trying to achieve. Ultimately, the project will be a set of plugins (modular functionality) that can be configured via XML to do different things. The requirements are as follows:
Enforce a modular architecture even within a plugin. This encourages loose coupling and potentially minimizes complexity. This hopefully should make individual plugin functionality easier to model and test
Enforce immutability with respect to data to reduce complexity and ensure that state management within modules is kept to a minimum
Discourage manual thread creation by providing thread pool threads to do work within modules wherever possible
In my mind, a plugin is essentially a data transformation entity. This means a plugin either
Takes in some data and transforms it in some way to produce new data (Not shown here)
Generates data in itself and pushes it out to observers
Takes in some data and does some work on the data without notifying outsiders
If you take the concept further, a plugin can consist of a number of all three types above. For example, within a plugin you can have an IntGenerator module that generates some data and feeds it to a ConsoleWorkUnit module, etc. So what I am trying to model in the main function is the wiring that a plugin would need to do its work.
To that end, I have the following base classes using the Immutable NuGet package from Microsoft. What I am trying to achieve is to abstract away the Rx calls so they can be used in modules; the ultimate aim would be to wrap calls to Buffer etc. in abstract classes that can be used to compose complex queries and modules. This way the code is a bit more self-documenting than having to read all the code within a module to find out that it subscribes to a buffer or window of type X, etc.
public abstract class OutputBase<TOutput> : SendOutputBase<TOutput>
{
public abstract void Work();
}
public interface IBufferedBase<TOutput>
{
void Work(IList<ImmutableList<Data<TOutput>>> list);
}
public abstract class BufferedWorkBase<TInput> : IBufferedBase<TInput>
{
public abstract void Work(IList<ImmutableList<Data<TInput>>> input);
}
public abstract class SendOutputBase<TOutput>
{
private readonly ReplaySubject<ImmutableList<Data<TOutput>>> _outputNotifier;
private readonly IObservable<ImmutableList<Data<TOutput>>> _observable;
protected SendOutputBase()
{
_outputNotifier = new ReplaySubject<ImmutableList<Data<TOutput>>>(10);
// Chain the schedulers; assigning twice would discard the SubscribeOn call.
_observable = _outputNotifier
.SubscribeOn(ThreadPoolScheduler.Instance)
.ObserveOn(ThreadPoolScheduler.Instance);
}
protected void SetOutputTo(ImmutableList<Data<TOutput>> output)
{
_outputNotifier.OnNext(output);
}
public void ConnectOutputTo(IWorkBase<TOutput> unit)
{
_observable.Subscribe(unit.Work);
}
public void BufferOutputTo(int count, IBufferedBase<TOutput> unit)
{
_observable.Buffer(count).Subscribe(unit.Work);
}
}
public abstract class WorkBase<TInput> : IWorkBase<TInput>
{
public abstract void Work(ImmutableList<Data<TInput>> input);
}
public interface IWorkBase<TInput>
{
void Work(ImmutableList<Data<TInput>> input);
}
public class Data<T>
{
private readonly T _value;
private Data(T value)
{
_value = value;
}
public static Data<TData> Create<TData>(TData value)
{
return new Data<TData>(value);
}
public T Value { get { return _value; } }
}
These base classes are used to create three classes: one that generates some int data, one that prints out the data as it arrives, and a last one that buffers the data as it comes in and sums the values in threes.
public class IntGenerator : OutputBase<int>
{
public override void Work()
{
var list = ImmutableList<Data<int>>.Empty;
var builder = list.ToBuilder();
for (var i = 0; i < 1000; i++)
{
builder.Add(Data<int>.Create(i));
}
SetOutputTo(builder.ToImmutable());
}
}
public class ConsoleWorkUnit : WorkBase<int>
{
public override void Work(ImmutableList<Data<int>> input)
{
foreach (var data in input)
{
Console.WriteLine("ConsoleWorkUnit printing {0}", data.Value);
}
}
}
public class SumPrinter : WorkBase<int>
{
public override void Work(ImmutableList<Data<int>> input)
{
input.ToObservable().Buffer(2).Subscribe(PrintSum);
}
private void PrintSum(IList<Data<int>> obj)
{
Console.WriteLine("Sum of {0}, {1} is {2} ", obj.First().Value,obj.Last().Value ,obj.Sum(x=>x.Value) );
}
}
These are run in a main like this
var intgen = new IntGenerator();
var cons = new ConsoleWorkUnit();
var sumPrinter = new SumPrinter();
intgen.ConnectOutputTo(cons);
intgen.BufferOutputTo(3,sumPrinter);
Task.Factory.StartNew(intgen.Work);
Console.ReadLine();
Is this architecture sound?
You are buffering your observable (.Buffer(count)) so that it only signals after count notifications arrive.
However, your IntGenerator.Work only ever produces a single value (one ImmutableList pushed through OnNext). Thus you never "fill" the buffer and trigger downstream notifications.
Either change Work so that it eventually produces more values, or have it complete the observable stream when it finishes its work. Buffer will release the remaining buffered values when the stream completes. To do this, IntGenerator.Work needs to cause a call to _outputNotifier.OnCompleted() somewhere.
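A minimal sketch of the second option (my illustration; the CompleteOutput helper is an assumption, not part of the original code):
// In SendOutputBase<TOutput>: expose completion to derived classes so the generator
// can end the stream and let Buffer flush whatever it is still holding.
protected void CompleteOutput()
{
    _outputNotifier.OnCompleted();
}

// Then, at the end of IntGenerator.Work(), after SetOutputTo(builder.ToImmutable()):
// CompleteOutput();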
