AccessViolationException on sp_session_player_load() - C#

I'm trying to create a streaming application based on Spotify's libspotify SDK.
To achieve this in C# I'm using the ohLibspotify bindings and wrapper. This is only a thin abstraction layer, so most of it is a 1:1 mapping to the libspotify SDK. To play the incoming PCM data I'm using the NAudio library.
Most of the time I can play the first track. Then, when I load the second one, I get an AccessViolationException while calling sp_session_player_load(). Sometimes this happens on the very first track I try to play, and sometimes only on the third.
This is the function I use to play a track.
public void playTrack(string track, string juke)
{
    new Thread(new ThreadStart(() =>
    {
        var playable = Link.CreateFromString(string.Format("spotify:track:{0}", track)).AsTrack();
        if (playing)
        {
            player.Pause();
            App.Logic.spotify.sp_session.PlayerUnload();
        }
        buffer = new BufferedWaveProvider(new WaveFormat())
        {
            BufferDuration = TimeSpan.FromSeconds(120),
            DiscardOnBufferOverflow = false
        };
        var waitEvent = new AutoResetEvent(false);
        while (!playable.IsLoaded())
        {
            waitEvent.WaitOne(30);
        }
        App.Logic.spotify.sp_session.PlayerLoad(playable);
        App.Logic.spotify.sp_session.PlayerPlay(true);
        player = new WaveOut();
        player.Init(buffer);
        player.Play();
        playing = true;
    })).Start();
}
The AccessViolationException occurs on the call to NativeMethods.sp_session_player_load in the following piece of code within the wrapper library.
[DllImport("libspotify")]
internal static extern SpotifyError sp_session_player_load(IntPtr #session, IntPtr #track);
public void PlayerLoad(Track #track)
{
SpotifyError errorValue;
errorValue = NativeMethods.sp_session_player_load(this._handle, track._handle);
SpotifyMarshalling.CheckError(errorValue);
}
The streaming callbacks:
public override void GetAudioBufferStats(SpotifySession session, out AudioBufferStats stats)
{
    stats = new AudioBufferStats()
    {
        samples = App.Logic.spotify.player.buffer.BufferedBytes / 2,
        stutter = 0
    };
}

public override int MusicDelivery(SpotifySession session, AudioFormat format, IntPtr frames, int num_frames)
{
    int incoming_size = num_frames * format.channels * 2;
    try
    {
        if (incoming_size > sample_buffer.Length)
        {
            short rendered_frames = Convert.ToInt16(Math.Floor(sample_buffer.Length / format.channels / 2d));
            short rendered_size = Convert.ToInt16(rendered_frames * format.channels * 2);
            Marshal.Copy(frames, sample_buffer, 0, rendered_size);
            App.Logic.spotify.player.buffer.AddSamples(sample_buffer, 0, rendered_size);
            return rendered_frames;
        }
        else
        {
            Marshal.Copy(frames, sample_buffer, 0, incoming_size);
            App.Logic.spotify.player.buffer.AddSamples(sample_buffer, 0, incoming_size);
            return num_frames;
        }
    }
    catch (Exception e)
    {
        return 0;
    }
}

As @Weeble said in his comment to my question, it is very hard to diagnose the possible source of an AccessViolationException. In my case it was threading: there were multiple threads accessing the Spotify session object at once.
If this could be your problem as well, you might want to look at this article. It talks about thread synchronization in C# and VB.NET. It's really quite simple:
lock (sessionLock)
{
    App.Logic.spotify.sp_session.PlayerLoad(playable);
    App.Logic.spotify.sp_session.PlayerPlay(true);
}
The sessionLock object can be a plain instance of Object. It's not recommended to lock on objects that are used for anything else. In my case I did the following:
public class Player
{
    private object sessionLock = new object();

    // Rest of code with locks
}
This way you can lock on sessionLock every time you want to access the SpotifySession object, so that when thread 1 is loading a song and thread 2 wants to process events at the same time, thread 2 is blocked until the lock on sessionLock has been released.
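For example, the thread that drives libspotify's event loop should then take the same lock around its session access. A sketch, assuming the wrapper exposes a ProcessEvents method over sp_session_process_events (the method name and signature here are assumptions):

// Hypothetical event loop - verify the actual wrapper API.
while (running)
{
    int nextTimeoutMs;
    lock (sessionLock) // the same object that guards PlayerLoad/PlayerPlay
    {
        App.Logic.spotify.sp_session.ProcessEvents(out nextTimeoutMs);
    }
    Thread.Sleep(nextTimeoutMs);
}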
As @Weeble suggested, there are some other things you might want to look into if threading is not the issue:
You might not be handling Spotify's ref-counting correctly, accidentally decreasing a ref-count too early or forgetting to increment one where it is necessary (see the sketch after this list).
You might be corrupting memory in the music callbacks that deal with native pointers.
There might be a bug in ohLibspotify or libspotify. If you think this is the case then go to the ohLibspotify repo or the openhome forum to report your issue.
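To make the ref-counting point concrete, here is a purely hypothetical sketch. libspotify itself exposes sp_link_release() and sp_track_add_ref()/sp_track_release(); whether and how ohLibspotify wraps them is something to verify against its source:

// Hypothetical sketch only - check the ohLibspotify source for the real names.
var link = Link.CreateFromString(string.Format("spotify:track:{0}", track));
var playable = link.AsTrack(); // whether AsTrack() adds a reference is wrapper-specific
// ... load and play the track ...
link.Release(); // assumed wrapper equivalent of sp_link_release();
                // releasing too early can leave playable pointing at freed memory,
                // while forgetting to release leaks a reference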

Related

I need to access global (unmanaged) memory in C#

I am pretty new to C# and could use some help on an audio project.
My audio input buffers call a method when they are filled up. In that method I marshal the buffers into a local float[], which I pass to a function where some audio processing is done. After processing, the function returns the manipulated float[], which I copy with Marshal.Copy into the audio output buffer. It works, but it is pretty hard to get the audio processing done fast enough to pass the result back without ending up with ugly glitches. If I enlarge the audio buffers it gets better, but the latency in the signal chain becomes intolerably high.
One problem is the GC. My DSP routine does some FFT, and the methods frequently need to allocate local variables. I think this is slowing down my process a lot.
So I need a way to allocate a few pieces of unmanaged memory once (and re-access them), keep this memory for the entire runtime, and just reference it from the methods.
I found, e.g.:
IntPtr hglobal = Marshal.AllocHGlobal(8192);
Marshal.FreeHGlobal(hglobal);
So what I tried is to define a global static class "Globals" with a static member and assign that IntPtr to it:
Globals.mem1 = hglobal;
From within any other method I can now access it, e.g.:
int[] f = new int[2];
f[0] = 111;
f[1] = 222;
Marshal.Copy(f, 0, Globals.mem1, 2);
Now comes my problem:
If I want to access this int[] from the example above in another method, how can I do that?
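For reference, reading the values back out of the unmanaged block is the mirror image of the write, using the Marshal.Copy overload that copies from an IntPtr into a managed array:

int[] f2 = new int[2];
Marshal.Copy(Globals.mem1, f2, 0, 2); // copy 2 ints back from unmanaged memory
// f2 now holds { 111, 222 }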
thank you for your fast help.
It seems I was a little imprecise, sorry.
My audio device driver raises a buffer-filled event, which I catch (in pseudocode, since I don't have access to my home desktop right now). It looks like:
void buffer(....)
{
    byte[] buf = new byte[asiobuffersize];
    Marshal.Copy(asioinbuffers, 0, buf, asiobufferlength);
    buf = manipulate(buf);
    Marshal.Copy(buf, 0, asiooutbuffer, asiobufferlength);
}
The manipulate function does some conversion from byte to float, then some math (FFT), and transforms back to byte. It looks like, e.g.:
private byte[] manipulate(byte[] buf, Complex[] filter)
{
    float[] bu = convertTofloat(buf);        // conversion from byte to audio float here
    Complex[] inbuf = new Complex[bu.Length];
    Complex[] midbuf = new Complex[bu.Length];
    Complex[] mid2buf = new Complex[bu.Length];
    Complex[] outbuf = new Complex[bu.Length];
    for (n....)
    {
        inbuf[n] = bu[n];                    // copy to Complex
    }
    midbuf = FFT(inbuf);                     // FFT transform
    for (n....)
    {
        mid2buf[n] = midbuf[n] * filter[n];  // multiply with filter
    }
    outbuf = iFFT(mid2buf);                  // inverse FFT transform
    byte[] outbytes = convertTobyte(outbuf); // conversion from float back to audio byte
    return outbytes;
}
This is where I expect my speed issue to be. So I thought the problem could be solved if the manipulate function could just "get" a fixed piece of unmanaged memory where (created once up front) all those variables (the Complex arrays, e.g.) sit preallocated, so that I don't have to create new ones each time the function is called. I first suspected wrong FFT math as the reason for my glitches, but they occur at fairly "sharp" intervals of a few seconds, so they are not connected to audio signal issues like clipping. I think the issue happens when the GC is doing some serious work and eats exactly those few milliseconds I am missing to get the output buffer filled in time.
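For what it's worth, the preallocation idea from the question doesn't require unmanaged memory: keeping the work arrays as fields and reusing them avoids the per-call allocations. A minimal sketch, assuming the buffer length is fixed and known up front:

// Allocate the work buffers once and reuse them on every call, instead of
// newing up four Complex[] arrays per buffer event.
private Complex[] _inbuf, _midbuf, _mid2buf, _outbuf;

private void InitBuffers(int length)
{
    _inbuf = new Complex[length];
    _midbuf = new Complex[length];
    _mid2buf = new Complex[length];
    _outbuf = new Complex[length];
}

Note that this only pays off if FFT and iFFT can write into a caller-supplied array; if they return freshly allocated arrays, those allocations remain.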
I really doubt the issues you are experiencing are caused by your managed buffer creation/copying. Instead, I think your problem is that your data-capture logic is coupled with your DSP logic. Usually, captured data resides in a circular buffer where the data is rewritten after some period, so you should fetch this data as soon as possible.
The problem is that you don't fetch the next available data block until after your DSP is done, and you already know FFT operations are really CPU intensive! If you have a processing peak, you may not be able to retrieve the data before it is rewritten by the capture driver.
One possibility to address your issue is to increase, if possible, the size and/or number of capture buffers. This buys you more time before the captured data is rewritten. The other possibility, and the one I favor, is decoupling your processing stage from your capture stage: if new data becomes available while you are busy performing your DSP computations, you can still grab it and buffer it almost as soon as it becomes available. You become much more resilient to garbage-collection pauses or computation peaks inside your manipulate method.
This would involve creating two threads: the capture thread and the processing thread. You would also need an "Event" that signals the processing thread that new data is available, and a queue that will serve as a dynamic, expandable buffer.
The capture thread would look something like this:
// the following are class members
AutoResetEvent _bufQueueEvent = new AutoResetEvent(false);
Queue<byte[]> _bufQueue = new Queue<byte[]>();
bool _keepCaptureThreadAlive;
Thread _captureThread;

void CaptureLoop() {
    while( _keepCaptureThreadAlive ) {
        byte[] asioinbuffers = WaitForBuffer();
        byte[] buf = new byte[asioinbuffers.Length];
        Array.Copy(asioinbuffers, 0, buf, 0, asioinbuffers.Length);
        lock( _bufQueue ) {
            _bufQueue.Enqueue(buf);
        }
        _bufQueueEvent.Set(); // notify the processing thread that new data is available
    }
}

void StartCaptureThread() {
    _keepCaptureThreadAlive = true;
    _captureThread = new Thread(new ThreadStart(CaptureLoop));
    _captureThread.Name = "CaptureThread";
    _captureThread.IsBackground = true;
    _captureThread.Start();
}

void StopCaptureThread() {
    _keepCaptureThreadAlive = false;
    _captureThread.Join(); // wait until the thread exits
}
The processing thread would look something like this:
// the following are class members
bool _keepProcessingThreadAlive;
Thread _processingThread;

void ProcessingLoop() {
    while( _keepProcessingThreadAlive ) {
        _bufQueueEvent.WaitOne(); // the thread will sleep until fresh data is available
        if( !_keepProcessingThreadAlive ) {
            break; // check if the thread is being woken for termination
        }
        int queueCount;
        lock( _bufQueue ) {
            queueCount = _bufQueue.Count;
        }
        for( int i = 0; i < queueCount; i++ ) {
            byte[] buffIn;
            lock( _bufQueue ) {
                // only lock during the dequeue operation; this way the capture thread will
                // be able to enqueue fresh data even if we are still doing DSP processing
                buffIn = _bufQueue.Dequeue();
            }
            byte[] buffOut = manipulate(buffIn); // you are safe even if this stage takes longer than normal, you will still get the incoming data
            // ... additional logic using the manipulate() return value
        }
    }
}

void StartProcessingThread() {
    _keepProcessingThreadAlive = true;
    _processingThread = new Thread(new ThreadStart(ProcessingLoop));
    _processingThread.Name = "ProcessingThread";
    _processingThread.IsBackground = true;
    _processingThread.Start();
}

void StopProcessingThread() {
    _keepProcessingThreadAlive = false;
    _bufQueueEvent.Set(); // wake up the thread in case it is waiting for data
    _processingThread.Join();
}
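Wiring it together, start the processing thread before the capture thread, so the consumer is ready before data arrives, and stop them in the reverse order:

StartProcessingThread(); // consumer first, so no Set() goes unobserved
StartCaptureThread();

// ... application runs ...

StopCaptureThread();     // stop producing new buffers first
StopProcessingThread();  // then wake and join the consumer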
At my job we also perform a lot of DSP and this pattern has really helped us with the kind of issues you are experiencing.

Throttling file operations

I have a byte array that I want to persist in a file. But I don't want to write to the file each time it is updated, because it can be updated very frequently. Currently I am planning to use an approach similar to the following:
class ThrottleTest
{
    private byte[] _internal_data = new byte[256];
    CancellationTokenSource _cancel_saving = new CancellationTokenSource();

    public void write_to_file()
    {
        Task.Delay(1000).ContinueWith((task) =>
        {
            File.WriteAllBytes("path/to/file.data", _internal_data);
        }, _cancel_saving.Token);
    }

    public void operation_that_update_internal_data()
    {
        // cancel writing previous update
        _cancel_saving.Cancel();
        /*
         * operate on _internal_data
         */
        write_to_file();
    }

    public void another_operation_that_update_internal_data()
    {
        // cancel writing previous update
        _cancel_saving.Cancel();
        /*
         * operate on _internal_data
         */
        write_to_file();
    }
}
I don't think this approach will work because, once I cancel the token, it stays cancelled forever, so it will never write to the file again.
First of all, I was wondering if I am on the right track here and the above code can be made to work. If not, what would be the best approach to achieve this behaviour? Moreover, is there a practical way to generalize it to a Dictionary<string, byte[]>, where any byte[] can be modified independently?
I would start write_to_file by cancelling the previous operation first.
I would also include the cancellation token in the Delay task:
CancellationTokenSource _cancel_saving;

public void write_to_file()
{
    _cancel_saving?.Cancel();
    _cancel_saving = new CancellationTokenSource();
    Task.Delay(1000, _cancel_saving.Token).ContinueWith((task) =>
    {
        File.WriteAllBytes("path/to/file.data", _internal_data);
    }, _cancel_saving.Token);
}
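On the Dictionary<string, byte[]> part of the question, the same pattern can be applied per key. A sketch, under the assumptions that each entry is persisted to its own file and that all calls come from a single thread (otherwise the dictionary accesses themselves need a lock):

private readonly Dictionary<string, CancellationTokenSource> _cancel_sources =
    new Dictionary<string, CancellationTokenSource>();

public void write_to_file(string key, byte[] data)
{
    // cancel only the pending write for this key
    if (_cancel_sources.TryGetValue(key, out var previous))
        previous.Cancel();

    var cts = new CancellationTokenSource();
    _cancel_sources[key] = cts;

    Task.Delay(1000, cts.Token).ContinueWith(task =>
    {
        // hypothetical per-key file layout
        File.WriteAllBytes("path/to/" + key + ".data", data);
    }, cts.Token);
}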
You should use Microsoft's Reactive Framework (aka Rx) - NuGet System.Reactive, with using System.Reactive.Linq; and using System.Reactive.Subjects; - then you can do this:
public class ThrottleTest
{
    private byte[] _internal_data = new byte[256];
    private Subject<Unit> _write_to_file = new Subject<Unit>();

    public ThrottleTest()
    {
        _write_to_file
            .Throttle(TimeSpan.FromSeconds(1.0))
            .Subscribe(_ => File.WriteAllBytes("path/to/file.data", _internal_data));
    }

    public void write_to_file()
    {
        _write_to_file.OnNext(Unit.Default);
    }

    public void operation_that_update_internal_data()
    {
        /*
         * operate on _internal_data
         */
        write_to_file();
    }

    public void another_operation_that_update_internal_data()
    {
        /*
         * operate on _internal_data
         */
        write_to_file();
    }
}
Your context seems a little odd to me: you are writing all of the bytes rather than using a stream. Putting aside your issue with the cancellation token, delaying a write by one second won't reduce the overall load or overall throughput to disk.
This answer has the following assumptions:
You are using an SSD and are concerned about hardware lifetime
This is a low priority activity, where some data loss will be tolerated
This is not a logging activity (otherwise an append to file would work better with a BufferedStream)
This is likely the saving of a serialized C# object tree to disk in case the power goes out
You don't want every change made to the object tree to result in a write to disk.
You don't want to write to disk every second if there has been no change to the object tree.
It should write to disk right away if there hasn't been a write for N seconds
It should wait if there has been a write recently.
Having the WriteAllBytes step as the throttle point is not ideal.
Usage:
rootObject.subObject.value = 9;
rootObject.Save(token);
Support code:
TimeSpan minimumDiskInterval = TimeSpan.FromSeconds(60);
DateTime lastSaveAt = DateTime.MinValue;
bool alreadyQueued = false;

public void Save(CancellationToken token)
{
    if (alreadyQueued) //TODO: Interlocked with long instead for atomic memory barrier
        return;
    alreadyQueued = true; //TODO: Interlocked with long instead for atomic memory barrier
    var thisSaveAt = DateTime.UtcNow;
    var sinceLastSave = thisSaveAt.Subtract(lastSaveAt);
    var difference = minimumDiskInterval.TotalSeconds - sinceLastSave.TotalSeconds;
    if (difference < 0)
    {
        //It has been a while - no need to delay
        SaveNow();
    }
    else
    {
        //It was done recently
        Task.Delay(TimeSpan.FromSeconds(difference)).ContinueWith((task) =>
        {
            SaveNow();
        }, token);
    }
}

object fileAccessSync = new object();

public void SaveNow()
{
    alreadyQueued = false; //TODO: Interlocked with long instead for atomic memory barrier
    lastSaveAt = DateTime.UtcNow; // record the write time so the next Save can throttle against it
    byte[] serializedBytes = Serialise(this);
    lock (fileAccessSync)
    {
        File.WriteAllBytes("path/to/file.data", serializedBytes);
    }
}

What makes sure a thread sees the latest value when passing data between two threads?

I'm using a native library via PInvoke calls which returns a Byte*, and I want to make sure that, in a producer/consumer scenario, the consumer thread gets the latest data.
I have a very contrived example here to try to illustrate what I'm asking. I'm aware of the 'hot loop' in the Consumer (it's just an example; the real code is much larger and would not be feasible to paste here).
public unsafe class ThreadExample {
    class PInvokeResult {
        public Byte* Data;
        public Int32 Length;
    }

    // Shared Objects Used For Synchronization
    Object SyncRoot = new Object();
    Queue<PInvokeResult> WorkQueue = new Queue<PInvokeResult>();

    // Producer Thread
    void Producer() {
        while (true) {
            PInvokeResult workItem;
            workItem = new PInvokeResult();
            workItem.Data = PInvokeNativeLongRunningCall(out workItem.Length);
            lock (SyncRoot) {
                WorkQueue.Enqueue(workItem);
            }
        }
    }

    // Consumer Thread
    void Consumer() {
        while (true) {
            bool workAvailable = false;
            PInvokeResult workItem = null;
            lock (SyncRoot) {
                if (WorkQueue.Count > 0) {
                    workItem = WorkQueue.Dequeue();
                    workAvailable = true;
                }
            }
            if (workAvailable) {
                ProcessWorkItem(workItem);
                PInvokeReturnPointerBuffer(workItem.Data);
            }
        }
    }
}
Is the lock here enough to make sure that the Byte* pointer on PInvokeResult never points to 'stale' data when the consumer reads it?
What I mean by stale data here is this case:
1. One specific Byte* buffer gets returned from the PInvokeNativeLongRunningCall invocation.
2. The buffer is passed from the producer to the consumer; this uses locking on SyncRoot to make sure access to the Queue is safe.
3. The consumer does some work on the item and then returns it to the native code via PInvokeReturnPointerBuffer.
4. The buffer is then recycled and re-used on the native side; the data in the buffer is first set to all zeroes and then written to again.
5. The cycle then starts over from 1.
When the buffer comes to the Consumer for the second time, how can I be sure that the data it sees is the latest data written by the PInvoke call?
This question is purely from the C# perspective; I know that the native code is fine and solid, it's a well-used library.
Is this even something that the language has to account for, or is it completely handled by the CPU itself?

Threading A Method Contained In An Object - C#

I'm writing a program that will analyze changes in the stock market.
Every time the candles on the stock charts are updated, my algorithm scans every chart for certain pieces of data. I've noticed that this process takes about 0.6 seconds each time, freezing my application. It's not getting stuck in a loop, and there are no other problems like exceptions slowing it down; it just takes a bit of time.
To solve this, I'm trying to see if I can thread the algorithm.
In order to call the algorithm to check over the charts, I have to call this:
checkCharts.RunAlgo();
As threads need a method to run, I'm trying to figure out how to run RunAlgo(), but I'm not having any luck.
How can I have a thread run this method in my checkCharts object? Due to back-propagating data, I can't start a new checkCharts object; I have to keep using that method on the existing object.
EDIT:
I tried this:
M4.ALProj.BotMain checkCharts = new ALProj.BotMain();
Thread algoThread = new Thread(checkCharts.RunAlgo);
The checkCharts part of checkCharts.RunAlgo gives me: "An object reference is required for the non-static field, method, or property M4.ALProj.BotMain".
In a specific if statement, I was going to put algoThread.Start();. Any idea what I did wrong there?
The answer to your question is actually very simple:
Thread myThread = new Thread(checkCharts.RunAlgo);
myThread.Start();
However, the more complex part is to make sure that when the method RunAlgo accesses variables inside the checkCharts object, this happens in a thread-safe manner.
See Thread Synchronization for help on how to synchronize access to data from multiple threads.
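For instance, if RunAlgo reads chart data that another thread (e.g. the one receiving candle updates) also writes, both sides must take the same lock. A sketch with hypothetical field names:

// Inside BotMain; chartData and chartLock are made-up names for illustration.
private readonly object chartLock = new object();

public void RunAlgo()
{
    lock (chartLock)
    {
        // scan chartData for the patterns of interest
    }
}

// Any code that mutates the charts takes the same lock:
// lock (chartLock) { /* apply the incoming candle update to chartData */ }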
I would rather use Task.Run than Thread. Task.Run utilizes the ThreadPool, which has been optimized to handle various loads effectively. You will also get all the goodies of Task.
await Task.Run(() => checkCharts.RunAlgo());
Try this code block. It's basic boilerplate, but you can build on and extend it quite easily.
//If M4.ALProj.BotMain needs to be recreated for each run then comment this line and uncomment the one in DoRunParallel()
private static M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();

private static object SyncRoot = new object();
private static System.Threading.Thread algoThread = null;
private static bool ReRunOnComplete = false;

public static void RunParallel()
{
    lock (SyncRoot)
    {
        if (algoThread == null)
        {
            System.Threading.ThreadStart TS = new System.Threading.ThreadStart(DoRunParallel);
            algoThread = new System.Threading.Thread(TS);
            algoThread.Start(); // start the worker; without this the thread is created but never runs
        }
        else
        {
            //Received a recalc call while still calculating
            ReRunOnComplete = true;
        }
    }
}

public static void DoRunParallel()
{
    bool ReRun = false;
    try
    {
        //If M4.ALProj.BotMain needs to be recreated for each run then uncomment this line and comment the private static version above
        //M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
        checkCharts.RunAlgo();
    }
    finally
    {
        lock (SyncRoot)
        {
            algoThread = null;
            ReRun = ReRunOnComplete;
            ReRunOnComplete = false;
        }
    }
    if (ReRun)
    {
        RunParallel();
    }
}

Multiple publishers sending concurrent messages to a single subscriber in Retlang?

I need to build an application where some number of instances of an object are generating "pulses", concurrently. (Essentially this just means that they are incrementing a counter.) I also need to track the total counters for each object. Also, whenever I perform a read on a counter, it needs to be reset to zero.
So I was talking to a guy at work, and he mentioned Retlang and message-based concurrency, which sounded super interesting. But obviously I am very new to the concept. So I've built a small prototype, and I get the expected results, which is awesome - but I'm not sure if I've potentially made some logical errors and left the software open to bugs, due to my inexperience with Retlang and concurrent programming in general.
First off, I have these classes:
public class Plc {
    private readonly IChannel<Pulse> _channel;
    private readonly IFiber _fiber;
    private readonly int _pulseInterval;
    private readonly int _plcId;

    public Plc(IChannel<Pulse> channel, int plcId, int pulseInterval) {
        _channel = channel;
        _pulseInterval = pulseInterval;
        _fiber = new PoolFiber();
        _plcId = plcId;
    }

    public void Start() {
        _fiber.Start();
        // Not sure if it's safe to pass in a delegate which will run in an infinite loop...
        // AND use a shared channel object...
        _fiber.Enqueue(() => {
            SendPulse();
        });
    }

    private void SendPulse() {
        while (true) {
            // Not sure if it's safe to use the same channel object in different
            // IFibers...
            _channel.Publish(new Pulse() { PlcId = _plcId });
            Thread.Sleep(_pulseInterval);
        }
    }
}

public class Pulse {
    public int PlcId { get; set; }
}
The idea here is that I can instantiate multiple Plcs, pass each one the same IChannel, and then have them execute the SendPulse function concurrently, which would allow each one to publish to the same channel. But as you can see from my comments, I'm a little skeptical that what I'm doing is actually legit. I'm mostly worried about using the same IChannel object to Publish in the context of different IFibers, but I'm also worried about never returning from the delegate passed to Enqueue. I'm hoping someone can provide some insight as to how I should be handling this.
Also, here is the "subscriber" class:
public class PulseReceiver {
    private int[] _pulseTotals;
    private readonly IFiber _fiber;
    private readonly IChannel<Pulse> _channel;
    private object _pulseTotalsLock;

    public PulseReceiver(IChannel<Pulse> channel, int numberOfPlcs) {
        _pulseTotals = new int[numberOfPlcs];
        _channel = channel;
        _fiber = new PoolFiber();
        _pulseTotalsLock = new object();
    }

    public void Start() {
        _fiber.Start();
        _channel.Subscribe(_fiber, this.UpdatePulseTotals);
    }

    private void UpdatePulseTotals(Pulse pulse) {
        // This occurs in the execution context of the IFiber.
        // If we were just dealing with the published Pulses from the channel, I think
        // we wouldn't need the lock, since I THINK the published messages would be taken
        // from a queue (i.e. each Plc is publishing concurrently, but Retlang enqueues
        // the messages).
        lock (_pulseTotalsLock) {
            _pulseTotals[pulse.PlcId - 1]++;
        }
    }

    public int GetTotalForPlc(int plcId) {
        // However, this access takes place in the application thread, not in the IFiber,
        // and I think there could potentially be a race condition here. I.e. the array
        // is being updated from the IFiber, but I think I'm reading from it and resetting
        // values concurrently in a different thread.
        lock (_pulseTotalsLock) {
            if (plcId <= _pulseTotals.Length) {
                int currentTotal = _pulseTotals[plcId - 1];
                _pulseTotals[plcId - 1] = 0;
                return currentTotal;
            }
        }
        return -1;
    }
}
So here I am reusing the same IChannel that was given to the Plc instances, but having a different IFiber subscribe to it. Ideally I could then receive the messages from each Plc and update a single private field within my class, in a thread-safe way.
From what I understand (as I mentioned in my comments), I think I would be safe to simply update the _pulseTotals array in the delegate I gave to the Subscribe function, because I would receive each message from the Plcs serially.
However, I'm not sure how best to handle the bit where I need to read the totals and reset them. As you can see from the code and comments, I ended up wrapping a lock around any access to the _pulseTotals array. But I'm not sure if this is necessary, and I would love to know a) whether it is in fact necessary, and why, or b) the correct way to implement something similar.
And finally for good measure, here's my main function:
static void Main(string[] args) {
    Channel<Pulse> pulseChannel = new Channel<Pulse>();

    PulseReceiver pulseReceiver = new PulseReceiver(pulseChannel, 3);
    pulseReceiver.Start();

    List<Plc> plcs = new List<Plc>() {
        new Plc(pulseChannel, 1, 500),
        new Plc(pulseChannel, 2, 250),
        new Plc(pulseChannel, 3, 1000)
    };
    plcs.ForEach(plc => plc.Start());

    while (true) {
        Thread.Sleep(10000);
        Console.WriteLine(string.Format("Plc 1: {0}\nPlc 2: {1}\nPlc 3: {2}\n", pulseReceiver.GetTotalForPlc(1), pulseReceiver.GetTotalForPlc(2), pulseReceiver.GetTotalForPlc(3)));
    }
}
I instantiate one single IChannel, pass it to everything, where internally the Receiver subscribes with an IFiber, and where the Plcs use IFibers to "enqueue" a non-returning method which continually publishes to the channel.
Again, the console output looks exactly like I would expect it to, i.e. I see 20 "pulses" for Plc 1 after waiting 10 seconds. The resetting of the counters after a read also seems to work, i.e. Plc 1 has 20 "pulses" after each 10-second interval. But that doesn't reassure me that I haven't overlooked something important.
I'm really excited to learn a bit more about Retlang and concurrent programming techniques, so hopefully someone has the time to sift through my code and offer some suggestions for my specific concerns, or even a different design based on my requirements!

Categories

Resources