rx.net merge reset stream - c#

We have a running service which processes messages from system x to system y.
It basically looks as follows:
aSystem.Messages.Subscribe(message =>
{
try
{
ProcessMessage(message);
}
catch(Exception ex)
{
_logger.LogFatal(ex.Message);
}
});
The problem is that we receive at least one message every second, and our LogFatal call is configured to send an email. As a result, the mailbox exploded at some point.
The code was "improved" by adding a custom logging class that holds the timestamp of the last log and, based on that timestamp, decides whether or not to log a message.
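That workaround looks roughly like this (just a sketch; the class and member names here are illustrative, not our actual code):
// Sketch of the timestamp-gated logger described above.
// ILogger stands for whatever logger type exposes LogFatal in the real code.
public class ThrottledLogger
{
    private readonly ILogger _logger;
    private readonly TimeSpan _minInterval;
    private readonly object _gate = new object();
    private DateTime _lastLogged = DateTime.MinValue;

    public ThrottledLogger(ILogger logger, TimeSpan minInterval)
    {
        _logger = logger;
        _minInterval = minInterval;
    }

    public void LogFatal(string message)
    {
        lock (_gate)
        {
            if (DateTime.UtcNow - _lastLogged < _minInterval)
                return; // logged too recently, drop this one
            _lastLogged = DateTime.UtcNow;
        }
        _logger.LogFatal(message);
    }
}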
This looks cumbersome and I think this is the perfect scenario for Rx.NET. What we need is the following:
1) Log if the string changes
2) Log if a certain time passed
What I tried is the following:
var logSubject = new Subject<string>();
var logMessagesChangedStream = logSubject.DistinctUntilChanged(); // log on every message change
var logMessagesSampleStream = logSubject.Sample(TimeSpan.FromSeconds(10)); // log at least every 10 seconds
var subscription = logMessagesChangedStream.Merge(logMessagesSampleStream).Subscribe(result =>
{
_logger.LogFatal(result);
});
aSystem.Messages.Subscribe(message =>
{
try
{
ProcessMessage(message);
}
catch(Exception ex)
{
logSubject.OnNext(ex.Message);
}
});
It looks like it's working, but this will log the message twice: once for the DistinctUntilChanged and once for the Sample. So somehow I should reset the streams when one of them emits a value. They work perfectly independently; once merged they should listen to each other ;-)

There's the ambiguous operator Amb which races two sequences to see which wins first.
Observable.Amb(logMessagesChangedStream, logMessagesSampleStream)
The winning stream continues to propagate to the end - we don't want that. We're interested in starting the race again for the next value. Let's do that:
Observable.Amb(logMessagesChangedStream, logMessagesSampleStream)
.Take(1)
.Repeat()
Now the last problem is that DistinctUntilChanged loses its state every time we restart the race, and its behavior is to push the very first value it gets immediately. Let's fix that by turning it into a hot observable.
logSubject.DistinctUntilChanged().Publish();
Putting it all together:
var logSubject = new Subject<string>();
var logMessagesChangedStream = logSubject.DistinctUntilChanged().Publish(); // log on every message change
var logMessagesSampleStream = logSubject.Sample(TimeSpan.FromSeconds(5)); // log at least every 5 seconds
var oneof =
Observable
.Amb(logMessagesChangedStream, logMessagesSampleStream)
.Take(1)
.Repeat();
logMessagesChangedStream.Connect();
oneof.Timestamp().Subscribe(c => Console.WriteLine(c));
I changed the 10 seconds to 5 to keep the test short.
Test
Action<string> onNext = logSubject.OnNext;
onNext("a");
onNext("b");
Delay(1000, () => { onNext("c"); onNext("c"); onNext("c"); onNext("d"); });
Delay(3000, () => { onNext("d"); onNext("e"); });
Delay(6000, () => { onNext("e"); });
Delay(10000, () => { onNext("e"); });
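Delay here is just a small test helper (not shown above); something like this would do:
// Assumed helper: runs the given action after the given number of milliseconds.
static void Delay(int milliseconds, Action action)
{
    Task.Delay(milliseconds).ContinueWith(_ => action());
}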
Output
a@1/30/2020 12:17:52 PM +00:00
b@1/30/2020 12:17:52 PM +00:00
c@1/30/2020 12:17:53 PM +00:00
d@1/30/2020 12:17:53 PM +00:00
e@1/30/2020 12:17:55 PM +00:00
e@1/30/2020 12:18:00 PM +00:00
e@1/30/2020 12:18:05 PM +00:00

Related

How do I keep track of time before executing any function?

I have a service which calls the Notifications() function every second. The function checks the time difference between LastUpdated and DateTime.UtcNow and whether CountDiff is greater than 1.
If those conditions are true, it checks whether the AppGuid is already in the DB; if it isn't, it inserts it and calls the DoAction() function, which sends a notification. Everything works fine up to here.
The problem starts when the AppGuid is already in the DB.
What I need is: if the AppGuid is already in the DB, send a notification every 15 minutes. I'm struggling with keeping track of the last time I sent a notification. I tried comparing against the timestamp of the last update, but it keeps sending notifications, spamming the pipeline.
I tried to keep the elapsed time with the Stopwatch class, but then the app waits 15 minutes after it first starts before sending anything.
How do I send a notification only once every 15 minutes?
public class DataGetService : DelegatingHandler, IHostedService
{
private Timer _timer;
Stopwatch stopwatch = new Stopwatch();
public Task StartAsync(CancellationToken cancellationToken)
{
stopwatch.Start();
_timer = new Timer(Heartbeat, null, 1000, 1000);
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
//Timer does not have a stop.
_timer?.Change(Timeout.Infinite, 0);
return Task.CompletedTask;
}
public void Heartbeat(object state)
{
_ = Notifications(TimeSpan.FromMinutes(15));
}
public async Task Notifications(TimeSpan period)
{
// The result collection contains: LastUpdated, CountDiff, AppGuid, AppName, Timestamp
foreach (var d in result)
{
var x = _DBcontext.TestEvents.FirstOrDefault(o => o.AppGuid == (Guid)d.AppGuid);
if (DateTime.UtcNow - d.LastUpdated <= TimeSpan.FromMinutes(30) && d.CountDiff > 1)
{
if (x == null)
{
DoAction();
_DBcontext.TestEvents.Add(new TestEvents
{
AppGuid = (Guid)d.AppGuid,
AppName = d.AppName,
Timestamp = DateTime.UtcNow
});
}
else
{ // Useful if the app crashed or could not update the DB; refresh the stored timestamp.
if (DateTime.UtcNow - x.Timestamp >= TimeSpan.FromMinutes(30))
{
x.Timestamp = DateTime.UtcNow;
}
else
{
if (stopwatch.Elapsed <= period)
return;
DoAction();
x.Timestamp = DateTime.UtcNow;
}
}
}
}
await _DBcontext.SaveChangesAsync();
stopwatch.Restart();
}
}
I'd do it the same way I would lock a user out for 15 minutes if they tried their password too many times: rather than trying to use a date field that tracks when a record was last updated, have a dedicated field for when the next event should be raised. At the time you send the first notification, set it to a future date, and only send a new notification when that date falls into the past (whereupon you push the date into the future again).
By using a "last updated" field you risk another notification whenever something else about the record changes. If you feel it's not appropriate to put the date in the table concerned, consider a table just for notification event dates; all it needs is a GUID and a date, and it can then serve any object ID in any table in the database (as long as the GUIDs are unique). "No sending information about the entity with GUID x until time y" is an easy thing to code for in that case, and the eventing system doesn't need to know anything about the entity it is reporting on. You can let the subsystems naively raise events every second if they want to, but notifications can only go out every X minutes, so all the interim notifications are quenched. This also simplifies the system raising the messages: it can just raise them and not care about whether they should actually be notified or not.
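A rough sketch of that idea applied to the loop in the question (NotificationGates and NotifyAfter are made-up names, not part of the original schema):
// Sketch: one row per entity meaning "do not notify about this GUID again before NotifyAfter".
var gate = _DBcontext.NotificationGates.FirstOrDefault(g => g.AppGuid == (Guid)d.AppGuid);
if (gate == null)
{
    DoAction();
    _DBcontext.NotificationGates.Add(new NotificationGate
    {
        AppGuid = (Guid)d.AppGuid,
        NotifyAfter = DateTime.UtcNow.AddMinutes(15) // next notification no earlier than this
    });
}
else if (gate.NotifyAfter <= DateTime.UtcNow) // the date has fallen into the past
{
    DoAction();
    gate.NotifyAfter = DateTime.UtcNow.AddMinutes(15); // push it into the future again
}
// otherwise: quenched, do nothing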
For future answer seekers:
I added a futureTime variable which adds 15 minutes to the Timestamp from the DB.
The interesting thing was that when I compared with if (DateTime.UtcNow == futureTime), the condition never became true. As far as I understand, because the function runs every second, the system skips over the exact instant, so the equality never holds.
To overcome this I took the difference (DateTime.UtcNow - futureTime) and allowed a tolerance of a second.
if (DateTime.UtcNow - d.LastUpdated <= TimeSpan.FromMinutes(30) && d.CountDiff > 1)
{
if (x == null)
{
DoAction();
_DBcontext.TestEvents.Add(new TestEvents
{
AppGuid = (Guid)d.AppGuid,
AppName = d.AppName,
Timestamp = DateTime.UtcNow
});
}
// Useful if the app crashed or could not update the DB; refresh the stored timestamp.
else if (DateTime.UtcNow - x.Timestamp >= TimeSpan.FromMinutes(30))
{
x.Timestamp = DateTime.UtcNow;
}
else
{
//This will set the notification date in future
DateTime dbtimestamp = x.Timestamp;
DateTime futureTime = dbtimestamp.AddMinutes(15);
if (((DateTime.UtcNow - futureTime) - TimeSpan.FromSeconds(1)).Duration() < TimeSpan.FromSeconds(1.0))
{
DoAction();
x.Timestamp = DateTime.UtcNow;
}
}
}

How to make a delay that doesn't apply to a specific operation the first time, but does apply afterwards? C#

I have a function which updates the database every second (data comes in continuously over the network). I wanted to add a delay to that updating function so that it updates the database table every 5 minutes instead.
Here is my Code
if (ip == StrIp)
{
    Task.Delay(300000).ContinueWith(_ =>
    { // I'm using Task.Delay to create the delay
        var res = from i in dc.Pins // LINQ query
                  where i.ip == ip
                  select i;
        foreach (var p in res)
        {
            p.time = System.DateTime.Now;
            p.temperature = temp;
            // ... some other values
        }
        dc.SubmitChanges();
    });
}
It is working and updating the data every 5 minutes. Now I want the data to update immediately the first time the application starts, but after that it should update every 5 minutes. Right now my code isn't doing that.
How can I make a delay that skips the first iteration but applies to the following ones?
Thanks in advance
You could use a flag to determine whether it is the first time your method is called, e.g.:
private uint _counter = 0;
public void YourMethod()
{
if (ip == StrIp)
{
Action<Task> action = _ =>
{
var res = from i in dc.Pins //LINQ Query
where i.ip == ip
select i;
//...
dc.SubmitChanges();
};
if (_counter++ == 0)
action(null); // run immediately the first time (the Task argument isn't used)
else
Task.Delay(300000).ContinueWith(action);
}
}
Extract the inner logic of the task into a function/method (the refactoring tools in VS or R# can do this automatically) and call the new function/method once at start and then on the interval.
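For example (a sketch; UpdatePins is just an assumed name for the extracted method, reusing the variables from the question):
// System.Threading.Timer: a dueTime of zero runs the callback immediately, then every 5 minutes.
private Timer _updateTimer;

private void StartUpdates()
{
    _updateTimer = new Timer(_ => UpdatePins(), null,
        TimeSpan.Zero,               // first run: right away
        TimeSpan.FromMinutes(5));    // subsequent runs: every 5 minutes
}

private void UpdatePins()
{
    var res = from i in dc.Pins
              where i.ip == ip
              select i;
    foreach (var p in res)
    {
        p.time = System.DateTime.Now;
        p.temperature = temp;
        // ... other values
    }
    dc.SubmitChanges();
}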
I personally would go in another direction:
Have an in-memory queue that gets filled with data as it comes into your app. Then have a thread/task, etc. that checks the queue every 5 minutes and updates the database accordingly. Remember to lock the queue for updates (concurrency); .NET's ConcurrentQueue is one way to do that.
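A minimal sketch of that queue-and-drain approach (the PinReading type and the database call are assumptions, not from the question):
// using System.Collections.Concurrent; using System.Threading;
private readonly ConcurrentQueue<PinReading> _pending = new ConcurrentQueue<PinReading>();
private Timer _drainTimer;

public void Start()
{
    // Drain once immediately, then every 5 minutes.
    _drainTimer = new Timer(_ => Drain(), null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
}

public void OnDataReceived(PinReading reading)
{
    _pending.Enqueue(reading); // safe to call from any thread
}

private void Drain()
{
    while (_pending.TryDequeue(out var reading))
    {
        // apply the reading to the corresponding database row here
    }
    dc.SubmitChanges();
}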

Creating generated sequence of events as a cold sequence

FWIW - I'm scrapping the previous version of this question in favor of a different one along the same lines, after asking for advice on meta.
I have a webservice that contains configuration data. I would like to call it at regular intervals Tok in order to refresh the configuration data in the application that uses it. If the service is in error (timeout, down, etc) I want to keep the data from the previous call and call the service again after a different time interval Tnotok. Finally I want the behavior to be testable.
Since managing time sequences and testability seems like a strong point of the Reactive Extensions, I started using an Observable that will be fed by a generated sequence. Here is how I create the sequence:
Observable.Generate<DataProviderResult, DataProviderResult>(
// we start with some empty data
new DataProviderResult() {
Failures = 0
, Informations = new List<Information>()},
// never stop
(r) => true,
// there is no iteration
(r) => r,
// we get the next value from a call to the webservice
(r) => FetchNextResults(r),
// we select time for next msg depending on the current failures
(r) => r.Failures > 0 ? tnotok : tok,
// we pass a TestScheduler
scheduler)
.Subscribe(r => HandleResults(r));
I have two problems currently:
It looks like I am creating a hot observable. Even when trying to use Publish/Connect, the subscribed action misses the first event. How can I create it as a cold observable?
myObservable = myObservable.Publish();
myObservable.Subscribe(r => HandleResults(r));
myObservable.Connect(); // doesn't call OnNext for the first element in the sequence
When I subscribe, the ordering of the subscription and the generation seems off: for any frame, the subscription handler is fired before the FetchNextResults call. Is that normal? I would expect the sequence to call the handler for frame f, not f+1.
Here is the code that I'm using for fetching and subscribing:
private DataProviderResult FetchNextResults(DataProviderResult previousResult)
{
Console.WriteLine(string.Format("Fetching at {0:hh:mm:ss:fff}", scheduler.Now));
try
{
return new DataProviderResult() { Informations = dataProvider.GetInformation().ToList(), Failures = 0};
}
catch (Exception)
{}
previousResult.Failures++;
return previousResult;
}
private void HandleResults(DataProviderResult result)
{
Console.WriteLine(string.Format("Managing at {0:hh:mm:ss:fff}", scheduler.Now));
dataResult = result;
}
Here is what I'm seeing that prompted me articulating these questions:
Starting at 12:00:00:000
Fetching at 12:00:00:000 < no managing the result that has been fetched here
Managing at 12:00:01:000 < managing before fetching for frame f
Fetching at 12:00:01:000
Managing at 12:00:02:000
Fetching at 12:00:02:000
EDIT: Here is a bare bones copy-pastable program that illustrates the problem.
using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using Microsoft.Reactive.Testing;
private static int fetchData(int i, IScheduler scheduler)
{
writeTime("fetching " + (i+1).ToString(), scheduler);
return i+1;
}
private static void manageData(int i, IScheduler scheduler)
{
writeTime("managing " + i.ToString(), scheduler);
}
private static void writeTime(string msg, IScheduler scheduler)
{
Console.WriteLine(string.Format("{0:mm:ss:fff} {1}", scheduler.Now, msg));
}
private static void Main(string[] args)
{
var scheduler = new TestScheduler();
writeTime("start", scheduler);
var datas = Observable.Generate<int, int>(fetchData(0, scheduler),
(d) => true,
(d) => fetchData(d, scheduler),
(d) => d,
(d) => TimeSpan.FromMilliseconds(1000),
scheduler)
.Subscribe(i => manageData(i, scheduler));
scheduler.AdvanceBy(TimeSpan.FromMilliseconds(3000).Ticks);
}
This outputs the following:
00:00:000 start
00:00:000 fetching 1
00:01:000 managing 1
00:01:000 fetching 2
00:02:000 managing 2
00:02:000 fetching 3
I don't understand why the managing of the first element is not picked up immediately after its fetching. There is one second between the sequence effectively pulling the data and the data being handed to the observer. Am I missing something here or is it expected behavior? If so is there a way to have the observer react immediately to the new value?
You are misunderstanding the purpose of the timeSelector parameter. It is called each time a value is generated and it returns a time which indicates how long to delay before delivering that value to observers and then generating the next value.
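For example (a small sketch just to illustrate that semantics):
// Each value is delayed by the selected time *before* it reaches the observer.
var seq = Observable.Generate(
    1,                               // initial state
    i => i <= 3,                     // condition
    i => i + 1,                      // iterate
    i => i,                          // result selector
    i => TimeSpan.FromSeconds(1));   // delay before delivering each value
seq.Timestamp().Subscribe(x => Console.WriteLine(x));
// "1" arrives about one second after subscribing, "2" after two, "3" after three.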
Here's a non-Generate way to tackle your problem.
private DataProviderResult FetchNextResult()
{
// let exceptions throw
return new DataProviderResult
{
Informations = dataProvider.GetInformation().ToList(),
Failures = 0
};
}
private IObservable<DataProviderResult> CreateObservable(IScheduler scheduler)
{
// an observable that produces a single result then completes
var fetch = Observable.Defer(
() => Observable.Return(FetchNextResult()));
// concatenate this observable with one that will pause
// for "tok" time before completing.
// This observable will send the result
// then pause before completing.
var fetchThenPause = fetch.Concat(Observable
.Empty<DataProviderResult>()
.Delay(tok, scheduler));
// Now, if fetchThenPause fails, we want to consume/ignore the exception
// and then pause for tnotok time before completing with no results
var fetchPauseOnErrors = fetchThenPause.Catch(Observable
.Empty<DataProviderResult>()
.Delay(tnotok, scheduler));
// Now, whenever our observable completes (after its pause), start it again.
var fetchLoop = fetchPauseOnErrors.Repeat();
// Now use Publish(initialValue) so that we remember the most recent value
var fetchLoopWithMemory = fetchLoop.Publish(null);
// YMMV from here on. Lets use RefCount() to start the
// connection the first time someone subscribes
var fetchLoopAuto = fetchLoopWithMemory.RefCount();
// And lets filter out that first null that will arrive before
// we ever get the first result from the data provider
return fetchLoopAuto.Where(t => t != null);
}
public MyClass(IScheduler scheduler)
{
Information = CreateObservable(scheduler);
}
public IObservable<DataProviderResult> Information { get; private set; }
Generate produces cold observable sequences, so that is my first alarm bell.
I tried to pull your code into LINQPad* and run it, and changed it a bit to focus on the problem. It seems to me that you have the Iterator and ResultSelector functions confused; they are back-to-front. When you iterate, you should take the value from your last iteration and use it to produce your next value. The result selector is used to pick off (Select) the value from the instance you are iterating on.
So in your case, the type you are iterating on is the type you want to produce values of. Therefore keep your ResultSelector as just the identity function x => x, and your Iterator function should be the one that makes the web service call.
Observable.Generate<DataProviderResult, DataProviderResult>(
// we start with some empty data
new DataProviderResult() {
Failures = 0
, Informations = new List<Information>()},
// never stop
(r) => true,
// we get the next value(iterate) by making a call to the webservice
(r) => FetchNextResults(r),
// there is no projection
(r) => r,
// we select time for next msg depending on the current failures
(r) => r.Failures > 0 ? tnotok : tok,
// we pass a TestScheduler
scheduler)
.Subscribe(r => HandleResults(r));
As a side note, try to prefer immutable types instead of mutating values as you iterate.
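For instance (sketch only, reusing the property names from the question):
public sealed class DataProviderResult
{
    public DataProviderResult(IReadOnlyList<Information> informations, int failures)
    {
        Informations = informations;
        Failures = failures;
    }

    public IReadOnlyList<Information> Informations { get; }
    public int Failures { get; }

    // Return a new instance instead of mutating the one being iterated on.
    public DataProviderResult WithFailure() =>
        new DataProviderResult(Informations, Failures + 1);
}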
*Please provide an autonomous working snippet of code so people can better answer your question. :-)

Throttle only if specific condition met

I have an observable that I am subscribing to. This observable returns objects that have a property called ActivationType, which can be set multiple times.
What I am trying to achieve is log a message whenever ActivationType is set to "Type1". However, if ActivationType is set to "Type2", log the message only once and wait for 30 seconds before logging again if ActivationType is "Type2".
So if I have:
myObservable
.Where(o => o.ActivationType == "Type1" || o.ActivationType == "Type2") //listen for types 1 and 2
.Throttle() // ??? somehow only throttle if we are currently looking at Type2
.Subscribe(Log); //log some stuff
I believe Throttle() is what I am looking for but am not sure how to trigger it conditionally.
Any suggestions?
Ah, a perfect case for the near-impossible-to-understand Window operator!
EDIT:
I post this link like a dozen times a month, I swear - best read-thru I've seen of the Window, Join, Buffer, GroupJoin, etc. operators:
Lee Campbell: Rx Part 9–Join, Window, Buffer and Group Join
var source = new Subject<Thing>();
var feed = source.Publish().RefCount();
var ofType1 = feed.Where(t => t.ActivationType == "Type1");
var ofType2 = feed
// only window the type2s
.Where(t => t.ActivationType == "Type2")
// our "end window selector" will be a tick 30s off from start
.Window(() => Observable.Timer(TimeSpan.FromSeconds(30)))
// we want the first one in each window...
.Select(lst => lst.Take(1))
// moosh them all back together
.Merge();
// We want all "type 1s" and the buffered outputs of "type 2s"
var query = ofType1.Merge(ofType2);
// Let's set up a fake stream of data
var running = true;
var feeder = Task.Factory.StartNew(
() => {
// until we say stop...
while(running)
{
// pump new Things into the stream every 500ms
source.OnNext(new Thing());
Thread.Sleep(500);
}
});
using(query.Subscribe(Console.WriteLine))
{
// Block until we hit enter so we can see the live output
// from the above subscribe
Console.ReadLine();
// Shutdown our fake feeder
running = false;
feeder.Wait();
}
Why not just use two streams?
var baseStream = myObservable.Publish().RefCount(); // evaluate once
var type1 = baseStream.Where(o => o.ActivationType == "Type1");
var type2 = baseStream.Where(o => o.ActivationType == "Type2").Throttle(TimeSpan.FromSeconds(30));
type1.Merge(type2).Subscribe(Log);

Enumerating events occurring in time using Reactive Extensions (Rx)

public interface Event
{
Guid identifier { get; }
Timestamp ts { get; }
}
We're thinking of using Reactive Extensions for a rewrite of a problem at my financial firm.
The premise is that we get events identified by a Guid (stock symbol + uniqueness entropy embedded into it), a timestamp, and a Value field. These come at a high rate, and we cannot act on these objects until "at least" after X seconds (10 seconds), after which we have to act on them, and remove them from the system.
Think about it as two windows: an initial window of 10 seconds (for example T0 to T10), in which we identify all the unique events (basically, group by GUID), and then the next 10 seconds, the "secondary window" (T10-T20), which we look into to make sure we're implementing the "at least 10 seconds" policy. From the initial window we remove all the events (because we've accounted for them), and from the secondary window we remove the ones that already occurred in the initial window. Then we keep sliding the 10-second window, so next we look at T20-T30, rinse and repeat.
How could I implement this in Rx? It seems like the way to go.
If you can rely on the server clock and the timestamp in the message (that is, we're in 'real life' mode), and you're after a sliding 10 second delay as opposed to a jumping 10 second window, then you can just delay the events 10 seconds:
var events = new Subject<Event>();
var delayedEvents = events.Delay(TimeSpan.FromSeconds(10));
Checking for unique events etc is just a matter of adding them to a set of some sort:
var guidSet = new HashSet<Guid>();
delayedEvents.Do(e => guidSet.Add(e.identifier));
If your problem is instead that you must wait 10 seconds and then process the last 10 seconds at once, then you just want to buffer for 10 seconds instead:
var bufferedEvents = events.Buffer(TimeSpan.FromSeconds(10));
bufferedEvents.Do(es => { foreach (var e in es) guidSet.Add(e.identifier); });
I haven't shown the example of a sliding 10 second window as I can't imagine that's what you want (events get processed more than once).
Now we get serious. Let's say you don't want to rely on wall time and instead want to use the time within your events to drive your logic. Assuming event is redefined as:
public class Event
{
public Guid identifier;
public DateTime ts;
}
Create the historical scheduler and feed the scheduled events from the original ones:
var scheduler = new HistoricalScheduler();
var driveSchedule = events.Subscribe(e => scheduler.AdvanceTo(e.ts));
var target = events.SelectMany(e => Observable.Timer(e.ts, scheduler).Select(_ => e));
Now you can simply use the regular Rx combinators on target instead of event, and just pass through the scheduler so they are triggered appropriately, for example:
var bufferedEvents = target.Buffer(TimeSpan.FromSeconds(10), scheduler);
Here's a simple test. Create a hundred events each 'virtually' 30 seconds apart but in real-time triggered every second:
var now = DateTime.Now;
var test = Enumerable.Range(0,99).Select(i =>
Scheduler.ThreadPool.Schedule(
TimeSpan.FromSeconds(i),
() => events.OnNext(new Event() {
identifier = Guid.NewGuid(),
ts = now.AddSeconds(i * 30)
})
)
).ToList();
Subscribe to it and ask for 60 seconds of buffered events - and actually receive 2 events every 2 'real' seconds (60 virtual seconds):
target.Select(e => String.Format("{0} {1}", e.identifier, e.ts.ToString()))
.Buffer(TimeSpan.FromSeconds(60), scheduler)
.Select(es => String.Join(" - ", es))
.DumpLive();
