Reactive Extensions error handling with Observable SelectMany - C#

I'm trying to write a file watcher for a certain folder using the Reactive Extensions library.
The idea is to monitor a hard drive folder for new files, wait until a file is written completely, and push an event to the subscriber. I do not want to use FileSystemWatcher since it raises the Changed event twice for the same file.
So I've written it in the "reactive way" (I hope), like below:
var provider = new MessageProviderFake();
var source = Observable.Interval(TimeSpan.FromSeconds(2), NewThreadScheduler.Default)
    .SelectMany(_ => provider.GetFiles());
using (source.Subscribe(
    _ => Console.WriteLine(_.Name),
    () => Console.WriteLine("completed to Console")))
{
    Console.WriteLine("press Enter to stop");
    Console.ReadLine();
}
However, I can't find a "reactive way" to handle errors. For example, the file directory could be located on an external drive and become unavailable because of a connection problem.
So I've added a GetFilesSafe method that catches exceptions before they reach the Reactive Extensions pipeline:
static IEnumerable<MessageArg> GetFilesSafe(IMessageProvider provider)
{
    try
    {
        return provider.GetFiles();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        return new MessageArg[0];
    }
}
and used it like
var source = Observable.Interval(TimeSpan.FromSeconds(2), NewThreadScheduler.Default)
    .SelectMany(_ => GetFilesSafe(provider));
Is there a better way to make SelectMany call provider.GetFiles() even after an exception has been raised? In such cases I'm using an error counter to repeat the read operation N times and then fail (terminate the process).
Is there a "try N times and wait Q seconds between attempts" facility in the Reactive Extensions?
There is also a problem with GetFilesSafe: it returns an IEnumerable<MessageArg> for lazy reading, but that enumerable can throw during iteration, and the exception will then surface somewhere inside SelectMany.

There's a Retry extension that just subscribes to the observable again if the current one errors, but it sounds like that won't offer the flexibility you want.
You could build something using Catch, which subscribes to the observable you give it if an error occurs on the outer one. Something like the following (untested):
IObservable<Thing> GetFilesObs(int times, bool delay)
{
    if (times <= 0)
        return Observable.Empty<Thing>(); // attempts exhausted: complete without error
    return Observable
        .Return(0)
        .Delay(TimeSpan.FromSeconds(delay ? <delay_time> : 0))
        .SelectMany(_ => Observable.Defer(() => GetFilesErroringObservable()))
        .Catch(Observable.Defer(() => GetFilesObs(times - 1, true)));
}
// call with:
GetFilesObs(<number_of_tries>, false);
As written, this doesn't do anything with the errors other than trigger a retry. In particular, when enough errors have happened, it will just complete without an error, which might not be what you want.
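To answer the "try N times and wait Q seconds between attempts" part directly: there is no single built-in operator for it, but it composes from standard ones. Below is a minimal sketch using Catch and DelaySubscription; RetryAfter is a hypothetical helper name, not part of Rx itself. Wrapping GetFiles in Defer + ToObservable also routes iteration-time exceptions into OnError, which addresses the lazy-enumeration concern from the question.
static class RetryExtensions
{
    // Resubscribe to source up to `retries` more times, waiting `delay`
    // before each new attempt; rethrow the error once attempts run out.
    public static IObservable<T> RetryAfter<T>(this IObservable<T> source, int retries, TimeSpan delay)
    {
        return source.Catch<T, Exception>(ex =>
            retries <= 0
                ? Observable.Throw<T>(ex)
                : source.RetryAfter(retries - 1, delay).DelaySubscription(delay));
    }
}
// usage (sketch): poll every 2 seconds; on error retry up to 3 times,
// waiting 5 seconds between attempts, then fail the whole stream
var source = Observable.Interval(TimeSpan.FromSeconds(2), NewThreadScheduler.Default)
    .SelectMany(_ => Observable.Defer(() => provider.GetFiles().ToObservable()))
    .RetryAfter(3, TimeSpan.FromSeconds(5));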

Related

Dts.TaskResult = (int)ScriptResults.Failure on top of Dts.Events.FireError() is good?

In the script task:
else if (val == 0)
{
    Dts.Events.FireError(0, "", "Custom Message ", "", 0);
    Dts.TaskResult = (int)ScriptResults.Failure;
}
When we have Dts.Events.FireError() in the script task and it gets invoked, it fails the task as well as displaying the custom error message. So is it good to also write
Dts.TaskResult = (int)ScriptResults.Failure;
to fail the task, as in the above code?
Isn't that like calling the failure twice? Is there any use case where we should use both?
"It depends." How do you want to handle error handling?
My experience has been that it is cleaner to set the TaskResult to Failure and then use precedence constraints out of the task to drive control flow behavior. That is, "yes, this task failed, but this package still has work to do." E.g. the file we expected isn't there; that's the error, but I'm going to take an error path to drive the next action (send an email alert about the missing file).
Otherwise, you get to use the Event Handlers, which is a totally valid approach, but of all the shops I've consulted in, maybe two have used them well. Many people get tripped up by the possibility that an event is raised several times due to container nesting and reraising of events.
If I know I am killing execution from a task, then the FireError event can be helpful, as it lets me log exactly why I'm aborting processing (e.g. a file-not-found exception).
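Putting that together, a minimal sketch of the pattern described above, inside the script task's Main method (the file path and the "CheckFile" subcomponent name are made up for illustration):
public void Main()
{
    string expectedFile = @"C:\inbound\daily.csv"; // hypothetical path
    if (!System.IO.File.Exists(expectedFile))
    {
        // FireError logs exactly why we are aborting (visible to SSIS logging/event handlers)
        Dts.Events.FireError(0, "CheckFile", "File not found: " + expectedFile, "", 0);
        // TaskResult drives the Failure precedence constraint out of this task
        Dts.TaskResult = (int)ScriptResults.Failure;
        return;
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}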

ReactiveX: Make Observable.Create() be called only once

I'm trying to build a data access layer with ReactiveX (more precisely, Rx.Net) and SQLite.Net.
Part of the job is making an observable that returns the database connection, so that it can be opened lazily, only when needed. This is what I came up with so far:
var connection = Observable.Create<SQLiteConnection>(observer =>
{
    Debug.WriteLine("CheckInStore: Opening database connection");
    var database = new SQLiteConnection(configuration.ConnectionString.DatabasePath);
    observer.OnNext(database);
    observer.OnCompleted();
    return Disposable.Create(() =>
    {
        Debug.WriteLine("CheckInStore: Closing database connection");
        database.Close();
    });
});
// Further down the line, a query would look like this (MyObject is a placeholder row type):
var objects = connection.SelectMany(db => db.Query<MyObject>("select * from MyTable"));
Unfortunately, every time somebody subscribes to this observable, a new connection is created, and it is also closed once the subscription is disposed.
I tried using .Replay(1).RefCount(), but it didn't change anything. I'm not sure I understand that whole RefCount thing anyway.
How can I make this database connection a singleton?
Have a look at this code, which is equivalent, but doesn't open a DB connection:
var conn = Observable.Create<int>(o =>
{
    Debug.WriteLine("Opening");
    o.OnNext(1);
    o.OnCompleted(); // This forces the closing code to be called. Comment me out.
    return Disposable.Create(() =>
    {
        Debug.WriteLine("Closing");
    });
})
//.Replay(1)
//.RefCount() // .Replay(1).RefCount() is necessary if you want to cache the result
;
var sub1 = conn.SelectMany(i => Observable.Return(i)).Subscribe(i => Debug.WriteLine($"1: {i}"));
var sub2 = conn.SelectMany(i => Observable.Return(i)).Subscribe(i => Debug.WriteLine($"2: {i}"));
sub1.Dispose();
sub2.Dispose();
var sub3 = conn.SelectMany(i => Observable.Return(i)).Subscribe(i => Debug.WriteLine($"3: {i}"));
sub3.Dispose();
There are a number of problems here:
1. Your dispose/unsubscription code will get called every time you either unsubscribe or complete the observable. Since you're calling OnCompleted, the connection is going to be opened/closed every time.
2. If you want to re-use the same connection, you need to use .Replay(1).RefCount(). Observable.Create runs the whole function every time a subscriber connects; there's nothing (except .Replay(1).RefCount()) that caches it for you.
3. Even if you add .Replay(1).RefCount() and remove OnCompleted, you will still get disposal (meaning DB-closed) behavior when there are no outstanding subscriptions (like after the sub2.Dispose() call).
4. If you don't dispose the subscriptions, either through using (var sub = connection.SelectMany(...)) or explicitly via sub.Dispose(), you'll never unsubscribe, since this observable has no way of terminating. In other words, the opposite problem of 3: your Close code will never happen.
I hope you get the picture: this is a pretty error-prone way of doing things. I would recommend a simple iterative call, since that tends to work better for DB calls anyway. If you insist on Rx, I would look at Observable.Using for your DB connection initialization.
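For completeness, here is a minimal sketch of the Observable.Using suggestion, reusing the question's configuration object and the hypothetical MyObject row type. The connection is acquired per subscription and disposed when the sequence terminates or the subscription is disposed:
var objects = Observable.Using(
    () => new SQLiteConnection(configuration.ConnectionString.DatabasePath), // acquired on subscribe
    db => db.Query<MyObject>("select * from MyTable").ToObservable());       // disposed when done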

Trap timeouts in Parse

The Parse docs state: "By default, all connections have a timeout of 10 seconds, so tasks will not hang indefinitely."
I have this code (from the Parse web site):
try
{
    Task t = test.SaveAsync();
    await t;
    int j = t.Id;
}
catch (ParseException exc)
{
    if (exc.Code == ParseException.ErrorCode.ObjectNotFound)
    {
        // Uh oh, we couldn't find the object!
    }
    else
    {
        // Some other error occurred
    }
}
I run my app (on iPhone/Xamarin) with the network on, then I set 100% network loss, then I run the code above. I neither get an exception, nor does the code reach the line after the await statement where I read the task ID. In other words, I don't know what happened.
My question is:
How do I trap Parse timeouts when there is no network? Is there an event or something I can use, or do I have to implement it all by myself, with timers and all?
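If it does come down to implementing it yourself, one common pattern (generic TPL code, not a Parse SDK feature; the 15-second window here is an arbitrary choice) is to race the save against a timer with Task.WhenAny:
var saveTask = test.SaveAsync();
var finished = await Task.WhenAny(saveTask, Task.Delay(TimeSpan.FromSeconds(15)));
if (finished != saveTask)
{
    // The timer won: treat it as a timeout (note the save may still complete later).
    Console.WriteLine("Save timed out");
}
else
{
    await saveTask; // rethrows any ParseException raised by the save
}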

Delayed NUnit Assert message evaluation

I have this assert in my test code:
Assert.That(() => eventData.Count == 0,
            Is.True.After(notificationPollingDelay),
            "Received unexpected event with last event data " + eventData.Last().Description());
It asserts some condition after a period of time and produces a message on failure. It fails to run because the message string is constructed when the assert starts, not when it ends; therefore the eventData collection is still empty (as it is initially) and the attempt to get the Description of the last item in the collection fails. Is there a workaround or a decent alternative to this in NUnit, or do I have to revert to using Thread.Sleep in my tests?
PS: I'm using NUnit 2.5.10.
You may use this scheme:
var constraint = Is.True.After(notificationPollingDelay);
var condition = constraint.Matches(() => eventData.Count == 0);
Assert.IsTrue(condition,
    "Received unexpected event with last event data " +
    eventData.Last().Description());
This approach is similar to using Thread.Sleep.
In NUnit version 3.50 I had to use a different syntax.
Here is the example:
var delayedConstraint = Is.True.After(delayInMilliseconds: 100000, pollingInterval: 100);
Assert.That( () => yourCondition, delayedConstraint );
This will test whether `yourCondition` is true, waiting at most a certain time, using the `DelayedConstraint` created by the `Is.True.After` method.
In this example the DelayedConstraint is configured to use a maximum time of 100 seconds, polling every 0.1 seconds.
See also the legacy NUnit 2.5 documentation for DelayedConstraint.
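As a side note on the original failure-message problem: newer NUnit 3 releases also offer Assert.That overloads taking a Func<string>, so the message is only built if the assert actually fails (a sketch reusing the question's names; verify the overload exists in your NUnit version):
Assert.That(() => eventData.Count == 0,
            Is.True.After(notificationPollingDelay),
            () => "Received unexpected event with last event data " + eventData.Last().Description());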
The simplest answer is "don't include that text in your failure message." I personally almost never include a failure message; if your test is atomic enough, you don't need one. Usually if I need to figure out a cryptic failure, only a debugger helps anyway.
If you really want to do it, this code should work without managing the threads yourself.
try
{
    Assert.That(() => eventData.Count == 0, Is.True.After(notificationPollingDelay));
}
catch (AssertionException)
{
    throw new Exception("Received unexpected event with last event data " + eventData.Last().Description());
}

Read Event Log From Newest to Oldest

I have written a short program to establish the uptime of remote PCs using the event log messages which are posted at startup and shutdown. Currently the logic is:
foreach (var entry in eventLog.Entries)
{
    if (entry.Time > OldestTime)
    {
        if (entry.Type == Startup)
        {
            addOnTime(entry.Time);
        }
        if (entry.Type == Shutdown)
        {
            addOffTime(entry.Time);
        }
    }
}
"OldestTime" define how far to scan backwards in time....
I would like to know if there is anyway to easily ammend my program to read the events from newest to oldest?
It's reading remote event logs and its taking a while for this function to run, as it starts at the end and reads forward.
I know this because I added a "else" block to the first "if" to break out of the foreach block, if the entry isnt within the timespan we are looking for and the program stop's at the first event it reads.
It has been a while since you asked this question, but I ran into the same problem and found a solution.
using System.Diagnostics;
using System.Linq;

EventLog events = new EventLog("Application", System.Environment.MachineName);
foreach (EventLogEntry entry in events.Entries.Cast<EventLogEntry>().Reverse())
{
    // Do your tasks
}
This answer is still not as fast as just enumerating forward, but it is a bit more elegant than using a loop to copy items into a list.
@leinad13, for your application you need to change System.Environment.MachineName to a string with the name of the computer you want the events from, and change "Application" to the log you want to see; I think "System" in your case.
Your best solution may be to move the data into a list and then reverse the order of that. Something like the following:
EventLog eventLog = new EventLog();
eventLog.Log = myEventLog;
var eventList = new List<EventLogEntry>();
foreach (EventLogEntry entry in eventLog.Entries)
{
    eventList.Add(entry);
}
eventList.Reverse();
That should get the data in reverse order, i.e. latest first, and then you can process it as before, but this time exit the loop when you hit a date before the oldest time.
It's not an ideal solution, as you are still processing the whole log, but it may be worth trying out to see if you get an improvement in performance.
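A minimal sketch of that early-exit loop over the reversed list, reusing the question's OldestTime cutoff (the processing calls are placeholders):
foreach (EventLogEntry entry in eventList) // already reversed: newest first
{
    if (entry.TimeGenerated < OldestTime)
    {
        break; // every remaining entry is older still, so stop here
    }
    // addOnTime / addOffTime processing as before...
}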
