In a service application I am iterating through the Windows Application event log and parsing the events in order to react depending on the entry message.
When the event log is full (Windows usually makes sure there is enough space by deleting old entries; this is configurable in the eventvwr.exe settings), the service always runs into an IndexOutOfBoundsException while iterating through the EventLog.Entries collection. No matter how I iterate (for loop, the collection's enumerator, copying the collection into an array, ...), I can't seem to get rid of this 'bug'.
Currently I keep the service running by making sure the log never fills up: I regularly parse the event log file and delete the last few nodes (don't beat me up, I couldn't find a better alternative...).
How can I iterate through the collection without trying to access already deleted entries?
Is there perhaps a more elegant method? I am only trying to access the logs written during the last x seconds (even LINQ failed to select those when the log is full; same exception). Could this help?
Thanks for any advice and hints
Frank
Edit: I forgot to mention that my assumption is that the loops are accessing entries which Windows deletes during iteration. Basically, that is why I tried to clone the collection. Is there perhaps a way to lock the collection for a small amount of time for just my application?
I have hit this as well, mostly on 2008 R2 domain controllers. The problem is that the logs are wrapping, so the index changes between when you start iterating the events and when you reach that point.
There doesn't seem to be a cure other than retrying.
Looking at it from a practical point of view, why is there a problem at all?
If you want to iterate over all the entries, and sometimes when you try to read an entry that doesn't really exist you get an IndexOutOfBoundsException, then just catch this exception and ignore it.
If you know what this exception means, and you know what you want to do, just handle the exception and continue working. That's what exceptions are for, after all...
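For illustration, a minimal sketch of that approach (this is not code from the question; the time-window filter is only my assumption about what "the last x seconds" logic might look like, and in practice you would narrow the catch to the exception type you actually observe):

using System;
using System.Diagnostics;

class RecentEventReader
{
    // Iterate the Application log from newest to oldest and simply skip
    // entries that vanish because the log wrapped while we were reading.
    public static void ReadRecentEntries(TimeSpan window)
    {
        DateTime cutoff = DateTime.Now - window;

        using (var log = new EventLog("Application"))
        {
            for (int i = log.Entries.Count - 1; i >= 0; i--)
            {
                try
                {
                    EventLogEntry entry = log.Entries[i];
                    if (entry.TimeWritten < cutoff)
                        break; // everything older is outside the window we care about

                    // ... parse entry.Message and react to it here ...
                }
                catch (Exception)
                {
                    // The entry was overwritten between reading Count and indexing;
                    // ignore it and continue with the next index.
                }
            }
        }
    }
}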
In case anyone finds this thread:
Avoiding this behavior doesn't seem to be possible. Even copying the collection fails and locking the file is not possible (due to system restrictions).
Instead, I implemented a periodic check which backs up the event log and clears it at a defined usage percentage (e.g. 95%), so that an overflow or deletion should not happen.
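For anyone curious, here is a rough sketch of such a periodic check. The 95% threshold, the use of the EventLogSession API for the backup, and the method name are my choices, not necessarily what ended up in the service; clearing the log also typically requires administrative rights.

using System;
using System.Diagnostics;
using System.Diagnostics.Eventing.Reader;

class EventLogMaintenance
{
    // Back up and clear the Application log once it is more than ~95% full.
    public static void BackupAndClearIfNearlyFull(string backupPath, double threshold = 0.95)
    {
        using (var config = new EventLogConfiguration("Application"))
        using (var session = new EventLogSession())
        {
            EventLogInformation info = session.GetLogInformation("Application", PathType.LogName);
            long currentSize = info.FileSize ?? 0;

            if (currentSize < threshold * config.MaximumSizeInBytes)
                return; // still enough headroom, nothing to do

            // Export the current contents (including messages) to an .evtx file.
            session.ExportLogAndMessages("Application", PathType.LogName, "*", backupPath);
        }

        using (var appLog = new EventLog("Application"))
        {
            appLog.Clear(); // start with an empty log so it cannot overflow mid-iteration
        }
    }
}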
I get reports via Crashlytics that some users of my Unity app (roughly 0.5%) get an UnauthorizedAccessException when I call FileInfo.Length;
the interesting part of the stacktrace is:
Non-fatal Exception: java.lang.Exception
UnauthorizedAccessException : Access to the path '/storage/emulated/0/Android/data/com.myCompany.myGreatGame/files/assets/myAsset.asset' is denied.
System.IO.__Error.WinIOError (System.IO.__Error)
System.IO.FileInfo.get_Length (System.IO.FileInfo)
The corresponding file (it's a different file in every report) was written (or is currently being written) by the same application, possibly many sessions earlier. The call happens on a background thread and there might be some writing going on at the same time. But according to the .NET documentation this property should be pre-cached (see https://learn.microsoft.com/en-us/dotnet/api/system.io.fileinfo.length?view=netframework-2.0)
The whole code causing it is:
private static long DirSize(DirectoryInfo d)
{
    long size = 0;
    FileInfo[] fileInfos = d.GetFiles();
    foreach (FileInfo fileInfo in fileInfos)
    {
        size += fileInfo.Length;
    }
    ...
Did anyone experience something similar and knows what might be causing it?
This looks like a very exotic error, and because of that, I have no evidence to back up my suggestions.
Suggestion 1:
The user has installed antivirus software. These applications sometimes behave like malware, locking files that are not in use by the host program in order to scan them (especially if they want to prevent malicious behavior). This would explain the rare nature of the error. I would check the permissions of the file after a failed call to the Length property; this might give you (and possibly us) more insight.
Suggestion 2:
In some circumstances you cannot read the length while an application is actively writing to the file. This should never happen, but bugs happen, even in the OS. A possible path: some application is writing to the file; the file is modified and its metadata (including Length) is being written; while that happens you read the length from another thread, and the OS locks the file's metadata (including Length) while it is being written (probably for security reasons).
Suggestion 3 (and most probable):
Bad SD card/memory/CPU. Random errors can always happen because you do not control the client's hardware. I would check whether this 0.5% of errors comes from a single user, or from what only appears to be multiple users because hardware problems reset their unique IDs (check other data such as phone model, as this might also give you clues).
You are most likely trying to access a file you don't have permissions to access. There are certain files that even Administrator cannot access.
You could do a Try/Catch block to handle the exception.
See this question.
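For example, here is a variation of the DirSize method from the question that simply skips files it cannot read. The method name is mine, and swallowing the exceptions is only appropriate if an approximate total is acceptable.

using System;
using System.IO;

class DirectorySizeHelper
{
    // Sum the sizes of the files in a directory, ignoring files whose
    // metadata cannot be read at this moment.
    private static long DirSizeSafe(DirectoryInfo d)
    {
        long size = 0;
        foreach (FileInfo fileInfo in d.GetFiles())
        {
            try
            {
                size += fileInfo.Length;
            }
            catch (UnauthorizedAccessException)
            {
                // Locked by another process or permissions changed; skip it.
            }
            catch (IOException)
            {
                // File disappeared or is otherwise unreadable right now; skip it.
            }
        }
        return size;
    }
}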
If you read Microsoft's documentation carefully, it clearly states that:
an I/O Error is thrown in case the Refresh fails
The FileInfo.Length Property is pre-cached only in a very precise list of cases (GetDirectories, GetFiles, GetFileSystemInfos, EnumerateDirectories, EnumerateFiles, EnumerateFileSystemInfos). The cached info should be refreshed by calling the Refresh() method.
Putting #1 and #2 together, you can easily identify the problem: while you try to get that information, you have the file open with an exclusive lock, which gives you the error from #1. I would suggest approaching this with two different mechanisms. One is the obvious try/catch block, but because that block (a) costs performance and (b) doesn't solve the logical problem of knowing the file size, you should also cache the sizes yourself when you acquire the exclusive lock.
Put them in a static table in memory, a simple key/value map (file/size), and check it before calling FileInfo.Length. Basically, when you acquire the lock you add the file/size entry to the dictionary, and when you are done you remove it. This way you will never get the error again while still being able to compute the directory size.
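A minimal sketch of that idea, assuming you control the code that takes the exclusive lock (the class and member names are mine):

using System.Collections.Concurrent;
using System.IO;

// Key/value cache: full path -> size, maintained by whoever holds the write lock.
static class FileSizeCache
{
    private static readonly ConcurrentDictionary<string, long> Sizes =
        new ConcurrentDictionary<string, long>();

    // Call this while acquiring the exclusive lock on the file.
    public static void Register(string path, long size)
    {
        Sizes[path] = size;
    }

    // Call this when the lock is released.
    public static void Unregister(string path)
    {
        long ignored;
        Sizes.TryRemove(path, out ignored);
    }

    // Check the cache first; fall back to the filesystem for everything else.
    public static long GetLength(FileInfo fileInfo)
    {
        long cached;
        if (Sizes.TryGetValue(fileInfo.FullName, out cached))
            return cached;
        return fileInfo.Length;
    }
}

In the DirSize loop you would then call FileSizeCache.GetLength(fileInfo) instead of reading fileInfo.Length directly.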
~Pino
I have two handlers running in sequence, one deleting and one reordering pictures, and would like some advice on the best solution.
In the UI some pictures are deleted: the user clicks the delete button and the whole flow starts, from the delete command up to an event handler which actually deletes the physical files.
Then the user immediately sorts the remaining pictures. A new flow fires, from the reorder command up to the reordering event handler for the file system.
So there is already a concurrency problem: the reordering cannot be applied correctly until the deletion is done. At the moment this is handled with a crude lock. A temp file is created and then deleted at the end of the deletion flow; while that file exists, the other thread (reordering or deletion, depending on the user's actions) waits.
This is not an ideal solution and I would like to change it.
The solution must also be pretty fast (of course the current one is not), as the UI is updated through a JSON call at the end of the ordering.
In a later implementation we are thinking to use a queue of events but for the moment we are pretty stuck.
Any idea would be appreciated!
Thank you, mosu'!
Edit:
Other eventual-consistency problems we had were solved by using a JavaScript data manager on the client side. Basically being optimistic and tricking the user! :)
I'm starting to believe this is the way to go here as well. But then how would I know when the data has changed in the file system?
Max's suggestions are very welcome and they normally apply.
It is sometimes hard to explain all the details of an implementation, but there is one detail that should be mentioned:
The way we store the pictures means that when they are reordered, all picture paths (and thus all links) change.
A colleague had the very good idea of simply removing this part. That means that even if the order changes, the path of a picture remains the same. On the UI side there will be a mapping between a picture's index in the display order and its path, which means there is no need to touch the file system anymore, except when deleting.
As we want to be as permissive as possible with our users this is the best solution for us.
I think that, in general, this is also a good approach whenever there appears to be a concurrency issue: can the concurrency be removed?
Here is one thought on this.
What exactly are you reordering? Pictures? Based on, say, date?
Why is there a command for this? Is the result of this command going to be seen by everyone, or just by this particular user?
I can only guess, but it looks like you've got a presentation question here. There is no need to store pictures in some order on the write side; it's just a list of names and links to the file storage. What you should do is store a little field somewhere in the user settings or collection settings: date ascending, name descending, and so on. Your Reorder command should change only this little field. Then, when you are loading the gallery, this field should be read first, and based on it you load one view or another. Since storage is cheap nowadays, you can keep differently sorted collections on the read side for every sort parameter you need.
To sum up: the Delete command changes the collection on the write side, but the Reorder command is just a user or collection setting. Hence, there is no concurrency here.
Update
Based on your comments and clarifications.
Of course you can, and probably should, restrict the user to one action at a time, if the deletion and reordering are reasonably short. It's always a question of the kind of user experience you are asked to achieve. Take the usual example of an ordering system: after an order is placed, the user can see it in the UI almost immediately with a status like InProcess. Most likely you won't let the user change the order in any way, which means you won't show any controls like a Cancel button (of course this is just an example). You can use the same approach here.
If two users can modify the same physical collection, you have no choice: you are working with shared data and there has to be some kind of synchronization. For instance, if you are using sagas, there can be a couple of them, a collection-reordering saga and a deletion saga, and they can cooperate. If the deletion process starts first, the collection aggregate is marked as "deletion in progress"; when the reordering saga starts right after, it will attempt to start the reordering process, but since the deletion saga is in progress it should wait for the DeletedEvent and continue afterwards. The same applies if the reordering operation starts first: the deletion saga should wait for the corresponding event and continue once it arrives.
Update
OK, so we agreed not to touch the file system itself, but rather the aggregate which represents the picture collection. The most important concurrency issues can be solved with an optimistic concurrency approach: in the data storage, a unique constraint based on aggregate id and aggregate version is usually used.
Here is the typical sequence of steps a command handler follows:
1. Validate the command on its own merits.
2. Load the aggregate.
3. Validate the command on the current state of the aggregate.
4. Create a new event, apply the event to the aggregate in memory.
5. Attempt to persist the aggregate. If there's a concurrency conflict during this step, either give up, or retry things from step 2.
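Sketched in code, such a handler might look roughly like this. The ReorderPictures command, IRepository and ConcurrencyException types are placeholders for whatever your framework provides, not a specific library's API.

using System;

public class ReorderPicturesHandler
{
    private const int MaxRetries = 3;
    private readonly IRepository _repository;

    public ReorderPicturesHandler(IRepository repository)
    {
        _repository = repository;
    }

    public void Handle(ReorderPictures command)
    {
        command.Validate();                                           // step 1

        for (int attempt = 0; attempt < MaxRetries; attempt++)
        {
            var collection = _repository.Load(command.CollectionId);  // step 2
            collection.EnsureCanReorder(command.NewOrder);            // step 3
            collection.Reorder(command.NewOrder);                     // step 4: applies the event in memory

            try
            {
                // step 5: the unique (aggregate id, version) constraint makes a
                // concurrent write fail here instead of silently overwriting.
                _repository.Save(collection, collection.Version);
                return;
            }
            catch (ConcurrencyException)
            {
                // Another writer got there first; go back to step 2 and retry.
            }
        }

        throw new InvalidOperationException("Gave up reordering after repeated conflicts.");
    }
}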
Here is the link which helped me a lot some time ago: http://www.cqrs.nu/
In my plugin I have code that checks the execution context Depth to avoid an infinite loop when the plugin updates its own entity. But there are cases where the entity is updated from another plugin or workflow with a Depth of 2, 3 or 4, and for those specific calls, from that specific plugin, I want to process the call and not stop even if the Depth is greater than 1.
Perhaps a different approach might be better? I've never needed to consider Depth in my plug-ins. I've heard of other people doing the same as you (checking the depth to avoid code from running twice or more) but I usually avoid this by making any changes to the underlying entity in the Pre Operation stage.
If, for example, I have code that changes the name of an Opportunity whenever the opportunity is updated, then by putting my code in the post-operation stage of the Update, my code would react to the user changing a value by sending a separate Update request back to the platform to apply the change. This new Update itself causes my plug-in to fire again: an infinite loop.
If I put my logic in the pre-operation stage, I do it differently: the user's change fires the plugin, and before that change is committed to the platform my code is invoked. This time I can look at the Target that was sent in the InputParameters to the Update message. If the name attribute does not exist in the Target (i.e. it wasn't updated), I can append an attribute called name with the desired value to the Target, and this value will get carried through to the platform. In other words, I am injecting my value into the record before it is committed, thereby avoiding the need to issue another Update request. Consequently, my change causes no further plug-ins to fire.
Obviously I presume that your scenario is more complex but I'd be very surprised if it couldn't fit the same pattern.
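To make the pattern concrete, here is a bare-bones sketch of a pre-operation Update plug-in along those lines. The naming logic is a placeholder; only the Target-injection part is the point.

using System;
using Microsoft.Xrm.Sdk;

// Pre-operation Update plug-in: inject the calculated name into the Target so
// the value is committed with the user's own update and no second Update fires.
public class SetOpportunityNamePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target"))
            return;

        var target = context.InputParameters["Target"] as Entity;
        if (target == null || target.LogicalName != "opportunity")
            return;

        // Only inject the name if the user's update didn't already set it.
        if (!target.Attributes.Contains("name"))
        {
            target["name"] = BuildName(target); // placeholder for the real naming logic
        }
    }

    private static string BuildName(Entity target)
    {
        return "Opportunity updated " + DateTime.UtcNow.ToString("yyyy-MM-dd");
    }
}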
I'll start by agreeing with everything that Greg said above - if possible refactor the design to avoid this situation.
If that is not possible you will need to use the IPluginExecutionContext.SharedVariables to communicate between the plug-ins.
Check for a SharedVariable at the start of your plug-in and then set/update it as appropriate. The specific design you use will vary based on the complexity you need to manage. I always use a string with the message and entity ID: easy enough to serialize and deserialize. Then I always know whether I'm already executing against a certain message for a specific record or not.
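A minimal sketch of that guard, assuming the key is built from the message name and the primary record id:

using System;
using Microsoft.Xrm.Sdk;

public class GuardedPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        string key = context.MessageName + ":" + context.PrimaryEntityId;

        // SharedVariables flow from parent to child contexts, so walk the chain.
        if (AlreadyHandled(context, key))
            return;

        context.SharedVariables[key] = true;

        // ... actual plug-in logic goes here ...
    }

    private static bool AlreadyHandled(IPluginExecutionContext context, string key)
    {
        for (IPluginExecutionContext c = context; c != null; c = c.ParentContext)
        {
            if (c.SharedVariables.Contains(key))
                return true;
        }
        return false;
    }
}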
I have a friend who is in disagreement with me on this, and I'm just looking to get some feedback as to who is right and wrong in this situation.
FileInfo file = ...;
if (file.Exists)
{
    //File somehow gets deleted
    //Attempt to do stuff with file...
}
The problem my friend points out is: "So what if the file exists when I check for existence? There is nothing to guard against the chance that right after the check the file gets deleted, and attempting to access it results in an exception. So is it even worth checking for existence beforehand?"
The only thing I could come up with is that MSDN clearly does a check in their examples, so there must be more to it. MSDN - FileInfo. But, it does have me wondering... is the extra call even worth it?
I would have both the if (file.Exists) check and a try/catch. Relying only on exception handling does not express explicitly what you have in mind; if (file.Exists) is self-explanatory.
If someone deletes the file in the millisecond between checking and working with it, you can still get an exception. Nevertheless, there are also other conditions which can lead to an exception: the file is read-only, you do not have the required security permissions, and more.
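A small sketch combining the two (the method is illustrative; substitute whatever you actually do with the file):

using System;
using System.IO;

static class SafeFileAccess
{
    // Check first for the common, self-documenting case, and still catch the
    // failures that the check cannot rule out.
    public static string ReadIfPresent(FileInfo file)
    {
        if (!file.Exists)
            return null;

        try
        {
            using (StreamReader reader = file.OpenText())
            {
                return reader.ReadToEnd();
            }
        }
        catch (FileNotFoundException)
        {
            // Deleted in the window between the check and the open.
            return null;
        }
        catch (UnauthorizedAccessException)
        {
            // Permission problems that Exists cannot tell you about.
            return null;
        }
    }
}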
I agree with your friend here for the most part (depending on whether or not you have withheld pertinent information from your question). This is an example of an exception that can occur outside of your magnificent code: checking for the existence of the file and then performing your operation is a race condition.
The fact is that this exception can occur and there is NOTHING you can do to prevent it. You must catch it; it's completely out of your control. For example, what if the network goes down, lightning strikes your datacenter and it catches fire, or a squirrel chews through the cables? While it's not practical to figure out every single way in which the code could raise an exception, it is good practice to handle it in the situations where you know it's a real possibility.
I would say this depends on the context. If the file was just created and then this process ran, it doesn't make sense to check whether it exists; you can assume that it does because the code is still executing.
However, if this is a file that is continuously deleted and created, then yes, it does make sense to ensure it exists before continuing.
Another factor is who or what is accessing the file. If multiple clients access the file, there is a greater chance of it being modified or removed, and therefore it makes sense to check whether the file exists.
I have a program that needs to retrieve some data about a set of files (that is, a directory and all files of certain types within it and its subdirectories). The data is (very) expensive to calculate, so rather than traversing the filesystem and calculating it on program startup, I keep a cache of the data in a SQLite database and use a FileSystemWatcher to monitor changes to the filesystem. This works great while the program is running, but the question is how to refresh/synchronize the data during program startup. If files have been added (or changed -- I presume I can detect this via last modified/size), the data needs to be recomputed in the cache, and if files have been removed, the data needs to be removed from the cache (since the interface traverses the cache instead of the filesystem).
So the question is: what's a good algorithm to do this? One way I can think of is to traverse the filesystem and gather the path and last modified/size of all files in a dictionary. Then I go through the entire list in the database. If there is not a match, then I delete the item from the database/cache. If there is a match, then I delete the item from the dictionary. Then the dictionary contains all the items whose data needs to be refreshed. This might work, however it seems it would be fairly memory-intensive and time-consuming to perform on every startup, so I was wondering if anyone had better ideas?
If it matters: the program is Windows-only written in C# on .NET CLR 3.5, using the SQLite for ADO.NET thing which is being accessed via the entity framework/LINQ for ADO.NET.
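For what it's worth, here is a compressed sketch of the dictionary approach described above. CachedFile and the removeStale/recompute callbacks are placeholders for whatever the SQLite/Entity Framework layer actually exposes.

using System;
using System.Collections.Generic;
using System.IO;

// Placeholder row type for what the SQLite cache stores per file.
class CachedFile
{
    public string Path;
    public DateTime LastWriteUtc;
}

static class CacheSync
{
    // Compare a filesystem snapshot with the cached rows: anything unmatched in
    // the cache is stale, anything left in the snapshot needs (re)computation.
    public static void Synchronize(
        string root,
        IEnumerable<CachedFile> cachedFiles,
        Action<string> removeStale,
        Action<string> recompute)
    {
        var onDisk = new Dictionary<string, DateTime>(StringComparer.OrdinalIgnoreCase);
        foreach (string path in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
            onDisk[path] = File.GetLastWriteTimeUtc(path);

        foreach (CachedFile cached in cachedFiles)
        {
            DateTime lastWrite;
            if (onDisk.TryGetValue(cached.Path, out lastWrite) && lastWrite == cached.LastWriteUtc)
                onDisk.Remove(cached.Path);  // unchanged: nothing to do
            else
                removeStale(cached.Path);    // deleted or modified: drop the cached data
        }

        // Whatever is left is new or modified and needs the expensive computation.
        foreach (string path in onDisk.Keys)
            recompute(path);
    }
}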
Our application is cross-platform C++ desktop application, but has very similar requirements. Here's a high-level description of what I did:
In our SQLite database there is a Files table that stores file_id, name, hash (currently we use last modified date as the hash value) and state.
Every other record refers back to a file_id. This makes it easy to remove "dirty" records when the file changes.
Our procedure for checking the filesystem and refreshing the cache is split into several distinct steps to make things easier to test and to give us more flexibility as to when the caching occurs (names like Walker and Loader below are just what I happened to pick for the class names):
On 1st Launch
The database is empty. The Walker recursively walks the filesystem and adds the entries to the Files table. The state is set to UNPARSED.
Next, the Loader iterates through the Files table looking for UNPARSED files. These are handed off to the Parser (which does the actual parsing and inserting of data).
This takes a while, so 1st launch can be a bit slow.
There's a big testability benefit because you can test the filesystem-walking code independently from the loading/parsing code. On subsequent launches the situation is a little more complicated:
n+1 Launch
The Scrubber iterates over the Files table and looks for files that have been deleted and files that have been modified. It sets the state to DIRTY if the file exists but has been modified or DELETED if the file no longer exists.
The Deleter (not the most original name) then iterates over the Files table looking for DIRTY and DELETED files. It deletes other related records (related via the file_id). Once the related records are removed, the original File record is either deleted or set back to state=UNPARSED
The Walker then walks the filesystem to pick-up new files.
Finally the Loader loads all UNPARSED files
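Here is a rough C# sketch of the Scrubber step, translated from the description above (our application is actually C++, and the FileRecord type and state names are only illustrative):

using System;
using System.Collections.Generic;
using System.IO;

enum FileState { Unparsed, Parsed, Dirty, Deleted }

// Illustrative stand-in for a row in the Files table.
class FileRecord
{
    public string Path;
    public DateTime LastWriteUtc; // used as the "hash"
    public FileState State;
}

static class Scrubber
{
    // Flag records whose file has been modified (DIRTY) or removed (DELETED).
    public static void Scrub(IEnumerable<FileRecord> filesTable)
    {
        foreach (FileRecord record in filesTable)
        {
            if (!File.Exists(record.Path))
                record.State = FileState.Deleted;
            else if (File.GetLastWriteTimeUtc(record.Path) != record.LastWriteUtc)
                record.State = FileState.Dirty;
            // otherwise the record is still up to date and is left alone
        }
    }
}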
Currently the "worst case scenario" (every file changes) is very rare, so we do this every time the application starts up. But by splitting the process into these steps we could easily extend the implementation so that:
The Scrubber/Deleter could be refactored to leave the dirty records in place until after the new data is loaded (so the application "keeps working" while new data is cached into the database)
The Loader could load/parse on a background thread during an idle time in the main application
If you know something about the data files ahead of time you could assign a 'weight' to the files and load/parse the really-important files immediately and queue-up the less-important files for processing at a later time.
Just some thoughts / suggestions. Hope they help!
Windows has a change journal mechanism which does what you want: you subscribe to changes in some part of the filesystem, and upon startup you can read a list of changes that have happened since you last read them. See: http://msdn.microsoft.com/en-us/library/aa363798(VS.85).aspx
EDIT: I think it requires rather high privileges, unfortunately
The first obvious thing that comes to mind is creating a separate small application that would always run (as a service, perhaps) and create a kind of "log" of changes in the file system (no need to work with SQLite, just write them to a file). Then, when the main application starts, it can look at the log and know exactly what has changed (don't forget to clear the log afterwards :-).
However, if that is unacceptable to you for some reason, let us try to look at the original problem.
First of all, you have to accept that, in the worst case scenario, when all the files have changed, you will need to traverse the whole tree. And that may (although not necessarily will) take a long time. Once you realize that, you have to think about doing the job in background, without blocking the application.
Second, if you have to make a decision about each file that only you know how to make, there is probably no other way than going through all files.
Putting the above in other words, you might say that the problem is inherently complex (and any given problem cannot be solved with an algorithm that is simpler than the problem itself).
Therefore, your only hope is reducing the search space by using tweaks and hacks. And I have two of those on my mind.
First, it's better to query the database separately for every file instead of building a dictionary of all files first. If you create an index on the file path column in your database, it should be quicker, and of course, less memory-intensive.
Second, you don't actually have to query the database at all :-)
Just store the exact time when your application was last run somewhere (in a .settings file?) and check every file to see if it's newer than that time. If it is, you know it has changed. If it's not, you know you caught its change last time (with your FileSystemWatcher).
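A tiny sketch of that idea (where you persist the timestamp, a settings file or the registry, is up to you):

using System;
using System.Collections.Generic;
using System.IO;

static class ChangeScan
{
    // Return every file under 'root' written after the last recorded run time.
    public static IEnumerable<string> FindChangedSince(string root, DateTime lastRunUtc)
    {
        foreach (string path in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
        {
            if (File.GetLastWriteTimeUtc(path) > lastRunUtc)
                yield return path; // created or modified since the previous session
        }
    }
}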
Hope this helps. Have fun.