I am new to programming and I came across a problem and I'm not sure how to deal with it.
I use the line
textBox2.Text = System.IO.File.ReadAllText(path);
to read from a text file and put the contents in textBox2.
Now the issue is that the text file I'm trying to read is large (a couple of megabytes). It contains logs from a program, and new entries are always appended at the bottom of the file.
I want to update textBox2 whenever the text file is updated. However, I am not sure how to do this efficiently. One way is to just read the whole text file again, but since the file is so big, this is a very slow process.
I am interested in finding a different, faster way to handle this. I'm not really interested in the exact code; I just hope to find out in what direction I should look and what options I can consider.
Well, two obvious things you could check:
The size of the file (FileInfo.Length)
The last write time (FileSystemInfo.LastWriteTimeUtc)
If you keep track of those, you should be able to detect when the file has changed - at least with a reasonable degree of confidence.
Additionally, you can use FileSystemWatcher to watch for changes.
Also, you might want to consider keeping track of where you've read to - so you could just read the new data, by seeking to the right place in the file.
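For example, something like this (a rough sketch, assuming the log is append-only; _lastPosition here is a long field you would keep between reads, invented for illustration):
using (var fs = new System.IO.FileStream(path, System.IO.FileMode.Open,
       System.IO.FileAccess.Read, System.IO.FileShare.ReadWrite))
using (var reader = new System.IO.StreamReader(fs))
{
    if (fs.Length < _lastPosition)
        _lastPosition = 0;                        // file was truncated or replaced: start over
    fs.Seek(_lastPosition, System.IO.SeekOrigin.Begin);
    textBox2.AppendText(reader.ReadToEnd());      // append only the newly written text
    _lastPosition = fs.Position;
}
FileShare.ReadWrite lets the logging program keep writing while you read.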
Finally, a TextBox may not really be the best user interface for a huge log file. If this is a structured log file, it would be good to have that structure represented in the UI - for example, one row in a table per log entry, potentially with filtering options etc.
You can check every X seconds: if the file has changed, update; if not, do nothing. You can keep the file's modification time to know whether it has changed.
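A rough sketch of that approach with a WinForms timer (the 2-second interval is arbitrary; path and textBox2 are from the question):
var timer = new System.Windows.Forms.Timer { Interval = 2000 };
DateTime lastWrite = System.IO.File.GetLastWriteTimeUtc(path);
timer.Tick += (s, e) =>
{
    DateTime current = System.IO.File.GetLastWriteTimeUtc(path);
    if (current != lastWrite)
    {
        lastWrite = current;
        textBox2.Text = System.IO.File.ReadAllText(path);   // or read just the new part, as suggested above
    }
};
timer.Start();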
Related
I work on an app where the user can type in some text. The text is saved to an XML file, and I try to save the file "on the fly" as the user is typing, so every change is stored instantly. However, if the text is typed quickly, I get a "file currently in use" error. How can I overcome this issue?
The reason for the error is that you are trying to write the file while the previous write operation is incomplete and the file is still open for writing.
Now, if you absolutely must write on every character change, I would put a queue in place: when the XML content changes, instead of writing to the file right away, add a message to a queue. Then have code that monitors that queue and only writes the next change once the previous write has finished.
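One possible shape for that queue (a sketch only; the class and member names are mine, not from any standard API):
class XmlSaveQueue
{
    private readonly System.Collections.Concurrent.BlockingCollection<string> _pending =
        new System.Collections.Concurrent.BlockingCollection<string>();

    public XmlSaveQueue(string path)
    {
        // A single consumer thread guarantees that writes never overlap.
        System.Threading.Tasks.Task.Run(() =>
        {
            foreach (string xml in _pending.GetConsumingEnumerable())
                System.IO.File.WriteAllText(path, xml);
        });
    }

    public void Enqueue(string xml) => _pending.Add(xml);
}
Since only the latest snapshot matters, the consumer could also skip any queued entries that are already stale before writing.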
You can try using a flag to track whether the file is already open. If it is open, you keep the text and don't write the XML; if it is not, you just write.
This is a concurrency problem; see https://www.oreilly.com/library/view/concurrency-in-c/9781491906675/ch01.html for more options.
I wrote a custom control for output file name selection with the typical: text box for the filename, a "browse" button, and some other functionality specific to my application.
The text box changes color depending on the filename. If the file location cannot be written to, it turns red. If the file already exists, it turns yellow. Otherwise, it remains the system-assigned color.
To see if a file exists, I use IO.File.Exists; simple enough.
I implemented the "if the file can be written to" as a simple try-catch block where a file is actually opened, something written in it, closed, then deleted. If at any point an exception is thrown, I know the user can't use that filename and I turn the text box red.
This is a catch-all; since I'm doing the actual operation I intend to do, it is foolproof. However, it seems irresponsible to have software creating and deleting files like crazy just to see if it can.
So my question is, how do I replicate this functionality without creating files? I can see I have to:
Check the path for legality (e.g., 'z:' is not a valid filename). This entails parsing the path and making sure all directories exist (see the sketch below).
If the location exists, I have to check for write permissions. (Several answered questions exist to this end.)
Is there anything else?
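For illustration, the kind of checks I have in mind look roughly like this (a sketch only; I realize it still can't guarantee the later write will succeed):
bool looksUsable = false;
try
{
    string full = System.IO.Path.GetFullPath(fileName);   // throws on malformed paths
    string dir = System.IO.Path.GetDirectoryName(full);
    looksUsable = System.IO.Directory.Exists(dir);
}
catch (ArgumentException) { }       // illegal characters in the path
catch (NotSupportedException) { }   // things like a stray colon in the name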
EDIT
Within minutes I see people are already voting up an answer that criticizes the fact that I'm checking at all whether the file is accessible before the actual write occurs. While I appreciate experts "standing back" from my question to see whether there is a completely different way to achieve it, telling me I shouldn't be doing it is not an answer to my question.
So let me elaborate on my application (I am not expecting hundreds of users at the same time).
I use this file chooser control in data acquisition applications. In many situations the test that you are about to run is "expensive" in one way or another. Therefore it is critical to set things up very carefully. Overwriting data can be very expensive (and for the fearful user I have a checkbox that will append the date and time down to the millisecond to the filename).
So the purpose of my indicator colors is not to provide a surefire way for the software to know the file can be written to (that check is still done at the instant it actually has to be); it's to serve as an indicator to the user that he has at least set up the file name correctly, so that if he goes forward he is guaranteed not to overwrite old data, and he can be almost sure a last-minute IO error (a filename typo) won't leave the experiment running unrecorded.
I suggest this: don't check anything before the user commits the action. With your current approach, even if you verified the file is okay, it may be locked 5 seconds later when the user actually commits to writing to it. Doing preliminary checks may only give the user a false impression of estimated success. Especially consider this point on a terminal server with 100+ simultaneous users.
There is nothing wrong with showing a prompt with Retry/Cancel/etc. if access is denied, and letting the user decide.
EDIT:
No offense, but there are standards for how such collisions are handled. The Windows standard is to show a prompt to the user. Also consider this: if you are suddenly denied write access to a folder you expect to have access to, you probably need to hire another system/network administrator.
If the operation is costly, make sure this guy is paid well. C'mon, what if your network goes down during writing? Hard drive? Router? There are many reasons why writing to a file can be interrupted, and you should be prepared for that. If you cannot afford it, make sure you have invested in good infrastructure and good people to support it.
Back down on earth, you can increase the chances of acquiring a successful lock on the file:
Pick a unique file name, using a datetime-based hash as a suffix/prefix (see the sketch after this list).
Write to the user's home directory, also known as %UserProfile%; there it is likely that you will succeed.
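For instance (a sketch; the name format is just an example):
string dir = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
string name = string.Format("result_{0:yyyyMMdd_HHmmss_fff}.dat", DateTime.UtcNow);
string path = System.IO.Path.Combine(dir, name);   // e.g. C:\Users\you\result_20240101_120000_000.dat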
I can understand your problem with not wanting to risk losing "expensive" data because the file couldn't be written, and a responsible program will do its best to avoid that situation.
I would do this by caching the results. Before the test is run, write a mock result to a file somewhere in the user data space, then leave the file open and write the real result to it. After this is done, write it to the user-specified file. Provide a recovery option that will read the cache file and write it out to the user's file.
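A sketch of what I mean (all paths and names here are illustrative, and userSpecifiedPath stands for the file the user chose):
string cachePath = System.IO.Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    "MyApp", "pending-result.dat");
System.IO.Directory.CreateDirectory(System.IO.Path.GetDirectoryName(cachePath));

using (var writer = new System.IO.StreamWriter(cachePath))
{
    writer.WriteLine("RESULT PENDING");   // mock entry written before the test runs
    writer.Flush();                       // proves the cache location is writable
    // ... run the test, then write the real results here ...
}
// Only once the cache is safely on disk, copy it to the user-specified file.
System.IO.File.Copy(cachePath, userSpecifiedPath, true);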
Your approach could fail because the file being writable at the start doesn't mean it's still writable later. The network could have gone down. Someone could have removed the flash drive. Someone else could be doing a large data transfer through a buggy router. (Real-world case: it took me a long time to prove it was a network problem and not my program. They finally accepted it was their fault when I showed that running dir *.* /s on multiple machines at once would almost certainly cause one or more of them to fail.)
I want my C# (winforms) application to be multilingual. My idea is:
I will have my translations in some text file(s); each "sentence" or phrase will have its own unique ID (an integer)
at the start-up of the app I will iterate through all controls on all forms I have in my app (I suppose this should be done in each form's 'Load' event handler) and test each control for its type
i.e. if it is a button or menu item, I will read its default 'Text' property, locate this phrase in one text file, read its unique ID, and through this ID locate the translated phrase in the (other) text file
then I will overwrite that control's 'Text' property with the translated phrase
This enables me to have a separate text file with phrases for each and every language (easy to maintain an individual translation in the future - only 1 txt file)
I would like to hear from you professionals whether there is a better / easier / faster / more 'pro' way to accomplish this.
What format of translation text file should I use (plain text, XML, ini...)? It should be human-readable. I don't know whether finding a phrase in XML would be faster in C# than going line by line through a plain text file and searching for the given phrase/string...?
EDIT - I want users (community) to be able to translate my app for them into their native language without my interaction (it means Microsoft's resources are out of the game)
Thank you very much in advance.
CLOSED - My solution:
Looks like I'm staying with my original concept - every phrase will be in a separate line of a plain text file in Unicode encoding (with the ID at the beginning of the line). I was thinking about deleting the IDs too and using only the line numbers, but that would require an advanced text editor (Notepad shows no line numbers), and if somebody accidentally hit the shortcut for "Delete line" and didn't notice, the whole app would go crazy :)
//sample of my translation text file for one language
0001:Text of my first button
0002:Text of my first label
0003:MessageBox title text
...etc etc
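Loading this into a lookup table is then only a few lines (a sketch):
var phrases = new System.Collections.Generic.Dictionary<int, string>();
foreach (string line in System.IO.File.ReadAllLines(path, System.Text.Encoding.Unicode))
{
    int colon = line.IndexOf(':');
    int id;
    if (colon > 0 && int.TryParse(line.Substring(0, colon), out id))
        phrases[id] = line.Substring(colon + 1);   // e.g. phrases[1] == "Text of my first button"
}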
Why not use Microsoft's resource file method? You won't need to write any complex custom code this way.
It sounds like you are somewhat invested in the "one text file" idea, or else you would probably lean towards the standard way and use Microsoft's resource files. Handling for resource files is built in, and the controls are already keyed to support it. But, as you are probably aware, each translation goes into its own resource file, so you are left juggling multiple files to distribute with your app.
With a custom, roll-your-own solution, you can probably trim it down to one Unicode file. But you will have to loop through the controls to set the text, and then look up the text for each one. As you add control types, you will have to add support for them in your code. Also, your text file will grow in large chunks as you add languages, so you will have to account for that as well.
I still lean towards using the resource files, but your phrasing suggests you already don't like that solution, so I don't think I have changed your mind.
Edit:
Since you want the solution separated from the app to avoid having to recompile, you could distribute SQL-CE database files for each language type. You can store the text values in NVARCHAR fields.
That will make your querying easier, but raises the self-editing requirements. You would have to provide a mechanism for users to add their own translation files, as well as edit screens.
Edit 2:
Driving towards a solution. :)
You can use a simple delimited text file, encoded in Unicode, with a convention based naming system. For example:
en-US.txt
FormName,ControlName,Text
"frmMain","btnSubmit","Save"
"frmMain","lblDescription","Description"
Then you can use the CurrentUICulture to determine which text file to load for localization, falling back to en-US if no file is found. This lets the users create (and also change!) their own localization files using common text editors and without any steep learning curve.
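A rough sketch of loading and applying such a file (naive comma handling, so quoted commas would need a real CSV parser; langDir is an assumed folder and form is the form being localized):
string culture = System.Globalization.CultureInfo.CurrentUICulture.Name;     // e.g. "en-US"
string file = System.IO.Path.Combine(langDir, culture + ".txt");
if (!System.IO.File.Exists(file))
    file = System.IO.Path.Combine(langDir, "en-US.txt");                     // fall back to the default

string[] lines = System.IO.File.ReadAllLines(file);
for (int i = 1; i < lines.Length; i++)                                       // skip the header row
{
    string[] parts = lines[i].Split(',');
    if (parts.Length == 3 && parts[0].Trim('"') == form.Name)
    {
        foreach (Control c in form.Controls.Find(parts[1].Trim('"'), true))  // search all children
            c.Text = parts[2].Trim('"');
    }
}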
If you want the users to edit the translations through your application while keeping things simple and quick, a resource file is best. If you don't like that, the second-best option is an XML file.
Still, to answer your question on how to do it best with a text file: it is pretty straightforward. You just make sure that your unique identifiers (probably ints) are in order (validate this before using the file). Then, to search quickly, you use a binary search.
You look for number X, so you go to the file's middle line. If the ID there is greater than X, you go to the line at ¼ of the file, and so on.
You keep cutting the range in two until you get to the right line. This is the fastest known search method for sorted data.
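In code, that search looks something like this (a sketch, assuming lines is the file read into an array of "ID:phrase" entries sorted by ID):
static string FindPhrase(string[] lines, int target)
{
    int lo = 0, hi = lines.Length - 1;
    while (lo <= hi)
    {
        int mid = (lo + hi) / 2;
        int colon = lines[mid].IndexOf(':');
        int id = int.Parse(lines[mid].Substring(0, colon));
        if (id == target)
            return lines[mid].Substring(colon + 1);   // found the phrase
        if (id < target)
            lo = mid + 1;                             // target is in the upper half
        else
            hi = mid - 1;                             // target is in the lower half
    }
    return null;                                      // ID not present
}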
NOTE: Beware of the things that are external to the application but need translation: External file items, information contained in a database, etc.
I'm reading the contents of an XML file and parsing that into an object model.
When I modify the values in the object model, I then use the following code to save it back to the XML:
XElement optionXml = _panelElement.Elements("options").FirstOrDefault();
optionXml.SetAttributeValue("arming", value.ToString());
_document.Save(_fileName);
This works, as far as I can see, because when I close the application and restart it the values that I had saved are reflected in the object model next time I view it.
However, when I load the actual XML file, the values are still as they were originally.
Why is this? What do I need to do to save the actual XML file with the new values?
You are most likely experiencing file system virtualisation, which was introduced in Windows Vista.
Basically what this means is that you are saving your file, just not where you think you're saving it. For example, you might think that you are saving to C:\Program Files\Your App\yourFile.xml, but what is happening under the hood is that the OS is silently redirecting that to %APPDATA%\Your App\yourFile.xml. When you go to reload it, once again the OS silently redirects from that location.
This is a security measure designed to better encapsulate applications and their data and to prevent unauthorised writes to locations where damage can occur. You can still force a save to %PROGRAMFILES%\Your App, but to do that you either need to relax the ACLs applied to that folder, or you need to elevate the privilege level your application runs at.
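One way to sidestep this entirely is to save to a per-user location explicitly (a sketch; the folder and file names are illustrative, and _document is from the question):
string dir = System.IO.Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "Your App");
System.IO.Directory.CreateDirectory(dir);          // no-op if the folder already exists
_document.Save(System.IO.Path.Combine(dir, "yourFile.xml"));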
I wasn't sure whether to put this as a comment or as an answer, but I think it could be a potential answer. It sounds like the XML file is being saved because the data is being persisted across instances of the application. It may be file system virtualization like slugster mentioned, but it might be as simple as the fact that you are looking at the wrong copy of the XML file. If you are using a relative path, the file may have been copied to the new location. I would suggest you do a quick file search for that file name and see what you get back.
It turns out the file was being copied to and read from the Output Directory. I can see that it's being updated as expected from there.
I'm developing a document based desktop app which writes a fairly large and complex file to disk when the user saves his document. What is the best practice to do here to prevent data corruption? There are a number of things that can happen:
The save process may fail halfway through, which is of course a serious application error, but in this case one would rather have the old file left than a corrupted, half-written file. The same problem occurs if the application is terminated for some other reason halfway through writing the file.
The most robust approach I can think of is using a temporary file while saving and only replace the original file once the new file has been successfully created. But I find there are several operations (creating tempfile, saving to tempfile, deleting original, moving tempfile to original) that may or may not fail, and I end up with quite a complicated mess of try/catch statements to handle them correctly.
Is there a best practice/standard for this scenario? For example is it better to copy the original to a temp file and then overwrite the original than to save to a temp file?
Also, how does one reason about the state of a file in a document-based application (on Windows)? Is it better to leave the file open for writing by the application until the user closes the document, or to just quickly get in and read the file on open and quickly close it again? Pros and cons?
Typically the file shuffling dance goes something like this, aiming to end up with file.txt containing the new data:
Write to file.txt.new
Move file.txt to file.txt.old
Move file.txt.new to file.txt
Delete file.txt.old
At any point you always have at least one valid file:
If only file.txt exists, you failed to start writing file.txt.new
If file.txt and file.txt.new exist, you probably failed during the write - file.txt should be the valid old copy. (If you can validate files, you could try loading the new file - it could be the move that failed)
If file.txt.old and file.txt.new exist, the second move operation failed. You can use either file, depending on whether you want new or old
If file.txt.old and file.txt exist, the delete operation failed. Again, you can use either file.
This is assuming you're on a file system with an atomic move operation. If that's not the case, I believe the procedure is the same but you'd need to be more careful about the recovery procedure.
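With System.IO the dance looks roughly like this (a sketch; newContents stands for your serialized document, and on NTFS File.Replace collapses steps 2-4 into a single call):
System.IO.File.WriteAllText("file.txt.new", newContents);       // step 1
if (System.IO.File.Exists("file.txt"))
    System.IO.File.Move("file.txt", "file.txt.old");            // step 2 (clean up any stale .old first in real code)
System.IO.File.Move("file.txt.new", "file.txt");                // step 3
if (System.IO.File.Exists("file.txt.old"))
    System.IO.File.Delete("file.txt.old");                      // step 4

// Or, when file.txt already exists:
// System.IO.File.Replace("file.txt.new", "file.txt", "file.txt.old");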
Answering starting from the last question:
If we are talking here about fairly complex and big files, I would personally choose to lock the file, since during reading I may not need to load all the data into the view, only the part the user needs right now.
On the first question:
Always save to a temp file first.
Then replace the old file with the new one. If this fails then, considering that your app is a document management app, your primary objective has failed - the worst possible case - but you still have the new temp file. So on this error you can close your app and reopen it (critical error); on reopening, check whether a temp file exists, and if so, run data recovery, more or less like Visual Studio does after a crash.
Creating a temp file and then replacing the original file with the temp file (the latter being a cheap operation in terms of I/O) is the mechanism used by MFC's document persistence classes. I've NEVER seen it fail, nor have users reported such problems. And yes, back then the documents were large (they were complex as well, but that's irrelevant as far as I/O is concerned).