I have a column in my grid view with images of a progress bar. These images are created on each render and written to my 'write' folder.
However, after Microsoft's patch KB3052480, IIS resets once files in the application's directory have been created, changed, or overwritten.
This can be changed in IIS's settings so that it never resets on update. However, this means the application would need to be restarted manually whenever a patch is applied (not an acceptable outcome).
Is there a way to keep the setting (so that IIS still resets on updates such as changes to .dll files) but still create and write images without it resetting?
I have looked around a lot but there is not much information on this particular issue.
What I was thinking is to somehow stop monitoring file changes right before the save takes place, and then resume monitoring afterwards.
How would this be done, or is there another way to prevent IIS from recycling after this specific change?
To answer the question mentioned in the comment, which I think is your real question: to prevent the app domain from recycling on file save, don't put the files you are saving inside the website's folder. Instead, put them in some other path that is not part of the application.
I'm a bit late, but if you are using ASP.NET Framework you can store the "dynamic" files in App_Data; I think it's an exception to the recycle rule.
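For what it's worth, a rough sketch of that idea (the folder, file and method names below are placeholders, not anything from the original code):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web.Hosting;

public static string SaveProgressImage(Bitmap progressBarBitmap, string fileName)
{
    // App_Data (or any path outside the monitored content folders) is the key part;
    // the folder and file names here are just placeholders.
    string saveFolder = HostingEnvironment.MapPath("~/App_Data/progress-images");
    Directory.CreateDirectory(saveFolder);             // no-op if it already exists

    string savePath = Path.Combine(saveFolder, fileName);
    progressBarBitmap.Save(savePath, ImageFormat.Png); // never touches the web content folders
    return savePath;
}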
Related
I have a stylesheet in my application ~/Content/theme/style.css. It is referenced in my application using standard bundling as such:
bundles.Add(new StyleBundle("~/Content/css").Include(
"~/Content/font-awesome/font-awesome.css",
"~/Content/theme/style.css"));
Now, I have used a Sass compiler (Libsass) to allow me to change the output style.css file to a customised user output file as required.
So basically I do something like this.
CompilationResult compileResult = SassCompiler.CompileFile(
    Server.MapPath(Path.Combine(WebConfigSettings.RootSassPath, "style.scss")),
    options: new CompilationOptions
    {
        SourceMap = true,
        SourceMapFileUrls = true
    });
and then I save like this.
string outputPath = Server.MapPath(WebConfigSettings.ThemeOutputPath);

// Back up the existing file before overwriting it.
if (System.IO.File.Exists(outputPath))
    System.IO.File.Copy(outputPath, string.Format("{0}.bak", outputPath), true);

System.IO.File.WriteAllText(outputPath, compileResult.CompiledContent);
However, intermittently I receive the following dreaded access error: "The process cannot access the file 'C:\....\style.css' because it is being used by another process." (Note: this occurs at the File.WriteAllText line.)
This doesn't make sense because I do not open any streams to the file and perform what I assume to be a single atomic operation using File.WriteAllText.
Now I have also noticed that this error is particularly likely when I use two different browsers to modify this file consecutively.
My assumption is that one of two things is happening.
Either:
a. The bundling packager is somehow locking the file (because it has been modified) while it updates the bundles and is not releasing the lock, or
b. Because two different connections access the file, somehow a lock persists across them.
So, has anyone run into anything similar? Any suggestions on how I might be able to fix this issue?
PS: I have tried using HttpRuntime.UnloadAppDomain(); as a hacky way to try and release any locks on the file but this doesn't seem to be helping.
Your web server itself will get a read lock on the file(s) when they are served. So, if you are going to be writing files at the same time, collisions will be inevitable.
Option 1
Write to disk in a retry loop and ignore this exception. The files are likely to be available for writing within a very short time span.
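Something along these lines might do it (just a sketch; the retry count and delay are arbitrary):

using System.IO;
using System.Threading;

static void WriteAllTextWithRetry(string path, string contents, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            File.WriteAllText(path, contents);
            return;                                   // success
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            Thread.Sleep(100 * attempt);              // file is locked; wait a little longer each time
        }
    }
}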
Option 2
Avoid the web server locking the files by serving them yourself from a cache.
From this answer:
...if you are updating these [files] a lot, you're really defeating IIS's caching mechanisms here. And it is not healthy for the web server to be serving files that are constantly changing. Web servers are great at serving static files.
Now if your [files] are so dynamic, perhaps you'll need to serve it through a server-side program instead.
Since you mentioned in a comment that your end users are changing the files, I would suggest doing the following to ensure there is no chance of a locking conflict:
Use an action method to serve the content of the bundle.
By default, read the files from disk.
When an end user loads the "edit" functionality of the application, load the content from the file(s) into a cache. Your action method that serves the content should check this cache first, serving it if available, and serve the file(s) from disk if not.
When the end user saves the content, compile the content, write it to disk, then invalidate the cache. If the user doesn't save, the cache will just time out eventually and the files will be read from disk again by end users.
See How can I add the result of an ASP.NET MVC controller action to a Bundle? for some potential solutions on how to serve the bundle from an action method. I would probably use a solution similar to this one (although the caching strategy might need to be different).
Alternatively, you could make the cache reload whenever it is found empty during a user request, and update both the files and the cache during the "save" operation. That would probably be simpler and reduce the chance of a file lock issue to zero, but it wouldn't scale as well.
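To make the action-method idea above a bit more concrete, here is a very rough sketch (the controller name, cache key and CSS path are all made up, and it skips the "edit" preloading step):

using System.Runtime.Caching;
using System.Web.Mvc;

public class ThemeController : Controller
{
    private const string CacheKey = "theme-css";   // hypothetical cache key

    // Serves the compiled CSS, preferring the in-memory copy when one exists.
    public ActionResult Style()
    {
        string css = MemoryCache.Default.Get(CacheKey) as string;
        if (css == null)
            css = System.IO.File.ReadAllText(Server.MapPath("~/Content/theme/style.css"));

        return Content(css, "text/css");
    }
}

// After a save: compile, write the file to disk, then invalidate the cached copy, e.g.
// MemoryCache.Default.Remove("theme-css");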
When a page is rendered in the browser, the optimizer processes the bundled CSS and scripts into the cache. Once the page has been cached, on a re-request the browser first checks its cached content and only makes a call to the server if it is not present. For LESS or Sass-style CSS there are really only two options:
turn off bundling
use the LESS, CoffeeScript, SCSS & Sass bundling support
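If you do go with turning bundling off (assuming the standard System.Web.Optimization bundles used in the question), one way is:

// In BundleConfig.RegisterBundles (or Global.asax): force bundling/minification off,
// so each file is requested individually and the optimizer's caching is out of the picture.
BundleTable.EnableOptimizations = false;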
I'm new to writing Windows services. I used to make web applications.
I need to make this service that checks directories for files. If a certain trigger-file is present, the files in the directory get copied, and other stuff gets done.
My service is ready, almost.
It works fine, but I have an issue.
If the trigger-file is present, the copying and processing starts.
But at the same time, the service keeps checking my directories.
So at a given point, it comes back to the directory that is being copied.
How can I prevent it from recopying the directory?
(I hope I am clear in my explanation)
I found my solution, just when I was reading my question again and looking at your comments.
I can simply delete my trigger-file and then the service should skip the folder.
Sorry for your trouble.
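For anyone who runs into the same thing, a minimal sketch of that idea (the file and folder names here are made up):

using System.IO;

// Only process a directory while its trigger file exists; deleting the trigger first
// means the next polling pass will simply skip the folder.
static void ProcessIfTriggered(string directory, string triggerFileName = "ready.trigger")
{
    string triggerPath = Path.Combine(directory, triggerFileName);
    if (!File.Exists(triggerPath))
        return;                        // no trigger, or the folder is already being handled

    File.Delete(triggerPath);          // subsequent scans skip this folder from now on

    foreach (string file in Directory.GetFiles(directory))
    {
        // copy / process the files here
    }
}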
I'm reading the contents of an XML file and parsing that into an object model.
When I modify the values in the object model, then use the following code to save it back to the xml:
XElement optionXml = _panelElement.Elements("options").FirstOrDefault();
optionXml.SetAttributeValue("arming", value.ToString());
_document.Save(_fileName);
This works, as far as I can see, because when I close the application and restart it the values that I had saved are reflected in the object model next time I view it.
However, when I load the actual XML file, the values are still as they were originally.
Why is this? What do I need to do to save the actual XML file with the new values?
You are most likely experiencing file system virtualisation, which was introduced in Windows Vista.
Basically what this means is that you are saving your file, just not where you think you're saving it. For example, you might think that you are saving to C:\Program Files\Your App\yourFile.xml, but what is happening under the hood is that the OS is silently redirecting that to the VirtualStore folder (for example %LOCALAPPDATA%\VirtualStore\Program Files\Your App\yourFile.xml). When you go to reload it, once again the OS silently redirects from that location.
This is a security measure designed to better encapsulate applications and their data and to prevent unauthorised writes to locations where damage can occur. You can still force a save to %PROGRAMFILES%\Your App, but to do that you either need to relax the ACLs applied to that folder, or you need to elevate the privilege level your application runs at.
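If you decide to sidestep the issue instead, another option is to write to a per-user location that is always writable; a sketch (the application and file names are placeholders):

using System;
using System.IO;

// Save to a per-user, always-writable location instead of Program Files,
// so neither virtualisation nor elevation comes into play.
string folder = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "Your App");                        // placeholder application name
Directory.CreateDirectory(folder);

string filePath = Path.Combine(folder, "yourFile.xml");
_document.Save(filePath);               // the XDocument from the question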
I wasn't sure whether to put this as a comment or as an answer, but I think it could be a potential answer. It sounds like the XML file is being saved, since the data is being persisted across instances of the application. It may be file system virtualisation like slugster mentioned, but it might be as simple as the fact that you are looking at the wrong copy of the XML file. If you are using a relative path, the file may have been copied to a new location. I would suggest you do a quick file search for that file name and see what you get back.
It turns out the file was being copied to and read from the Output Directory. I can see that it's being updated as expected from there.
When I call FileInfo(path).LastAccessTime or FileInfo(path).LastWriteTime on a file that is in the process of being written, it returns the time the file was created, not the last time it was written to (i.e. now).
Is there a way to get this information?
Edit: To all the responses so far: I hadn't tried Refresh(), but that does not do it either. I am returned the time at which the file started being written to. The same goes for the static method, and for creating a new instance of FileInfo.
Codymanix might have the answer, but I'm not running Windows Server (using Windows 7), and I don't know where the setting is to test.
Edit 2: Nobody finds it interesting that this function doesn't seem to work?
The FileInfo values are only loaded once and then cached. To get the current value, call Refresh() before getting a property:
f.Refresh();
t = f.LastAccessTime;
Another way to get the current value is by using the static methods on the File class:
t = File.GetLastAccessTime(path);
Starting in Windows Vista, last access time is not updated by default. This is to improve file system performance. You can find details here:
http://blogs.technet.com/b/filecab/archive/2006/11/07/disabling-last-access-time-in-windows-vista-to-improve-ntfs-performance.aspx
To reenable last access time on the computer, you can run the following command:
fsutil behavior set disablelastaccess 0
As James has pointed out, LastAccessTime is not updated.
The LastWriteTime has also undergone a twist since Vista. When a process still has the file open and another process checks the LastWriteTime, it will not see the new write time for a long time -- not until the writing process has closed the file.
As a workaround you can open and close the file from your external process. After you have done that, you can read the LastWriteTime again, and it will then be the up-to-date value.
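A sketch of what that probe could look like from the monitoring process (this is just an illustration of the workaround described above):

using System;
using System.IO;

static DateTime GetFreshLastWriteTime(string path)
{
    // Open and close the file; use FileShare.ReadWrite so we don't conflict with the writer.
    using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        // Nothing to do here; the open/close itself nudges the metadata update.
    }

    var info = new FileInfo(path);
    info.Refresh();                     // avoid any cached value
    return info.LastWriteTime;
}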
File System Tunneling:
If an application implements something like a rolling logger, which closes the file and then renames it to a different file name, you will also run into issues, since the creation time and file size of the "old" file are remembered by the OS even though you did create a new file. This includes wrong reports of the file size, even if you recreated log.txt from scratch and it is still 0 bytes in size. This feature is called OS File System Tunneling, and it is still present on Windows 8.1. For an example of how to work around this issue, check out the RollingFlatFileTraceListener from Enterprise Library.
You can see the effects of file system tunneling on your own machine from the cmd shell.
echo test > file1.txt
ren file1.txt file2.txt
Wait one minute
echo test > file1.txt
dir /tc file*.txt
...
05.07.2015 19:26 7 file1.txt
05.07.2015 19:26 7 file2.txt
The file system is a state machine. Keeping states correctly synchronized is hard if you care about performance and correctness.
This strange tunneling syndrome is evidently still relied upon by applications which, for example, autosave a file, move it to a safe location and then recreate the file at the same location. For these applications it makes no sense to give the file a new creation date, because it was only copied around. Some installers also use such tricks, moving files temporarily to a different location and writing the contents back later, to get past a file-exists check in some install hooks.
Have you tried calling Refresh() just before accessing the property (to avoid getting a cached value)? If that doesn't work, have you looked at what Explorer shows at the same time? If Explorer is showing the wrong information, then it's probably something you can't really address - it might be that the information is only updated when the file handle is closed, for example.
There is a setting in Windows, sometimes enabled especially on server systems, that stops modified and accessed times for files from being updated, for better performance.
From MSDN:
When first called, FileSystemInfo calls Refresh and returns the cached information on APIs to get attributes and so on. On subsequent calls, you must call Refresh to get the latest copy of the information.
FileSystemInfo.Refresh()
If your application is the one doing the writing, I think you are going to have to "touch" the file by setting the LastWriteTime property yourself between each buffer of data you write. Some pseudocode:
while (bytesWritten < totalBytes)
{
    stream.Write(buffer, 0, buffer.Length);   // write the next chunk
    bytesWritten += buffer.Length;
    myFileInfo.LastWriteTime = DateTime.Now;  // "touch" the timestamp
}
I'm not sure how severely this will affect write performance.
Tommy Carlier's answer got me thinking....
A good way to visualise the differences is to run the two snippets below separately (I just used LINQPad) while also running Sysinternals Process Monitor.
while(true)
File.GetLastAccessTime([file path here]);
and
FileInfo bob = new FileInfo(path);
while (true)
{
    string accessed = bob.LastAccessTime.ToString();
}
If you look at Process Monitor while running the first snippet, you will see repeated and constant access attempts to the file from the LINQPad process. The second snippet will do an initial access of the file, for which you will see activity in Process Monitor, and then very little afterwards.
However, if you go and modify the file (I just opened the text file I was monitoring with FileInfo, added a character and saved), you will see a series of access attempts by the LINQPad process to the file in Process Monitor.
This illustrates the non-cached and cached behaviour of the two different approaches respectively.
Will the non-cached approach wear a hole in the hard drive?!
EDIT
I went away feeling all clever about my testing and then used the caching behaviour of FileInfo in my Windows service (basically to sit in a loop asking 'has the file changed, has the file changed...' before doing processing).
While this approach worked on my dev box, it did not work in the production environment, i.e. the process just kept running regardless of whether the file had changed or not. I ended up changing my approach to the check and just used GetLastAccessTime as part of it. I don't know why it would behave differently on the production server... but I am not too concerned at this point.
Recently I was working on displaying workflow diagram images in our web application. I managed to use the rehosted WF designer and create images on-the-fly on the server, but imagining how large the workflow diagrams can very quickly become, I wanted to give a better user experience by using some ajax control for displaying images that would support zoom & pan functionality.
I happened to come across the Seadragon website, which seems to be just an amazing piece of work that I could use. There is just one disadvantage: in order to use their library for generating deep zoom versions of images I have to use the file structure on a server. Because of the temporary nature of the images I am using (workflow diagrams with progress indicators), it is important not only to be able to create such images but also to get rid of them after some time.
Now the question is how I can best ensure that the temporary image files and the folder hierarchy can be created on the server (ASP.NET web app), and later cleaned up. I was thinking of using the cache functionality and, on expiration of the cache item, deleting the corresponding image folder hierarchy, or simply deleting the contents of the whole temporary folder in the Application_Start and Application_End of Global.asax, but I'm not really sure whether this is a good idea and whether there are some security restrictions or file-system-related troubles. What do you think?
We do something similar for creating PDF reports and found the easiest way is to use a timestamp check to determine how "old" the files are, and then delete them based on a period of time, in our case more than 2 hours old. This is done before the next PDF document is created, as part of the creation process. We also created a specific folder and gave the ASP.NET user read/write access to it.
The only disadvantage is that if the PDF-creation process is not used regularly there will be a build-up of files; however, they will be cleaned up eventually. In 2 years and close to 4,000 PDFs we have yet to have an error doing it this way.
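A rough sketch of that kind of age-based cleanup (the folder path and the 2-hour window are just examples):

using System;
using System.IO;

// Delete generated files older than a given age; call this before creating
// the next document, or from a scheduled task.
static void CleanUpOldFiles(string folder, TimeSpan maxAge)
{
    DateTime cutoff = DateTime.UtcNow - maxAge;
    foreach (string file in Directory.GetFiles(folder))
    {
        if (File.GetLastWriteTimeUtc(file) < cutoff)
        {
            try { File.Delete(file); }
            catch (IOException) { /* file is still in use; skip it this pass */ }
        }
    }
}

// e.g. CleanUpOldFiles(Server.MapPath("~/App_Data/reports"), TimeSpan.FromHours(2));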
Use the App_Data folder. This folder is inside your application and writable by your app without having to go outside the context of the app, but it's also secured from casual browsing. It's meant to hold data files for your application.
Application_Start and Application_End will only fire once each, so if you need better cleanup than that, I would consider using a cache structure or a simple windows service to handle the cleanup.
First, you have to make sure your IIS worker process has rights to write/delete files in your cache directory (and NOT the rest of your site, just in case).
Second, I would stay away from using Application_Start and Application_End; Application_End is not 100% guaranteed to fire, so cleanup there can be missed and you could end up with a growing pile of orphaned images.
I would instead make a scheduled process, maybe running once per hour or once a day depending on what you want, and have it check how old each image in your cache is; if it's older than your arbitrary "expiry time", delete it.
Other than that there's not much to it.