Overwriting ASP.NET MVC active stylesheet bundle - C#

I have a stylesheet in my application ~/Content/theme/style.css. It is referenced in my application using standard bundling as such:
bundles.Add(new StyleBundle("~/Content/css").Include(
"~/Content/font-awesome/font-awesome.css",
"~/Content/theme/style.css"));
Now, I have used a Sass compiler (Libsass) to allow me to change the output style.css file to a customised user output file as required.
So basically I do something like this.
CompilationResult compileResult = SassCompiler.CompileFile(
    Server.MapPath(Path.Combine(WebConfigSettings.RootSassPath, "style.scss")),
    options: new CompilationOptions {
        SourceMap = true,
        SourceMapFileUrls = true
    });
and then I save like this.
string outputPath = Server.MapPath(WebConfigSettings.ThemeOutputPath);
if (System.IO.File.Exists(outputPath))
System.IO.File.Copy(outputPath, string.Format("{0}.bak", outputPath), true);
System.IO.File.WriteAllText(Server.MapPath(WebConfigSettings.ThemeOutputPath), compileResult.CompiledContent);
However, intermittently I receive the following dreaded access error: "The process cannot access the file 'C:....\style.css' because it is being used by another process." (Note: this occurs at the File.WriteAllText line.)
This doesn't make sense because I do not open any streams to the file and perform what I assume to be a single atomic operation using File.WriteAllText.
Now I have also noticed that this error is particularly likely when I use two different browsers to modify this file consecutively.
My assumption is that one of two things is happening.
Either:
a. The bundling packager is somehow locking the file because it has been modified while it updates the bundles and not releasing the lock or
b. Because two different connections access the file somehow a lock persists across them.
So, has anyone run into anything similar? Any suggestions on how I might be able to fix this issue?
PS: I have tried using HttpRuntime.UnloadAppDomain(); as a hacky way to try and release any locks on the file but this doesn't seem to be helping.

Your web server itself will get a read lock on the file(s) when they are served. So, if you are going to be writing files at the same time, collisions will be inevitable.
Option 1
Write to disk in a retry loop and ignore this exception. The files are likely to be available for writing within a very short time span.
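For illustration, a minimal retry sketch (reusing outputPath and compileResult from the question; the attempt count and back-off delay are arbitrary assumptions):

const int maxAttempts = 5;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        System.IO.File.WriteAllText(outputPath, compileResult.CompiledContent);
        break; // success
    }
    catch (System.IO.IOException)
    {
        if (attempt == maxAttempts) throw;              // give up after the last attempt
        System.Threading.Thread.Sleep(100 * attempt);   // back off briefly and retry
    }
}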
Option 2
Avoid the web server locking the files by serving them yourself from a cache.
From this answer:
...if you are updating these [files] a lot, you're really defeating IIS's caching mechanisms here. And it is not healthy for the web server to be serving files that are constantly changing. Web servers are great at serving static files.
Now if your [files] are so dynamic, perhaps you'll need to serve it through a server-side program instead.
Since you mentioned in a comment that your end users are changing the files, I would suggest doing the following to ensure there is no chance of a locking conflict:
Use an action method to serve the content of the bundle.
By default, read the files from disk.
When an end user loads the "edit" functionality of the application, load the content from the file(s) into a cache. Your action method that serves the content should check this cache first, serving it if available, and serve the file(s) from disk if not.
When the end user saves the content, compile the content, write it to disk, then invalidate the cache. If the user doesn't save, the cache will just time out eventually and the files will be read from disk again by end users.
See How can I add the result of an ASP.NET MVC controller action to a Bundle? for some potential solutions on how to serve the bundle from an action method. I would probably use a solution similar to this one (although the caching strategy might need to be different).
Alternatively, you could make the cache reload every time it is empty in a user request and update both the files and cache during the "save" operation which would probably be simpler and reduce the chance of a file lock issue to zero, but wouldn't scale as well.
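For illustration, a rough sketch of serving the stylesheet from an MVC action that checks an in-memory cache first and falls back to the compiled file on disk (the controller name, cache key and OutputCache settings are assumptions of this sketch, not part of the bundling pipeline):

using System.Web.Mvc;

public class ThemeController : Controller
{
    private const string CacheKey = "ThemeCss"; // hypothetical cache key

    [OutputCache(Duration = 0, NoStore = true)] // we manage caching ourselves
    public ActionResult Css()
    {
        // Serve from the in-memory cache if an edit session has populated it...
        string css = HttpContext.Cache[CacheKey] as string;

        // ...otherwise fall back to the compiled file on disk.
        if (css == null)
        {
            string path = Server.MapPath("~/Content/theme/style.css");
            css = System.IO.File.ReadAllText(path);
        }

        return Content(css, "text/css");
    }
}

The view would then reference this action's URL instead of the StyleBundle, and the save operation would write the compiled CSS into the cache and refresh the file on disk.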

When a page is rendered in the browser, the optimizer processes the bundled CSS and scripts and caches the result. Once the output has been cached, a re-request first checks the cached contents and only regenerates the bundle if they are not present. For LESS or Sass type CSS there are really only two options:
Turn off bundling
Use LESS/CoffeeScript/SCSS/Sass bundling

Related

Prevent IIS Recycling for specific change in files

I have a column in my grid view with images of a progress bar. These images are created on each render and written to my 'write' folder.
However, after Microsoft's patch KB3052480, IIS resets once files in the application's directory have been created, changed, or overwritten.
This can be changed in IIS's settings so that it never resets on update. However, this means the application would need to be restarted manually when any patch is applied (not an acceptable outcome).
Is there a way to keep the setting (so that IIS still resets on updates such as changes to .dll files) but still create and write images without it resetting?
I have looked around a lot but there is not much information on this particular issue.
What I was thinking is: somehow stop monitoring changes to the file right before the save takes place, and then resume monitoring again.
How would this be done, or is there another way to prevent IIS from recycling after this specific change?
To answer your question mentioned in the comment, which I think is your real question: to prevent the app domain from recycling on file save, don't put the files you are saving inside the website's folder. Instead, have them in some other path that is not part of the application.
I'm a bit late, but if you are using ASP.NET Framework you can store the "dynamic" files in App_Data; I think it's an exception to the recycle rule.
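For example, a minimal sketch of writing generated images under App_Data (the sub-folder name and helper are purely illustrative):

using System.IO;
using System.Web.Hosting;

// Hypothetical helper: App_Data is reportedly excluded from the change
// notifications that trigger an app-domain recycle, so generated files go here.
public static string SaveProgressImage(string fileName, byte[] imageBytes)
{
    string folder = HostingEnvironment.MapPath("~/App_Data/progress-images");
    Directory.CreateDirectory(folder);              // no-op if it already exists
    string path = Path.Combine(folder, fileName);
    File.WriteAllBytes(path, imageBytes);
    return path;
}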

Best way to track files being moved (possibly between disks), VB.NET (or C#)

I am developing a "dynamic shortcutting" application which creates special shortcut files which point to a registry entry rather than an actual file/executable. The registry entry contains the path of the desired file. I want to have a daemon running which watches the linked-to files and updates their registry entries if they are moved or renamed. Renamed I can handle using System.IO.FileSystemWatcher, but what is the best way to handle moved files?
I know this is beyond the basic functions of FSW (despite being a low-level file-system operation). The question is, what is the best way of doing it?
Most posts/articles I have read suggest ways that feel altogether "hacky", which basically involve looking for a delete followed by a create in a new place of a file, and connecting the two by file size, meta-data, time between the delete/create triggers, hashes, etc. This may well be the method I have to resort to, setting up FSWs on all drives. However, I am hoping there might be a better way.
Is it possible to either:
2.1. Listen in to the shell and "hear" move operations?
2.2 Or (even more radical) replace or add something to the shell move operation that either triggers some sort of event or performs the registry-updating task itself, precluding the need for the daemon?
I have a feeling that everyone is going to tell me that 1. is the only course, but I look forward to your suggestions. (answers in VB.NET preferred, but can translate from C# if necessary).
[I'm not sure if this should be appended as an "update" to my original post or posted as a separate answer]
To sum up (all two of) the answers plus my own experimenting (to try to give a definitive answer to this question):
It seems the only high-level (.NET) solution is to use the FileSystemWatcher, which does not detect "move" out of the box (despite it being a low-level command). The FSW approach is non-trivial, comparatively resource-expensive, sloppy in places (i.e. using timers) and has its limitations and caveats. Nor does it provide a true reflection of "move" - it merely infers it from symptoms that are very likely to be a move (and have the same effect on the file system in any case) but could theoretically be produced by non-move actions. Also, it appears you have to know what files you want to watch for moves in advance of the move happening; there's no way of telling as it occurs.
On a lower-level (which would involve C++), one could hook API calls to get a faithful picture of when "moves" are called. This has the advantage that you don't have to decide to watch files in advance, and is also less resource-expensive than listening to "deletes" and "creates" and trying to compare them.
On a systems-programming level (which would involve C++ and could easily break your computer if you didn't know what you were doing) one could build a filesystem filter driver: this would take the concept of detecting moves to a truly anal level, detecting re-allocation of filesystem resources performed even without the kernel.
After some experimenting, here is the general structure of how the FileSystemWatcher approach (or at least the most obvious one to me) works, its quirks and its limitations. [no code atm, it's all pretty integrated into my application and I'm yet to optimise it, but I might add some snippets in here later].
The FileSystemWatcher method (to detect when files are moved or renamed):
1. FileSystemWatchers.
You will need to create one FSW for each highest-level directory you want to monitor (for example, one for each writable logical drive).
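A minimal sketch of that setup (the OnRenamed/OnDeleted/OnCreated handlers are placeholders for the logic described in the following points):

using System.IO;

public static class DriveWatchers
{
    public static void Start()
    {
        foreach (DriveInfo drive in DriveInfo.GetDrives())
        {
            if (!drive.IsReady || drive.DriveType != DriveType.Fixed)
                continue; // this sketch skips removable/unready drives

            var fsw = new FileSystemWatcher(drive.RootDirectory.FullName)
            {
                IncludeSubdirectories = true,
                NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName
            };

            fsw.Renamed += OnRenamed;  // straightforward renames (point 2)
            fsw.Deleted += OnDeleted;  // first half of an inferred "move" (point 3)
            fsw.Created += OnCreated;  // second half of an inferred "move" (point 3)
            fsw.EnableRaisingEvents = true;
        }
    }

    // Placeholder handlers; the real logic is described below.
    static void OnRenamed(object sender, RenamedEventArgs e) { }
    static void OnDeleted(object sender, FileSystemEventArgs e) { }
    static void OnCreated(object sender, FileSystemEventArgs e) { }
}

Watching whole drives is noisy, so the internal buffer (FileSystemWatcher.InternalBufferSize) may need to be raised to avoid dropped events.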
2. Renamed.
Straightforward renaming of the file is trivially handled.
3. Moved.
This part is very far from trivial; it basically involves comparing files in three different scenarios.
3.0.1. Deciding if a deleted/moved-from file is the same as a created/moved-to file.
For determining whether a deleted and a created file are a match, filename is useless (can be changed during a move). You could use a mixture of file size and attributes like time created, or even a hash of the entire file. In my particular solution I only needed to watch the movement of specific files "registered" before load-time, so I was able to give these files a unique fingerprint as metadata that I could then use to compare files (this works fine in real-world scenarios, but is easy to break maliciously in testing, which disappoints me as a perfectionist.)
3.0.1.1. When to read filesize/attributes/take hash?
Before I came up with the static fingerprint idea, I was testing my code with a simple filesize + creation date validation check. I quickly realised though that I had to have a note of the filesize and creation date (or hash or whatever else you want to use) of the deleted file BEFORE it signals as "deleted", because you can't check the size of a file that doesn't exist. If (like me) you know the files you want to watch in advance, then you need to read in those values before you enable the FileSystemWatchers; you also need to listen for "change" events on those files to update the values of filesize and creation date, take a new hash etc. This then begs the question: what do you do if you DON'T know what files you are interested in watching to see if they move? What if you only know you are possibly interested in knowing if they've moved when they "delete"? That, unfortunately, is beyond me (it wasn't something I had to deal with.) Unless you can come up with a solution to this problem, there is zero point in continuing with the FileSystemWatcher approach. Furthermore, I would conjecture (though could very easily be wrong) that there is no high-level solution that will meet your needs. If you do however come up with a solution (please post it below/comment on this post/edit it in here on this post), I have made the rest of this compatible.
3.1. Scenario 1: Direct moving of the file itself.
Upon the "delete" of a specific file being detected, you need to start listening for a "create" of a congruous file. Rather than listening indefinitely for the matching "create" of a file that might just have been deleted (which in reality involves inspecting every file created in the directory), you can use a timer to start and stop a "listening" flag (practical, but from a purist point of view a little arbitrary), deciding that after e.g. 1000ms with no appropriately matching create it's likely there won't be one.
3.2.0. A common misconception.
A lot of people seem to be under the impression, after glancing at the docs, that moving or renaming a folder triggers a rename for all their subfiles and subfolders rather than a delete and a create. In actual fact what the docs say is:
If you cut and paste a folder with files into a folder being watched, the FileSystemWatcher object reports only the folder as new, but not its contents because they are essentially only renamed.
(i.e. only the top folder throws rename or create/delete and the subfiles/subfolders throw NOTHING). Meaning if you want to know when and where a certain file is moved, you have to listen out for each and every of its ascendent folders as well.
3.2.1. Scenario 2: Renaming of a containing folder.
In my solution, because I knew all the files I was watching, whenever one of my FileSystemWatchers reported a rename of a folder rather than a file (the portion of the string after the last "/" will contain no ".") I checked each of my watched files to see if their paths were in that directory and, if so, changed the beginning of the filepath to the path of the new directory and voilà, I knew where my files had been moved to. If you do not know in advance what files you are looking for, then you will have to recursively search through everything in every folder that throws a "rename".
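As a sketch of that path fix-up (watchedFiles is a hypothetical collection of tracked file paths):

using System;
using System.IO;

private void OnFolderRenamed(object sender, RenamedEventArgs e)
{
    // Heuristic from above: no "." after the last separator => treat it as a folder rename.
    if (Path.GetFileName(e.FullPath).Contains(".")) return;

    string oldPrefix = e.OldFullPath + Path.DirectorySeparatorChar;
    string newPrefix = e.FullPath + Path.DirectorySeparatorChar;

    foreach (var file in watchedFiles)   // hypothetical collection of tracked paths
    {
        if (file.Path.StartsWith(oldPrefix, StringComparison.OrdinalIgnoreCase))
            file.Path = newPrefix + file.Path.Substring(oldPrefix.Length);
    }
}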
3.2.2. Scenario 3: Moving of a containing folder.
This one feels like a slap in the face: in order to build your move-detection routine, you have to be able to detect moves. Here folders will throw a "delete" followed by a "create". In my case the solution just recycles the techniques in 3.1 and 3.2.1: when a folder "delete" is detected, I check to see if it contains any of my watched files. If it does, I set a "listen" flag (and a timer to snuff it) and check the subdirectory path of my file in the old folder against every new folder "create" that is detected to see if it points to a file with the desired fingerprint. If it does, I now have the old and new paths of the file and have detected the move. If you don't know what files to watch for, you may have to validate folder moves by comparing size on disk and number of subfiles/subfolders between "deleted" folder and "created" folders to confirm a folder has moved first, then search the folder recursively for the files you're interested in.
3.3. FURTHER COMPLICATION: Cross-drive moving of large files.
This is a problem I fortunately didn't run into (because I was only comparing fingerprint metadata, and didn't need access to files); however moving large files between drives (which transfer in stages, triggering a create event then a series of change events) can cause real headaches.
3.3.1. Headache 1: The "create" fires when the destination file is incomplete.
This means comparing its size to a "deleted" file will produce a false negative. You can't even take a hash of the first part of the file to indicate to your program that this "might" be the deleted file, because the move operation will have the file access permissions locked down. You just have to try and tell if the created file might still be moving and wait for it to finish.
3.3.2. Headache 2: No sure way to "tell" that the created file is still being moved.
Some have suggested checking the file access permissions on the created file, but they might be indistinguishable from those on a file created and still in use by any random application. Others have suggested setting short time-limited listen flags for "changes" on the file, but again this is indistinguishable from a file being modified by an application. In fact if the file happened to be a log file constantly and rapidly being updated by some process, then waiting for "changes" to the file to timeout might never end.
3.3.3. Headache 3: (UNTESTED) possibly these sorts of moves "delete" the file only after "creating" the destination file.
It makes sense that this would be the case, though I haven't tested it. [if anyone does know, feel free to edit (or delete) this section appropriately]
3.4. A philosophical quandary: are two identical files the same?
This is a very pedantic and arbitrary thought-experiment, but say you have two drives, each with an identical copy of File.txt. You run a batch file that deletes the copy on the first drive then immediately makes a copy of the file on the second drive into the same folder on the second drive and names it Copy of File.txt. Unless you are using fingerprints, your code will identify a delete and then a create of an identical file and be unable to distinguish what happened from a move (with renaming) of the file from the first drive to the second. The final state of the filesystem is identical in both cases so it shouldn't cause your application to behave unexpectedly, but art thou really content to call that a "move" based purely on isomorphism? (especially when you know the kernel sees it differently)?
Using the high-level, unrestricted API provided by C#: no, you can't. Use FileSystemWatcher. On the same drive a move operation is not a "delete and create" - it's a "rename".
If you can/want to go lower-level, then you can hook MoveItem and MoveItems of the shell's IFileOperation interface, and MoveFile from kernel32.dll... It will work with most apps, but it requires expanded security rights for your application, which is mostly unacceptable in a corporate environment.
The task has two flaws that make it hard to implement: (a) a move operation across disks is actually a sequence of read/write operations followed by a deletion rather than a move, and during those read/write operations there can be some transformation of the data in place; and (b) moving can be performed by more than just the shell.
What you can do is employ a filesystem filter driver to intercept file operations right when they take place. Then you need to detect the sequence of read and write operations performed by the same process over your file. I.e. if your code detects that the file is read sequentially (NOTE: some copying tools can read the file in multiple threads in parallel), that similar blocks of data are written to the other file, AND that after everything has been read the source file is deleted AND the complete file contents have been written to the other place, then you can guess that you have come across a file move operation.
Bump & update: This may well be against the rules of StackOverflow, but I would like to point out to the many people landing on this page (and the myriad similar questions on SO) that I have started a feature request on MicroSoft UserVoice to add MOVE detection to FileSystemWatcher. The best solution in the long term, rather than trying to work around the problem, might be to petition MicroSoft to fix it. If you have come here because you too need a solution to this problem, please consider clicking here and voting for this feature.

Do Awesomium WebSessions share disk cache?

Using Awesomium.NET 1.7 RC3, if I create a WebSession and a WebView in my application like so:
var webSession =
WebCore.CreateWebSession("C:\\AwCache", new WebPreferences{...});
var webView =
WebCore.CreateWebView(500, 500, webSession);
...and then exit the app, will the cached data (images, css etc.) be available the next time my app starts and creates a WebSession using the same location for the cache?
I believe the cache will still be available. While most of my experience with caching was in Awesomium 1.6.6 and was done by setting the WebCoreConfig.UserDataPath property when calling WebCore.Initialize(), a little testing hints that it is still available.
If you look at the files created when you first run your code and access a web page (I chose Flickr just so there would be a reasonable amount of images on the page), you'll see that inside your AwCache folder, there's another folder called 'Cache'. This folder contains 4 'data_X' files, an index file and a number of 'f_XXXXXX' files. One other thing worth noting is how quickly those files are generated on the first app run. When you rerun the app, no new files are created as long as you're visiting the same URL, but the time stamp on the data_X files, the index files, and maybe a couple of the f_X files get updated, but many f_X files remain the same. The file changes also happen very quickly.
I believe the f_X files are the actual cached items from the site, as visiting a different site will result in an increasing number of f_X files, while revisiting the same site will not.
Obviously, this is far from a definitive answer, but based on these observations it seems apparent that the cache is maintained. One final piece: if you look at the Awesomium 1.7 documentation, CreateWebSession(WebPreferences) specifies in bold that it uses an in-memory cache, whereas the CreateWebSession(string, WebPreferences) method that you are calling does not.

Minifying and combining files in .net

I am looking at implementing some performance optimization around my javascript/css. In particular looking to achieve the minification and combining of such. I am developing in .net/c# web applications.
I have a couple of options and looking for feedback on each:
The first one is this clever tool I came across, Chirpy, which combines, minifies etc. via Visual Studio -> http://chirpy.codeplex.com/ This is a Visual Studio add-in, but as I am in a team environment, this tool isn't ideal.
My next option is to use an Msbuild task (http://yuicompressor.codeplex.com/) to minify the files and also combine them (maybe read from an xml file what needs to be combined). While this works for minifying fine, the concern I have is that I will have to maintain what must be combined which could be a headache.
3rd option is to use msbuild task just for the minifying and at runtime using some helper classes, combine the files on a per page basis. This would combine the files, give it a name and add a version to it.
Any other options I could consider? My concern with the last option is that it may have performance issues, as I would have to open the files from the local drive, read their contents and then combine them. That is a lot of processing at run time. I was looking at something like SquishIt - https://github.com/jetheredge/SquishIt/downloads This minifies the files at run time, but I would look at doing this at compile time.
So any feedback on my approaches would be great. If the 3rd option would not cause performance issues, I am leaning towards it.
We have done something similar with several ASP.NET web applications. Specifically, we use the Yahoo Yui compressor, which has a .NET library version which you can reference in your applications.
The approach we took was to generate the necessary merged/minified files at runtime. We wrapped all this logic up into an ASP.NET control, but that isn't necessary depending on your project.
The first time a request is made for a page, we process through the list of included JS and CSS files. In a separate thread (so the original request returns without delay) we then merged the included files together (1 for JS, 1 for CSS), and then apply the Yui compressor.
The result is then written to disk for fast reference in the future
On subsequent requests, the page first looks for the minified versions. If found, it just serves those up. If not, it goes through the process again.
As some icing to the cake:
For debug purposes, if the query string ?debug=true is present, the merged/minified resources are ignored and the original individual files are served instead (since it can be hard to debug optimized JS)
We have found this process to work exceptionally well. We built it into a library so all our ASP.NET sites can take advantage. The post-build scripts can get complicated if each page has different dependencies, but the run-time can determine this quite easily. And, if someone needs to make a quick fix to a CSS file, they can do so, delete the merged versions of the file, and the process will automatically start over without need to do post-build processing with MSBuild or NAnt.
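For reference, a minimal sketch of the merge-then-compress step using the YUI Compressor for .NET library (the exact compressor API differs between versions, so treat the call below as an assumption):

using System.Collections.Generic;
using System.IO;
using System.Text;
using Yahoo.Yui.Compressor; // YUI Compressor for .NET

public static class ScriptBundler
{
    // Merge the listed JS files, minify the result and write the cached bundle to disk.
    public static void BuildBundle(IEnumerable<string> jsFiles, string outputPath)
    {
        var merged = new StringBuilder();
        foreach (string file in jsFiles)
            merged.AppendLine(File.ReadAllText(file));

        // Some versions expose a static JavaScriptCompressor.Compress(string);
        // others use an instance method as shown here.
        string minified = new JavaScriptCompressor().Compress(merged.ToString());

        File.WriteAllText(outputPath, minified);
    }
}

The CSS side works the same way with CssCompressor.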
RequestReduce provides a really nice solution for combining and minifying javascript and css at run time. It will also attempt to sprite your background images. It caches the processed files and serves them using custom ETags and far future headers. RequestReduce uses a response filter to transform the content so no code or configuration is needed for basic functionality. It can be configured to work in a web farm environment and sync content accross several servers and can be configured to point to a CDN. It can be downloaded at http://www.RequestReduce.com or from Visual Studio via Nuget. The source is available at https://github.com/mwrock/RequestReduce.
Have you heard of Combres?
Go to http://combres.codeplex.com and check it out.
It minifies your CSS and JS files at runtime, meaning you can change any file and upload it, and on each client request it is minified again.
All you have to do is add the files you want to compress to a list in the Combres XML file and reference that list from your page/master page.
If you are using VS2010 you can easily install it on your project using NuGet.
Here's the Combres NuGet link: http://combres.codeplex.com/wikipage?title=5-Minute%20Quick%20Start
I did a really nice solution for this a couple of years back, but I don't have the source left. The solution was for WebForms, but it should be fine to port it to MVC. I'll try to explain what I did in a few simple steps. First we need to register the scripts, and we wrote a special controller that did just that. When the controller was rendered it did three things:
Minimize all the files, I think we used the YUI compression
Combine all the files and store as string
Calculate a hash for the string of the combined files and use that as a virtual filename. You store the string of combined files in a cached dictionary on the server with the hash value as key, the html that is rendered needs to point to a special folder where the "scripts" are located.
The next step is to implement a special HttpHandler that handles requests for files in the special folder. When a request is made to that special folder, you make a lookup in the cached dictionary and basically return the string.
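A bare-bones sketch of that handler idea (the dictionary population and hash naming happen when the controller renders, as described above; the content type and folder mapping are assumptions):

using System.Web;

// Handles requests like /CombinedScripts/<hash>.js and returns the cached string.
public class CombinedScriptHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // The "virtual file name" is the hash computed when the bundle was built.
        string key = VirtualPathUtility.GetFileName(context.Request.Path);
        string script = context.Cache[key] as string;

        if (script == null)
        {
            context.Response.StatusCode = 404;  // bundle not built yet or cache expired
            return;
        }

        context.Response.ContentType = "application/javascript";
        context.Response.Write(script);
    }
}

The handler would then be mapped to the special folder in web.config.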
One really nice feature of this is that the returned script is always valid so the user will never have to ask you for an update of the script. The reason for that is when you make a change to any of the script files the hash value will change and the client will ask for a new script.
You can use this for CSS files as well with no problems. I remember making it configurable so you could turn off combining files, turn off minifying files, or just exclude one file from the process if you wanted to do some debugging.
I might have missed some details, but it wasn't that hard to implement and it turned out very well.
Update: I've implemented a solution for MVC and released it on nuget and have the source up on github.
Microsoft's Ajax Minifier is surprisingly good as a minification tool. I wrote a blog post on combining files and using their minifier in a JavaScript and stylesheet handler:
http://www.markistaylor.com/javascript-concatenating-and-minifying/
It's worthwhile combining the files at run time to avoid having to synchronise new versions. However, once they are programmatically combined, cache them to disk. Then the code which runs each time the files are fetched need only check that the files haven't changed before serving the cached version.
If they have changed, then the compression code can run as a one-off.
Whilst there will be a slight performance cost, you will also receive a performance benefit from fewer file requests.
This is the approach that the Minify tool uses to compress JS/CSS, which has worked really well for me. It's Linux/PHP only, but you might get some more ideas there too.
I needed a solution for combining/minifying CSS/JS in a .NET 2.0 web app, and since SquishIt and the other tools I found weren't .NET 2.0-compatible, I created my own solution that uses a syntax similar to SquishIt's but is compatible with .NET 2.0. Since I thought other people might find it useful, I put it up on GitHub. You can find it here: https://github.com/AlliterativeAlice/simpleyui

How to handle temporary files in an ASP.NET application

Recently I was working on displaying workflow diagram images in our web application. I managed to use the rehosted WF designer and create images on-the-fly on the server, but imagining how large the workflow diagrams can very quickly become, I wanted to give a better user experience by using some ajax control for displaying images that would support zoom & pan functionality.
I happened to come across the website of seadragon, which seems to be just an amazing piece of work that I could use. There is just one disadvantage - in order to use their library for generating deep zoom versions of images I have to use the file structure on a server. Because of the temporary nature of the images I am using (workflow diagrams with progress indicators), it is important to not only be able to create such images but also to get rid of them after some time.
Now the question is how I can best ensure that the temporary image files and folder hierarchy can be created on the server (ASP.NET web app) and later cleaned up. I was thinking of using the cache functionality and, on expiration of the cache item, deleting the corresponding image folder hierarchy, or simply deleting the contents of the whole temporary folder in the Application_Start and Application_End of Global.asax, but I'm not really sure whether this is a good idea and whether there are some security restrictions or file-system-related troubles. What do you think?
We do something similar for creating PDF reports and found the easiest way is to use a timestamp check to determine how "old" files are, and then delete them based on a period of time, in our case more than 2 hours old. This is done before the next PDF document is created, but as part of the creation process. We also created a specific folder and gave the ASP.NET user read/write access to it.
The only disadvantage is that if the PDF-creation process is not used regularly there will be a build-up of files; however, they will be cleaned up eventually. In 2 years and close to 4000 PDFs we have yet to have an error doing it this way.
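That cleanup step can be as small as the following sketch (the folder path is illustrative; the 2-hour window is the one described above):

using System;
using System.IO;
using System.Web.Hosting;

// Run just before generating the next PDF.
public static void DeleteOldPdfs()
{
    string folder = HostingEnvironment.MapPath("~/App_Data/GeneratedPdfs"); // hypothetical path
    foreach (string file in Directory.GetFiles(folder, "*.pdf"))
    {
        if (File.GetLastWriteTimeUtc(file) < DateTime.UtcNow.AddHours(-2))
        {
            try { File.Delete(file); }
            catch (IOException) { /* still being served; it will be picked up next run */ }
        }
    }
}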
Use the App_Data folder. This folder is inside your application and writable by your app without having to go outside the context of the app, but it's also secured from casual browsing. It's meant to hold data files for your application.
Application_Start and Application_End will only fire once each, so if you need better cleanup than that, I would consider using a cache structure or a simple windows service to handle the cleanup.
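If you go the cache route, here is a sketch of tying a folder's lifetime to a cache entry via its removal callback (the key prefix and lifetime are illustrative):

using System;
using System.IO;
using System.Web;
using System.Web.Caching;

public static class TempFolderCache
{
    // When the cache entry expires, the callback deletes the temporary image folder.
    public static void RegisterTempFolder(string folderPath, TimeSpan lifetime)
    {
        HttpRuntime.Cache.Insert(
            "tempfolder:" + folderPath,
            folderPath,
            null,                               // no dependency
            DateTime.Now.Add(lifetime),         // absolute expiration
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,     // don't evict early under memory pressure
            (key, value, reason) => Directory.Delete((string)value, true));
    }
}

Note that the callback will not fire if the app domain is torn down first, so a sweep of the temporary root on startup is still a sensible backstop.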
First, you have to make sure your IIS worker process has rights to write/delete files in your cache directory (and NOT the rest of your site, just in case).
Second, I would stay away from using Application_Start and Application_End; Application_End is not 100% guaranteed to fire to clean up files, and you could end up with a growing pile of orphaned images.
I would instead make a scheduled process that runs maybe once per hour, or once a day, depending on what you want, and have it check how old each image in your cache is; if it's older than your arbitrary "expire time", delete it.
Other than that there's not much to it.
