I am looking for the best solution to make it easy to add new languages to an ASP.NET website without rebuilding or redeploying the existing code base.
From what I have researched, it seems that you can compile resource files on the fly into satellite assemblies, but I am uncertain how to make the application use these DLLs once they are generated.
The other option I have seen is to store the translations in the Database, and write a custom ResourceProvider so that the built-in localization methods can be used, whilst abstracting the actual implementation (in this case a database).
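From what I can tell, the provider route would mean subclassing ResourceProviderFactory and implementing IResourceProvider, roughly like the sketch below (the database lookup is just a placeholder, and the factory would be registered via the resourceProviderFactoryType attribute of <globalization> in web.config):

```
// Rough sketch only: LookUpTranslation is a hypothetical data-access helper.
using System.Globalization;
using System.Resources;
using System.Web.Compilation;

public class DbResourceProviderFactory : ResourceProviderFactory
{
    public override IResourceProvider CreateGlobalResourceProvider(string classKey)
    {
        return new DbResourceProvider(classKey);
    }

    public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
    {
        return new DbResourceProvider(virtualPath);
    }
}

public class DbResourceProvider : IResourceProvider
{
    private readonly string _resourceSet;

    public DbResourceProvider(string resourceSet)
    {
        _resourceSet = resourceSet;
    }

    public object GetObject(string resourceKey, CultureInfo culture)
    {
        culture = culture ?? CultureInfo.CurrentUICulture;
        // Hypothetical helper: SELECT value FROM translations WHERE ...
        return LookUpTranslation(_resourceSet, resourceKey, culture.Name);
    }

    // Implicit expressions (meta:resourcekey) read keys through this property;
    // a real implementation needs an IResourceReader over the database rows.
    public IResourceReader ResourceReader
    {
        get { throw new System.NotImplementedException(); }
    }

    private static object LookUpTranslation(string set, string key, string culture)
    {
        return null; // placeholder for the actual query
    }
}
```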
Either way, the front end for this site will be the same (meta:resourcekey for the controls etc).
But I am struggling to decide which approach will be the easiest to maintain. For example, does publishing a new satellite assembly restart the application pool, or does everything tick over nicely?
EDIT
The translations will be provided by a third-party API, so ease of maintenance for human translators is not a concern. I thought I would add this given the answers received.
With ASP.NET you (currently) do not have to compile anything yourself: you can simply deploy .resx files (to the App_LocalResources or App_GlobalResources folder) and ASP.NET will take care of compiling them into satellite assemblies. That's cool.
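For example, with Default.aspx.resx and Default.aspx.de.resx deployed to App_LocalResources, a page picks up its strings with no compile step on your part ("PageTitle" is just an example key):

```
// Code-behind of Default.aspx: ASP.NET resolves the value from the
// satellite assembly it generated for the current UI culture.
protected void Page_Load(object sender, System.EventArgs e)
{
    Page.Title = (string)GetLocalResourceObject("PageTitle");
}
```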
With the database approach, you risk synchronization problems: how will you know whether a given resource string has been translated? Correcting entries is also not very easy for translators/localization engineers, and you would need to prepare an "install" script anyway. If that is what you are going to hand to translators, good luck: you would face plenty of over-translations and would have to correct them (manually?).
.resx files (being simple XML) are a bit easier to validate (a file either is valid XML in terms of the given XSD or it is not). Besides, it is the standard technology, so you won't need to implement anything yourself. That is why I would recommend it.
EDIT
Another issue with database-driven resources could be character encoding: you would need to create your own translation kit, and in my experience the results can come back in several different encodings, especially if you use plain text files. The default encoding of XML files, on the other hand, is UTF-8...
RESX
Having 30+ languages in my Windows Forms and Web Forms application (this one, if I'm allowed to place a link), I ultimately had the most success with plain RESX files in App_LocalResources.
What I discovered, though, was that compilation was extremely slow in VS.NET, so I used a slightly modified approach:
Have only English RESX files in the VS.NET solution.
Have a shadow structure of the website, containing only the App_LocalResources files for all languages (including English), in a separate folder not visible to VS.NET.
Write a simple CMD script to copy the real English resources to the separate folder.
Use my free tool Zeta Resource Editor to actually translate inside the separate folder.
Have a publish script that copies the real web files (ASPX, ASAX, MASTER, etc.) to the website and also copies the resources to the website.
This approach makes compilation really fast and still allows me to keep compilation and translations separated.
The drawback is that the first request to the live web application takes rather long to compile; so far I have found no way to speed this up or precompile (although I do believe that it is possible).
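(If it is possible, then presumably via the framework's aspnet_compiler tool; the paths below are only examples, and I have not yet managed to verify this against my setup:)

```
aspnet_compiler -v / -p "C:\inetpub\mysite" "C:\inetpub\mysite_precompiled"
```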
Database
I also did some projects with localization in a database and custom <%#...%> blocks to load the strings.
Today I would vote against this, as it is non-standard, although compilation would probably be just as fast whether 1 or 50 languages are involved.
3rd Party Tools
You could also get a commercial product to do the translation; if I could have afforded it, I most likely would have done this, too.
Just my two cents...
Related
I'm designing a Visual Studio project that will use a potentially large number (up to ~300) of script files of various types. These files will not be shared with other projects, i.e. they are exclusive to the project. Is there any good argument against including all these files as embedded resources (by setting their build action) and then using Assembly.GetManifestResourceStream() to retrieve them at run time? Or would this be a misuse of embedded resources, and should a filesystem directory holding these resources as files be used instead?
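For concreteness, retrieval would look roughly like this (the resource name follows the DefaultNamespace.Folder.FileName convention; the names used here are made up):

```
using System.IO;
using System.Reflection;

static string ReadEmbeddedScript(string resourceName)
{
    // resourceName is e.g. "MyProject.Scripts.cleanup.sql" (example name)
    Assembly assembly = Assembly.GetExecutingAssembly();
    using (Stream stream = assembly.GetManifestResourceStream(resourceName))
    using (StreamReader reader = new StreamReader(stream))
    {
        return reader.ReadToEnd();
    }
}
```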
The benefits of using embedded resources appear to be:
the produced assembly is easy to share and deploy (single file deployment without having to worry about missing script files or invalid paths);
isolation and convenience: all the resources can be accessed from within the VS project.
The arguments against this approach are usually:
you can't modify the resources without recompiling and redeploying the assembly. However, even for a project with many text-based resources, the resulting assembly will be rather small and just as easy to redeploy as overwriting e.g. a single filesystem-located script file would be. And theoretically it should be possible to replace just the changed part of the assembly (hence a smaller update), but I'm not aware of any such diff/merge tools.
The produced DLL will be larger, perhaps significantly so; but then again, the total size of the program to deploy would be the same if you created a lean assembly without embedded resources and deployed the resources separately to a directory.
Are there other considerations? More generally, is there a reason that project resources - regardless of what they are - should not be included as embedded resources, other than the required assembly recompilation in case of modification?
I can give you some insights from a complex environment perspective.
If your application is anywhere near critical or significant to your organisation and you need to minimize incident response time, then it is of course better to have all scripts as separate files. Even though it might be very easy to recompile your assembly, in a structured corporate environment a hotfix release usually requires a number of hoops to jump through, even in an emergency. Besides, recompiling requires a dev person, while a support person should be good enough for changing a script file.
Another consideration is whether (at least some) scripts run properly when streamed from resources: they may need a place to write intermediate or result data, and there might be dependencies between scripts (one calling another, etc.).
One other factor is that having resources separate allows for quick review when you do not have access to project source. This adds some transparency to your application (which might be desired or not). It also might be useful to help determine what is happening with your application in case of problems and potentially make a quick change/fix (somewhat similar to my first point).
Generally I would say it depends on your requirements. If you need to be able to make frequent changes to your scripts (or other non-compilable resources), then having them separate is much better. If they don't change too often and you like a neat, simple, and compact file structure, then embedding is a good choice.
If this is a web project that you are going to run only on your own host, then it is better not to embed the resources but to keep them as regular files (you install the project only once, and small updates stay easy).
If you want to create a DLL that may be reusable in other projects, then it's better to use embedded resources: when you make an update, you only have to replace one DLL in each project.
I am looking for best practices for a portable C# project that runs on multiple platforms. In my case I have different wrapper DLLs for each platform providing interfaces/classes, etc. While they are the same on each platform, I still need to reference the corresponding DLL for each platform. What are best practices in such a case?
I could use conditional references in the C# project file (with the technique described here). Then I would need to know whether I am on Linux, Windows, or OS X. How would I do that?
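The closest I have found so far for the runtime side is a check like the following, though I would still need to verify it on each platform (Mono historically reported the value 128 for Unix, and OS X often reports Unix rather than MacOSX):

```
using System;

static class Platform
{
    // 4 and 6 are PlatformID.Unix and PlatformID.MacOSX;
    // 128 is what very old Mono versions reported for Unix.
    public static bool IsUnixLike
    {
        get
        {
            int p = (int)Environment.OSVersion.Platform;
            return p == 4 || p == 6 || p == 128;
        }
    }

    // Good enough for desktop platforms, at least.
    public static bool IsWindows
    {
        get { return !IsUnixLike; }
    }
}
```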
Another option is to create a separate project for each platform. But then I have code redundancy, because I have to implement each interface/inheritance once per platform. While the implementations are identical, they derive from what are de facto different types coming from different assemblies (even though they are the very same when you use them).
What are possible strategies in this case and what are the advantages or downsides?
I have done something like that some time ago. I do not know whether my way should be considered best practice, but it worked for me:
I did opt for the "separate project for each platform" approach (only MS-.net and MonoDroid for me at the moment), but worked around code duplication by simply putting both .csproj files into the same folder. I then added the code files used in the first project to the second one.
I tiptoed around any inconsistencies in my code's dependencies by inserting conditional compilation blocks, guarded by symbols which I then defined in the corresponding .csproj.
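For illustration, a conditional block looks like this (MONODROID is simply the symbol I defined in the Mono for Android .csproj):

```
public static class Logger
{
    public static void Write(string message)
    {
#if MONODROID
        // Only compiled in the MonoDroid project, which defines MONODROID
        Android.Util.Log.Info("MyApp", message);
#else
        // Only compiled in the desktop .NET project
        System.Diagnostics.Debug.WriteLine(message);
#endif
    }
}
```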
The only notable downside of this approach is that Visual Studio complains frequently when you reopen a file (by double-clicking it in the project explorer) in the second project while it is already open via the first one (implying both are added to the same solution, as I did). But these notifications seemed to do no harm, and I was able to happily code away.
Imho this slight annoyance is preferable to the trouble it would cause to have the same code in two places. Maybe this could be mitigated by keeping the code together in a proper source control system like Git (e.g. on two different branches), but as the multi-csproj approach worked so well for me, I did not try anything down that route.
I'm inheriting a web application and the previous programmer compiled all his code into a .dll. The .cs files are not present on the server.
Working on previous projects, I've always uploaded the .aspx file and the corresponding .cs file. It's never been a problem for me and I always thought it was standard procedure. Am I wrong or just paranoid?
Will, I think it is quite common to keep code precompiled into a DLL; the code is then less exposed to potential security holes. Precompiling also provides many advantages, including faster initial response time, error checking, source-code protection, and efficient deployment. This is particularly important on large sites where Web pages and code files change frequently.
Leaving source code as a part of the project isn't necessarily the best source code management process. There are tools for that.
Also, precompiling source code isn't out of the ordinary (this is a Web Application project rather than a Web Site project in Visual Studio), and has many benefits.
Note that this doesn't make you wrong or paranoid.
There are good reasons for both strategies; you just have to figure out what is going to work best for your environment and for the application.
In some ways it is good to have the code precompiled if you worry about someone accidentally making a change on the server without checking it into source control. With non-precompiled code, if you don't have change control on your server, it can be hard to figure out who "accidentally" made a change, and why, without it being checked in.
On the other side, if you don't precompile, it can make deployment more straightforward.
Just do a little research behind both strategies and decide what is going to work best in your situation.
As Nader pointed out, in a Web Application you don't need the CS files at all. There is not a huge risk of the source files being served accidentally, as protecting these files is a core function of IIS request management. Still, it is generally good practice not to deploy them to a production web server.
In any case, source files should at the bare minimum always be backed up in a location that is not the web server and should be source controlled whenever possible. I have seen too many websites where the source files were lost and the site was useless as a result.
Like everyone above has said, compiling source code into DLLs is considered best practice.
If you'd like to see the code of the DLLs you've been left with, there's a handy (and free!) tool called Reflector (apologies if you've already got it)
http://www.red-gate.com/products/reflector/
Just load up the DLL and then disassemble to view the source.
Web Application Projects compile into .dlls and leave no source on the server.
Web Site Projects deploy all the source to the server.
It's a religious war as to which is best. Google will present you with many varied opinions, so I won't press my own opinions on you.
Is it ok to roll your own localization framework? I would be ok using the default .NET localization behavior (i.e., putting text in resource files named a certain way in the assembly), except that we have localized images and text that need to be rendered in DirectX in addition to WinForms and WPF.
I could put form-specific strings in one place and other strings somewhere else, but I think that it makes more sense to keep everything in one place, not to mention it will help to avoid duplicates (for domain values like Yes/No, etc.). It's also possible we may be moving this tool to another platform in the future, so it would be nice to have all the localization information in one platform-agnostic area.
I realize this is a little subjective, but I'm looking for a best practice here...I've worked on projects that take both approaches. Any thoughts?
I have developed systems in which localisation is implemented via database-stored data and metadata. If your app already makes intense use of a fast database backend, you could create a database-backed localisation layer and use it to store localised information, including textual and non-textual data. It has worked great for us on a few occasions.
Edit. The details won't fit in here, but basically we mirrored the logic of the key/value resource manager that the Windows API or .NET uses, and extended it by allowing resources to be grouped into groups, which can be nested arbitrarily. Resource names can be given as, for example, "ClientManagement.MainForm.StatusBar.ReadyMsg", meaning the ready-message text to display on the status bar of the main form in the client management user interface. On app startup, a locale setting is read from a config file and a resource manager is initialised with it; all subsequent calls to the resource manager use that locale setting until it is explicitly changed. We also built an administrative user interface that allowed us to edit the resources stored in the database, and even add new languages. A final comment: the data to be localised is not only labels and icons on screen; option values in combo boxes, for example, also need to be localised.
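In spirit, the lookup side boiled down to something like this sketch (the names and the database helper are illustrative, not our actual code):

```
using System.Collections.Generic;
using System.Globalization;

// Illustrative sketch of the dotted-path lookup described above;
// LoadFromDatabase stands in for the real data access.
public class DbResourceManager
{
    private readonly Dictionary<string, string> _cache;

    public DbResourceManager(CultureInfo locale)
    {
        _cache = LoadFromDatabase(locale); // all resources for this locale
    }

    // e.g. GetString("ClientManagement.MainForm.StatusBar.ReadyMsg")
    public string GetString(string dottedPath)
    {
        string value;
        return _cache.TryGetValue(dottedPath, out value) ? value : dottedPath;
    }

    private static Dictionary<string, string> LoadFromDatabase(CultureInfo locale)
    {
        // placeholder: SELECT path, value FROM resources WHERE culture = @locale
        return new Dictionary<string, string>();
    }
}
```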
We implemented localization using a DB backend. We were able to create a great resource editor that allows "translator" end users to update translations dynamically (you cannot do that with a .resx!). We were also able to support an approval process and to group translations by module, such that an entire module could be approved for use in a language, or not.
We also decided to implement the localization provider for ASP.NET, which basically does 'automatic' localization with no code by the developer. This was actually the only difficult part of the project, as the interface is not well documented; it was hard to debug because it actually runs within the Visual Studio host process. We used a web service to decouple the implementation, which greatly simplified things. Another good thing is that the translations are automatically cached, so the DB is not working as hard. A bad thing is that when your translation service/back end is down, and you do not precompile your ASP.NET web site, then when a user hits a 'new' page the compiler might decide NOT to translate the page. This behaviour persists (even after the translation service starts up again) until you force a recompile of the site.
We're coming up to a big release in a web project I work on, and I'm starting to think more and more about JavaScript/CSS performance and versioning. I've got a couple of high-level ideas so far:
Write (or customize) an HTTP handler to do this for me. This would obviously have to handle caching as well, to justify the on-the-fly IO that would occur (a rough sketch of what I have in mind appears below).
Add these steps to a custom MSBuild script that would be run for deployment only.
I'm also looking at automatically generating config files for each of the servers I deploy to, which lends itself to the second idea. The major advantage I see with the first idea is that I could handle versioning dynamically (at least that's what one of my links at the bottom claims; I've yet to convince myself that this would actually work).
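To make the first idea concrete, I'm picturing something along these lines (the handler name, the query-string format, and the css folder are all invented for the sketch; a real version would also need minification and path validation, and would be registered under a path like combine.axd in web.config):

```
using System;
using System.IO;
using System.Text;
using System.Web;

public class CombineHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. /combine.axd?files=reset.css,layout.css
        string[] files = (context.Request.QueryString["files"] ?? "").Split(',');
        string cacheKey = "combined:" + string.Join(",", files);

        string combined = context.Cache[cacheKey] as string;
        if (combined == null)
        {
            var sb = new StringBuilder();
            foreach (string file in files)
            {
                // a real handler must validate 'file' against path traversal
                sb.AppendLine(File.ReadAllText(context.Server.MapPath("~/css/" + file)));
            }
            combined = sb.ToString(); // minification would happen here
            context.Cache.Insert(cacheKey, combined);
        }

        context.Response.ContentType = "text/css";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Write(combined);
    }
}
```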
Anyway, I’m curious if any of these problems have already been solved. I’d love any feedback. Thanks!
Here are some resources that I've been looking at so far:
http://madskristensen.net/post/Combine-multiple-stylesheets-at-runtime.aspx
http://madskristensen.net/post/Remove-whitespace-from-stylesheets-and-JavaScript-files.aspx
http://www.west-wind.com/WebLog/posts/413878.aspx
http://svn.offwhite.net/trac/SmallSharpTools.Packer/wiki
Do this as part of your continuous integration build process.
Compare all JS files to the previously checked-in versions; for each one that has changed, run the YUI Compressor on it and name the output with the current revision number. Add that file to your repository, and update a config file to carry the latest revision number for that JS file. Then write a custom control that imports a JS file: it uses the uncompressed JS when running on a development machine, and the compressed file with the revision number from the config file when running on a deployed setup.
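The custom control boils down to something like this (the appSettings key naming and the /js/ paths are invented for the example):

```
using System.Configuration;
using System.Web;
using System.Web.UI;

public class ScriptImport : Control
{
    public string ScriptName { get; set; } // e.g. "site" for site.js

    protected override void Render(HtmlTextWriter writer)
    {
        string src;
        if (HttpContext.Current.IsDebuggingEnabled)
        {
            // development: the raw, uncompressed file
            src = "/js/" + ScriptName + ".js";
        }
        else
        {
            // deployed: the compressed file stamped with the revision number
            string rev = ConfigurationManager.AppSettings[ScriptName + ".revision"];
            src = "/js/" + ScriptName + "." + rev + ".min.js";
        }
        writer.Write("<script type=\"text/javascript\" src=\"" + src + "\"></script>");
    }
}
```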
In addition, for 1), Microsoft has built-in support for embedding resource files into a DLL. These always get updated when your project is changed and recompiled.
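Wiring that up looks roughly like this (the names are made up; the file's build action is set to Embedded Resource):

```
using System.Web.UI;

// The resource name is the default namespace plus the file name.
[assembly: WebResource("MyControls.script.js", "text/javascript")]

namespace MyControls
{
    public class ScriptedControl : Control
    {
        protected override void OnPreRender(System.EventArgs e)
        {
            base.OnPreRender(e);
            // Emits a <script> tag whose src is a WebResource.axd URL
            Page.ClientScript.RegisterClientScriptResource(
                typeof(ScriptedControl), "MyControls.script.js");
        }
    }
}
```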
The problem is that you don't have control over caching and file names. When debugging, it's hard to pick which file to debug when everything is called "WebResource.axd"-something. That was hell.
Would love to read how others do it, too.
Personally, I'd rather have it done as part of the build process, to avoid the performance cost of doing this dynamically on each request. I guess you can lessen the hit by implementing proper caching, but why bother... IIS can already handle that for you (unless you are not running on top of IIS, I guess).
As a general recommendation, the things that Steven Souders talks about are also great if you want to speed up browser rendering. If you have not already, take a look at this.
My team recently moved away from keeping scripts as embedded resources and we are very happy with the results. Yes, you can combine and minify them using handlers, but it's a bit of a hassle, especially when you want to host them from a separate domain.
What we do now is keep all of our control script files separate and then use a tool like js-builder during the build process to combine and minify them. We actually output two files from the tool: one simply combined, for debugging, and one combined and minified, for production use.