I am creating a new application right now and I want to get everything right at the start so I can grow with it in the future.
I have looked at several guides describing how to build an application with multi-language support, but I can't figure out which one to use.
Some tutorials are old and I don't know if they are out of date.
http://www.codeproject.com/Articles/352583/Localization-in-ASP-NET-MVC-with-Griffin-MvcContri
http://geekswithblogs.net/shaunxu/archive/2012/09/04/localization-in-asp.net-mvc-ndash-upgraded.aspx
http://www.hanselman.com/blog/GlobalizationInternationalizationAndLocalizationInASPNETMVC3JavaScriptAndJQueryPart1.aspx
http://www.chambaud.com/2013/02/27/localization-in-asp-net-mvc-4/
https://github.com/turquoiseowl/i18n
I found that there are two ways of storing the language data: either in the database or in resource files.
What are the pros/cons?
Is there another way that is preferred?
This is what I want:
Easy to maintain (Add/Change/Remove)
Full language support (views, currency, time/date, jQuery, annotations and so on).
The ability to change language.
Auto-detect language.
Future-proof.
What is the preferred way of doing this? Do you have any good tutorials covering best practice for 2013?
I've written a blog post covering the following aspects of a multilingual ASP.NET MVC app:
Database: I split each table in two parts, one containing the non-translatable fields and the other containing the fields that need translation.
URL routes: I normally keep the culture in the URL; that way you get a well-indexed multilingual site.
routes.MapRoute(
    name: "ML",
    url: "{lang}/{controller}/{action}/{id}",
    defaults: new { lang = "en-CA", controller = "Home", action = "Index", id = UrlParameter.Optional }
);
From a base controller class you can make that {lang} parameter available to your controllers. See the blog post for all the details.
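A minimal sketch of what such a base controller could look like (a rough idea only; the class and property names are illustrative, not taken from the blog post):
public abstract class BaseController : Controller
{
    // Expose the {lang} route value to every derived controller.
    protected string Language
    {
        get
        {
            var lang = RouteData.Values["lang"] as string;
            return string.IsNullOrEmpty(lang) ? "en-CA" : lang;
        }
    }

    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Apply the culture from the URL so dates, numbers and currency
        // formatting follow the requested language.
        var culture = new System.Globalization.CultureInfo(Language);
        System.Threading.Thread.CurrentThread.CurrentCulture = culture;
        System.Threading.Thread.CurrentThread.CurrentUICulture = culture;
        base.OnActionExecuting(filterContext);
    }
}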
Querying your data: simply pass the culture to your repository or database context. The idea is to create a view model that merges the two tables you separated for translation.
public ActionResult Index(string id)
{
    var vm = pages.Get(id, Language); // Language is the {lang} from the route
    return View(vm);
}
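The repository side isn't shown here; a minimal sketch of what Get might do, assuming hypothetical Page and PageTranslation entities split as described above (and the usual System.Linq / EF usings):
// Hypothetical entities and context; the real blog post may differ.
public class PageViewModel
{
    public string Id { get; set; }
    public string Title { get; set; }  // translated field
    public string Body { get; set; }   // translated field
}

public class PageRepository
{
    private readonly MyDbContext db;
    public PageRepository(MyDbContext db) { this.db = db; }

    public PageViewModel Get(string id, string language)
    {
        // Join the non-translatable table with its translation table
        // for the requested culture and project into the view model.
        return (from p in db.Pages
                join t in db.PageTranslations on p.Id equals t.PageId
                where p.Id == id && t.Language == language
                select new PageViewModel { Id = p.Id, Title = t.Title, Body = t.Body })
               .SingleOrDefault();
    }
}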
Your views: use .NET resource files with the generic two-letter language code, e.g. HomePage.en.resx, HomePage.fr.resx. The full locale (en-US, en-CA) is useful for formatting currency, dates, numbers, etc.; your resource files will most likely be the same for English US, Canada and so on.
Images: use a format like imagename-en.png, imagename-fr.png. From your views, make the {lang} route parameter available via an extension method and display your images like this:
<img src="/content/logos/imagename-@this.CurrentLanguage()" />
You may also have a completely separate folder per supported language, for example /content/en/imagename.png and /content/fr/imagename.png.
JavaScript: I usually create a folder named LanguagePacks with JS files called Lang-en.js, Lang-fr.js. The idea is to create a "static" object that you can use all over your other JS files:
// lang-en.js
var Lang = {
    globalDateFormat: 'mm-dd-yy',
    greeting: 'Hello'
};
In your views, you load the correct language file:
<script type="text/javascript" src="/content/js/languagepacks/lang-@(this.CurrentLanguage()).js"></script>
From a JavaScript module:
// UI module
var UI = (function () {
    function notify() {
        alert(Lang.greeting);
    }
    return {
        notify: notify
    };
})();
There's no single way to do a multilingual web app. Use resources to translate your text; they can be swapped quickly at run-time, and there are multiple editors that can open those files so you can pass them to translators, etc.
You can check the blog post and get a detailed example on how I do this.
I have an approach of my own based on the DB; however, it may not be suitable for large-scale apps.
I create a Translation table/entity to hold the titles and texts which should be multilingual.
So, whenever I want to render a view, I first retrieve the appropriate translation from the DB and then pass it to the view as the model:
var t = dbCtx.Translations.Find(langID);
// ...
return View(t);
And in the view, I render the content like the following:
<tr>
    <td>@Html.DisplayFor(m => m.WelcomeMessage)</td>
    <td>@Html.ActionLink(Model.EnterSite, "Index", "Home")</td>
</tr>
And as for picking the appropriate language, well, you have several ways. You can use the session:
Session["Lang"] = "en";
// ...
var lang = (string)Session["Lang"] ?? "en";
Or pass it through the query string, or a combination of the two.
For auto-detecting the language, you should decide between the following:
a) Detecting it from the browser
b) Detecting it from the user's IP address, guessing their geo-location, and setting the appropriate language for them
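For option (a), a minimal sketch of reading the browser's preferred language in ASP.NET (a rough idea only, falling back to "en" when nothing usable is sent):
using System.Web;

public static class LanguageDetection
{
    // Inspect the Accept-Language header that ASP.NET exposes.
    public static string DetectBrowserLanguage(HttpRequestBase request, string fallback = "en")
    {
        var userLanguages = request.UserLanguages; // e.g. { "en-GB", "fr;q=0.8" }
        if (userLanguages == null || userLanguages.Length == 0)
            return fallback;

        // Take the first (highest-priority) entry and strip any quality value.
        var first = userLanguages[0].Split(';')[0].Trim();
        return string.IsNullOrEmpty(first) ? fallback : first;
    }
}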
I have a feeling that this question does not have a straightforward answer. For formatting and the like you should always be using the Culture and the UICulture, i.e. things like 'en-GB' for British English and 'en-US' for US English (and yes, there is a difference). All of .NET is built around them, and that way you get local formatting without really having to think about it.
You should also check out the System.Globalization namespace for more details:
http://msdn.microsoft.com/en-us/library/system.globalization%28v=vs.110%29.aspx
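As a small illustration of why the full locale matters (a sketch of my own, not from the answer above), the same values format differently per culture:
using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        decimal price = 1234.56m;
        DateTime date = new DateTime(2013, 6, 1);

        // Same values, different cultures: separators, currency symbols
        // and date order all change automatically.
        Console.WriteLine(price.ToString("C", new CultureInfo("en-US"))); // $1,234.56
        Console.WriteLine(price.ToString("C", new CultureInfo("en-GB"))); // £1,234.56
        Console.WriteLine(date.ToString("d", new CultureInfo("en-US")));  // 6/1/2013
        Console.WriteLine(date.ToString("d", new CultureInfo("en-GB")));  // 01/06/2013
    }
}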
As for where the culture should come from: at least in our company the policy has always been, without exception, to take it from the query string. The reason is that if you use IP-based localization, for example, a Spanish user looking at a Spanish site while in Japan would be switched to the Japanese version, which is not exactly wrong but can be annoying if you've told the client that it's a direct Spanish link or something. That said, if the culture is undefined in the query string, using the IP to guess which language the user would like is not a bad idea, but it really depends on your clients' needs.
As for where to get the translations from, it really depends; both resources and the DB have their ups and downs. The major points in favour of the DB are that it's easy to share between applications and that, if you need to update a phrase for any reason, you can update it everywhere through one DB entry. But that can be a fault as well, because some phrases have dual meanings and can be translated completely differently in other languages even though the same sentence is used in English. The other big problem with the DB is that, to a degree (depending on the implementation), you lose IntelliSense in VS for the phrases.
Resources, of course, have proper IntelliSense, but they are fairly static to the web application you are using them in; you can't share resources across applications... well, that's not exactly true, but more or less you can't. Though that static nature is also a plus, because all of the resources are contained within the application and you know that nothing outside it can influence them.
In reality it really depends on the project at hand; you simply have to weigh your own pros and cons and decide for yourself... sorry. But if it helps at all, I can tell you how things are in my company. We make several applications that share the same phrases across various web applications. We used to keep them in DBs, but what happened was that the translations kept going awry, meaning a client asked for a better translation in Spanish for one app, but that made no sense whatsoever in the others. That happened more often than you might think. Then we moved to resources, but what happened there was that when one app's translation was updated, no-one remembered to update the other apps, and it actually happened that 3 different apps translated the same term in 3 different ways. In the end we decided to go back to the DB, because the ability to change all of the translations at once meant more to us than the fact that no outside influence could affect them.
In general there are other pros and cons as well, but all of that is fairly irrelevant compared to the above. You really need to ask how you are going to use the translations. As for general editing (excluding the above point), either approach works just as well; you can just as easily change, edit or extend translations with both.
That said, if the DB and the DB calls are designed badly, then adding new languages can be easier with resources: simply copy the resource file, add the culture extension to the name, add the translations to the resource and you are done. But again, that comes down entirely to the DB design, so it is something to keep in mind when designing the DB, and it says fairly little about holding the translations in the DB itself.
By and large I would say that resources are easier to use and very easy to maintain and extend (and they are already built into .NET), though the DB has a clear advantage if you need to share translations. So if I had to say, I would say the recommended way is to use resources, but the DB does have its place and it really depends on the project.
I've been looking at the same sources as you, for the same needs, but as you say it is very hard to find one preferred solution.
I decided to go with the solution by Michael Chambaud that you linked in your question. The reasons were that it was easy to implement, gave me the URLs I wanted and, since I'm still not 100% certain... will be easy to replace in the future.
In addition to this I will use jQuery Globalize for client-side formatting.
It would be really interesting to hear which solution you decided on.
I am using ASP.NET MVC and have to develop and deploy multiple websites based on a first website.
There are variations in some controllers, some views, some scripts and some models, and the database is different on each website (mainly the columns are different, but the table names remain the same).
Is there a way to handle such a thing in a single Visual Studio project, in order to make maintenance easier and to be able to add common features easily to every website?
Currently, I copy the pilot project into a new VS project and change all the variations. But I find it's not an ideal situation (because of maintenance/improvement).
I have implemented something like that years ago and can give some general advice you might find useful.
First of all, developing an app with such "multitenancy" has nothing to do with the MVC pattern itself. You can do that without MVC :D. Second, if the websites are supposed to work with different business domains, I am afraid there is no generic way to do what you want. In my case it was just a number of e-commerce platforms.
Anyway, consider the following things.
1. Think about using a sub-domain approach if you can. It will free you from stupid routing and cookie shenanigans. Basically you can map *.yourdomain.com to one app and handle the necessary tenant-related logic easily. In my case it was an application that behaved differently depending on the provided URL, not the route but the sub-domain, 'superclient.yourdomain.com' for example. It's not always possible or a good idea, but think about it.
2. Dependency injection everywhere. Well, it's useful in general, but in your case it's an absolute must-have: try to abstract any tenant-specific logic into separate types and initialize them in one place. That covers everything related to localization, timezone settings, app theme, branding info in the app header, etc. Just initialize and inject it where it's needed. If you have something like
if (website1) {
    showBlockOne();
} else if (website2) {
    showBlockTwo();
} else if (website3) {
    showBlockThree();
}
then you're doing something wrong; it's a road to insanity. It should be something like
_currentTenantViewContext.ShowBlock();
So it's polymorphism over conditional operators in most cases.
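A minimal sketch of that idea (the interface and class names are made up for illustration), assuming each tenant's implementation is registered in your DI container:
// Illustrative only: one abstraction, one implementation per tenant.
public interface ITenantViewContext
{
    string ThemeName { get; }
    void ShowBlock();
}

public class Website1ViewContext : ITenantViewContext
{
    public string ThemeName { get { return "Blue"; } }
    public void ShowBlock() { /* render the block the way website 1 needs it */ }
}

public class Website2ViewContext : ITenantViewContext
{
    public string ThemeName { get { return "Green"; } }
    public void ShowBlock() { /* render the block the way website 2 needs it */ }
}

// A controller only asks for the abstraction; the container decides
// which tenant-specific implementation it receives.
public class HomeController : Controller
{
    private readonly ITenantViewContext _currentTenantViewContext;

    public HomeController(ITenantViewContext currentTenantViewContext)
    {
        _currentTenantViewContext = currentTenantViewContext;
    }

    public ActionResult Index()
    {
        _currentTenantViewContext.ShowBlock();
        return View();
    }
}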
3. In my case the requirement was to create an app which could work with any language, so I had to handle that issue on the database side as well. The problem is that where you would usually have, let's say, a ProductType table in the database with Id and Name, in a multi-tenant application it's not that simple. My solution was to create two tables: ProductType with Id and Code fields, and a ProductTypeLocalization table with Id, ProductTypeId, LanguageId and Value fields. The requirement was also to make these values editable from the admin panel...
I don't know if that is the case for you, but if it is, think about it before the shit hits the fan in the future.
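For what it's worth, a rough sketch of those two tables as simple entity classes (assuming an ORM like Entity Framework; adjust for your data access):
// Non-translatable part: one row per product type.
public class ProductType
{
    public int Id { get; set; }
    public string Code { get; set; } // stable, language-neutral key
}

// Translatable part: one row per product type per language.
public class ProductTypeLocalization
{
    public int Id { get; set; }
    public int ProductTypeId { get; set; }
    public int LanguageId { get; set; }
    public string Value { get; set; } // the display name in that language
}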
4. For situations where you need some specific fields in a database table for only one site, it's not a good idea (in general) to spawn columns freely. Consider using some generic format for that, like JSON or XML. If you have a few additional text fields for a specific website in some table, just create one field (call it ExtraSettings or something) and store these strings as JSON in that one field. Of course you have to handle this data in a separate way, but that comes back to dependency injection again.
You can also use NoSQL for that.
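A small sketch of that idea using Json.NET (the serializer choice and the property names are assumptions; any serializer will do):
using Newtonsoft.Json;

// Hypothetical tenant-specific extras that don't deserve their own columns.
public class ExtraSettings
{
    public string PromoBannerText { get; set; }
    public string SupportPhone { get; set; }
}

public static class ExtraSettingsStore
{
    // Serialize the extras into the single ExtraSettings text column.
    public static string Save(ExtraSettings extras)
    {
        return JsonConvert.SerializeObject(extras);
    }

    // Deserialize them back when that one site needs them.
    public static ExtraSettings Load(string json)
    {
        return string.IsNullOrEmpty(json)
            ? new ExtraSettings()
            : JsonConvert.DeserializeObject<ExtraSettings>(json);
    }
}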
5. Provide feature toggling: different websites require different blocks to be displayed, rules to be applied, etc. You have to have some way to turn them on/off for a particular website (tenant) without recompiling/redeploying.
Hope it helps.
A common REST API pattern is to define a single item lookup from a collection like this:
//get user with id 123
GET /users/123
Another common REST API pattern is to define a search using a POST + body like this:
POST /users/
{
FirstName:"John",
LastName:"Smith"
}
For the sake of consolidation, development, maintenance and support throughput, how common is it to implement all lookups through a single search endpoint like this?
POST /users/
{
Id: 123,
FirstName:"John",
LastName:"Smith"
}
It seems like, if an org is trying to maximize development throughput and minimize maintenance and support overhead, consolidating the API calls like this would be a reasonable solution. How common is it for developers to implement this type of pattern these days?
This isn't a great question for SO, given that it's primarily opinion based.
It seems like if an org is trying to maximize development throughput and minimize maintenance and support overhead then consolidating the API call like this appears to be a reasonable solution.
Which is better: your opinion above, or the single responsibility principle?
Presumably, if you are given a resource ID, the underlying implementation can look it up efficiently.
Search assumes a search-like implementation, that is, searching for a resource given a set of parameters. This can be efficient or inefficient depending on its underlying implementation.
If you were to implement a single API call that has different behavior depending on its arguments, you end up with a more complex implementation, which is harder to test, which may make that implementation more error-prone.
With an API design that alters the control flow based on the presence of inputs, it opens up design choices around whether it's an error if both sets of inputs are provided, or whether one set takes priority over the other. Further, in the priority case, if one set produces no results, do you fall back to the other set?
Often in design, the simpler the implementation, the easier its functionality is to reason about.
Thinking about the principle of least surprise, an API that better conforms to convention would be easier conceptually to understand than one that does not. While that isn't a strong argument in and of itself, there is merit to having an API that can be used in a fashion similar to other popular REST APIs.
As a consumer of your API, when should I use the ID and when should I use search? Contrast that with an API that shows very clearly that if I have an ID I can use it to retrieve a resource, AND if I don't I can use search to find that resource.
Also, food for thought: why implement search as a POST, and not a GET with query string parameters?
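For example, the same search could be expressed along these lines (a sketch only):
//search users by name
GET /users?FirstName=John&LastName=Smith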
In my opinion:
If one variable (like an id) is enough, use the first.
If you need more info, use the second.
The third makes no sense to me, because if you have the ID you don't need to provide anything else; so why bother the client (or myself, if I put myself in their position) with setting up the object structure?
I think this is related to test-driven development: make it as clear and easy as possible for yourself (and for others who will use your API).
I have a fairly simple console app that monitors an Exchange mailbox, picks particular emails out, and updates a couple of databases based on the contents.
I would like to implement a couple of similar systems. While it would be very simple to duplicate this system, I am looking at a more sophisticated solution, mainly as an intellectual and learning exercise.
I would like to build a core application that pulls template information periodically from a DB; this information would tell the app that it has to monitor a given mailbox for emails with given characteristics at a particular interval.
I envision creating a master template (assembly) with some virtual functions (pre-processing, process items, archive items, send notifications, etc.). In turn, I'd create any number of templates that implement the interfaces in the master template, but the functionality could differ wildly in each case; one may update a database, while the next might store something in a file system.
My first question is whether this is a sensible implementation?
My second question is how to dynamically reference each template, and how would I call the methods of these templates at the appropriate time?
If I were to extend my Templates project, adding a new class for each new template required, I'd overcome the problem of dynamically referencing the templates. But if I wanted to keep them in separate assemblies... is there a way to just drop them into the project? Don't forget, the templates will be listed in a DB, so the app will be aware of them, but how to make use of them...
UPDATE:
I've figured out how I can dynamically reference each template class; it requires me to supply the assembly-qualified name to GetType.
I've tried to dynamically create the template in the main app:
string aqn = "MasterTemplates.TestTemplate, TestTemplate, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null";
MasterTemplate mt = (MasterTemplate)Activator.CreateInstance(Type.GetType(aqn));
So if I keep updating my MasterTemplates project, adding new classes as necessary, I can achieve what I am aiming for. However, how can I handle different template assemblies?
In the meantime, I'm shortly going to look at DBM's suggestion of the Managed Extensibility Framework.
Conclusion:
I don't have the time to fully investigate MEF; though it's overkill for my current needs, it looks extremely promising. And I haven't figured out how to easily develop and use different assemblies for different templates, so instead I am keeping all templates in one assembly, which I will have to recompile and update each time I require a new template. Not quite as sophisticated as the MEF alternative, but simpler and suited to my current needs.
You could use MEF to dynamically load plugins. It's in the box in VS2010 and works great for dynamically loading assemblies.
When using the Activator with a string, use the Activator.CreateInstance(String, String) overload (and call Unwrap() on the ObjectHandle it returns).
Alternatively, you can get the type and create an instance of it like this:
Activator.CreateInstance(Type.GetType(templateName));
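If you do want to keep templates in separate assemblies and just drop them into a folder, a rough sketch of the idea (assuming a shared MasterTemplate base class; the folder layout is an assumption):
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public static class TemplateLoader
{
    // Load every DLL found in the plugin folder and instantiate any
    // concrete type deriving from MasterTemplate.
    public static List<MasterTemplate> LoadAll(string pluginFolder)
    {
        var templates = new List<MasterTemplate>();
        foreach (var dll in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(dll);
            foreach (var type in assembly.GetTypes())
            {
                if (!type.IsAbstract && typeof(MasterTemplate).IsAssignableFrom(type))
                    templates.Add((MasterTemplate)Activator.CreateInstance(type));
            }
        }
        return templates;
    }
}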
Here's the story so far:
I'm writing a C# WinForms application to facilitate specifying equipment for hire quotations.
In it, I have a List<T> of ~1500 stock items.
These items have a property called AutospecQty whose get accessor needs to execute some code that is specific to each item. This code will refer to various other items in the list.
So, for example, one item (let's call it Item0001) has a get accessor that may need to execute some code that looks something like this:
[some code to get the following items from the list here]
if (Item0002.Value + Item0003.Value > Item0004.Value)
    { return Item0002.Value; }
else
    { return Item0004.Value; }
Which is all well and good, but these bits of code are likely to change on a weekly basis, so I'm trying to avoid redeploying that often. Also, each item could (and will) have wildly different code. Some will be querying the list, some will be doing some long-ass math functions, some will be simple addition as above... and some will depend on variables not contained in the list.
What I'd like to do is store the code for each item in a table in my database, then when the app starts just pull the relevant code out and bung it in a list, ready to be executed when the time comes.
Most of the examples I've seen on the internet regarding executing a string as code seem quite long-winded, convoluted and/or not particularly novice-coder friendly (I'm a complete amateur), and don't seem to take into account being passed variables.
So the questions are:
Is there an easier/simpler way of achieving what I'm trying to do?
If 1 = false (I'm guessing that's the case), is it worth the effort given all the potential problems of this approach, or would my time be better spent writing an automatic update feature into the application and just keeping it all inside the main app (so the user would just have to let the app update itself once a week)?
Another (probably bad) idea I had was shifting all the autospec code out to a separate DLL and just redeploying that when necessary. Or is it even possible to reference a single DLL on a shared network drive?
I guess this is some pretty dangerous territory whichever way I go. Can someone tell me if I'm opening a can of worms best left well and truly shut?
Is there a better way of going about this whole thing? I have a habit of overcomplicating things that I'm trying to kick :P
Just as additional info, the autospec code will not be user-input. It'll be me updating it every week (no-one else has access to it), so hopefully that will mitigate some security concerns at least.
Apologies if I've explained this badly.
Thanks in advance
Some options to consider:
1) If you had a good continuous integration system with automatic build and deployment, would deploying every week be such an issue?
2) Have you considered MEF or similar which would allow you to substitute just a single DLL containing the new rules?
3) If the formula can be expressed simply (without needing to eval some code, e.g. A+B+C+D > E+F+G+H => J or K) you might be able to use reflection to gather the parameter values and then apply them.
4) You could use Expressions in .NET 4, build an expression tree from the database and then evaluate it.
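As a rough illustration of option 4 (a sketch only, with made-up item names), a rule like the one in the question could be assembled and compiled at runtime:
using System;
using System.Linq.Expressions;

class ExpressionRuleDemo
{
    static void Main()
    {
        // Parameters standing in for the item values the rule needs;
        // in practice these would be built from the rows stored in the DB.
        var item2 = Expression.Parameter(typeof(decimal), "item2");
        var item3 = Expression.Parameter(typeof(decimal), "item3");
        var item4 = Expression.Parameter(typeof(decimal), "item4");

        // if (item2 + item3 > item4) return item2; else return item4;
        var body = Expression.Condition(
            Expression.GreaterThan(Expression.Add(item2, item3), item4),
            item2,
            item4);

        var rule = Expression.Lambda<Func<decimal, decimal, decimal, decimal>>(
            body, item2, item3, item4).Compile();

        Console.WriteLine(rule(10m, 5m, 12m)); // 10
        Console.WriteLine(rule(2m, 3m, 12m));  // 12
    }
}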
Looks like you may be well served by implementing the specification pattern.
As wikipedia describes it:
whereby business logic can be recombined by chaining the business logic together using boolean logic.
Have you considered something like MEF? Then you could have lots of small DLLs implementing various versions of your calculations and simply reference which one to load from the database.
That is assuming you can wrap them all in a single interface (or a small number of interfaces).
I would attack this problem by creating a domain-specific language which the program could interpret to execute the rules, then put snippets of the DSL code in the database.
As you can see, I also like to overcomplicate things. :-) But it works as long as the long-term use is simplified.
You could have your program compile your rules at runtime into a class that acts like a plugin, using the CSharpCodeProvider.
See Compiling code during runtime for a sample of how to do this.
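A minimal sketch of that approach, compiling a rule fetched from the database into an in-memory assembly (the IAutospecRule interface and the generated class name are invented for illustration):
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public interface IAutospecRule
{
    decimal Evaluate();
}

public static class RuleCompiler
{
    public static IAutospecRule Compile(string ruleBody)
    {
        // Wrap the stored snippet in a class implementing the shared interface.
        string source =
            "public class DynamicRule : IAutospecRule " +
            "{ public decimal Evaluate() { " + ruleBody + " } }";

        var provider = new CSharpCodeProvider();
        var parameters = new CompilerParameters { GenerateInMemory = true };
        parameters.ReferencedAssemblies.Add("System.dll");
        // Reference the assembly that defines IAutospecRule.
        parameters.ReferencedAssemblies.Add(typeof(IAutospecRule).Assembly.Location);

        CompilerResults results = provider.CompileAssemblyFromSource(parameters, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Rule failed to compile: " + results.Errors[0].ErrorText);

        return (IAutospecRule)results.CompiledAssembly.CreateInstance("DynamicRule");
    }
}
Usage would then be something along the lines of var rule = RuleCompiler.Compile("return 1m + 2m;"); followed by rule.Evaluate().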
When throwing custom exceptions or issuing messages to the end user, one could use hard-coded strings (including string constants), use resource-only assemblies, or get the strings from a table in a database.
I would like my application to be able to switch to a different language easily without having to recompile. While storing the string resources in an assembly or database would achieve this purpose, it adds to the complexity of the program logic, which in turn adds to the cost of the product.
My question is: what is the best way to go, keeping that objective in mind without ignoring the cost that comes with each option? If you have a practice that is better than what's been listed, I'd love to hear it.
Technologies:
OS: Windows family
Platform: .NET Framework 2.0 and up
Language: C#
Database: MS SQL 2005 and up
Thanks guys!
Cullen
Use resources:
How does this add more complexity to the program logic?
try
{
    //do something with System.Net.Mail with invalid email..
}
catch (FormatException fex)
{
    throw new Exception(Resources.ErrorMsg.Invalid_Email, fex);
}
Edit
In VS2008, when you create a resource, you can define whether it's internal or public. So assume we set it to public in an assembly called ClassLibrary1; we can then access a property like:
ClassLibrary1.Properties.Resources.InvalidError
where InvalidError is the name of the error. Again, I don't think this adds any complexity to the logic.
.NET already supports multiple resources for multiple cultures using a naming convention:
<default resource file name>.<culture>.resx
Essentially, as Josh pointed out, VS2008 creates a nice type-safe wrapper to access these resources.
However, the VS UI exposes only the bare minimum of what you can do.
Create a new resource file named exactly the same as the default, but add the culture info just before the .resx extension. (NOTE: you will need to create it somewhere else and then copy it into the magic Properties folder.)
Then, provided you have applied the culture to the thread accessing the resource, your application will pull the correct string from the specific resources.
For example:
using System.Globalization;
using System.Threading;

// Using the default culture
string s = Resources.Error1Msg;

Thread.CurrentThread.CurrentUICulture = new CultureInfo("es-CO");

// Using the specific culture set above:
s = Resources.Error1Msg;
If you need to parameterize your message, use string.Format to build the output.
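For example (assuming Error1Msg is defined as "Order {0} could not be processed", a made-up value):
// The placeholder lives in the translated resource string;
// string.Format fills it in at run-time.
int orderNumber = 42;
string message = string.Format(Resources.Error1Msg, orderNumber);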
One word of caution: try to architect your application layers in such a way that your exceptions carry a rich payload (to describe the error) instead of relying on just text.
That way your presentation layer can present the best UI experience, which might make use of that payload.
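A quick sketch of what such an exception might look like (the class and property names are illustrative, reusing the resource string from the earlier example):
using System;

// Carries structured data about the failure, not just a message,
// so the UI layer can decide how to present (and localize) it.
public class InvalidEmailException : Exception
{
    public string AttemptedAddress { get; private set; }

    public InvalidEmailException(string attemptedAddress, Exception inner)
        : base(Resources.ErrorMsg.Invalid_Email, inner)
    {
        AttemptedAddress = attemptedAddress;
    }
}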
HTH
Philip