"Interface" for .Net Resource files - c#

I am building a multi-language MVC application and have a series of resource files with translated strings for messages that will be displayed to the user.
Is there any way of ensuring that any resource files added in the future have all required keys and are spelled correctly?
As an analogy, if the resource file were a regular class, you could provide an interface to ensure that all required methods and properties were present in the implementing class. Is there a similar concept for resource files?

I've been unable to find a supported way to enforce an explicit contract upon a .resx file. Since your goal is ultimately to catch implementation errors before they show up at runtime (and compile time checking isn't possible), I recommend falling back to static code analysis. Luckily, .NET makes this trivially easy:
Use the System.Resources.ResXResourceReader class to read the contents of the resx files to be validated.
Implement a test that asserts against all required keys in the "contract" you'd like to enforce on the resx.
The test should run as part of an existing test suite; a failure will warn a developer that the implicit contract has been broken before the problem is encountered at runtime.
Since your resource files will exist in a known location, you can trivially ensure that the tests run against all resx files in that directory. In this way, you don't even need to update the test when new resource files are added, only if the contract changes.
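A minimal sketch of such a test is below. The test framework (NUnit), the folder path, and the required key names are assumptions you would replace with your own.

using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Resources;
using NUnit.Framework;

[TestFixture]
public class ResourceContractTests
{
    // Hypothetical contract: keys every resource file is expected to contain.
    private static readonly string[] RequiredKeys = { "WelcomeMessage", "GenericError", "SaveSuccess" };

    [Test]
    public void EveryResxFileContainsAllRequiredKeys()
    {
        // Hypothetical location of the resource files under test.
        foreach (var file in Directory.GetFiles(@"..\..\Resources", "*.resx"))
        {
            var keys = new HashSet<string>();
            using (var reader = new ResXResourceReader(file))
            {
                foreach (DictionaryEntry entry in reader)
                    keys.Add((string)entry.Key);
            }

            var missing = RequiredKeys.Where(k => !keys.Contains(k)).ToList();
            Assert.IsEmpty(missing,
                string.Format("{0} is missing keys: {1}", Path.GetFileName(file), string.Join(", ", missing)));
        }
    }
}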
I've used a similar approach to help with maintenance of stored procedure names kept in (an extensive number of) resx files. Since the resource files are spread across dozens of projects, manual maintenance is tedious and error-prone -- in other words, it doesn't get done. The static code analysis approach has yielded few downsides, and I think it would work well in your case as well.
Landing page for resource files on MSDN
ResXResourceReader on MSDN
System.Resources.ResXResourceReader requires a reference to System.Windows.Forms. It's available on both .NET and Mono.

Related

How to select between different Resource files in .Net?

I'm trying to figure out how to choose between two different (identically designed) Resources files in code. All of the examples I can find online refer to having different language-specific Resource files which are chosen based on setting the culture value. That won't work for my problem.
This is a web service which returns an image from one of several different image repository systems. Depending on a parameter passed in to the method, the service will need to access an image repository system in order to pull the image being requested. When accessing the image repository, there are a bunch of "magic string" GUID values that represent different IDs for various lookups in that system. One of the purposes of this service is to hide all of that complexity from the user. Rather than hard-code these GUIDs into the code, I have put them into a Resources file.
The problem is this: Each different image repository system has the same set of magic string IDs that need to be used. However, the actual GUID values for these magic strings are different depending on which repository you are connecting to. For example, there is a value called "GroupIDPrompt" which might be "8a48f393642a753f0164418b670a7cdf" on one system, but "63aa28c3637b58680163b25f7e5a5d96" on a different system. In code, I'd like to refer to this value as just "Resources.GroupIDPrompt" or something similar, but I need to be able to set which Resources file will be used at runtime, based on what the consumer of the service sent me.
Normally, I might solve a problem like this by using an interface and instantiating a specific implementation of that interface based on the request. There are two reasons that doesn't work here: #1, Resource code files are generated automatically, and if I edit them to make them inherit from an interface, that will get broken every time the file is regenerated. #2, all resource values are generated as static members, and interfaces aren't allowed to declare static members.
I could throw the Resources files away and instead build a class to expose these values, but that means re-introducing magic hard-coded strings to my code. That isn't too terrible, I suppose, but the Resource editor is really quite handy for managing and editing these values.
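To make the problem concrete, below is roughly the kind of runtime selection being described; the namespace, repository names, and enum are made up. A plain ResourceManager keyed off the request parameter would do it, at the cost of the generated strongly typed wrapper:

using System.Reflection;
using System.Resources;

// Hypothetical: pick between two identically structured .resx files at runtime.
var baseName = repositorySystem == RepositorySystem.Alpha
    ? "ImageService.Resources.AlphaRepository"
    : "ImageService.Resources.BetaRepository";
var resources = new ResourceManager(baseName, Assembly.GetExecutingAssembly());
string groupId = resources.GetString("GroupIDPrompt");   // same key, different GUID per system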

Centralized vs local string resource files

I am trying to figure out whether a global string resource file for the entire application or a local resource file for each small sub area would be a better choice.
It seems like a translator would appreciate the one file approach vs hundreds of them. It is also easier to write helper functions since there is only going to be one static resource class.
The downside is that resource names might become really long in order to identify the place where each string is supposed to be used, and it might be hard to locate related strings once the file grows big.
Whereas local resource files would produce lots of duplicated strings, or make things confusing when we need to reference multiple static resource classes because the strings are spread across several of them.
So what would be a better way to go?
Maybe you could break your resources into 3 files (depending on your application design):
ResourcesCore
For translated enum values and common expressions
ResourcesEntity
For strings related to translation of some entity properties (e.g. Person.Name)
ResourcesWeb (or ResourceUI)
For other UI related stuff (like strings on UI, labels, descriptions, etc.)
You could then use the ResXManager extension for Visual Studio to manage your resource strings (way easier than the native .NET ResX editor, at least for me).
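As a rough illustration of the split (the key names below are invented), code would then reference whichever generated resource class fits the context:

// Hypothetical generated resource classes produced by the three .resx files.
string yes        = ResourcesCore.Boolean_True;        // common expression / translated enum value
string nameLabel  = ResourcesEntity.Person_Name;       // entity property caption
string saveButton = ResourcesWeb.Button_Save;          // UI label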

Maintaining and Deploying Two Versions of an Application Simultaneously

I have a C# WinForms application in Visual Studio 2010 that is used by two different customers. The basic functionality of the application is the same for each customer, but certain lines of code (names of stored procedures, resources, certain behaviors) are different between versions. So far, I have kept the application in the same project, and used preprocessor directives when building/publishing to switch between which deployment to use. However, the scope of the project has grown to a point where this is no longer feasible.
Since so much of the code is shared, I'm trying to avoid duplicating source code files. I'm wondering what the best approach is to maintaining an application that requires different versions to be deployed simultaneously.
Use interfaces to define your classes. Having an interface means that you can have multiple implementations of the same interface, one for each of the clients. This will require you to analyze your existing codebase and identify logical separations in your code where these interfaces can be defined.
You then have the ability to load an interface as needed based on the client. You could, for example, do this via configuration. Based on a configuration value you load Implementation1 or Implementation2. There are many, many ways to accomplish this particular bit. You should read up on dependency injection, inversion of control and have a look at tools like Ninject, Autofac, Unity.
It may actually be difficult at first, considering how you have been using preprocessor directives, but seeing as your application is growing, you will need this refactoring to happen. Keep in mind that if you do not do it now, the refactoring will be a lot more expensive later as your application becomes more complex.
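A stripped-down sketch of that idea, without any DI container; the type names, stored procedure names, and the "Customer" app setting are invented for illustration:

using System.Configuration;

// Contract for behaviour that differs per customer.
public interface IStoredProcedureNames
{
    string GetOrders { get; }
}

public class CustomerAImplementation : IStoredProcedureNames
{
    public string GetOrders { get { return "usp_GetOrders_A"; } }
}

public class CustomerBImplementation : IStoredProcedureNames
{
    public string GetOrders { get { return "usp_GetOrders_B"; } }
}

public static class CustomerFactory
{
    // Reads a hypothetical "Customer" app setting to decide which implementation to load.
    public static IStoredProcedureNames Create()
    {
        var customer = ConfigurationManager.AppSettings["Customer"];
        return customer == "A"
            ? (IStoredProcedureNames)new CustomerAImplementation()
            : new CustomerBImplementation();
    }
}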
The different functionality should be a part of the application's architecture. If you need different functionality for different customers, abstract it away - create an interface that wraps up the behaviour, then implement it in two different ways in two different assemblies. Then (depending on your deployment mechanism), you can ship your app with either one DLL or the other. To avoid having to recompile, add references, etc., you can use Dependency Injection frameworks such as Ninject, Castle Windsor, MEF, etc. That's a "plugin-like" architecture, if the code is sufficiently different.
If you're talking about text, colours, and other basic differences, they should simply not be hard-coded but instead data-driven. If your app is internet-connected, it could download the appropriate settings when the user logs in. Otherwise, something to indicate the text/colours/behaviour could be put in a config file specific to the customer. You can use config transforms to simplify that process.
You might be able to separate some of the differences by using resource, configuration, or property files of some kind. By this, I mean you store some kind of value in the file, such as the name of the stored procedure to use in a particular situation. Then your code reads the name from the file and runs it. You can change the values in the file without needing to rebuild your code for each deployment.
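For example (the setting key and procedure names are made up), a stored procedure name could live in the config file and be read at runtime, so each deployment only differs by its config:

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

// app.config (hypothetical entry):
//   <appSettings>
//     <add key="GetOrdersProcedure" value="usp_GetOrders_CustomerA" />
//   </appSettings>
public static SqlCommand CreateGetOrdersCommand(SqlConnection connection)
{
    var procName = ConfigurationManager.AppSettings["GetOrdersProcedure"];
    return new SqlCommand(procName, connection) { CommandType = CommandType.StoredProcedure };
}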

Dynamic Assembly Resolution/Management

Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is when reading the configuration files, the application deserializing has to have knowledge of all of the plug-ins defined in the configuration file. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation System for internal use for the company I'm currently working for in C# 4. It makes exhaustive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses Reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model put in place that allows the engine to do its job, which, coincidentally, has led to several user experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again there is room for improvement; however, it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (WinForms app), and a plug-in tester (Console application version of the Windows Service). Each of the three applications that reference the shared assembly only include the minimum amount of code necessary to perform their specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that the plugin must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI is run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies that the Service knows about, so they can be deserialized without running into missing assemblies. Now that we are getting close to rolling out the system, a demand to allow the Configuration UI to run remotely on any PC in our network has come from our network admins, for security purposes. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations which that instance of the service is aware of, and rework the configuration UI to act more like SSMS with a Connect/Disconnect model. This doesn't really solve the problem, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward; however, the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about all of the assemblies the service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to download only the assemblies that the client isn't aware of yet, and only as needed. This doesn't necessarily clean up the AppDomain management issues, but it definitely helps address the version conflicts and related issues.
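A rough sketch of that resolve hook, assuming a hypothetical serviceProxy that can hand back the raw bytes of an assembly by name (the proxy and its GetAssemblyBytes method are inventions for illustration):

using System;
using System.Reflection;

AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
    // args.Name is the full name of the assembly the deserializer failed to find.
    var shortName = new AssemblyName(args.Name).Name;

    // Hypothetical WCF call that returns the assembly bytes from the Service application.
    byte[] bytes = serviceProxy.GetAssemblyBytes(shortName);
    return bytes != null ? Assembly.Load(bytes) : null;
};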
Question
If you've stuck with me this long I applaud and thank you, but now I'm finally getting to the actual question here. After months of research and finally coming to a conclusion I am wondering if anyone here has had to deal with a similar issue and how you dealt with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application, the problem is doing so would be too cumbersome and dirty to make it feasible. It is a nice suggestion and it has a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look into MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it takes less than an hour (including a coffee break) between hearing about it and compiling hello worlds to get comfortable with the features. It's really that simple.
Basically, you "export" something in an assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (for example, the application's directory, a plug-ins folder, etc.).
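A hello-world sized sketch of that export/import flow (the interface, class, and folder names are placeholders):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// In the thin contract assembly.
public interface IPlugin { void Execute(); }

// In a plug-in assembly.
[Export(typeof(IPlugin))]
public class SamplePlugin : IPlugin
{
    public void Execute() { /* do the work */ }
}

// In the host application.
public class PluginHost
{
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }

    public void Compose()
    {
        // Search the application directory and a "Plugins" folder (hypothetical path).
        var catalog = new AggregateCatalog(
            new DirectoryCatalog("."),
            new DirectoryCatalog("Plugins"));
        new CompositionContainer(catalog).ComposeParts(this);
    }
}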
Edit: what if you try to download and load (and possibly cache) plugins on-the-fly on configuration load?
I think that you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plugin (e.g. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plugin.
On loading the plugin, pass the information in section 2 and let the plugin deserialize it according to its needs.
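In code, that two-step load might look roughly like this; the PluginEntry shape, the IPlugin.LoadSettings method, and the property names are all invented for illustration:

using System;
using System.Reflection;

// Hypothetical shapes: PluginEntry comes from section 1, IPlugin.LoadSettings consumes section 2.
public IPlugin LoadConfiguredPlugin(PluginEntry entry)
{
    var assembly = Assembly.Load(entry.AssemblyName);                                      // section 1: identity only
    var plugin = (IPlugin)Activator.CreateInstance(assembly.GetType(entry.TypeName, throwOnError: true));
    plugin.LoadSettings(entry.SerializedSettings);                                         // section 2: opaque to the host
    return plugin;
}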
Maybe you can divide this problem into two parts:
the administrator allows users to download one of several predefined configurations (sets of libraries), and MEF helps to inject the required dependencies
every user activity should pass through a security proxy; plug-in modules are not allowed to call the business layer directly. The proxy could match a custom security attribute against the allowed activities.
For example:
[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId) { }
[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName) { }
[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<Account> AccountList(Predicate<Account> p) { return null; }
and map them to AD groups (an abstract description):
corp\securityOperators = "Security.*" //allow calls to all security manipulation
corp\HQmanager = "Manager.View" //allow only view access
corp\Operator = "Manager.*"
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, judging from the MSDN Remarks:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder; this is where you could put any logic for loading assemblies that may not yet be loaded.
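A hedged sketch of that combination with Json.NET: recent versions expose an ISerializationBinder via the SerializationBinder setting (older versions use a Binder property instead), and the Profile type plus the assembly-loading comment are placeholders for your own types and plug-in store.

using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class PluginAwareBinder : ISerializationBinder
{
    public Type BindToType(string assemblyName, string typeName)
    {
        // If the assembly isn't loaded yet, this is the place to fetch/load it
        // (e.g. from the Service or a plug-in folder) before resolving the type.
        return Type.GetType(string.Format("{0}, {1}", typeName, assemblyName), throwOnError: true);
    }

    public void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        assemblyName = serializedType.Assembly.FullName;
        typeName = serializedType.FullName;
    }
}

var settings = new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.Auto,          // embed CLR type info where needed
    SerializationBinder = new PluginAwareBinder()
};
string json = JsonConvert.SerializeObject(profile, settings);
var roundTripped = JsonConvert.DeserializeObject<Profile>(json, settings);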
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.

Using a part of a class in multiple projects

I have a set of methods that do some utility work over a SQL connection, and until now these have been copied from project to project. But as time has gone on, the number of projects has grown, and I need to keep these methods in sync in case I find a bug or need to update them.
I have managed to get it to the state where the SQL access class is a partial class: one part is project-specific and contains wrappers for that project's database. The second part is the common one and contains methods that are used with all project-specific databases.
The problem is that now I would have the "utility" class copied across 8 projects, with the same content but in different namespaces. In C/C++ it would have been simple, because I would just have #included the contents of the file wherever needed. What should I do in C#?
Separate out the class so that you can have a complete class containing all of the common code, in a common project. Use a common interface to represent the bits of functionality which will be project-specific, implementing that interface in each project and passing an instance of the interface into the common code where necessary.
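A skeletal version of that layout (all names are invented): the shared library holds the common utility plus an interface, and each project supplies its own implementation of the interface.

// In the shared class library (referenced by every project).
public interface IDatabaseSpecifics
{
    string ConnectionStringName { get; }
}

public class SqlUtility
{
    private readonly IDatabaseSpecifics _specifics;

    public SqlUtility(IDatabaseSpecifics specifics)
    {
        _specifics = specifics;
    }

    // The common helper methods that used to be copy-pasted live here,
    // calling into _specifics wherever behaviour differs per project.
}

// In one of the consuming projects.
public class ReportingDbSpecifics : IDatabaseSpecifics
{
    public string ConnectionStringName { get { return "ReportingDb"; } }
}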
As Jon says, a library assembly is a good idea.
There are some situations where an assembly reference doesn't lend itself to the requirements, so if creating a library assembly is not an option, it is possible to use a feature easily overlooked in Visual Studio: adding an existing file as a link.
This would allow you to maintain the common part of the partial class in a file that is available in all your projects.
The only restriction is that a relative path is used to reference the file.
The only problem I have had with this strategy is with the open-source Mercurial SCC provider. When removing a linked file from a project, the underlying file is deleted. Quite annoying, but this may not be an issue for you.
Update: The linked file bug in the VS Mercurial SCC should be fixed in the next release.
