Custom code access permissions - C#

We have a server written in C# (Framework 3.5 SP1). Customers write client applications using our server API. Recently, we created several license levels: Basic, Intermediate, and All. With a Basic license you can call only a few methods on our API; with Intermediate you get some extra methods to call; and with All you can call every method.
When the server starts it reads the license type. Now in each method I have to check the license type and decide whether to proceed with the function or return.
For example, a method InterMediateMethod() can only be used with an Intermediate or All license. So I have to do something like this:
public void InterMediateMethod()
{
    if (licenseType == "Basic")
    {
        throw new Exception("Access denied");
    }
}
This looks like a very lame approach to me. Is there a better way to do this? Is there a declarative way, perhaps by defining some custom attributes? I looked at creating a custom CodeAccessSecurityAttribute but did not have much success.

Since you are adding the "if" logic to every method (and god knows what else), you might find it easier to use PostSharp (an AOP framework) to achieve the same thing, but personally I don't like either approach...
I think it would be much cleaner if you maintained three separate source branches, one for each license. That may add a little overhead in terms of maintenance (maybe not), but at least it keeps things clean and simple.
I'm also interested what others have to say about it.

Possibly one easy and clean approach would be to add a proxy API that duplicates all your API methods and exposes them to the client. When called, the proxy would either forward the call to your real method or return a "not licensed" error. The proxies could be built as three separate classes (basic, intermediate, all), and your server would create an instance of the appropriate proxy for your client's licence. This has the advantage of minimal performance overhead, because you only check the licence once. You may not even need a proxy for the "all" level, so it gets maximum performance. It may be hard to slip this into your existing design, though.
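To make the idea concrete, here is a minimal sketch of such a proxy (the IServerApi interface and its methods are hypothetical stand-ins for your real API surface):

using System;

public interface IServerApi
{
    void BasicMethod();
    void IntermediateMethod();
}

// Full implementation, handed directly to "All" licence holders.
public class ServerApi : IServerApi
{
    public void BasicMethod() { /* real work */ }
    public void IntermediateMethod() { /* real work */ }
}

// Proxy handed to "Basic" clients; the licence is enforced once, here,
// instead of inside every method.
public class BasicLicenceProxy : IServerApi
{
    private readonly IServerApi inner;
    public BasicLicenceProxy(IServerApi inner) { this.inner = inner; }

    public void BasicMethod() { inner.BasicMethod(); }
    public void IntermediateMethod()
    {
        throw new InvalidOperationException("Not licensed for this method.");
    }
}

The server would pick which class to instantiate when it reads the licence at startup.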
Another possibility is to redesign and break up your APIs into basic/intermediate/all "sections" and put them in separate assemblies, so that an entire assembly can be enabled/disabled by the licence, and attempting to call an unlicensed method simply returns a "method not found" error (e.g. a TypeLoadException will occur automatically if you haven't loaded the needed assembly). This makes it much easier to test and maintain, and again avoids checking at the per-method level.
If you can't do this, at least try to use a more centralised system than an "if" statement hand-written into every method.
Examples (which may or may not be compatible with your existing design) would include:
Add a custom attribute to each method and have the server dispatch code check this attribute using reflection before it passes the call to the method (see the sketch after this list).
Add a custom attribute to mark the method, and use PostSharp to inject a standard bit of code into the method that will read and test the attribute against the licence.
Use PostSharp to add code to test the licence, but put the licence details for each method in a more data-driven system (e.g. use an XML file rather than attributes to describe the method permissions). This will let you change licensing across the entire server by editing a single file, and easily add whole new levels or types of licences in the future.
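A rough sketch of the attribute-plus-dispatch idea above (the attribute name, the level ordering, and the gate class are all invented for illustration):

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class RequiresLicenceAttribute : Attribute
{
    public string Level { get; private set; }
    public RequiresLicenceAttribute(string level) { Level = level; }
}

public static class LicenceGate
{
    private static readonly string[] Order = { "Basic", "Intermediate", "All" };

    // Called by the server dispatch code before it invokes the target method.
    public static void Check(MethodInfo method, string currentLicence)
    {
        foreach (RequiresLicenceAttribute attr in
                 method.GetCustomAttributes(typeof(RequiresLicenceAttribute), true))
        {
            if (Array.IndexOf(Order, currentLicence) < Array.IndexOf(Order, attr.Level))
                throw new UnauthorizedAccessException(
                    method.Name + " requires the " + attr.Level + " licence.");
        }
    }
}

A method would then just carry [RequiresLicence("Intermediate")] and contain no licence logic of its own.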
Hope that gives you some ideas.

You might really want to consider buying a licensing solution rather than rolling your own. We use Desaware and are pretty happy with it.
Doing licensing at the method level is going to take you into a world of hurt. Maintenance on that would be a nightmare, and it won't scale at all.
You should really look at componentizing your product. Your code should roughly fall into "features", which can be bundled into "components". The trick is to make each component do a license check, and have a licensing solution that knows if a license includes a component.
Components for our products are generally on the assembly level, but for our web products they can get down to the ASP.Net server control level.

I wonder how people license SOA services. They could be licensed per service or per endpoint.

That can be very hard to maintain.
You could try using the strategy pattern.
This can be your starting point.
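For illustration, a minimal strategy sketch (type names invented): each license level gets its own implementation behind a common interface, and the server chooses one when it reads the license at startup.

using System;

public interface ILicenseStrategy
{
    void IntermediateMethod();
}

public class BasicStrategy : ILicenseStrategy
{
    public void IntermediateMethod()
    {
        throw new InvalidOperationException("Not available under the Basic license.");
    }
}

public class IntermediateStrategy : ILicenseStrategy
{
    public void IntermediateMethod() { /* real implementation */ }
}

// At startup:
// ILicenseStrategy api = licenseType == "Basic"
//     ? (ILicenseStrategy)new BasicStrategy()
//     : new IntermediateStrategy();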

I agree with the answer from @Ostati that you should keep three branches of your code.
What I would further add is that I would then expose three different services (preferably WCF services) and issue certificates that grant access to the specific service. That way, anyone who tried to access the higher-level functionality would simply not be able to reach the service, period.

Related

Using attributes to read method parameters

I want to log the entry of methods. The entry log would include the inputs/parameters received by the method. This has to be done for thousands of methods.
I thought of doing this logging of input parameters using C# attributes, since they fire before the method call (something similar to ActionFilters in MVC).
Is it possible to read method parameters through attributes?
The concept you are looking for is called aspect-oriented programming (AOP). It is a technique that lets you "weave" blocks of boilerplate code into your application code. Logging is a perfect example. You could go the hard way and implement logging before and after each method call manually, which is not feasible in large projects and error-prone besides.
Or you can use an AOP framework that lets you define these cross-cutting functions in one place and apply them declaratively to your application code. There are several approaches to achieving this; one is to rewrite the IL after the application logic is built, integrating the aspects at compile time. A well-known example of this is PostSharp. There is also a free edition that is good to start with.
BTW: PostSharp heavily relies on attributes, so you're on the right track.
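For example, a minimal PostSharp aspect for the parameter logging you describe might look like this (a sketch based on PostSharp's OnMethodBoundaryAspect; the Console call is a placeholder for your real logger):

using System;
using PostSharp.Aspects;

[Serializable]
public class LogParametersAttribute : OnMethodBoundaryAspect
{
    // PostSharp weaves this in so it runs before the decorated method body.
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering " + args.Method.Name);
        foreach (object arg in args.Arguments)
            Console.WriteLine("  arg: " + (arg ?? "<null>"));
    }
}

// Usage: decorate any method with [LogParameters], or apply it
// assembly-wide via PostSharp's multicast attributes.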
Another option is to integrate the aspects at run time (the keyword is "interception"). Most IoC frameworks offer this. This approach is easy to use but has some downsides IMHO (weaker runtime performance, and only virtual methods can be intercepted).
Attributes are not "fired before the method call"; the code that invokes a method decorated with an attribute may (or may not) do something based on the presence of the attribute.
The attribute doesn't know the member it is applied to, nor can it access it in any (straightforward) way.

Dynamic Assembly Resolution/Management

Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is that when reading the configuration files, the deserializing application has to have knowledge of all of the plug-ins defined in them. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed-solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation system for internal use at the company I'm currently working for, in C# 4. It makes extensive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model, which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature, but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model that allows the engine to do its job, which, coincidentally, has led to several user-experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again, there is room for improvement, but it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (a WinForms app), and a plug-in tester (a console-application version of the Windows Service). Each of the three applications that reference the shared assembly includes only the minimum amount of code necessary to perform its specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that a plug-in must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies the Service knows about, so configurations can be deserialized without running into missing assemblies. Now that we are getting close to rolling out the system, our network admins have demanded, for security purposes, that the Configuration UI be able to run remotely on any PC in our network. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations that instance of the service is aware of, and rework the configuration UI to act more like SSMS, with a connect/disconnect model. This doesn't really solve the problem on its own, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward; however, the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about everything the Service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to download only the assemblies the client isn't yet aware of, and only as needed. This doesn't entirely clean up the AppDomain management issues, but it definitely helps address the version conflicts and related problems.
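The resolve hook itself is small; a sketch (the IAssemblyService contract is a hypothetical stand-in for whatever WCF operation serves up assembly bytes):

using System;
using System.Reflection;

// Hypothetical service contract exposed by the Service application.
public interface IAssemblyService
{
    byte[] GetAssemblyBytes(string fullAssemblyName);
}

public static class RemoteAssemblyResolver
{
    public static void Install(IAssemblyService service)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is the full name of the assembly that failed to load.
            byte[] raw = service.GetAssemblyBytes(args.Name);
            return raw == null ? null : Assembly.Load(raw);
        };
    }
}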
Question
If you've stuck with me this long, I applaud and thank you, but now I'm finally getting to the actual question. After months of research and finally coming to a conclusion, I am wondering: has anyone here had to deal with a similar issue, and how did you deal with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches, or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application; the problem is that doing so would be too cumbersome and dirty to be feasible. It is a nice suggestion with a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look at MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it's a matter of less than an hour (including a coffee break) between hearing about it and compiling hello-worlds to get familiar with the features. It's really that simple.
Basically, you "export" something in one assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (for example, the application's directory, a plug-ins folder, etc.).
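A hello-world of that shape (assuming a plug-in interface of your own, here called IPlugin):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin { void Execute(); }

[Export(typeof(IPlugin))]
public class SamplePlugin : IPlugin
{
    public void Execute() { /* ... */ }
}

public class PluginHost
{
    [ImportMany]
    public IPlugin[] Plugins { get; set; }

    public void Load(string pluginDirectory)
    {
        var catalog = new DirectoryCatalog(pluginDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // fills the Plugins array
    }
}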
Edit: what if you tried to download and load (and possibly cache) plug-ins on the fly at configuration-load time?
I think that you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plugin (i.e. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plugin.
On loading the plug-in, pass it the information in section 2 and let the plug-in deserialize it according to its needs.
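In code, that could look roughly like this (the XML shape and the plug-in's Deserialize method are invented for the sketch):

using System;
using System.Reflection;
using System.Xml.Linq;

public static class PluginConfigLoader
{
    public static object Load(XElement pluginElement)
    {
        // Section 1: identity, just enough to load the right type.
        string assemblyName = (string)pluginElement.Attribute("assembly");
        string typeName = (string)pluginElement.Attribute("type");
        Type pluginType = Assembly.Load(assemblyName).GetType(typeName, true);
        dynamic plugin = Activator.CreateInstance(pluginType);

        // Section 2: opaque settings the plug-in deserializes itself.
        XElement settings = pluginElement.Element("settings");
        plugin.Deserialize(settings); // hypothetical plug-in contract
        return plugin;
    }
}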
Maybe you can divide this problem in two:
an administrator allows users to download one of several predefined configurations (sets of libraries), and MEF helps inject the required dependencies;
each user activity should pass through a security proxy; plug-in modules are not allowed to call the BL directly. The proxy could match a custom security attribute against the allowed activities.
For example:

[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId) { }

[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName) { }

[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<Account> AccountList(Predicate<Account> p) { /* ... */ return null; }
and then map AD groups to allowed roles (an abstract description):
corp\securityOperators = "Security.*" //allow calls to all security manipulation
corp\HQmanager = "Manager.View" //allow only view access
corp\Operator = "Manager.*"
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this, and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, according to the Remarks on MSDN:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder - in here you could put any logic regarding loading assemblies that may not yet be loaded.
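A sketch of that binder idea (the download hook is a placeholder for your own resolution logic, and depending on your Json.NET version the settings property is Binder or SerializationBinder):

using System;
using System.Reflection;
using System.Runtime.Serialization;
using Newtonsoft.Json;

public class PluginAwareBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        Assembly assembly;
        try
        {
            assembly = Assembly.Load(assemblyName);
        }
        catch (Exception)
        {
            // Hypothetical hook: fetch the missing plug-in assembly,
            // e.g. from the Service application, then load its bytes.
            assembly = DownloadAndLoadAssembly(assemblyName);
        }
        return assembly.GetType(typeName, true);
    }

    private static Assembly DownloadAndLoadAssembly(string name)
    {
        throw new NotImplementedException("Fetch bytes, then Assembly.Load(bytes).");
    }
}

// var settings = new JsonSerializerSettings
// {
//     TypeNameHandling = TypeNameHandling.All,
//     Binder = new PluginAwareBinder()
// };
// string json = JsonConvert.SerializeObject(profile, settings);
// object graph = JsonConvert.DeserializeObject(json, settings);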
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.

WCF single point-of-contact

WCF beginner's question: I've been told that changing the WCF contract is costly and requires constant maintenance (recreating the proxy on the client side), and that therefore the preferred method is having one very generic point-of-contact method (which decides how to act, say, according to a given enum parameter).
This sounds quite smelly to me, but I haven't been able to find any information about this issue (bad choice of search keywords? probably).
Any advice, or maybe a useful link?
Thanks!
You don't need to generate the proxy again, you can simply ensure the client is built with the correct interface version. If you're very careful and only add methods, not remove or modify, that works just fine too. That's a lot of responsibility to manage, of course.
To use an interface rather than generate a client proxy, check my question from a while ago:
WCF Service Reference generates its own contract interface, won't reuse mine
You are confusing some terms here, and I think you might be referring to a known flaw that was fixed in .NET 3.5 SP1.
Recreating the WCF proxy used to be an expensive operation at runtime. This was improved in .NET 3.5 to cache the proxy objects transparently (see the MSDN blog).
If you are referring to the "code maintenance" of the proxy, then all you are really maintaining is an interface implemented at the client. If you need to maintain the interface, then this comes back to basic SOA: if your services expose as much information as possible, on the assumption that they will be used for purposes you haven't yet considered, then you will likely not need to modify the interface after it is created. You should consider your upgrade paths as well.
Juval Lowy has a good discussion about this problem in his book which is a little dense but has some pretty good information in it.
A piece of advice: WCF has a whole lot of features designed to make your code really simple and elegant. If you are worried about maintenance, you may be driven to write an interface like:
string ServiceMethod(string xml) //returns XML
Don't do this. Take the time to design a good, maintainable interface and a good data/message contract. That will let WCF provide all the extras you get for free when hosting your service.
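For contrast, even a small explicit contract gives WCF (and your clients) something real to work with; a sketch with invented names:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderRequest
{
    [DataMember] public int CustomerId { get; set; }
    [DataMember] public string ProductCode { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    int PlaceOrder(OrderRequest request);
}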
Generic (as in non-specific, monolithic) interfaces are hard to understand and program against. The reason not to define a single method as the API is that it becomes impossible for clients to understand what's going on, and when you change the (implicit) API of this interface, your clients will break in horrible ways that you won't detect at compile time.
It's been a while since I touched WCF, but if your clients are internal (same codebase, versioning and deployment schemes), then regenerating the WCF proxies is very easy, and having a "strong" detailed API will make your life so much easier than a generic one.
It depends on what kind of change you mean. Change to the service contract is indeed costly and should not happen. Service contracts are (or should be) at a sufficiently high level of granularity that change is very rare.
More common are changes to the types which are exposed on the service. These changes are more common and therefore you do need to approach your change in such a way as to avoid breaking existing clients if possible.
There are several ways you can do this, such as exposing your types polymorphically using an interface, but the simplest is to ensure that changes to your types only add new data member fields, and to make the new fields non-mandatory. If you can limit your changes to these, they have the lowest impact on existing clients while enabling new clients to use the new fields.
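For instance, a versioned data contract might grow like this (a sketch; the type is invented):

using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember] public string Name { get; set; }

    // Added in v2: non-mandatory, so old clients that never send it
    // (and old servers that never return it) still interoperate.
    [DataMember(IsRequired = false)]
    public string Email { get; set; }
}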
Hope this helps.
It is true that modifying the service contract (interface) also requires the client to recreate the proxy class on their end using the newly published WSDL, and it may even require the client to change their code to match the new proxy. I don't think you can create a generic interface that can handle all changes further down the road in the contract. A contract has to be written very carefully so that it doesn't change often, and if there is a need to change it, then it is better to deploy the service under a different version so that your old clients can still work with the old one.

Extensibility framework/pattern/good practice for Web services?

I'm currently working on a large real-time OLAP application. All data is held in RAM (a few gigabytes), and the common tasks involve brute-force scanning over that large quantity of data (which is fine). The results of processing are exposed via a web service (singleton/multithreaded) and presented using a Silverlight-based client.
The problem is that various customers need different functionality/algorithms and I don't know how to provide extensibility on the server-side. For the client side (Silverlight) I can use MEF/Prism, but I'm not sure what would be a good approach to tackle this problem on the server.
Please note that, ideally, other web services should have direct access (i.e. without marshaling) to the data of the main/current service, which holds the large data model.
Are there any:
a) frameworks/libraries
b) patterns
c) good practices
which would help me to modularize the application and make the selection of desired modules and their deployment relatively easy?
Sounds to me like Dependency Inversion is required: isolate logical parts of the system (algorithms, etc.) by defining interfaces, then use a DI/IoC framework to load the desired implementation at runtime (or at application start, etc.).
I haven't used Ninject, but plenty of people love it, so you could try that; there's also Spring.Net.
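The shape of it with Ninject would be something like this (a sketch; the algorithm interface and implementations are invented):

using Ninject;

public interface IScanAlgorithm
{
    object Scan(object data);
}

public class DefaultScanAlgorithm : IScanAlgorithm
{
    public object Scan(object data) { /* brute-force scan */ return null; }
}

public class CustomerXScanAlgorithm : IScanAlgorithm
{
    public object Scan(object data) { /* customer-specific logic */ return null; }
}

// At startup, bind whichever implementation this deployment needs:
// var kernel = new StandardKernel();
// kernel.Bind<IScanAlgorithm>().To<CustomerXScanAlgorithm>();
// var algorithm = kernel.Get<IScanAlgorithm>();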
Good Practices:
Ensure you have clear, precise logging so you know what's being used and when.
Think about whether you want a "default" implementation to load if the desired one fails, or whether you deliberately want to fail so that wrong data isn't returned by mistake (such as from the use of a different algorithm).
I've found that using attributes to decorate injectable modules is really helpful (especially in a web-based system that you don't have immediate access to); one reason is that you can build pages or controls that list all the known/available implementations at runtime.
You can also use the attribute approach to build a UI that lets users select which one they want; I use it for an open source web-application framework I built: http://www.morphological.geek.nz/Morphfolia/Capabilities/AttributeDriven.aspx
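A sketch of that attribute-scanning approach (the attribute and catalog are invented for illustration):

using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public class InjectableModuleAttribute : Attribute
{
    public string DisplayName { get; private set; }
    public InjectableModuleAttribute(string displayName) { DisplayName = displayName; }
}

public static class ModuleCatalog
{
    // Lists every decorated implementation in the given assemblies,
    // e.g. to populate an admin page of available modules.
    public static IEnumerable<string> KnownModules(IEnumerable<Assembly> assemblies)
    {
        foreach (Assembly assembly in assemblies)
            foreach (Type type in assembly.GetTypes())
                foreach (InjectableModuleAttribute attr in
                         type.GetCustomAttributes(typeof(InjectableModuleAttribute), false))
                    yield return attr.DisplayName + " (" + type.FullName + ")";
    }
}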

Architectural question: In what assembly should I put which class, for a clean solution?

PREAMBLE:
This is by far the longest post I've left here...but I think it's required in this case.
I've had questions about these kinds of things for a long time: how to name assemblies, and how to divide up classes within them.
I'd like to give an example of an application here, with only a bare minimum of classes to demonstrate what I'm trying to understand.
Imagine an application that
Accepts client messages, stores them in a DB, and later dequeues them to an MTA server.
It's a Web application with an ASP.NET interface to write a message and attach attachments.
There's also a Silverlight client, so the webapp exposes a ClientServices WCF ServiceContract, with one OperationContract (SaveMessage).
There's also a Windows client... it does the same thing via the same contract as the Silverlight client.
OK, that should be enough of a fake scenario to demonstrate my cluelessness.
The above will need the following classes:
Message
MessageAddress
MessageAddressType (an enum with From, To)
MessageAddressCollection
MessageAttachment
MessageAttachmentType
MessageAttachmentCollection
MessageException
MessageAddressFormatException
MessageExtensions (static extension for Message)
MessageAddressExtensions (static extension for MessageAddress)
MessageAttachmentExtensions (static extension for MessageAttachment)
Project.Contract.dll
My first stab at organizing the above into the right assemblies starts with observing that Message, MessageAddress, MessageAttachment, the enums needed for their properties (MessageAddressType, MessageAttachmentType), and the collections needed for them (MessageAddressCollection, MessageAttachmentCollection) are all to be marked [DataContract] so that they can be serialized between the WCF client and the server.
Being common to both, I think I would move them into a neutral shared assembly called Contract.
Project.Client.dll
I'll need a client proxy of the server's [ServiceContract] that refs the classes in Contract.dll.
So now the server, which also refs Project.Contract.dll, can take serialized Messages received from a WCF client and save them into a db.
Plugins
Next I would realize that I would like these objects to be processed server-side by 3rd-party plugins (e.g. a virus checker)...
But plugins should have read-only access (only) to the variables, in order to check them and throw errors if they see something they don't like.
So I would think about going back and having Message inherit from IMessageReadOnly... but where to put that interface?
Project.Interfaces.dll
If I put it in an assembly called Project.Interfaces.dll, this would work for the plugins, which could reference that without referencing Contracts.dll... but now the client has to reference both the Contracts assembly AND Interfaces... doesn't sound like a good direction...
Duplicate Objects
Alternatively, I could have two Message structures (and duplicate the other MessageAttachment, etc. classes as well)... one for communicating from client to server (in the Contracts.dll), and a second ServerMessage/ServerMessageAddress/ServerMessageAddressCollection on the server side, which inherits from IMessageReadOnly; then it would appear that I am closer to what I want.
With duplicate objects, plugins are limited in access, while the server BL, etc. has full access to the types relevant to its work, all while the client has different but (for now) identical objects...
In fact... I should probably start considering them as non-identical, making it clearer in my head that these objects are just there to talk to clients (i.e. Contract/Comm objects)...
The Website UI
which brings up... hmm... if there are two different Messages, and they now have different properties... which one is most appropriate for backing the ASP.NET forms? The ServerMessage object seems fastest (no mapping going on between types)... but all the logic has already been worked out against client message objects (with different properties and internal logic). So would I use a ClientMessage and map it to a ServerMessage, to keep the various UI logics the same across different mediums? Or should I prefer mapping and just rewrite the UI validation?
What about the third case, Silverlight... The Contracts assembly was a full-framework assembly... which Silverlight can't reference (different framework/build mechanism)... so the assembly I have on the Silverlight side might be exactly the same code, but it has to be a different assembly. How does that work out?
What exactly to Consider as DataContract?
Finally... and this is, I swear, near the end of my huge question... what about the pesky extra classes that are not clearly DataContracts?
For example, MessageAddress was a DataContract. OK. And the enums it exposes are part of it... makes sense... But if the MessageAddress constructor raises a MessageAddressFormatException... is that exception considered part of the DataContract?
Can there be Classes common to both Server, Client, AND Plugins?
Or is it an exception common to BOTH ServerMessageAddress and ClientMessageAddress, so it should not be duplicated, and should instead live in a Common assembly... so that in the end the client has to bind to Contracts AND Common? (Didn't we just go down this alley with the Interfaces assembly?)
What about common Base classes/Interfaces?
And should these exceptions have common base classes? For example... ClientMessageAddressException, ServerMessageAddressException, ServerMessageVirusException (from a plugin)... should I struggle to get them all to derive, as best as possible, from an abstract MessageException... or is there a point at which inheritance/reuse is just no longer an appropriate goal to strive for?
HUGE THANKS FOR READING THIS FAR.
I'm a developer, and on the tech side I can bumble along OK... but these kinds of questions, where I've had to lay out the assemblies and the architecture myself, leave me hugely perplexed... and lose me SO much time, as I drive myself batty moving things around from one assembly to another to see which one is the best fit, all while not really being certain of what I am doing and trying not to create circular references...
So -- really -- thanks for listening, and I hope this gets read by people who can describe how to lay out the above cleanly, hopefully expressing how to think my way through it for future projects as well.
After spending 10 minutes editing the question for formatting, I'm still going to downvote it. There's no way I'm going to read all that.
Go pick up a copy of
Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)
As an architect, I've learned that it doesn't pay to get too wrapped up in getting things absolutely perfect the first time; "perfect" is subjective anyway. Refactoring, especially moving classes between assemblies, doesn't carry too huge a cost. It sounds to me like you're already thinking things through logically and correctly. Here are my opinions on a few of your questions:
Q: Should I have read-only contracts for my data contract classes?
The plug-ins most likely shouldn't be aware of your data contracts at all. A virus checker may take a byte array, a spell checker a string and a locale, etc. If you're making a general interface layer for the plug-ins, you should isolate what's shared down to the data specific to each plug-in. This will let you maximize their reuse. Thus, I think you'll get little payoff from creating interfaces over your data contract structures, which should mostly be dumb bags of data with little logic that are practically interfaces themselves.
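For example (hypothetical shapes, just to show the granularity):

using System.Collections.Generic;

// Plug-ins see only the primitive data they need, never the data contracts.
public interface IVirusScanner
{
    // Throws, or returns a verdict, based on raw attachment content.
    void Scan(string attachmentName, byte[] content);
}

public interface ISpellChecker
{
    IEnumerable<string> FindErrors(string text, string locale);
}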
Q: Should I use the same data contract classes as my Silverlight app does in my ASP.NET application or use server-side classes directly?
I would go with the client message objects so you can benefit from code reuse. Object creation is fairly cheap, and I'm sure that most of the mapping would be one-to-one. It's not as fast, true, but that won't be the bottleneck in your application.
Q: Where do I put my exception classes?
I would put your example exception classes in the assembly with the data contract, since they are all raised due to contract violations or as a means to communicate errors while fulfilling the contract.
Q: Should the exceptions have common base classes?
I have yet to need to do this, but I don't know your code base as well as you do. My guess is that it will gain you little if anything.
Edit:
You may be overplanning for the future. In my experience, taking a YAGNI approach has allowed us to get the important things done more quickly. Making incremental design changes is preferred to spending valuable time building an elaborate architecture that you might never even benefit from.
