Using [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("System.Windows")] to expose internal properties - C#

OK, so I had a question a while back regarding Silverlight 4 data binding with anonymous types; one of the answers was to use [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("System.Windows")] in your AssemblyInfo.cs file.
I tried this and it works!
I know I'm making all my internal properties, classes, and methods visible to the System.Windows assembly.
But what kind of risk does this pose, with the following in mind:
The product is a hosted Silverlight-based web application, so it won't be distributed.
Thanks in advance

Well, actually it will be distributed to every client that accesses it, but that is not the point.
Information hiding is primarily an API design concern. If allowing a framework assembly to peek into your assemblies facilitates your development, I see no problem with it.
No one is going to be able to backdoor you if that is what you are worried about.
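For reference, here is a minimal sketch of what that looks like in practice; the CustomerRow class below is just a placeholder standing in for an internal (or compiler-generated anonymous) type you want the binding engine to see:

// AssemblyInfo.cs -- lets the Silverlight binding engine (which lives in the
// System.Windows assembly) reflect over this assembly's internal members.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("System.Windows")]

// An internal type like this -- or an anonymous type, which the compiler also
// emits as internal -- can now be used as a binding source.
internal class CustomerRow
{
    public string Name { get; set; }
    public decimal Balance { get; set; }
}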


Dynamic Assembly Resolution/Management

Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is that, when reading the configuration files, the application doing the deserialization has to have knowledge of all of the plug-ins defined in the configuration file. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation System for internal use for the company I'm currently working for in C# 4. It makes exhaustive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses Reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model put in place that allows the engine to do its job, which, coincidentally, has led to several user experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again, there is room for improvement; however, it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (WinForms app), and a plug-in tester (Console application version of the Windows Service). Each of the three applications that reference the shared assembly only include the minimum amount of code necessary to perform their specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that the plugin must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI is run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies that the Service knows about, so they can be deserialized without running into missing assemblies. Now that we are getting close to rolling out the system, a demand has come from our network admins, for security purposes, to allow the Configuration UI to run remotely on any PC in our network. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations that instance of the service is aware of, and rework the configuration UI to act more like SSMS with a Connect/Disconnect model. This doesn't really solve the problem on its own, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward; however, the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about all of the assemblies the Service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to only download the assemblies that the client isn't aware of yet, and only as needed. This doesn't necessarily clean up the AppDomain management issues, but it definitely helps address the version conflicts and related issues.
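For what it's worth, a rough sketch of that AssemblyResolve hook, assuming a hypothetical service contract with a GetAssemblyBytes operation (neither the interface nor the method exists in the project as described; they are placeholders):

using System;
using System.Reflection;

// Hypothetical ServiceContract exposed by the Service application.
public interface IAssemblyProvider
{
    byte[] GetAssemblyBytes(string assemblyShortName);
}

public static class PluginAssemblyResolver
{
    // Call once during UI startup, before any configuration is deserialized.
    public static void Hook(IAssemblyProvider service)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is the full name the deserializer failed to resolve.
            string shortName = new AssemblyName(args.Name).Name;
            byte[] bytes = service.GetAssemblyBytes(shortName);

            // Returning null lets the normal load failure surface; otherwise
            // the downloaded bytes are loaded into the current AppDomain
            // (note: there is no way to unload them short of a new AppDomain).
            return bytes == null ? null : Assembly.Load(bytes);
        };
    }
}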
Question
If you've stuck with me this long I applaud and thank you, but now I'm finally getting to the actual question here. After months of research and finally coming to a conclusion I am wondering if anyone here has had to deal with a similar issue and how you dealt with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application; the problem is that doing so would be too cumbersome and dirty to make it feasible. It is a nice suggestion and it has a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look into MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it takes less than an hour (including a coffee break) between hearing about it and compiling hello-world samples to get used to the features. It's really that simple.
Basically, you "export" something in one assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (for example, the application's directory, a plug-ins folder, etc.).
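A bare-bones illustration of that export/import pairing, searching both the application directory and a plug-ins folder (the IPlugin interface and folder name are made up for the example):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin
{
    void Execute();
}

// In a plug-in assembly dropped into the Plugins folder:
[Export(typeof(IPlugin))]
public class SamplePlugin : IPlugin
{
    public void Execute() { /* do the work */ }
}

// In the host application:
public class PluginHost
{
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }

    public void Compose()
    {
        var catalog = new AggregateCatalog(
            new DirectoryCatalog("."),        // application directory
            new DirectoryCatalog("Plugins")); // plug-ins folder
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // populates Plugins
    }
}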
Edit: what if you try to download and load (and possibly cache) plugins on-the-fly on configuration load?
I think that you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plugin (i.e. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plugin.
On loading the plugin, pass the information in section 2 and let the plugin deserialize it according to its needs.
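A sketch of how that two-section split might be consumed, assuming a convention (invented here) where the plug-in exposes a Configure method that receives its own raw settings XML:

using System;
using System.Reflection;
using System.Xml.Linq;

// Section 1: just enough information to locate and load the plug-in.
public class PluginManifestEntry
{
    public string AssemblyPath { get; set; }
    public string TypeName { get; set; }
}

public static class PluginLoader
{
    // Section 2 is passed through untouched; the plug-in deserializes it
    // however it sees fit (DataContractSerializer, manual XML reading, etc.).
    public static object LoadAndConfigure(PluginManifestEntry entry, XElement pluginSettings)
    {
        Assembly assembly = Assembly.LoadFrom(entry.AssemblyPath);
        Type pluginType = assembly.GetType(entry.TypeName, true);
        object plugin = Activator.CreateInstance(pluginType);

        // Hypothetical convention: void Configure(XElement settings).
        MethodInfo configure = pluginType.GetMethod("Configure");
        if (configure != null)
            configure.Invoke(plugin, new object[] { pluginSettings });

        return plugin;
    }
}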
Maybe you can divide this problem into two parts:
the administrator allows users to download one of the predefined configurations (sets of libraries), and MEF helps inject the required dependencies
each user activity should pass through a security proxy; plug-in modules are not allowed to call the BL directly. The proxy could match the custom security attribute against the allowed activities.
i.e.
[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId){}
[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName){}
[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<Account> AccountList(Predicate<Account> p) { return null; }
and then grant activities to AD groups (an abstract description):
corp\securityOperators = "Security.*" //allow calls to all security manipulation
corp\HQmanager = "Manager.View" //allow only view access
corp\Operator = "Manager.*"
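To make the idea concrete, here is a sketch of what the attribute and the proxy check could look like; the attribute, the role strings, and the wildcard matching are all illustrative, not an existing framework:

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class MyRoleAttribute : Attribute
{
    public string[] Name { get; set; }
}

public static class SecurityProxy
{
    // Called before forwarding a plug-in's call to the BL method.
    public static void Demand(MethodInfo target, string[] callerRoles)
    {
        var attribute = (MyRoleAttribute)Attribute.GetCustomAttribute(target, typeof(MyRoleAttribute));
        bool allowed = attribute != null && attribute.Name
            .Any(required => callerRoles.Any(granted => Matches(required, granted)));

        if (!allowed)
            throw new UnauthorizedAccessException(target.Name);
    }

    // A granted "Security.*" (from an AD group mapping) matches any
    // required "Security.x" role; otherwise require an exact match.
    static bool Matches(string required, string granted)
    {
        return granted.EndsWith(".*")
            ? required.StartsWith(granted.Substring(0, granted.Length - 1))
            : required == granted;
    }
}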
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, per the MSDN Remarks:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder - in here you could put any logic regarding loading assemblies that may not yet be loaded.
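As a sketch of that combination (type information embedded in the JSON plus a binder that gets a chance to load missing plug-in assemblies); the ResolvePluginAssembly call is a placeholder for whatever download/load logic you end up with:

using System;
using System.Runtime.Serialization;
using Newtonsoft.Json;

public class PluginAwareBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        Type type = Type.GetType(typeName + ", " + assemblyName, false);
        if (type == null)
        {
            // Placeholder hook: fetch and Assembly.Load the plug-in, then retry.
            ResolvePluginAssembly(assemblyName);
            type = Type.GetType(typeName + ", " + assemblyName, true);
        }
        return type;
    }

    private void ResolvePluginAssembly(string assemblyName)
    {
        // download bytes from the Service, Assembly.Load(bytes), etc.
    }
}

public static class ProfileStore
{
    private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        TypeNameHandling = TypeNameHandling.All, // embed CLR type names
        Binder = new PluginAwareBinder()
    };

    public static string Save(object profile)
    {
        return JsonConvert.SerializeObject(profile, Formatting.Indented, Settings);
    }

    public static object Load(string json)
    {
        return JsonConvert.DeserializeObject(json, Settings);
    }
}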
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.

In C# (VS-2010), is there a way to fail a frontend build if a certain library class is used? (When normally it would compile just fine?)

I'm writing a library that has a bunch of classes in it which are intended to be used by multiple frontends (some frontends share the same classes). For each frontend, I am keeping a hand edited list of which classes (of a particular namespace) it uses. If the frontend tries to use a class that is not in this list, there will be runtime errors. My goal is to move these errors to compile time.
If any of you are curious, these are 'mapped' nhibernate classes. I'm trying to restrict which frontend can use what so that there is less spin up time, and just for my own sanity. There's going to be hundreds of these things eventually, and it will be really nice if there's a list somewhere that tells me which frontends use what that I'm forced to maintain. I can't seem to get away with making subclasses to be used by each frontend and I can't use any wrapper classes... just take that as a given please!
Ideally, I want visual studio to underline red the offending classes if someone dares to try and use them, with a nice custom error in the errors window. I also want them GONE from the intellisense windows. Is it possible to customize a project to do these things?
I'm also open to using a pre-build program to analyze the code for these sorts of things, although this would not be as nice. Does anyone know of tools that do this?
Thanks
Isaac
Let's say that you have a set of classes F. You want these classes to be visible only to a certain assembly A. Then you segregate the classes in F into a separate assembly, mark them as internal, and apply the InternalsVisibleTo attribute to that assembly, naming A as the friend assembly.
If you then try to use any class from F in an assembly A' that is not named in an InternalsVisibleTo attribute on the assembly containing F, you will get a compile-time error.
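A sketch with placeholder names (the error text is roughly what the C# compiler reports):

// In the assembly that holds the restricted classes (F), e.g. Properties/AssemblyInfo.cs:
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FrontendA")]

// The mapped classes themselves are declared internal:
internal class SchoolMap
{
    public virtual int Id { get; set; }
}

// FrontendA compiles against SchoolMap as usual. Any other frontend that
// tries "new SchoolMap()" fails at compile time with something like
// CS0122: 'SchoolMap' is inaccessible due to its protection level,
// and the type does not show up in IntelliSense there either.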
I also want them GONE from the intellisense windows. Is it possible to customize a project to do these things?
That happens with the solution I presented above as well. The classes are internal to the assembly containing F and are not visible from any assembly A' that is not named in an InternalsVisibleTo attribute on the assembly containing F.
However, I generally find that InternalsVisibleTo is a code smell (not always, just often).
You should club your classes into separate dlls / projects and only provide access to those dlls to front end projects that are 'appropriate' for it. This should be simple if your front-end and the group of classes it may use are logically related.
If not, then I would say something smells fishy; probably your class design/approach needs a revisit.
I think you'll want to take a look at the ObsoleteAttribute: http://msdn.microsoft.com/en-us/library/system.obsoleteattribute%28v=VS.100%29.aspx
I believe you can set IsError to true and it will issue an error at build time (not positive, though).
As for the IntelliSense, you can use the EditorBrowsableAttribute: http://msdn.microsoft.com/en-us/library/system.componentmodel.editorbrowsableattribute.aspx Or at least that is what seems to get used for decoration when I add a service reference and cannot see the members.
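A quick sketch of both attributes on a placeholder class (IsError is set through the second constructor argument):

using System;
using System.ComponentModel;

// The second argument makes any use of the type a compile-time error
// instead of a warning (this is what surfaces as IsError = true).
[Obsolete("Not available to this frontend; see the per-frontend class list.", true)]
// Hides the type from IntelliSense in assemblies that reference this one.
[EditorBrowsable(EditorBrowsableState.Never)]
public class RestrictedMappedClass
{
    public virtual int Id { get; set; }
}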

Type Redefinition With WSE Web Service Import

Consider the following Visual Studio project structure
ProjectA.csproj
    AClass.cs
ProjectB.csproj
    References
        ProjectA
    Web References
        AWebService
AWebService.csproj
    References
        ProjectA
    ReturnAClassViaWebService.asmx
The issue occurs when ProjectB adds the web reference to AWebService and automatically generates all the proxy code for accessing AWebService including a new implementation of AClass. Since all of our other code needs to use the AClass defined in ProjectA, we're forced to convert the AWebService.AClass returned from the service into something we can use.
We're currently considering two solutions, neither of which are ideal.
Manually editing the generated Reference.cs to remove new definitions of AClass
Serializing AWebService.AClass to a stream then deserializing to ProjectA.AClass
Does anyone have any better solutions? This seems like something common enough for other developers to have experienced it.
Ideally we would like to have the proxy code generated in ProjectB to reference ProjectA.AClass rather than generating a whole new implementation.
Our environment is VS 2008 using .NET 2.0.
I have had the same problem that you are describing and I have tried both of the options you specify without being entirely happy about either of them.
The reason we both have this issue is at least partly because the shared-library-between-consumer-and-provider-of-a-web-service-solution is in violation of accepted patterns and practices for web service design. On the consumer side, it should be sufficient to know the interface published in the WSDL.
Still, if you are prepared to accept a tight coupling between your web service provider and web service consumer and you know for certain that your current client will never be replaced by a different client (which might not be capable of referencing the shared library), then I understand why the proposed solution seems like a neat way to structure your app. IMPORTANT NOTE: Can we really honestly answer yes to both of these questions? Probably not.
To recap:
The issue appears when you have classes (e.g. a strongly typed dataset) defined in some sort of shared library (used on both client and server).
Some of your shared classes are used in the interface defined by your web service.
When the web reference is added there are proxy classes defined (for your shared classes) within the web reference namespace.
Due to the different namespaces the proxy class and its actual counterpart in the shared library are incompatible.
Here are four solutions that can be tried if you want to go ahead with the shared library setup:
Don't. Use the proxy class on the client side. This is how it is intended to be done. It works fine unless you simultaneously want to leverage aspects of the shared library that are not exposed by the web service WSDL.
Implement or use a provided copy/duplication feature of the class (e.g. you could try to Merge() one strongly typed dataset into another). A cast is obviously not possible, and the copy option is usually not a very good solution either, since it tends to have undesirable side effects. E.g. when you Merge a dataset into another, all the rows in the target dataset will be labeled as 'changed'. This could be rectified with AcceptChanges(), but what if a couple of the received rows actually were changed?
Serialize everything - except for elementary data types - into strings (and back again on the consumer side). Loss of type safety is one important weakness of this approach.
Remove the explicit declaration of the shared class in Reference.cs and strip the namespace from the shared class wherever it is mentioned within Reference.cs. This is probably the best option. You get what you really wanted: the shared class is returned by the web service. The only irritating drawback with this solution is that your modifications to the Reference.cs file are lost whenever you update your web reference. Trust me: it can be seriously annoying.
Here is a link to a similar discussion:
You can reuse existing referenced types between the client and service by clicking on the 'Advanced' button on the 'Add Service Reference' form. Make sure the 'Reuse types in referenced assemblies' checkbox is checked and when the service client is generated it should reuse all types from project A.
In past versions this has not always worked correctly and I've had to explicitly select the shared type assemblies by selecting the 'Reuse types in specified referenced assemblies' option and then checking the appropriate assemblies in the list box. However, I just tested this with VS 2008 SP1 and it appears to work as expected. Obviously, you need to make sure that the types that are being used by the service and client projects are both from project A.
Hope that this helps.
We encountered a similar problem with one of our projects. Because we had several dependencies, we ended up creating a circular reference, because project 1 required objects from project 2, but project 2 could not be built before project 3, which relied on project 1 being built.
To solve this problem, we extracted all the public standalone classes from both projects and placed them inside a single library. In the end we created something like this:
Framework.Objects
Framework.Interface
Framework.Implementation
WebService
The WebService would be linked to all projects in our case, whereas external parties would only be linking to the objects and interface classes to work with. The actual implementation was coupled at runtime through reflection.
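For illustration, the runtime coupling via reflection could look something like this (assembly and type names are invented):

using System;
using System.Reflection;

public static class ImplementationFactory
{
    // Framework.Interface defines the contract; Framework.Implementation
    // supplies the concrete type. Resolving it by name at runtime removes
    // the compile-time reference that caused the circular dependency.
    public static T Create<T>(string assemblyName, string typeName)
    {
        Assembly assembly = Assembly.Load(assemblyName);
        return (T)Activator.CreateInstance(assembly.GetType(typeName, true));
    }
}

// Usage from the WebService project, for example:
// var store = ImplementationFactory.Create<IMessageStore>(
//     "Framework.Implementation", "Framework.Implementation.MessageStore");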
Hope this helps

Code Organization Conundrum: Web Project With Multiple Supporting DLLs?

I am trying to get a handle on the best practice for code organization within my project. I have looked around on the internet for good examples and, so far, I have seen examples of a web project with one or multiple supporting class libraries that it references, or a web project with sub-folders that follow its namespace conventions.
Assuming there is no right answer, this is what I currently have for code organization:
MyProjectWeb
This is my web site. I am referencing my class libraries here.
MyProject.DLL
As the base namespace, I am using this DLL for files that need to be generally consumable. For example, my class "Enums" that has all the enumerations in my project lives there. As does the class MyProjectException for all exception handling.
MyProject.IO.DLL
This is a grouping of maybe 20 files that handle file upload and download (so far).
MyProject.Utilities.DLL
All my common classes and methods bunched up together in one generally consumable DLL. Each class follows an "XHelper" convention, such as "SqlHelper", "AuthHelper", "SerializationHelper", and so on...
MyProject.Web.DLL
I am using this DLL as the main client interface. Right now, the majority of class files here are:
1) properties (such as School, Location, Account, Posts)
2) authorization stuff (such as custom membership, custom role, & custom profile providers)
My question is simply: does this seem logical?
Also, how do I avoid having to cross-reference DLLs from one project library to the next? For example, MyProject.Web.DLL uses code from MyProject.Utilities.DLL, and MyProject.Utilities.DLL uses code from MyProject.DLL. Is this solved by clicking on Properties and selecting "Dependencies"? I tried that but still don't seem to be accessing the namespaces of the assembly I have selected. Do I have to reference every assembly I need for each class library?
Responses appreciated, and thanks for your patience.
It is logical in that it proceeds logically from your assumptions. The fact that you are asking the question leads me to believe you might not think it is rational.
In general, things should be broken down along conceptual boundaries rather than technical ones. MyProject.IO.DLL is an example of this principle surfacing in your current design. All of the IO things logically go together, so they end up in a single binary. Makes sense.
Breaking things down into namespaces based on their technical type - enum, class, etc. - is going to be a little more problematic.
The dependencies problem is the same one you'd have breaking one class up into many, and it is resolved using the same technique: dependency inversion. Where two things seemingly need to depend on one another, add an intermediary thing that represents the contract between the first two. This can be abstractions, constants, mediators, etc. - whatever you need to make it so that instead of thing A depending on thing B and thing B depending on thing A, you have things A and B both depending on thing C.
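A minimal sketch of that inversion, borrowing names from the question's layout (the interface and classes below are invented to show the shape of it):

// MyProject.DLL (thing C): only the contract lives here.
public interface IAuthHelper
{
    bool IsAuthorized(string userName, string action);
}

// MyProject.Utilities.DLL (thing A): implements the contract and
// references only MyProject.DLL.
public class AuthHelper : IAuthHelper
{
    public bool IsAuthorized(string userName, string action)
    {
        return true; // real checks omitted
    }
}

// MyProject.Web.DLL (thing B): consumes the contract and also references
// only MyProject.DLL; the concrete AuthHelper is handed in at composition time.
public class AccountService
{
    private readonly IAuthHelper auth;

    public AccountService(IAuthHelper auth)
    {
        this.auth = auth;
    }
}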

Architectural question: In what assembly should I put which class, for a clean solution?

PREAMBLE:
This is by far the longest post I've left here...but I think it's required in this case.
I've had questions about these kinds of things for a long time: how to name assemblies, and how to divide up classes within them.
I'd like to give an example of an application here, with only a bare minimum of classes to demonstrate what I'm trying to understand.
Imagine an application that
Accepts client messages, stores them in a DB, and then later dequeues them to an MTA server.
It's a web application that has an ASP.NET interface to write a message and attach attachments.
There's also a Silverlight client, so the webapp exposes a ClientServices WCF ServiceContract, with one OperationContract (SaveMessage).
There's also a Windows client... it does the same thing as the Silverlight client.
OK, that should be enough of a fake scenario to demonstrate my cluelessness.
The above will need the following classes:
Message
MessageAddress
MessageAddressType (an enum with From, To)
MessageAddressCollection
MessageAttachment
MessageAttachmentType
MessageAttachmentCollection
MessageException
MessageAddressFormatException
MessageExtensions (static extension for Message)
MessageAddressExtensions (static extension for MessageAddress)
MessageAttachmentExtensions (static extension for MessageAttachment)
Project.Contract.dll
My first stab at organizing the above into the right assemblies would be observing that Message, MessageAddress, MessageAttachment, the enums needed for their properties (MessageAddressType, MessageAttachmentType), and the collections needed for them (MessageAddressCollection, MessageAttachmentCollection) are all to be marked as [DataContract] so that they can be serialized between the WCF client and the server.
Being common to both, I think I would move them into a neutral shared assembly called Contract.
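A stripped-down sketch of what those contract types might look like in Project.Contract.dll (only a few members shown, all illustrative):

using System.Collections.ObjectModel;
using System.Runtime.Serialization;

[DataContract]
public class Message
{
    [DataMember] public MessageAddressCollection Addresses { get; set; }
    [DataMember] public string Body { get; set; }
}

[DataContract]
public enum MessageAddressType
{
    [EnumMember] From,
    [EnumMember] To
}

[DataContract]
public class MessageAddress
{
    [DataMember] public MessageAddressType AddressType { get; set; }
    [DataMember] public string Address { get; set; }
}

[CollectionDataContract]
public class MessageAddressCollection : Collection<MessageAddress>
{
}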
Project.Client.dll
I'll need a Client proxy of the server [ServiceContract], that refs the classes in the Contract.dll.
So now the server, which also refs Project.Contract.dll could now save serialized Messages received from a WCF Client, and save them into a db.
Plugins
Next I would realize that I would like to have these objects be processed server-side by 3rd-party plugins (e.g., a virus checker)...
But plugins should have read-only access (only) to the variables, in order to check them and throw errors if they see something they don't like.
So I would think about going back and having Message implement IMessageReadOnly... but where to put that interface?
Project.Interfaces.dll
If I put it in an assembly called Project.Interfaces.dll, this would work for the plugins who could reference that without having a reference to Contracts.dll...but now the client has to reference both Contracts assembly AND Interfaces...doesn't sound like a good direction...
Duplicate Objects
Alternatively, I could have two Message structures (and duplicate the other MessageAttachment, etc. classes as well)... one for communicating from client to server (in the Contracts.dll), and then use a second ServerMessage/ServerMessageAddress/ServerMessageAddressCollection on the server side, which implements IMessageReadOnly, and then it would appear that I am closer to what I want.
With duplicate objects, plugins are limited in access, while Server BL, etc. has full access for types relevant to its work, all while the client has different but identical objects...
In fact... I should probably start considering them as non-identical, making it clearer in my head that the objects are just there to talk to clients (i.e., Contract/Comm objects)...
The Website UI
Which brings up... hmm... if there are two different Messages, and they now have different properties... which one is the most appropriate to use to back the ASP.NET forms? The ServerMessage object seems fastest (no mapping going on between types)... but all the logic has already been worked out against the client message objects (with different properties and internal logic). So would I use a ClientMessage and map it to a ServerMessage, to keep the various UI logics the same across different mediums? Or should I skip the mapping and just rewrite the UI validation?
What about the third case, Silverlight... The Contracts assembly was a full-framework assembly... which Silverlight can't reference (different framework/build mechanism)... so the assembly that I have on the Silverlight side might be exactly the same code, but it has to be a different assembly. How does that work out?
What exactly to Consider as DataContract?
Finally...and this is, I swear, near the end of my huge question...what about the pesky extra classes that are not clearly DataContract?
For example, the MessageAddress was a DataContract. OK. And the enums it exposes are part of it... makes sense... But if the MessageAddress constructor throws a MessageAddressFormatException... is that exception considered part of the DataContract?
Can there be Classes common to both Server, Client, AND Plugins?
Or is it an exception that is common to BOTH ServerMessageAddress and ClientMessageAddress, so should not be duplicated, and instead be in a Common assembly...so that in the end, the client has to bind to Contracts AND Common? (Didn't we just go down this alley with the Interfaces assembly?)
What about common Base classes/Interfaces?
And should these exceptions have common base classes? For example... ClientMessageAddressException, ServerMessageAddressException, ServerMessageVirusException (from a plugin)... should I struggle to get them all to derive, as best as possible, from an abstract MessageException... or is there a time when inheritance/reuse is just no longer an appropriate goal to strive for?
HUGE THANKS FOR READING THIS FAR.
I'm a developer and on the tech side I can bumble along ok...but these kinds of questions, where I've had to lay out the assemblies, the architecture, myself, leave me hugely perplexed...and lose me SOOOO much time, as I drive myself batty, moving things around from one assembly to another to see which one is the best fit, all while not really certain of what I am doing, and trying to not get circular references...
So -- really -- thanks for listening, and I hope this gets read by people who can describe how to lay out the above cleanly, hopefully expressing how to think my way through it for future projects as well.
After spending 10 minutes editing the question for formatting, I'm still going to downvote it. There's no way I'm going to read all that.
Go pick up a copy of
Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries (2nd Edition)
As an architect, I've learned that it doesn't pay to get too wrapped up in getting things absolutely perfect the first time, and perfect is subjective. Refactoring, especially moving classes between assemblies, doesn't have too huge a cost. It sounds to me like you're already thinking things through logically and correctly. Here's my opinions on a few of your questions:
Q: Should I have read-only contracts for my data contract classes?
The plugins most likely shouldn't be aware of your data contracts at all. A virus checker may take a byte array, a spell checker a string and a locale, etc. If you're making a general interface layer for the plugins, you should isolate what's shared down to just the data each plugin needs. This will allow you to maximize their reuse. Thus, I think you'll get little payoff from creating interfaces to your data contract structures, which should mostly be dumb bags of data with little logic, and which are practically interfaces themselves.
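In other words, the plug-in contract might shrink to something like the following (names invented), so a virus checker never needs to know about Message or MessageAttachment at all:

// Plug-in interface assembly: no reference to the contract types.
public interface IAttachmentScanner
{
    // Raw content in, verdict out; the host extracts the bytes and the
    // file name from its own attachment objects before calling.
    ScanResult Scan(byte[] content, string fileName);
}

public enum ScanResult
{
    Clean,
    Rejected
}

// The host (which does reference the contracts) feeds each attachment's raw
// data to the scanner and reacts to the verdict, e.g. by rejecting the message.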
Q: Should I use the same data contract classes as my Silverlight app does in my ASP.NET application or use server-side classes directly?
I would go with the client message objects so you can benefit from code reuse. Object creation is fairly cheap, and I'm sure that most of the mapping would be one-to-one. It's not as fast, true, but that won't be the bottleneck in your application.
Q: Where do I put my exception classes?
I would put your example exception classes in the assembly with the data contract, since they are all raised due to contract violations or as a means to communicate errors while fulfilling the contract.
Q: Should the exceptions have common base classes?
I have yet to need to do this, but I don't know your code base as well as you do. My guess is that it will gain you little if anything.
Edit:
You may be overplanning for the future. In my experience, taking a YAGNI approach has allowed us to get the important things done more quickly. Making incremental design changes is preferred to spending valuable time building an elaborate architecture that you might never even benefit from.
