I have a WPF application for which my users can create their own plugins by using MEF. Each plugin implements an interface that allows the main application to perform CRUD operations on some data source, e.g. a database.
I have created 2 plugins:
LocalDatabase - provides data from an SQLite database
RemoteDatabase - provides data from a MySQL database
Both are using Entity Framework to do their job. Each of those plugins needs to have its own implementation of the DbConfiguration class.
Now, the problem is that the WPF application loads those 2 plugins, but fails to assign each of them their own implementation of the DbConfiguration class, because it seems that you can have only one DbConfiguration per AppDomain.
So I always have only one of those plugins working.
I was thinking about having just one implementation of the DbConfiguration class and give each plugin an option to add its required configs to that, but the problem is that it creates some coupling between the WPF application and Entity Framework. I'd like to keep the Entity Framework stuff only inside the plugins without the need of modifying the WPF application. It shouldn't care about what plugins use to access their data source.
Is there any way of making it work this way? Could I maybe somehow create a separate AppDomain for each plugin, so that each could then use its own DbConfiguration class?
I've found a solution which is a bit hacky, but it does seem to work, so I thought I'd post it in the unlikely case that someone faces the same issue in the future.
After some additional research, I've learnt that you can use the DbConfiguration.Loaded event to register some additional Dependency Resolvers for EF. So, in each plugin's constructor, I subscribe to the event and add a new Dependency Resolver: SQLite for the LocalDatabase and MySql for the RemoteDatabase. I got rid of the custom DbConfiguration classes from each plugin.
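To illustrate the idea, here is roughly what that subscription looks like in the MySQL plugin's constructor (a reconstruction rather than my exact code; the plugin class name is made up, and the final version adds the CustomMySqlDbDependencyResolver wrapper shown further below instead of the raw MySqlDependencyResolver):

public class RemoteDatabasePlugin // illustrative name for the MySQL-backed plugin
{
    public RemoteDatabasePlugin()
    {
        // DbConfiguration.Loaded fires when EF6 locks down its configuration,
        // so each plugin can hook it to add its own provider-specific resolver.
        DbConfiguration.Loaded += (sender, args) =>
            args.AddDependencyResolver(new MySqlDependencyResolver(), false); // false: don't override config-file settings
    }
}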
This looked promising, but a new problem appeared: there were cases where the LocalDatabase plugin called the MySql resolver, which then returned the MySql implementation of the requested service type. Obviously the LocalDatabase plugin couldn't work with that, because it expected the SQLite implementation. And vice versa.
So each of the resolvers would actually need to check who called the GetService method: if it's some method from the same assembly that the custom resolver is in, it tries to resolve; otherwise it's assumed that a resolver from a different plugin should take care of that request, and it returns null to let that happen.
The problem is that the GetService method doesn't supply any information about the requester. So that's where I came up with the hacky solution, which uses StackTrace to check whether any of the calling methods belongs to the same assembly that the current resolver resides in.
public class CustomMySqlDbDependencyResolver : IDbDependencyResolver
{
    private readonly Assembly _executingAssembly = Assembly.GetExecutingAssembly();
    private readonly MySqlDependencyResolver _mySqlResolver = new MySqlDependencyResolver();

    public object GetService(Type type, object key)
    {
        // Walk the call stack (skipping this frame) and only resolve if at least one
        // caller lives in the same assembly as this resolver, i.e. in this plugin.
        var stackTrace = new StackTrace();
        StackFrame[] stackFrames = stackTrace.GetFrames().Skip(1).ToArray();

        bool shouldResolve = stackFrames.Any(f => f.GetMethod().DeclaringType.Assembly.Equals(_executingAssembly));
        if (!shouldResolve)
        {
            // Not our call; let another plugin's resolver handle this request.
            return null;
        }

        var resolvedService = _mySqlResolver.GetService(type, key);
        return resolvedService;
    }

    public IEnumerable<object> GetServices(Type type, object key)
    {
        var service = GetService(type, key);
        if (service != null)
        {
            yield return service;
        }
    }
}
I have an ASP.NET MVC application using StructureMap.
I have created a service called SecurityContext which has a static Current property. A simplified version looks like this:
public class SecurityContext : ISecurityContext
{
    public bool MyProperty { get; private set; }

    public static SecurityContext Current
    {
        get
        {
            return new SecurityContext() { MyProperty = true };
        }
    }
}
I've hooked this up in my StructureMap registry as follows:
For<ISecurityContext>().Use(() => SecurityContext.Current);
My understanding of this lambda-expression overload of the Use method is that the returned concrete object is the same for the entire HTTP request scope.
However, I've set up a test case where my context interface is injected in two places, once in the controller's constructor and again using the SetterProperty attribute in the base class my view inherits from.
When debugging I observe the Current static method being hit twice so clearly my assumptions are wrong. Can anyone correct what I'm doing here? The reason I want this request-scoped is because I'm loading certain data into my context class from the database so I don't want this to happen multiple times for a given page load.
Thanks in advance.
The default lifecycle for a configuration is Transient, thus each request for an ISecurityContext will create a new instance of SecurityContext. What I think you want is to use the legacy HttpContext lifecycle.
Include the StructureMap.Web nuget package. Then change your configuration to the following:
For<ISecurityContext>()
    .Use(() => SecurityContext.Current)
    .LifecycleIs<HttpContextLifecycle>();
More information on lifecycles can be found here.
The HttpContextLifecycle is obsolete; however, I do not know if or when it will be removed. The StructureMap team does recommend against using this older ASP.NET lifecycle. They state in the documentation that most modern web frameworks use a nested container per request to accomplish the same scoping. Information about nested containers can be found here.
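For completeness, here is a rough sketch of that nested-container-per-request idea (not tied to any particular framework integration; it reuses the container and ISecurityContext from the question and relies on StructureMap's documented behaviour that transients are effectively scoped to a nested container):

using (IContainer nested = container.GetNestedContainer())
{
    // Within a nested container, transient registrations resolve to one instance
    // per nested container, which gives per-request semantics when you create
    // one nested container per HTTP request.
    var first = nested.GetInstance<ISecurityContext>();
    var second = nested.GetInstance<ISecurityContext>();
    // first and second refer to the same SecurityContext instance here.
}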
I don't know if the version of ASP.NET MVC you are using is considered a modern web framework. I doubt it is, because ASP.NET Core 1.0 is really the first in the ASP.NET line to fully embrace the use of DI. However, I will defer to @jeremydmiller on this one.
Using SimpleInjector, I am trying to register an entity that depends on values retrieved from another registered entity. For example:
Settings - Reads settings values that indicate the type of SomeOtherService the app needs.
SomeOtherService - Relies on a value from Settings to be instantiated (and therefore registered).
Some DI containers allow registering an object after resolution of another object. So you could do something like the pseudo code below:
container.Register<ISettings, Settings>();
var settings = container.Resolve<ISettings>();
System.Type theTypeWeWantToRegister = Type.GetType(settings.GetTheISomeOtherServiceType());
container.Register(typeof(ISomeOtherService), theTypeWeWantToRegister);
SimpleInjector does not allow registration after resolution. Is there some mechanism in SimpleInjector that allows the same architecture?
A simple way to meet this requirement is to register all of the available types that may be required and have the configuration ensure that the container returns the correct type at run time ... it's not so easy to explain in English, so let me demonstrate.
You can have multiple implementations of an interface but at runtime you want one of them, and the one you want is governed by a setting in a text file - a string. Here are the test classes.
public interface IOneOfMany { }

public class OneOfMany1 : IOneOfMany { }

public class OneOfMany2 : IOneOfMany { }

public class GoodSettings : ISettings
{
    public string IWantThisOnePlease
    {
        get { return "OneOfMany2"; }
    }
}
So let's go ahead and register them all:
private Container ContainerFactory()
{
    var container = new Container();

    container.Register<ISettings, GoodSettings>();
    container.RegisterAll<IOneOfMany>(this.GetAllOfThem(container));
    container.Register<IOneOfMany>(() => this.GetTheOneIWant(container));

    return container;
}

private IEnumerable<Type> GetAllOfThem(Container container)
{
    var types = OpenGenericBatchRegistrationExtensions
        .GetTypesToRegister(
            container,
            typeof(IOneOfMany),
            AccessibilityOption.AllTypes,
            typeof(IOneOfMany).Assembly);

    return types;
}
The magic happens in the call to GetTheOneIWant - this is a delegate and will not get called until after the Container configuration has completed - here's the logic for the delegate:
private IOneOfMany GetTheOneIWant(Container container)
{
    var settings = container.GetInstance<ISettings>();

    var result = container
        .GetAllInstances<IOneOfMany>()
        .SingleOrDefault(i => i.GetType().Name == settings.IWantThisOnePlease);

    return result;
}
A simple test will confirm it works as expected:
[Test]
public void Container_RegisterAll_ReturnsTheOneSpecifiedByTheSettings()
{
    var container = this.ContainerFactory();

    var result = container.GetInstance<IOneOfMany>();

    Assert.That(result, Is.Not.Null);
}
As you already stated, Simple Injector does not allow mixing registration and resolving instances. When the first type is resolved from the container, the container is locked for further changes. When a call to one of the registration methods is made after that, the container will throw an exception. This design was chosen to force the user to strictly separate the two phases, and it prevents all kinds of nasty concurrency issues that could easily arise otherwise. This lock-down, however, also allows performance optimizations that make Simple Injector the fastest in the field.
This does mean that you sometimes need to think a little differently about doing your registrations. In most cases, however, the solution is rather simple.
In your example for instance, the problem would simply be solved by letting the ISomeOtherService implementation have a constructor argument of type ISettings. This would allow the settings instance to be injected into that type when it is resolved:
container.Register<ISettings, Settings>();
container.Register<ISomeOtherService, SomeOtherService>();
// Example
public class SomeOtherService : ISomeOtherService {
    public SomeOtherService(ISettings settings) { ... }
}
Another solution is to register a delegate:
container.Register<ISettings, Settings>();
container.Register<ISomeOtherService>(() => new SomeOtherService(
    container.GetInstance<ISettings>().Value));
Notice how container.GetInstance<ISettings>() is still called here, but it is embedded in the registered Func<ISomeOtherService> delegate. This will keep the registration and resolving separated.
Another option is to prevent having a large application Settings class in the first place. I've experienced in the past that those classes tend to change quite often and can complicate your code, because many classes will depend on that class/abstraction while every class uses different properties. This is an indication of an Interface Segregation Principle violation.
Instead, you can also inject configuration values directly into classes that require it:
var conString = ConfigurationManager.ConnectionStrings["Billing"].ConnectionString;
container.Register<IConnectionFactory>(() => new SqlConnectionFactory(conString));
In the last few applications I built, I still had some sort of Settings class, but this class was internal to my Composition Root and was not injected itself; only the configuration values it held were injected. It looked like this:
string connString = ConfigurationManager.ConnectionStrings["App"].ConnectionString;

var settings = new AppConfigurationSettings(
    scopedLifestyle: new WcfOperationLifestyle(),
    connectionString: connString,
    sidToRoleMapping: CreateSidToRoleMapping(),
    projectDirectories: ConfigurationManager.AppSettings.GetOrThrow("ProjectDirs"),
    applicationAssemblies:
        BuildManager.GetReferencedAssemblies().OfType<Assembly>().ToArray());

var container = new Container();

var connectionFactory = new ConnectionFactory(settings.ConnectionString);
container.RegisterSingle<IConnectionFactory>(connectionFactory);

container.RegisterSingle<ITimeProvider, SystemClockTimeProvider>();

container.Register<IUserContext>(
    () => new WcfUserContext(settings.SidToRoleMapping), settings.ScopedLifestyle);
UPDATE
About your update, if I understand correctly, you want to allow the registered type to change based on a configuration value. A simple way to do this is as follows:
var settings = new Settings();
container.RegisterSingle<ISettings>(settings);
Type theTypeWeWantToRegister = Type.GetType(settings.GetTheISomeOtherServiceType());
container.Register(typeof(ISomeOtherService), theTypeWeWantToRegister);
But please still consider not registering the Settings file at all.
Also note, though, that it's highly unusual to need so much flexibility that the type name must be placed in the configuration file. Usually the only time you need this is when you have a dynamic plugin model where a plugin assembly can be added to the application without the application itself having to change.
In most cases, however, you have a fixed set of implementations that are already known at compile time. Take for instance a fake IMailSender that is used in your acceptance and staging environments and the real SmtpMailSender that is used in production. Since both implementations are included during compilation, allowing the complete fully qualified type name to be specified in configuration gives more options than you need and means there are more errors to make.
What you need in that case is just a boolean switch, something like:
<add key="IsProduction" value="true" />
And in your code, you can do this:
container.Register(typeof(IMailSender),
    settings.IsProduction ? typeof(SmtpMailSender) : typeof(FakeMailSender));
This gives the configuration compile-time support (when type names change, the configuration keeps working) and it keeps the configuration file simple.
I'm learning MEF and I wanted to create a simple example (application) to see how it works in action. Thus I thought of a simple translator. I created a solution with four projects (DLL files):
Contracts
Web
BingTranslator
GoogleTranslator
Contracts contains the ITranslator interface. As the name implies, it would only contain contracts (interfaces), so both exporters and importers can reference it.
public interface ITranslator
{
    string Translate(string text);
}
BingTranslator and GoogleTranslator are both exporters of this contract. They both implement this contract and provide (export) different translation services (one from Bing, another from Google).
[Export(typeof(ITranslator))]
public class GoogleTranslator : ITranslator
{
    public string Translate(string text)
    {
        // Here, I would connect to Google Translate and do the work.
        return "Translated by Google Translator";
    }
}
and the BingTranslator is:
[Export(typeof(ITranslator))]
public class BingTranslator : ITranslator
{
    public string Translate(string text)
    {
        return "Translated by Bing";
    }
}
Now, in my Web project, I simply want to get the text from the user, translate it with one of those translators (Bing and Google), and return the result back to the user. Thus in my Web application, I'm dependent upon a translator. Therefore, I've created a controller this way:
public class GeneralController : Controller
{
    [Import]
    public ITranslator Translator { get; set; }

    public JsonResult Translate(string text)
    {
        return Json(new
        {
            source = text,
            translation = Translator.Translate(text)
        });
    }
}
and the last piece of the puzzle should be to glue these components (parts) together (to compose the overall song from smaller pieces). So, in Application_Start of the Web project, I have:
var parts = new AggregateCatalog
(
    new DirectoryCatalog(Server.MapPath("/parts")),
    new DirectoryCatalog(Server.MapPath("/bin"))
);

var composer = new CompositionContainer(parts);
composer.ComposeParts();
in which /parts is the folder where I drop the GoogleTranslator.dll and BingTranslator.dll files (the exporters are located in these files), and in the /bin folder I simply have my Web.dll file, which contains the importer. However, my problem is that MEF doesn't populate the Translator property of the GeneralController with the required translator. I read almost every question related to MEF on this site, but I couldn't figure out what's wrong with my example. Can anyone please tell me what I've missed here?
OK, what you need to do is the following (I'm not prescribing this for performance; it's just to see it working):
public class GeneralController : Controller
{
    [Import]
    public ITranslator Translator { get; set; }

    public JsonResult Translate(string text)
    {
        var container = new CompositionContainer(
            new DirectoryCatalog(Path.Combine(HttpRuntime.BinDirectory, "Plugins")));

        CompositionBatch compositionBatch = new CompositionBatch();
        compositionBatch.AddPart(this);
        container.Compose(compositionBatch);

        return Json(new
        {
            source = text,
            translation = Translator.Translate(text)
        });
    }
}
I am no expert in MEF, and to be frank it does not do much for me: I only use it to load DLLs, and from that entry point onwards I dependency-inject with a DI container rather than MEF.
MEF is imperative, as far as I have seen. In your case, you need to proactively compose what needs to be MEFed, i.e. your controller. So your controller factory needs to compose your controller instance (see the sketch below).
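For illustration, a controller factory that composes each controller after MVC creates it could look roughly like this (a rough sketch; the class name and the wiring comment are just an example, not code from the question):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Web.Mvc;
using System.Web.Routing;

public class MefControllerFactory : DefaultControllerFactory
{
    private readonly CompositionContainer _container;

    public MefControllerFactory(CompositionContainer container)
    {
        _container = container;
    }

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        var controller = base.GetControllerInstance(requestContext, controllerType);

        // Satisfies the [Import] properties on the controller from the shared MEF container.
        _container.ComposeParts(controller);

        return controller;
    }
}

// Wired up once in Application_Start:
// ControllerBuilder.Current.SetControllerFactory(new MefControllerFactory(container));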
Since I rarely use MEFed components in my MVC app, I have a filter for those actions requiring MEF (instead of MEFing all my controllers in my controller factory):
public class InitialisePluginsAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        CompositionBatch compositionBatch = new CompositionBatch();
        compositionBatch.AddPart(filterContext.Controller);

        UniversalCompositionContainer.Current.Container.Compose(compositionBatch);

        base.OnActionExecuting(filterContext);
    }
}
Here UniversalCompositionContainer.Current.Container is a singleton container initialised with my directory catalogs.
My personal view on MEF
MEF, while not a DI framework, does a lot of the same work. As such, there is a big overlap with DI, and if you already use a DI framework, the two are bound to collide.
MEF is powerful for loading DLLs at runtime, especially when you have a WPF app where you might be loading/unloading plugins and expect everything else to keep working as it was while features are added and removed.
For a web app, this does not make a lot of sense, since you are really not supposed to drop a DLL into a working web application. Hence, its uses there are very limited.
I am going to write a post on plugins in ASP.NET MVC and will update this post with a link.
MEF will only populate imports on the objects which it constructs itself. In the case of ASP.NET MVC, it is ASP.NET which creates the controller objects. It will not recognize the [Import] attribute, so that's why you see that the dependency is missing.
To make MEF construct the controllers, you have to do the following:
Mark the controller class itself with [Export].
Implement an IDependencyResolver that wraps the MEF container (a sketch follows below). You can implement GetService by asking the MEF container for a matching export. You can generate a MEF contract string from the requested type with AttributedModelServices.GetContractName.
Register that resolver by calling DependencyResolver.SetResolver in Application_Start.
You probably also need to mark most of your exported parts with [PartCreationPolicy(CreationPolicy.NonShared)] to prevent the same instance from being reused in several requests concurrently. Any state kept in your MEF parts would be subject to race conditions otherwise.
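Here is a rough sketch of what such a resolver could look like (an illustration of the steps above rather than tested code; the class name is arbitrary):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;
using System.Web.Mvc;

public class MefDependencyResolver : IDependencyResolver
{
    private readonly CompositionContainer _container;

    public MefDependencyResolver(CompositionContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
        // Translate the requested type into a MEF contract and return a single export, or null.
        var contractName = AttributedModelServices.GetContractName(serviceType);
        return _container.GetExports(serviceType, null, contractName)
                         .Select(e => e.Value)
                         .FirstOrDefault();
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        var contractName = AttributedModelServices.GetContractName(serviceType);
        return _container.GetExports(serviceType, null, contractName).Select(e => e.Value);
    }
}

// In Application_Start:
// DependencyResolver.SetResolver(new MefDependencyResolver(container));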
edit: this blog post has a good example of the whole procedure.
edit2: there may be another problem. The MEF container will hold references to any IDisposable object it creates, so that it can dispose those objects when the container itself is disposed. However, this is not appropriate for objects with a "per request" lifetime! You will effectively have a memory leak for any services which implement IDisposable.
It is probably easier to just use an alternative like Autofac, which has a NuGet package for ASP.NET MVC integration and which has support for per-request lifetimes (a rough sketch of that wiring follows below).
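As an illustration only (the exact extension method names vary between Autofac versions, and pluginAssembly is a placeholder for wherever your translator assemblies come from), the MVC wiring looks something like this:

// Autofac's ASP.NET MVC integration (Autofac.Integration.Mvc); older versions
// use InstancePerHttpRequest() instead of InstancePerRequest().
var builder = new ContainerBuilder();
builder.RegisterControllers(typeof(MvcApplication).Assembly); // MvcApplication = the Global.asax class
builder.RegisterAssemblyTypes(pluginAssembly)                 // placeholder: the assembly holding the ITranslator plugins
       .As<ITranslator>()
       .InstancePerRequest();

var container = builder.Build();
DependencyResolver.SetResolver(new AutofacDependencyResolver(container));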
As @Aliostad mentioned, you do need to have the composition initialisation code running during/after controller creation for it to work - simply having it in the global.asax file will not work.
However, you will also need to use [ImportMany] instead of just [Import], since in your example you could be working with any number of ITranslator implementations from the binaries that you discover. The point is that if you have many ITranslator implementations but import them into a single instance, you will likely get an exception from MEF, since it won't know which implementation you actually want.
So instead you use:
[ImportMany]
public IEnumerable<ITranslator> Translator { get; set; }
Quick example:
http://dotnetbyexample.blogspot.co.uk/2010/04/very-basic-mef-sample-using-importmany.html
This question requires a bit of context before it makes sense so I'll just start with a description of the project.
Project Background
I have an open source project which is a command-prompt style website (U413.com, U413.GoogleCode.com). This project is built in ASP.NET MVC 3 and uses Entity Framework 4. Essentially the site allows you to pass in commands and arguments and then the site returns some data. The concept is fairly simple, but I didn't want to use one giant IF statement to handle the commands. Instead, I decided to do something somewhat unique and build an object that contains all the possible commands as methods on the object.
The site uses reflection to locate methods that correspond to the sent command and execute them. This object is built dynamically based on the current user because some users have access to different commands than other users (e.g. Administrators have more than moderators, and mods have more than users, etc, etc).
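To make that concrete, a heavily simplified sketch of that kind of reflection-based dispatch might look like the following (illustrative only, not the real U413 code; it assumes every command method takes a List<string> of arguments):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public class ReflectionDispatchSketch
{
    public object InvokeCommand(string command, List<string> args)
    {
        // Find a public instance method whose name matches the command (case-insensitive).
        var method = GetType()
            .GetMethods(BindingFlags.Public | BindingFlags.Instance)
            .FirstOrDefault(m => m.Name.Equals(command, StringComparison.OrdinalIgnoreCase)
                                 && m.Name != "InvokeCommand");

        if (method == null)
            return null; // unknown command for this module (and therefore for this user's role)

        return method.Invoke(this, new object[] { args });
    }

    // Example command: sending "help" resolves to this method by name.
    public object Help(List<string> args)
    {
        return "Available commands: ...";
    }
}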
I built a custom CommandModuleFactory that would be created in the MVC controller and would call its BuildCommandModule method to build out a command module object. I am now using Ninject for dependency injection and I want to phase out this CommandModuleFactory in favor of having the ICommandModule injected into the controller without the controller doing any work.
ICommandModule has one method defined, like this:
public interface ICommandModule
{
    object InvokeCommand(string command, List<string> args);
}
InvokeCommand is the method that performs the reflection over itself to find all methods that might match the passed in command.
I then have five different objects that implement ICommandModule (some of them inherit from other modules as well so we don't repeat commands):
AdministratorCommandModule inherits from ModeratorCommandModule which inherits from UserCommandModule which inherits from BaseCommandModule.
I then also have VisitorCommandModule which inherits from BaseCommandModule because visitors will not have access to any of the commands in the other three command modules.
Hopefully you can start to see how this works. I'm pretty proud of the way this is all working so far.
The Question
I want Ninject to build my command module for me and bind it to ICommandModule so that I can just make my MVC controller dependent upon ICommandModule and it will receive the correct version of it. Here is what my Ninject module looks like where the binding takes place.
public class BuildCommandModule : NinjectModule
{
    private bool _isAuthenticated;
    private User _currentUser;

    public BuildCommandModule(
        bool isAuthenticated,
        string username,
        IUserRepository userRepository)
    {
        this._isAuthenticated = isAuthenticated;
        this._currentUser = userRepository.GetUserBy_Username(username);
    }

    public override void Load()
    {
        if (_isAuthenticated)
            if (_currentUser.Administrator)
                // Load administrator command module
                this.Bind<ICommandModule>().To<AdministratorCommandModule>();
            else if (_currentUser.Moderator)
                // Load moderator command module
                this.Bind<ICommandModule>().To<ModeratorCommandModule>();
            else
                // Load user command module
                this.Bind<ICommandModule>().To<UserCommandModule>();
        else
            // Load visitor command module
            this.Bind<ICommandModule>().To<VisitorCommandModule>();
    }
}
A couple of things are happening here. Firstly, the Ninject module depends on a few things. It depends on a boolean indicating whether or not the user is authenticated (to determine if it will be one of the logged-in command modules or the visitor command module). Next it depends on a string username and an IUserRepository. Here is where my mappings are defined in Global.asax.
protected override IKernel CreateKernel()
{
    var kernel = new StandardKernel();

    kernel.Bind<IBoardRepository>().To<BoardRepository>();
    kernel.Bind<IReplyRepository>().To<ReplyRepository>();
    kernel.Bind<ITopicRepository>().To<TopicRepository>();
    kernel.Bind<IUserRepository>().To<UserRepository>();

    kernel.Load(new BuildCommandModule(
        User.Identity.IsAuthenticated,
        User.Identity.Name,
        kernel.Get<IUserRepository>()));

    return kernel;
}
You can see that I map IUserRepository to its concrete type before I load the Ninject module to build my command module (try not to confuse Ninject binding modules with my command modules :S). I then use kernel.Get<IUserRepository>() to resolve my Ninject module's dependency on it.
My problem here is that HttpContext.Current.User is null. I'm not sure how to tell whether or not a user is logged in during the Ninject binding phase. Any ideas?
How might I get reference to the logged in user when I'm doing my Ninject bindings? Or can you think of a better way for me to do conditional binding for my ICommandModule?
You should use a provider instead of putting the logic in your module. First, create something like a SecurityInformation class that can tell you whether the user is authenticated and what their role is. As it stands, I think your implementation only uses the authorization information of the first user to start the app, whereas you want to check the current user's permissions every time an instance of this module is requested.
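For example, a minimal SecurityInformation sketch could look like the following (my illustration; the role names and the use of IPrincipal.IsInRole are assumptions, and in your case it could just as well wrap IUserRepository). The important part is that it reads the current request's user at resolution time, which side-steps the "HttpContext.Current.User is null during binding" problem:

using System.Web;

public class SecurityInformation
{
    public bool IsAuthenticated
    {
        get
        {
            var user = HttpContext.Current.User;
            return user != null && user.Identity.IsAuthenticated;
        }
    }

    // "Administrator" / "Moderator" role names are assumptions for this sketch.
    public bool IsCurrentUserAdministrator
    {
        get { return IsAuthenticated && HttpContext.Current.User.IsInRole("Administrator"); }
    }

    public bool IsCurrentUserModerator
    {
        get { return IsAuthenticated && HttpContext.Current.User.IsInRole("Moderator"); }
    }
}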
public class CommandModuleProvider : IProvider
{
    public Type Type { get { return typeof(ICommandModule); } }

    public object Create(IContext context)
    {
        var securityInfo = context.Kernel.Get<SecurityInformation>();

        if (securityInfo.IsAuthenticated)
            if (securityInfo.IsCurrentUserAdministrator)
                // Load administrator command module
                return context.Kernel.Get<AdministratorCommandModule>();
            else if (securityInfo.IsCurrentUserModerator)
                // Load moderator command module
                return context.Kernel.Get<ModeratorCommandModule>();
            else
                // Load user command module
                return context.Kernel.Get<UserCommandModule>();
        else
            // Load visitor command module
            return context.Kernel.Get<VisitorCommandModule>();
    }
}
The binding would then be specified like
Kernel.Bind<ICommandModule>().ToProvider<CommandModuleProvider>();
There should be a (very) limited number of Kernels running in your application: preferably just one in most cases. Instead of trying to create a new kernel for each user, make your binding produce a different implementation for each user. This can be done using IProviders as Vadim points out. Following is a variation on the same idea:
public override void Load()
{
    Bind<ICommandModule>().ToMethod(
        c =>
        {
            var sessionManager = c.Kernel.Get<ISessionManager>();
            if (!sessionManager.IsAuthenticated)
                return c.Kernel.Get<VisitorCommandModule>();

            var currentUser = sessionManager.CurrentUser;
            if (currentUser.IsAdministrator)
                return c.Kernel.Get<AdministratorCommandModule>();
            if (currentUser.IsModerator)
                return c.Kernel.Get<ModeratorCommandModule>();

            return c.Kernel.Get<UserCommandModule>();
        }).InRequestScope();
}
In this implementation, I would expect ISessionManager to be implemented with a class that checks the current HttpContext to determine who is logged in, and provide basic information about this person.
InRequestScope() now resides in the Ninject.Web.Common library, and will help to avoid re-performing all this logic more than once per request.
I have a scenario where a web service method I'm consuming in C# returns a business object. When calling the web service method with the following code, I get the exception "Unable to cast object of type ContactInfo to type ContactInfo" in the Reference.cs class of the web reference.
Code:
ContactInfo contactInfo = new ContactInfo();
Contact contact = new Contact();
contactInfo = contact.Load(this.ContactID.Value);
Any help would be much appreciated.
This is because one of the ContactInfo objects is a web service proxy, and is in a different namespace.
It's a known problem with asmx-style web services. In the past I've implemented automatic shallow-copy to work around it (here's how, although if I were doing it again I'd probably look at AutoMapper instead).
For example, if you have an assembly with the following class:
MyProject.ContactInfo
and you return an instance of it from a web method:
public class DoSomethingService : System.Web.Services.WebService
{
    [WebMethod]
    public MyProject.ContactInfo GetContactInfo(int id)
    {
        // Code here...
    }
}
Then when you add the web reference to your client project, you actually get this:
MyClientProject.DoSomethingService.ContactInfo
This means that if, in your client application, you call the web service to get a ContactInfo, you have this situation:
namespace MyClientProject
{
    public class MyClientClass
    {
        public void AskWebServiceForContactInfo()
        {
            using (var service = new DoSomethingService())
            {
                MyClientProject.DoSomethingService.ContactInfo contactInfo = service.GetContactInfo(1);

                // ERROR: You can't cast this:
                MyProject.ContactInfo localContactInfo = contactInfo;
            }
        }
    }
}
It's on that last line that I use my ShallowCopy class:
namespace MyClientProject
{
    public class MyClientClass
    {
        public void AskWebServiceForContactInfo()
        {
            using (var service = new DoSomethingService())
            {
                MyClientProject.DoSomethingService.ContactInfo contactInfo = service.GetContactInfo(1);

                // We actually get a new object here, of the correct namespace
                MyProject.ContactInfo localContactInfo =
                    ShallowCopy.Copy<MyClientProject.DoSomethingService.ContactInfo, MyProject.ContactInfo>(contactInfo);
            }
        }
    }
}
NOTE
This only works because the proxy class and the "real" class have exactly the same properties (one is generated from the other by Visual Studio).
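The ShallowCopy helper is essentially a reflection-based property-by-property copy; a minimal sketch of that idea (illustrative only, the exact implementation may differ) looks like this:

using System.Reflection;

public static class ShallowCopy
{
    public static TTarget Copy<TSource, TTarget>(TSource source) where TTarget : new()
    {
        var target = new TTarget();

        foreach (PropertyInfo sourceProperty in typeof(TSource).GetProperties())
        {
            PropertyInfo targetProperty = typeof(TTarget).GetProperty(sourceProperty.Name);
            if (targetProperty != null && targetProperty.CanWrite && sourceProperty.CanRead)
            {
                // Copies only the value or reference; nested objects are shared, hence "shallow".
                targetProperty.SetValue(target, sourceProperty.GetValue(source, null), null);
            }
        }

        return target;
    }
}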
As several of the other answers have suggested, it is because .NET sees them as two different classes. I personally would recommend using something like AutoMapper. I've been using it, and it seems pretty awesome. You can copy your objects in 1-2 lines of code.
Mapper.CreateMap<SourceClass, DestinationClass>();
destinationInstance = Mapper.Map<SourceClass, DestinationClass>(sourceInstance);
Actually, this is not a bug. It's a problem caused by version changes within your own project: the build you finally run is not compiled against the originally imported references.
For example, I was writing a chat server and client. I used a Packet structure to transmit data in the client project, then imported the same reference into the server project.
When casting with Packet packet = (Packet)binaryFormatter.Deserialize(stream); I got the same error, because the reference actually running in the server project was no longer the reference in the client project - I had rebuilt the client project many times since.
When casting like that, both sides need to agree on the same version of the type being cast.
So what I did was build a separate project that produces a DLL for the Packet class and import that DLL into both projects.
If I make any change to the Packet class, I have to import the reference into both the client and the server again.
Then the casting won't give the above exception!
How are you referencing the class in your web service project as well as in the consumer project? If you have simply used a file link, this could well explain the cause of the error. The way serialisation works in .NET (for web services or otherwise, I believe) is by using reflection to load/dump the data of an object. If the files are simply linked, then they are actually compiled into different types in different assemblies, which would explain why you have the same name but can't cast between them. I recommend creating a 'Core' library that both the web service and the consumer project reference, and which contains the ContactInfo class that you use everywhere.
This isn't a problem - it's a feature.
They are two independent classes. Compare the two, and notice that the proxy class has none of the constructors, methods, indexers, or other behavior from the original class. This is exactly the same thing that would happen if you consumed the ASMX service with a Java program.
It seems like you have two different classes, one on each end. Your application has a ContactInfo class and your web service also has a ContactInfo class, but they are two completely different classes. One way is to use the web service's class on your side: if you are using ContactInfo inside your web service, then it will be serialized and will be available on the client side for use.
You can also modify the References.cs file generated by Visual Studio when the web reference is added. If you remove the proxy-generated classes and add a reference (using directives) to your personal classes, you'll be able to use them straight away, without shallow copy / reflection or heavy mapping (but you'll have to re-apply your modification if you regenerate the proxy layer).
I also tried to serialize the proxy objects and deserialize them back into my DTO classes, but it was quite heavy resource-wise, so I ended up modifying the generated References.cs layer.
Hope it will help other people coming here :)
Kindly.