WCF Xml/Json serialization of domain objects, what exactly is getting serialized? - c#

I have a domain class User that has at least 20 properties, and it is from another library so it doesn't have any contract decorations. When I return it over a WCF service as XML or JSON, it only brings back about 3 of the properties. I thought maybe it was leaving out collections and whatnot, but even simple fields like Name and Email were not being returned at all.
So I guess my question is: can someone explain what exactly is being serialized and returned over the service? None of the properties are decorated with anything like [DataMember], yet some are serialized and returned while others are not. As I understand it, the serializer should automatically pick up all public properties. And on a side note, if someone could point me in the right direction on how to add these decorations to an existing library to assist in the serialization, it would be appreciated.
UPDATE:
I was looking at the WSDL and found the reference to an XSD file (presumably generated by the serializer). I noticed that it only has those 3 fields listed. Not sure what this is or if I can mess with it.

It turns out that the reason these properties weren't serializing is that they weren't fully public: they were read-only from the outside. I actually had the properties set to:
public string MyProperty { get; internal set; }
I did this because I use object initializers in my internal system classes (controller-type stuff) and do not wish to allow the consumer to set these properties. I read that you can set the setters to protected and serialization will work; however, that doesn't work for my implementation.
These are POCO classes, so my solution (albeit not exactly an answer to the problem) was to create DTO classes. Since all of the properties in the DTOs were fully public, all I do is populate those with data from the POCO and return the dto. Everything gets serialized properly.
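For anyone who wants to see what that looks like, here is a minimal sketch of the DTO approach; the UserDto type and mapper are hypothetical, using only the fields mentioned in the question:

// DTO with fully public properties, so the default WCF serialization picks up everything
public class UserDto
{
    public string Name { get; set; }
    public string Email { get; set; }
    // ...repeat for the remaining properties you want on the wire
}

public static class UserMapper
{
    // Copy data from the domain POCO (with internal setters) into the DTO
    public static UserDto ToDto(User user)
    {
        return new UserDto { Name = user.Name, Email = user.Email };
    }
}

The service then returns UserDto instead of User, and every property serializes as expected.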

Take a look at your domain class, and see if it is inheriting from another class. If it is, the User class probably only has the three properties you are seeing.
What I have found to work well is to create a special service model (or view model) as the public data interface, not a direct interface to the domain model. As a benefit, you have much greater control of the data that can be exposed - you limit the risk of unintentional data leakage, as well as optimizing the data sent over the wire.
Best of luck!

Related

Can domain-driven-design read model have basic logic?

I have a question regarding read models. I use read models when I get data from the database, and equivalent entity/aggregate models for use in repositories. My question is: can a read model class have a constructor that checks its properties? For instance, could I have a read model class like the one below? On the other hand, I already have such checks in the equivalent domain model EmployeeModel, so I am not convinced, as it would be a bit of duplication. An additional question: if my EmployeeModel (domain) has a non-nullable EmploymentDate, can I mark it nullable in the read model? In other words, can the read model differ from the equivalent domain model?
class EmployeeReadModel
{
    public DateTime? EmploymentDate { get; set; }
}
can i add constructor and check for such read model?
class EmployeeReadModel
{
    public DateTime? EmploymentDate { get; set; }

    public EmployeeReadModel(DateTime? employeeDate)
    {
        EmploymentDate = employeeDate ?? throw new Exception();
    }
}
A read model is something that I see as going over the wire. As such it should be easily serializable, and methods usually present a problem. Also, if there isn't a default constructor, you run into issues there too.
Since a read model represents existing data there isn't too much sense in validating it. I would rather leave the validation to the domain model.
Given that a read model is more of a data transfer object chances are that once it leaves your system the receiving system is going to use it plainly as data. For instance, even a web front-end would parse a json representation of the data to consume it.
If you really would like methods on your read model classes then perhaps consider extension methods as these don't interfere with any serialization.
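If you do go the extension-method route, a minimal sketch could look like this (the method name is invented for illustration):

static class EmployeeReadModelExtensions
{
    // Lives outside the read model class, so serializers ignore it entirely
    public static bool IsCurrentlyEmployed(this EmployeeReadModel model)
    {
        return model.EmploymentDate.HasValue;
    }
}

The read model stays a plain data bag with a default constructor, while callers still get convenient behavior.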
Can domain-driven-design read model have basic logic?
You won't normally have domain logic, in the sense of "state machines" in the read model.
However, you do have constraints that you may need to satisfy, that are inconsistent with the data that you have available.
For example, suppose I'm sent a query with ID:12345, and I'm supposed to respond with a message using the Foo schema, which includes a Bar member that is restricted to the integer values 0-9. We look in the book of record using ID:12345, and discover that the domain model has decided "this one goes to eleven".
So the data that is available doesn't match the required pre-conditions. Now what?
One thing to notice in this sort of setting is that you've got conflicting requirements; if you manage to get all the way to production without discovering that conflict, then you've failed at a number of quality inspection points in your pipeline.
In other words, you're supposed to not have this problem by having discovered it and fixed it a long time ago.
One of the nice things about crash on conflict is that it pulls the Andon cord hard -- everything screeches to a halt. Bonus - that's really easy to detect. The downside, of course, is that you lose revenue until you get a fix deployed.
Another downside is that a lot of things can get caught in the blast radius of the crash. In particular, if your monitoring and repair tools can't run because you are crashing on conflict, it's going to be a real pain to fix.
In other words, we want to be very precise: it's not the responsibility of the read model to detect whether the write model or the human operators are behaving correctly; it's only the job of the read model to determine whether it can satisfy its own requirements with the data it has been given.

Exposing Data as C# Properties - Good or Bad?

I am kinda not getting my head around this and was wondering if someone could please help me understand this.
So here is the problem: I have a class in which there are no required parameters. If the user does not set the fields, I can take the default values and carry on. Previously, I designed the same class using Joshua Bloch's Builder pattern from Effective Java (an immutable object). I didn't have any good reason for making the class immutable except that I didn't want telescoping constructors and I didn't want to expose the data of the class.
But now, a fellow programmer friend is trying to convince me that it's okay to expose the data from the class using C# properties. I am not sure about this, and I still feel that I should not be allowing users to muck with the data.
Maybe I am completely wrong in my understanding. Could someone please clear up my doubt about whether it's good or bad to expose the data from the class?
If it is good, then in what cases is it good? Or if someone can point me to an article/book that clarifies this, I would really appreciate it.
Thanks!
Expose the data in the class if it is needed or of interest outside the class, and do not do so if it is not. Expose it as read-only if it only needs to be read from outside, and as a full read/write property if it should be changeable. Otherwise, keep it in a private field.
Immutable classes are easier to reason about, especially in a multithreaded application, but they usually pay for it in performance (because when you need to change the value of a field, you have to build a whole new instance with the new value).
So, you could be ok or (depending on what you're coding) even better off with properties but as usual there's no silver bullet.
Settable properties are also the only way to code objects for some specific frameworks or libraries (e.g. ORMs like NHibernate), because you can't control how the library/framework initializes the object.
About constructors: C# 4 has optional parameters, which can help you avoid a long chain of constructors and also communicate much more clearly that the parameters are optional.
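A quick sketch of what that looks like; the class and parameter names here are made up for illustration:

public class ReportOptions
{
    public string Title { get; private set; }
    public int PageSize { get; private set; }

    // One constructor with optional parameters replaces a telescoping chain
    public ReportOptions(string title = "Untitled", int pageSize = 50)
    {
        Title = title;
        PageSize = pageSize;
    }
}

// Callers set only what they care about:
// var options = new ReportOptions(pageSize: 100);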
However I can't think of many cases where you would end up with classes with a long list of optional parameters. If you find that you're coding classes like that too often (especially with the builder pattern, which is very elegant looking on the consumers' side of the class but complicates the code for the class itself) you may be using the wrong design. Are you sure you are not asking your classes to have too many responsibilities?
It basically depends on the purpose of your class in the application context (could you give us more details?).
Anyway, note that you can make a property safe from external changes by declaring its setter as private:
http://msdn.microsoft.com/en-us/library/bb384054.aspx
public string UserName { get; private set; }
It's "good" when the consumer of the class needs the data. You have two possibilities for offering properties.
If you only want to offer a property for informational purposes, choose a read-only property like this:
public string MyInformation { get; private set; }
If you need to allow the consumer to change the property, make the setter public, like this:
public string MyChangeableInformation { get; set; }
But if the consumer has no need to get the information, then hide it in your class!
But now, a fellow programmer friend is trying to convince me that it's okay to expose the data from the class using C# properties. I am not sure about this, and I still feel that I should not be allowing users to muck with the data.
As a rule of thumb, methods should represent actions whereas properties represent data. What your friend was probably trying to tell you is that you can expose the data of your class to the outside world and still maintain full control over how other classes access it. In your case, as others have mentioned, you can use properties with private setters, so that callers cannot modify the data.

Importing data from third party datasource (open architecture design )

How would you design an application (classes and interfaces in a class library) in .NET when we have a fixed database design on our side and we need to support imports of data from third-party data sources, which will most likely be in XML?
For instance, let us say we have a Products table in our DB which has columns
Id
Title
Description
TaxLevel
Price
and on the other side we have for instance Products:
ProductId
ProdTitle
Text
BasicPrice
Quantity.
Currently I do it like this:
Generate XSDs and classes from the third-party XML, then deserialize its contents into strongly typed objects (the result of this process is classes like ThirdPartyProduct, ThirdPartyClassification, etc.).
Then I have methods like this:
InsertProduct(ThirdPartyProduct newproduct)
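For context, that deserialization step looks roughly like the sketch below (assuming classes generated with xsd.exe; the file name is illustrative):

using System.IO;
using System.Xml.Serialization;

// Deserialize a third-party feed into the generated proxy classes
var serializer = new XmlSerializer(typeof(ThirdPartyProduct));
using (var stream = File.OpenRead("thirdparty-products.xml"))
{
    var thirdPartyProduct = (ThirdPartyProduct)serializer.Deserialize(stream);
}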
I do not use interfaces at the moment, but I would like to. What I would like is to implement something like
public class Contoso_ProductSynchronization : ProductSynchronization
{
    public void InsertProduct(ContosoProduct p)
    {
        Product product = new Product(); // this is our Entity class
        // do the assignments from p to product here
        using (SyncEntities db = new SyncEntities())
        {
            // ....
            db.AddToProducts(product);
        }
    }

    // The problem is that Product and ContosoProduct have no architectural
    // connection right now, so I cannot do this:
    public void InsertProduct(ContosoProduct p)
    {
        Product product = (Product)p;
        using (SyncEntities db = new SyncEntities())
        {
            // ....
            db.AddToProducts(product);
        }
    }
}
where ProductSynchronization will be an interface or abstract class. There will most likely be many implementations of ProductSynchronization. I cannot hardcode the types - classes like ContosoProduct and NorthwindProduct might be created from the third-party XML files (so preferably I would continue to use deserialization).
Hopefully someone will understand what I'm trying to explain here. Just imagine you are the seller: you have numerous providers, and each one uses its own proprietary XML format. I don't mind the development that will of course be needed every time a new format appears, because it will only require 10-20 methods to be implemented; I just want the architecture to be open and to support that.
In your replies, please focus on design and not so much on data access technologies, because most of those are pretty straightforward to use (if you need to know, EF will be used for interacting with our database).
[EDIT: Design note]
OK, from a design perspective I would run XSLT on the incoming XML to transform it into a unified format. That also makes it very easy to validate the resulting XML against a schema.
Using XSLT, I would stay away from any interface or abstract class and just have one class implementation in my code, the internal class. It keeps the code base clean, and the XSLT files themselves should be pretty short if the data is as simple as you state.
Documenting the transformations can easily be done wherever you have your project documentation.
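A minimal sketch of applying such a transform and validating the output from C#; the stylesheet, input, and schema file names are assumptions for illustration:

using System.Xml;
using System.Xml.Schema;
using System.Xml.Xsl;

// Transform the third-party XML into the unified format
var transform = new XslCompiledTransform();
transform.Load("contoso-to-internal.xslt");
transform.Transform("contoso-products.xml", "unified-products.xml");

// Validate the result against the unified schema
var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
settings.Schemas.Add(null, "unified-products.xsd");
using (var reader = XmlReader.Create("unified-products.xml", settings))
{
    while (reader.Read()) { } // throws XmlSchemaValidationException on invalid content
}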
If you decide you absolutely want one class per XML format (or if you perhaps get a .NET DLL instead of XML from one customer), then I would make the proxy class inherit from an interface or abstract class (based on your internal class) and implement the mappings per property as needed in the proxy classes. This way you can cast any class to your base/internal class.
But it seems to me that doing the conversion/mapping in code will make the design a bit messier.
[Original Answer]
If I understand you correctly, you want to map a ThirdPartyProduct class over to your own internal class.
Initially I am thinking class mapping. Use something like AutoMapper and configure the mappings as you create your XML-deserializing proxies. If you make your deserialization end up with the same property names as your internal class, there's less configuration to do for the mapper. Convention over configuration.
I'd like to hear anyone's thoughts on going this route.
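For illustration, a minimal AutoMapper setup might look like the sketch below; it assumes the property names from the question, with one explicit override where the names differ:

using AutoMapper;

// One-time configuration: map the deserialized proxy onto the internal entity
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<ContosoProduct, Product>()
       .ForMember(dest => dest.Title, opt => opt.MapFrom(src => src.ProdTitle)));
var mapper = config.CreateMapper();

// Per-import usage
Product product = mapper.Map<Product>(contosoProduct);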
Another approach would be to add a .ToInternalProduct( ThirdPartyClass ) in a Converter class. And keep adding more as you add more external classes.
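That converter approach could look something like this sketch, using the column names from the question (the field mappings shown are assumptions):

public static class ProductConverter
{
    public static Product ToInternalProduct(ContosoProduct p)
    {
        return new Product
        {
            Title = p.ProdTitle,
            Description = p.Text,
            Price = p.BasicPrice
        };
    }

    // Add one overload per external format (NorthwindProduct, etc.) as providers appear
}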
The third approach is for the XSLT folks. If you love XSLT, you could transform the XML into something that can be deserialized directly into your internal product class.
Which of these three I'd choose would depend on the skills of the programmer and on who will maintain the code as new external formats are added. The XSLT approach requires no code changes or recompiling as new formats arrive; that might be an advantage.

Is my objective possible using WCF (and is it the right way to do things?)

I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server, which proceeds to apply the modifications to the server.
However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parameterised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries.
My understanding of WCF serialisation is that it requires use of either the XmlSerializer or the DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out; it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the deserialization constructor. That is fine by me.
I spoke to a friend of mine about this, and he advocates going for a Data Transfer Object-only route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client.
Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets.
So my questions are:
Has anyone faced a similar problem before, and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends back only the bits that have changed (thus reducing concurrency-related risks)?
If sending the object-graph down is the right way, is WCF the right tool?
And if WCF is right, what's the best way to get WCF to serialise my object graph?
Object graphs can be used with DataContract serialization.
Note: make sure you're preserving object references, so that you don't end up with multiple copies of the same object in the graph when they should all be the same reference; the default behavior does not preserve identity like this.
This can be done by specifying the preserveObjectReferences parameter when constructing a DataContractSerializer, or by specifying true for the IsReference property on DataContractAttribute (this last attribute requires .NET 3.5 SP1).
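As a small sketch, the attribute-based form looks like this (the type is invented for illustration):

using System.Runtime.Serialization;

// IsReference = true makes the serializer emit each object once and refer
// back to it, instead of inlining duplicate copies; cycles also become legal
[DataContract(IsReference = true)]
public class Node
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public Node Parent { get; set; }
}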
However, when sending object graphs over WCF, you have the risk of running afoul of WCF quotas (and there are many) if you don't take care to ensure the graph is kept to a reasonable size.
For the net.tcp transport, for example, important ones to set are maxReceivedMessageSize, maxStringContentLength, and maxArrayLength. Not to mention a hidden quota on the number of distinct objects allowed in a graph (maxItemsInObjectGraph, 65536 by default), which can only be overridden with difficulty.
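For reference, those quotas can be raised in code along these lines (the values shown are arbitrary examples):

using System.ServiceModel;

// Raise transport and reader quotas on the net.tcp binding
var binding = new NetTcpBinding();
binding.MaxReceivedMessageSize = 10 * 1024 * 1024;         // 10 MB
binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;
binding.ReaderQuotas.MaxArrayLength = 1024 * 1024;

// maxItemsInObjectGraph is a serializer behavior set separately, e.g.
// [ServiceBehavior(MaxItemsInObjectGraph = 131072)] on the service class.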
You can also use classes that only expose read accessors with the DataContractSerializer, and have no parameterless constructors:
using System;
using System.IO;
using System.Runtime.Serialization;

class DataContractTest
{
    static void Main(string[] args)
    {
        var serializer = new DataContractSerializer(typeof(NoParameterLessConstructor));
        var obj1 = new NoParameterLessConstructor("Name", 1);

        var ms = new MemoryStream();
        serializer.WriteObject(ms, obj1);
        ms.Seek(0, SeekOrigin.Begin);

        var obj2 = (NoParameterLessConstructor)serializer.ReadObject(ms);
        Console.WriteLine("obj2.Name: {0}", obj2.Name);
        Console.WriteLine("obj2.Version: {0}", obj2.Version);
    }

    [DataContract]
    class NoParameterLessConstructor
    {
        public NoParameterLessConstructor(string name, int version)
        {
            Name = name;
            Version = version;
        }

        [DataMember]
        public string Name { get; private set; }

        [DataMember]
        public int Version { get; private set; }
    }
}
This works because DataContractSerializer can instantiate types without calling the constructor.
You got yourself mixed up with the serializers:
the XmlSerializer requires a parameter-less constructor, since when deserializing, the .NET runtime will instantiate a new object of that type and then set its properties
the DataContractSerializer has no such requirement
Check out the blog post by Dan Rigsby which explains serializers in all their glory and compares the two.
Now for your design - my main question is: does it make sense to have a single function that returns all the settings, which the client manipulates, and then another function that receives back all the information?
Couldn't you break those things up into smaller chunks, smaller method calls? E.g. have separate service methods to set each individual item of your configuration? That way, you could
reduce the amount of data being sent across the wire - the object graph to be serialized and deserialized would be much simpler
make your configuration service more granular - e.g. if someone needs to set a single little property, that client doesn't need to read the whole big server config, set one single property, and send back the huge big chunk - just call the appropriate method to set that one property
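A more granular service contract might look roughly like this sketch; the operation and type names are invented for illustration:

using System.ServiceModel;

[ServiceContract]
public interface IServerConfigService
{
    // Read just the piece of configuration the client cares about
    [OperationContract]
    DnsSettings GetDnsSettings();

    // Write back a single setting without round-tripping the whole graph
    [OperationContract]
    void SetDnsHostName(string hostName);
}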

Mixing custom and basic serialization?

I've got a class with well over 100 properties (it's a database mapping class), and one of the "properties" has to live in a pair of methods. In other words, this data is not exposed via a property but via methods:
"ABCType GetABC(), SetABC(ABCType value)"
It's all very un-C#-like. I shudder when I see it.
The class needs to be serializable so it can be sent over web services, and the data exposed by the Get/Set methods needs to be serialized too. (It's in a method because of a strange thing the grid I'm using does with reflection; it can't handle objects that contain properties of the same type as the containing object. The problem property stores the original state of the database object in case a revert is required. Inefficient implementation, yes - but I'm unable to re-engineer it.)
My question is this: since only this 1 field needs custom serialization code, I'd like to use custom serialization only for calling GetABC and SetABC, reverting to basic XML serialization for the rest of the class. It'll minimize potential for bugs in my serialization code. Is there a way?
The first thing I'd try is adding a property for serialization, but hiding it from the UI:
[Browsable(false)] // hide in UI
public SomeType ABC {
    get { return GetABC(); }
    set { SetABC(value); }
}
You can't really mix and match serialization, unfortunately; once you implement IXmlSerializable, you own everything. If you were using WCF, the DataContractSerializer supports non-public properties for serialization, so you could use:
[DataMember]
private SomeType ABC {
    get { return GetABC(); }
    set { SetABC(value); }
}
but this doesn't apply for "asmx" web-services via XmlSerializer.
Does the [Browsable] trick work at all? Assuming the custom grid uses TypeDescriptor, another option might be to hide it via ICustomTypeDescriptor, but that is a lot of work just to hide a property...
