How can I parametrize a C# unit test so that, instead of a series of similar assert statements, the test iterates through a list of parameters (including expected values) and compares the results with the expected values?
Use case:
this particular unit test needs to check an XML document and go through a list of XML element names, verifying that the document contains these elements and their values match what is expected
Assert part of the test method consists of series of assertions like this:
var width = output.Element(ns + "width");
Assert.IsNotNull(width);
Assert.AreEqual("600", width.Value);
I would like to avoid redundant code and iterate through the same code with different values instead. How do I define a data structure to iterate through in the assertion checking?
The data structure needed is a list of tuples (containing elements of types (XName, string) in this case). How to express that in C#? Are there some standard unit-testing tools that can help here?
More information:
using the Visual Studio unit-testing framework (Microsoft.VisualStudio.TestTools.UnitTesting) and .Net 3.5
I do not need to run the use case itself with various parameter values, just the assert part of it (the code quoted above)
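For reference, here is a minimal sketch of the kind of loop I have in mind, assuming output is the XElement under test, ns is its XNamespace, and with KeyValuePair<XName, string> standing in for a tuple (since .NET 3.5 predates Tuple<T1,T2>):

// .NET 3.5 has no Tuple<T1,T2>, so KeyValuePair<XName, string> stands in.
var expectations = new List<KeyValuePair<XName, string>>
{
    new KeyValuePair<XName, string>(ns + "width", "600"),
    new KeyValuePair<XName, string>(ns + "height", "400"), // hypothetical second entry
};
foreach (var pair in expectations)
{
    var element = output.Element(pair.Key);
    Assert.IsNotNull(element, "Missing element: " + pair.Key);
    Assert.AreEqual(pair.Value, element.Value);
}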
NUnit has something called TestCase, which you access via an attribute. This sounds like something you are asking for:
http://nunit.org/?p=testCase&r=2.5
UPDATE:
This answer was provided prior to the question update specifying the framework being used
UPDATE:
This question also looks relevant: "MS Test Equivalent (or lack of)" - Does MSTest have an equivalent to NUnit's TestCase?
It seems like you just need to extract a method to meet your literal requested need:
public void AssertElementExistsWithValue(XElement parent, XNamespace ns, string childName, string expectedValue)
{
    var child = parent.Element(ns + childName);
    Assert.IsNotNull(child);
    Assert.AreEqual(expectedValue, child.Value);
}
I usually use the Linq Xml classes, so I apologize if I have a compile error. You'll get the gist, I'm sure.
When I test xml formatting, I usually write two tests. The first is a round trip test: write the entity to xml, read it back, assert that they are the same. This is a nice value oriented test that doesn't break if you change the name of an element.
The second test I write is one that pins the format of XML exactly. I get the xml from a correctly formatted object, and use it as a constant in a test, and assert the correct object is created. This test fails for implementation detail reasons, but that's ok. It is there to force me to notice if I break backwards compatibility with data formats.
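For illustration, the first kind of test might look like this minimal MSTest sketch (MyEntity is a hypothetical type):

[TestMethod]
public void RoundTrip_PreservesValues()
{
    var original = new MyEntity { Width = 600 };   // hypothetical entity
    var serializer = new XmlSerializer(typeof(MyEntity));
    using (var stream = new MemoryStream())
    {
        serializer.Serialize(stream, original);
        stream.Position = 0;
        var copy = (MyEntity)serializer.Deserialize(stream);
        Assert.AreEqual(original.Width, copy.Width);   // value-oriented, name-agnostic
    }
}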
Since you say that you are using MSTest, here are a couple of resources from MSDN and an independent blog that illustrate data-driven testing using Microsoft's unit testing framework.
In summary, you need to specify a DataSource attribute on your TestMethod and point it to the source of data. It can be a CSV file or a SQL Server CE database.
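A minimal sketch of what that looks like with a CSV source (the file and column names here are assumptions):

[DeploymentItem("ExpectedElements.csv")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\ExpectedElements.csv", "ExpectedElements#csv",
    DataAccessMethod.Sequential)]
[TestMethod]
public void Elements_HaveExpectedValues()
{
    string name = TestContext.DataRow["ElementName"].ToString();
    string expected = TestContext.DataRow["ExpectedValue"].ToString();
    var element = BuildOutput().Element(ns + name);   // BuildOutput() is hypothetical
    Assert.IsNotNull(element);
    Assert.AreEqual(expected, element.Value);
}
public TestContext TestContext { get; set; }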
Situational Background: XSD with SCH
XML Schema (XSD)
I have an XML schema definition ("the schema") that includes several other XSDs, all in the same namespace. Some of those import other XSDs from foreign namespaces. All in all, the schema declares several global elements that can be instantiated as XML documents. Let's call them Global_1, Global_2 and Global_3.
Business Rules (SCH)
The schema is augmented by a Schematron file that defines the "business rules". It defines a number of abstract rules, and each abstract rule contains a number of assertions using the data model defined via XSD. For instance:
<sch:pattern>
<sch:rule id="rule_A" abstract="true">
<sch:assert test="if (abc:a/abc:b = '123') then abc:x/abc:y = ('aaa', 'bbb', 'ccc') else true()" id="A-01">Error message</sch:assert>
<sch:assert test="not(abc:c = 'abcd' and abc:d = 'zz')" id="A-02">Some other error message</sch:assert>
</sch:rule>
<!-- (...) -->
</sch:pattern>
Each abstract rule is extended by one or more non-abstract (concrete) rule that defines a specific context in which the abstract rule's assertions are to be validated. For example:
<sch:pattern>
<!-- (...) -->
<sch:rule context="abc:Global_1/abc:x/abc:y">
<sch:extends rule="rule_A"/>
</sch:rule>
<sch:rule context="abc:Global_2/abc:j//abc:k/abc:l">
<sch:extends rule="rule_A"/>
</sch:rule>
<!-- (...) -->
</sch:pattern>
In other words, all the assertions defined within the abstract rule_A are being applied to their specific contexts.
Both "the schema" and "the business rules" are subject to change - my program gets them at run-time and I don't know their content at design-time. The only thing I can safely assume is that there are no endless recursive structures in the schema: There is always one definite leaf node for every type and no type contains itself. Put differently, there are no "infinite loops" possible in the instances.
The Problem I want To Solve
Basically, I want to evaluate programmatically if each of the defined rules is correct. Since correctness can be quite a problematic topic, here by correctness I simply mean: Each XPath used in a rule (i.e. its context and within the XQueries of its inherited assertions) is "possible", meaning it can exist according to the data model defined in the schema. If, for instance, a namespace prefix is forgotten (abc:a/b instead of abc:a/abc:b), this XPath will never return anything other than an empty node set. The same is true if one step in the XPath is accidentally omitted, or spelled wrong, etc. This is obviously not a very strong claim for "correctness" of such a rule, but it'll do for a first step.
My Approach Towards A Solution For This
At least to me, it doesn't seem like a trivial problem to evaluate an XPath (not to speak of the entire XQuery!) designed for the instance of a schema against the actual schema, given how it may contain axis steps like //, ancestor::, following-sibling::, etc. So I decided to construct something I would call a "maximum instance": by recursively iterating through all global elements and their children (and the structure of their respective complex types etc.), I build an XML instance at run-time that contains every possible element and attribute where it would be in a normal instance, but all at once - every optional element/attribute, every element within a choice block, and so on. Said maximum instance would look something like this:
<maximumInstance>
<Global_1>
<abc:a>
<abc:b additionalAttribute="some_fixed_value">
<abc:j/>
<abc:k/>
<abc:l/>
</abc:b>
</abc:a>
</Global_1>
<Global_2>
<abc:x>
<abc:y>
<abc:a/>
<abc:z>
<abc:l/>
</abc:z>
</abc:y>
</abc:x>
</Global_2>
<Global_3>
<!-- ... -->
</Global_3>
<!-- ... -->
</maximumInstance>
All it takes now is to iterate over all abstract rules: for every assertion in each abstract rule, it must be checked that, for every context by which the respective abstract rule is extended, every XPath within the assertion results in a non-empty node set when evaluated against the maximum instance.
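The non-empty check itself is the easy part; here is a minimal sketch (note that System.Xml.XPath only speaks XPath 1.0, so expressions using XPath 2.0 features would need a different evaluator):

// Sketch: true if the given path expression selects at least one node
// in the maximum instance; namespaces maps the abc prefix.
using System.Collections.Generic;
using System.Linq;
using System.Xml;
using System.Xml.Linq;
using System.Xml.XPath;

static bool IsPossiblePath(XDocument maximumInstance, string xpath, IXmlNamespaceResolver namespaces)
{
    object result = maximumInstance.XPathEvaluate(xpath, namespaces);
    var nodes = result as IEnumerable<object>;
    return nodes != null && nodes.Any();
}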
Where I'm stuck
I have written a C# (.NET Framework 4.8) program that parses "the schema" into said "maximum instance" (which is an XDocument at run-time). It also parses the business rules into a structure that makes it easy to get each abstract rule, its assertions, and the contexts these assertions are to be validated against.
But currently, I only have each complete XQuery that makes up an assertion, just as it appears in the Schematron file. What I actually need is to break the XQuery down into its components (I guess I'd need the abstract syntax tree) so that I have all the individual XPaths. For instance, given the XQuery if (abc:a/abc:b = '123') then abc:x/abc:y = ('aaa', 'bbb', 'ccc') else true(), I would need to retrieve abc:a/abc:b and abc:x/abc:y.
I assume that this could be done using Saxon-HE (or maybe another Parser/Compiler currently available for C# I don't know about). Unfortunately, I have yet to understand how to make use of Saxon well enough to even find at least a valid starting point for what I want to achieve. I've been trying to use the abstract syntax tree (so I can access the respective XPaths in the XQuery) seemingly accessible via XQueryExecutable:
Processor processor = new Processor();
XQueryCompiler xqueryCompiler = processor.NewXQueryCompiler();
XQueryExecutable exe = xqueryCompiler.Compile(xquery);
var AST = exe.getUnderlyingCompiledQuery();
var st = new XDocument();
st.Add(new XElement("root"));
XdmNode node = processor.NewDocumentBuilder().Build(st.CreateReader());
AST.explain(node); // <-- this is an error!
But that doesn't get me anywhere: I can't find any exposed properties I could work with. And while VS offers me AST.explain(...) (which seems promising), I'm unable to figure out what to pass here. I tried using an XdmNode, which I thought would be a Destination? But also, I am using Saxon 10 (via NuGet), while Destination seems to be from Saxon 9: net.sf.saxon.s9api.Destination?!
Does anybody who was kind enough to read through all of this have any advice for me on how to tackle this? :-) Or, maybe there's a better way to solve my problem I haven't thought of - I'm also grateful for suggestions.
TL;DR
Sorry for the wall of text! In short: I have Schematron rules that augment an XML schema with business logic. To evaluate these rules (not: validate instances against the rules!) without actual XML instances, I need to break down the XQueries which make up the Schematron's assertions into their components so that I can handle all XPaths used in them. I think it can be done with Saxon-HE, but my knowledge is too limited to even understand what a good starting point would be. I'm also open to suggestions regarding a possibly better approach to solve my actual problem (as described in detail above).
Thank you for taking the time to read this.
If this were an XSD schema rather than a Schematron schema, then Saxon-EE would do the job for you automatically: this is very similar to what a schema-aware XQuery processor attempts to do. There is a difference, though: in schema-aware XQuery, you can't assume that every element named foo is a valid instance of the element declaration named foo in the schema; it's quite legitimate, for example, for a query to transform valid instances into invalid instances, or vice versa. The input and output, after all, might conform to different schemas.
Saxon uses path analysis to do this: it looks at path expressions to see "where they might lead". Path analysis is also used to assess streamability, and to support document projection (building a trimmed-down tree representation of the source document that leaves out the parts that the query cannot reach). The path analysis in Saxon is by no means complete, for example it doesn't attempt to handle recursive functions. Although all these operations require Saxon-EE, the basic path analysis code is actually present in Saxon-HE, but I would offer no guarantee that it works for any purpose other than those described.
You're basically right that this is a tough problem you've set yourself, and I wish you luck with it.
Another approach you could adopt that wouldn't involve grovelling around the Saxon internals is to convert the XQuery to XQueryX, which is an XML representation of the parse tree, and then inspect the XQueryX (presumably using XQuery) to find the parts you need.
While XQueryX (as pointed out by Michael Kay) would theoretically have been exactly what I was looking for, unfortunately I could not find anything useful regarding an implementation for .NET during my research.
So I eventually solved the whole thing by creating my own parser using the XPath3.1 grammar for ANTLR4 as an ideal starting point. This way, I am now able to retrieve a syntax tree of any Schematron rule expression, allowing me to extract each contained XPath expression (and its sub expressions) separately.
Note that another stumbling block has been the fact that .NET still (!) only supports XPath 1.0 natively: while my parser did everything as intended, .NET gave me "illegal token" errors when trying to evaluate some of the found expressions. Installing the XPath2 NuGet package by Chertkov/Heyenrath was the solution.
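For illustration, evaluating one of the found expressions then looks roughly like this (a sketch; I'm assuming the Wmhelp.XPath2 extension methods exposed by that package, with resolver mapping the abc prefix):

using Wmhelp.XPath2;

object result = maximumInstance.XPath2Evaluate(
    "if (abc:a/abc:b = '123') then abc:x/abc:y = ('aaa', 'bbb', 'ccc') else true()",
    resolver);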
I have numerous unit tests within my solution. I'm using NUnit and have a lot of Asserts all over the place e.g. Assert.AreEqual(5, someVar).
This sort of pattern of testing expected numbers versus actual numbers is repeated a lot.
The project I'm working on involves a lot of tweaking of the models that produce these numbers. My current process is: I tweak the model based on requirements, and this throws off all my unit tests, as expected. What then begins is the manual process of updating all the expected values in my Asserts to match the new actual values, so that my unit tests stop breaking until another tweak to the model is required.
My question is, does there exist a plugin or a pattern such that all the expected numbers within the Asserts can be updated automatically with the actual values when I require them to?
Try setting a universally applied starting value that you can change without recalculating your asserts.
For example, say you are doing ICalculator with .AddOne() and .TimesTwo() methods
You can set a _startingNumber = 1
Tests for AddOne(): Assert.AreEqual(_startingNumber + 1, result)
Tests for TimesTwo(): Assert.AreEqual(_startingNumber * 2, result)
When you change _startingNumber your Asserts stay correct.
If you are asserting logic and not values you could try a lookup table
_lookup = new Dictionary<string, string>();
_lookup.Add("SaveStatus", "SaveValue");
_lookup.Add("DeletedStatus", "DeletedValue");
Then your assert looks like Assert.AreEqual(_lookup["SaveStatus"], result)
So if today you are expecting "SaveValue" but tomorrow you are expecting "SaveValueNew" you can change it in the lookup but all the asserts expecting a save status will be correct.
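Put together as a runnable NUnit sketch (the system under test is hypothetical):

private readonly Dictionary<string, string> _lookup = new Dictionary<string, string>
{
    { "SaveStatus", "SaveValue" },
    { "DeletedStatus", "DeletedValue" },
};

[Test]
public void Save_ReturnsSaveStatus()
{
    var result = _documentStore.Save(_document);   // hypothetical system under test
    Assert.AreEqual(_lookup["SaveStatus"], result);
}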
I'm not sure you are approaching this the right way round.
Your unit tests show that your current model is working as expected.
If you are going to change your model, you should first change your unit test assertions so that they match how you would like your model to behave.
Then you change your model and run the unit tests to see if everything has worked.
To change the model first and then automatically change the assertion values in your tests so that they match what the changed model is producing renders the unit tests effectively useless. They cannot be telling you if there is any problem with the way you implemented the model changes.
This sounds like a job for mocking. You can install a mocking framework with NuGet packages in VS.
I am writing a simple file converter which will take an XML file and convert it to CSV or vice-versa.
I have implemented two classes, XMLtoCSV and CSVtoXML. Both implement a Convert method that takes the input file path and filter text, filters the XML by the given filter, and performs the conversion (e.g. if the XML contains employee details, we might want to filter it so that only employees from a certain department are retrieved and converted to the CSV file).
I have a unit test which tests this Convert method. In it I am specifying the input file path and filter string, calling the Convert function and asserting the boolean result, but I also need to test whether the filtering worked and the conversion completed.
My question is that do you really need to access the file IO and do the filtering and conversion via unit test? Is this not integration testing? If not then how can I assert the filtering has worked without actually converting the file and returning the results? I thought about Moq'ing the Convert method, but that will not necessarily prove that my Convert method is working fine.
Any help/advice is appreciated.
Thanks
I suggest you use streams in your classes: pass a file stream in the application, and a fake (a MemoryStream, for example) in unit tests. This also makes you more flexible in case you later decide to get this XML from a web service or some other source - you will just need to pass a stream, not a file path.
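A sketch of that shape (the class layout and parsing details are assumptions):

// Sketch: the converter consumes a stream, so tests need no files on disk.
public bool Convert(Stream input, TextWriter output, string filter)
{
    var document = XDocument.Load(input);
    // ... apply the filter and write CSV rows to output ...
    return true;
}

// In a unit test, feed it in-memory data instead of a file path:
using (var input = new MemoryStream(Encoding.UTF8.GetBytes("<employees />")))
using (var output = new StringWriter())
{
    Assert.IsTrue(converter.Convert(input, output, "Sales"));
}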
"My question is that do you really need to access the file IO and do the filtering and conversion via unit test? Is this not integration testing?"
Precisely - in this case you are testing 3 things - the File IO system, the actual file contents, and the Convert method itself.
I think you need to look at restructuring your code to make it more amenable to unit testing (that's not a criticism of your code!). Consider your definition of the Convert method:
In it I am specifying the input file path and filter string
So your Convert method is actually doing two things - opening/reading a file, and converting the contents. You need to change things around so that the Convert method does one thing only - specifically, perform the conversion of a string (or indeed a stream) without having any reference to where it came from.
This way, you can correctly test the Convert method by supplying it with a string that you define yourself in your unit test - one test with known good data, and one with known bad data.
e.g.
void Convert_WithGoodInput_ReturnsTrue()
{
    var input = "this is a piece of data I know is good and should pass";
    var sut = new Converter(); //or whatever it's called :)
    bool actual = sut.Convert(input);
    Assert.AreEqual(true, actual, "Convert failed to convert good data...");
}
void Convert_WithBadInput_ReturnsFalse()
{
    var input = "this is a piece of data I know is BAD and should Fail. Bad Data! Bad!";
    var sut = new Converter(); //or whatever it's called :)
    bool actual = sut.Convert(input);
    Assert.AreEqual(false, actual, "Convert failed to complain about bad data...");
}
Of course inside your Convert method you are doing all sorts of arcane and wonderful things. At this point you might then look at that method and see if perhaps you can split it out into several internal methods, the functionality of which is perhaps provided by separate classes, which you provide as dependencies to the Converter class, and which in turn can all be tested in isolation.
By doing this you will be able to test both the functionality of the converter method, and you will be in a position to start using Mocks so that you can test the functional behaviour of it as well - such as ensuring that the frobber is called exactly once, and always before the gibber, and that the gibber always calls the munger, etc.
Bonus
But wait, there's more!!!!1!! - once your Converter class/method is arranged like this you will suddenly find that you can now implement an XML to Tab-delimited, or XML to JSON, or XML to ???? simply by writing the relevant component and plugging it into the Converter class. Loose coupling FTW!
e.g (and here I am just imagining how the guts of your convert function might work)
public class Converter
{
    private readonly ISourceReader reader;
    private readonly IValidator validator;
    private readonly IFilter filter;
    private readonly IOutputFormatter formatter;

    public Converter(ISourceReader reader, IValidator validator, IFilter filter, IOutputFormatter formatter)
    {
        //boring saving of dependencies to local privates here...
        this.reader = reader;
        this.validator = validator;
        this.filter = filter;
        this.formatter = formatter;
    }

    public bool Convert(string data, string filterText)
    {
        if (!validator.Validate(data)) return false;
        var filtered = filter.Filter(data, filterText);
        var raw = reader.Tokenise(filtered);
        var result = formatter.Format(raw);
        //and so on
        return true; //or whatever...
    }
}
Of course I am not trying to tell you how to write your code but the above is a very testable class for both unit and functional testing, because you can mix and match Mocks, Stubs and Reals as and where you like.
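For instance, a functional test with Moq against the imagined interfaces above might look like this sketch:

var validator = new Mock<IValidator>();
validator.Setup(v => v.Validate(It.IsAny<string>())).Returns(true);
var filter = new Mock<IFilter>();
filter.Setup(f => f.Filter(It.IsAny<string>(), It.IsAny<string>())).Returns("<filtered />");

var sut = new Converter(new Mock<ISourceReader>().Object, validator.Object,
                        filter.Object, new Mock<IOutputFormatter>().Object);
sut.Convert("<raw />", "someFilter");

// Functional behaviour: the validator must be consulted exactly once.
validator.Verify(v => v.Validate("<raw />"), Times.Once());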
So, I've been searching around on the internet for a bit, trying to see if someone has already invented the wheel here. What I want to do is write an integration test that will parse the current project, find all references to a certain method, find its arguments, and then check the database for each argument. For example:
public interface IContentProvider
{
ContentItem GetContentFor(string descriptor);
}
public class ContentProvider : IContentProvider
{
public virtual ContentItem GetContentFor(string descriptor)
{
// Fetches Content from Database for descriptor and returns it
}
}
Any other class will get an IContentProvider injected into their constructor using IOC, such that they could write something like:
contentProvider.GetContentFor("SomeDescriptor");
contentProvider.GetContentFor("SomeOtherDescriptor");
Basically, the unit test finds all these references, find the set of text ["SomeDescriptor", "SomeOtherDescriptor"], and then I can check the database to make sure I have rows defined for those descriptors. Furthermore, the descriptors are hard coded.
I could make an enum value for all descriptors, but the enum would have thousands of possible options, and that seems like kinda a hack.
Now, this link on SO: How I can get all reference with Reflection + C# basically says it's impossible without some very advanced IL parsing. To clarify: I don't need Reflector or anything like it - this is just to be an automated test I can run, so that if any other developers on my team check in code that calls for this content without creating the DB record, the test will fail.
Is this possible? If so, does anyone have a resource to look at or sample code to modify?
EDIT: Alternatively, perhaps a different method of doing this vs. trying to find all references? The end result is I want a test to fail when the record doesn't exist.
This will be very difficult: your program may compute the value of the descriptor, which means your test cannot know which values are possible without executing said code.
I would suggest changing the way you program here, by using an enum type, or coding using the type-safe enum pattern. This way, each and every use of GetContentFor will be safe: the argument is part of the enum, and the language's type checker performs the check.
Your test can then easily iterate over the different enum fields and check that they are all declared in your database.
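A sketch of such a test (NUnit-style; ContentKey and the repository call are assumptions):

[Test]
public void EveryContentKey_HasADatabaseRow()
{
    foreach (ContentKey key in Enum.GetValues(typeof(ContentKey)))
    {
        Assert.IsTrue(contentRepository.DescriptorExists(key.ToString()),
                      "No content row defined for descriptor: " + key);
    }
}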
Adding a new content key requires editing the enum, but this is a small inconvenience you can live with, as it helps a lot in ensuring all calls are safe.
I have a class that serializes a set of objects (using XML serialization) that I want to unit test.
My problem is it feels like I will be testing the .NET implementation of XML serialization, instead of anything useful. I also have a slight chicken and egg scenario where in order to test the Reader, I will need a file produced by the Writer to do so.
I think the questions (there are three, but they all relate) I'm ultimately looking for feedback on are:
1. Is it possible to test the Writer, without using the Reader?
2. What is the best strategy for testing the reader (XML file? Mocking with record/playback)? Is it the case that all you will really be doing is testing property values of the objects that have been deserialized?
3. What is the best strategy for testing the writer?
Background info on Xml serialization
I'm not using a schema, so all XML elements and attributes match the objects' properties. As there is no schema, tags/attributes which do not match those found in properties of each object, are simply ignored by the XmlSerializer (so the property's value is null or default). Here is an example
<MyObject Height="300">
    <Name>Bob</Name>
    <Age>20</Age>
</MyObject>
would map to
public class MyObject
{
public string Name { get;set; }
public int Age { get;set; }
[XmlAttribute]
public int Height { get;set; }
}
and vice versa. If the object changed to the below, the XML would still deserialize successfully, but FirstName would be blank.
public class MyObject
{
public string FirstName { get;set; }
public int Age { get;set; }
[XmlAttribute]
public int Height { get;set; }
}
An invalid XML file would still deserialize without error, so the unit test would pass unless you ran assertions on the values of the MyObject.
Do you need to be able to do backward compatibility? If so, it may be worth building up unit tests of files produced by old versions which should still be able to be deserialized by new versions.
Other than that, if you ever introduce anything "interesting" it may be worth a unit test to just check you can serialize and deserialize just to make sure you're not doing something funky with a readonly property etc.
I would argue that it is essential to unit test serialization if it is vitally important that you can read data between versions. And you must test with "known good" data (i.e. it isn't sufficient to simply write data in the current version and then read it again).
You mention that you don't have a schema... why not generate one? Either by hand (it isn't very hard), or with xsd.exe. Then you have something to use as a template, and you can verify this just using XmlReader. I'm doing a lot of work with xml serialization at the moment, and it is a lot easier to update the schema than it is to worry about whether I'm getting the data right.
Even XmlSerializer can get complex; particularly if you involve subclasses ([XmlInclude]), custom serialization (IXmlSerializable), or non-default XmlSerializer construction (passing additional metadata at runtime to the ctor). Another possibility is creative use of [XmlIgnore], [XmlAnyAttribute] or [XmlAnyElement]; for example you might support unexpected data for round-trip (only) in version X, but store it in a known property in version Y.
With serialization in general:
The reason is simple: you can break the data! How badly you do this depends on the serializer; for example, with BinaryFormatter (and I know the question is XmlSerializer), simply changing from:
public string Name {get;set;}
to
private string name;
public string Name {
get {return name;}
set {name = value; OnPropertyChanged("Name"); }
}
could be enough to break serialization, as the field name has changed (and BinaryFormatter loves fields).
There are other occasions when you might accidentally rename the data (even in contract-based serializers such as XmlSerializer / DataContractSerializer). In such cases you can usually override the wire identifiers (for example [XmlAttribute("name")] etc), but it is important to check this!
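For instance (a hypothetical rename):

public class Person
{
    // Renamed in code, but the wire identifier is pinned to the old name
    // so existing XML still round-trips.
    [XmlAttribute("name")]
    public string DisplayName { get; set; }
}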
Ultimately, it comes down to: is it important that you can read old data? It usually is; so don't just ship it... prove that you can.
For me, this is absolutely in the Don't Bother category. I don't unit test my tools. However, if you wrote your own serialization class, then by all means unit test it.
If you want to ensure that the serialization of your objects doesn't break, then by all means unit test. If you read the MSDN docs for the XMLSerializer class:
The XmlSerializer cannot serialize or deserialize the following: arrays of ArrayList; arrays of List<T>.
There is also a peculiar issue with enums declared as unsigned longs. Additionally, any objects marked as [Obsolete] do not get serialized from .NET 3.5 onwards.
If you have a set of objects that are being serialized, testing the serialization may seem odd, but it only takes someone to edit the objects being serialized to include one of the unsupported conditions for the serialisation to break.
In effect, you are not unit testing XML serialization, you are testing that your objects can be serialized. The same applies for deserialization.
Yes, as long as what needs to be tested is properly tested, through a bit of intervention.
The fact that you're serializing and deserializing in the first place means that you're probably exchanging data with the "outside world" -- the world outside the .NET serialization domain. Therefore, your tests should have an aspect that's outside this domain. It is not OK to test the Writer using the Reader, and vice versa.
It's not only about whether you would just end up testing the .NET serialization/deserialization; you have to test your interface with the outside world -- that you can output XML in the expected format and that you can properly consume XML in the anticipated format.
You should have static XML data that can be used to compare against serialization output and to use as input data for deserialization.
Assume you give the job of note taking and reading the notes back to the same guy:
You - Bob, I want you to jot down the following: "small yellow duck."
Bob - OK, got it.
You - Now, read it back to me.
Bob - "small yellow duck"
Now, what have we tested here? Can Bob really write? Did Bob even write anything or did he memorize the words? Can Bob actually read? -- his own handwriting? What about another person's handwriting? We don't have answers to any of these questions.
Now let's introduce Alice to the picture:
You - Bob, I want you to jot down the following: "small yellow duck."
Bob - OK, got it.
You - Alice, can you please check what Bob wrote?
Alice - OK, he's got it.
You - Alice, can you please jot down a few words?
Alice - Done.
You - Bob, can you please read them?
Bob - "red fox"
Alice - Yup, that sounds right.
We now know, with certainty, that Bob can write and read properly - as long as we can completely trust Alice. Static XML data (ideally validated against a schema) should be sufficiently trustworthy.
In my experience it is definitely worth doing, especially if the XML is going to be used as an XML document by the consumer. For example, the consumer may need to have every element present in the document, either to avoid null checking of nodes when traversing or to pass schema validation.
By default the XML serializer will omit properties with a null value unless you add the [XmlElement(IsNullable = true)] attribute. Similarly, you may have to redirect generic list properties to standard arrays with an [XmlArray] attribute.
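A sketch of those attributes in use (the type and property names are assumptions):

public class Order
{
    // Emits <Comment xsi:nil="true" /> instead of omitting the element.
    [XmlElement(IsNullable = true)]
    public string Comment { get; set; }

    // Pins the container and item element names for the collection.
    [XmlArray("Items"), XmlArrayItem("Item")]
    public List<OrderItem> Items { get; set; }
}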
As another contributor said, if the object is changing over time, you need to continuously check that the output is consistent. It will also protect you against the serializer itself changing and not being backwards compatible, although you'd hope that this doesn't happen.
So for anything other than trivial uses, or where the above considerations are irrelevant, it is worth the effort of unit testing it.
There are a lot of types that serialization cannot cope with, etc. Also, if you have your attributes wrong, it is common to get an exception when trying to read the XML back.
I tend to create an example tree of the objects that can be serialized, with at least one example of each class (and subclass). Then, at a minimum, I serialize the object tree to an in-memory stream and read it back from that stream.
You will be amazed at the number of times this catches a problem, and it saves me having to wait for the application to start up to find the problem. This level of unit testing is more about speeding up development than increasing quality, so I would not bother with it for serialization code that already works.
As other people have said, if you need to be able to read back data saved by old versions of your software, you had better keep a set of example data files for each shipped version and have tests to confirm you can still read them. This is harder than it seems at first, as the meaning of fields on an object may change between versions, so just being able to create the current object from an old serialized file is not enough; you have to check that the meaning is the same as it was in the version of the software that saved the file. (Put a version attribute in your root object now!)
I agree with you that you will be testing the .NET implementation more than you'll be testing your own code. But if that's what you want to do (perhaps you don't trust the .NET implementation :) ), I might approach your three questions as follows.
1. Yes, it's certainly possible to test the writer without the reader. Use the writer to serialize the example (20-year-old Bob) you provided to a MemoryStream. Open the MemoryStream with an XmlDocument. Assert the root node is named "MyObject". Assert it has one attribute named "Height" with value "300". Assert there is a "Name" element containing a text node with value "Bob". Assert there is an "Age" element containing a text node with value "20". (See the sketch after this list.)
2. Just do the reverse process of #1. Create an XmlDocument from the 20-year-old Bob XML string. Deserialize the stream with the reader. Assert the Name property equals "Bob". Assert the Age property equals 20. You can add test cases with insignificant whitespace or single quotes instead of double quotes to be more thorough.
3. See #1. You can extend it by adding what you consider to be tricky "edge" cases you think could break it. Names with various Unicode characters. Extra-long names. Empty names. Negative ages. Etc.
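A sketch of the writer test from point 1, using MyObject as defined in the question:

[TestMethod] // or [Test], depending on your framework
public void Writer_ProducesExpectedXml()
{
    var bob = new MyObject { Name = "Bob", Age = 20, Height = 300 };
    var serializer = new XmlSerializer(typeof(MyObject));
    using (var stream = new MemoryStream())
    {
        serializer.Serialize(stream, bob);
        stream.Position = 0;
        var doc = new XmlDocument();
        doc.Load(stream);
        Assert.AreEqual("MyObject", doc.DocumentElement.Name);
        Assert.AreEqual("300", doc.DocumentElement.GetAttribute("Height"));
        Assert.AreEqual("Bob", doc.DocumentElement["Name"].InnerText);
        Assert.AreEqual("20", doc.DocumentElement["Age"].InnerText);
    }
}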
I have done this in some cases... not testing the serialisation as such, but using some 'known good' XML serializations and then loading them into my classes, and checking that all the properties (as applicable) have the expected values.
This is not going to test anything for the first version... but if the classes ever evolve I know I will catch any breaking changes in the format.
We do acceptance testing of our serialization rather than unit testing.
What this means is that our acceptance testers take the XML schema, or as in your case some sample XML, and re-create their own serializable data-transfer class.
We then use NUnit to test our WCF service with this clean-room XML.
With this technique we've identified many, many errors. For example, where we have changed the name of the .NET member and forgotten to add an [XmlElement] tag with a Name = property.
If there's nothing you can do to change the way your class serializes, then you're testing .NET's implementation of XML serialization ;-)
If the format of the serialized XML matters, then you need to test the serialization. If it's important that you can deserialize it, then you need to test deserialization.
Seeing how you can't really fix serialization, you shouldn't be testing it - instead, you should be testing your own code and the way it interacts with the serialization mechanism. For example, you might need to unit-test the structure of the data you're serializing to make sure that no-one accidentally changes a field or something.
Speaking of which, I have recently adopted a practice where I check such things at compile-time rather than during execution of unit tests. It's a bit tedious, but I have a component that can traverse the AST, and then I can read it in a T4 template and write lots of #error messages if I meet something that shouldn't be there.