For an XML configuration file that you are using to store application settings, does its class need to be one of the classes used within the application, or do you create a special configuration class and then translate the values inside your application to the other objects that use those values?
For example
Say I want to keep track of these three settings for a product:
Name
ID
Color
So my config file looks something simple like this:
<Product>
  <Name>product1</Name>
  <ID>2343435</ID>
  <Color>Blue</Color>
</Product>
But the Product class that I'm using in the application has many more properties and methods, like:
class Product
{
    public string Name;
    public string Color;
    public int ID;
    public bool isObsolete;
    public SpecialType ProductProperty;

    public Product() { }

    public void ObsoleteProduct()
    {
        // do stuff
    }

    public void OtherMethod()
    {
        // do stuff
    }
}
So then am I supposed to make the XML representation from the actual Product class I'm using, or do I use the simpler form that only contains the settings I care about? Because if I use the simpler form, I'll have two classes, and I'll need to move the values between objects.
XML serialization and deserialization can be configured extensively using attributes, as described here. For instance, properties can be omitted from serialization, the element names of properties can be specified explicitly, and the serialization style for array types can be selected.
Furthermore, access to array types goes through the ICollection interface, which makes it possible to add extra logic when elements are inserted, as discussed in this article.
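For instance, the Product class from the question could be annotated so that only the three settings end up in the file. This is just a sketch, assuming the remaining members should be skipped; [XmlElement("...")] could likewise rename any element:

using System.Xml.Serialization;

public class Product
{
    public string Name;
    public int ID;
    public string Color;

    [XmlIgnore]                 // runtime-only state, never written to the config file
    public bool isObsolete;

    [XmlIgnore]                 // SpecialType is the application's own type from the question
    public SpecialType ProductProperty;

    // Methods are ignored by XmlSerializer, so ObsoleteProduct()/OtherMethod() need no attributes.
}

Serializing an instance with new XmlSerializer(typeof(Product)) then produces the same <Product><Name>...</Name><ID>...</ID><Color>...</Color></Product> shape shown above.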
I am creating a tile-based game in XNA and will be loading the level information from xml files. I have no problem with loading xml data but I would like feedback on the approach I'm thinking of using for reading the xml file.
The code has a class hierarchy such that:
A Level class contains:
- a collection of TileLayers
- a collection of Tilesets
A TileLayer class contains:
- a collection of Tiles
A Tileset class contains:
- a collection of TilesetTiles
A TilesetTile class contains:
- a TileId
- a collection of TileProperties
- a TileRectangle
Each of the above classes requires some information from the xml level file.
When I load a level I would like to simply call Level.Load();
My intention is that each class will have a Load() method that will:
1. Load any specific info it needs from the xml file.
2. Call Load() on its children.
The problem I see is that the code for processing the xml will be scattered across different files, making changes difficult to implement (for instance, if I decide to encrypt the xml file), and it no doubt breaks several aspects of the SOLID principles.
I have this idea that I could create an xmlLevelReader class whose sole purpose is to read an xml level file.
This class could then expose methods that can be called from the Load() method in each of the classes described above.
For example, the TileLayer class's Load() method could call xmlLevelReader.GetTiles(), which would return an IEnumerable<Tile>.
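To make that concrete, here is a rough sketch of the reader idea; the element and attribute names, the Tile constructor and the decryption hook are assumptions for illustration only:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// The only class that knows about the level xml layout (and, later, any encryption).
public class xmlLevelReader
{
    private readonly XDocument doc;

    public xmlLevelReader(string path)
    {
        // If the file is encrypted later, decrypt it here - nothing else has to change.
        doc = XDocument.Load(path);
    }

    public IEnumerable<Tile> GetTiles(string layerName)
    {
        return doc.Descendants("TileLayer")
                  .Where(l => (string)l.Attribute("Name") == layerName)
                  .Descendants("Tile")
                  .Select(t => new Tile((int)t.Attribute("Id")));
    }
}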
Do you think this approach will work?
Can you foresee any difficulties?
Is this approach too simplistic/complicated?
Any constructive criticism welcomed!
Thanks
Based on your comment, I see that you are using Tiled Map Editor. This leads me to suggest that you use TiledLib. Here is a brief explanation of how you can get up and running with importing your .tmx files for use in your game.
Content Pipeline Overview
File -> Content Importer -> Content Processor -> Content Writer -> .xnb -> Content Reader -> Game Object
TiledLib
TiledLib only handles the ContentImporter part of the above diagram. It will essentially handle reading the .tmx XML and allow you to process the data into whatever objects you need at run time. Fortunately, the TiledLib author has provided a Demos section in the download as well.
Basic Tiled Map Processor Demo
- BasicDemo: the main game project, which contains the ContentManager.Load call
- BasicDemoContent: the content project, which has the .tmx file from Tiled
- BasicDemoContentPipeline: the content pipeline project, which has the ContentProcessor
- TiledLib: the library itself, which has the ContentImporter
You really only need to worry about how the ContentProcessor works, because TiledLib handles all the importing for you. That said, I do suggest looking through the different classes to understand how it deserializes the XML (for educational purposes).
The example ContentProcessor in the Basic Demo project takes in a MapContent object from the ContentImporter (TiledLib) and outputs a DemoMapContent object which is serialized to .xnb at build time and deserialized to a Map object at run time. Here are the classes that represent the map after being processed completely:
[ContentSerializerRuntimeType("BasicDemo.Map, BasicDemo")]
public class DemoMapContent
{
public int TileWidth;
public int TileHeight;
public List<DemoMapLayerContent> Layers = new List<DemoMapLayerContent>();
}
[ContentSerializerRuntimeType("BasicDemo.Layer, BasicDemo")]
public class DemoMapLayerContent
{
public int Width;
public int Height;
public DemoMapTileContent[] Tiles;
}
[ContentSerializerRuntimeType("BasicDemo.Tile, BasicDemo")]
public class DemoMapTileContent
{
public ExternalReference<Texture2DContent> Texture;
public Rectangle SourceRectangle;
public SpriteEffects SpriteEffects;
}
A Map contains a tile width, tile height, and a list of MapLayers.
A MapLayer contains a width, a height, and a list of Tiles.
A MapTile contains a texture, a source rectangle (proper rectangle in the tileset), and optional sprite effects (I've never used any).
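For reference, the runtime classes that those [ContentSerializerRuntimeType] attributes point at look roughly like this (a sketch; the real source lives in the BasicDemo project, and the members simply mirror the *Content classes above so the built-in ContentReader can populate them):

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Map
{
    public int TileWidth;
    public int TileHeight;
    public List<Layer> Layers = new List<Layer>();
}

public class Layer
{
    public int Width;
    public int Height;
    public Tile[] Tiles;
}

public class Tile
{
    public Texture2D Texture;           // the ExternalReference resolves to a Texture2D at load time
    public Rectangle SourceRectangle;
    public SpriteEffects SpriteEffects;
}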
How It's Made
I suggest reading the comments of the ContentProcessor to understand what is happening, but in brief, these are the basics (a skeleton sketch follows the list):
1. Load texture data for the tile set
2. Get source rectangles for each tile from within the texture
3. Iterate over all layers of the map
4. For each layer, iterate over all tiles
5. For each tile, get the proper texture and source rectangle
6. Assign all data from the input to the output properly
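In skeleton form it looks like this; only the standard content pipeline API is shown, and the comments stand in for the TiledLib-specific member access (see the BasicDemo source for the real implementation):

using Microsoft.Xna.Framework.Content.Pipeline;
// MapContent comes from TiledLib; DemoMapContent/DemoMapLayerContent/DemoMapTileContent are shown above.

[ContentProcessor(DisplayName = "Demo Map Processor")]
public class DemoMapProcessor : ContentProcessor<MapContent, DemoMapContent>
{
    public override DemoMapContent Process(MapContent input, ContentProcessorContext context)
    {
        var output = new DemoMapContent();

        // 1-2. Build the external texture reference for the tile set and
        //      precompute a source rectangle for every tile index.
        // 3-5. Walk the input's layers and tiles, creating a DemoMapLayerContent
        //      per layer and a DemoMapTileContent per tile with the matching
        //      texture reference and source rectangle.
        // 6.   Copy the remaining map data (tile width/height, the layer list)
        //      onto the output object.

        return output;
    }
}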
Caveats
As long as you stick to basic types (vague, I know), you also do not need to worry about the ContentWriter and ContentReader parts of the content pipeline. It will automatically serialize and deserialize the .xnb files at build and run time, respectively. See a question I asked about this: here.
Also, you'll notice that if you're using object layers from within Tiled, the demo does not show you how to process those. TiledLib does import them properly; you just need to pull the data out and put it in your own classes. I'll try to edit this answer later with an example of how to do that.
If you just want to load XML without manipulating the data at all, you can use the built-in XNA content serializer.
Essentially you define a class which maps to your xml format, and then read the XML into an instance of that class.
For example, here I define the class I want to load into:
SpriteRenderDefinition.cs
I chose this one because it has nested classes like the case you describe. Note that it goes into the ContentDefinitions project of your XNA solution.
Now here is the xml file that fills in the content of a SpriteRenderDefinition:
Sprite.xml
The format of that XML maps directly to the member names of SpriteRenderDefinition.
And finally, the code to load that XML data into an actual object at runtime is very straightforward:
SpriteRenderDefinition def = GameObjectManager.pInstance.pContentManager.Load<SpriteRenderDefinition>(fileName);
After calling that line, you have a SpriteRenderDefinition object populated with all the content of the XML file! That is all the code I wrote. Everything else is built into XNA. It's really quite slick and useful if you take an hour or so to figure it out!
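Since the linked files aren't reproduced here, a minimal made-up example of the same pattern: a plain class in the content-definitions project, and an XML asset that maps onto it (the XnaContent/Asset wrapper is what the built-in importer expects; the type and file names are hypothetical):

// In the content definitions project.
namespace MyGame.ContentDefs
{
    public class EnemyDefinition
    {
        public string Name;
        public int HitPoints;
    }
}

And the matching XML file added to the content project:

<?xml version="1.0" encoding="utf-8"?>
<XnaContent>
  <Asset Type="MyGame.ContentDefs.EnemyDefinition">
    <Name>Grunt</Name>
    <HitPoints>25</HitPoints>
  </Asset>
</XnaContent>

Loading it is then the same one-liner as above: Content.Load<EnemyDefinition>("Enemy").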
I have the following situation. In my C# application, I have a class which I serialize using XmlSerializer. The class is pretty complex, and an object of this class gets saved to the local disk as an application file, which can be opened later (the classic save-your-work-and-reopen-it scenario). My problem is that during development, the class of the object which gets serialized might change. I would like to have a versioning system which allows my app to recognize that the saved xml belongs to an older version but can still be opened. Old app versions cannot open newer xml versions either.
For example:
class ComplexObject
{
public string settings1;
public string settings2;
}
I serialize the object and ship the app to production.
Tomorrow my class becomes:
class ComplexObject
{
public string settings1;
public string settings2;
public string settings3;
}
How can my new version of the app open serialized objects of the old class definition as well as the new class definition, with no error when loading the file into an object (deserialization)?
Any suggestions and basic samples are welcomed!
Thanks
It all depends on the choice of serializer. In the case of XmlSerializer this is fine and will just work; clients with the new value will load the new value; clients without will not. Sample:
var reader = XmlReader.Create(new StringReader(
    @"<ComplexObject><foo>123</foo><bar>abc</bar></ComplexObject>"));
var ser = new XmlSerializer(typeof(ComplexObject));
var obj = (ComplexObject)ser.Deserialize(reader);
with:
public class ComplexObject
{
public string foo;
}
which works and loads foo but not bar.
Do not use BinaryFormatter for this - that leads to a world of hurt. If you want binary output, consider something like protobuf-net which is designed to be overtly accommodating with versioning.
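For what it's worth, a protobuf-net contract looks roughly like this (a sketch; the explicit field numbers are what make adding settings3 later a non-breaking change):

using ProtoBuf;

[ProtoContract]
public class ComplexObject
{
    [ProtoMember(1)] public string settings1;
    [ProtoMember(2)] public string settings2;
    [ProtoMember(3)] public string settings3;   // added in a later version; old data simply has no field 3
}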
Version-tolerant serialization
In short, you either mark fields as Optional (and fill them with default values) or implement a deserialization constructor which will parse the values as you want them.
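For the runtime serializers that article covers (BinaryFormatter and friends, not XmlSerializer), the optional-field approach looks roughly like this sketch:

using System;
using System.Runtime.Serialization;

[Serializable]
public class ComplexObject
{
    public string settings1;
    public string settings2;

    [OptionalField(VersionAdded = 2)]
    public string settings3;                 // absent from version-1 streams

    [OnDeserializing]
    private void SetDefaults(StreamingContext context)
    {
        settings3 = "default";               // supply a sensible default before the stream is read
    }
}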
I hope I understood your problem correctly. You're having a class serialized to a file. Then you change the class in memory (e.g. you add another property). Now you want to deserialize this class from the file. This is no problem as long as you only add new properties. They will be ignored by the deserializer. It creates a new instance of your class (that is the reason why serializable classes have to have a default constructor) and tries to fill in the properties it finds in the stream being deserialized. If you change a property's type or remove a property, you won't be able to deserialize that.
One workaround for removed properties may be to keep the properties you intended to remove and simply ignore them from then on.
You can take a look at Version Tolerant Serialization, explained on MSDN:
http://msdn.microsoft.com/en-us/library/ms229752%28v=vs.80%29.aspx
Track 1
You could create an if..else mechanism in the new version's file opener which tries to open the file from the lowest possible version up to the highest.
Track 2
You could store version information in your files.
class ComplexObject
{
public string settings1;
public string settings2;
public string fileVersion;
}
Track 3
You could use different file extensions for different file versions (like .doc, .docx).
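A small sketch combining Tracks 1 and 2: read the stored version first, then branch. LoadV1 and LoadV2 are hypothetical helpers that deserialize the file and, for old files, map the data into the current class:

using System.Xml.Linq;

XDocument doc = XDocument.Load("settings.xml");
string version = (string)doc.Root.Element("fileVersion") ?? "1";

ComplexObject obj = version == "2"
    ? LoadV2(doc)     // current format
    : LoadV1(doc);    // older format, upgraded into the current class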
I'm writing a small 2D shooter game in XNA, and I've decided that, so that one could implement custom-made content in the game, it loads definitions of the game objects from XML files. When I found out that XNA has an easy-to-use XML serializer, it made it extra easy. The problem is, my objects that I want to serialize are DrawableGameComponents. XNA's XML serializer, the ContentTypeWriter class which you extend to create custom content writers, requires that the object have a constructor with no arguments to default to. DrawableGameComponent, however, requires a Game object in its constructor and will not let you set the game after the object is initialized. I cannot modify the behavior of the ContentTypeWriter enough, however, to accept a non-blank constructor, because the content is loaded by an entirely different method that I cannot override. So essentially I have this:
class Star : DrawableGameComponent
{
    public Star(Game game)
        : base(game)
    {
    }
}
With the ContentTypeWriter requiring a constructor with no arguments. I can't create one though, because then I have no way to get a Game object into the Star class. I could just not make it a DrawableGameComponent, but I am trying to decouple these objects from the primary game class such that I can reuse them, etc. and without a Game object this is absurdly difficult. My question therefore is, does anyone know how to modify ContentTypeWriter enough to allow a constructor with arguments, or any ways around this?
I also thought about writing my own XML parsing code using XPath or the LINQ XML classes, but XNA throws a fit if I have any XML files in the project that do not follow the XNA schema and won't build. Would it be reasonable to write a base class with only the primary fields of the class and a DrawableGameComponent version that uses the decorator pattern, and serialize only the base? I'm pulling out my hair trying to get around this, and wondering what exactly I should be doing in this situation.
I am also building levels by parsing level files, and I use System.Xml to load the data. I changed the properties on the XML file I added to the following:
Build Action: None
Copy To Output Directory: Copy If Newer
Then I wrote some code like this:
public static LevelInfo LoadLevel(
string xmlFile,
GraphicsDevice device,
PhysicsSimulator sim,
ContentManager content)
{
FileInfo xmlFileInfo = new FileInfo(xmlFile);
XDocument fileDoc = XDocument.Load(xmlFile);
//this part is game specific
LevelInfo levelData = new LevelInfo();
levelData.DynamicObjects = LevelLoader.LoadDynamicObjects(device, sim, content, xmlFileInfo, fileDoc);
levelData.StaticObjects = LevelLoader.LoadStaticObjects(device, sim, content, xmlFileInfo, fileDoc);
levelData.LevelAreas = LevelLoader.LoadAreas(device, xmlFileInfo, fileDoc);
return levelData;
}
This is just a sample but it lets you build objects however you want with whatever XML data you want.
For those curious, here's the xml file:
<Level>
<Object Type="Custom"
PositionX="400"
PositionY="400"
IsStatic="true"
Rotation="0"
Texture="sampleObj1_geometrymap"
Mass="5"
ColorR="0"
ColorG="255"
ColorB="0">
</Object>
<Object Type="Custom"
PositionX="400"
PositionY="600"
IsStatic="false"
Rotation="0"
Texture="sampleObj2_geometrymap"
Mass="5"
ColorR="230"
ColorG="230"
ColorB="255">
</Object>
<Object Type="Area"
MinPositionX="0"
MinPositionY="0"
MaxPositionX="300"
MaxPositionY="300"
AreaType="Goal">
</Object>
</Level>
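As a rough illustration of how something like LoadDynamicObjects can pull those attributes out with LINQ to XML (DynamicObjectInfo here is a made-up stand-in for the game's own classes):

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Xna.Framework;

class DynamicObjectInfo
{
    public Vector2 Position;
    public bool IsStatic;
    public float Rotation;
    public string TextureName;
    public float Mass;
    public Color Color;
}

static List<DynamicObjectInfo> LoadObjects(XDocument fileDoc)
{
    return fileDoc.Root
        .Elements("Object")
        .Where(o => (string)o.Attribute("Type") == "Custom")
        .Select(o => new DynamicObjectInfo
        {
            Position = new Vector2((float)o.Attribute("PositionX"),
                                   (float)o.Attribute("PositionY")),
            IsStatic = (bool)o.Attribute("IsStatic"),
            Rotation = (float)o.Attribute("Rotation"),
            TextureName = (string)o.Attribute("Texture"),
            Mass = (float)o.Attribute("Mass"),
            Color = new Color((int)o.Attribute("ColorR"),
                              (int)o.Attribute("ColorG"),
                              (int)o.Attribute("ColorB"))
        })
        .ToList();
}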
I have a class that serializes a set of objects (using XML serialization) that I want to unit test.
My problem is that it feels like I will be testing the .NET implementation of XML serialization instead of anything useful. I also have a slight chicken-and-egg scenario where, in order to test the Reader, I will need a file produced by the Writer.
I think the questions (there are three, but they all relate) I'm ultimately looking for feedback on are:
Is it possible to test the Writer, without using the Reader?
What is the best strategy for testing the reader (XML file? Mocking with record/playback)? Is it the case that all you will really be doing is testing property values of the objects that have been deserialized?
What is the best strategy for testing the Writer?
Background info on Xml serialization
I'm not using a schema, so all XML elements and attributes match the objects' properties. As there is no schema, tags/attributes which do not match the properties of each object are simply ignored by the XmlSerializer (so the property's value is null or default). Here is an example:
<MyObject Height="300">
  <Name>Bob</Name>
  <Age>20</Age>
</MyObject>
would map to
public class MyObject
{
public string Name { get;set; }
public int Age { get;set; }
[XmlAttribute]
public int Height { get;set; }
}
and vice versa. If the object changed to the one below, the XML would still deserialize successfully, but FirstName would be blank.
public class MyObject
{
public string FirstName { get;set; }
public int Age { get;set; }
[XmlAttribute]
public int Height { get;set; }
}
An "invalid" XML file would still deserialize without error, so the unit test would pass unless you ran assertions on the values of the MyObject.
Do you need to be able to do backward compatibility? If so, it may be worth building up unit tests of files produced by old versions which should still be able to be deserialized by new versions.
Other than that, if you ever introduce anything "interesting", it may be worth a unit test just to check that you can serialize and deserialize, to make sure you're not doing something funky with a read-only property, etc.
I would argue that it is essential to unit test serialization if it is vitally important that you can read data between versions. And you must test with "known good" data (i.e. it isn't sufficient to simply write data in the current version and then read it again).
You mention that you don't have a schema... why not generate one? Either by hand (it isn't very hard), or with xsd.exe. Then you have something to use as a template, and you can verify this just using XmlReader. I'm doing a lot of work with xml serialization at the moment, and it is a lot easier to update the schema than it is to worry about whether I'm getting the data right.
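For example, once you have a schema, checking the serializer's output against it needs nothing more than XmlReader with validation switched on (the file names here are placeholders):

using System;
using System.Xml;

var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
settings.Schemas.Add(null, "MyObject.xsd");                     // placeholder schema file
settings.ValidationEventHandler += (s, e) => { throw new Exception(e.Message); };

using (var reader = XmlReader.Create("serializer-output.xml", settings))   // placeholder xml file
{
    while (reader.Read()) { }   // reading the whole document triggers validation
}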
Even XmlSerializer can get complex; particularly if you involve subclasses ([XmlInclude]), custom serialization (IXmlSerializable), or non-default XmlSerializer construction (passing additional metadata at runtime to the ctor). Another possibility is creative use of [XmlIgnore], [XmlAnyAttribute] or [XmlAnyElement]; for example you might support unexpected data for round-trip (only) in version X, but store it in a known property in version Y.
With serialization in general:
The reason is simple: you can break the data! How badly you do this depends on the serializer; for example, with BinaryFormatter (and I know the question is XmlSerializer), simply changing from:
public string Name {get;set;}
to
private string name;
public string Name {
    get { return name; }
    set { name = value; OnPropertyChanged("Name"); }
}
could be enough to break serialization, as the field name has changed (and BinaryFormatter loves fields).
There are other occasions when you might accidentally rename the data (even in contract-based serializers such as XmlSerializer / DataContractSerializer). In such cases you can usually override the wire identifiers (for example [XmlAttribute("name")] etc), but it is important to check this!
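For example, pinning the wire name so a later rename in code doesn't silently change the XML (a trivial sketch):

using System.Xml.Serialization;

public class Customer
{
    // Renamed from Name to FullName in code, but the element keeps its
    // original name so existing documents still round-trip.
    [XmlElement("Name")]
    public string FullName { get; set; }
}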
Ultimately, it comes down to: is it important that you can read old data? It usually is; so don't just ship it... prove that you can.
For me, this is absolutely in the Don't Bother category. I don't unit test my tools. However, if you wrote your own serialization class, then by all means unit test it.
If you want to ensure that the serialization of your objects doesn't break, then by all means unit test. If you read the MSDN docs for the XmlSerializer class:
The XmlSerializer cannot serialize or deserialize the following: Arrays of ArrayList; Arrays of List<T>.
There is also a peculiar issue with enums declared as unsigned longs. Additionally, any objects marked as [Obsolete] do not get serialized from .NET 3.5 onwards.
If you have a set of objects that are being serialized, testing the serialization may seem odd, but it only takes someone to edit the objects being serialized to include one of the unsupported conditions for the serialisation to break.
In effect, you are not unit testing XML serialization, you are testing that your objects can be serialized. The same applies for deserialization.
Yes, as long as what needs to be tested is properly tested, through a bit of intervention.
The fact that you're serializing and deserializing in the first place means that you're probably exchanging data with the "outside world" -- the world outside the .NET serialization domain. Therefore, your tests should have an aspect that's outside this domain. It is not OK to test the Writer using the Reader, and vice versa.
It's not only about whether you would just end up testing the .NET serialization/deserialization; you have to test your interface with the outside world -- that you can output XML in the expected format and that you can properly consume XML in the anticipated format.
You should have static XML data that can be used to compare against serialization output and to use as input data for deserialization.
Assume you give the job of note taking and reading the notes back to the same guy:
You - Bob, I want you to jot down the following: "small yellow duck."
Bob - OK, got it.
You - Now, read it back to me.
Bob - "small yellow duck"
Now, what have we tested here? Can Bob really write? Did Bob even write anything or did he memorize the words? Can Bob actually read? -- his own handwriting? What about another person's handwriting? We don't have answers to any of these questions.
Now let's introduce Alice to the picture:
You - Bob, I want you to jot down the following: "small yellow duck."
Bob - OK, got it.
You - Alice, can you please check what Bob wrote?
Alice - OK, he's got it.
You - Alice, can you please jot down a few words?
Alice - Done.
You - Bob, can you please read them?
Bob - "red fox"
Alice - Yup, that sounds right.
We now know, with certainty, that Bob can write and read properly -- as long as we can completely trust Alice. Static XML data (ideally tested against a schema) should be sufficiently trustworthy.
In my experience it is definitely worth doing, especially if the XML is going to be used as an XML document by the consumer. For example, the consumer may need to have every element present in the document, either to avoid null checking of nodes when traversing or to pass schema validation.
By default the XML serializer will omit properties with a null value unless you add the [XmlElement(IsNullable = true)] attribute. Similarly, you may have to redirect generic list properties to standard arrays with an [XmlArray] attribute.
As another contributor said, if the object is changing over time, you need to continuously check that the output is consistent. It will also protect you against the serializer itself changing and not being backwards compatible, although you'd hope that this doesn't happen.
So for anything other than trivial uses, or where the above considerations are irrelevant, it is worth the effort of unit testing it.
There are a lot of types that serialization cannot cope with, etc. Also, if you have your attributes wrong, it is common to get an exception when trying to read the xml back.
I tend to create an example tree of the objects that can be serialized, with at least one example of each class (and subclass). Then, at a minimum, serialize the object tree to a string stream and read it back from the string stream.
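A minimal round-trip check of that kind, using the MyObject class from the question (NUnit-style asserts assumed):

using System.IO;
using System.Xml.Serialization;
using NUnit.Framework;

[Test]
public void RoundTripsThroughXml()
{
    var serializer = new XmlSerializer(typeof(MyObject));
    var original = new MyObject { Name = "Bob", Age = 20, Height = 300 };

    var writer = new StringWriter();
    serializer.Serialize(writer, original);                 // serialize to a string

    var copy = (MyObject)serializer.Deserialize(new StringReader(writer.ToString()));

    Assert.AreEqual(original.Name, copy.Name);              // spot-check the values survived
    Assert.AreEqual(original.Age, copy.Age);
    Assert.AreEqual(original.Height, copy.Height);
}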
You will be amazed at the number of times this catches a problem and saves me having to wait for the application to start up to find it. This level of unit testing is more about speeding up development than increasing quality, so I would not do it for serialization that is already known to work.
As other people have said, if you need to be able to read back data saved by old versions of your software, you had better keep a set of example data files for each shipped version and have tests to confirm you can still read them. This is harder than it seems at first, as the meaning of fields on an object may change between versions, so just being able to create the current object from an old serialized file is not enough; you have to check that the meaning is the same as it was in the version of the software that saved the file. (Put a version attribute in your root object now!)
I agree with you that you will be testing the .NET implementation more than you'll be testing your own code. But if that's what you want to do (perhaps you don't trust the .NET implementation :) ), I might approach your three questions as follows.
1. Yes, it's certainly possible to test the writer without the reader. Use the writer to serialize the example (20-year-old Bob) you provided to a MemoryStream. Open the MemoryStream with an XmlDocument. Assert the root node is named "MyObject". Assert it has one attribute named "Height" with value "300". Assert there is a "Name" element containing a text node with value "Bob". Assert there is an "Age" element containing a text node with value "20".
2. Just do the reverse process of #1. Create an XmlDocument from the 20-year-old Bob XML string. Deserialize the stream with the reader. Assert the Name property equals "Bob". Assert the Age property equals 20. You can do things like add a test case with insignificant whitespace or single quotes instead of double quotes to be more thorough.
3. See #1. You can extend it by adding what you consider to be tricky "edge" cases you think could break it. Names with various Unicode characters. Extra long names. Empty names. Negative ages. Etc.
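Point 1, sketched as an NUnit test against the MyObject example from the question:

using System.IO;
using System.Xml;
using System.Xml.Serialization;
using NUnit.Framework;

[Test]
public void Writer_ProducesExpectedXml()
{
    var serializer = new XmlSerializer(typeof(MyObject));
    var stream = new MemoryStream();
    serializer.Serialize(stream, new MyObject { Name = "Bob", Age = 20, Height = 300 });
    stream.Position = 0;

    var doc = new XmlDocument();
    doc.Load(stream);

    Assert.AreEqual("MyObject", doc.DocumentElement.Name);
    Assert.AreEqual("300", doc.DocumentElement.GetAttribute("Height"));
    Assert.AreEqual("Bob", doc.DocumentElement["Name"].InnerText);
    Assert.AreEqual("20", doc.DocumentElement["Age"].InnerText);
}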
I have done this in some cases... not testing the serialisation as such, but using some 'known good' XML serializations and then loading them into my classes, and checking that all the properties (as applicable) have the expected values.
This is not going to test anything for the first version... but if the classes ever evolve I know I will catch any breaking changes in the format.
We do acceptance testing of our serialization rather than unit testing.
What this means is that our acceptance testers take the XML schema, or as in your case some sample XML, and re-create their own serializable data-transfer class.
We then use NUnit to test our WCF service with this clean-room XML.
With this technique we've identified many, many errors. For example, where we have changed the name of the .NET member and forgotten to add an [XmlElement] tag with a Name = property.
If there's nothing you can do to change the way your class serializes, then you're testing .NET's implementation of XML serialization ;-)
If the format of the serialized XML matters, then you need to test the serialization. If it's important that you can deserialize it, then you need to test deserialization.
Seeing how you can't really fix serialization, you shouldn't be testing it - instead, you should be testing your own code and the way it interacts with the serialization mechanism. For example, you might need to unit-test the structure of the data you're serializing to make sure that no-one accidentally changes a field or something.
Speaking of which, I have recently adopted a practice where I check such things at compile-time rather than during execution of unit tests. It's a bit tedious, but I have a component that can traverse the AST, and then I can read it in a T4 template and write lots of #error messages if I meet something that shouldn't be there.