Serialize one object and deserialize into a list of objects - C#

I have a type for which I need to do the following:
zip, serialize, and write it to a file. This could happen multiple times.
I should then be able to recreate a list of objects of this type, i.e. deserialize each one and store it in a collection.
I have tried a few solutions like the one given below, but they are slow. I need a fast solution using .NET 4.0.
Solution tried:
Create a zip stream, use BinaryFormatter, and then use a StreamWriter to write a line to the file and close it.
The reverse for recreating the list: read a line, unzip it, and deserialize one object at a time.

From what I understand, the BinaryFormatter + StreamWriter combination can become slow and bloated because BinaryFormatter embeds metadata about the type, its properties, and their data types in the output.
One option you have, if you are willing to work with a third-party library, is Protocol Buffers. According to the site, it is a lightweight, fast serialization format that Google uses in their data communications. It's also recommended in this StackOverflow question: Fast and compact object serialization in .NET.
There are two libraries available for .NET:
By Marc Gravell: https://code.google.com/p/protobuf-net/
By Jonathan Skeet: https://code.google.com/p/protobuf-csharp-port/
Here is a table of results comparing "protobuf-net" (first link) and "proto#" (second link) to other serialization techniques (more tests available here):
Serializer                    size   serialize   deserialize
-------------------------------------------------------------
protobuf-net                     3         268         1,881
proto#                           3          76         1,792
BinaryFormatter                153       6,694         8,420
SoapFormatter                  687      28,609        55,125
XmlSerializer                  153      14,594        19,819
DataContractSerializer         205       3,263        10,516
DataContractJsonSerializer      26       2,854        15,621
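If you go the protobuf-net route, a minimal sketch of the original workflow (serialize objects one at a time into a compressed file, then read them all back into a list) might look like the following. MyRecord and RecordFile are hypothetical stand-ins for your own type; the length-prefix overloads are what allow several objects to share one stream.
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using ProtoBuf;

[ProtoContract]
public class MyRecord            // stand-in for your type
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

public static class RecordFile
{
    // Write the records one at a time into a single compressed file.
    public static void Write(string path, IEnumerable<MyRecord> records)
    {
        using (var file = File.Create(path))
        using (var gzip = new GZipStream(file, CompressionMode.Compress))
        {
            foreach (var record in records)
            {
                // The length prefix delimits each object so they can be
                // streamed back individually later.
                Serializer.SerializeWithLengthPrefix(gzip, record, PrefixStyle.Base128, 1);
            }
        }
    }

    // Read every record back into a list.
    public static List<MyRecord> ReadAll(string path)
    {
        using (var file = File.OpenRead(path))
        using (var gzip = new GZipStream(file, CompressionMode.Decompress))
        {
            return new List<MyRecord>(
                Serializer.DeserializeItems<MyRecord>(gzip, PrefixStyle.Base128, 1));
        }
    }
}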
If you would prefer a little more control, though (and if you are just serializing objects), then this Code Project article describes a neat pattern for serializing them: http://www.codeproject.com/Articles/14164/A-Fast-Serialization-Technique
The idea is that you implement the ISerializable interface for whatever class you need to serialize. This requires you to implement the ISerializable.GetObjectData method; in it you obtain a SerializationWriter, write each field individually, and then add the result to the SerializationInfo object. The syntax itself is actually incredibly straightforward.
Here is a quick, abbreviated, sample of the GetObjectData method from the site:
// Serialize the object. Write each field to the SerializationWriter
// then add this to the SerializationInfo parameter
public void GetObjectData (SerializationInfo info, StreamingContext ctxt) {
SerializationWriter sw = SerializationWriter.GetWriter ();
sw.Write (id1);
sw.Write (id2);
sw.Write (id3);
sw.Write (s1);
sw.Write (s2);
// more properties here
sw.AddToInfo (info);
}
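For completeness, the reading side of the same pattern is a deserialization constructor that reads the fields back in the same order they were written. A sketch is below; the class name (MyType) is invented here, and the SerializationReader method names are as I recall them from the linked article.
// Deserialize the object. Read each field back from the SerializationReader
// in the same order it was written in GetObjectData.
public MyType (SerializationInfo info, StreamingContext ctxt) {
SerializationReader sr = SerializationReader.GetReader (info);
id1 = sr.ReadInt32 ();
id2 = sr.ReadInt32 ();
id3 = sr.ReadInt32 ();
s1 = sr.ReadString ();
s2 = sr.ReadString ();
// more fields here
}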
Here are the results of this author's tests:
Formatter                        Size (bytes)   Time (uS)
--------------------------------------------------------------------
Standard serialization Binary            2080         364
Fast serialization Binary                 421          74
Fast serialization SOAP                  1086         308

Related

Serialize but not deserialize fields/properties in XML, JSON and (particularly) YAML?

I have a set of objects that contain fields & properties that need to be inspectable in the output of serialization but not read back in when deserialized.
This is purely for debugging/confirmation purposes. We are creating hundreds of files and I want to spot check that serialization is occurring correctly by adding supplementary information. I do not want this supplementary information to be read in during deserialization - it's impossible to do so in fact.
I also need to do this with equal facility across different serialization formats, so we can assess which one is working best. I have a generic serialization approach where the desired format is passed in as an argument, so don't want anything too messy or intricate for each different format.
I've hunted around and found various things on related topics - mostly to do with the opposite: not writing certain fields during serialization. What's out there seems to be quite complicated and at times hacky.
Is it possible to serialize an object differently to deserializing it using Json.Net?
JsonConvert .NET Serialize/Deserialize Read Only
Serialize Property, but Do Not Deserialize Property in Json.Net
Also, it appears any approach will be inconsistent between serialization formats; i.e. unlike the [*Ignore] attributes, there are no [*SerializeOnly] attributes (where * = JSON, XML, YAML).
Is there an easy way to do this across these serialization formats? Is there a single family of attributes that can help? Or is it idiosyncratic and hacky in each case?
I have tested and applied this only to XML serialization, but it works for me:
When I want a property to be serialized, but not read back, I just declare an empty setter.
public String VersionOfApplicationThatHasWrittenThisFile
{
get
{
return "1.0";
}
set
{
// Leave empty
}
}
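To illustrate, here is a minimal sketch of that pattern in use (FileHeader and its Name property are invented for the example): the value is written into the XML, and the empty setter silently discards it when reading back.
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical class using the write-only pattern from the answer above.
public class FileHeader
{
    public string VersionOfApplicationThatHasWrittenThisFile
    {
        get { return "1.0"; }
        set { /* intentionally empty: value is ignored when reading back */ }
    }

    public string Name { get; set; }
}

public static class Demo
{
    public static void Run()
    {
        var serializer = new XmlSerializer(typeof(FileHeader));

        // The version is written out for inspection...
        string xml;
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new FileHeader { Name = "example" });
            xml = writer.ToString();
        }
        Console.WriteLine(xml); // contains a VersionOfApplicationThatHasWrittenThisFile element

        // ...and on deserialization the empty setter simply discards whatever was in the file.
        using (var reader = new StringReader(xml))
        {
            var back = (FileHeader)serializer.Deserialize(reader);
            Console.WriteLine(back.Name); // "example"
        }
    }
}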

Serializing a complex object with BinaryFormatter

I am trying to serialize a complex object which contains 2 lists of complex objects using the code below
public static byte[] SerializeObject(object obj)
{
    BinaryFormatter formatter = new BinaryFormatter();
    using (MemoryStream stream = new MemoryStream())
    {
        formatter.Serialize(stream, obj);
        return stream.ToArray();
    }
}
When I deserialize though I get NHibernate exceptions that my list objects failed to initialize, so I suspect that they haven't been serialized correctly in the first place. The error I receive is failure to lazily initialize a collection of some object, no session or session was closed.
But if they were properly serialized then there would not be a need to lazily initialize, they would already be there, right?
What may be happening here is that you are serializing NHibernate proxies for the collections. Depending on your mapping, for performance reasons, NHibernate will not load a collection up until the point you explicitly access its elements.
It's also able to do this for associations of various kinds (it's called 'lazy loading') and the way it works is that NHibernate actually instantiates and uses a proxy object that implements the correct interface (or derives from your classes in case of other associations).
You may already know all that, but I'm explaining it for context in case you don't.
If you need to know more about lazy loading check out this article: http://nhibernate.info/doc/howto/various/lazy-loading-eager-loading.html
In this case, NHibernate may be using a proxy for your lists, and since the BinaryFormatter accesses them in an unconventional way, the proxy is what you end up serializing.
If this is the case, there are many ways in which you could go ahead and fix it and they depend on how you are structuring your project.
A quick way to confirm whether this is the problem is to initialize the lazy properties before serializing your object (note that you need to do this for each one, or recursively, as the Initialize method only loads data for the proxy you pass to it):
NHibernateUtil.Initialize(yourObject);
NHibernateUtil.Initialize(yourObject.List1);
NHibernateUtil.Initialize(yourObject.OtherList);
...etc...
SerializeObject(yourObject);

Problems trying to pre-load objects to a text file for faster load times

I have been working on a Windows Forms control project to import into a 3rd party client application using their supplied SDK. The custom control I am trying to load, written by yet another company, requires signing on to a server before displaying information, which can take 20-30 seconds. In order to speed things up I had the idea of pre-loading the information needed by the control into a text file. Since it is not a known type, it throws errors when trying to serialize the class.
I have a Dictionary I am using to reference back to the proper ICamera class. If I change "cam" from an ICamera type to a string, for example "cam.GetLiveURL()", it writes the text file without issue. This is the code I am using to populate the Dictionary.
foreach (ICamera cam in _adapter.Cameras())
{
OCCamera.Add(cam.GetDisplayName(), cam);
}
I have tried XMLSerializer, and it seems it has difficulty dealing with a Dictionary.
I have tried BinaryFormatter and get the error:
Type 'OCAdapter.OCCamera' in Assembly 'OCAdapter.dll' is not marked as serializable.
I have tried DataContractSerializer and get the error:
Type 'OCAdapter.OCCamera' with data contract name
'OCCamera:http://schemas.datacontract.org/2004/07/OCAdapter' is not
expected. Consider using a DataContractResolver or add any types not
known statically to the list of known types - for example, by using
the KnownTypeAttribute attribute or by adding them to the list of known
types passed to DataContractSerializer.
I have tried playing around with the DataContractResolver and cannot seem to get it to work; I do not understand it at all.
The code I am using for the BinaryFormatter and DataContractSerializer are straight from MSDN or elsewhere, and test fine without the custom type.
Maybe there is a better way to handle all this, and I am missing it. I am not opposed to ditching the Dictionary route for something else, or I can rewrite any amount of other code to make this work.
Mistake 1: trying to serialize your implementation rather than the data.
Mistake 2: using BinaryFormatter... just about ever (except maybe AppDomain marshalling)
My advice: create a simple model ("DTO" model) that just represents the data you need, but not in terms of your specific implementation (no OCAdapter.OCCamera etc). You can construct this DTO model in whatever way is convenient for whatever serialization library you like. I'm partial to protobuf-net, but many others exist. Then map to/from your DTO model and your implementation model.
Advantages:
it'll work
changes to the implementation don't impact the data; it only impacts the mapping code
you can use just about any serializer you want
you can version the data sensibly
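A rough sketch of what that DTO approach could look like here, assuming protobuf-net (CameraDto, its LiveUrl property, and the file handling are invented for the example; ICamera, GetDisplayName, and GetLiveURL come from the question):
using System.Collections.Generic;
using System.IO;
using ProtoBuf;

// Hypothetical DTO holding only the data needed for a fast start-up.
[ProtoContract]
public class CameraDto
{
    [ProtoMember(1)] public string DisplayName { get; set; }
    [ProtoMember(2)] public string LiveUrl { get; set; }
}

public static class CameraCache
{
    public static void Save(string path, IEnumerable<ICamera> cameras)
    {
        // Map the SDK objects to DTOs; only the data crosses into the file.
        var dtos = new List<CameraDto>();
        foreach (ICamera cam in cameras)
        {
            dtos.Add(new CameraDto
            {
                DisplayName = cam.GetDisplayName(),
                LiveUrl = cam.GetLiveURL()
            });
        }

        using (var file = File.Create(path))
        {
            Serializer.Serialize(file, dtos);
        }
    }

    public static Dictionary<string, CameraDto> Load(string path)
    {
        using (var file = File.OpenRead(path))
        {
            var dtos = Serializer.Deserialize<List<CameraDto>>(file);

            // Rebuild the lookup from the DTOs; the live ICamera objects can be
            // re-attached by display name once the real sign-on completes.
            var lookup = new Dictionary<string, CameraDto>();
            foreach (var dto in dtos)
            {
                lookup[dto.DisplayName] = dto;
            }
            return lookup;
        }
    }
}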

Serialize and deserialize an object with a non-standard constructor - protobuf-net

I have an object which
does not have the Serializable attribute set
has properties whose types do not have the Serializable attribute set
that I do not have control over (meaning I cannot edit the class)
I tried reading THIS; it talks about substitution classes to fix this when using SharpSerializer, but frankly I don't understand how to do that when I don't know the properties of my object.
Are there any serialization frameworks that can do this?
Edit: I'm looking into protobuf.net
I cannot figure out how to get it to work in my scenario, though - I'm hoping Marc will swing by to save the day :)
I read this which is the exact same problem as mine, but I'm still getting
"Type is not expected and no contract can be inferred"
when using
private static byte[] ClienToBytes(IScsClient client)
{
using (var memoryStream = new MemoryStream())
{
RuntimeTypeModel.Default.Add(typeof(IScsClient), true).SetSurrogate(typeof(BinaryFormatterSurrogate<IScsClient>));
Serializer.Serialize(memoryStream, client);
return memoryStream.ToArray();
}
}
Am I using the RuntimeTypeModel wrong?
I would try protobuf-net. Take a look here:
http://code.google.com/p/protobuf-net/
Quote from Website:
protocol buffers is the name of the binary serialization format used by Google for much of their data communications. It is designed to be:
small in size - efficient data storage (far smaller than xml)
cheap to process - both at the client and server
platform independent - portable between different programming architectures
extensible - to add new data to old messages
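Since the constraint here is that the classes cannot be edited (so contract attributes cannot be added to them), the relevant protobuf-net feature is a surrogate registered on the RuntimeTypeModel, which is roughly what the question's code is attempting. A sketch follows; ExternalThing and ExternalThingSurrogate are invented names standing in for the real types.
using ProtoBuf;
using ProtoBuf.Meta;

// A third-party type we cannot edit (hypothetical stand-in).
public class ExternalThing
{
    public int Id { get; private set; }
    public ExternalThing(int id) { Id = id; }   // non-default constructor
}

// Surrogate: a plain contract type that protobuf-net can handle.
[ProtoContract]
public class ExternalThingSurrogate
{
    [ProtoMember(1)]
    public int Id { get; set; }

    // protobuf-net uses these conversion operators to swap the types in and out.
    public static implicit operator ExternalThingSurrogate(ExternalThing value)
    {
        return value == null ? null : new ExternalThingSurrogate { Id = value.Id };
    }

    public static implicit operator ExternalThing(ExternalThingSurrogate value)
    {
        return value == null ? null : new ExternalThing(value.Id);
    }
}

// Register the surrogate once (e.g. at application start-up), not inside every serialize call.
public static class ProtoConfig
{
    public static void Configure()
    {
        RuntimeTypeModel.Default.Add(typeof(ExternalThing), false)
                        .SetSurrogate(typeof(ExternalThingSurrogate));
    }
}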

Is there any point Unit testing serialization?

I have a class that serializes a set of objects (using XML serialization) that I want to unit test.
My problem is it feels like I will be testing the .NET implementation of XML serialization, instead of anything useful. I also have a slight chicken and egg scenario where in order to test the Reader, I will need a file produced by the Writer to do so.
I think the questions (there's 3 but they all relate) I'm ultimately looking for feedback on are:
Is it possible to test the Writer, without using the Reader?
What is the best strategy for testing the reader (XML file? Mocking with record/playback)? Is it the case that all you will really be doing is testing property values of the objects that have been deserialized?
What is the best strategy for testing the writer?
Background info on Xml serialization
I'm not using a schema, so all XML elements and attributes match the objects' properties. As there is no schema, tags/attributes which do not match the properties of each object are simply ignored by the XmlSerializer (so the property's value is null or default). Here is an example:
<MyObject Height="300">
<Name>Bob</Name>
<Age>20</Age>
</MyObject>
would map to
public class MyObject
{
public string Name { get;set; }
public int Age { get;set; }
[XmlAttribute]
public int Height { get;set; }
}
and vice versa. If the object changed to the one below, the XML would still deserialize successfully, but FirstName would be blank.
public class MyObject
{
public string FirstName { get;set; }
public int Age { get;set; }
[XmlAttribute]
public int Height { get;set; }
}
So an "invalid" XML file would still deserialize without error, and the unit test would pass unless you ran assertions on the values of the MyObject.
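To make that concrete, here is a minimal sketch (assuming MyObject is the second, renamed version above, and CompatibilityCheck is an invented wrapper) that feeds the old XML through XmlSerializer and shows the unknown element being silently dropped:
using System;
using System.IO;
using System.Xml.Serialization;

public static class CompatibilityCheck
{
    public static void Run()
    {
        // XML written by the old version of the class (which still had a Name element).
        const string oldXml =
            "<MyObject Height=\"300\"><Name>Bob</Name><Age>20</Age></MyObject>";

        var serializer = new XmlSerializer(typeof(MyObject));
        using (var reader = new StringReader(oldXml))
        {
            var obj = (MyObject)serializer.Deserialize(reader);

            Console.WriteLine(obj.FirstName == null); // True: <Name> has no match and is ignored
            Console.WriteLine(obj.Age);               // 20
            Console.WriteLine(obj.Height);            // 300
        }
    }
}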
Do you need to be able to do backward compatibility? If so, it may be worth building up unit tests of files produced by old versions which should still be able to be deserialized by new versions.
Other than that, if you ever introduce anything "interesting" it may be worth a unit test to just check you can serialize and deserialize just to make sure you're not doing something funky with a readonly property etc.
I would argue that it is essential to unit test serialization if it is vitally important that you can read data between versions. And you must test with "known good" data (i.e. it isn't sufficient to simply write data in the current version and then read it again).
You mention that you don't have a schema... why not generate one? Either by hand (it isn't very hard), or with xsd.exe. Then you have something to use as a template, and you can verify this just using XmlReader. I'm doing a lot of work with xml serialization at the moment, and it is a lot easier to update the schema than it is to worry about whether I'm getting the data right.
Even XmlSerializer can get complex; particularly if you involve subclasses ([XmlInclude]), custom serialization (IXmlSerializable), or non-default XmlSerializer construction (passing additional metadata at runtime to the ctor). Another possibility is creative use of [XmlIgnore], [XmlAnyAttribute] or [XmlAnyElement]; for example you might support unexpected data for round-trip (only) in version X, but store it in a known property in version Y.
With serialization in general:
The reason is simple: you can break the data! How badly you do this depends on the serializer; for example, with BinaryFormatter (and I know the question is XmlSerializer), simply changing from:
public string Name {get;set;}
to
private string name;
public string Name {
get {return name;}
set {name = value; OnPropertyChanged("Name"); }
}
could be enough to break serialization, as the field name has changed (and BinaryFormatter loves fields).
There are other occasions when you might accidentally rename the data (even in contract-based serializers such as XmlSerializer / DataContractSerializer). In such cases you can usually override the wire identifiers (for example [XmlAttribute("name")] etc), but it is important to check this!
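As a small sketch of what overriding the wire identifiers can look like with XmlSerializer (MyObjectV2 is an invented name; the point is that the element name stays stable while the property is renamed in code):
using System.Xml.Serialization;

public class MyObjectV2
{
    // Property renamed in code, but the element written to (and read from)
    // the XML is still <Name>, so existing files keep deserializing correctly.
    [XmlElement("Name")]
    public string FirstName { get; set; }

    public int Age { get; set; }

    [XmlAttribute]
    public int Height { get; set; }
}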
Ultimately, it comes down to: is it important that you can read old data? It usually is; so don't just ship it... prove that you can.
For me, this is absolutely in the Don't Bother category. I don't unit test my tools. However, if you wrote your own serialization class, then by all means unit test it.
If you want to ensure that the serialization of your objects doesn't break, then by all means unit test. If you read the MSDN docs for the XMLSerializer class:
The XmlSerializer cannot serialize or deserialize the following: arrays of ArrayList and arrays of List<T>.
There is also a peculiar issue with enums declared as unsigned longs. Additionally, any objects marked as [Obsolete] do not get serialized from .NET 3.5 onwards.
If you have a set of objects that are being serialized, testing the serialization may seem odd, but it only takes someone to edit the objects being serialized to include one of the unsupported conditions for the serialisation to break.
In effect, you are not unit testing XML serialization, you are testing that your objects can be serialized. The same applies for deserialization.
Yes, as long as what needs to be tested is properly tested, through a bit of intervention.
The fact that you're serializing and deserializing in the first place means that you're probably exchanging data with the "outside world" -- the world outside the .NET serialization domain. Therefore, your tests should have an aspect that's outside this domain. It is not OK to test the Writer using the Reader, and vice versa.
It's not only about whether you would just end up testing the .NET serialization/deserialization; you have to test your interface with the outside world -- that you can output XML in the expected format and that you can properly consume XML in the anticipated format.
You should have static XML data that can be used to compare against serialization output and to use as input data for deserialization.
Assume you give the job of note taking and reading the notes back to the same guy:
You - Bob, I want you to jot down the following: "small yellow duck."
Bob - OK, got it.
You - Now, read it back to me.
Bob - "small yellow duck"
Now, what have we tested here? Can Bob really write? Did Bob even write anything or did he memorize the words? Can Bob actually read? -- his own handwriting? What about another person's handwriting? We don't have answers to any of these questions.
Now let's introduce Alice to the picture:
You - Bob, I want you to jot down the following: "small yellow duck."
Bob - OK, got it.
You - Alice, can you please check what Bob wrote?
Alice - OK, he's got it.
You - Alice, can you please jot down a few words?
Alice - Done.
You - Bob, can you please read them?
Bob - "red fox"
Alice - Yup, that sounds right.
We now know, with certainty, that Bob can write and read properly -- as long as we can completely trust Alice. Static XML data (ideally tested against a schema) should sufficiently be trustworthy.
In my experience it is definitely worth doing, especially if the XML is going to be used as an XML document by the consumer. For example, the consumer may need to have every element present in the document, either to avoid null checking of nodes when traversing or to pass schema validation.
By default the XML serializer will omit properties with a null value unless you add the [XmlElement(IsNullable = true)] attribute. Similarly, you may have to redirect generic list properties to standard arrays with an [XmlArray] attribute.
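A minimal sketch of the IsNullable behaviour (Order, Comment, and DeliveryNote are invented names):
using System.Xml.Serialization;

public class Order
{
    // Omitted from the XML entirely when null.
    public string Comment { get; set; }

    // Always present; written as <DeliveryNote xsi:nil="true" /> when null.
    [XmlElement(IsNullable = true)]
    public string DeliveryNote { get; set; }
}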
As another contributor said, if the object is changing over time, you need to continuously check that the output is consistent. It will also protect you against the serializer itself changing and not being backwards compatible, although you'd hope that this doesn't happen.
So for anything other than trivial uses, or where the above considerations are irrelevant, it is worth the effort of unit testing it.
There are a lot of types that serialization cannot cope with, etc. Also, if you have your attributes wrong, it is common to get an exception when trying to read the XML back.
I tend to create an example tree of the objects that can be serialized, with at least one example of each class (and subclass). Then, at a minimum, I serialize the object tree to a memory stream and read it back from that stream.
You would be amazed at the number of times this catches a problem and saves me from having to wait for the application to start up to find it. This level of unit testing is more about speeding up development than increasing quality, so I would not bother adding it for serialization that is already working.
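A minimal sketch of that round-trip check (SerializationSmokeTest is an invented helper; the same idea drops straight into an NUnit or MSTest method):
using System.IO;
using System.Xml.Serialization;

public static class SerializationSmokeTest
{
    // Serialize the example object tree to memory and immediately read it back.
    // Mis-configured attributes or unsupported types throw here, failing the test
    // without having to start the full application.
    public static T RoundTrip<T>(T graph)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, graph);
            stream.Position = 0;
            return (T)serializer.Deserialize(stream);
        }
    }
}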
As other people have said, if you need to be able to read back data saved by old versions of your software, you had better keep a set of example data files for each shipped version and have tests to confirm you can still read them. This is harder than it seems at first, as the meaning of fields on an object may change between versions, so just being able to create the current object from an old serialized file is not enough; you have to check that the meaning is the same as it was in the version of the software that saved the file. (Put a version attribute in your root object now!)
I agree with you that you will be testing the .NET implementation more than you'll be testing your own code. But if that's what you want to do (perhaps you don't trust the .NET implementation :) ), I might approach your three questions as follows.
1. Yes, it's certainly possible to test the Writer without the Reader. Use the writer to serialize the example (20-year-old Bob) you provided to a MemoryStream. Open the MemoryStream with an XmlDocument. Assert the root node is named "MyObject". Assert it has one attribute named "Height" with value "300". Assert there is a "Name" element containing a text node with value "Bob". Assert there is an "Age" element containing a text node with value "20".
2. Just do the reverse process of #1. Create an XmlDocument from the 20-year-old Bob XML string. Deserialize it with the reader. Assert the Name property equals "Bob". Assert the Age property equals 20. You can do things like add a test case with insignificant whitespace, or single quotes instead of double quotes, to be more thorough.
3. See #1. You can extend it by adding what you consider to be tricky "edge" cases you think could break it. Names with various Unicode characters. Extra-long names. Empty names. Negative ages. Etc.
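A sketch of point 1 (using the first MyObject definition from the question; WriterTests is an invented wrapper and the throw-style checks stand in for whichever Assert API your test framework provides):
using System;
using System.IO;
using System.Xml;
using System.Xml.Serialization;

public static class WriterTests
{
    public static void SerializedXmlHasExpectedShape()
    {
        var bob = new MyObject { Name = "Bob", Age = 20, Height = 300 };

        var serializer = new XmlSerializer(typeof(MyObject));
        string xml;
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, bob);
            xml = writer.ToString();
        }

        var doc = new XmlDocument();
        doc.LoadXml(xml);
        XmlElement root = doc.DocumentElement;

        // These checks stand in for your test framework's Assert calls.
        if (root.Name != "MyObject") throw new Exception("unexpected root element");
        if (root.GetAttribute("Height") != "300") throw new Exception("unexpected Height attribute");
        if (root["Name"].InnerText != "Bob") throw new Exception("unexpected Name element");
        if (root["Age"].InnerText != "20") throw new Exception("unexpected Age element");
    }
}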
I have done this in some cases... not testing the serialisation as such, but using some 'known good' XML serializations and then loading them into my classes, and checking that all the properties (as applicable) have the expected values.
This is not going to test anything for the first version... but if the classes ever evolve I know I will catch any breaking changes in the format.
We do acceptance testing of our serialization rather than unit testing.
What this means is that our acceptance testers take the XML schema, or as in your case some sample XML, and re-create their own serializable data-transfer class.
We then use NUnit to test our WCF service with this clean-room XML.
With this technique we've identified many, many errors. For example, where we have changed the name of a .NET member and forgotten to add an [XmlElement] attribute with a Name= property to preserve the original element name.
If there's nothing you can do to change the way your class serializes, then you're testing .NET's implementation of XML serialization ;-)
If the format of the serialized XML matters, then you need to test the serialization. If it's important that you can deserialize it, then you need to test deserialization.
Seeing how you can't really fix serialization, you shouldn't be testing it - instead, you should be testing your own code and the way it interacts with the serialization mechanism. For example, you might need to unit-test the structure of the data you're serializing to make sure that no-one accidentally changes a field or something.
Speaking of which, I have recently adopted a practice where I check such things at compile-time rather than during execution of unit tests. It's a bit tedious, but I have a component that can traverse the AST, and then I can read it in a T4 template and write lots of #error messages if I meet something that shouldn't be there.
