XmlSerializer produces slightly different XML on different machines - c#

I have an old C# .NET 4 project that needed maintenance recently, and one of the unit tests, which compares a serialized object to a baseline ("staple") XML string, started failing because of a slight namespace difference: the comparison string had xmlns:xsd where the serialized object had xmlns:xsi.
The weird thing is that the same unit test works fine on a different machine.
I'm rather clueless as to why this could happen.

Change your tests. You shouldn't compare XML documents as strings.
<a i="1" b="2"/>
is the same as
<a b="2" i="1"/>
If you compare them as strings they will be different, but as XML they are identical.
This post should help you.
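For example, here is a minimal sketch of comparing the two documents as XML inside the test, using LINQ to XML; the helper class and method names are made up for illustration. Sorting attributes by name before comparing also makes the order of xmlns:xsd / xmlns:xsi declarations irrelevant, since namespace declarations are attributes too.
using System.Linq;
using System.Xml.Linq;

static class XmlTestHelper
{
    // Rebuild the element with attributes sorted by name and children
    // normalized recursively, so attribute/declaration order no longer matters.
    public static XElement Normalize(XElement element)
    {
        return new XElement(element.Name,
            element.Attributes().OrderBy(a => a.Name.ToString()),
            element.Elements().Select(Normalize),
            element.Nodes().OfType<XText>().Select(t => t.Value));
    }

    public static bool AreEquivalent(string expected, string actual)
    {
        return XNode.DeepEquals(Normalize(XElement.Parse(expected)),
                                Normalize(XElement.Parse(actual)));
    }
}

// XmlTestHelper.AreEquivalent("<a i='1' b='2'/>", "<a b='2' i='1'/>") returns true.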

Don't know if it's your case, but when I had to make some WCF services backward compatible with older ASMX web services, it turned out that XmlSerializer behaves differently depending on the version of the .NET Framework, causing exceptions when trying to deserialize objects or re-read the XML.
It makes sense that an old application, working on an old machine with an old version of the framework, might act differently on the new machine, because even if the project targets an older framework, the newer one installed on the machine is used to handle some parts, such as the XML.
To add further detail, I think the difference came with the move from .NET Framework 2.0 to the newer versions (3.0, 3.5, etc.). But I'm not sure of this, since quite some time has passed.
I repeat, I don't know if this is your case, but it might be a reason. One test would be to upgrade the framework on the older machine to the same version as the newer machine and check whether it produces the same output, then do a system rollback to the state just before the installation of the newer framework.
Check this: this, absolutely THIS, and this.
Let me know if this was useful.
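If the difference really does come from the framework version, one way to make the output less fragile is to pin the namespace declarations yourself when serializing, rather than relying on XmlSerializer's defaults. Below is a minimal sketch, assuming a hypothetical serializable type called Invoice; whether the emitted declaration order is fully deterministic across framework versions is worth verifying, so comparing as XML (as suggested above) remains the safer test.
using System.IO;
using System.Xml.Serialization;

[XmlRoot("Invoice")]
public class Invoice
{
    public decimal Total { get; set; }
}

public static class SerializationExample
{
    public static string SerializeWithPinnedNamespaces(Invoice value)
    {
        // Declare exactly the prefixes we expect, instead of letting the
        // serializer decide which of xsd/xsi to emit.
        var ns = new XmlSerializerNamespaces();
        ns.Add("xsi", "http://www.w3.org/2001/XMLSchema-instance");
        ns.Add("xsd", "http://www.w3.org/2001/XMLSchema");

        var serializer = new XmlSerializer(typeof(Invoice));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, value, ns);
            return writer.ToString();
        }
    }
}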

Related

protobuf-net version 2.X to 3.X migration

I am updating my protobuf-net library reference, specifically from 2.4.4 to 3.0.101. Previously, we used nulls in lists because they carry meaningful information for the business (e.g., new[] { "one", "two", null, null, "five" }). However, as far as I understand, they are not supported in 3.x yet (https://protobuf-net.github.io/protobuf-net/releasenotes#).
Is there a suggested migration strategy for collections with nulls?
I can mitigate the change going forward with additional fields (e.g., transposing the collection to a dictionary and back again on serializing/deserializing); however, backwards compatibility for data serialized with the 2.x libraries seems broken. Are there any migration guides?
Given that 3.x doesn't yet support null retention, your options are somewhat limited:
Submit a PR to add the missing feature. A quick glance through the protobuf-net source code makes me think that it would be fairly trivial to implement. At line 161 of the linked source file there seems to be a throw for nulls, which is where I'd start. I could be very wrong about how complicated this would be, though.
See if you can use both libraries concurrently. You would need to know (or detect) whether data to serialize is in v2 or v3 format (I have not checked but would be surprised if there wasn't a way to detect this by looking at the first few bytes). You may need to compile a custom version to give it a different namespace, in order for the two to co-exist.
Migrate data to v3. You can do this as a one-off operation (smaller amounts of data that you control) or on demand (large amounts of data or externally received data). You'll need to re-design the types used in lists with nulls so that you no longer have nulls, e.g. by having a custom value or shape that logically represents null (see the sketch after this list).
Stay with v2. It's stable and works exceptionally well, so unless you have a specific need to upgrade it may not be worth the effort.
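As a concrete illustration of the "transpose to a dictionary" / custom-representation idea mentioned above (for data going forward only; it does not make old 2.x payloads readable), here is a rough sketch with made-up type and member names:
using System.Collections.Generic;
using System.Linq;
using ProtoBuf;

[ProtoContract]
public class NamesDto
{
    // Original list length, so trailing nulls can be reconstructed.
    [ProtoMember(1)]
    public int Count { get; set; }

    // Only the non-null entries, keyed by their original index.
    [ProtoMember(2)]
    public Dictionary<int, string> NonNull { get; set; } = new Dictionary<int, string>();

    public static NamesDto FromList(IList<string> names)
    {
        return new NamesDto
        {
            Count = names.Count,
            NonNull = names.Select((value, index) => new { value, index })
                           .Where(x => x.value != null)
                           .ToDictionary(x => x.index, x => x.value),
        };
    }

    public List<string> ToList()
    {
        var list = Enumerable.Repeat<string>(null, Count).ToList();
        foreach (var pair in NonNull)
            list[pair.Key] = pair.Value;
        return list;
    }
}
// FromList(new[] { "one", "two", null, null, "five" }).ToList() round-trips the nulls.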

I'm running into some odd .NET behavior, is there something I may be missing?

I have an application written for work (sorry, proprietary code, so I can't share much of it), written with VS2015 C# on Windows 7. It operates fine on my PC, as it should, but when it's deployed, not all of the other machines react properly.
It started out targeting .NET 4.5.2 and was later moved to .NET 4.6.2, but it still suffers the same issues: it installs and starts to run, but immediately throws a startup exception on some machines, while others have no problems.
As I stated, it doesn't happen on every machine it's installed on (the current images on the machines have either .NET 4.6.2 or 4.7). My workaround so far has been to uninstall the currently installed version, go all the way back to 4.5, and incrementally install each newer version, which apparently clears the problem (I usually make sure it's 4.7 or 4.7.1 when I'm finished). Of course, this eats time (about an hour).
So what I need is to better understand why some machines, and not others, are affected. More importantly, what areas should I focus on to find the fault that may be causing the issue? That way, I can make it easier for others to either fix, or better yet not need to fix, the install.
The method throwing the "Object reference not set to an instance of an object" exception is:
// ORIGINAL SOURCE: http://www.codeproject.com/Articles/63147/Handling-database-connections-more-easily
// ORIGINAL ARTICLE: Handling database connections more easily
// ORIGINAL AUTHOR: Alaric Dailey
// PUBLISHED: 2011/07/11
//
// LICENSE: CPOL 1.02 - http://www.codeproject.com/info/cpol10.aspx
private string QuoteSuffix
{
    get
    {
        // Create it if it doesn't already exist
        if (string.IsNullOrEmpty(quotePrefix))
        {
            quoteSuffix = CommandBuilder.QuoteSuffix;
            if (string.IsNullOrEmpty(quoteSuffix))
                quoteSuffix = "\"";
            quoteSuffix = quoteSuffix.Trim();
        }
        // send it out
        return (quoteSuffix);
    }
}
I assume that for some reason CommandBuilder isn't being initialized properly on the installs that fail to run. (The code is compiled as part of the build, not as a library or DLL)
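For what it's worth, here is a hypothetical guard (not from the question's code base) that would at least turn the bare NullReferenceException on the affected machines into a message saying which dependency was missing:
using System;
using System.Data.Common;

static class QuoteHelper
{
    // Fail loudly if the command builder was never created, instead of
    // letting the QuoteSuffix getter throw a NullReferenceException.
    public static string GetQuoteSuffix(DbCommandBuilder builder)
    {
        if (builder == null)
            throw new InvalidOperationException(
                "CommandBuilder was not initialized; check the data provider configuration on this machine.");

        var suffix = builder.QuoteSuffix;
        return string.IsNullOrEmpty(suffix) ? "\"" : suffix.Trim();
    }
}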

Protocol buffers, getting C# to talk to C++ : type issues and schema issues

I am about to embark on a project to connect two programs, one in C# and one in C++. I already have a working C# program, which is able to talk to other versions of itself. Before I start on the C++ version, I've thought of some issues:
1) I'm using protobuf-net v1. I take it the .proto files from the serializer are exactly what is required as templates for the C++ version? A Google search mentioned something about Pascal casing, but I have no idea if that's important.
2) What do I do if one of the .NET types does not have a direct counterpart in C++? What if I have a decimal or a Dictionary? Do I have to modify the .proto files somehow and squish the data into a different shape? (I shall examine the files and see if I can figure it out.)
3) Are there any other gotchas that people can think of? Binary formats and things like that?
EDIT
I've had a look at one of the proto files now. It seems the .NET-specific stuff is tagged, e.g. bcl.DateTime or bcl.Decimal. Subtypes are included in the proto definitions. I'm not sure what to do about the bcl types, though. If my C++ program sees a decimal, what will it do?
Yes, the proto files should be compatible. The casing is about conventions, which shouldn't affect actual functionality - just the generated code etc.
It's not whether there's a directly comparable type in .NET that's important - it's whether protocol buffers support the type that's important. Protocol buffers are mostly pretty primitive - if you want to build up anything bigger, you'll need to create your own messages.
The point of protocol buffers is to make it all binary compatible on the wire, so there really shouldn't be gotchas... read the documentation to find out about versioning policies etc. The only thing I can think of is that, in the Java version at least, it's a good idea to make enum fields optional and to give the enum type itself a zero value of "unknown", which will be used if you try to deserialize a new value that the deserializing code doesn't support yet.
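One way to "squish the data into a different shape", sketched here with a made-up LineItem type: instead of letting protobuf-net emit its bcl.Decimal sub-message, expose the value as a scaled integer, which maps to a plain int64 that a C++ program can read without any .NET-specific helpers.
using ProtoBuf;

[ProtoContract]
public class LineItem
{
    // Not serialized directly; decimal has no native protobuf counterpart.
    [ProtoIgnore]
    public decimal Price { get; set; }

    // Serialized instead of Price; an int64 on the wire (here: hundredths),
    // which the C++ side can consume as an ordinary integer field.
    [ProtoMember(1)]
    public long PriceCents
    {
        get { return (long)(Price * 100m); }
        set { Price = value / 100m; }
    }
}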
Some minor additions to Jon's points:
protobuf-net v1 does have a GetProto, which may help as a starting point; however, for interop purposes I would recommend starting from a .proto. protobuf-net can work this way around too, either via "protogen" or via the VS add-in
other than that, you shouldn't have many issues as long as you remember to treat all files as binary; opening files in text mode will cause grief
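To illustrate the "treat all files as binary" point, here is a minimal round-trip sketch (reusing the hypothetical LineItem contract from the sketch above): write and read through raw streams, never through StreamReader/StreamWriter or string conversions.
using System.IO;
using ProtoBuf;

class BinaryRoundTrip
{
    static void Main()
    {
        // Write with a raw binary stream.
        using (var file = File.Create("item.bin"))
        {
            Serializer.Serialize(file, new LineItem { Price = 12.34m });
        }

        // Read it back the same way.
        LineItem roundTripped;
        using (var file = File.OpenRead("item.bin"))
        {
            roundTripped = Serializer.Deserialize<LineItem>(file);
        }

        System.Console.WriteLine(roundTripped.Price); // 12.34
    }
}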

How to validate two xml files are similar (but ignore the element and attribute order)?

For the purposes of unit testing, I would like to validate that two xml files contain the same data, but ignore the order of the elements or attributes.
I am currently using MbUnit.Framework.Xml.XmlAssert.XmlEquals, and it seems to have a few options but I can't find any documentation. It returns false if the element order is different.
This is a C# project.
Try using Microsoft's XML Diff and Patch Tool.
In addition to the XML Diff and Patch API, you may be interested in taking a look at the Windows Forms code sample that implements the tool - XML Diff and Patch GUI Tool (The API's dll is included in this download).
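For example, here is a minimal sketch using the XmlDiff class from that package, assuming its options enum works as documented; the file names are placeholders.
using Microsoft.XmlDiffPatch;

class XmlDiffExample
{
    static void Main()
    {
        // Ignore the ordering of child elements (and some other noise) when comparing.
        var diff = new XmlDiff(XmlDiffOptions.IgnoreChildOrder |
                               XmlDiffOptions.IgnoreComments |
                               XmlDiffOptions.IgnoreWhitespace);

        bool identical = diff.Compare("expected.xml", "actual.xml", false);
        System.Console.WriteLine(identical); // true => same data, regardless of element order
    }
}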
A while back I was happily using XMLUnit for these kinds of problems (http://xmlunit.sourceforge.net/); I'm not sure about the .NET side of it, or whether it is still kept up to date, etc.

Migrating C# code from Cassandra .5 to .6

I have some simple code, derived from an example, that is meant to do a quick write to the Cassandra DB and then loop back and read all current entries; everything worked fine. When 0.6 came out, I upgraded Cassandra and Thrift, which threw errors in my code (www[dot]copypastecode[dot]com/26760/). I was able to iron out the errors by converting the necessary types; however, the version that compiles now only seems to read one item back, and I'm not sure whether it isn't saving the DB changes or is only reading back one entry. The "fixed" code is here: http://www.copypastecode.com/26752/. Any help would be greatly appreciated.
First of all, let me say that you should definitely use TBufferedStream instead of a bare TSocket for the TBinaryProtocol; that will have a huge impact on your application's performance.
According to the Apache Thrift API documentation, the BATCH_INSERT method is deprecated, so it could have introduced a misleading bug where that operation actually only inserts the first column. That said, why don't you try to use BATCH_MUTATE instead?
By the way, why are you trying to use Thrift directly? There are some nice C# clients for Cassandra that actually perform really well. You can find the whole list at http://wiki.apache.org/cassandra/ClientOptions.
I'm the author of one of them; it is kept pretty much up to date with Apache and is being used by some companies in production environments. Take a look at my homepage.
