The answer to just about every single question about using C# with JSON seems to be "use JSON.NET", but that's not the answer I'm looking for.
The reason I say that is, from everything I've been able to read in the documentation, JSON.NET is basically just a better performing version of the DataContractSerializer built into the .NET framework...
Which means if I want to deserialize a JSON string, I have to define the full, strongly-typed class for EVERY request I might have. So if I have a need to get categories, posts, authors, tags, etc., I have to define a new class for every one of these things.
This is fine if I built the client and know exactly what the fields are, but I'm using someone else's API, so I have no idea what the contract is unless I download a sample response string and create the class manually from the JSON string.
Is that the only way it's done? Is there not a way to have it create a kind of hashtable that can be read with json["propertyname"]?
Finally, if I do have to build the classes myself, what happens when the API changes and they don't tell me (as Twitter seems to be notorious for doing)? I'm guessing my entire project will break until I go in and update the object properties...
So what exactly is the general workflow when working with JSON? And by general I mean library-agnostic. I want to know how it's done in general, not specifically to a target library...
It is very hard to be library-agnostic as you request, because how you work with JSON really depends on the library you use. As an example, inside JSON.NET there are multiple ways to work with JSON. There is the method you describe, direct serialization into objects: that is type safe, but it will break if the data from your API changes. However, there is also LINQ-to-JSON, which gives you a JObject (it behaves fairly similarly to XElement) and lets you write JObject["key"] just as you asked for in your question. If you are really just looking for a flexible way to work with JSON inside C#, check out JSON.NET's LINQ-to-JSON.
In reality, no matter how you do it, if the API changes your code is likely to break. Even if you take a strictly hashtable-based approach, your code is still likely to break if the data coming back changes.
Edit
JSON.NET Documentation
Examples
If you check out the examples, the second one should give you a good example of how LINQ-to-JSON works. It allows you to work with it without defining any classes. Everything gets converted to standard framework classes (mostly collections and strings). This avoids the need to maintain classes.
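To make that concrete, here is a minimal sketch of the LINQ-to-JSON style; the JSON string and the property names in it are invented purely for illustration:

    using System;
    using Newtonsoft.Json.Linq;

    class LinqToJsonExample
    {
        static void Main()
        {
            // No classes defined up front -- just parse the string and index into it.
            string json = "{ \"title\": \"Hello\", \"tags\": [ \"csharp\", \"json\" ] }";

            JObject post = JObject.Parse(json);

            string title = (string)post["title"];      // "Hello"
            string firstTag = (string)post["tags"][0]; // "csharp"

            Console.WriteLine(title + " / " + firstTag);
        }
    }

If the API adds fields you never look at, nothing breaks; if it renames a field you do look at, the indexer simply gives you back null to check for (or throws when you cast to a value type).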
I've been a Perl developer for over a decade, and I've just recently started to work in C#. I'm surprised by how much I like it (I don't like Java at all) but one of the most difficult cognitive switches is going from "Everything can be treated as a string and the language takes care of conversions" to "Pre-define your types." In this case string-thinking might be an advantage, because it's what you need to do for the kind of API you're asking for.
You need to write a JSON parser that understands the syntax, which is fairly simple: comma-separated lists, key/value pairs, {} for hashes/objects, [] for arrays, and quoting/escaping constructs. You'll probably want to start with a Hashtable, because the top-level entity in JSON is normally an object (though it can also be an array), then scan the JSON string character by character. Pull out key/value pairs; if the value starts with { then add it as a new Hashtable, if it starts with [ add it as a new ArrayList, otherwise add it as a string. Whenever you hit { or [ you'll need to recursively descend to add the child data elements.
If .NET ships with a good recursive-descent parsing library, you could probably use that to make the job simpler or more robust, but JSON is simple enough that this is a good and reasonably completable exercise.
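A deliberately stripped-down sketch of that recursive-descent idea is below. It only understands objects, arrays, and plain double-quoted strings (no escapes, numbers, booleans, or nulls) and does no real error handling, so treat it as a starting point for the exercise rather than a working parser:

    using System;
    using System.Collections;

    class MiniJsonParser
    {
        private readonly string _s;
        private int _pos;

        private MiniJsonParser(string s) { _s = s; }

        public static object Parse(string json)
        {
            return new MiniJsonParser(json).ParseValue();
        }

        private object ParseValue()
        {
            SkipWhitespace();
            switch (_s[_pos])
            {
                case '{': return ParseObject();
                case '[': return ParseArray();
                case '"': return ParseString();
                default: throw new FormatException("Unexpected character at " + _pos);
            }
        }

        private Hashtable ParseObject()
        {
            var result = new Hashtable();
            _pos++; // consume '{'
            SkipWhitespace();
            while (_s[_pos] != '}')
            {
                string key = ParseString();
                SkipWhitespace();
                _pos++; // consume ':'
                result[key] = ParseValue();
                SkipWhitespace();
                if (_s[_pos] == ',') { _pos++; SkipWhitespace(); }
            }
            _pos++; // consume '}'
            return result;
        }

        private ArrayList ParseArray()
        {
            var result = new ArrayList();
            _pos++; // consume '['
            SkipWhitespace();
            while (_s[_pos] != ']')
            {
                result.Add(ParseValue());
                SkipWhitespace();
                if (_s[_pos] == ',') { _pos++; SkipWhitespace(); }
            }
            _pos++; // consume ']'
            return result;
        }

        private string ParseString()
        {
            SkipWhitespace();
            _pos++; // consume opening '"'
            int start = _pos;
            while (_s[_pos] != '"') _pos++;
            string value = _s.Substring(start, _pos - start);
            _pos++; // consume closing '"'
            return value;
        }

        private void SkipWhitespace()
        {
            while (_pos < _s.Length && char.IsWhiteSpace(_s[_pos])) _pos++;
        }
    }

Calling MiniJsonParser.Parse on something like {"name": "widgets", "tags": ["a", "b"]} gives you back a Hashtable containing strings, nested Hashtables and ArrayLists that you can index into with result["name"].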
Related
I am building a C# integration tool, but I am having some trouble figuring out whether I should create different classes for the data that I am receiving from different REST requests to the source application. The responses are similar in that the constructs are the same, but they carry different information. E.g. they would all have an "Attributes" tag, but the attributes may vary per class. At the same time, about 60% or more of the attributes are the same.
It looks like they reused the same constructs, but depending on the data, there may be more things in the result.
My question is: what is the best practice when creating the classes for the JSON deserialisation? Do you create multiple classes with the same name and same content (in different namespaces), or do you combine them into a "generic" data type and just include the "extra" attributes, even though they won't all be used by any one object?
The assumption is that the "null" values will not be considered in the deserialisation. Thus "extra" fields defined will just be ignored if not found.
The problem comes in the Classes where I would like to be able to define DataType1 and DataType2, but when combining the classes this becomes a problem...
Would like to hear your thoughts :)
Rgs,
Francois
Personally I prefer to deserialize into generic classes (lists and dictionaries, or whatever your deserialization library offers) and then manually copy the data into whatever data structures I use internally. Most of the time the "deserialization classes" really are used just for deserialization, and right after that the data is copied into structures that don't match the deserialization ones anyway, so there's very little value in maintaining them.
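As an illustration of that workflow, here is a sketch using Json.NET (which this thread already mentions); the Category type and the "id"/"name" field names are invented, so substitute whatever your API actually returns:

    using System;
    using System.Collections.Generic;
    using Newtonsoft.Json;

    // Hypothetical internal type -- not a "deserialization class".
    class Category
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    class Example
    {
        static Category ReadCategory(string json)
        {
            // Step 1: deserialize into a generic structure...
            var raw = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);

            // Step 2: ...then copy the values by hand into the internal model.
            return new Category
            {
                Id = Convert.ToInt32(raw["id"]),
                Name = (string)raw["name"]
            };
        }
    }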
I have two separate programs that need to share information. This sharing will be done by one app placing an XML serialized object in a database, and the other app retrieving it on a different machine. The objects share the same variables but the properties and methods are different.
How exact do the classes have to match between the two programs?
Is the match line by line or just variable, property, and method names?
I ended up using the Newtonsoft.Json library instead of XML, and used the <JsonObject(MemberSerialization.OptIn)> and JsonProperty() attributes to control what got serialized.
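For anyone reading along, the C# equivalent of that opt-in setup looks roughly like this (the class and member names here are just examples):

    using System;
    using Newtonsoft.Json;

    // Only members explicitly marked with [JsonProperty] are serialized.
    [JsonObject(MemberSerialization.OptIn)]
    public class SharedMessage
    {
        [JsonProperty]
        public int Id { get; set; }

        [JsonProperty("createdOn")]   // optionally rename the JSON field
        public DateTime Created { get; set; }

        // Ignored: no [JsonProperty] attribute.
        public string LocalCacheKey { get; set; }
    }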
You did not specify which kind of serialization you were after.
The standard .NET binary serializer is not well suited for data exchange between two different assemblies. When you go to deserialize, you'll get an error along the lines of [Culture].[Assembly].[Version].SourceClass cannot be deserialized to [Culture].[Assembly].[Version].DestClass. This will happen even if the classes are identical.
There are several ways around this:
A) use the same DLL on both sides to do the serializing;
B) trick it into deserializing by using an override that reports a matching Culture-Assembly-Version-Class, but that seems dodgy; or
C) use XML serialization, which makes for very wordy output but is at least human-readable.
For binary serialization, rather than the .NET binary formatter, there is protobuf-net, which is faster, produces much smaller output, and uses nearly identical syntax.
How exact do the classes have to match between the two programs
ProtoBuf uses a numeric index rather than the property name, so the classes shouldn't have to match too closely. Of course there has to be some similarity, or the destination won't have a clue what the data represents. The code in the classes can be quite different, because code is never serialized; it stays put and only the data travels.
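A minimal sketch of what that looks like with protobuf-net; the class and member names are illustrative, and the only thing both programs really have to agree on is the numeric tags and the data types:

    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class SharedDocument
    {
        [ProtoMember(1)]   // identified on the wire by the tag 1, not by the name "Id"
        public int Id { get; set; }

        [ProtoMember(2)]
        public string Title { get; set; }
    }

    // Usage on either side:
    //   Serializer.Serialize(stream, doc);
    //   var doc = Serializer.Deserialize<SharedDocument>(stream);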
Serialization stores only the data for an object - member variables, properties, etc. As long as the data types are compatible, it should work. You do not need a line by line match for the functions.
It all depends on the serializer you are using. Some require a perfect match, others tend to be more loosely coupled to the objects.
How exact do the classes have to match between the two programs?
Well, not at all. But they should be similar in some way because otherwise the serialization doesn't make sense.
Is the match line by line or variables and method names?
As stated above, there must be some overlap. Usually the property names must be the same, but of course you can also provide a custom mapping.
Take a look at the Newtonsoft library; you can use it (for JSON) like this:
JsonConvert.DeserializeObject<IEnumerable<Unit>>(result);
It's independent of whatever method the other side used to serialize the string.
I want to build a Visual Studio plugin that automatically annotates classes for serialization. For example, for the built-in binary serializer I could just add [Serializable] to the class declaration; for WCF it could add [DataContract] to the class and [DataMember] to the members and properties (I could get [KnownType] information through reflection and annotate where appropriate). If using protocol buffers it could add [ProtoContract], [ProtoMember] and [ProtoInclude] attributes, and so on.
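For example, the transformation I have in mind would take a plain class and produce something like this (the class itself is just an illustration), annotated here for both WCF and protocol buffers:

    using System.Runtime.Serialization;
    using ProtoBuf;

    [DataContract]
    [ProtoContract]
    public class Customer
    {
        [DataMember]
        [ProtoMember(1)]
        public int Id { get; set; }

        [DataMember]
        [ProtoMember(2)]
        public string Name { get; set; }
    }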
I am assuming that the classes we are going to use this on are safe to serialize (so no sockets or other non-serializable stuff in there). What I want to know is: what is the easiest way to take an existing piece of code (or a binary, if that's easier) and add those attributes while preserving the rest of the code intact? I am fine with the output being source code or binary.
The idea that comes to mind is using a C# parser: parse everything, find the interesting code elements, annotate them, and write the code back out. However, that seems very complex given the relatively small amount of modification I want to make to the code. Is there an easier way to do this?
Visual Studio already has an API for discovering and emitting code which you might take a look at. It's not exactly a joy to use but could work for this purpose.
While such a plugin would certainly be a useful thing, I would consider making an add-in for a tool like ReSharper rather than for VS directly. The advantage is that somebody has already solved the huge pile of problems you haven't even dreamed of yet, so it will be a lot easier to build this specific piece of functionality.
It looks to me like you need an MSBuild task similar to this one: http://kindofmagic.codeplex.com/. Is that about right?
I'm working on an ASP.NET web application that uses a lot of JavaScript on the client side to allow the user to do things like drag-drop reordering of lists, looking up items to add to the list (like the suggestions in the Google search bar), deleting items from the list, etc.
I have a JavaScript "class" that I use to store each of the list items on the client side, along with information about what action the user has performed on the item (add, edit, delete, move). The only time the page is posted to the server is when the user is done; right before the page is submitted, I serialize all the information about the changes into JSON and store it in hidden fields on the page.
What I'm looking for is some general advice about how to build out my classes in C#. I think it might be nice to have a class in C# that matches the JavaScript one, so I can just deserialize the JSON into instances of that class. It seems a bit strange, though, to have classes on the server side that directly duplicate the JavaScript classes and exist only to support the JavaScript UI implementation.
This is kind of an abstract question. I'm just looking for some guidance from others who have done similar things in terms of maintaining matching client- and server-side object models.
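For concreteness, this is the kind of mirror-image pair I mean (names invented for the example): the JavaScript side builds objects like { id: 3, text: "Milk", action: "add" }, and the server side would have something like the class below (shown with Json.NET, though any JSON library would do):

    using System.Collections.Generic;
    using Newtonsoft.Json;

    // Mirrors the client-side JavaScript item object.
    public class ListItemChange
    {
        public int Id { get; set; }
        public string Text { get; set; }
        public string Action { get; set; }   // "add", "edit", "delete", "move"
    }

    // On postback, the hidden field is deserialized in one call:
    //   var changes = JsonConvert.DeserializeObject<List<ListItemChange>>(hiddenFieldValue);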
Makes perfect sense. If I were confronting this problem, I would consider using a single definitive description of the data type or class, and then generating code from that description.
The description might be a JavaScript source file; you could build a parser that generates the appropriate C# code from that JS. Or it could be a C# source file, and you do the converse.
You might find more utility in describing it in RelaxNG, and then building (or finding) a generator for both C# and Javascript. In this case the RelaxNG schema would be checked into source code control, and the generated artifacts would not.
EDIT: Also there is a nascent spec called WADL, which I think would help in this regard as well. I haven't evaluated WADL. Peripherally, I am aware that it hasn't taken the world by storm, but I don't know why that is the case. There's a question on SO regarding that.
EDIT2: Given the lack of tools (WADL is apparently stillborn), if I were you I might try this tactical approach:
Use the [DataContract] attributes on your C# types and treat those as definitive.
Build a tool that slurps your C# type in from a compiled assembly and instantiates it by running a JSON serializer over a sample JSON document; that sample provides a sort of de facto "object model definition". The tool should somehow verify that the instantiated type can round-trip into equivalent JSON, maybe with a checksum or CRC on the result.
Run that tool as part of your build process.
To make this happen, you'd have to check that "sample JSON document" into source control, and you'd also have to make sure it is the form you actually use in the various JS code in your app. Since JavaScript is dynamic, you might also need a type verifier or something that runs as part of jslint or some other build-time verification step and checks your JavaScript source to see that it is using your "standard" object model definitions.
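A rough sketch of what that build-step check might look like, using the framework's DataContractJsonSerializer; the ListItemDto type, its members, and the sample file name are all placeholders, and a real tool would discover the type from the compiled assembly rather than hard-coding it:

    using System;
    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;
    using System.Text;

    [DataContract]
    class ListItemDto
    {
        [DataMember(Name = "id")]   public int Id { get; set; }
        [DataMember(Name = "text")] public string Text { get; set; }
    }

    class RoundTripCheck
    {
        static int Main()
        {
            // The checked-in sample that the JavaScript side also treats as canonical.
            string sample = File.ReadAllText("sample-item.json");
            var serializer = new DataContractJsonSerializer(typeof(ListItemDto));

            ListItemDto item;
            using (var input = new MemoryStream(Encoding.UTF8.GetBytes(sample)))
            {
                item = (ListItemDto)serializer.ReadObject(input);
            }

            string roundTripped;
            using (var output = new MemoryStream())
            {
                serializer.WriteObject(output, item);
                roundTripped = Encoding.UTF8.GetString(output.ToArray());
            }

            // Crude comparison; a real tool might normalize whitespace or compare checksums.
            if (roundTripped != sample.Trim())
            {
                Console.Error.WriteLine("Sample JSON and the [DataContract] type are out of sync.");
                return 1;
            }
            return 0;
        }
    }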
I am fairly new to reflection and I would like to know, if possible, how to create an instance of a class, then add properties to the class, set those properties, and read them later. I don't have any code, as I don't even know how to start going about this. C# or VB is fine.
Thank You
EDIT: (to elaborate)
My system has a dynamic form creator. One of my associates requires that the form data be accessible via a web service. My idea was to create a class (based on the dynamic form), add properties to the class (based on the form's fields), set those properties (based on the values entered for those fields), and then return the class from the web service.
Additionally, the web service will be able to set the properties on the class and eventually commit those changes to the DB.
If you mean dynamically create a class, then the two options are:
Reflection.Emit - Difficult, Fast to create the class
CodeDom - Less Difficult, Slower to create the class
If you mean create an instance of an existing class, then start with Activator.CreateInstance to create an instance of the object, and then look at the methods on Type such as GetProperty which will return a PropertyInfo that you can call GetValue and SetValue on.
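For the "existing class" case, a minimal sketch (the Person class here is just an example):

    using System;

    class Person
    {
        public string Name { get; set; }
    }

    class ReflectionExample
    {
        static void Main()
        {
            // Create an instance without using "new" directly...
            Type type = typeof(Person);
            object instance = Activator.CreateInstance(type);

            // ...then set and read a property by name.
            var property = type.GetProperty("Name");
            property.SetValue(instance, "Francois", null);

            Console.WriteLine(property.GetValue(instance, null)); // prints "Francois"
        }
    }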
Update: For the scenario you describe, returning dynamic data from a web service, then I'd recommend against this approach as it's hard for you to code, and hard for statically-typed languages to consume. Instead, as suggested in the comments and one of the other answers, some sort of dictionary would likely be a better option.
(Note that when I say return some sort of dictionary, I am speaking figuratively rather than literally, i.e. return something which is conceptually the same as a dictionary such as a list of key-value pairs. I wouldn't recommend directly returning one (even if you're using WCF which does support this) because it's typically better to have full control over the XML you return.)
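For illustration, one shape such a "conceptual dictionary" could take on the wire (the type and member names are invented):

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    // A plain list of name/value pairs serializes predictably and is easy for any
    // statically-typed client to consume.
    [DataContract]
    public class FormField
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string Value { get; set; }
    }

    [DataContract]
    public class FormData
    {
        [DataMember] public List<FormField> Fields { get; set; }
    }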
I know this is overly simplified, but why not just KISS: generate the required XML to return through the web service, and then parse the returned XML to populate the database?
My reasoning is that, even with the expanded explanation of why you want to do this, I'm not sure I see the value of, or need for, a dynamic class.
The Execution-Time Code Generation chapter of Eric Gunnerson's book (A Programmer's Introduction to C#) has some great information on this topic. See page 14 and onwards in particular. He outlines the two main methods of accomplishing dynamic class/code generation (CodeDOM and the Reflection.Emit namespace). It also discusses the difficulty and performance of the two approaches. Have a read through that, and you ought to find everything you might need.
The real question is, what do you need to use those properties for?
What are gonna be the use cases? Do you need to bind those properties to the UI somehow? Using what kind of technology? (WPF, Windows Forms?)
Is it just that you need to gather a set of key/value pairs at runtime? Then maybe a simple dictionary would do the trick.
Please elaborate if you can on what it is you need, and I'm sure people here can come up with plenty of ways to help you, but it's difficult to give a good answer without more context.