I have built two programs in C# and I am sending simple strings through the sockets. This is fine for the moment, but in the near future I will need to send more complicated items through the sockets, such as objects, and eventually files.
What steps would I take to do this? What purpose do the buffers serve for the sockets/streams? Apologies if I am a little vague.
If you are sending objects, you really have to be careful with what you do and how you plan to use those objects on the other end. All properties need to be serialized. If you are going to have large amounts of data in these objects, you may want to use binary serialization instead.
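For illustration, a minimal binary-serialization sketch using the classic BinaryFormatter (the Message type is made up for the example; note that newer versions of .NET deprecate BinaryFormatter in favour of other serializers):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Message
{
    public string Sender;
    public byte[] Payload;
}

public static class Wire
{
    // Serialize the object to a byte array that can be written to a socket.
    public static byte[] ToBytes(Message msg)
    {
        var formatter = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            formatter.Serialize(ms, msg);
            return ms.ToArray();
        }
    }

    // Rebuild the object from bytes read off the socket.
    public static Message FromBytes(byte[] data)
    {
        var formatter = new BinaryFormatter();
        using (var ms = new MemoryStream(data))
        {
            return (Message)formatter.Deserialize(ms);
        }
    }
}
```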
Also, look at the guidelines posted here: MSDN Serialization Guidelines
If you are going to be sending objects, you may want to look at either .NET Remoting or WCF services, if applicable. Rolling your own socket handlers and then using them for complex operations is asking for a lot of time and pain, especially if you haven't done it before.
There are many options, but basically you want to serialise the data into a format that will go through the socket.
It's worth looking here into XML serialisation.
One way you can handle this is to serialize your object into XML, send over the socket, then deserialize it. I've done it this way before. However, I (being fairly new to .NET) just learned about the JavaScriptSerializer, which I believe makes this process a lot easier for you.
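A rough sketch of that approach, assuming a made-up Person type; note that in real use you would want some message framing, since the receiver otherwise has no way to know where one XML document ends:

```csharp
using System.IO;
using System.Net.Sockets;
using System.Xml.Serialization;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class XmlOverSocket
{
    // Sender: serialize the object as XML directly onto the socket stream.
    public static void Send(NetworkStream stream, Person p)
    {
        var serializer = new XmlSerializer(typeof(Person));
        serializer.Serialize(stream, p);
    }

    // Receiver: deserialize from the stream. With a raw socket this only
    // works reliably if the sender closes the connection (or you add framing).
    public static Person Receive(NetworkStream stream)
    {
        var serializer = new XmlSerializer(typeof(Person));
        return (Person)serializer.Deserialize(stream);
    }
}
```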
You need to serialize the objects. Mark the class with the [Serializable] attribute and use one of the serializers. An example can be found here.
The first thing in any comms situation is to consider that anything you send must be able to be serialised and deserialised so that it can travel over a comms channel. Next, you must consider that comms have latency (it's not instantaneous), and then the fact that they can fail.
After this you consider the protocols and technology that enable the above to be factored in.
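One common way to factor those realities in is a length-prefixed framing convention: each message is sent as a 4-byte length header followed by the payload, and the receiver loops until a whole frame has arrived. A minimal sketch (names are illustrative):

```csharp
using System;
using System.IO;

// Length-prefixed framing: write a 4-byte length header, then the payload,
// so the receiver knows exactly how many bytes belong to one message.
public static class Framing
{
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadFrame(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    // Loop until the requested number of bytes has arrived; a single
    // Read call may return fewer bytes than asked for.
    private static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```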
Good evening/morning everyone,
Before posting this issue, I had been sending my objects the traditional way from the client side (an ASPX page) to a WCF Data Service. The approach I was using was to convert all attributes to strings, join them, and send the result; on the server side I split the string and reconstruct my object, then store it. Now I have found that this method no longer suits what I am planning to do and will take me a lot of time, so I have decided to find a way to serialize my XPO objects and send them to the service. I browsed Google before coming to SOF, but I didn't find a good tutorial for someone not very familiar with the serialization mechanism.
Please point me towards a solution that will save a lot of time.
I think it's a good point to describe the architecture of my project:
I have an ASP.NET web application which contains some pages, and on the server side I have a WCF Data Service (5.0) which contains all my methods. I'm using XPO as an ORM, and all my objects inherit from XPObject.
Thank you in advance, and by the way I want to thank the mods/admins/members of SOF for their work helping dummies/intermediates and even experts.
From a performance perspective, it is not recommended to serialize/deserialize the object graph with XPO. Instead you should either serialize the datastore or serialize the object layer.
That said, if you need to serialize objects for import/export, have a look at the eXpand module.
As with all things DevExpress, the best place to ask questions is the support centre.
Server side - C# or Java
Client side - Objective-C
I need a way to serialize an object in C#/Java and deserialize it in Objective-C.
I'm new to Objective C and I was wondering where I can get information about this issue.
Thanks.
Apart from the obvious JSON/XML solutions, protobuf may also be interesting. There are Java/C++/Python backends for it, and third parties have created backends for C# and Objective-C (never used that one, though) as well.
The main advantages are that it is much, much faster to parse[1], much smaller[2] since it's a binary format, and the fact that versioning was an important consideration from the beginning.
[1] Google claims 20-100 times faster compared to XML
[2] 3-10 times smaller, according to the same source
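As an illustration, here is what usage can look like on the C# side with the third-party protobuf-net library (the Telemetry type and its field numbers are invented for the example):

```csharp
using System.IO;
using ProtoBuf;  // protobuf-net NuGet package

// protobuf-net keys fields by number rather than by .NET type metadata,
// which is what keeps the wire format portable across languages.
[ProtoContract]
public class Telemetry
{
    [ProtoMember(1)] public string DeviceId { get; set; }
    [ProtoMember(2)] public double Value { get; set; }
}

public static class ProtoExample
{
    public static byte[] Serialize(Telemetry t)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, t);
            return ms.ToArray();
        }
    }

    public static Telemetry Deserialize(byte[] data)
    {
        using (var ms = new MemoryStream(data))
        {
            return Serializer.Deserialize<Telemetry>(ms);
        }
    }
}
```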
Another technology similar to protobufs is Apache Thrift.
Apache Thrift is a software framework for scalable cross-language services development. Apache Thrift allows you to define data types and service interfaces in a simple definition file. Taking that file as input, the compiler generates code to be used to easily build RPC clients and servers that communicate seamlessly across programming languages.
JSON for relatively straightforward object graphs
XML/REST for more complex object graphs (distinguishing between arrays / collections / nested arrays, etc.)
Sudzc. I am using it. It is pretty easy to invoke a web service from an iOS app.
You don't have to write code to serialize objects.
JSON is probably the best choice, because:
It is simple to use
It is human-readable
It is data-based rather than being tied to any more complex object model
You will be able to find decent libraries for import/export in most languages.
Serialisation of more complex objects is, IMHO, not a good idea from the perspective of portability, since one language/platform often has no effective way of expressing a concept from another language/platform. For example, as soon as you start declaring "types" or "classes" of serialised objects, you run into the thorny issue of differing object models between languages.
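To illustrate the data-based point, here is a minimal round-trip on the C# side using the Json.NET library; the SensorReading type is made up, and the resulting JSON can be read by any JSON library in any language:

```csharp
using Newtonsoft.Json;  // Json.NET

// A plain, data-only class: no behaviour, just values that map cleanly
// to JSON and therefore to any other language's object model.
public class SensorReading
{
    public string Name { get; set; }
    public double Value { get; set; }
}

public static class JsonExample
{
    public static void Main()
    {
        var reading = new SensorReading { Name = "temp", Value = 21.5 };

        string json = JsonConvert.SerializeObject(reading);
        // {"Name":"temp","Value":21.5} -- readable by any JSON library

        SensorReading back = JsonConvert.DeserializeObject<SensorReading>(json);
        System.Console.WriteLine(back.Name + " = " + back.Value);
    }
}
```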
On iOS there are a couple of JSON frameworks and libraries with an Objective-C API:
JSONKit
SBJson
TouchJson
are probably the most prominent.
JSONKit is fast and simple, but can only parse a contiguous portion of JSON text. This means you need to save the downloaded data into a temporary file, or save all the downloaded JSON text into an NSMutableData object (kept in memory). Only after the JSON text has been downloaded completely can you start parsing.
SBJson is more flexible to use. It provides an additional "SAX style" interface, can parse partial input, and can parse more than one JSON document per "input" (for example, several JSON documents per network connection). This is very handy when you want to connect to a "streaming API" (e.g. the Twitter Streaming API), where many JSON documents can arrive per connection. The drawback is that it is much slower than JSONKit.
TouchJson is even somewhat slower than SBJson.
My personal preference is a different one, though. It is faster than JSONKit (20% faster on ARM), has an additional SAX style API, can handle "streaming APIs", can simultaneously download and parse, and can handle very large JSON strings without severely impacting the memory footprint, while being especially easy to use with NSURLConnection. (Well, I'm probably biased, since I'm the author.)
You can take a look at JPJson (Apache License v2):
JPJson - it's still in beta, though.
I've been doing some reading up on XML serialization, and from what I understand, it is a way to take an object and persist its state in a file. The implementation looks straightforward enough, and there seems to be a load of resources for applying it. When should XML serialization be used? What are the benefits? What situations are best helped by using it?
The .NET XmlSerializer class isn't the only way to persist an object to XML. The newer DataContractSerializer is faster, and also allows an object to be persisted to a binary form of XML, which is more compact.
The XmlSerializer is only getting limited bug fixes these days, in part because so much code depends on the precise details of how it works, in part because it is associated with ASMX web services, which Microsoft considers to be a "legacy technology".
This is not the case with the DataContractSerializer, which continues to be a vibrant and important part of WCF.
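A minimal sketch of producing that more compact binary XML form with the DataContractSerializer (the Settings type is invented for the example):

```csharp
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

[DataContract]
public class Settings
{
    [DataMember] public string Theme { get; set; }
    [DataMember] public int FontSize { get; set; }
}

public static class DcsExample
{
    // Write the object using the binary XML encoding, which is more
    // compact than the equivalent text XML.
    public static byte[] ToBinaryXml(Settings s)
    {
        var serializer = new DataContractSerializer(typeof(Settings));
        using (var ms = new MemoryStream())
        using (var writer = XmlDictionaryWriter.CreateBinaryWriter(ms))
        {
            serializer.WriteObject(writer, s);
            writer.Flush();
            return ms.ToArray();
        }
    }
}
```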
You've answered a little bit of the question in your post. It's great for persisting the state of an object. I've used it in applications for saving user settings. It's also a great way to send data to other systems, since it is standardized. An important thing to remember is that it is easily human readable. This can either be a good or bad thing depending on your situation. You might want to consider encrypting it, or using encrypted binary serialization if you don't want someone else to be able to understand it.
EDIT:
Another gotcha worth mentioning is that the .NET XmlSerializer only serializes the public members of an object. If you need to persist private or protected members, you will either need to use a customized serializer or use another form of serialization.
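A small demonstration of that gotcha, using a made-up Account type; the private field is silently omitted from the output:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Account
{
    public string Username { get; set; }   // serialized
    private string password = "secret";    // silently skipped by XmlSerializer

    public string RevealPassword() { return password; }
}

public static class Demo
{
    public static void Main()
    {
        var serializer = new XmlSerializer(typeof(Account));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new Account { Username = "alice" });
            Console.WriteLine(writer);  // the XML contains Username only
        }
    }
}
```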
It's good for communication between disparate systems, e.g. take a Java app and a C# app and allow them to communicate via a web service with serializable XML objects. Both apps understand XML and are shielded from the details of the other language. And yes, while you could fire strings back and forth, XML gives us strong typing and schema validation.
This is just from personal experience: XML serialization is good for web services.
Also, if you want to modify (or allow the modification of) the object/file that you're storing without using the application you're writing (i.e. by a third-party app), XML can be a good choice.
I send an array of objects of a class I wrote using HttpWebRequest. I can't send it as an object, because I'm mixing HttpWebRequest with SOAP (which I'm writing myself), and in SOAP you can't send non-predefined objects the way you can a String, int, etc.
So I used XML serialization to convert my object to an XML string and sent it through my HttpWebRequest.
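A rough sketch of what that can look like; the MyItem type and the endpoint URL are placeholders:

```csharp
using System.IO;
using System.Net;
using System.Text;
using System.Xml.Serialization;

public class MyItem
{
    public string Name { get; set; }
}

public static class XmlPostExample
{
    // Serialize an array of a custom type to an XML string, then POST it
    // with HttpWebRequest.
    public static void Send(MyItem[] items)
    {
        var serializer = new XmlSerializer(typeof(MyItem[]));
        string xml;
        using (var sw = new StringWriter())
        {
            serializer.Serialize(sw, items);
            xml = sw.ToString();
        }

        var request = (HttpWebRequest)WebRequest.Create("http://example.com/service");
        request.Method = "POST";
        request.ContentType = "text/xml";
        byte[] body = Encoding.UTF8.GetBytes(xml);
        request.ContentLength = body.Length;
        using (var stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // inspect response here if needed
        }
    }
}
```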
I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C# / .NET based.
I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results).
At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility, and I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I control the serialised format, I can also deserialise lists of objects as they are received, which is crucial.
My problem isn't a problem as such, but this feels a little hack-ish and I can't help but wonder if there are better ways to go about it. I'm considering moving to WCF, or perhaps another technology, to see if it helps. However, I need to know if it helps with my scenarios above, that is:
Can a WCF method return a list of objects, and can the client receive the items of this list as they arrive as opposed to getting the entire list on completion (i.e. streaming). Does anyone know of any examples of this?
Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose.
Are there any other approaches I should consider?
Thanks for your time spent reading this. I hope you can help.
WCF does not natively support streamed collections. (Which are not the same as Streaming Message Transfer)
However, see this blog post.
I recommend that you use ASHX files (Generic Handlers) instead of ASPX pages (Web Forms), as they have far less overhead.
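A minimal generic-handler sketch; the query-string parameter and the data lookup are placeholders:

```csharp
using System.Web;

// A minimal generic handler (.ashx): it skips most of the Web Forms page
// lifecycle, which is why it has far less overhead than an ASPX page.
public class DataHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];  // hypothetical parameter
        byte[] payload = LoadPayload(id);               // hypothetical data source

        context.Response.ContentType = "application/octet-stream";
        context.Response.BinaryWrite(payload);
    }

    public bool IsReusable { get { return true; } }

    private byte[] LoadPayload(string id)
    {
        // Placeholder: fetch the requested bytes from wherever they live.
        return new byte[0];
    }
}
```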
I'm on a project that processes and reports on large sets of aggregatable, row-based data. There is a primary aggregation service, and then many clients who can subscribe to different views of the data from that server. The objects are passed back and forth between the Java server and the C# clients encoded as JSON. We're noticing that parsing the objects takes a lot of time and is somewhat memory-intensive. Have others used JSON for this purpose or seen similar behavior?
We used to use straight XML across the wire and had to use custom (i.e. manual) serialization for a lot of the objects. While that wasn't JSON, we did take performance hits due to this constraint. Once we migrated all our tech to a similar architecture, we were able to switch to binary serialization, which worked much better.
However, on the objects where we had performance issues due to size, we made some modifications. Since we had access to the code on both ends (and both were C#), we were able to binary-serialize the payload and then Base64-encode it, since it had to be text across the wire. It helped a good bit in terms of object size, and the serialization ran a bit faster.
Since you are going from Java to C#, you won't really have that luxury. So the only thing I can think of in your case would be to try to optimize your parsing of the JSON response. You may be able to use some code-profiling tools to identify the portions that are causing you performance issues and then try to optimize those. Also, when building your final JSON string, make sure you use a StringBuilder; if you are doing standard concatenation operations, it will kill performance as well.
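For illustration, here is the difference in practice: a StringBuilder appends in place, whereas repeated string concatenation copies the whole string on every +=. This sketch hand-rolls a JSON array and assumes the values need no JSON escaping:

```csharp
using System.Text;

public static class JsonBuilder
{
    // Roughly O(n) with StringBuilder; the equivalent += loop is O(n^2),
    // because each concatenation allocates and copies a new string.
    public static string BuildArray(string[] values)
    {
        var sb = new StringBuilder();
        sb.Append('[');
        for (int i = 0; i < values.Length; i++)
        {
            if (i > 0) sb.Append(',');
            sb.Append('"').Append(values[i]).Append('"');
        }
        sb.Append(']');
        return sb.ToString();
    }
}
```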
Also, you might want to check around; I have seen several JSON serializers written for C# on the web, and some may be faster than what you are doing, who knows.
Not sure if that helps you all that much, but there is some info from things we have seen with string-based message passing.
UPDATE: Just saw this on DotNetKicks: JSON.NET - it's an update from James to the Json.NET serializers. It may help out.
I know that for Java there are any number of open-source JSON serializers and deserializers. We use FlexJSON.
JSON can be expensive to decode. If performance is an issue, try using something like Hessian.