Serializing XPO objects through WCF - C#

Good evening/morning everyone,
Before posting this issue, I had been sending my objects from the client side (an ASPX page) to a WCF Data Service in a fairly crude way: I converted all the attributes to strings, joined them, and sent the resulting string; on the server side I split the string back apart, reconstructed the object, and stored it. I have now found that this approach no longer suits what I'm planning to do and would cost me a lot of time, so I've decided to find a way to serialize my XPO objects and send them to the service. I browsed Google before coming here, but I didn't find a good tutorial for someone not very familiar with the serialization mechanism.
Please point me towards a solution that will save a lot of time.
I think it's worth describing the architecture of my project: I have an ASP.NET web application containing some pages, and on the server side a WCF Data Service (5.0) that contains all my methods. I'm using XPO as an ORM, and all my objects inherit from XPObject.
Thank you in advance, and by the way I want to thank the mods/admins/members of Stack Overflow for their work helping beginners, intermediates, and even experts.

From a performance perspective, it is not recommended to serialize/deserialize the object graph with XPO. Instead, you should serialize either the data store or the object layer.
That said, if you need to serialize objects for import/export, have a look at the eXpand module.
As with all things DevExpress, the best place to ask questions is the Support Center.

Related

Sending JSON from an ASP.NET site to a Python script

I'd like to preface this by saying that I'm feeling a bit overwhelmed at a summer internship: it involves a lot of new technologies and concepts I've never used before, and good resources that explain these things rather than just regurgitating situational tutorials are very scarce. In particular, I'd like to talk about ASP.NET.
What I currently have is a simple web page made using ASP.NET in Visual Studio. My general task is to access a database, get the info from it, turn the data into JSON, and send that data to a Python script, which will then parse the JSON and do stuff with it.
I am able to get as far as making the JSON object. I can get the user to download it as a file, but what I really want to do is pass it over the network by accessing my web site from the Python script using urllib2. This is where I have become completely lost. There are so many terms I've never heard of before (services, web APIs, controllers, routing), and I've spent hours digging around and following basic tutorials, but I still can't get a firm grasp on the concepts, let alone accomplish this in a practical manner.
To be completely clear here are my goals:
Send 5 parameters using urllib2 in Python to my ASP.NET site
use these parameters to query the database and get a json object (COMPLETE)
return the json to the python script
I have no idea how to set up a "service" or how to even go about doing so. I know that I have to attach it to my website somehow, but I'm not sure how. Any suggestions or good resources would be much appreciated. I'm just looking for some direction and advice on how to accomplish #1 and #3 on my list.
Thank you for taking the time to read through my post!
For part one you could do this:
import urllib
import urllib2

params = urllib.urlencode({'paramOne': 'ValueOne'})  # add your other parameters here
response = urllib2.urlopen('http://mysite.io/?' + params)
Now the response object will have your JSON, so you can do this:
data = response.read()
urllib.urlencode (used above) is the standard way of preparing URL parameters in Python 2, which you might want to look into further.
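For parts one and three together, the server side needs something callable that returns the JSON. A minimal sketch of one lightweight option, an ASP.NET Generic Handler (.ashx), with the database query stubbed out and all names (JsonHandler, paramOne, the result shape) invented for the example:

```csharp
using System.Web;
using System.Web.Script.Serialization;

// A minimal Generic Handler (.ashx) that reads query-string parameters,
// runs the (stubbed) database query, and writes the result back as JSON.
public class JsonHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Read one of the parameters sent by the Python client.
        string paramOne = context.Request.QueryString["paramOne"];

        // Stand-in for the real database query the asker already has working.
        var result = new { echoed = paramOne, rows = new[] { 1, 2, 3 } };

        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(result));
    }

    public bool IsReusable { get { return true; } }
}
```

With a handler like this mapped at, say, /JsonHandler.ashx, the urllib2 call above receives the JSON directly instead of triggering a file download.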

ASP.NET MVC accessing multiple APIs

I have APIs from around 6 providers. I also have a database where I disable or enable the providers I want to use.
I have an ASP.NET MVC4 application. In it I want to be able to use multiple providers' APIs and display data. Each provider's API sends its response in a different format: it could be JSON for one and XML for another.
Now I am stuck because:
Each API needs its own code to be parsed. Where does this provider-specific code go into? A single class where for each provider a specific method does the parsing? Or do I create a new class for each provider and do the parsing there?
How can I efficiently call a particular provider's method? Is some bit of hardcoding essential in the sense that if the Provider name is "Prov A" then I call the method GetProvAData?
I hope I have explained the issue clearly enough. Any help will be welcome. Thanks in advance.
Regards,
Satish
This really has nothing to do with MVC; it's a basic software design pattern problem.
Assuming your data from the various providers all has to end up in the same format, this is a textbook example of the Strategy pattern. You would create multiple provider parsers that all implement the same interface, and you just call Execute or Parse or whatever you want to call it on each of them.
If what you do with the data differs between providers, then it's a bit more complex, because you now have to modify your app to support each provider's data individually, and without knowing exactly what that involves we can't really give you advice on how to do it.
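A minimal sketch of that Strategy arrangement, with made-up provider names and a made-up common result type; the registry at the end shows how to avoid the "if the name is Prov A then call GetProvAData" hardcoding:

```csharp
using System;
using System.Collections.Generic;

// Common result type every parser produces.
public class ProviderData
{
    public string Provider { get; set; }
    public string Payload { get; set; }
}

// The strategy interface: one implementation per provider.
public interface IProviderParser
{
    ProviderData Parse(string rawResponse);
}

public class ProvAJsonParser : IProviderParser
{
    public ProviderData Parse(string rawResponse)
    {
        // Provider-specific JSON parsing would go here.
        return new ProviderData { Provider = "Prov A", Payload = rawResponse };
    }
}

public class ProvBXmlParser : IProviderParser
{
    public ProviderData Parse(string rawResponse)
    {
        // Provider-specific XML parsing would go here.
        return new ProviderData { Provider = "Prov B", Payload = rawResponse };
    }
}

// A lookup keyed by provider name replaces the hardcoded method dispatch.
public static class ParserRegistry
{
    private static readonly Dictionary<string, IProviderParser> Parsers =
        new Dictionary<string, IProviderParser>
        {
            { "Prov A", new ProvAJsonParser() },
            { "Prov B", new ProvBXmlParser() },
        };

    public static ProviderData Parse(string providerName, string rawResponse)
    {
        return Parsers[providerName].Parse(rawResponse);
    }
}
```

Enabling or disabling a provider in the database then just controls which keys you look up, and adding a seventh provider means adding one class and one dictionary entry.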

When should XML serialization be used?

I've been doing some reading up on XML serialization, and from what I understand, it is a way to take an object and persist its state in a file. The implementation looks straightforward enough, and there seem to be plenty of resources for applying it. When should XML serialization be used? What are the benefits? What situations is it best suited to?
The .NET XmlSerializer class isn't the only way to persist an object to XML. The newer DataContractSerializer is faster, and also allows an object to be persisted to a binary form of XML, which is more compact.
The XmlSerializer is only getting limited bug fixes these days, in part because so much code depends on the precise details of how it works, in part because it is associated with ASMX web services, which Microsoft considers to be a "legacy technology".
This is not the case with the DataContractSerializer, which continues to be a vibrant and important part of WCF.
You've answered a little bit of the question in your post. It's great for persisting the state of an object. I've used it in applications for saving user settings. It's also a great way to send data to other systems, since it is standardized. An important thing to remember is that it is easily human readable. This can either be a good or bad thing depending on your situation. You might want to consider encrypting it, or using encrypted binary serialization if you don't want someone else to be able to understand it.
EDIT:
Another gotcha worth mentioning is that the .NET XmlSerializer only serializes the public members of an object. If you need to persist private or protected members, you will need either a customized serializer or another form of serialization.
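A minimal round trip with XmlSerializer that illustrates the public-members-only behaviour; the UserSettings type and its members are invented for the example:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class UserSettings
{
    public string Theme { get; set; }       // public property: serialized
    public int FontSize { get; set; }       // public property: serialized
    private string secretToken = "hidden";  // private field: silently skipped
}

public static class Demo
{
    public static string ToXml(UserSettings s)
    {
        var serializer = new XmlSerializer(typeof(UserSettings));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, s);
            return writer.ToString();
        }
    }

    public static UserSettings FromXml(string xml)
    {
        var serializer = new XmlSerializer(typeof(UserSettings));
        using (var reader = new StringReader(xml))
        {
            return (UserSettings)serializer.Deserialize(reader);
        }
    }
}
```

The resulting XML contains Theme and FontSize elements but nothing for secretToken; after deserialization that field simply holds its initializer value again.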
It's good for communication between disparate systems, e.g. taking a Java app and a C# app and letting them communicate via a web service with serializable XML objects. Both apps understand XML and are shielded from the details of the other language. And yes, while you could fire strings back and forth, XML gives us strong typing and schema validation.
This is just from personal experience: XML serialization is good for web services.
Also, if you want to modify (or allow the modification of) the object/file that you're storing to without using the application that you're writing (i.e third party app), XML can be a good choice.
I sent an array of objects of a class I wrote using HttpWebRequest. I couldn't send it as an object, because I was mixing HttpWebRequest with SOAP that I was writing by hand, and in SOAP you can't send arbitrary objects the way you can send predefined types such as string, int, and so on.
So I used XML serialization to convert my object to an XML string and sent it through my HttpWebRequest.

Sending information down a socket in C#

I have built two programs in C# and I am sending simple strings through the sockets. This is fine for the moment but in the near future I will need to send more complicated items, such as objects down the sockets and eventually files.
What steps would I take to do this? What purpose do the buffers serve for the sockets/streams? Apologies if I am a little vague.
If you are sending objects, you really have to be careful about what you do and how you plan to use those objects on the other end. All properties need to be serializable. If these objects will hold large amounts of data, you may want to use binary serialization instead.
Also, look at the guidelines posted here: MSDN Serialization Guidelines
If you are going to be sending objects, you may want to look at .NET Remoting or WCF services, if applicable. Rolling your own socket handlers and then using them for complex operations is asking for a lot of time and pain, especially if you haven't done it before.
There are many options, but basically you want to serialise the data into a format that will go through the socket.
XML serialisation is worth looking into for this.
One way you can handle this is to serialize your object into XML, send it over the socket, then deserialize it. I've done it this way before. However, I (being fairly new to .NET) just learned about the JavaScriptSerializer, which I believe makes this process a lot easier for you.
You need to serialize the objects. Mark the class with the [Serializable] attribute and use one of the serializers; an example can be found here.
The first thing to consider in any comms situation is that anything you send must be able to be serialised and deserialised so it can travel over a comms channel. Next you must consider that comms have latency (it's not instantaneous), and then the fact that it can fail.
After this you consider the protocols and technology that let you factor the above in.
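One common pattern for sending objects over a raw socket, sketched here with XML serialization plus a length prefix so the receiver knows where each message ends; the Message type is invented for the example:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Message
{
    public string Text { get; set; }
}

public static class Framing
{
    // Serialize the object, then write a 4-byte length prefix followed by the payload.
    // In real use the Stream would be the NetworkStream from a TcpClient.
    public static void Send(Stream stream, Message msg)
    {
        byte[] payload;
        using (var buffer = new MemoryStream())
        {
            new XmlSerializer(typeof(Message)).Serialize(buffer, msg);
            payload = buffer.ToArray();
        }
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Read exactly the advertised number of bytes, then deserialize.
    public static Message Receive(Stream stream)
    {
        byte[] prefix = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(prefix, 0);
        byte[] payload = ReadExactly(stream, length);
        using (var buffer = new MemoryStream(payload))
        {
            return (Message)new XmlSerializer(typeof(Message)).Deserialize(buffer);
        }
    }

    // A single Read may return fewer bytes than asked for, so loop until full.
    private static byte[] ReadExactly(Stream stream, int count)
    {
        var data = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(data, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return data;
    }
}
```

The length prefix is needed because TCP delivers a byte stream with no message boundaries, which is also what the buffers on the socket/stream are for: they hold bytes until a complete read or write can happen.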

From ASPX to WCF

I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C# / .NET based.
I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results).
At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility: I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I control the serialised format, I can also deserialise lists of objects as they are received, which is crucial.
My problem isn't a problem as such, but this feels a little hack-ish and I can't help but wonder if there are better ways to go about it. I'm considering moving to WCF, or perhaps another technology, to see if it helps. However, I need to know whether it helps with my scenarios above, namely:
Can a WCF method return a list of objects, and can the client receive the items of this list as they arrive as opposed to getting the entire list on completion (i.e. streaming). Does anyone know of any examples of this?
Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose.
Are there any other approaches I should consider?
Thanks for your time spent reading this. I hope you can help.
WCF does not natively support streamed collections. (This is not the same as WCF's streamed message transfer mode.)
However, see this blog post.
I recommend that you use ASHX files (Generic Handlers) instead of ASPX pages (Web Forms), as they have far less overhead.
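A minimal sketch of that Generic Handler approach, flushing each item as it is written so the client can deserialise items as they arrive; the line-per-item format, the handler name, and the data source are all invented for the example:

```csharp
using System.Collections.Generic;
using System.Web;

// A Generic Handler (.ashx) that streams query results incrementally.
// Any delimited or length-prefixed serialization the client can parse
// incrementally would work in place of one-item-per-line.
public class QueryHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.BufferOutput = false;   // send bytes as they are written

        string query = context.Request.QueryString["q"];
        foreach (var item in RunQuery(query))    // hypothetical data source
        {
            context.Response.Output.WriteLine(item);
            context.Response.Flush();            // client can consume this item now
        }
    }

    // Stand-in for the real database query; yields rows one at a time.
    private IEnumerable<string> RunQuery(string q)
    {
        yield return "row1";
        yield return "row2";
    }

    public bool IsReusable { get { return true; } }
}
```

Because each item is flushed individually, the client can start deserialising the first results before the full list has been produced, which is the streaming behaviour the question asks about.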
