Transferring LINQ data objects - C#

I am working on a web service where we use LINQ-to-SQL for our database abstraction. When clients use our web service, the objects are serialized to XML and all is dandy.
Now we wish to develop our own client that uses the native data types, since there's no reason to do objects -> XML -> objects. However, from what I understand, you can't transfer LINQ objects directly, as they map to the database and the data is therefore "live".
My question is whether there is a way to take a "snapshot" of the data you've extracted, set the LINQ object to "offline", and then transfer it. The data will not change after it's transferred to our client, and we don't need the database access.

The LINQ-to-SQL classes can be used with DataContractSerializer (for WCF) easily enough (you need to enable the serialization property in the designer, though). With this in place, you should be able to share the data assembly with the client. As long as you don't use the data-context, the objects themselves should be well behaved (just disconnected - so no lazy loading).
The trick is that you need to re-use these types (from the existing assembly) in your serialization code. If you are using WCF, you can do this with svcutil /r, or via the IDE.
That said, though, it is often cleaner to maintain separate DTO classes for these scenarios. But I'm guilty of doing it the above way on occasion.
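As an illustration of the separate-DTO route, here's a minimal sketch; the Customer entity and its properties are hypothetical stand-ins for whatever the designer generated for you:

```csharp
using System.Runtime.Serialization;

// Stand-in for the designer-generated LINQ-to-SQL entity (hypothetical names).
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Hand-written DTO carrying only the snapshot the client needs.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Email { get; set; }
}

public static class CustomerMapper
{
    // Copying values out of the live entity produces a plain, disconnected
    // snapshot with no DataContext (and no lazy loading) behind it.
    public static CustomerDto ToDto(Customer entity)
    {
        return new CustomerDto
        {
            Id = entity.Id,
            Name = entity.Name,
            Email = entity.Email
        };
    }
}
```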

If you're willing to use WCF (for the webservice and the client) you can decorate your Linq2SQL generated classes with the [DataContract] and [DataMember] attributes.
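Roughly, the decorated classes end up looking like this; the Order type and its members are illustrative only, and the real designer-generated code also carries change-notification plumbing:

```csharp
using System.Runtime.Serialization;

// Sketch of a LINQ-to-SQL class once it carries the WCF serialization
// attributes (the designer emits these when Serialization Mode is enabled).
[DataContract]
public partial class Order
{
    [DataMember(Order = 1)]
    public int OrderId { get; set; }

    [DataMember(Order = 2)]
    public string CustomerName { get; set; }

    [DataMember(Order = 3)]
    public decimal Total { get; set; }
}
```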
Check the following links for some guidance:
http://msdn.microsoft.com/en-us/library/bb546184.aspx
http://msdn.microsoft.com/en-us/library/bb546185.aspx
http://www.codeproject.com/KB/WCF/LinqWcfService.aspx
http://www.aspfree.com/c/a/Windows-Scripting/Designing-WCF-DataContract-Classes-Using-the-LINQ-to-SQL-Designer/

Related

How to design DTO for this scenario?

I have the following situation:
User Object
Half of the data for the structure is stored in the DB (owned by me).
The other half of the data comes from a third party (via a web API).
Another important point is that I want to use ExpressMapper (or any other good suggestion) for mapping entity/service objects to DTOs.
I want to use the DTO pattern for transferring the user information between different layers.
Questions:
1) Should I make one large DTO containing all the properties of both the service object and the DB object?
2) Should I expose the third-party data objects as properties of the DTO? That is, hide the service data objects behind an interface and make that interface a property of the DTO.
3) Do I need to make interfaces for the DTO in case 1 or 2?
Generally, the best DTO design has to do with the requirements of the client and the method/function that's returning it. I'm assuming you're doing this for purposes of serializing results.
If you're simply mapping all of the data you're storing to a DTO, you've not really gained much, and could probably just emit the data object you've stored in a serialized way.
So, to answer your questions:
1) No. You should make a DTO with data that's relevant to the call you're making, for a few reasons:
- DTOs are generally transferred over the wire in serialized form, so smaller = less data.
- There can be overhead to populating and serializing a DTO. You don't want to penalize every call that returns a large DTO with a burdensome process just to populate a property that isn't relevant to the purpose of the call.
- It might be a bit more self-documenting.
Keep in mind that it's okay to send some "irrelevant" bits of data for purposes of code re-use. Just be mindful of how much data that is, and how much effort the process takes to generate it. There's no hard and fast rule here beyond what makes sense in terms of readability and efficiency.
2) It depends on your application, but I'd be inclined to say that knowing it's third-party data isn't usually relevant to the calling client getting the DTO. The third party you're getting your data from could be getting it from third parties as well, but the DTOs they return to you are likely not singling that data out for you.
3) Interfaces, again, really depend on the app you're developing. From the client perspective, an interface isn't going to mean much unless you're supplying the library of DTOs in an assembly. If the DTOs are being generated by a proxy (such as adding a service reference), or the client is creating its own client-side versions of the DTOs, those interfaces won't carry through to the client.
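To make point 1 concrete, here is a minimal sketch of a call-specific DTO that flattens the two halves of the user data without exposing where each piece came from. Every type and property name here is made up for illustration, and you could wire the same mapping up through ExpressMapper or a similar mapper instead of doing it by hand:

```csharp
// Hypothetical sources: UserEntity comes from your own DB, ThirdPartyProfile
// from the third-party API. The DTO carries only what this call needs.
public class UserEntity
{
    public int Id { get; set; }
    public string Email { get; set; }
}

public class ThirdPartyProfile
{
    public string DisplayName { get; set; }
    public string AvatarUrl { get; set; }
}

public class UserSummaryDto
{
    public int Id { get; set; }
    public string Email { get; set; }
    public string DisplayName { get; set; }
    public string AvatarUrl { get; set; }
}

public static class UserDtoMapper
{
    // Flatten both halves into one DTO; the client never learns (or cares)
    // which property came from the DB and which came from the third party.
    public static UserSummaryDto ToSummary(UserEntity db, ThirdPartyProfile profile)
    {
        return new UserSummaryDto
        {
            Id = db.Id,
            Email = db.Email,
            DisplayName = profile.DisplayName,
            AvatarUrl = profile.AvatarUrl
        };
    }
}
```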

How can I take advantage of RIA form generation and validation in an application that uses Entity objects over WCF?

This isn't a specific coding question, but rather I'm looking for some advice on how to approach a problem of mine. I have a Silverlight 5 application that uses WCF to do most of its operations - I like the control it gives me compared to RIA. A lot of these WCF methods take Entity Framework objects as arguments, with extra logic and authorization handled on the server side. This works fairly well, and I have a nice little framework that lets me pass objects back and forth, while knowing that the server will only let certain things be changed depending on the user's permissions.
There are a few things about RIA I like, though. I use it to populate datagrids because of its easily generated filters, ordering, etc. I've used RIA more heavily for projects in the past, and I mostly like its form generation and validation metadata abilities. I like that, given a class, it will easily make me a form with all the textboxes, comboboxes, checkboxes, labels, etc., with two-way binding and validate-on-error set up for each of them. Tying in with validation, because I'm using Entity Framework objects, I can't just stick DataAnnotations on the ORM-generated classes, so the autogenerated metadata classes of RIA are very useful in that regard.
The issue seems to be that these objects are incompatible. I can't use RIA generated classes with my methods that are expecting Entity Framework objects. I can't use RIA to generate the forms and then bind them to my regular entity objects because then there's no automatic validation. Does anyone have any ideas on how I can marry these two? I'm open to thoughts/suggestions.
The form generation and validation magic is not tied to the RIA Services client's EntityObject base class.
If you annotate your WCF client's proxy classes with Validation Attributes, you can get more or less the same result.
If you implement IEditableObject, then the datagrid will restore modified data when you hit ESC.
Through careful use of .shared.cs files, and linked source files, you can have most of the server side and client side code being shared.
To achieve even more flexibility, you will need to start crafting your own T4 templates.
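As a rough illustration of the validation-attribute and IEditableObject points above, a hand-annotated client-side type might look like this; the PersonDto name, its properties, and the validation rules are all hypothetical:

```csharp
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

// Validation attributes drive the generated form/validation UI, and
// IEditableObject lets a DataGrid roll back edits when the user presses ESC.
public class PersonDto : IEditableObject
{
    private PersonDto backup;

    [Required(ErrorMessage = "Name is required.")]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(0, 150)]
    public int Age { get; set; }

    public void BeginEdit()
    {
        // Keep a shallow copy so CancelEdit can restore the original values.
        backup = (PersonDto)MemberwiseClone();
    }

    public void CancelEdit()
    {
        if (backup == null) return;
        Name = backup.Name;
        Age = backup.Age;
        backup = null;
    }

    public void EndEdit()
    {
        backup = null;
    }
}
```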

WCF, SOAP, EF, POCO

We are developing an application which should use Entity Framework, and it is a simple WCF SOAP service (not a WCF Data Service). I am very confused now; I have read the following posts, but I don't understand where to go. This question is almost the same, but I have a restriction to use POCOs and to try to avoid DTOs. It's not that big a service. However, in the answer to the question I linked, it is written that if I try to send POCO classes over the wire, there will be problems with serialization.
This post implements a solution related to my problem, but it does not mention anything about the serialization problem. It just sets ProxyCreationEnabled = false, which I have found in many other articles as well.
But these posts are also a little old, so what is the recommendation today? I have to post and get a lot of Word/Excel/PDF/text files as well, so will it be OK to send POCO classes, or will there be problems with serialization?
Thanks!
I definitely do not agree with this answer. It suggests reinventing the wheel (and it does not even indicate why POCOs should not be used).
You can definitely go with POCOs; I see no reason you should have serialization issues. If you do run into any, you can write DTOs for those specific problematic parts and map them to POCOs in the business layer.
It is good practice to use POCOs; as the name itself suggests, they are Plain Old CLR Objects. Writing the same classes again by hand instead of generating them has no advantage. You can simply test it.
UPDATE:
Lazy loading: lazy loading means fetching related objects from the database whenever they are accessed. If you have already serialized and deserialized an entity (e.g. you have sent the entity to the client side over the wire), lazy loading will not work, since you will not have a proxy on the client side.
Proxy: a proxy class simply enables communication with the database (a very simplified definition, by the way). It is not possible to use a proxy instance on the client side; it does not make sense. Just separate the proxy classes and the POCO entities into two different DLLs, share only the POCO objects with the client, and use the proxies on the service side.
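As a minimal sketch of the service side, here is roughly what this looks like with the ProxyCreationEnabled = false setting mentioned in the question; the MyDbContext and Document types are hypothetical stand-ins:

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.ServiceModel;

// WCF service returning plain POCO entities. Turning off proxy creation and
// lazy loading means EF hands back the POCO types themselves, which the
// DataContractSerializer can handle without tripping over dynamic proxies.
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    List<Document> GetDocuments();
}

public class DocumentService : IDocumentService
{
    public List<Document> GetDocuments()
    {
        using (var context = new MyDbContext())
        {
            context.Configuration.ProxyCreationEnabled = false;
            context.Configuration.LazyLoadingEnabled = false;

            // AsNoTracking keeps the entities detached; add Include(...) if
            // related data should travel with them.
            return context.Documents.AsNoTracking().ToList();
        }
    }
}

public class Document
{
    public int Id { get; set; }
    public string FileName { get; set; }
    public byte[] Content { get; set; }   // e.g. a Word/Excel/PDF payload
}

public class MyDbContext : DbContext
{
    public DbSet<Document> Documents { get; set; }
}
```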

Wcf Rest Service - serialize or hand-craft xml?

I'm putting together a plan for an Xml web service to go into a client's site to be consumed by third-parties so that they can access the client's data.
My question is really about best practice here, and at the moment I am deliberating between two different strategies:
1) Create an object model which represents my Xml data and serialize it (either explicitly, or implicitly by exposing the data through a Wcf REST endpoint)
2) Transforming my domain model directly into hand-crafted Xml using XLinq and returning this as a string from the service, setting up the response headers appropriately
I like (1) because I let the system do the generation of the physical Xml and I work purely within the object model, but versioning becomes a problem and I might need finer control over the output.
I like (2) because I do get the fine control and versioning becomes easier, but I'm now hand-crafting Xml and the opportunity for error escalates.
Any opinions? Am I missing something which gives me the best of both worlds? I would go straight for (1) if I knew the best way to 'version an object model' - would using different namespaces suffice?
I'd use serialization. As long as you don't try to use your domain objects for serialization, you can get pretty fine-grained control over the XML, either via the DataContractSerializer or the XmlSerializer. You can then map between your domain objects and your serialization objects using something like AutoMapper.
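On the versioning point, a versioned XML namespace on the serialization contract is usually enough. A minimal sketch, with made-up names:

```csharp
using System.Runtime.Serialization;

// Illustrative serialization contract, kept separate from the domain model.
// An explicit, versioned namespace gives you a knob to turn when the shape
// of the contract changes.
[DataContract(Name = "Product", Namespace = "http://schemas.example.com/catalog/v1")]
public class ProductV1Dto
{
    [DataMember(Order = 1)]
    public int Id { get; set; }

    [DataMember(Order = 2)]
    public string Name { get; set; }

    // Marking later additions as non-required keeps earlier clients working.
    [DataMember(Order = 3, IsRequired = false)]
    public decimal? Price { get; set; }
}
```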

Get Serializable Object from Session in Silverlight

I have a complex, [Serializable] object stored in session. I have Silverlight 3.0 islands in my .aspx pages that need access to this data and its data type. It is my understanding that Silverlight does not support [Serializable], and since it is running on the client, it does not have easy access to session. I am looking for a solid way to access this data in my Page.xaml.cs file.
I am open to storing it in Isolated Storage once it has been retrieved, but how do I retrieve and read it from Silverlight? Hidden fields are not an option, as it is a complex data type with dozens of properties, a few dictionaries, and lists of other objects.
The classic way of accessing this type of data would be with a silverlight-enabled WCF service on the ASP.NET site that accesses the data. You then add a service-reference from the silverlight client and ask the server for data (asynchronously).
Note that by default this will be a separate object model (proxies from "mex"). If you need the same type you'll have to repeat the code in the client (you can't really use assembly sharing between client and server here).
I don't know whether the silverlight version of svcutil will allow type re-use (the regular version does), but if not another option is to just return xml or binary from the service and deserialize locally. One option here would be something like protobuf-net.
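For illustration, a sketch of the server-side piece, assuming the service runs in ASP.NET compatibility mode (so it can see HttpContext.Current.Session); the session key, ShoppingCart, and CartSnapshot types are all hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

// Hypothetical stand-in for the complex [Serializable] object held in session.
[Serializable]
public class ShoppingCart
{
    public List<string> Items { get; set; }
    public decimal Total { get; set; }
}

// Serializable snapshot returned to the Silverlight client.
[DataContract]
public class CartSnapshot
{
    [DataMember] public int ItemCount { get; set; }
    [DataMember] public decimal Total { get; set; }
}

// Requires <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
// in web.config so the service shares session state with the .aspx pages.
[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class SessionDataService
{
    [OperationContract]
    public CartSnapshot GetCart()
    {
        // Pull the object out of the same session the ASP.NET pages use.
        var cart = (ShoppingCart)HttpContext.Current.Session["Cart"];

        return new CartSnapshot
        {
            ItemCount = cart.Items.Count,
            Total = cart.Total
        };
    }
}
```

From the Silverlight project you would then add a service reference and call the generated async method, picking the snapshot up in the completed event.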
