Output large dataset from web service - c#

I wonder if anyone has experience with returning a large dataset from a web service.
The dataset is around 10,000 x 60 floats.
I will be using WCF over HTTP for my web service.
Any ideas on how to approach this are welcome :)
Thanks.

There's no technical reason you can't do it.
You just have to consider the amount of data that is being transferred and realize that it may take your client a while to download and deserialize the results.
If you're really worried about the amount of data going over the wire, you could use a library like Google's protocol buffers to do binary serialization (rather than the XML or JSON that you get out of the box with WCF). You can find the .NET port of Protocol Buffers at:
protobuf-net - Project Hosting on Google Code
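For illustration, a minimal protobuf-net sketch might look like the following; the Row/DataSetDto types and field numbers are just made up for this example:

    using System.Collections.Generic;
    using System.IO;
    using ProtoBuf;

    // Hypothetical DTO for one row of the 10,000 x 60 float dataset.
    [ProtoContract]
    public class Row
    {
        [ProtoMember(1, IsPacked = true)]
        public float[] Values { get; set; }   // 60 floats per row
    }

    [ProtoContract]
    public class DataSetDto
    {
        [ProtoMember(1)]
        public List<Row> Rows { get; set; }   // ~10,000 rows
    }

    public static class DataSetSerializer
    {
        // Serialize to a compact binary payload instead of XML/JSON.
        public static byte[] ToBytes(DataSetDto data)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, data);
                return ms.ToArray();
            }
        }

        public static DataSetDto FromBytes(byte[] payload)
        {
            using (var ms = new MemoryStream(payload))
            {
                return Serializer.Deserialize<DataSetDto>(ms);
            }
        }
    }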

This is not a big data set. You can use a web service to return such a dataset without any implementation problems. You just need to set maxReceivedMessageSize and maxArrayLength on the client.
The real set of questions you should ask is:
How many concurrent clients can use this service?
What is the expected response time?
How often does a client call this service?
What bandwidth is available on production server?
What bandwidth is available on clients?
Answers to these questions will show you whether 2.3 MB is a big data set. If you are afraid of performance and response time issues, you should definitely plan load tests.
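If you configure the binding in code rather than in app.config, raising the client-side quotas might look roughly like this (the 16 MB values are just examples, not recommendations):

    using System.ServiceModel;

    static class ClientBindingFactory
    {
        // Raise the client-side limits so a ~2.3 MB response is accepted.
        public static BasicHttpBinding CreateLargeMessageBinding()
        {
            var binding = new BasicHttpBinding
            {
                MaxReceivedMessageSize = 16 * 1024 * 1024   // 16 MB, example value
            };
            binding.ReaderQuotas.MaxArrayLength = 16 * 1024 * 1024;
            binding.ReaderQuotas.MaxStringContentLength = 16 * 1024 * 1024;
            return binding;
            // The same limits can also be set in app.config under
            // <basicHttpBinding> with a nested <readerQuotas> element.
        }
    }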

Related

Server Console/Service APP <-> WPF Client communication framework sought

I am looking for an open-source framework that can handle communication between my backend host and a WPF frontend client. The following are points I need to consider:
Client is a WPF desktop app; host is a C# console app but can also run as a Windows Service.
Host can accept connection requests and the client connects, but I still need bi-directional communication capabilities.
The client and host are not in the same network (need to send messages over the internet)
Just one client (possibly more, but fewer than 10) will connect to the host.
I'd like to be able to handle the following communication patterns: one-way messages, two-way request/response, and streaming from host to client.
The content will consist of serialized POCOs/DTOs and serialized time series data. Each serialized DTO will be approximately 400 bytes in size; the serialized time series can be a lot larger, potentially several megabytes.
Some messages will be sent on a schedule, for example the host sends a new DTO every second (if the framework includes such a scheduling mechanism, even better).
I'd like to be able to stream data to the client, for example so the client receives data and updates its GUI in real time.
My question is: what might be the best C#-based framework to handle this? (I run on .NET 4.5.) I do not consider WCF because it seems way too heavy and complex for what I am trying to do. I also do not want to delve into web programming, so REST/SOAP-type frameworks are not under consideration. I would be delighted if anyone could recommend a TCP/WebSocket-based framework or similar that is lightweight enough to potentially send messages a lot more frequently than just every second.
Thanks a lot in advance.
Any reason why you are not considering HTTP? It is just over TCP, but you don't need to worry about firewalls and such, especially given that you want to do this over the internet. Self-hosted ASP.NET Web API can be a good candidate. Fire-and-forget and request/response can be accomplished fairly straightforwardly, and for streaming it has PushStreamContent. ASP.NET Web API is lightweight compared to WCF and is open source. Just a suggestion, that's all.
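As a rough sketch of the streaming part, a self-hosted Web API controller using PushStreamContent could look something like this (the controller name and the LoadNextChunkAsync helper are invented placeholders):

    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    // Hypothetical controller streaming time series data to the WPF client.
    public class TimeSeriesController : ApiController
    {
        public HttpResponseMessage Get()
        {
            var content = new PushStreamContent(async (stream, httpContent, context) =>
            {
                try
                {
                    // Write data to the response as it becomes available.
                    for (int i = 0; i < 10; i++)
                    {
                        byte[] chunk = await LoadNextChunkAsync(i);
                        await stream.WriteAsync(chunk, 0, chunk.Length);
                        await stream.FlushAsync();
                    }
                }
                finally
                {
                    stream.Close();   // signals end of the response to the client
                }
            });

            return new HttpResponseMessage(HttpStatusCode.OK) { Content = content };
        }

        // Placeholder for wherever the time series data actually comes from.
        private Task<byte[]> LoadNextChunkAsync(int index)
        {
            return Task.FromResult(new byte[1024]);
        }
    }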
I ended up using ZeroMQ for full-duplex communication between WPF client and Server. It is lightweight, very fast and pretty reliable for my needs.
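For anyone curious, a minimal request/reply round trip with NetMQ (the commonly used .NET ZeroMQ library; the port and messages here are arbitrary) looks roughly like this; for true full-duplex traffic you would move to router/dealer or pub/sub sockets instead:

    using System;
    using System.Threading.Tasks;
    using NetMQ;
    using NetMQ.Sockets;

    class NetMqSketch
    {
        static void Main()
        {
            // Host side: reply socket bound to a TCP port ('@' means bind).
            Task.Run(() =>
            {
                using (var server = new ResponseSocket("@tcp://*:5556"))
                {
                    server.Options.Linger = TimeSpan.FromSeconds(1);  // let the reply flush before dispose
                    string request = server.ReceiveFrameString();
                    server.SendFrame("ack: " + request);
                }
            });

            // Client side: request socket connecting to the host ('>' means connect).
            using (var client = new RequestSocket(">tcp://localhost:5556"))
            {
                client.SendFrame("hello");
                Console.WriteLine(client.ReceiveFrameString());
            }
        }
    }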

Concurrency management in WCF

I have implemented a fairly simple WCF service which handles file transfers from my clients to the server. The problem is that when a client sends a file request,
all of the bandwidth is allocated to that single client, and others have to wait until the requested file transfer is completed.
So my question is: how can I make the service more efficient and let the users share the bandwidth?
[ServiceBehavior(IncludeExceptionDetailInFaults = true,
                 InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
I set the InstanceContextMode attribute to PerCall but that didn't do the trick
UPDATE: this project is similar to mine:
http://www.codeproject.com/Articles/33825/WCF-TCP-based-File-Server
WCF does not have proper load balancing; you will have to develop it yourself.
If you are transferring files (let's say a download), you should send packets of data rather than the complete file at once. When doing this, add delays/sleeps to the process to limit the number of bytes the server sends in each time window; this will make room for other requests.
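A rough sketch of that idea on the server side (the chunk size and delay are arbitrary knobs you would have to tune):

    using System.IO;
    using System.Threading.Tasks;

    static class ThrottledSender
    {
        // Copy a file to an output stream in small chunks, pausing between
        // chunks so one download does not consume all available bandwidth.
        public static async Task SendFileAsync(string path, Stream output,
                                               int chunkSize = 64 * 1024,
                                               int delayMs = 50)
        {
            var buffer = new byte[chunkSize];
            using (var file = File.OpenRead(path))
            {
                int read;
                while ((read = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    await output.WriteAsync(buffer, 0, read);
                    await Task.Delay(delayMs);   // make room for other requests
                }
            }
        }
    }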
It's questionable whether it's desirable to serve up files through a WCF endpoint at all. The reasons against doing this are pretty much exactly the problems you have been having. It works for a few clients at a time, but scaling out requires hosting new instances of the service behind a load balancer.
It would be worth considering hosting your files with some kind of storage service and having your WCF service simply return a link or handle to the file. The file can then be retrieved out of band. Microsoft has created Azure Blob Storage for this exact purpose.
I appreciate this does not address your original question, and I understand the scope of your requirement may not accommodate a large reworking.
Another option is to use a chunking channel if you are transferring large files. Examples: MSDN, CodePlex.
That said, I agree with @hugh's position.

Java android client communication with C# server

I am currently writing an application having a client server architecture.
The client is a Java android application
The server is a C# application.
The client will pull data from the server but in some cases push some data to the C# server as well.
The data that the server needs to forward to the clients is a list of data structures (perhaps in the form of XML?), and sometimes binary data like files.
The client and server are communicating over a wireless network.
Speed and scalability are my top priorities in the design of the system...
I have to write the server as well as the client myself. I will be using sockets for communication.
I need your advice on the form of protocol I should use to exchange data between the Java client and the C# server.
Should I write similar data structures (which seems redundant) in Java and C# and serialize them?
Or should I exchange XML?
I am not sure yet what the best way to do it is.
Essentially there will be commands from the client, and the server will respond with data.
Please advise me on this topic; the data communicated could be as large as several gigabytes over Wi-Fi, so speed is very important.
Well, there's always JSON. It should be well-supported on both ends and is easy for your server to generate and client to consume. Not sure it helps with your bandwidth concerns any...
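On the C# side that could be as simple as Json.NET; the Measurement DTO below is just an invented example:

    using System.Collections.Generic;
    using Newtonsoft.Json;

    // Hypothetical DTO mirrored (conceptually) between the C# server and Java client.
    public class Measurement
    {
        public long Timestamp { get; set; }
        public double Value { get; set; }
    }

    public static class JsonExample
    {
        public static string Serialize(List<Measurement> data)
        {
            // The Android client can parse this with any Java JSON library.
            return JsonConvert.SerializeObject(data);
        }

        public static List<Measurement> Deserialize(string json)
        {
            return JsonConvert.DeserializeObject<List<Measurement>>(json);
        }
    }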
I believe WCF might be appropriate for this. WCF uses SOAP, so a Java implementation should work well. WCF also supports streaming, so transferring large files is possible, though I'm not sure if Java supports the streaming protocol.
As for performance, you will probably be limited by the speed of the device and not the protocol.
Have a look at this session from TechEd 2011: "My Customers Are Using iPhone/Android,But I'm a Microsoft Guy. Now What?"
http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DPR304
It would probably be worth looking into MonoDroid if you want to share code between client and server (and if serialize/de-serialize makes sense).
As I don't know what you're building, I would advise you to read up on REST before you continue though. It should give you valuable pointers on how to create a nice API that can be easily consumed by various clients.

Download very large files (8GB+) - combining WCF and WebClient?

I am having quite some trouble creating a WCF service that supports downloads of very large files. I have read a lot of guides on setting the transferMode attribute to Streamed and increasing all the messageSize and bufferSize attributes to Int32.MaxValue, and still I have no luck. (I am also returning the stream as the message body via the MessageBodyMember attribute, and metadata is sent via the headers using MessageHeader attributes.)
If I set all these attributes, I can download smaller files fine, but when I try to download 1-2GB files I simply get a 400 bad request error which makes it rather hard to debug...
My service should ideally support file sizes of at least 8GB. Is this even doable with WCF? The various messageSize attributes of the web.config file seem to be limited to Int32.MaxValue which equals a maximum file size of 2GB.
From my studies I have found that it seems I will have to use WebClient.DownloadFile instead.
Files should only be available for download to users who have the required rights. With WCF, my download method could take a token parameter that the server could check, returning the stream only if the user had rights to download the requested file. This does not seem straightforward using the WebClient approach. If anyone has some guidelines on how to do this (via the WebClient), I would very much appreciate it.
Ideally my WCF service should administer and provide user tokens and somehow track, for every individual file, which tokens are currently valid (tokens should be usable only once). The download itself should then happen via the WebClient, roughly as sketched below.
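Something like the following is roughly what I have in mind on the client side (the URL format and token handling are placeholders, not working code against a real server):

    using System;
    using System.Net;

    class DownloadSketch
    {
        static void Main()
        {
            // Token obtained from the WCF service beforehand (placeholder value).
            string token = "one-time-token";
            string fileId = "12345";

            using (var client = new WebClient())
            {
                // The server would validate the token, stream the file,
                // and invalidate the token after the first use.
                string url = string.Format(
                    "https://example.com/files/{0}?token={1}", fileId, token);
                client.DownloadFile(url, @"C:\temp\download.bin");
            }
        }
    }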
Thanks in advance for any clues.
You can do this in WCF. Many moons ago I built a service that did this (we didn't have a web server as part of our configuration). We used WCF streaming:
http://msdn.microsoft.com/en-us/library/ms733742.aspx
The strategy to deal with large payloads is streaming. While messages, especially those expressed in XML, are commonly thought of as being relatively compact data packages, a message might be multiple gigabytes in size and resemble a continuous data stream more than a data package. When data is transferred in streaming mode instead of buffered mode, the sender makes the contents of the message body available to the recipient in the form of a stream and the message infrastructure continuously forwards the data from sender to receiver as it becomes available.
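In outline, a streamed download contract looks something like this (the names are illustrative, not the code from that service):

    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileService
    {
        // Returning a Stream (with transferMode="Streamed" on the binding)
        // lets WCF forward the data as it is read instead of buffering it.
        [OperationContract]
        Stream GetFile(string fileName);
    }

    public class FileService : IFileService
    {
        public Stream GetFile(string fileName)
        {
            // WCF disposes the returned stream when the transfer completes.
            return File.OpenRead(Path.Combine(@"C:\files", fileName));
        }
    }

    // Matching binding, configured in code for brevity:
    //   var binding = new BasicHttpBinding
    //   {
    //       TransferMode = TransferMode.Streamed,
    //       MaxReceivedMessageSize = long.MaxValue
    //   };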

WCF or Custom Socket Architecture

I'm writing a client/server architecture where there are going to be possibly hundreds of clients over multiple virtual machines, mostly on the intranet but some in other locations.
Each client will be gathering data constantly and sending a message to a server every second or so. Each message will probably be about 128 characters or so in length.
My question is: for this architecture, where I am writing both client and server in .NET, should I go with WCF or with some socket code I've written previously? I need scalability (which the socket code was written with in mind), reliability, and the ability to handle that many messages.
I would not make a final decision without performing a proof of concept. Create a very simple service, host it, and use a stress test to get real performance results. Then validate the results against your requirements. You have mentioned the number of messages, but you didn't mention the expected response time. There is a similar question currently being discussed on the MSDN forum which complains about slow response times of WCF compared to sockets.
Other requirements are not directly mentioned in your post, so I will make some assumptions for best performance:
Use netTcpBinding - best performance, binary encoding, requires .NET servers/clients. I guess you are going to use net.tcp anyway, because your other choice was direct socket programming.
Don't use security if you don't have to - it reduces performance. Probably not possible for clients outside your intranet.
Reuse the proxy on clients if possible. Opening a TCP connection is expensive; if you reuse the same proxy you will have a single connection per proxy. This will affect instancing of your services - by default a single service instance will handle all requests from a single proxy.
Set service throttling so that your service host is ready for many clients.
Also, you should make some decisions about load balancing. Load balancing for WCF net.tcp connections requires sticky sessions (session affinity) so that after opening the channel the client always calls the service on the same server (because the instance of that service was created only on a single server).
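A host-side sketch along those lines (the endpoint address and throttle numbers are only examples, not recommendations):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class HostSketch
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(TelemetryService),
                                       new Uri("net.tcp://localhost:9000/telemetry"));

            // netTcpBinding: binary encoding, best performance for .NET-to-.NET.
            var binding = new NetTcpBinding(SecurityMode.None);   // no security, intranet only
            host.AddServiceEndpoint(typeof(ITelemetryService), binding, "");

            // Throttling so the host is ready for hundreds of clients.
            host.Description.Behaviors.Add(new ServiceThrottlingBehavior
            {
                MaxConcurrentCalls = 512,
                MaxConcurrentSessions = 512,
                MaxConcurrentInstances = 512
            });

            host.Open();
            Console.WriteLine("Listening... press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }

    [ServiceContract]
    public interface ITelemetryService
    {
        [OperationContract(IsOneWay = true)]
        void Report(string message);   // ~128-character payload from each client
    }

    public class TelemetryService : ITelemetryService
    {
        public void Report(string message) { /* persist or process the message */ }
    }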
100 requests per second does not sound like much for a WCF service, especially with that little payload. It should be quite quick to set up a simple test with a WCF service exposing one echo method that just returns the input, and then hook up a client with a bunch of threads and a loop.
If you already have a working socket implementation you might keep it, but otherwise you can pick WCF and spend your precious development time elsewhere.
From my experience with WCF, I can tell you that its performance under high load is very good. In particular, you can choose between several bindings to meet your requirements for different scenarios (e.g. httpBinding for outside communication, netPeerTcpBinding on a local network).
