I have a WCF service and a WCF client. The service sends the client a big data array: 55,000+ items per request. Building this array on the service side takes less than one second, but receiving it on the client side takes more than 5 seconds! Can I make this faster? I use BasicHttpBinding on the client side, if that is important. (Pagination is not a good option for me.)
Try using messageEncoding="Mtom", which should stream your data. Alternatively, if you control both the WCF client and the server, switch to the net.tcp binding: the lower-level protocol gets rid of the overhead data, and you could also stream the data with it.
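For reference, the two suggested changes would look roughly like this in the service configuration (binding names and sizes here are illustrative, not taken from the question):

<bindings>
  <basicHttpBinding>
    <!-- MTOM encoding on the existing HTTP endpoint -->
    <binding name="mtomHttp" messageEncoding="Mtom"
             maxReceivedMessageSize="67108864" />
  </basicHttpBinding>
  <netTcpBinding>
    <!-- Binary transport with less overhead, when you control both ends -->
    <binding name="binaryTcp" transferMode="Streamed"
             maxReceivedMessageSize="67108864" />
  </netTcpBinding>
</bindings>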
Current situation: an existing SQL Server stored procedure I have no control over returns 10 large strings in separate result sets in about 30 seconds (~3 seconds per result set). The existing ASP.NET Web API controller method that collects these strings only returns a response once all the strings have been obtained from the stored procedure. When the client receives the response, it takes another 30 seconds to process the strings and display the results, for a total of one minute from request initiation to operation completion.
Contemplated improvement: somehow transmit each string to the client as soon as it is obtained from the SqlDataReader, so the client can interpret each string while receiving the subsequent ones. The total time from request initiation to completion would thus be roughly halved.
I have considered the WebClient events at my disposal, such as DownloadStringCompleted and DownloadProgressChanged, but none seems viable, and I generally think I am on the wrong track, hence this question. I have all kinds of ideas, such as saving the strings to temporary files on the server and sending each file name to the client through a parallel SignalR channel for the client to request in parallel, but I feel I would both waste my time and miss your opportunity to enlighten me.
I would not resort to inverting the standard client/server relationship using a "server push" approach. All you need is some kind of intermediary dataset. It could be a singleton object (or multiple objects, one per client) on your server, or another table in an actual database (perhaps NoSQL).
The point is that the client will not directly access the slow data flow you're dealing with. Instead the client will only access the intermediary dataset. On the first request, you will start off the process of migrating data from the slow dataset to the intermediary database and the client will have to wait until the first batch is ready.
The client will then make additional requests as he processes each result on his end. If more intermediary results are already available he will get them immediately, otherwise he will have to wait like he did on the first request.
Meanwhile, the server continuously reads from the slow data set and adds more data to the intermediate data set. You will need a way of marking whether each piece of intermediate data has already been sent to the client. You will probably want to spawn a separate thread for the code that moves data from the slow data source to the intermediate one.
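As a rough sketch of the idea (the controller, the single static queue, and ReadSlowStrings are all hypothetical; a real version would need one queue per client):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Web.Http;

// A background task drains the slow source into a queue, and the
// client polls GetBatch while it processes each batch it already has.
public class ResultsController : ApiController
{
    private static readonly ConcurrentQueue<string> Ready = new ConcurrentQueue<string>();
    private static Task producer;

    [HttpPost]
    public void Start()
    {
        // The first request kicks off the slow read; ReadSlowStrings
        // stands in for the 30-second stored procedure loop.
        if (producer == null)
            producer = Task.Run(() =>
            {
                foreach (var s in ReadSlowStrings())
                    Ready.Enqueue(s); // queued items count as "not yet sent"
            });
    }

    [HttpGet]
    public List<string> GetBatch()
    {
        // Dequeuing marks an item as sent; return whatever is ready now.
        var batch = new List<string>();
        string s;
        while (Ready.TryDequeue(out s))
            batch.Add(s);
        return batch;
    }

    private static IEnumerable<string> ReadSlowStrings()
    {
        // Placeholder for the SqlDataReader loop over the result sets.
        yield break;
    }
}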
I have a .NET Remoting service that returns a class to the client application. That class has a string property whose value can range from 1 KB to 400 KB of data.
I tried passing 256 KB worth of string from server to client, and the client was able to get it in less than 5 seconds, which is still OK since this call will only be used for troubleshooting purposes by an administrator. However, I read here that when sending huge data, "the socket will be blocked from receiving all other messages until it receives the remaining .... packets". If my data ever reaches MB sizes, I do not want to block the client from receiving other messages.
How can I achieve my goal of not blocking the client? Do I compress the string using GZipStream, as done here? Or are there better ways?
A good article from Tess Ferrandez: https://blogs.msdn.microsoft.com/tess/2008/09/02/outofmemoryexceptions-while-remoting-very-large-datasets/
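The GZipStream idea mentioned in the question would look roughly like this (the class and method names are illustrative):

using System.IO;
using System.IO.Compression;
using System.Text;

// Compress the large string before returning it over Remoting,
// and decompress it again on the client.
public static class StringCompressor
{
    public static byte[] Compress(string text)
    {
        var buffer = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(buffer, 0, buffer.Length);
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
            return reader.ReadToEnd();
    }
}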
I am using the new PushStreamContent class in MVC4 to stream notifications from my web server back to multiple listening iOS clients (they use NSURLConnection). The messages being sent are JSON. Messages of less than 1024 bytes are sent as expected; larger messages, however, reach the client in multiple chunks of 1024 bytes each.
What is the best way for my iOS clients to consume these chunked messages? Is there a way to have NSURLConnection aggregate the results for me, or do I need to implement something that receives a chunk, checks whether it is valid JSON, and, if not, waits for the next chunk and appends it, continuing until the JSON is valid? Is there a better way of doing this?
I found that you can adjust the size of the buffer PushStreamContent uses when writing data to the stream. However, chunking the data is the correct thing for it to do, and keeping the buffer small has several advantages. I ended up writing my own method to aggregate the data flowing in on the client side. See the following question for more details:
How to handle chunking while streaming JSON data to NSURLConnection
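For context, the server-side pattern in question looks roughly like this (the endpoint and payload are illustrative); every flush may arrive at the client as a separate chunk to reassemble:

using System.IO;
using System.Net.Http;
using System.Web.Http;

public class NotificationsController : ApiController
{
    // Hypothetical notification endpoint built on PushStreamContent.
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse();
        response.Content = new PushStreamContent((stream, content, context) =>
        {
            using (var writer = new StreamWriter(stream))
            {
                writer.Write("{\"event\":\"ping\"}\n");
                writer.Flush(); // this write may arrive as its own chunk
            } // disposing the writer closes the stream and ends the response
        }, "application/json");
        return response;
    }
}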
I'm currently developing a service in which the client communicates with the server by sending XML files containing messages. To improve the reliability of messaging (the client will be on low-quality, limited-bandwidth mobile internet), I split these messages into chunks of 64 or 128 KB and send them with transferMode="Streamed" over BasicHttpBinding.
Now, I have a problem:
the server should report to the client whether it successfully received each chunk, so that after, e.g., 5 chunks have failed to transfer, the process can be cancelled and postponed for a later retry, and so we can keep track of which chunks have been received and which have not.
I'm thinking about using the callback mechanism to talk back to the client: the server would invoke a callback method ChunkReceived declared in its [OperationContract] whenever it saves a chunk to a file on the server side. But, correct me if I'm wrong, callbacks only work with WSDualHttpBinding and aren't supported by BasicHttpBinding, while streamed transfer isn't supported by WSDualHttpBinding.
So is it OK for me to switch to WSDualHttpBinding and use transferMode="Buffered" (considering that the chunk size is relatively small), or would that hurt the reliability of the transfer? Or maybe I can communicate with the client over BasicHttpBinding by returning some kind of response message, i.e.
[OperationContract]
ServerResponse SendChunk(Chunk chunk);
where ServerResponse holds some enum or bool flag telling the client whether the SendChunk operation succeeded. But then I would have to keep some kind of array on both the client and the server side to track the status of every chunk. I'm just not sure what the best pattern to use here is. Any advice would be highly appreciated.
We had a similar problem in our application: low bandwidth and many disconnects/timeouts. Our messages are smaller, so we didn't split them, but the solution should work for chunks too. We created a Repeater on the client. It has proven to be a reliable solution: it works well on clients with slow, poor connections (like GPRS, which often disconnects while on the move), and the client won't get timeout errors if the server slows down under high load. Here is a modified version that handles chunks; a code sketch of the client loop follows the two lists.
Client:
1. Send chunk #1, with a fairly short timeout.
2. Did an OK response arrive?
   - No: resend the current chunk.
   - Yes: was that the last chunk?
     - Yes: process the final response.
     - No: send the next chunk.
Server:
1. Accept the request.
2. Is the chunk a repeat?
   - Yes: is it the final chunk?
     - Yes: if the response is ready, send it; otherwise wait (this will probably make the client resend).
     - No: send an OK response.
   - No: save the chunk somewhere (a list, dictionary, etc.). Is this the last chunk?
     - Yes: process the whole message, save the response, and send it to the client.
     - No: send an OK response.
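Roughly, the client-side repeater might look like this (the Chunk, ServerResponse, and IChunkService types are stand-ins for the contract sketched in the question):

using System;
using System.Collections.Generic;

// Minimal stand-ins for the contract sketched in the question.
public class Chunk { public int Index; public byte[] Data; }
public class ServerResponse { public bool Ok; }
public interface IChunkService { ServerResponse SendChunk(Chunk chunk); }

public static class ChunkSender
{
    // Resend the current chunk on a timeout or a negative response;
    // give up after maxFailures consecutive failures so the transfer
    // can be postponed and retried later.
    public static bool SendWithRetries(IChunkService proxy, IList<Chunk> chunks, int maxFailures = 5)
    {
        int i = 0, failures = 0;
        while (i < chunks.Count)
        {
            try
            {
                if (proxy.SendChunk(chunks[i]).Ok)
                {
                    i++;          // acknowledged, move to the next chunk
                    failures = 0;
                }
                else if (++failures >= maxFailures)
                {
                    return false; // postpone and retry later
                }
            }
            catch (TimeoutException)
            {
                if (++failures >= maxFailures) return false;
            }
        }
        return true;              // every chunk was acknowledged
    }
}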
I'm trying to send an image to a WCF service to run OCR on it.
For now, I have succeeded in transforming my image into a byte[] and sending it to the server via WCF. Unfortunately, it works for an array smaller than 16 KB and fails for an array larger than 17 KB.
I've already set the readerQuotas, including maxArrayLength, to their maximum values in web.config on the server side.
Do you know how to send big data to a WCF server, or perhaps a library to run OCR directly on WP7?
If all else fails, send it in fragments of 16 KB, followed by an "all done" message that commits it (reassembling on the server if necessary).
A bit of a hack, but how about sending it with an HTTP POST if it isn't too big? Alternatively, change the web service so that it accepts a blob (the current limitation is a limit on the array datatype in the W3C spec).
Finally solved.
You have to update your web.config to allow the server to receive big data. Then use the Stream type on the WCF side and the byte[] type on the WP7 side. The types will match, and both WCF and WP7 will agree to send and receive the data.
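The web.config change referred to here would be something along these lines (the binding name and sizes are illustrative, not taken from the answer):

<basicHttpBinding>
  <binding name="bigDataBinding"
           transferMode="Streamed"
           maxReceivedMessageSize="67108864">
    <readerQuotas maxArrayLength="67108864" />
  </binding>
</basicHttpBinding>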
In WCF:
[OperationContract]
public string ConvertImgToStringPiece(Stream img)
{
    // ... read the stream and run OCR on it ...
}
In WP7:
Service1Client proxy = new Service1Client();
proxy.ConvertImgToStringPieceCompleted += new EventHandler<ConvertImgToStringPieceCompletedEventArgs>(proxy_ConvertImgToStringPieceCompleted);
proxy.ConvertImgToStringPieceAsync(b); // b is my byte[], more than 17 KB
I don't know whether this works on WP7, but with WCF you can also use streams to upload larger amounts of data.
You can try using a WCF session. The key thing to remember is that sessions in WCF are different from the normal sessions we use in web programming. A session is basically a call to a method that starts it, any number of interim calls, and a final call that ends it. You could have one service call that starts the session, several that send chunks of the image, and a last one that closes the session and returns whatever you need; a sketch follows the link below.
http://msdn.microsoft.com/en-us/library/ms733040.aspx
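A minimal sketch of such a sessionful contract might look like this (the interface and operation names are hypothetical):

using System.ServiceModel;

// StartUpload opens the session, UploadChunk is called repeatedly
// with pieces of the image, and EndUpload terminates the session
// and returns the OCR result.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IImageOcrService
{
    [OperationContract(IsInitiating = true)]
    void StartUpload(string fileName);

    [OperationContract(IsInitiating = false)]
    void UploadChunk(byte[] chunk);

    [OperationContract(IsInitiating = false, IsTerminating = true)]
    string EndUpload();
}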