I have a console application that calls a third-party API: it takes records from a database and pushes them to the third party via their WCF API.
For various reasons (mainly the third-party API being very slow, around 7 seconds to respond) we want to post multiple records in parallel, so we have started doing this. However, we are now seeing some strange behaviour from the third-party API: it is duplicating records.
It has been suggested to us by the developers of the API that this is because we are sending the requests over the same connection (which makes sense, as .NET will reuse connections) and they don't/can't/won't support that; they will only support one request over one connection, and then the connection must be closed.
My question is: how do I do this in .NET Core (2.2)? We are currently using an HttpClient, which I'd expect to reuse connections where possible. How can I guarantee that we use a new connection for each request?
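For reference, one way to guarantee this in .NET Core 2.2 is to send a Connection: close header with every request, so the connection is torn down after each response instead of going back to the pool; parallel HTTP/1.1 requests already go out on separate pooled connections, and the header just stops any of them being reused. A minimal sketch, with a placeholder endpoint:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // One shared HttpClient is still fine; ConnectionClose = true adds a
        // "Connection: close" header so the underlying connection is closed
        // after every response instead of being reused.
        var client = new HttpClient();
        client.DefaultRequestHeaders.ConnectionClose = true;

        // Placeholder endpoint and payload.
        var response = await client.PostAsync(
            "https://thirdparty.example/api/records",
            new StringContent("{\"record\":1}"));
        Console.WriteLine(response.StatusCode);
    }
}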
After some digging, I have worked out what the problem is, and there is no way for us to fix it.
The process is:
We POST a record to the API.
The API creates a record in a staging table in the database with a flag showing a status of "to be processed" and then starts polling the table waiting for changes.
The API then invokes an executable on the server which looks at the staging table for any records with a status of "to be processed"
The executable does its processing and then changes the status on the record to "complete"
The API which has been polling the table sees the changed status, reads the record and returns the result to the client.
All fine if you only ever post one record at a time, but as I'm executing in parallel, what happens is:
We call the API 10 times within a few ms of each other
The API creates 10 records in the staging table all with the "to be processed" status.
The API starts polling the staging table for changes and at the same time invokes the executable TEN TIMES
All 10 instances of the executable read all 10 records and process them, each instance unaware that nine other instances are doing the same thing.
All 10 instances of the executable finish processing and change the status on the staging table to "complete"
The API sees the status changes and returns all of the changed records in each response, so each of the 10 requests I sent gets 10 records returned to it.
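For illustration, the duplication would disappear if each instance of the executable atomically claimed a single pending row instead of reading everything marked "to be processed". A hypothetical sketch of that kind of claim (table and column names are invented, and this is a fix only the vendor could apply inside their executable):

using System.Data.SqlClient;

static class StagingClaim
{
    // Claims at most one pending row; ten parallel instances each get a
    // different record (or nothing), so no record is processed twice.
    public static object ClaimOne(string connectionString)
    {
        const string claimSql = @"
            UPDATE TOP (1) StagingTable WITH (ROWLOCK, READPAST)
            SET    Status = 'processing'
            OUTPUT inserted.Id
            WHERE  Status = 'to be processed';";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(claimSql, conn))
        {
            conn.Open();
            return cmd.ExecuteScalar();   // the claimed Id, or null if none pending
        }
    }
}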
Needless to say, we have entered into discussions with the provider of the API. It might be EOL and pretty much out of support, but we're paying for licensing of this thing, and this is a really stupid process that they need to provide a fix or a workaround for.
So in the end it was nothing to do with reusing connections; I don't know why we were told it was.
I've had a fairly good search on Google and nothing has popped up to answer my question. As I know very little about web services (I've only started using them, not building them, in the last couple of months), I was wondering whether I should be OK to call a particular web service as frequently as I wish (within reason), or whether I should build up requests to do in one go.
To give you an example, my app is designed to make job updates, and for certain types of updates it will call the web service. It seems my options are these: I could build a datatable in my app of the updates that require the web service and pass the whole datatable to the web service, writing a method in the web service to process the datatable's updates. Alternatively, I could iterate through my entire table of updates (which includes updates other than those requiring the web service) and call the web service as and when an update requires it.
At the moment it seems like it would be simpler for me to pass each update rather than a datatable to the web service.
In terms of data being passed to the web service each update would contain a small amount of data (3 strings, max 120 characters in length). In terms of numbers of updates there would probably be no more than 200.
"I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go."
Web services or not, any calls routed over the network would benefit from building up multiple requests, so that they could be processed in a single round-trip. In your case, building an object representing all the updates is going to be a clear winner, especially in setups with slower connections.
When you make a call over the network, these things need to happen as the client communicates with the server (again, web services or not):
1. The data associated with your call gets serialized on the client.
2. The serialized data is sent to the server.
3. The server deserializes the data.
4. The server processes the data, producing a response.
5. The server serializes the response.
6. The server sends the serialized response back to the client.
7. The response is deserialized on the client.
Steps 2 and 6 usually cause a delay due to network latency. For simple operations, latency often dominates the timing of the call.
The latency on the fastest networks, used for high-frequency trading, is measured in microseconds; on regular ones it is measured in milliseconds. If you send 100 requests one by one on a network with 1ms lag (2ms per round trip), you waste 200ms on network latency alone! That is one fifth of a second, a lot of time by the standards of today's CPUs. If you can eliminate it simply by restructuring your requests, that's a great reason to do it.
You should usually favor coarse-grained remote interfaces over fine-grained ones.
Consider adding a 10ms network latency to each call - what would be the delay for 100 updates?
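To spell that out: at 10ms each way, every call costs about 20ms in latency, so 100 individual updates spend roughly 2 seconds on latency alone, while a single batched call spends about 20ms. A sketch of what a coarse-grained contract might look like (all names here are hypothetical):

using System.Collections.Generic;

// The question mentions three strings of up to 120 characters per update.
public class JobUpdate
{
    public string JobId { get; set; }
    public string Field { get; set; }
    public string Value { get; set; }
}

public interface IJobUpdateService
{
    void ApplyUpdate(JobUpdate update);           // fine-grained: one round trip per update
    void ApplyUpdates(IList<JobUpdate> updates);  // coarse-grained: one round trip for the batch
}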
Is it possible to send an XHR request to a .NET server while it is busy with another task?
Basically, I am working on an e-commerce application that generates purchase invoices in periodic batches, which may be weekly, monthly, etc. While generating the invoices, a lot of calculation and many database reads and writes happen on the server side, but the user can only wait for the whole process to finish, without knowing the progress the server has made. As per the project's requirements, the application may be generating thousands of invoices at a time, so I guess that will take a lot of time.
So I was wondering: is it possible for me to write code in ASP.NET, C# and jQuery that requests an XHR from the server while it is busy generating invoices, so as to know the progress the server has made?
The process might look like this:
The user selects the criteria for invoice generation on the screen and clicks the Generate Invoice button.
The server receives the request, makes an initial read on the database to determine the number of records or invoices to be generated, and simultaneously starts generating the invoices.
The output of that read is sent to the client, and on the client's side a modal popup shows a progress bar with the number of records to be processed as well as how many have completed.
Since a server cannot send a response by itself without the client initiating a request, I guess the client would send an XHR every 10-20 seconds to learn the progress the server has made on the invoice generation process.
But here comes the actual problem: the server may not be able to respond to requests from the same application domain and report its progress before completing the invoice generation it was asked for earlier. Or else it may break that earlier process.
Can it be done using multiple threads? Or maybe some other .NET technique?
My application is in ASP.NET with C#, and answers with code examples or references will be appreciated.
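One common pattern for the process described above, sketched under stated assumptions (the controller, action and helper names are all invented): run the generation on a background thread, track progress in shared state, and let the jQuery client poll a lightweight action via XHR. Note that ASP.NET serializes concurrent requests that hold a writable session, so the progress action must avoid session writes.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

public class InvoiceController : Controller
{
    // jobId -> (processed, total), shared between the worker and the polling action.
    private static readonly ConcurrentDictionary<Guid, Tuple<int, int>> Progress =
        new ConcurrentDictionary<Guid, Tuple<int, int>>();

    [HttpPost]
    public ActionResult Generate(/* invoice criteria from the screen */)
    {
        var jobId = Guid.NewGuid();
        int total = CountInvoicesToGenerate();            // the initial read operation
        Progress[jobId] = Tuple.Create(0, total);

        Task.Factory.StartNew(() =>                       // keeps running after we return
        {
            for (int i = 0; i < total; i++)
            {
                GenerateSingleInvoice(i);                 // the slow calculation/DB work
                Progress[jobId] = Tuple.Create(i + 1, total);
            }
        });

        return Json(new { jobId, total });                // client opens the progress modal
    }

    [HttpGet]
    public ActionResult GetProgress(Guid jobId)           // polled by the client via XHR
    {
        Tuple<int, int> p;
        return Progress.TryGetValue(jobId, out p)
            ? (ActionResult)Json(new { processed = p.Item1, total = p.Item2 },
                                 JsonRequestBehavior.AllowGet)
            : new HttpNotFoundResult();
    }

    // Hypothetical stand-ins for the real read and generation logic.
    private static int CountInvoicesToGenerate() { return 1000; }
    private static void GenerateSingleInvoice(int i) { /* heavy work here */ }
}

The jQuery side would call Generate once, then poll GetProgress with the returned jobId every few seconds to advance the progress bar.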
I have a .NET WinForms application that I want to allow users to connect to via PHP.
I'm using PHP out of personal choice and to help keep costs low.
Quick overview:
People can connect to my .NET app and start a new thread that will continue running even after they close the browser. They can then log in at any time to see the status of what their thread is doing.
Currently I have come up with two ways to do this:
Idea 1 - Sockets:
When a user connects for the first time and spawns a thread a GUID is associated with their "web" login details.
Next time PHP connects to the app via a socket, PHP sends a "GET.UPDATE" command with their GUID, which is then added to a MESSAGE IN QUEUE for the given GUID.
The thread spawned by the .NET app checks the MESSAGE IN QUEUE and, when it sees the "GET.UPDATE" command, encodes the data as JSON and adds it to the MESSAGE OUT QUEUE.
The next time there is a PHP socket request from that GUID it sends the data in the MESSAGE OUT QUEUE.
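The two queues keyed by GUID could be as simple as concurrent dictionaries of concurrent queues; a rough sketch of Idea 1's plumbing (class and member names are invented):

using System;
using System.Collections.Concurrent;

public class MessageBroker
{
    private readonly ConcurrentDictionary<Guid, ConcurrentQueue<string>> _messagesIn =
        new ConcurrentDictionary<Guid, ConcurrentQueue<string>>();
    private readonly ConcurrentDictionary<Guid, ConcurrentQueue<string>> _messagesOut =
        new ConcurrentDictionary<Guid, ConcurrentQueue<string>>();

    // The socket listener calls this when PHP sends "GET.UPDATE" for a GUID.
    public void EnqueueCommand(Guid clientId, string command)
    {
        _messagesIn.GetOrAdd(clientId, _ => new ConcurrentQueue<string>()).Enqueue(command);
    }

    // The spawned worker thread polls its inbox and answers GET.UPDATE with JSON status.
    public void PumpWorker(Guid clientId, Func<string> getStatusAsJson)
    {
        ConcurrentQueue<string> inbox;
        string command;
        if (_messagesIn.TryGetValue(clientId, out inbox) &&
            inbox.TryDequeue(out command) && command == "GET.UPDATE")
        {
            _messagesOut.GetOrAdd(clientId, _ => new ConcurrentQueue<string>())
                        .Enqueue(getStatusAsJson());
        }
    }

    // The next PHP socket request for this GUID drains the out queue.
    public string DequeueUpdate(Guid clientId)
    {
        ConcurrentQueue<string> outbox;
        string json;
        return _messagesOut.TryGetValue(clientId, out outbox) && outbox.TryDequeue(out json)
            ? json
            : null;
    }
}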
Idea 2 - Database:
Same idea as above, but commands from PHP get put into a database.
The .NET app thread checks the database for new IN MESSAGES.
If it gets a GET.UPDATE command, it adds the JSON-encoded data to the database.
Next time PHP connects, it checks the database for new messages and reports the data accordingly.
I just wondered which of the two ideas above would be best. Messing about with sockets can quickly become a pain, but I'm worried with the database idea that if I have thousands of users, the table could begin to slow down if there are a lot of messages in the queue.
Any advice would be appreciated.
Either solution is acceptable, but if you are looking at a high user load, you may want to reconsider your approach. A WinForms solution is not going to be nearly as robust as a WCF solution if you're looking at thousands of requests. I would not recommend using a database solely for messaging, unless results of your processes are already stored in the database. If they are, I would not recommend directly exposing the database, but rather gating database access through an exposed API. Databases are made to be highly available/scalable, so I wouldn't worry too much on load unless you are looking at a low-end database like SQLite.
If you are looking at publicly exposing the database and using it as a messaging service for whatever reason, might I suggest PostgreSQL's LISTEN/NOTIFY. Npgsql has good support for this and it's very easy to implement. PostgreSQL is also freely available, with a large community for support.
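A minimal sketch of that approach with Npgsql (the connection string and channel name are placeholders): the .NET app LISTENs on a channel and blocks until PHP, or a table trigger, issues a NOTIFY after writing a message row.

using System;
using Npgsql;

class ListenDemo
{
    static void Main()
    {
        // Placeholder connection string and channel name.
        using (var conn = new NpgsqlConnection("Host=localhost;Database=app;Username=app"))
        {
            conn.Open();

            // Channel/Payload are the property names in recent Npgsql versions.
            conn.Notification += (sender, e) =>
                Console.WriteLine("Notification on '{0}': {1}", e.Channel, e.Payload);

            using (var cmd = new NpgsqlCommand("LISTEN php_messages", conn))
                cmd.ExecuteNonQuery();

            while (true)
                conn.Wait();   // blocks until a NOTIFY arrives on this connection
        }
    }
}

On the PHP side, writing the message row and then issuing NOTIFY php_messages, '<payload>' (or doing so from a trigger) wakes the .NET listener without any polling.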
How would one use SignalR to implement notifications in an .NET 4.0 system that consists of an ASP.NET MVC 3 application (which uses forms authentication), SQL Server 2008 database and an MSMQ WCF service (hosted in WAS) to process data? The runtime environment consists of IIS 7.5 running on Windows Server 2008 R2 Standard Edition.
I have only played with the samples and do not have extensive knowledge of SignalR.
Here is some background
The web application accepts data from the user and adds it to a table. It then calls a one-way operation (with the database key) of the WCF service to process the data (a task). The web application returns a page telling the user the data was submitted and that they will be notified when processing is done. The user can look at an "index" page and see which tasks are completed, failed or in progress. They can continue to submit more tasks (each independent of previous data). They can close their browser and come back later.
The MSMQ-based WCF service reads the record from the database and processes the data. This may take anything from milliseconds to several minutes. When it's done processing the data, the record is updated with the corresponding status (completed or failed) and the results.
Most of the time the WCF service is not performing any processing, but when it is, users generally want to know when it's done as soon as possible. The user will still use other parts of the web application even if they don't have data being processed by the WCF service.
This is what I have done
In the primary navigation bar, I have an indicator (similar to Facebook or Google+) that notifies the user when the status of their tasks has changed. When they click on it, they get a summary of what was done and can then view the results if they wish.
Using jQuery, I poll the server for changes. The controller action checks whether any tasks have been modified (completed or failed) and returns them; otherwise it waits a couple of seconds and checks again without returning to the client. To avoid a timeout on the client, it returns after 30 seconds if there were no changes. The jQuery script waits a while and tries again.
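Condensed, that long-poll action looks something like this (names are hypothetical); note that each pending poll pins an ASP.NET worker thread for up to 30 seconds, which goes a long way toward explaining the problems below.

using System;
using System.Linq;
using System.Threading;
using System.Web.Mvc;

public class TasksController : Controller
{
    [HttpGet]
    public ActionResult PollChanges()
    {
        var deadline = DateTime.UtcNow.AddSeconds(30);      // return before the client times out
        do
        {
            var changed = GetModifiedTasks(User.Identity.Name);   // completed or failed tasks
            if (changed.Any())
                return Json(changed, JsonRequestBehavior.AllowGet);
            Thread.Sleep(2000);                             // wait a couple of seconds, recheck
        } while (DateTime.UtcNow < deadline);

        return Json(new object[0], JsonRequestBehavior.AllowGet); // nothing changed
    }

    // Hypothetical data-access helper.
    private object[] GetModifiedTasks(string userName) { return new object[0]; }
}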
The problems
Performance degrades with every user that has a page open; they don't need to be doing anything in particular. We've also noticed that the memory usage of Firefox 7+ and Safari increases over time.
Using SignalR
I'm hoping that switching to SignalR can reduce polling, and thus resource requirements, especially when nothing task-wise has changed in the database. I'm having trouble getting the WCF service to notify clients that it's done processing a task, given the fact that the application uses forms-based authentication.
By asking this question, I hope someone will give me better insight into how they would redesign my notification scheme using SignalR, if at all.
If I understand correctly, you need a way of associating a task to a given user/client so that you can tell the client when their task has completed.
The SignalR API documentation tells me you can call JS methods for specific clients based on the client id (https://github.com/SignalR/SignalR/wiki/SignalR-Client). In theory you could do something like this:
Store the client id used by SignalR as part of the task metadata, and queue the task as normal.
When the task is processed and de-queued, update your database with the status.
Using the client id stored as part of that task, use SignalR to send that client a notification.
You should be able to retrieve the connection that your client is using and send them a message:
string clientId = processedMessage.ClientId; // Stored when you originally queued it.
IConnection connection = Connection.GetConnection<ProcessNotificationsConnection>();
connection.Send(clientId, "Your data was processed");
This assumes you mapped this connection and the client used that connection to start the data processing request in the first place. Your "primary navigation bar" has the JS that started the connection to the ProcessNotificationsConnection endpoint you mapped earlier.
EDIT: From https://github.com/SignalR/SignalR/wiki/Hubs
public class MyHub : Hub
{
    public void Send(string data)
    {
        // Invoke a method on the calling client
        Caller.addMessage(data);

        // Similar to above, the more verbose way
        Clients[Context.ClientId].addMessage(data);

        // Invoke addMessage on all clients in group "foo"
        Clients["foo"].addMessage(data);
    }
}
We have multiple services that do some heavy data processing, and we'd like to run multiple copies of them across multiple servers. Basically the idea is this:
Create multiple identical servers, each running the collection of services.
A separate server will have an executable stub that is run to contact one of these servers (picked arbitrarily from a list) to begin the data processing.
The first server to be contacted will become the "master" server and delegate the various data processing tasks to the other "slave" servers.
We've spent quite a bit of time figuring out how to architect this, and I think the design should work quite well, but I thought I'd see if anyone has suggestions on how to improve this approach.
The solution is to use a load balancer.
I am a bit biased here, since I am from WSO2: the open-source WSO2 ESB can be used as a load balancer, and it has the flexibility to load-balance and route based on different criteria. It also supports failover load balancing.
Here are a few samples related to load balancing with WSO2 ESB...
You can download the product from here...
eBay is using WSO2 ESB to process more than 1 billion transactions per day in their mainstream API traffic...
"The first server to be contacted will become the 'master' server and delegate the various data processing tasks to the other 'slave' servers."
That is definitely not how I would build this.
I would build this with cloud computing in mind (regardless of whether it actually runs on a true cloud or not). I would have a service that receives requests and saves them to a queue. I would then have multiple worker applications that take an item from the queue, mark it as in process, and do whatever needs to be done. Upon completion, the worker updates the queue item as done.
At this point I would either notify the client that the work is done, or have the client poll the server for the status of the queue item.
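A minimal in-process sketch of that shape (names invented; a real deployment would use a durable queue such as MSMQ or a cloud queue rather than an in-memory collection):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class WorkQueueDemo
{
    static void Main()
    {
        // Stands in for the durable queue the receiving service writes to.
        var queue = new BlockingCollection<string>();

        // Multiple worker "applications" (here just tasks) compete for items;
        // each item is processed exactly once, with no master/slave delegation.
        var workers = new Task[4];
        for (int w = 0; w < workers.Length; w++)
        {
            int workerId = w;
            workers[w] = Task.Run(() =>
            {
                foreach (var item in queue.GetConsumingEnumerable())
                {
                    Console.WriteLine("worker {0} processing {1}", workerId, item);
                    // mark in process, do the heavy work, then mark done and
                    // notify the client (or let the client poll for status)
                }
            });
        }

        for (int i = 0; i < 10; i++)
            queue.Add("request-" + i);   // the receiving service saving requests

        queue.CompleteAdding();          // no more work; workers drain and exit
        Task.WaitAll(workers);
    }
}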