I am trying to consume a third-party API that exposes REST and SOAP web services. Before I send any request, I first need to obtain a session by calling one of its web service operations. It creates a session for me and returns a unique identifier. I then need to store and use this unique identifier for all of my subsequent requests and close the session when the user is done. (It works much like connecting to a database, executing commands, and closing the connection.)
Now this seems fairly simple, but there is a limitation: I can only have a finite number of sessions simultaneously, and since I am implementing this on the web I don't know how many people will use the web application at a time and perform actions that hit the third-party API. I need to pool these connections the same way SQL connection pooling is done in ADO.NET. Here is what I think I need:
I need to have open connections ready so that user requests are served immediately
I need the session/connection to be kept alive (I have to ping it once in a while so that it doesn't time out)
I need to control the maximum number of connections for peak hours (increase and decrease the limit)
I've searched for this but I am unable to find a solution. I saw c sharp object pooling pattern implementation and is http connection pooling possible, but those have a different context.
What I'm wondering is whether there is any API/package available that I can use to achieve this functionality, or whether .NET has any classes I can inherit from or use to help me build a pooling system. I'd rather not re-invent the wheel if I can use a proven solution or build on one.
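For context, here is a rough sketch of the kind of pool I have in mind. ThirdPartyClient and its OpenSession/Ping/CloseSession calls are placeholders for the vendor's actual API, not real types:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Placeholder for the vendor's real session API (hypothetical).
    public static class ThirdPartyClient
    {
        public static string OpenSession() { return Guid.NewGuid().ToString(); }
        public static void Ping(string sessionId) { }
        public static void CloseSession(string sessionId) { }
    }

    public sealed class ApiSessionPool : IDisposable
    {
        private readonly ConcurrentBag<string> _idle = new ConcurrentBag<string>();
        private readonly SemaphoreSlim _slots;   // caps the total number of live sessions
        private readonly Timer _keepAlive;

        public ApiSessionPool(int maxSessions)
        {
            _slots = new SemaphoreSlim(maxSessions, maxSessions);
            // Ping idle sessions periodically so the vendor doesn't expire them.
            _keepAlive = new Timer(_ => PingIdleSessions(), null,
                                   TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
        }

        public string Acquire()
        {
            _slots.Wait();   // blocks when all sessions are in use
            string id;
            return _idle.TryTake(out id) ? id : ThirdPartyClient.OpenSession();
        }

        public void Release(string sessionId)
        {
            _idle.Add(sessionId);   // keep the session open for reuse
            _slots.Release();
        }

        private void PingIdleSessions()
        {
            foreach (string id in _idle)   // enumerates a snapshot, thread-safe
                ThirdPartyClient.Ping(id);
        }

        public void Dispose()
        {
            string id;
            while (_idle.TryTake(out id))
                ThirdPartyClient.CloseSession(id);
            _keepAlive.Dispose();
            _slots.Dispose();
        }
    }

The SemaphoreSlim enforces the maximum session count, the bag holds idle sessions for reuse, and the timer does the keep-alive pings. What I'm asking is whether something like this already exists so I don't have to harden it myself.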
Related
I have a problem with a multithreaded app in C# and I would appreciate some help, since I'm new to multithreading.
This is the scenario: a mobile app will issue a lot of queries/requests against my database (MySQL). My goal is to build a server-side application that handles multiple queries using C# on a Linux machine (Mono to the rescue). My company is doing the database side of the application; another company is making the mobile app. I'll send the data to the cloud, and the cloud server will send it on to the client.
I'm reading the threading chapters of CLR via C# and C# 4.0 in a Nutshell, but so far I have only a vague idea of what to do. I believe asynchronous methods would work, since they don't tie up a lot of resources, but I'm a little confused about how to handle thread concurrency (priority, state).
So here are my questions:
What is the best way to solve this problem?
Which class from .NET framework suits best for this job?
How should I handle the query queue?
How can I handle thousands of threads/queries fast enough that my mobile app users get their query results within an estimated 5 minutes?
Some observations:
I know that the time to finish a query will grow with the size of the user's data in my database, but I need to handle both small and large data sets as fast as I can.
I'm sending the data to a cloud database (on Amazon EC2), and from there it will be sent to the client. I won't handle that part; another company will, so my job is to get the queries done quickly and make the results available to the cloud database.
I'm aware that getting the information to my client depends on my IT infrastructure, but the point here is: how can I solve this problem so that I only have to worry about my application infrastructure?
I cannot concatenate the queries into one big string and throw it at the database, because I need to handle each query result separately before sending it to the user.
The storage engine is MyISAM, so no transactions are allowed.
I would create a REST web service, on top of either ServiceStack or Web API, to abstract access to your data via a service. Either can handle simultaneous requests from your mobile client, as they are designed to do so. In addition, I would create a class that mediates access and provides a unit of work for your database (i.e. a repository). The connection provider for MySQL should be able to handle simultaneous requests from your web service, so you should not have to worry about threading and request management. If a single instance is not enough, you can add more web servers running the same code and use a load balancer to distribute the requests across your instances, since the service/data code is the same on each.
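A minimal sketch of the shape I mean, using ASP.NET Web API (the Order/repository names are illustrative, not from your project):

    using System.Collections.Generic;
    using System.Web.Http;

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // The repository mediates all database access (one unit of work per call).
    public interface IOrderRepository
    {
        IEnumerable<Order> GetForUser(int userId);
    }

    public class OrdersController : ApiController
    {
        private readonly IOrderRepository _repository;

        public OrdersController(IOrderRepository repository)
        {
            _repository = repository;
        }

        // GET api/orders?userId=42 -- Web API runs these requests concurrently for you.
        public IEnumerable<Order> Get(int userId)
        {
            return _repository.GetForUser(userId);
        }
    }

You would plug a concrete MySQL-backed repository in via your IoC container of choice; the point is that the controller never touches the connection or threads directly.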
Some resources for Mono-based Web API/ServiceStack:
http://www.piotrwalat.net/running-asp-net-web-api-services-under-linux-and-os-x/
What is the best way to run ServiceStack on Linux / Mono?
I have a .NET WinForms application that I want to allow users to connect to via PHP.
I'm using PHP out of personal choice and to help keep costs low.
Quick overview:
People can connect to my .NET app and start a new thread that keeps running even after they close the browser. They can then log in at any time to see the status of what their thread is doing.
Currently I have come up with two ways to do this:
Idea 1 - Sockets:
When a user connects for the first time and spawns a thread, a GUID is associated with their "web" login details.
The next time PHP connects to the app via a socket, it sends a "GET.UPDATE" command with that GUID, which is then added to a MESSAGE IN queue for the given GUID.
The thread spawned by the .NET app watches the MESSAGE IN queue, and when it sees the "GET.UPDATE" command it encodes the data into JSON and adds it to the MESSAGE OUT queue.
The next time there is a PHP socket request for that GUID, the app sends back the data in the MESSAGE OUT queue (see the sketch below).
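Roughly what I picture for the per-GUID queues (a sketch only; the string payloads stand in for whatever command/JSON format I end up with):

    using System;
    using System.Collections.Concurrent;

    public static class MessageQueues
    {
        // One IN and one OUT queue per user GUID.
        private static readonly ConcurrentDictionary<Guid, ConcurrentQueue<string>> In =
            new ConcurrentDictionary<Guid, ConcurrentQueue<string>>();
        private static readonly ConcurrentDictionary<Guid, ConcurrentQueue<string>> Out =
            new ConcurrentDictionary<Guid, ConcurrentQueue<string>>();

        // The PHP socket handler calls this when it receives "GET.UPDATE".
        public static void EnqueueIn(Guid user, string command)
        {
            In.GetOrAdd(user, _ => new ConcurrentQueue<string>()).Enqueue(command);
        }

        // The user's worker thread polls this, does the work, then enqueues JSON.
        public static bool TryDequeueIn(Guid user, out string command)
        {
            command = null;
            ConcurrentQueue<string> q;
            return In.TryGetValue(user, out q) && q.TryDequeue(out command);
        }

        public static void EnqueueOut(Guid user, string json)
        {
            Out.GetOrAdd(user, _ => new ConcurrentQueue<string>()).Enqueue(json);
        }

        // The next PHP request for this GUID drains the OUT queue.
        public static bool TryDequeueOut(Guid user, out string json)
        {
            json = null;
            ConcurrentQueue<string> q;
            return Out.TryGetValue(user, out q) && q.TryDequeue(out json);
        }
    }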
Idea 2 - Database:
Same idea as above, but commands from PHP get put into a database.
The .NET app thread checks the database for new IN messages.
If it gets a GET.UPDATE command, it adds the JSON-encoded data to the database.
The next time PHP connects, it checks the database for new messages and reports the data accordingly.
I just wondered which of the two ideas above would be best. Messing about with sockets can quickly become a pain, but I'm worried that with the database idea, if I have thousands of users, the table could begin to slow down when there are a lot of messages in the queue.
Any advice would be appreciated.
Either solution is acceptable, but if you are looking at a high user load, you may want to reconsider your approach. A WinForms solution is not going to be nearly as robust as a WCF solution if you're looking at thousands of requests. I would not recommend using a database solely for messaging, unless results of your processes are already stored in the database. If they are, I would not recommend directly exposing the database, but rather gating database access through an exposed API. Databases are made to be highly available/scalable, so I wouldn't worry too much on load unless you are looking at a low-end database like SQLite.
If you are looking at publicly exposing the database and using it as a messaging service for whatever reason, might I suggest PostgreSQL's LISTEN/NOTIFY. Npgsql has good support for this and it's very easy to implement. PostgreSQL is also freely available, with a large community for support.
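A minimal sketch of the LISTEN side with Npgsql (the connection string and channel name are placeholders, and the event-args property names vary slightly between Npgsql versions):

    using System;
    using Npgsql;

    class NotificationListener
    {
        static void Main()
        {
            using (var conn = new NpgsqlConnection(
                "Host=localhost;Database=app;Username=app;Password=secret"))
            {
                conn.Open();
                conn.Notification += (sender, e) =>
                    Console.WriteLine("Got notification on '" + e.Channel +
                                      "': " + e.Payload);

                using (var cmd = new NpgsqlCommand("LISTEN messages", conn))
                    cmd.ExecuteNonQuery();

                while (true)
                    conn.Wait();   // blocks until the server pushes a NOTIFY
            }
        }
    }

On the other side, any session can fire NOTIFY messages, 'payload'; from SQL, so PHP could raise notifications with a plain query while the .NET app reacts immediately.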
We have an ASMX web service hosted in IIS 6. Is there a good way to limit the number of calls to the service in a period of time for a single IP? We don't want to put a hard limit on it (X number of times an hour), but we want to be able to prevent a spike from a single user.
We're currently investigating to see if our firewall is capable of limiting connection attempts. In the case that our firewall is not able to limit connections, is there a good way to handle this programmatically? Rather than trying to come up with our own custom solution and reinventing the wheel, is there an existing implementation or strategy that can be used?
ASMX web services have almost no extensibility. If you have any choice, you should use WCF.
You might be able to write a method, called from each of your operations, that looks at the caller's IP, checks a database, and throws a SoapException if that IP has connected too often. That's about all there is, though. You might be able to do that from a SoapExtension, but you have to be very careful with those.
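As a sketch of the kind of method I mean, here is an in-memory sliding-window check; your database-backed version would replace the dictionary, and the limits shown are placeholders:

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Services.Protocols;

    public static class IpThrottle
    {
        private static readonly object Sync = new object();
        private static readonly Dictionary<string, Queue<DateTime>> Calls =
            new Dictionary<string, Queue<DateTime>>();

        // Call this at the top of each WebMethod.
        public static void Check(int maxCalls, TimeSpan window)
        {
            string ip = HttpContext.Current.Request.UserHostAddress;
            lock (Sync)
            {
                Queue<DateTime> hits;
                if (!Calls.TryGetValue(ip, out hits))
                    Calls[ip] = hits = new Queue<DateTime>();

                // Discard timestamps that have slid out of the window.
                while (hits.Count > 0 && DateTime.UtcNow - hits.Peek() > window)
                    hits.Dequeue();

                if (hits.Count >= maxCalls)
                    throw new SoapException("Too many requests from this address.",
                                            SoapException.ClientFaultCode);

                hits.Enqueue(DateTime.UtcNow);
            }
        }
    }

Each operation would then start with something like IpThrottle.Check(100, TimeSpan.FromMinutes(1)). Note this only works per server; behind a web farm you would need the shared database check instead.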
I have developed a Windows service which reads data from a database; the database is populated via an ASP.NET MVC application.
I have a requirement to make the service reload the data in memory by issuing a SELECT query to the database. This reload will be triggered by the web app. I have thought of a few ways to accomplish this, e.g. Remoting, MSMQ, or simply making the service listen on a socket for the reload command.
I am just looking for suggestions as to what would be the best approach to this.
How reliable does the notification have to be? If a notification is lost (say the communication pipe has a hiccup in a router and drops the socket), will the world come to an end, or is it business as usual? If the service is down, do notifications from the web site need to be queued up for when it starts, or can they be safely dropped?
The more reliable it needs to be, the more you have to go toward a queued solution (MSMQ). If reliability is not an issue, you can choose from the myriad of non-queued solutions (Remoting, TCP, UDP broadcast, HTTP calls, etc.).
Do you care at all about security? Do you fear an attacker may ping your 'refresh' to death, causing at least a DoS if not worse? Do you want to authenticate the web site making the 'refresh' call? Do you need privacy for the notifications (i.e. encryption)? UDP is more difficult to secure (no session).
Does the solution have to allow for easy deployment, configuration, and management in the field (i.e. is it a standalone, packaged product), or is it a one-time deployment that can be fixed 'just in time' if something changes?
Without knowing all these factors, it is difficult to say 'use X'. At least one thing is sure: Remoting is more or less obsolete by now.
My recommendation would be to use WCF, because of the ease of changing bindings on the fly; you can test various configurations (TCP, named pipes, HTTP) without any code change.
BTW, have you considered using Query Notifications to detect data changes, instead of active notifications from the web site? I reckon this is a shot in the dark, but equivalent active cache support exists on many databases.
Simply host a WCF service inside the Windows service. You can use netTcpBinding for the binding, which uses binary over TCP/IP. This will be much simpler than raw sockets and easier to develop and maintain.
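A bare-bones sketch of what that hosting looks like (the contract name, address, and port are illustrative):

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;

    [ServiceContract]
    public interface IReloadService
    {
        [OperationContract]
        void Reload();
    }

    public class ReloadService : IReloadService
    {
        public void Reload()
        {
            // Re-run the SELECT and refresh the in-memory data here.
        }
    }

    public class DataService : ServiceBase
    {
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            _host = new ServiceHost(typeof(ReloadService));
            _host.AddServiceEndpoint(typeof(IReloadService),
                                     new NetTcpBinding(),
                                     "net.tcp://localhost:8523/reload");
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_host != null) _host.Close();
        }
    }

The MVC app then calls Reload() through a ChannelFactory<IReloadService> configured with a matching NetTcpBinding.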
I'd use standard TCP sockets - this will survive all sorts of moving of components, and minimize configuration issues IMHO.
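If you go that route, the listening side can be as small as this (the port and command text are arbitrary choices):

    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    public class ReloadSocketListener
    {
        // Run this on a background thread inside the Windows service.
        public void Run()
        {
            var listener = new TcpListener(IPAddress.Loopback, 9100);
            listener.Start();
            while (true)
            {
                using (TcpClient client = listener.AcceptTcpClient())
                using (var reader = new StreamReader(client.GetStream()))
                {
                    if (reader.ReadLine() == "RELOAD")
                        ReloadData();   // re-issue the SELECT and refresh memory
                }
            }
        }

        private void ReloadData()
        {
            // ...
        }
    }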
I'm creating an application that I want to put into the cloud. This application has one main function.
It hosts socket CLIENT sessions on behalf of other users (think of Beejive IM for the iPhone, where it hosts IM sessions for clients to maintain state on those IM networks, allowing the client to connect/disconnect at will, without breaking the IM network connection).
Now, the way I've planned it, one 'worker instance' can likely only handle a finite number of client sessions (let's say 50,000 for argument's sake). Those sessions will be very long-lived worker tasks.
The issue I'm trying to get my head around is that I will sometimes need to perform tasks against specific client sessions (e.g. if I need to disconnect a client session). With Azure, would I be able to queue up a smaller task that only the instance hosting that specific client session would dequeue?
Right now I'm contemplating GoGrid as my provider, and I solve this issue by using Apache's ActiveMQ messaging software. My web app enqueues 'disconnect' tasks that are assigned to a specific instance ID, and each client session is therefore tied to a specific instance ID. Each instance then only dequeues the 'disconnect' tasks assigned to it.
I'm wondering if it's feasible to do something similar on Azure, and how I would generally do it. I like the idea of not having to set up many different VMs to scale, but instead just deploying a single package. Also, it would be nice to make use of Azure's queues instead of integrating a third-party product such as Apache ActiveMQ or MSMQ.
I'd be very concerned about building a production application on Azure until the feature set, pricing, and licensing terms are finalized. For starters, you can't even do a cost comparison between it and, say, GoGrid, EC2, or Mosso, so I don't see how it could possibly end up a front-runner. Also, we know that all of these systems will have glitches as they mature. Amazon's services are in much wider use than any of the others and have been publicly available for years. IMHO, choosing Azure now is a recipe for pain while it stabilizes.
Have you considered Amazon's Simple Queue Service for queueing?
I think you can absolutely use Windows Azure for this. My recommendation would be to create a queue for each session you're tracking. Then enqueue the disconnect message (for example) on the queue for that session. The worker instance that's handling that connection should be the only one polling that queue, so it should handle performing the task on that connection.
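A sketch of what that per-session polling could look like with the Microsoft.WindowsAzure.Storage client (the queue naming scheme and the DISCONNECT payload are my own invention):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    public class SessionControlPoller
    {
        private readonly CloudQueue _queue;

        public SessionControlPoller(string connectionString, string sessionId)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var client = account.CreateCloudQueueClient();
            // Queue names must be lowercase; one queue per tracked session.
            _queue = client.GetQueueReference("session-" + sessionId);
            _queue.CreateIfNotExists();
        }

        // Called periodically by the worker that owns this session.
        public void Poll()
        {
            CloudQueueMessage msg = _queue.GetMessage();
            if (msg != null)
            {
                if (msg.AsString == "DISCONNECT")
                    DisconnectSession();
                _queue.DeleteMessage(msg);
            }
        }

        private void DisconnectSession()
        {
            // Tear down the client socket for this session here.
        }
    }

The web role enqueues "DISCONNECT" onto the session's queue, and since only the owning worker polls that queue, the task lands on the right instance.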
Regarding the application hosting socket connections for clients to connect to, I'd double-check what's allowed, as I believe only HTTP and HTTPS connections can be made with Azure.