I have a requirement to connect to an HTTP streaming API in my app, and a single connection works fine. What I want to do is connect to multiple instances of this API for different accounts with the streaming provider. For whatever reason this doesn't work: the connections interfere with each other. However, if I run multiple instances of the app separately, each connecting to the streaming API for a different account, everything is fine.
So I wanted to know what the best way would be to isolate/sandbox these connections so they can coexist within the same app. I know it is possible to use separate AppDomains, but I wondered if anyone might suggest a better or easier strategy.
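For reference, here is a minimal sketch of the AppDomain approach, assuming a hypothetical StreamingClient class that wraps one connection to the API. Each connection lives in its own AppDomain, so any static state in the HTTP stack or the client library is kept separate:

```csharp
using System;

// Hypothetical wrapper around one streaming connection. It must derive from
// MarshalByRefObject so it can be called across the AppDomain boundary.
public class StreamingClient : MarshalByRefObject
{
    public void Connect(string account)
    {
        // Open the HTTP streaming connection for this account here.
        Console.WriteLine("Connected for {0} in domain {1}",
            account, AppDomain.CurrentDomain.FriendlyName);
    }
}

class Program
{
    static void Main()
    {
        foreach (var account in new[] { "accountA", "accountB" })
        {
            // One AppDomain per account, so the connections cannot share state.
            AppDomain domain = AppDomain.CreateDomain("stream-" + account);
            var client = (StreamingClient)domain.CreateInstanceAndUnwrap(
                typeof(StreamingClient).Assembly.FullName,
                typeof(StreamingClient).FullName);
            client.Connect(account);
        }
        Console.ReadLine(); // keep the process (and its domains) alive
    }
}
```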
I decided to write a card game in C#, with a WinForms application as the main server that manages the game through a web interface. I chose SignalR self-host for the main server, because I want to sell this app to others and don't want to have to modify the code or HTML of the web interface. So my question is: is that good for handling 10,000 client requests? Is there a way to write this app for better performance?
Another thing: I want the main server, login, cashout, profile, and so on to be mostly configurable by the customers themselves (like Poker Mavens), with me just creating a JSON API that exposes these functions. Please guide me on which way is better to write this app!
With your server code self-hosted and a JavaScript client calling into your server methods from your browser-based client, your design should work.
I am looking at this: https://learn.microsoft.com/en-us/aspnet/signalr/overview/deployment/tutorial-signalr-self-host
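Boiled down, the tutorial's self-host amounts to something like this (a minimal sketch; GameHub and its methods are placeholder names, and the Microsoft.AspNet.SignalR.SelfHost NuGet package is assumed):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        const string url = "http://localhost:8080";
        using (WebApp.Start<Startup>(url)) // OWIN self-host, no IIS needed
        {
            Console.WriteLine("Server running on " + url);
            Console.ReadLine();
        }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR(); // exposes hubs at /signalr for the JavaScript client
    }
}

// Placeholder hub: the JavaScript client calls Send and listens for addMessage.
public class GameHub : Hub
{
    public void Send(string name, string message)
    {
        Clients.All.addMessage(name, message);
    }
}
```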
But I think you'll need to figure out scale-out and server-failure scenarios with the self-host. If there is a patch update on the server machine and it has to restart, you'll need a backup to fail over to. Also consider the case when you need to upgrade the server. So you'll need to be able to host the service on multiple servers, and for that you'll need the SignalR backplane option.
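For example, with the SQL Server backplane the registration is one line in the OWIN startup (a sketch; the connection string is a placeholder, and the Microsoft.AspNet.SignalR.SqlServer package is assumed):

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

class ScaleOutStartup
{
    public void Configuration(IAppBuilder app)
    {
        // Placeholder connection string; all nodes must point at the same database.
        const string connectionString =
            "Server=.;Database=SignalR;Trusted_Connection=True;";
        GlobalHost.DependencyResolver.UseSqlServer(connectionString);
        app.MapSignalR();
    }
}
```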
From a performance point of view, I have tested a Web API + SignalR application on a single 4-core, 14 GB server and was able to scale up to 20k connections, with the server comfortably serving more than 200 requests per second.
With a backplane these numbers dropped to around 100-150 rps.
The response times in both cases were very good, at around 500 ms.
Although please note that your numbers could be VASTLY different based on your actual functionality.
I have a problem with a multithreaded app in C# and I would appreciate some help, since I'm new to multithreading.
This is the scenario: there will be a mobile app that does a lot of queries/requests against my database (MySQL). My goal is to make a server-side application that handles multiple queries using C# on a Linux machine (Mono to the rescue). My company is doing the database side of the application; another company is making the mobile app. I'll send the data to the cloud, and the cloud server will send it to the client.
I'm reading the threading chapters of CLR via C# and C# 4.0 in a Nutshell, but so far I have only a vague idea of what I can do. I believe asynchronous methods would work, since they don't use a lot of resources, but I'm a little confused about how to handle thread concurrency (priority, state).
So here are my questions:
What is the best way to solve this problem?
Which class from the .NET Framework suits this job best?
How should I handle the query queue?
How can I handle thousands of threads/queries fast enough that a mobile app user gets the query result within an estimated time of 5 minutes?
Some observations:
I know that the time to finish a query will grow with the size of the user's data in my database, but I need to handle both small and large data sets as fast as I can.
I'm sending the data to a cloud database (Amazon EC2), and from there it will be sent to the client. I won't handle that part; it will be done by another company. So my job is to get the queries done quickly and make the results available to the cloud database.
I'm aware that getting the information to my client depends on my IT infrastructure, but the point here is: how can I solve this problem in a way that leaves me only having to worry about my application infrastructure?
I cannot concatenate the queries into one big string and throw it at the database, because I need to handle each query result separately before sending the results to the user.
The storage engine is MyISAM, so no transactions are allowed.
I would create a REST web service on top of either ServiceStack or Web API to abstract access to your data via a service. Either of these frameworks can handle simultaneous requests from your mobile client, as they are designed to do so. In addition, I would create a class that mediates access and provides a unit of work for your database (i.e. a repository). The connection provider for MySQL should be able to handle simultaneous requests from your web service, so you should not have to worry about threading and request management. If a single instance is not enough, you can add more web servers running the same code and use a load balancer to distribute the requests across your instances, where the service/data code is the same.
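As a rough sketch of what that looks like with Web API (the OrdersController, IOrderRepository, and Order types are hypothetical stand-ins for your own service and data model):

```csharp
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical repository abstraction: one place that mediates database access.
public interface IOrderRepository
{
    Task<Order> FindAsync(int id);
}

public class Order
{
    public int Id { get; set; }
}

public class OrdersController : ApiController
{
    private readonly IOrderRepository _repository;

    // Wired up by your DI container of choice.
    public OrdersController(IOrderRepository repository)
    {
        _repository = repository;
    }

    // GET api/orders/5 — the framework handles concurrent requests for you,
    // so there is no manual thread management here.
    public async Task<IHttpActionResult> Get(int id)
    {
        var order = await _repository.FindAsync(id);
        if (order == null) return NotFound();
        return Ok(order);
    }
}
```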
Some resources for Mono-based Web API/ServiceStack:
http://www.piotrwalat.net/running-asp-net-web-api-services-under-linux-and-os-x/
What is the best way to run ServiceStack on Linux / Mono?
I need to install an "agent" (I'm thinking it will run as a Windows service) on many servers in my network. This agent will host a WCF service with several operations to perform specific tasks on the server. That part I can handle.
The second part is to build a control center where I can browse which servers are available (the agents will "register" themselves with my central database). Most of the servers will probably be running the most recent version of my service, but I'm sure there will be some servers that fail to update properly and may run an outdated version for some time (if I get it right, the service contract won't change much, so this shouldn't be a big deal).
Most of my WCF development has been many clients to a single WCF service; now I'm doing the reverse. How should I manage all of these endpoints in my control center app? In the past, I've always had a single endpoint mapped in my App.config. What would some code look like that builds a WCF endpoint on the fly, based on, say, string ip; int port; variables I read from my database?
This article has some code examples on how to create an endpoint on the fly:
http://en.csharp-online.net/WCF_Essentials%E2%80%94Programmatic_Endpoint_Configuration
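The gist of it is creating a ChannelFactory at runtime instead of reading the endpoint from App.config. A minimal sketch, assuming IAgentService is your shared contract and that the agents listen over net.tcp:

```csharp
using System;
using System.ServiceModel;

// Assumed shared contract between the control center and the agents.
[ServiceContract]
public interface IAgentService
{
    [OperationContract]
    void PerformTask(string taskName);
}

class ControlCenter
{
    // Build a client channel from the ip/port values read from the database.
    static IAgentService ConnectToAgent(string ip, int port)
    {
        var address = new EndpointAddress(
            new Uri(string.Format("net.tcp://{0}:{1}/AgentService", ip, port)));
        var factory = new ChannelFactory<IAgentService>(new NetTcpBinding(), address);
        return factory.CreateChannel();
    }
}
```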
WCF4 has a Discovery API built-in that might just do everything you need.
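For what it's worth, ad-hoc discovery looks roughly like this on the control-center side (a sketch reusing the same hypothetical IAgentService contract; the agents would additionally need a ServiceDiscoveryBehavior and a UdpDiscoveryEndpoint on their service hosts):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

// Same hypothetical contract as in the previous sketch.
[ServiceContract]
public interface IAgentService
{
    [OperationContract]
    void PerformTask(string taskName);
}

class DiscoveryDemo
{
    static void FindAgents()
    {
        // Probe the network over UDP multicast for services implementing the contract.
        var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
        FindResponse response =
            discoveryClient.Find(new FindCriteria(typeof(IAgentService)));
        foreach (EndpointDiscoveryMetadata endpoint in response.Endpoints)
            Console.WriteLine("Found agent at {0}", endpoint.Address);
    }
}
```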
I need to create a system comprising two components:
A single server that processes and stores data. It also periodically sends out updates to the agents
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' option (where agents are both client and server) seems the obvious choice, but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to decide for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems, or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restrictive networks.
If you use some kind of messaging server (JMS for Java; I'm not sure what the C# equivalent is), then the messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the central server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent.
IMHO, your best option is the pull approach; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends the data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls: some sort of web service method that asks for the updates since the last pull (using some sort of timestamp to fetch the updates it missed), as in the sketch below.
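Something like this, as a sketch (the IUpdateService contract and its members are hypothetical placeholders for whatever web service you define):

```csharp
using System;
using System.Threading;

// Assumed service contract: one method for the agent to push its data,
// one for it to pull the updates it has missed.
public interface IUpdateService
{
    Update[] GetUpdatesSince(DateTime lastPull);
    void SubmitData(byte[] payload);
}

public class Update
{
    public string Payload { get; set; }
}

class Agent
{
    static void PollLoop(IUpdateService service)
    {
        DateTime lastPull = DateTime.MinValue;
        while (true)
        {
            // Ask only for what was missed since the previous pull.
            Update[] updates = service.GetUpdatesSince(lastPull);
            lastPull = DateTime.UtcNow;
            foreach (var update in updates)
                Console.WriteLine("Applying update: {0}", update.Payload);

            Thread.Sleep(TimeSpan.FromMinutes(1)); // polling interval
        }
    }
}
```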
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might well go offline quite often. What would the server do if a push failed? That's usually a tough question/decision, especially if you're not sure whether it was an intended 'going offline' or a system/network failure, etc.
I have developed a Windows service which reads data from a database; the database is populated by an ASP.NET MVC application.
I have a requirement to make the service reload the data in memory by issuing a select query to the database. This reload will be triggered by the web app. I have thought of a few ways to accomplish this, e.g. Remoting, MSMQ, or simply making the service listen on a socket for the reload command.
I am just looking for suggestions as to what would be the best approach to this.
How reliable does the notification have to be? If a notification is lost (let's say the communication pipe has a hiccup in a router and drops the socket), will the world end, or is it business as usual? If the service is down, do notifications from the web site need to be queued up for when it starts, or can they be safely dropped?
The more reliable you need it to be, the more you have to go toward a queued solution (MSMQ). If reliability is not an issue, then you can choose from the myriad of non-queued solutions (Remoting, TCP, UDP broadcast, HTTP call, etc.).
Do you care at all about security? Do you fear an attacker may ping your 'refresh' to death, causing at least a DoS if not worse? Do you want to authenticate the web site making the 'refresh' call? Do you need privacy for the notifications (i.e. encryption)? UDP is more difficult to secure (no session).
Does the solution have to allow for easy deployment, configuration, and management in the field (i.e. is it a standalone, packaged product), or is it a one-time deployment that can be fixed 'just in time' if something changes?
Without knowing the details of all these factors, it is difficult to say 'use X'. At least one thing is sure: Remoting is sort of obsolete by now.
My recommendation would be to use WCF, because of the ease of changing bindings on the fly, so you can test various configurations (TCP, named pipes, HTTP) without any code change.
BTW, have you considered using Query Notifications to detect data changes, instead of active notifications from the web site? I reckon this is a shot in the dark, but equivalent active cache support exists on many databases.
Simply host a WCF service inside the Windows service. You can use netTcpBinding for the binding, which uses binary over TCP/IP. This will be much simpler than raw sockets and easier to develop and maintain.
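A minimal sketch of that (the IReloadService contract, the port, and the service names are assumptions):

```csharp
using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface IReloadService
{
    [OperationContract]
    void Reload();
}

public class ReloadService : IReloadService
{
    public void Reload()
    {
        // Re-run the select query and refresh the in-memory data here.
    }
}

public class DataWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // Self-host the WCF service inside the Windows service process.
        _host = new ServiceHost(typeof(ReloadService),
            new Uri("net.tcp://localhost:9000/ReloadService"));
        _host.AddServiceEndpoint(typeof(IReloadService), new NetTcpBinding(), "");
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null) _host.Close();
    }
}
```

The MVC app can then call Reload() through a ChannelFactory<IReloadService> pointed at that address whenever it writes new data.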
I'd use standard TCP sockets; this will survive all sorts of component moves and minimize configuration issues, IMHO.