How to work around the scale-out (backplane) limitations of SignalR - C#

I use ASP.NET MVC and C#. I found SignalR for transferring data in real time, but SignalR has some limits. According to the documentation:
Using a backplane, the maximum message throughput is lower than it is when clients talk directly to a single server node. That's because the backplane forwards every message to every node, so the backplane can become a bottleneck. Whether this limitation is a problem depends on the application. For example, here are some typical SignalR scenarios:
Server broadcast (e.g., stock ticker): Backplanes work well for this scenario, because the server controls the rate at which messages are sent.
Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join.
High-frequency realtime (e.g., real-time games): A backplane is not recommended for this scenario.
My project needs high-frequency realtime (e.g., real-time games).
I also need real-time video chat.
My scenario :
I have a Master server and multiple Slave servers. Clients connect to the Slave servers, and the Slave servers connect to the Master server.
Example :
Server Slave-1 and server Slave-2 are connected to the Master server; client-A and client-B are connected to Slave-1, and client-C and client-D are connected to Slave-2.
client-A sends a message or data to, or is in a live chat with, client-D.
How can I implement this scenario?
[Update-1]
If SignalR can't handle this problem, what should I use instead?
[Update-2]
In my scenario, the master server acts like a router and the Slave servers act like switches. Clients connect to a switch, and the switch connects to the router. If client-A sends a data packet to client-C, the packet is sent to the router, and the router handles it. There could be over 2,000 Slave servers, and the number of users on each server is over 10,000.
Thanks.

A backplane will introduce delays in message delivery, which will not work well for low-latency work. If you absolutely must have multiple servers to handle your clients, and you absolutely must have minimal latency, then a backplane is probably not going to work for you.
However, check out this conversation on the ASP forums. The poster is seeing average latencies of around 25ms for 60,000 messages per second to 3,000 connected clients on one server.
As is often the case, the trade-off here is between latency and complexity. The optimal solution is for messages to be routed only to the server(s) containing the target client(s). To achieve this you need a way to track every client connection, deal with reconnects to different servers, etc. You can probably solve this with a few tens of hours of hard slog programming, but in doing so you're going to break most of what makes SignalR useful.
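For a sense of what that tracking looks like, here is a minimal sketch using classic ASP.NET SignalR 2.x hub lifecycle events. IConnectionStore is a hypothetical shared store (Redis, a database, or similar) visible to all slave servers — it is not a SignalR API, and injecting it into the hub would require a custom dependency resolver:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical shared store visible to all slave servers.
public interface IConnectionStore
{
    void Add(string user, string serverId, string connectionId);
    void Remove(string user, string connectionId);
}

public class ChatHub : Hub
{
    private static readonly string ServerId = System.Environment.MachineName;
    private readonly IConnectionStore _store;

    public ChatHub(IConnectionStore store) { _store = store; }

    public override Task OnConnected()
    {
        // Record which server owns this user's connection, so a router
        // can forward messages to just that node instead of all of them.
        _store.Add(Context.User.Identity.Name, ServerId, Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        _store.Remove(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }
}

A master/router can then look up the target user's server and forward the message only there. Note that reconnects fire OnConnected on whichever server the client lands on, so the store has to tolerate stale entries.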
For alternatives, the first that comes to mind is ZeroMQ. A bit more work, especially if your clients are browser based, but low latency and high throughput are project goals for ZeroMQ. You'll need to handle scale-out yourself though... and you're back to tracking connection points across multiple servers and reconnects.
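From C#, ZeroMQ is usually consumed through the NetMQ library. A minimal pub/sub sketch, with placeholder addresses and topic names (and note ZeroMQ's usual slow-joiner caveat: a subscriber that connects late misses earlier messages):

using NetMQ;
using NetMQ.Sockets;

// Publisher side (e.g., the master). Address and topic are placeholders.
using (var pub = new PublisherSocket())
{
    pub.Bind("tcp://*:5556");
    pub.SendMoreFrame("game.state")      // topic frame
       .SendFrame("player-42 moved");    // payload frame
}

// Subscriber side (e.g., a slave server).
using (var sub = new SubscriberSocket())
{
    sub.Connect("tcp://master-host:5556");
    sub.Subscribe("game.state");
    string topic = sub.ReceiveFrameString();
    string payload = sub.ReceiveFrameString();
}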
If neither of these solves your problems, then you might have to look at changing your architecture. One common method for MMOs is to have related clients connect to the same servers to reduce inter-server communication requirements. Clients who legitimately need to communicate real-time data are put together on a single server which doesn't have to worry about back-plane issues. This server then communicates back to the 'Master' server only what is required to maintain world state and so on.
Plan your architecture to reduce the problems before they start... but don't spend weeks working on something that might not be necessary. Do some tests on SignalR and see what effect the backplane actually has on latency before you dive into the abyss.

Related

Pushing a notification to million users with ASP.NET core, websocket limits and SignalR performance

I have a social-style web-app, and I want to add "live" updates to votes under posts.
At first I thought that a client-side loop with /GET polling the server every few seconds would be an inferior choice, and that all cool kids use websockets or server sent events.
But now I found that websockets would be limited to 65k live connections (even less in practice).
Is there a way to push vote updates to a large number of users in real time?
The app has around ~2 million daily users, so I'd expect 200-300k simultaneous socket connections.
The stack is ASP.NET Core backend hosted on an Ubuntu machine with nginx reverse proxy.
In its current state, all load is easily handled by a single machine, and I don't really want to add multiple instances just to be able to work with SignalR.
Maybe there is a simple solution that I'm missing?

How to block TCP and UDP packets (flood attack)

I have a program that tells you if your computer is online or not. The way I do it is with the help of a Server that basically sends UDP packets to clients. Clients then respond back letting the server know that they are online. If a client does not respond for the next 5 seconds then I mark it as offline.
Anyways, I was testing this service, and from a different computer I sent thousands of UDP packets to the server. After sending so many packets, the server was not working the way it was supposed to.
So I know when someone is sending me a lot of packets. The problem is: how do I block those packets so that my server can still work?
Edit Possible Solution
I think I will implement the following solution; what do you think?
I will now require 2 or more servers. If a client finds that the server is not responding, it will then talk to the second server. So the attacker would also have to know that there is a second server. Depending on how secure you want to be, you could even have 5 servers. I guess that if the attacker knows there are 5 servers then I've just wasted my time and money, right? lol
The general solution to this is to buy extra hardware that sits in front of the server and inspects incoming packets.
What that extra hardware does depends on what solution you want to use. You could have it distribute the requests across many servers all running the same software (this would make the hardware you added a Load Balancer). You could also have the hardware detect an unusually large number of packets coming from a single address and start dropping packets from that address instead of forwarding them on to the server (this would make the hardware you added a Stateful Firewall).
There are more options beyond those two, but all solutions revolve around reducing the load on the server (usually by shifting the load to another piece of hardware dedicated to taking it). You could potentially upgrade your software to be more resilient to packet floods, but unless your current software is written very poorly it won't buy you much more capacity.
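At the software level, a crude per-source rate limit can at least stop one noisy host from starving everyone else. A sketch (port and thresholds are arbitrary, and a real flood can still saturate the network link before your code ever runs):

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class RateLimitedListener
{
    static void Main()
    {
        var counts = new Dictionary<IPAddress, int>();
        DateTime windowStart = DateTime.UtcNow;
        var udp = new UdpClient(9000); // placeholder port

        while (true)
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] data = udp.Receive(ref remote);

            // Reset the counters every second (a fixed 1-second window).
            if (DateTime.UtcNow - windowStart > TimeSpan.FromSeconds(1))
            {
                counts.Clear();
                windowStart = DateTime.UtcNow;
            }

            int n;
            counts.TryGetValue(remote.Address, out n);
            counts[remote.Address] = ++n;
            if (n > 100) continue; // drop: too many packets from this address

            // ...normal handling of the datagram goes here...
        }
    }
}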

What is the best way to handle a C# server?

I'll have about 10 computers communicating with the server on a regular basis.
The packets won't be too data intensive. (updates on game data)
The server will be writing to a postgres database
My main concern is concurrent access by the clients. How can I handle the queuing of client requests?
Is WCF a good route? So for instance, rather then opening tcp/ip streams is it feasible to use GET/POST requests & such by WCF over a LAN?
Can anyone suggest any other areas of focus for possible problems?
With WCF you have a lot of flexibility; choosing what to use depends on what you want to do. Knowing what to choose comes from experimentation and experience.
You can use tools like Fiddler, Wireshark and ServiceTraceViewer to analyze the conversations between clients and the server - they should be used periodically to detect inefficient conversations.
You're producing a game that needs to communicate data via a central server...thus presumably you want that to be as efficient as possible and with the lowest latency.
Thus ideally any requests that come into the server you want to be processed as fast as possible...you want to minimize any overhead.
Do you need to guarantee that the message was received by the server and consequently you are guaranteed to receive a reply (i.e. reliable delivery)?
Do the messages need to be processed in the same order they were sent?
Then consider: does your server need to scale? Will you only ever have a small number of clients at a time? You mention 10...will this ever increase to a million?
Then consider your environment...you mention a LAN...will clients always be on the same LAN, i.e. an intranet? If so, then they can communicate directly with the server using TCP sockets...without going through an IIS stack.
With the assumptions above, with WCF you could choose an endpoint that uses netTcpBinding with binary encoding, and mark the Service with:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Single)]
(note that both settings go in a single ServiceBehavior attribute; the attribute cannot be applied twice).
This would establish sessions between the clients and the server (as long as you use a Binding that supports sessions). It doesn't scale well, because each client gets its own service instance to manage its session, but it allows multiple clients to be handled in parallel, and the binary encoding "might" reduce data size (you could choose to skip that, since you said your data isn't intensive, and binary encoding can actually add overhead for small message sizes).
If, on the other hand, your service needs the ability to scale out indefinitely and keep response times low, doesn't mind if messages are lost or delivered in a different order, has to be deliverable over HTTP, etc., then there are other Bindings, InstanceContextModes, message encodings, and so on that you can use. And if you really want to get sophisticated, you can have your Service expose multiple Endpoints configured in different ways, and your client can pick its preferred recipe.
NOTE: if you host the WCF Service in IIS and you are using netTcpBinding, then you have to enable the net.tcp protocol; this isn't needed if you self-host the WCF Service.
http://blogs.msdn.com/b/santhoshonline/archive/2010/07/01/howto-nettcpbinding-on-iis-and-things-to-remember.aspx
So that is "1" way to do it with WCF for a particular scenario.
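As a rough self-hosted sketch of that recipe (contract, service, and address are invented names):

using System;
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IGameData
{
    [OperationContract]
    void SendUpdate(string payload);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class GameDataService : IGameData
{
    public void SendUpdate(string payload)
    {
        // ...write the game update to postgres here...
    }
}

class Program
{
    static void Main()
    {
        // netTcpBinding uses TCP transport with binary encoding by default.
        var host = new ServiceHost(typeof(GameDataService),
                                   new Uri("net.tcp://localhost:8523/gamedata"));
        host.AddServiceEndpoint(typeof(IGameData), new NetTcpBinding(), "");
        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}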
Experiment with one way...look at its memory use, latency, resilience, etc. and if your needs change, then change the Binding/Encoding that you use...that's the beauty and nightmare of WCF.
If your clients must use HTTP and POST/GET requests then use a Http flavoured binding. If you need guaranteed message delivery, then you need a reliable session.
If your Service needs to infinitely scale then you might start looking at hosting your Service in the Cloud e.g. with Windows Azure.
http://blog.shutupandcode.net/?p=1085
http://msdn.microsoft.com/en-us/library/ms731092.aspx
http://kennyw.com/work/indigo/178

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising of 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice, but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to determine for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems, or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone-home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most constricted networks.
If you use some kind of messaging server (JMS for Java; I'm not sure of the C# equivalent), then the messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would let you accomplish the hybrid model without needing to open a port on the agent.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends that data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls: some sort of web service method that "asks" for the updates since the agent's last pull (using a timestamp of some sort to get the updates it missed); see the sketch below.
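A rough shape of that "give me what I missed" pull, with invented names throughout (a basicHttpBinding-style SOAP endpoint on the server side would keep it interoperable with Java agents):

using System;
using System.ServiceModel;
using System.Threading;

[ServiceContract]
public interface IAgentService
{
    [OperationContract]
    void SubmitResults(string agentId, byte[] data);                // agent -> server

    [OperationContract]
    string[] GetUpdatesSince(string agentId, DateTime lastPullUtc); // server -> agent
}

class AgentLoop
{
    static void Run(IAgentService server, string agentId)
    {
        DateTime lastPull = DateTime.MinValue;
        while (true)
        {
            // Ask only for what was missed since the previous poll.
            string[] updates = server.GetUpdatesSince(agentId, lastPull);
            lastPull = DateTime.UtcNow; // a server-issued stamp would avoid clock skew
            foreach (string u in updates)
                Console.WriteLine(u);
            Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval is arbitrary
        }
    }
}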
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might well go "offline" quite often. What will the server do when a push fails? That's usually a tough question/decision, especially if you can't tell whether it was an intended "going offline" or a system/network failure, etc.

How would you notify clients about changed data on the server using .Net 2.0?

Imagine a WinForms client app that displays fairly complex calculated data fetched from a server app with .Net Remoting over an HttpChannel.
Since the client app might be running for a whole workday, I need a method to notify the client that new data is available so the user is able to start a reload of the data when he needs to.
Currently I am using remoted .Net events, serializing the event to the client and then rethrowing the event on the side of the client.
I am not very happy with this setup and plan to reimplement it.
Important for me is:
.Net 2.0 based technology
ease of use
low complexity
robust enough to survive a server or client restart and still be functional
When limited to .Net 2.0, how would you implement such a feature? What technologies / libraries would you use?
I am looking for inspiration on how to attack the problem.
Edit:
The client and server exist in the same organisation, typically a LAN, perhaps a WAN/VPN situation.
This mechanism should only make the client aware that there is new data available. I'd like to keep Remoting for getting the actual data to the client since that is working pretty well. MSMQ comes with Windows, doesn't it? So it should be OK to use it, but I'm open to any alternative.
I've implemented a similar notification mechanism using MSMQ. The client machine opens a local, public queue, and then advises the server of its queue name. When changes occur, the server pushes notifications into all the client queues it has been made aware of. This way the client will know that data is ready, even if it wasn't running when the notification was sent.
The only downside is that it requires MSMQ on the clients, so this may not work if you don't have that kind of control over your client's machines.
For an extra level of redundancy (for example, if a client machine is completely down, and therefore the client queue is unavailable) you could queue notifications on the server prior to dissemination to clients. Notifications in the server queues are only removed when the client is successfully contacted (or perhaps after 3 failed attempts, etc.)
Also in that regard, if the server fails to deliver messages to a client a measured number of times, over a measured period of time, then support entities are notified, error alerts go out, and the client queue is removed from the list of destinations. When I say "measured" I mean a frequency/duration that makes sense to the setting. In my case, it was 5 retries with 5 minute intervals between attempts.
It might also make sense to have the client "renew" its notification subscription at intervals. If a renewal doesn't occur, then eventually the client queue is removed from the destination list by a "groomer" process in the service.
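In code, the moving parts are small. A sketch with System.Messaging (queue names and paths are placeholders, and creating a public queue assumes an Active Directory environment):

using System.Messaging;

class NotificationQueues
{
    // Client side: open (or create) the local queue and block until a
    // notification arrives.
    static void WaitForNotification()
    {
        const string path = @".\notifications";
        MessageQueue queue = MessageQueue.Exists(path)
            ? new MessageQueue(path)
            : MessageQueue.Create(path);
        Message msg = queue.Receive(); // blocks; on return, new data is available
        // ...flag the UI, then fetch the actual data over Remoting as before...
    }

    // Server side: push a notification into a client queue it was told about.
    static void NotifyClient(string clientMachine)
    {
        string path = string.Format(@"FormatName:DIRECT=OS:{0}\notifications",
                                    clientMachine);
        using (var clientQueue = new MessageQueue(path))
        {
            clientQueue.Send("data-changed"); // body content is up to you
        }
    }
}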
It sounds as though you need to implement a message-queue-based solution. It's easy to implement, can survive reboots, and the technology is mature both on the server (MSMQ, MQSeries) and on the client (System.Messaging).
If you can't find anything built-in and assuming you know the address of all the clients, you could send them a UDP message when data changes. Using UdpClient, this is very easy. The datagram doesn't even need to contain any data if the client app can assume that any UDP data on a certain port means it needs to get new data from the server.
If necessary, you can even make this a broadcast packet (if you don't know who the clients are and they are on the same subnet as the server), so long as the server isn't too "chatty".
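The whole mechanism fits in a few lines; a sketch with an arbitrary placeholder port:

using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpNotify
{
    // Server: fire-and-forget datagram whenever data changes.
    static void Broadcast()
    {
        var sender = new UdpClient();
        sender.EnableBroadcast = true; // needed to send to the broadcast address
        byte[] payload = Encoding.ASCII.GetBytes("changed"); // could even be empty
        sender.Send(payload, payload.Length,
                    new IPEndPoint(IPAddress.Broadcast, 15000));
    }

    // Client: any datagram on the port means "go fetch fresh data".
    static void Listen()
    {
        var listener = new UdpClient(15000);
        var remote = new IPEndPoint(IPAddress.Any, 0);
        listener.Receive(ref remote); // blocks; on return, trigger a reload
    }
}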
Whatever solution you decide on, I would urge you to avoid having the clients poll. This will create a lot of unnecessary network traffic and still won't perform all that well.
I would usually use a UI timer on the client to periodically hit the server to see if there was new or updated data. (Assuming you have a mechanism to identify that you have new data, like time stamps for new rows, file time stamps, or a table with last-calculated dates, etc.)
That way the server doesn't have to know about the clients. The clients can check at their leisure, etc.
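As a sketch of that approach, inside the form's constructor; 'server' (the existing Remoting proxy), GetLastChangedUtc, and reloadButton are invented names:

using System;
using System.Windows.Forms;

Timer pollTimer = new Timer();
pollTimer.Interval = 60000; // once a minute; tune to taste
DateTime lastSeen = DateTime.MinValue;
pollTimer.Tick += delegate
{
    DateTime stamp = server.GetLastChangedUtc(); // cheap "anything new?" call
    if (stamp > lastSeen)
    {
        lastSeen = stamp;
        reloadButton.Enabled = true; // let the user decide when to refresh
    }
};
pollTimer.Start();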
