Constantly changing data and caching? - C#

Currently I have one server running a C# .NET Core console app that takes incoming TCP IoT data.
Each device reports every 5 seconds and there are 30,000 devices.
I now need to move to a distributed model where HAProxy distributes the long-running TCP connections across multiple servers using the leastconn algorithm.
The issue I have is that each new piece of information from an IoT device is compared against that device's "last record", and logic fires if there is a state change, etc.
As the IoT devices are on SIM cards, I have seen TCP connections get interrupted; the next time a device connects it gets routed to a different server, where its "last record" is very old and not the actual last record.
This causes obvious data consistency issues.
I can live with the "last record" being up to a minute out of date, but not much more.
I had started to look into caching with Redis and Memcached, but from what I read you should use caching for infrequently changing data, whereas mine changes every 5 seconds.
As far as I can see, I have the following options:
Configure HAProxy to use sticky sessions, pinning a source IP address to a particular server - not perfect, and it will unbalance load because several IoT devices can report from the same IP on different ports.
Use a local in-memory cache plus a distributed cache, broadcast recent data to the main cache every minute, and pull from the main cache every minute on all servers (see the sketch below).
Is there something else I'm not considering?
What software engineering patterns exist to solve this problem?
And is a cache simply not suitable for this scenario?
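For illustration only (this is not from the original post), here is a minimal sketch of the shared piece of the second option: each device's "last record" kept in Redis via StackExchange.Redis, so any server can look it up when a device reconnects. The DeviceRecord type, key format and expiry are assumptions.

// Sketch: store each device's latest record in Redis so every server sees
// the same "last record" regardless of where the TCP connection lands.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public class DeviceRecord
{
    public string DeviceId { get; set; }
    public string State { get; set; }
    public DateTimeOffset SeenAt { get; set; }
}

public class LastRecordStore
{
    private readonly IDatabase _db;

    public LastRecordStore(IConnectionMultiplexer redis) => _db = redis.GetDatabase();

    public Task SaveAsync(DeviceRecord record) =>
        _db.StringSetAsync(
            "device:last:" + record.DeviceId,
            JsonSerializer.Serialize(record),
            expiry: TimeSpan.FromMinutes(10)); // assumed retention window

    public async Task<DeviceRecord> GetAsync(string deviceId)
    {
        RedisValue value = await _db.StringGetAsync("device:last:" + deviceId);
        return value.IsNullOrEmpty ? null : JsonSerializer.Deserialize<DeviceRecord>((string)value);
    }
}

Each server could still keep a short-lived local copy for the hot path and only hit Redis when a device (re)connects, which keeps the "last record" inside the one-minute staleness budget mentioned above.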

Related

Pushing a notification to million users with ASP.NET core, websocket limits and SignalR performance

I have a social-style web-app, and I want to add "live" updates to votes under posts.
At first I thought that a client-side loop polling the server with GET every few seconds would be an inferior choice, and that all the cool kids use WebSockets or server-sent events.
But now I have found that WebSockets would be limited to about 65k live connections (even fewer in practice).
Is there a way to push vote updates to a large number of users in real time?
The app has around ~2 million daily users, so I'd expect 200-300k simultaneous socket connections.
The stack is ASP.NET Core backend hosted on an Ubuntu machine with nginx reverse proxy.
In its current state all load is easily handled by a single machine, and I don't really want to add multiple instances just to be able to work with SignalR.
Maybe there is a simple solution that I'm missing?
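For illustration only (not part of the original question), one common pattern with ASP.NET Core SignalR is to group connections by the post they are viewing and broadcast vote updates only to that group; the hub, method and group names below are assumed.

// Sketch: clients join a group per visible post; the server broadcasts new
// vote counts to just that group. Names are illustrative, not from the post.
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class VotesHub : Hub
{
    public Task WatchPost(int postId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, "post-" + postId);

    public Task UnwatchPost(int postId) =>
        Groups.RemoveFromGroupAsync(Context.ConnectionId, "post-" + postId);
}

// Called wherever a vote is recorded.
public class VoteNotifier
{
    private readonly IHubContext<VotesHub> _hub;

    public VoteNotifier(IHubContext<VotesHub> hub) => _hub = hub;

    public Task PublishAsync(int postId, int voteCount) =>
        _hub.Clients.Group("post-" + postId).SendAsync("voteUpdated", postId, voteCount);
}

Batching the broadcasts (for example, at most once per second per post) keeps the outgoing message rate bounded even when many users vote at once.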

Alternative to starting a thread inside of an MVC .NET Core application?

There are currently multiple instruments in our manufacturing plant inserting data into multiple tables in a database at different rates.
There is a computer on each production line connected to a web page where the operator enters the assigned job number and some related information is displayed.
Our goal is to display indicators based on the data inserted by the plant devices. The statuses relate to raw material availability, warehouse storage availability, temperature range, etc.
My initial idea was to modify the current MVC application by spawning a thread per production line that scans the inserted information every 10 seconds and may push data through SignalR to advise operators. I read that starting threads inside an MVC application is a bad practice that may disturb how IIS manages threads.
I was wondering how to host fast, recurrent, independent processes in MVC if not by using a separate thread?
Thank you for your time!
Yes, starting polling threads might not be the best approach here. An alternative solution that I might suggest is to modify your so-called instruments (that are currently inserting data) to be SignalR clients and broadcast a message to the server each time they insert some data. The SignalR server could then simply broadcast this message to the JavaScript SignalR clients connected to it. This way you could achieve direct communication (through the SignalR server) between the instruments that are producing data and the browser clients that can display this data in real time.
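To make that concrete, here is a minimal sketch assuming ASP.NET Core SignalR; the hub name, method names and URL are invented for illustration.

// Hub on the web server: relays instrument readings to the browsers
// watching a given production line. All names here are illustrative.
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.AspNetCore.SignalR.Client;

public class InstrumentHub : Hub
{
    // Browser clients call this to subscribe to their production line.
    public Task JoinLine(string lineId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, lineId);

    // Instruments call this right after inserting a row.
    public Task ReportReading(string lineId, string payload) =>
        Clients.Group(lineId).SendAsync("readingReceived", payload);
}

// Instrument side: a .NET SignalR client pushing a reading to the hub.
// In practice the connection would be opened once and kept alive.
public static class InstrumentClient
{
    public static async Task SendReadingAsync(string lineId, string payload)
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("https://plant-server/instrumentHub") // assumed URL
            .Build();

        await connection.StartAsync();
        await connection.InvokeAsync("ReportReading", lineId, payload);
        await connection.StopAsync();
    }
}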

Feasibility of pushing notifications from web application to desktop windows forms application using SignalR

I want to build a notification system between a web application and a desktop WinForms application.
I want my web application to push notifications into my Windows Forms desktop application, and at the same time I want to filter the messages that will be delivered to the users. I mean that not all connected users will receive all messages; there will be a filtering process on the server side (web application) to determine who receives what.
I want the desktop app to receive the notifications if it is already connected, and if not it will receive nothing. I don't want to save notifications coming from the server if the app is not connected or not running. The pushed notifications will be instant and will not be saved on the client side; they will just be displayed.
Also I have a concern: if multiple users are connected and are requesting from the server at the same time, will that affect the performance of the server?
There will be 20,000 users, for example, using the Windows Forms app to receive the notifications depending on their categories from the server side (Web Application).
Does SignalR support this scenario?
Does SignalR support this scenario?
Yes, it does, and it's suitable for real-time notifications.
You can use groups to broadcast messages to specified subsets of connected clients. But don't use groups for sensitive data.
You can track/map connected clients and send notifications to a specific user or users by connectionId.
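A minimal sketch of that tracking with classic ASP.NET SignalR (the in-memory dictionary and the client-side "notify" handler are assumptions for illustration; disconnect cleanup is omitted for brevity):

// Sketch: map each user name to their connection IDs, then target only those
// connections. In-memory only; with several servers you'd need a shared store.
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class NotificationHub : Hub
{
    private static readonly ConcurrentDictionary<string, ConcurrentBag<string>> Connections =
        new ConcurrentDictionary<string, ConcurrentBag<string>>();

    public override Task OnConnected()
    {
        string userName = Context.User.Identity.Name;
        Connections.GetOrAdd(userName, _ => new ConcurrentBag<string>()).Add(Context.ConnectionId);
        return base.OnConnected();
    }

    public void NotifyUser(string userName, string message)
    {
        if (Connections.TryGetValue(userName, out var connectionIds))
        {
            foreach (string id in connectionIds)
                Clients.Client(id).notify(message); // "notify" is the client-side handler name
        }
    }
}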
As for performance, 20,000 concurrent connections (I assume concurrent) is really a lot. First, you should change the IIS configuration to support more than 5,000 concurrent requests.
You should optimize SignalR for performance. Keep messages small: at most 4 KB per message (I suggest even less with this many concurrent connections).
SignalR uses JSON, so you can use JsonProperty to reduce message size:
[JsonProperty("op")]
public decimal OrderPrice { get; set; }
Why is message size important? Because every connection has a buffer on the server side. If the client can receive 1 message but the server sends 2 in that time, the extra messages pile up in the buffer. These buffers use memory, so you have to be more careful with 20,000 concurrent connections or you will suffer from memory consumption.
But in your situation, reducing message size alone will not be enough; you should also decrease the buffer limit.
DefaultMessageBufferSize: By default, SignalR retains 1000 messages in memory per hub per connection. If large messages are being used, this may create memory issues which can be alleviated by reducing this value. This setting can be set in the Application_Start event handler in an ASP.NET application, or in the Configuration method of an OWIN startup class in a self-hosted application. The following sample demonstrates how to reduce this value in order to reduce the amount of server memory used:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Any connection or hub wire up and configuration should go here
        GlobalHost.Configuration.DefaultMessageBufferSize = 500;
        app.MapSignalR();
    }
}
I suggest you use 100. What's the drawback of decreasing the buffer limit? When the buffer is full, it will not accept any new messages, which means the client will lose some of your notifications. So if your notifications are transactional (the user has to receive them), don't decrease the buffer size too much. If they aren't, you can decrease it (the minimum is 32).
You should use .NET 4.5 or higher on both the server and client sides, and your clients should be running Windows 8 or higher to support WebSockets.
After applying these steps, monitor your memory consumption and adjust message size, buffer limit and message frequency.
Bonus: 20,000 concurrent connections is a lot, so if you run into performance problems I suggest using a load balancer (or scaling up) together with a SignalR backplane. That way you have not 1 web server but, say, 4, each handling roughly 5,000 concurrent connections. When you send a message on one server, the other servers (and their clients) get it through the backplane. What's the drawback? You need a shared resource (e.g., a database) to track/map users to connectionIds.
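For illustration, a Redis backplane can be wired up roughly like this with the Microsoft.AspNet.SignalR.Redis package (the server name, port and event key below are placeholders):

// Sketch: enable a Redis backplane so a message sent on one web server
// reaches clients connected to the others. Connection details are placeholders.
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "", "signalr-backplane");
        app.MapSignalR();
    }
}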

How to solve the limitations of SignalR scaleout with a backplane

I use ASP.NET MVC and C#. I found SignalR for transferring data in real time, but SignalR has some limits.
According to the documentation:
Using a backplane, the maximum message throughput is lower than it is when clients talk directly to a single server node. That's because the backplane forwards every message to every node, so the backplane can become a bottleneck. Whether this limitation is a problem depends on the application. For example, here are some typical SignalR scenarios:
Server broadcast (e.g., stock ticker): Backplanes work well for this scenario, because the server controls the rate at which messages are sent.
Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join.
High-frequency realtime (e.g., real-time games): A backplane is not recommended for this scenario.
My project needs high-frequency realtime (e.g., real-time games).
I also need real-time video chat.
My scenario :
I have a Master server and multiple Slave servers; clients connect to the Slave servers, and the Slave servers connect to the Master server.
Example :
Server Slave-1 and server Slave-2 are connected to the Master server; client-A and client-B are connected to Slave-1, and client-C and client-D are connected to Slave-2.
Client-A sends a message or data to, or is in a live chat with, client-D.
How can I implement this scenario?
[Update-1]
If I don't use SignalR for this problem, then what should I use?
[Update-2]
In my scenario, the Master server acts like a router and the Slave servers act like switches. Clients connect to a switch and the switches connect to the router. If client-A sends a data packet to client-C, the packet is sent to the router and the router handles it. There could be over 2,000 Slave servers, and the number of users on each server is over 10,000.
Thanks.
A backplane will introduce delays in message delivery, which will not work well for low-latency work. If you absolutely must have multiple servers to handle your clients, and you absolutely must have minimal latency, then a backplane is probably not going to work for you.
However, check out this conversation on the ASP forums. The poster is seeing average latencies of around 25ms for 60,000 messages per second to 3,000 connected clients on one server.
As is often the case, the trade-off here is between latency and complexity. The optimal solution is for messages to be routed only to the server(s) containing the target client(s). To achieve this you need a way to track every client connection, deal with reconnects to different servers, etc. You can probably solve this with a few tens of hours of hard slog programming, but in doing so you're going to break most of what makes SignalR useful.
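For illustration, the tracking this implies could be as simple as a shared registry of which server currently owns each user's connection, sketched here with a Redis hash via StackExchange.Redis (the key name and the server-to-server forwarding transport are assumptions):

// Sketch: record which server owns each user's connection so a sender can
// forward a message to just that server instead of broadcasting to every node.
using System.Threading.Tasks;
using StackExchange.Redis;

public class ConnectionRegistry
{
    private readonly IDatabase _db;
    private readonly string _serverId;

    public ConnectionRegistry(IConnectionMultiplexer redis, string serverId)
    {
        _db = redis.GetDatabase();
        _serverId = serverId;
    }

    // Call on connect/reconnect so the mapping always points at the current server.
    public Task RegisterAsync(string userId) =>
        _db.HashSetAsync("user-servers", userId, _serverId);

    public Task UnregisterAsync(string userId) =>
        _db.HashDeleteAsync("user-servers", userId);

    // The sending server looks up the owner and forwards the message to it
    // over whatever server-to-server transport you choose (HTTP, queue, ...).
    public async Task<string> FindServerAsync(string userId)
    {
        RedisValue server = await _db.HashGetAsync("user-servers", userId);
        return server.IsNullOrEmpty ? null : (string)server;
    }
}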
For alternatives, the first that comes to mind is ZeroMQ. A bit more work, especially if your clients are browser based, but low latency and high throughput are project goals for ZeroMQ. You'll need to handle scale-out yourself though... and you're back to tracking connection points across multiple servers and reconnects.
If neither of these solves your problems, then you might have to look at changing your architecture. One common method for MMOs is to have related clients connect to the same servers to reduce inter-server communication requirements. Clients who legitimately need to communicate real-time data are put together on a single server which doesn't have to worry about back-plane issues. This server then communicates back to the 'Master' server only what is required to maintain world state and so on.
Plan your architecture to reduce the problems before they start... but don't spend weeks working on something that might not be necessary. Do some tests on SignalR and see what effect the backplane actually has on latency before you dive into the abyss.

C# TCP Asynchronous connection actively refused with low CPU utilization

I have a C# TCP Asynchronous socket server.
Testing locally, I can accept 5K connections over 35 seconds. I am running a quad-core hyper-threaded PC (8 threads) and the utilization is ~20-40% on each core. As the connections are accepted, the server asks the client for a bunch of data and then performs a bunch of database inserts.
I moved the server application and SQL database to Amazon AWS, with the database on a small instance and the server on a medium instance.
The Amazon medium server (EC2) has 1 virtual core and 2 ECUs. From what I can tell it only has 1 thread (from Performance Monitor).
If I try to connect 1,000 clients over 35 seconds to the medium server, after ~650 connections I start to receive "Connect failed (b): No connection could be made because the target machine actively refused it."
Looking at Performance Monitor, I noticed that the CPU utilization is only ~10-15%.
I am guessing that the core is not getting loaded because it is working through a huge queue of connections made up of small operations, which does not generate enough load to drive up CPU usage, since the server only has 1 virtual core.
Does anyone have any experience with this, and does my theory make sense? Am I stuck at a hardware limitation (do I need to bump up the server size)?
If not, any ideas on how to get more utilization and support more/quicker connections?
Does anyone have experience with this?
EDIT:
I upgraded my instance on Amazon EC2 to the High-CPU Extra Large instance.
The instance now has 8 cores and 20 ECUs.
I am experiencing the same problem: I get "No connection could be made because the target machine actively refused it" after ~600 connections.
Check whether the Amazon servers are configured to allow that many open TCP connections. In Windows you have to allow more connections by adding a registry entry for the maximum allowed TCP connections.
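"Actively refused" during a connection burst can also happen when the listening socket's pending-connection backlog fills up faster than the server accepts. Below is a minimal sketch of an async accept loop with a larger backlog; the value of 1024 is illustrative, and this may or may not be the actual cause here.

// Sketch: accept loop with a deeper listen backlog so bursts of incoming
// connections are queued instead of refused. Values are illustrative.
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

public static class AcceptLoop
{
    public static async Task RunAsync(int port, CancellationToken token)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start(backlog: 1024); // allow a deep pending-connection queue

        while (!token.IsCancellationRequested)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = Task.Run(() => HandleClientAsync(client)); // keep the accept loop free
        }
    }

    private static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        {
            // ... read the client's data and do the database work here ...
            await Task.CompletedTask;
        }
    }
}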
Your code seems to be utilizing multiple cores while you were running it locally. Did you make changes to the code when you moved it to a single core? I have seen a similar case in my application when I was testing with dummy user creation and programmatically making them access the application at the same time; after a certain number, connections would get dropped or fail to access the application.
If your requirement is heavy utilization, you can go with a higher-spec EC2 instance and get a reserved instance (if you are looking to use it for a long time), which can work out quite economical. Also, try using CloudFront and a load balancer (if you have 2 or more instances); they will definitely improve the number of users you can handle. Hope it helps.
I was able to fix this issue. I upgraded my instance to use PIOPS. I also used a better internet connection locally, as my old upload rate was too slow to support sending all the data and connections.
With these 2 changes I am able to create 5K+ TCP connections locally and connect to the server with no problem.
