I'm using SignalR to keep a live connection between clients and the web APIs on the server. We have a lot of groups, each containing three types of clients: an Android app, a web app, and a Raspberry Pi (IoT) device. They have to keep a specific status updated all the time (immediately after the status changes in the database) and check the server for GET and POST commands and responses.
But I think keeping a live connection (WebSocket) open all the time will drain the Raspberry Pi's battery, so I'm wondering whether there is a way to manage client connections manually on an arbitrary schedule.
Can I open and close client connections from the server (hub) on an arbitrary schedule and repeat it every day?
For example, I want the connections of a group of clients to stay open from 3 to 5 PM, and outside that window to open each connection every 5 minutes, keep it alive for 1 second, and repeat.
Maybe it sounds silly, but I want SignalR on a scheduled time span. Is there a better way?
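One feasibility note: a SignalR server cannot dial out to a client, so the schedule has to run on the device itself; the hub can only disconnect. Here is a minimal sketch of such a client-side schedule, assuming the ASP.NET Core SignalR .NET client (Microsoft.AspNetCore.SignalR.Client); the hub URL and method name are hypothetical:

```csharp
// Sketch: client-driven connection schedule for the Raspberry Pi device.
// Assumes the ASP.NET Core SignalR .NET client; the URL and hub method
// name are hypothetical.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class ScheduledClient
{
    static async Task Main()
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("https://example.com/statusHub") // hypothetical URL
            .Build();

        connection.On<string>("StatusChanged", status =>
            Console.WriteLine($"New status: {status}"));

        while (true)
        {
            var now = DateTime.Now;
            if (now.Hour >= 15 && now.Hour < 17)
            {
                // Busy window (3-5 PM): keep the connection open.
                if (connection.State == HubConnectionState.Disconnected)
                    await connection.StartAsync();
                await Task.Delay(TimeSpan.FromSeconds(30));
            }
            else
            {
                // Off-hours: open the connection every 5 minutes for ~1 second.
                if (connection.State != HubConnectionState.Connected)
                    await connection.StartAsync();
                await Task.Delay(TimeSpan.FromSeconds(1));
                await connection.StopAsync();
                await Task.Delay(TimeSpan.FromMinutes(5));
            }
        }
    }
}
```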
I have a social-style web app, and I want to add "live" updates to the votes under posts.
At first I thought that a client-side loop polling the server with GET requests every few seconds would be an inferior choice, and that all the cool kids use WebSockets or server-sent events.
But then I read that WebSockets would be limited to about 65k live connections per server (even fewer in practice).
Is there a way to push vote updates to a large number of users in real time?
The app has around 2 million daily users, so I'd expect 200-300k simultaneous socket connections.
The stack is an ASP.NET Core backend hosted on an Ubuntu machine behind an nginx reverse proxy.
At the moment all load is easily handled by a single machine, and I don't really want to add multiple instances just to be able to work with SignalR.
Maybe there is a simple solution that I'm missing?
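For what it's worth, the polling loop dismissed above is often the simple solution at this scale: tiny GET responses every few seconds instead of 300k held-open sockets. A minimal sketch of such a poll endpoint, assuming an ASP.NET Core (.NET 6+) minimal-API project; the routes and the in-memory VoteStore are hypothetical stand-ins for the real data layer:

```csharp
// Sketch: the plain-polling alternative. Assumes ASP.NET Core minimal
// APIs; the routes and the in-memory VoteStore are hypothetical
// stand-ins for the real data layer.
using System.Collections.Concurrent;
using System.Collections.Generic;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<VoteStore>();
var app = builder.Build();

// Clients poll this every few seconds; the response is tiny and, since
// every user sees the same counts, can be cached briefly in nginx to
// collapse the polling load.
app.MapGet("/api/posts/{id}/votes", (int id, VoteStore store) =>
    Results.Ok(new { postId = id, votes = store.Get(id) }));

app.MapPost("/api/posts/{id}/votes", (int id, VoteStore store) =>
    Results.Ok(new { postId = id, votes = store.Increment(id) }));

app.Run();

// Hypothetical in-memory store standing in for the real database.
class VoteStore
{
    private readonly ConcurrentDictionary<int, int> _votes = new();
    public int Get(int id) => _votes.GetValueOrDefault(id);
    public int Increment(int id) => _votes.AddOrUpdate(id, 1, (_, v) => v + 1);
}
```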
Currently I have one server running a C# .NET Core console app that takes incoming TCP IoT data.
Each device reports every 5 seconds and there are 30,000 devices.
I now need to move to a distributed model where HAProxy distributes the data across multiple servers using the leastconn model (long-running TCP connections).
The issue I have is that each new bit of information from an IoT device is compared against its "last record", and logic fires if there is a state change, etc.
As the IoT devices have SIM cards, I have seen TCP connections get interrupted, and the next time a device connects it gets routed to a different server, where its "last record" is very old and not the actual last record.
This causes obvious data consistency issues.
I can live with the "last record" being a minute out of date, but not much more.
I had started to look into caching with Redis and memcached, but from what I read you should use caching for infrequently changing data, whereas mine changes every 5 seconds.
As far as I can see, I have the following options:
Configure HAProxy to use sticky sessions from the source IP address to a certain server. Not perfect, and it will imbalance the load, as multiple IoT devices can report from the same IP on different ports.
Use a local in-memory cache plus a distributed cache, broadcast recent data to the main cache every minute, and pull from the main cache on all servers every minute (see the sketch below).
Is there something else I'm not considering?
What software engineering patterns exist to solve this problem?
And is a cache not suitable for this scenario?
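One common pattern here is to treat Redis not as a read cache but as shared state: whichever server handles a report writes the device's last record to Redis keyed by device ID, so the next report can be compared against it no matter which server receives it. A minimal sketch, assuming the StackExchange.Redis client; the key scheme, record format, and TTL are hypothetical:

```csharp
// Sketch: shared "last record" per device in Redis, so the state-change
// comparison works on any server. Assumes StackExchange.Redis; the key
// names and the 10-minute TTL are hypothetical.
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

class LastRecordStore
{
    private readonly IDatabase _db;

    public LastRecordStore(ConnectionMultiplexer redis) =>
        _db = redis.GetDatabase();

    // Stores the new record and returns the previous one (or null), so
    // the caller can run its state-change logic against it.
    public async Task<string> ExchangeAsync(string deviceId, string newRecord)
    {
        RedisKey key = $"device:{deviceId}:last";
        RedisValue previous = await _db.StringGetSetAsync(key, newRecord);
        await _db.KeyExpireAsync(key, TimeSpan.FromMinutes(10));
        return previous.HasValue ? (string)previous : null;
    }
}
```

A write like this is visible to every server within Redis's usual sub-millisecond latency, comfortably inside the one-minute staleness budget; the "cache only infrequently changing data" rule doesn't really apply here, because Redis is being used as a small shared database rather than as a cache in front of one.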
I know TCP won't immediately break an established connection when no data can flow for a period of time. But is it possible to increase this time?
I am developing a client-server application in C# that streams continuous data from the server to the client. I want to keep collecting data if the cable is reconnected within one minute.
There are three timeouts that you can use in the TcpClient: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient?view=netcore-3.1#properties
Connect timeout: the time it takes to establish the connection.
Receive timeout: the timeout on the receiving side once a read has been initiated: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.receivetimeout?view=netcore-3.1#System_Net_Sockets_TcpClient_ReceiveTimeout
Send timeout: the timeout on the sending side once a send has been initiated: https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.tcpclient.sendtimeout?view=netcore-3.1#System_Net_Sockets_TcpClient_SendTimeout
Those are the out-of-the-box settings you can use. If you didn't mean one of these, please clarify in your question.
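A minimal sketch of setting these, with hypothetical host, port, and values; note that TcpClient exposes ReceiveTimeout and SendTimeout as properties but has no connect-timeout property, so the sketch approximates one by racing ConnectAsync against a delay:

```csharp
// Sketch: TcpClient timeout settings. The host, port, and timeout
// values are hypothetical.
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class TimeoutDemo
{
    static async Task Main()
    {
        using var client = new TcpClient();

        // These apply once the connection is up, to synchronous
        // reads/writes on the underlying stream (milliseconds).
        client.ReceiveTimeout = 60_000; // tolerate up to 1 minute of silence
        client.SendTimeout = 10_000;

        // TcpClient has no built-in connect timeout, so race the
        // connect against a delay.
        Task connect = client.ConnectAsync("example.com", 9000);
        if (await Task.WhenAny(connect, Task.Delay(5_000)) != connect)
            throw new TimeoutException("Connect took longer than 5 seconds.");
        await connect; // propagate any connect failure
    }
}
```

For surviving a one-minute cable pull specifically, the receive timeout can be the relevant knob: TCP itself keeps retransmitting for quite a while, so as long as a read completes within the timeout, the stream resumes where it left off.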
I have a problem with the .NET client in a Windows service. After some time it freezes.
The use case is this:
A user uploads one or more files to our website. The service detects this and starts to process the files, sending a signal to the website when processing starts and when it ends.
The service checks for new files every 10 seconds. In each iteration we make a new connection to the server and stop it again; most of the time no messages are sent.
I suspect the connect/disconnect cycle is causing this. Is it better to open the connection when the service starts and then use that connection in every iteration? The service must be able to run without restarts for months.
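That long-lived approach could look something like the sketch below, assuming the ASP.NET Core SignalR client (Microsoft.AspNetCore.SignalR.Client); the hub URL and method name are hypothetical, and the older Microsoft.AspNet.SignalR.Client has a broadly similar shape:

```csharp
// Sketch: one connection for the lifetime of the service instead of a
// connect/disconnect every 10 seconds. Assumes the ASP.NET Core
// SignalR client; the URL and hub method name are hypothetical.
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class ProcessingNotifier
{
    private readonly HubConnection _connection;

    public ProcessingNotifier()
    {
        _connection = new HubConnectionBuilder()
            .WithUrl("https://example.com/processingHub") // hypothetical
            .WithAutomaticReconnect() // rides out transient network drops
            .Build();
    }

    // Call once when the service starts.
    public Task StartAsync() => _connection.StartAsync();

    // Call from the existing 10-second polling loop; reuses the open
    // connection instead of creating a new one each iteration.
    public Task NotifyAsync(string fileName, string phase) =>
        _connection.SendAsync("ProcessingUpdate", fileName, phase);

    // Call once when the service shuts down.
    public Task StopAsync() => _connection.StopAsync();
}
```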
I have a chat site (http://www.pitput.com) that connects users via socket connections.
On the client side I have a Flash object that opens a connection to a port on my server.
On the server I have a service that listens on that port asynchronously.
All is working fine, except that after an unknown period of time (about a couple of minutes) of talking to someone, the server closes my connection and I get an error on the server:
" A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond".
I don't know exactly how a TCP socket works. Does it check for a "live" connection every couple of seconds? How does it decide when to close the connection? I'm pretty sure the close operation is not coming from the client side.
Thanks.
Sounds like the server is handling the connection but not responding. This is the point where I usually pull out Wireshark to find out what's going on.
TCP/IP does have an option for checking for live connections; it's called "keepalive." Keepalives are hardly ever used. They're not enabled by default. They can be enabled on a system-wide basis by tweaking the Registry, but IIRC the lowest timeout is 1 hour. They can also be enabled on a single socket (with a timeout in minutes), but you would know if your application does that.
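For reference, a minimal sketch of enabling keepalive on a single socket in C#; the basic option is portable, while the IOControl call that sets per-socket timing is Windows-specific, and the chosen intervals are hypothetical:

```csharp
// Sketch: per-socket TCP keepalive. SetSocketOption is portable; the
// IOControl timing tweak is Windows-specific, and the 60 s / 10 s
// values are hypothetical.
using System;
using System.Net.Sockets;

static class KeepAliveHelper
{
    public static void Enable(Socket socket)
    {
        // Turn keepalive on with the OS-default timing.
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.KeepAlive, true);

        // Windows only: first probe after 60 s idle, then every 10 s.
        // Layout: on/off, idle time (ms), probe interval (ms), 4 bytes each.
        var values = new byte[12];
        BitConverter.GetBytes(1u).CopyTo(values, 0);
        BitConverter.GetBytes(60_000u).CopyTo(values, 4);
        BitConverter.GetBytes(10_000u).CopyTo(values, 8);
        socket.IOControl(IOControlCode.KeepAliveValues, values, null);
    }
}
```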
If you are using a web service and your client is connecting to an HTTP/HTTPS port, then the connection may be getting closed by the HTTP server (HTTP servers usually close their connections after a couple of minutes of idle time). It is also possible that an intermediate router is closing it on your behalf after some amount of idle time (this is not default behavior, but corporate routers are sometimes configured with such "helpful" settings).
If you are using a Win32 service, then it does in fact sound like the client side is dropping the connection or losing its network (e.g., moving out of range of a wireless router). In the latter case, it's possible that the client remains oblivious to the fact that the connection has been closed (a situation called "half-open"): the server sees the close but the client thinks the connection is still there.
Is this an ASP web service hosted with some company? If so, the server generally recycles apps every 10 to 20 minutes. You cannot have a web service running indefinitely, unless it's your own server (I believe).