If I have multiple pages that could use multiple hub classes, what is the best way to manage this?
For instance:
Is it bad to navigate to another page in the website and essentially "reopen" the connection to the same hub class that was open on the previous page?
Am I correct in thinking that opening multiple hub connections on a page is ok because they are all unified in one connection, even if they are different hub classes?
You can have multiple hubs sharing one connection on your site. SignalR 2.0 was updated to handle multiple hubs over one SignalR connection with no loss in performance.
Official docs: http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#multiplehubs
All clients will use the same URL to establish a SignalR connection with your service ("/signalr" or your custom URL if you specified one), and that connection is used for all Hubs defined by the service.
There is no performance difference for multiple Hubs compared to defining all Hub functionality in a single class.
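For illustration, here's a minimal sketch with the classic SignalR 2.x .NET client (Microsoft.AspNet.SignalR.Client); the hub and method names are made up. Two hub proxies are created, but only one physical connection is started:

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class Demo
{
    static async Task Main()
    {
        var connection = new HubConnection("https://example.com/signalr");

        // Two proxies for two different hub classes - still one physical connection
        IHubProxy chat = connection.CreateHubProxy("ChatHub");
        IHubProxy status = connection.CreateHubProxy("StatusHub");

        chat.On<string>("newMessage", msg => Console.WriteLine(msg));

        await connection.Start();        // one negotiate, one transport, both hubs
        await chat.Invoke("Send", "hi");
        await status.Invoke("Ping");
    }
}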
SignalR Core - NOT POSSIBLE
Unfortunately this is no longer possible in the new 'Core' version of SignalR:
https://github.com/aspnet/SignalR/issues/456
https://github.com/aspnet/SignalR/issues/955
Notes: Issues with iOS and self signed certificates
On iOS there's a limit of FOUR connections per server.
WebSockets aren't subject to this limit (I think their cap may be 32, but I'm not sure). However, I am using a self-signed certificate, which has all kinds of issues in Safari, so the connection actually drops down to long polling (and it's not obvious that it has done so).
So I ended up with these connections:
1 - Angular / Webpack hot reload socket
2 - Web API calls
3 - Hub number one
4 - Hub number two
5 - #&#&#$&#$&
So if I had ONLY three hubs, the whole Safari page would lock up with a blue bar. Even Web API calls got blocked.
Note: with HTTP/2 this limit is gone, but you're probably better off limiting yourself to one hub, especially if you're using hot reload. Plus, setting up HTTP/2 in development isn't necessarily a trivial task.
So how to fix?
First, (temporarily) set your hub to accept only WebSockets. This will produce an error in Safari (make sure errors are being caught and shown in an alert dialog).
routes.MapHub<SignalRHub>("/rt", options =>
{
    // While a debugger is attached, allow only the WebSockets transport
    // (Debugger lives in System.Diagnostics)
    if (Debugger.IsAttached)
    {
        options.Transports = Microsoft.AspNetCore.Http.Connections.HttpTransportType.WebSockets;
    }
});
Now you'll be able to confirm the fix - run in debug mode, or remove the 'if'.
The problem with iOS is that even if you accept a self-signed certificate for https traffic - and get a nice little 'lock' symbol in the browser - that trust doesn't apply to the wss: protocol. So connections cannot be upgraded to wss, which is why they block at the maximum of 4.
Solution #1
If you can get everything down to one hub it's just easier :-)
I also realized that multiple hubs complicate the reconnect logic if the connection is lost; one hub just makes this easier. If you're not careful you'll end up showing three dialog boxes saying 'Connection lost. Retry?'. I'm switching to a single hub just because of this.
While I hate mixing everything together, partial classes help (a sketch follows), and I personally don't have many SignalR methods anyway.
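As a rough sketch of what that can look like with ASP.NET Core SignalR (the hub, file, and method names are mine, purely illustrative), one hub type split across files:

// AppHub.cs
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public partial class AppHub : Hub { }

// AppHub.Chat.cs - chat-related methods
public partial class AppHub
{
    public Task SendMessage(string text) =>
        Clients.All.SendAsync("newMessage", text);
}

// AppHub.Status.cs - status-related methods
public partial class AppHub
{
    public Task ReportStatus(string status) =>
        Clients.Others.SendAsync("statusChanged", Context.ConnectionId, status);
}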
Solution #2
This is only relevant to debugging, and assumes you're using a https cert which you self-signed.
Instead, use something like Let's Encrypt - or Cloudflare's Argo Tunnel - to get a publicly trusted cert. This will be fully trusted by Safari, so your connections will get upgraded to real WebSockets.
Solution #3
Create a self-signed ROOT certificate (a CA) and then generate SSL certificates for your domain name from it.
This was trickier than I imagined. In the end it turned out I was missing Subject Type=CA in my root cert, which iOS requires. Without this 'extension' iOS will install your root certificate as a profile, but won't allow you to select it for SSL.
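If it helps, here's a hedged sketch of generating such a root certificate in .NET (Core 3.0 or later) with CertificateRequest; the subject name and lifetimes are placeholders. The BasicConstraints extension with certificateAuthority: true is what shows up as Subject Type=CA:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

using var rsa = RSA.Create(2048);
var request = new CertificateRequest(
    "CN=My Dev Root CA", rsa, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// The 'Subject Type=CA' extension that iOS insists on
request.CertificateExtensions.Add(new X509BasicConstraintsExtension(
    certificateAuthority: true, hasPathLengthConstraint: false,
    pathLengthConstraint: 0, critical: true));
request.CertificateExtensions.Add(new X509KeyUsageExtension(
    X509KeyUsageFlags.KeyCertSign | X509KeyUsageFlags.CrlSign, critical: true));

using var rootCert = request.CreateSelfSigned(
    DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow.AddYears(5));

// Install this on the device, then sign per-domain SSL certs with it
File.WriteAllBytes("devroot.cer", rootCert.Export(X509ContentType.Cert));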
Once you have the root cert installed Safari will work with websockets just fine.
Solution #4
Use HTTP only. This wasn't an option for me because I use certain APIs (Facebook, Google, payments) and they require HTTPS.
Notes
Important: now consider production. Realize that WebSockets may be unavailable for various reasons, so if you have 4 hubs connecting on iOS this can still cause blocking. You're living dangerously.
Better to use one hub in the first place, but also best to get your cert installed properly so iOS will work with WebSockets.
How to create and install X.509 self signed certificates in Windows 10 without user interaction?
To get started with Hubs, read the WIKI entry for Hubs and Client Side of Hubs. There are a couple of things to consider in the context of multiple pages.
When you start a hub connection, it gives your client an ID which stays the same for that hub (someone can confirm with an example) across multiple pages.
It's not bad to reopen the connection to the same hub. You might have the client-side hub.start method running on all pages, but if it's one client opening multiple windows or going from one page to another, you will have the same connection ID on that hub, so you can stay in contact. If it were multiple hubs, you would have to manage the hubs as well as the connection IDs. So this question is like asking, "Is it bad to have multiple ISPs serving my internet connection for different websites?" You can have them, but it's overkill; a single ISP can serve all pages to you just as well.
Multiple hubs on a single page is not ideal, but it will work. Again, the answer needs a bit of context about the problem, but in general you can differentiate between various requests on the same connection ID via groups or some other parameter-based approach. Having two hubs on the same page may take more resources (this needs testing) than using parameters or groups to separate the different areas of messaging.
Example:
You have a page that has two parts: a graph which shows real-time user activity, and an area showing real-time data changes made by users as a table. Will you create two hubs, or two groups, or what? There are other pages which use the same graph and data table.
My Solution:
I will create a single hub for the application to receive real-time data from the server.
I will create different methods on the server to send graph points and data tables.
I will create client-side methods on all pages that use these graphs to communicate with the server methods on the same hub.
When you switch between pages, the client will connect to the same hub, request getGraph or getDataTable (or both), and populate itself with the relevant data. Similarly, on the server, when data changes you can call a client-side method to update all clients or a group of them (let's add this complexity).
Assume you have students and teachers looking at your application. They require different levels of data access. You can use groups to keep them separate on the hub, so you are not sending teachers' info to students and students' data to teachers.
On your hub join you can add them to a group associated with their role or any other differentiating function.
When you send to all clients, you can now send to a group of clients instead, that is, teachers or students. No need to create another hub for teachers or students; they are all on the same hub.
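A hedged sketch of that single-hub, group-per-role design (classic ASP.NET SignalR; the hub, method, and group names are illustrative):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class AppHub : Hub
{
    // The client calls this right after connecting, passing its role
    public Task JoinRole(string role)       // e.g. "Teachers" or "Students"
    {
        return Groups.Add(Context.ConnectionId, role);
    }

    // One hub, different audiences - no second hub needed
    public void PublishGraphPoint(object point)
    {
        Clients.Group("Teachers").updateGraph(point);
    }

    public void PublishTableRow(object row)
    {
        Clients.Group("Students").updateDataTable(row);
    }
}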
Coming back to your questions of "is it bad" and "is it ok": this is difficult to establish without the context of the actual application. I can't think of a scenario where you can justify multiple hubs, apart from performance.
Related
So I'm starting with the simple Greeter service from the VS2022 sample project template "ASP.NET Core gRPC Service", targeting .NET7, running on Windows. Name this side "A".
I was able to write a second console app (also .NET7) that connects as a gRPC client to that service, invoke the SayHello() rpc successfully and retrieve the expected response message. Name this side "B". So far, so good.
My scenario, however, is a bit different: I want the client B to connect to the server A and authenticate with the server (which is not part of the sample so far, but I assume this will work fine as well). Then the server A shall start acting as a gRPC client and the console-app client B shall act as a gRPC server, i.e. I want them to switch roles (or establish another role, if you like). The challenge is that I want the two parties to re-use the same TCP HTTP/2 connection that has already been established. So I'm looking for a way to create new instances of gRPC clients/servers at runtime and provide them with the existing TCP connection. The reason I need to do it this way is limitations coming from network security (the initial connection can only go from B to A). I'm aware that by using streams I can make my server A send messages to B, but I'd prefer to have the full client/server support.
I saw a few discussions around the same question, but the presented approaches use Go as the language, and it's hard for me to understand whether and how I can do the same in .NET 7.
Possible? Thanks!
I decided to write a card game in C# that has a WinForms application as the main server to manage the game, with a web interface. I chose SignalR self-host for the main server, because I want to sell this app to others and don't want to modify the code or HTML of the web interface. So my question is: is that good for handling 10,000 client requests? Is there a way to write this app for better performance?
Another thing: I want the main server and the login/cashout/profile functions (and so on) to be mostly configurable by customers, like Poker Mavens, and I'll just create a JSON API to expose these functions. Please guide me on which way is better to write this app!
With your server code self-hosted and a JavaScript client calling into your server methods as your browser-based client, your design should work.
I am looking at this. https://learn.microsoft.com/en-us/aspnet/signalr/overview/deployment/tutorial-signalr-self-host
But I think you'll need to figure out scale-out scenarios and server-failure scenarios with the self-host. If there is a patch update on the server and it has to restart, you'll need to be able to fail over to a backup. Also consider the case when you need to upgrade the server. So you'll need to be able to host it on multiple servers, and you'll need to provide the SignalR backplane option (see the sketch below).
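For example (a sketch only; the Redis server name and event key are placeholders), the Redis backplane for classic self-hosted SignalR is wired up in the OWIN startup class:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Requires the Microsoft.AspNet.SignalR.Redis package
        GlobalHost.DependencyResolver.UseRedis("redis01", 6379, "", "cardgame");
        app.MapSignalR();
    }
}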
From a performance point of view, I have tested a Web API SignalR application on a single 4-core / 14 GB server and was able to scale up to 20k connections, with the server comfortably serving more than 200 requests per second.
With a backplane these numbers were around 100-150 rps.
The response times in both cases were very good, at around 500 ms.
Although please note that your numbers could be VASTLY different based on your actual functionality.
I have a SignalR chat server; the chat supports group chatting.
I also have a server which actually creates the groups and provides other group-management tools.
Whenever a user leaves a group (via HTTP POST to the server), I want the chat service to trigger some methods, such as LeaveGroup, and some other logic.
I bound the connection ID to the user ID, so I have the request parsing covered.
QUESTION IS: what is the best practice for communication between my server/service and the SignalR server?
Bear in mind, I don't want to compromise the scalability of any of my servers/services.
My idea is more or less to host a Web API server inside the SignalR server, but I can't seem to find any topics on whether that could hurt performance.
Ideas?
Thanks a lot.
P.S. I know that there is no code involved here, but it seems irrelevant: I have self-hosted Web API in a Windows service I maintain, so the code is pretty much the same.
I would love to provide more data/information if that's necessary.
It seems like this documentation is most applicable to what you're trying to do: https://www.asp.net/signalr/overview/getting-started/tutorial-server-broadcast-with-signalr
It speaks specifically about how to communicate from your server/service application to the SignalR clients. Communicating from the client to the server/service could be done either through the SignalR hub or with another web API.
From a best practice perspective, the documentation specifically states (https://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-server#callfromoutsidehub):
If you need to use the context multiple times in a long-lived object, get the reference once and save it rather than getting it again each time. Getting the context once ensures that SignalR sends messages to clients in the same sequence in which your Hub methods make client method invocations. For a tutorial that shows how to use the SignalR context for a Hub, see Server Broadcast with ASP.NET SignalR.
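Following that advice, a minimal sketch for classic ASP.NET SignalR (ChatHub stands in for your hub class; the client method name is made up): resolve the hub context once, cache it, and reuse it from non-hub code such as your Web API controllers.

using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

public class ChatNotifier
{
    // Get the context once and keep it, per the docs quoted above
    private readonly IHubContext _chat =
        GlobalHost.ConnectionManager.GetHubContext("ChatHub");

    public void NotifyUserLeft(string groupName, string userId)
    {
        // Dynamic invocation of a client-side method on one group
        _chat.Clients.Group(groupName).userLeft(userId);
    }
}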
If you're really into scalability, you might want to look into integrating your SignalR communications with some other message-queueing system, but that's probably overkill for most circumstances.
Background
I have multiple servers that I currently connect to remotely to run a number of different commands/scripts to obtain information about the servers and/or applications running on the servers.
I'd like to automate running the commands/scripts (or the code contained in the scripts converted to C#/.NET) and have the server send alerts/notifications/messages to a client (basically a Windows Form) running on multiple workstations, but need some guidance.
For reference, I have limited experience creating Windows services, but I feel fairly confident I can create them on the servers to handle the command/script automation, which I'm assuming would be the best approach (since the commands/scripts would need to run all the time or at set intervals).
Question
How can I connect multiple servers to multiple clients so that the server sends alerts/notifications/messages to the client when a command/script or even an event occurs on the server?
For instance, if an application on the server has a built-in command that can be run to determine the status of the application (up, down, limbo, etc.), I would like the Windows Form on the client to receive an alert from the server when the command returns "down" or "limbo" when it is run, presumably from a Windows Service. The alerts would be displayed on the Windows Form that would be setup basically as a dashboard for the servers that the client can connect to.
An even better outcome would be that the client runs as a background application and a notification appears similar to how Microsoft Outlook displays a notification when new email messages arrive (although these notifications would likely require user interaction to close instead of fading out like the Outlook notifications).
I would also like the client to use a configuration file holding the connection information for the servers, so that the list of servers being used can be changed quickly when new servers are added or existing servers are decommissioned.
Research (so far)
I've read about WCF and duplex contracts, and how WCF can be hosted in Windows Services. From what I've read, this seems promising. However, I'm not quite sure how I would set this up so that the client can connect to a WCF service on multiple servers.
One thing that concerns me about WCF is that in all of the WCF examples I've seen (which implement a calculator-type service), the client has to initiate the communication with the server in order to receive a message through a callback. In the calculator service examples, the client sends numbers to the service and the result is provided in the callback. I've also seen an asynchronous example, but there the client initiated a single, long-running request and the callback returned a single response when processing finished.
And, just so I'm clear about bindings in WCF: it is possible to create and use bindings for multiple servers using a configuration file, without having to use SvcUtil.exe to generate the code, correct? The reason I ask is that the servers being configured will likely change for different users, so the client needs to be flexible when connecting to the services.
I've just now started looking at Sockets, but I'm not familiar enough with them to know if this would be the better option to achieve my objective.
Summary
I'm just looking for guidance, so if you can direct me to some resources that will help me achieve my objective, I would appreciate it. I've searched extensively, but the majority of what I've found either doesn't apply to my scenario, is limited to a single server/client interaction, or is limited to a single server with multiple clients.
Since I'm not sure what direction to go in, I don't have any code examples, although I have implemented the examples in the following Microsoft article: Windows Communication Foundation - Getting Started Tutorial
So you want to build a system of:
multiple servers which execute commands on the computers they are running on
multiple clients which receive the status of the commands executed on the servers, or similar information from the servers
This would be my advice:
Servers can be implemented as Windows services. You will be able to administer them easily this way, using the Services console or the SCM. Check out this link for creating a simple C# service: How do you write and use a Windows Service in C#?
Also, you can set the service to run as a built-in service user with different levels of permissions, in addition to regular user accounts.
I have not used WCF, but usually clients connect to the server; this is a pretty common model, and hence all the samples are like that. Initiating the connection from the server is not a big deal (at least in a socket program), but it is just a bad model. You have to ask yourself: if no client is connected to your servers, how can they relay a status to the end user? You have to think clearly about the communication model. I would suggest a central repository of messages. It can be a file on a shared file system, a database, or any such entity which can act as a data repository. This way all servers can convey their messages without caring whether a client is connected or not. You can use sockets to achieve what you want to do. Check the asynchronous socket server sample from MSDN to understand how to do it.
Making the client run in the background with just a notification-area icon is also easy in C#. You can use the NotifyIcon class for that; this CodeProject article (Formless System Tray Application) demonstrates its usage. To show notifications a la Outlook, you can refer to the following post: How to create form popup from from system tray on windows application (not web) with c#. Look not only at the accepted answer but at the other answers too; there are lots of useful links in them.
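A bare-bones sketch of such a tray-only client (all names and messages are illustrative):

using System;
using System.Drawing;
using System.Windows.Forms;

static class TrayClient
{
    [STAThread]
    static void Main()
    {
        using var icon = new NotifyIcon
        {
            Icon = SystemIcons.Information,
            Visible = true,
            Text = "Server monitor"
        };
        icon.ContextMenuStrip = new ContextMenuStrip();
        icon.ContextMenuStrip.Items.Add("Exit", null, (s, e) => Application.Exit());

        // Outlook-style toast when an alert arrives from a server
        icon.ShowBalloonTip(5000, "Server alert", "AppX is DOWN on server01",
            ToolTipIcon.Warning);

        Application.Run(); // message loop, but no visible form
    }
}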
So far we have Windows services talking over sockets, storing messages in a central repository, and capable of handling multiple clients, with toast-style popups for client-side notification.
You need a far richer client-side GUI so the end users can take action on the messages sent from the server. You can maintain a list of servers in the client's app.config that the client connects to on startup. You should provide a GUI for users to manage all the servers and their connections.
Last but not least: by building such a client-server model, you are effectively building a security loophole into your systems. You should implement a good authorization mechanism. Check out the following post: Authenticate user in WinForms (Nothing to do with ASP.Net)
EDIT:
You can also implement your server to accept a "custom command" when you implement it as a service. This way, your client-server communication will be standardized by using ServiceController to pass the command. This post might help: How to send a custom command to a .NET windows Service from .NET code?
Don't get confused by the "command" terminology here. ServiceController issues standard commands to a service to start, stop, pause, resume, and restart it. These are the same items you see on the context menu when you right-click a service in the services.msc snap-in. In the same way, a service can respond to custom commands. In your case the custom command may be a request to execute a process.
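A hedged sketch of both sides (the service name and command number are made up; custom command IDs must fall in the range the SCM reserves for them, 128-255 per the ServiceBase docs):

using System.ServiceProcess;

// Service side: react to the custom command
public class MonitorService : ServiceBase
{
    public const int RunHealthCheck = 200; // illustrative command id

    protected override void OnCustomCommand(int command)
    {
        if (command == RunHealthCheck)
        {
            // kick off the command/script here
        }
    }
}

// Client side: send the command through the Service Control Manager
public static class Monitor
{
    public static void TriggerHealthCheck()
    {
        using (var sc = new ServiceController("MonitorService", "server01"))
        {
            sc.ExecuteCommand(MonitorService.RunHealthCheck);
        }
    }
}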
Note that some of the mechanisms I have described are geared towards an intranet setup, while others scale fine on both intranet and internet.
I am looking for a way a user can communicate between an ASP.NET and a WinForms application.
I am looking for something like soluto.com: I want to let the user send commands to other computers via a website. So let's say the user signed up 10 computers, which are registered with the MVC app. The user can select all 10 computers and send a "Do this task" command with the click of a button.
I am thinking of something like this: the WinForms app will create an HTTP-listener server. Every time the WinForms app is open, it will send an "I am online" post to the MVC app, along with its IP:port. The server will then send a request to that IP:port when required.
That approach seems very insecure though; having an open port, configuring the firewall, etc. seems like overkill.
I was wondering if there was any other way of accomplishing this.
Thank you for the help.
P.S. Before you claim this is a stupid idea: Piriform is doing something like this too. Take a look at Agomo.com.
Use SignalR with properly architected web and Windows applications (e.g. MVP, MVC, etc.)
SignalR with window client (WPF)
Console App & SignalR
Create a WCF service within the WinForms application, specify endpoint(s) (secured appropriately), and connect to those endpoints from your ASP.NET application the same way you would connect to any other WCF service.
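A minimal sketch of that idea (the contract, address, and binding are mine, purely for illustration):

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICommandService
{
    [OperationContract]
    string RunTask(string taskName);
}

public class CommandService : ICommandService
{
    public string RunTask(string taskName) => "ran " + taskName;
}

public static class CommandHost
{
    // Call this from the WinForms app, e.g. in Form_Load
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(CommandService),
            new Uri("net.tcp://localhost:8733/commands"));
        host.AddServiceEndpoint(typeof(ICommandService),
            new NetTcpBinding(SecurityMode.Transport), "");
        host.Open(); // remember to Close() when the form closes
        return host;
    }
}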
Why don't you just have the WinForms app use a standard HttpClient or WebRequest to periodically poll the service (maybe every 5 seconds or so) and ask whether there are any tasks that need to be performed?
Unless you need real-time, low-latency, high-performance communication, this is the easiest way to solve your problem, with minimal to zero client-side setup or security configuration.
The way I would do it is to implement it like a stack in a data persistence layer. Each client could have rows in a table that are added when a task is queued. When the client sends an HTTP GET request to the MVC server, it will return an array of tasks for that client, and you could have it either delete them from the database right away or wait for the client to send an HTTP command later indicating which tasks it completed.
You could represent tasks as a simple data object with a few properties, or just a string or int that the client can look up in some way to invoke the appropriate code.
For reasonable security, each client just needs to be given a unique key, like a GUID or equivalent, that it can later send to the server to validate its identity. This is also known as a cookie, secret, or API key.
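To make the polling idea concrete, here's a sketch of the client-side loop (the endpoint shapes, the task format, and the RunTask helper are all assumptions, not a real API):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

var http = new HttpClient { BaseAddress = new Uri("https://example.com/") };
var apiKey = "<client-guid>"; // the per-client secret described above

while (true)
{
    // Ask the server for whatever tasks are queued for this client
    var tasks = await http.GetFromJsonAsync<string[]>($"api/tasks?key={apiKey}");

    foreach (var task in tasks ?? Array.Empty<string>())
    {
        RunTask(task); // hypothetical dispatch into client-side code
        await http.PostAsync($"api/tasks/complete?key={apiKey}&task={task}", null);
    }

    await Task.Delay(TimeSpan.FromSeconds(5)); // poll every ~5 seconds
}

static void RunTask(string task)
{
    // look up the task name and invoke the appropriate code
}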