What is the simplest way to do simple distributed communication in .NET? - c#

So basically I am thinking about attempting load testing on my ASP.NET application, exercising various features all at once. There are a lot of dependencies and AJAX requests being performed in this application, so a simple replay of captured HTTP requests will not suffice; and because of other requirements, like picking out random operations and then performing and verifying the results across several machines, simple load-testing software will not suffice either.
Also, there is no spending budget for this project, so commercial implementations cannot be used. I'm debating trying MSMQ (which I've never used before) to handle communication between the clients, but if that is really complicated to set up then I would either use a database table as a queue or a simple TCP server with each test machine as one of its clients.
Features I want are: immediate failure (if one client crashes, all clients should stop), each test run starting with a brand-new scenario with no prior messages, and the ability to publish start and stop events. Also, it would be nice if I didn't have to worry about state management (I'm leaning towards a TCP server over a database for this) or concurrency.

It doesn't sound like MSMQ is what you need. It is a message-passing asynchronous communication method, akin to email. You can send a message to another queue that no one is even listening to (i.e. the application isn't running). It seems to me you want a more "online" communication model.
How about creating agents (client applications that sit on many machines and create the load) that each expose a WCF service, so a controller program can connect to all of them and instruct the agents what to do? It can be a duplex contract, so that the agents can send the controller notifications. When one of them sends an error notification, the controller can instruct all the other agents to shut down. Also, I'd go for a Net.TCP binding rather than an HTTP binding.
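A minimal sketch of that duplex idea, purely as an illustration: the contract names, the address and the "stop everyone on the first error" logic below are assumptions, not a finished design.

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;

// Callback contract: how an agent notifies the controller.
public interface IControllerCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportError(string agentName, string details);
}

// Contract each agent exposes; the controller calls these operations.
[ServiceContract(CallbackContract = typeof(IControllerCallback))]
public interface IAgent
{
    [OperationContract(IsOneWay = true)]
    void StartScenario(string scenarioName);

    [OperationContract(IsOneWay = true)]
    void Stop();
}

// Controller-side callback: on any error, tell every agent to stop (immediate failure).
public class ControllerCallback : IControllerCallback
{
    public static readonly List<IAgent> Agents = new List<IAgent>();

    public void ReportError(string agentName, string details)
    {
        Console.WriteLine($"{agentName} failed: {details}");
        foreach (var agent in Agents) agent.Stop();
    }
}

public static class Controller
{
    public static void Main()
    {
        var callback = new ControllerCallback();
        // One duplex channel per agent machine; the address is a placeholder.
        var factory = new DuplexChannelFactory<IAgent>(
            new InstanceContext(callback),
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://agent01:9000/agent"));
        IAgent agent = factory.CreateChannel();
        ControllerCallback.Agents.Add(agent);
        agent.StartScenario("checkout-flow");
    }
}
```

The agents would self-host the `IAgent` service (for example in a ServiceHost with a NetTcpBinding endpoint) and call back through `OperationContext.Current.GetCallbackChannel<IControllerCallback>()`.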

Related

Signal from cloud to internal network

We have a microservice-based application/website currently hosted in Azure, and we need a function where we press a button and it sends some data to another web service currently hosted inside our corporate network.
Our IT bods are against being able to POST to a service hosted inside our network, and I am wondering how people normally deal with this problem.
I can think of 2 possible solutions, neither of which I like particularly:
Set up a VPN to the internal network, which feels like a rather heavyweight solution to me
The internal network service continuously polls the cloud application for changes of state and triggers an update process when a change is recorded. This will generate a lot more traffic than I would ideally want
How do other people address this issue? Essentially I just want to send some data from the cloud into our network in a secure fashion. Pulls from our network are OK, but pushes into it are not.
Even sending a signal to get the internal network to initiate a pull would also work fine.
Both the solutions you came up with are fairly common patterns in Azure architecture. Of the two, the second would be the one I would generally choose for this particular scenario, but it does depend on how fast you need the push to happen. VPN is going to be the fastest as you have a direct connection between your Azure service and your internal one, but it is a bit more complex to set up for a single pipeline.
The second is generally accomplished through a messaging service like Service Bus, as it adds a lot of resiliency to that sort of arrangement. You can configure your on-prem service to poll Service Bus at whatever interval you define: more often if you need the updates to happen quickly, less often if you want to reduce traffic. Depending on the size of the data, you can load it directly into Service Bus for pickup, or the message can contain the location of the required data. Event Grid is another option for a messaging service. It sends notifications out instead of waiting for you to poll, so it would be a good choice if you wanted to ping your on-prem service to reach out and pick up the changes.
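For illustration, a minimal sketch of the on-prem receiver side of that arrangement, assuming the Azure.Messaging.ServiceBus package; the connection string and queue name are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class OnPremWorker
{
    public static async Task Main()
    {
        await using var client = new ServiceBusClient("<connection-string>");
        ServiceBusReceiver receiver = client.CreateReceiver("push-requests");

        while (true)
        {
            // Long-poll: wait up to 30 seconds for a message, then loop again.
            ServiceBusReceivedMessage message =
                await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(30));
            if (message == null) continue;

            // The message either carries the payload or points at where to fetch it.
            Console.WriteLine(message.Body.ToString());
            await receiver.CompleteMessageAsync(message);
        }
    }
}
```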
If you are open to using Logic Apps to do the push, they access on-prem resources via a data gateway that you install inside your network. Logic Apps use Service Bus in the background to accomplish this, so you will effectively be using your second solution, but it would be a bit simpler from a development perspective.

How to connect multiple clients to multiple servers and send alerts from servers based on certain events?

Background
I have multiple servers that I currently connect to remotely to run a number of different commands/scripts to obtain information about the servers and/or applications running on the servers.
I'd like to automate running the commands/scripts (or the code contained in the scripts converted to C#/.NET) and have the server send alerts/notifications/messages to a client (basically a Windows Form) running on multiple workstations, but need some guidance.
For reference, I have limited experience creating Windows Services, but I feel fairly confident in being able to create one on each server to handle the command/script automation, which I'm assuming would be the best approach (since the commands/scripts would need to be run all the time or at set intervals).
Question
How can I connect multiple servers to multiple clients so that the server sends alerts/notifications/messages to the client when a command/script or even an event occurs on the server?
For instance, if an application on the server has a built-in command that can be run to determine the status of the application (up, down, limbo, etc.), I would like the Windows Form on the client to receive an alert from the server when the command returns "down" or "limbo", presumably run from a Windows Service. The alerts would be displayed on the Windows Form, which would be set up basically as a dashboard for the servers that the client can connect to.
An even better outcome would be that the client runs as a background application and a notification appears similar to how Microsoft Outlook displays a notification when new email messages arrive (although these notifications would likely require user interaction to close instead of fading out like the Outlook notifications).
I would also like the client to use a configuration file that has the connection information for the servers in it, so that the servers being used can be changed quickly when new servers are added or existing servers are decommissioned.
Research (so far)
I've read about WCF and duplex contracts, and how WCF can be hosted in Windows Services. From what I've read, this seems promising. However, I'm not quite sure how I would set this up so that the client can connect to a WCF service on multiple servers.
One thing that I'm concerned about with WCF is that in all of the WCF examples I've seen (which implement a calculator-type service), the client has to initiate the communication with the server in order to receive a message through a callback. In the calculator service examples, the client sends numbers to the service and the result is provided in the callback. I've also seen an asynchronous example, but in that example the client initiated a single, long-running request and the callback returned a single response when it was finished processing.
And, just so I'm clear about bindings in WCF, it is possible to create and use bindings for multiple servers using a configuration file without having to use SvcUtil.exe to generate the code, correct? The reason I ask is that the servers being configured will likely change for different users, so the client needs to be flexible when connecting to the services.
I've just now started looking at Sockets, but I'm not familiar enough with them to know if this would be the better option to achieve my objective.
Summary
I'm just looking for guidance, so if you can help direct me to some resources that will help me achieve my objective, I would appreciate it. I've searched extensively, but the majority of what I've found either doesn't apply to my scenario, is limited to a single server/client interaction, or is limited to a single server with multiple clients.
Since I'm not sure what direction to go in, I don't have any code examples, although I have implemented the examples in the following Microsoft article: Windows Communication Foundation - Getting Started Tutorial
So you want to build a system of:
multiple servers, which execute commands on the computer they are running on
multiple clients, which receive the status of those commands (or similar information) from the servers
This would be my advice:
The servers can be implemented as Windows services. You will be able to administer them easily this way using the Services console or the SCM. Check out this link for creating a simple C# service: How do you write and use a Windows Service in C#?
Also, you can set the service to run as a built-in service account with different levels of permissions, in addition to regular user accounts.
I have not used WCF, but usually clients connect to the server; this is a pretty common model, and hence all the samples are written that way. Initiating the connection from the server is not a big deal (at least in a socket program), but it is just a bad model. You have to ask yourself: if no client is connected to your servers, how can they relay a status to the end user? You have to think clearly about the communication model. I would suggest a central repository of messages. It can be a file on a shared file system, a database, or any such entity that can act as a data repository. This way all servers can convey their messages without caring whether a client is connected or not. You can use sockets to achieve what you want to do. Check the asynchronous socket server sample from MSDN to understand how to do it.
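Here is a minimal sketch of the listener the server-side service could run. It uses TcpListener with async/await rather than the raw Begin/End pattern from the MSDN sample; the port number and the write-to-a-repository step are assumptions for illustration.

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public static class StatusListener
{
    public static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9500);   // hypothetical port
        listener.Start();

        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client);   // fire and forget: one task per client
        }
    }

    private static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            while ((line = await reader.ReadLineAsync()) != null)
            {
                // In the real service this would be parsed and written to the
                // central message repository (file share, database, ...).
                Console.WriteLine($"received: {line}");
            }
        }
    }
}
```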
Making the client run in the background with just a notification area icon is also easy in C#. You can use the NotifyIcon class for that. This CodeProject article (Formless System Tray Application) demonstrates its usage. To show Outlook-style notifications, you can refer to the following post: How to create form popup from from system tray on windows application (not web) with c#. Look at not only the accepted answer but the other answers too; there are a lot of useful links in them.
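A minimal sketch of the tray-icon side, assuming WinForms; the icon, texts and timing are placeholders.

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

public static class TrayProgram
{
    [STAThread]
    public static void Main()
    {
        var notifyIcon = new NotifyIcon
        {
            Icon = SystemIcons.Information,
            Visible = true,
            Text = "Server dashboard"
        };

        // Show an Outlook-style balloon when an alert arrives.
        notifyIcon.BalloonTipTitle = "Server alert";
        notifyIcon.BalloonTipText = "AppServer01 reported status: down";
        notifyIcon.ShowBalloonTip(10000);   // shown for up to ~10 seconds

        // Keep the message loop alive so the icon stays in the tray.
        Application.Run();
    }
}
```

Note that balloon tips fade on their own; a notification that waits for the user to dismiss it, as described in the question, would need a small custom always-on-top form like the ones in the linked post.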
So far we have Windows services talking over sockets, storing messages in a central repository, and capable of handling multiple clients with toast-style pop-ups for client-side notification.
You need a far richer client-side GUI so the end users can take action on the messages sent from the servers. You can maintain a list of servers in the client's app.config that the client connects to on startup. You should also provide a GUI for users to manage all the servers and their connections.
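A minimal sketch of reading such a server list, assuming a simple appSettings key named "Servers" (the key name and the host:port format are made up for illustration; a custom configuration section would work just as well).

```csharp
// app.config (illustrative):
//   <appSettings>
//     <add key="Servers" value="server01:9500;server02:9500" />
//   </appSettings>
using System;
using System.Configuration;   // requires a reference to System.Configuration

public static class ServerList
{
    public static string[] Load()
    {
        string raw = ConfigurationManager.AppSettings["Servers"] ?? string.Empty;
        return raw.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
    }
}
```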
Last but not least, by building such a client-server model, you are effectively opening a security loophole in your systems. You should implement a good authorization mechanism. Check out the following post: Authenticate user in WinForms (Nothing to do with ASP.Net)
EDIT:
You can also implement your server to accept "custom commands" when you implement it as a service. This way, your client-server communication will be standardized by using ServiceController to pass the command. This post might help: How to send a custom command to a .NET windows Service from .NET code?
Don't get confused by the "command" terminology here. ServiceController issues standard commands to a service to start, stop, pause, resume and restart it. These are the same items you see on the context menu when you right-click a service in the services.msc snap-in. In the same way, a service can respond to custom commands. In your case the custom command may be a request to execute a process.
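A minimal sketch of both sides of that idea; the service name, machine name and the command number 200 are illustrative (custom command numbers must be between 128 and 255).

```csharp
using System.ServiceProcess;

// Inside the Windows service:
public class MonitoringService : ServiceBase
{
    private const int RunStatusCheck = 200;   // hypothetical custom command

    protected override void OnCustomCommand(int command)
    {
        if (command == RunStatusCheck)
        {
            // run the status script/command and store or send the result
        }
    }
}

// On the client:
public static class ServiceCommands
{
    public static void TriggerStatusCheck()
    {
        using (var sc = new ServiceController("MonitoringService", "server01"))
        {
            sc.ExecuteCommand(200);
        }
    }
}
```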
Note that some of the mechanisms I have described are geared towards an intranet setup, while others scale fine on both intranets and the internet.

How to effectively communicate between database bound applications?

We have a number of different old school client-server C# WinForm client-side apps that are essentially front-ends for the database. Then there is a C# server-side windows service that waits on the client apps to submit orders and then it processes them.
The way the server-side service finds out whether there is work to do is that it polls the database. Over the years the logic of polling for waiting orders has gotten a lot more complicated due to the myriad of business rules. Because of this, the polling stored proc itself uses quite a bit of SQL Server resources even if there is nothing to do. Add to this the requirement that orders be processed the moment they are submitted and you've got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them there are changes they need to pull. You can probably put enough information in the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes (a sketch follows below).
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
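To illustrate the SqlDependency suggestion above, here is a minimal sketch. It assumes Service Broker is enabled on the database; the connection string and query are placeholders, and the query must follow the query-notification rules (two-part table names, explicit column list, no SELECT *).

```csharp
using System;
using System.Data.SqlClient;

public static class OrderWatcher
{
    private const string ConnectionString = "<connection-string>";

    public static void Start()
    {
        SqlDependency.Start(ConnectionString);   // starts the notification listener
        Subscribe();
    }

    private static void Subscribe()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders WHERE Status = 'Pending'", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Notifications fire only once; re-subscribe, then process new orders.
                Subscribe();
                ProcessPendingOrders();
            };

            connection.Open();
            command.ExecuteReader().Dispose();   // executing the command registers the subscription
        }
    }

    private static void ProcessPendingOrders() { /* pick up and process the waiting work */ }
}
```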
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you end up building. Rather, I mean you launch the changes and orders get screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider attempting a communications change under that kind of pressure, presuming your statement about the load going "through the roof" in the near term is accurate.
If your risk exposure is such that it has to be 100% functional on day one, with no hiccups (which is normal when you are expecting a large increase in orders), then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels), and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server, then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer and configure), it offers a single story for your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution), and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you completely eliminate the need to poll, because the system itself will launch your processing code (will "activate" it) when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).
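Activation itself is configured in T-SQL, but to give a flavour of the no-polling style from C#: a worker can simply block on the queue with WAITFOR (RECEIVE ...). This is a sketch only, not the activation mechanism itself; the queue name, connection string and message handling are assumptions.

```csharp
using System;
using System.Data.SqlClient;

public static class BrokerWorker
{
    public static void Run(string connectionString)
    {
        // Blocks until a message arrives or the 5-second timeout elapses, so the
        // loop is not hammering the database even when there is no work.
        const string receiveSql = @"
            WAITFOR (
                RECEIVE TOP (1) message_type_name,
                       CAST(message_body AS NVARCHAR(MAX))
                FROM dbo.OrderQueue
            ), TIMEOUT 5000;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            while (true)
            {
                using (var command = new SqlCommand(receiveSql, connection))
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string type = reader.GetString(0);
                        string body = reader.IsDBNull(1) ? "" : reader.GetString(1);
                        Console.WriteLine($"{type}: {body}");
                        // process the order message here
                    }
                }
            }
        }
    }
}
```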

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents.
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server.
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' option (where agents are both client and server) seems the obvious choice - but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client pull over server push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to decide for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ("phone home"), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restrictive networks.
If you use some kind of messaging server (JMS for Java, not sure for C#) then your messaging server is the only server that needs to open a port and you can have two way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent server.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends that data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls, using some sort of web service method that "asks" for the updates since the agent's last pull (some sort of timestamp to pick up the updates it missed).
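A minimal sketch of that timestamp-based pull from the agent's side; the endpoint URL, query parameter and polling interval are assumptions, and the server-side web method is presumed to return everything newer than the supplied timestamp.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class UpdatePoller
{
    private static readonly HttpClient Http = new HttpClient();
    private DateTime _lastPull = DateTime.MinValue;

    public async Task RunAsync()
    {
        while (true)
        {
            // Ask the server for everything since the last successful pull.
            string url = "https://server.example.com/api/updates?since=" +
                         Uri.EscapeDataString(_lastPull.ToString("o"));
            string updates = await Http.GetStringAsync(url);

            // Apply whatever came back, then advance the high-water mark.
            ApplyUpdates(updates);
            _lastPull = DateTime.UtcNow;

            await Task.Delay(TimeSpan.FromMinutes(1));   // polling interval
        }
    }

    private void ApplyUpdates(string payload) { /* parse and apply the updates */ }
}
```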
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that might go "offline" quite often. What will the server do then if a push fails? It's usually a tough question/decision, especially if you're not sure whether it is an intended "going offline" or a system/network failure.

Communication between two separate applications

I have developed a Windows service which reads data from a database; the database is populated via an ASP.NET MVC application.
I have a requirement to make the service reload the data in memory by issuing a select query to the database. This reload will be triggered by the web app. I have thought of a few ways to accomplish this, e.g. Remoting, MSMQ, or simply making the service listen on a socket for the reload command.
I am just looking for suggestions as to what would be the best approach to this.
How reliable does the notification have to be? If a notification is lost (let's say the communication pipe has a hiccup in a router and drops the socket), will the world come to an end or is it business as usual? If the service is down, do notifications from the web site need to be queued up for when it starts up, or can they be safely dropped?
The more reliable you need it to be, the more you have to move toward a queued solution (MSMQ). If reliability is not an issue, then you can choose from the myriad of non-queued solutions (Remoting, TCP, UDP broadcast, an HTTP call, etc.).
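If you do end up on the queued side, a minimal MSMQ sketch with System.Messaging might look like this; the queue path and message contents are placeholders, and MSMQ has to be installed on the machines involved.

```csharp
using System.Messaging;   // requires a reference to System.Messaging

public static class ReloadQueue
{
    private const string Path = @".\private$\reload-requests";   // hypothetical local private queue

    // Web application side: enqueue a reload request.
    public static void SendReloadRequest()
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
        {
            queue.Send("reload", "Reload command");
        }
    }

    // Windows service side: block until a request arrives, then reload.
    public static void WaitForReloadRequest()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive();   // blocks until a message arrives
            // message.Body == "reload" -> re-run the select and refresh the in-memory data
        }
    }
}
```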
Do you care at all about security? Do you fear an attacker may ping your 'refresh' to death, causing at least a DoS if not worse? Do you want to authenticate the web site making the 'refresh' call? Do you need privacy for the notifications (i.e. encryption)? UDP is more difficult to secure (no session).
Does the solution have to allow for easy deployment, configuration and management in the field (i.e. is it a standalone, packaged product), or is it a one-time deployment that can be fixed "just in time" if something changes?
Without knowing the details of all these factors, it is difficult to say "use X". At least one thing is sure: Remoting is sort of obsolete by now.
My recommendation would be to use WCF, because of the ease of changing bindings on-the-fly, so you can test various configurations (TCP, net pipe, http) w/o any code change.
BTW, have you considered using Query Notifications to detect data changes, instead of active notifications from the web site? I reckon this is a shot in the dark, but equivalent active cache support exists on many databases.
Simply host a WCF service inside the Windows service. You can use netTcpBinding for the binding, which will use binary over TCP/IP. This will be simpler than raw sockets and easier to develop and maintain.
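A minimal sketch of that, started from the Windows service's OnStart; the contract, address and port are illustrative.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IReloadService
{
    [OperationContract(IsOneWay = true)]
    void Reload();
}

public class ReloadService : IReloadService
{
    public void Reload()
    {
        // re-run the select query and refresh the in-memory data
    }
}

public static class HostingExample
{
    public static ServiceHost StartHost()
    {
        var host = new ServiceHost(typeof(ReloadService),
            new Uri("net.tcp://localhost:8523/ReloadService"));   // hypothetical address
        host.AddServiceEndpoint(typeof(IReloadService), new NetTcpBinding(), string.Empty);
        host.Open();   // call host.Close() in the service's OnStop
        return host;
    }
}
```

The web app can then call `Reload()` through a `ChannelFactory<IReloadService>` using the same NetTcpBinding and address.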
I'd use standard TCP sockets - this will survive all sorts of moving of components, and minimize configuration issues IMHO.
