Server for handling data manipulation - C#

I have built an application in which I handle Products, Accounts, Orders, etc.
There are two databases: one I created myself, which holds Users, Roles and some minor application-specific data; the other is external and holds all the data about Orders, Products and Accounts.
What I am trying to figure out is how to build a server that runs in parallel with the main application and handles all data manipulation against the external database.
Here is a situation in which this would be helpful: an Excel file has to be created from a large amount of data, then stored in the external DB in a certain format and sent by email to someone. This would surely overload the main thread of the main application, which we don't want. It would therefore be a good idea to handle these kinds of situations out of the user's sight.
I am using ASP.NET MVC 5 and am curious what a good approach would be for this situation. I was thinking of making a console application that runs as a service.

You would create a service architecture and have the services communicate with each other through controllers. These could push data back to the main application views, or the views could request data from the services.
If you wanted jobs running on that server that send emails, you could simply create SQL Server Agent jobs, SSIS jobs, or even custom service jobs that do so based on your criteria, separate from the main view (where the view is the main application the user interacts with).
The services themselves could be given a light user interface that you could call up on the server, perhaps from the system tray or through the Services console in Windows.
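To make the console-app idea from the question concrete, here is a minimal sketch of such a background worker; all of the job/report helpers are hypothetical stubs, not a real API, so treat this as a shape rather than an implementation:

    // Minimal sketch of a background worker console app (all job/report helpers
    // below are hypothetical stubs; replace them with real data access and email code).
    using System;
    using System.Threading;

    class ReportWorker
    {
        static void Main()
        {
            while (true)
            {
                string job = GetNextPendingJob(); // e.g. read a queue table in the external DB
                if (job == null)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(30)); // idle until more work arrives
                    continue;
                }
                byte[] report = BuildExcelReport(job); // the heavy work, off the web app's threads
                StoreReport(job, report);              // persist to the external DB
                EmailReport(job, report);              // notify the recipient
            }
        }

        // Hypothetical stubs so the sketch compiles; real implementations go here.
        static string GetNextPendingJob() => null;
        static byte[] BuildExcelReport(string job) => new byte[0];
        static void StoreReport(string job, byte[] report) { }
        static void EmailReport(string job, byte[] report) { }
    }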
SOA architecture
https://www.cleverism.com/how-to-build-service-oriented-architecture-soa/
Microservice architecture
http://microservices.io/patterns/microservices.html
Hope that helps. Good luck!

There are quite a few ways to do these kinds of things.
If your applications are for Android or iOS devices, it's common practice to have an external server running which handles intensive processing, including managing your Excel data, sending emails and database communication.
If your applications are cross-platform or OS-specific desktop applications, you can still use an external server, although certain parts of the processing may be done on the client machine.
Using a web service to deal with database communication makes the applications more secure with regard to hard-coded database connection information, SMTP server credentials, etc.
Using a RESTful web service means you're adhering to the Hypertext Transfer Protocol (HTTP), which is great for exchanging data in the majority of cases.
Using a console application which acts as a TCP server allows you to use the Transmission Control Protocol (TCP) to communicate in a different way. You're able to pass data around in a custom format if you wish, and you'd be able to define a standard for communication which your applications would need to stick to.
The console application would do the same processing as the web service, but you would be in full control of logging requests, responses and data transfers.
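To illustrate the console/TCP option, here is a minimal sketch of a line-based TCP server; the newline-delimited request/response format is just an assumption for the example, standing in for whatever standard you define:

    // Minimal sketch of a line-based TCP server (the newline-delimited "protocol"
    // here is only an example; you would define your own standard).
    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    class TcpConsoleServer
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9000); // port is arbitrary
            listener.Start();
            Console.WriteLine("Listening on port 9000...");
            while (true)
            {
                using (TcpClient client = listener.AcceptTcpClient())
                using (var reader = new StreamReader(client.GetStream()))
                using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
                {
                    string request = reader.ReadLine();        // one request per line
                    Console.WriteLine($"Received: {request}"); // full control over logging
                    writer.WriteLine($"ACK:{request}");        // reply in your own format
                }
            }
        }
    }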

How to connect multiple clients to multiple servers and send alerts from servers based on certain events?

Background
I have multiple servers that I currently connect to remotely to run a number of different commands/scripts to obtain information about the servers and/or applications running on the servers.
I'd like to automate running the commands/scripts (or the code contained in the scripts converted to C#/.NET) and have the server send alerts/notifications/messages to a client (basically a Windows Form) running on multiple workstations, but need some guidance.
For reference, I have limited experience creating Windows Services, but I feel fairly confident I can create them on the servers to handle the command/script automation, which I'm assuming is the best way to go about it (since the commands/scripts would need to run all the time or at set intervals).
Question
How can I connect multiple servers to multiple clients so that the server sends alerts/notifications/messages to the client when a command/script or even an event occurs on the server?
For instance, if an application on the server has a built-in command that can be run to determine the status of the application (up, down, limbo, etc.), I would like the Windows Form on the client to receive an alert from the server when the command returns "down" or "limbo" when it is run, presumably from a Windows Service. The alerts would be displayed on the Windows Form that would be setup basically as a dashboard for the servers that the client can connect to.
An even better outcome would be that the client runs as a background application and a notification appears similar to how Microsoft Outlook displays a notification when new email messages arrive (although these notifications would likely require user interaction to close instead of fading out like the Outlook notifications).
I would also like the client to use a configuration file holding the connection information for the servers, so that the set of servers can be changed quickly when new servers are added or existing servers are decommissioned.
Research (so far)
I've read about WCF and duplex contracts, and how WCF can be hosted in Windows Services. From what I've read, this seems promising. However, I'm not quite sure how I would set this up so that the client can connect to a WCF service on multiple servers.
One thing that I'm concerned about with WCF is that in all of the WCF examples I've seen (which implement a calculator-type service), the client has to initiate the communication with the server in order to receive a message through a callback. In the calculator service examples, the client sends numbers to the service and the result is provided in the callback. I've also seen an asynchronous example, but there the client initiated a single, long-running request and the callback returned a single response when processing finished.
And, just so I'm clear about bindings in WCF: it is possible to create and use bindings for multiple servers using a configuration file, without having to use SvcUtil.exe to generate the code, correct? The reason I ask is that the servers being configured will likely change for different users, so the client needs to be flexible when connecting to the services.
I've just now started looking at Sockets, but I'm not familiar enough with them to know if this would be the better option to achieve my objective.
Summary
I'm just looking for guidance, so if you can direct me to some resources that will help me achieve my objective, I would appreciate it. I've searched extensively, but the majority of what I've found either doesn't apply to my scenario, is limited to a single server/client interaction, or is limited to a single server with multiple clients.
Since I'm not sure what direction to go in, I don't have any code examples, although I have implemented the examples in the following Microsoft article: Windows Communication Foundation - Getting Started Tutorial
So you want to build a system of
multiple servers which execute commands on the computer they are running on
multiple clients which will receive the status of the commands executed on the servers, or similar information from the servers
This would be my advice:
Servers can be implemented as Windows services. You will be able to administer them easily this way using the Services console or the SCM. Check out this link for creating a simple C# service: How do you write and use a Windows Service in C#?
Also, you can set the service to run as a built-in service account with different levels of permissions, in addition to regular user accounts.
I have not used WCF, but usually clients connect to the server; this is a pretty common model, and hence all the samples are like that. Initiating the connection from the server is not a big deal (at least in a socket program), just a bad model. You have to ask yourself: if no client is connected to your servers, how can they relay a status to the end user? You have to think clearly about the communication model. I would suggest a central repository of messages. It can be a file on a shared file system, a database, or any such entity that can act as a data repository. This way all servers can convey their messages without caring whether a client is connected or not. You can use sockets to achieve what you want to do. Check the asynchronous socket server sample on MSDN to understand how to do it.
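For illustration, here is a simplified sketch of such a server, using TcpListener's async API rather than the raw Socket class the MSDN sample uses; the port and the one-alert-per-line format are made-up assumptions:

    // Simplified async server sketch: accepts many clients and pushes a message
    // to each of them (a stand-in for relaying statuses from the central repository).
    using System;
    using System.Collections.Concurrent;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class AlertServer
    {
        static readonly ConcurrentBag<TcpClient> Clients = new ConcurrentBag<TcpClient>();

        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9100); // arbitrary port
            listener.Start();
            while (true)
            {
                TcpClient client = await listener.AcceptTcpClientAsync();
                Clients.Add(client); // remember connected dashboard clients
            }
        }

        // Called whenever a script/command produces an alert worth relaying.
        static async Task BroadcastAsync(string alert)
        {
            foreach (TcpClient client in Clients)
            {
                var writer = new StreamWriter(client.GetStream()) { AutoFlush = true };
                await writer.WriteLineAsync(alert); // one alert per line
            }
        }
    }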
Making the client run in the background with just a notification-area icon is also easy in C#. You can use the NotifyIcon class for that. This CodeProject article (Formless System Tray Application) demonstrates its usage. To show notifications Outlook-style, you can refer to the following post: How to create form popup from system tray on windows application (not web) with c#. Look not only at the accepted answer but at the other answers too; there are a lot of useful links in them.
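A minimal sketch of the NotifyIcon usage (the icon, text and timeout are placeholders):

    // Rough sketch of a tray icon with an Outlook-style balloon notification.
    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class TrayApp
    {
        [STAThread]
        static void Main()
        {
            var icon = new NotifyIcon
            {
                Icon = SystemIcons.Information, // use your own .ico in a real app
                Visible = true,
                Text = "Server dashboard"
            };
            icon.ShowBalloonTip(5000, "Server alert", "AppX is DOWN on server01", ToolTipIcon.Warning);
            Application.Run(); // message loop with no visible form
            icon.Dispose();
        }
    }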
So far we have a Windows service talking over sockets, storing messages in a central repository, and capable of handling multiple clients with toast-style pop-ups for client-side notification.
You need a far richer client-side GUI so the end users can take action on the messages sent from the servers. You can maintain a list of servers in the client's app.config that it connects to on startup, and you should provide a GUI for users to manage all servers and their connections.
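Reading such a server list from app.config could be as simple as the following sketch (the key name and semicolon-separated format are just examples):

    // Sketch: read a semicolon-separated server list from app.config, e.g.
    //   <appSettings>
    //     <add key="Servers" value="server01:9100;server02:9100" />
    //   </appSettings>
    using System;
    using System.Configuration; // reference System.Configuration.dll

    class ServerList
    {
        static void Main()
        {
            string raw = ConfigurationManager.AppSettings["Servers"] ?? "";
            foreach (string server in raw.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
                Console.WriteLine($"Will connect to {server}");
        }
    }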
Last but not least: by building such a client-server model, you are effectively building a security loophole into your systems. You should implement a good authorization mechanism. Check out the following post: Authenticate user in WinForms (Nothing to do with ASP.Net)
EDIT:
You can also implement your server to accept "custom commands" when you implement it as a service. This way, your client-server communication is standardized by using ServiceController to pass the commands. This post might help: How to send a custom command to a .NET windows Service from .NET code?.
Don't get confused by the "command" terminology here. ServiceController issues standard commands to a service to start, stop, pause, resume and restart it. These are the same items you see on the context menu when you right-click a service in the services.msc snap-in. In the same way, a service can respond to custom commands; in your case a custom command may be a request to execute a process.
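In code, the two halves of that look roughly like this sketch (the command number is arbitrary; custom command values must fall in the 128-256 range):

    // Service side: react to a custom command.
    using System.ServiceProcess;

    class MonitorService : ServiceBase
    {
        private const int RunHealthCheck = 130; // arbitrary example value (128-256)

        protected override void OnCustomCommand(int command)
        {
            if (command == RunHealthCheck)
            {
                // e.g. kick off the status script and publish the result
            }
        }
    }

    // Client side: send that command to the installed service by name.
    // using System.ServiceProcess;
    // var sc = new ServiceController("MonitorService", "server01");
    // sc.ExecuteCommand(130);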
Note that some of the mechanisms I have described are geared towards an intranet setup, while others scale fine on both intranet and internet.

Simplest architecture for service-to-service data exchange

I have a public server with web services (.Net) that collect data and uploaded files from different mobile apps and I need to synchronise it with an internal intranet server.
The intranet server is deeply protected by firewall and organisation policies.
I think this is a pretty common scenario where messages and brokers could be used, something like RabbitMQ or NServiceBus, but I'm not an expert on them.
As the data only needs to flow from the external server to the intranet one, in a unidirectional and asynchronous way, I was thinking of not adding another layer of indirection to the architecture and instead using the already-exposed web services for server-to-server communication as well.
The approach would be like:
An intranet Windows service would regularly poll the external web service, at scheduled intervals, to find out whether there is new data to fetch (perhaps from a certain point in time onwards)
The web service would respond with a list of the new data and files
The Windows service would then iterate over that list, calling the web service to fetch all the data to insert into the intranet database and to download the uploaded files (a minimal sketch of this loop follows)
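A minimal sketch of that polling loop, assuming hypothetical endpoint URLs, payload shapes and intervals:

    // Sketch of the intranet-side polling loop (URLs and payload shape are assumed).
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class SyncPoller
    {
        static async Task Main()
        {
            using (var http = new HttpClient { BaseAddress = new Uri("https://external.example.com/") })
            {
                while (true)
                {
                    // Ask the external web service what is new since the last sync point.
                    string newItems = await http.GetStringAsync("api/changes?since=2024-01-01T00:00:00Z");
                    Console.WriteLine($"Changes payload: {newItems}");

                    // ...parse the list, then fetch each item and each uploaded file,
                    // e.g. await http.GetStreamAsync("api/files/<fileId>")...

                    await Task.Delay(TimeSpan.FromMinutes(5)); // scheduled interval
                }
            }
        }
    }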
What are the risks of this approach? Would it be better for the external web service to respond with just a link to one large zipped file containing all the data and files?
Should I use something like RabbitMQ even for such a simple scenario?
If you are literally dealing with files, you might want to think about something even simpler. FTP (more specifically, SFTP) might fit your needs better, and be FAR simpler to implement.
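For illustration only, using a third-party library such as SSH.NET (my assumption; use whatever SFTP client you prefer), the intranet-side pull could look roughly like this, with host, credentials and paths as placeholders:

    // Sketch using the third-party SSH.NET library (Renci.SshNet NuGet package);
    // host, credentials and paths are placeholders.
    using System.IO;
    using Renci.SshNet;

    class SftpPull
    {
        static void Main()
        {
            using (var sftp = new SftpClient("external.example.com", "user", "password"))
            {
                sftp.Connect();
                foreach (var entry in sftp.ListDirectory("/outgoing"))
                {
                    if (!entry.IsRegularFile) continue; // skip directories etc.
                    using (var local = File.Create(Path.Combine(@"C:\intranet\incoming", entry.Name)))
                        sftp.DownloadFile(entry.FullName, local);
                }
                sftp.Disconnect();
            }
        }
    }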

How can I handle multiple queries with threads in C#

I have a problem with a multithreaded app in C# and I would appreciate some help, since I'm new to multithreading.
This is the scenario: there will be a mobile app that does a lot of queries/requests against my database (MySQL), and my goal is to make a server-side application that handles multiple queries using C# on a Linux machine (Mono to the rescue). My company is doing the database side of the application; another company is making the mobile app. I'll send the data to the cloud and the cloud server will send it to the client.
I'm reading the threading chapters of CLR via C# and C# 4.0 in a Nutshell, but so far I have only a vague idea of what to do. I believe asynchronous methods would work, since they don't use a lot of resources, but I'm a little confused about how to handle thread concurrency (priority, state).
So here are my questions:
What is the best way to solve this problem?
Which class from .NET framework suits best for this job?
How should I handle the query queue?
How can I handle thousands of threads/queries fast enough that my mobile app user can have the query result within an estimated time of 5 minutes?
Some observations:
I know that the time to finish a query will grow with the size of the user's data in my database, but I need to handle it (both small and large data sets) as fast as I can.
I'm sending the data to a cloud database (Amazon EC2) and from there it will be sent to the client. I won't handle that part; it will be handled by another company, so my job is to get the queries done quickly and make the results available to the cloud database.
I'm aware that sending the information to my clients depends on my IT infrastructure, but the point here is: how can I solve this problem in a way that leaves me only having to worry about my application infrastructure?
I cannot put the queries into one big string and throw it at the database, because I need to handle each query result separately before sending it to the user.
The storage engine is MyISAM, so no transactions are allowed.
I would create a REST web service, on top of either ServiceStack or Web API, to abstract access to your data via a service. Either of these can handle simultaneous requests from your mobile clients, as they are designed to do so. In addition, I would create a class that mediates access and provides a unit of work for your database (i.e. a repository). The connection provider for MySQL should be able to handle simultaneous requests from your web service, so you should not have to worry about threading and request management. If a single instance is not enough, you can add more web servers running the same code and use a load balancer to distribute the requests across your instances, where the service/data code is the same.
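A bare-bones sketch of what that service layer might look like with Web API; the controller, route and repository are illustrative only:

    // Bare-bones ASP.NET Web API sketch; QueryRepository is a hypothetical
    // unit-of-work/repository class wrapping the MySQL connection provider.
    using System.Collections.Generic;
    using System.Web.Http;

    public class OrdersController : ApiController
    {
        private readonly QueryRepository _repository = new QueryRepository();

        // GET api/orders/5 — each request is served on its own thread pool thread,
        // so no manual thread management is needed here.
        public IEnumerable<string> Get(int id)
        {
            return _repository.GetOrdersForUser(id);
        }
    }

    // Hypothetical stub so the sketch is self-contained.
    public class QueryRepository
    {
        public IEnumerable<string> GetOrdersForUser(int userId)
        {
            yield return $"order-for-{userId}"; // real code would query MySQL here
        }
    }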
Some resources for Mono-based Web API / ServiceStack:
http://www.piotrwalat.net/running-asp-net-web-api-services-under-linux-and-os-x/
What is the best way to run ServiceStack on Linux / Mono?

How to effectively communicate between database bound applications?

We have a number of different old school client-server C# WinForm client-side apps that are essentially front-ends for the database. Then there is a C# server-side windows service that waits on the client apps to submit orders and then it processes them.
The way the server-side service finds out whether there is work to do is by polling the database. Over the years, the logic for polling for waiting orders has become a lot more complicated due to the myriad of business rules. Because of this, the polling stored procedure itself uses quite a bit of SQL Server resources even when there is nothing to do. Add to this the requirement that orders be processed the moment they are submitted and you've got yourself a performance problem, as the database is being polled constantly.
The setup actually works fine right now, but the load is about to go through the roof and it is obvious that it won't hold up.
What are some effective ways to communicate between a bunch of different client-side apps and a server-side windows service, that will be more future-proof than the current method?
The database server is SQL Server 2005. I can probably get the powers that be to pony up for the latest SQL Server if it really comes to that, but I'd rather not fight that battle.
There are numerous ways you can notify the clients.
You can use a ready-made solution like NServiceBus, to publish information from the server to the clients or other servers. NServiceBus uses MSMQ to publish one message to multiple subscribers in a very easy and durable way.
You can use MSMQ or another queuing product to publish messages from the server that will be delivered to the clients.
You can host a WCF service on the Windows service and connect to it from each client using a Duplex channel. Each time there is a change the service will notify the appropriate clients or even all of them. This is more complex to code but also much more flexible. You could probably send enough information back to the clients that they wouldn't need to poll the database at all.
You can have the service broadcast a UDP packet to all clients to notify them that there are changes they need to pull. You can probably put enough information in the packet to allow the clients to decide whether they need to pull data from the server or not. This is very lightweight for the server and the network, but it assumes that all clients are on the same LAN.
Perhaps you can leverage SqlDependency to receive notifications only when the data actually changes (a minimal sketch follows below).
You can use any messaging middleware like MSMQ, JMS or TIBCO to communicate between your client and the service.
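Of these options, SqlDependency needs the least new infrastructure, and query notifications are available on SQL Server 2005. A minimal sketch, where the connection string and query are placeholders:

    // Minimal SqlDependency sketch (placeholder connection string and query).
    // Note: the query must obey query-notification rules (two-part table names,
    // explicit column list, no SELECT *), and Service Broker must be enabled.
    using System;
    using System.Data.SqlClient;

    class OrderWatcher
    {
        const string ConnStr = "Data Source=.;Initial Catalog=Orders;Integrated Security=true";

        static void Main()
        {
            SqlDependency.Start(ConnStr);
            Subscribe();
            Console.ReadLine(); // keep the service alive while waiting for changes
            SqlDependency.Stop(ConnStr);
        }

        static void Subscribe()
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "SELECT OrderId, Status FROM dbo.Orders WHERE Status = N'Waiting'", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (s, e) =>
                {
                    Subscribe();        // subscriptions fire once; re-register, then
                    ProcessNewOrders(); // do the work the polling proc used to trigger
                };
                conn.Open();
                cmd.ExecuteReader().Dispose(); // executing the command registers the subscription
            }
        }

        static void ProcessNewOrders() { /* hypothetical processing hook */ }
    }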
By far the easiest, and most likely the cheapest, answer is to simply buy a bigger server.
Barring that, you are in for a development effort that has a high probability of early failure. By failure I don't mean that you end up scrapping whatever it is you build. Rather, I mean you launch the changes and orders get screwed up while you are debugging your myriad of business rules.
Quite frankly, I wouldn't consider approaching a communications change under pressure, presuming your statement about load going "through the roof" in the near term.
If your risk exposure is such that it has to be 100% functional on day one (which is normal when you are expecting a large increase in orders), with no hiccups, then just upsize the DB server. Heck, I wouldn't even install the latest SQL Server on it. Instead, just buy a larger machine, install the exact same OS and DB server (and patch levels) and move your database.
Then look at your architecture to determine what needs to go away and what can be salvaged.
If everybody connects to SQL Server then there is also the option of Service Broker. Unlike the other messaging/queueing solutions recommended so far, it is entirely contained in your database (no separate product to deploy, administer and configure), it offers a single story vis-à-vis your backup/recovery and high-availability needs (no separate backup for the message store, no separate DR/HA; whatever your DB solution is, it is also your messaging solution) and it offers a uniform programming API (SQL).
Even when everything is within one single SQL Server instance (i.e. there is no need to communicate over the network between multiple SQL Server instances), Service Broker still has an ace that no one can match: activation. With activation you completely eliminate the need to poll, because the system itself will launch your processing code (will 'activate' it) when there are events to process. The processing code can be internal (a T-SQL procedure or a SQLCLR .NET procedure) or external (see the external activator).

Intense Distributed C# (WCF) Architecture Design

I want to design a new distributed application, but I have a few queries that I need some genius advice on, hopefully from you people:
Scenario
I currently support a legacy application that is starting to fall between the cracks.
It is a distributed client-server app implemented using .NET Remoting. I can't explain exactly what it does, because I'm not allowed to... but let's just say that it does LOTS of MATHS. I want to re-design and re-write the application using WCF.
Prerequisites
The server side of the implementation will be hosted in a Windows Service.
The client side will be a Windows Forms application.
The server side will perform lots of memory-intensive processing.
The server will spit this data out to multiple thin clients (20-ish).
The majority of the time the server will be passing data to the clients, but occasionally the clients will be persisting data back to the server.
The speed at which the data is transmitted is highly-important, however I'm well aware that WCF can handle fast distribution of data.
Encryption/Security is not that important as the app will run on a highly protected local network.
Queries
Given the information above:
1) What sort of design pattern am I best going with? Bear in mind I want the server to continually PUSH newly calculated information to the clients immediately, as opposed to the current implementation, which involves the clients pulling from the server continuously.
2)What type of WCF binding should I use to ensure maximum speed of data transfer? (as close to real-time as possible is what I'm after)
3)Should I use a class library to share the common objects between the client and the server applications?
4)What is the best way in which to databind my objects on the client side in order to see live updates continually as data changes?
If I've forgotten anything then feel free to point it out.
Help greatly appreciated.
1) What sort of design pattern am I best going with?
Based on your comments, you're wanting to transform the current polling mechanism to an event-based mechanism. That is, instead of the client constantly checking the server for results, have the server notify the client when a new calculation result is available.
I would recommend using Juval Lowy's Publish-Subscribe Framework for this.
[Diagram of the Publish-Subscribe Framework (source: microsoft.com)]
This framework is described in detail in this MSDN article. And you can download the framework's source code for free at Lowy's website, IDesign.net.
Basically, the server logic that performs the calculations inside the Windows service is the Publishing Client in the graphic, and the various WinForm applications are the Subscribing Clients. The Pub/Sub Service lives in your Windows service. It manages the list of subscribing clients and provides a single endpoint for your server to publish calculation results to. In this way, your server performs a calculation and publishes the result once to the Pub/Sub Service endpoint. The Pub/Sub Service is then responsible for publishing the result to the subscribed clients.
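The shape of the contracts in such a setup is roughly the following sketch; the names are illustrative, not the actual IDesign framework API:

    // Illustrative WCF duplex contract shapes for the pub/sub setup
    // (names are examples, not the actual IDesign framework API).
    using System.ServiceModel;

    // What subscribing WinForm clients implement to receive pushed results.
    public interface ICalculationCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnResultPublished(double result);
    }

    // The endpoint clients subscribe to; the callback contract enables server push.
    [ServiceContract(CallbackContract = typeof(ICalculationCallback))]
    public interface ICalculationPublisher
    {
        [OperationContract]
        void Subscribe();

        [OperationContract]
        void Unsubscribe();

        // Called by the calculation engine when a new result is ready;
        // the implementation fans it out to every stored callback channel.
        [OperationContract(IsOneWay = true)]
        void Publish(double result);
    }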
2) What type of WCF binding should I use to ensure maximum speed of data transfer?
If all of your WCF communication were on a single machine, you'd want to use the NetNamedPipeBinding. However, since you will be distributed, you want to use the NetTcpBinding.
For WCF binding decisions, I have found this chart useful.
3) Should I use a class library to share the common objects between the client and the server applications?
Since you are in control of both the client and server side, I would highly recommend sharing a class library instead of using Visual Studio's "Add Service Reference" feature. For a detailed discussion of this, refer to this SO question-and-answer.
4) What is the best way in which to databind my objects on the client side in order to see live updates continually as data changes?
I suspect this will depend on what controls you use to display the data. One way that immediately comes to mind would be to have your client fill an in-memory data table as each calculation result is received. This data table could then be bound to a ListBox control, for example, that shows the results in calculation order.
This to me looks like you need to implement the Observer pattern, but distributed, whereby new calculations are made on the service, and WCF just happens to be the mechanism by which you push notifications back to the clients.
Generally speaking, you have your business logic housed in a Windows service, where one type is a Subject (Observable). You could publish an endpoint for clients to register for notifications. This would be a WCF service with potentially two operations:
RegisterClient(...)
UnregisterClient(...)
When a client is registered with the service, it can receive updates. Broadly speaking, when the service has finished calculating a result, it iterates through all registered clients and initiates a push, the push being a communication through an endpoint on the client.
A client endpoint might typically be
Notify(Result...);
And your server simply calls that when it has new data...
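In WCF terms, that register/notify pair could be sketched like this (illustrative names only; error handling and locking around the client list are omitted):

    // Illustrative sketch of the register/notify pair with server-side fan-out.
    using System.Collections.Generic;
    using System.ServiceModel;

    public interface IResultCallback
    {
        [OperationContract(IsOneWay = true)]
        void Notify(double result); // the "push" endpoint on each client
    }

    [ServiceContract(CallbackContract = typeof(IResultCallback))]
    public interface ICalculationService
    {
        [OperationContract] void RegisterClient();
        [OperationContract] void UnregisterClient();
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class CalculationService : ICalculationService
    {
        // Not thread-safe as written; a real implementation would synchronize access.
        private readonly List<IResultCallback> _clients = new List<IResultCallback>();

        public void RegisterClient() =>
            _clients.Add(OperationContext.Current.GetCallbackChannel<IResultCallback>());

        public void UnregisterClient() =>
            _clients.Remove(OperationContext.Current.GetCallbackChannel<IResultCallback>());

        // Called by the maths engine whenever a new result is available.
        public void PushResult(double result)
        {
            foreach (var client in _clients)
                client.Notify(result);
        }
    }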
Typically you'd use TCP to maximise throughput.
This is by no means exactly what you should do, but perhaps it's a direction to start in?
