I am developing a web site. It consists of a list of device names, each with a related Build button. When a user clicks the Build button, a process runs on the server. When more than ten users click the Build button, multiple processes are created and the server hangs. How can I send all requests from the clients to a single process on the server?
You could set up a WCF Windows service that internally has some kind of count of currently running processes and makes sure that there are never more than X threads each running one process. If you want exactly one, rather than just a limited amount, you don't even need to worry about the threads and can just halt other requests while it's processing one.
It might make it more difficult to give feedback to the users, though, if that's needed: when one or more builds are queued, you can't immediately tell the user that the build has begun, etc.
It sounds like you are spawning a process thread to do the build on the server (I don't recommend this, as spawning threads in an ASP.NET application can be tricky). The easiest way to stop each request from spawning a new thread is to separate the build process from the ASP.NET web application. So:
Move the build action to a Windows Service on the same machine.
Use Windows Communication Foundation to expose the service entry point to the ASP.NET application.
Make the WCF instance a singleton, so that only one request can access it at a time. (This is the drawback of using only one thread.)
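As a sketch of that singleton setup (the contract and service names here are hypothetical), WCF can enforce single-instance, one-request-at-a-time behavior declaratively with a ServiceBehavior attribute:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IBuildService
{
    [OperationContract]
    void RunBuild(string deviceName);
}

// InstanceContextMode.Single creates one service object for all callers;
// ConcurrencyMode.Single allows only one request to execute at a time,
// so simultaneous Build clicks queue up instead of spawning extra processes.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class BuildService : IBuildService
{
    public void RunBuild(string deviceName)
    {
        // Launch the build and wait for it to finish; WCF serializes the callers.
    }
}
```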
Your other option is to log the requests to a queue and have a separate process (a Windows Service, perhaps) monitor the queue and process the requests one at a time. The drawback is that the user won't get immediate results. You'll need some way of notifying them when the process has finished, but you'll most likely have to do the same thing with the above solution too.
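As a rough sketch of the queue approach, assuming the build is an external executable (the path, type, and method names below are placeholders): requests go into an in-memory queue and a single consumer thread drains them one at a time.

```csharp
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading.Tasks;

public class BuildQueue
{
    private readonly BlockingCollection<string> _requests = new BlockingCollection<string>();

    public BuildQueue()
    {
        // A single long-running consumer thread guarantees builds never overlap.
        Task.Factory.StartNew(ProcessLoop, TaskCreationOptions.LongRunning);
    }

    // Called from each web request; returns immediately.
    public void Enqueue(string deviceName)
    {
        _requests.Add(deviceName);
    }

    private void ProcessLoop()
    {
        foreach (var deviceName in _requests.GetConsumingEnumerable())
        {
            using (var build = Process.Start("build.exe", deviceName)) // placeholder path
            {
                build.WaitForExit(); // finish this build before taking the next request
            }
        }
    }
}
```

In a real Windows Service you would likely also persist the queue (MSMQ, a database table) so pending requests survive a restart, which also gives you something to show users about their queue position.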
I am trying something new to me, so I don't have the vocabulary to express my questions in any sort of domain-specific language.
I am currently mind-mapping a tool I would like to build. The function of this tool is to execute many long-running tasks and log results to a remote database. It is most similar to Jenkins' build-and-test functionality. Unfortunately, I don't think I can use Jenkins, since these are tests executed on real, live custom hardware with a lot of I/O operations against other hardware resources.
It would almost certainly be run on a server, so it's headless. I generally build desktop tools with a UI to execute similar tasks in a Windows desktop environment. When I want to communicate to the user what the tool is doing, I simply create some UI elements to display status.
In this environment there would rarely, if ever, be a person watching it work. If I do need to debug something while it's running, or just want to check the status, my immediate thought is log files. However, they are pretty cumbersome to watch in real time.
I would like to be able to make requests of the task runner from the command line, the same way I run git status. My current thought is to register my command (like git) on PATH. That command would connect to a named pipe that my long-running process is monitoring and relay the user's request. (I have never used named pipes before, but it seems like a standard way to have processes communicate.)
This solution requires three "layers":
The command that is on PATH that can parse/accept/reject the user's request and then forward it along.
The long process manager that listens for user requests and monitors long task execution.
The task executors themselves.
Are there other approaches? Am I reinventing the wheel? Links and resources are greatly appreciated!
How can I create a long-running task that listens for requests from the terminal? (Not just asking for input)
You can use sockets or named pipes for inter-process communication.
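A minimal sketch of the named-pipe option (the pipe name, message format, and HandleRequest dispatch are assumptions): the long-running manager listens on a pipe, and the command registered on PATH connects and forwards the user's request.

```csharp
using System;
using System.IO;
using System.IO.Pipes;

public static class TaskRunnerIpc
{
    // Server side: runs inside the long-running task manager.
    public static void ListenForRequests()
    {
        while (true)
        {
            using (var pipe = new NamedPipeServerStream("taskrunner"))
            {
                pipe.WaitForConnection();
                var reader = new StreamReader(pipe);
                var writer = new StreamWriter(pipe) { AutoFlush = true };
                string request = reader.ReadLine();       // e.g. "status"
                writer.WriteLine(HandleRequest(request)); // reply with current state
            }
        }
    }

    // Client side: runs inside the command that is on PATH.
    public static string SendRequest(string request)
    {
        using (var pipe = new NamedPipeClientStream(".", "taskrunner", PipeDirection.InOut))
        {
            pipe.Connect(1000); // fail fast if the manager isn't running
            var reader = new StreamReader(pipe);
            var writer = new StreamWriter(pipe) { AutoFlush = true };
            writer.WriteLine(request);
            return reader.ReadLine();
        }
    }

    // Placeholder: your real dispatch logic goes here.
    private static string HandleRequest(string request)
    {
        return request == "status" ? "running" : "unknown command";
    }
}
```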
Am I reinventing the wheel?
It depends on what that program is used for, i.e. what the tasks that are running actually are.
For example: if you are creating a version control system, then yes, git already exists.
If not, then no. You are simply creating a program, application, or website that uses multiple threads. Multiple threads are provided by the hardware (CPU) and the operating system; you interact with them through your programming language.
Applications or programs can be single- or multi-threaded.
For example, consider a Q&A site: when you vote on an answer, you are essentially creating a request that is received by a program running on a server, which creates a thread to update the vote count in the database. Meanwhile, if someone else also votes on an answer, a similar request is sent and handled, perhaps by the same server on another thread. That server is still listening for requests while at the same time executing tasks (updating the vote counts in the database).
Note: this example is deliberately simplified.
For running tasks in the background while still having a functioning UI or request listener, you need multiple threads: one for the UI, one or more for handling requests, and the rest for running the background tasks and writing to the logs.
Note: when dealing with threads, thread safety and resources shared between threads are important. Programs can also use multiple processes rather than multiple threads, but multithreading was introduced to reduce some of the overhead of using multiple processes.
In your case, if the logs are stored in a database, then concurrent access is handled by that database.
But if the log is a local file on the system, then writing and reading it requires coordinating access to that one file across threads. That is where thread safety and shared resources come in.
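A minimal sketch of coordinating those writes (the file name is arbitrary): serialize access with a lock so entries from concurrent threads never interleave.

```csharp
using System;
using System.IO;

public static class SafeLog
{
    private static readonly object _sync = new object();
    private const string LogPath = "tasks.log"; // example path

    public static void Write(string line)
    {
        // Only one thread may append at a time.
        lock (_sync)
        {
            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }
}
```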
Note that the below is a simplified comparison:
In the case of a local application which uses multiple processes:
This application is like a network of processes, with each process running a certain program. Process 1 might render the UI, process 2 might write to a log file, etc. Pipes, signals, or other operating-system mechanisms are used to send messages and communicate between these processes (programs). For example, when a user presses a "Write log" button, process 1 communicates with process 2 (using the mechanisms above) to tell process 2 to write to the log file.
In the case of a web app:
Processes become computers connected over the network (or servers running on hosts), and operating-system mechanisms become network communication such as HTTP requests and responses, JSON, XML, etc. So this application is like a network of computers (servers) located in different places, with each computer running a certain program. Computer 1 might render the UI, computer 2 might write to the database, etc. Network communication technologies are used to communicate between the computers or servers over the internet. For example, when a user presses a "Write log" button, computer 1 communicates with computer 2 (using the methods above) to tell computer 2 to write to the database.
A host is a computer hosting (running) a program. This program is usually called a server. (The host provides services)
I have an ASP.NET MVC application which resides on a server. From this application, I want to start a process which is a long-running and resource-intensive operation.
So what I want to do is have some user agents, say 3 of them on 3 machines, where each agent uses the resources of its own machine only.
Like in Hadoop, where tasks run on the individual cluster nodes and one master node keeps track of all of them.
In Azure, we have virtual machines on which tasks run, and if required Azure can automatically scale horizontally by spinning up new instances to speed up the task.
So I want to create infrastructure like this, where I can submit my task to the 3 user agents from the MVC application, and my application will keep track of these agents: which agent is free, which is occupied, which is not working, and so on.
I would like to receive progress from each of these user agents and show it in my MVC application.
Is there any framework in .NET with which I can manage these background operations (tracking, start, stop, etc.), or what should be the approach for this?
Update: I don't want to add a load of servers for these long-running operations, and moreover I want to keep track of the long-running processes: what they are doing, where errors occur, etc.
Following are the approaches I am considering, and I don't know which makes more sense:
1) Install a Windows Service as an agent on 2-3 on-premises computers to take advantage of their respective resources, and keep a TCP/IP connection open with these agents until the long-running process is complete.
2) Use Hangfire to run this long-running process outside the IIS thread pool, but I guess this will put load on the server.
I would like to know the possible problems with the above approaches, and whether there are any better approaches.
Hangfire is really a great solution for processing background tasks, and we have used it extensively in our projects.
We have set up our MVC application on a separate IIS server; it is also a Hangfire client and just enqueues the jobs that need to be executed by the Hangfire server. Then we have two Hangfire server instances, which are Windows Service applications. So effectively there is no load on the MVC app server from processing the background jobs, as they are processed by the separate Hangfire servers.
One of the extremely helpful features of Hangfire is its out-of-the-box dashboard, which allows you to monitor and control every aspect of background-job processing, including statistics, background-job history, etc.
Configure Hangfire in the application as well as in the Hangfire servers:
public void Configuration(IAppBuilder app)
{
    GlobalConfiguration.Configuration.UseSqlServerStorage("<connection string or its name>");
    app.UseHangfireDashboard();
    app.UseHangfireServer();
}
Please note that you must use the same connection string across all instances. Use app.UseHangfireServer() only if you want the instance to act as a Hangfire server; in your case you would omit this line from the application server's configuration and use it only in the Hangfire servers.
Also use app.UseHangfireDashboard() in the instance which will serve your Hangfire dashboard, which would probably be your MVC application.
At the time we did it with Windows Services, but if I had to do it now, I would go with an Azure Worker Role, or better yet Azure WebJobs, to host my Hangfire server and manage things like auto-scaling easily.
Do refer to the Hangfire overview and documentation for more details.
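For completeness, a sketch of how the MVC side (acting only as a Hangfire client) enqueues a job; the controller, the TaskProcessor class, and its Run method are hypothetical names:

```csharp
using System.Web.Mvc;
using Hangfire;

public static class TaskProcessor
{
    public static void Run(int taskId)
    {
        // The actual long-running, resource-intensive work goes here.
        // It executes on whichever Hangfire server picks the job up.
    }
}

public class BuildController : Controller
{
    public ActionResult StartLongTask(int taskId)
    {
        // Enqueue and return immediately; any Hangfire server sharing
        // the same SQL Server storage will pick the job up.
        BackgroundJob.Enqueue(() => TaskProcessor.Run(taskId));
        return Content("Task queued.");
    }
}
```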
Push messages to MSMQ from your MVC app and have your Windows Services listen (or loop) for new messages entering the queue.
In your MVC app, create an ID per queued message, and make RESTful API calls from your Windows Services back to the MVC app as you make progress on the job.
Have a look at Hangfire; it can manage background tasks and works across VMs without conflict. We have replaced Windows Services with it and it works well.
https://www.hangfire.io
Give a try to http://easynetq.com/
EasyNetQ is a simple to use, opinionated, .NET API for RabbitMQ.
EasyNetQ is a collection of components that provide services on top of the RabbitMQ.Client library. These do things like serialization, error handling, thread marshalling, connection management, etc.
To publish with EasyNetQ
var message = new MyMessage { Text = "Hello Rabbit" };
bus.Publish(message);
To subscribe to a message we need to give EasyNetQ an action to perform whenever a message arrives. We do this by passing subscribe a delegate:
bus.Subscribe<MyMessage>("my_subscription_id", msg => Console.WriteLine(msg.Text));
Now every time that an instance of MyMessage is published, EasyNetQ will call our delegate and print the message’s Text property to the console.
The performance of EasyNetQ is directly related to the performance of the RabbitMQ broker. This can vary with network and server performance. In tests on a developer machine with a local instance of RabbitMQ, sustained overnight performance of around 5,000 2K messages per second was achieved. Memory use for all the EasyNetQ endpoints was stable for the overnight run.
So I created a Windows Service in C# and made an installer for it in Visual Studio. It's set up to start manually, as I don't want it running all the time. I then have another application (C# WPF) that should have an option to turn the service on and off (the service hosts a web service that in turn communicates back to my WPF application). This works fine on Windows XP, but testing it on a Windows 7 machine, the service won't start. Surprisingly, it doesn't throw an exception or crash; it just does nothing. I believe this is a permissions problem: if I go to the Services control panel using the same Windows 7 account, I'm not able to start or stop the service there either.
So my question is: is there a way to configure my service so that regular user accounts can start and stop it? And is there a way to make my installer do this automatically?
I don't want my WPF application to have to run as administrator!
Whilst I believe that it is possible to secure a service so that regular users can start and stop it, I do not recommend doing so. This will create a lot of complication and is a potential cause for confusion. I always prefer to keep things simple, especially when it comes to installation and security.
So, if we can't let the user start and stop the service we probably need to let the service run all the time. Since you don't want the service to be active all the time I suggest you give the service its own internal Running flag. When this is set true, the service is active and does busy things, otherwise the service remains idle. You can use your preferred IPC mechanism (sockets, named pipes, WCF etc.) to allow the user to toggle this switch.
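A sketch of that internal flag (the type and method names are placeholders; the IPC transport is whatever you choose): the always-running service loops, doing work only while the flag is set, and the WPF app flips the flag over IPC instead of starting or stopping the service.

```csharp
using System.Threading;

public class WorkerService
{
    private volatile bool _running; // toggled by whatever IPC listener you add

    public void SetRunning(bool value)
    {
        _running = value;
    }

    // Runs on the service's worker thread for the lifetime of the service.
    public void WorkLoop(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            if (_running)
            {
                DoOneUnitOfWork();
            }
            else
            {
                Thread.Sleep(500); // idle cheaply while switched off
            }
        }
    }

    private void DoOneUnitOfWork()
    {
        // Placeholder for the service's real job.
    }
}
```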
Windows 8 has a feature that allows services to start on demand, basically in response to some kind of trigger. But for Windows 7, your only real option is to set the service to start automatically at startup. You could set it to delayed start, so that it's not adding to the time it takes Windows to boot.
Regular users cannot start and stop services.
EDIT: Reading the link in the comment above, it sounds to me as though that is a blanket ability for users to start and stop services. I think the question here is about how to do this for one particular service.
Also, while it may be possible to set the service to run as that particular user, that really means it only works for that user; other users on the workstation would not be able to use the application, as they'd not be able to start or stop the service (assuming that the service running as a user implies that the user may control it, which may not be the case).
Also, reading the comments and the other answer, I'm left wondering whether the service can be used by any user who can run the application. That is, if user A logs on to the workstation and starts the app (and thus the service), locks it, and walks away, what happens when user B logs on and tries to run the same application? Can the service support multiple users at the same time, or will funny things begin to happen if the service is utilized by the application running multiple times?
This really sounds like what is desired is for a background thread to be started when the application starts. This thread (or threads) would do the work of the service and, by its nature, would end when the application ends. Of course, more detail in the question would help give a better answer.
Of course, if a service is appropriate, I see no reason not to have a service with a worker thread that sleeps, plus a timer thread that acts as a producer checking whether there's work to do.
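A sketch of that arrangement (FindPendingWork and Process are placeholders): a timer acts as the producer, and the worker thread sleeps inside the queue until work arrives.

```csharp
using System.Collections.Concurrent;
using System.Threading;

public class PollingWorker
{
    private readonly BlockingCollection<string> _work = new BlockingCollection<string>();
    private readonly Timer _timer;

    public PollingWorker()
    {
        // Worker thread: blocks (sleeps) in GetConsumingEnumerable until items arrive.
        new Thread(() =>
        {
            foreach (var item in _work.GetConsumingEnumerable())
                Process(item);
        }) { IsBackground = true }.Start();

        // Producer: every 30 seconds, check whether there is work to do.
        _timer = new Timer(_ =>
        {
            foreach (var item in FindPendingWork())
                _work.Add(item);
        }, null, 0, 30000);
    }

    private static string[] FindPendingWork()
    {
        return new string[0]; // placeholder: query your real work source
    }

    private static void Process(string item)
    {
        // Placeholder for the actual task.
    }
}
```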
I need to implement a background process that runs on a remote Windows server 24/7. My development environment is C#/ASP.NET 3.5. The purpose of the process is to:
Send reminder e-mails to employees and customers at appropriate times (say 5:00PM on the day before a job is scheduled)
Query and save the GPS coordinates of employees when they are supposed to be out on jobs, so that I can later verify they were where they were supposed to be.
If the process fails (which it probably will, especially when updates are rolled out), I need it to be restarted immediately (or within a few minutes), as I would have very serious problems if this process failed to send a notification, log a GPS coordinate, or perform any of its other tasks.
Implement your process as a Windows service.
For a straightforward example of how to code a Windows service in .NET, see http://www.developer.com/net/csharp/article.php/2173801.
To control what happens should the service fail, configure this through the "Recovery" tab on your service in services.msc.
For highly critical operation you might look into setting up a server cluster to mitigate single-server failure (see http://msdn.microsoft.com/en-us/library/ms952401.aspx).
You need a Windows Service. You can perform non-visual, iterative operations in Windows services.
Another alternative is to create a normal application and run it on a schedule: the application runs at certain times of day to perform its actions, depending on how often you need to log GPS coordinates and send reports. If your process doesn't need to run constantly, this is usually the recommended approach, as services are supposed to be limited to always-on applications.
As well as being a service, you might want to run on a cluster, and make your service known to the cluster management software.
You can create a Windows Service (server programming on Windows) or use the Task Scheduler to periodically execute a task.
Depending on the requirements for high availability, the program can be installed on a failover cluster, where another server (the passive node) sits started and quietly waiting as a hot backup in case the first (the active node) dies. This is a wide topic; start with High availability on Wikipedia.
In my experience, if you need to run something 24x7 you need (one or more) watchdog processes to verify that your services are running correctly. Just relying on the normal service framework cannot guarantee that the program is working correctly, even if it looks like it is running. The watchdog program (which is also a service) can query the service automatically, e.g. posting messages and checking response times, querying for statistics, and so on; when it detects problems it can restart the service (or perform some other failure recovery).
The reason for having a watchdog program, as opposed to relying on user queries to detect errors, is that it runs automatically. This is the preferred method because it allows for proactive detection.
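A minimal watchdog sketch (the service name is a placeholder): a second service periodically checks the monitored service and restarts it if it has stopped. A real watchdog would also probe the service's health over IPC, not just its status.

```csharp
using System;
using System.ServiceProcess;
using System.Threading;

public static class Watchdog
{
    public static void MonitorLoop()
    {
        while (true)
        {
            using (var sc = new ServiceController("MyBackgroundService"))
            {
                if (sc.Status == ServiceControllerStatus.Stopped)
                {
                    sc.Start();
                    sc.WaitForStatus(ServiceControllerStatus.Running,
                                     TimeSpan.FromSeconds(30));
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(1)); // poll interval
        }
    }
}
```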
Scenario: A WCF service receives an XDocument from clients, processes it and inserts a row in an MS SQL Table.
Multiple clients could call the WCF service simultaneously. The call usually doesn't take long (a few seconds).
Now I need something to poll the SQL Table and run another set of processes in an asynchronous way.
The 2nd process doesn't have to call back anything, nor is it related to the WCF service in any way. It just needs to read the table and perform a series of methods, and maybe a web service call (if there are records, of course), but that's all.
The WCF service clients consuming the above mentioned service have no idea of this and don't care about it.
I've read about this question on Stack Overflow, and I also know that a Windows Service would be ideal, but this WCF service will be hosted on shared hosting (DiscountASP or similar), and therefore installing a Windows Service will not be an option (as far as I know).
Given that the architecture is fixed (i.e. I cannot change the table, which comes from a legacy format, nor the mechanism of the WCF service), what would be your suggestion to poll/process this table?
I'd say I need it to check every 10 minutes or so. It doesn't need to be instant.
Thanks.
Cheat. Expose this process as another WCF service and fire a go command from a box under your control at a scheduled time.
Whilst you can fire up background threads in WCF, or use cache expiry as a poor man's scheduler, those will stop when your app pool recycles until the next hit on your web site spins the app pool up again. At least firing the request from a machine you control means you know the app pool will come back up every 10 minutes or so, because you've sent a request in its direction.
A web application is not suited at all to running something at a fixed interval. If there are no requests coming in, no code runs in the application, and if the application is inactive for a while, IIS can decide to shut it down completely until the next request comes in.
For some applications it isn't important that something runs at an exact interval, only that it has run recently. If that is the case for your application, you could just keep track of when the table was last polled, and on every request check whether enough time has passed to poll it again.
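A sketch of that last-polled check (the method names are placeholders): each request cheaply tests whether ten minutes have passed, and at most one request triggers the poll.

```csharp
using System;

public static class TablePoller
{
    private static DateTime _lastPoll = DateTime.MinValue;
    private static readonly object _lock = new object();

    // Call this at the start of every WCF request.
    public static void MaybePoll()
    {
        lock (_lock)
        {
            if (DateTime.UtcNow - _lastPoll < TimeSpan.FromMinutes(10))
                return;
            _lastPoll = DateTime.UtcNow; // claim the poll before doing the work
        }
        PollTableAndProcess();
    }

    private static void PollTableAndProcess()
    {
        // Placeholder: read the table, run the series of methods, make the web service call.
    }
}
```

The obvious caveat still applies: if no requests come in, nothing polls.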
If you have access to administer the database, there is a scheduler in SQL Server (SQL Server Agent). It can run queries and stored procedures, and even start processes if you have permission (which is very unlikely on shared hosting, though).
If you need the code on a specific interval, and you can't access the server to schedule it or run it as a service, or can't use the SQL Server scheduler, it's simply not doable.
Make your application pool "always active" and do whatever you want with background threads.