System-wide persistent storage? - C#

My program starts a process and I need to make sure it is killed before I can run the program again. To do this, I'd like to store the start time of the Process in something like a mutex that I could later retrieve and check to see if any process has a matching name and start time.
How could I do this? I don't really want to put anything on the hard drive that will stick around after the user logs out.
For reference, I'm using C# and .NET.

You want to store the process ID, not the process name and start time. That will make it simpler to kill the process.
You can store the PID in a file under %TMP% so that it will get cleaned up when hard drive space runs low.
C# code to kill the process looks like this:
// requires: using System.Diagnostics; using System.IO;
int pid = Convert.ToInt32(File.ReadAllText(pidFile));  // pidFile is wherever you wrote the PID at launch
Process proc = Process.GetProcessById(pid);
proc.Kill();
You can find out the %TMP% directory like this:
var tmp = Environment.GetEnvironmentVariable("TMP");
EDIT: The PID can be reused, so you will need to deal with that, too.
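Putting the pieces together, a minimal sketch; the file name is made up, and the start-time check is an extra guard against PID reuse rather than part of the snippet above:
using System;
using System.Diagnostics;
using System.IO;

static class ChildProcessGuard
{
    // Hypothetical PID file name; any per-user temp location works.
    static readonly string PidFile =
        Path.Combine(Environment.GetEnvironmentVariable("TMP") ?? Path.GetTempPath(), "myapp.child.pid");

    // Record the child's PID and start time so a later run can identify it safely.
    public static void Remember(Process child) =>
        File.WriteAllText(PidFile, $"{child.Id}|{child.StartTime.Ticks}");

    // Kill the previously started child, but only if it is still the same process.
    public static void KillPrevious()
    {
        if (!File.Exists(PidFile)) return;
        var parts = File.ReadAllText(PidFile).Split('|');
        int pid = int.Parse(parts[0]);
        long startTicks = long.Parse(parts[1]);
        try
        {
            Process proc = Process.GetProcessById(pid);
            // Guard against PID reuse: only kill if the start time still matches.
            if (proc.StartTime.Ticks == startTicks)
                proc.Kill();
        }
        catch (ArgumentException) { /* no process with that ID; nothing to do */ }
        File.Delete(PidFile);
    }
}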

I agree with rossfabricant that the process ID should be stored instead of a name. Keep in mind, though, that Windows can reuse a process ID once the original process has exited, so pair the PID with something like the process start time before you kill anything.
However, I'd recommend against using the TMP environment variable for storage; look at Isolated Storage instead. It is a more .NET-oriented storage mechanism and needs fewer permissions from your application.
Unfortunately, both temporary directories and isolated storage persist after a logout, so you'll need logic to handle that case. (Your app can clean out the info on shutdown, however.)
If you have access to the code of the process you are starting, it might be better to use something like named pipes or shared memory to detect whether the application is running. That also gives you a much cleaner way to shut the process down. Killing a process should be a last resort; in general, I wouldn't design an application around killing a process if it was at all avoidable.
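A lightweight variant of that idea is a named Mutex that the child holds for its whole lifetime (the mutex name below is hypothetical). It answers "is it still running?" cheaply, although unlike a pipe it doesn't give you a channel for requesting a graceful shutdown:
using System.Threading;

// In the child process, at startup: create and hold a named mutex for the process lifetime.
// The "Global\" prefix makes it visible across sessions; the kernel destroys the mutex
// automatically when the last handle (i.e. the child process) goes away.
var aliveMutex = new Mutex(true, @"Global\MyChildProcess.Alive", out bool createdNew);

// In the parent, before deciding whether to (re)start or kill the child:
bool childIsRunning = Mutex.TryOpenExisting(@"Global\MyChildProcess.Alive", out Mutex _);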

Dump a process memory to file / recreate process from dump file

Just curious, maybe someone knows a way:
Is it possible, given a running process (app domain), to dump its entire memory space to a file, send it over the wire to a LAN workstation, and recreate the process there exactly as it was on the first computer?
Assumptions:
the application exists on both computers;
the process is not creating any local settings/temporary files;
the OS is the same on both computers;
If you want to do this, you have to ensure the "dumped" process gets the same environment to run in. Among other things:
You have to provide the same handles in the same state (process, threads, files, etc.)
The new environment must have the same memory addresses allocated (including runtime allocations) as the original had
All the libraries must be initialized and put in the same state
If there is a GUI, even the GPU must be in the same state (you have to preload all graphics resources, etc.)
And there is plenty more to take care of.
This is what's involved on Linux:
http://www.cs.iit.edu/~scs/psfiles/dsn08_dccs.pdf
Not exactly easy.
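For the first half of the question, dumping a process's memory to a file, Windows does expose MiniDumpWriteDump in dbghelp.dll. A rough P/Invoke sketch follows, with the caveat that this produces a debugger-style snapshot for analysis, not something you can resume as a live process:
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

static class DumpWriter
{
    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId, SafeHandle hFile,
        int dumpType, IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

    public static void WriteFullDump(Process target, string path)
    {
        using var fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite);
        // 2 = MiniDumpWithFullMemory (MINIDUMP_TYPE)
        if (!MiniDumpWriteDump(target.Handle, (uint)target.Id, fs.SafeFileHandle,
                               2, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
            throw new InvalidOperationException("MiniDumpWriteDump failed: " + Marshal.GetLastWin32Error());
    }
}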

Network share copy cooperation:

I have a network server which has a share that exposes a set of files. These files are consumed by processes that are running on multiple servers, sometimes several processes on the same machine.
The set of files is updated a couple of times a day, and it is fairly large.
We are attempting to reduce the bandwidth used by these processes retrieving these filesets by making processes that are on the same machine share the same fileset.
In order to do this, we want each process on the same machine to coordinate with the other processes that need the same files so that only one will attempt to download the files, and then the files will be shared by all the processes once complete.
Additionally, we need to prevent the server from performing an update on the fileset while a download is in progress.
In order to facilitate this requirement, I created a file lock class. This class opens a file called .lock in the specified location. The file is opened as read/write so that it will prevent another process from doing the same, regardless of what machine the process is running on. This is enclosed in a try/catch so that if the file is already locked, the exception is caught and the lock is not acquired. This already works correctly.
The problem I am trying to solve is that if a process hangs for some reason while it has the lock, all the other processes will indefinitely fail to sync these files because they cannot acquire the lock.
One solution we were exploring today is a multi-lock setup, where each lock file has a GUID in its name and, instead of fighting over a single hard lock, any number of locks can be created. Processes would then be responsible for making sure only one lock exists before they begin a download. That way, if a process holding a lock hangs, its lock can be considered expired after a time limit, and nothing prevents a new process from adding a lock alongside the hung one.
The problem here is that the creation of these multi locks needs to be synchronized between processes or else there could be a race condition on the creation and checking of the lock count.
I don't see a way to synchronize this without reintroducing a hard locking mechanism like the first solution, but then we are right back where we started where a hung process will block the others from doing a download.
Any suggestions?
A common way to tackle this is to use some sort of shareable lock file, with the real locking logic carried out via its content. For example, consider a SQLite database file with a single table as the lock file, something like:
CREATE TABLE lock (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host TEXT,
pid INTEGER,
expires INTEGER
)
A consumer (or the producer, for an update to the fileset) requests the lock by INSERTing a row into the table
Each process heartbeats by periodically UPDATEing the expires value on its own row, so the row never expires while the process is alive
Expired rows are discarded: crashed processes stop updating, so their locks eventually expire and are removed
The row with the lowest (non-expired) id holds the lock
Processes on the same host can check the host field to see whether another process on the same machine is already copying, in which case there is no need to request another copy
Of course this can be done via a database server (or in fact a locking server) instead of a database file if that is feasible, but the SQLite approach has the advantage of requiring nothing more than file access.
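A rough consumer-side sketch of that scheme, assuming the Microsoft.Data.Sqlite package and the table above (the lease length and all names are illustrative). A real implementation would also heartbeat its row periodically and delete it when finished:
using System;
using System.Diagnostics;
using Microsoft.Data.Sqlite;

// Try to acquire the fileset lock; returns our row id if we hold the lock, or -1 otherwise.
static long TryAcquireLock(string lockDbPath, int leaseSeconds = 60)
{
    using var conn = new SqliteConnection($"Data Source={lockDbPath}");
    conn.Open();
    long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();

    // Register our request.
    var insert = conn.CreateCommand();
    insert.CommandText = "INSERT INTO lock (host, pid, expires) VALUES ($host, $pid, $expires)";
    insert.Parameters.AddWithValue("$host", Environment.MachineName);
    insert.Parameters.AddWithValue("$pid", Process.GetCurrentProcess().Id);
    insert.Parameters.AddWithValue("$expires", now + leaseSeconds);
    insert.ExecuteNonQuery();
    long myId = (long)new SqliteCommand("SELECT last_insert_rowid()", conn).ExecuteScalar();

    // Discard expired rows, then see who is first in line.
    var cleanup = conn.CreateCommand();
    cleanup.CommandText = "DELETE FROM lock WHERE expires < $now";
    cleanup.Parameters.AddWithValue("$now", now);
    cleanup.ExecuteNonQuery();

    long holder = (long)new SqliteCommand("SELECT MIN(id) FROM lock", conn).ExecuteScalar();
    return holder == myId ? myId : -1;
}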
The trick here is good use of caching.
The designated "download" process that updates the fileset should first grab it from the remote location and store it in a temp file. Then it should simply keep attempting to acquire a read/write lock on the local file(s) you want to replace; when it succeeds, do the swap and drop the lock. That part should go very quickly.
Also, a simple file copy to a local drive is quite unlikely to "hang", so the other dependent processes will be able to continue functioning regardless of what happens to this one.
To make sure the downloading process is functioning correctly, you'll need a monitoring program that pings it every so often to check that it's responsive. If it isn't, alert someone.
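A minimal sketch of that swap step, assuming the download has already landed in a temp file (paths and the retry delay are made up):
using System.IO;
using System.Threading;

// tempPath already contains the freshly downloaded copy; swap it into livePath.
static void SwapInDownloadedFile(string tempPath, string livePath)
{
    while (true)
    {
        try
        {
            using (var dest = File.Open(livePath, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
            using (var src = File.OpenRead(tempPath))
            {
                dest.SetLength(0);   // truncate the old contents
                src.CopyTo(dest);    // write the new contents while holding the exclusive lock
            }
            File.Delete(tempPath);
            return;
        }
        catch (IOException)
        {
            // Another process still has the file open for reading; back off and retry.
            Thread.Sleep(200);
        }
    }
}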

Best way to make sure another process acquires a Mutex before moving on?

I'm not exactly sure the best way to go out about what I'm trying to do, so I thought I'd turn here for ideas.
I have N programs that all must communicate with one process, ProcA, through a memory-mapped file.
When I open any program in group N, it checks whether ProcA has been started; if not, it launches it. Here is where the question comes in...
When ProcA is ready, I need to be able to communicate to the process that spawned it, that the mapped memory has been created and communication can begin.
I was thinking about using a Mutex to accomplish this: have the spawning process attempt to acquire and release a mutex in a loop, checking each time whether it was the creator, until it sees that the mutex was created somewhere else. Even this seems potentially problematic, though, because as I said, N programs will be doing this at once; if several of them are spinning, acquiring and releasing like this, they'll see each other holding the mutex and mistake one another for ProcA.
So, what's the best way for N processes to block until ProcA signals that it's open for business?
Thanks!
Edit
For further clarification, I've tried having the process that spawns ProcA create the memory map, which ProcA can then take over. But I ran into the same problem: the spawning process needs to know when it can release the shared memory, and if it releases it before ProcA has grabbed it, the memory map is torn down.
Edit 2
I need to pass pointer data around between the two processes, so memory-mapped files are my only option; pipes, sockets, etc. won't work for this.
I think what you should really look into is IPC with WCF (it's an old article, but it should give you the basics); this would be far better than your current approach of signalling each other with the Mutex.
But if you insist on this approach, you can create a system-wide ("Global\") or session-wide (named) Mutex and simply do WaitOne/ReleaseMutex. There's no need for looping, and you should stay away from the looping approach anyway because it needlessly wastes CPU cycles; beyond that, I don't think there's much to it, unless I'm missing something.
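If signalling readiness is all that's needed, a named EventWaitHandle maps onto this handshake even more directly than a Mutex; a sketch with assumed names:
using System;
using System.Threading;

// In ProcA, once the memory-mapped file has been created and is ready for use:
var ready = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\ProcA.Ready");
// ... set up the memory-mapped file here ...
ready.Set();   // releases every waiter below, and stays signalled for late arrivals

// In each of the N client programs, after launching (or finding) ProcA:
var wait = new EventWaitHandle(false, EventResetMode.ManualReset, @"Global\ProcA.Ready");
if (!wait.WaitOne(TimeSpan.FromSeconds(30)))   // don't block forever if ProcA died
    throw new TimeoutException("ProcA never signalled that the mapped memory is ready.");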

Process Management in .NET

In my server/client setup, the client applications manage other applications on the machine (start/stop/restart/query processes). Right now I just have a very basic setup using the process ID, but it occurred to me that, before this goes live, I need to improve it.
If the process stops and another one starts that reuses the same ID between my queries, that will throw the whole system off. None of the processes that I start will ever come from the same file path, but they will oftentimes have the same executable name.
I am not having much luck finding it, but can I get the executable path of a running process? I imagine my best bet when querying the running state is to look up the stored process ID first and, if that is running, also check the file path/executable name to make sure it matches.
Would there be a better way to do this, or is this the best possible scenario?
You can use the Process.Exited event to be notified when a process you are monitoring exits. This way there will be no chance of things like that happening "while you aren't looking".
Note: for the Exited event to be raised, first you have to explicitly set Process.EnableRaisingEvents to true.
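A small sketch combining the Exited notification with the executable-path check the question asks about (storedPid and expectedExePath are assumed to come from wherever you saved them at launch):
using System;
using System.Diagnostics;

static void WatchProcess(int storedPid, string expectedExePath)
{
    Process proc = Process.GetProcessById(storedPid);

    // Verify we are still looking at the process we started, not a reused PID.
    // (MainModule needs sufficient access rights and matching bitness, otherwise it throws.)
    string exePath = proc.MainModule.FileName;
    if (!string.Equals(exePath, expectedExePath, StringComparison.OrdinalIgnoreCase))
        throw new InvalidOperationException("The PID has been reused by a different executable.");

    // Get notified the moment it exits instead of polling for it.
    proc.EnableRaisingEvents = true;
    proc.Exited += (sender, e) => Console.WriteLine($"Process {storedPid} has exited.");
}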

Building C# console app for multiple instances

I'm building a console application which imports data into databases. It runs every hour, depending on an input CSV file being present. The application also needs to be reused for other database imports on the same server; e.g. there could be up to 20 instances of the same .exe file, with each instance having its own separate configuration.
At the moment I have the base application which passes a location of config file via args, so it can be tweaked depending on which application needs to use it. It also undertakes the import via a transaction, which all works fine.
I'm concerned that having 20 instances of the same .exe file running on the same box, every hour, may cause the CPU to max out?
What can I do to resolve this? Would threading help?
Why not make a single instance that can handle multiple configurations? Seems a lot easier to maintain and control.
Each executable will be running in its own process and therefore with its own thread(s). Depending on how processor-intensive each task is, the CPU may well max out, but this is not necessarily something to be concerned about. If you are concerned about concurrent load, the best approach may be to stagger the scheduling of your processes so that you have the minimum number of them running simultaneously.
No, this isn't a threading issue.
Just create a system-wide named Mutex at the start of the application. When creating that Mutex, see if it already exists. If it does, it means that there is another instance of your application running. At this point you can give the user a message (via the console or message box) to say that another instance is already running, then you can terminate the application.
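A typical sketch of that check; the mutex name is a placeholder, and in this scenario you would probably derive it from the configuration so that each import job gets its own:
using System;
using System.Threading;

static class Program
{
    static void Main()
    {
        // One mutex name per logical application (or per configuration); the name is made up.
        using var instanceMutex = new Mutex(true, @"Global\MyImporter.Instance", out bool createdNew);
        if (!createdNew)
        {
            Console.WriteLine("Another instance is already running; exiting.");
            return;
        }

        // ... run the import; the mutex is released when this process exits ...
    }
}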
I realize this thread is very old but I had the very same issues on my project. I suggest using MSMQ to process jobs in sequence.
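If you do look at MSMQ, the classic System.Messaging API sketch below shows the idea; the queue path and message contents are made up:
using System.Messaging;   // .NET Framework's System.Messaging assembly

static class ImportJobQueue
{
    const string QueuePath = @".\Private$\importJobs";   // hypothetical local private queue

    // Producer side: enqueue a job instead of launching an importer directly.
    public static void EnqueueJob(string configPath)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);
        using (var queue = new MessageQueue(QueuePath))
            queue.Send(configPath, "import job");
    }

    // Consumer side: a single worker drains the queue, so jobs run strictly one after another.
    public static string DequeueJob()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            return (string)queue.Receive().Body;   // blocks until a job arrives
        }
    }
}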
