How would I make it so that I have one main program with a background process that "listens" for caught exceptions (via a function like sendDebugInfo(Exception e), for example), then unhides the second console and displays the message, but if the user closes the debug window it doesn't exit the program?
If the above isn't clear enough, here is a simple version:
Console application 1 calls a function helloWorld().
The helloWorld() function sends a string to a second console window (but within the same project).
The second console displays "hello world".
You are really asking about inter-process communication (IPC).
There are many ways to achieve IPC. I suggest you have a look at Named Pipes. They are easy to use and quite reliable.
http://msdn.microsoft.com/en-us/library/system.io.pipes.aspx
The basic idea behind Named Pipes is that you have a named resource that you can write messages to in one process and read messages from in the other process. The message can be anything you want. The processes connect to the pipe simply by using the pre-agreed name for it.
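For example, here is a minimal sketch of that idea (the pipe name "debugPipe" and the two program outlines are my own illustration, not code from the question): the debug console owns the server end of the pipe and blocks until the main application connects and writes a message.

    // Sketch only: the listening (debug) console owns the server end of the pipe.
    using System;
    using System.IO;
    using System.IO.Pipes;

    class DebugListener
    {
        static void Main()
        {
            using (var server = new NamedPipeServerStream("debugPipe", PipeDirection.In))
            {
                server.WaitForConnection();                // blocks until the main app connects
                using (var reader = new StreamReader(server))
                {
                    string message;
                    while ((message = reader.ReadLine()) != null)
                        Console.WriteLine(message);        // e.g. the exception text
                }
            }
        }
    }

    // In the main application, something like sendDebugInfo could write to the client end:
    class DebugSender
    {
        public static void SendDebugInfo(string text)
        {
            using (var client = new NamedPipeClientStream(".", "debugPipe", PipeDirection.Out))
            {
                client.Connect(1000);                      // wait up to one second for the listener
                using (var writer = new StreamWriter(client))
                {
                    writer.WriteLine(text);
                    writer.Flush();
                }
            }
        }
    }

Because the two programs are separate processes, closing the debug console has no effect on the main application; a later SendDebugInfo call simply fails to connect until the listener is started again.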
Just for clarification: saying that you want two console applications running in different threads is somewhat misleading. Console applications normally run as entirely separate processes, and since threads are not shared between processes, two console applications running on different threads is simply the norm. Phrased that way, however, it sounds as though you want to run them in the same process but on different threads, which I'm not even sure is possible.
That said, Eric J. is right: you really seem to be asking about IPC, which can be performed in a number of ways. Named pipes are one way and TCP loopback is another. If you want these applications to run on separate machines at some point, you're going to want TCP. Otherwise, named pipes are a lot easier to deal with.
I'd suggest reading up on IPC, figuring out which method suits your needs, and trying to make it work. When you run into a specific issue like "my messages aren't getting through", come back and search for a similar question or create a new one.
Related
I am optimizing the parameters of a simulation model that I wrote in C# using an external optimizer.
To allow this external optimizer to 'call' my C# model, I wrote a console application in C# around my model.
The external optimization makes a system call to the console application (the name of this application is an input to the external optimizer).
So far so good.
The problem is that it is not very efficient: every time the console application is called, it needs to initialize my C# model, which takes a lot of time, while in fact I want to run the same model over and over again (thus, initialize it once and then only run it through the console application).
I was thinking about writing another application which initializes my model, keeps running, and responds to events raised by the console application (i.e. runs the model).
How can I send an event from a console application to another, continuously running application in C#?
Or shouldn't I do this with events at all, and use another means of communication?
Named pipes are the simplest way to organize inter-process communication; the minimum requirement is using System.IO.Pipes.
This question has a simple example of named pipes usage:
Example of Named Pipes
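Applied to your model/optimizer setup, a rough sketch might look like the following (the pipe name "modelPipe" and the Model/Run names are placeholders for your own code): a host process initializes the model once and then serves run requests over the pipe.

    // Hypothetical host: pay the expensive initialization once, then keep serving requests.
    using System.IO;
    using System.IO.Pipes;

    class Model
    {
        public Model() { /* expensive initialization happens once here */ }
        public double Run(string parameters) { return 0.0; /* stand-in for the real simulation */ }
    }

    class ModelHost
    {
        static void Main()
        {
            var model = new Model();
            while (true)
            {
                using (var server = new NamedPipeServerStream("modelPipe", PipeDirection.InOut))
                {
                    server.WaitForConnection();
                    using (var reader = new StreamReader(server))
                    using (var writer = new StreamWriter(server) { AutoFlush = true })
                    {
                        string parameters = reader.ReadLine();   // sent by the thin console wrapper
                        double result = model.Run(parameters);   // no re-initialization needed
                        writer.WriteLine(result);
                    }
                }
            }
        }
    }

The existing console application then becomes a thin client: it connects with a NamedPipeClientStream, writes the optimizer's parameters as one line and prints the line it reads back, so the optimizer's interface stays exactly the same.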
How can I start separate processes integrated into one WPF application in C#, and how can I communicate between them?
What I want is something like Internet Explorer.
In this case I want to integrate several applications into one application, with each running in its own separate process.
The simplest way is to use the System.Diagnostics.Process class and named pipes.
But, generally, this would hurt maintainability (see the other responses).
For processes running on different machines, you better have a look at WCF.
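As a rough illustration of the first part (the "ChildApp.exe" name and the idea of passing a pipe name as an argument are assumptions, not details from the question), the WPF host can start and track each integrated application like this:

    // Sketch: start each integrated application as its own process and keep the handles.
    using System.Collections.Generic;
    using System.Diagnostics;

    class ChildProcessManager
    {
        private readonly List<Process> children = new List<Process>();

        public void StartChild(string pipeName)
        {
            var child = Process.Start(new ProcessStartInfo
            {
                FileName = "ChildApp.exe",        // placeholder child executable
                Arguments = pipeName,             // e.g. tell it which named pipe to connect to
                UseShellExecute = false
            });
            children.Add(child);
        }

        public void CloseAll()
        {
            foreach (var child in children)
            {
                if (!child.HasExited)
                    child.CloseMainWindow();      // polite shutdown; Kill() only as a last resort
            }
        }
    }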
I'm designing an application back-end. For now, it is a .NET process (a Console Application) which hosts various communication frameworks such as Agatha and NServiceBus.
I need to periodically update my datastore with values (coming from the application while it's running).
I found three possible ways:
Accept command line arguments, so I can call my console app with -update.
On start-up, a background thread will periodically invoke the update method.
Create an updater.exe app which will do the updates, but I will have code duplication since in some way it will need to query the data from the source in order to save it to the datastore.
Which one is better?
Use the simplest thing that will work. Sounds like option 1 is the way to go based on the info you have given.
Option 2 involves threads, and threads always complicate programs: they are more difficult to write and debug, and there is a greater chance of bugs.
Option 3 would mean that you have two apps; if you make a change you will have to deploy new versions of both, increasing maintenance costs.
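For what it's worth, a minimal sketch of option 1 could look like this (the method names are placeholders for your own code): the same executable either performs the update and exits, or starts the normal back-end.

    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length > 0 && args[0] == "-update")
            {
                UpdateDatastore();   // placeholder for the actual update logic
                return;
            }

            RunBackend();            // placeholder for hosting Agatha, NServiceBus, etc.
        }

        static void UpdateDatastore() { /* write the current values to the datastore */ }
        static void RunBackend() { /* normal hosting code */ }
    }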
I'm building a console application which imports data into databases. This is to run every hour depending on an input CSV file being present. The application also needs to be reused for other database imports on the same server, e.g. there could be up to 20 instances of the same .exe file with each instance having their own separate configuration.
At the moment I have the base application which passes a location of config file via args, so it can be tweaked depending on which application needs to use it. It also undertakes the import via a transaction, which all works fine.
I'm concerned that having 20 instances of the same .exe file running on the same box every hour may cause the CPU to max out.
What can I do to resolve this? Would threading help?
Why not make a single instance that can handle multiple configurations? Seems a lot easier to maintain and control.
Each executable will be running in its own process and therefore with its own thread(s). Depending on how processor-intensive each task is, the CPU may well max out, but this is not necessarily something to be concerned about. If you are concerned about concurrent load, the best approach may be to stagger the scheduling of your processes so that you have the minimum number of them running simultaneously.
No, this isn't a threading issue.
Just create a system-wide named Mutex at the start of the application. When creating the Mutex, check whether it already exists; if it does, another instance of your application is already running. At that point you can give the user a message (via the console or a message box) saying that another instance is already running, and then terminate the application.
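A minimal sketch of that pattern (the mutex name is arbitrary, but every instance must use the same one; a "Global\" prefix makes it visible machine-wide):

    using System;
    using System.Threading;

    class Program
    {
        static void Main()
        {
            bool createdNew;
            using (var mutex = new Mutex(true, @"Global\MyImporterSingleInstance", out createdNew))
            {
                if (!createdNew)
                {
                    Console.WriteLine("Another instance is already running.");
                    return;   // terminate without doing any work
                }

                RunImport();  // placeholder for the actual import
            }
        }

        static void RunImport() { /* ... */ }
    }

If you only want to block duplicate configurations rather than all instances, you could include the configuration name in the mutex name.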
I realize this thread is very old but I had the very same issues on my project. I suggest using MSMQ to process jobs in sequence.
I'm working with an application that lets me write C# scripts to run in its environment, and I can import DLLs of any kind into that environment. My problem is that I'd like to enable communication between these scripts. As the environment is controlled and I have no access to the source code of the application, I'm at a loss as to how to do this.
Things I've tried:
File I/O: just writing the messages that I would like each script to read into .txt files and having the other read them. The problem is that I need these scripts to run quite quickly, and that took up too much time.
nServiceBus: I tried this, but I just couldn't get it to work in the environment that I'm dealing with. I'm not saying it can't be done, just that I can't get it done.
Does anyone know of a simple way to do this, that is also pretty fast?
Your method of interprocess communication should depend on how important it is that each message get processed.
For instance, if process A tells process B to, say, send an email to your IT staff saying that a server is down, it's pretty important.
If however you're streaming audio, individual messages (packets) aren't critical to the performance of the app, and can be dropped.
If the former, you should consider using persistent storage such as a database to store messages, and let each process poll the database to retrieve its own messages. In this way, if a process is terminated or loses communication with the other processes temporarily, it will be able to retrieve whatever messages it has missed when it starts up again.
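A rough sketch of that polling approach, assuming a hypothetical Messages table with Id, Recipient and Body columns (the schema, connection string and one-second interval are all made up for illustration):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Threading;

    class MessagePoller
    {
        const string ConnectionString = "Server=.;Database=IpcDemo;Integrated Security=true";

        public void Poll(string recipient)
        {
            while (true)
            {
                using (var connection = new SqlConnection(ConnectionString))
                {
                    connection.Open();

                    var select = new SqlCommand(
                        "SELECT Id, Body FROM Messages WHERE Recipient = @r", connection);
                    select.Parameters.AddWithValue("@r", recipient);

                    var processedIds = new List<long>();
                    using (var reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            processedIds.Add(reader.GetInt64(0));
                            Console.WriteLine("Received: " + reader.GetString(1));
                        }
                    }

                    // Delete only what was actually read, so nothing is lost between queries.
                    foreach (var id in processedIds)
                    {
                        var delete = new SqlCommand("DELETE FROM Messages WHERE Id = @id", connection);
                        delete.Parameters.AddWithValue("@id", id);
                        delete.ExecuteNonQuery();
                    }
                }

                Thread.Sleep(1000);   // poll once a second
            }
        }
    }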
The answer is simple:
Since you can import any DLL into the script, you can create a custom DLL that implements communication between the processes in any way you desire: shared memory, named pipes, TCP/UDP.
You could use a form of Interprocess Communication, even within the same process. Treat your scripts as separate processes, and communicate that way.
Named pipes could be a good option in this situation. They are very fast, and fairly easy to use in .NET 3.5.
Alternatively, if the scripts are loaded into a single AppDomain, you could use a static class or singleton as a communication service. However, if the scripts get loaded in isolation, this may not be possible.
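If the single-AppDomain assumption holds, a sketch of such a communication service could be as small as a locked dictionary of queues (the class and channel names here are invented for illustration, and it sticks to collections available in .NET 3.5):

    using System.Collections.Generic;

    public static class ScriptBus
    {
        private static readonly object sync = new object();
        private static readonly Dictionary<string, Queue<string>> queues =
            new Dictionary<string, Queue<string>>();

        // A script posts a message to a named channel.
        public static void Send(string channel, string message)
        {
            lock (sync)
            {
                Queue<string> queue;
                if (!queues.TryGetValue(channel, out queue))
                    queues[channel] = queue = new Queue<string>();
                queue.Enqueue(message);
            }
        }

        // Another script polls its channel; returns false when nothing is waiting.
        public static bool TryReceive(string channel, out string message)
        {
            lock (sync)
            {
                Queue<string> queue;
                if (queues.TryGetValue(channel, out queue) && queue.Count > 0)
                {
                    message = queue.Dequeue();
                    return true;
                }
                message = null;
                return false;
            }
        }
    }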
Well, not knowing the details of your environment, there is not much I can really offer. You are using the term "C# scripts"...I am not exactly sure what that means, as C# is generally a compiled language.
If you are using normal C#, have you looked into WCF with Named Pipes? If your assemblies are running on the same physical machine, you should be able to easily and quickly create some WCF services hosted with the Named Pipe binding. Named pipes provide a simple, efficient, and quick message transfer mechanism in a local context. WCF itself is pretty easy to use, and is a native component of the .NET framework.
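As a hedged sketch of that setup (the contract, the address and the Ping operation below are invented for illustration), one process hosts the service and the others call it over the named pipe binding:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IScriptService
    {
        [OperationContract]
        string Ping(string text);
    }

    public class ScriptService : IScriptService
    {
        public string Ping(string text) { return "pong: " + text; }
    }

    class HostProgram
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(ScriptService),
                       new Uri("net.pipe://localhost/scriptService")))
            {
                host.AddServiceEndpoint(typeof(IScriptService), new NetNamedPipeBinding(), "");
                host.Open();
                Console.WriteLine("Service running; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

    // The calling side creates a proxy over the same binding and address:
    //   var factory = new ChannelFactory<IScriptService>(
    //       new NetNamedPipeBinding(), "net.pipe://localhost/scriptService");
    //   IScriptService proxy = factory.CreateChannel();
    //   Console.WriteLine(proxy.Ping("hello"));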
Since you already have the file I/O in place, you might get enough speed by placing the files on a RAM disk. If you are polling for changes today, a FileSystemWatcher could help make your communication more responsive.
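For example (the R:\messages folder is just a stand-in for wherever your message files live, e.g. on the RAM disk), a watcher reacts to changes instead of polling:

    using System;
    using System.IO;

    class FileMessageListener
    {
        static void Main()
        {
            // Watch the shared message folder; events fire as soon as a file is written.
            var watcher = new FileSystemWatcher(@"R:\messages", "*.txt");
            watcher.Created += (sender, e) => Console.WriteLine("New message file: " + e.FullPath);
            watcher.Changed += (sender, e) => Console.WriteLine("Message file updated: " + e.FullPath);
            watcher.EnableRaisingEvents = true;

            Console.ReadLine();   // keep the process alive while waiting for events
        }
    }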
You can use PipeStream, which is faster than disk I/O because pipes are handled in main memory.
XMPP/Jabber is another approach; take a look at jabber.net.
Another easy way is to open a TCP socket on a predefined port, connect to it from the other process, and communicate that way.
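A minimal sketch of that (port 9050 is an arbitrary choice, and loopback keeps it local to the machine):

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    class ReceivingSide
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Loopback, 9050);
            listener.Start();
            using (TcpClient client = listener.AcceptTcpClient())        // blocks until the other process connects
            using (var reader = new StreamReader(client.GetStream()))
            {
                Console.WriteLine(reader.ReadLine());
            }
            listener.Stop();
        }
    }

    // The sending side connects from the other process and writes a line:
    //   using (var client = new TcpClient("127.0.0.1", 9050))
    //   using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
    //   {
    //       writer.WriteLine("hello from the other process");
    //   }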