My application needs to:
Run an external command-line application
Capture the standard output/error during execution
Parse the data from the output/error and use it for status display in the main app
While it's not hard to do, this seems like a very common task, and I was wondering whether there's a library or framework that simplifies it in C#.
Edit: Here's a "Mono.Options inspired" example of the kind of thing I'm looking for:
ProcessWithParser proc = new ProcessWithParser(filename, arguments);
proc.AddWatch("percent=??", v => this.percent = Convert.ToInt32(v));
proc.Start();
In the "concept" above, it would locate "percent=" in the output, read the next two characters as a string, and call the delegate using this string as a parameter.
The base class library already provides the tools for this; there isn't much a framework would really need to add.
Redirecting the process's standard input and output streams is trivial, and Regex plus the stream APIs make the parsing fairly easy as well.
The framework pretty much does what any (flexible) custom library would normally do for you in other languages.
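For example, here is a minimal sketch of the "ProcessWithParser" concept built only on System.Diagnostics.Process and Regex. The executable name, its arguments, and the "percent=NN" output format are assumptions taken from the question, not a real API:

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class ProgressWatcher
{
    static void Main()
    {
        // Hypothetical external tool; replace with your own filename and arguments.
        var psi = new ProcessStartInfo("encoder.exe", "--input data.raw")
        {
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        int percent = 0;
        var watch = new Regex(@"percent=(\d{2})");   // the "percent=??" watch from the question

        using (var proc = new Process { StartInfo = psi })
        {
            DataReceivedEventHandler onLine = (s, e) =>
            {
                if (e.Data == null) return;          // null signals end of stream
                Match m = watch.Match(e.Data);
                if (m.Success)
                    percent = Convert.ToInt32(m.Groups[1].Value);  // update the status display here
            };

            proc.OutputDataReceived += onLine;       // capture stdout...
            proc.ErrorDataReceived += onLine;        // ...and stderr

            proc.Start();
            proc.BeginOutputReadLine();
            proc.BeginErrorReadLine();
            proc.WaitForExit();
        }

        Console.WriteLine("Final: {0}%", percent);
    }
}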
I know this is wrong (trying to code in C# with a C/C++ mindset).
But is it possible to create inline functions / inline recursive functions (up to the Nth call) / macros in C#?
Because if I intend to call the same function 1000 times, but it's a simple function, the overhead of setting up the stack frame is quite big... and unnecessary.
In C I could use inline functions. Is there something like that in C#?
Again... I'm not saying that C/C++ is better... I'm just new to C# and have no one to ask these simple questions...
Inline functions in C#?
Finally, in .NET 4.5 the CLR allows one to force method inlining using the MethodImplOptions.AggressiveInlining value. It is also available in Mono's trunk (committed today).
[MethodImplAttribute(MethodImplOptions.AggressiveInlining)]
void Func()
{
    Console.WriteLine("Hello Inline");
}
The answer should be: don't worry about it. There's no need to start with micro-optimizations unless you've measured and it's actually a problem. 1000 function calls are nothing until they're something, and they're dwarfed by a single network call. Write your program, check the performance against your goals, and if it isn't fast enough, use a profiler and figure out where your problem spots are.
Yes, C# is a very powerful general-purpose language that can do nearly anything. While inline functions / macros are generally frowned upon, C# does provide you with multiple tools that can accomplish this in a more concise and precise fashion. For example, you may consider using template files, which can be used (and reused) in nearly all forms of .NET applications (web, desktop, console, etc.).
http://msdn.microsoft.com/en-us/data/gg558520.aspx
From the article:
What Can T4 Templates Do For Me?
By combining literal text, imperative code, and processing directives, you can transform data in your environment into buildable artifacts for your project. For example, inside a template you might write some C# or Visual Basic code to call a web service or open an Excel spreadsheet. You can use the information you retrieve from those data sources to generate code for business rules, data validation logic, or data transfer objects. The generated code is available when you compile your application.
If you're new to programming I would recommend against implementing templates. They are not easy to debug in an N-tier application because they are generated and run at build/run time. However, it is an interesting read and opens up many possibilities.
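To give a feel for it, here is a minimal T4 sketch (a .tt file with embedded C#); the Status class and its property names are made up purely for illustration:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
// Generated by the template; the class and property names below are assumptions.
public partial class Status
{
<# foreach (var name in new[] { "Percent", "Message" }) { #>
    public string <#= name #> { get; set; }
<# } #>
}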
I'm supposed to write a command line tool in C#. Problem is, I'm completely new to it and have to read up on a lot of stuff. The tool has to accept several parameters with a syntax I don't understand. It goes like this:
tool.exe \path\data.log /lastrun:file1.txt >file2.txt
Is that /lastrun:... valid markup?
I know that >file2.txt has something to do with output and stdout, but I can barely find any info for dummies. Does it write a text file?
The tool is supposed to output data on stdout, which is meant to be read again and possibly processed with further console commands. How can one reference the output?
I have practically no experience with command line tools. I would appreciate it if anyone could either drop me some smart words I could look up, some links, or simply explain what is going on here.
You are the one to decide the format of the command line parameters (what you call "markup").
It is entirely up to you whether it is valid or not.
You need to parse the passed in arguments - see Main() and Command-Line Arguments (C# Programming Guide) on MSDN for details. Many people use a command line parsing library (there are many - search and find one you like, perhaps the one with the best documentation).
As for > - I suggest you read about command redirection (article about XP, but still valid).
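As a minimal sketch, here is one way to handle the arguments from your example; the /lastrun: switch is the one you showed, everything else is invented for illustration:

using System;

class Program
{
    static void Main(string[] args)
    {
        // tool.exe \path\data.log /lastrun:file1.txt >file2.txt
        // Note: ">file2.txt" never shows up in args; the shell handles that redirection itself.
        string logPath = null;
        string lastRunFile = null;

        foreach (string arg in args)
        {
            if (arg.StartsWith("/lastrun:", StringComparison.OrdinalIgnoreCase))
                lastRunFile = arg.Substring("/lastrun:".Length);
            else
                logPath = arg;   // the positional argument, e.g. \path\data.log
        }

        // Everything written to stdout can be redirected with > or piped to another tool.
        Console.WriteLine("log={0} lastrun={1}", logPath, lastRunFile);
    }
}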
Outputting data on stdout is easy: just write to the Console class. If you want to read input, you can use the static read methods on the Console class as well, though depending on the type of data you are sending you may want to look into pipes. Here is another post, Standard Input & Output in .NET, asking the same question.
As for the console input format, as was mentioned, that's up to you!
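For instance, a minimal sketch of a tool that reads whatever is piped into it on stdin and re-emits processed lines on stdout (the upper-casing is just a placeholder for real processing):

using System;

class Filter
{
    static void Main()
    {
        // Reads whatever another command pipes in, e.g.:  tool.exe ... | filter.exe
        string line;
        while ((line = Console.In.ReadLine()) != null)   // null at end of the piped input
        {
            Console.WriteLine(line.ToUpperInvariant());  // placeholder processing
        }
    }
}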
During development of a new application, I was wondering what the most flexible design for a dynamic, let's say 'scriptable', system would look like.
In general, I would need a file in plain text (e.g. TXT or XML) wherein I define a trigger (in my example a hex string) and a corresponding action (open a window, execute a SQL transaction, …). I should be able to add new entries without recompiling the whole application.
As you see, it's not really scriptable this way, but I need a solution to define what happens with which input.
Has anyone got some experience with this?
There are various scripting languages you can use inside your .NET application; IronPython being one obvious example, but others are available, JavaScript for instance (talking to your .NET objects, not in a browser). One of these might have some application here?
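A minimal sketch of embedding IronPython via its hosting API; the "trigger.py" file and the variable name are assumptions for illustration:

using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class ScriptHost
{
    static void Run()
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Expose whatever .NET objects the script should be able to drive.
        scope.SetVariable("trigger", "0xFF01");

        // "trigger.py" is a hypothetical script deployed next to the application;
        // editing it changes behaviour without recompiling the host.
        engine.ExecuteFile("trigger.py", scope);
    }
}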
If you have only simple program flow you can use the Windows Workflow Foundation rules engine.
Features
Included in the .NET 3.0 runtime, no extra cost
RuleSetDialog can be integrated into your code to edit rules, including IntelliSense
Rules are persistable as XML files
The rule engine can evaluate expressions and perform actions
See also
A quick and dirty Rules Engine using Windows Workflow, Part 1 and Part 2
I first found this topic when reading the German-language magazine dotnetpro 10/2010, page 30, "Die Rule-Engine aus .NET ohne Workflow benutzen" ("Using the rule engine from .NET without Workflow").
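A minimal sketch of loading and running a persisted RuleSet (references to the System.Workflow.Activities and System.Workflow.ComponentModel assemblies are required); "rules.xml" and the StatusInfo class are assumptions for illustration:

using System.Workflow.Activities.Rules;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;

// Hypothetical target object the rules read from and write to.
public class StatusInfo
{
    public string Trigger;
    public string Action;
}

class RuleRunner
{
    static void Run()
    {
        // "rules.xml" is a RuleSet persisted to XML (e.g. edited via RuleSetDialog).
        RuleSet ruleSet;
        using (XmlReader reader = XmlReader.Create("rules.xml"))
        {
            ruleSet = (RuleSet)new WorkflowMarkupSerializer().Deserialize(reader);
        }

        var target = new StatusInfo { Trigger = "0xFF01" };
        var validation = new RuleValidation(typeof(StatusInfo), null);
        ruleSet.Execute(new RuleExecution(validation, target));

        // target.Action now holds whatever the matching rule assigned to it.
    }
}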
Well you can embed a .NET language like IronPython or possibly Boo
Is it possible to use Mathematica's computing capabilities from other languages? I need to do some complex operations (not necessarily symbolic, btw), and it'd be pretty sweet to be able to just call Mathematica functions or run Mathematica code right from my Python/C# program.
Is it possible?
Looks like there is a MathLink API you can use from C#, C, or Java; have you checked this out?
http://reference.wolfram.com/mathematica/guide/MathLinkAPI.html
Perhaps the easiest way is to make the Mathematica program its own self-contained script and just call it as a system call or pipe stuff to/from it via stdin/stdout. Here's how to do that:
Call a Mathematica program from the command line, with command-line args, stdin, stdout, and stderr
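On the C# side, the shelling-out half might look roughly like this; the kernel executable name ("math") and the -script flag vary by platform and Mathematica version, and "compute.m" is a hypothetical script, so treat those as assumptions:

using System;
using System.Diagnostics;

class MathematicaCall
{
    static void Main()
    {
        var psi = new ProcessStartInfo("math", "-script compute.m 42")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var proc = Process.Start(psi))
        {
            string output = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();
            Console.WriteLine(output);   // parse the result in C# from here
        }
    }
}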
I haven't used it, but this looks interesting: you can call Mathematica code directly from your C# app using .NET/Link (a product by Wolfram).
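A minimal .NET/Link sketch, assuming Wolfram.NETLink.dll from a local Mathematica installation is referenced and a local kernel is available:

using System;
using Wolfram.NETLink;

class NetLinkExample
{
    static void Main()
    {
        IKernelLink ml = MathLinkFactory.CreateKernelLink();
        ml.WaitAndDiscardAnswer();   // discard the kernel's initial InputNamePacket

        string result = ml.EvaluateToOutputForm("Integrate[Sin[x]^2, x]", 0);
        Console.WriteLine(result);

        ml.Close();
    }
}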
Yes, but there are some subtleties. I covered Mathematica/.NET interoperability in my book F# for Scientists but dropped the subject for its successor, F# for Technical Computing.
So here's the concept. I'm trying to write a C# backend for CouchDB (keep reading, this has little or nothing to do with CouchDB).
Here's the basic idea:
1) The DB Driver uploads an arbitrary string to the DB Server.
2) The DB Driver tells the DB server to run the code and produce a view
3) The DB Server loads my C# backend, hands it the input string
4) The DB Server hands the function to the C# backend
5) The DB Server hands the C# backend data to execute the function over
6) The C# Backend executes the code and returns the data to the DB Server
7) The DB Server stores, or otherwise sorts and sends the results back to the DB Driver
So the problem I'm having is with the remote code execution. All my models are C# objects, so it'd be really nice to be able to write the code on the front end and have it transferred to the back end for execution, without having to copy DLLs etc. Currently in CouchDB the query language is JavaScript, so it's super simple to just copy the code into a string.
Any options like this in C#? Can I get a string version of a function? Should I create a external server that hosts these DLL assemblies and serves them up to the CouchDB backends? Should I compile them on the front end, and embed the .cs files as a resource? I'm just brainstorming at this point. Any thoughts/ideas/questions are welcome.
Would you be able to achieve this using expression trees (System.Linq.Expressions)?
Perhaps some means of passing a serialised expression tree to the back end, compiling it into a lambda/Func<> on the server side and then executing it?
I have not played with them except for some curiosity research when LINQ was unveiled, but it may help.
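The compile-and-execute half is straightforward; the serialisation step would still need a library or a custom wire format. A minimal sketch, where the Document class and the predicate are made up for illustration:

using System;
using System.Linq.Expressions;

// Hypothetical document shape.
class Document
{
    public int Score;
}

class ViewRunner
{
    static void Main()
    {
        // In the real system this tree would arrive serialised from the front end.
        Expression<Func<Document, bool>> expr = doc => doc.Score > 10;

        Func<Document, bool> compiled = expr.Compile();
        bool include = compiled(new Document { Score = 42 });   // true
        Console.WriteLine(include);
    }
}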
It sounds like what you're looking for is a JavaScript-style eval method for C#. Unfortunately that does not exist today and is not trivial to implement. The two main approaches for dynamically executing C# code are:
Building lightweight methods via Reflection.Emit
Compiling assemblies on the fly (via CodeDom or straight calls to csc).
Another option to consider here is to take advantage of a dynamic language. Starting with C# 4.0, it's very easy to interop with dynamic languages. Instead of the backend being passed C# code, why not pass Python or Ruby and take advantage of the DLR?
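As a minimal sketch of the second approach, compiling a snippet on the fly with CodeDom; the source string and the type/method names are invented for illustration:

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class DynamicCompile
{
    static void Main()
    {
        // Hypothetical "view" code received as a string from the DB driver.
        const string source = @"
            public static class MapFunction
            {
                public static int Map(int x) { return x * 2; }
            }";

        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters { GenerateInMemory = true };
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);

            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Compilation failed");

            Type type = results.CompiledAssembly.GetType("MapFunction");
            int mapped = (int)type.GetMethod("Map").Invoke(null, new object[] { 21 });
            Console.WriteLine(mapped);   // 42
        }
    }
}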
Define "function." Is this arbitrary code (which must be dynamically compiled and executed), or a list of methods that could be attached to a class?
If the latter (which is what you should be doing: allowing arbitrary dynamic code execution not only violates the .NET development model in my opinion (no flames please), it's also a huge security risk), you can use a web service.
Web services and WCF services are an excellent way to distribute processing between machines.
There's a couple of projects I've used in the past that might help you out:
Vici Parser
Vici Parser is a .NET library for late-bound expression parsing and template rendering. It also includes an easy to use tokenizer that can be used for parsing all kinds of file formats (a full-featured JSON parser is included).
This CodeProject article
This example shows an "Eval" function for C# and implements different usages of this evaluate function. The library is written in C# and can be tested with the MainForm in the solution.
Other alternatives include using XSLT scripting (something the Umbraco CMS makes use of) and perhaps something like LUA.NET, which would enable you to sandbox what the code can do.