Is it possible to use Mathematica's computing capabilities from other languages? I need to do some complex operations (not necessarily symbolic, btw), and it'd be pretty sweet to be able to just call Mathematica's functions or run Mathematica code right from my Python/C# program.
Is it possible?
Looks like there is a MathLink API you can use from C#, C, or Java. Have you checked this out?
http://reference.wolfram.com/mathematica/guide/MathLinkAPI.html
Two links about usage of Python and .NET (for C#):
Perhaps the easiest way is to make the Mathematica program its own self-contained script and just call it as a system call or pipe stuff to/from it via stdin/stdout. Here's how to do that:
Call a Mathematica program from the command line, with command-line args, stdin, stdout, and stderr
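On the C# side, the system-call approach might look roughly like this. This is a minimal sketch: the kernel executable name ("math"), the "-script" flag, and the "compute.m" script are assumptions that depend on your Mathematica version and installation.

using System;
using System.Diagnostics;

class MathematicaPipeExample
{
    static void Main()
    {
        // Assumed: the Mathematica kernel is on the PATH as "math" and
        // "compute.m" is your self-contained script reading from stdin.
        var psi = new ProcessStartInfo("math", "-script compute.m")
        {
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var proc = Process.Start(psi))
        {
            proc.StandardInput.WriteLine("3.14159");            // feed input via stdin
            proc.StandardInput.Close();
            string result = proc.StandardOutput.ReadToEnd();    // read the result via stdout
            proc.WaitForExit();
            Console.WriteLine(result);
        }
    }
}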
I haven't used it, but this looks interesting. Looks like you can call Mathematica code directly from your C# app using .NET/Link (a product by Wolfram).
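Based on the .NET/Link documentation, a minimal session could look roughly like the sketch below. Treat the details as assumptions: the exact launch behaviour of MathLinkFactory.CreateKernelLink() depends on your installation.

using System;
using Wolfram.NETLink;   // ships with Mathematica as Wolfram.NETLink.dll

class NetLinkExample
{
    static void Main()
    {
        // Launch a local kernel with default settings.
        IKernelLink link = MathLinkFactory.CreateKernelLink();
        link.WaitAndDiscardAnswer();   // discard the kernel's initial packet

        // Evaluate an expression and read the result back as a string.
        string result = link.EvaluateToOutputForm("Integrate[Sin[x]^2, x]", 0);
        Console.WriteLine(result);

        link.Close();
    }
}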
Yes, but there are some subtleties. I covered Mathematica .NET interoperability in my book F# for Scientists but dropped the subject for its successor, F# for Technical Computing.
I know this is wrong (trying to code in C# with a C/C++ mindset).
But is it possible to create inline functions / inline recursive functions (up to the Nth call) / macros in C#?
Because if I intend to call the same function 1000 times, but it's a simple function, the overhead of setting up the stack frame is quite big... and unnecessary.
In C I could use inline functions. Is there something like that in C#?
Again... I'm not saying that C/C++ is better... I'm just new to C# and have no one to ask these simple questions...
Inline functions in C#?
Finally, in .NET 4.5 the CLR allows one to force method inlining using the MethodImplOptions.AggressiveInlining value. It is also available in Mono's trunk (committed today).
using System.Runtime.CompilerServices;

// Hints the JIT to inline this method at its call sites.
[MethodImplAttribute(MethodImplOptions.AggressiveInlining)]
void Func()
{
    Console.WriteLine("Hello Inline");
}
The answer should be: don't worry about it. No need to start with micro-optimizations unless you've tested it and it's actually a problem. 1000 function calls is nothing until it's something. This is majorly overshadowed by a single network call. Write your program. Check the performance against the goals. If it's not performant enough, use a profiler and figure out where your problem spots are.
Yes, C# is a very powerful general-purpose language that can do nearly anything. While inline functions / macros are generally frowned upon, C# does provide you with multiple tools that can accomplish this in a more concise and precise fashion. For example, you may consider using template files, which can be used (and reused) in nearly all forms of .NET applications (web, desktop, console, etc.).
http://msdn.microsoft.com/en-us/data/gg558520.aspx
From the article:
What Can T4 Templates Do For Me?
By combining literal text, imperative code, and processing directives, you can transform data in your environment into buildable artifacts for your project. For example, inside a template you might write some C# or Visual Basic code to call a web service or open an Excel spreadsheet. You can use the information you retrieve from those data sources to generate code for business rules, data validation logic, or data transfer objects. The generated code is available when you compile your application.
If you're new to programming I would recommend against implementing templates. They are not easy to debug in an N-tier application because they get generated and run at run time. However, it is an interesting read and opens up many possibilities.
I would like to build an application framework that is mainly interpreted.
Say the source code would be stored in a database, could be edited by the users, and the latest version would always be executed.
Can anyone give me some ideas on how one would implement something like this?
cheers,
gabor
In .NET, you can use reflection and CodeDOM to compile code on the fly, but neither approach is really very simple or practical. Mono has some ability to interpret C# on the fly as well, but I haven't looked closely at it yet.
Another alternative is to go with an interpreted .Net language like Boo or IronPython as the language for your database code.
Either way, make sure you think long and hard about the security of your platform. Allowing users to execute arbitrary code is always an exercise fraught with peril. It's often too tempting to look for a simple eval() method, and even if one exists, that is not good enough for this kind of scenario.
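As a rough illustration of the CodeDOM approach mentioned above, compiling and invoking a snippet fetched at run time could look like the sketch below. The class and method names are made up, and there is no sandboxing at all, so the security caveats above apply in full.

using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

class DynamicCompileExample
{
    static void Main()
    {
        // Imagine this string was loaded from your database.
        string source = @"
            public class UserScript
            {
                public static int Run(int x) { return x * 2; }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Compilation failed");

        Assembly asm = results.CompiledAssembly;
        MethodInfo run = asm.GetType("UserScript").GetMethod("Run");
        Console.WriteLine(run.Invoke(null, new object[] { 21 }));   // prints 42
    }
}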
Try Mono ( http://www.monoproject.org ). It supports many scripting languages including JavaScript.
If you don't want to use any scripting you can use CodeDOM or Reflection (see Reflection.Emit).
Here are some really useful links on the topic:
Dynamically executing code in .Net (here you can find a tool which can be very helpful)
Late Binding and On-the-Fly Code Generation Using Reflection in C#
Dynamic Source Code Generation and Compilation
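And as a taste of the Reflection.Emit route, here is a minimal sketch that builds and runs a tiny method at run time; the method itself is just an illustration.

using System;
using System.Reflection.Emit;

class EmitExample
{
    static void Main()
    {
        // Build int Square(int x) { return x * x; } at run time.
        var dm = new DynamicMethod("Square", typeof(int), new[] { typeof(int) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push x
        il.Emit(OpCodes.Dup);       // push x again
        il.Emit(OpCodes.Mul);       // x * x
        il.Emit(OpCodes.Ret);

        var square = (Func<int, int>)dm.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(square(12));   // 144
    }
}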
Usually the program uses a scripting language for the scriptable parts, e.g. Lua or JavaScript.
To answer your technical question: you don't want to write your own language and interpreter; that's too much work. So pick some other language, say Python or Lua, and look for the documentation that lets your program hand it blocks of code to execute. Of course, the script needs to be able to do something, so you'll need to find out how to expose your program's objects to the script. Also, what will happen if a client is running the program when you update its source code in the database? Should the client restart? Are you going to store the entire program as a single row in this database, or did you want to store individual functions? That affects how you structure your updates.
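For example, with an embeddable .NET language such as IronPython, exposing your program's objects to a user script can look roughly like this minimal sketch (the class, variable, and script names are all made up):

using System;
using IronPython.Hosting;              // IronPython 2.x DLR hosting API
using Microsoft.Scripting.Hosting;

class ScriptHostExample
{
    public class AppApi                // the objects you choose to expose to user scripts
    {
        public void Log(string msg) { Console.WriteLine("[script] " + msg); }
    }

    static void Main()
    {
        // Imagine this code was just loaded from the database.
        string userCode = "app.Log('hello from the script')";

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("app", new AppApi());   // expose your program's objects

        engine.Execute(userCode, scope);
    }
}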
To address other issues with your question: Why do you want to do this? Making "interpreted language" part of your design spec for a system is not often a good sign. Is the real requirement something like this: "I update the program often and I want users to always have the latest copy?" If so, there are other, better ways to go about this (just give us your actual scenario and requirements).
I'm currently trying to implement something that combines reverse engineering and graph theory. Therefore I'd like to disassemble PE binaries. There are some very sophisticated tools for doing so, like IDA or W32Dasm; the latter seems to be dead.
IDA is not scriptable - as far as I know.
The reason I want a scriptable disassembler is that I'm implementing my program in C#. It gets a binary, and therefore it has to get at the opcodes somehow. I think I need to call some helper program with arguments. IDA cannot be called without its GUI; it doesn't offer real command-line options.
Any ideas?
Thanks,
wishi
IDA has a built-in scripting language called IDC. Lots of examples here. Also, IDA can be called without a GUI - consult the documentation for idaw.exe.
IDA can be scripted with Python. Version 5.5 even comes bundled with idapython.
[dumpbin /disasm](http://msdn.microsoft.com/en-us/library/xtf7fdaz(VS.71).aspx) should do the trick. You could also script CDB to do it.
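Calling dumpbin from C# and capturing its output could look roughly like this; a minimal sketch that assumes dumpbin.exe is on the PATH (it ships with Visual Studio) and that target.exe is the binary to analyse.

using System;
using System.Diagnostics;

class DumpbinExample
{
    static void Main()
    {
        var psi = new ProcessStartInfo("dumpbin", "/disasm target.exe")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var proc = Process.Start(psi))
        {
            string listing = proc.StandardOutput.ReadToEnd();   // the disassembly text
            proc.WaitForExit();
            // Parse 'listing' for the opcodes you need.
            Console.WriteLine(listing);
        }
    }
}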
Actually, maybe not full-blown Lex/Yacc. I'm implementing a command-interpreter front-end to administer a webapp. I'm looking for something that'll take a grammar definition and turn it into a parser that directly invokes methods on my object. Similar to how ASP.NET MVC can figure out which controller method to invoke, and how to pony up the arguments.
So, if the user types "create foo" at my command-prompt, it should transparently call a method:
private void Create(string id) { /* ... */ }
Oh, and if it could generate help text from (e.g.) attributes on those controller methods, that'd be awesome, too.
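For reference, a minimal hand-rolled version of that kind of dispatch, using reflection plus a custom attribute for the help text, might look like the sketch below; all names are illustrative and there is no argument conversion or error handling.

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
class HelpAttribute : Attribute
{
    public string Text { get; private set; }
    public HelpAttribute(string text) { Text = text; }
}

class CommandController
{
    [Help("create <id> - creates a new foo")]
    private void Create(string id) { Console.WriteLine("created " + id); }

    // "create foo" -> Create("foo")
    public void Dispatch(string input)
    {
        string[] parts = input.Split(' ');
        MethodInfo m = GetType().GetMethod(parts[0],
            BindingFlags.IgnoreCase | BindingFlags.NonPublic | BindingFlags.Instance);
        if (m == null) { Console.WriteLine("unknown command"); return; }

        object[] args = parts.Skip(1).Cast<object>().ToArray();
        m.Invoke(this, args);
    }

    // Help text comes straight from the attributes on the command methods.
    public void PrintHelp()
    {
        foreach (MethodInfo m in GetType().GetMethods(BindingFlags.NonPublic | BindingFlags.Instance))
            foreach (HelpAttribute h in m.GetCustomAttributes(typeof(HelpAttribute), false))
                Console.WriteLine(h.Text);
    }
}

class Program
{
    static void Main()
    {
        var controller = new CommandController();
        controller.Dispatch("create foo");
        controller.PrintHelp();
    }
}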
I've done a couple of small projects with GPLEX/GPPG, which are pretty straightforward reimplementations of LEX/YACC in C#. I've not used any of the other tools above, so I can't really compare them, but these worked fine.
GPPG can be found here and GPLEX here.
That being said, I agree, a full LEX/YACC solution probably is overkill for your problem. I would suggest generating a set of bindings using IronPython: it interfaces easily with .NET code, non-programmers seem to find the basic syntax fairly usable, and it gives you a lot of flexibility/power if you choose to use it.
I'm not sure Lex/Yacc will be of any help. You'll just need a basic tokenizer and an interpreter, which are faster to write by hand. If you still want to go the parsing route, see Irony.
As a side note: have you considered PowerShell and its cmdlets?
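If you go the PowerShell route, each admin command becomes a cmdlet class, roughly like this minimal sketch (it assumes a reference to System.Management.Automation; the verb/noun and parameter are made up):

using System.Management.Automation;   // reference: System.Management.Automation.dll

// Usable from PowerShell as: New-Foo -Id "foo"
[Cmdlet(VerbsCommon.New, "Foo")]
public class NewFooCommand : Cmdlet
{
    [Parameter(Position = 0, Mandatory = true, HelpMessage = "Id of the foo to create")]
    public string Id { get; set; }

    protected override void ProcessRecord()
    {
        WriteObject("created " + Id);   // emit the result to the pipeline
    }
}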
Also look at Antlr, which has C# support.
It's still an early CTP, so it can't be used in production apps, but you may be interested in Oslo/MGrammar:
http://msdn.microsoft.com/en-us/oslo/
Jison has been getting a lot of traction recently. It is a Bison port to JavaScript. Because of its extremely simple nature, I've ported the Jison parsing/lexing template to PHP, and now to C#. It is still very new, but if you get a chance, take a look at it here: https://github.com/robertleeplummerjr/jison/tree/master/ports/csharp/Jison
If you don't fear alpha software and want an alternative to Lex/Yacc for creating your own languages, you might look into Oslo. I would recommend sitting through the recordings of sessions TL27 and TL31 from last year's PDC. TL31 directly addresses the creation of domain-specific languages using Oslo.
Coco/R is a compiler generator with a .NET implementation. You could try that out, but I'm not sure if getting such a library to work would be faster than writing your own tokenizer.
http://www.ssw.uni-linz.ac.at/Research/Projects/Coco/
I would suggest csflex, a C# port of flex, the most famous Unix scanner generator.
I believe that lex/yacc are already included in one of the SDKs (i.e. RTM), either the Windows SDK or the .NET Framework SDK.
The Gardens Point Parser Generator (here) provides Yacc/Bison functionality for C#. It can be downloaded here. A useful example using GPPG is provided here.
As Anton said, PowerShell is probably the way to go. If you do want a lex/yacc implementation, then Malcolm Crowe has a good set.
Edit: Direct Link to the Compiler Tools
Just for the record, here is an implementation of a lexer and LALR parser in C#, for C#:
http://code.google.com/p/naive-language-tools/
It should be similar in use to Lex/Yacc; however, those tools (NLT) are not generators, so forget about speed.
I'm exploring various options for mapping common C# code constructs to C++ CUDA code for running on a GPU. The structure of the system is as follows (arrows represent method calls):
C# program -> C# GPU lib -> C++ CUDA implementation lib
A method in the GPU library could look something like this:
public static void Map<T>(this ICollection<T> c, Func<T,T> f)
{
//Call 'f' on each element of 'c'
}
This is an extension method to ICollection<> types which runs a function on each element. However, what I would like it to do is to call the C++ library and make it run the methods on the GPU. This would require the function to be, somehow, translated into C++ code. Is this possible?
To elaborate, if the user of my library executes a method (in C#) with some arbitrary code in it, I would like to translate this code into the C++ equivalent so that I can run it on CUDA. I have the feeling that there is no easy way to do this, but I would like to know whether there is any way to do it, or to achieve some of the same effect.
One thing I was wondering about is capturing the function to translate in an Expression and using this to map it to a C++ equivalent. Does anyone have any experience with this?
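For what it's worth, capturing the lambda as an expression tree does hand you the code as data, and you can walk it to emit C-like source. The toy translator below only handles parameters, constants, addition, and multiplication; it is purely illustrative and nowhere near a real CUDA code generator.

using System;
using System.Linq.Expressions;

class ExpressionToCExample
{
    static string Translate(Expression e)
    {
        switch (e.NodeType)
        {
            case ExpressionType.Parameter:
                return ((ParameterExpression)e).Name;
            case ExpressionType.Constant:
                return ((ConstantExpression)e).Value.ToString();
            case ExpressionType.Add:
                return Binary((BinaryExpression)e, "+");
            case ExpressionType.Multiply:
                return Binary((BinaryExpression)e, "*");
            default:
                throw new NotSupportedException(e.NodeType.ToString());
        }
    }

    static string Binary(BinaryExpression b, string op)
    {
        return "(" + Translate(b.Left) + " " + op + " " + Translate(b.Right) + ")";
    }

    static void Main()
    {
        // The compiler gives us the lambda as an expression tree, not compiled IL.
        Expression<Func<float, float>> f = x => x * x + 1f;

        string body = Translate(f.Body);
        string param = f.Parameters[0].Name;

        // Emits: float f(float x) { return ((x * x) + 1); }
        Console.WriteLine("float f(float " + param + ") { return " + body + "; }");
    }
}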
There's CUDA.NET if you want a reference for how C# can be run on the GPU.
To be honest, I'm not sure I fully understand what you are getting at. However, you may be interested in this project, which converts .NET applications/libraries into straight C++ without any .NET framework required: http://www.codeplex.com/crossnet
I would recommend the following process to accelerate some of your computation using CUDA from a C# program:
First, create an unmanaged C++ library that you P/Invoke for the functions you want to accelerate. This will restrict you more or less to the data types you can easily work with in CUDA.
Integrate your unmanaged library with your C# application. If you're doing things correctly, you should already notice some kind of speed up. If not, you should probably give up.
Finally, replace the C++ functions inside your library (without changing its interface) with implementations that perform the computations on the GPU using CUDA kernels.
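On the C# side, step 1 might look roughly like the sketch below; the library name and exported function are hypothetical, and the real entry point would launch a CUDA kernel over the array.

using System;
using System.Runtime.InteropServices;

static class GpuInterop
{
    // Hypothetical entry point exported by your unmanaged CUDA library.
    [DllImport("MyCudaLib.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern void SquareOnGpu(float[] data, int length);

    public static void SquareAll(float[] data)
    {
        SquareOnGpu(data, data.Length);   // the array is passed to the native call
    }
}

class Program
{
    static void Main()
    {
        var values = new float[] { 1f, 2f, 3f };
        GpuInterop.SquareAll(values);
        Console.WriteLine(string.Join(", ", values));
    }
}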
Interesting question. I'm not much of an expert at C#, but I think an ICollection is a container of objects. If each element of c were, say, a pixel, you'd have to do a lot of marshalling to convert that into a buffer of bytes or floats that CUDA could use. I suspect that would slow everything down enough to negate the advantage of doing anything on the GPU.
What you could do is write your own IQueryable LINQ provider, as is done for LINQ to SQL, which translates LINQ queries to SQL.
However, one problem that I see with this approach is the fact that LINQ queries are usually evaluated lazily. In order to benefit from pipelining, this is probably not a viable solution.
It might also be worth investigating how to implement Google’s MapReduce API for C# and CUDA and then use an approach similar to PyCuda to ship the logic to the GPU. In that context, it might also be useful to take a look at the already existing MapReduce implementation in CUDA.
That's a very interesting question and I have no idea how to do this.
However, the Brahma library seems to do something very similar. You can define functions using LINQ which are then compiled to GLSL shaders to run efficiently on a GPU. Have a look at their code and in particular the Game of Life sample.