I am interested in making an application that can automatically determine which files a PHP script includes.
What I'm getting at is that I would like to write either a C/C++ or a C# application that runs in the background while you develop on your local machine, displaying the files PHP includes as you launch pages on your local Apache.
My first thought was to modify the relevant function in the PHP source code, but that seems like a bad idea: I'd have to re-apply the same modifications for each new PHP version, and I doubt everyone would do that.
So my question is: is it remotely possible to get all the files your PHP application included and then somehow display them to the user, without calling get_included_files() in your PHP program?
You could go outside of PHP completely and rely on the underlying operating system to report these details. It would be difficult to match a request to its includes, though, so this would only work in a development situation.
If the OS is Linux/UNIX, you can run strace on the executable (assuming Apache with mod_php; other setups are more difficult).
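A hypothetical watcher could filter the strace output down to the PHP files a request touched. A minimal sketch (Python for illustration; the regex assumes the usual open/openat line shape, which varies by strace version and flags):

```python
import re

# strace prints lines roughly like:
#   openat(AT_FDCWD, "/var/www/index.php", O_RDONLY) = 5
# (exact shape varies by strace version and options)
PHP_OPEN = re.compile(r'open(?:at)?\([^"]*"([^"]+\.php)"')

def php_files_from_strace(lines):
    """Return the .php paths opened, in order, from strace output lines."""
    return [m.group(1) for m in (PHP_OPEN.search(l) for l in lines) if m]

# Live capture would look something like this (requires root):
#   strace -f -e trace=open,openat -p <apache_pid> 2>&1 | your_watcher
```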
If the OS is Windows, I'm not sure what to use, but possibly one of the SysInternals utilities (most are GUI tools, but there is likely a console equivalent of strace, or a Windows port of strace itself).
Another option would be xdebug. It shows much more information, including profiling details, memory usage, etc. It runs as a PHP extension and makes it easy to profile a whole request in one snapshot. Once you have a trace file, you can open it with WinCacheGrind (Windows) or KCachegrind (UNIX, maybe OS X too); there are other viewers for OS X. I'd suggest trying this first, as it is the simplest approach and quite powerful if you are looking to get this done rather than do exploratory programming.
http://xdebug.org/
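Once function tracing is enabled, a watcher could pull the include/require calls out of the trace file. A minimal sketch in Python, assuming xdebug's human-readable trace format (which differs across xdebug versions, so the pattern is an assumption to verify against your own traces):

```python
import re

# A human-readable xdebug function trace records calls roughly like:
#     0.0005   77896   -> include_once('/var/www/header.php') /var/www/index.php:2
# (format assumed; it varies with xdebug version and settings)
INCLUDE = re.compile(
    r"->\s*(?:include|include_once|require|require_once)\('([^']+)'\)")

def included_files(trace_lines):
    """Collect the distinct file paths pulled in via include/require."""
    found = []
    for line in trace_lines:
        m = INCLUDE.search(line)
        if m and m.group(1) not in found:
            found.append(m.group(1))
    return found
```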
If you are interested in doing exploratory programming, my suggested route would be to look at how xdebug works and see if you can write a hook to the functions you want to trace.
Maybe I haven't fully understood how complex Hadoop really is; if something here is incorrect, please help me out. What I've got so far is this:
Hadoop is a great thing for handling big amounts of data, mostly for data analysis and mining. I can write my own MapReduce functions or use Pig or Hive. I can even use existing functions, word count and things like that - I don't even have to write code.
OK, but what if I would like to use the great power of Hadoop for non-analysis/mining things? For example, I have a .NET application written in C# that reads files and generates PDFs with some barcodes. This application runs on one server, but since one server cannot handle the big number of files, I need more power. Why not add some Hadoop nodes/clusters to handle this job?
Question: can I take my .NET application and tell Hadoop "do this on every one of your nodes"? In other words: can I run these jobs without writing Hadoop-specific code?
If not, do I have to throw away the .NET application and rewrite everything in Pig/Hive/Java MapReduce? How do people solve this kind of problem in my situation?
PS: The important thing here is not the PDF generator, and maybe not even .NET/C#. The question is: given an application in whatever language - can I hand it to Hadoop just like that? Or does everything have to be rewritten as MapReduce functions?
@Mongo: I'm not sure I understood you correctly, but I'll try sharing what I know. First of all, Hadoop is a framework - not an extension or a plugin.
If you want to process files or perform a task in Hadoop, you need to express your requirements in a form the framework understands. To put it simply, consider the same word-count example. You can implement word count in any language; let's say we have done it in Java. If we then want to scale it to larger files, dumping the same code into a Hadoop cluster would not help. Though the Java logic stays the same, you have to write MapReduce code in Java that the Hadoop framework can run.
Here's an example of a C# map reduce program for Hadoop processing
Here's another example of MapReduce Without Hadoop Using the ASP.NET Pipeline
Hope this is helpful and that my post adds some value to your question. I'm sure you'll get better thoughts/suggestions/answers from the wonderful people here...
P.S.: You can do pretty much anything related to file processing/data analysis in Hadoop. It all depends on how you do it :)
Cheers!
Any application that runs on Linux can be run on Hadoop via Hadoop-streaming, and a C# application can run on Linux using Mono.
So you can run your C# application with Hadoop-streaming plus Mono. You still need to adapt your logic to the map-reduce paradigm, though.
However, it should not be a big deal in your case. For instance, you could:
create a Hadoop-streaming job with mappers only (no reducers)
process exactly 1 file per mapper
each mapper would run "mono yourApp.exe", reading its input file from stdin and writing its output to stdout
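The three steps above can be sketched roughly like this (Python for illustration; `mono yourApp.exe` is the hypothetical worker, and the streaming flags in the comment should be double-checked against the Hadoop-streaming docs):

```python
"""Hypothetical Hadoop-streaming mapper helper that delegates to an external worker."""
import subprocess

def pipe_through(cmd, lines):
    """Feed input lines to the worker's stdin; return its stdout lines.

    In the actual mapper, cmd would be ["mono", "yourApp.exe"] (assumed
    worker name) and `lines` would come from sys.stdin; Hadoop-streaming
    collects whatever the mapper prints on stdout.
    """
    result = subprocess.run(cmd, input="\n".join(lines) + "\n",
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

# The job itself would be submitted with mappers only, roughly:
#   hadoop jar hadoop-streaming.jar \
#       -input /in -output /out -mapper mapper.py -numReduceTasks 0
```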
Also, Mono must be available on the Hadoop cluster. If it isn't, you will need admin privileges to install and deploy Mono yourself.
I've been asked to look into a live client site which currently isn't working. I've been told that an IIS recycle will fix this issue for about 3 months when it will re-appear again. The issue seems to be in a 3rd party CMS but I can't currently provide any debugging information for them to try and reproduce.
That got me wondering whether it would be possible to do the following: put together a simple ASP page with a text editor that accepts arbitrary input, then take the input, compile/execute it in the current App Domain using the Roslyn service, and print any output to another text area on the page.
Can anyone give me an indication on if this is achievable? The bits I'm not sure if I can achieve are:
Running the Roslyn code in the context of the current page/app domain.
Tracing output to the page without turning on global tracing.
When you load a dynamic assembly, by default it loads it into the same AppDomain that you're running in, with Roslyn or not.
However there are some considerations:
You do not need the Roslyn service (which is used by Script Engine), you only need the Roslyn client DLLs if you're just going to build and run the code.
You don't really need Roslyn to compile dynamic code and run it. .NET already has the ability to load, compile and run an arbitrary lump of code. This article is a little old but still valid. Use reflection in your page to load and run the dynamic DLL.
If it is the Roslyn script engine you need, that requires the Roslyn service (to act as a host), which in turn needs Visual Studio 2012. As Daniel Mann points out in the comments, that's not licensed for production.
Memory leaks in IIS are often caused by a given thread/request, not by actions on the App Domain (or shared resources). Your dynamic code (be it the Roslyn script engine or a plain old dynamic assembly) will be running outside where the leaks are happening, so I doubt you will see any problems.
This scares me. Running dynamic code straight onto live sounds amazingly dangerous. Protect yourself!
Can you reproduce it locally, or on a test machine similar to live? If it only happens over an extended period of time, you can simulate usage with automation such as Selenium.
Once you have it reproduced, use something like ANTS Memory Profiler to see where the problems are.
I have a WCF service accepting requests from our clients.
After analyzing the request I need to generate (compile + link) C++ EXE.
What is the best method to create a C++ EXE from a C# application?
Thanks
I can only guess what it is you want, but I assume your requirements are something like:
You run a WCF service on a server somewhere.
Upon receiving a certain call, the service must output a (binary) executable, based on the parameters it receives.
C++ should be used as the intermediate language.
Let's look at what you need. The obvious first requirement is a C++ compiler/linker that can be invoked programmatically. On Unix systems that would be g++, and you can simply shell out to invoke it; on Windows, g++ is also available (through MinGW), but that version is pretty outdated, so you might be better off using Microsoft's command-line C++ compiler.
Obviously, you'll also need to generate C++ source code somewhere; I assume most of the source code is more or less the same for each request, so you probably need some sort of templating system. If it's not too complicated, this can be as simple as running a regex search-and-replace over a bunch of template files; otherwise, you need a proper templating language (XSLT is built into .NET, although the syntax takes some getting used to).
And then the glue to make it work together; I suggest something like this:
Read request and create a suitable data structure (in a format that your template engine can consume)
Pass the data to the template engine, writing the output files to a temporary folder
Invoke the compiler on the temporary location
Read the executable back in, send it to the client
Delete the temporary folder
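A minimal sketch of that glue, in Python for illustration - the `{{name}}` replacement stands in for a real template engine, and `g++` is an assumed compiler (swap in Microsoft's command-line compiler on Windows with adjusted flags):

```python
import os
import shutil
import subprocess
import tempfile

def render(template, params):
    """Steps 1-2: naive search-and-replace templating over {{name}} markers."""
    for key, value in params.items():
        template = template.replace("{{%s}}" % key, value)
    return template

def build_executable(template, params, compiler="g++"):
    """Steps 2-5: render, compile in a temp dir, return binary bytes, clean up."""
    workdir = tempfile.mkdtemp()
    try:
        src = os.path.join(workdir, "main.cpp")
        exe = os.path.join(workdir, "main")
        with open(src, "w") as f:
            f.write(render(template, params))
        subprocess.run([compiler, src, "-o", exe], check=True)
        with open(exe, "rb") as f:
            return f.read()          # step 4: send these bytes to the client
    finally:
        shutil.rmtree(workdir)       # step 5: delete the temporary folder
```

For the caching suggestion below, a dictionary keyed by the rendered source (or a hash of it) in front of `build_executable` would be enough as a first cut.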
Since compiling is often a costly operation, consider caching the generated executables (unless they are practically guaranteed to be different every time).
By the way, there is one big caveat: If the client platform is not binary-compatible with the server platform (e.g., the server is running on x64 but the client is x86), the generated executable might not work.
And another one, which is security: by hacking the server, or by tricking clients into sending "wrong" requests, an attacker can potentially inject malicious code through the generated executable. Unless this application is super-trivial, I imagine it's going to be pretty hard to properly secure.
An executable is an executable; it is defined by the ability to be executed.
Whatever programming language was, once upon a time, used to write the source code that was fed to the compiler no longer matters. An executable looks the same regardless of which language (or languages) you used. (A .NET executable is just an executable with some fairly complex DLL dependencies.)
So there is no such thing as a "C++ executable". Perhaps you mean an executable that doesn't depend on the .NET framework?
Or do you simply mean that you have a C++ application that needs to use a WCF service?
Or that you want to rewrite your C# code as C++?
Do you mean you want to compile C# to native machine code? In that case NGen may be of some use:
http://msdn.microsoft.com/en-us/library/6t9t5wcf%28v=vs.80%29.aspx
I'm diving into web development after ten years of desktop development and I'm experimenting with some testing concepts. I was wondering whether it's possible to sandbox and run C++ code that's entered into a text field in a browser? By that, I mean run the C++ or C# code on the backend web server and return an analysis of the code. Just to be clear, I don't mean run C++ or C# code that's intended to generate any kind of markup, but simply to black-box test the block of C++ or C# code that's entered.
How would you invoke the compiler, depending on the web server you're using?
How could you sandbox the code to prevent malicious behavior? If we're considering only one of the C variants, what about blacklisting/whitelisting specific functions and libraries to prevent malicious behavior? Or would that blacklist be too long and too limiting to allow any fair amount of code to run?
These are some fairly high-level questions that I'm asking just because I'm having a hard time finding some direction, but I'm going to continue researching them right now. Thanks so much in advance for your help!
You might find the codepad about page interesting.
#1 is easy with C#. The Reflection capabilities of .NET allow you to compile and run code "on the fly." And here's a link to another good-looking tutorial.
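To illustrate the compile-and-run-on-the-fly idea without .NET at hand, here is the same pattern in Python (using the built-in compile/exec instead of CodeDom/Reflection):

```python
import contextlib
import io

def run_snippet(source):
    """Compile a snippet at runtime, execute it in-process, capture stdout."""
    code = compile(source, "<dynamic>", "exec")
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})               # fresh namespace for the dynamic code
    return buf.getvalue()
```

Note that this is only the "run" half of the problem; without the sandboxing discussed next, the snippet runs with the host's full privileges.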
#2 is a little more difficult, but a basic sandboxing technique might involve executing the dynamic process under a limited, and therefore sandboxed, account. Programmatically, you could analyze the dynamically built assembly's dependencies and refuse to run it if it uses APIs in certain namespaces, such as System.IO. This is non-trivial, to say the least.
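The dependency-analysis idea can be sketched as a static check. Here it is in Python (the ast module standing in for inspecting a .NET assembly's references; the blacklist is purely illustrative and, as noted above, nowhere near sufficient on its own):

```python
import ast

BANNED = {"os", "subprocess", "socket"}   # illustrative blacklist, not exhaustive

def uses_banned_modules(source):
    """Statically reject code that imports blacklisted top-level modules."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] in BANNED for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in BANNED:
                return True
    return False
```

A check like this misses dynamic imports and reflection-style tricks, which is why the limited account or separate process remains the real security boundary.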
C++ doesn't have reflection capabilities, so third-party libraries would be your best bet there.
The Dinkumware site has something like this.
A simple Perl (or Python, ...) CGI could be used to invoke the compiler, parse its results, run the resulting executable (if any), and display its output.
I would take a look at SELinux (or maybe AppArmor?) for access controls - for example, not allowing the code to read from or write to disk, and limiting its running time. I don't know if the latter can be done with SELinux, though.
If the server runs Linux, you may consider using chroot
We actually did just that with our product, iKnode. We use this idea to provide a backend in the cloud.
We did it by creating a sandbox that takes a specific piece of code, executes it, captures the result, and returns it to the user. This all happens in the cloud.
How would you invoke the compiler, depending on the web server you're using?
We did this using the CodeDom utilities from the .NET Framework, and we are exploring the upcoming "compiler as a service" project from Microsoft, code-named Roslyn.
This is a good starting point on using CodeDom to compile programmatically.
How could you sandbox the code to prevent malicious behavior? If we're considering only one of the C variants, what about blacklisting/whitelisting specific functions and libraries to prevent malicious behavior? Or would that blacklist be too long and too limiting to allow any fair amount of code to run?
We did this by wrapping the code execution in a separate and limited AppDomain. You can see some examples here.
Additionally, you might want to look into the MonoSandBox, which was created for Moonlight but is a more robust sandbox. We are experimenting with it right now, to move away from AppDomains; we believe the MonoSandBox is much better than plain AppDomains.
I'm messing around with Tamir.SharpSsh and wanted to see if it's possible to use it to implement a console SSH client entirely in C#. I don't mean something like PuTTY, which runs in its own GUI, but something you could run directly from the Windows cmd console.
The library is pretty great, except that it doesn't handle terminal emulation in any way. So when using SshShell you can do some basic interaction, but the output is often very ugly and full of stray escape characters, and you can't really interact with things like shell scripts, etc.
As far as I can tell SharpSSH simply redirects the IO to the console IO.
How hard would it be to redirect this elsewhere and handle the terminal emulation? Also, is there an emulation library (C# and open source, preferably) already that I could use?
Edit: Gave up on SharpSSH, see answer below for the final solution I came up with.
I have actually since abandoned SharpSSH. It is a good library, but it was just too lacking in overall functionality. I am now using a library called Granados, which is a much more fleshed-out SSH implementation. It has a built-in event model (unlike SharpSSH, which mostly involves wrangling with Streams) that makes it very easy to use.
As for the terminal emulation part... Granados is actually the core of another open source project called Poderosa.
Poderosa is a complete terminal emulator application that can connect to ssh, telnet and even your local cygwin install.
I haven't really dug into its terminal-emulation code at all, but it definitely does the job quite well, so I'm sure you could easily pull out whatever code you need.
I'm looking for the same thing. There is a library here that costs $700. I found another one on CodeProject that looks shoddy but might be a good start. And there is an incomplete implementation right here on Stack Overflow. Still searching...