In the midst of asking about manually managing CLR memory, I realized I know very little.
I'm aware that the CLR will place a 'cookie' on the stack when you exit a managed context, so that the Garbage Collector won't trample your memory space; however, in everything I've read the assumption is that you are calling some library written in C.
I want to write the entire write layer of my application in C#, outside of the managed context, to manage data at a low level. Then I want to access this layer from a managed layer.
In this case, will my Unmanaged C# code compile to IL and be run on the CLR? How does this work?
I assume this is related to the same C# database project you mentioned in the question.
It is technically possible to implement an entire write layer in C/C++ or any other language, and it is technically possible to have everything else in C#. I am currently working on an application that uses unmanaged code for some high-performance, low-level work and C# for business logic and upper-level management.
However, the complexity of the task should not be underestimated. The typical way to do this is to design a contract that both parties can understand. The contract is exposed to the managed language, and the managed language triggers calls into the native code. If you have ever tried calling a C++ method from C#, you will get the idea... Plus, every call into unmanaged code carries a significant performance overhead, which may kill the whole idea of low-level performance.
If you are really interested in high-performance relational databases, use a single low-level language.
If you want a naive but fully working implementation of a database, just use C#. Don't mix the two languages unless you fully understand the complexity. See RavenDB - a document-based NoSQL database built entirely in C#.
Will my Unmanaged C# code compile to IL and be run on the CLR?
No, there is no such thing as unmanaged C#. C# code is always compiled to IL and executed by the CLR. What you are describing is managed code calling unmanaged code. The unmanaged code can be implemented in several languages (C, C++, assembly, etc.), but the CLR has no idea what is happening inside it.
Update from a comment. There is a tool (Ngen.exe) that compiles an assembly ahead of time into native, architecture-specific code. It is designed to improve the performance of a managed application by removing the JIT-compilation stage and putting native code directly into the executable image or library. This code, however, is still "managed" by the CLR host - memory allocation and collection, managed threading, application domains, exception handling, security, and all other aspects are still controlled by the CLR. So even though C# can technically be compiled to native code, that code does not run as a standalone native image.
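For reference, generating such a native image is a one-line deployment step with Ngen.exe (the assembly name here is hypothetical):

```
ngen install MyApp.exe
```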
How does this work?
Managed code can interoperate with unmanaged code. There are a couple of ways to do this:
In-process via .NET interop (P/Invoke), as sketched below. This is relatively fast, but looks a bit ugly in code and is hard to maintain and test (good article with C#/C/Assembly samples).
A much, much slower approach, but one more open to other languages: web services (SOAP, WS, REST and company), queueing (such as MSMQ, NServiceBus and others), and (possibly) interprocess communication. The unmanaged process sits on one end and the managed application on the other.
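As a minimal sketch of the in-process interop route (the library name "nativelib" and the exported function are assumptions for illustration, not a real API):

```csharp
using System;
using System.Runtime.InteropServices;

class NativeInterop
{
    // Hypothetical native export: int add(int a, int b);
    [DllImport("nativelib", CallingConvention = CallingConvention.Cdecl)]
    private static extern int add(int a, int b);

    static void Main()
    {
        // Each call crosses the managed/unmanaged boundary,
        // which is where the interop overhead discussed above is paid.
        Console.WriteLine(add(2, 3));
    }
}
```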
I know this is a C# question, but if you are comfortable with C++, C++/CLI might be an option worth considering.
It allows you to selectively compile portions of your C++ code to either a managed or an unmanaged context - however, be aware that code that interacts with CLR types MUST run in a managed context.
I'm not aware of the exact runtime cost of transitioning from a managed context to an unmanaged one and vice versa from within C++ code, but I assume it is similar to the cost of calling a native method via .NET interop from C#, which, as @oleksii already pointed out, is expensive. In my experience this has really paid off when you need to interact frequently with native C or C++ libraries - IMHO it is much easier to call them from within a C++/CLI project than to write the required .NET interop interfaces in C#.
See this question for a little bit of information on how it is done.
Related
I have read this and this, and was wondering: if I use functions from an unmanaged C++ library in C# via a C# wrapper of that library, is there going to be any difference in performance compared with the same program written entirely in unmanaged C++ against the same C++ library? I am asking about a crucial performance difference, bigger than 1.5 times. Note that I am asking only about the performance of the C++ library's functions (in the two cases - with and without the C# wrapper), isolating the other code!
After edit:
I was just wondering: if I want to use a dynamic unmanaged C++ library (.dll) in C# through a wrapper - which parts get compiled to intermediate CIL code and which do not? I guess only the wrapper is compiled to CIL, and when C# wants to use a C++ function from the library it just marshals and passes the arguments to the C++ function through the wrapper. So there may be some delay, but not as much as if I had written the whole library in C#. Correct me if I am mistaken, please.
Of course, there is overhead involved in switching from managed to unmanaged code execution. It is very modest, about 12 CPU cycles. All that needs to be done is to write a "cookie" on the stack so that the garbage collector can recognize that the subsequent stack frames belong to unmanaged code and therefore should not be inspected for valid object references.
These cookies are strung together like a linked list, supporting the scenario where C# code calls native code which in turn calls back into managed code; the list is traversed by the GC when it collects. That is not as uncommon as it sounds - it happens in any GUI app, for example. The Click event is a good example, triggered when the UI thread pinvokes GetMessage().
That is not the only thing that needs to happen, however; in any practical scenario you also pass arguments to the native function. These can require a lot more work to get marshaled into a format that the native code can understand. Arrays in particular: if the array elements are blittable, they only need to be pinned, which is still pretty cheap. It gets expensive when the entire array needs to be converted because the element type is not blittable. That is not always easy to recognize; a profiler is forever the proper tool to detect inefficient code.
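A minimal sketch of that difference (the native exports and the library name are hypothetical): a blittable int[] is merely pinned for the duration of the call, while a string[] must be converted element by element.

```csharp
using System.Runtime.InteropServices;

class MarshalingCost
{
    // Hypothetical export: void sum_ints(const int* data, int len);
    // int is blittable: the runtime just pins the array for the call,
    // no copy is made - this is the cheap case.
    [DllImport("nativelib")]
    static extern void sum_ints(int[] data, int len);

    // Hypothetical export: void log_lines(const char** lines, int len);
    // string is not blittable: every element must be converted to a
    // native string for the call - this is the expensive case.
    [DllImport("nativelib", CharSet = CharSet.Ansi)]
    static extern void log_lines(string[] lines, int len);
}
```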
I just want to use the .NET Profiling API (ICorProfilerCallback etc.), but at the same time I don't want to deal with C++. I've been looking around for a while and haven't found any example in pure C#, only C# + C++ solutions where the most interesting part is written in C++.
No, you cannot implement the CLR profiling APIs in managed code (C# or otherwise) since the profiling callbacks are called at very specific times when the managed environment is assumed to be in a certain state. Implementing your callbacks in managed code would violate a lot of assumptions.
David Broman, the developer of the CLR profiling APIs, has this to say:
You need to write your profiler in C++. The profiler is called by the runtime at very delicate points during execution of the profiled application, and it is often extremely unsafe to be running managed code at those points.
David's blog is a great resource for dealing with the CLR profiling APIs.
So we created a simple C# TCP server for video sharing. All it does is simple and must be done "live" - receive live video packed into a container (FLV in our case) from some broadcaster and share the received stream with all subscribers (that means opening the container, creating new containers, and rewriting timestamps in the container structure, but not decoding the packet contents in any way). We tested our server but found that its performance is not enough for 5 incoming and 10 outgoing streams. We found this app for porting. We will try it anyway, but before we do I wonder if any of you have tried such a thing in your projects. So the main question: will C++/CLI make the app faster than the original C#?
No.
Writing the same code in a different language won't make any difference whatsoever; it will still compile to the same IL.
C# is not a slow language; you probably have some higher-level performance issues.
You should optimize your existing code.
Not for most code; however, if the code does a lot of bit-level operations, maybe. Likewise if you can safely make use of unmanaged memory to reduce the load on the garbage collector.
So just translating the code to managed C++ is very unlikely to benefit most code; however, managed C++ may let you write some code in a more complex (and unsafe) way that runs faster.
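The "unmanaged memory" idea works in plain C# too; here is a minimal sketch (requires compiling with /unsafe) of buffering data off the GC heap so the collector never has to deal with it:

```csharp
using System;
using System.Runtime.InteropServices;

class UnmanagedBuffer
{
    static void Main()
    {
        const int count = 1024;
        // Allocated outside the GC heap: the collector never scans or
        // moves this memory, so it adds no load to the garbage collector.
        IntPtr buffer = Marshal.AllocHGlobal(count * sizeof(int));
        try
        {
            unsafe
            {
                int* p = (int*)buffer;
                for (int i = 0; i < count; i++)
                    p[i] = i ^ (i << 3); // bit-level work, no GC involvement
            }
        }
        finally
        {
            // The trade-off: lifetime is now managed by hand.
            Marshal.FreeHGlobal(buffer);
        }
    }
}
```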
No - not at all. C++/CLI runs on the same .NET platform as C#, which rules out any speed increase purely from changing language. Native C++, on the other hand, may yield some benefits, but it's best to be very careful. You're more likely to gain performance from using a profiler than from changing language, and you should only consider changing language after extensive profiling and testing of the code that you have.
If you are calling functions from a native DLL via the P/Invoke approach, then at least converting those callback mechanisms to C++/CLI using IJW (It Just Works) would increase performance a bit.
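For comparison, this is roughly what the P/Invoke callback pattern being replaced looks like; a minimal sketch in which the library name and the exported registration function are hypothetical:

```csharp
using System;
using System.Runtime.InteropServices;

class CallbackInterop
{
    // Hypothetical native export: void register_callback(void (*cb)(int));
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate void NativeCallback(int value);

    [DllImport("nativelib", CallingConvention = CallingConvention.Cdecl)]
    static extern void register_callback(NativeCallback cb);

    // Held in a static field so the GC cannot collect the delegate
    // while native code still holds the function pointer.
    static NativeCallback keepAlive;

    static void Main()
    {
        keepAlive = v => Console.WriteLine($"callback: {v}");
        register_callback(keepAlive);
    }
}
```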
Why do we use C for device driver development rather than C#?
Because C# programs cannot run in kernel mode (Ring 0).
The main reason is that C is much better suited to low-level (close to the hardware) development than C#. C was designed as a form of portable assembly. Also, it may often be difficult or unsafe to use C#. The key case is when drivers run at ring 0, in kernel mode. Most applications are not meant to run in this mode, and neither is the .NET runtime.
While it may be theoretically possible, C is often more suited to the task. Among the reasons are tighter control over the machine code that is produced, and being closer to the processor, which is almost always better when working directly with hardware.
Ignoring C#'s managed-language limitations for a moment: object-oriented languages such as C# often do things under the hood which can get in the way when developing drivers. Driver development - actually reading and manipulating bits in hardware - often has tight timing constraints and uses programming practices that are avoided in other types of programming, such as busy waits and dereferencing pointers that were set from integer constant values. Device drivers are often written in a mix of C and inline assembly (or use some other method of issuing instructions that C compilers don't normally produce). The low-level locking mechanisms alone (written in assembly) are enough to make C# difficult to use. C# has lots of locking functionality, but when you dig down it operates at the threading level. Drivers need to be able to block interrupts as well as other threads to perform some tasks.
Object-oriented languages also tend to allocate and deallocate (via garbage collection) lots of memory over and over. Within an interrupt handler, though, your ability to access heap-allocation functionality is severely restricted. This is both because heap allocation may trigger garbage collection (expensive) and because you have to deal with both the interrupt code and the non-interrupt code trying to allocate at the same time. Without lots of limitations placed on this code, on the compiler's output, and on whatever is managing the code (a VM, or possibly natively compiled code that only uses libraries), you will likely end up with some very odd bugs.
Managed languages have to be managed by someone. This management probably relies on a functioning underlying operating system. This produces a bootstrap problem for using managed code for many (but not all) drivers.
It is entirely possible to write device drivers in C#. If you are driving a serial device, such as a GPS device, it may be trivial (though somewhere lower down you will be using a UART chip or USB driver, which is probably written in C or assembly), since it can be done as an application. If you are writing a driver for an ethernet card (one that isn't used for network booting), it may be (theoretically) possible to use C# for some parts, but you would probably rely heavily on libraries written in other languages and/or on an operating system's userspace driver functionality.
C is used for drivers because it has relatively predictable output. If you know a little assembly for a processor, you can write some code in C, compile it for that processor, and have a pretty good guess at what the resulting assembly will look like. You will also know the determinism of those instructions. None of them are going to surprise you by kicking off the garbage collector or calling a destructor (or finalizer).
Because C# is a high-level language and cannot talk to processors directly, while C code compiles straight into native code that a processor can understand.
Just to build a little on Darin Dimitrov's answer: yes, C# programs cannot run in kernel mode.
But why can't they?
In Patrick Dussud's interview for Behind the Code, he describes an attempt made during the development of Vista to include the CLR at a low level*. The wall they hit was that the CLR takes dependencies on the OS security library, which in turn takes dependencies on the UI level. They were not able to resolve this for Vista. Apart from Singularity, I don't know of any other effort to do this.
*Note that while "low level" may not have been sufficient for being able to write drivers in C# it was at the very least necessary.
C is a language better suited to interfacing with other peripherals, so C is used to develop system software. The only problem is that you need to manage the memory yourself, which is a developer's nightmare. In C#, memory management is done easily and automatically (there are any number of other differences between C and C#; try googling them).
Here's the honest truth. People who tend to be good at hardware or hardware interfaces are usually not very sophisticated programmers. They tend to stick to simpler languages like C. Otherwise, frameworks would have evolved that allow languages like C++ or even C# to be used at the kernel level. Entire OSes have been written in C++ (eCos). So, IMHO, it is mostly tradition.
Now, there are some legitimate arguments against using more sophisticated languages in demanding code like drivers and kernels. There is the visibility aspect: C# and C++ compilers do a lot behind the scenes, and an innocuous assignment statement may hide a shitload of code (operator overloads, properties). The cost of exceptions may not be clearly visible. Garbage collection makes the lifetime of objects and memory unclear. All the features that make programming easier are also rope to hang yourself with.
Then there is the required ecosystem for the language. If the required features pull in too many components, the size alone may become a factor. If the primitives in the language tend to be heavy (for the sake of useful software abstractions), that's a burden drivers and kernels may not be willing to carry.
I've been working with the .NET Framework for a good while now. In that time there have been situations where I used P/Invoke to do things I couldn't do with managed code.
However, I never knew exactly what the real disadvantages of using it were, which is also why I tried to avoid it as far as possible.
If I google, I find various posts about it; some paint a catastrophic picture and recommend never using it. What are the major drawbacks and problems of an app that uses one or more P/Invoke calls, and when do they apply (not from a performance perspective, more in the manner of "could not be executed from a network share")?
Marshalling between managed/unmanaged types has an additional overhead
Doesn't work in medium trust
Not quite intuitive, and can lead to subtle bugs like leaking handles, corrupting memory, ... (one common mitigation is sketched after this list)
It can take some time to get the managed signature right when all you have to go on is an exported C function
Problems when migrating from x86 to x64
When something goes wrong you can't simply step into the unmanaged code to debug and understand why you are getting exceptions. You can't even open it in Reflector :-)
Restricts cross platform interoperability (ie: you can't run under Linux if you rely on a Windows-only library).
Conclusion: use only if there's no managed alternative or in performance critical applications where only unmanaged libraries exist to provide the required speed.
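Regarding the handle-leak risk above, the usual mitigation is to wrap the raw native handle in a SafeHandle. A minimal sketch, in which the library name and release function are hypothetical:

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Wrapping the raw handle in a SafeHandle guarantees it is released
// exactly once, even if an exception interrupts the normal code path.
class NativeResourceHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public NativeResourceHandle() : base(ownsHandle: true) { }

    // Hypothetical native export: void release_resource(void* handle);
    [DllImport("nativelib")]
    private static extern void release_resource(IntPtr handle);

    protected override bool ReleaseHandle()
    {
        release_resource(handle);
        return true;
    }
}
```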
There are three different scenarios where you use P/Invoke, and different disadvantages arise (or don't) in each.
You need to use a Windows capability that is not provided in the .NET Framework. Your only choice is P/Invoke, and the disadvantage you incur is that now your app will only work on Windows.
You need to use a C-style DLL provided by someone else, and there is no managed equivalent. Now you have to deploy the DLL with your app, and you have the problems of marshalling time and possible screwups in your declaration of the function (e.g. IntPtr vs. int, as sketched below), string marshalling, and other things people find difficult.
You have some old native code that you wrote or control and you feel like accessing it from managed code without porting it. Here you have all the problems of (2) but you do have the option of porting it instead. I'm going to assert that you will cause more bugs by porting it than you will by P/Invoking badly. You may also cause a bigger perf problem if the code you're porting makes a lot of calls to native code such as the CRT.
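To illustrate the declaration pitfall mentioned in scenario 2 (the native library and function here are hypothetical):

```csharp
using System;
using System.Runtime.InteropServices;

class DeclarationPitfalls
{
    // Suppose the native header declares:
    //   void* open_session(const char* name);

    // Wrong: 'int' truncates the pointer-sized return value on x64,
    // a bug that stays invisible until you migrate from x86.
    // [DllImport("legacy")] static extern int open_session(string name);

    // Right: pointer-sized native values map to IntPtr in the declaration.
    [DllImport("legacy", CharSet = CharSet.Ansi)]
    static extern IntPtr open_session(string name);
}
```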
My bottom line is that while P/Invoke is non-trivial, telling people to avoid it is bad advice. Once you have it right, the runtime marshalling costs are all that remain, and these may be less than the runtime costs you would incur by porting the code.