In which cases does using a language with GC allow audio playback without dropouts? - C#

As far as I know, using a language with garbage collection means there will be time intervals during which the entire application is stopped. But I'm curious about the scope of these stops.
For example, there are PortAudio bindings for Java, and there are two modes of operation, which differ in the direction of control. In one mode you call PortAudio to hand it the data it must play; in the other mode PortAudio calls you (a callback function) to fill its buffers with data. I am wondering why the Java bindings for PortAudio don't allow the second mode (using a callback). The explanation, as can be read here, is: "This Java binding does not support audio callbacks because an audio callback should never block. Calling into a Java virtual machine might block for garbage collection or synchronization. So only the blocking read/write mode is supported." This implies that in the other case the GC should not be a problem? But why? I don't understand this.
And how does the situation differ in other programming languages with GC? (C# and D are especially interesting.) What should I take care of if I want to implement an audio player (one that never, ever drops samples) in a language with GC, using only one process? And is it possible at all?
Previously I participated in developing a kind of VoIP software in Java, and there were serious problems with dropouts that correlated in time with GC runs. But a music player should be easier to do, I think, because latency is not a problem here and I can use a huge buffer for the audio data.
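To make the blocking write idea concrete, here is a minimal sketch in C#; IBlockingAudioOut and IDecoder are hypothetical stand-ins for whatever blocking API and decoder a real binding exposes, not actual PortAudio types:

```
// Hypothetical blocking output: Write() blocks until the device has accepted the data.
public interface IBlockingAudioOut
{
    void Write(float[] samples, int count);
}

// Hypothetical decoder that fills a float buffer with PCM samples.
public interface IDecoder
{
    int Read(float[] destination, int maxSamples);
}

public static class Player
{
    public static void PlaybackLoop(IBlockingAudioOut audioOut, IDecoder decoder)
    {
        // Hand the device about one second of stereo audio at 44.1 kHz per write,
        // so there is always far more audio queued than a typical GC pause
        // (a few milliseconds to a few tens of milliseconds) could ever drain.
        var buffer = new float[44100 * 2];

        int read;
        while ((read = decoder.Read(buffer, buffer.Length)) > 0)
        {
            // If a GC pause happens here, the device keeps playing the audio it
            // has already buffered; we only need to refill before it runs dry.
            audioOut.Write(buffer, read);
        }
    }
}
```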
Update:
I am interested only in free and open-source solutions. So, for example, using an "alternative" but non-free implementation of the Java runtime is not an option for me. But it's interesting to know about anyway.

Related

Is using C# for an embedded device that requires precisely timed TCP messages practical?

Let's say I have a device running Windows CE and there are two options for building the application: native C++, or the .NET Compact Framework with C#.
I have to establish a connection with an external computer and send out status messages exactly every 0.5 seconds, with only a +/- 10 millisecond error tolerance.
I know you might say that in practice there are too many factors to know the answer, but let's assume that this has been tested with a C++ program and works, and I want to make an equivalent program using C#. The only factor being changed would be the language/framework. Would this be possible, or would the +/- 10 ms error tolerance be too strict to achieve because C# is a slower, garbage-collected language?
The 10 ms requirement would be achievable in C#, but it could never be guaranteed. It might be met most of the time, but you can all but guarantee that a GC will happen at an inopportune moment and your managed thread will be suspended. You will miss that 10 ms window.
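If you want to see how big that jitter actually is on a given machine and runtime, a crude probe along these lines can help; SendStatusMessage is a hypothetical placeholder for the real TCP send, and this only measures the drift, it doesn't guarantee anything:

```
using System;
using System.Diagnostics;
using System.Threading;

class TimingProbe
{
    // Hypothetical placeholder for the actual TCP status message.
    static void SendStatusMessage() { }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        long next = 500; // next deadline, in ms from start

        for (int i = 0; i < 100; i++)
        {
            // Sleep until roughly the next 500 ms boundary.
            long wait = next - sw.ElapsedMilliseconds;
            if (wait > 0) Thread.Sleep((int)wait);

            long lateBy = sw.ElapsedMilliseconds - next; // positive = late
            SendStatusMessage();
            Console.WriteLine($"tick {i}: {lateBy} ms late");

            next += 500;
        }
    }
}
```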
But why does the solution have to be one or the other? I don't know much about your overall app requirements, but given similar requirements, my inclination would be to create a small piece in C (not C++ because you want very fine control over memory allocation and deallocation) for the time sensitive piece, probably as a service since services are easy in CE, and then create any UI, business logic, etc. in C#. Get the real-time nature of the OS for your tiny time-sensitive routine and the huge benefits of managed code for the rest of the app. Doing UI in C or C++ any more is just silly.
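A rough sketch of that split, assuming a hypothetical native library timer_core.dll that owns the 500 ms loop; the exported names here are invented for illustration, and only DllImport itself is the real mechanism:

```
using System.Runtime.InteropServices;

// The time-critical loop lives in a small native (C) module; managed code only
// starts and stops it and handles the UI and business logic.
static class TimerCore
{
    [DllImport("timer_core.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int StartStatusSender(int intervalMs);   // hypothetical export

    [DllImport("timer_core.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void StopStatusSender();                 // hypothetical export
}

// Usage from the C# side of the app:
//   TimerCore.StartStatusSender(500);
//   ... later ...
//   TimerCore.StopStatusSender();
```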
If your program does not need much memory and you can avoid long GC pauses then go with C#. It is infinitely easier to work with C# than C++. Raw C# speed is also pretty good. In some cases C# is even faster than C++.
The only thing that you get from C++ is predictability: there is no garbage collection that can surprise you. That is, if you manage to avoid memory corruption, double deallocation, pointer screw-ups, references to unallocated memory, and so on.

Will managed C++ (CLI) code ported from C# work faster than original C#? (TCP server)

So we created a simple C# TCP server for video sharing. What it does is simple and happens "live": receive live video packed into a container (FLV in our case) from some broadcaster and share the received stream with all subscribers (meaning it opens the container, creates new containers, and rewrites timestamps in the container structure, but does not decode the contents of the packets in any way). We tested our server but found out that its performance is not enough for 5 incoming streams and 10 outgoing streams. We found a tool for porting the code. We will try it anyway, but before we do I wonder if any of you have tried such a thing on any of your projects. So the main question is: will C++/CLI make the app faster than the original C#?
No.
Writing the same code in a different language won't make any difference whatsoever; it will still compile to the same IL.
C# is not a slow language; you probably have some higher-level performance issues.
You should optimize your existing code.
Not for most code; however, if the code does a lot of bit-level operations, maybe. Likewise if you can safely make use of unmanaged memory to reduce the load on the garbage collector.
So just translating the code to managed C++ is very unlikely to benefit most code; however, managed C++ may let you write some code in a more complex (and unsafe) way that runs faster.
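As a sketch of the unmanaged-memory idea: allocate one buffer outside the GC heap and reuse it for every packet, instead of allocating a new byte[] per packet. The packet-processing code itself is left out; only the allocation pattern is shown:

```
using System;
using System.Runtime.InteropServices;

// A reusable buffer that lives outside the GC heap, so repackaging packets
// does not generate garbage on every call.
sealed class PacketBuffer : IDisposable
{
    public IntPtr Pointer { get; }
    public int Size { get; }

    public PacketBuffer(int size)
    {
        Size = size;
        Pointer = Marshal.AllocHGlobal(size); // not tracked or moved by the GC
    }

    public void Dispose() => Marshal.FreeHGlobal(Pointer);
}
```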
No, not at all. C++/CLI runs on the same .NET platform as C#, which effectively prevents any speed increase purely from changing language. Native C++, on the other hand, may yield some benefits, but it's best to be very careful. You're more likely to gain performance from a profiler than from changing language, and you should only consider changing language after extensive testing and profiling of the code that you have.
If you are calling functions from a native DLL via the P/Invoke approach, then at least converting those callback mechanisms to C++/CLI using IJW ("It Just Works" interop) would increase the performance a bit.

How to prevent or minimize the negative effects of .NET GC in a real time app?

Are there any tips, tricks and techniques to prevent or minimize slowdowns or temporary freezes of an app because of the .NET GC?
Maybe something along the lines of:
Try to use structs if you can, unless the data is too large or will be mostly used inside other classes, etc.
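As a quick illustration of the struct suggestion (assuming the data really is small and value-like): an array of structs is a single object on the GC heap, while an array of class instances is one object per element plus the array itself.

```
struct SamplePointStruct { public int X, Y; }
class  SamplePointClass  { public int X, Y; }

static class StructVsClassDemo
{
    static void Main()
    {
        // One allocation; the GC never has to trace a million separate objects.
        var a = new SamplePointStruct[1_000_000];
        a[0].X = 1;

        // 1,000,001 allocations once every slot is filled; far more GC work.
        var b = new SamplePointClass[1_000_000];
        for (int i = 0; i < b.Length; i++) b[i] = new SamplePointClass();
    }
}
```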
The description of your app does not fit the usual meaning of "realtime". Realtime is commonly used for software that has a maximum latency measured in milliseconds or less.
You have a requirement of responsiveness to the user, meaning you could probably tolerate an occasional delay of 500 ms or more; 100 ms won't be noticed.
Luckily for you, the GC won't cause delays that long. And if it did, you could switch to the server (background) flavor of the GC, but I know little about the details.
But if your "user experience" does suffer, it probably won't be the GC.
IMHO, if the performance of your application is being affected noticeably by the GC, something is wrong. The GC is designed to work without intervention and without significantly affecting your application. In other words, you shouldn't have to code with the details of the GC in mind.
I would examine the structure of your application and see where the bottlenecks are, maybe using a profiler. Maybe there are places where you could reduce the number of objects that are being created and destroyed.
If parts of your application really need to be real-time, perhaps they should be written in another language that is designed for that sort of thing.
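One concrete way to reduce the number of objects created and destroyed is to pool and reuse buffers instead of allocating per operation. This is only a sketch of the idea; newer runtimes ship ArrayPool<T> in System.Buffers for exactly this purpose:

```
using System.Collections.Concurrent;

sealed class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) => _bufferSize = bufferSize;

    // Reuse an old buffer when one is available; allocate only when the pool is empty.
    public byte[] Rent() => _pool.TryTake(out var buffer) ? buffer : new byte[_bufferSize];

    // Hand the buffer back instead of letting it become garbage.
    public void Return(byte[] buffer) => _pool.Add(buffer);
}
```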
Another trick is to use GC.RegisterForFullGCNotification on the back end.
Let's say that you have a load-balancing server and N app servers. When the load balancer receives notice of a possible full GC on one of the servers, it forwards requests to the other servers for some time, so the SLA is not affected by the GC (which is especially useful on x64 boxes where more than 4 GB can be addressed).
Updated
No, unfortunately I don't have code to hand, but there is a very simple example on MSDN with dummy methods like RedirectRequests and AcceptRequests, which can be found here: Garbage Collection Notifications
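A condensed sketch of the pattern from that article, using the real notification API (RedirectRequests and AcceptRequests are the article's dummy methods); note that full-GC notifications only work when concurrent GC is disabled in the runtime configuration:

```
using System;

class GcNotificationWatcher
{
    static void RedirectRequests() { /* tell the load balancer to drain this node */ }
    static void AcceptRequests()   { /* tell the load balancer this node is back */ }

    // Run this on a dedicated background thread at startup.
    static void WatchForFullGc()
    {
        // Thresholds between 1 and 99: how early before a full collection
        // (and large object heap collection) the notification is raised.
        GC.RegisterForFullGCNotification(10, 10);

        while (true)
        {
            if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                RedirectRequests();

            if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                AcceptRequests();
        }
    }
}
```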

Why is C used for driver development rather than C#? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
C# driver development?
Why do we use C for device driver development rather than C#?
Because C# programs cannot run in kernel mode (Ring 0).
The main reason is that C is much better suited than C# for low-level (close to the hardware) development. C was designed as a form of portable assembly code. Also, many times it may be difficult or unsafe for C# to be used. The key case is when drivers are run at ring 0, or kernel mode. Most applications are not meant to run in this mode, including the .NET runtime.
While it may be theoretically possible, many times C is simply better suited to the task. Some of the reasons are tighter control over the machine code that is produced, and being closer to the processor, which is almost always better when working directly with hardware.
Ignoring C#'s managed-language limitations for a moment, object-oriented languages such as C# often do things under the hood which may get in the way when developing drivers. Driver development -- actually reading and manipulating bits in hardware -- often has tight timing constraints and makes use of programming practices which are avoided in other types of programming, such as busy waits and dereferencing pointers that were set from integer constant values. Device drivers are often actually written in a mix of C and inline assembly (or make use of some other method of issuing instructions which C compilers don't normally produce). The low-level locking mechanisms alone (written in assembly) are enough to make using C# difficult. C# has lots of locking functionality, but when you dig down it operates at the threading level. Drivers need to be able to block interrupts as well as other threads to perform some tasks.
Object oriented languages also tend to allocate and deallocate (via garbage collection) lots of memory over and over. Within an interrupt handler, though, your ability to access heap allocation functionality is severely restricted. This is both because heap allocation may trigger garbage collection (expensive) and because you have to avoid and deal with both the interrupt code and the non-interrupt code trying to allocate at the same time. Without lots of limitations placed on this code, the compiler's output, and on whatever is managing the code (VM or possibly compiled natively and only using libraries) you will likely end up with some very odd bugs.
Managed languages have to be managed by someone. This management probably relies on a functioning underlying operating system. This produces a bootstrap problem for using managed code for many (but not all) drivers.
It is entirely possible to write device drivers in C#. If you are driving a serial device, such as a GPS device, then it may be trivial (though you will be making use of a UART chip or USB driver somewhere lower down, which is probably written in C or assembly), since it can be done as an application. If you are writing an ethernet card driver (one that isn't used for network booting), then it may be (theoretically) possible to use C# for some parts, but you would probably rely heavily on libraries written in other languages and/or on an operating system's userspace driver functionality.
C is used for drivers because it has relatively predictable output. If you know a little bit of assembly for a processor you can write some code in C, compile for that processor, and have a pretty good guess at what the assembly for that will look like. You will also know the determinism of those instructions. None of them are going to surprise you and kick off the garbage collector or call a destructor (or finalize).
Because C# is a high-level language, it cannot talk to processors directly. C code compiles straight into native code that a processor can understand.
Just to build a little on Darin Dimitrov's answer: yes, C# programs cannot run in kernel mode.
But why can't they?
In Patrick Dussud's interview for Behind the Code, he describes an attempt that was made during the development of Vista to include the CLR at a low level*. The wall they hit was that the CLR takes dependencies on the OS security library, which in turn takes dependencies on the UI level. They were not able to resolve this for Vista. Apart from Singularity, I don't know of any other effort to do this.
*Note that while "low level" may not have been sufficient for being able to write drivers in C# it was at the very least necessary.
C is a language that offers more support for interfacing with other peripherals, so C is used to develop system software. The only problem is that you need to manage the memory yourself, which is a developer's nightmare. In C#, memory management can be done easily and automatically (there are any number of differences between C and C#; try googling them).
Here's the honest truth. People who tend to be good at hardware or hardware interfaces are not very sophisticated programmers. They tend to stick to simpler languages like C. Otherwise, frameworks would have evolved that allow languages like C++ or even C# to be used at kernel level. Entire OSes have been written in C++ (eCos). So, IMHO, it is mostly tradition.
Now, there are some legitimate arguments against using more sophisticated languages in demanding code like drivers and kernels. There is the visibility aspect: C# and C++ compilers do a lot behind the scenes. An innocuous assignment statement may hide a shitload of code (operator overloads, properties). The cost of exceptions may not be clearly visible. Garbage collection makes the lifetime of objects / memory unclear. All the features that make programming easier are also rope to hang yourself with.
Then there is the required ecosystem for the language. If the required features pull in too many components, the size alone may become a factor. If the primitives in the language tend to be heavy (for the sake of useful software abstractions), that's a burden drivers and kernels may not be willing to carry.

How to read from a memory mapped I/O port in .Net?

Can standard pointers in .Net do this? Or does one need to resort to P/invoke?
Note that I'm not talking about object references; I'm talking about actual C# pointers in unsafe code.
C#, as a managed and protected runtime, does not allow low-level hardware access, and the memory locations associated with actual hardware are not available.
You'll need to use a port driver, or write your own in C++ or C with the proper Windows APIs, to access the memory-mapped I/O regions of interest. This will run in a lower ring than C# programs are capable of.
This is why you don't see drivers written in C#, although I understand many people write the access routines in C++ but keep the main driver logic in C#. It's awkward, though, as crashes and restarting can become tricky, not to mention synchronization and timing issues (which are somewhat more concrete in C++ at a lower ring, even though Windows is far from a real-time system).
-Adam
To expand on Adam's answer, you can't even perform memory-mapped I/O from a Win32 application without the cooperation of a kernel driver. All addresses a Win32 app gets are virtual addresses that have nothing to do with physical addresses.
You either need to write a kernel driver to do what you're talking about, or have a driver installed that exposes an API letting you make I/O requests against particular physical addresses (and such a driver would be a pretty big security hole waiting to happen, I'd imagine). I seem to recall that way back when, some outfit had such a driver as part of a development kit to help port legacy DOS/Win16 or whatever device code to Win32. I don't remember its name or know if it's still around.
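To illustrate why C# unsafe pointers alone don't get you there: any address a pointer holds is treated as a virtual address of your own process, so dereferencing a physical-looking MMIO address simply faults instead of touching the hardware. A sketch (requires compiling with unsafe code enabled):

```
class MmioAttempt
{
    unsafe static void Main()
    {
        // Looks like a typical memory-mapped device address, but in user mode
        // this is just an arbitrary virtual address in this process.
        byte* p = (byte*)0xFEC00000;

        // Throws AccessViolationException (or kills the process): nothing is
        // mapped at that virtual address, and the hardware is never touched.
        byte value = *p;

        System.Console.WriteLine(value);
    }
}
```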
