How do RAM test applications work? C# example?

How exactly do RAM test applications work, and is it possible to write such using C# (Example)?

Most use low-level hardware access to write various bit patterns to memory, then read them back to ensure they are identical to the pattern written. If not, the RAM is probably faulty.
They are generally written in low-level languages (assembler) to access the RAM directly - this way, any caching (that could possibly affect the result of the test) is avoided.
It's certainly possible to write such an application in C# - but that would almost certainly prevent you from getting direct bit-level access to the memory, and hence could never be as thorough or reliable as low-level memory testers.

You basically write to the RAM, read it back and compare this with the expected result. You might want to test various patterns to detect different errors (always-0, always-1), and run multiple iterations to detect spurious errors.
You can do this in any language you like, as long as you have direct access to the memory you want to test. If you want to test physical RAM, you could use P/Invoke to reach outside the CLR.
However, this won't solve one specific problem if your computer is based on the Von Neumann architecture: The program that tests the memory is actually located inside the very same memory. You would have to relocate the program to test all of it. The German magazine c't found a way around this issue for their Ramtest: They run the test from video memory. In practice, this is impossible with C#.
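The write/read-back idea from the answers above can at least be sketched in managed C#. This is an illustration only, under the caveats already given: it exercises only whichever physical pages the CLR backs the array with, and CPU caching can mask real faults, so it is nowhere near a memtest86-style tool. All names here (`RamPatternTest`, `CountMismatches`) are made up for the example.

```csharp
using System;

class RamPatternTest
{
    // Classic bit patterns: all-zeros, all-ones, and alternating bits
    // to catch stuck-at-0, stuck-at-1, and coupling faults.
    static readonly byte[] Patterns = { 0x00, 0xFF, 0xAA, 0x55 };

    // Write each pattern across the buffer, read it back, count mismatches.
    public static int CountMismatches(byte[] buffer)
    {
        int errors = 0;
        foreach (byte pattern in Patterns)
        {
            for (int i = 0; i < buffer.Length; i++) buffer[i] = pattern;
            for (int i = 0; i < buffer.Length; i++)
                if (buffer[i] != pattern) errors++;   // readback differs: suspect bit
        }
        return errors;
    }

    static void Main()
    {
        var buffer = new byte[64 * 1024 * 1024];      // 64 MB managed test block
        Console.WriteLine(CountMismatches(buffer) == 0 ? "no errors" : "RAM suspect");
    }
}
```

Running multiple iterations, as suggested above, just means calling `CountMismatches` in a loop to catch spurious errors.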

As discovered by some Linux guru trying to write a memtest program in C, any such program must be compiled to run on either bare hardware or a MMU-less OS to be effective.
I don't think any compiler for C# can do that.

You probably can't do as good of a job testing memory from a C# program in Windows as you could from a C or Assembly language program running with no OS, but you could still make something useful.
You're going to need to use the native Windows API (via DllImport and P/Invoke) to allocate some memory and lock it into RAM. Once you've done that, reading and writing patterns to the memory is pretty easy.
At the end of the test, you can tell the user how much of their memory you were able to test.
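A minimal sketch of that allocate-and-lock step, using kernel32's `VirtualAlloc` and `VirtualLock`. Note that `VirtualLock` is limited by the process working-set quota, so locking large regions may fail without first raising the quota (e.g. via `SetProcessWorkingSetSize`); the 4 MB size here is an arbitrary example value.

```csharp
using System;
using System.Runtime.InteropServices;

class LockedBuffer
{
    // P/Invoke declarations for the kernel32 allocation/locking APIs.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                      uint flAllocationType, uint flProtect);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualLock(IntPtr lpAddress, UIntPtr dwSize);

    const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_READWRITE = 0x04;

    static void Main()
    {
        var size = (UIntPtr)(4 * 1024 * 1024);        // 4 MB, kept small on purpose
        IntPtr p = VirtualAlloc(IntPtr.Zero, size,
                                MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (p == IntPtr.Zero || !VirtualLock(p, size))
            throw new InvalidOperationException(
                "allocation or lock failed; the working-set quota may be too small");

        // Pages are now pinned in physical RAM; write and verify a pattern
        // through Marshal rather than raw pointers.
        Marshal.WriteByte(p, 0xAA);
        Console.WriteLine(Marshal.ReadByte(p) == 0xAA ? "ok" : "mismatch");
    }
}
```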

Related

Is using C# for an embedded device that requires precisely timed TCP messages practical?

Let's say I have a device running Windows CE, and there are two options for building the application: native C++, or C# with the .NET Compact Framework.
I have to establish a connection with an external computer and send out status messages exactly every 0.5 seconds, with only a +/- 10 millisecond error tolerance.
I know you might say that in practice there are too many factors to know the answer, but let's assume that this has been tested with a C++ program and works, and I want to make an equivalent program using C#. The only factor being changed is the language/framework. Would this be possible, or would the +/- 10 ms error tolerance be too strict to achieve because C# is a slower, garbage-collected language?
The 10 ms requirement would be achievable in C#, but could never be guaranteed. It might hold most of the time, but you can all but guarantee that a GC will happen at an inopportune moment, your managed thread will get suspended, and you will miss that 10 ms window.
But why does the solution have to be one or the other? I don't know much about your overall app requirements, but given similar requirements, my inclination would be to create a small piece in C (not C++ because you want very fine control over memory allocation and deallocation) for the time sensitive piece, probably as a service since services are easy in CE, and then create any UI, business logic, etc. in C#. Get the real-time nature of the OS for your tiny time-sensitive routine and the huge benefits of managed code for the rest of the app. Doing UI in C or C++ any more is just silly.
If your program does not need much memory and you can avoid long GC pauses then go with C#. It is infinitely easier to work with C# than C++. Raw C# speed is also pretty good. In some cases C# is even faster than C++.
The only thing that you get from C++ is predictability. There is no garbage collection that can surprise you. That is if you manage to avoid memory corruption, duplicate deallocation, pointer screwups, unallocated memory references, etc. etc.
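For the C# side of such a hybrid, a drift-corrected send loop is the usual shape: each deadline is computed from the start time, so one late tick does not accumulate error. This is a sketch only; the actual TCP send is left as a placeholder comment, and as the answers note, a GC or scheduler pause can still blow the +/- 10 ms window. `StatusSender` and `RunTicks` are names invented for the example.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class StatusSender
{
    // Send `count` status messages, one per `intervalMs`, correcting for
    // drift by scheduling each deadline relative to the loop start.
    // Returns total elapsed milliseconds.
    public static long RunTicks(int count, long intervalMs)
    {
        var clock = Stopwatch.StartNew();
        for (int tick = 1; tick <= count; tick++)
        {
            long deadline = tick * intervalMs;
            long wait = deadline - clock.ElapsedMilliseconds;
            if (wait > 0) Thread.Sleep((int)wait);
            // send the status message on the TCP socket here
            Console.WriteLine($"tick {tick} at {clock.ElapsedMilliseconds} ms");
        }
        return clock.ElapsedMilliseconds;
    }

    static void Main() => RunTicks(10, 500);   // 10 messages, 0.5 s apart
}
```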

KERNEL_DATA_INPAGE_ERROR BSOD constantly when testing code

There's obviously a problem somewhere in my code, but I am too novice to realize what it might be.
I've designed a simple program to calculate various cryptographic hashes of files. It seems to work great (I've even got it using multiple threads) on smaller files... but when I try to test it on a large ISO file (nearly 4GB), my computer very reliably crashes with a KERNEL_DATA_INPAGE_ERROR.
Am I doing something rather inefficiently? It seems to me like too much memory is being used up, despite the fact that I've tried to limit the use of memory at one time... I wonder if it's my code, or if it's something wrong with my computer...
FWIW, I've got an i5 processor running 4 threads and 4 GB of RAM, on Windows 7 x64.
Here's my code: http://pastebin.com/KA3KrStf
The problem is almost certainly not in your program. User mode code does not produce kernel faults. The problem is either in your hardware or the drivers. You should direct your search in that direction rather than investigating your code.
This code is ring 3, so it should never BSOD your machine. I can only imagine you have bad RAM or a bad HDD, which triggers a BSOD when you try to allocate a huge blob of memory.
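Both answers point at hardware, but for the memory-use worry itself: hashing a file should never need to hold the file in memory. Streaming the file through the hash keeps usage at one buffer regardless of file size, so a 4 GB ISO costs no more RAM than a small file. A sketch (names like `StreamingHash` are invented here; the original code is in the pastebin link):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class StreamingHash
{
    // Hash a file of arbitrary size; ComputeHash(Stream) reads the file
    // incrementally, so memory use is bounded by the stream buffer.
    public static byte[] Sha256Of(string path)
    {
        using var sha = SHA256.Create();
        using var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                      FileShare.Read, bufferSize: 1 << 20);
        return sha.ComputeHash(fs);
    }

    static void Main(string[] args) =>
        Console.WriteLine(BitConverter.ToString(Sha256Of(args[0])));
}
```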

MWMCR::evaluatefunction error out of memory

When I run my application I get this exception:
(screenshot of the exception: http://img21.imageshack.us/img21/5619/bugxt.jpg)
I understand that the program is out of memory... are there any other possible meanings for that exception?
Given that I am calling DLL files (deployed from MATLAB).
Thank you all.
It's absolutely possible; just use Process Explorer to see your process's working set.
On 32-bit Windows systems, the maximum available memory for a .NET process is around 2 GB, but it can be less depending on your configuration. Here is the SO link on the subject.
Considering that you use MATLAB, and so probably perform massive or complex calculations, you likely create a lot of objects/values to pass to the DLL functions, which can be one possible source of the bottleneck. But this is only a guess; you need to measure your program to figure out the real problem.
Regards.
Note: check your old questions and accept the answer you prefer among the responses you got for each question; your accept rate is too low!
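You can also query the same numbers Process Explorer shows from inside the process, which is handy for logging how close a 32-bit process is getting to its ~2 GB ceiling. A small sketch (`MemoryCheck` is a name invented for the example):

```csharp
using System;
using System.Diagnostics;

class MemoryCheck
{
    // Current working set of this process, in megabytes.
    public static long WorkingSetMB() =>
        Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);

    static void Main()
    {
        var p = Process.GetCurrentProcess();
        Console.WriteLine($"Working set:    {WorkingSetMB()} MB");
        Console.WriteLine($"Private bytes:  {p.PrivateMemorySize64 / (1024 * 1024)} MB");
        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
    }
}
```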

Why is C used for driver development rather than C#? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
C# driver development?
Why do we use C for device driver development rather than C#?
Because C# programs cannot run in kernel mode (Ring 0).
The main reason is that C is much better suited than C# for low-level (close-to-hardware) development. C was designed as a form of portable assembly code. Also, it may often be difficult or unsafe to use C#. The key case is when drivers run at ring 0, or kernel mode. Most applications are not meant to run in this mode, including the .NET runtime.
While it may be theoretically possible to do, many times C is more suited to the task. Some of the reasons are tighter control over what machine code is produced and being closer to the processor is almost always better when working directly with hardware.
Ignoring C#'s managed-language limitations for a moment, object-oriented languages such as C# often do things under the hood which may get in the way when developing drivers. Driver development -- actually reading and manipulating bits in hardware -- often has tight timing constraints and makes use of programming practices which are avoided in other types of programming, such as busy waits and dereferencing pointers which were set from integer constant values. Device drivers are often actually written in a mix of C and inline assembly (or make use of some other method of issuing instructions which C compilers don't normally produce). The low-level locking mechanisms alone (written in assembly) are enough to make using C# difficult. C# has lots of locking functionality, but when you dig down it operates at the threading level. Drivers need to be able to block interrupts as well as other threads to perform some tasks.
Object oriented languages also tend to allocate and deallocate (via garbage collection) lots of memory over and over. Within an interrupt handler, though, your ability to access heap allocation functionality is severely restricted. This is both because heap allocation may trigger garbage collection (expensive) and because you have to avoid and deal with both the interrupt code and the non-interrupt code trying to allocate at the same time. Without lots of limitations placed on this code, the compiler's output, and on whatever is managing the code (VM or possibly compiled natively and only using libraries) you will likely end up with some very odd bugs.
Managed languages have to be managed by someone. This management probably relies on a functioning underlying operating system. This produces a bootstrap problem for using managed code for many (but not all) drivers.
It is entirely possible to write device drivers in C#. If you are driving a serial device, such as a GPS device, then it may be trivial (though you will be making use of a UART chip or USB driver somewhere lower down, which is probably written in C or assembly), since it can be done as an application. If you are writing an Ethernet card driver (for a card that isn't used for network booting) then it may be (theoretically) possible to use C# for some parts, but you would probably rely heavily on libraries written in other languages and/or on an operating system's userspace driver functionality.
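The serial-GPS case mentioned above is the easy one: the real UART/USB driver underneath is native code, but the protocol handling can live entirely in user-mode C#. A sketch using `System.IO.Ports.SerialPort`; the port name `COM3` and 4800 baud are assumptions (typical NMEA defaults), so adjust for your device.

```csharp
using System;
using System.IO.Ports;

class GpsReader
{
    static void Main()
    {
        // NMEA GPS units commonly default to 4800 baud, 8N1.
        using var port = new SerialPort("COM3", 4800, Parity.None, 8, StopBits.One);
        port.Open();
        for (int i = 0; i < 5; i++)
        {
            string sentence = port.ReadLine();        // e.g. "$GPGGA,..."
            if (sentence.StartsWith("$GPGGA"))        // fix data sentences only
                Console.WriteLine(sentence);
        }
    }
}
```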
C is used for drivers because it has relatively predictable output. If you know a little bit of assembly for a processor you can write some code in C, compile for that processor, and have a pretty good guess at what the assembly for that will look like. You will also know the determinism of those instructions. None of them are going to surprise you and kick off the garbage collector or call a destructor (or finalize).
Because C# is a high-level language, it cannot talk to processors directly. C code compiles straight into native code that a processor can understand.
Just to build a little on Darin Dimitrov's answer. Yes, C# programs cannot run in kernel mode.
But why can't they?
In Patrick Dussud's interview for Behind the Code he describes an attempt that was made during the development of Vista to include the CLR at a low level*. The wall that they hit was that the CLR takes dependencies on the OS security library, which in turn takes dependencies on the UI level. They were not able to resolve this for Vista. Except for Singularity, I don't know of any other effort to do this.
*Note that while "low level" may not have been sufficient for being able to write drivers in C# it was at the very least necessary.
C is a language better suited to interfacing with other peripherals, so C is used to develop system software. The only problem is that one needs to manage the memory; it is a developer's nightmare. But in C#, memory management can be done easily and automatically (there are any number of differences between C and C#; try googling).
Here's the honest truth. People who tend to be good at hardware or hardware interfaces, are not very sophisticated programmers. They tend to stick to simpler languages like C. Otherwise, frameworks would have evolved that allow languages like C++ or even C# to be used at kernel level. Entire OSes have been written in C++ (ECOS). So, IMHO, it is mostly tradition.
Now, there are some legitimate arguments against using more sophisticated languages in demanding code like drivers / kernel. There is the visibility aspect. C# and C++ compilers do a lot behind the scenes. An innocuous assignment statement may hide a shitload of code (operator overrides, properties). The cost of exceptions may not be clearly visible. Garbage collection makes the lifetime of objects / memory unclear. All the features that make programming easier, are also rope to hang yourself with.
Then there is the required ecosystem for the language. If the required features pull in too many components, the size alone may become a factor. If the primitives in the language tend to be heavy (for the sake of useful software abstractions), that's a burden drivers and kernels may not be willing to carry.

How to read from a memory mapped I/O port in .Net?

Can standard pointers in .Net do this? Or does one need to resort to P/invoke?
Note that I'm not talking about object references; I'm talking about actual C# pointers in unsafe code.
C#, as a managed language running in a protected runtime, does not allow low-level hardware access, and the memory locations associated with actual hardware are not available.
You'll need to use a port driver or write your own in C++ or C with the proper Windows API to access the memory mapped I/O regions of interest. This will run in a lower ring than the C# programs are capable of.
This is why you don't see drivers written in C#, although I understand many write the access routines in C++ but the main driver logic in C#. It's tricky, though, as crash handling and restarts become complicated, not to mention synchronization and timing issues (which are somewhat more concrete in C++ at a lower ring, even though Windows is far from a real-time system).
-Adam
To expand on Adam's answer, you can't even perform memory-mapped I/O from a Win32 application without the cooperation of a kernel driver. All addresses a Win32 app gets are virtual addresses that have nothing to do with physical addresses.
You either need to write a kernel driver to do what you're talking about or have a driver installed that has an API that'll let you make requests for I/O against particular physical addresses (and such a driver would be a pretty big security hole waiting to happen, I'd imagine). I seem to recall that way back when some outfit had such a driver as part of a development kit to help port legacy DOS/Win16 or whatever device code to Win32. I don't remember its name or know if it's still around.
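To make the virtual-versus-physical point concrete: C# unsafe pointers are real pointers, but they hold virtual addresses, so dereferencing a hardware address like `0xB8000` would not touch device memory, it would simply fault. A small demo (compile with `/unsafe`; `VirtualOnly` is a name invented for the example):

```csharp
using System;

class VirtualOnly
{
    static unsafe void Main()
    {
        int value = 42;
        int* p = &value;                              // points into this process's
                                                      // virtual address space only
        Console.WriteLine($"virtual address: 0x{(ulong)p:X}, value: {*p}");
        // Reaching a physical address requires a kernel driver that maps the
        // region and exposes it to user mode (e.g. via DeviceIoControl).
    }
}
```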
