KERNEL_DATA_INPAGE_ERROR BSOD constantly when testing code - c#

There's obviously a problem somewhere in my code, but I am too novice to realize what it might be.
I've designed a simple program to calculate various cryptographic hashes of files. It seems to work great (I've even got it using multiple threads) on smaller files... but when I try to test it on a large ISO file (nearly 4GB), my computer very reliably crashes with a KERNEL_DATA_INPAGE_ERROR.
Am I doing something inefficiently? It seems like too much memory is being used up, even though I've tried to limit how much is in use at any one time. I wonder if it's my code, or if it's something wrong with my computer.
FWIW, I've got an i5 processor running 4 threads and 4GB of RAM, on Windows 7 x64.
Here's my code: http://pastebin.com/KA3KrStf

The problem is almost certainly not in your program. User mode code does not produce kernel faults. The problem is either in your hardware or the drivers. You should direct your search in that direction rather than investigating your code.

This code is ring3, so it should never BSOD your machine. I can only imagine you have bad RAM or a bad HDD, which is triggering a BSOD when you try to allocate a huge blob of memory.
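If the hardware checks out and memory pressure is still a worry, note that a file never needs to be loaded whole to be hashed. Below is a minimal sketch (the path is hypothetical; this is not the asker's pastebin code) using HashAlgorithm.ComputeHash(Stream), which pulls the file through a small internal buffer, so memory use stays flat even for a 4GB ISO:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    class StreamingHash
    {
        static void Main()
        {
            string path = @"C:\temp\big.iso"; // hypothetical path

            using (var sha = SHA256.Create())
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 1 << 16)) // 64KB buffer
            {
                // ComputeHash(Stream) reads the file chunk by chunk,
                // never materializing the whole file in memory.
                byte[] hash = sha.ComputeHash(fs);
                Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
            }
        }
    }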

Related

C# 80% CPU at once. Can't find where the bug is (on a game server with 250 online)

Hello, I made a game server in C# for a big online game.
The only problem is that I sometimes get, out of nowhere, 90% CPU usage.
It also gets stuck at that 90%: once the bug occurs, it stays at 90% forever.
So my question is: how can I find the faulty code in a simple way? The server is huge.
Are there any good .NET profilers, or something like that?
"It also gets stuck at that 90%: once the bug occurs, it stays at 90% forever"
That's awesome! You just got handed the holy grail right there, my friend!
Simply attach a debugger when it gets into that state and break. Chances are you'll break in the buggy code; if not, keep running and breaking. You should get there really quickly if it's using 90% of your CPU.
Alternatively, run it through VS's profiler and zoom the graph in on the zone with extremely high sustained CPU usage; you'll get a list of the functions that use the most time (assuming it's a CPU-bound issue; if it's I/O, I don't think it will show).

C# Memory Use Discrepancy between two Machines

I wrote a C# scraper and analyzed the markup of 30K URLs to pull certain metrics from them.
I ran the same code on two machines:
my dev box with a 4-core CPU, 8 logical processors and 32GB of RAM. It used at most 300MB of RAM through to the end. As I displayed the WorkingSet size, I could even see the GC kick in and lower memory use, then grow it back again.
an EC2 instance, same data, but an instance with only 2 processors and 1.7GB of RAM. Here it used 1.1GB of RAM and, when all threads concluded their work, it went down to 300MB just like in my local test.
RAM usage was checked with both Environment.WorkingSet and Task Manager. My network speed is not negligible, so I don't think it could affect things, even if that Amazon instance might be a little faster. (EC2 network performance differs per instance, and this one is on the affordable, hence slower, side.)
Why this memory use discrepancy? And can I somehow estimate beforehand the memory use in C#?
My guess is that, having a slower CPU in the cloud, the GC preferred to keep allocating rather than cleaning up what was already used. But this is just my theory to excuse its unexpected behavior, based on wishful thinking. Still, on my 32GB of RAM it could have used way more, but it behaved. On 1.7GB of RAM it went all crazy, using 1.1GB of it... I don't get it.
In C++ I just think of how many URLs I fetch at the same time, figure a 256KB average size plus the size of the extracted data, and I can tell, beforehand, quite precisely how much memory will be used. But this C# test left me wondering.
As I plan to release this tool into the wild... I don't feel comfortable with it taking over half the RAM, especially on a lesser machine.
UPDATE: Forgot to mention both machines are Windows 8. Actually, one is 8.1 (local) and one is Server 2012 (EC2 cloud), both with .NET 4.5.2.
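For reference, here is a minimal sketch of the kind of monitoring described above, printing the two figures the question compares (plain format strings, so it compiles against .NET 4.5.2):

    using System;
    using System.Threading;

    class MemoryMonitor
    {
        static void Main()
        {
            // The process working set vs. what the GC believes is live
            // managed memory; a large gap means memory the GC has reserved
            // (or the OS has not trimmed) but that is not live objects.
            for (int i = 0; i < 10; i++)
            {
                long workingSet = Environment.WorkingSet;  // bytes
                long gcHeap = GC.GetTotalMemory(false);    // bytes, no forced collection
                Console.WriteLine("WorkingSet: {0} MB, GC heap: {1} MB",
                                  workingSet / (1024 * 1024), gcHeap / (1024 * 1024));
                Thread.Sleep(1000);
            }
        }
    }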

Will the C# compiler for a big codebase run dramatically faster on a machine with huge RAM?

I have seen some really slow build times in a big legacy codebase without proper assembly decomposition, running on a 2GB RAM machine. So, if I wanted to speed it up without a code overhaul, would a 16GB (or some other such huge number) RAM machine be radically faster, if the fairy IT department were to provide one? In other words, is RAM the major bottleneck for sufficiently large .NET projects, or are there other dominant issues?
Any input about the similar situation for building Java is also appreciated, just out of pure curiosity.
Performance does not improve with additional RAM once you have more RAM than the application uses. You are not likely to see any further improvement by moving to, say, 128GB of RAM.
We cannot guess the amount needed. Measure it by looking at Task Manager.
It certainly won't do you any harm...
2GB is pretty small for a dev machine; I use 16GB as a matter of course.
However, build times are going to be gated by file access sooner or later, so while you might get a little improvement, I suspect you won't be blown away by it. ([EDIT] as a commenter says, compilation is likely to be CPU-bound too.)
Have you looked into parallel builds? (E.g. see this SO question: Visual Studio 2010, how to build projects in parallel on multicore.)
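For what it's worth, MSBuild's /m switch is the command-line way to get project-level parallelism (the solution name here is hypothetical):

    msbuild BigSolution.sln /m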
Or, can you restructure your code base, moving some less frequently updated assemblies into a separate sln, and then reference these as DLLs? (This isn't a great idea in all cases, but sometimes it can be expedient.) From your description of the problem I'm guessing this is easier said than done, but this is how we've achieved good results in our code base.
The whole RAM issue is actually one of ROI (Return on Investment). The more RAM you add to a system, the less likely it is that the allocator will have to search for a memory location large enough to store an object of a particular size, and the faster it'll go; however, after a certain point it's so unlikely that the system will pick a location that is not large enough to store the object that it's pointless to go any higher. (Note that the read/write speed of the RAM plays a role in this as well.)
In summary: at 2GB of RAM you should definitely upgrade to something more like 8GB, or the suggested 16GB; going beyond that would be almost pointless, because the bottleneck will then come from the processor.
It's also probably a good idea to note the speed of the RAM, because the RAM itself can become the bottleneck: it can only handle XXXX MHz clock speed at most. Generally, though, 1600MHz is fine.

How do RAM Test Applications work? C# Example?

How exactly do RAM test applications work, and is it possible to write such using C# (Example)?
Most use low-level hardware access to write various bit patterns to memory, then read them back to ensure they are identical to the pattern written. If not, the RAM is probably faulty.
They are generally written in low-level languages (assembler) to access the RAM directly - this way, any caching (that could possibly affect the result of the test) is avoided.
It's certainly possible to write such an application in C# - but that would almost certainly prevent you from getting direct bit-level access to the memory, and hence could never be as thorough or reliable as low-level memory testers.
You basically write to the RAM, read it back and compare this with the expected result. You might want to test various patterns to detect different errors (always-0, always-1), and run multiple iterations to detect spurious errors.
You can do this in any language you like, as long as you have direct access to the memory you want to test. If you want to test physical RAM, you could use P/Invoke to reach out of the CLR.
However, this won't solve one specific problem if your computer is based on the von Neumann architecture: the program that tests the memory is located inside the very same memory it is testing. You would have to relocate the program to test all of it. The German magazine c't found a way around this issue for their RAM test: they ran the test from video memory. In practice, this is impossible with C#.
As discovered by some Linux guru trying to write a memtest program in C, any such program must be compiled to run on either bare hardware or a MMU-less OS to be effective.
I don't think any compiler for C# can do that.
You probably can't do as good of a job testing memory from a C# program in Windows as you could from a C or Assembly language program running with no OS, but you could still make something useful.
You're going to need to use the native Windows API (via DllImport and P/Invoke) to allocate some memory and lock it into RAM. Once you've done that, reading and writing patterns to the memory is pretty easy.
At the end of the test, you can tell the user how much of their memory you were able to test.
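To make that concrete, here is a minimal sketch of the write-pattern/read-back approach, using the real kernel32 functions VirtualAlloc, VirtualLock and VirtualFree via P/Invoke. The region size and bit patterns are arbitrary choices, and VirtualLock may fail unless the process working-set quota has been raised:

    using System;
    using System.Runtime.InteropServices;

    class RamTestSketch
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                          uint flAllocationType, uint flProtect);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool VirtualLock(IntPtr lpAddress, UIntPtr dwSize);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool VirtualFree(IntPtr lpAddress, UIntPtr dwSize, uint dwFreeType);

        const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, MEM_RELEASE = 0x8000;
        const uint PAGE_READWRITE = 0x04;

        static void Main()
        {
            const int size = 4 * 1024 * 1024; // 4MB test region (arbitrary)
            UIntPtr usize = (UIntPtr)(uint)size;

            IntPtr mem = VirtualAlloc(IntPtr.Zero, usize,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (mem == IntPtr.Zero) { Console.WriteLine("Allocation failed."); return; }

            // Try to pin the pages in physical RAM so we test RAM, not the pagefile.
            bool locked = VirtualLock(mem, usize);
            Console.WriteLine("Pages locked in RAM: " + locked);

            // Classic alternating patterns catch stuck-at-0 and stuck-at-1 bits.
            foreach (byte pattern in new byte[] { 0xAA, 0x55, 0x00, 0xFF })
            {
                for (int i = 0; i < size; i++)
                    Marshal.WriteByte(mem, i, pattern);
                for (int i = 0; i < size; i++)
                    if (Marshal.ReadByte(mem, i) != pattern)
                        Console.WriteLine("Mismatch at offset " + i);
            }

            VirtualFree(mem, UIntPtr.Zero, MEM_RELEASE);
            Console.WriteLine("Done.");
        }
    }

As the answers above note, this only exercises whatever pages Windows happens to give you, and the CPU cache sits between you and the DIMMs, so treat it as a demonstration rather than a substitute for a bare-metal tester like memtest86.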

Largest Heap used in a managed environment? (.net/java)

What is the largest heap you have personally used in a managed environment such as Java or .NET? What were some of the performance issues you ran into, and did you end up getting diminishing returns the larger the heap was?
I work on a 64-bit .Net system that typically uses 9-12 GB, and sometimes as much as 20GB. I have not seen any performance problems even while garbage collecting, and I have been looking hard as I was not expecting it to work so well.
An earlier version hung on to some objects for too long resulting in occasional GCs that freed up 3GB+. Even then, there was no noticeable impact on performance. The system is running on a 16-core server with 32GB RAM, which probably helps...
In .NET on 32-bit Windows, you can only really get to about 1.4GB of memory usage before things start getting really screwy (OutOfMemoryExceptions). This is due to a limitation of 32-bit Windows that restricts a single process to 2GB of user address space. There is a /3GB switch you can put in your boot.ini, but that will only take you a little bit further. If you want to use lots of memory, you should seriously consider running on a 64-bit version of Windows.
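As a rough illustration of that ceiling, a sketch like the following (chunk size arbitrary) will typically die with an OutOfMemoryException well short of 2GB when compiled as a 32-bit process, largely due to address-space fragmentation; on 64-bit it will keep going much longer:

    using System;
    using System.Collections.Generic;

    class AddressSpaceProbe
    {
        static void Main()
        {
            // Keep references so the GC can't reclaim anything; each block
            // needs 100MB of *contiguous* address space, which is what
            // runs out first in a 32-bit process.
            var blocks = new List<byte[]>();
            try
            {
                while (true)
                {
                    blocks.Add(new byte[100 * 1024 * 1024]); // 100MB per block
                    Console.WriteLine("Allocated: " + blocks.Count * 100 + " MB");
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("OOM after ~" + blocks.Count * 100 + " MB");
            }
        }
    }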
I currently have a production application with 6 GB of memory. You'll need a 64-bit box as well for the JVM to be able to address that much.
The garbage collector is really the only thing (that I've found so far) where performance degrades with size, and then only if you manually kick off a System.gc(), which forces the JVM to bring everything to a screeching halt as it traverses 6GB worth of objects. That takes a good 20 seconds, too. The default GC behavior does not do this, BTW; you have to be dumb enough to make it do that. JVM tuning is also worth researching at this size.
You can also find things like distributed and clustered JVMs; sorry, I don't have any good references, as I didn't look into this option too closely, although I did find mentions of larger installations.
I am unsure what you mean by heap, but if you mean memory used, I have used quite a bit: 2GB+. I have a web app that does image processing, and it requires loading two large scan files into memory to do analysis.
There were performance issues. Windows would swap out lots of RAM, and then that would create a lot of page faults. There was never any need for more than two images at a time, as all requests were against those images (I only allowed one session per image set at a time).
For instance, setting up the files for initial viewing would take about 5 seconds. Doing simple analysis and zooming would be fairly fast once in memory, on the order of 0.1 to 0.5 seconds.
I still had to optimize, so I ended up pre-parsing the files, chopping them into smaller pieces, and working only with the pieces that were required by the user at the time.
I have used from 2GB to 5GB of memory in Java, but usually when I get to more than 2GB I really start thinking about memory optimization. Diminishing returns can range from not optimizing when it's necessary (because you have a lot of memory) to not having memory available for the OS/disk caches (which can help your application overall).
For Java, I recommend watching your memory usage per generation over time. Do you create a lot of temporary objects, or do you have long-lasting objects that consume a lot of memory? A lot of memory optimization can be done once you know those things.
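The question covers .NET as well, and the same advice applies there; here is a minimal sketch, in C#, of watching per-generation activity (the allocation loop is just a stand-in for real work):

    using System;

    class GenWatch
    {
        static void Main()
        {
            // Churn out short-lived temporaries, then report how often each
            // generation has been collected. Frequent gen-0 collections mean
            // lots of temporaries; a rising gen-2 count points at long-lived
            // objects being kept around.
            for (int i = 0; i < 1000000; i++)
            {
                var tmp = new byte[128]; // short-lived garbage
                tmp[0] = 1;              // touch it so it isn't optimized away
            }
            Console.WriteLine("Gen0 collections: " + GC.CollectionCount(0));
            Console.WriteLine("Gen1 collections: " + GC.CollectionCount(1));
            Console.WriteLine("Gen2 collections: " + GC.CollectionCount(2));
            Console.WriteLine("Live heap bytes:  " + GC.GetTotalMemory(false));
        }
    }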
