I am researching the best way to perform a dump of the memory (RAM) of a Windows 7 machine via a high-level programming language, e.g. C#. I have done a fair amount of research into this, which I will summarize below.
•It appears that Microsoft cut off API access to physical memory some time back (\Device\PhysicalMemory)
•It now appears that to gain access to physical memory, your code must execute in kernel mode rather than user mode
•I have looked into how Windows dumps memory (small, large & kernel dumps) upon a system hang; this method is not suitable for my requirements
I wondered if anyone had a good take on how I should go about accessing the physical memory of a machine. My knowledge of low-level languages is scarce, as I am used to programming Windows applications. Suggestions I have received so far include writing a driver to access the memory and then calling it from a higher-level language.
Any help that any of the community can offer would be much appreciated! Thanks in advance.
Use procdump -ma. This captures the complete state of a single process. Leave debugging the OS to kernel-mode people.
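If you want to create such a dump programmatically from C# rather than shelling out to procdump, the `MiniDumpWriteDump` API in dbghelp.dll can be reached via P/Invoke. A minimal sketch (Windows only; the process you target and the output path are up to you, and dumping another process requires sufficient privileges):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

static class MiniDump
{
    // MINIDUMP_TYPE flag for a full-memory dump (the procdump -ma equivalent).
    const int MiniDumpWithFullMemory = 0x00000002;

    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(
        IntPtr hProcess, uint processId, SafeHandle hFile, int dumpType,
        IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

    // Writes a full dump of the given process to dumpPath.
    public static void Write(Process target, string dumpPath)
    {
        using (var fs = new FileStream(dumpPath, FileMode.Create))
        {
            if (!MiniDumpWriteDump(target.Handle, (uint)target.Id,
                    fs.SafeFileHandle, MiniDumpWithFullMemory,
                    IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
                throw new System.ComponentModel.Win32Exception(
                    Marshal.GetLastWin32Error());
        }
    }
}
```

Usage would look like `MiniDump.Write(Process.GetCurrentProcess(), "self.dmp");`. Note this still only dumps one process's virtual memory, not physical RAM; for the latter you are back to needing a kernel driver.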
From my rather short experience with kernel-mode debugging with WinDbg, I'd recommend starting with a second PC and a FireWire link. On the system under test, remove the hard drive and install the OS with symbols. Make sure you have a good base image of the OS with symbols, for the time when you break it.
On my short list of WinDbg books to read is "Programming the Windows Driver Model," Oney, MSPress.
This may help.
I'm a little confused about the memory limitations of an application. As far as I can see, if I write a C# application targeting x64, my program will have access to 8 TB of virtual address space. Is that space on the HD?
OS >= Windows 7 Professional supports 192 GB of RAM. So if I had a 192 GB system (unfortunately I don't), could I load just over 8.1 TB of data into memory (assuming no other processes were running)?
Is virtual memory only used when I have run out of available RAM? I'm assuming there is a performance implication associated with virtual memory vs using RAM?
Apologies if these appear to be stupid questions, but when it comes to memory management, I'm rather green.
Your question is actually several related questions. Taking each individually:
OS >= Windows 7 Professional supports 192 GB of RAM. So if I had a 192 GB system (unfortunately I don't), could I load just over 8.1 TB of data into memory (assuming no other processes were running)?
No, it would still be 8 TB. That is the maximum amount of addressable space, whether it is in RAM or elsewhere.
However, you could never have all 8 TB in use, even if you somehow unloaded Windows itself, as the OS needs to keep track of the space being used. In total, you could probably get to approximately 7 TB.
Is virtual memory only used when I have run out of available RAM?
No. If you have virtual memory turned on, the contents of RAM are typically mirrored onto your HDD as well (give or take a few seconds of lag). This allows the OS to unload something to make room if it feels the need, without having to write the data out first. Note that the OS keeps thorough track of this, so it always knows whether a given page is backed or not.
I'm assuming there is a performance implication associated with virtual memory vs using RAM?
That depends on your context. Every seek on the hard drive takes a computational eternity, though it is still a fraction of a second. Assuming your process isn't thrashing (repeatedly touching paged-out memory), you should not notice a significant performance hit outside of high-performance computing.
Apologies if these appear to be stupid questions, but when it comes to memory management, I'm rather green.
Your main problem is that you have some preconceived notions about how memory works that don't line up with reality. If you are really interested, you should look into how memory is used in a modern system.
For instance, most people's mental model is that a pointer points directly to a location in physical memory, since it is the fundamental structure. This isn't quite true. In fact, a pointer contains a value that is decoded into a location in the system's addressable space, which isn't always in RAM. This decoding process uses quite a few tricks that are interesting, but beyond the scope of this question.
Normally, you should build applications targeting Any CPU. The .NET loader then decides, depending on the platform it is running on, which version of the runtime will execute the application and what kind of native code it will be compiled into. There is no need to specify the platform unless you are using custom native components that will be loaded into your application's process. That process is then associated with a virtual address space; how it is mapped to physical memory is managed by the OS.
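You can observe the effect of this decision at run time. A small sketch that reports which flavor of process the loader actually gave you:

```csharp
using System;

class AddressSpaceInfo
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one,
        // which in turn determines the size of the virtual address space.
        Console.WriteLine("Pointer size:   {0} bytes", IntPtr.Size);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
        Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
    }
}
```

An Any CPU build will print different values depending on the OS it runs on; an explicit x64 build will always be an 8-byte-pointer process (or fail to load on a 32-bit OS).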
Possible Duplicate: How to specify the hardware your software needs?
How do you determine the system requirements of a user's PC in order for them to install and run your software?
I am aware of the obvious, such as Windows and .NET Framework [version number]. But how do you come up with the correct RAM, processor, and so on?
Is this just something you observe while debugging your app? Do you just check Resource Monitor and watch how much disk usage your app generates, or how much memory it takes up?
Are there any tools you would recommend to help determine system requirements for my applications?
I've searched for this but I have not been able to find much information.
More importantly, what about the Windows Experience Index? I've seen a few boxed apps in shops say you need a Windows Experience Index of N, but are there tools that determine what index my app requires to run?
Until you start doing stress testing and load testing, using or carefully simulating production volumes and diversity of data, you do not really have a high-quality build ready for mass deployment.
And when you do, experience (measurements and, if necessary, projection) from this testing will give you RAM, CPU and similar requirements for your customers.
Sure, Resource Monitor is a good way to see how much CPU and RAM your app consumes. But it all depends on the app you're making, and as the developer you know approximately how much power is needed under the hood.
If you're just developing standard WinForms/VCL apps that use standard native controls, you really shouldn't worry too much; 256 MB RAM and a 1 GHz processor should be enough, and this is usually what I put on my sysreq page.
For heavy 3D games you should probably start looking more into it, how you do that I can't tell you.
If you REALLY want exact hertz and bytes, you could use a VM, alter its specs, and see how your app behaves.
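If you want hard numbers from your own test runs, the Process class exposes the same counters Resource Monitor shows. A small sketch that records peak resource usage after exercising your heaviest scenario:

```csharp
using System;
using System.Diagnostics;

class SysReqProbe
{
    static void Main()
    {
        var p = Process.GetCurrentProcess();

        // ... exercise your application's heaviest scenario here ...

        p.Refresh();  // re-read the counters after the workload
        Console.WriteLine("Peak working set:  {0:N0} bytes", p.PeakWorkingSet64);
        Console.WriteLine("Peak paged memory: {0:N0} bytes", p.PeakPagedMemorySize64);
        Console.WriteLine("Total CPU time:    {0}", p.TotalProcessorTime);
    }
}
```

Run this inside your app (or point `Process.GetProcessById` at it from outside) on the weakest machine or VM configuration you intend to support, and the peaks give you a floor for your published requirements.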
Good afternoon,
I inherited some C# code from years ago and have refactored it a bit to be asynchronous.
To evaluate the impact of my changes on CPU performance, I used Process Explorer to watch, roughly, what my app was doing.
To my surprise, it appears to be doing what Process Explorer reports as I/O. In general, this relates to disk I/O or network I/O.
Based on what I can see of the code, I can't figure out an explicit call to either of those 2 I/O sources.
My question is: what is the best way to identify which section of code is causing the I/O? We use dotTrace from JetBrains to profile our application, but from what I can tell it only covers CPU and memory performance.
Thanks in advance for any pointers.
Regards,
Eric.
Process Monitor may be your answer. Refer to the following StackOverflow question for more information.
How can I profile file I/O?
Building on that answer, you may be able to search your solution for the filename of any commonly read or written files found with Process Monitor.
The stackshot method, also called random pausing, will find it, if it takes significant time.
If the I/O code is managed, you can load the symbols for the .NET Framework and set breakpoints in crucial functions (e.g. the FileStream constructors).
It involves some guesswork but can be informative if you succeed.
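If you can touch the code, another option in the same spirit is to wrap the suspect streams in a thin decorator that records a stack trace on every read or write, so the log tells you exactly which call sites do I/O. A sketch (the `LoggingStream` name is my own):

```csharp
using System;
using System.IO;

// Delegating Stream that reports who is doing the I/O.
class LoggingStream : Stream
{
    readonly Stream inner;
    public LoggingStream(Stream inner) { this.inner = inner; }

    public override int Read(byte[] buffer, int offset, int count)
    {
        Console.Error.WriteLine("READ {0} bytes\n{1}", count, Environment.StackTrace);
        return inner.Read(buffer, offset, count);
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        Console.Error.WriteLine("WRITE {0} bytes\n{1}", count, Environment.StackTrace);
        inner.Write(buffer, offset, count);
    }

    // Boilerplate delegation for the remaining abstract members.
    public override bool CanRead => inner.CanRead;
    public override bool CanSeek => inner.CanSeek;
    public override bool CanWrite => inner.CanWrite;
    public override long Length => inner.Length;
    public override long Position
    {
        get => inner.Position;
        set => inner.Position = value;
    }
    public override void Flush() => inner.Flush();
    public override long Seek(long offset, SeekOrigin origin) => inner.Seek(offset, origin);
    public override void SetLength(long value) => inner.SetLength(value);
}
```

Swap `new FileStream(...)` for `new LoggingStream(new FileStream(...))` at the construction sites you suspect, run the workload, and read the stack traces off stderr.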
In addition to Process Monitor, I find the Resource Monitor on Win7 (also available under 'Performance and Reliability' I think on Vista) very useful for diagnosing I/O-related slowdowns. Switch to the disk view and sort by Read/Write or Total (win7 only). Also keep an eye on the list of files that appear.
Is there any way my application can tell how much memory the user has, and whether the application is getting close to taking up a high percentage of it?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
For example, if you have 4 GB of memory, how much actual memory is given to applications? Can you configure this?
Is there any way my application can tell how much memory the user has, and whether the application is getting close to taking up a high percentage of it?
Yes, it's possible (see some of the other answers), but it's going to be very unlikely that your application really needs to care. What is it that you're doing where you think you need to be this sensitive to memory pressure?
Also, how do you know how much memory the machine gives to the OS, video cards, etc.?
Again, this should be possible using WMI calls, but the bigger question is why do you need to do this?
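For completeness, the WMI route from C# looks roughly like this (Windows only; requires a reference to System.Management.dll):

```csharp
using System;
using System.Management;  // add a reference to System.Management.dll

class MemoryInfo
{
    static void Main()
    {
        // Total physical memory installed, as reported by WMI.
        var searcher = new ManagementObjectSearcher(
            "SELECT TotalPhysicalMemory FROM Win32_ComputerSystem");
        foreach (ManagementObject mo in searcher.Get())
            Console.WriteLine("Total physical memory: {0} bytes",
                              mo["TotalPhysicalMemory"]);
    }
}
```

Other WMI classes (e.g. Win32_OperatingSystem for free physical memory, Win32_VideoController for video RAM) can be queried the same way, but the "why do you need this?" question above still stands.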
For example, if you have 4 GB of memory, how much actual memory is given to applications? Can you configure this?
No, this isn't a configurable value. When a .NET application starts up, the operating system sets up a virtual address space for it to use. This is handled by the OS, and there is no way to configure the algorithms used to determine how much physical memory backs it at any moment. Likewise, there is no way to configure how much of that memory the .NET runtime uses for the managed heap, stack, large object heap, etc.
I think I read the question a little differently, so hopefully this response isn't too off topic!
You can get a good overview of how much memory your application is consuming by using Windows Task Manager, or even better, Sysinternals Process Explorer. This is a quick way to review your processes at their peaks and see how they are behaving.
Out of the box, an x86 process only gets 2 GB of user-mode address space. This means any single 32-bit process on your machine can only consume up to 2 GB. In reality, you're likely to be able to consume only 1.5-1.8 GB before getting out-of-memory exceptions.
How much RAM your copy of Windows can actually address will depend on the Windows version and CPU architecture.
Using your example of 4 GB of RAM, the OS will give each application up to 2 GB of address space to play in and reserve the upper 2 GB of every process's address space for itself (that kernel half is shared across all processes).
Depending on the operating system you're running, you can tweak this: the /3GB switch in boot.ini adjusts the ratio to 3 GB for applications and 1 GB for the OS. This has some impact on the OS, so I'd review that impact first and see if you can live with the tradeoff (YMMV).
For a single application to take advantage of more than 2 GB, you're also going to need to set the large-address-aware bit in its PE image header. This question/answer has good info on this subject already.
The game changes under x64 architecture. :)
Some good reference information:
Memory Limits for Windows Releases
Virtual Address Space
I think you can use WMI to get all that information
If you don't wish to use WMI, you could use GlobalMemoryStatusEx():
Function Call:
http://www.pinvoke.net/default.aspx/kernel32/GlobalMemoryStatusEx.html
Return Data:
http://www.pinvoke.net/default.aspx/Structures/MEMORYSTATUSEX.html
MemoryLoad will give you a number between 0 and 100 representing the approximate percentage of physical memory in use, and TotalPhys will tell you the total amount of physical memory in bytes.
Memory is tricky because usable memory is a blend of physical (RAM) and virtual (page file) storage. The specific blend, and what goes where, is determined by the operating system. Luckily, this is somewhat configurable, as Windows allows you to stipulate how much virtual memory to use, if any.
Take note that not all of the memory in 32-bit Windows (XP & Vista) is available for use. Windows may report up to 4 GB installed, but only 3.1-3.2 GB is available for actual use by the operating system and applications. This has to do with legacy addressing issues, IIRC.
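Putting those two pinvoke.net pages together, a minimal sketch of the call (Windows only):

```csharp
using System;
using System.Runtime.InteropServices;

class MemoryStatus
{
    // Layout must match the native MEMORYSTATUSEX structure.
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORYSTATUSEX
    {
        public uint dwLength;
        public uint dwMemoryLoad;           // approx. % of physical memory in use
        public ulong ullTotalPhys;          // total physical memory, bytes
        public ulong ullAvailPhys;
        public ulong ullTotalPageFile;
        public ulong ullAvailPageFile;
        public ulong ullTotalVirtual;
        public ulong ullAvailVirtual;
        public ulong ullAvailExtendedVirtual;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);

    static void Main()
    {
        var status = new MEMORYSTATUSEX();
        // The API requires dwLength to be set before the call.
        status.dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
        if (GlobalMemoryStatusEx(ref status))
        {
            Console.WriteLine("Memory load:    {0}%", status.dwMemoryLoad);
            Console.WriteLine("Total physical: {0} bytes", status.ullTotalPhys);
            Console.WriteLine("Avail physical: {0} bytes", status.ullAvailPhys);
        }
    }
}
```

Forgetting to set dwLength is the classic mistake here; the call fails with ERROR_INVALID_PARAMETER if it doesn't match the structure size.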
Good Luck
Can standard pointers in .Net do this? Or does one need to resort to P/invoke?
Note that I'm not talking about object references; I'm talking about actual C# pointers in unsafe code.
C#, as a managed and protected runtime environment, does not allow low-level hardware access, and the memory locations associated with actual hardware are not available.
You'll need to use a port driver, or write your own in C++ or C using the proper Windows APIs, to access the memory-mapped I/O regions of interest. This code runs in a lower ring than C# programs are capable of.
This is why you don't see drivers written in C#, although I understand some people write the access routines in C++ and keep the main logic in C#. It's tricky, though: crashes and restarts become awkward to handle, not to mention synchronization and timing issues (which are somewhat more tractable in C++ at a lower ring, even though Windows is far from a real-time system).
-Adam
To expand on Adam's answer, you can't even perform memory-mapped I/O from a Win32 application without the cooperation of a kernel driver. All addresses a Win32 app sees are virtual addresses that have nothing to do with physical addresses.
You either need to write a kernel driver to do what you're talking about or have a driver installed that has an API that'll let you make requests for I/O against particular physical addresses (and such a driver would be a pretty big security hole waiting to happen, I'd imagine). I seem to recall that way back when some outfit had such a driver as part of a development kit to help port legacy DOS/Win16 or whatever device code to Win32. I don't remember its name or know if it's still around.
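From the C# side, talking to such a driver typically means opening its device object with CreateFile and sending requests with DeviceIoControl. A sketch of that pattern; the device name "MyMemDriver", the IOCTL code, and the request format are all placeholders for whatever your hypothetical kernel driver would actually define:

```csharp
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class PhysicalMemoryClient
{
    const uint GENERIC_READ = 0x80000000, GENERIC_WRITE = 0x40000000;
    const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(string fileName, uint access,
        uint share, IntPtr security, uint creation, uint flags, IntPtr template);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(SafeFileHandle device, uint ioControlCode,
        byte[] inBuffer, uint inSize, byte[] outBuffer, uint outSize,
        out uint bytesReturned, IntPtr overlapped);

    static void Main()
    {
        // Placeholder control code; a real driver defines its own via CTL_CODE.
        uint IOCTL_READ_PHYS = 0x00222000;

        // "\\.\MyMemDriver" is a hypothetical device name registered by the driver.
        using (var device = CreateFile("\\\\.\\MyMemDriver",
                   GENERIC_READ | GENERIC_WRITE, 0, IntPtr.Zero,
                   OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (device.IsInvalid)
                throw new System.ComponentModel.Win32Exception(
                    Marshal.GetLastWin32Error());

            // Ask the driver to read one page starting at a physical address.
            var request = BitConverter.GetBytes(0x1000UL);
            var output = new byte[4096];
            if (!DeviceIoControl(device, IOCTL_READ_PHYS, request,
                    (uint)request.Length, output, (uint)output.Length,
                    out uint returned, IntPtr.Zero))
                throw new System.ComponentModel.Win32Exception(
                    Marshal.GetLastWin32Error());
        }
    }
}
```

The kernel side still has to exist and validate every request carefully; as noted above, a driver that hands out arbitrary physical reads to any caller is a security hole in itself.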