I have to build a simulator in C#. The simulator should be able to run a second thread with a configurable CPU speed and a limited RAM size, e.g. 144 MHz and 50 MB.
Of course I know that a simulator can never be as accurate as the real hardware, but I am trying to get reasonably similar performance.
At the moment I'm thinking about creating a thread that I stop/sleep from time to time. Depending on the desired CPU speed, the simulator would adjust the sleep time of this thread and thereby simulate different CPU frequencies. To measure the achieved speed I thought about using performance counters. With this approach, however, I don't know how to limit the RAM size the thread can use.
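The stop/sleep idea can be sketched as a duty-cycle throttle: run the simulated work for a short slice, then sleep long enough that the ratio of busy time to total time matches targetFrequency / hostFrequency. This is only a sketch, not a real emulator; `ThrottledWorker` and the duty-cycle parameter are illustrative names, and the approach approximates average speed, not cycle accuracy:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ThrottledWorker
{
    // Fraction of wall-clock time the worker may burn CPU, e.g. 144 MHz
    // on a 3 GHz host is roughly 0.048.
    private readonly double _dutyCycle;

    public ThrottledWorker(double dutyCycle) => _dutyCycle = dutyCycle;

    public void Run(Action doSimulatedWork, CancellationToken token)
    {
        var sw = new Stopwatch();
        while (!token.IsCancellationRequested)
        {
            sw.Restart();
            doSimulatedWork();                    // one burst of guest work
            sw.Stop();

            // busy / (busy + sleep) == dutyCycle  =>  sleep = busy * (1/duty - 1)
            double sleepMs = sw.Elapsed.TotalMilliseconds * (1.0 / _dutyCycle - 1.0);
            if (sleepMs >= 1.0)
                Thread.Sleep((int)sleepMs);
        }
    }
}
```

Note that Thread.Sleep has roughly 15 ms granularity on Windows by default, so very short work slices throttle coarsely; keeping each slice around 10 ms or more makes the duty cycle more accurate.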
Do you have any ideas how to realize such a simulator?
Thanks in advance!!
Limiting memory is easy with virtual machines like VMware. You can change the CPU speed with overclocking tools, for example http://cpu.rightmark.org/products/rmclock.shtml
Good luck!
CPU speed limiting? You should check this; perhaps it will be useful (to some degree at least):
CPU Emulation and locking to a specific clock speed
If you are concerned with simulating an operating-system environment, then one answer would be to use a virtual machine environment where you can control memory, CPU parameters, etc.
Pausing/stopping the thread may help you simulate CPU frequency, but it is going to be terribly inaccurate: when you pause the thread it is de-scheduled, and it is then up to the operating system to re-schedule it at some "random" point in time, i.e. a point you have no control over.
As for limiting the memory, starting a new process that hosts your code and then limiting the memory of that process is an option, e.g.:
http://www.codeproject.com/KB/threads/Setting_Max_Memory_Limit.aspx
This will not really simulate overall OS memory limitations though.
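On Windows, the per-process memory cap described above is usually done with a job object: create a job with a process-memory limit and assign the target process to it. A hedged P/Invoke sketch (the Win32 calls and the JOB_OBJECT_LIMIT_PROCESS_MEMORY flag are the documented ones; error handling is omitted for brevity, and this is Windows-only):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class MemoryLimitedJob
{
    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;
    const int JobObjectExtendedLimitInformation = 9;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateJobObject(IntPtr attrs, string name);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetInformationJobObject(IntPtr job, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int infoLength);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);

    // Caps the committed memory of 'process' at 'bytes', e.g. 50 * 1024 * 1024.
    public static void Apply(Process process, ulong bytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)bytes;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, Marshal.SizeOf<JOBOBJECT_EXTENDED_LIMIT_INFORMATION>());
        AssignProcessToJobObject(job, process.Handle);
    }
}
```

Once the limit is hit, further allocations in that process fail (an OutOfMemoryException in managed code), which is at least close to what a 50 MB device would exhibit.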
A thread that sleeps to slow down the software execution of your guest opcodes? I think it works, but it will look a little weird, like fast-forward, pause, fast-forward, pause, and so on.
If you just want to slow a process down, try this: use the CPU's single-step feature and "debug" the process. You have to write a custom handler for the CPU single-stepping trap whose only job is a big loop of NOPs.
That gives you a fixed delay between each instruction.
Related
When coding an app for Windows (C++, C#), is there a way to lock a certain amount of CPU percentage, cores, or threads so they cannot be used by other programs or processes while said app is running? I know you can tinker with CPU priority and affinity in Task Manager, but I don't know whether that prevents other programs from 'stealing' CPU power when they need it.
The app is very CPU intensive and depends on 'real-time' operation, so when usage reaches 100%, the CPU cannot deal with all the load and errors occur.
So ideally the code would make sure that, if the app is currently working nicely and using 80% of the CPU, no other process would ever be allowed to take the remaining 20% (allowing only 10% usage, for example). I guess you could call that a 'safety overhead'? I hope I made myself clear.
I am trying to figure out if such a concept exists at all; I couldn't be sure of the keywords or find a thread to start pulling.
If that is not possible in Windows C++/C#, is it a thing in other environments?
Thanks!
To my knowledge there is no nice way of doing such things: what you are asking for is essentially a custom scheduler, and since schedulers are usually hard-coded into the operating system, I don't see much hope for you.
If real-time functionality is your main concern, I would recommend either using a real-time operating system or, if possible, optimizing your software so it doesn't need 80% of your CPU. You could also just upgrade your CPU (if money is not a concern) so it can handle your software.
Other operating systems offer ways to encourage the scheduler to favor your software (look up "nice value"), but that is similar to changing the priority in Task Manager (on steroids, however).
I also remember from my operating systems lecture that there are operating systems that allow the scheduler to be modified, this might be further than you wanted to go, but that is a possibility if you become desperate enough.
And as my last idea: if you have something really computationally intense, it is often doing the same steps over and over again. Assuming these steps are (partially) independent of each other, moving work from the CPU to the GPU can be a massive performance gain; in my experience (n=1), saving 50% is possible. From C++ with an Nvidia GPU you want to look up CUDA; for everything else you likely want OpenCL.
Is there a way to make an application, or a thread, run at a fixed rate?
I'm trying to do some deterministic simulations between networked clients and would like both machines (Windows) to run or process the data at a fixed, unchanging rate. Is this possible?
You can't make an existing application run at a particular speed (there may be VM-based solutions that normalize execution speed, but I'm not aware of any myself).
If you are writing your own code, the usual approach is to sleep between iterations. This is commonly done for (simple) games where there is less processing to do than CPU power available.
For example (inside an async method):
var next = DateTime.UtcNow;
while (true)
{
    ExecuteStep();
    next += TimeSpan.FromMilliseconds(16);  // fixed step, e.g. ~60 Hz
    var delay = next - DateTime.UtcNow;
    if (delay > TimeSpan.Zero) await Task.Delay(delay);
}
Note that precise synchronization is not possible with a consumer-grade OS (Windows/Linux/macOS) - you need an RTOS for precise millisecond-level timing.
For control purposes, I print all values in a collection to the debug console, with
Debug.WriteLine(...);
Since I'm also watching Task Manager for performance monitoring, I noticed that neither of the two CPU cores is under full load while printing. RAM usage also doesn't exceed about 50%.
Both cores have work to do, so it's not a problem of not having enough tasks to perform.
So my question is:
What component or something like that determines the maximum speed at which the debug output can be written?
I would guess that most of the time is spent in I/O operations, i.e. writing to the log file or the console (which might be even more expensive). So the CPU will spend the idle time waiting for the hard drive, the GPU and/or the additional memory operations.
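One way to test the I/O-bound guess is to compare per-line writes with a single batched write: each Debug.WriteLine call involves a round-trip through the trace listeners, so building the whole dump in a StringBuilder and writing it once is typically far faster. A minimal sketch (the `values` collection is a stand-in for the real one):

```csharp
using System.Diagnostics;
using System.Text;

var values = new[] { 1, 2, 3 };          // stand-in for the real collection

// Slow: one Debug.WriteLine call (and one listener round-trip) per item.
foreach (var v in values)
    Debug.WriteLine(v);

// Faster: build the whole dump in memory, then write it once.
var sb = new StringBuilder();
foreach (var v in values)
    sb.AppendLine(v.ToString());
Debug.WriteLine(sb.ToString());
```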
I have just finished a project, but I've got a question from my teacher: why does my program (with the same algorithm, same data, and same environment) finish in a different time on different runs?
Can anyone help me?
Example: one run takes 1.03 s,
but another takes 1.05 s (and sometimes it's faster, 1.01 s).
That happens because your program is not the only entity executing in the system and it does not get all the resources immediately at all times.
For this reason it's practically of little value to measure short execution times as they are going to vary quite noticeably. Instead, if you're interested in more accurate time measurements, you should execute your code many times and calculate the average time of all runs.
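A simple way to do that in C# is to time many runs with Stopwatch and average them. A sketch (the `workUnderTest` delegate stands in for your algorithm):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

static double AverageMilliseconds(Action workUnderTest, int runs = 10)
{
    var times = new double[runs];
    for (int i = 0; i < runs; i++)
    {
        var sw = Stopwatch.StartNew();   // high-resolution timer
        workUnderTest();
        sw.Stop();
        times[i] = sw.Elapsed.TotalMilliseconds;
    }
    return times.Average();
}
```

Discarding the first few runs as warm-up (JIT compilation, cold caches) makes the average noticeably more stable.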
Just an idea here, but could it be caused by changes in memory usage and CPU usage by background applications at different times? I mean, the time difference would only come from:
the memory usage of other applications;
physical conditions such as CPU heat (the resulting change in time is really small);
and the system clock: if you do random number generation, or any operation that uses the system clock in the background, that might create the change.
Hope this helps.
Cheers.
That's easy. You capture the system time difference using a counter that is imprecise because it uses system resources. Other programs run in parallel with yours, and some take priority over your code, causing temporary suspension of your thread (~20 ms, depending on OS settings). Even in DOS there is code that runs quasi-parallel with yours: given that only one thread is possible, your code is stalled while the time keeps ticking (it's governed by that code).
Because Windows is not a real-time operating system. Much other activity can happen while your program is executing, and the CPU shares its cycles with other running processes. Time can vary even more if your program needs to read from physical devices such as disks (databases too) or the network, because a physical resource can be busy serving other requests. Memory can change things as well: if there are page faults, your app needs to read pages back in from virtual memory, and you will see a performance decrease. And since you are using C#, timing can differ noticeably between the first execution and later ones in the same process, because the code is JIT-compiled, i.e. compiled from intermediate code to machine code the first time it is seen; after that the compiled form is reused, which is dramatically faster.
The assumption is wrong: the environment does not stay the same. The resources available to your program depend on many things, e.g. CPU and memory utilization by other processes (such as background processes), and hard disk and/or network utilization due to other processes. Even if no other processes are running, your program itself changes the internal state of the caches.
In "real world" performance scenarios it is not uncommon to see fluctuations of +/- 20% after "warm up". That is: measure 10 times in a row as "warm up" and discard the results. Measure 10 times more and collect the results. --> +/- 20% is quite common. If you do not warm up you might even see differences several orders of magnitude due to "cold" caches.
Conclusion: your program is very small and uses very little resources and it does not benefit from durable cache mechanisms.
System.Environment.ProcessorCount shows me N processors (N = 8 in my case), which I want to make use of. The problem is that the Windows Resource Monitor says that 4 of my CPUs are 'parked', and the 8 threads I start just spread across the 4 unparked CPUs.
Is there a way to use the parked CPUs, too?
When Windows "parks" a CPU core, it means that there is not enough work for that core to do so it puts that core in a low-power state. In order to "unpark" the CPU, you just have to create enough work.
If you are starting 8 threads and Windows isn't unparking the CPUs, the threads probably are doing I/O, blocking, or completing too quickly. If you post what your threads are doing, maybe somebody can explain why they're not running on the parked cores.
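To rule out blocking, you can start threads that are purely CPU-bound; work like the spin loop below should drive all logical cores toward 100% and typically makes Windows unpark them. A sketch (the iteration count is arbitrary):

```csharp
using System;
using System.Threading;

int cores = Environment.ProcessorCount;
var threads = new Thread[cores];
for (int i = 0; i < cores; i++)
{
    threads[i] = new Thread(() =>
    {
        double x = 1.0;
        for (long n = 0; n < 50_000_000; n++)   // pure CPU work: no I/O, no locks
            x = Math.Sqrt(x + n);
    });
    threads[i].IsBackground = true;
    threads[i].Start();
}
foreach (var t in threads) t.Join();   // watch Resource Monitor while this runs
```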
Usually, you should be able to do it this way:
Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0x00FF;
see documentation for it here:
http://msdn.microsoft.com/en-us/library/system.diagnostics.process.processoraffinity.aspx
but it also says that, by default, your process is assigned to all cores.
On the other hand, you could try ProcessThread.ProcessorAffinity and try to set it manually (if you want to force each thread to use another core).
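A hedged sketch of that per-thread variant: managed Thread objects don't map one-to-one onto OS threads, so the affinity has to be set on the ProcessThread objects (here simply on every thread of the process, round-robin over 8 assumed cores, which is a blunt illustration rather than a recommendation):

```csharp
using System;
using System.Diagnostics;

if (!OperatingSystem.IsWindows())
{
    Console.WriteLine("ProcessThread.ProcessorAffinity is Windows-only.");
    return;
}

var process = Process.GetCurrentProcess();
int i = 0;
foreach (ProcessThread pt in process.Threads)
{
    // Pin each OS thread to one core via a single-bit affinity mask.
    pt.ProcessorAffinity = (IntPtr)(1 << (i % 8));
    i++;
}
```

Note that core parking is handled by the Windows scheduler, so forcing affinity may just make Windows park different cores.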
Win7/2K8R2 won't unpark cores until the other ones are saturated or near saturation.
The whole point of parking cores is to consolidate work. It's more power efficient to use 4 cores at 80% than 8 cores at 40%. Also, the performance difference should be almost non-existent.
Also, depending on how much data is shared, consolidating the work will actually be faster because there would be less sync overhead because there are fewer hardware threads involved. Recent data changes from one thread will be more likely in cache.
So, common worst case is about same performance and less power used and common best case is better performance and less power used.
The parking is not controlled by the CPU affinity setting of your process, it is done automatically by the Windows CPU Scheduler. Adjustments to your CPU affinity can perhaps force utilization of certain cores, but then Windows will just park different cores. The parking is turned on or off dynamically, very quickly, in accordance with system load. It is actually surprisingly aggressive by default (maybe too much so on some platforms). You can watch it in the Resource Monitor, as you saw.
Setting your own CPU affinity is something you should do with extreme caution. You must consider HyperThreaded cores, or in the case of AMD Bulldozer, paired cores that share computational units (their HyperThreading without being HyperThreading ;p). You don't want to end up 'stuck' on a Hyper-Threaded core that offers a mere fraction of the performance of a real core. The CPU scheduler is aware of such things, so usually the affinity is best left to it -- unless you know what you're doing, and have checked that system's CPU.
However, you can enable/disable or tweak CPU Parking very easily, without rebooting. I wrote a HOW-TO, complete with a simple GUI, here: How to Enable/Disable or Tweak CPU Parking Without a Reboot, and without Registry Edits
It also includes more information about CPU Parking, and how to tweak it using PowerCfg.exe. You can actually make the option show up in the standard Advanced Power Profile settings in Windows, but it takes some tweaking I won't get into here.
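For reference, the PowerCfg route works roughly like this: the core-parking minimum is a hidden setting ("Processor performance core parking min cores", CPMINCORES) under the processor subgroup. The GUID below is the commonly documented one, but treat this as a sketch and verify it with `powercfg /q` on your own machine (Windows command prompt, administrator):

```shell
:: Unhide the core-parking setting in Advanced Power Options
powercfg -attributes SUB_PROCESSOR 0cc5b647-c1df-4637-891a-dec35c318583 -ATTRIB_HIDE

:: Require 100% of cores to stay unparked on the active scheme (AC power)
powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR 0cc5b647-c1df-4637-891a-dec35c318583 100
powercfg -setactive SCHEME_CURRENT
```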