OK, I've got my game nearing completion. Everything is working in it and it runs fairly well on my computer (using around 99 MB of RAM). But when I run it on my Xbox, I tend to get occasional jitters of the main player character. I do have explosions and billboard effects in my game, but I'm doing all of that rendering on the Xbox GPU (that was originally causing the jitters when explosions occurred, but not anymore).
The jitters are random as well: they don't happen when I'm spawning large numbers of units or performing lots of actions. I'm at a loss as to why this is happening. Any ideas?
P.S. The game does have multi-threading integrated into it: update on one thread, rendering on another.
It sounds like the garbage collector is causing your jitters. On Xbox it kicks in every time another 1 MB of data has been allocated. The amount of time a collection takes depends on how many live references there are in your program.
Use the XNA Framework Remote Performance Monitor for Xbox 360 to tell if you have a garbage collection problem. Read How to tell if your garbage collection is too slow by Shawn Hargreaves.
If you do have a garbage collection problem, you'll need a profiler that can determine which objects are generating the garbage. The CLR Profiler for the .NET Framework 2.0 is one option. It's not supported in XNA 4.0 by default, but Dave on Crappy Coding has a workaround. For strategies to solve your garbage collection problem read Twin paths to garbage collection Nirvana by Shawn Hargreaves.
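If allocation rate turns out to be the problem, the usual fix is to eliminate per-frame allocations. A hedged illustration (the field names are hypothetical): HUD text built with string concatenation is a classic steady garbage source in XNA games.

```csharp
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Assumed fields of the Game class:
SpriteBatch spriteBatch;
SpriteFont font;
Vector2 hudPos;
int score;
readonly StringBuilder scoreText = new StringBuilder(32);

void DrawHud()
{
    // "Score: " + score builds a new string every frame -> steady garbage:
    // spriteBatch.DrawString(font, "Score: " + score, hudPos, Color.White);

    // Reusing one StringBuilder avoids the string allocation
    // (Append(int) may still allocate on the compact framework):
    scoreText.Length = 0;
    scoreText.Append("Score: ").Append(score);
    spriteBatch.DrawString(font, scoreText, hudPos, Color.White);
}
```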
Update: The CLR Profiler for the .NET Framework 4.0 is now available.
It sounds like your rendering thread is just sitting there waiting for the update thread to finish what it's doing, which causes the "jitter". Perhaps put in some code that logs how long one thread has to wait before being allowed access to the shared state, to find out whether this really is the issue.
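A minimal sketch of that kind of instrumentation, assuming the two threads synchronize on a shared lock (all names here are hypothetical):

```csharp
using System.Diagnostics;

// Time how long the render thread waits for the shared-state lock each frame.
readonly object stateLock = new object();
readonly Stopwatch waitTimer = new Stopwatch();

void RenderFrame()
{
    waitTimer.Reset();
    waitTimer.Start();
    lock (stateLock)
    {
        long waitedMs = waitTimer.ElapsedMilliseconds;
        if (waitedMs > 5)
            Debug.WriteLine("Render thread blocked for " + waitedMs + " ms");

        // ... draw from the shared state ...
    }
}
```

If the logged waits line up with the visible jitters, the two threads are contending; if not, look elsewhere (e.g. garbage collection).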
I'm writing a DSP application in C# (basically a multitrack editor). I've been profiling it for quite some time on different machines and I've noticed some 'curious' things.
On my home machine, the first run of the playback loop takes up about 50%-60% of the available time (I assume due to the JIT doing its job); for subsequent loops it goes down to a steady 5% consumption. The problem is that on a slower computer the first run takes up more than the available time, which interrupts the playback and garbles the output audio, and that is unacceptable. After that first run, it goes down to 8%-10% consumption.
Even after the first run, the application keeps calling some time-consuming routines from time to time (every 2 seconds, more or less), which causes the steady 5% consumption to show very short peaks of 20%-25%. I've noticed that if I let the application run for a while these peaks also go down to 7%-10%. (I'm not sure if that's due to the JIT recompiling those portions of code.)
So I have a serious problem with the JIT. While the application behaves nicely even on very slow machines, these 'compiling storms' are going to be a big problem. I'm trying to figure out how to resolve the issue, and I've come up with an idea: mark all the performance-sensitive routines with an attribute that tells the application to 'squeeze' them beforehand during start-up, so they're fully optimized by the time they're really needed. But this is only an idea (and I don't like it much either), and I wonder if there's a better solution to the whole problem.
I'd like to hear what you guys think.
(NGENing the application is not an option; I like and want all the JIT optimizations I can get.)
EDIT:
Memory consumption and garbage-collection pauses are not an issue; I'm using object pools, and the maximum memory peak during playback is 304 KB.
You can trigger the JIT compiler to compile your entire set of assemblies during your application's initialization routine using the PrepareMethod method (without having to use NGen).
This solution is described in more detail here: Forcing JIT Compilation During Runtime.
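A hedged sketch of that approach using RuntimeHelpers.PrepareMethod (the filtering here is the minimum needed to skip methods the JIT cannot compile without a concrete instantiation; P/Invoke methods may need extra handling):

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static class JitWarmup
{
    // Force-JIT every compilable method in the given assembly at startup.
    public static void PrepareAssembly(Assembly assembly)
    {
        foreach (Type type in assembly.GetTypes())
        {
            if (type.ContainsGenericParameters)
                continue; // open generic types need a concrete instantiation

            foreach (MethodInfo method in type.GetMethods(
                BindingFlags.DeclaredOnly | BindingFlags.Public |
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Static))
            {
                if (method.IsAbstract || method.ContainsGenericParameters)
                    continue;

                RuntimeHelpers.PrepareMethod(method.MethodHandle);
            }
        }
    }
}
```

Called once during start-up, e.g. JitWarmup.PrepareAssembly(Assembly.GetExecutingAssembly()), this moves the JIT cost out of the first playback loop.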
The initial slowness indeed sounds like Fusion+JIT, which would be helped by ILMerge (for Fusion) and NGEN (for JIT); you could always play a silent track through the system at startup so that all this hard work happens without the user noticing any distortion.
NGEN is a good option; is there a reason you can't use it?
The issues you mention after the initial load do not sound like they are related to JIT. Perhaps garbage collection.
Have you tried profiling? Both CPU and memory (collections)?
As Marc mentioned, the ongoing spikes do not sound like JIT issues. Other things to look for:
Garbage collection - are you allocating memory during your audio processing? If you're creating a lot of garbage, or even objects which survive a Gen 0 collection, this might cause noticeable spikes. It sounds like you are doing some kind of pre-allocation, but watch out for hidden allocations in library code (even a foreach loop can allocate! See the sketch after this list.)
Denormals. There is an issue with certain types of processors when dealing with very small floating-point numbers, which can cause CPU spikes. See http://www.musicdsp.org/files/denormal.pdf for details.
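On the foreach point, a hedged illustration: iterating a List&lt;T&gt; through its interface type boxes the struct enumerator, allocating on every call, while iterating the concrete list allocates nothing.

```csharp
using System.Collections.Generic;

void Consume(List<float> samples, IEnumerable<float> asInterface)
{
    foreach (float s in samples) { /* ... */ }     // struct enumerator: no allocation
    foreach (float s in asInterface) { /* ... */ } // boxed enumerator: garbage per call
}
```

And for denormals, a common mitigation is to mix a tiny constant into any feedback path so its state never decays into the denormal range. A minimal sketch (hypothetical one-pole low-pass; the constant is inaudible but well above the denormal threshold):

```csharp
const float AntiDenormal = 1e-25f;

float state; // filter feedback state

float LowPass(float input, float coeff)
{
    // Without the offset, 'state' can decay toward zero through the
    // denormal range, where some CPUs slow down by orders of magnitude.
    state = input * (1f - coeff) + state * coeff + AntiDenormal;
    return state;
}
```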
Edit:
Even if you don't want to use NGen, at least compare against an NGen'd version so you can see what difference JITting makes.
If you believe you are being impacted by JIT, then precompile your app with NGEN and run the tests again. There is no JIT overhead in code that has been compiled by NGEN. If you still see spikes in the NGEN'd app, then you know they are not caused by JIT.
I have a C# server developed in both Visual Studio 2010 and MonoDevelop 2.8, targeting .NET Framework 4.0.
It looks like this server behaves much better (in terms of scalability) on Windows than on Linux.
I tested the server's scalability on native Windows (12 physical cores), and on 8- and 12-core Windows and Ubuntu virtual machines, using Apache's ab tool.
The Windows response time is pretty much flat. It only starts climbing when the concurrency level approaches/exceeds the number of cores.
For some reason the Linux response times are much worse. They grow pretty much linearly starting from concurrency level 5. Also, the 8- and 12-core Linux VMs behave similarly.
So my question is: why does it perform worse on Linux, and how can I fix that?
Please take a look at the attached graph; it shows the average time to fulfill 75% of the requests as a function of request concurrency (the range bars are set at 50% and 100%).
I have a feeling that this might be due to Mono's garbage collector. I tried playing around with the GC settings but had no success. Any suggestions?
Some additional background information: the server is based on an HTTP listener that quickly parses the requests and queues them on a thread pool. The thread pool takes care of replying to those requests with some intensive math (computing an answer takes ~10 seconds).
You need to isolate where the problem is first. Start by monitoring your memory usage with HeapShot. If it's not memory, then profile your code to pinpoint the time consuming methods.
This page, Performance Tips: Writing better performing .NET and Mono applications, contains some useful information including using the mono profiler.
Excessive string manipulation and boxing are often 'hidden' culprits in code that doesn't scale well; the sketch below shows two classic cases.
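A hedged illustration ('items' is a hypothetical input array):

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Text;

static string ConcatSlow(string[] items)
{
    string s = "";
    foreach (string item in items)
        s += item;               // allocates a brand-new string per iteration
    return s;
}

static string ConcatFast(string[] items)
{
    var sb = new StringBuilder();
    foreach (string item in items)
        sb.Append(item);         // reuses one growing buffer
    return sb.ToString();
}

static void Boxing()
{
    var untyped = new ArrayList();
    untyped.Add(42);             // boxes the int: one heap object per Add

    var typed = new List<int>();
    typed.Add(42);               // no boxing
}
```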
Try the sgen garbage collector (and for that, Mono 2.11.x is recommended); you can select it with the --gc=sgen option of the mono runtime. Look at the mono man page for more details.
I don't believe it's because of the GC. AFAIK the GC's side effects should be more or less evenly distributed across the threads.
My blind guess is that you can fix it by playing with the ThreadPool.SetMinThreads/SetMaxThreads API.
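A minimal sketch of that tuning (the numbers are placeholders to measure against, not a recommendation):

```csharp
using System;
using System.Threading;

static class PoolTuning
{
    public static void Apply()
    {
        int workers, iocp;
        ThreadPool.GetMinThreads(out workers, out iocp);

        // Keep more worker threads warm so bursts of ~10s CPU-bound requests
        // don't wait on the pool's slow thread-injection heuristic.
        ThreadPool.SetMinThreads(Environment.ProcessorCount * 2, iocp);
    }
}
```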
I would strongly recommend you do some profiling of the code with regard to how long individual methods run. It's quite likely that you're seeing some locking or similar multi-threading difficulties that are not handled perfectly by Mono. CPU and RAM usage figures would also help.
I believe this may be the same problem that we tracked down, involving the thread pool's starting behavior for new threads as well as a bug in the Mono implementation of SetMinThreads. Please see my answer on that thread for more information: https://stackoverflow.com/a/12371795/1663096
If your code throws a lot of exceptions, then Mono is 10x faster than .NET.
I've got a strange behavior here:
I get a massive memory leak in production running a WPF application on a DLOG terminal (Windows Embedded Standard SP1); the same application behaves perfectly fine if I run it locally on a normal desktop (Win7 Professional).
After many unsuccessful attempts to find the problem, I put one of those terminals directly beside my monitor, installed the ANTS Memory Profiler, and did a one-hour test run simulating user operations on both the terminal and my development PC.
The result is that, for some strange reason, the embedded system piles up a huge number of WeakReference and EffectiveValueEntry[] objects.
Here are some screenshots.
Development (PC): [screenshot]
And the terminal: [screenshot]
Just look at the class list...
Has anyone seen something like this before and are there known solutions to this?
Where can I get help?
(P.S. The terminals were installed with images prepared for .NET 4.)
PPS: For the close-voters: I think the question is clear: how can I fix this?
You could argue whether this is an IT/OS problem vs. a programming problem, but I think if I posted this on Server Fault it would get an off-topic close in no time...
UPDATE:
I was able to find a big portion of the problem, but the fix feels a bit like C++:
I use a ViewModel-like Items class for a WPF list that provides (among other things) an ICommand (RelayCommand pattern). The items were created on the fly in the getter of a ViewModel property for the view, and it seems the application/GC never freed those unused commands, or the subscriptions to their CanExecuteChanged events; the memory profiler shows them as "held by a weak reference". I changed my code to reuse those item ViewModels and to Dispose them, setting every used property to null in Dispose as clean-up. As I said: it feels like "delete" in the old C++ days.
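A minimal sketch of that clean-up (RelayCommand is the usual hand-rolled MVVM helper assumed to exist in this codebase, not a framework type):

```csharp
using System;
using System.Windows.Input;

public class ItemViewModel : IDisposable
{
    public ICommand SelectCommand { get; private set; }

    public ItemViewModel(Action onSelect)
    {
        SelectCommand = new RelayCommand(onSelect);
    }

    public void Dispose()
    {
        // Drop the command (and with it the CanExecuteChanged subscriptions)
        // so nothing keeps the item reachable once the view lets go of it.
        SelectCommand = null;
    }
}
```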
On top of this I force a GC.Collect every 30 minutes (yeah, I know you never should, but I have no other solution so far).
With this setup the application has run for 6+ hours without problems, but it doesn't feel right.
I cannot understand why those WeakReferences are not reclaimed the way they are on my desktop machine...
Any thoughts on this? Please!
UPDATE:
I am still not able to pin down this problem but I see a strange behavior:
If I use PC-Anywhere to observe the operation of my software on one of the terminals, the problem goes away!
Even after running 8 hours straight, the software runs as it should; it will even free memory. (I put a little memory-counter display on the main screen. Say I connect to the terminal and see that memory is low: after waiting a few minutes, the memory is reclaimed.)
So I think Devin (one answer below) has a lead in the right direction: something in the remote-control software unblocks the finalizer thread, or whatever is blocking the GC, be it the simulated keyboard/mouse or something else.
Any thoughts on this?
We had a (somewhat) similar issue running my app on a tablet. The memory would be reclaimed when run on a desktop, but not when run on a tablet or some other device that used a PC input panel. The problem was that the finalization queue was getting stuck: the COM object finalizer was waiting to run something on the main thread, which didn't have a message loop.
The solution was to find an adequate time to invoke Application.DoEvents(). We had a method that would be called intermittently, and we invoked DoEvents on every 10th call. I don't know if this is the same issue you are having, but maybe it can shed some light.
EDIT: I do need to make it clear that, in general, calling DoEvents() is a bad idea. It works in that case because there isn't any UI on that thread, or anything else happening that those events could interfere with.
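A hedged sketch of that workaround (OnPeriodicWork stands in for whatever method is already being called intermittently):

```csharp
using System.Windows.Forms;

static class FinalizerUnblocker
{
    static int _calls;

    public static void OnPeriodicWork()
    {
        // Pump the message queue occasionally so finalizers that marshal
        // to this thread can complete. See the caveat above: use with care.
        if (++_calls % 10 == 0)
            Application.DoEvents();
    }
}
```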
From the screenshots it is interesting to see that the LOH grows while its used space does not grow much. The free space on the LOH is growing a lot, which indicates memory fragmentation due to pinned objects. This looks like a stuck finalizer thread, which prevents the cleanup of managed objects. You should take a memory dump and check which method the finalizer thread is stuck in. You can do this quite easily with WinDbg (load SOS and inspect the finalizer thread's stack with !threads and !clrstack).
I know there are tons of threads about this, and I've read a few of them.
I'm wondering if, in my case, it is correct to call GC.Collect().
I have a server for an MMORPG; in production it is online day and night, and the server is restarted every other day to push changes to the production codebase. Every twenty minutes the server pauses all other threads and serializes the current game state, which usually takes 0.5 to 4 seconds.
Would it be a good idea to GC.Collect(); after serialization?
The server is, obviously, constantly creating and destroying game items.
Would I see a noticeable gain in performance or memory optimization/usage?
Should I not manually collect?
I've read about how collecting can be bad if done at the wrong moments or too frequently, but I'm thinking these saves are both a good moment to collect and not that frequent.
The server runs on .NET Framework 4.0.
Update in answer to a comment:
We are randomly experiencing server freezes: sometimes, unexpectedly, the server's memory usage rises steadily until the server takes far too long to handle any network operation. I'm therefore considering a lot of different approaches to solve the issue, and this is one of them.
The garbage collector knows best when to run, and you shouldn't force it.
It will not improve performance or memory usage: the CLR triggers a collection of objects that are no longer referenced whenever there is a need to do so.
Answer to the updated part:
Forcing a collection is not a good solution to the problem. You should rather look a bit deeper into your code to find out what is wrong. If memory usage grows unexpectedly, you might have an issue with unmanaged resources that are not properly released, or even "leaky" managed code.
One more thing: I would be surprised if calling GC.Collect fixed the problem.
"Every twenty minutes the server pauses all other threads and serializes the current game state, which usually takes 0.5 to 4 seconds."
If all your threads are suspended already anyway, you might as well run the garbage collection, since it should be fairly fast at that point; a sketch follows. I suspect doing this will only mask your real problem, though, not actually solve it.
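A minimal sketch of that placement (Pause/Serialize/Resume are the game's own hypothetical methods):

```csharp
using System;

void SaveGameState()
{
    PauseAllGameThreads();
    SerializeGameState();

    // The world is already stopped, so a blocking full collection
    // here is comparatively cheap.
    GC.Collect();
    GC.WaitForPendingFinalizers(); // let finalizers run while still paused
    GC.Collect();                  // reclaim objects those finalizers freed

    ResumeAllGameThreads();
}
```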
"We are randomly experiencing server freezes: sometimes, unexpectedly, the server's memory usage rises steadily until the server takes far too long to handle any network operation. I'm therefore considering a lot of different approaches to solve the issue, and this is one of them."
This sounds more like you actually are still referencing all these objects that use the memory; if you weren't, the GC would run under the memory pressure and try to release them. You might be looking at an actual bug in your production code (i.e. objects that are still subscribed to events or otherwise referenced when they shouldn't be) rather than something you can fix by manually taking out the garbage. The sketch below shows the event case.
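Illustrative only (hypothetical types): a subscriber that never unsubscribes stays reachable through the publisher's event list, so the GC can never reclaim it.

```csharp
using System;

class World
{
    public event EventHandler Tick;
}

class Player
{
    public Player(World world)
    {
        world.Tick += OnTick;   // the world now references this player
    }

    public void Despawn(World world)
    {
        world.Tick -= OnTick;   // without this line the player leaks
    }

    void OnTick(object sender, EventArgs e) { /* react to the tick */ }
}
```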
If possible in this scenario, you should run a performance analysis to see where your bottlenecks are and which part of your code is causing the brunt of the memory allocations.
Could the memory increase be an "attack" by a player with a fake/modified game-client? Is a lot of memory allocated by the server when it accepts a new client connection? Does the server handle bogus incoming data well?
I've been experiencing a high degree of flicker and UI lag in a small application I've developed to test a component that I've written for one of our applications. Because the flicker and lag were taking place during idle time (when there should, seriously, be nothing going on), I decided to do some investigating. I noticed a few threads in the Threads window that I wasn't aware of (not entirely unexpected), but what caught my eye was that one of the threads was set to Highest priority. This thread exists at the time Main() is called, even before any of my code executes. I've discovered that this thread appears to be present in every .NET application I write, even console applications.
Being the daring soul that I am, I decided to freeze the thread and see what happened. The flickering did indeed stop, but I experienced some oddness when it came to doing database interaction (I'm using SQL CE 3.5 SP1). My thought was that this might be the thread that the database is actually running on, but considering it's started at the time the application loads (before any references to the DB) and is present in other, non-database applications, I'm inclined to believe this isn't the case.
Because this thread (like a few others) shows up with no data in the Location column and no call stack listed if I switch to it in the debugger while paused, I tried matching the StartAddress property through GetCurrentProcess().Threads for the corresponding thread, but it falls outside the address ranges of all the currently loaded modules.
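A rough reproduction of that check, for anyone who wants to repeat it (this just prints each native thread's start address next to the loaded modules' address ranges):

```csharp
using System;
using System.Diagnostics;

class ThreadDump
{
    static void Main()
    {
        Process self = Process.GetCurrentProcess();

        foreach (ProcessThread t in self.Threads)
            Console.WriteLine("Thread {0}: start 0x{1:X}",
                t.Id, t.StartAddress.ToInt64());

        foreach (ProcessModule m in self.Modules)
            Console.WriteLine("{0}: 0x{1:X}-0x{2:X}", m.ModuleName,
                m.BaseAddress.ToInt64(),
                m.BaseAddress.ToInt64() + m.ModuleMemorySize);
    }
}
```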
Does anyone have any idea what this thread is, or how I might find out?
Edit
After doing some digging, it looks like the StartAddress is in kernel32.dll (based on nearby memory contents). This leads me to think that it's just the standard system function used to start the thread, according to this page, which basically puts me back at square one as far as determining where this thread actually comes from. This is further confirmed by the fact that ALL of the threads in this list have the same StartAddress value, which makes me wonder what the purpose of that property is...?
Edit 2
Process Explorer led me to an actually meaningful start address. It looks like it's mscorwks.dll!CreateApplicationContext+0xbbef. This DLL is in %WINDOWS%\Microsoft.NET\Framework\v2.0.50, so it's clearly a runtime assembly. I'm still not sure why:
1. it's Highest priority
2. it appears to be causing hiccups in my application
You could try using Sysinternals Process Explorer; it lets you dig in pretty deep. Right-click on the process to access Properties, then open the "Threads" tab. In there, you can see each thread's stack and module.
EDIT:
After asking around some, it seems that your "Highest"-priority thread is the finalizer thread, which runs as a result of garbage collection. I still don't have a good reason why it would constantly keep running. Maybe you have some funky object-lifetime behavior going on in your process?
I'm not sure what this is, but if you turn on unmanaged debugging, and set up Visual Studio with the Windows symbol server, you might get some more clues.
Might be the garbage collector thread. I noticed it too when I was once investigating a finalizer-related bug. Perhaps your system memory is low and the GC is trying to collect all the time? That was the case in the previously mentioned bug too. I couldn't reproduce it on my machine, but a co-worker of mine had a machine with less RAM where it would reappear like clockwork.