Why does Visual Studio stall while debugging? - C#

Sometimes, while I am debugging a C# application, I will hit a breakpoint, and when I try to continue, step, or step into, it just does nothing. The yellow line highlighting the current line goes away, but execution never reaches the next line. The app is still frozen as if I were on a breakpoint, and I can do nothing but hit the stop-debugging button and restart. This doesn't happen all the time, but once it starts on an app, it seems to always happen for that app from then on. I have found that adding the following code just before the class declaration "fixes" the problem for that app, but I am very curious as to why this is happening.
[System.Diagnostics.DebuggerDisplay("Form1")]
Additional details:
I have not noticed any kind of pattern as to what the particular line does when it freezes. Most of the apps I write use threading, so there is a decent chance this is happening within a thread every time.

I've seen stalling problems where the debugger is trying to evaluate the variables shown in the Autos/Locals windows. If the evaluation is complicated, it can cause significant stalls.
You can turn the auto-evaluation off through Tools | Options, and it does make a big difference.
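For example (a hypothetical sketch, not code from the question), a property with a slow or blocking getter is enough to stall the variable windows, because they evaluate the getters of any object in scope:
using System.Threading;

class Order
{
    // If an Order is in scope at a breakpoint, the Autos/Locals/Watch
    // windows evaluate this getter. A slow or blocking body like this
    // (standing in for a DB call, a lock, a cross-thread wait, ...)
    // stalls the debugger for the duration.
    public decimal Total
    {
        get
        {
            Thread.Sleep(30000);
            return 42m;
        }
    }
}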

I have come across this kind of behavior too, though this is my first time.
I got past the problem in two ways:
Your way of applying the attribute, [System.Diagnostics.DebuggerDisplay("Form1")]
Turning off Tools -> Options -> Debugging -> General -> Enable property evaluation and other implicit function calls.
I am still debugging my code, but it seems to me that some of the Autos evaluation is failing (possibly throwing an exception), which may be crashing the debugger.
Please let us know if this is also your case.
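For what it's worth, here is a sketch of why the attribute route can help (type and member names are illustrative): DebuggerDisplay gives the debugger a fixed string to show instead of evaluating members, and DebuggerBrowsable can hide a risky property from the variable windows entirely:
using System;
using System.Diagnostics;

[DebuggerDisplay("Form1")]   // the debugger shows this literal string
class Form1
{
    // Hidden from Autos/Locals, so its getter is never evaluated there.
    [DebuggerBrowsable(DebuggerBrowsableState.Never)]
    public string Risky
    {
        get { throw new InvalidOperationException("evaluation would fail"); }
    }
}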

What sort of code are you debugging?
When you "step into", are you calling your own .NET code, a native library, or an external assembly that you don't have the PDB files for? Either of the latter two situations would cause the debugger to freeze while the external code was executing.

If you are debugging a multithreaded application, the debugger might have switched to another thread. You can switch between threads with the Threads window while debugging, to see again where the yellow debug line is.

My psychic debugger says that you're missing symbols for something and that VS is hitting the network trying to look them up. Try setting your symbol path to something weird like C:\foo.

A deadlock seems likely in your case. Press the pause button and look at the Threads window next time it happens.
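For reference, a minimal sketch of the kind of deadlock you would then see in the Threads window (illustrative, not from the question): two threads take the same two locks in opposite order.
using System;
using System.Threading;

class DeadlockDemo
{
    static readonly object LockA = new object();
    static readonly object LockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (LockA)
            {
                Thread.Sleep(100);   // let the other thread take LockB
                lock (LockB) { }     // waits forever: t2 holds LockB
            }
        });
        var t2 = new Thread(() =>
        {
            lock (LockB)
            {
                Thread.Sleep(100);
                lock (LockA) { }     // waits forever: t1 holds LockA
            }
        });
        t1.Start();
        t2.Start();
        t1.Join();                   // hangs; pause here and inspect threads
        Console.WriteLine("never reached");
    }
}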

I have seen this type of behavior when my DB was being very slow: NHibernate was trying to write to it under the hood, and the whole debugger got locked up randomly whenever the DB got pegged.

Related

First-chance exception: The RPC server is unavailable

At some point while developing my C# application, the following started to appear in the VS Output pane whenever I create an OpenFileDialog:
First-chance exception at 0x75A6C42D (KernelBase.dll) in (myapp).exe: 0x000006BA: The RPC server is unavailable.
I've been maintaining this application for years, and definitely never saw this before, so I started rolling back in SVN to determine when it began.
Bafflingly, the revisions in which it does and doesn't occur seem inconsistent: if I go back sufficiently far it never happens, but there's an "area" where I can check a revision and it won't happen, check another revision and it will, then return to the first, and this time it suddenly will. In other words, I can't reliably pinpoint when it started happening.
To illustrate this, here's an excerpt of my tests, indented for clarity. Numbers are revisions. For each test, I "update to revision" and do a full rebuild.
3977: Exception. This is the most-recent revision.
3839: OK. Since it didn't happen, I'll start working my way back up to see when it starts
3843: OK
3852: OK
3890: Exception. So it started between 3852 & 3890.
3852: Exception. Huh?? I JUST tried 3852, and last time it didn't happen!
3778: OK. Going back this far, I've never seen it happen.
3852: Exception. I guess I'll start working my way BACK to see when it stops.
3828: Exception
3810: OK
3828: Exception. Just making sure.
3810: OK. Just making sure again.
3828: OK. What?? 3828 showed the exception last time I tried!
3852: OK. (but previously it showed the exception)
3890: Exception
I'm aware that I can just tell VS not to break on these types of exceptions, and ignore them. But as mentioned, after years of working on this software, I've never seen it once - so I'd like to determine exactly when and why they started, rather than just turning a blind eye.
This has nothing to do with your project. When you use the shell dialogs, like OpenFileDialog, you load Explorer into your process. Which comes with a lot of baggage, you also get all of the shell extensions loaded. The kind that customize Explorer, they work just as well in the dialog.
Misbehaving ones are quite common. Programmers tend to use the quirkier kind. Any mishap in such a shell extension is now visible to you, the debugger tells you about it.
So, nothing actually went wrong, the exception was caught and handled. Explorer implements counter-measures against bad shell extensions destabilizing it and automatically disables them. So you just have a lame-duck shell extension that doesn't work, low odds you'd notice since it probably hasn't worked for a while.
The debugger can tell you which one is bad. Enable unmanaged debugging and tick the Thrown checkboxes in the Debug + Exceptions dialog. The debugger will now stop when the exception is thrown. You won't see any source code, but you can look at the Call Stack debugger window for hints. It displays the name of the DLL that contains the bad code somewhere on the stack, below the Windows DLL functions. The name ought to give you a hint as to which one is the troublemaker. SysInternals' AutoRuns utility is excellent for disabling them.

Decimal Ternary Not Working

I'm trying to use a ternary to assign a decimal type. It's not working for me. Am I going crazy?
Here's a screen shot of my debug. You can see the value of everything before I step.
And after I step here is the value. It isn't even one of the viable options (i.e. 1 or 2000).
Is there some strange limitation with decimals that I don't know about? When I break it out into its full if/else logical representation it works fine. The only thing I can guess is that I did recently install .NET Framework 4.5.
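Only a screenshot accompanies the question; a minimal sketch of the code it describes (the condition and type names are guesses, the values 1 and 2000 and the names price and memberItems are from the question) would be:
using System;
using System.Collections.Generic;

class MemberItem { public decimal Price; }

class TernaryDemo
{
    static void Main()
    {
        var memberItems = new List<MemberItem>();
        bool isMember = true;                    // assumed condition

        decimal price = isMember ? 1m : 2000m;   // watch showed neither value
        memberItems.Add(new MemberItem { Price = price });

        Console.WriteLine(price);
    }
}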
UPDATE
I've cleaned the solution and made sure I was running on code that was compiled in debug mode as recommended in the comments. Neither of those seemed to change anything.
I started to get curious though when I noticed all my unit tests were still passing. After a little more sleuthing I found that when I stepped one more time (i.e. stepped over memberItems.Add) price magically has the right value in it.
Does .NET do some kind of delayed resolution of ternary operators, similar to the yield statement in iterator blocks? I've never noticed it before now, but I don't know what else it could be. I suppose I could also still be accidentally running code compiled in release mode. I've made dumber mistakes after triple-checking myself.
Impossible to diagnose code from a screenshot, so just a guess.
You cannot always completely rely on what a watch expression tells you. The first possible failure mode is debugging code that was optimized. A local variable like price will very typically be optimized by the jitter optimizer to be stored in a cpu register instead of the stack. The watch expression will show you the stack location value, not the cpu register value. With 0 being a common result. The only real defense you have against this is only debugging code that was built by the Debug configuration.
Second failure mode is the way watch expressions are evaluated. The CLR starts a dedicated thread when it detects an attached debugger. The debugger can then use this thread to evaluate watch expressions. This can go wrong if a variable has any thread affinity. Common cases are variables that are [ThreadStatic] or are properties of COM objects.
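A sketch of the thread-affinity case mentioned above (names are hypothetical; the watch behavior described in the comments is the failure mode this answer claims):
using System;

class ThreadAffinityDemo
{
    // Every thread gets its own copy of this field. Per the answer above,
    // a watch evaluated on the debugger's evaluation thread can therefore
    // report a different value than the thread you are actually stopped on.
    [ThreadStatic]
    static int perThreadValue;

    static void Main()
    {
        perThreadValue = 42;
        // Break on the next line and add 'perThreadValue' to the Watch
        // window; depending on the evaluating thread it may show 0, not 42.
        Console.WriteLine(perThreadValue);
    }
}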
I had the same problem and I also thought I was going crazy.
I found that changing my ASP.NET app to use the "Visual Studio Development Server" instead of IIS fixes it. A pity, because I like using IIS as it's closer to what happens in production.

Bugs which may be dispelled by the presence of a debugger

I have a seriously nasty bug which I'm not having much luck tracking down. It only manifests itself as the program freezing, with Windows saying that the program is not responding, when I run it without a debugger attached.
When I attach a debugger to the process from within visual studio and step through the code there is nothing untoward, and resuming execution sets the program running again, just fine, no longer frozen.
What type of bug could this possibly be which is dispelled by the very presence of a debugger?
You should look out for any race conditions in your code. Setting breakpoints and stepping through the code can mask timing issues: an action that would not otherwise have completed in time gets a chance to complete while execution is paused.
It might not actually be locked up - it's probably that the code that is executing is running on the same thread as the UI, so it LOOKS like it's locked up. Obviously if you're stepping through the code, you can see that it's actually doing something, but when some process is going on that freezes the UI it often appears to be locked up.
Look for extensive loops or processes that take time to complete, and try running them on a different thread and see if that takes care of it. You can also tell if your app is actually frozen or if it's just running a long process by looking at the CPU usage in the Task Manager.
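For instance, a sketch of that suggestion in WinForms (lblStatus, myEmployees, and DoExpensiveWork are assumed names): run the loop on a worker thread and marshal label updates back to the UI thread.
using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public partial class MainForm : Form   // assumes a form with an lblStatus label
{
    private void btnProcess_Click(object sender, EventArgs e)
    {
        Task.Run(() =>
        {
            foreach (var currentEmployee in myEmployees)   // assumed field
            {
                DoExpensiveWork(currentEmployee);          // assumed helper
                // Controls must only be touched on the UI thread:
                lblStatus.Invoke((Action)(() =>
                    lblStatus.Text = "Working on " + currentEmployee.FullName));
            }
        });
    }
}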
As a temporary measure, you could try to do something during loops that you suspect may be causing this by doing something to update the UI. For example, in a WinForms application, you can add code to show where you are in the loop by adding a label and modifying the text.
For example:
foreach (Employee currentEmployee in myEmployees)
{
    // Updating the label each iteration shows progress in the UI.
    lblStatus.Text = "working on " + currentEmployee.FullName;
    Application.DoEvents();
}
Updating the UI this way will slow down your app because calls to Application.DoEvents are expensive, BUT it will help to assure your users that the program is not locked up (if you keep it in production) or you could choose to leave it out of the final production version and just use it while developing/testing to see how the processing is going and assure yourself that the app is not locked up.
When I run into an issue where the act of running the debugger seems to change the behavior of my application, I'll fall back to the old sneaker-net debugging method of outputting comments to a file. This will give you great insight into the execution paths of your program and help you to identify where it may be getting stuck.
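A minimal version of that technique using only the BCL (the file name is arbitrary): route Trace output to a file and sprinkle markers through the suspect code paths.
using System.Diagnostics;

class TraceToFile
{
    static void Main()
    {
        // Route Trace output to a file; AutoFlush so a hang loses nothing.
        Trace.Listeners.Add(new TextWriterTraceListener("debug.log"));
        Trace.AutoFlush = true;

        Trace.WriteLine("before suspect section");
        // ... the code that freezes ...
        Trace.WriteLine("after suspect section");
    }
}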
Sounds like a race condition or deadlock.
The most common effect attaching a debugger has is on timing, and hence on race conditions that exist in the code. That is the first thing I think of when attaching a debugger changes whether or not a bug reproduces.
Here are a couple of things you can try to work around this problem.
Try launching the process under the debugger vs. attaching
Use WinDbg instead of Visual Studio. It's a much lighter-weight debugger and, in my experience, tends to affect the target process less.
One more change in behavior without a debugger attached: optimizations are enabled during JIT compilation. As a result, the lifetimes of variables can be different (shorter), and some objects can be garbage collected earlier - as soon as they are no longer accessible, which can be before the end of the method. When you attach the debugger before the JIT happens, these optimizations are normally disabled. (See http://naveensrinivasan.com/2010/05/04/net-%e2%80%93-how-can-debugtrue-extend-the-life-time-of-local-variable/)
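A small demonstration of that lifetime difference (run a Release build with and without the debugger attached):
using System;

class LifetimeDemo
{
    static void Main()
    {
        var data = new byte[1024];
        var weak = new WeakReference(data);

        // After this point 'data' is never used again, so an optimizing
        // JIT treats it as dead here and the collection below can reclaim it.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        // Release without a debugger: typically False.
        // Debug build or debugger attached: True (lifetime extended).
        Console.WriteLine("Still alive? " + weak.IsAlive);
    }
}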
Advice: attach the debugger AFTER the bug happens and investigate. Collect a memory dump and analyze it with WinDbg if needed.

Do breakpoints introduce delay?

How is it that setting a breakpoint in my code allows the following code to complete, when it would fail otherwise?
Here is the problem.
I'm writing an add-on for SAP B1 and encountered the following problem.
When I load a form, I would like to enter some values into the form's matrix.
But without a breakpoint (set on the method in which the form is loaded), the part of the code that executes afterwards will fail. That part of the code references a matrix that is not yet displayed, which results in an exception. This is all clear. But why does setting a breakpoint "solve" the problem?
What is going on?
I suspect that my breakpoint introduces some delay between loading and displaying my form and part of code that references element of that form but I could be wrong.
Running under the debugger slows your app down and will often hide race conditions even without a breakpoint. When you introduce a breakpoint, it is even more likely to hide them. These kinds of problems can be difficult to solve. You might want to introduce some simple logging (e.g. log4net) to see what is going on without impacting the app so much that you see different behavior. Just keep in mind that even logging can be enough to change things.
Having breakpoints set means that every time a module is loaded at runtime, Visual Studio scans the module for the positions of possible breakpoints. This must introduce a delay.
Is this a Windows Forms-based application? (I'm afraid I know nothing about SAP B1.)
Try putting your code into the Load event of the form if it is not already there, as in the sketch below. Some controls aren't ready to use properly until their handle has been allocated, which doesn't happen until the Windows message loop has run a few times.
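A plain WinForms sketch of that advice (SAP B1 specifics are unknown, so the form and its contents are assumed): defer control access to the Load event, which fires once the handle exists.
using System;
using System.Windows.Forms;

public class MatrixForm : Form
{
    public MatrixForm()
    {
        // Runs after the window handle has been created, so controls
        // are safe to touch here.
        Load += (sender, e) =>
        {
            // populate the matrix / grid here
        };
    }

    [STAThread]
    static void Main() => Application.Run(new MatrixForm());
}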
Breakpoints do introduce some delay. A breakpoint adds extra instructions to your program's regular execution. Both hardware and software breakpoints ADD something to the execution of the program (although the amount varies widely).
http://en.wikipedia.org/wiki/Breakpoint

What does "Cannot evaluate expression because the code of the current method is optimized." mean?

I wrote some code with a lot of recursion, that takes quite a bit of time to complete. Whenever I "pause" the run to look at what's going on I get:
Cannot evaluate expression because the code of the current method is optimized.
I think I understand what that means. However, what puzzles me is that after I hit step, the code is not "optimized" anymore, and I can look at my variables. How does this happen? How can the code flip back and forth between optimized and non-optimized?
While the Debugger.Break() line is on top of the call stack, you can't evaluate expressions. That's because that line is optimized. Press F10 to move to the next line - a valid line of code - and the watch will work.
The debugger uses FuncEval to allow you to "look at" variables. FuncEval requires threads to be stopped in managed code at a garbage-collector safe point. Manually "pausing" the run in the IDE causes all threads to stop as soon as possible. Your highly recursive code will tend to stop at an unsafe point. Hence, the debugger is unable to evaluate expressions.
Pressing F10 will move to the next FuncEval-safe point and enable function evaluation.
For further information, review the rules of FuncEval.
You are probably trying to debug your app in release mode instead of debug mode, or you have optimizations turned on in your compile settings.
When the code is compiled with optimizations, certain variables are thrown away once they are no longer used in the function, which is why you are getting that message. In debug mode with optimizations disabled, you shouldn't get that error.
This drove me crazy. I tried attaching with Managed and Native code - no go.
This worked for me, and I was finally able to evaluate all expressions:
Go into Project / Properties.
Select the Build tab and click "Advanced...".
Make sure Debug Info is set to "full" (not "pdb-only").
Debug your project - voila!
The below worked for me - thanks, Vin.
I had this issue when I was using VS 2015. My solution configuration had "Debug" selected. I resolved it by unchecking the "Optimize code" property under project properties:
Project (right-click) => Properties => Build (tab) => uncheck "Optimize code"
Make sure you do not have something like this in your AssemblyInfo.cs:
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints)]
Look for a function call with many params and try decreasing the number until debugging returns.
Friend of a friend from Microsoft sent this:
http://blogs.msdn.com/rmbyers/archive/2008/08/16/Func_2D00_eval-can-fail-while-stopped-in-a-non_2D00_optimized-managed-method-that-pushes-more-than-256-argument-bytes-.aspx
The most likely problem is that your call stack is getting optimized because your method signature is too large.
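To make the linked limit concrete, a contrived sketch (nothing from the original code): each decimal parameter occupies 16 bytes, so a call to this method pushes 18 * 16 = 288 argument bytes, past the 256-byte threshold the post describes.
class FuncEvalLimitDemo
{
    // Break inside this method and try a watch: per the linked post, a
    // non-optimized method whose call pushed more than 256 argument bytes
    // can leave the debugger unable to evaluate expressions.
    static decimal Sum18(
        decimal p01, decimal p02, decimal p03, decimal p04, decimal p05, decimal p06,
        decimal p07, decimal p08, decimal p09, decimal p10, decimal p11, decimal p12,
        decimal p13, decimal p14, decimal p15, decimal p16, decimal p17, decimal p18)
    {
        return p01 + p02 + p03 + p04 + p05 + p06 + p07 + p08 + p09
             + p10 + p11 + p12 + p13 + p14 + p15 + p16 + p17 + p18;
    }
}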
Had the same problem but was able to resolve it by turning off exception trapping in the debugger. Click [Debug][Exceptions] and set the exceptions to "User-unhandled".
Normally I have this off but it comes in handy occasionally. I just need to remember to turn it off when I'm done.
I had this issue when I was using VS 2010. My solution configuration had "Debug" selected. I resolved it by unchecking the "Optimize code" property under project properties:
Project (right-click) => Properties => Build (tab) => uncheck "Optimize code"
In my case, I had two projects in my solution and was running a project that was not the startup project.
When I changed it to the startup project, debugging started working again.
Hope it helps someone.
Assessment:
In .NET, “function evaluation (funceval)” is the ability of the CLR to inject an arbitrary call while the debuggee is stopped somewhere. Funceval takes charge of the debugger’s chosen thread to execute the requested method. Once the funceval finishes, it fires a debug event. Technically, the CLR defines the ways a debugger can issue a funceval.
The CLR allows a funceval to be initiated only on threads that are at a GC-safe point (i.e. where the thread will not block the GC) and a funceval-safe (FESafe) point (i.e. where the CLR can actually hijack the thread for the funceval). Thus, for a funceval to be possible, a thread must be:
stopped in managed code (and at a GC-safe point): this implies that we cannot do a funceval in native code. Since native code is outside the CLR’s control, the CLR is unable to set up the funceval.
stopped at a first-chance or unhandled managed exception (and at a GC-safe point): i.e. at the time of an exception, to inspect as much as possible and determine why that exception occurred (e.g. the debugger may try to evaluate and see the Message property of the raised exception).
Overall, common ways to stop in managed code include stopping at a breakpoint, a step, a Debugger.Break() call, an intercepted exception, or a thread start. This enables evaluating methods and expressions.
Possible resolutions:
Based on the assessment above, if a thread is not at both an FESafe and a GC-safe point, the CLR cannot hijack it to initiate a funceval. Generally, the following helps to make sure a funceval initiates when expected:
Step #1:
Make sure that you are not trying to debug a "Release" build. Release builds are fully optimized and will thus lead to the error in question. Using the Standard toolbar or the Configuration Manager, you can switch between Debug and Release.
Step #2:
If you still get the error, the Debug configuration might have optimization enabled. Verify and uncheck the "Optimize code" property under the project "Properties":
Right click the Project
Select option “Properties”
Go to “Build” tab
Uncheck the checkbox “Optimize code”
Step #3:
If you still get the error, the Debug Info setting might be incorrect. Verify it and set it to "full" under "Advanced Build Settings":
Right click the Project
Select option “Properties”
Go to “Build” tab
Click “Advanced” button
Set “Debug Info” as “full”
Step #4:
If you still face the issue, try the following:
Do a “Clean” & then a “Rebuild” of your solution file
While debugging:
Go to modules window (VS Menu -> Debug -> Windows -> Modules)
Find your assembly in the list of loaded modules.
Check that the path listed for the loaded assembly is what you expect it to be
Check the modified timestamp of the file to confirm that the assembly was actually rebuilt
Check whether the loaded module is optimized
Conclusion:
It’s not an error, but information: given certain settings, this behavior is by design, reflecting how the .NET runtime works.
In my case, I was in Release mode; once I changed to Debug, it all worked.
I had a similar issue, and it was resolved when I built the solution in Debug mode and replaced the PDB file in the execution path.
I believe that what you are seeing is a result of the optimisations - sometimes a variable will be reused - particularly those that are created on the stack. For example, suppose you have a method that uses two (local) integers. The first integer is declared at the start of the method and is used solely as a counter for a loop. Your second integer is used after the loop has completed, and it stores the result of a calculation that is later written out to file. In this case, the optimiser MAY decide to reuse your first integer, saving the code needed for the second integer.

When you try to look at the second integer early on, you get the message that you are asking about, "Cannot evaluate expression". Though I cannot explain the exact circumstances, it is possible for the optimiser to transfer the value of the second integer into a separate stack item later on, resulting in you then being able to access the value from the debugger.
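A sketch of the reuse scenario just described (illustrative; whether the optimiser actually shares the slot depends on the JIT):
class SlotReuseDemo
{
    static decimal Average(decimal[] values)
    {
        // Local #1: only ever used as the loop counter.
        int i = 0;
        decimal sum = 0m;
        for (; i < values.Length; i++)
        {
            sum += values[i];
        }

        // Local #2: first used after the loop. An optimizing JIT may give
        // it the register or stack slot 'i' occupied, which is one reason
        // a watch on it early in the method cannot be evaluated.
        decimal result = sum / values.Length;
        return result;
    }
}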
