I'm trying to use a ternary to assign a decimal type. It's not working for me. Am I going crazy?
Here's a screenshot of my debug session. You can see the value of everything before I step.
And here is the value after I step. It isn't even one of the viable options (i.e. 1 or 2000).
Is there some strange limitation with decimals that I don't know about? When I break it out into its full if/else representation, it works fine. The only thing I can guess is that I recently installed .NET Framework 4.5.
UPDATE
I've cleaned the solution and made sure I was running on code that was compiled in debug mode as recommended in the comments. Neither of those seemed to change anything.
I started to get curious, though, when I noticed all my unit tests were still passing. After a little more sleuthing I found that when I stepped one more time (i.e. stepped over memberItems.Add), price magically had the right value in it.
Does .NET do some kind of delayed resolution of ternary operators, similar to the yield keyword in iterator blocks? I've never noticed it before now, but I don't know what else it could be. I suppose I could also still be accidentally running code compiled in release mode; I've made dumber mistakes after triple-checking myself.
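To make the scenario concrete: the pattern boils down to something like the following minimal sketch (price, memberItems, and the values 1/2000 come from the question; the condition and the collection type are assumed):

using System.Collections.Generic;

class TernaryRepro
{
    static void Main()
    {
        bool isMember = true;                  // assumed condition
        var memberItems = new List<decimal>(); // assumed collection type

        // Immediately after stepping over this line, the watch window
        // showed a value that was neither 1 nor 2000...
        decimal price = isMember ? 1m : 2000m;

        // ...but after stepping over this line, 'price' displayed correctly.
        memberItems.Add(price);
    }
}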
It's impossible to diagnose code from a screenshot, so this is just a guess.
You cannot always completely rely on what a watch expression tells you. The first possible failure mode is debugging code that was optimized. A local variable like price will typically be optimized by the JIT optimizer into a CPU register instead of a stack slot. The watch expression then shows you the stack location's value, not the CPU register's value, with 0 being a common result. The only real defense you have against this is to debug only code built with the Debug configuration.
The second failure mode is the way watch expressions are evaluated. The CLR starts a dedicated thread when it detects an attached debugger, and the debugger uses this thread to evaluate watch expressions. This can go wrong if a variable has any thread affinity. Common cases are variables that are [ThreadStatic] or that are properties of COM objects.
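To illustrate the thread-affinity case, here is a hypothetical sketch (names invented). A watch on the property below runs its getter via func-eval; if that happens on a different thread than the one that wrote the field, the debugger reads that thread's copy:

using System;

class ThreadAffinityDemo
{
    [ThreadStatic]
    private static int _count; // every thread gets its own copy

    // Func-evaluating this getter on the wrong thread returns that
    // thread's (default 0) copy of _count, not the one you just wrote.
    public static int Count => _count;

    static void Main()
    {
        _count = 42;              // written on the main thread
        Console.WriteLine(Count); // prints 42 when read on the same thread
    }
}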
I had the same problem and I also thought I was going crazy.
I found that changing my ASP.NET app to use the Visual Studio Development Server instead of IIS fixes it. A pity, because I like using IIS, as it's closer to what happens in production.
Related
The problem
On our ASP.NET website I keep getting wrong line numbers in the stack traces of exceptions. I am talking about our live environment. There seems to be a pattern: the stack trace always points to the line that contains the method's closing curly bracket.
For example:
public class Foo
{
    public void Bar()
    {
        object someObject = null;
        someObject.ToString();
        /*
            arbitrarily more lines of code
        */
    } // this line will be the one that the stack trace points to
}
More details
To be clear: this does not happen for just some methods; it happens for every exception that we log. So I would rule out (JIT) optimizations here, which might lead to line numbers being seemingly randomly off. What bothers me is that the wrong line numbers consistently point to the closing curly bracket of the containing method.
Note that before .NET 4.6 there actually was a bug in the framework such that, if you compiled targeting x64, exactly this would happen. However, Microsoft confirmed that it has been fixed, and running a minimal example app for this reaffirms their claim. But for some reason it still happens on the web servers.
It puzzles me even more that it does not happen in our test and development environments. The test and live systems are pretty similarly set up. The only real difference is that live we are running Windows Server 2012, while on the test system we are still using Windows Server 2008.
What I have checked
PDB files are valid (tested with ChkMatch)
Compile time code optimizations are not the issue
The aforementioned bug in .NET before 4.6 cannot be reproduced with a minimal example (a sketch of such a check appears below)
.NET versions are exactly the same
Build comes from the same deploy script
Any clues towards solving this problem are highly appreciated. If you know anything that we could check, please let me know.
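For reference, the "minimal example" check mentioned above amounts to something like this sketch (it assumes matching PDBs are deployed next to the assembly):

using System;

class LineNumberCheck
{
    static void Main()
    {
        try
        {
            object someObject = null;
            someObject.ToString(); // the trace should point here, not at a closing brace
        }
        catch (NullReferenceException ex)
        {
            // With matching PDBs, the stack trace includes file and line numbers.
            Console.WriteLine(ex.ToString());
        }
    }
}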
Compile-time optimization as well as runtime optimization can change the actual execution, as well as the call stack information that gets constructed. So you cannot take the line numbers too seriously.
More information can be found in posts such as Scott Hanselman's: Release IS NOT Debug: 64bit Optimizations and C# Method Inlining in Release Build Call Stacks.
It would be too difficult to troubleshoot a specific case without access to your binaries. But if you are familiar with WinDbg, you can dive deeper by live-debugging the application when the exception occurs. You can then dump the actual jitted machine code as well as other runtime information, so as to determine how the line number is constructed.
Thanks everyone for your help! We found out that the SCOM APM agent running on the machines in production caused our application to log wrong line numbers in the stack traces. As soon as we deactivated the agent, the line numbers were correct.
In recent .NET versions (e.g. .NET 6.0), having ReadyToRun compilation enabled (PublishReadyToRun in the publish settings) essentially renders line numbers inaccurate too.
Disabling it might help, but that obviously comes with a performance trade-off.
I work on an application in C# 4.0 (VS2010), and I have a very strange situation. A bug was reported to me by the whole team, and I always failed to reproduce it, until one of the other developers told me to double-click the executable and follow the bug's scenario instead of launching it from VS2010.
After some research, I found that most of the commentary on this problem concerns uninitialized heap memory and the like, but in a C++ context. I know that C# produces an error rather than a warning if a variable is left uninitialized, so that is most probably not the problem.
Both builds are the same on my machine and the users', and I now know that pressing F5 (Start with debugging) doesn't produce the problem, while Ctrl+F5 does. So the question is not the difference between both (other questions have already addressed that), but rather: how can attaching a debugger to a C# process affect its behavior?!
The code creates a connection over the network.
So the question is: How can attaching a debugger to a C# process affect its behavior?!
In all kinds of ways. It affects JIT optimizations, garbage collection, timing (think race conditions), anything which explicitly tries to detect whether it's running in the debugger, and potentially the order and timing of type initialization.
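As a trivial, hypothetical illustration of the detection case, code like this behaves differently depending on whether a debugger is attached:

using System;
using System.Diagnostics;
using System.Threading;

class DebuggerSensitive
{
    static void Main()
    {
        // Hypothetical: a network timeout that is disabled under the
        // debugger (F5) but active when launched normally (Ctrl+F5).
        int timeoutMs = Debugger.IsAttached ? Timeout.Infinite : 5000;
        Console.WriteLine("timeout = " + timeoutMs);
    }
}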
If you can now reproduce it, I would start adding logging and see where that leads you - once you've worked out what the problem actually is, you may well find it's obvious why the debugger changes things.
I just ran into one of the most mind-boggling errors ever: false == true. What information would you need to confirm/debug this behavior? I've never seen anything like it.
VS2008 sp1
Debug mode | Any CPU
IIS 7.5
Edit:
I did a Clean -> Rebuild and it's still the same.
Here's the assembly and registers. I don't know how to read this, but maybe it could help someone else.
I suppose your PDB files are out of sync, and there are differences between what's really executed and what Visual Studio sees as a line number. Try rebuilding. We all know that it is impossible to have true == false, or the world as we know it may change :-)
Does it actually throw the error? The debugger can often highlight the wrong lines if you feed it the wrong PDB, so this could be a false lead. It is also trivial to reproduce by using the Immediate pane to change the value after the test.
If result was a field or a captured variable, it could also be set by external code (perhaps on another thread).
If result wasn't a bool but your own custom type, you could just overload ==, or provide custom true/false operators.
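To illustrate that last point, here is a hypothetical type (not from the question) where both equality and truth testing lie:

using System;

struct Sneaky
{
    // Overloaded equality always reports 'not equal'...
    public static bool operator ==(Sneaky a, Sneaky b) => false;
    public static bool operator !=(Sneaky a, Sneaky b) => true;

    // ...and the custom true/false operators make 'if (value)' take the
    // false branch even when everything else suggests it should be true.
    public static bool operator true(Sneaky s) => false;
    public static bool operator false(Sneaky s) => true;

    public override bool Equals(object obj) => false;
    public override int GetHashCode() => 0;
}

class Program
{
    static void Main()
    {
        Sneaky result = new Sneaky();
        if (result)
            Console.WriteLine("true branch");
        else
            Console.WriteLine("false branch"); // this is what prints
    }
}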
Probably the source does not correspond to the running version or there's a bug in the debugger.
Part of the problem is that you're assuming the debugger is 100% correct. In fact it is not, and it is subject to a number of situations where values can have incorrect or misleading displays. The most common causes of this are:
Mismatched PDB files. This will usually lead to at least a warning dialog in the debugger about mismatched source files, but not always.
A simple data inspection or display error by the underlying expression evaluator. Not likely in this case, as it's a simple local of a primitive type.
Optimizations causing the data to be displayed incorrectly.
But the value is in fact almost certainly not false. The easiest way to verify this is to use a Debug.WriteLine call to print the value out to the output window.
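For example (a minimal sketch; SomeCheck stands in for whatever produced the value):

using System.Diagnostics;

class Verify
{
    static bool SomeCheck() => false; // stand-in for the real computation

    static void Main()
    {
        bool result = SomeCheck();
        // Bypass the watch window entirely: this prints the live value
        // to the Output window when running under the debugger.
        Debug.WriteLine("result = " + result);
    }
}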
Are you sure that's the exception being thrown? My hunch is that your method isContextSignatureValid is actually throwing an exception, but the Visual Studio debugger can get ahead of itself sometimes and highlight a line that is not actually throwing the exception.
Maybe you moved the current instruction pointer (yellow arrow) with your mouse inadvertently while in break mode... It happened to me once and I flipped out. :-)
Just to add a little suggestion:
If you ever get confusing results from the debugger, stick a Console.WriteLine() in there and get the code itself to tell you what is going on. This can often clear up confusion.
(You can also get an effect like this when debugging release code, but you said it was a debug build, which eliminates that suspect.)
I've seen this sort of thing before. A co-worker was convinced he'd uncovered a bug in the .NET Framework or the CLR. In the end it was just a stale assembly or a PDB sync problem.
Sometimes, while I am debugging a C# application, I will hit a breakpoint, and when I try to continue, step, or step into, it just does nothing. The yellow line highlighting the current line goes away, but it never reaches the next line. The app is still frozen as if I were on a breakpoint, and I can do nothing but hit the stop-debugging button and restart. This doesn't happen all the time, but once it starts on an app, it seems to always happen for that app after that. I have found that adding the following code just before the class declaration "fixes" the problem for that app, but I am very curious as to why this is happening.
[System.Diagnostics.DebuggerDisplay("Form1")]
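Applied to a bare-bones (hypothetical) WinForms class, that looks like the following. A constant display string gives the debugger something to show without func-evaluating ToString() or properties, which may be why it helps; see the property-evaluation answers below.

using System.Diagnostics;
using System.Windows.Forms;

[DebuggerDisplay("Form1")] // the debugger shows this literal string for instances
public class Form1 : Form
{
}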
Additional details:
I have not noticed any kind of pattern as to what the particular line does when it freezes. Most of the apps I write use threading, so there is a decent chance this is happening within a thread every time.
I've seen stalling problems where the debugger is trying to evaluate the variables shown in the Auto/Local windows. If the evaluation is complicated then it can cause significant stalls.
You can turn the auto-evaluation off through Tools|Options and it does make a big difference.
I have come across this kind of behavior too, though this is the first time for me.
I got past the problem in two ways:
Your way: putting the [System.Diagnostics.DebuggerDisplay("Form1")] attribute on the class
Turning off Tools -> Options -> Debugging -> General -> "Enable property evaluation and other implicit function calls"
I am still debugging my code, but it seems to me that some of the Autos evaluation is failing (possibly throwing an exception), which may be what stalls the debugger.
Please let us know if this is also your case.
What sort of code are you debugging?
When you "step into" are you calling your own .NET code, or calling a native library, or an external assembly that you don't have the pdb files for? Either of these situations would cause the debugger to freeze while the external code was executing.
If you are debugging a multithreaded application, you might be switching threads. You can switch between threads with the Threads window while debugging, to see again where the yellow line is.
My psychic debugger says that you're missing symbols for something and that VS is hitting the network trying to look them up. Try setting your symbol path to something weird like C:\foo.
A deadlock seems likely in your case. Press the pause button and look at the Threads view next time it happens.
I have seen this type of behavior when my DB was being very slow: NHibernate was trying to write to it under the hood, and the whole debugger got locked up randomly whenever the DB got pegged.
I wrote some code with a lot of recursion, that takes quite a bit of time to complete. Whenever I "pause" the run to look at what's going on I get:
Cannot evaluate expression because the code of the current method is optimized.
I think I understand what that means. However, what puzzles me is that after I hit step, the code is not "optimized" anymore and I can look at my variables. How does this happen? How can the code flip back and forth between optimized and non-optimized?
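A minimal stand-in for the kind of workload described (the question's actual code isn't shown, so this is assumed):

using System;

class RecursionDemo
{
    // Deliberately slow, deeply recursive work:
    static long Fib(int n) => n < 2 ? n : Fib(n - 1) + Fib(n - 2);

    static void Main()
    {
        // Pausing the debugger while this runs tends to stop the thread
        // at a point where expressions cannot be evaluated; a single
        // step (F10) usually reaches a safe point and the watch works.
        Console.WriteLine(Fib(45));
    }
}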
While the Debugger.Break() line is on top of the call stack, you can't evaluate expressions. That's because that line is optimized. Press F10 to move to the next line - a valid line of code - and the watch will work.
The debugger uses funceval to allow you to "look at" variables. Funceval requires threads to be stopped in managed code at a garbage-collector-safe point. Manually "pausing" the run in the IDE causes all threads to stop as soon as possible. Your highly recursive code will tend to stop at an unsafe point. Hence, the debugger is unable to evaluate expressions.
Pressing F10 moves to the next funceval-safe point and enables function evaluation.
For further information review the rules of FuncEval.
You are probably trying to debug your app in release mode instead of debug mode, or you have optimizations turned on in your compile settings.
When the code is compiled with optimizations, certain variables are thrown away once they are no longer used in the function, which is why you are getting that message. In debug mode with optimizations disabled, you shouldn't get that error.
This drove me crazy. I tried attaching with Managed and Native code - no go.
This worked for me, and I was finally able to evaluate all expressions:
Go into Project / Properties
Select the Build tab and click Advanced...
Make sure Debug Info is set to "full" (not "pdb-only")
Debug your project - voila!
The below worked for me; thanks, Vin.
I had this issue when I was using VS 2015. My solution configuration had (Debug) selected. I resolved it by unchecking the "Optimize code" property under project properties:
Project (right-click) => Properties => Build (tab) => uncheck "Optimize code"
Make sure you do not have something like the following in your AssemblyInfo:
[assembly: Debuggable(DebuggableAttribute.DebuggingModes.IgnoreSymbolStoreSequencePoints)]
Look for a function call with many parameters and try decreasing the number until debugging returns.
Friend of a friend from Microsoft sent this:
http://blogs.msdn.com/rmbyers/archive/2008/08/16/Func_2D00_eval-can-fail-while-stopped-in-a-non_2D00_optimized-managed-method-that-pushes-more-than-256-argument-bytes-.aspx
The most likely problem is that function evaluation is failing because your method signature is too large: per the post, func-eval can fail while stopped in a non-optimized managed method that pushes more than 256 argument bytes.
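For a sense of scale, a hypothetical example: decimal arguments are 16 bytes each, so a call with 17 of them pushes 272 argument bytes, past the 256-byte limit described in the linked post.

using System;

class BigFrame
{
    // 17 decimal parameters * 16 bytes = 272 argument bytes (> 256), so
    // func-eval can fail while the debugger is stopped inside this method.
    static decimal Sum(
        decimal a, decimal b, decimal c, decimal d, decimal e, decimal f,
        decimal g, decimal h, decimal i, decimal j, decimal k, decimal l,
        decimal m, decimal n, decimal o, decimal p, decimal q)
        => a + b + c + d + e + f + g + h + i + j + k + l + m + n + o + p + q;

    static void Main()
    {
        Console.WriteLine(Sum(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17));
    }
}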
Had the same problem, but was able to resolve it by turning off exception trapping in the debugger. Click Debug | Exceptions and set the exceptions to "User-unhandled".
Normally I have this off but it comes in handy occasionally. I just need to remember to turn it off when I'm done.
I had this issue when I was using VS 2010. My solution configuration had (Debug) selected. I resolved it by unchecking the "Optimize code" property under project properties:
Project (right-click) => Properties => Build (tab) => uncheck "Optimize code"
In my case I had two projects in my solution and was running a project that was not the startup project.
When I changed it to the startup project, debugging started to work again.
Hope it helps someone.
Assessment:
In .NET, "function evaluation (funceval)" is the ability of the CLR to inject an arbitrary call while the debuggee is stopped somewhere. Funceval takes over the debugger's chosen thread to execute the requested method; once the funceval finishes, it fires a debug event. Technically, the CLR defines ways for a debugger to issue a funceval.
The CLR allows a funceval to be initiated only on threads that are at both a GC-safe point (i.e. where the thread will not block the GC) and a funceval-safe (FESafe) point (i.e. where the CLR can actually hijack the thread for the funceval). Thus, for the CLR, a thread must be:
stopped in managed code (and at a GC-safe point): this implies that we cannot do a funceval in native code. Since native code is outside the CLR's control, the CLR cannot set up the funceval.
stopped at a first-chance or unhandled managed exception (and at a GC-safe point): i.e. at the time of an exception, so the debugger can inspect as much as possible to determine why it occurred (e.g. the debugger may evaluate the Message property of the raised exception).
Overall, common ways to stop in managed code include stopping at a breakpoint, a step, a Debugger.Break call, intercepting an exception, or a thread start. This helps in evaluating methods and expressions.
Possible resolutions:
Based on the assessment above, if a thread is not at both an FESafe and a GC-safe point, the CLR will not be able to hijack it to initiate the funceval. Generally, the following steps help to make sure a funceval initiates when expected:
Step #1:
Make sure that you are not trying to debug a "Release" build. Release builds are fully optimized and will thus lead to the error in question. You can switch between Debug and Release using the Standard toolbar or the Configuration Manager.
Step #2:
If you still get the error, the Debug configuration might have optimization enabled. Verify and uncheck the "Optimize code" property under the project "Properties":
Right click the Project
Select option “Properties”
Go to “Build” tab
Uncheck the checkbox “Optimize code”
Step #3:
If you still get the error, the Debug Info mode might be incorrect. Verify it and set it to "full" under "Advanced Build Settings":
Right click the Project
Select option “Properties”
Go to “Build” tab
Click “Advanced” button
Set “Debug Info” as “full”
Step #4:
If you still face the issue, try the following:
Do a “Clean” & then a “Rebuild” of your solution file
While debugging:
Go to modules window (VS Menu -> Debug -> Windows -> Modules)
Find your assembly in the list of loaded modules.
Check that the path listed against the loaded assembly is what you expect it to be
Check the modified timestamp of the file to confirm that the assembly was actually rebuilt
Check whether the loaded module is optimized
Conclusion:
It's not an error but information based on certain settings, and it is by design, given how the .NET runtime works.
In my case I was in Release mode; once I changed to Debug, it all worked.
I had a similar issue, and it got resolved when I built the solution in Debug mode and replaced the PDB file in the execution path.
I believe that what you are seeing is a result of the optimisations: sometimes a variable will be reused, particularly those that are created on the stack. For example, suppose you have a method that uses two (local) integers. The first integer is declared at the start of the method and is used solely as a counter for a loop. Your second integer is used after the loop has completed, and it stores the result of a calculation that is later written out to file. In this case, the optimiser MAY decide to reuse your first integer, saving the code needed for the second integer. When you try to look at the second integer early on, you get the message you are asking about, "Cannot evaluate expression". Though I cannot explain the exact circumstances, it is possible for the optimiser to transfer the value of the second integer into a separate stack item later on, resulting in you then being able to access the value from the debugger.
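A sketch of the scenario just described (hypothetical names): under release-mode optimisation, the loop counter's storage may be reused for the later result, so inspecting the result too early fails.

using System;
using System.IO;

class SlotReuseDemo
{
    static int Compute(int input) => input * 2; // stand-in for the later calculation

    static void Main()
    {
        int counter;
        int total = 0;
        for (counter = 0; counter < 10; counter++) // 'counter' is dead after the loop...
        {
            total += counter;
        }

        // ...so an optimising JIT may reuse its register/stack slot for
        // 'result'; a watch on 'result' before this assignment can fail
        // with "Cannot evaluate expression".
        int result = Compute(total);
        File.WriteAllText("out.txt", result.ToString());
        Console.WriteLine(result);
    }
}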