Environment: Visual Studio 2015 RTM. (I haven't tried older versions.)
Recently, I've been debugging some of my Noda Time code, and I've noticed that when I've got a local variable of type NodaTime.Instant (one of the central struct types in Noda Time), the "Locals" and "Watch" windows don't appear to call its ToString() override. If I call ToString() explicitly in the watch window, I see the appropriate representation, but otherwise I just see:
variableName {NodaTime.Instant}
which isn't very useful.
If I change the override to return a constant string, the string is displayed in the debugger, so it's clearly able to pick up that it's there - it just doesn't want to use it in its "normal" state.
I decided to reproduce this locally in a little demo app, and here's what I've come up with. (Note that in an early version of this post, DemoStruct was a class and DemoClass didn't exist at all - my fault, but it explains some comments which look odd now...)
using System;
using System.Diagnostics;
using System.Threading;

public struct DemoStruct
{
    public string Name { get; }

    public DemoStruct(string name)
    {
        Name = name;
    }

    public override string ToString()
    {
        Thread.Sleep(1000); // Vary this to see different results
        return $"Struct: {Name}";
    }
}

public class DemoClass
{
    public string Name { get; }

    public DemoClass(string name)
    {
        Name = name;
    }

    public override string ToString()
    {
        Thread.Sleep(1000); // Vary this to see different results
        return $"Class: {Name}";
    }
}

public class Program
{
    static void Main()
    {
        var demoClass = new DemoClass("Foo");
        var demoStruct = new DemoStruct("Bar");
        Debugger.Break();
    }
}
In the debugger, I now see:
demoClass {DemoClass}
demoStruct {Struct: Bar}
However, if I reduce the Thread.Sleep call down from 1 second to 900ms, there's still a short pause, but then I see Class: Foo as the value. It doesn't seem to matter how long the Thread.Sleep call is in DemoStruct.ToString(), it's always displayed properly - and the debugger displays the value before the sleep would have completed. (It's as if Thread.Sleep is disabled.)
Now Instant.ToString() in Noda Time does a fair amount of work, but it certainly doesn't take a whole second - so presumably there are more conditions that cause the debugger to give up evaluating a ToString() call. And of course it's a struct anyway.
I've tried recursing to see whether it's a stack limit, but that appears not to be the case.
So, how can I work out what's stopping VS from fully evaluating Instant.ToString()? As noted below, DebuggerDisplayAttribute appears to help, but without knowing why, I'm never going to be entirely confident in when I need it and when I don't.
Update
If I use DebuggerDisplayAttribute, things change:
// For the sample code in the question...
[DebuggerDisplay("{ToString()}")]
public class DemoClass
gives me:
demoClass Evaluation timed out
Whereas when I apply it in Noda Time:
[DebuggerDisplay("{ToString()}")]
public struct Instant
a simple test app shows me the right result:
instant "1970-01-01T00:00:00Z"
So presumably the problem in Noda Time is some condition that DebuggerDisplayAttribute does force through - even though it doesn't force through timeouts. (This would be in line with my expectation that Instant.ToString is easily fast enough to avoid a timeout.)
This may be a good enough solution - but I'd still like to know what's going on, and whether I can change the code simply to avoid having to put the attribute on all the various value types in Noda Time.
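One possible way to avoid decorating each value type individually (a sketch, not tested against Noda Time itself): DebuggerDisplayAttribute is documented as valid at the assembly level with a Target type, so the attribute could be applied once per type in a single file. The specific types named here are just examples.

```csharp
using System.Diagnostics;
using NodaTime;

// Assembly-level application with Target means the value types themselves
// stay unmodified; one line per type that needs the forced ToString() display.
[assembly: DebuggerDisplay("{ToString()}", Target = typeof(Instant))]
[assembly: DebuggerDisplay("{ToString()}", Target = typeof(Duration))]
```

Whether this triggers the same "force through" behavior as the attribute applied directly on the type would need verifying in the debugger.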
Curiouser and curiouser
Whatever is confusing the debugger only confuses it sometimes. Let's create a class which holds an Instant and uses it for its own ToString() method:
using NodaTime;
using System.Diagnostics;

public class InstantWrapper
{
    private readonly Instant instant;

    public InstantWrapper(Instant instant)
    {
        this.instant = instant;
    }

    public override string ToString() => instant.ToString();
}

public class Program
{
    static void Main()
    {
        var instant = NodaConstants.UnixEpoch;
        var wrapper = new InstantWrapper(instant);
        Debugger.Break();
    }
}
Now I end up seeing:
instant {NodaTime.Instant}
wrapper {1970-01-01T00:00:00Z}
However, at the suggestion of Eren in comments, if I change InstantWrapper to be a struct, I get:
instant {NodaTime.Instant}
wrapper {InstantWrapper}
So it can evaluate Instant.ToString() - so long as that's invoked by another ToString method... which is within a class. The class/struct part seems to be important based on the type of the variable being displayed, not what code needs to be executed in order to get the result.
As another example of this, if we use:
object boxed = NodaConstants.UnixEpoch;
... then it works fine, displaying the right value. Colour me confused.
Update:
This bug has been fixed in Visual Studio 2015 Update 2. Let me know if you are still running into problems evaluating ToString on struct values using Update 2 or later.
Original Answer:
You are running into a known bug/design limitation with Visual Studio 2015 and calling ToString on struct types. This can also be observed when dealing with System.DateTimeOffset: System.DateTimeOffset.ToString() works in the evaluation windows with Visual Studio 2013, but does not always work in 2015.
If you are interested in the low level details, here's what's going on:
To evaluate ToString, the debugger does what's known as "function evaluation". In greatly simplified terms, the debugger suspends all threads in the process except the current thread, changes the context of the current thread to the ToString function, sets a hidden guard breakpoint, then allows the process to continue. When the guard breakpoint is hit, the debugger restores the process to its previous state and the return value of the function is used to populate the window.
To support lambda expressions, we had to completely rewrite the CLR Expression Evaluator in Visual Studio 2015. At a high level, the implementation is:
1. Roslyn generates MSIL code for expressions/local variables to get the values to be displayed in the various inspection windows.
2. The debugger interprets the IL to get the result.
3. If there are any "call" instructions, the debugger executes a function evaluation as described above.
4. The debugger/Roslyn takes this result and formats it into the tree-like view that's shown to the user.
Because of the execution of IL, the debugger is always dealing with a complicated mix of "real" and "fake" values. Real values actually exist in the process being debugged. Fake values only exist in the debugger process. To implement proper struct semantics, the debugger always needs to make a copy of the value when pushing a struct value to the IL stack. The copied value is no longer a "real" value and now only exists in the debugger process. That means if we later need to perform function evaluation of ToString, we can't because the value doesn't exist in the process. To try and get the value we need to emulate execution of the ToString method. While we can emulate some things, there are many limitations. For example, we can't emulate native code and we can't execute calls to "real" delegate values or calls on reflection values.
With all of that in mind, here is what's causing the various behaviors you are seeing:
The debugger isn't evaluating NodaTime.Instant.ToString -> This is because it is a struct type, and the implementation of ToString can't be emulated by the debugger as described above.

Thread.Sleep seems to take zero time when called by ToString on a struct -> This is because the emulator is executing ToString. Thread.Sleep is a native method, but the emulator is aware of it and just ignores the call. We do this to try and get a value to show to the user. A delay wouldn't be helpful in this case.

DebuggerDisplay("{ToString()}") works -> That is confusing. The only difference between the implicit calling of ToString and DebuggerDisplay is that any time-outs of the implicit ToString evaluation will disable all implicit ToString evaluations for that type until the next debug session. You may be observing that behavior.
In terms of the design problem/bug, this is something we are planning to address in a future release of Visual Studio.
Hopefully that clears things up. Let me know if you have more questions. :-)
Related
I have a simple class intended to store scaled integral values using member variables "scaled_value" (long) with a "scale_factor". I have a constructor that fills a new class instance with a decimal value (although I think the value type is irrelevant).

Assignment to the "scaled_value" slot appears... to not happen. I've inserted an explicit assignment of the constant 1 to it. The Debug.Assert below fails... and scaled_value is zero.

On the assertion break, in the immediate window I can inspect/set "scale_factor" using assignment; it changes as I set it. I can inspect "scaled_value". It is always zero. I can type an assignment to it which the immediate window executes, but its value doesn't change.

I'm using Visual Studio 2017 with C# 2017.

What is magic about this slot?
public class ScaledLong : Base // handles scaled-by-power-of-ten long numbers
// intended to support equivalent of fast decimal arithmetic while hiding scale factors from user
{
    public long scaled_value;  // up to log10_MaxLong digits of decimal precision
    public sbyte scale_factor; // power of ten representing location of decimal point, range -21..+21. Set by constructor AND NEVER CHANGED.
    public byte byte_size;     // holds size of value in underlying memory array
    string format_string;

    // <other constructors with same arguments except last value type>

    public ScaledLong(sbyte sf, byte size, string format, decimal initial_value)
    {
        scale_factor = sf;
        byte_size = size;
        format_string = format;

        decimal temp;
        sbyte exponent;
        { // rip exponent out of decimal value leaving behind an integer
            _decimal_structure.value = initial_value;
            exponent = (sbyte)_decimal_structure.Exponent;
            _decimal_structure.Exponent = 0; // now decimal value is integral
            temp = _decimal_structure.value;
        }

        sbyte sfDelta = (sbyte)(sf - exponent);
        if (sfDelta >= 0)
        { // sfDelta > 0
            this.scaled_value = 1;
            Debug.Assert(scaled_value == 1);
            scaled_value = (long)Math.Truncate(temp * DecimalTenToPower[sfDelta]);
        }
        else
        {
            temp = Math.Truncate(temp / DecimalHalfTenToPower[-sfDelta]);
            temp += (temp % 2); // this can overflow for values at the very top of the range, not worth fixing; note: this works for both + and - numbers (?)
            scaled_value = (long)(temp / 2); // final result
        }
    }
The biggest puzzles often have the stupidest foundations. This one is a lesson in unintended side effects.
I found this by thinking about it, wondering how on earth a member can get modified in unexpected ways. I found the solution before I read #mjwills comment, but he was definitely sniffing at the right thing.
What I left out (of course!) was that I had just coded a ToString() method for the class... that wasn't debugged. Why did I leave it out? Because it obviously can't affect anything so it can't be part of the problem.
Bzzzzt! It used the member variable as a scratchpad and zeroed it (there's the side effect); that was obviously unintended.

What this means is that when the code just runs, ToString() isn't called and the member variable DOES get modified correctly. (I even had unit tests for the "Set" routine that checked all that, and they were working.)

But when you are debugging... the debugger can (and did in this case) show local variables. To do that, it will apparently call ToString() to get a nice displayable value. So the act of single-stepping caused ToString() to get called, and its buggy scratch-variable assignment zeroed out the slot after each step.
So it wasn't a setter that bit me. It was arguably a getter. (Where is FORTRAN's PURE keyword when you need it?)
Einstein hated spooky actions at a distance. Programmers hate spooky side effects at a distance.
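The general fix can be sketched as follows (hypothetical names, not the poster's real code): keep all scratch work inside ToString() in locals, so that an implicit call from the debugger cannot perturb object state.

```csharp
using System;

public class ScaledExample
{
    public long scaled_value;  // the stored, scaled integer
    public sbyte scale_factor; // power of ten for the decimal point

    // Buggy pattern (illustrative): using the member field itself as scratch
    // space means every debugger-initiated ToString() call mutates the object.
    //
    // public override string ToString()
    // {
    //     scaled_value = DoFormattingWork(scaled_value); // side effect!
    //     ...
    // }

    // Side-effect-free version: all intermediate work happens in locals,
    // so calling ToString() any number of times leaves the object unchanged.
    public override string ToString()
    {
        decimal display = scaled_value / (decimal)Math.Pow(10, scale_factor);
        return display.ToString();
    }
}
```

With this shape, single-stepping in the debugger can call ToString() as often as it likes without corrupting anything.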
One wonders a bit at the idea of the debugger calling ToString() on a class, whose constructor hasn't finished. What assertions about the state of the class can ToString trust, given the constructor isn't done? I think the MS debugger should be fixed. With that, I would have spent my time debugging ToString instead of chasing this.
Thanks for putting up with my question. It got me to the answer.
If you still have a copy of that old/buggy code, it would be interesting to try to build it under VS 2019 and Rider (hopefully the latest, 2022.1.1 at this point) with ReSharper (built in) allowed to do its picky scan, and with a .ruleset allowed to complain about just about anything (just for the first build - you'll turn off a lot, but you need it to scream in order to see what to turn off). And with .NET 5.0 or 6.0.

The reason I mention it is that I remember MS bragging about doing dataflow analysis to some degree in 2019, and I did see Rider complaining about some "unsafe assignments". If the old code is long lost - never mind.

CTOR-wise: if the CTOR hasn't finished yet, we all know that the object "doesn't exist" yet and has invalid state, but to circumvent that, C# uses default values for everything. When you see code with constant assignments at the point of definition of data members that look trivial and pointless, the reason is that a lot of people remember C++ and don't trust implicit defaults - just in case :-)

There is a 2-phase/round initialization sequence with 2 CTORs and implicit initializations in between. Not widely documented (so that people with weak hearts don't use it :-) but completely deterministic and thread-safe (hidden fuses everywhere). Just for the sake of its stability, you never-ever want to have a call to any method before the 2nd round is done (a plain CTOR being done still doesn't mean a fully constructed object, and any method invocation from the outside may trigger the 2nd round prematurely).

The 1st (plain) CTOR can be used in implicit initializations before the 2nd runs => you can control the (implicit) ordering; you just have to be careful and step through it in the debugger.

Oh, and ToString normally shouldn't be defined at all - on purpose :-) It's de facto intrinsic => the compiler can take its liberties with it. Plus, if you define it, pretty soon you'll be obliged to support (and process) format specifiers.

I used to define ToJson (before the big libs came to the fore) to provide, let's say, a controllable printable (which can also go over the wire and is 10-100 times faster than deserialization). These days the VS debugger has a collection of "visualizers" and an option to tell the debugger to use them or not (when it's off, it will jerk ToString's chain if it sees it).

Also, it's good to have dotPeek (or actual Reflector, owned by Redgate these days) with "find source code" turned off. Then you see the real generated code, which is sometimes glorious (String is intrinsic and the compiler goes a few extra miles to optimize its operations) and sometimes ugly (async/await - total faker, inefficient and flat-out dangerous - how do you say "deadlock" in C#? :-) - not kidding), but you need to be able to see the final code or you are driving blind.
This is very dangerous so I wonder why it's allowed. Since I often need to switch between VB.NET and C# I sometimes add breakpoint-conditions like following:
foo = "bah"
I want to stop if the string variable foo is "bah", so the correct way would have been to use foo == "bah" instead of foo = "bah".
But it "works". You don't get any warnings or errors at compile- or runtime. But actually this modifies the variable foo, it makes it always "bah" even if it had a different value. Since that happens silently (the breakpoint never gets hit) it is incredibly dangerous.
Why is it allowed? Where is my error in reasoning (apart from confusing the C# and VB.NET syntax)? In C# (as opposed to VB.NET) an assignment statement returns the value that was assigned, so not a bool, but a string in this case. But a breakpoint condition has to be a bool if you check the box "Is True".
Here is a little sample "program" and screenshots from my (german) IDE:
static void Main()
{
    string foo = "foo";
    // breakpoint with assignment (foo = "bah") instead of comparison (foo == "bah"):
    Console.WriteLine(foo); // bah, variable is changed from the breakpoint window
}
The breakpoint-condition dialog:
The code as image including the breakpoint:
It is an automatic consequence of C# syntax, common in the curly-braces language group. An assignment is also an expression, its result is the value of the right-hand side operand. The debugger does not object either to expressions having side-effects, nor would it be simple at all to suppress them. It could be blamed for not checking that the expression has a bool result, the debugger however does not have a full-blown C# language parser. This might well be fixed in VS2015 thanks to the Roslyn project. [Note: see addendum at the bottom].
Also the core reason that the curly-brace languages need a separate operator for equality, == vs =. Which in itself must be responsible for a billion dollar worth of bugs, every C programmer makes that mistake at least once.
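A minimal illustration of why this mostly gets caught in compiled code (compiler behavior, not debugger behavior): C# rejects the mistake whenever the assigned type isn't bool, which is exactly why only the debugger's expression evaluator let the string version slip through.

```csharp
using System;

class Demo
{
    static void Main()
    {
        string s = "foo";
        // if (s = "bah") { }  // does not compile: cannot implicitly convert string to bool

        bool flag = false;
        if (flag = true)       // compiles (with warning CS0665): assigns, then tests the result
        {
            Console.WriteLine(flag); // flag was silently overwritten
        }
    }
}
```

So with bool operands the classic C pitfall survives even in C#; everywhere else the type system stops it, except in an expression evaluator that never checks the result type.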
VB.NET is different, assignment is a statement and the = token is valid for both assignment and comparison. You can tell from the debugger, it picks the equality operator instead and doesn't modify the variable.
Do keep in mind that this is actually useful. It lets you temporarily work around a bug, forcing the variable value and allowing you to continue debugging and focus on another problem. Or create a test condition. That's pretty useful. In a previous life-time, I wrote a compiler and debugger and implemented "trace points". Discovered the same scenario by accident and left it in place. It ran in a host that relied heavily on state machines, overriding the state variable while debugging was incredibly useful. The accident, no, not so useful :)
A note about what other SO users are observing, it depends on the debugging engine that you use. The relevant option in VS2013 is Tools + Options, Debugging, General, "Use Managed Compatibility Mode" checkbox. Same option exists in VS2012, it had a slightly different name (don't remember). When ticked you get an older debugging engine, one that is still compatible with C++/CLI. Same one as used in VS2010.
So that's a workaround for VS2013, untick the option to get the debugger to check that the expression produces a bool result. You get some more goodies with that new debugging engine, like seeing method return values and Edit+Continue support for 64-bit processes.
I disagree about the buggy nature of this behavior. A few examples from my own debugging where it was useful:
1. Another thread modifies something in your code.
2. Another service updated a value in the db.
So I suppose for cases of synchronization it can be a useful feature, but I agree that it can cause problems.
In Visual Studio 2019, variable assignment expressions in Breakpoint conditions no longer work.
The expression will be evaluated for value, but side effects such as variable assignment are discarded.
https://developercommunity.visualstudio.com/content/problem/715716/variables-cannot-be-assigned-in-breakpoint-conditi.html
I have stumbled across a situation where garbage collection seems to be behaving differently between the same code running written as a Unit Test vs written in the Main method of a Console Application. I am wondering the reason behind this difference.
In this situation, a co-worker and I were in disagreement over the effects of registering an event handler on garbage collection. I thought that a demonstration would be better accepted than simply sending him a link to a highly rated SO answer. As such I wrote a simple demonstration as a unit test.
My unit test showed things worked as I said they should. However, my coworker wrote a console application that showed things working his way, which meant that GC was not occurring as I expected on the local objects in the Main method. I was able to reproduce the behavior he saw simply by moving the code from my test into the Main method of a Console Application project.
What I would like to know is why GC does not seem to collecting objects as expected when running in the Main method of a Console Application. By extracting methods so that the call to GC.Collect and the object going out of scope occurred in different methods, the expected behavior was restored.
These are the objects I used to define my test. There is simply an object with an event and an object providing a suitable method for an event handler. Both have finalizers setting a global variable so that you can tell when they have been collected.
private static string Log;

public const string EventedObjectDisposed = "EventedObject disposed";
public const string HandlingObjectDisposed = "HandlingObject disposed";

private class EventedObject
{
    public event Action DoIt;

    ~EventedObject()
    {
        Log = EventedObjectDisposed;
    }

    protected virtual void OnDoIt()
    {
        Action handler = DoIt;
        if (handler != null) handler();
    }
}

private class HandlingObject
{
    ~HandlingObject()
    {
        Log = HandlingObjectDisposed;
    }

    public void Yeah()
    {
    }
}
This is my test (NUnit), which passes:
[Test]
public void TestReference()
{
    {
        HandlingObject subscriber = new HandlingObject();
        {
            {
                EventedObject publisher = new EventedObject();
                publisher.DoIt += subscriber.Yeah;
            }
            GC.Collect(GC.MaxGeneration);
            GC.WaitForPendingFinalizers();
            Thread.MemoryBarrier();
            Assert.That(Log, Is.EqualTo(EventedObjectDisposed));
        }
        // Assertion needed for the subscriber reference, else optimization causes it to already be collected.
        Assert.IsNotNull(subscriber);
    }
    GC.Collect(GC.MaxGeneration);
    GC.WaitForPendingFinalizers();
    Thread.MemoryBarrier();
    Assert.That(Log, Is.EqualTo(HandlingObjectDisposed));
}
I pasted the body above into the Main method of a new console application, and converted the Assert calls to Trace.Assert invocations. Both equality asserts then fail. Code of the resulting Main method is here if you want it.
I do recognize that when GC occurs should be treated as non-deterministic and that generally an application should not be concerning itself with when exactly it occurs.
In all cases the code was compiled in Release mode and targeting .NET 4.5.
Edit: Other things I tried
Making the test method static since NUnit supports that; test still worked.
I also tried extracting the whole Main method into an instance method on program and calling that. Both asserts still failed.
Attributing Main with [STAThread] or [MTAThread] in case this made a difference. Both asserts still failed.
Based on #Moo-Juice's suggestions:
I referenced NUnit from the console app so that I could use the NUnit asserts; they failed.
I tried various changes of visibility on the test, the test's class, the Main method, and the class containing the Main method. No change.
I tried making the test class static and the class containing the Main method static. No change.
If the following code was extracted to a separate method, the test would be more likely to behave as you expected. Edit: Note that the wording of the C# language specification does not require this test to pass, even if you extract the code to a separate method.
{
EventedObject publisher = new EventedObject();
publisher.DoIt += subscriber.Yeah;
}
The specification allows but does not require that publisher be eligible for GC immediately at the end of this block, so you should not write code in such a way that you are assuming it can be collected here.
Edit: from ECMA-334 (C# language specification) §10.9 Automatic memory management (emphasis mine)
If no part of the object can be accessed by any possible continuation of execution, other than the running of finalizers, the object is considered no longer in use and it becomes eligible for finalization. [Note: Implementations might choose to analyze code to determine which references to an object can be used in the future. For instance, if a local variable that is in scope is the only existing reference to an object, but that local variable is never referred to in any possible continuation of execution from the current execution point in the procedure, an implementation might (but is not required to) treat the object as no longer in use. end note]
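As an aside (my addition, not from the spec): the robust way to pin a lifetime in tests like this, instead of relying on an Assert call to keep the reference alive, is GC.KeepAlive, which guarantees the reference stays reachable up to that call regardless of optimization level.

```csharp
using System;

class Program
{
    static void Main()
    {
        var subscriber = new object(); // stand-in for the HandlingObject above

        GC.Collect(GC.MaxGeneration);
        GC.WaitForPendingFinalizers();

        // 'subscriber' is guaranteed to be treated as reachable until this
        // call executes, in Debug or Release, with or without a debugger.
        GC.KeepAlive(subscriber);

        Console.WriteLine("done");
    }
}
```

Conversely, if you want an object to be eligible for collection as early as the spec allows, avoid touching the variable after its last real use.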
The problem isn't that it's a console application - the problem is that you're likely running it through Visual Studio - with a debugger attached! And/or you're compiling the console app as a Debug build.
Make sure you're compiling a Release build. Then go to Debug -> Start Without Debugging, or press Ctrl+F5, or run your console application from a command line. The Garbage Collector should now behave as expected.
This is also why Eric Lippert reminds you not to run any perf benchmarks in a debugger in C# Performance Benchmark Mistakes, Part One.
The jit compiler knows that a debugger is attached, and it deliberately de-optimizes the code it generates to make it easier to debug. The garbage collector knows that a debugger is attached; it works with the jit compiler to ensure that memory is cleaned up less aggressively, which can greatly affect performance in some scenarios.
Lots of the reminders in Eric's series of articles apply to your scenario. If you're interested in reading more, here are the links for parts two, three and four.
I am trying to assess the performance of a program I'm writing.
I have a method:
public double FooBar(ClassA firstArg, EnumType secondArg)
{
[...]
If I check the Function Details in the VS Performance Analyser for FooBar, I can see that the method accounts for 14% of the total time (inclusive), and that 10% is spent in the body of the method itself. The thing that I cannot understand is that it looks like 6.5% of the total time (both inclusive and exclusive) is spent on the opening brace of this method; it is actually the most time-consuming line in the code (as far as exclusive time is concerned).
The method is not overriding any other method. The profiling is done in the Debug configuration using sampling; the run lasts about 150s, and that 6.5% corresponds to more than 3000 samples out of a total of 48000.
Can someone explain to me what is happening on this line, and whether there is a way to improve that behaviour?
The first opening curly brace of the method is where the profiler attributes the time spent on method initialization.
During method initialization, the local variables are allocated and initialized.
Be aware that all the local variables of the method are initialized before execution starts, even if they are declared in the middle of the body.
To reduce the initialization time, try moving local variables to the heap or, if they are only used sometimes (like variables inside an if branch or after a return), extract the piece of code that uses them into another method.
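The extraction suggested above can be sketched like this (all names hypothetical; the stackalloc buffer just stands in for locals that are expensive to zero-initialize at method entry, assuming the compiler emits the usual `.locals init` flag):

```csharp
using System;

public class Sample
{
    // Before (hypothetical): the big buffer is zero-initialized at the opening
    // brace on every call, even though only the rare branch ever touches it.
    public double Before(int x)
    {
        Span<byte> scratch = stackalloc byte[4096]; // zeroed on method entry
        if (x < 0)
        {
            scratch[0] = 1;  // rare path
            return scratch[0];
        }
        return x * 2.0;      // common path paid for the zeroing anyway
    }

    // After: the rare path owns its expensive locals, so the common path
    // no longer pays their initialization cost.
    public double After(int x) => x < 0 ? RarePath() : x * 2.0;

    private double RarePath()
    {
        Span<byte> scratch = stackalloc byte[4096];
        scratch[0] = 1;
        return scratch[0];
    }
}
```

Whether this actually moves the needle should be confirmed by re-profiling; the 4 KB figure here is purely illustrative.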
I have built an abstract class that is used to handle command line options for our products.
One need only to create a class inheriting from AbstractOptions, fill it with decorated fields and call the inherited Parse(args) method to have it automatically filled through reflection with values from the command line. Values that were not found on the command line retain their current (default) values.
Then, the application needs only to check the option fields to get their value. The AbstractOptions class provides more features, like help output, etc, but it is beside the point.
Short example:
public class SignalOptions : AbstractOptions
{
    [Option("-i, --iterations", "Number of iterations (0 = infinite).")]
    volatile public int NumberOfIterations;

    [Option("-s, --silent", "Silent mode, only display final results.")]
    volatile public bool Silent;

    [Option("-w, --zwindow", "Window size for z-score analysis.")]
    volatile public int ZWindow = 7;

    [Option("-a, --zalert", "z-score value to consider as peak.")]
    public double ZAlert = 2.1;
}
static void Main(string[] args)
{
    var opts = new SignalOptions();
    opts.Parse(args);

    // If optimizations are turned off, SILENT will be written or not,
    // following the presence or absence of the --silent switch on the command line.
    // If optimizations are turned on, SILENT will never be written.
    // The reflection part is working fine. I suspect the problem is that
    // the compiler or the jitter, having never found this set anywhere, simply inlines
    // the value 'false' in the if, because when I step on it, it shows me the value as
    // true or false, but has the same behavior no matter what value opts.Silent has.
    if (opts.Silent)
        Console.WriteLine("SILENT");
}
Now, the problem I have is that since the compiler does not find any code actually changing the values of the SignalOptions class, it simply inlines the values where they are used in the code. I have circumvented the issue by requiring that all 'option' fields in the class be volatile, so no optimization is applied, and it works fine; unfortunately, the volatile keyword is not valid on a double.
I have spent much time on the net trying to find a workaround, without success. Is there any way to either prevent optimizations on the fields, or otherwise fool the compiler/jitter into thinking they are used at runtime?
I would also like to put as little onus as possible on the calling application.
Thanks
I have a local copy here with Parse written as the rather opaque:

public void Parse(string[] args)
{
    // deliberately opaque, not that it would make any difference
    string fieldName = (new string('S', 1) + "ilentX").Substring(0, 6);
    GetType().GetField(fieldName).SetValue(this, true);
}
It works fine. I do not believe the problem is what you think it is.
Here is my guess:
Parse is running on a separate thread, but your synchronization is somehow flawed, which makes the rest of the code run without the values having been set yet.
This would also explain why you are seeing the correct values in the debugger.
Update (opinionated):
Having Parse run in a separate thread is very weird and should be considered a design flaw. It sounds like someone was thinking 'Reflection is slow, let's put it in a separate thread'.
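For completeness: if constant-folding of a non-volatile field really were the culprit, one hedged workaround for the double case (volatile is invalid on double, but System.Threading.Volatile.Read/Write have double overloads, available since .NET 4.5) would be to wrap the field in a property:

```csharp
using System.Threading;

public class SignalOptions // : AbstractOptions
{
    private double zAlert = 2.1; // backing field; Parse would set this via reflection

    // Volatile.Read/Write provide the "always re-read the latest value"
    // semantics that the volatile keyword cannot provide for a double field.
    public double ZAlert
    {
        get { return Volatile.Read(ref zAlert); }
        set { Volatile.Write(ref zAlert, value); }
    }
}
```

Note this changes the reflection contract: Parse would have to set the private backing field (via GetField with BindingFlags.NonPublic) or the property itself, rather than a public field.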