What are tracepoints used for? - c#

They can only be placed on method names. How are they used and what are they for?

The Debugger team has a good blog post on this subject with examples: http://blogs.msdn.com/b/visualstudioalm/archive/2013/10/10/tracepoints.aspx (archived copy: https://web.archive.org/web/20190109221722/https://blogs.msdn.microsoft.com/devops/2013/10/10/tracepoints/)
Tracepoints are not a new feature at all (they've been in Visual Studio since VS 2005). And they aren't breakpoints per se, as they don't cause program execution to break. That can be useful when you need to inspect something without stopping the program, for example when stopping would keep a bug from reproducing.
Tracepoints address the case where you can't stop the program to inspect something because doing so would prevent the behavior from reproducing: they allow a breakpoint to log information to the debug output window and continue, without pausing in the UI. You can also do this with macros, but it is more time consuming.
To set a tracepoint, first set a breakpoint in code. Then use the context menu on the breakpoint and select the “When Hit...” menu item. You can now add log statements for the breakpoint and switch off the default Stop action, so that you log and go. There is a host of other info you can add to the log string, including static information about the location of the breakpoint such as file, line, function and address. You can also add dynamic information such as expressions, the calling function, or the call stack. Adding thread and process info can help you track down timing bugs when dealing with multiple threads and/or processes.

A use case where tracepoints prove really helpful in debugging:
Suppose you want to debug a function that gets called numerous times (say, hundreds), and you just want to see the trend in which a local variable changes. You could do this with a breakpoint, but imagine stopping at that function hundreds of times and painstakingly noting down the values in Notepad. With a tracepoint it's easy: the logs go straight to the Output window, where they can be analysed or cleared, saving hours of manual effort and patience.
Example log at Output window(can run to hundreds of lines):
keyframeNo = 2, time = 1100
keyframeNo = 1, time = 0
keyframeNo = 1, time = 1
keyframeNo = 1, time = 1
keyframeNo = 1, curTime = 22
keyframeNo = 15, curTime = 1132835
keyframeNo = 2, time = 1
keyframeNo = 2, time = 1
How to use it:
Right-click in the code > Breakpoint > Insert Tracepoint
Advantages of using tracepoints:
There is no need to add code for generating logs, so no need to rebuild, and no overhead of cleaning the code up afterwards.
It doesn't interrupt the flow of the executing code, unlike breakpoints.
It can print the value of local variables as well: enter {local_variable} in the message after clicking "When Hit...".
You can also insert tracepoints while debugging, just as you can breakpoints.
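To make the curly-brace message syntax concrete, here is a hypothetical sketch based on the keyframe example above (the method and variable names are invented). The message is typed into the "When Hit..." dialog, not into the source:

```csharp
using System;

static class KeyframeDemo
{
    // Hypothetical method: set a tracepoint on the marked line, uncheck
    // "Break", and use this as the "When Hit..." message:
    //     keyframeNo = {keyframeNo}, time = {time}
    // Each hit then writes one line, e.g. "keyframeNo = 2, time = 1100",
    // to the Output window without pausing execution.
    public static int UpdateKeyframes(int[] times)
    {
        int processed = 0;
        for (int keyframeNo = 0; keyframeNo < times.Length; keyframeNo++)
        {
            int time = times[keyframeNo];
            processed++;               // <-- tracepoint here
        }
        return processed;
    }
}
```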

According to MSDN (from when the feature was introduced):
Tracepoints are a new debugger feature in Visual Studio. A tracepoint is a breakpoint with a custom action associated with it. When a tracepoint is hit, the debugger performs the specified tracepoint action instead of, or in addition to, breaking program execution.

Related

Is there object/class browsing in Visual Studio with C# while debugging/code tracing?

So my idea is that I want to see what an object's members or properties would return, or how they would change, while I am debugging/tracing. There is the Object Browser, but it only shows a tree list of an object's members.
For example, let's say
var cacheDir = context.CacheDir;
But I want to change .CacheDir to .ExternalCacheDir while debugging to see what value would be returned to the variable.
var cacheDir = context.ExternalCacheDir;
Otherwise, I have to change it in the editor and restart the whole debugging session. I think we can do something like this in a browser developer console or in a Jupyter-notebook-like CLI environment.
With the C# keyboard settings, press Ctrl+Alt+I to open the Immediate window (or type Immed in the Command window).
In the Immediate window you can run ad-hoc commands.
So in the debugger IDE you'd step over the line of code:
var cacheDir = context.CacheDir;
And now you want to tweak it as a one-off, so press Ctrl+Alt+I.
Then paste:
cacheDir = context.ExternalCacheDir;
And press Enter. You can always revert in the Immediate window, e.g.:
cacheDir = context.CacheDir;
If you just want to see the value of a variable, you can type ? cacheDir. Give it a go :)
While you're debugging, you can use Watch windows to watch variables and expressions.
Open a Watch window by selecting Debug > Windows > Watch > Watch 1,
or pressing Ctrl+Alt+W > 1.
In the Watch window, select an empty row and type a variable or an expression.
Continue debugging by selecting Debug > Step Into or pressing F11 as needed to advance.
The variable values in the Watch window change as you iterate through the for loop.
There are many ways to see variable values in VS: you can use the Watch window, hover over a variable to see a DataTip, or use the Immediate window. You can also check out OzCode, which provides a heads-up display showing variable values without the need to open any window, offers a convenient way to choose the properties you want to see, and provides a Google-like search over variable names and values.
In the next version of OzCode (a preview is available for download) you can use OzCode Predict, which also supports VS Edit & Continue.

Conditional Break point at Multiple hitcount Times in Visual Studio

How can I set a breakpoint to be hit on multiple hit counts?
As in the figure above, I want it to break when the hit count is 234, 345, 567, 1234, 2314, etc.
The dialog doesn't allow me to enter a comma-separated list.
You CAN set "break when the hit count is a multiple of" a number. In your example, this would break at 234, 468, 702, ...:
MSDN: Breakpoint Hit Count Dialog Box
Q: Is that sufficient?
Q: If not, is there any specific reason (besides sheer perverseness) you'd need those exact hit counts at one breakpoint?
ONE OTHER ALTERNATIVE:
If you wanted, you can keep your own counter and programmatically invoke a break in C# any time you want:
MSDN: System.Diagnostics.Debugger.Break()
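A sketch of that alternative, assuming you want to break on an arbitrary set of hit counts (the class, the helper method, and the HitCounts set are all invented for illustration):

```csharp
using System.Collections.Generic;
using System.Diagnostics;

static class ConditionalBreak
{
    // The arbitrary hit counts to break on (unlike the Hit Count dialog,
    // which supports only "equals", ">=", or "multiple of" a single value).
    static readonly HashSet<int> HitCounts =
        new HashSet<int> { 234, 345, 567, 1234, 2314 };

    static int hits;

    // Call this at the line where the breakpoint would have been.
    public static int CheckPoint()
    {
        hits++;
        // Guarded so it is harmless when no debugger is attached.
        if (HitCounts.Contains(hits) && Debugger.IsAttached)
            Debugger.Break();
        return hits;
    }
}
```

Remember to remove the call when you're done; unlike a real breakpoint, it ships with the code.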

How to prevent loops in JavaScript that crash the browser or Apps?

I am creating a live editor in a Windows 8.1 app using JavaScript. I'm almost done, but the problem is that whenever I run bad loops or functions like the one below, the app hangs or exits.
I test it with a loop such as (it's just an example; users may write loops in their own way):
for (i = 0; i <= 50000; i++) {
    for (j = 0; j < 5000; j++) {
        $('body').append('hey I am a bug<br>');
    }
}
I know that this is a worst case for any app or browser to handle. So if the user writes such a loop, how do I handle it and still produce their output?
Or, if it's not possible to protect my app from that kind of loop, how do I detect that it is dangerous so I can warn the user:
Running this snippet may crash the app!
One idea is to check the code with a regular expression: if the code contains something like for(i=0;i<=5000;i++), show the alert above. How would I write a regex for that?
I am also able to use C# as the back end.
Unfortunately, without doing some deep and complex analysis of the edited code, you'll not be able to fully prevent errant JavaScript that kills your application. You could, for example, use a library that builds an abstract syntax tree from the JavaScript and refuse to execute the code if certain patterns are found. But the number of patterns that could cause an infinite loop is large, so they would not be simple to find, and the approach is unlikely to be robust enough.
In your example, for instance, you could modify the code to be like this:
for (i = 0; !timeout() && i <= 50000; i++) {
    for (j = 0; !timeout() && j < 5000; j++) {
        $('body').append('hey I am a bug<br>');
    }
}
I've "injected" a call to a function you'd write called timeout. It would need to detect whether the loop should be aborted because the script has been running too long.
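A minimal sketch of such a timeout function, assuming a fixed time budget (the 3-second figure and the makeTimeout helper are arbitrary choices, not part of any API):

```javascript
// Capture a deadline once, before the rewritten user code runs, and
// report whether it has passed. The injected `!timeout() &&` checks
// then abort the loops once the budget is spent.
function makeTimeout(budgetMs) {
  var deadline = Date.now() + budgetMs;
  return function timeout() {
    return Date.now() > deadline;
  };
}

var timeout = makeTimeout(3000); // recreate before each run of user code
```

Recreating the function per run matters: a single global deadline would abort every snippet once the first budget expired.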
But the code could have been written with a do-while instead, so that type of loop would need to be handled as well.
The example uses jQuery in a tight loop and modifies the DOM, which means solutions that try to isolate the JavaScript in a Web Worker would be complex: a worker is not allowed to manipulate the DOM directly and can only send and receive string messages.
If you had used the XAML/C# WebView to host (and build) the JavaScript editor, you could have considered using an event that is raised called WebView.LongRunningScriptDetected. It is raised when a long running script is detected, providing the host the ability to kill the script before the entire application becomes unresponsive and is killed.
Unfortunately, this same event is not available in the x-ms-webview control which is available in a WinJS project.
I've got 2 solutions:
1.
My first solution is to define a variable
startSecond = new Date().getSeconds();.
Then, using a regex, I insert this piece of code inside the nested loop:
;if(startSecond < new Date().getSeconds())break;
So each time the loop runs, it checks whether startSecond is less than the current seconds value, new Date().getSeconds().
For example, startSecond may be 22 and new Date().getSeconds() may return 24. The if condition then succeeds, so it breaks the loop.
(Note that getSeconds() wraps around at 60, so a check based on Date.now() milliseconds would be more robust for loops that straddle a minute boundary.)
A harmless loop should normally finish within 2 to 3 seconds anyway.
Small loops like for(var i=0;i<30;i++){} will run to completion, while big loops are cut off after a few seconds, which is perfectly OK.
My solution uses your own example of 50000*5000, but it doesn't crash!
Live demo: http://jsfiddle.net/nHqUj/4
2.
My second solution is to define two variables, start and max.
max should be the maximum number of iterations you are willing to allow, e.g. 1000.
Then, using a regex, I insert this piece of code inside the nested loop:
;start+=1;if(start>max)break;
So, what it does is each time the loop runs, it does two things:
Increments the value of start by 1.
Checks whether start is greater than the max. If yes, it breaks the loop.
This solution also uses your own example of 50000*5000, but it doesn't crash!
Updated demo: http://jsfiddle.net/nHqUj/3
Regex I'm using: (?:(for|while|do)\s*\([^\{\}]*\))\s*\{([^\{\}]+)\}
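As a sketch of how the injection might work with String.replace — note I've turned the whole loop header into a capturing group (the regex above captures only the keyword), and start and max are assumed to have been defined by previously injected code:

```javascript
// $1 = the full loop header, $3 = the innermost loop body;
// the counter guard is prepended to the body.
var loopRe = /((for|while|do)\s*\([^{}]*\))\s*\{([^{}]+)\}/g;

function injectGuard(code) {
  return code.replace(loopRe, "$1{start+=1;if(start>max)break;$3}");
}

injectGuard("for(j=0;j<5000;j++){ total += j; }");
// -> "for(j=0;j<5000;j++){start+=1;if(start>max)break; total += j; }"
```

Because the body pattern excludes braces, only the innermost loop of a nest is matched, which is exactly where the guard does the most good.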
One idea, though I'm not sure what your editor is capable of: if you can somehow detect that a loop may cause a problem (say, a loop of more than 200 iterations), you could rewrite the user's code as below to produce the output without hanging. Frankly, I'm not sure it will work for you.
var j = 0;
var inter = setInterval(function () {
    if (j < 5000) {
        $('#test').append('hey I am a bug<br>');
        ++j;
    } else {
        clearInterval(inter);
    }
}, 100);
Perhaps inject timers around for loops and check the time at the first line of each loop body. Do this for every loop.
Regex: /for\([^{]*\)[\s]*{/
Example:
/for\([^{]*\)[\s]*{/.test("for(var i=0; i<length; i++){");
> true
Now, if you use replace and wrap the for header in a capturing group, you can get the result you want.
var code = "for(var i=0; i<length; i++){",
    testRegex = /(for\([^{]*\)[\s]*{)/g,
    matchReplace = "var timeStarted = new Date().getTime();" +
        "$1" +
        "if (new Date().getTime() - timeStarted > maxPossibleTime) {" +
        "break; // or bail out some other way" +
        "}";
code = code.replace(testRegex, matchReplace);
You cannot tell what the user is trying to do with a simple regex. Let's say the user writes the code like this:
for (i = 0; i <= 5; i++)
{
    for (j = 0; j <= 5; j++) {
        if (j >= 3) {
            i = i * 5000;
            j = j * 5000;
        }
        $('body').append('hey I am a bug<br>');
    }
}
Then a simple regex cannot catch it, because the value of i is only blown up after some iterations. The best way to solve the problem is to benchmark. Say your app hangs after 3 minutes of continuous processing (assume that until it hits 3 minutes, it runs fine). Then, whatever code the user runs, start a timer before the process; if it takes more than 2.5 minutes, kill it inside your app and show a popup saying 'Running this snippet may crash the app!'. This way you don't need a regex or any verification of whether the user's code is bad.
Try this... Might help... Cheers!!!
Let's assume you are doing this in the window context and not in a worker. Put a function called rocketChair in every single inner loop. This function is simple: it increments a global counter and checks the value against a global ceiling. When the ceiling is reached, rocketChair summarily throws "eject from perilous code". At that point you can also save any state you wish to preserve to a global state variable.
Wrap your entire app in a single try catch block and when rocket chair ejects you can save the day like the hero you are.
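A sketch of rocketChair under those assumptions (the ceiling value is arbitrary and the error message is just a sentinel):

```javascript
var rocketChairCount = 0;
var ROCKET_CHAIR_CEILING = 1000000; // global ceiling; tune for your app

// Injected into every inner loop: bump the counter, eject when it blows.
function rocketChair() {
  if (++rocketChairCount > ROCKET_CHAIR_CEILING) {
    throw new Error("eject from perilous code");
  }
}

// The single try/catch wrapper around the user's code.
function runUserCode(fn) {
  rocketChairCount = 0; // fresh budget for each run
  try {
    fn();
    return "completed";
  } catch (e) {
    if (e.message === "eject from perilous code") {
      return "aborted"; // saved the day
    }
    throw e; // a genuine error in the user's code
  }
}
```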

WPF application sometimes (randomly) selects wrong value while printing

I am not sure how to put this. The code is actually right: 99% of the time the printout shows correct values, but now and then it prints some other value. If I print the same page again, the correct value is restored.
What can be the reason for this, and how can I track the error down? Whenever I run the application in VS on my development PC, everything seems correct. Has this happened to someone else, not just in WPF but in Windows or web applications?
EDIT
After doing ~50 test entries, I was able to reproduce the error once, and I noticed the following (rather than pasting code, I'll explain in general):
A = 100
B = 9
C = A+B // but sometimes C gets the value 1000, treating B as 900
Actual code
VatOnAmount = ((decimal)record.Element("Amount") +
(decimal)record.Element("Invoice").Element("CommissionAmount"))
It sounds like you have an issue where your data binding / element values may not yet be updated at the time the property is read. You may want to put some trace statements in your code to output the values and see. Also, do you get the same issue if you step through in the debugger? Sometimes that changes race-condition issues such as this.

Function profiling woes - Visual Studio 2010 Ultimate

I am trying to profile my application to measure the effect of a function, both before and after refactoring. I have performed an analysis of my application and, looking at the Summary, I've noticed that the Hot Path list does not mention any of my own functions; it only shows functions up to Application.Run().
I'm fairly new to profiling and would like to know how I could get more information about the Hot Path, as demonstrated in the MSDN documentation.
MSDN Example:
My Results:
I've noticed in the Output Window there are a lot of messages relating to a failure when loading symbols, a few of them are below;
Failed to load symbols for C:\Windows\system32\USP10.dll.
Failed to load symbols for C:\Windows\system32\CRYPTSP.dll.
Failed to load symbols for (Omitted)\WindowsFormsApplication1\bin\Debug\System.Data.SQLite.dll.
Failed to load symbols for C:\Windows\system32\GDI32.dll.
Failed to load symbols for C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7601.17514_none_41e6975e2bd6f2b2\comctl32.dll.
Failed to load symbols for C:\Windows\system32\msvcrt.dll.
Failed to load symbols for C:\Windows\Microsoft.NET\Framework\v4.0.30319\nlssorting.dll.
Failed to load symbols for C:\Windows\Microsoft.Net\assembly\GAC_32\System.Data\v4.0_4.0.0.0__b77a5c561934e089\System.Data.dll.
Failed to load symbols for C:\Windows\Microsoft.Net\assembly\GAC_32\System.Transactions\v4.0_4.0.0.0__b77a5c561934e089\System.Transactions.dll.
Unable to open file to serialize symbols: Error VSP1737: File could not be opened due to sharing violation: - D:\(Omitted)\WindowsFormsApplication1110402.vsp
(Formatted using code tool so it's readable)
Thanks for any pointers.
The "Hot Path" shown on the summary view is the most expensive call path based on the number of inclusive samples (samples from the function and also samples from functions called by the function) and exclusive samples (samples only from the function). A "sample" is just the fact the function was at the top of the stack when the profiler's driver captured the stack (this occurs at very small timed intervals). Thus, the more samples a function has, the more it was executing.
By default for sampling analysis, a feature called "Just My Code" is enabled that hides functions on the stack coming from non-user modules (it will show a depth of 1 non-user functions if called by a user function; in your case Application.Run). Functions coming from modules without symbols loaded or from modules known to be from Microsoft would be excluded. Your "Hot Path" on the summary view indicates that the most expensive stack didn't have anything from what the profiler considers to be your code (other than Main). The example from MSDN shows more functions because the PeopleTrax.* and PeopleNS.* functions are coming from "user code". "Just My Code" can be turned off by clicking the "Show All Code" link on the summary view, but I would not recommend doing so here.
Take a look at the "Functions Doing The Most Individual Work" on the summary view. This displays functions that have the highest exclusive sample counts and are therefore, based on the profiling scenario, the most expensive functions to call. You should see more of your functions (or functions called by your functions) here. Additionally, the "Functions" and "Call Tree" view might show you more details (there's a drop-down at the top of the report to select the current view).
As for your symbol warnings, most of those are expected because they are Microsoft modules (not including System.Data.SQLite.dll). While you don't need the symbols for these modules to properly analyze your report, if you checked "Microsoft Symbol Servers" in "Tools -> Options -> Debugging -> Symbols" and reopened the report, the symbols for these modules should load. Note that it'll take much longer to open the report the first time because the symbols need to be downloaded and cached.
The other warning about the failure to serialize symbols into the report file is the result of the file not being able to be written to because it is open by something else that prevents writing. Symbol serialization is an optimization that allows the profiler to load symbol information directly from the report file on the next analysis. Without symbol serialization, analysis simply needs to perform the same amount of work as when the report was opened for the first time.
And finally, you may also want to try instrumentation instead of sampling in your profiling session settings. Instrumentation modifies modules that you specify to capture data on each and every function call (be aware that this can result in a much, much larger .vsp file). Instrumentation is ideal for focusing in on the timing of specific pieces of code, whereas sampling is ideal for general low-overhead profiling data collection.
Do you mind too much if I talk a bit about profiling, what works and what doesn't?
Let's make up an artificial program, some of whose statements are doing work that can be optimized away - i.e. they are not really necessary.
They are "bottlenecks".
Subroutine foo runs a CPU-bound loop that takes one second.
Also assume subroutine CALL and RETURN instructions take insignificant or zero time, compared to everything else.
Subroutine bar calls foo 10 times, but 9 of those times are unnecessary, which you don't know in advance and can't tell until your attention is directed there.
Subroutines A, B, C, ..., J are 10 subroutines, and they each call bar once.
The top-level routine main calls each of A through J once.
So the total call tree looks like this:
main
A
bar
foo
foo
... total 10 times for 10 seconds
B
bar
foo
foo
...
...
J
...
(finished)
How long does it all take? 100 seconds, obviously.
Now let's look at profiling strategies.
Stack samples (like say 1000 samples) are taken at uniform intervals.
Is there any self time? Yes. foo takes 100% of the self time.
It's a genuine "hot spot".
Does that help you find the bottleneck? No. Because it is not in foo.
What is the hot path? Well, the stack samples look like this:
main -> A -> bar -> foo (100 samples, or 10%)
main -> B -> bar -> foo (100 samples, or 10%)
...
main -> J -> bar -> foo (100 samples, or 10%)
There are 10 hot paths, and none of them look big enough to gain you much speedup.
IF YOU HAPPEN TO GUESS, and IF THE PROFILER ALLOWS, you could make bar the "root" of your call tree. Then you would see this:
bar -> foo (1000 samples, or 100%)
Then you would know that foo and bar were each independently responsible for 100% of the time and therefore are places to look for optimization.
You look at foo, but of course you know the problem isn't there.
Then you look at bar and you see the 10 calls to foo, and you see that 9 of them are unnecessary. Problem solved.
IF YOU DIDN'T HAPPEN TO GUESS, and instead the profiler simply showed you the percent of samples containing each routine, you would see this:
main 100%
bar 100%
foo 100%
A 10%
B 10%
...
J 10%
That tells you to look at main, bar, and foo. You see that main and foo are innocent. You look at where bar calls foo and you see the problem, so it's solved.
It's even clearer if in addition to showing you the functions, you can be shown the lines where the functions are called. That way, you can find the problem no matter how large the functions are in terms of source text.
NOW, let's change foo so that it does sleep(oneSecond) rather than be CPU bound. How does that change things?
What it means is it still takes 100 seconds by the wall clock, but the CPU time is zero. Sampling in a CPU-only sampler will show nothing.
So now you are told to try instrumentation instead of sampling. Contained among all the things it tells you, it also tells you the percentages shown above, so in this case you could find the problem, assuming bar was not very big. (There may be reasons to write small functions, but should satisfying the profiler be one of them?)
Actually, the main thing wrong with the sampler was that it can't sample during sleep (or I/O or other blocking), and it doesn't show you code line percents, only function percents.
By the way, 1000 samples gives you nice precise-looking percents. Suppose you took fewer samples. How many do you actually need to find the bottleneck? Well, since the bottleneck is on the stack 90% of the time, if you took only 10 samples, it would be on about 9 of them, so you'd still see it.
If you even took as few as 3 samples, the probability it would appear on two or more of them is 97.2%.**
High sample rates are way overrated, when your goal is to find bottlenecks.
Anyway, that's why I rely on random-pausing.
** How did I get 97.2 percent? Think of it as tossing a coin 3 times, a very unfair coin, where "1" means seeing the bottleneck. There are 8 possibilities:
#1s probability
0 0 0 0 0.1^3 * 0.9^0 = 0.001
0 0 1 1 0.1^2 * 0.9^1 = 0.009
0 1 0 1 0.1^2 * 0.9^1 = 0.009
0 1 1 2 0.1^1 * 0.9^2 = 0.081
1 0 0 1 0.1^2 * 0.9^1 = 0.009
1 0 1 2 0.1^1 * 0.9^2 = 0.081
1 1 0 2 0.1^1 * 0.9^2 = 0.081
1 1 1 3 0.1^0 * 0.9^3 = 0.729
so the probability of seeing it 2 or 3 times is .081*3 + .729 = .972
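The arithmetic above is just a binomial tail. A quick sanity check (p = 0.9 is the chance a single sample lands on the bottleneck):

```javascript
// P(at least k of n samples show the bottleneck), where the bottleneck is
// on the stack with probability p per sample: sum of C(n,i)*p^i*(1-p)^(n-i).
function atLeast(k, n, p) {
  function choose(n, i) {
    var c = 1;
    for (var j = 0; j < i; j++) c = c * (n - j) / (j + 1);
    return c;
  }
  var total = 0;
  for (var i = k; i <= n; i++) {
    total += choose(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
  }
  return total;
}

atLeast(2, 3, 0.9); // 3 * 0.081 + 0.729 ≈ 0.972
```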
