Are there any performance costs when static code analysis is enabled in a production (release) build?
Our CI server runs code analysis on a debug build of our C# projects, whereas the release build has static code analysis disabled (i.e. CODE_ANALYSIS not defined). If there's no reason to disable code analysis on production builds, then I'm wasting time with the debug build.
Reflector shows me that SuppressMessage attributes are excluded if code analysis is disabled, but I don't expect the extra attribute to affect run-time performance. Is that the only effect of enabling static code analysis (in Visual Studio 2013)?
There are actual differences when compiling with the CODE_ANALYSIS symbol defined: when it is not defined, the compiler removes all [SuppressMessage] attributes from the assembly (which may cause messages to show up when you later run FxCop from the command line, since the suppressions have been removed). If you're installing your binaries on an internal system, it may be fine to leave the suppressions in the binaries. Some companies want them removed from assemblies released to third parties, since the presence of these attributes (and the contents of their Justification properties) might disclose sensitive information.
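The mechanism behind this is that SuppressMessageAttribute is itself declared with [Conditional("CODE_ANALYSIS")], so the compiler only emits it into the assembly when that symbol is defined. A minimal sketch (the type and rule ID here are illustrative):

```csharp
#define CODE_ANALYSIS   // comment this out and the attribute vanishes from the IL

using System.Diagnostics.CodeAnalysis;

public static class Naming
{
    // With CODE_ANALYSIS undefined, this attribute is stripped from the
    // compiled assembly, so FxCop run later against the binary reports
    // the violation again (and its Justification text never ships).
    [SuppressMessage("Microsoft.Naming", "CA1704:IdentifiersShouldBeSpelledCorrectly",
        Justification = "Domain-specific abbreviation.")]
    public static void Frobnicate() { }
}
```

You can confirm the behavior by toggling the `#define` and inspecting the method's custom attributes via reflection.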
When running Code Analysis on a DEBUG build you might get stricter results: certain optimizations that occur in most RELEASE builds can prevent specific FxCop rules from firing. The optimizer may remove private methods (through inlining) or substitute the value of a constant for references to it. It is not possible for FxCop to validate items that have been removed, so this is to be expected.
For best results: Run Code Analysis in a Debug build. For least information disclosure, remove the CODE_ANALYSIS constant from Release builds.
Related
I have a library that will be built and published as a NuGet package in Release mode.
However, for debugging purposes it is useful to capture stack traces at various points.
Capturing a stack trace is relatively expensive, and I don't want to do it in Release. However, neither do I want to force everyone to replace my NuGet package with the Debug version when they want to debug their code.
Is there a way to check whether the executable that a DLL is running in was compiled in Debug or Release? In other words, even though my NuGet package was compiled in Release, I want to run different code depending on whether the executable that uses my library was built in Release or Debug.
The lines below appear to be in conflict with each other:
Is there a way to check if the executable that a dll is running in was
compiled with debug or release? In other words even though my nuget
package was compiled with release, I want to run different code
Usually it is possible to do something similar to what you are after by using pre-processor directives. This would allow your program to execute different paths according to, for instance, the name of the configuration used to build the project (note that the symbol is case-sensitive: DEBUG, not debug):
#if DEBUG
    // Log
#endif
However, you seem to be after something different: you want to change behaviour while keeping the same build (Release in both cases).
To cater for this, it might be easier to have a flag, say verbose, which is false by default and, when enabled, logs the extra information. This would allow you to keep the same build mechanism while still being able to log more information when needed.
Edit: As per your comments, what I mean is something like this:
In your code that calls the nuget, you would have something like so:
#if DEBUG
var x = new NugetInstance(verbose:true...);
#else
var x = new NugetInstance(verbose:false...);
#endif
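If a runtime check is genuinely wanted rather than the compile-time approach above, one common heuristic (an assumption on my part, not something this answer relies on) is to inspect the DebuggableAttribute of the entry assembly: a typical Debug build is compiled with /debug:full, which enables JIT tracking, while a typical Release build does not.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public static class BuildDetector
{
    // Heuristic only: a project compiled with /debug:full (the Debug default)
    // carries a DebuggableAttribute with IsJITTrackingEnabled == true; a
    // default Release build either omits the attribute or leaves it false.
    public static bool EntryAssemblyLooksLikeDebug()
    {
        var entry = Assembly.GetEntryAssembly();   // may be null, e.g. under some test hosts
        if (entry == null)
            return false;

        var attr = entry.GetCustomAttribute<DebuggableAttribute>();
        return attr != null && attr.IsJITTrackingEnabled;
    }
}
```

Because the result depends on the host's compiler settings rather than your package's, this answers the "executable that my library is used by" part of the question, but it can misreport for hosts built with non-default flags.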
I've got a solution containing two MVC 5 web applications with associated class libraries and the code analysis settings are causing the build to hang. If I try to interact with the UI during this time I get the "VS is busy" bubble. Leaving the build to complete overnight doesn't work either.
To troubleshoot this I turned off code analysis on all projects and the project builds just fine [0]. So I enabled the "Microsoft All Rules" on one of the MVC projects and the build process doesn't complete.
"Microsoft Managed Minimum Rules" builds but what I'd now like is that there's some kind of structured way of going through the rulesets, where the next one I try is a superset of the last successful one. Does such a hierarchy exist, and if so, is there a canonical reference for it?
Once I get to that level then I can start to isolate individual rules, perhaps by increasing the verbosity of the build output...
[0]
This statement should not be interpreted as "Building without code analysis is perfectly okay"
A general hierarchy is exposed via the Include elements in the .ruleset files located under the Visual Studio install directory (e.g.: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Team Tools\Static Analysis Tools\Rule Sets for a typical VS 2015 installation). Broadly, it looks something like this (with "All Rules" not actually depending on any of the others):
All Rules
Extended Correctness Rules
    Basic Correctness Rules
        Minimum Recommended Rules
Extended Design Guideline Rules
    Basic Design Guideline Rules
        Minimum Recommended Rules (same as above)
Globalization Rules
Security Rules
It's also worth noting that this isn't a clean hierarchy without overlaps. For example, rules included in the "Globalization" and "Security" rulesets are also included in some of the others (including the "Minimum" set).
To inherit from a ruleset file, you can include it with:
<Include Path="MyOther.ruleset" Action="Default" />
Then you can override the action for specific rules.
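Put together, a minimal custom ruleset that inherits from one of the shipped files and overrides a single rule might look like this (the file name, ruleset name, and rule ID are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="My Team Rules" Description="Basic correctness plus local overrides." ToolsVersion="14.0">
  <!-- Inherit everything from the shipped Basic Correctness set. -->
  <Include Path="basiccorrectnessrules.ruleset" Action="Default" />
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
    <!-- Override the inherited action for one specific rule. -->
    <Rule Id="CA1062" Action="Warning" />
  </Rules>
</RuleSet>
```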
I'm trying to run an analysis on SonarQube, using a custom FxCop rule.
In SonarQube 4.5.7 I add the rule to the rule set, activate it, and then run the analysis.
To run the analysis I use the sequence of following commands:
1) MSBuild.SonarQube.Runner.exe begin /k:my.project.C-Sharp-ConsoleApp /n:C-Sharp-ConsoleApp /v:1.1
2) "C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild" /T:Rebuild
3) MSBuild.SonarQube.Runner.exe end
I can see that the rule is executed, because when I run the second command the log contains the following:
...
(RunCodeAnalysis target) ->
MSBUILD : warning CR1000: MyRules : Field 'CSharpSortNames.m_variabile' is not in Hungarian notation. Field name should be prefixed with 's'. [C:\Users\Alessandro\Documents\Visual Studio 2015\Projects\C-Sharp-ConsoleApp\C-Sharp-ConsoleApp\C-Sharp-ConsoleApp.csproj]
My custom rule's check ID is CR1000, and after the third command I can see that a violation of this rule was found, but the web app doesn't show me where. For every other violation the web app shows the precise line via a link to the .cs file; for my rule it doesn't.
Can anyone help me with this?
A further problem: in SonarQube 5.4 the same rule is activated, but the web app does not show the violation at all.
The root cause here is that FxCop uses information from the PDB file to provide location information. However, the PDB only contains information that would be useful for debugging scenarios, which means that FxCop rule violations associated with non-executable code (e.g.: field declarations or interface definitions) will not have location information available. (FWIW, there is an open SonarQube issue for addressing this, but it would be non-trivial to accomplish unless SonarQube were to directly examine the source files to attempt to locate the field declaration. I rather suspect they might not bother, given that it is simpler to address via a Roslyn analyzer.)
Further problem is in SonarQube 5.4 the same rule is activated but web
app does not show the error.
That's because older versions of the C# plugin for SonarQube completely ignored FxCop violations without location information. This was addressed in version 5.2 of the plugin, which only became available in early May 2016 (and is presumably what you used when you installed SonarQube 5.5). It is compatible with SonarQube 5.4, so you should be able to use it with your older installs if you like.
This question already has answers here:
Performance differences between debug and release builds
(9 answers)
Closed 8 years ago.
I noticed a few other Stack Overflow questions about this, but none seem to really address what I am trying to find out.
I have been running in "debug" mode when developing but now I notice that when I switch to "release" mode in the "Solution Configurations" I am still able to set breakpoints and look at the value of variables.
I was told "debug" mode is very slow. Can someone explain the advantage of setting my solution configuration (the drop-down in the top menu) to "debug"? What additional debugging capabilities does it give me?
Most notable: the Debug configuration will contain line numbers in stack traces.
However, there is more: the compiler will not optimize or inline your code, which could impact performance. In general, it is not recommended to deploy debug builds.
And you can add compiler directives, since (by default) the DEBUG constant is defined:
public void Method()
{
#if DEBUG
    log("Method invoked");
#endif
    // ...
}
Directly from Microsoft:
The Debug configuration of your program is compiled with full symbolic debug information and no optimization. Optimization complicates debugging, because the relationship between source code and generated instructions is more complex.
The Release configuration of your program contains no symbolic debug information and is fully optimized. Debug information can be generated in PDB Files, depending on the compiler options that are used. Creating PDB files can be very useful if you later have to debug your release version.
Programs running in Debug mode may be somewhat slower due to the lack of optimization. However, you would not distribute a Debug version of your application, so speed is generally not a huge factor. You can still set breakpoints because a .pdb file is generated in Release mode as well; if you were to remove the .pdb files, you would no longer be able to set breakpoints. Also, due to optimization, there may be areas of code where breakpoints do not work.
When should I include PDB files for a production release? Should I use the Optimize code flag and how would that affect the information I get from an exception?
If there is a noticeable performance benefit I would want to use the optimizations but if not I'd rather have accurate debugging info. What is typically done for a production app?
When you want to see source filenames and line numbers in your stacktraces, generate PDBs using the pdb-only option. Optimization is separate from PDB generation, i.e. you can optimize and generate PDBs without a performance hit.
From the C# Language Reference
If you use /debug:full, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with /debug:full. We recommend /debug:pdbonly or no PDB for generating release code.
To answer your first question, you only need to include PDBs for a production release if you need line numbers for your exception reports.
To answer your second question, using the /optimize flag with PDBs means that any stack "collapse" (for example through inlining) will be reflected in the stack trace. I'm not sure whether the actual line number reported can be wrong; this needs more investigation.
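The difference is easy to observe with a small program that prints the trace of a caught exception: the "in ...\File.cs:line N" suffix appears only when a matching PDB sits next to the assembly, and in an optimized build some frames may be missing because of inlining.

```csharp
using System;

static class Demo
{
    static void Fail() => throw new InvalidOperationException("boom");

    public static string CaptureTrace()
    {
        try
        {
            Fail();
        }
        catch (Exception ex)
        {
            // With a PDB present the trace shows file and line for each
            // frame; without one it shows only method names.
            return ex.StackTrace ?? string.Empty;
        }
        return string.Empty;
    }

    static void Main() => Console.WriteLine(CaptureTrace());
}
```

Run the compiled binary twice, once with the .pdb beside it and once after deleting it, to see the two forms of the trace.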
To answer your third question, you can have the best of both worlds with a rather neat trick. The major differences between the default debug build and default release build are that when doing a default release build, optimization is turned on and debug symbols are not emitted. So, in four steps:
Change your release config to emit debug symbols. This has virtually no effect on the performance of your app, and is very useful if (when?) you need to debug a release build of your app.
Compile using your new release build config, i.e. with debug symbols and with optimization. Note that 99% of code optimization is done by the JIT compiler, not the language compiler.
Create a text file in your app's folder called xxxx.exe.ini (or dll or whatever), where xxxx is the name of your executable. This text file should initially look like:
[.NET Framework Debugging Control]
GenerateTrackingInfo=0
AllowOptimize=1
With these settings, your app runs at full speed. When you want to debug your app by turning on debug tracking and possibly turning off (CIL) code optimization, just use the following settings:
[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0
EDIT: According to cateye's comment, this can also work in a hosted environment such as ASP.NET.
There is no need to include them in your distribution, but you should definitely be building them and keeping them. Otherwise debugging a crash dump is practically impossible.
I would also turn on optimizations. While it does make debugging more difficult, the performance gains are usually far from trivial, depending on the nature of the application. We easily see over 10x performance on release vs. debug builds for some algorithms.