How to see the list of unit tests? [duplicate]

We use Jenkins to build a C# project, run unit tests (NUnit), and measure code coverage (NCover). The build produces coverage.nccov and nunit-result.xml files as output.
Jenkins then triggers a SonarQube analysis (SonarQube 5.0.1 with an up-to-date C# plugin). The SonarQube dashboard displays unit test coverage and unit test results, but the list of failed tests cannot be displayed as a drilldown.
When a user clicks on the metric, the page displayed is quite empty (no list of files, no drilldown, just the metric).
sonar-project.properties:
sonar.visualstudio.solution=MyProject.sln
sonar.cs.ncover3.reportsPaths=coverage.nccov
sonar.cs.nunit.reportsPaths=nunit-result.xml
The Unit Tests Coverage metric displays its drilldown as expected.

This is indeed a known limitation of the plugin, which depends on this ticket: https://jira.sonarsource.com/browse/SONARCS-657
For your information, the main difficulty in implementing this feature is that unit test reports do not contain links back to the source code files; they reference only assemblies/types/methods. SonarQube needs to know which files to show in the drilldown.
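To illustrate (a hedged excerpt; the shape follows the NUnit 2.x result schema, and the identifiers are made up), a test-case entry names the test only by its fully qualified method, with no file path or line number for the code under test:
<test-case name="MyProject.Tests.CalculatorTests.Add_ReturnsSum" executed="True" result="Success" success="True" time="0.004" />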

Related

Code coverage of entire source code in sonarqube?

Is there a way to account for the code coverage of all source code in my solution, even if the coverage results I import only include a few projects? Currently, if I add a new project with no unit tests/code coverage run against it, it is not included in the coverage percentage on SonarQube, so my coverage value stays the same. But I want new, untested code to be considered. I'm using OpenCover and xUnit for a .NET project.
This is addressed in SonarC# 6.6, but not in VB.NET. The plugin is currently in RC and should be released in the next day or two.
There is nothing you need to configure; just use 6.6.

dotcover/opencover link coverage to other assembly than test-assembly

I have divided my source code and tests into separate solutions for C#.
In my tests I link to the actual source files and implement stub implementations for their dependencies where I can't mock the implementation.
If I run OpenCover or dotCover to get the coverage of the unit tests and generate a report with ReportGenerator, it groups the coverage per test assembly; however, I want to map this coverage to the actual source code/assembly (from which I linked my files) in the resulting report.
Is there any way, in OpenCover or dotCover, to map my code coverage results to the actual code (and exclude the stub classes that don't map to it)?
I fixed this by running a small script before generating the actual report. This script replaces the ModuleName in the coverage XML produced by OpenCover with the assembly name of my production code.
ReportGenerator will then nicely merge all my results into a single report.
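For reference, a minimal sketch of such a script in C# (assuming the report uses OpenCover's usual <ModuleName> elements; the file and assembly names are made up):

using System.Xml.Linq;

class FixModuleNames
{
    static void Main()
    {
        // Load the raw OpenCover result (file name is illustrative).
        var doc = XDocument.Load("coverage-opencover.xml");

        // Rewrite every ModuleName that points at the test assembly so
        // that ReportGenerator attributes the coverage to the production
        // assembly and merges the results into a single report.
        foreach (var name in doc.Descendants("ModuleName"))
        {
            if ((string)name == "MyProject.Tests")
                name.Value = "MyProject";
        }

        doc.Save("coverage-opencover.fixed.xml");
    }
}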

Visual Studio 2015 "Analyze Code Coverage for All Tests" returns Empty Results Generated in some projects

We have a very large source tree with many solutions/projects. We're attempting to improve our unit test coverage and putting together a CI environment (TFS). However, some of our projects will not generate code coverage when using VS 2015's "Analyze Code Coverage for All Tests". We simply get the "Empty results generated: No binaries were instrumented." message.
This is easily remedied on a development machine by selecting all the tests in Test Explorer, right clicking, and then selecting "Analyze Code Coverage for Selected Tests". Unfortunately, this does nothing to fix the issue in our CI environment.
In both cases the unit tests are run and I can see the pass/fail results.
I've not been able to find any rhyme or reason as to why this happens in some projects and not others. We do have a .runsettings file that prevents calculating coverage for the test assemblies; removing it only causes the test assemblies themselves to be included in the coverage, while coverage for the code under test is still missing.
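(For context, a .runsettings exclusion of test assemblies from the Code Coverage data collector typically looks roughly like this; the ModulePath pattern is illustrative:)

<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <!-- Keep the test assemblies themselves out of the numbers. -->
                <ModulePath>.*Tests\.dll$</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>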
Presumably there is a difference in how the underlying commands are run. Is there a reason why code coverage would exist under one and not the other? How can I fix my projects to make sure the coverage is calculated properly when running on the TFS server or using the "Analyze Code Coverage for All Tests" from the VS2015 interface?
Additional Notes:
Projects under test are all .NET 4.0 Class Libraries. There is some WinForms code in these projects, though the Forms are excluded from coverage.
Test projects were created using the Test Project template.
I can replicate the Visual Studio behavior on my projects using the command line.
Does not provide coverage:
vstest.console.exe /EnableCodeCoverage "TestAssembly.dll"
Does provide coverage:
vstest.console.exe /EnableCodeCoverage /Tests:Test1,Test2,Test3,...,TestN "TestAssembly.dll"
Additional Notes 2
I ran
vstest.console.exe /EnableCodeCoverage "TestAssembly.dll"
All unit tests are run and pass as expected.
Afterwards I ran the CodeCoverage.exe analyze step suggested at the end of this blog.
CodeCoverage.exe analyze /include_skipped_modules my.coverage > analysis.xml
"AssemblyUnderTest.dll" does not appear in either the modules section or the skipped_modules section. I know "AssemblyUnderTest.dll" is being run because I can debug the test and break into the code for that assembly and as mentioned the tests ran and passed during this test run. Other assemblies referenced by the code are present in the skipped_modules section with reason "no_symbols" - this is expected.

Sonarqube C# unable to display Integration test/coverage results

We are using the TFS/VS 2013 Microsoft build stack to build our ASP.NET application (which runs the unit tests), then deploy the application from that build to a web server, which then has our REST API and Selenium UI tests run against it.
I am getting the .coverage and .trx files, and I am able to merge the .coverage files into a single .coveragexml. SonarQube is able to process this and does display the correct coverage and test results, but it places all the results under 'Unit Test Success or Coverage'.
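(For reference, such a merge can be done with the CodeCoverage.exe tool that ships with Visual Studio; a hedged example, with file names made up:)
CodeCoverage.exe analyze /output:MergedResults.coveragexml UnitTests.coverage ApiTests.coverage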
Is there any way to separate out the results, e.g. as Integration Tests, or a widget that can display multiple test runs against a single project?
I can somewhat accomplish this by setting up multiple projects (using the same source) and attaching different .coverage and .trx files to each project, but this doesn't give a clear picture of the results, since the tests are all run against the same source and bits.
I would like to display our unit tests (not all are .NET), C# integration tests, API tests, and UI tests in separate "widgets".
Here is a copy of my sonar-project.properties file:
# Root project information
sonar.projectKey=XXX.XXX.Presentation
sonar.projectName=XXX.XXX.Presentation
sonar.projectVersion=1.0
# Enable the Visual Studio bootstrapper
sonar.visualstudio.enable=true
sonar.visualstudio.solution=XXX.XXX.sln
sonar.visualstudio.testProjectPattern=.*Tests
# Code Coverage
sonar.cs.vscoveragexml.reportsPaths=MergedResults.coveragexml
# Unit Test Results
sonar.cs.vstest.reportsPaths=TestResults/*.trx
# Some properties that will be inherited by the modules [P.S : Forward slashes]
sonar.sources=.
sonar.projectBaseDir=.
# Info required for SonarQube
sonar.language=cs
sonar.sourceEncoding=UTF-8
Integration code coverage is not yet supported by the C# plugin.
See http://jira.sonarsource.com/browse/SONARNTEST-5
Same story for integration test results:
http://jira.sonarsource.com/browse/SONARNTEST-22

Tests succeed when run from Test View but fail when run from test list editor or command line

I have a test suite, comprised of both unit tests and integration tests, in a project using C# on .NET 4.0 with Visual Studio 2010. The test suite uses MSTest. When I run all tests in solution (either by hitting the button in the testing toolbar or using the Ctrl-R A shortcut chord) all of the tests, integration and unit, pass successfully.
When I either attempt to run the same tests from the command line with mstest (explicitly using the only .testsettings file present) or attempt to run them from the Test List Editor or using the .vsmdi file, the integration tests fail.
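(For reference, mstest invocations of this kind look roughly like the following; the flags are standard mstest options, but the file and list names are illustrative:)
mstest /testcontainer:MyApp.IntegrationTests.dll /testsettings:Local.testsettings
mstest /testmetadata:MySolution.vsmdi /testlist:IntegrationTests /testsettings:Local.testsettings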
The integration tests test the UI and so have dependencies on deployment items and such, whereas the unit tests do not. However, I cannot seem to pin down what is actually different between these two methods of running the tests.
When I inspect the appropriate Out directories from the test run, not all of the files are present.
What would cause some of the files that deploy correctly in one situation from Visual Studio to not deploy correctly in another?
The static content started being copied shortly after I wrote the comments above. The other major issue I ran into was that the integration test project referenced libraries that were dependencies of the system-under-test (with copy-local set to true) in order to ensure that the DLLs would be present when they were needed. For some reason, these stubbornly refused to copy when the tests were run through Test List or mstest.
What I eventually did to work around it was to include [DeploymentItem] attributes for the DLLs that I needed. This got things working no matter how the tests were run. What I am still unclear on, and what might have addressed the underlying cause or provided a better solution, is how Test View/mstest differ from the regular test runner (assuming that the correct .testsettings file was passed to mstest). I'm putting these notes/workarounds in an answer, but I'll leave the question open in case anyone can address the underlying cause of how the different test execution paths differ.
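A minimal sketch of the workaround described above, using MSTest's [DeploymentItem] attribute (the file path, class, and test names are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UiIntegrationTests
{
    // Force the test runner to copy this dependency of the
    // system-under-test into the deployment directory, no matter
    // how the tests are launched (IDE, Test List Editor, or mstest).
    [TestMethod]
    [DeploymentItem(@"Dependencies\SystemUnderTest.Helpers.dll")]
    public void Login_Succeeds_WithValidCredentials()
    {
        // ... arrange / act / assert against the deployed binaries ...
    }
}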
