We are using the TFS/VS 2013 Microsoft build stack to build our ASP.NET application (running unit tests), then deploying the application to a web server, where our REST API and Selenium UI tests are run against it.
I get the .coverage and .trx files, and I am able to merge the .coverage files into a single .coveragexml. SonarQube can process this and displays the correct coverage and test results, but it places all the results under 'Unit Test Success or Coverage'.
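For anyone looking for the merge step: the CodeCoverage.exe tool that ships with Visual Studio can merge several binary .coverage files and convert them to XML in one call. The path below is the VS 2013 default and the input file names are placeholders:

```batch
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Team Tools\Dynamic Code Coverage Tools\CodeCoverage.exe" analyze /output:MergedResults.coveragexml UnitTests.coverage ApiTests.coverage UiTests.coverage
```

The resulting MergedResults.coveragexml is what sonar.cs.vscoveragexml.reportsPaths points at.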
Is there any way to separate out the results, e.g. into integration tests, or a widget that can display multiple test runs against a single project?
I can somewhat accomplish this by setting up multiple projects (using the same source) and attaching different .coverage and .trx files to each, but this doesn't give a clear picture of the results, since the tests are all run against the same source and bits.
I would like to display our unit tests (not all are .NET), C# integration tests, API tests, and UI tests in separate "widgets".
Here is a copy of my sonar-project.properties file:
# Root project information
sonar.projectKey=XXX.XXX.Presentation
sonar.projectName=XXX.XXX.Presentation
sonar.projectVersion=1.0
# Enable the Visual Studio bootstrapper
sonar.visualstudio.enable=true
sonar.visualstudio.solution=XXX.XXX.sln
sonar.visualstudio.testProjectPattern=.*Tests
# Code Coverage
sonar.cs.vscoveragexml.reportsPaths=MergedResults.coveragexml
# Unit Test Results
sonar.cs.vstest.reportsPaths=TestResults/*.trx
# Some properties that will be inherited by the modules [P.S : Forward slashes]
sonar.sources=.
sonar.projectBaseDir=.
# Info required for SonarQube
sonar.language=cs
sonar.sourceEncoding=UTF-8
Integration code coverage is not yet supported by the C# plugin.
See http://jira.sonarsource.com/browse/SONARNTEST-5
Same story for integration test results:
http://jira.sonarsource.com/browse/SONARNTEST-22
Related
We use Jenkins to build a C# project, run unit tests (NUnit), and measure code coverage (NCover), producing coverage.nccov and nunit-result.xml files as output.
Jenkins triggers a SonarQube analysis (SonarQube 5.0.1 with an up-to-date C# plugin). The SonarQube dashboard displays unit test coverage and unit test results, but the list of failed tests cannot be displayed as a drilldown.
When the user clicks on the metric, the page displayed is quite empty (no list of files, no drilldown, just the metric).
sonar-project.properties:
sonar.visualstudio.solution=MyProject.sln
sonar.cs.ncover3.reportsPaths=coverage.nccov
sonar.cs.nunit.reportsPaths=nunit-result.xml
Unit test coverage metrics display the drilldown as expected.
This indeed is a known limitation of the plugin, which depends on this ticket: https://jira.sonarsource.com/browse/SONARCS-657
For your information, the main difficulty in implementing this feature is that unit test reports do not contain links back to the source code files, only to assemblies/types/methods. SonarQube needs to know which files to show in the drilldown.
We have a very large source tree with many solutions/projects. We're attempting to improve our unit test coverage and are putting together a CI environment (TFS). However, some of our projects will not generate code coverage when using VS 2015's "Analyze Code Coverage for All Tests". We simply get the message "Empty results generated: No binaries were instrumented."
This is easily remedied on a development machine by selecting all the tests in Test Explorer, right clicking, and then selecting "Analyze Code Coverage for Selected Tests". Unfortunately, this does nothing to fix the issue in our CI environment.
In both cases the unit tests are run and I can see the pass/fail results.
I've not been able to find any rhyme or reason as to why this happens in some projects and not others. We do have a .runsettings file that prevents calculating coverage for the test assemblies; however, excluding it from the run only causes the test assemblies to be included in the coverage. Coverage for the code under test is still missing.
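For context, the relevant part of a .runsettings file of the kind described is a module filter under the standard Code Coverage data collector. This is a minimal sketch, not the actual file; the exclude pattern is illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage"
                     uri="datacollector://Microsoft/CodeCoverage/2.0">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <!-- keep the test assemblies themselves out of the numbers -->
                <ModulePath>.*Tests\.dll$</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```

An overly broad Exclude pattern, or an Include list that fails to match the product assemblies, can produce exactly the "No binaries were instrumented" symptom, so the filters are worth ruling out first.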
Presumably there is a difference in how the underlying commands are run. Is there a reason why code coverage would exist under one and not the other? How can I fix my projects to make sure the coverage is calculated properly when running on the TFS server or using the "Analyze Code Coverage for All Tests" from the VS2015 interface?
Additional Notes:
Projects under test are all .NET 4.0 class libraries. There is some WinForms code in these projects, though Forms are excluded from coverage.
Test projects were created using the Test Project template.
I can replicate the Visual Studio behavior on my projects using the command line.
Does not provide coverage:
vstest.console.exe /EnableCodeCoverage "TestAssembly.dll"
Does provide coverage:
vstest.console.exe /EnableCodeCoverage /Tests:Test1,Test2,Test3,...,TestN "TestAssembly.dll"
Additional Notes 2
I ran
vstest.console.exe /EnableCodeCoverage "TestAssembly.dll"
All unit tests are run and pass as expected.
Afterwards I ran the CodeCoverage.exe analyze command suggested at the end of this blog post:
CodeCoverage.exe analyze /include_skipped_modules my.coverage > analysis.xml
"AssemblyUnderTest.dll" does not appear in either the modules section or the skipped_modules section. I know "AssemblyUnderTest.dll" is being executed, because I can debug a test and break into code in that assembly, and, as mentioned, the tests ran and passed during this run. Other assemblies referenced by the code are present in the skipped_modules section with the reason "no_symbols"; this is expected.
I have a test suite, comprised of both unit tests and integration tests, in a project using C# on .NET 4.0 with Visual Studio 2010. The test suite uses MSTest. When I run all tests in solution (either by hitting the button in the testing toolbar or using the Ctrl-R A shortcut chord) all of the tests, integration and unit, pass successfully.
When I attempt to run the same tests from the command line with mstest (explicitly using the only .testsettings file present), or run them from the Test List Editor or via the .vsmdi file, the integration tests fail.
The integration tests test the UI and so have dependencies on deployment items and such, whereas the unit tests do not. However, I cannot seem to pin down what is actually different between these two methods of running the tests.
When I inspect the appropriate Out directories from the test run, not all of the files are present.
What would cause some of the files to deploy correctly when run one way from Visual Studio but not when run another way?
The static content started being copied shortly after I wrote the comments above. The other major issue I ran into was that the integration test project referenced libraries that were dependencies of the system under test (with Copy Local set to true) in order to ensure the DLLs would be present when needed. For some reason, these stubbornly refused to copy when the tests were run through Test List or mstest.
What I eventually did to work around it was to add [DeploymentItem] attributes for the DLLs I needed. This got things working no matter how the tests were run. What I am still unclear on, and what might point to the underlying cause or a better solution, is how Test View/mstest differ from the regular test runner (assuming the correct .testsettings file was passed to mstest). I'm putting these notes/workarounds in an answer, but I'll leave the question open in case anyone can explain how the different test execution paths differ.
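For reference, the [DeploymentItem] workaround described above looks roughly like this; the assembly names are placeholders:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem("SystemUnderTest.Dependency.dll")]
[DeploymentItem("Another.Needed.Library.dll")]
public class UiIntegrationTests
{
    [TestMethod]
    public void SomeUiTest()
    {
        // The DLLs named above are copied into the test
        // deployment directory before this method runs,
        // regardless of which runner executes the tests.
    }
}
```

The attribute can also be applied per test method if only some tests need a given file.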
I would like to run automated Silverlight unit tests from a Hudson build server. It seems there are two options:
Use StatLight, although it seems to be designed for TeamCity rather than Hudson, so it would involve a bit of hacking to get it to work.
Use NUnit Silverlight tests.
Can anyone recommend either of these options? Or is there a better alternative?
You can try the Lighthouse Silverlight Unit Test Runner; it works with every build server, including Hudson, TeamCity, and CCNet, because by default it produces an NUnit-compatible XML results file:
http://lighthouse.codeplex.com/
In our company we use NUnit with Hudson for automated unit testing. It is simple to set up and run.
Just download and unzip the latest NUnit somewhere on the Hudson host.
Add a Windows batch command as the last build step, with content like:
C:\NUnit\bin\net-2.0\nunit-console.exe "%WORKSPACE%\src\Test\AllTests.nunit" /config=Release /xml="%WORKSPACE%\src\Test\TestResults.xml"
This will execute the tests defined in the "AllTests.nunit" file. It is also possible to point to just a single assembly (.dll).
To populate test results on the Hudson job page, you need to install the Hudson NUnit plugin. This can be done directly from Hudson's plugin management.
After installation there will be a new post-build action: "Publish NUnit test result report".
If you check it, you get a field for the path to the test result report. The corresponding path for the example above is:
src/Test/TestResults.xml
Hope it helps you to decide ;-)
I have a common DB unit test which checks naming conventions; it's shared between multiple projects as a linked existing file within Visual Studio, and it runs fine per project (each project targets a specific database).
However, this breaks Visual Studio's test tooling, since the projects with the shared file do not show up in the Test View; an error appears when loading:
Error loading C:\: The test 'DefaultConstraints' from 'c:\listtest.dll' that is loading has the same TestId {3c0c0672-f45b-4b13-697a-77d588b873e4} as the test 'DefaultConstraints' already loaded from 'c:\sandbox\commontest.dll'.
So I can't run the test within VS, but I can with MSBuild. Is there a better way to share common tests?
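One alternative to linking the same source file into every test project is to move the shared tests into an abstract base class in an ordinary class library and derive one concrete [TestClass] per project; each derived class then gets its own test identity, which avoids the duplicate-TestId error. A sketch, where the class names and connection-string property are illustrative:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Shared class library (not itself a test project)
public abstract class NamingConventionTestsBase
{
    // Each concrete test project supplies its own database.
    protected abstract string ConnectionString { get; }

    [TestMethod]
    public void DefaultConstraints()
    {
        // Check default-constraint naming against the database
        // reached through ConnectionString.
    }
}

// In each project-specific test assembly
[TestClass]
public class ProjectADatabaseTests : NamingConventionTestsBase
{
    protected override string ConnectionString
    {
        get { return "..."; }
    }
}
```

MSTest runs inherited [TestMethod] members through the derived [TestClass], so each project's run exercises the shared logic against its own database.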
This is a bug:
https://connect.microsoft.com/VisualStudio/feedback/details/574115/unit-tests-error-because-vs-mixes-up-the-debug-and-release-binaries#