I have a suite of WebDriver tests written in C#, and I am using MSTest as the runner. At this point NUnit is not an option, so I need to figure out how to make this work with the current configuration. For CI I am using Jenkins ver. 1.514. I am not in control of which plugins get installed or when Jenkins is updated, and asking for such things can lead to long waits and approval processes across different teams (I hate bureaucracy).
So... I have a few data-driven tests which are defined as follows (I'll paste in one of them):
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "UsersData.csv", "UsersData#csv", DataAccessMethod.Sequential)]
[TestMethod()]
public void Test_Login()
{
    Logger.Info("");
    Logger.Info("-----------------------------------------------------------------");
    // ...
}
So, this should make it clear that I am using the UsersData.csv file, which is placed in the TestData folder of my project. To run this test in Jenkins, I used to use this command line:
mstest /testmetadata:"%WORKSPACE%\SeleniumJenkins.vsmdi" /testlist:Jenkins /resultsfile:"%WORKSPACE%\AllTests_Jenkins.trx" /runconfig:"%WORKSPACE%\Local.testsettings" /detail:stdout
Everything worked just fine until one day I encountered this error in the TRX results file:
The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library.Error details: The .Net Framework Data Providers require Microsoft Data Access Components(MDAC). Please install Microsoft Data Access Components(MDAC) version 2.6 or later.Retrieving the COM class factory for component with CLSID {2206CDB2-19C1-11D1-89E0-00C04FD7A829} failed due to the following error: 8007007e The specified module could not be found. (Exception from HRESULT: 0x8007007E).
BUT if I log on to the machine where the slave is running and run the same command, it finds the DataSource files and runs OK.
Moreover, I installed PsExec and placed the command into a *.bat file, then called this file from PsExec like this:
psexec \\my_IP -u "machine-name\jenkins-local" -p "password" cmd /C call "%WORKSPACE%\Selenium\msteststart.bat"
This seems to be working, but I don't get any logging into Jenkins, and if I redirect the output to a file, then whenever another build starts and wipes out the workspace the file is lost; I only have the last version of the file and cannot compare it with other builds.
The Local.testsettings file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" id="06505635-693a-4f31-b962-ecf8422b5eca" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
<Description>These are default test settings for a local test run.</Description>
<Deployment>
<DeploymentItem filename="Selenium\TestData\UsersData.csv" />
</Deployment>
<NamingScheme baseName="Selenium_" useDefault="false" />
<Execution>
<Timeouts testTimeout="10800000" />
<TestTypeSpecific>
<UnitTestRunConfig testTypeId="13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b">
<AssemblyResolution>
<TestDirectory useLoadContext="true" />
</AssemblyResolution>
</UnitTestRunConfig>
</TestTypeSpecific>
<AgentRule name="Execution Agents">
</AgentRule>
</Execution>
</TestSettings>
I would appreciate it if anyone could give me a hint on this one. Thanks.
It could be:
an MDAC installation error. E.g. here are some ideas on how to repair it. Consider asking your admin to check whether MDAC was properly installed.
a permission issue? Are you 100% sure you are running the command on your slave as the same user in both cases, via the Jenkins slave and via psexec?
Since you say you managed to get it to work using psexec, a workaround would be to generate the file on the same machine the job is run on and archive the generated log file as a build artifact; Jenkins will keep track of it for each build.
If you prefer to have the output in the console, for example to apply console parsing, you can also make your psexec command output the file to the console after the build (by type-ing it after it has run), or use a tee-like batch command to get psexec to output what it does into the Jenkins console: Using a custom Tee command for .bat file
And don't forget to capture the standard error as well!
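A minimal sketch of that workaround, reusing the psexec command from the question (the log file name mstest.log is a placeholder I made up):

```
rem capture everything psexec streams back, stderr included
psexec \\my_IP -u "machine-name\jenkins-local" -p "password" cmd /C call "%WORKSPACE%\Selenium\msteststart.bat" > "%WORKSPACE%\mstest.log" 2>&1
rem echo the log into the Jenkins console so it can be parsed there
type "%WORKSPACE%\mstest.log"
```

Archiving %WORKSPACE%\mstest.log as a build artifact then keeps one copy per build, even after the workspace is wiped.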
Related
I have the DLLs for the test project deployed to the server. Now I'm trying to run them using dotnet test "Path to tests.dll", but I get an error:
F:\path\tests.dll(1,1): error MSB4025: The project file could not be loaded. Data at the root level is invalid. Line 1, position 1.
Basically, it requires the csproj file to be in the same directory as it was on my local machine. What's the point of having runnable DLLs for testing if I still need the csproj to run the tests on the remote server? That doesn't make any sense.
How can I run the tests without having the csproj file on the server?
I found the answer to my problem. To run the tests on a remote server through PowerShell, I had to use "dotnet vstest" instead of "dotnet test". The vstest command allows running the tests using just the dll file, without the need for a csproj file.
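For example, using the path from the question (the --logger switch is optional and just writes a .trx results file):

```
dotnet vstest "F:\path\tests.dll" --logger:trx
```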
I am following the tutorial for .NET at
https://www.microsoft.com/net/learn/get-started/macos
I installed the .NET SDK on macOS High Sierra version 10.13.5 and created an app using:
~$ dotnet new console -o myApp
which gives me an error of:
The template "Console Application" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on myApp/myApp.csproj...
Unable to load shared library 'libproc' or one of its dependencies. In order to help diagnose loading problems, consider setting the DYLD_PRINT_LIBRARIES
environment variable: dlopen(liblibproc, 1): image not found
I tried to do
export DYLD_PRINT_LIBRARIES=/usr/lib/
before deleting the created folder and files, and I get a lot of printed statements that look like:
dyld: loaded: /usr/local/share/dotnet/shared/Microsoft.NETCore.App/2.1.1/System.IO.Compression.Native.dylib
as well as the same error:
The template "Console Application" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on myApp/myApp.csproj...
Unable to load shared library 'libproc' or one of its dependencies. In order to help diagnose loading problems, consider setting the DYLD_PRINT_LIBRARIES
environment variable: dlopen(liblibproc, 1): image not found
I got it to work by using fs_usage to examine where dotnet tries to find libproc.dylib. In my case, I found that dotnet was trying to find libproc.dylib at ~/libproc.dylib, so I copied /usr/lib/libproc.dylib to ~/libproc.dylib and dotnet worked.
I don't think this is a satisfactory answer, so if anyone knows why dotnet was not looking for libproc.dylib at /usr/lib/libproc.dylib, please help me out. Thank you!
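For anyone who wants to reproduce the diagnosis, the fs_usage invocation was along these lines, run in a second terminal while dotnet restore is running (the exact filter flags are an assumption on my part and may vary by macOS version):

```
sudo fs_usage -w -f filesys dotnet | grep libproc
```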
I noticed that my comment only worked when I started pwsh in ~.
I created a symlink in /usr/local/microsoft/powershell/6 that points to libproc.dylib:
/usr/local/microsoft/powershell/6$ sudo ln -s /usr/lib/libproc.dylib libproc.dylib
PowerShell now starts correctly in any directory.
This will likely need to be recreated when I upgrade PowerShell Core (via Homebrew Cask).
Edit:
I should have read the question more closely--it was about dotnet, not pwsh.
I've been trying to wrap a WPF app inside a Windows Universal App, using the Desktop Bridge.
In order to make the app's taskbar icon unplated, with transparent background, I followed the instructions that can be found in various blogs and MSDN articles/forums, such as this one.
The first commands I've been executing are these two:
"C:\Program Files (x86)\Windows Kits\10\bin\10.0.15063.0\x64\makepri.exe" createconfig /o /cf priconfig.xml /dq en-US
"C:\Program Files (x86)\Windows Kits\10\bin\10.0.15063.0\x64\makepri.exe" new /o /pr . /cf priconfig.xml
These commands were executed in the WPF app's output folder, where I also put an AppxManifest.xml file, along with the files and folders referenced by it (such as the Executable file and the Assets' images in various scales and resolutions).
From this point, I got two different weird errors:
First, if the AppxManifest.xml file contains the following section:
<Extensions>
<desktop2:Extension Category="windows.firewallRules">
<desktop2:FirewallRules Executable="app\MyWpfApp.exe">
<desktop2:Rule Direction="in" IPProtocol="TCP" Profile="all" />
<desktop2:Rule Direction="in" IPProtocol="UDP" Profile="all" />
</desktop2:FirewallRules>
</desktop2:Extension>
</Extensions>
then the second makepri command will result in the following error message:
onecoreuap\base\mrt\tools\indexertool\src\tool\parametermanager.cpp(:908): error PRI175: 0x80080204 -
onecoreuap\base\mrt\tools\indexertool\src\tool\parametermanager.cpp(:318): error PRI175: 0x80080204 -
Microsoft (R) MakePRI Tool
Copyright (C) 2013 Microsoft. All rights reserved.
error PRI191: 0x80080204 - Appx manifest not found or is invalid. Please ensure well-formed manifest file is present. Or specify an index name with /in switch.
Then if I remove that FirewallRules section, everything seems to run fine - at least on my machine.
Second, It doesn't always run as expected:
when I try to run exactly the same files (with the fixed version of AppxManifest.xml) and same commands on a different machine, I get the same error that I used to get in the first machine (from before removing the FirewallRules section).
Any idea what could be causing these problems? What possible differences between the build machines could cause the second problem? What should I look for?
The first problem was solved by removing the IgnorableNamespaces property from the Package xml-element (at the root level of AppxManifest.xml).
--
As for the second problem, after contacting Microsoft's support team about this issue, it seems to be a bug in the makepri tool when running on older systems:
Apparently, when running on Windows Server 2012 R2 (and perhaps other versions as well), the makepri command has to be supplied with another "optional" parameter that states the app's name:
"C:\Program Files (x86)\Windows Kits\10\bin\10.0.15063.0\x64\makepri" new /o /pr . /cf priconfig.xml /mn AppxManifest.xml /of resources.pri /in "MyAppName"
The important part is the /in "MyAppName" argument at the end of the line, though the other arguments might be crucial as well. Also, "MyAppName" has to be the same as the Application Id defined in the AppxManifest.xml file, in this part:
<Applications>
<Application Id="MyAppName" ...>
...
Another step that might have been relevant to solving this issue was to make sure that the file-mappings file, used later for the makeappx command, has the correct definition for ResourceId, as explained in this article.
My setup is as follows:
Jenkins on Windows Server 2012 system
Git installed on the same system
SonarQube on separate server, executed by SonarQube Runner (not Maven)
I have a Jenkins project that pulls source code (C#) using Git, compiles it, then presents it for analysis to SonarQube. The pull, compile, and analysis all work, but I am getting error messages saying:
16:05:58.883 INFO - Retrieve SCM info for (source file path)
16:05:58.883 WARN - Fail to retrieve SCM info of: (source file path) Reason: The git blame command failed. 'git' is not recognized as an internal or external command,
operable program or batch file.
I first assumed that Git had to be installed on the SonarQube server too, but there is another project using a similar setup and the same SonarQube server that works correctly, adding "blame" information to SonarQube's source code listings.
What am I missing?
I am working on a Windows application in C#.
Using Jenkins, I created a job that performs the following tasks:
1. Build the application using MSBuild
2. Run the unit tests using nunit-console.exe
3. Calculate code coverage using NCover (the issue)
4. Later, publish the application using the NAnt plugin
Tasks 1, 2 and 4 work fine, while 3 has an issue.
Can somebody shed some light on this matter?
This is the batch file that I used to find out the coverage:
"C:\Program Files\NCover\NCover.Console.exe" "E:\Myapp\test.exe" -h //x "E:\Newfolder\coverage.xml"
The batch file is executed in Jenkins and we can see test.exe in Task Manager. What I need is the code coverage in HTML format while executing the NUnit test cases, without needing to run my test.exe directly.
"D:\Set Up\Nuint\NUnit-2.6.2\bin\nunit-console.exe" "E:\Myapp\test.sln" /xml="E:\Newfolder\TestResult.xml"
This is the batch command that I used to run the test cases. I need the code coverage while executing the test cases, but in my case my test.exe is executed and NCover.Console.exe starts monitoring my test.exe to calculate the coverage.
I tried adding
"C:\Program Files\NCover\NCover.Console.exe" in front of "D:\Set Up\Nuint\NUnit-2.6.2\bin\nunit-console.exe" "E:\Myapp\test.sln" /xml="E:\Newfolder\TestResult.xml". The build succeeded, and in the console output I found some coverage data like:
Execution Time: 92.4688 s
Symbol Coverage: 43.72%
Branch Coverage: 22.70%
and a coverage.nccov file is created. But I need to create/show a coverage report in HTML format.
You can use the NCover plug-in or a post-build task to start the calculation.
For the second variant we mostly use a simple batch file to start the action (in your case the NCover calculation). This batch file is then called by the Jenkins post-build task.
Edit:
To get HTML you can do it via (look here):
NCover.Reporting Coverage.xml //or FullCoverageReport:Html //op "C:\Coverage Report"
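Putting your own commands together, the whole batch file might look something like this (paths are the ones from your question; the //x output flag is the one you already use, and the report folder name is a placeholder):

```
rem run the NUnit console under NCover so coverage is recorded for the tests
"C:\Program Files\NCover\NCover.Console.exe" "D:\Set Up\Nuint\NUnit-2.6.2\bin\nunit-console.exe" "E:\Myapp\test.sln" /xml="E:\Newfolder\TestResult.xml" //x "E:\Newfolder\Coverage.xml"
rem turn the coverage XML into an HTML report
NCover.Reporting "E:\Newfolder\Coverage.xml" //or FullCoverageReport:Html //op "E:\Newfolder\CoverageReport"
```

Jenkins can then archive E:\Newfolder\CoverageReport as an artifact or publish it as an HTML report.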