Referencing a folder in my project with a SpecFlow test - c#

I'm trying to write a SpecFlow test where I test what happens when my application reads a certain structure of folders and files. I want to include these folders and files in my project so the tests don't just run on my own computer.
For example, I have two folders in my Specs project. One called 'SimpleTestModel' and the other called 'ComplexTestModel'. How can I reference these folders in my SpecFlow tests?

You want a Test Fixture.
From Wikipedia:
In software testing, a test fixture is a fixed state of the software under test used as a baseline for running tests; also known as the test context. It may also refer to the actions performed in order to bring the system into such a state.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
Software used to systematically run reproducible tests on a piece of software under test is known as a test harness; part of its job is to set up suitable test fixtures.
For your specific problem:
Create a Fixtures directory in your SpecFlow test project. Inside it, create any number of subdirectories that mirror the directory and file structures your tests need (see the sketch below).
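For example, the layout might look something like this (the two model folders come from the question; the file names are just hypothetical placeholders):
SpecsProject\
  App.config
  Fixtures\
    SimpleTestModel\
      model.xml
    ComplexTestModel\
      SubFolder\
        data.xml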
Add an <appSettings> entry in App.config to define the root folder for all your test fixtures:
<configuration>
  ...
  <appSettings>
    <!-- Path relative to the build output directory -->
    <add key="FixturesRootDirectory" value="..\..\Fixtures" />
  </appSettings>
  ...
</configuration>
In a [BeforeScenario] hook, set the absolute path to the fixtures directory on the current scenario context (reference: How do I get the path of the assembly the code is in?)
using System;
using System.Configuration;
using System.IO;
using System.Reflection;
using TechTalk.SpecFlow;

namespace Foo
{
    [Binding]
    public class CommonHooks
    {
        [BeforeScenario]
        public void BeforeScenario()
        {
            InitFixturesPath();
        }

        private void InitFixturesPath()
        {
            if (ScenarioContext.Current.ContainsKey("FixturesPath"))
                return;

            // Resolve the directory the test assembly was built to
            string codeBase = Assembly.GetExecutingAssembly().CodeBase;
            var uri = new UriBuilder(codeBase);
            string assemblyDir = Path.GetDirectoryName(Uri.UnescapeDataString(uri.Path));

            // Append the configured relative path and normalize it
            string fixturesPath = Path.GetFullPath(Path.Combine(
                assemblyDir,
                ConfigurationManager.AppSettings["FixturesRootDirectory"]));

            ScenarioContext.Current.Set<string>("FixturesPath", fixturesPath);
        }
    }
}
Now you can use ScenarioContext.Current.Get<string>("FixturesPath") to get the root directory for all of your fixtures. You could even write your own Fixtures helper class:
public static class FixturesHelper
{
    public static string Path { get; set; }

    // other methods and properties making it easier to use fixtures
}
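As a rough sketch of how a step definition could then reach one of the folders from the question (the step text and binding class here are made up; only the SimpleTestModel folder name comes from the question):
using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class ModelSteps
{
    [Given(@"the application reads the simple test model")]
    public void GivenTheApplicationReadsTheSimpleTestModel()
    {
        // Root of all fixtures, as set in the BeforeScenario hook above
        string fixturesPath = ScenarioContext.Current.Get<string>("FixturesPath");

        // Folder structure the application under test should read
        string modelDirectory = Path.Combine(fixturesPath, "SimpleTestModel");
        // ... point the application under test at modelDirectory
    }
}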

Related

Unexpected Location of Assembly.GetExecutingAssembly When NUnit Runs Multiple Assemblies

I recently encountered an odd issue when performing unit tests. My solution contains a helper class with a property for getting the directory of the executing assembly. It looks like this:
public static class DirectoryHelper
{
    public static string ExecutingAssemblyDirectory
    {
        get
        {
            var codeBase = Assembly.GetExecutingAssembly().CodeBase;
            var uri = new UriBuilder(codeBase);
            var path = Uri.UnescapeDataString(uri.Path);
            return Path.GetDirectoryName(path);
        }
    }
}
This method is called through various test classes to get relative file paths to dependent resources.
Take the following contrived projects as examples:
TestProject1.dll - TestFixture1.cs
[TestFixture]
public class TestFixture1
{
    [Test]
    public void VerifyExecutingAssemblyDirectory1()
    {
        StringAssert.Contains(@"\TestProject1\bin\Debug",
            DirectoryHelper.ExecutingAssemblyDirectory);
    }
}
TestProject2.dll - TestFixture2.cs
[TestFixture]
public class TestFixture2
{
    [Test]
    public void VerifyExecutingAssemblyDirectory2()
    {
        StringAssert.Contains(@"\TestProject2\bin\Debug",
            DirectoryHelper.ExecutingAssemblyDirectory);
    }
}
When these tests are run individually, they pass, and the location of the returned assembly is the debug folder of the test project.
However, when they are run together, TestFixture2.VerifyExecutingAssemblyDirectory2() actually returns the path to the bin folder of TestProject1, rather than TestProject2.
I'm trying to determine why this behavior is happening and understand a better way of going about this.
I've found that using .GetCallingAssembly will resolve this problem, but it doesn't seem like I should have to do this.
I've created an example to reproduce this issue and posted to GitHub. TylerNielsen/NUnitExecutingAssemblyExample
Note: I'm aware of the TestContext.TestDirectory in NUnit, however this library is currently not dependent on NUnit and I'd prefer to keep it that way.
UPDATE
I'm running the NUnit tests through both Resharper in Visual Studio and via NUnit3-Console. When I run using NUnit3-Console, I'm only specifying the two individual .dlls and not providing any other arguments.
Both TestProject1 and TestProject2 reference the assembly containing DirectoryHelper. I'm assuming that your references cause the assembly to be copied to the individual (separate) output directories.
When you run both test assemblies together, one of them causes its "personal" copy of that assembly to be loaded. The second one finds that the assembly is already in memory.
Of course, this behavior will depend on how you run the assemblies, which you haven't said. In the case where you use nunit3-console, it will also depend on your command-line arguments, especially whether you use a separate process for each assembly.
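One way to make the helper independent of which copy of its assembly happens to be loaded first is to let each caller identify its own assembly, for example by passing in a type from the test project. This is only a sketch of that idea, not the library's existing API:
using System;
using System.IO;

public static class DirectoryHelper
{
    // Resolves the bin directory of whatever assembly contains 'anchor',
    // e.g. DirectoryHelper.DirectoryOf(typeof(TestFixture2))
    public static string DirectoryOf(Type anchor)
    {
        var codeBase = anchor.Assembly.CodeBase;
        var uri = new UriBuilder(codeBase);
        var path = Uri.UnescapeDataString(uri.Path);
        return Path.GetDirectoryName(path);
    }
}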

When multiple unit tests copy the same file, running all unit tests fails

Description
I am writing unit tests for a method, which copies a file from a source to a destination. Basically it includes this code:
public void MyMethod()
{
    // ...
    File.Copy(source, destination, true);
    // ...
}
In my unit test project, I have a test file: (test.png), which is located in the Resources folder of my unit test project. And I've set the Copy to Output property to Always.
I have 3 unit tests which are testing this method.
When they hit the line of code that copies the file, the source is "Resources\\test.png".
Issue
When I run the unit test individually, they all pass and everything is fine.
However, when I run All Tests in Visual Studio, I get this run time error and unit tests fail:
System.IO.DirectoryNotFoundException
Could not find a part of the path 'Resources\test.png'.
My Thoughts...(UPDATED)
Probably because Visual Studio runs each unit test simultaneously in a separate thread and they are all accessing the same file at the same time?
I think for every unit test, Visual Studio is cleaning the bin/Debug and bin/Release folders, then copying all the required project files into that folder. Does this sometimes cause the file to not actually exist?
Question
How can I fix this problem?
Is there any settings of configurations to resolve this?
How can I run all unit tests in Visual Studio (and Team City) when multiple unit tests are accessing the same file?
You could try to rule out the multi-threading issue by following the instructions from MSDN: Executing Unit Tests in parallel on a multi-CPU/core machine, setting parallelTestCount to 1. If the tests now pass, you've narrowed down the problem.
However, if your tests are still failing when you run them in a group - and I think this is the more likely scenario -, then my advice would be to check for any state those tests are sharing. The pattern you describe (i.e. passes in isolation; fails when not in isolation) is a symptom typically exhibited by tests that are (incorrectly) sharing state, and that state is being modified by those tests, causing one or more tests to fail.
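If you are using a .testsettings file, the relevant knob is the parallelTestCount attribute on the Execution element; something along these lines (the rest of the file is omitted, and the exact schema may vary by Visual Studio version):
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- Run tests one at a time to rule out concurrency as the cause -->
  <Execution parallelTestCount="1">
  </Execution>
</TestSettings>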
Accessing the same file should not be a problem. Make sure you don't have a cleanup fixture (test suite level) that deletes the file, because from the exception it looks like the file is being deleted after running a test.
Also, concurrent read operations are fine and perfectly legal. If your unit tests are overwriting the file, then that is a problem.
What happened was: since I was using a relative path to the test files, the test runner's working directory when running the unit tests in a batch is, for some reason, different from when running individual tests, hence it couldn't find the directory.
So I used this function to build the absolute path to the testing files:
private string GetFilePath([CallerFilePath] string path = "")
{
    return path;
}
Then:
string projectDir = Path.GetDirectoryName(GetFilePath());
string testFile = Path.Combine(projectDir, @"Resources\test.png");
I'd speculate that your problem is that one of the tested methods changes directory, given the explicit "Directory Not Found" exception. It's improbable that file locking or any concurrency problems would cause the behaviour described.
If you are unit testing, you shouldn't really be testing whether File.Copy (or any of the File class methods) worked, since you didn't write that code. Instead you should test whether your code interacts correctly with the File type (i.e. did it pass the correct source file name, destination file name and overwrite value when you called "Copy"). First create an interface for the File class and a wrapper for it that implements the interface:
public interface IFileWrapper
{
    void Copy(string sourceFileName, string destFileName, bool overwrite);
    // Other required file system methods and properties here...
}

public class FileWrapper : IFileWrapper
{
    public void Copy(string sourceFileName, string destFileName, bool overwrite)
    {
        File.Copy(sourceFileName, destFileName, overwrite);
    }
}
You should then make the class you are testing take an IFileWrapper parameter (dependency injection). In your unit tests you can then use a mocking framework such as Moq, or you could write your own mock:
public class MockFileWrapper : IFileWrapper
{
    public string SourceFileName { get; set; }
    public string DestFileName { get; set; }
    public bool Overwrite { get; set; }

    public void Copy(string sourceFileName, string destFileName, bool overwrite)
    {
        SourceFileName = sourceFileName;
        DestFileName = destFileName;
        Overwrite = overwrite;
    }
}
In real implementations pass in FileWrapper as the IFileWrapper parameter, but in your unit tests pass in MockFileWrapper. By checking the properties of the mock file wrapper in your unit tests you can now determine whether your class calls Copy and how it is called (see the sketch below). Since you are no longer sharing a real file between your unit tests, you avoid the chance of the tests sharing state or potentially locking the file.
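A rough illustration of what such a test could look like; the FileCopier class and its CopyImage method are hypothetical stand-ins for the class under test:
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test that takes the wrapper via constructor injection
public class FileCopier
{
    private readonly IFileWrapper _fileWrapper;

    public FileCopier(IFileWrapper fileWrapper)
    {
        _fileWrapper = fileWrapper;
    }

    public void CopyImage(string source, string destination)
    {
        _fileWrapper.Copy(source, destination, true);
    }
}

[TestMethod]
public void CopyImage_PassesCorrectArgumentsToCopy()
{
    var mock = new MockFileWrapper();
    var copier = new FileCopier(mock);

    copier.CopyImage(@"Resources\test.png", @"Output\test.png");

    // Verify how Copy was called, without touching the real file system
    Assert.AreEqual(@"Resources\test.png", mock.SourceFileName);
    Assert.AreEqual(@"Output\test.png", mock.DestFileName);
    Assert.IsTrue(mock.Overwrite);
}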
As you mentioned in your answer, the test framework does not always run tests with the working directory set to your build output folder.
To instruct the test framework to place build artifacts or other files from your build output into the test directory, you need to use DeploymentItemAttribute. For your case, you would do something like:
const string destination = "Destination.txt";
const string source = "MyData.txt";

[DeploymentItem(source)]
[TestMethod]
public void MyMethod()
{
    // …
    File.Copy(source, destination, true);
    // …
}

[TestCleanup]
public void Cleanup()
{
    // Clean up the destination so that subsequent tests using
    // the same deploy don’t collide.
    File.Delete(destination);
}
Also ensure that your files have a Build Action of Content and Copy to Output Directory set to Copy always. Otherwise, they won’t be in the build output directory and won’t be able to be copied to the test directory.

Get the path from a SpecFlow feature file in a Step Definition

Is it possible to retrieve the path of a SpecFlow feature file during runtime in a Step Definition?
Snippet:
[Given(@"Some given statement")]
public void GivenSomeGivenStatement()
{
    var featureFilePath = // retrieve the path of the feature file
                          // that executes this step.
}
Context:
We do testing on databases and queries. The source data is created in Excel files and .SQL files (for check queries). These source data are large datasets, so it is not feasible to put them into the feature files themselves or to use the SpecFlow.Plus.Excel extension.
To keep the data close to the feature file, we want to have this data in the same folder as the feature file itself. To achieve this, we need the path to the feature file, so we also have the path to the test data.
Here's a suggestion. This is just something I put together quickly, so there is lots of room for improvement. It relies on the feature file name being identical to the title of the feature you provide in the description. It also assumes you have a conventional folder structure for your SpecFlow VS project, as there is a lot of string manipulation.
Firstly, the calling code should use the SpecFlow BeforeScenario attribute. Something like this:
public void BeforeScenario()
{
    // grabs the Feature title from the SpecFlow context
    var featureName = FeatureContext.Current.FeatureInfo.Title;

    // calls a method to obtain the path of the feature file
    var featureFilePath = GetFeatureFilePath(featureName);
}
The method GetFeatureFilePath will then look like this:
private static string GetFeatureFilePath(string featureName)
{
    string startupPath = Environment.CurrentDirectory;
    var splitStartupPath = startupPath.Split(new[] { "\\" }, StringSplitOptions.None);

    var featureFolder = splitStartupPath[0] + @"\" +
                        splitStartupPath[1] + @"\" +
                        splitStartupPath[2] + @"\" +
                        splitStartupPath[3] + @"\" +
                        splitStartupPath[4] + @"\" +
                        splitStartupPath[5] + @"\Features\";

    var dir = new DirectoryInfo(featureFolder);
    foreach (var fi in dir.GetFiles())
    {
        if (fi.FullName.Contains(featureName))
            return fi.FullName;
    }

    return "No Feature File Found With Title: " + featureName;
}
It grabs your current directory and splits it to the point where the Features folder should be. It then iterates through each feature file until it finds one that contains your feature title in its path name and returns that as a full path.
I'm not aware of any other way to get this currently.
I don't think knowing the path to the feature file will be possible, as the feature file is used to generate a file containing the unit tests and this is compiled and copied to the test run directory.
The simplest thing will be to set the files as part of the solution and then have them copied to the output directory when the project builds.
If you are using NUnit as the test framework, then the files should be in the same directory the tests are executing in, so you should just be able to load them without specifying any path, or use Assembly.GetExecutingAssembly().Location to find out where the code is actually executing.
If you are using MSTest then you need to add a [DeploymentItem(FileToDeploy)] attribute to the test to ensure that the file actually gets deployed with the tests when they are run. Unfortunately as Specflow generates the tests it won't add this for you. To solve this you need to create a partial class which has the same name as the class which contains the tests. This class is called the same as the feature with 'Feature' tagged on the end. So if you have this in your feature:
Feature: Do A Thing
Then your test class will be called DoAThingFeature,
so you need to create a partial class like this:
[DeploymentItem("FileToDeploy.ext")]
public partial class DoAThingFeature
{}
to ensure that MsTest copies the file you need to the correct directory.
Edit
Based on your comment, you could maybe do something similar to this:
Add tags to your feature: @hasFiles @source:myFile.xlsx
Then you could add this class:
[Binding]
public class DeployFiles
{
    [BeforeScenario("hasFiles")]
    public void CopyFiles()
    {
        // ...in here, find the current executing directory and search
        // ...the subtree for any files defined in the
        // ...ScenarioInfo.Tags array that start with "source:" and copy
        // ...them to the current executing directory
    }
}
then any scenario tagged with @hasFiles will deploy any files specified by @source tags to the root directory where the tests are running.
Not pretty and I'm not certain it'll work, but it might.
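For what it's worth, here is a rough sketch of what that hook could look like, under the assumption that the tagged files live somewhere under the project tree a few levels above the build output directory (the search strategy is an assumption, not tested):
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using TechTalk.SpecFlow;

[Binding]
public class DeployFiles
{
    [BeforeScenario("hasFiles")]
    public void CopyFiles()
    {
        // Directory the tests are executing from
        string targetDir = Path.GetDirectoryName(
            new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath);

        foreach (var tag in ScenarioContext.Current.ScenarioInfo.Tags)
        {
            // Tags arrive without the leading '@', e.g. "source:myFile.xlsx"
            if (!tag.StartsWith("source:"))
                continue;

            string fileName = tag.Substring("source:".Length);

            // Assumption: the file sits somewhere under the project/solution tree,
            // a few levels above the build output directory
            string searchRoot = Directory.GetParent(targetDir).Parent.Parent.FullName;
            string match = Directory
                .GetFiles(searchRoot, fileName, SearchOption.AllDirectories)
                .FirstOrDefault();

            if (match != null)
                File.Copy(match, Path.Combine(targetDir, fileName), true);
        }
    }
}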
Maybe this could help you: in .NET 4.5 you can get hold of the path to the caller; take a look at this thread: source path in .net 4.5

Generating methods in design or in build time (C#)

I have an integration testing solution. I have my tests described in XML files. In order to capitalize on Visual Studio 2010 testing infrastructure, I have a C# class where every XML test file has an associated method that loads the XML file and executes its content. It looks like this:
[TestClass]
public class SampleTests
{
    [TestMethod]
    public void Test1()
    {
        XamlTestManager.ConductTest();
    }

    [TestMethod]
    public void Test2()
    {
        XamlTestManager.ConductTest();
    }

    ...

    [TestMethod]
    public void TestN()
    {
        XamlTestManager.ConductTest();
    }
}
Each method name corresponds to an XML file name. Hence, I have to have the following files in my test directory:
Test1.xml
Test2.xml
...
TestN.xml
XamlTestManager.ConductTest() uses the StackTrace class to get the name of the calling method and this way it can find the correct XML test file to load.
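For context, the lookup inside ConductTest presumably works along these lines; this is only a sketch of the idea, not the actual implementation (the TestDirectory resolution in particular is an assumption):
// requires: using System; using System.Diagnostics; using System.IO;
public static class XamlTestManager
{
    // Assumed to point at the directory holding the XML test files
    private static readonly string TestDirectory = AppDomain.CurrentDomain.BaseDirectory;

    public static void ConductTest()
    {
        // Name of the calling [TestMethod], e.g. "Test1"
        string testName = new StackTrace().GetFrame(1).GetMethod().Name;

        // The matching XML test file is expected to sit next to the test binaries
        string testFile = Path.Combine(TestDirectory, testName + ".xml");
        // ... load and execute the test case described in testFile
    }
}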
I would like to get rid of the extra administration of adding/removing/renaming test methods whenever I change my tests by adding/removing/renaming an XML test file. How can I automagically generate this class or its methods during the compilation process, based on the actual XML files in my test directory?
Option 1:
I have considered PostSharp, but it does not allow me to look up the XML files and generate methods on the fly (or did I miss something?).
Option 2:
The other idea was to build a Visual Studio custom tool that generates my code whenever it is executed. The downside here is the deployment: the custom tool needs to be registered with VS. I want a solution that can be committed into a repository, checked out on another computer, and used right away.
(I believe in simplicity. "Check out and run" just simplifies the life of new developers soooooo much if they do not need to go through a list of things to install before they can compile and run the application.)
Do you have any recommendation on how to get rid of this unnecessary maintenance burden?
EDIT:
At Justin's request, I am adding more details. We use BizUnit (fantastic!!!) as the basis of our framework, with a truckload of custom-made high-level test steps. From these steps we can build our tests declaratively, like from Lego blocks. Our steps include things like FileDrop, web service invocation (or even polling), firing up a full-blown web server to simulate a partner web application, random data generators, data comparison steps, etc. Here is an example test XML (in fact XAML):
<TestCase BizUnitVersion="4.0.154.0" Name="StackOverflowSample" xmlns="clr-namespace:BizUnit.Xaml;assembly=BizUnit" xmlns:nib="clr-namespace:MyCompany.IntegrationTest;assembly=BizUnit.MyCustomSteps" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<TestCase.SetupSteps>
<nib:ClearStep FailOnError="True" RunConcurrently="False" />
<nib:LaunchSimulatedApp AppKernelCacheKey="provider" FailOnError="True" FireWakeUpCall="False" PortNumber="4000" RepresentedSystem="MyProviderService" RunConcurrently="False" />
<nib:HttpGetStep FailOnError="True" RunConcurrently="False" Url="http://localhost:10000/Home/StartSvgPolling">
<nib:HttpGetStep.Parameters>
<x:String x:Key="PolledAddress">http://localhost:4000/SvgOutputPort.asmx</x:String>
<x:String x:Key="PollingInterval">10</x:String>
<x:String x:Key="FilterFile"></x:String>
</nib:HttpGetStep.Parameters>
</nib:HttpGetStep>
</TestCase.SetupSteps>
<TestCase.ExecutionSteps>
<nib:DocumentMergeStep FailOnError="True" OutputCacheKey="inputDocument" RunConcurrently="False">
<nib:DocumentMergeStep.InputDocuments>
<nib:RandomLoader BoundingBox="Europe" LinkbackUrlPattern="http://MyProviderService/id={0}" MaxAmount="10" MaxID="100" MinAmount="10" MinID="0" NamePattern="EuropeanObject_{0}" NativeFormat="Svg" RepeatableRandomness="False" UriPrefix="European" />
<nib:RandomLoader BoundingBox="PacificIslands" LinkbackUrlPattern="http://MyProviderService/id={0}" MaxAmount="10" MaxID="100" MinAmount="10" MinID="0" NamePattern="PacificObject_{0}" NativeFormat="Svg" RepeatableRandomness="False" UriPrefix="Pacific" />
</nib:DocumentMergeStep.InputDocuments>
</nib:DocumentMergeStep>
<nib:PushToSimulatedApp AppKernelCacheKey="provider" ContentFormat="Svg" FailOnError="True" RunConcurrently="False">
<nib:PushToSimulatedApp.InputDocument>
<nib:CacheLoader SourceCacheKey="inputDocument" />
</nib:PushToSimulatedApp.InputDocument>
</nib:PushToSimulatedApp>
<nib:GeoFilterStep FailOnError="True" OutputCacheKey="filteredDocument" RunConcurrently="False" SelectionBox="Europe">
<nib:GeoFilterStep.InputDocument>
<nib:CacheLoader SourceCacheKey="inputDocument" />
</nib:GeoFilterStep.InputDocument>
</nib:GeoFilterStep>
<nib:DeepCompareStep DepthOfComparision="ID, Geo_2MeterAccuracy, PropertyBag, LinkbackUrl" FailOnError="True" RunConcurrently="False" Timeout="30000" TolerateAdditionalItems="False">
<nib:DeepCompareStep.ReferenceSource>
<nib:CacheLoader SourceCacheKey="filteredDocument" />
</nib:DeepCompareStep.ReferenceSource>
<nib:DeepCompareStep.InvestigatedSource>
<nib:SvgWebServiceLoader GeoFilter="Europe" NvgServiceUrl="http://localhost:10000/SvgOutputPort.asmx"/>
</nib:DeepCompareStep.InvestigatedSource>
</nib:DeepCompareStep>
</TestCase.ExecutionSteps>
<TestCase.CleanupSteps>
<nib:HttpGetStep FailOnError="True" RunConcurrently="False" Url="http://localhost:10000/Home/StopSvgPolling">
<nib:HttpGetStep.Parameters>
<x:String x:Key="PolledAddress">http://localhost:4000/SvgOutputPort.asmx</x:String>
</nib:HttpGetStep.Parameters>
</nib:HttpGetStep>
<nib:KillSimulatedApp AppKernelCacheKey="provider" FailOnError="True" PortNumber="4000" RunConcurrently="False" />
</TestCase.CleanupSteps>
</TestCase>
This is what it does:
Invokes a Clear operation on the test subject
Launches a webserver on port 4000 as a simulated partner app under the name MyProviderService
Invokes the test subject via HTTP Get to poll the simulated partner
Creates a new document containing geo info from two random generated content
Pushes the document to the simulated partner - hence the test subject will pick it up via polling
The test applies a geo filter on the document
The deep compare step loads the filtered document as the basis of comparison, and loads the content of the test subject via a web service
As clean-up, it stops the polling via an HTTP GET step and kills the simulated partner's web server.
The power of BizUnit is that it merges the ease of creating tests in C# with IntelliSense and the ease of maintaining/duplicating them in XAML files. For a quick, easy read on how it works: http://kevinsmi.wordpress.com/2011/03/22/bizunit-4-0-overview/
As @GeorgeDuckett said, T4 templates are probably the way to go. In the application I am working on, we use them for a lot of things, including generating Repositories, Services, ViewModels, Enums and recently unit tests.
They are basically code-generating scripts written in either VB or C#; looking at a directory for XML files would be no problem for these kinds of templates.
If you do choose to go the T4 route, the Tangible T4 Editor is definitely a must-have; it is a free download.
Here is a quick example of a T4 script which should do or be pretty close to what you want:
<#@ template language="C#" debug="true" hostspecific="true" #>
<#@ import namespace="System.IO" #>
<#@ output extension=".g.cs" #>
[TestClass]
public class SampleTests
{
<#
    string[] files = Directory.GetFiles(@"C:\TestFiles", "*.xml");
    foreach(string filePath in files)
    {
        string fileName = Path.GetFileNameWithoutExtension(filePath);
#>
    [TestMethod]
    public void <#=fileName#>()
    {
        XamlTestManager.ConductTest();
    }
<#
    }
#>
}
Make sure this is placed in a file with the .tt extension, then in the Properties window for this file, ensure that Build Action is None and Custom Tool is TextTemplatingFileGenerator.
Edit: Accessing output directory from T4 template
Add the following two lines to the top of your T4 template, under the <#@ template ... #> line:
<#@ assembly name="EnvDTE" #>
<#@ import namespace="EnvDTE" #>
Then inside your template, you can access and use the visual studio API like so:
IServiceProvider serviceProvider = this.Host as IServiceProvider;
DTE dte = serviceProvider.GetService(typeof(DTE)) as DTE;
object[] activeSolutionProjects = dte.ActiveSolutionProjects as object[];
if (activeSolutionProjects != null)
{
    Project project = activeSolutionProjects[0] as Project;
    if (project != null)
    {
        Properties projectProperties = project.Properties;
        Properties configurationProperties = project.ConfigurationManager.ActiveConfiguration.Properties;

        string projectDirectory = Path.GetDirectoryName(project.FullName);
        string outputPath = configurationProperties.Item("OutputPath").Value.ToString();
        string outputFile = projectProperties.Item("OutputFileName").Value.ToString();

        string outDir = Path.Combine(projectDirectory, outputPath);
        string targetPath = Path.Combine(outDir, outputFile);
    }
}
outDir and targetPath contain the output directory and the full path to the output file.
Instead of creating a separate test for each set of test data you can create a single test that is repeatedly run for each set of test data:
[TestClass]
public class SampleTests
{
    [TestMethod]
    public void Test()
    {
        for (var i = 0; i < 10; ++i)
            XamlTestManager.ConductTest(i);
    }
}
You can also perform data-driven tests by using the DataSource attribute. This will perform your test for each row in your data set.
[TestClass]
public class SampleTests
{
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DataSource(...)]
    public void Test()
    {
        var someData = TestContext.DataRow["SomeColumnName"].ToString();
        ...
    }
}
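For example, a DataSource pointing at an XML file could be wired up roughly like this; the file name ("TestCases.xml"), row element name ("TestCase") and column ("File") are hypothetical:
[TestMethod]
[DeploymentItem("TestCases.xml")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
            "|DataDirectory|\\TestCases.xml",
            "TestCase",                      // name of the row elements in the XML
            DataAccessMethod.Sequential)]
public void Test()
{
    // Runs once per <TestCase> element in TestCases.xml
    var fileName = TestContext.DataRow["File"].ToString();
    // ... hand fileName to whatever executes the XML test case
}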
I actually don't think this is a job for build-time code generation, I think you should use data attributes to drive the tests in this case.
If you used xunit you could do that like this:
public class SampleTests
{
    [Theory]
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(...)]
    [InlineData(N)]
    public void Test(int x)
    {
        XamlTestManager.ConductTest(x);
    }
}
And it will run the test once per InlineData attribute. Also I believe there is another attribute that you can pass a path to a file and it will populate your parameters with values from that file...
I think NUnit has a similar feature but XUnit is much better, I would recommend using XUnit instead.
I just answered a similar "code generation from XML with T4" question:
https://stackoverflow.com/a/8554949/753110
Your requirement matches exactly what we did initially (and what led to the discovery of the ADM described in that answer).
We are currently working on test-case based generation, where the test cases are actually built by the testing staff, yet the complete integration tests in code are generated to support them.
I added a custom XML-based generation demo for that other example, if you want to see it:
https://github.com/abstractiondev/DemoSOCase8552428ABS

Visual Studio - Unit tests loading resources in the project

The goal is to run some tests given some data in XML files that are part of the project (a Msgs folder in the example below).
How would you easily load a given Xml file into an XmlDoc within the unit test methods?
Current state is:
XmlDocument doc = new XmlDocument();
string xmlFile = "4.xml";
string dir = System.IO.Directory.GetCurrentDirectory() + @"\Msgs\";
// dir is then the value of the current exe's path, which is
// d:\sourcecode\myproject\TestResults\myComputer 2009-10-08 16_07_45\Out
// we actually need:
// d:\sourcecode\myproject\Msgs\
doc.Load(dir + xmlFile); // should really use System.IO.Path.Combine()!
Is it just a simple matter of putting that path in an app.config? I was hoping to avoid that, given the possibility of different paths on developer machines.
Question: How would you write the algorithm to load a given Xml file into an XmlDocument in the unit test method?
There is a Visual Studio Unit Testing feature for this: DeploymentItemAttribute
I use this feature to copy all xml files in a given project folder to the unit test output folder, before testing if all required files are present.
You can use this attribute with your unit tests to copy specific files from the Project folder (or anywhere else) to the Unit Test output folder. Like so:
[TestMethod()]
[DeploymentItem("MyProjectFolder\\SomeDataFolder\\somefile.txt", "SomeOutputSubdirectory")]
public void FindResourcefile_Test()
{
    string fileName = "SomeOutputSubdirectory\\somefile.txt";
    Assert.IsTrue(System.IO.File.Exists(fileName));
}
You can also copy the contents of whole folders:
[TestMethod()]
[DeploymentItem("MyProjectFolder\\SomeDataFolder\\", "SomeOutputSubdirectory")]
public void FindResourcefile_Test()
{
    string fileName = "SomeOutputSubdirectory\\someOtherFile.txt";
    Assert.IsTrue(System.IO.File.Exists(fileName));
}
The first parameter is the source, the second the destination folder. The source is relative to your solution folder (so you can access the Unit Test project of the project being tested) and the destination is relative to the output folder of the unit test assembly.
UPDATE:
You need to enable Deployment in the Test Settings for this to work. This MSDN page explains how (it's real easy): http://msdn.microsoft.com/en-us/library/ms182475(v=vs.90).aspx#EnableDisableDeploy
You can build those files into your executable (set their "Build Action" property to "Embedded Resource") and then get them using the Assembly.GetManifestResourceStream method.
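A minimal sketch of that approach; the resource name is an assumption (by default it is the project's default namespace plus the folder path and file name, dot-separated):
// requires: using System.IO; using System.Reflection; using System.Xml;
XmlDocument doc = new XmlDocument();
using (Stream stream = Assembly.GetExecutingAssembly()
    .GetManifestResourceStream("MyTestProject.Msgs.4.xml")) // hypothetical resource name
{
    doc.Load(stream);
}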
In the unit test project add a post-build event that copies the XML file to the output directory. Then, you can use your original code to get the XML file.
The post-build event will look something like this:
copy $(SolutionDir)file.xml $(ProjectDir)$(OutDir)file.xml
You may also need to add this to your path:
Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)
I use a helper class to deal with getting basic paths I might want to access in my Unit Tests.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Brass9.Testing
{
    public static class TestHelper
    {
        public static string GetBinPath()
        {
            return System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
        }

        public static string GetProjectPath()
        {
            string appRoot = GetBinPath();
            var dir = new DirectoryInfo(appRoot).Parent.Parent.Parent;
            var name = dir.Name;
            return dir.FullName + @"\" + name + @"\";
        }

        public static string GetTestProjectPath()
        {
            string appRoot = GetBinPath();
            var dir = new DirectoryInfo(appRoot).Parent.Parent;
            return dir.FullName + @"\";
        }

        public static string GetMainProjectPath()
        {
            string testProjectPath = GetTestProjectPath();
            // Just hope it ends in the standard .Tests, lop it off, done.
            string path = testProjectPath.Substring(0, testProjectPath.Length - 7) + @"\";
            return path;
        }
    }
}
Sometimes my interactions with paths are more complex; I often use a central class I name "App" to indicate some basic details about the application, like its root folder, its root namespace and module, etc. Classes will sometimes depend on App's existence, and so instead I'll place an init method on App that uses code like the above to initialize itself for test harnesses, and call that method from the Init command in a Unit Test.
(Updated)
Old Answer
I found this helps for getting arbitrary paths to access files in the project folder you intend to test (as opposed to files in the Test project folder, which can make busywork if you need to copy things over).
DirectoryInfo projectDir = new DirectoryInfo(@"..\..\..\ProjectName");
string projectDirPath = projectDir.FullName;
You can then use either of those variables to access whatever you need from the related project. Obviously swap "ProjectName" out for the actual name of your project.
Resources are just resources and that's it, no need to complicate. If you don't want to embed them then you could add these files as "Content" resources to your project and set them to Copy always. Then specify the sub-folder in your code:
var xmlDoc = XElement.Load("ProjectSubFolder\\Resource.xml");
This will automatically load the resource from the project output (the running assembly's location), bin\$(Configuration)\ProjectSubFolder\.
This works for all types of projects, not just unit tests.
I would just put the path in the app.config and load from the default path. In my team, I am really anal about developers changing paths, so I make all my developers have the exact same paths and files on their computers, so I don't have an issue with any rogue developer changing a path to suit his workspace.
For example, all developers in my team must use C:\Project\Product\Module, etc. I also make sure all the software they install is standard. This way, I can ghost any machine onto any other easily.
I think in VS.NET 2012 the DeploymentItem attribute works without any Test Settings configuration.
