I have a test that fails, and I've figured out that it is because of the HttpContext call inside my GetProofOfPurchase method. Here is the line it fails on:
var image = Image.GetInstance(HttpContext.Current.Server.MapPath(GlobalConfig.HeaderLogo));
This is my test:
[Test]
public void GetProofOfPurchase_Should_Call_Get_On_ProofOfPurchaseRepository()
{
string customerNumber = "12345";
string orderNumber = "12345";
string publicItemNumber = "12345";
var result = new ProofOfPurchase();
this.proofOfPurchaseRepository.Expect(p => p.Get(new KeyValuePair<string,string>[0])).IgnoreArguments().Return(result);
this.promotionTrackerService.GetProofOfPurchase(customerNumber, orderNumber, publicItemNumber);
this.promotionTrackerRepository.VerifyAllExpectations();
}
The test fails on the promotionTrackerService.GetProofOfPurchase line. How do I fake the HttpContext in this situation? I have searched Stack Overflow for similar issues but I'm unable to get anything to work.
I've tried doing this:
var image = Image.GetInstance(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, GlobalConfig.HeaderLogo));
But it fails saying:
System.Net.WebException : Could not find a part of the path 'C:\Images\HeaderLogo.png'.
From what I've read on Stack Overflow I shouldn't be using HttpContext.Current if I plan to unit test it, which is why I have tried using Path.Combine, but I'm unable to get that to work properly.
Can someone offer some guidance to what I need to do to get this unit test to work?
Thank you!
What I prefer to do when writing tests for code involving non-pure functions is to hide them behind, in the simplest cases, a plain old Func<string, string>:
class PromotionTrackerService
{
    private readonly Func<string, string> imageMapper;

    public PromotionTrackerService(Func<string, string> imageMapper)
    {
        // Fall back to the real server mapping when nothing is injected.
        this.imageMapper = imageMapper ?? (path => HttpContext.Current.Server.MapPath(path));
    }

    public void GetProofOfPurchase()
    {
        var image = Image.GetInstance(imageMapper(GlobalConfig.HeaderLogo));
    }
}
Now, your test does not look like a unit test -- it's more of an integration test, with all that file access and all.
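With the mapper injected, the test can stub the path lookup so it never touches HttpContext.Current. A rough sketch against the simplified class above (the TestData folder and file layout here are made up for illustration):

[Test]
public void GetProofOfPurchase_Should_Not_Need_HttpContext()
{
    // Map the virtual path onto the test output directory instead of the web server.
    Func<string, string> fakeMapper =
        virtualPath => Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                    "TestData", Path.GetFileName(virtualPath));

    var service = new PromotionTrackerService(fakeMapper);

    service.GetProofOfPurchase();
}

Bear in mind that Image.GetInstance will still try to open that file, so either copy a small test image into the output directory or push the image loading behind its own seam as well.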
Related
I am writing some integration tests for my web API, which means that it has to be running during the execution of the tests. Is there any way to run it with an in-memory database instead of a real one based on SQL Server?
Also, I need to run a few instances at a time, so I need somehow to change the base address of each of them to be unique. For example, I could append to the base URL these instance IDs, that are mentioned in the code below.
Here is the code which I am using to run a new instance for my tests:
public static class WebApiHelper
{
private const string ExecutableFileExtension = "exe";
private static readonly Dictionary<Guid, Process> _instances = new();
public static void EnsureIsRunning(Assembly? assembly, Guid instanceId)
{
if (assembly is null)
throw new ArgumentNullException(nameof(assembly));
var executableFullName = Path.ChangeExtension(
assembly.Location, ExecutableFileExtension);
_instances.Add(instanceId, Process.Start(executableFullName));
}
    public static void EnsureIsNotRunning(Guid instanceId)
        => _instances[instanceId].Kill();
}
Generally speaking, is this a good way to create test instances, or am I missing something? I am asking because maybe there is another, more 'legal' way to achieve my goal.
Okay, so in the end, I came up with this super easy and obvious solution.
As was mentioned in the comments, using the in-memory database is not the best way to test, because it does not support the relational features you rely on with MS SQL Server.
So I decided to go another way.
Step 1: Overwrite the connection strings.
In my case, that was easy since I have a static IConfiguration instance and just needed to overwrite the connection strings within that instance.
The method looks as follows:
private const string ConnectionStringsSectionName = "ConnectionStrings";
private const string TestConnectionStringFormat = "{0}_Test";

private static bool _connectionStringsOverwritten;

private static void OverwriteConnectionStrings()
{
    if (_connectionStringsOverwritten)
        return;

    var connectionStrings = MyStaticConfigurationContainer.Configuration
        .AsEnumerable()
        .Where(entry => entry.Key.StartsWith(ConnectionStringsSectionName)
            && entry.Value is not null);

    foreach (var connectionString in connectionStrings)
    {
        // Point each connection string at a "<OriginalDb>_Test" database.
        var builder = new SqlConnectionStringBuilder(connectionString.Value);
        builder.InitialCatalog = string.Format(TestConnectionStringFormat,
            builder.InitialCatalog);
        MyStaticConfigurationContainer.Configuration[connectionString.Key] = builder.ConnectionString;
    }

    _connectionStringsOverwritten = true;
}
Of course, you will need to handle creating the databases before the test run and deleting them afterwards; otherwise your test DBs may become a mess.
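If you are on EF Core, one way to handle that is a SetUpFixture that creates the _Test databases before any test runs and drops them afterwards. This is only a sketch; MyDbContext stands in for whatever context your project uses, and it assumes the connection strings have already been overwritten as above:

[SetUpFixture]
public class TestDatabaseFixture
{
    [OneTimeSetUp]
    public void CreateTestDatabases()
    {
        // Runs once, after the connection strings point at the *_Test catalogs.
        using var context = new MyDbContext();
        context.Database.EnsureCreated();
    }

    [OneTimeTearDown]
    public void DropTestDatabases()
    {
        using var context = new MyDbContext();
        context.Database.EnsureDeleted();
    }
}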
Step 2: Simply run your web API instance within a separate thread.
In my case, I am using the NUnit test framework, which means I just need to implement the web API setup logic within the fixture. Basically, the process would be more or less the same for every testing framework.
The code looks as follows:
[SetUpFixture]
public class WebApiSetupFixture
{
private const string WebApiThreadName = "WebApi";
[OneTimeSetUp]
public void SetUp() => new Thread(RunWebApi)
{
Name = WebApiThreadName
}.Start();
private static void RunWebApi()
=> Program.Main(Array.Empty<string>());
// 'Program' - your main web app class with entry point.
}
Note: the code inside Program.Main() will also look for connection strings in MyStaticConfigurationContainer.Configuration, which was changed in the previous step.
And that's it! Hope this could help somebody else :)
Below I've pasted my code. I'm validating a measure. I've written code that will read a Linux file, but would it be possible to pass multiple file names here? For example, instead of my test validating just one file, could I use a loop so it reads multiple files in one go?
Once the file has been read and processed, I return actualItemData. In my next method, I want to reference this actualItemData so the data ends up in my var actual.
public string validateMeasurement()
{
    var processFilePath = "/orabin/app/oracle/inputs/ff/ff/actuals/xx_ss_x.csv.ovr";
    var actualItemData = Common.LinuxCommandExecutor
        .RunLinuxcommand("cat " + processFilePath);
    return actualItemData;
}

public void validateInventoryMeasurementValue(string Data, string itemStatus)
{
    var expected = "6677,6677_6677,3001,6";
    var actual = actualItemData; // this is where I'm stuck -- how do I get actualItemData here?
    Assert.AreEqual(expected, actual);
}
It looks like you are using MSTest. As far as I know it doesn't support test cases. If you were to use NUnit you would be able to do this using the TestCase attribute.
[TestCase("myfile1.txt", "6677,6677_6677,3001,6")]
[TestCase("myfile2.txt", "1,2,3")]
public void mytest(string path, string expected)
{
var actual = Common.LinuxCommandExecutor.
RunLinuxcommand("cat " + path);
Assert.AreEqual(expected, actual);
}
Generally you don't want to write unit tests that cross code boundaries (read files, hit the database, etc.) as these tests tend to be brittle and difficult to maintain. I am not sure of the aim of your code, but it appears you may be trying to parse the data to check its validity. If this is the case, you could write a series of tests to ensure that when your production code (the parser) is given a string input, you get an output that matches your expectation, e.g.:
[Test]
public void Parse_GivenValidDataFromXX_S_X_CSV_ShouldReturnTrue()
{
    // Arrange
    var parser = CreateParser(); // factory function that returns your parser

    // Act
    var result = parser.Parse("6677,6677_6677,3001,6");

    // Assert
    Assert.IsTrue(result);
}
I am new to unit testing and wondering how to start. The application I am currently working on does not have any unit tests. It is a WinForms application, and I am only interested in testing the data layer of this application.
Here is an example.
public interface ICalculateSomething
{
    SomeOutput1 CalculateSomething1(SomeInput1 input1);
    SomeOutput2 CalculateSomething2(SomeInput2 input2);
}

public class CalculateSomething : ICalculateSomething
{
    SomeOutput1 ICalculateSomething.CalculateSomething1(SomeInput1 input1)
    {
        var output = new SomeOutput1();
        output.Prop1 = calculateFromInput1(input1.Prop1, input1.Prop2);
        output.Prop3 = calculateFromInput2(input1.Prop3, input1.Prop4);
        return output;
    }

    SomeOutput2 ICalculateSomething.CalculateSomething2(SomeInput2 input2)
    {
        var output = new SomeOutput2();
        output.Prop1 = calculateFromInput1(input2.Prop1, input2.Prop2);
        output.Prop3 = calculateFromInput2(input2.Prop3, input2.Prop4);
        return output;
    }
}
I would like to test these two methods in CalculateSomething. Their implementations are long and complicated. How should I structure my tests?
I don't see a reason for not using a straight-forward unit test implementation. I'd start with a basic test method:
[TestMethod]
public void CalculateSomething1_FooInput()
{
var input = new SomeInput1("Foo");
var expected = new SomeOutput1(...);
var calc = new CalculateSomething(...);
var actual = calc.CalculateSomething1(input);
Assert.AreEqual(expected.Prop1, actual.Prop1);
Assert.AreEqual(expected.Prop2, actual.Prop2);
Assert.AreEqual(expected.Prop3, actual.Prop3);
}
And then, as you add CalculateSomething1_BarInput and CalculateSomething2_FooInput, factor out some common code into helper methods:
[TestMethod]
public void CalculateSomething1_FooInput()
{
var input = new SomeInput1("Foo");
var expected = new SomeOutput1(...);
var actual = CreateTestCalculateSomething().CalculateSomething1(input);
AssertSomeOutput1Equality(expected, actual);
}
As far as unit testing is concerned, you have to create test methods for the functions you want to test.
[TestMethod()]
public void CalculateSomething1()
{
    // First we have to define the input for the function.
    var input = new SomeInput1(); // Assumes your constructor creates the values for Prop1 and Prop2. Change as needed.
    ICalculateSomething classToBeTested = new CalculateSomething();

    var output = classToBeTested.CalculateSomething1(input);

    // There are multiple ways to test if the outcome is correct; choose the one that fits the method/output.
    Assert.IsNotNull(output);
}
The method above would be in a unit test project and associated class file.
Some things to keep in mind when unit testing
Unit tests need to be independent
Long, complicated code should be refactored down into smaller units of code and tested.
Interfaces are an awesome way to remove dependencies. The use of interfaces allows concepts such as mocking. Mocking can be a little complicated at first, so take your time when learning it. There are several mocking frameworks out there that can help a lot, e.g. Rhino Mocks and Moq, just to name a couple.
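As a made-up illustration (IRateProvider is not from the question's code), stubbing a dependency through an interface with Rhino Mocks looks roughly like this:

// Hypothetical dependency, shown only to illustrate stubbing through an interface.
public interface IRateProvider
{
    decimal GetRate(string currencyCode);
}

[TestMethod]
public void Calculation_Uses_Stubbed_Rate()
{
    var rates = MockRepository.GenerateStub<IRateProvider>();
    rates.Stub(r => r.GetRate("USD")).Return(1.25m);

    // Whatever class consumes IRateProvider can now be tested without the real rate source.
    Assert.AreEqual(1.25m, rates.GetRate("USD"));
}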
Those are explicitly implemented interface methods, so you have to use an interface reference to call them.
var input1 = new SomeInput1();
// setup required data in input1.
ICalculateSomething calculator = new CalculateSomething();
var output = calculator.CalculateSomething1(input1);
// Have assert statements on the properties of output to verify the calculation.
Don't use var for calculator, because that will give you a CalculateSomething reference and the interface methods are hidden.
I have an application that takes a dictionary of files (file type, and list of file names) and copies the files from the original directory into another location.
I've already got the basic code for the copy process, but I need to do some unit tests so it is as robust as possible.
I have a wrapper class that I am using so I can test that the System.IO methods are called as I expect, but I am having some difficulty figuring out how to form the tests, as there are foreach and switch statements in the code.
Sample code below:
private IFileSystemIO _f;
public CopyFilesToDestination(IFileSystemIO f){
_f = f;
}
public void Cpy_Files(Dictionary<string, List<string>> files)
{
    // get a list of the file types in the dictionary
    var listOfFileTypes = new List<string>(files.Keys);

    foreach (var fileType in listOfFileTypes)
    {
        var fileList = files[fileType].ToList();

        foreach (var file in fileList)
        {
            switch (fileType)
            {
                case ".txt":
                    _f.Copy(file, @"c:\destination\text");
                    break;
                case ".dat":
                    _f.Copy(file, @"c:\destination\data");
                    break;
            }
        }
    }
}
To test the above I had thought I would use a mock dictionary object, set up with a list of file types and paths:
public class FakeDictionary
{
    public virtual Dictionary<string, List<string>> FakeFiles()
    {
        return new Dictionary<string, List<string>>
        {
            { ".txt", new List<string> { @"c:\test\file1.txt", @"c:\test\file2.txt" } },
            { ".dat", new List<string> { @"c:\test\file1.dat", @"c:\test\file2.dat" } }
        };
    }
}
The first test I came up with looks like this:
[Test]
public void Should_Copy_Text_Files(){
var dictionary = new FakeDictionary().FakeFiles();
var mockObject = MockRepository.GenerateMock<IFileSystemIO>();
var systemUnderTest = new CopyFileToDestination(mockObject);
systemUnderTest.Cpy_Files(dictionary);
// I think this means "test the operation, don't check the values in the arguments" but I also think I'm wrong
mockObject.AssertWasCalled(f => f.Copy("something", "something"), o => o.IgnoreArguments());
}
My first problem is: How do I test for a specific file type, such as ".txt"?
Then how do I test the loops? I know with the mocked dictionary that I only have two items, do I leverage this to form the test? How?
I think I may be close to a solution, but I am running out of time/patience hunting it down. Any help is greatly appreciated.
Thanks
Jim
I tried using Robert's solution, but as I stated, I have too many different file types to set up each test case individually. The next thing I tried was setting up a TestCaseSource, but every time I ran it, the test was marked as ignored:
[Test, TestCaseSource(typeof(FakeDictionary), "TestFiles")]
public void Cpy_Files_ShouldCopyAllFilesInDictionary(string extension, string fileName) {
    // Arrange
    var dictionary = new FakeDictionary().FakeFiles();
    var mockObject = MockRepository.GenerateMock<IFileSystemIO>();
    var systemUnderTest = new CopyFileToDestination(mockObject);
    // Act
    systemUnderTest.Cpy_Files(dictionary);
    // Assert
    mockObject.AssertWasCalled(f => f.Copy(extension, fileName));
}
The data source is below:
public static Dictionary<string, string> TestFiles{
get
{
return new Dictionary<string, string>()
{
{".txt",
"C:\\test\\test1.txt"},
{".txt",
"c:\\test\\test2.txt"}
};
}
}
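For reference, NUnit's TestCaseSource expects something it can enumerate into individual cases, such as a static property yielding TestCaseData, and a Dictionary<string, string> cannot hold two ".txt" keys anyway, which is likely why the runner gave up on it. A sketch of a shape NUnit accepts:

public static IEnumerable<TestCaseData> TestFiles
{
    get
    {
        yield return new TestCaseData(".txt", @"C:\test\test1.txt");
        yield return new TestCaseData(".txt", @"C:\test\test2.txt");
        yield return new TestCaseData(".dat", @"C:\test\file1.dat");
    }
}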
What I finally worked out uses the Repeat.Times option in Rhino Mocks and is really pretty simple:
[Test]
public void Cpy_Files_ShouldCopyAllFilesInDictionary(){
    // Arrange
    var dictionary = new FakeDictionary().FakeFiles();
    var mockObject = MockRepository.GenerateMock<IFileSystemIO>();
    var systemUnderTest = new CopyFileToDestination(mockObject);
    // Act
    systemUnderTest.Cpy_Files(dictionary);
    // Assert
    // I know how many objects are in my fake dictionary so I set the times to repeat as a const
    const int timesToRepeat = 2;
    // now I just set the values below. I am not testing file names so the test will ignore arguments
    mockObject.AssertWasCalled(f => f.Copy("", ""), options => options.Repeat.Times(timesToRepeat).IgnoreArguments());
}
I hope this helps someone else with a similar problem.
I would try making use of the TestCase attribute:
[TestCase(".txt", "c:\test\file1.txt")]
[TestCase(".txt", "c:\test\file2.txt")]
[TestCase(".dat", "c:\test\file1.dat")]
[TestCase(".dat", "c:\test\file2.dat")]
public void Should_Copy_Text_Files(string extension, string fileName){
var dictionary = new FakeDictionary().FakeFiles();
var mockObject = MockRepository.GenerateMock<IFileSystemIO>();
var systemUnderTest = new CopyFileToDestination(mockObject);
systemUnderTest.Cpy_Files(dictionary);
    // Copy(source, destination): check the source file; the destination folder is not constrained here.
    mockObject.AssertWasCalled(f => f.Copy(Arg<string>.Is.Equal(fileName), Arg<string>.Is.Anything));
}
This will run the test separately for each TestCase attribute, passing the parameters it contains into the test method. That way you can test that each item in your dictionary was "copied" without using multiple asserts in the same test.
I'm starting to dive into TDD with NUnit, and although I've enjoyed checking some resources I've found here on Stack Overflow, I often find myself not gaining good traction.
So what I'm really trying to achieve is some sort of checklist/workflow (and here's where I need your help) or "test plan" that will give me decent code coverage.
So let's assume an ideal scenario where we could start a project from scratch with let's say a Mailer helper class that would have the following code:
(I've created the class just for the sake of aiding the question with a code sample so any criticism or advice is encouraged and will be very welcome)
Mailer.cs
using System.Net.Mail;
using System;
namespace Dotnet.Samples.NUnit
{
public class Mailer
{
readonly string from;
public string From { get { return from; } }
readonly string to;
public string To { get { return to; } }
readonly string subject;
public string Subject { get { return subject; } }
readonly string cc;
public string Cc { get { return cc; } }
readonly string bcc;
public string BCc { get { return bcc; } }
readonly string body;
public string Body { get { return body; } }
readonly string smtpHost;
public string SmtpHost { get { return smtpHost; } }
readonly string attachment;
        public string Attachment { get { return attachment; } }
public Mailer(string from = null, string to = null, string body = null, string subject = null, string cc = null, string bcc = null, string smtpHost = "localhost", string attachment = null)
{
this.from = from;
this.to = to;
this.subject = subject;
this.body = body;
this.cc = cc;
this.bcc = bcc;
this.smtpHost = smtpHost;
this.attachment = attachment;
}
public void SendMail()
{
            if (string.IsNullOrEmpty(From))
                throw new ArgumentNullException("from", "Sender e-mail address cannot be null or empty.");
SmtpClient smtp = new SmtpClient();
MailMessage mail = new MailMessage();
smtp.Send(mail);
}
}
}
MailerTests.cs
using System;
using NUnit.Framework;
using FluentAssertions;
namespace Dotnet.Samples.NUnit
{
[TestFixture]
public class MailerTests
{
[Test, Ignore("No longer needed as the required code to pass has been already implemented.")]
public void SendMail_FromArgumentIsNotNullOrEmpty_ReturnsTrue()
{
// Arrange
dynamic argument = null;
// Act
Mailer mailer = new Mailer(from: argument);
// Assert
Assert.IsNotNullOrEmpty(mailer.From, "Parameter cannot be null or empty.");
}
[Test]
public void SendMail_FromArgumentIsNullOrEmpty_ThrowsException()
{
// Arrange
dynamic argument = null;
Mailer mailer = new Mailer(from: argument);
            // Act
            Action act = () => mailer.SendMail();
            // Assert
            act.ShouldThrow<ArgumentNullException>();
            Assert.Throws<ArgumentNullException>(new TestDelegate(act));
}
[Test]
public void SendMail_FromArgumentIsOfTypeString_ReturnsTrue()
{
// Arrange
dynamic argument = String.Empty;
// Act
Mailer mailer = new Mailer(from: argument);
// Assert
mailer.From.Should().Be(argument, "Parameter should be of type string.");
}
// INFO: At this first 'iteration' I've almost covered the first argument of the method so logically this sample is nowhere near completed.
// TODO: Create a test that will eventually require the implementation of a method to validate a well-formed email address.
// TODO: Create as much tests as needed to give the remaining parameters good code coverage.
}
}
So after writing my first two failing tests, the next obvious step is to implement the functionality that makes them pass. But should I keep those tests and create new ones after implementing that code, or should I modify the existing ones once they pass?
Any advice about this topic will really be enormously appreciated.
If you install TestDriven.Net, one of its components (NCover) actually helps you understand how much of your code is covered by unit tests.
Barring that, the best solution is to check each line, and run each test to make sure you've at least hit that line once.
I'd suggest that you pick up some tool like NCover which can hook onto your test cases to give code coverage stats. There is also a community edition of NCover if you don't want the licensed version.
If you use a framework like NUnit, there are methods available such as Assert.Throws, where you can assert that a method throws the required exception given the input: http://www.nunit.org/index.php?p=assertThrows&r=2.5
Basically, verifying expected behavior given good and bad inputs is the best place to start.
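For example, checking the bad-input path of the Mailer above with NUnit could look like this (a sketch based on the constructor and SendMail shown in the question):

[Test]
public void SendMail_WithEmptyFrom_ThrowsArgumentNullException()
{
    // Bad input: no sender address.
    var mailer = new Mailer(from: "", to: "someone@example.com", body: "Hi");

    Assert.Throws<ArgumentNullException>(() => mailer.SendMail());
}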
When people (finally!) decide to apply test coverage to an existing code base, it is impractical to test everything; you don't have the resources, and there isn't often a lot of real value.
What you ideally want to do is to make sure that your tests apply to newly written/modified code and anything that might be affected by those changes.
To do this, you need to know:
what code you changed. Your source control system will help you here at the level of this-file-changed.
what code is executed as a consequence of the new code being executed. For this you need either a static analyzer that can trace the downstream impact of the code (don't know of many of these) or a test coverage tool, which can show what has been executed when you run your specific tests. Any such executed code probably needs re-testing, too.
Because you want to minimize the amount of test code you write, you clearly want better than file-level granularity of "changed". You can use a diff tool (often built into your source control system) to help hone the focus to specific lines. Diff tools don't actually understand code structure, so what they report tends to be line-oriented rather than structure-oriented, producing rather bigger diffs than necessary; nor do they tell you the convenient point of test access, which is likely to be a method, because the whole style of unit testing is focused on testing methods.
You can get better diff tools. Our Smart Differencer tools provide differences in terms of program structures (expressions, statements, methods) and abstracting editing operations (insert, delete, copy, move, replace, rename) which can make it easier to interpret the code changes. This doesn't directly solve the "which method changed?" question, but it often means looking at a lot less stuff to make that decision.
You can get test coverage tools that will answer this question. Our Test Coverage tools have a facility to compare previous test coverage runs with current test coverage runs, to tell you which tests have to be re-run. They do so by examining the code differences (something like the Smart Differencer) but abstract the changes back to method level.