I am working on a large codebase with basically no unit test coverage. We are about to start moving toward a more test-driven approach, so I thought I would try to write a unit test for a very simple function I added. It boils down to:
class ClassUnderTest {
    public void SetNoMatchingImage() {
        currentState.State = FSMState.NoMatchingImage;
        ... // Some more logic
        NotifyViews();
    }

    public ViewStatus GetStatus() {
        ...
        if (currentState.State == FSMState.NoMatchingImage)
            return ViewStatus.EmptyScreen;
        ...
    }
    ...
}
OK, so to test this, I would just like to do:
[Test]
public void TestSetNoMatchingImage() {
    ClassUnderTest view = new ClassUnderTest(...);
    view.SetNoMatchingImage();
    Assert.AreEqual(ViewStatus.EmptyScreen, view.GetStatus());
}
But my problem here is that the ClassUnderTest constructor takes 3 arguments of non-interface types that cannot be null, so I cannot easily create a ClassUnderTest. I can try to either create instances of these classes or stub them, but the problem repeats itself: each of their constructors takes arguments that have to be created, and the problem is the same for those... and so on. The result is of course a very large overhead and a lot of code needed even for very simple tests.
Is there a good way of dealing with cases like this to make the test cases easier to write?
You'll have lots of situations like this when you start refactoring a project without tests, if it wasn't designed with Dependency Injection in mind and your mocking framework cannot mock concrete classes (NMock, for example, cannot).
As Andriys just mentioned, Typemock (and Moq too) can mock concrete classes, as long as the members you need are virtual.
Personally, I would extract an interface from each of those three classes and inject the interfaces instead, as part of the refactoring to make the class easy to test. Visual Studio has an Extract Interface refactoring that makes this a couple of clicks, so it shouldn't take too long.
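As a rough sketch of that refactoring (the names here are hypothetical; assume one of the three concrete constructor arguments is an ImageSource class):

public interface IImageSource
{
    bool HasMatchingImage();
}

// The existing concrete class simply implements the extracted interface.
public class ImageSource : IImageSource
{
    public bool HasMatchingImage() { /* existing logic */ return false; }
}

public class ClassUnderTest
{
    private readonly IImageSource _imageSource;

    // Tests can now pass a trivial stub instead of building the whole object graph.
    public ClassUnderTest(IImageSource imageSource /*, two more extracted interfaces */)
    {
        _imageSource = imageSource;
    }
}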
I would recommend looking at the Typemock Isolator framework. According to The Art of Unit Testing by Roy Osherove, it's your best bet when writing unit tests for legacy code:
...it’s the only one [framework] that allows you to create stubs and mocks of dependencies in production code without needing to refactor it at all, saving valuable time in bringing a component under test.
Cheers!
I'd second the recommendation for Typemock, and the solutions suggested in the other answers. In addition to what's already been said, Michael Feathers has written a book dealing with the patterns you're bumping up against, called 'Working Effectively With Legacy Code' -
http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052/ref=sr_1_1?ie=UTF8&qid=1313835584&sr=8-1
There's a PDF extract here - http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf
Related
I have a class that deals with Account stuff. It provides methods to login, reset password and create new accounts so far.
I inject the dependencies through the constructor. I have tests that validate each dependency's reference: if a reference is null, the constructor throws an ArgumentNullException.
The Account class exposes each of these dependencies through read-only properties, and I then have tests that validate that the reference passed to the constructor is the same one the property returns. I do this to make sure the references are being held by the class. (I don't know whether this is good practice either.)
First question: Is this a good practice in TDD? I ask because this class has 6 dependencies so far, and it gets very repetitive; the tests also get pretty long, as I have to mock all the dependencies for each test. What I do is copy and paste every time and just change the reference of the dependency being tested.
Second question: my account creation method does things like validating the model passed, inserting data into 3 different tables (or a fourth table if a certain set of values is present) and sending an email. What should I test here? So far I have a test that checks that the model validation gets executed and that the Add method of each repository gets called, and in this case I use Moq's Callback method on the mocked repository to compare each property being added to the repository against the ones I passed in the model.
Something like:
userRepository
    .Setup(r => r.Add(It.IsAny<User>()))
    .Callback<User>(u =>
    {
        Assert.AreEqual(model.Email, u.Email);
        Assert.IsNotNull(u.PasswordHash);
        //...
    })
    .Verifiable();
As I said, these tests are getting longer. I think it doesn't hurt to test anything I can, but I don't know if it's worth it, as it's taking time to write the tests.
The purpose of testing is to find bugs.
Are you really going to have a bug where the property exists but is not initialized to the value from the constructor?
public class NoNotReally {
    private IMyDependency _myDependency;
    public IMyDependency MyDependency { get { return _myDependency; } }

    public NoNotReally(IMyDependency dependency) {
        _myDependency = null; // instead of dependency. Really?
    }
}
Also, since you're using TDD, you should write the tests before you write the code, and the code should exist only to make the tests pass. Instead of your unnecessary tests of the properties, write a test that demonstrates that your injected dependency is being used. In order for such a test to pass, the dependency will need to exist, it will need to be of the correct type, and it will need to be used in the particular scenario.
In my example, the dependency will come to exist because it's needed, not because some artificial unit test required it to be there.
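For example, a minimal sketch of such a test, using Moq (which you already use) and a hypothetical IMailSender dependency with a Send method; adjust to your real dependencies:

[Test]
public void ResetPassword_NotifiesTheUser()
{
    var mailSender = new Mock<IMailSender>();
    var account = new Account(mailSender.Object /*, other mocked dependencies */);

    account.ResetPassword("user@example.com");

    // Passes only if the injected dependency is actually used,
    // which is the behaviour we care about.
    mailSender.Verify(m => m.Send("user@example.com", It.IsAny<string>()), Times.Once());
}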
You say writing these tests feels repetitive. I say you feel the major benefit of TDD. Which is in fact not writing software with fewer bugs and not writing better software, because TDD doesn't guarantee either (at least not inherently). TDD forces you to think about design decisions and make design decisions all. The. Time. (And it reduces debugging time.) If you feel pain while doing TDD, it's usually because a design decision is coming back to bite you. Then it's time to switch to your refactoring hat and improve the design.
Now in this particular case it's just the design of your tests, but you have to make design decisions for those as well.
As for testing whether properties are set: if I understand you correctly, you exposed those properties just for the sake of testing? In that case I'd advise against it. Assume you have a class with a constructor parameter, and a test that asserts the constructor should throw on null arguments:
public class MyClass
{
    public MyClass(MyDependency dependency)
    {
        if (dependency == null)
        {
            throw new ArgumentNullException("dependency");
        }
    }
}
[Test]
public void ConstructorShouldThrowOnNullArgument()
{
    Assert.Catch<ArgumentNullException>(() => new MyClass(null));
}
(TestFixture class omitted)
Now when you start to write a test for an actual business method of the class under test, the parts will start to fit together.
[Test]
public void TestSomeBusinessFunctionality()
{
    // setup mock (e.g. with Moq; the members used must be virtual to mock a concrete class)
    var mockedDependency = new Mock<MyDependency>();
    // set up calls on mockedDependency here

    MyClass myClass = new MyClass(mockedDependency.Object);

    var result = myClass.DoSomethingOrOther();

    // assertions on result
    // if necessary, assertions on calls on mockedDependency
}
At that point, you will have to assign the injected dependency from the constructor to a field so you can use it in the method later. And if you manage to get the test to pass without using the dependency... well, heck, obviously you didn't need it to begin with. Or, maybe, you'll only start to need it for the next test.
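To illustrate, the class from above might then grow into something like this (DoSomethingOrOther, its return type and the call on the dependency are made up for the sketch):

public class MyClass
{
    private readonly MyDependency _dependency;

    public MyClass(MyDependency dependency)
    {
        if (dependency == null)
        {
            throw new ArgumentNullException("dependency");
        }
        _dependency = dependency; // stored because a business test needs it
    }

    public int DoSomethingOrOther()
    {
        // The field exists because a business test drove it out,
        // not because a property test demanded it.
        return _dependency.ProvideSomeValue();
    }
}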
About the other point: when it becomes a hassle to test all the responsibilities of a method or class, TDD is telling you that the method/class is doing too much and would maybe like to be split up into parts that are easy to test. E.g. one class for verification, one for mapping and one for executing the storage calls.
That can very well lead to over-engineering, though! So watch out for that and you'll develop a feeling for when to resist the urge for more indirection. ;)
To test if properties are mapped properly, I'd suggest to use stubs or self-made fake objects which have simple properties. That way you can simply compare the source and target properties and don't have to make lengthy setups like the one you posted.
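For example, with a hand-rolled fake for the repository from your question (the interface and model names are assumed), the mapping test shrinks to:

// A fake that simply records what it was given.
public class FakeUserRepository : IUserRepository
{
    public User AddedUser { get; private set; }
    public void Add(User user) { AddedUser = user; }
}

[Test]
public void Create_MapsModelToUser()
{
    var repository = new FakeUserRepository();
    var account = new Account(repository /*, other dependencies */);
    var model = new RegisterModel { Email = "user@example.com" };

    account.Create(model);

    // Compare source and target properties directly - no Setup/Callback ceremony.
    Assert.AreEqual(model.Email, repository.AddedUser.Email);
}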
Normally in unit tests (especially in TDD), you are not going to test every single statement in the class that you are testing. The main purpose of the TDD unit tests is to test the business logic of the class, not the initialization stuff.
In other words, you give scenarios (remember to include edge cases too) as input and check the results, which can be the final values of the properties and/or the return values of the methods.
The reason you don't want to test every single possible code path in your classes is because should you ever decide to refactor your classes later on, you only need to make minimal changes to your TDD unit tests, as they are supposed to be agnostic to the actual implementation (as much as possible).
Note: There are other types of unit tests, such as code coverage tests, that are meant to test every single code path in your classes. However, I personally find these tests impractical, and they are certainly not encouraged in TDD.
So I am playing around with mocking frameworks (Moq) for my unit tests, and I was wondering: when should you use a mocking framework?
What is the benefit/disadvantage between the following two tests:
public class Tests
{
    [Fact]
    public void TestWithMock()
    {
        // Arrange
        var repo = new Mock<IRepository>();
        var p = new Mock<Person>();
        p.Setup(x => x.Id).Returns(1);
        p.Setup(x => x.Name).Returns("Joe Blow");
        p.Setup(x => x.AkaNames).Returns(new List<string> { "Joey", "Mugs" });
        p.Setup(x => x.AkaNames.Remove(It.IsAny<string>()));

        // Act
        var service = new Service(repo.Object);
        service.RemoveAkaName(p.Object, "Mugs");

        // Assert
        p.Verify(x => x.AkaNames.Remove("Mugs"), Times.Once());
    }

    [Fact]
    public void TestWithoutMock()
    {
        // Arrange
        var repo = new Mock<IRepository>();
        var p = new Person { Id = 1, Name = "Joe Blow", AkaNames = new List<string> { "Joey", "Mugs" } };

        // Act
        var service = new Service(repo.Object);
        service.RemoveAkaName(p, "Mugs");

        // Assert
        Assert.True(p.AkaNames.Count == 1);
        Assert.True(p.AkaNames[0] == "Joey");
    }
}
Use mock objects to truly create a unit test--a test where all dependencies are assumed to function correctly and all you want to know is if the SUT (system under test--a fancy way of saying the class you're testing) works.
The mock objects help to "guarantee" your dependencies function correctly because you create mock versions of those dependencies that produce results you configure. The question then becomes whether the one class you're testing behaves as it should when everything else is "working."
Mock objects are particularly critical when you are testing an object with a slow dependency--like a database or a web service. If you were to really hit the database or make the real web service call, your test will take a lot more time to run. That's tolerable when it's only a few extra seconds, but when you have hundreds of tests running in a continuous integration server, that adds up really fast and cripples your automation.
This is what really makes mock objects important--reducing the build-test-deploy cycle time. Making sure your tests run fast is critical to efficient software development.
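For instance, a sketch of replacing a slow dependency with a mock (Moq; the IExchangeRateService and CurrencyConverter here are invented for illustration):

[Fact]
public void Convert_UsesTheCurrentRate()
{
    // Canned answer instead of a real web service call,
    // so the test runs in microseconds rather than seconds.
    var rates = new Mock<IExchangeRateService>();
    rates.Setup(r => r.GetRate("USD", "EUR")).Returns(0.9m);

    var converter = new CurrencyConverter(rates.Object);

    Assert.Equal(90m, converter.Convert(100m, "USD", "EUR"));
}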
There are some rules I use when writing unit tests.
1) If my System Under Test (SUT) or Object Under Test has dependencies, then I mock them all.
2) If I test a method that returns a result, then I check the result only. If dependencies are passed as parameters of the method, they should be mocked. (see 1)
3) If I test a 'void' method, then verifying mocks is the best option for testing (see the sketch below).
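A sketch of rule 3, reusing the types from the question (the Save and Add calls are assumed):

[Fact]
public void Save_AddsThePersonToTheRepository()
{
    var repo = new Mock<IRepository>();            // rule 1: mock all dependencies
    var person = new Person { Id = 1, Name = "Joe Blow" };
    var service = new Service(repo.Object);

    service.Save(person);                          // a void method: no result to check...

    repo.Verify(r => r.Add(person), Times.Once()); // ...so verify the mock instead
}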
There is an old article by Martin Fowler, Mocks Aren't Stubs.
In the first test you use a mock, and in the second one you use a stub.
Also, I see some design issues that lead to your question.
If it is allowed to remove an AkaName from the AkaNames collection directly, then it is OK to use a stub and check the state of the person. If you add a specific method void RemoveAkaName(string name) to the Person class, then mocks should be used to verify its invocation, and the logic of RemoveAkaName should be tested as part of the Person class's tests.
For your code, I would use a stub for the person and a mock for the repository.
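For instance, if you did move the behaviour onto Person, its logic would get a plain state-based test of its own - a sketch:

public class Person
{
    public List<string> AkaNames { get; set; }

    public void RemoveAkaName(string name)
    {
        AkaNames.Remove(name);
        // any extra rules around removal would live here
    }
}

[Fact]
public void RemoveAkaName_RemovesTheGivenName()
{
    var person = new Person { AkaNames = new List<string> { "Joey", "Mugs" } };

    person.RemoveAkaName("Mugs");

    // State-based check: no mocks needed to test Person itself.
    Assert.Equal(new List<string> { "Joey" }, person.AkaNames);
}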
The mocking framework is used to remove dependencies, so the unit test focuses on the "unit" to be tested. In your case, the person looks like a simple entity class, so there is no need to use mocking for it.
Mocking has many benefits, especially in agile programming, where many quick release cycles mean that system structure and real data might be incomplete. In such cases you can mock a repository to mimic production code in order to continue work on the UI or services. This is usually complemented with an IoC mechanism like Ninject to simplify the switch to the real repositories. The two examples you give are equal, and without any other context I would say it's a matter of choice between them. The fluent API in Moq might be easier to read, as it's kind of self-documenting. That's my opinion though ;)
Mocks are used to test an object that cannot function in isolation. Suppose function A depends on function B; to perform unit testing on function A, we end up testing function B as well. By using a mock you can simulate the functionality of function B, and testing can be focused only on function A.
A mocking framework is useful for simulating the integration points in the code under test. I would argue that your concrete example is not a good candidate for a mocking framework as you can already inject the dependency (Person) directly into the code. Using a mocking framework actually complicates it in this case.
A much better use case is if you have a repository making a call to a database. It is desirable from a unit test perspective to mock the db call and return predetermined data instead. The key advantages of this are removing dependencies on existing data, but also performance, since the db call would slow the test down.
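A sketch of that use case, with assumed GetById/GetPerson methods on the repository and service from the question:

[Fact]
public void GetPerson_ReturnsThePersonFromTheRepository()
{
    // Predetermined data instead of a real database round-trip.
    var repo = new Mock<IRepository>();
    repo.Setup(r => r.GetById(1)).Returns(new Person { Id = 1, Name = "Joe Blow" });

    var service = new Service(repo.Object);

    var person = service.GetPerson(1);

    Assert.Equal("Joe Blow", person.Name);
}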
When should you use mocks? Almost never.
Mocks turn your tests into white-box tests, which are very labor intensive to write and extremely labor intensive to maintain. This may not be an issue if you are in the medical industry, aerospace industry, or other high-criticality software business, but if you are writing regular commercial grade software, your tests should be black-box tests. In other words, you should be testing against the interface, not against the implementation.
What to use instead of mocks?
Use fakes. Martin Fowler has an article explaining the difference here: https://martinfowler.com/bliki/TestDouble.html but to give you an example, an in-memory database can be used as a fake in place of a full-blown RDBMS. (Note how fakes are a lot less fake than mocks.)
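To make that concrete, here is a minimal hand-rolled fake (the IRepository shape is assumed); the production code talks to the same interface and never knows the difference:

// A fake: a genuinely working implementation that takes a shortcut -
// in-memory storage instead of a full-blown RDBMS.
public class InMemoryPersonRepository : IRepository
{
    private readonly Dictionary<int, Person> _store = new Dictionary<int, Person>();

    public void Add(Person person) { _store[person.Id] = person; }
    public Person GetById(int id) { return _store[id]; }
}

Because tests written against a fake only touch the interface, they remain black-box tests and survive refactoring of the implementation.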
I follow the naming convention of
MethodName_Condition_ExpectedBehaviour
when it comes to naming my unit-tests that test specific methods.
for example:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis(){...}
But when I need to rename the method under test, tools like ReSharper do not offer to rename those tests.
Is there a way to prevent such cases from appearing after renaming, like changing ReSharper settings or following a better unit-test naming convention?
A recent pattern is to group tests into inner classes by the method they test.
For example (omitting test attributes):
public class CityGetterTests
{
    public class GetCity
    {
        public void TakesParidId_ReturnsParis()
        {
            //...
        }

        // More GetCity tests
    }
}
See Structuring Unit Tests from Phil Haack's blog for details.
The neat thing about this layout is that, when the method name changes, you'll only have to change the name of the inner class instead of all the individual tests.
I also started with this convention, but ended up feeling that it is not very good. Now I use BDD-styled names like should_return_Paris_for_ParisID.
That makes my tests more readable and also allows me to refactor method names without worrying about my tests :)
I think the key here is what you should be testing.
You've mentioned TDD in the tags, so I hope that we're trying to adhere to that here. By that paradigm, the tests you're writing have two purposes:
To support your code once it is written, so you can refactor without fearing that you've broken something
To guide us to a better way of designing components - writing the test first really forces you to think about what is necessary for solving the problem at hand.
I know at first it looks like this question is about the first point, but really I think it's about the second. The problem you're having is that you've got concrete components you're testing instead of a contract.
In code terms, that means that I think we should be testing interfaces instead of class methods, because otherwise we expose our test to a variety of problems associated with testing components instead of contracts - inheritance strategies, object construction, and here, renaming.
It's true that interface names will change as well, but they'll be a lot more rigid than method names. What TDD gives us here isn't just a way to support change through a test harness - it provides the insight to realise we might be going about it the wrong way!
Take for example the code block you gave:
[TestMethod]
public void GetCity_TakesParidId_ReturnsParis()
{
    // some test logic here
}
And let's say we're testing the method GetCity() on our object, CityObtainer - when did I set this object up? Why have I done so? If I realise GetMatchingCity() is a better name, then you have the problem outlined above!
The solution I'm proposing is that we think about what this method really means earlier in the process, by use of interfaces:
public interface ICityObtainer
{
    City GetMatchingCity();
}
By writing in this "outside-in" style, we're forced to think about what we want from the object a lot earlier in the process, and making the contract the focus should reduce its volatility. This doesn't eliminate your problem, but it may mitigate it somewhat (and, I think, it's a better approach anyway).
Ideally, we go a step further, and we don't even write any code before starting the test:
[TestMethod]
public void GetCity_TakesParId_ReturnsParis()
{
    ICityObtainer cityObtainer = new CityObtainer();

    var result = cityObtainer.GetCity("paris");

    Assert.That(result.Name, Is.EqualTo("paris"));
}
This way, I can see what I really want from the component before I even start writing it - if GetCity() isn't really what I want, but rather GetCityByID(), it would become apparent a lot earlier in the process. As I said above, it isn't foolproof, but it might reduce the pain for this particular case a bit.
Once you've gone through that, I feel that if you're changing the name of the method, it's because you're changing the terms of the contract, and that means you should go back and reconsider the test (since it's possible you didn't want to change it).
(As a quick addendum, if we're writing a test with TDD in mind, then presumably there's a significant amount of logic going on inside GetCity(). Thinking about the test as being against a contract helps us separate the intention from the implementation - the test will stay valid no matter what we change behind the interface!)
I'm late, but maybe this can still be useful. Here's my solution (assuming you are using xUnit, at least).
First, create a FactFor attribute that extends the xUnit Fact attribute.
public class FactForAttribute : FactAttribute
{
    public FactForAttribute(string methodName = "Constructor", [CallerMemberName] string testMethodName = "")
        => DisplayName = $"{methodName}_{testMethodName}";
}
The trick now is to use the nameof operator to make refactoring possible. For example:
public class A
{
    public int Just2() => 2;
}

public class ATests
{
    [FactFor(nameof(A.Just2))]
    public void Should_Return2()
    {
        var a = new A();
        a.Just2().Should().Be(2);
    }
}
The result: the test's display name becomes Just2_Should_Return2, and thanks to the nameof reference, renaming Just2 updates the test name automatically.
In the past, I have only used Rhino Mocks, with the typical strict mock. I am now working with Moq on a project and I am wondering about the proper usage.
Let's assume that I have an object Foo with method Bar which calls a Bizz method on object Buzz.
In my test, I want to verify that Bizz is called, therefore I feel there are two possible options:
With a strict mock
var mockBuzz = new Mock<IBuzz>(MockBehavior.Strict);
mockBuzz.Setup(x => x.Bizz()); // test will fail if Bizz method not called
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.VerifyAll();
With a loose mock
var mockBuzz = new Mock<IBuzz>();
foo.Buzz = mockBuzz.Object;
foo.Bar();
mockBuzz.Verify(x => x.Bizz()); // test will fail if Bizz method not called
Is there a standard or normal way of doing this?
I used to use strict mocks when I first started using mocks in unit tests. This didn't last very long. There are really 2 reasons why I stopped doing this:
The tests become brittle - With strict mocks you are asserting more than one thing: that the setup methods are called, AND that the other methods are not called. When you refactor the code, the test often fails, even if what you are trying to test is still true.
The tests are harder to read - You need to have a setup for every method that is called on the mock, even if it's not really related to what you want to test. When someone reads this test it's difficult for them to tell what is important for the test and what is just a side effect of the implementation.
Because of these I would strongly recommend using loose mocks in your unit tests.
I have a background in C++/non-.NET development and I've been more into .NET recently, so I had certain expectations when I was using Moq for the first time. I was trying to understand WTF was going on with my test and why the code I was testing was throwing a random exception instead of the mock library telling me which function the code was trying to call. So I discovered I needed to turn on the Strict behaviour, which was perplexing - and then I came across this question, which I saw had no ticked answer yet.
The Loose mode, and the fact that it is the default, is insane. What on earth is the point of a mock library that does something completely unpredictable that you haven't explicitly told it to do?
I completely disagree with the points listed in the other answers in support of Loose mode. There is no good reason to use it and I wouldn't ever want to, ever. When writing a unit test I want to be certain what is going on - if I know a function needs to return a null, I'll make it return that. I want my tests to be brittle (in the ways that matter) so that I can fix them, and so that the setup lines I add to the test suite explicitly describe exactly what my software will do.
The question is - is there a standard and normal way of doing this?
Yes - from the point of view of programming in general, i.e. other languages and outside the .NET world, you should always use Strict. Goodness knows why it isn't the default in Moq.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to it.
Use loose mocks when the SUT applies business logic to the arguments passed to it and passes some derived/modified values on to the mocked layer.
For example:
Let's say we have a database provider, StudentDAL, which has two methods.
The data access interface looks something like this:
Student GetStudentById(int id);
IList<Student> GetStudents(int ageFilter, int classId);
The implementation which consumes this DAL looks like this:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected
    return StudentDAL.GetStudentById(id);
    // Use a strict mock to test this
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected
    // The age filter is derived from the request and then passed on to the underlying layer
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
    // Use a loose mock and Moq's Verify API to make sure the age filter is correctly passed on
}
Personally, being new to mocking and Moq, I feel that starting off with Strict mode helps you better understand the innards and what's going on. "Loose" sometimes hides details and passes a test which a Moq beginner may fail to see. Once you have your mocking skills down, Loose would probably be a lot more productive - like in this case, saving a line with the Setup and just using Verify instead.
My company is on a Unit Testing kick, and I'm having a little trouble with refactoring Service Layer code. Here is an example of some code I wrote:
public class InvoiceCalculator : IInvoiceCalculator
{
    public void CalculateInvoice(Invoice invoice)
    {
        foreach (InvoiceLine il in invoice.Lines)
        {
            UpdateLine(il);
        }
        //do a ton of other stuff here
    }

    private void UpdateLine(InvoiceLine line)
    {
        line.Amount = line.Qty * line.Rate;
        //do a bunch of other stuff, including calls to other private methods
    }
}
In this simplified case (it is reduced from a 1,000 line class that has 1 public method and ~30 private ones), my boss says I should be able to test my CalculateInvoice and UpdateLine separately (UpdateLine actually calls 3 other private methods, and performs database calls as well). But how would I do this? His suggested refactoring seemed a little convoluted to me:
//Tiny part of original code
public class InvoiceCalculator : IInvoiceCalculator
{
    private readonly ILineUpdater _lineUpdater;

    public InvoiceCalculator(ILineUpdater lineUpdater)
    {
        _lineUpdater = lineUpdater;
    }

    public void CalculateInvoice(Invoice invoice)
    {
        foreach (InvoiceLine il in invoice.Lines)
        {
            _lineUpdater.UpdateLine(il);
        }
        //do a ton of other stuff here
    }
}

public class LineUpdater : ILineUpdater
{
    public void UpdateLine(InvoiceLine line)
    {
        line.Amount = line.Qty * line.Rate;
        //do a bunch of other stuff
    }
}
I can see how the dependency is now broken, and I can test both pieces, but this would also create 20-30 extra classes from my original class. We only calculate invoices in one place, so these pieces wouldn't really be reusable. Is this the right way to go about making this change, or would you suggest I do something different?
Thank you!
Jess
This is an example of Feature Envy:
line.Amount = line.Qty * line.Rate;
It should probably look more like:
var amount = line.CalculateAmount();
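A sketch of where that leads, with the behaviour living on InvoiceLine itself and gaining a trivially testable home (assuming the fields are decimals):

public class InvoiceLine
{
    public decimal Qty { get; set; }
    public decimal Rate { get; set; }

    // The line computes its own amount; the calculator no longer
    // reaches into the line's data.
    public decimal CalculateAmount()
    {
        return Qty * Rate;
    }
}

[Test]
public void CalculateAmount_MultipliesQtyByRate()
{
    var line = new InvoiceLine { Qty = 2m, Rate = 10m };
    Assert.AreEqual(20m, line.CalculateAmount());
}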
There isn't anything wrong with lots of little classes; it's not about re-usability as much as it's about adaptability. When you have many single-responsibility classes, it's easier to see the behavior of your system and change it when your requirements change. Big classes have intertwined responsibilities which make them very difficult to change.
IMO this all depends on how 'significant' that UpdateLine() method really is. If it's just an implementation detail (e.g. it could easily be inlined inside the CalculateInvoice() method and the only thing that would hurt is readability), then you probably don't need to unit test it separately from the master class.
On the other hand, if the UpdateLine() method has some value to the business logic, and if you can imagine a situation where you would need to change this method independently from the rest of the class (and therefore test it separately), then you should go ahead with refactoring it into a separate LineUpdater class.
You probably won't end up with 20-30 classes this way, because most of those private methods are really just implementation details and do not deserve to be tested separately.
Well, your boss's way is more correct in terms of unit testing:
He is now able to test CalculateInvoice() without testing the UpdateLine() function. He can pass a mock object instead of a real LineUpdater object and test only CalculateInvoice(), not a whole bunch of code.
Is it right? It depends. Your boss wants to write real unit tests. And the testing in your first example would not be unit testing, it would be integration testing.
What are the advantages of unit tests over integration tests?
1) Unit tests allow you to test only one method or property, without it being affected by other methods, the database and so on.
2) Unit tests execute faster (for example, you said UpdateLine uses the database) because they don't test all the nested methods. Nested methods can be database calls, so if you have thousands of tests, your test run can get slow (several minutes).
3) If your methods make database calls, then sometimes you need to set up the database (fill it with the data necessary for the test), and that can be non-trivial - maybe you will have to write a couple of pages of code just to prepare the database for a test. With unit tests, you separate database calls from the methods being tested (using mock objects).
But! I am not saying that unit tests are better. They are just different. As I said, unit tests allow you to test a unit in isolation and quickly. Integration tests are easier and allow you to test the results of the joint work of different methods and layers. Honestly, I prefer integration tests more :)
Also, I have a couple of suggestions for you:
1) I don't think having the Amount field is a good idea. The Amount field seems redundant, because its value can be calculated from the 2 other public fields. If you want to keep it anyway, I would make it a read-only property which returns Qty * Rate (see the sketch below).
2) Usually, a class which consists of 1,000 lines may mean that it's badly designed and should be refactored.
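A sketch of suggestion 1 (again assuming decimal fields):

public class InvoiceLine
{
    public decimal Qty { get; set; }
    public decimal Rate { get; set; }

    // Derived value: always consistent with Qty and Rate, nothing to keep in sync.
    public decimal Amount
    {
        get { return Qty * Rate; }
    }
}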
Now, I hope you understand the situation better and can decide. Also, if you understand the situation, you can talk to your boss and decide together.
Yeah, nice one. I'm not sure whether the InvoiceLine object also has some logic included; if so, you would probably need an IInvoiceLine as well.
I sometimes have the same questions. On one hand you want to do things right and unit test your code, but when database calls and maybe file writing are involved, it causes a lot of extra work to set up the first test: all the test objects which step in when file writing and database I/O is about to happen, the interfaces, the asserts - and you also want to test that the data layer doesn't contain any errors. So a test which is more 'process' than 'unit' is often easier to build.
If you have a project that will be changed a lot in the future, with lots of dependencies on this code (maybe other programs read the file or the database data), it can be nice to have a solid unit test for all parts of your code, and the investment of time is worthwhile.
But if the project is, like my latest client says, 'let's get it live and maybe we'll tweak a bit next year, and next year there will be something new', then I wouldn't be so hard on myself to get all the unit tests up and running.
Michel
Your boss' example looks reasonable to me.
Some of the key considerations I try to keep in mind when designing for any scenario are:
Single Responsibility Principle
A class should only change for one reason.
Does each new class justify its existence?
Have classes been created just for the sake of it, or do they encapsulate a meaningful portion of logic?
Are you able to test each piece of code in isolation?
In your scenario, just looking at the names, it would appear that you were wandering away from Single Responsibility - You have an IInvoiceCalculator, yet this class is then also responsible for updating InvoiceLines. Not only have you made it very difficult to test the update behaviour, but you now need to change your InvoiceCalculator class both when calculation business rules change and when the rules around updating change.
Then there is the question about the updating logic - does the logic justify a separate class? That really depends, and it is hard to say without seeing the code, but certainly the fact that your boss wants that logic under test would suggest that it is more than a simple one-liner call off to a data layer.
You say that this refactoring creates a great number of extra classes (I'm taking it that you mean across all your business entities, since I only see a couple of new classes and their interfaces in your example), but you have to consider what you get from this. It looks like you gain full testability of your code, the ability to introduce new calculation and new update logic in isolation, and a clearer encapsulation of what are separate pieces of business logic.
The gains above are of course subject to a cost-benefit analysis, but since your boss is asking for them, it sounds like he is happy that they will pay off against the extra work to implement the code this way.
The final, third point about testing in isolation is also a key benefit of the way your boss has designed this - the closer your public methods are to the code that does the actual work, the easier it is to inject stubs or mocks for parts of your system that are not under test. For example, if you are testing an update method that calls off to a data layer, you do not want to test the data layer, so you would usually inject a mock. If you had to pass that mocked data layer through all your calculator logic first, your test setup would be far more complicated, since the mock would now need to meet many other potential requirements not related to the actual test in question.
While this approach is extra work initially, I'd say that the majority of the work is the time to think through the design; after that, and after you get up to speed on the more injection-based style of code, the raw implementation time of software that is structured in this way is actually comparable.
Your boss's approach is a great example of dependency injection, and of how doing so allows you to use a mock ILineUpdater to conduct your tests efficiently.