I'm writing some test code for testing an ASP.NET MVC web application that uses Castle Windsor for DI, Domain-Driven Design (application/domain services, repositories, domain model), NHibernate, and (most likely) Moq for mocking. The possibilities of what can be tested are endless, as basically everything can be tested.
Some possibilities, for example:
Ensure the Castle Windsor configuration will work (test some conventions; see the sketch after this list)
Business logic (Inside an Entity, or Domain Service)
Other things can be tested as well, such as controller actions.
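For the first item in the list, a convention test might look roughly like this. This is a minimal sketch: WebInstaller is a hypothetical installer holding the real registrations, and the exact Windsor API differs slightly between versions.

using System.Linq;
using Castle.Windsor;
using NUnit.Framework;

[TestFixture]
public class WindsorConventionTests
{
    [Test]
    public void All_registered_components_can_be_resolved()
    {
        // WebInstaller is a stand-in for the application's real installer.
        var container = new WindsorContainer().Install(new WebInstaller());

        // Resolving every registered service surfaces missing or
        // misconfigured dependencies before they bite at runtime.
        foreach (var handler in container.Kernel.GetAssignableHandlers(typeof(object)))
        {
            container.Resolve(handler.ComponentModel.Services.First());
        }
    }
}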
There are quite a few things across the many layers (controllers, services, repositories) that hardly seem worth the effort to test, as they are generally quite simple.
With a smaller application it isn't yet clear what would benefit most, but the application will grow, and the same patterns will be used in more complex applications.
For those with similar applications, what are you unit testing?
Domain models and application services are first-class citizens for unit tests if you don't have enough time or are new to writing tests. These tests cover the most important parts: application services for flow control, and domain models for business rules. When I started learning to write tests (I didn't know TDD at the time), they were the only parts I tested.
Once you adopt TDD, everything can be tested. You'll also need integration tests covering persistence, messaging, and other integration points (mostly for testing configurations).
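As a tiny illustration of the kind of domain-model test meant here (the Order entity and its discount rule are invented for the example):

using NUnit.Framework;

// Hypothetical entity with one business rule: orders over 100 get a 10% discount.
public class Order
{
    private readonly decimal _total;

    public Order(decimal total)
    {
        _total = total;
    }

    public decimal TotalWithDiscount()
    {
        return _total > 100m ? _total * 0.9m : _total;
    }
}

[TestFixture]
public class OrderTests
{
    [Test]
    public void Applies_discount_above_the_threshold()
    {
        Assert.AreEqual(135m, new Order(150m).TotalWithDiscount());
    }

    [Test]
    public void Applies_no_discount_at_or_below_the_threshold()
    {
        Assert.AreEqual(100m, new Order(100m).TotalWithDiscount());
    }
}

Such tests need no mocking and no infrastructure, which is what makes domain models the cheapest place to start.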
The best answer to the "what to unit test" question is... everything, according to TDD :)
However, ensuring that properly configured components will work well together is rather a part of Integration Testing or Smoke Testing.
The suggested scenario in TDD is to write tests in parallel with the code, so both grow simultaneously. Your problem probably lies in the fact that you already have the code but don't have the unit tests. In that situation there are two possibilities:
1) Your components are well separated. Then you can write unit tests for the public interface of each component, aiming for high coverage (measured with a coverage tool).
2) Your components are tangled together. Then I'd suggest writing as many integration tests as you can, also aiming for high coverage. This will be harder than in case 1), so it's best to test the most typical and crucial scenarios first, then refactor the code to loosen the coupling, and then proceed with step 1).
I'm trying to find unit-testing-friendly best practices in .NET development.
I'm assuming it will be painful to implement unit testing for an already-developed project that wasn't planned for unit testing when it was built.
What do we need to keep in mind when we develop Web APIs or web applications?
What is the best unit-testing-friendly architecture?
What are the principles we need to follow? E.g. interface segregation from the SOLID principles.
I'm assuming it will be painful to implement unit testing for an already-developed project that wasn't planned for unit testing when it was built.
Indeed, an existing project with no (or few) tests is not the best motivator for writing new tests.
In that case, I would recommend considering all those never-written tests as tech debt, and starting to write tests for new modules only. With time, you will probably find yourself covering old modules as part of fixing bugs (and bugs are likely to be found in the old modules because, well, they have never been covered by tests).
Secondly, the process of writing tests in a project with lots of untested code can itself drive necessary design changes. Unit tests not only help us find bugs at development time and provide regression cover; they also promote better design in our systems.
What do we need to keep in mind when we develop Web APIs or web applications?
Big question. There are many aspects to a good app and lots of documentation on the web to learn from. I can give you one specific view in the context of unit tests: use interfaces, and follow the dependency-injection and dependency-inversion principles. This makes it easy to write tests, which in turn improves the quality of your app.
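A minimal sketch of what that looks like in practice (the interface and class names here are made up for illustration):

// Depend on an abstraction instead of a concrete mailer.
public interface IEmailSender
{
    void Send(string to, string subject, string body);
}

public class RegistrationService
{
    private readonly IEmailSender _emailSender;

    // The dependency is injected, so a test can pass in a fake or a mock.
    public RegistrationService(IEmailSender emailSender)
    {
        _emailSender = emailSender;
    }

    public void Register(string email)
    {
        // ... persist the registration ...
        _emailSender.Send(email, "Welcome", "Thanks for registering!");
    }
}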
What is the best unit-testing-friendly architecture?
I'm not sure 'architecture' is the accurate term. There are, however, several approaches to writing tests, such as TDD and BDD, but keep in mind that writing tests with, for example, the TDD approach may involve a learning curve for developers.
Use one of the many frameworks available for writing unit tests, and start writing tests for methods. Check outputs against inputs and mock objects, and learn how to pick the best sample data as inputs for your tests.
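For example, with NUnit's parameterized tests you can feed in boundary and typical values and check each output (the Rating class is invented for the example):

using NUnit.Framework;

// Hypothetical unit under test: rounds a score to whole stars, capped at 5.
public static class Rating
{
    public static int ToStars(double score)
    {
        return score >= 5.0 ? 5 : (int)System.Math.Round(score);
    }
}

[TestFixture]
public class RatingTests
{
    // Boundary values and a typical value make good sample inputs.
    [TestCase(0.0, 0)]
    [TestCase(2.4, 2)]
    [TestCase(4.6, 5)]
    [TestCase(7.0, 5)]
    public void Score_maps_to_expected_stars(double score, int expected)
    {
        Assert.AreEqual(expected, Rating.ToStars(score));
    }
}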
What are the principles we need to follow? E.g. interface segregation from the SOLID principles.
Again, big question. In short: SOLID.
My question is: where is an easy place in the codebase to begin writing unit tests, so I can get my feet wet and develop some basic skills?
The details:
I am a recently hired junior dev and I have been tasked with adding unit testing to my company's projects. I've installed NUnit, watched tutorials, done the basic samples, and am reading through the official docs. I can see the value of automated unit testing, plus think it's a good way to get familiar with my company's source code.
We have two online platforms: a "brochure" site to attract clients, and our "database" site. Both are written in JavaScript, jQuery, C#, and .NET, connecting to MS SQL via Entity Framework.
The "brochure" site doesn't have much user functionality built in. Visitors click links to get to pages of information, and there is a contact-request form where they submit basic info so we can contact them. A very simple website.
The "database" site is more complex. Registered users can grab any combination of data from different databases, there are access rights tied to the various levels of registration, and there is lots of filtering based on users' selections.
There aren't currently any tests written for either, but both are in production. Where would be a good place to begin?
I think I should begin with the simple "brochure" site to learn on, but I see testing as more important in the long term for the "database" site.
Should I try and test controllers, ajax get/post functions, views?
I think what I'm hoping for is for someone to say, "These 1, 2, 3 are simple tests to write; start there." If I'm wrong in this thinking, please let me know. Thank you.
If the applications currently have no unit tests then it's likely that they aren't immediately "testable." In other words, they may not be built out of smaller classes and methods that can be unit tested. The two go hand-in-hand. People who write unit tests try to write testable code.
Chances are that you may need to refactor a little bit in order to be able to write unit tests. One approach is to look for "low hanging fruit" - areas where that refactoring is easier.
For example, are there large blocks of duplicate code? You could write a class that provides some of that functionality and replace those methods with references to that class. Now you can unit test that class.
Or, does a class have a ton of private methods that all do things seemingly unrelated to the class's primary purpose? There's no way to unit test those private methods, but you could break them out into their own classes.
For the longest time I thought that refactoring meant redesigning the code to use dependency injection or rewriting classes to depend on abstractions, and that made it seem overwhelming. I recently read something that changed my outlook on that: an article titled "Invading Legacy Code in the Name of Testability."
The gist is this (or at least what I remember): suppose we have a class that does its work in a bunch of private methods, and that class is untestable. We could move one of those methods into its own class, refactor it to make it testable, and then write unit tests for it.
Then, in our original class, we change
DoSomethingInAPrivateMethod();
to
var doesSomething = new DoesSomething();
doesSomething.DoSomething();
Now we can write unit tests for our DoesSomething class.
Normally we might try to avoid creating new classes like that. We're still not applying the principles of OOP. But in this case, what we've done is taken a small piece of code that couldn't be tested and made it testable. It's not ideal. It's not the way we'd like to write our code. But it's better than it was. At least now something has unit tests.
You might get to move 20 lines of code out of one class and put it in another. At least now that original class is a tiny bit more manageable. Maybe after doing that the original class will get to the point where you can unit test it, too.
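To make that concrete, here's a sketch of the extracted class with its first test. The names follow the snippet above, and the body is a stand-in for whatever code you actually moved out:

using NUnit.Framework;

// The logic that used to hide in a private method, now on its own.
public class DoesSomething
{
    public string DoSomething(string input)
    {
        // Stand-in body so the sketch has something to assert on.
        return input.Trim().ToUpperInvariant();
    }
}

[TestFixture]
public class DoesSomethingTests
{
    [Test]
    public void Trims_and_upper_cases_its_input()
    {
        Assert.AreEqual("HELLO", new DoesSomething().DoSomething("  hello "));
    }
}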
First of all: there are different types of tests to think about.
Unit tests in general are used to test small units (functions, for example).
So writing tests for every more complex function (no need to test getters, setters, or similar) in a class/module could be a good place to start. Calls to a database or other external services (e.g. an API) should be mocked.
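Here's a sketch of what that mocking can look like with NUnit and a mocking library such as Moq (the repository interface and service are invented for the example):

using Moq;
using NUnit.Framework;

public interface ICustomerRepository
{
    string GetName(int id);
}

// Unit under test: the database access hides behind the interface.
public class GreetingService
{
    private readonly ICustomerRepository _repository;

    public GreetingService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public string GreetingFor(int customerId)
    {
        return "Hello, " + _repository.GetName(customerId) + "!";
    }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void Builds_greeting_from_repository_data()
    {
        // The database call is mocked, so the test stays fast and isolated.
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetName(42)).Returns("Ada");

        var service = new GreetingService(repository.Object);

        Assert.AreEqual("Hello, Ada!", service.GreetingFor(42));
    }
}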
You can create a "tests" folder/subproject in the root folder of your project. The structure of this test folder should mirror the source tree of your code project, to make it easy to find the test class/file for a specific class.
Later you can (and should) think about integration tests. This is where you can connect to your database or other external services.
I am looking at different options for persistence modelling on Windows Phone using Isolated Storage. One of the ideas I have come up with is the concept of each object handling its own persistence (where it makes sense, of course), rather than having a repository or other such entity for the purpose of saving objects.
I can't seem to find any good information on this method of persistence, which leads me to believe I may have stumbled onto an anti-pattern of sorts.
Has anyone approached persistence in this manner? If so, what are your arguments for or against this approach?
There are several undeniable truths in software development:
A prototype becomes a product before you know it.
An app targeted "just for platform-x" will soon be ported to platform-y.
The data-store will change. Probably as a result of #2.
There are more ( :) ), but these are enough to answer your question:
Go with a repository so your objects can be tested and know nothing about persistence, and so you can swap out data stores (even go over the wire!). Might as well plan for that up-front.
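A minimal sketch of that shape (all names invented for illustration):

// The domain object knows nothing about persistence.
public class Note
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public interface INoteRepository
{
    void Save(Note note);
    Note Load(string id);
}

// A trivial in-memory implementation; an Isolated Storage or web-service
// implementation can be swapped in later without touching Note or its callers.
public class InMemoryNoteRepository : INoteRepository
{
    private readonly System.Collections.Generic.Dictionary<string, Note> _store =
        new System.Collections.Generic.Dictionary<string, Note>();

    public void Save(Note note)
    {
        _store[note.Id] = note;
    }

    public Note Load(string id)
    {
        return _store[id];
    }
}

The in-memory implementation doubles as a test stand-in, which is exactly the testability win the repository buys you.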
Sounds like you're talking about the Active Record pattern? It works for some folks, but there are criticisms of it (mostly from a testability / separation-of-concerns standpoint).
The biggest issue is that you end up with persistence logic spread out across all your classes. That can quickly lead to bloat, and it also embeds assumptions about your persistence technology all over your codebase. That gets messy if you need to change where or how you store your objects.
Those assumptions also make automated testing more difficult, because now you have a persistence-layer dependency to work around. You could inject a repository into the object to counteract some of this, but then you're implementing a repository anyway. :) Better to just keep the core classes entirely persistence-ignorant if you can...
On the plus side, it's a simpler pattern for people to grasp and a quick way to get things done on a lightweight project. If the number of classes is small, it could be the quickest way to get from A to B. I still find myself building out separate repositories on small projects, however; I just can't stand having persistence mixed in with my business logic.
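For reference, the Active Record shape being discussed looks roughly like this (a sketch only; the class and the persistence call are placeholders):

// Active Record style: the entity carries its own persistence.
public class Invoice
{
    public int Id { get; set; }
    public decimal Amount { get; set; }

    public void Save()
    {
        // Persistence logic (Isolated Storage, database, etc.) lives right
        // here inside the domain class - the coupling criticized above.
    }
}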
Cons:
Violates Single Responsibility Principle (SRP)
Hampers testability
Tightly couples your business logic to your database
Pros:
Is simple to implement
Basically, if your data model is flat and simple, and your application requirements are modest, Active Record might be a good choice; however, it starts to break down when your mapping requirements get a bit more complex. More robust ORM patterns like Data Mapper become appropriate in cases like these.
Pros
simplicity
Cons
breaks separation of concerns
tight coupling of business logic with database
makes testing much more difficult
This pretty much boils down to testing becoming much harder, and decreasing the time before you have to do a major refactor in your project.
At the end of the day you need to weigh your goals and concerns for the project and decide whether the loss of testing/verifiability/cleanliness is worth the gain of a simpler system.
If it's a simple application, you're probably fine dropping the DAL and going for the simpler model. If your application has lots of moving parts and considerable complexity, though, I would avoid removing the DAL, as you will want to be able to test and verify your code well.
It flies in the face of using a Data Access Layer...not that there's anything wrong with that.
I just attended a presentation where a testing company stated that V-model testing can be used by an Agile development team developing in Java and C#.
I would prefer test-driven development and automated acceptance testing over V-model testing.
Not sure if V-model testing and Agile testing can be considered the same.
Looking for your testing experience or opinions about using V-model testing in Agile teams.
If you are using V-model testing in an Agile team, how do you do it (or does it not make sense)?
Update: ThoughtWorks presentation (Agile vs. V-model)
The V-Model is widely used at my company. I must add that, IMHO, there are better development models out there, but the V-Model can still be used effectively when developing large-scale systems where you are NOT using iterative development.
Still, I believe test-driven development can be applied within the V-Model, as part of the unit testing phase and even the integration testing phase, as long as you can automate it as part of the development cycle.
The V-Model, however, treats system testing as something that occurs after development of the product is considered complete, so test-driven development doesn't apply there. Sure, you can automate it through tools, scripts, or programs, but you are no longer developing your code. In system tests you no longer care about the code, only about the specifications - precisely because your unit tests could be incomplete.
Finally, user acceptance in the V-Model shouldn't be completely automated, because it's when the final user looks at the system and decides whether it adheres to the requirements. Of course the user will have a script in hand to know what he or she should be testing, and in the case of, say, batch systems there will be a supply of data, but in no way should a script determine the success of this phase.
But let's get back to the question. What I just said is that TDD and automation can be used to implement the testing phases of the V-Model. So, if you can use V-Model testing with Agile development, as the presentation you saw affirmed, then you can also use TDD and automation techniques.
However, I'm not sure you would want to. I don't know how one could apply the V-Model to Agile, or whether it would be coherent, since the V-Model is not agile.
Test-driven development is about specification, not testing. That is not antagonistic to a V approach.
On the other hand, the V-model implies a single long cycle of development. That is antagonistic to an agile approach.
V-model testing doesn't really fit the ethos of agile development. So, in short, while it is feasible, it would compromise the nature of the agile process.
One of the important features of agile is the ability to adapt to change. The V-model doesn't support that well.
I'm involved in a project where we are using a Continuous Integration server and NUnit for unit testing and integration testing.
A customer asked us the other day whether we were writing the tests before the code... Well, we are not always doing it that way, especially when there are complex technology issues that we want to explore first, to understand the problem and the possible solution.
I would like to know whether we can still consider our development process Agile, and say so to customers without lying.
I think you are mixing things up here.
Test-Driven Development (TDD) does not necessarily mean you are using an agile approach. Surely it is a best practice that many of us who do agile use, but TDD can also be used in a Waterfall process, replacing or supplementing the specification.
Continuous Integration on its own means having the code your team produces integrated on at least a daily basis. This not only forces every member of the team to merge and check in continuously, but also ensures you can actually do a release of every build. The unified build process forces you to overcome the "works on my machine" syndrome. Because you could do a release every day, this supports an agile process, even though it is not strictly necessary for one.
Using tests and integrating them into the build process is a way to enrich your build process with automated quality assurance and to deepen the level at which integration (integrity) is actually tested.
As long as you are developing in small iterations, focusing on a working product rather than extensive documentation, and keeping the customer continuously involved in the project, it is agile development. Unit testing, TDD, and integration testing are of course good and very advisable practices, but they do not decide whether your project is agile or not.
In the absence of automated tests, CI only verifies that the code under source control is maintained in a compilable state between revisions and that the single-step build works properly. While this is useful, it isn't as useful as automatically verifying that the correctness of the code has been maintained between revisions.
That said, I'd rather have some verification of code between check-ins than none. I'd rather have partial code coverage or an incomplete set of functional tests than nothing. Or worse.