Function testing tool for C, C++, C# - C#

Is there any testing tool for C, C++, or C# (other than a debugger) that works by copying and pasting an independent function into a text box and entering its parameters into other text boxes?

It sounds like you are thinking of unit testing. I recommend Google Test and Google Mock: a simple, powerful pair of tools (and free!).
http://code.google.com/p/googletest/
http://code.google.com/p/googlemock/
Both come with clear examples and very readable documentation.
Once you have unit tests, refactoring becomes much easier: if a change breaks something, a test will fail and tell you. There are plenty of alternative frameworks too, such as Boost.Test.
After a few weeks with unit tests you won't be able to live without them.
Programming becomes much less stressful.

It sounds like you want LINQPad, which allows you to quickly and easily execute arbitrary C# code.

Related

Generate C# test method

I was just wondering: given an input file (Excel, XML, etc.), can we generate unit-test code in C#? Consider, for example, that I need to validate a database. In the input Excel file I would specify which attributes to set, which to retrieve, the expected values, and so on. I can also provide the queries to run. Given all of these inputs, can I create a unit test method in C# through some tool, script, or another program? Sorry if this sounds dumb. Thank you for the help.
A unit test should verify that your software works correctly/as expected, not that your data is correct. To be precise, you should test the software that imported the data into your database. Once the data is already in the database you can write a validation script or something similar, but that has nothing to do with a unit test (although the script itself can of course be tested for working correctly).
You should, however, test whether the queries your software runs against the database are correct and whether they work as expected, with both arbitrary and real-world data.
Even when code generation is involved, you do not want to check whether the process of generating the source code works correctly (at least not unless you wrote the code generator yourself). Simply assume the generator works as expected and concentrate on the parts you can handle yourself.
I had a similar question some time back, though not in the context of unit tests. Code that can be generated from another file or database table is called boilerplate code.
So if you ask whether this can be done, the answer is yes. But if you wonder whether it should be done, the answer is no. Unit tests do not make good boilerplate code: they are mutable. On catching an edge case that you did not consider earlier, you may have to add a few more tests.
Also, unit tests are often used not just to test the code but to drive its development. This method is known as Test-Driven Development (abbreviated TDD). It would be a mess to "drive" your development from boilerplate tests.

How to write automated tests for SQL queries?

The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins/sub-queries. I am not experienced enough to say whether this is the right approach, so I am going along with it, as it has clear benefits.
The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time-consuming and stressful.
I would like to know how we could create an automated procedure (ideally without writing it in code, as I can work out how to do that the long way) to run a set of 10+ different inputs, verify the output data, and report whether the calculations are correct.
I know I could seed the database with specific data, write a script in C# (the DB is SQL Server), and verify all the values coming out, but I would like to know what the accepted "standard" approach is, as my experience in this area is lacking and I would like to improve.
I am happy to add more information if required, add a comment if necessary. Thank you.
Edit: I am using C#
The standard approach to testing code that runs SQL queries is to unit-test it. (There are higher-level kinds of testing than unit testing, but it sounds like your problem is with a small, specific part of your application so don't worry about higher-level testing yet.) Don't try to test the queries directly, but test the result of the queries. That is, write unit tests for each of the C# methods that runs a query. Each unit test should insert known data into the database, call the method, and assert that it returns the expected result.
The two most common approaches to unit testing in C# are the Visual Studio unit test tools and NUnit. How to write unit tests is a big topic; Roy Osherove's "The Art of Unit Testing" is a good place to get started.
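As a minimal sketch of that pattern with NUnit (the OrderRepository class, its GetTotalRevenue method, the Orders table, and the connection string are all assumptions for illustration, not part of the question):

    using NUnit.Framework;
    using System.Data.SqlClient;

    [TestFixture]
    public class OrderQueryTests
    {
        // Hypothetical connection string pointing at a dedicated test database
        private const string ConnStr = "Server=(local);Database=TestDb;Integrated Security=true";

        [SetUp]
        public void InsertKnownData()
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "DELETE FROM Orders; " +
                    "INSERT INTO Orders (Id, Total) VALUES (1, 10.0), (2, 20.0);", conn);
                cmd.ExecuteNonQuery();
            }
        }

        [Test]
        public void GetTotalRevenue_ReturnsSumOfAllOrders()
        {
            var repository = new OrderRepository(ConnStr); // hypothetical class under test

            decimal revenue = repository.GetTotalRevenue(); // the method that runs the query

            Assert.AreEqual(30.0m, revenue); // assert against the known data inserted above
        }
    }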
The other answer to this question, while generally correct for testing code, does not address the issue of testing your database at all.
It sounds like you're after database unit tests. The idea is that you create a temporary, isolated database environment with your desired schema and test data, then you validate that your queries are returning appropriate data.
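One common way to get that isolation is to wrap each test in a transaction that is rolled back afterwards, so the shared database is left untouched. A sketch, assuming SQL Server and NUnit (the fixture shape is invented; any SqlConnection opened inside the scope enlists in the ambient transaction automatically):

    using NUnit.Framework;
    using System.Transactions;

    [TestFixture]
    public class IsolatedDatabaseTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void BeginTransaction()
        {
            // Everything the test does to the database happens inside this scope
            _scope = new TransactionScope();
        }

        [TearDown]
        public void RollBack()
        {
            // Disposing without calling Complete() rolls the transaction back,
            // so test data never leaks into the shared database
            _scope.Dispose();
        }

        [Test]
        public void QueryUnderTest_BehavesAsExpected()
        {
            // insert test data, run the query, and assert on the result here
        }
    }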

Programmatically create MSTest unit tests

I am looking for a way to programmatically create unit tests using MSTest. I would like to loop through a series of configuration data and create tests dynamically based on that information. The configuration data will not be available at compile time and may come from an external data source such as a database or an XML file. Scenario: load configuration data into a test harness and loop through the data, creating a new test for each element. I would like each dynamically created test to be reported on (success/fail) separately.
You can use data-driven testing, depending on how complex your data is. If you are just substituting values and testing to make sure that your code can handle the same inputs, that might be the way to go, but this doesn't really sound like what you are after. (You could make this more complex; after all, all you are doing is pulling values from a data source and then making programmatic decisions based on them.)
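For reference, a rough sketch of what MSTest data-driven testing looks like; the data file, its column names, and the Calculator class are hypothetical:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DataDrivenTests
    {
        // MSTest injects the current data row through this property
        public TestContext TestContext { get; set; }

        [TestMethod]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
                    "|DataDirectory|\\TestData.xml", // hypothetical data file
                    "Row",                           // XML element treated as one row
                    DataAccessMethod.Sequential)]
        public void Add_ReturnsExpectedSum()
        {
            int a = int.Parse((string)TestContext.DataRow["A"]);
            int b = int.Parse((string)TestContext.DataRow["B"]);
            int expected = int.Parse((string)TestContext.DataRow["Expected"]);

            Assert.AreEqual(expected, Calculator.Add(a, b)); // Calculator is hypothetical
        }
    }

Note that MSTest reports the rows as iterations of a single test rather than as separate tests, which is part of why this may not be what you are after.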
All MSTest really does is run a series of tests and then produce the results (in an XML file), which is then interpreted by the calling application. It's just a wrapper for executing methods that you designate through attributes.
What it sounds like you're asking is to write C# code dynamically, and have it execute in the harness.
If you really want to run this through MSTest you could (a sketch follows this list):
Build a method (or series of methods) which reads the XML file
Write out the C# code (I would maybe look at T4 templates for this; personally, I would use F# to do it, but I'm more partial to functional languages, and it would be easier for me)
Call csc.exe (the C# compiler)
Invoke MSTest
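A very rough sketch of the first three steps using CodeDom rather than shelling out to csc.exe directly (the XML layout, the shape of the generated tests, and the Calculator class are all invented for illustration):

    using System.CodeDom.Compiler;
    using System.Text;
    using System.Xml.Linq;
    using Microsoft.CSharp;

    class TestGenerator
    {
        static void Main()
        {
            // Hypothetical input: <Tests><Test name="AddsSmallNumbers" input="2, 3" expected="5"/></Tests>
            var doc = XDocument.Load("tests.xml");

            var source = new StringBuilder();
            source.AppendLine("using Microsoft.VisualStudio.TestTools.UnitTesting;");
            source.AppendLine("[TestClass] public class GeneratedTests {");
            foreach (var test in doc.Descendants("Test"))
            {
                // Emit one [TestMethod] per XML element
                source.AppendLine(string.Format(
                    "[TestMethod] public void {0}() {{ Assert.AreEqual({1}, Calculator.Add({2})); }}",
                    test.Attribute("name").Value,
                    test.Attribute("expected").Value,
                    test.Attribute("input").Value));
            }
            source.AppendLine("}");

            // Compile the generated source into a test assembly
            var provider = new CSharpCodeProvider();
            var options = new CompilerParameters { OutputAssembly = "GeneratedTests.dll" };
            options.ReferencedAssemblies.Add("Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll");
            provider.CompileAssemblyFromSource(options, source.ToString());

            // The final step would then be: mstest /testcontainer:GeneratedTests.dll
        }
    }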
You could also write MSIL code into the running application directly and try to get MSTest to execute it, which for some might be fun, but it could be time-consuming and is not guaranteed to work (I haven't tried it, so I don't know what the pitfalls would be).
Based on this, it might be easier to quickly build your own harness which interprets your XML file, dynamically builds out your test scenarios, and produces the same results file. (After all, the results are what's important, not how you got there.) Since you said the data won't be available at compile time, I would guess that you aren't interested in viewing the results in the Visual Studio window.
Personally, though, I wouldn't use XML as your domain-specific language (DSL). Parsing it is easy, because .NET already does that for you, but it is limiting in how it can describe what your methods should do. It's meant for conveying data, and although code is technically a form of data, XML lacks the expressive strength of a more formal language. This is just my personal opinion, and there are many ways to skin a cat.

How can I automatically generate unit tests in MonoDevelop/.NET?

I am looking to automatically generate unit tests in MonoDevelop/.NET.
I've tried NUnit, but it doesn't generate the tests. In Eclipse, the Randoop plug-in does this, but it targets Java and JUnit.
How can I automatically generate unit tests in MonoDevelop and/or for .Net? Or perhaps there is an existing tool out there I am unaware of...
Calling methods with different (random) input is just one part of the process. You also need to define the correct result for each input, and I don't think a tool can do that for you.
Randoop only seems to check a few very basic properties of equals, which is not of great use IMO and may also give a false impression of correctness ("Hey look, all tests pass, my software is OK"...).
Also, randomly generating code (and input) carries the risk of non-deterministic test results: you might or might not get tests that really find flaws in your code.
That said, a quick Google search turned up the following starting points for approaches you might want to take:
You might be interested in test case generators (this CodeProject article describes a very simple one). They support you in generating the "boilerplate" code and can make sure you don't miss any classes/methods you want to test. Of course, the generated tests still need to be adapted by defining proper (i.e. meaningful) input and (correct) output values. Googling for "NUnit test generators" will give you other links, including commercial software, which I don't want to repeat here...
NUnit (and other testing frameworks) supports parameterized tests, which can be used to test a whole class of input scenarios. For NUnit, I found the Random attribute, which lets you generate random input (in a certain range) for your methods. Remember what I wrote above about random test inputs: the results of these tests will not be reproducible, which makes them useless for automated or regression testing.
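A minimal sketch of both styles in NUnit; the Calculator class under test is an assumption:

    using NUnit.Framework;

    [TestFixture]
    public class ParameterizedTests
    {
        // Explicit cases: reproducible, suitable for regression testing
        [TestCase(2, 3, 5)]
        [TestCase(-1, 1, 0)]
        public void Add_ReturnsSum(int a, int b, int expected)
        {
            Assert.AreEqual(expected, Calculator.Add(a, b)); // Calculator is hypothetical
        }

        // Random input in a range: note the caveat above about non-reproducible results
        [Test]
        public void Add_IsCommutative([Random(-100, 100, 5)] int a,
                                      [Random(-100, 100, 5)] int b)
        {
            Assert.AreEqual(Calculator.Add(a, b), Calculator.Add(b, a));
        }
    }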
That said, also look at this question (and certainly others on SO), which may support my argument against automatic unit test generation.

Is it OK to copy & paste unit-tests when the logic is basically the same?

I currently have about 10 tests that check that my Tetris piece doesn't move left if there is a piece, or a wall, in its path. Now I have to test the same behaviour for the right movement.
Is it too bad if I just copy the 10 tests I already have for the left movement and make only the needed changes, doing the same for the code itself too? Or should I write each test again from scratch, even though the logic is basically the same?
I have a somewhat controversial position on this one. While code duplication must be avoided as much as possible in production code, this is not so bad for test code. Production and test code differ in nature and intent:
Production code can afford some complexity in exchange for being understandable/maintainable: you want the code at the right abstraction level and the design to be consistent. That is OK because you have tests for it, so you can make sure it works. Code duplication in production code wouldn't be a problem if you truly had 100% code coverage at the logical level; that is really hard to achieve, so the rule is: avoid duplication and maximize code coverage.
Test code, on the other hand, must be as simple as possible. You must make sure that test code actually tests what it should. If tests are complicated, you might end up with bugs in the tests, or with the wrong tests -- and you don't have tests for the tests. So the rule is: keep it simple. Duplication in test code is not such a big problem when something changes: if the change is applied in only one test, the other will fail until you fix it.
The main point I want to make is that production and test code have different natures. It is always a matter of common sense, and I'm not saying you should never factor test code. If you can factor something out in test code and you're sure it's OK, then do it. But for test code I would favor simplicity over elegance, while for production code I would favor elegance over simplicity. The optimum, of course, is a simple, elegant solution :)
PS: If you really don't agree, please leave a comment.
Try the third approach that you haven't mentioned: refactoring your code so that you can share one implementation of the test between all 10 tests.
The gist is that duplicating code is almost always the wrong thing to do. In this example you could refactor the checking code into a method called, for example, IsTetrisPieceUnableToMoveLeftBecauseOfAPieceOrAWall. I always go for very descriptive method names like that when writing a bit of "shared" functionality for a unit test, as it makes it extraordinarily clear just what's being done/tested.
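To illustrate, here is a sketch of what that might look like with NUnit; the Direction enum, the TestBoards helper, and the board API are invented stand-ins for your own types:

    using NUnit.Framework;

    [TestFixture]
    public class PieceMovementTests
    {
        // One shared check reused by all left- and right-movement tests
        private void AssertPieceDoesNotMovePastObstacle(Direction direction)
        {
            var board = TestBoards.WithObstacleNextToPiece(direction); // hypothetical helper
            var originalPosition = board.CurrentPiece.Position;

            board.Move(direction);

            // The piece must stay put when blocked by a piece or a wall
            Assert.AreEqual(originalPosition, board.CurrentPiece.Position);
        }

        [Test]
        public void Piece_DoesNotMoveLeftThroughObstacle()
        {
            AssertPieceDoesNotMovePastObstacle(Direction.Left);
        }

        [Test]
        public void Piece_DoesNotMoveRightThroughObstacle()
        {
            AssertPieceDoesNotMovePastObstacle(Direction.Right);
        }
    }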
Test code is like any other code and should be maintained and refactored.
This means that if you have shared logic, extract it to its own function.
Some unit test libraries such as the xUnit family have specific test fixture, setup and teardown attributes for such shared code.
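For instance, in NUnit the shared state can live in [SetUp] and [TearDown] methods that run before and after every test in the fixture (a sketch; the Board type is hypothetical):

    using NUnit.Framework;

    [TestFixture]
    public class MovementTests
    {
        private Board _board; // hypothetical type under test

        [SetUp]
        public void CreateFreshBoard()
        {
            // Runs before every test, so each test starts from the same known state
            _board = new Board(width: 10, height: 20);
        }

        [TearDown]
        public void Cleanup()
        {
            // Runs after every test; release any shared resources here
            _board = null;
        }
    }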
See this related question - "Why is copy paste of code dangerous?".
There's nothing wrong with copy-pasting, and it's a good place to start. Indeed, it's better than writing from scratch: if you've got working code (whether tests or otherwise), copy-paste is both more reliable and quicker.
However, that's only step 1. Step 2 is refactoring for commonality, and step 1 only helps you see that commonality. If you can already see it clearly without copying (sometimes copying first makes the commonality easier to examine, sometimes it doesn't; it depends on the person), then skip step 1.
The xunitpatterns.org web site says "no" (copy / paste is not ok) because it can increase costs when tests need to be updated:
Test Code Duplication
"Cut and Paste" is a powerful tool for writing code fast but it
results in many copies of the same code each of which must be
maintained in parallel.
and for further reading it also links to the article
Refactoring Test Code
By: Arie van Deursen, Leon Moonen, Alex van den Bergh, Gerard Kok
If you are repeating code, then you must refactor. Your situation is a common problem and is solved using parametric testing. Parametric testing, when supported by the test harness, allows passing multiple sets of input values as parameters. You may also want to look up fuzz testing; I have found it useful in situations such as this.
Remember that your tests push against your code. If you find that your tests look duplicated except for something like left/right, then maybe there is some underlying code that left and right are duplicating. So you might want to see if you can refactor your code to handle both cases and pass it a left-or-right flag.
I agree with @Rob: the code needs refactoring. But if you don't want to refactor the code at this point in time, you can use parameterized tests: the same test run with different parameters. See the TestCase and TestCaseSource attributes in NUnit.
Refer to http://nunit.org/index.php?p=parameterizedTests&r=2.5
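A sketch of both attributes applied to the left/right scenario from the question; the Direction enum and the board helpers are assumptions:

    using System.Collections;
    using NUnit.Framework;

    [TestFixture]
    public class ParameterizedMovementTests
    {
        // Inline cases via TestCase: one test method, one reported result per case
        [TestCase(Direction.Left)]
        [TestCase(Direction.Right)]
        public void Piece_StaysPutWhenBlocked(Direction direction)
        {
            var board = TestBoards.WithObstacleNextToPiece(direction); // hypothetical helper
            var before = board.CurrentPiece.Position;

            board.Move(direction);

            Assert.AreEqual(before, board.CurrentPiece.Position);
        }

        // Cases supplied by a named source method via TestCaseSource
        private static IEnumerable BlockedScenarios()
        {
            yield return new TestCaseData(Direction.Left);
            yield return new TestCaseData(Direction.Right);
        }

        [TestCaseSource("BlockedScenarios")]
        public void Piece_StaysPutWhenBlocked_FromSource(Direction direction)
        {
            // same arrange/act/assert as above, with the cases coming from the source method
        }
    }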
I sometimes find myself writing very complex unit tests just to avoid test-code duplication. I don't think that's good: any single unit test should be as simple as possible, and if you need some duplication to achieve that, let it be.
On the other hand, if your unit test has ballooned to hundreds of lines of code, then it should obviously be refactored, and that refactoring will be a simplification.
And, of course, avoid meaningless unit-test duplication, like testing 1+1=2, 2+2=4, 3+3=6. Write data-driven tests if you really need to test the same method on different data.
