generate c# test method - c#

I was just wondering: given an input file (Excel, XML, etc.), can we generate unit test code in C#? Consider, for example, that I need to validate a database. In the input Excel file, I will specify which attributes to set, which to retrieve, the expected values, and so on. I can also provide the queries to run for these. So, given all of these inputs from my side, can I create a unit test method in C# through some tool, script, or another program? Sorry if this sounds dumb. Thank you for the help.

A unit test should verify that your software works correctly/as expected, not that your data is correct. To be precise, you should test the software that imported the data into your database. When the data is already in the database, you can however write a validation script or something similar, which has nothing to do with a unit test (although the script itself may of course be tested for correctness).
You should, however, test whether the queries your software runs against the database are correct and whether they work as expected, with both arbitrary and real-world data.
Even when code generation is involved, you do not want to check whether the process of generating the source code works correctly (at least not unless you wrote the code generator yourself). Simply assume the generator works as expected and continue with the parts you can handle yourself.

I had a similar question some time back, though not in the context of unit tests. Code that can be generated from another file or database table is called boilerplate code.
So if you ask whether this can be done, the answer is yes. But if you ask whether it should be done, the answer is no. Unit tests are not a good fit for boilerplate generation. They are mutable: after catching an edge case that you did not consider earlier, you may have to add a few more tests.
Also, unit tests are often used not just to test the code but to drive code development. This method is known as Test Driven Development (abbr. TDD). It would be a mess to "drive" your development from boilerplate tests.

Related

How to write automated tests for SQL queries?

The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins/sub-queries. I am not experienced enough to say whether this is the right approach, so I am going along with it and trying to work within this system, as it has clear benefits.
The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time consuming and stressful.
I would like to know how we could create an automated procedure (without writing it entirely by hand if possible, as I can work out how to do that the long way) to run a set of 10+ different inputs, verify the output data, and report whether the calculations are correct.
I know I could write a script using specific data in the database, create it in C# (the db is SQL Server), and verify all the values coming out, but I would like to know what the accepted "standard" is, as my experience is lacking in this area and I would like to improve.
I am happy to add more information if required, add a comment if necessary. Thank you.
Edit: I am using c#
The standard approach to testing code that runs SQL queries is to unit-test it. (There are higher-level kinds of testing than unit testing, but it sounds like your problem is with a small, specific part of your application so don't worry about higher-level testing yet.) Don't try to test the queries directly, but test the result of the queries. That is, write unit tests for each of the C# methods that runs a query. Each unit test should insert known data into the database, call the method, and assert that it returns the expected result.
The two most common approaches to unit testing in C# are to use the Visual Studio unit test tools or NUnit. How to write unit tests is a big topic. Roy Osherove's "Art of Unit Testing" should be a good place to get started.
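As a rough illustration of that insert-call-assert shape, here is a minimal NUnit sketch; the Students table, the connection string, and the StudentRepository class with its GetHonorStudents method are hypothetical stand-ins for your own code, not anything from the question:

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class StudentRepositoryTests
    {
        // Hypothetical test database; point this at an isolated instance, never production.
        private const string ConnectionString =
            @"Server=(localdb)\MSSQLLocalDB;Database=SchoolTest;Integrated Security=true";

        [SetUp]
        public void InsertKnownData()
        {
            using (var connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                var sql = "DELETE FROM Students; " +
                          "INSERT INTO Students (Id, Name, Grade) VALUES (1, 'Alice', 90), (2, 'Bob', 70);";
                using (var command = new SqlCommand(sql, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }

        [Test]
        public void GetHonorStudents_ReturnsOnlyStudentsAboveThreshold()
        {
            var repository = new StudentRepository(ConnectionString); // the method that runs the query

            var honorStudents = repository.GetHonorStudents(minGrade: 80);

            Assert.AreEqual(1, honorStudents.Count);
            Assert.AreEqual("Alice", honorStudents[0].Name);
        }
    }

Each test starts from known data, calls the C# method under test, and asserts on the returned result, so a wrong query shows up as a failed assertion rather than a surprise in production.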
The other answer to this question, while generally correct for testing code, does not address the issue of testing your database at all.
It sounds like you're after database unit tests. The idea is that you create a temporary, isolated database environment with your desired schema and test data, then you validate that your queries are returning appropriate data.

Programmatically create MSTest unit tests

I am looking for a way to programmatically create unit tests using MSTest. I would like to loop through a series of configuration data and create tests dynamically based on the information. The configuration data will not be available at compile time and may come from an external data source such as a database or an XML file. Scenario: load configuration data into a test harness and loop through the data while creating a new test for each element. I would like each dynamically created test to be reported on (success/fail) separately.
You can use Data Driven Testing, depending on how complex your data is. If you are just substituting values and testing to make sure that your code can handle those inputs, that might be the way to go, but this doesn't really sound like what you are after. (You could make this more complex; after all, all you are doing is pulling in values from a data source and then making a programmatic decision based on them.)
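For what it's worth, here is a minimal sketch of data-driven MSTest using the newer MSTest v2 attributes, where the cases come from a method that could just as easily parse your XML file at run time. The Square example and the hard-coded cases are placeholders, and this feature may not have existed when the question was asked:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ConfigurationDrivenTests
    {
        // Each object[] becomes one separately reported test case: { input, expected }.
        public static IEnumerable<object[]> TestCases()
        {
            // In a real harness this would read your XML/database configuration instead.
            yield return new object[] { 2, 4 };
            yield return new object[] { 3, 9 };
        }

        [DataTestMethod]
        [DynamicData(nameof(TestCases), DynamicDataSourceType.Method)]
        public void Square_ReturnsExpectedValue(int input, int expected)
        {
            Assert.AreEqual(expected, input * input);
        }
    }

Because every element from the data source is reported as its own test case, this gets close to the "one reported result per configuration entry" requirement without generating any source code.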
All MSTest really does is run a series of tests and then produce the results (in an XML file), which are then interpreted by the calling application. It's just a wrapper for executing methods that you designate through attributes.
What it sounds like you're asking is to write C# code dynamically, and have it execute in the harness.
If you really want to run this through MSTest you could (a rough sketch of this workflow follows the list):
Build a method (or series of methods) which reads the XML file
Write out the C# code (I would maybe look at T4 templates for this; personally, I would use F# for it, but I'm more partial to functional languages, and it would be easier for me)
Call csc.exe (the C# compiler)
Invoke MSTest
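Here is a loose sketch of that generate-compile-run loop, assuming an MSTest v1-era toolchain; the tests.xml shape, the Calculator.Square call in the generated code, and the exact paths/references for csc.exe and mstest.exe are all made up and would need adjusting on a real machine:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class TestGenerator
    {
        static void Main()
        {
            // 1. Read the configuration (hypothetical format: <case name="..." input="..." expected="..."/>).
            var cases = XDocument.Load("tests.xml").Descendants("case");

            // 2. Emit one [TestMethod] per configuration entry.
            var methods = string.Join(Environment.NewLine, cases.Select(c =>
                "    [TestMethod] public void " + c.Attribute("name").Value + "() { " +
                "Assert.AreEqual(" + c.Attribute("expected").Value + ", " +
                "Calculator.Square(" + c.Attribute("input").Value + ")); }"));

            File.WriteAllText("GeneratedTests.cs",
                "using Microsoft.VisualStudio.TestTools.UnitTesting;" + Environment.NewLine +
                "[TestClass] public class GeneratedTests {" + Environment.NewLine +
                methods + Environment.NewLine + "}");

            // 3. Compile the generated file into a test assembly.
            Process.Start("csc.exe",
                "/target:library /out:GeneratedTests.dll " +
                "/r:Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll GeneratedTests.cs")
                .WaitForExit();

            // 4. Run the compiled tests; MSTest writes its results to a .trx (XML) file.
            Process.Start("mstest.exe", "/testcontainer:GeneratedTests.dll").WaitForExit();
        }
    }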
You could also emit IL code directly into the running application and try to get MSTest to execute it, which some might find fun, but that could be time consuming and is not necessarily guaranteed to work (I haven't tried it, so I don't know what the pitfalls would be).
Based on this, it might be easier to quickly build your own harness which interprets your XML file, dynamically builds out your test scenarios, and produces the same results file. (After all, the results are what's important, not how you got there.) Since you said the data won't be available at compile time, I would guess that you aren't interested in viewing the results in the Visual Studio window.
Actually, personally, I wouldn't use XML as your domain-specific language (DSL). Parsing it is easy, because .NET already does that for you, but it is limiting in how it can describe what your methods should do. XML is meant for conveying data, and although technically code is a form of data, XML doesn't have the expressive power of a more formal language. This is just my personal opinion though, and there are many ways to skin a cat.

How can I automatically generate unit tests in MonoDevelop/.Net?

I am looking to automatically generate unit tests in MonoDevelop/.Net.
I've tried NUnit, but it doesn't generate the tests. In Eclipse, the Randoop plug-in does this, but it targets Java and JUnit.
How can I automatically generate unit tests in MonoDevelop and/or for .Net? Or perhaps there is an existing tool out there I am unaware of...
Calling methods with different (random) input is just one part of the process. You also need to define the correct result for each input, and I don't think a tool can do that for you.
Randoop only seems to check a few very basic properties such as equality, which is not of great use IMO and might also lead to a false impression of correctness ("Hey look, all tests pass, my software is OK" ...)
Also, randomly generating code (and input) carries the risk of non-deterministic test results. You might or might not get tests that really find flaws in your code.
That said, a quick googling gave the following starting points for approaches you might want to take:
You might be interested in using test case generators (this CodeProject article describes a very simple one). They support you in generating the "boilerplate" code and can make sure you don't miss any classes/methods you want to test. Of course, the generated tests need to be adapted by defining proper (i.e. meaningful) input and (correct) output values. Googling for "NUnit test generators" will give you other links, including commercial software, which I don't want to repeat here ...
NUnit (and other testing frameworks) supports parameterized tests: these can be used to test a whole class of input scenarios. For NUnit, I found the Random attribute, which lets you generate random input (in a certain range) for your methods. Remember what I wrote above about random test inputs: the results of these tests will not be reproducible, which renders them useless for automated or regression testing.
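For illustration, a minimal NUnit sketch contrasting the two; the Square example is a placeholder, and the [Random] test will use different inputs on every run:

    using NUnit.Framework;

    [TestFixture]
    public class ParameterizedExamples
    {
        // Fixed inputs: reproducible, suitable for regression testing.
        [TestCase(2, 4)]
        [TestCase(-3, 9)]
        public void Square_OfKnownValues(int input, int expected)
        {
            Assert.AreEqual(expected, input * input);
        }

        // Random inputs in a range: five generated values per run, different every time.
        [Test]
        public void Square_IsNeverNegative([Random(-100, 100, 5)] int input)
        {
            Assert.GreaterOrEqual(input * input, 0);
        }
    }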
That said, also look at this question (and certainly others on SO), which may support my argument against automatic unit test generation.

C# Unit Testing warn instead of fail

When using Assert(...), if a logical check fails, the unit test is aborted and the rest of the unit test isn't run. Is there a way to have the logical check fail but just produce a warning or something and still run the rest of the unit test?
An example of the context: I have a test that creates some students, teachers and classes, creates relationships, then places them into a database. Then some SSIS packages are run on this database, which take the existing data and convert it into another database schema in another database. The test then needs to check the new database for certain things like the correct number of rows, actions, etc.
Obviously other tests do deletes and modifications, but all of them follow the same structure: create data in the source db, run the SSIS packages, verify data in the target db.
It sounds like you are attempting to test too many things in a single test.
If a precondition isn't met, then presumably the rest of the test will not pass either. I'd prefer to end the test as soon as I know things aren't what I expect.
The concepts of unit testing are red = fail, green = pass. I know MSTest also allows for a yellow, but it isn't going to do what you want it to. You can do an Assert.Inconclusive to get a yellow light. I used this when I worked on a code base that had a lot of integration tests that relied on specific database data. Rather than have the tests fail, I started having the results be inconclusive. The code might have worked just fine, but the data was missing. And there was no reason to believe the data would always be there (they were not good tests IMO).
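For example (the data check and the converter call here are hypothetical placeholders):

    [TestMethod]
    public void ConvertsStudentRecords()
    {
        if (!TestDatabase.HasSeedData())   // hypothetical helper that checks for the required rows
        {
            Assert.Inconclusive("Required seed data is missing; marking yellow rather than failing.");
        }

        var result = Converter.Run();      // hypothetical code under test
        Assert.AreEqual(42, result.RowCount);
    }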
If you are using Gallio/MbUnit, you can use Assert.Multiple to achieve what you want. It captures the failing assertions but does not stop the execution of the test immediately. All the failing assertions are collected and reported later at the end of the test.
[Test]
public void MultipleAssertSample()
{
    Assert.Multiple(() =>
    {
        Assert.Fail("Boum!");
        Assert.Fail("Paf!");
        Assert.Fail("Crash!");
    });
}
The test in the example above obviously fails, but what's interesting is that all three failures are shown in the test report. The execution does not stop at the first failure.
I know your question was asked several years ago, but more recently (around 2017 or 2018) NUnit 3 added support for warnings. You can embed a boolean check in a warning assertion much as you would with Assert.Fail, but instead of a single assert line failing the whole test, the test runner will log the warning and continue on with the test.
Read about it here: https://docs.nunit.org/articles/nunit/writing-tests/Warnings.html
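A minimal sketch of what that looks like with NUnit 3's Warn.Unless and Assert.Warn (the row-count helper and the expected values here are made up):

    [Test]
    public void TargetDatabaseLooksRight()
    {
        int rowCount = CountRowsInTarget();       // hypothetical helper

        // Logs a warning and keeps going instead of aborting the test.
        Warn.Unless(rowCount == 500, "Expected 500 rows but found " + rowCount);

        Assert.Warn("Reached the row-count check"); // unconditional warning, also non-fatal

        // A real assertion failure here would still fail the test as usual.
        Assert.IsTrue(TargetHasExpectedActions());  // hypothetical check
    }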
It behaves similarly to Assert.Multiple (mentioned by @Yann Trevin above; Multiple is also available in NUnit 3.0). The nice difference shows up in integration tests, where the flexibility of stand-alone warning assertions shines. Contrast that with a group of asserts inside a Multiple block: once the Multiple block has completed with failures, the test does not continue.
Integration tests, especially those which can run for hours to, perhaps, test how well a bunch of microservices all play together, are expensive to re-run. Also, if you happen to have multiple teams (external, internal, outurnal, infernal, and eternal) and timezones committing code virtually all the time, it may be challenging to get a new product to run a start-to-end integration test all the way to the end of its workflow once all the pieces are hosted together. (Note: it's important to assemble teams with ample domain knowledge and at least enough software engineering knowledge to define solid "contracts" for how each API will behave, and to manage them well. Doing so should help alleviate the mismatches implied above.)
The simple black/white, pass/fail testing is absolutely correct for Unit testing.
But as systems become more abstract, layered service upon service and agent upon agent, the ability to know a system's robustness and reliability becomes more important. We already know the small blocks of code will work as intended; the unit tests and code coverage tell us so. But when they must all run atop someone else's infrastructure (AWS, Azure, Google Cloud), unit testing isn't good enough.
Knowing how many times a service had to retry before succeeding, how much a service call cost, and whether the system will meet its SLA under certain loads: these are things integration tests can help find out using the type of assert you were asking about, @dnatoli.
Given the number of years since your question, you're almost certainly an expert by now.
From what you've explained in the question, it is more of an acceptance test (than a unit test). Unit testing frameworks are designed to fail fast. That is why Assert behaves the way it does (and it's a good thing).
Coming back to your problem: you should take a look at using an acceptance testing framework like FitNesse, which would support what you want, i.e. show the steps that failed but continue execution till the end.
However, if you MUST use a unit-testing framework, use a collecting variable/parameter to simulate this behavior (a minimal sketch follows these steps), e.g.:
Maintain a List<string> within the test
Append a descriptive error message for every failed step
At the end of the test, assert that the collecting variable is empty
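Something along these lines, inside an NUnit test fixture (using System.Collections.Generic assumed); the individual step checks are hypothetical placeholders:

    [Test]
    public void MigrationProducesExpectedTargetData()
    {
        var errors = new List<string>();

        if (CountTargetRows() != 500)             // hypothetical check
            errors.Add("Unexpected row count in the target database");

        if (!TargetHasAction("Enroll"))           // hypothetical check
            errors.Add("Missing 'Enroll' action after migration");

        // Single assert at the end: every step runs, and the failure message
        // lists everything that went wrong rather than just the first problem.
        Assert.IsTrue(errors.Count == 0, string.Join(Environment.NewLine, errors));
    }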
I had a similar problem when I wanted a more meaningful failure report. I was comparing collections and kept getting "wrong number of elements", with no idea what the real reason for the failure was. Unfortunately I ended up writing the comparison code manually: checking all the conditions and then doing a single assert at the end with a good error message.
Unit testing is black and white: either you pass the tests or you don't, either you are breaking the logic or you aren't, either your data in the DB is correct or it isn't (although unit testing against a DB is no longer unit testing per se).
What are you going to do with the warning? Is that pass or fail? If it's pass then what's the point of the unit testing in this case? If it's fail.. well.. just fail then.
I suggest spending a little time figuring out what should be unit tested and how it should be unit tested. "Unit testing" is a cliché term used by many people for very, very different things.
Unit testing might be more black and white than something like integration testing, but if you are using a tool like SpecFlow then you might need the testing framework to give you a warning or assert inconclusive...
Why do some unit testing frameworks allow you to assert inconclusive if it's all black and white?
Imagine that the test data you're passing into your unit test is created by a random data generator... Maybe you have several of these... For some data conditions you are sure the result is a failure; for another data condition you might be unsure...
The point of a warning or an assert inconclusive is to tell the engineer to take a look at this corner case and add more code to try to catch it next time...
The assumption that your tests will always be perfectly black and white isn't correct, I think. I've run into too many cases in 15 years of testing where it's not simply passed or failed... it passes... it fails... and "I don't know yet"... You have to keep in mind that when you fail a test, it means that you know it failed...
False failures are really bad in automated tests... They create a lot of noise... You're better off saying "I don't know" if you don't know whether you are failing...

Maintainability of database integration testing

I am developing an ETL process that extracts business data from one database into a data warehouse. The application is NOT using NHibernate, LINQ to SQL or Entity Framework. The application has its own generated data access classes that generate the necessary SQL statements to perform CRUD operations.
As one can imagine, developers who write code that generates custom SQL can easily make mistakes.
I would like to write a program that generates test data (Arrange), then performs the ETL process (Act) and validates the data warehouse (Assert).
I don't think it would be hard to write such a program. However, what worries me is that in the past my company attempted to do something similar and ended up with a bunch of unmaintainable unit tests that constantly failed because of the many changes to the database schema as new features were added.
My plan is to write an integration test that runs on the build machine, rather than unit tests, to ensure the ETL process works. The test data cannot be totally randomly generated because of the business logic that determines how data is loaded into the data warehouse. We have a custom development tool that generates new data access classes when there is a change in the database definition.
I would love any feedback from the community and advice on writing such an integration test so that it is easy to maintain. Some ideas I have:
Save a backup test database in version control (TFS); developers will need to modify the backup database when there are data changes to the source or the data warehouse.
Developers maintain the test data manually through the testing program (C# in this case). This program would have a basic framework for developers to generate their test data.
When the test database is initialized, it generates random data. Developers will need to write code to override certain randomly generated data to ensure the tests pass.
I welcome any suggestions
Thanks
Hey dsum,
although I don't really know your whole ETL architecture, I would say that integration testing should only be another step in your testing process.
Even if the unit testing ended up in a mess on the first attempt, you should keep in mind that for many cases a single unit test is the best place to check. Or do you really want to split the whole integration test for every branching case further down, in order to guarantee the right flow in each of the conditions?
Messy unit tests are usually just the result of messy production code. Don't feel offended; that's just my opinion. Unit tests force coders to keep a clean coding style and keep the whole thing much more maintainable.
So... my point is that you should think about not only performing integration testing on the whole thing, because unit tests (if they are used in the right way) can focus on problems in more detail.
Regards,
MacX
First, let me say that I think that's a good plan, and I have done something similar using Oracle & PL/SQL some years ago. IMHO your problem is mainly an organizational one, not a technical one:
You must have someone who is responsible to extend and maintain the test code.
Responsibility for maintaining the test data must be clear (and provide mechanisms for easy test data maintenance; same applies to any verification data you might need)
The whole team should know that no code will go into the production environment as long as the test fails. If the test fails, first priority of the team should be to fix it (the code or the test, whatever is right). Train them not to work on any new feature as long as the test breaks!
After a bug fix, it should be easy for the one who fixed it to verify that the part of the integration which failed before does not fail afterwards. That means it should be possible to run the whole test quickly and easily from any developer machine (or at least parts of it). Speed can become a problem for an ETL process if your test is too big, so focus on testing a lot of things with as little data as possible. And perhaps you can break the whole test into smaller pieces which can be executed step by step.
If you want to maintain data while performing data integration testing of an ETL process, you could also follow these steps, since integration testing of the ETL process involves the related applications as well (a rough sketch of such a test follows the list). For example:
1. Set up test data in the source system.
2. Execute the ETL process to load the test data into the target.
3. View or process the data in the target system.
4. Validate the data and the application functionality that uses the data.
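As a loose NUnit sketch of those four steps; every helper here (SourceDatabase, EtlRunner, WarehouseDatabase) is a hypothetical stand-in for your own generated data access classes and ETL entry point, so treat it as the shape of the test rather than a working implementation:

    [TestFixture]
    [Category("Integration")]
    public class EtlIntegrationTests
    {
        [SetUp]
        public void SetUpSourceData()
        {
            // 1. Set up known test data in the source system.
            SourceDatabase.Reset();
            SourceDatabase.InsertOrder(id: 1, customer: "Acme", amount: 125.50m);
        }

        [Test]
        public void OrdersAreLoadedIntoWarehouse()
        {
            // 2. Execute the ETL process to load the test data into the target.
            EtlRunner.Run("LoadOrders");

            // 3. Read the data back from the target system.
            var fact = WarehouseDatabase.GetOrderFact(orderId: 1);

            // 4. Validate the data (and any functionality that depends on it).
            Assert.IsNotNull(fact);
            Assert.AreEqual("Acme", fact.CustomerName);
            Assert.AreEqual(125.50m, fact.Amount);
        }
    }

Keeping the seeded data small and resetting it in SetUp makes the test cheap to run from any developer machine, which helps with the maintainability concern above.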
