Referencing Main Project from a separate NUnit Test Library - C#

I'm brand new to Unit Testing, fresh out of college, and suddenly responsible for developing and maintaining a fairly large application by myself. The previous developer (who just left the company) included NUnit with the project references, wrote a few empty test fixtures at the bottom of a couple classes, and never did anything else with it.
I'm afraid if I don't start writing unit tests now, while I'm refactoring and learning the system, I'll never get them done, but I'm having trouble understanding how to properly set up my test project.
I was told that I should keep my test project as a separate class library within the same solution, so I've made an OurApplication.Test library project. I don't know how to reference the project that I'll be testing from within it, though.
Guides online say to point it to my main project's .DLL... but there isn't one. Our project is a standalone application that doesn't generate a DLL, and I'm not sure what I'm supposed to do in this case.
Any advice on what's wrong here, or pointers to more comprehensive guides would be greatly appreciated. I'd like to get this done the right way, and as soon as I can.

If you're using Visual Studio, you can add references to projects within the same solution; see, for example, referencing a console application from a test project.

Ideally, it is best to separate your application into separate DLLs based on functionality, i.e. splitting the solution into modules represented by class library projects. This makes it simpler to test each DLL separately. I usually have a separate test project for each module (project).
Having said that, you can add a .exe project as a reference to your NUnit project and get access to all of its namespaces and classes to test, though this is not best practice for maintainable code.
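For illustration, once that project reference is in place, a test fixture in the separate test project can use the main project's types directly. A minimal sketch, assuming NUnit 3 and a hypothetical OrderValidator class in an OurApplication namespace (substitute your own types):
using NUnit.Framework;
using OurApplication; // project reference to the main (exe) project

[TestFixture]
public class OrderValidatorTests
{
    [Test]
    public void Validate_ReturnsFalse_ForEmptyOrder()
    {
        // OrderValidator and Order are placeholder names from the main project.
        var validator = new OrderValidator();
        bool isValid = validator.Validate(new Order());
        Assert.That(isValid, Is.False);
    }
}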

Related

How to hide my test framework code and give other testers the possibility to only write features?

I have a test framework written in C#, I am using Selenium, nUnit and Specflow.
Visual Studio is the tool and I have the normal project folder structure (pages, hooks, steps, features).
Test cases are associated with Azure DevOps, the code is pushed to the repo, and the pipelines are working great.
My issue is that I want to hide the code under the level of the features.
The idea is that other people could create new feature files and create tests within them, but I do not want them to be able to see the code that is under those Gherkin sentences.
So they should only be able to create and write new features, but not see the code below.
Could you please give me some ideas how this can be achieved?
You can import bindings from external assemblies. In SpecFlow.json, add:
{
  "stepAssemblies": [
    {
      "assembly": "Name.Of.Your.StepDefinitions.dll"
    }
  ]
}
While this is typically used as a means to reuse step definitions in multiple projects, you could use this to "hide" the step definitions from test writers. The step definitions would be a pre-compiled DLL file that they can import into a different Visual Studio solution.
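For context, the step definitions inside that pre-compiled DLL are ordinary SpecFlow binding classes. A minimal sketch (the class name and step texts are placeholders):
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    private string _userName;

    [Given(@"a user named ""(.*)""")]
    public void GivenAUserNamed(string name)
    {
        _userName = name;
    }

    [Then(@"the current user should be ""(.*)""")]
    public void ThenTheCurrentUserShouldBe(string expected)
    {
        Assert.That(_userName, Is.EqualTo(expected));
    }
}
Testers then write feature files against these Gherkin sentences without ever opening this class.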
This requires two different solutions in Visual Studio:
A regular class library that holds step definition classes. You will probably need to target .NET Core or .NET 5+, since it must integrate with a unit testing framework. I don't believe .NET Standard would work (but you can certainly try).
Another Visual Studio solution that testers can use, which contains the feature files. This project would reference a DLL file you would create from project #1.
The challenge is disseminating this DLL file. Any update you make would need to be given to the testers' Visual Studio solution. A couple of options come to mind:
Manually update solution #2.
Create a NuGet package for solution #1. Set up a build pipeline in Azure DevOps to build, package, and push this NuGet package to your project's "Artifacts" feed whenever you make code changes. Someone (you or a tester) would need to update that NuGet package in solution #2.
As an alternative, give everyone access to the features and step definitions, but enforce a pull request policy on the repository requiring you as a reviewer. That way you can protect the main branch as the gatekeeper. Testers can submit changes to step definitions, and you are free to reject their pull requests if the changes are not satisfactory.
This, of course, depends on the team structure. If you have outside contractors, sharing the step definition code might not be desirable. Just don't limit people's abilities simply because you don't trust their skill level. Use pull request policies requiring you as a reviewer if they change step definition files. This allows you to benefit from someone else's labor while still giving you control.

How to organize a modular (C#/MEF) application in Visual Studio to allow debugging?

I am in the process of developing a modular application using C# and MEF. The application is supposed to consist of
a main application, which is the actual executable, providing core functionality, exposing a number of interfaces (extension points) and using MEF to pull in plug-in assemblies that fit into these
a set of plug-ins that provide classes that fit into the interfaces and can be used by the main application
The main application may either run all by itself or with one or more plug-ins imported. This should be a rather standard architecture for a modular MEF-based application.
Initial tests have shown that this appears to generally work. If I deploy the main application as well as one or more plug-in assemblies into a folder, everything works fine.
However, I am struggling with how to organize the Visual Studio solution for this. My initial approach is that the main application, as well as each plug-in are separate projects within a solution. The main application is an exe project, whereas the plug-ins are dll projects. Plug-in projects depend on the main project, since they are implementing interfaces and using classes defined in the main application (I could have created a common lib project that does this, but it does not seem to add any benefit).
This way, I can start and debug the main application (with no plug-ins) fine.
But how can the solution be organized so I can debug the main application with one, multiple or all plug-ins?
The current approach builds each plug-in into its own folder (which is generally fine) and copies the main application into each of these (which is not quite desirable). I could potentially configure an individual plug-in project to start the main application in its output folder, but I have no idea how to do this for more than one plug-in or how to do this if the main application should not be copied into each plug-in output folder.
Any hints or best practices would be highly appreciated. I am using Visual Studio 2015 - if that makes any difference.
Plug-in projects depend on the main project, since they are implementing interfaces and using classes defined in the main application (I could have created a common lib project that does this, but it does not seem to add any benefit).
There are benefits to it; here are a couple of them:
it just makes sense that if you have multiple projects which all use the same classes/functions, these reside in a separate common project, and possibly even another separate project just for the interfaces. It makes it easier to grasp things, seeing a solution with a main application, some DLLs with common functionality, and some plugins
if you ever have to add another main app, it can just make use of the common DLL as well, instead of having to reference the first main app
you won't end up with your main app being copied several times into other projects' output directories because they depend on it
it seems a bit strange (to me at least) that other projects have an exe project as a dependency, rather than a dll project. Just as it is really weird to me that a plugin depends on the main application it is in turn loaded into. Might be just me though - but I think it's also one of the reasons you have to ask the rest of your question in the first place.
But how can the solution be organized so I can debug the main application with one, multiple or all plug-ins?
Typically you tell your main application at startup which plugins to load. There are different ways to do this: read a file containing the names of plugins, scan a known directory for plugins, or a combination of both. All are supported or relatively easy to implement using MEF. For instance, in one large C# app we have, all plugins are copied into something like bin\Plugins. When the application starts it looks for DLLs in bin\Plugins, filters the list based on a text file containing regexes, then loads plugins from the filtered list. When sending the application to customers they get all, or only some, plugins. When developing we use the text file to cut down application load time.
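As a rough sketch of that approach (IPlugin, the folder name, and the host class are assumptions; the regex filtering is omitted for brevity):
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.IO;

// Hypothetical extension point exposed by the main application (or a common DLL).
public interface IPlugin
{
    string Name { get; }
}

public class PluginHost
{
    [ImportMany] // MEF fills this with every exported IPlugin it discovers
    public IPlugin[] Plugins { get; set; }

    public void LoadPlugins()
    {
        string pluginDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins");
        // DirectoryCatalog scans the folder for assemblies containing MEF exports.
        var catalog = new DirectoryCatalog(pluginDir, "*.dll");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // satisfies [ImportMany] above
    }
}
During development you can point pluginDir at a single plugin's output folder to debug the main application with just that plugin loaded.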

Reducing dependencies through IoC

In a quest to reduce the dependencies in my projects, I now have everything depending on and implementing interfaces, and they are glued together by an IoC container. This means projects need only to have direct references to such interface libraries.
However, if you don't specify the project as having a reference to the implementation (even though you don't need it at compile time) the implementation libraries are not included with the executable or in the setup project.
Is Visual Studio in a way promoting bad practices by requiring explicit references when they are not needed? Is it possible to depend only on the required interfaces, and in that case, what is the best method to get the implementation libraries available?
Is Visual Studio in a way promoting bad practices by requiring explicit references when they are not needed?
Not really. Your issue is one of deployment, not building.
Deploying an application and building it are separate things - if you have a good deployment strategy, you will be deploying implementations where they belong.
Is it possible to depend only on the required interfaces, and in that case, what is the best method to get the implementation libraries available?
The easiest way is indeed to reference the implementation assemblies. This will definitely make building and running locally as easy as F5, but do you really want that? To be honest, if you and your team have the discipline to only code to interfaces, that's all you need (and there are static analysis tools like nDepend that help with ensuring that remains the case).
One way forward is to create a deployment script that will deploy all dependencies whether local or elsewhere.
Visual studio does not require these references, but your IoC container does.
When adding a reference to the project, its binaries are automatically included in the output folder, which is necessary for your IoC container to glue the code together. There are other ways to get these binaries to the output folder than referencing their projects in Visual Studio - perhaps a post-build step?
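To make that concrete, here is a minimal sketch (all names hypothetical) of why the build never copies the implementation: the code compiles against the interface assembly only, and the implementation is located purely at runtime:
using System;
using System.Reflection;

// Lives in the referenced interfaces assembly.
public interface IMessageSender
{
    void Send(string message);
}

public static class CompositionRoot
{
    public static IMessageSender CreateSender()
    {
        // Because the implementation assembly is loaded by name at runtime,
        // MSBuild never sees a reference to it and will not copy it to the
        // output folder - that is left to a post-build step or deployment script.
        Assembly impl = Assembly.Load("OurApp.Implementations");
        Type type = impl.GetType("OurApp.Implementations.SmtpMessageSender", throwOnError: true);
        return (IMessageSender)Activator.CreateInstance(type);
    }
}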
No. It is simply the minimum Visual Studio needs to do in order to give developers working code without them having to do anything extra (aside from hitting F5), short of adding all references by default (which would likely be a mess, and slow on older hard discs).
For local development builds, you can simply have a post-build step on the relevant project to copy the DLLs to the main working directory. Just make sure the project is added to the active configuration to be built, otherwise you'll go through a whole annoying debug session on the post-build only to realise there was no build... lol
VS 2010. Post-build. Copy files in to multiple directories/multiple output path
Copy bin files on to Physical file location on Post Build event in VS2010
For full-scale application deployment, you'd likely be looking at Visual Studio setup projects at a bare minimum, but more ideally something like WiX or another deployment tool-set.

What is the best way to make shared libraries available to multiple applications?

Like most shops we've got a team of people working on various projects that all need to access the same core information and functions that relate to our business, usually in C#. We're currently just copying common classes from project to project, but everyone is starting to have their own flavors and we want to consolidate.
We use Tortoise SVN and have decided to maintain a separate project to contain our common classes, but we are not sure of the best way to deploy this common code to our various applications. We work in an internal IT shop that can dictate everything about how the users access the applications; we don't have to worry about releasing our products into the real world.
Some of our thoughts have been:
Compile the classes into a single DLL and load it into the Global Assembly Cache (GAC)
Compile the classes into a single DLL and save it to a centrally located shared drive to be referenced by all other projects
Compile the classes into a single DLL and include it in each project
Just fetch the most recent classes when starting a project, but don't have a central shared library (our interpretation of this: http://www.yosefk.com/blog/redundancy-vs-dependencies-which-is-worse.html)
SVN Externals http://svnbook.red-bean.com/en/1.0/ch07s03.html
I know this is a common problem, and if you spend any time looking into these or other options, you invariably find people explaining the pitfalls of each method (versioning, regression testing, "DLL Hell", "The GAC sucks", etc). I can hardly find anyone talking about what WORKS and why. Is there a preferred method?
At my company we have the same issue. Currently, we just use .bat files that go to our SVN Trunk and pull the most recent .dll references and fill a local References folder for the project you are working on.
However, we are currently working on switching this system over to NuGet. I'm not 100% sure how it works, but it's definitely worth looking into. It looks like you can point it to a shared code repository, and then in Visual Studio, using a plugin, it's as simple as right-clicking and hitting 'Update' every time you need to get the newest code.

How to use NUnit GUI with a C#/ASP.NET website?

I have a C#/ASP.NET website that has some code (*.cs) files in the App_Code directory. I would like to test them using NUnit. I have written a test file with the proper [TestFixture] and [Test] annotations and have put it here: App_Code/Test/TestClassName.cs.
I load up the NUnit GUI to run it, but it wants me to select a .exe or .dll file. There is none in the bin folder of my project. My project builds and runs successfully and everything, but there's still no .exe or .dll file. How can I get the NUnit GUI to just run the tests in that class?
I don't recommend putting test code in the same package you'll be deploying to production.
You may want to move the test classes to a library project, something like Business.UnitTest (there may be a built-in way to create an NUnit-specific project; if there is, use that). Then move the business classes that are in your App_Code directory into another project called Business. Have Business.UnitTest reference the Business library. That should get NUnit to run (I don't believe that NUnit runs against websites, only libraries, but I'm not 100% sure).
For your website add a reference to the business library you just created so your production code can access the business objects in the business library. You may have to work out some namespace issues, but that shouldn't be too bad.
The trick with .NET Website projects is that the code files are not normally compiled up front, they are compiled on execution of the respective pages. This presents a challenge where testing is concerned, since as you mentioned NUnit wants to run a .exe or .dll for testing.
One way to deal with the issue is to convert the website project to a web application; they sound similar, but work in different ways. In contrast to a website not requiring up-front compilation, a web application requires it. So you would have one or more projects that compile to assemblies (.dll) or executables (.exe). NUnit could then hook into those to run the tests.
To make this work, you would want to separate testable code into another project; your front-end web application can refer to this other project to make use of the code within. Ideally, the front-end would be a thin layer of logic and user interaction, and the real work can be sent to the second project. Therefore, the second project is what you will want to test.
You'll want to have one more project to contain the tests - general wisdom is to not have your tests in the same project as the code being tested. This project refers to the project being tested, and to NUnit, and contains the tests themselves. This assembly is what you would direct NUnit to run for testing.
First, you want to create a new project for your tests. If you happen to have any internal classes or members, you should use InternalsVisibleToAttribute in order to be able to test them from within your testing project, outside your "real" project.
This attribute applies to the entire assembly, so I recommend putting it into the AssemblyInfo.cs file of your "real" assembly.
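For example (the test assembly name is a placeholder):
// In the "real" project's Properties/AssemblyInfo.cs
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("OurApplication.Tests")]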
From within your tests project, add a reference to your "real" assembly.
Make sure you know the exact path to your binary (assembly.dll);
Open your TestsProjectAssembly.dll from within your NUnit GUI, making sure you are browsing to the right folder;
You might also want NUnit GUI to reload your assembly on each test run (there is an option for this in the settings);
Run your tests.
Make absolutely sure the path or folder you browse to is the one in which your testing project's assembly is generated.
