I am looking for an easy approach to test a (medium-sized) WPF application. Sort of what you'd do with BDD, but without the fancy SpecFlow scripts, and instead of invoking mouse clicks we just want to interact with the ViewModel layer.
I refactored the code slightly so that I can start the application easier from a unit test. This is what I have now:
[TestFixture, RequiresSTA]
public class StartAndInitializeTests
{
    [Test]
    public void StartAndInitializeSystemControl()
    {
        var systemControl = new SystemControl("302");

        // This line never executes, because App.Run() does not return
        // until the application stops.
        ViewModelContext.MachineControllerViewModel.InitMachine();
    }
}
Obviously the InitMachine method will never execute, because constructing SystemControl in the end results in the WPF app being started (App.Run()).
What is the best way around this? Handcraft some multi-threading framework which posts events to the UI thread and "mimics" user events? Or is there a somewhat proven framework that I should know about? Or should I have a completely different approach?
PS:
We are not looking for an approach based on SpecFlow, because we don't really need external stakeholders to write tests, so we can and will manage this in NUnit directly
We want to interact with ViewModels directly so that we don't have to bother with frameworks like White. Maybe we will go down that path in the future, but for starters we would like to keep it as simple as possible.
Many thanks in advance!
Generally you don't need to initialize the whole application when testing at the ViewModel level. You should be using dependency injection, which means that if you instantiate a ViewModel, its dependencies are injected appropriately through its constructor. You can then run your tests against the resulting ViewModel. You may also want to hook into the ViewModel's change-notification events and into your Model to validate the outcome of each test.
A simple example is a property on the ViewModel that modifies a property in the Model. A test for it would construct the ViewModel in question, hook into the Model underlying the ViewModel, modify the property on the ViewModel, and check that the appropriate value is modified in the Model.
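For illustration, a minimal NUnit sketch of exactly that kind of test (all class and property names here are illustrative, not from the question):
using NUnit.Framework;

public class Customer
{
    public string Name { get; set; }
}

public class CustomerViewModel
{
    private readonly Customer model;

    public CustomerViewModel(Customer model) { this.model = model; }

    // Property on the ViewModel that writes through to the Model
    // (raising PropertyChanged is omitted for brevity).
    public string Name
    {
        get { return model.Name; }
        set { model.Name = value; }
    }
}

[TestFixture]
public class CustomerViewModelTests
{
    [Test]
    public void SettingNameOnViewModel_UpdatesUnderlyingModel()
    {
        var model = new Customer();
        var viewModel = new CustomerViewModel(model);

        viewModel.Name = "Acme Ltd";

        Assert.AreEqual("Acme Ltd", model.Name);
    }
}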
I understood the basic principle of not calling the MessageBox in the ViewModel code or the Model code, but instead using a callback of some kind, be it an interface or a Func delegate that is handed to the ViewModel upon construction.
So far, so good.
But the examples given only go as far as pressing a button in the View, whereupon the ViewModel raises the MessageBox via the callback to confirm, and then continues.
But what if the Model does tons of work first before realizing the need for user feedback? Do I hand the callback function to the Model as well?
Does it have to be designed differently?
Any advice is appreciated. :-)
In my opinion, it shouldn't be a big issue to raise the callback from your model, but I guess this depends on your architecture and your personal preferences.
So if you really don't want to have any callbacks connected to the view in your model, you could let your mvvm (or your presentation/application layer) handle the control flow instead of letting the model do it.
You could implement your model methods more fine grained and let the application layer coordinate the operations of your model. In that way, whenever a model operation is completed and a user input is required, the mvvm layer could raise the callback.
Example:
// Method on your ViewModel / application layer.
public void InteractiveProcessing()
{
    // Business logic is separated into smaller chunks.
    model.DoFirstPartOfOperation();

    // Check whether the model needs additional user input.
    if (model.NeedsInput)
    {
        // Raise the callback here and let the user enter input
        // (requestInput stands for the callback handed in at construction).
        var userInput = requestInput();

        // Continue processing with the user input.
        model.DoSecondPartOfOperation(userInput);
    }
}
Of course, this only makes sense if you can split your business logic into smaller parts.
You expose a public event and have the View (.xaml.cs) listen to it on startup. The code will still run on the worker thread, but the backend logic will not hang during unit testing.
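A rough sketch of that idea (the names mirror the original question but are illustrative):
using System;
using System.Windows;

public class MachineControllerViewModel
{
    public event EventHandler MachineInitialized;

    public void InitMachine()
    {
        // ... backend logic runs whether or not a view is attached ...
        var handler = MachineInitialized;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

// In the View's code-behind (.xaml.cs), subscribe on startup:
public partial class MainWindow : Window
{
    public MainWindow(MachineControllerViewModel viewModel)
    {
        InitializeComponent();
        viewModel.MachineInitialized += (s, e) => { /* UI-only reaction */ };
    }
}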
OK, I think I've figured it out.
In my model I've encapsulated every call to the file system in a self-written interface called IIOServices, and all my UI calls in an interface called IUIServices.
The IUIServices interface uses only standard data types or self-defined enums, and nothing from the System.Windows.Forms or System.Windows namespaces.
Then the clients of the model are responsible for providing an implementation to access FileOpenDialogs and MessageBoxes and such in any way they please.
My sample code for this implementation (which is kept small for the learning experience) can be found here, if anyone's interested:
MVVM with MessageBoxes sample code
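For illustration, a minimal sketch of what such an interface pair might look like (the member names here are my own, not taken from the linked sample; note the interface itself avoids the System.Windows namespaces, while the client supplies a WPF implementation):
using Microsoft.Win32;
using System.Windows;

// Uses only standard data types and self-defined enums:
public enum ConfirmationResult { Yes, No, Cancel }

public interface IUIServices
{
    ConfirmationResult Confirm(string message, string caption);
    string AskForFileToOpen(string filter); // null if the user cancels
}

// One possible WPF implementation supplied by the client of the model:
public class WpfUIServices : IUIServices
{
    public ConfirmationResult Confirm(string message, string caption)
    {
        switch (MessageBox.Show(message, caption, MessageBoxButton.YesNoCancel))
        {
            case MessageBoxResult.Yes: return ConfirmationResult.Yes;
            case MessageBoxResult.No: return ConfirmationResult.No;
            default: return ConfirmationResult.Cancel;
        }
    }

    public string AskForFileToOpen(string filter)
    {
        var dialog = new OpenFileDialog { Filter = filter };
        return dialog.ShowDialog() == true ? dialog.FileName : null;
    }
}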
I am fairly new to testing still, but have been using SpecFlow for a few months. I am not entirely sure whether what I am going to ask is possible, but maybe someone will have a suggestion for how to go about the problem.
Synopsis: My feature file makes a call to a method that creates a dialog window stored in a variable created in that method. The user would then need to fill out the dialog window (it is basically picking a file, and then clicking ok). The rest of the method relies on the information provided by the dialog window.
Problem: Since the window is created inside the method, and its result is stored in a variable that exists only there, I cannot feed my information into that variable. But in order for my behavior tests to finish, I need to provide this information.
Example code:
Feature File:
Given I initialize the class
And I click on change selected item
Steps File:
[Given(#"I initialize the class")]
public void GivenIInitializeTheClass()
{
DoStuff();
SomeClass testClass = new SomeClasee();
}
[Given(#"IClickOnChangeSelectedItem")]
public void GivenIClickOnChangeSelectItem()
{
testClass.ChangeItem();
}
Method From Class:
public void ChangeItem()
{
var window = new SomeDialogWindow();
var result = window.ShowDialog();
if (result.HasValue && result.Value)
{
NewItem = window.SelectedItem;
}
}
I would know how to go about this if I could change the method in the class, but, in this example, I can make no changes to the class itself. Again I do not know if it is possible to assign anything to the result, or control the window since the variables for both are created within the method.
Depending on what you want to do, this is quite a common pattern and fairly easy to solve, but first let's consider what kind of testing you might want to be running.
Unit tests - In a unit test that only wants to test the implementation of SomeClass we don't care about the implementation of other classes including SomeDialogWindow. Alternatively we could be writing tests that care solely about the implementation of SomeDialogWindow, or even just the implementation of SomeClass::ChangeItem and nothing else. How fine do you want to go? These tests are there to pinpoint exactly where some of your code is broken.
Acceptance tests - We build these to test how everything works together. They do care about the interaction between different units and as such will show up things that unit tests don't find. Subtle configuration issues, or more complicated interactions between units. Unfortunately they cover huge swathes of code, so leave you needing something more precise to find out what is wrong.
In a Test driven development project, we might write a single acceptance test so we can see when we are done, then we will write many unit tests one at a time, each unit test used to add a small piece of functionality to the codebase, and confirm it works before moving to the next.
Now for how to do it. As long as you are able to modify SomeClass, it's not a huge change; in fact, all you really need is to add virtual to the ChangeItem method to make
public virtual void ChangeItem()
...
What this now allows you to do is replace the method with a different implementation when you are testing. In its simplest form you can then declare something like,
namespace xxx.Tests
{
public class TestableSomeClass : SomeClass
{
public Item TestItem {get;set;}
public override void ChangeItem()
{
NewItem = TestItem;
}
}
}
This is a very common pattern, known as a stub. We've reduced the functionality in ChangeItem down to its bare essentials, so it's just a minimal stub of its original intent. We aren't testing ChangeItem anymore, just the other parts of our code.
In fact, this pattern is so common that there are libraries to help us mock the function instead. I tend to use one called Moq, with which it would now look like this:
//Given
var desiredItem = ...;
// ChangeItem() returns void, so Setup(...).Returns(...) would not compile;
// use Callback to substitute the behaviour instead. CallBase keeps the real
// NewItem property working (this assumes NewItem has a public setter).
var myMock = new Mock<SomeClass> { CallBase = true };
myMock.Setup(x => x.ChangeItem())
      .Callback(() => myMock.Object.NewItem = desiredItem);
var testClass = myMock.Object;

//When
testClass.ChangeItem();

//Then
testClass.NewItem.ShouldEqual(desiredItem); // ShouldEqual: assertion-library extension
You will notice that in both these examples we have gotten rid of the GUI part of the codebase so that we can concentrate on your functionality. I would personally recommend this approach for getting 90% of your codebase covered, and you end up with rapid, uncomplicated testing. However, sometimes you need acceptance tests that exercise even the UI, and then we come to an altogether more complicated beast.
For example, your UI will include blocking calls when it displays visual elements, such as SomeDialogWindow.ShowDialog(), and these have to occur on what is commonly referred to as the UI thread. Fortunately, while only one thread can be the UI thread at a time, any thread can become the UI thread if it gets there first; you will need at least one thread displaying the UI and another running the tests. You can borrow a pattern from web-based testing and create driver classes that control your UI; these run on the test thread, performing the click operations and polling to see whether the operations are complete.
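A rough sketch of how a test might spin up the UI on its own thread and then drive it from the test thread (this assumes a WPF App : Application class like the one in the original question; it is an illustration, not a complete driver framework):
using System;
using System.Threading;
using System.Windows;

// Run the WPF application on a dedicated STA thread so that
// App.Run() blocks that thread instead of the test thread.
var uiReady = new ManualResetEventSlim();
var uiThread = new Thread(() =>
{
    var app = new App();                     // your Application subclass
    app.Startup += (s, e) => uiReady.Set();  // signal when the app is up
    app.Run();                               // blocks here, not in the test
});
uiThread.SetApartmentState(ApartmentState.STA);
uiThread.IsBackground = true;                // don't keep the test runner alive
uiThread.Start();

uiReady.Wait();

// Marshal calls onto the UI thread via its Dispatcher:
Application.Current.Dispatcher.Invoke(new Action(
    () => ViewModelContext.MachineControllerViewModel.InitMachine()));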
If you need to go to these lengths then don't start with this as you learn how to use the testing frameworks, start with the simple stuff.
My specific issue is an attempt to execute the Telerik DataBoundListBox method StopPullToRefreshLoading(true) from my ViewModel. The difficulty is that I do not want to break MVVM convention by putting application logic in the code behind.
I'm relatively new to MVVM and I'm not sure what the proper convention is for interacting with methods on controls on the view. I've done numerous searches on the topic and I've yet to find a solution that I can apply to my situation. I suspect I've probably come across the answer but I'm not drawing the proper conclusions.
It seems like this would be a common situation with 3rd party controls but maybe I'm just not thinking about the problem in the proper way.
I'm building my first Windows 8 Phone app using MVVM Light.
A lot of people get very hung up thinking that when following MVVM you MUST NOT HAVE CODE IN THE CODE BEHIND!!! This simply isn't the case; a design pattern like MVVM is there to make the code more maintainable. If something relates directly to the UI only and doesn't care about information in the ViewModel class, then by all means put it in the code-behind. I had the same situation when I was using third-party controls; sometimes there is no other option that isn't as bad as, or worse than, putting code in the code-behind.
First, I agree with Chris McCabe on this: design patterns are a guideline, a framework, a suggestion. They are not rules to live or die by. That being said, you should be able to join the two (VM/Telerik) without introducing 'real' business logic into the UI.
The first possibility is to use an event on the controller. The UI can subscribe to this event to forward the call to the Telerik control; however, the UI should not decide when it is called.
class MyModel
{
    public event EventHandler StopRefreshLoading;

    // The model decides when to raise the event:
    protected void OnStopRefreshLoading()
    {
        if (StopRefreshLoading != null) StopRefreshLoading(this, EventArgs.Empty);
    }
}

class MyForm : Form
{
    public MyForm(MyModel data)
    {
        data.StopRefreshLoading += (o, e) => this.CustomControl.StopPullToRefreshLoading(true);
        // ... etc
    }
}
Frankly, I prefer using interfaces for this type of behavior. It's then easy for the controller to force implementations to update to a new contract requirement. The downside is that the interfaces can become too verbose in a complex UI making them difficult to write tests for.
interface IMyModelView {
void StopRefreshLoading();
}
class MyForm : Form, IMyModelView
{
    void IMyModelView.StopRefreshLoading()
    {
        this.CustomControl.StopPullToRefreshLoading(true);
    }
}
Whichever direction you go, some violation of the UI design pattern is likely to occur; however, in the real world there are no points for strictly adhering to a specific pattern. The patterns are there as an aid to make the code more reliable, testable, flexible, whatever. Decide why you are using a pattern and you will be able to evaluate when you can safely violate it.
Hi, I am using the Simple Injector DI library and have been following some really interesting material about an architectural model designed around the command pattern:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
The container will manage the lifetime of the UnitOfWork, and I am using commands to perform specific functions to the database.
My question: if I have a command, for example an AddNewCustomerCommand, which in turn performs a call to another service (i.e. sends a text message), is this acceptable from a design standpoint, or should it be done at a higher level? And if so, how best to do this?
Example code is below:
public class AddNewBusinessUnitHandler
: ICommandHandler<AddBusinessUnitCommand>
{
    private readonly IUnitOfWork uow;
    private readonly ICommandHandler<OtherServiceCommand> otherHandler;

    public AddNewBusinessUnitHandler(IUnitOfWork uow,
        ICommandHandler<OtherServiceCommand> otherHandler)
    {
        this.uow = uow;
        this.otherHandler = otherHandler;
    }
public void Handle(AddBusinessUnitCommand command)
{
var businessUnit = new BusinessUnit()
{
Name = command.BusinessUnitName,
Address = command.BusinessUnitAddress
};
var otherCommand = new OtherServiceCommand()
{
welcomePostTo = command.BusinessUnitName
};
uow.BusinessUnitRepository.Add(businessUnit);
this.otherHandler.Handle(otherCommand);
}
}
It depends on your architectural view of (business) commands, but it is quite natural to have a one-to-one mapping between a use case and a command. In that case, the presentation layer should (during a single user action, such as a button click) do nothing more than create the command and execute it. Furthermore, it should execute only that single command, never more. Everything needed to perform that use case should be done by that command.
That said, sending text messages, writing to the database, doing complex calculations, communicating with web services, and everything else you need to operate the business' needs should be done during the context of that command (or perhaps queued to happen later). Not before, not after, since it is that command that represents the requirements, in a presentation agnostic way.
This doesn't mean that the command handler itself should do all this. It will be quite naturally to move much logic to other services where the handler depends on. So I can imagine your handler depending on a ITextMessageSender interface, for instance.
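For instance, a sketch of the handler from the question reworked along those lines (ITextMessageSender and its members are illustrative, not an existing API):
public interface ITextMessageSender
{
    void Send(string recipient, string message);
}

public class AddNewBusinessUnitHandler
    : ICommandHandler<AddBusinessUnitCommand>
{
    private readonly IUnitOfWork uow;
    private readonly ITextMessageSender textMessageSender;

    public AddNewBusinessUnitHandler(IUnitOfWork uow,
        ITextMessageSender textMessageSender)
    {
        this.uow = uow;
        this.textMessageSender = textMessageSender;
    }

    public void Handle(AddBusinessUnitCommand command)
    {
        uow.BusinessUnitRepository.Add(new BusinessUnit
        {
            Name = command.BusinessUnitName,
            Address = command.BusinessUnitAddress
        });

        // The 'send a welcome message' detail now lives behind a narrow,
        // presentation-agnostic abstraction instead of a nested handler.
        textMessageSender.Send(command.BusinessUnitName, "Welcome!");
    }
}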
Another discussion is whether command handlers should depend on other command handlers. When you look at use cases, it is not unlikely that big use cases consist of multiple smaller sub use cases, so in that sense it isn't strange. Again, there will be a one-to-one mapping between commands and use cases.
However, note that having a deep dependency graph of nested command handlers depending on each other can complicate navigating through the code, so take a good look at this. It might be better to inject an ITextMessageSender instead of an ICommandHandler<SendTextMessageCommand>, for instance.
Another downside of allowing handlers to nest is that it makes the infrastructural stuff a bit more complex. For instance, when wrapping command handlers with a decorator that adds transactional behavior, you need to make sure that the nested handlers run in the same transaction as the outermost handler. I happened to help a client of mine with this today. It's not incredibly hard, but it takes a little time to figure out. The same holds for things like deadlock detection, since this also runs at the boundary of the transaction.
Besides, deadlock detection is a great example to showcase the power of this command/handler pattern, since almost every other architectural style makes it impossible to plug in this behavior. Take a look at the DeadlockRetryCommandHandlerDecorator class in that article to see an example.
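To make the decorator idea concrete, here is a rough sketch of a transactional decorator (the TransactionScope usage is one possible approach; the linked articles develop this in more depth):
using System.Transactions;

public class TransactionCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public TransactionCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        // TransactionScope joins an ambient transaction by default, so when
        // handlers nest, the inner handler runs inside the outer transaction.
        using (var scope = new TransactionScope())
        {
            this.decoratee.Handle(command);
            scope.Complete();
        }
    }
}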
I have a big winform with 6 tabs on it, filled with controls. The first tab is the main tab, the other 5 tabs are part of the main tab. In database terms, the other 5 tabs have a reference to the main tab.
As you can imagine, my form is becoming very large and hard to maintain. So my question is, how do you deal with large UI's? How do you handle that?
Consider your aim before you start. You want to aim for SOLID principles, in my opinion. This means, amongst other things, that a class/method should have a single responsibility. In your case, your form code is probably coordinating UI stuff and business rules/domain methods.
Breaking down into usercontrols is a good way to start. Perhaps in your case each tab would have only one usercontrol, for example. You can then keep the actual form code very simple, loading and populating usercontrols. You should have a Command Processor implementation that these usercontrols can publish/subscribe to, to enable inter-view conversations.
Also, research UI design patterns. M-V-C is very popular and well-established, though difficult to implement in stateful desktop-based apps. This has given rise to M-V-P/passive view and M-V-VM patterns. Personally I go for MVVM but you can end up building a lot of "framework code" when implementing in WinForms if you're not careful - keep it simple.
Also, start thinking in terms of "Tasks" or "Actions" therefore building a task-based UI rather than having what amounts to a create/read/update/delete (CRUD) UI. Consider the object bound to the first tab to be an aggregate root, and have buttons/toolbars/linklabels that users can click on to perform certain tasks. When they do so, they may be navigated to a totally different page that aggregates only the specific fields required to do that job, therefore removing the complexity.
Command Processor
The Command Processor pattern is basically a synchronous publisher/consumer pattern for user-initiated events. A basic (and fairly naive) example is included below.
Essentially what you're trying to achieve with this pattern is to move the actual handling of events from the form itself. The form might still deal with UI issues such as hiding/[dis/en]abling controls, animation, etc, but a clean separation of concerns for the real business logic is what you're aiming for. If you have a rich domain model, the "command handlers" will essentially coordinate calls to methods on the domain model. The command processor itself gives you a useful place to wrap handler methods in transactions or provide AOP-style stuff like auditing and logging, too.
public class UserForm : Form
{
private ICommandProcessor _commandProcessor;
public UserForm()
{
// Poor-man's IoC, try to avoid this by using an IoC container
_commandProcessor = new CommandProcessor();
}
private void saveUserButton_Click(object sender, EventArgs e)
{
_commandProcessor.Process(new SaveUserCommand(GetUserFromFormFields()));
}
}
public class CommandProcessor : ICommandProcessor
{
    public void Process(object command)
    {
        // FindHandlers resolves the handlers registered for this command
        // type; one naive way to implement it is sketched below.
        ICommandHandler[] handlers = FindHandlers(command);
        foreach (ICommandHandler handler in handlers)
        {
            handler.Handle(command);
        }
    }
}
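For completeness, the example above leaves ICommandHandler and FindHandlers undefined. Here is the same processor fleshed out with a simple type-keyed registry; this is illustrative only, and a real application would likely let an IoC container do the registration:
using System;
using System.Collections.Generic;

public interface ICommandProcessor
{
    void Process(object command);
}

public interface ICommandHandler
{
    void Handle(object command);
}

public class CommandProcessor : ICommandProcessor
{
    // Handlers registered per command type, e.g. at application startup.
    private readonly Dictionary<Type, List<ICommandHandler>> registry =
        new Dictionary<Type, List<ICommandHandler>>();

    public void Register(Type commandType, ICommandHandler handler)
    {
        List<ICommandHandler> handlers;
        if (!registry.TryGetValue(commandType, out handlers))
        {
            handlers = new List<ICommandHandler>();
            registry.Add(commandType, handlers);
        }
        handlers.Add(handler);
    }

    public void Process(object command)
    {
        foreach (ICommandHandler handler in FindHandlers(command))
        {
            handler.Handle(command);
        }
    }

    private ICommandHandler[] FindHandlers(object command)
    {
        List<ICommandHandler> handlers;
        return registry.TryGetValue(command.GetType(), out handlers)
            ? handlers.ToArray()
            : new ICommandHandler[0];
    }
}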
The key to handling a large UI is a clean separation of concerns and encapsulation. In my experience, it's best to keep the UI as free of data and functionality as possible: the Model-View-Controller is a famous (but rather hard to apply) pattern for achieving this.
As the UI tends to get cluttered by the UI code alone, it's best to separate all other code from the UI and delegate everything that doesn't concern the UI directly to other classes (e.g. delegating the handling of user input to controller classes). You could apply this by having a controller class for each tab, but this depends on how complicated each tab is. Maybe it's better to break a single tab down into several controller classes and compose them into a single controller class for the tab, for easier handling.
I found a variation of the MVC pattern to be useful: The passive view. In this pattern, the view holds nothing more than the hierarchy and state of the UI components. Everything else is delegated to and controlled by controller classes which figure out what to do on user input.
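A small sketch of what that looks like in practice (names are illustrative): the view exposes only state and raw events, and the controller makes all the decisions.
using System;

// The view interface holds UI component state only; no logic.
public interface ICustomerTabView
{
    string CustomerName { get; set; }
    bool SaveEnabled { set; }
    event EventHandler SaveClicked;
}

// The controller reacts to view events and drives the view.
public class CustomerTabController
{
    private readonly ICustomerTabView view;

    public CustomerTabController(ICustomerTabView view)
    {
        this.view = view;
        view.SaveClicked += (s, e) => Save();
    }

    private void Save()
    {
        view.SaveEnabled = false;
        // validate view.CustomerName and persist it via the domain layer...
        view.SaveEnabled = true;
    }
}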
Of course, it also helps to break the UI itself down into well-organized and encapsulated components.
I would suggest you read about the CAB (Composite UI Application Block) from Microsoft patterns & practices, which features the following patterns: Command pattern, Strategy pattern, MVP pattern, etc.
Microsoft Practice and patterns
Composite UI Application Block