I have a UWP Project with 2 pages so far. The MainPage.xaml is the basic layout of the app ( hamburger menu, search bar, etc.). The other part of this MainPage contains a frame into which the other page LandingPage.xaml is loaded. I want to capture the user input from an AutosuggestBox in the MainPage.xaml and show the results on LandingPage.xaml ( which is in a frame present inside MainPage.xaml).
I tried inheriting the MainPage, but that's not allowed.
While Marian's answer would certainly work, I think it's far from being 'clean' or 'good' code.
First and foremost, you should implement the MVVM pattern in your UWP apps (if you don't already) and use a dependency injection framework along with it. A very basic, easy-to-understand choice is MVVMLight, while a more sophisticated one could be Autofac. I advise you to start with the former; it's much quicker to wrap your head around.
In MVVM there's a concept that solves exactly your problem: messengers. I won't go into the details here, since there are already a lot of very good resources about this written by people much smarter than me. For example, this article from the author of MVVMLight himself: https://msdn.microsoft.com/en-us/magazine/jj694937.aspx (I know it's from 2013 and talks about Windows 8, but fear not, the concepts are just the same.)
The idea is that distinct ViewModels shouldn't have hard dependencies on each other - that makes unit testing (which is one of the main points of doing MVVM in the first place) hard. So in your case, you should have two ViewModels: MainViewModel and LandingViewModel, one for MainPage and one for LandingPage, respectively. Now you should implement a handler in MainPage's code-behind for AutoSuggestBox's QuerySubmitted event and call a function in MainViewModel. In that function, you would instantiate a new message with the string coming from your AutoSuggestBox (which you can acquire either through data binding or from the QuerySubmitted event args, it's up to you) and send it via the Messenger. In LandingViewModel, you would subscribe to this exact message type, and then it's again just a matter of a few lines to display the received text through data binding on LandingPage.
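A minimal sketch of that flow using MVVMLight's Messenger; the message class, property names and handler wiring below are illustrative, not taken from your project:

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Messaging;

// A plain message type carrying the submitted query text (name is made up).
public class SearchQuerySubmittedMessage
{
    public SearchQuerySubmittedMessage(string query) => Query = query;
    public string Query { get; }
}

public class MainViewModel : ViewModelBase
{
    // Call this from MainPage's QuerySubmitted handler (or bind a command to it).
    public void OnQuerySubmitted(string queryText)
    {
        Messenger.Default.Send(new SearchQuerySubmittedMessage(queryText));
    }
}

public class LandingViewModel : ViewModelBase
{
    private string _searchQuery;

    public string SearchQuery
    {
        get => _searchQuery;
        set => Set(ref _searchQuery, value); // raises PropertyChanged for the binding on LandingPage
    }

    public LandingViewModel()
    {
        // Receive the message sent by MainViewModel and expose it to the view.
        Messenger.Default.Register<SearchQuerySubmittedMessage>(this, m => SearchQuery = m.Query);
    }
}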
I know it looks like a lot of hassle for just something very basic like this, especially if you compare it to Marian's straight to the point solution. But trust me, in the long run writing clean code, nicely separated, easily unit testable ViewModels will make up for the additional effort that you have to put into them initially to make them work. After such a system is set up between two ViewModels, adding a third (which I assume you'll need to do soon) is absolutely trivial and can be done very quickly.
If you're not using MVVM I'd suggest adding x:FieldModifier="public" on the AutoSuggestBox and adding a public static property to MainPage that stores its instance.
MainPage.xaml.cs
public static MainPage Current { get; private set; }

public MainPage()
{
    Current = this;
    // Rest of your code in ctor
}
Then you can access it using
string text = MainPage.Current.NameOfYourAutoSuggestBox.Text;
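For example, from LandingPage's code-behind you could then read the box directly (the handler and control names below are just placeholders):

private void ShowSearchText_Click(object sender, RoutedEventArgs e)
{
    // Reads the AutoSuggestBox that lives on MainPage through the static instance.
    string text = MainPage.Current.NameOfYourAutoSuggestBox.Text;
    ResultsTextBlock.Text = text; // assumes a TextBlock named ResultsTextBlock on LandingPage
}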
Just use a simple message passing mechanism of your own, like this:
using System;
using System.Collections.Generic;
using System.Linq;

public class Messages {
    public static Messages Instance { get; } = new Messages();

    private readonly List<Subscription> subscriptions;

    private Messages() {
        subscriptions = new List<Subscription>();
    }

    public void Send<T>(T message) {
        var msgType = message.GetType();
        // Copy under the lock so subscribers can (un)subscribe while a message is being dispatched.
        List<Subscription> snapshot;
        lock (subscriptions) {
            snapshot = subscriptions.ToList();
        }
        foreach (var sub in snapshot)
            if (sub.Type.IsAssignableFrom(msgType))
                sub.Handle(message);
    }

    public Guid Subscribe<T>(Action<T> action) {
        var key = Guid.NewGuid();
        lock (subscriptions) {
            subscriptions.Add(new Subscription(typeof(T), key, action));
        }
        return key;
    }

    public void Unsubscribe(Guid key) {
        lock (subscriptions) {
            subscriptions.RemoveAll(sub => sub.Key == key);
        }
    }

    public bool IsSubscribed(Guid key) {
        lock (subscriptions) {
            return subscriptions.Any(sub => sub.Key == key);
        }
    }

    public void Dispose() {
        lock (subscriptions) {
            subscriptions.Clear();
        }
    }
}

internal sealed class Subscription {
    internal Guid Key { get; }
    internal Type Type { get; }
    private object Handler { get; }

    internal Subscription(Type type, Guid key, object handler) {
        Type = type;
        Key = key;
        Handler = handler;
    }

    internal void Handle<T>(T message) {
        // Action<in T> is contravariant, so a handler subscribed to a base type can receive derived messages.
        ((Action<T>)Handler).Invoke(message);
    }
}
It's small and simple but it allows the subscription of different messages in parallel, separated by message type. You can subscribe, in a case similar to yours, with:
Messages.Instance.Subscribe<TextChangeArgs>(OnTextChanged);
and your other pages can send their messages using:
Messages.Instance.Send(new TextChangeArgs(...));
From all subscribers, only those interested in this specific message type will receive the message. You can (and, of course, should) also unsubscribe. Some more error handling would also be necessary in a real-world scenario.
If necessary, you can easily add extra functionality such as throttling (to avoid too many consecutive messages in a given time period).
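A rough sketch of such throttling, added inside the Messages class above (the interval and the per-type dictionary are illustrative additions, not part of the original code):

private readonly Dictionary<Type, DateTime> lastSent = new Dictionary<Type, DateTime>();
private static readonly TimeSpan MinInterval = TimeSpan.FromMilliseconds(250);

public void SendThrottled<T>(T message) {
    var now = DateTime.UtcNow;
    lock (lastSent) {
        // Drop the message if one of the same type was sent too recently.
        if (lastSent.TryGetValue(typeof(T), out var last) && now - last < MinInterval)
            return;
        lastSent[typeof(T)] = now;
    }
    Send(message);
}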
Related
Suppose I have a CQRS command that looks like below:
public sealed class DoSomethingCommand : IRequest
{
    public Guid Id { get; set; }
    public Guid UserId { get; set; }
    public string A { get; set; }
    public string B { get; set; }
}
That's processed in the following command handler:
public sealed class DoSomethingCommandHandler : IRequestHandler<DoSomethingCommand, Unit>
{
    private readonly IAggregateRepository _aggregateRepository;

    public DoSomethingCommandHandler(IAggregateRepository aggregateRepository)
    {
        _aggregateRepository = aggregateRepository;
    }

    public async Task<Unit> Handle(DoSomethingCommand request, CancellationToken cancellationToken)
    {
        // Find aggregate from id in request
        var id = new AggregateId(request.Id);
        var aggregate = await _aggregateRepository.GetById(id);
        if (aggregate == null)
        {
            throw new NotFoundException();
        }

        // Translate request properties into a value object relevant to the aggregate
        var something = new AggregateValueObject(request.A, request.B);

        // Get the aggregate to do whatever the command is meant to do and save the changes
        aggregate.DoSomething(something);
        await _aggregateRepository.Save(aggregate);

        return Unit.Value;
    }
}
I have a requirement to save auditing information such as the "CreatedByUserID" and "ModifiedByUserID". This is a purely technical concern because none of my business logic is dependent on these fields.
I've found a related question here, where there was a suggestion to raise an event to handle this. This would be a nice way to do it because I'm also persisting changes based on the domain events raised from an aggregate using an approach similar to the one described here.
(TL;DR: Add events into a collection in the aggregate for every action, pass the aggregate to a single Save method in the repository, use pattern matching in that repository method to handle each event type stored in the aggregate to persist the changes)
e.g.
The DoSomething behavior from above would look something like this:
public void DoSomething(AggregateValueObject something)
{
    // Business logic here
    ...

    // Add domain event to a collection
    RaiseDomainEvent(new DidSomething(/* required information here */));
}
The AggregateRepository would then have methods that looked like this:
public void Save(Aggregate aggregate)
{
    var events = aggregate.DequeueAllEvents();
    DispatchAllEvents(events);
}

private void DispatchAllEvents(IReadOnlyCollection<IEvent> events)
{
    foreach (var @event in events)
    {
        DispatchEvent((dynamic) @event);
    }
}

private void Handle(DidSomething @event)
{
    // Persist changes from event
}
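For completeness, a minimal sketch of the event-collecting base class these snippets assume; the RaiseDomainEvent/DequeueAllEvents names mirror the usage above, but the implementation shown is just illustrative:

using System.Collections.Generic;
using System.Linq;

public interface IEvent { }

public abstract class AggregateRoot
{
    private readonly List<IEvent> _pendingEvents = new List<IEvent>();

    // Called from behaviours such as DoSomething to queue a domain event.
    protected void RaiseDomainEvent(IEvent @event) => _pendingEvents.Add(@event);

    // Called by the repository's Save method; returns and clears the queued events.
    public IReadOnlyCollection<IEvent> DequeueAllEvents()
    {
        var events = _pendingEvents.ToList();
        _pendingEvents.Clear();
        return events;
    }
}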
As such, adding a RaisedByUserID to each domain event seems like a good way to allow each event handler in the repository to save the "CreatedByUserID" or "ModifiedByUserID". It also seems like good information to have when persisting domain events in general.
My question is about whether there is an easy way to make the UserId from the DoSomethingCommand flow down into the domain event, or whether I should even bother doing so.
At the moment, I think there are two ways to do this:
Option 1:
Pass the UserId into every single use case on an aggregate, so it can be passed into the domain event.
e.g.
The DoSomething method from above would change like so:
public void DoSomething(AggregateValueObject something, Guid userId)
{
    // Business logic here
    ...

    // Add domain event to a collection
    RaiseDomainEvent(new DidSomething(/* required information here */, userId));
}
The disadvantage to this method is that the user ID really has nothing to do with the domain, yet it needs to be passed into every single use case on every single aggregate that needs the auditing fields.
Option 2:
Pass the UserId into the repository's Save method instead. This approach would avoid introducing irrelevant details to the domain model, even though the repetition of requiring a userId parameter on all the event handlers and repositories is still there.
e.g.
The AggregateRepository from above would change like so:
public void Save(Aggregate aggregate, Guid userId)
{
    var events = aggregate.DequeueAllEvents();
    DispatchAllEvents(events, userId);
}

private void DispatchAllEvents(IReadOnlyCollection<IEvent> events, Guid userId)
{
    foreach (var @event in events)
    {
        DispatchEvent((dynamic) @event, userId);
    }
}

private void Handle(DidSomething @event, Guid userId)
{
    // Persist changes from event and use user ID to update audit fields
}
This makes sense to me as the userId is used for a purely technical concern, but it still has the same repetitiveness as the first option. It also doesn't allow me to encapsulate a "RaisedByUserID" in the immutable domain event objects, which seems like a nice-to-have.
Option 3:
Could there be any better ways of doing this or is the repetition really not that bad?
I considered adding a UserId field to the repository that can be set before any actions; that removes the repetition elsewhere, but it seems bug-prone, and the field would still need to be set in every command handler.
Could there be some magical way to achieve something similar through dependency injection or a decorator?
It will depend on the concrete case. I'll try to explain a couple of different problems and their solutions.
You have a system where the auditing information is naturally part of the domain.
Let's take a simple example:
A banking system that makes contracts between the Bank and a Person. The Bank is represented by a BankEmployee. When a Contract is either signed or modified you need to include the information on who did it in the contract.
public class Contract {
    public void AddAdditionalClause(BankEmployee employee, Clause clause) {
        AddEvent(new AdditionalClauseAdded(employee, clause));
    }
}
You have a system where the auditing information is not natural part of the domain.
There are a couple of things here that need to be addressed. For example, are users the only ones who issue commands to your system? Sometimes another system can invoke commands.
Solution: Record all incoming commands and their status after processing: successful, failed, rejected, etc.
Include the information about the command issuer.
Record the time when the command occurred. You can include the information about the issuer in the command itself or not.
public interface ICommand {
    DateTime Timestamp { get; }
}

public class CommandIssuer {
    public CommandIssuerType Type { get; private set; }
    public CommandIssuerInfo Issuer { get; private set; }
}

public class CommandContext {
    public ICommand cmd { get; private set; }
    public CommandIssuer CommandIssuer { get; private set; }
}
public class CommandDispatcher {
    public void Dispatch(ICommand cmd, CommandIssuer issuer) {
        LogCommandStarted(issuer, cmd);
        try {
            DispatchCommand(cmd);
            LogCommandSuccessful(issuer, cmd);
        }
        catch (Exception ex) {
            LogCommandFailed(issuer, cmd, ex);
        }
    }

    // or
    public void Dispatch(CommandContext ctx) {
        // rest is the same
    }
}
Pros: this keeps your domain free of the knowledge that someone issues commands.
Cons: if you need more detailed information about the changes and want to match commands to events, you will need to correlate timestamps and other information. Depending on the complexity of the system, this may get ugly.
Solution: Record all incoming commands in the entity/aggregate with the corresponding events. Check this article for a detailed example. You can include the CommandIssuer in the events.
public class SomethingAggregate {
    public void Handle(CommandContext ctx) {
        RecordCommandIssued(ctx);
        Process(ctx.cmd);
    }
}
You do include some information from the outside in your aggregates, but at least it's abstracted, so the aggregate just records it. It doesn't look so bad, does it?
Solution: Use a saga that contains all the information about the operation you are performing. In a distributed system you will most of the time need to do this anyway, so it would be a good solution. In a non-distributed system it will add complexity and an overhead that you may not want to have :)
public class DoSomethingSagaCoordinator {
    public void Handle(CommandContext cmdCtx) {
        var saga = new DoSomethingSaga(cmdCtx);
        sagaRepository.Save(saga);

        saga.Process();
        sagaRepository.Update(saga);
    }
}
I've used all the methods described here and also a variation of your Option 2. In my version, when a request was handled, the repositories had access to a context that contained the user info, so when they saved events this information was included in an EventRecord object that had both the event data and the user info. It was automated, so the rest of the code was decoupled from it. I did use DI to inject the context into the repositories. In this case I was just recording the events to an event log; my aggregates were not event sourced.
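A rough sketch of that variation; the context interface, the EventRecord shape and the IEventLog abstraction here are illustrative, not the actual code I used:

using System;
using System.Threading.Tasks;

public interface ICurrentUserContext {
    Guid UserId { get; }
}

public sealed class EventRecord {
    public EventRecord(object eventData, Guid raisedByUserId, DateTime occurredOn) {
        EventData = eventData;
        RaisedByUserId = raisedByUserId;
        OccurredOn = occurredOn;
    }

    public object EventData { get; }
    public Guid RaisedByUserId { get; }
    public DateTime OccurredOn { get; }
}

public interface IEventLog {
    Task Append(EventRecord record);
}

public sealed class AggregateRepository {
    private readonly ICurrentUserContext _userContext;
    private readonly IEventLog _eventLog;

    // The context is injected by the DI container, scoped to the current request.
    public AggregateRepository(ICurrentUserContext userContext, IEventLog eventLog) {
        _userContext = userContext;
        _eventLog = eventLog;
    }

    public async Task Save(Aggregate aggregate) {
        foreach (var @event in aggregate.DequeueAllEvents()) {
            // The user info is attached here, so aggregates and handlers stay unaware of it.
            await _eventLog.Append(new EventRecord(@event, _userContext.UserId, DateTime.UtcNow));
        }
    }
}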
I use these guidelines to choose an approach:
If it's a distributed system -> go for a Saga.
If it's not:
Do I need to relate detailed information to the command?
Yes: pass Commands and/or CommandIssuer info to aggregates.
If no, then:
Does the database have good transactional support?
Yes: save Commands and CommandIssuer outside of aggregates.
No: save Commands and CommandIssuer in aggregates.
In a view model's constructor I have a command declaration that calls a method:
OpenGroupCommand = new DelegateCommand(OnOpenGroupExecute);
And the method looks like:
private void OnOpenGroupExecute(object obj)
{
    string groupName = (string)obj;
    Application.Current.MainPage.Navigation.PushAsync(new GroupPage(groupName));
}
How can I test that groupName is passed to the other view model correctly? In the other view model, the groupName parameter is assigned to the GroupName property on the VM instance:
public class GroupPageViewModel : ViewModelBase, IGroupPageViewModel
{
    private string _groupName;

    public GroupPageViewModel(string groupName)
    {
        LoadGroupName(groupName);
    }

    public void LoadGroupName(string groupName)
    {
        GroupName = groupName;
    }

    public string GroupName
    {
        get
        {
            return _groupName;
        }
        set
        {
            _groupName = value;
            OnPropertyChanged();
        }
    }
}
When debugging everything works fine, but how can I unit test this? Where can I read a bit about testing and mocking stuff like this, e.g. with the Moq framework?
I believe your question is actually about how to test navigation between pages.
In the implementation of the OnOpenGroupExecute method, because you are using Xamarin.Forms types to implement the navigation, you have to reference Xamarin.Forms assemblies in your test project, which makes the unit test depend on Xamarin.Forms.
As suggested in this document https://learn.microsoft.com/en-us/xamarin/xamarin-forms/enterprise-application-patterns/, try to create an interface for navigation and navigate by view model (more details at https://github.com/dotnet-architecture/eShopOnContainers).
Then, in your unit test project, implement a fake navigation service class like the one below and inject it into the DI container:
public class FakeNavigationService : INavigationService // this interface is from the MS eShopOnContainers project
{
    private readonly List<ViewModelBase> _viewModels = new List<ViewModelBase>();

    public Task NavigateToAsync<TViewModel>() where TViewModel : ViewModelBase {
        // create the viewModel object from the DI container
        // var viewModel = ......
        _viewModels.Add(viewModel);
        return Task.CompletedTask;
    }

    public ViewModelBase CurrentPageViewModel {
        get {
            if (_viewModels.Count < 1) {
                return null;
            }
            return _viewModels[_viewModels.Count - 1];
        }
    }
}
This is just a suggestion. If you have already implemented most of the features in your app, it takes time to change from navigate-by-page to navigate-by-view-model.
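With a fake like that registered (and its DI resolution filled in), an NUnit-style test could look roughly like this; the view model class, its constructor and the method behind the command are assumptions about how your code would look after the refactoring:

[Test]
public async Task OpenGroupCommand_NavigatesToGroupPageViewModel() {
    var navigationService = new FakeNavigationService();
    var viewModel = new GroupListViewModel(navigationService); // hypothetical VM ctor taking INavigationService

    await viewModel.OpenGroupAsync("SomeGroup"); // hypothetical method invoked by OpenGroupCommand

    Assert.IsInstanceOf<GroupPageViewModel>(navigationService.CurrentPageViewModel);
}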
Well, let's see what you have:
You have some code in a private method; unless you make it public you won't be able to test it directly, because you can't call it. (I am not considering any tricks that allow you to call private methods.)
What does that method do? It is not clear at all: it receives an object, and we don't know what's in it. You're casting it to a string, but what if it is not a string? Can you convert that object to a string? Who knows.
So we have a method that we don't know what it does, whose parameters we don't really understand, and that we can't call directly, yet we want to test it. This is not a good position to be in.
Step back a bit and ask yourself, what are you really trying to test?
You said: "How can I test that groupName is passed to another view model correctly?"
What does "correctly" mean? You need to define what it means for that string to be correct. That will give you a test scenario you can work with.
"I expect to receive an object which looks like A, and I want to convert it to a string which looks like B." Forget about view models for now; that's just unimportant noise.
You can change the method into a public one and test that, for different types of input data, you get the right result. This is literally working with an object and extracting some stuff from it. When that method is correct, you can guarantee that the view model will receive the right input, and that is good enough from a unit testing point of view.
You can, of course, add more tests for various inputs; you can test for correct failure conditions, etc.
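For example, pulling the conversion out into its own public method makes it trivially testable; the method name and the NUnit tests below are just an illustration:

using System;
using NUnit.Framework;

// Hypothetical extraction: the conversion logic becomes a public, pure method.
public static string ExtractGroupName(object obj)
{
    var groupName = obj as string;
    if (string.IsNullOrWhiteSpace(groupName))
        throw new ArgumentException("Expected a non-empty group name.", nameof(obj));
    return groupName;
}

[Test]
public void ExtractGroupName_ReturnsTheStringItWasGiven()
{
    Assert.AreEqual("Admins", ExtractGroupName("Admins"));
}

[Test]
public void ExtractGroupName_ThrowsForNonStringInput()
{
    Assert.Throws<ArgumentException>(() => ExtractGroupName(42));
}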
While implementing a WPF application I stumbled on the problem that my application needs some global data in every ViewModel. However, some of the ViewModels only need read access while others need read/write access to this data. At first I stumbled upon the Microsoft idea of a SessionContext, like so:
public class SessionContext
{
    #region Public Members
    public static string UserName { get; set; }
    public static string Role { get; set; }
    public static Teacher CurrentTeacher { get; set; }
    public static Parent CurrentParent { get; set; }
    public static LocalStudent CurrentStudent { get; set; }
    public static List<LocalGrade> CurrentGrades { get; set; }
    #endregion

    #region Public Methods
    public static void Logon(string userName, string role)
    {
        UserName = userName;
        Role = role;
    }

    public static void Logoff()
    {
        UserName = "";
        Role = "";
        CurrentStudent = null;
        CurrentTeacher = null;
        CurrentParent = null;
    }
    #endregion
}
This isn't (in my opinion, at least) nicely testable, and it gets problematic if my global data grows (a thing that could easily happen in this application).
The next thing I found was the implementation of a Mediator/the Mediator pattern from this link. I liked the idea of the design Norbert is using there and thought about implementing something similar for my project. However, in this project I am already using the excellent MediatR NuGet package, which is also a Mediator implementation, so I thought "why reinvent the wheel" when I could just use a nice, well-tested Mediator. But here starts my real question: to send changes made to the global data by other ViewModels to my read-only ViewModels, I would use notifications. That means:
public class ReadOnlyViewModel : INotificationHandler<Notification>
{
    // some members
    // global data
    public string Username { get; private set; }

    public Task Handle(Notification notification, CancellationToken token)
    {
        Username = notification.Username;
        return Task.CompletedTask;
    }
}
The Question(s) now:
1. Is this a good practice when using MVVM? (It's just a feeling that doing this is wrong, because it feels like exposing business logic in the ViewModel.)
2. Is there a better way to separate this, so that my ViewModel doesn't need to implement 5 to 6 different INotificationHandler<> interfaces?
Update:
As clarification of what I want to achieve here:
My goal is to implement a WPF application that manages some global data (let's say a username, as mentioned above) for one of its windows. Because I am using a DI container (and because of the kind of data it is), I have to register the service @mm8 proposed as a singleton. That, however, is a little bit problematic in the case (and I do have that case) where I need to open a new window that needs different global data at that time. That would mean I either need to change the lifetime to something like "kind of scoped", or add more fields for different purposes (breaking the single responsibility of the class), or create n services for the n possible windows I may need to open. On the idea of splitting the service: I would like to, because that would mitigate all the problems mentioned above, but it would make sharing the data problematic, because I don't know a reliable way to communicate the global data from the write service to the read service while something asynchronous or parallel is running in a background thread that could trigger the write service to update its data.
You could use a shared service that you inject into your view models. It could, for example, implement two interfaces, one for write operations and one for read-only operations, e.g.:
public interface IReadDataService
{
    object Read();
}

public interface IWriteDataService : IReadDataService
{
    void Write();
}

public class GlobalDataService : IReadDataService, IWriteDataService
{
    public object Read()
    {
        throw new NotImplementedException();
    }

    public void Write()
    {
        throw new NotImplementedException();
    }
}
You would then inject the view models that should have write access with an IWriteDataService (and the other ones with an IReadDataService):
public ViewModel(IWriteDataService dataService) { ... }
This solution both makes the code easy to understand and easy to test.
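The key detail is registering a single shared instance behind both interfaces, so writers and readers see the same data. With Microsoft.Extensions.DependencyInjection (adapt this to whatever container you actually use), the registration could look roughly like this:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// One singleton instance, exposed through both the read-only and the writable interface.
services.AddSingleton<GlobalDataService>();
services.AddSingleton<IReadDataService>(sp => sp.GetRequiredService<GlobalDataService>());
services.AddSingleton<IWriteDataService>(sp => sp.GetRequiredService<GlobalDataService>());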
Question
If the method being tested is initiated by something other than an incoming message (into a handler), how do you test the publishing of events? The problem with NServiceBus.Testing (as far as I can see) is that it's very geared towards testing handlers which in turn cause messages/events to be sent/published.
Background:
We have a number of legacy systems, and in the ongoing effort to move to a proper SOA implementation we need to integrate with some legacy DB jobs. These jobs perform actions on the DB every 15 minutes which we want to hook into and publish events when certain conditions occur.
We have a windows service that is running within the NsbHost but that contains no handlers. Instead we are using Quartz to create an internal cron job that runs every minute, polls the DB for records that match a specified pattern, and publishes an event before updating the DB to say we have processed that record. This is how we are integrating old and new systems.
Clearly this is not an ideal situation but given that we are in a transition phase in our SOA implementation its as good as we have right now.
Details and code:
Our event interface we are publishing looks like this
public interface IPremiumAdjustmentFinalised
{
    string PolicyNumber { get; set; }
    decimal Amount { get; set; }
    DateTime FinalisedOn { get; set; }
}
The code to actually publish the event is
Bus.Publish<IPremiumAdjustmentFinalised>(e =>
{
    e.Amount = AdjustmentAmount;
    e.PolicyNumber = PolicyNumber;
    e.FinalisedOn = LastModifiedOn;
});
This all works fine, and we can test that the call was made using Moq, thus:
eventPublisher.MethodToTest();
bus.Verify(x => x.Publish(It.IsAny<Action<IPremiumAdjustmentFinalised>>()), Times.Once);
where bus is a new Mock inserted into the constructor.
But I want to test the values within IPremiumAdjustmentFinalised. I'm finding this really difficult, I think mainly because it's an interface rather than a concrete class.
We have tried using NServiceBus.Testing to try and inspect the generated event but to no avail.
Does anyone know of how to test the values in the event in this given scenario?
There are 2 solutions that we have come up with
Solution 1:
Create a concrete class that implements the interface and publish that instead:
public class PremiumAdjustmentFinalisedEvent : IPremiumAdjustmentFinalised
{
    public string PolicyNumber { get; set; }
    public decimal Amount { get; set; }
    public DateTime FinalisedOn { get; set; }
}
This class is only used in the sender, the handler at the other end would still listen for events of the type IPremiumAdjustmentFinalised.
Sending the event would change to this
var message = new PremiumAdjustmentFinalisedEvent
{
    Amount = AdjustmentAmount,
    PolicyNumber = PolicyNumber,
    FinalisedOn = LastModifiedOn
};

Bus.Publish<IPremiumAdjustmentFinalised>(message);
which allows us to test thus:
bus.Verify(x => x.Publish(It.Is<IPremiumAdjustmentFinalised>(paf =>
    paf.Amount == AdjustmentAmount &&
    paf.FinalisedOn == LastModifiedOn &&
    paf.PolicyNumber == PolicyNumber)), Times.Once);
This is not an ideal solution as we need to add code to the live solution to enable testing, but it works well and is easy to understand.
Solution 2:
Implement our own IBus and assert objects are correct within that:
class myBus : IBus
{
    public void Publish<T>(Action<T> messageConstructor)
    {
        IPremiumAdjustmentFinalised paf = new PremiumAdjustmentFinalised();
        var ipaf = (T)paf;
        messageConstructor(ipaf);

        Assert.AreEqual(paf.Amount, AdjustmentAmount);
        Assert.AreEqual(paf.FinalisedOn, LastModifiedOn);
        Assert.AreEqual(paf.PolicyNumber, PolicyNumber);
    }

    // ...
    // <<Leave the rest of the IBus methods not implemented>>
    // ...
}
With my own test implementation of IPremiumAdjustmentFinalised (similar to Solution 1, but living in the test project):
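That test double can be as small as this, mirroring the interface:

public class PremiumAdjustmentFinalised : IPremiumAdjustmentFinalised
{
    public string PolicyNumber { get; set; }
    public decimal Amount { get; set; }
    public DateTime FinalisedOn { get; set; }
}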
Now the test looks like this
var myBus = new myBus();
eventPublisher = new AdjustmentFinaliserEventPublisher(myBus);
eventPublisher.UpdateAndPublishFinalisedAdjustments();
I like this solution as it does not change the live code just for testing, but the downside is that there is a big implementation of IBus with loads of not-implemented methods on it.
After digging around Google results for days after finding this article, there still seemed to be no real guidance other than this one. After implementing Solution 2, I was determined to find a lighter way and came up with the following, using Moq as the mocking library.
/// Declarations
private Mock<IPremiumAdjustmentFinalised> _mockEvent = new Mock<IPremiumAdjustmentFinalised>();
private readonly Mock<IBus> _mockBus = new Mock<IBus>();
private SomeEventPublisher _someEventPublisher;

[SetUp]
public void Setup()
{
    // Instruct Moq to set up implementations for the event interface's properties.
    _mockEvent.SetupAllProperties();

    _mockBus.Setup(b => b.Publish(It.IsAny<Action<IPremiumAdjustmentFinalised>>()))
        .Callback((Action<IPremiumAdjustmentFinalised> messageConstructor) =>
        {
            // Execute the constructor provided by the event publisher, so the event data can be interrogated.
            messageConstructor(_mockEvent.Object);
        });

    _someEventPublisher = new SomeEventPublisher(_mockBus.Object);
}

[Test]
public void SomeEventPublisherTest()
{
    _someEventPublisher.PublishSomeEvent(AdjustmentAmount, LastModifiedOn, PolicyNumber);

    Assert.AreEqual(_mockEvent.Object.Amount, AdjustmentAmount);
    Assert.AreEqual(_mockEvent.Object.FinalisedOn, LastModifiedOn);
    Assert.AreEqual(_mockEvent.Object.PolicyNumber, PolicyNumber);
}
(Edited a lot) I've got some classes with abstract members. The concrete type of the abstract members is to be determined at class instantiation, based on the user's input. However, the second member's concrete type might depend on the first member.
I'm trying to do this while keeping the MVP design pattern in mind. I thought about making the Presenter pass a delegate to the Model's ctor, which it (the ctor) would use to request the information needed for the instantiation of the class. I'm not sure if it's a good idea. Here is what I wrote:
// In the Model :
public class Model
{
    public E Element1;
    public E Element2;

    public Model(CustomCtor<ModelElement, IModelElement> GetModelElement)
    {
        this.Element1 = (E)GetModelElement(ModelElement.E, null);
        this.Element2 = (E)GetModelElement(ModelElement.E, null);
        // Element2 does not depend on Element1 in this case though.
    }
}

public abstract class E : IModelElement { }

public class EA : E
{
    public string Element1;
    public EA(string Element1) { this.Element1 = Element1; }
}

public class EB : E
{
    public int Element1;
    public EB(int Element1) { this.Element1 = Element1; }
}

public interface IModelElement { }

public enum ModelElement { E, EA, EB }
// In the Presenter :
public class Presenter
{
    View.View view1;

    public Presenter() { }

    public void SetView(View.View view) { this.view1 = view; }

    public Model.Model MakeModel()
    {
        CustomCtor<ModelElement, IModelElement> GetModelElement = new CustomCtor<ModelElement, IModelElement>(GetModelElement<ModelElement, IModelElement>);
        return new Model.Model(GetModelElement);
    }

    private Model.IModelElement GetModelElement<ModelElement, Tout>(Model.ModelElement ME, object obj)
    {
        switch (ME)
        {
            case Model.ModelElement.E:
                return MakeE();
            // One case per Model.ModelElement
            default:
                throw new Exception("ModelElement not implemented in the Presenter.");
        }
        return default(Model.IModelElement);
    }

    private E MakeE()
    {
        switch (view1.AskEType())
        {
            case 1:
                return MakeEA();
            case 2:
                return MakeEB();
            default:
                throw new Exception();
        }
    }

    private EA MakeEA() { return new EA(view1.AskString("EA.Element1 (String)")); }
    private EB MakeEB() { return new EB(view1.AskInt("EB.Element1 (Int)")); }
}
// Shared to the Model and the Presenter :
public delegate TOut CustomCtor<EnumType, TOut>(EnumType Enum, object Params) where EnumType : struct;
// In the View :
public class View
{
    public int AskEType()
    {
        Console.WriteLine(string.Format("Type of E : EA(1) or EB(2)?"));
        return int.Parse(Console.ReadLine());
    }

    public string AskString(string Name)
    {
        Console.Write(string.Format("{0} ? ", Name));
        return Console.ReadLine();
    }

    public int AskInt(string Name)
    {
        Console.Write(string.Format("{0} ? ", Name));
        return int.Parse(Console.ReadLine());
    }
}
// In the Program :
class Program
{
    static void Main(string[] args)
    {
        View.View view1 = new View.View();
        Presenter.Presenter presenter1 = new Presenter.Presenter();
        presenter1.SetView(view1);
        presenter1.MakeModel();
    }
}
Does that make sense? Is there a name for the thing I'm trying to do (besides "a weird thing")?
Are you aware of a design pattern I should read on?
I thought about mixing the Builder design pattern with MVP, but I'm not sure how I'd do that.
Thanks
I am not certain this is what you're asking about, but I am assuming you are trying to keep your view isolated from your model. If that is indeed what you are trying to do, I think you're taking a much too complicated approach. The view is simply a presentation and feedback medium. It really does not need to know anything about models; it can be designed to make use of simple data in a property bag of some kind. This creates a cleaner separation; however, it often makes rendering data and maintaining the view a lot harder as well.
The first question I would ask is: is it REALLY worth it to expend so much effort keeping your view entirely isolated from your model? What are you really gaining from an absolute separation?
If you do indeed need a separation, make sure you understand the roles of view and presenter. The view is dumb... it knows nothing and does nothing. It presents information and forms. The browser issues the commands requested by the user. The presenter handles commands and directs data to its view. The concept of the presenter "asking the view" for anything is generally incorrect. The presenter should be handling the command (HTTP request) directly, so it should know any and all details about a particular command. When it comes time to render the view, the presenter should provide the data to the view in whatever form the view needs it to be in. If you do not want your view to know about your object model, then either create properties on the view itself to contain the data, or create a view-specific model that encapsulates the data required.
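As a rough illustration of that last point (the interface and presenter names here are invented, not from your code):

public interface IElementView
{
    // The presenter pushes display-ready data in; the view only renders it.
    string ElementDescription { set; }
}

public class ElementPresenter
{
    private readonly IElementView view;

    public ElementPresenter(IElementView view) { this.view = view; }

    // The presenter handles the "show element" command and hands the view plain data.
    public void ShowElement(IModelElement element)
    {
        view.ElementDescription = element.GetType().Name;
    }
}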
EDIT:
I've just read your update. I think I understand your problem a bit better now. First off, before I go any further, you need to reorganize responsibilities a little bit. Currently, you have it such that your view is responsible for handling input. That is a bit of a corruption of the purpose and concept of a 'view'. In both MVP and MVC, the view is supposed to be as "dumb" as possible, and really should not be responsible for processing anything: commands, actions, input, etc. should all be the responsibility of the Controller or Presenter.
Seeing that your view is actually a console application, not a Web Forms application (which was my original assumption), I think that MVC might actually be a better fit for your needs. MVP is a good solution for getting around the deficiencies of ASP.NET Web Forms, but it is not as powerful or successful at helping separate concerns as MVC is. I would look into implementing an MVC pattern, which was originally designed for console-type applications. The controller becomes the central input handler, which then issues commands to command handlers and your model. The view would then be pure and true to form: only rendering information and nothing else.
If there is some reason why you cannot use an MVC approach, which I think would be ideal, and must use MVP, I can offer more advice on how you could fix your current implementation. However, I would strongly suggest looking into using MVC, as that pattern was originally designed to solve the very problem you are trying to solve: in console applications.
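To make that direction concrete, here is an entirely illustrative sketch of the console MVC shape, reusing your E/EA/EB types; the controller and renderer names are invented:

using System;

public class ElementController
{
    private readonly ElementRenderer renderer = new ElementRenderer();

    // The controller is the central input handler: it interprets the user's
    // choice, builds the model element, and asks the view to render it.
    public void HandleCreateElementCommand()
    {
        Console.WriteLine("Type of E : EA(1) or EB(2)?");
        int choice = int.Parse(Console.ReadLine());

        Console.Write("Element1 ? ");
        E element = choice == 1
            ? (E)new EA(Console.ReadLine())
            : new EB(int.Parse(Console.ReadLine()));

        renderer.Render(element);
    }
}

public class ElementRenderer
{
    // The view is pure: it only renders what it is given.
    public void Render(E element) => Console.WriteLine("Created a " + element.GetType().Name + ".");
}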