MediatR Notifications on ViewModel in WPF MVVM - C#

While implementing a WPF application I stumbled on the problem that my application needs some global data in every ViewModel. However, some of the ViewModels only need read access, while others need read/write access to this data. At first I stumbled upon the Microsoft idea of a SessionContext, like so:
public class SessionContext
{
    #region Public Members
    public static string UserName { get; set; }
    public static string Role { get; set; }
    public static Teacher CurrentTeacher { get; set; }
    public static Parent CurrentParent { get; set; }
    public static LocalStudent CurrentStudent { get; set; }
    public static List<LocalGrade> CurrentGrades { get; set; }
    #endregion

    #region Public Methods
    public static void Logon(string userName, string role)
    {
        UserName = userName;
        Role = role;
    }

    public static void Logoff()
    {
        UserName = "";
        Role = "";
        CurrentStudent = null;
        CurrentTeacher = null;
        CurrentParent = null;
    }
    #endregion
}
This isn't (in my opinion, at least) nicely testable, and it gets problematic if my global data grows (a thing that could easily happen in this application).
The next thing I found was an implementation of the Mediator pattern from this link. I liked the idea of the design Norbert is using there and thought about implementing something similar for my project. However, in this project I am already using the impressive MediatR NuGet package, which is also a Mediator implementation. So I thought "why reinvent the wheel" when I could just use a nice, well-tested Mediator. But here starts my real question: to push changes to the global data from other ViewModels to my read-only ViewModels, I would use notifications. That means:
public class ReadOnlyViewModel : INotificationHandler<Notification>
{
    // some members

    // global data
    public string Username { get; private set; }

    public Task Handle(Notification notification, CancellationToken token)
    {
        Username = notification.Username;
        return Task.CompletedTask;
    }
}
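For completeness, the notification itself would just be a plain MediatR message; a minimal sketch (the Username property is assumed from the handler above):

// a simple MediatR notification carrying the changed global value
public class Notification : INotification
{
    public string Username { get; set; }
}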
The question(s) now:
1. Is this good practice for MVVM? (It's just a feeling that doing this is wrong, because it feels like exposing business logic in the ViewModel.)
2. Is there a better way to separate this, so that my ViewModel doesn't need to implement 5 to 6 different INotificationHandler<,> interfaces?
Update:
As clarification to what I want to achieve here:
My goal is to implement a WPF application that manages some global data (let's say a username, as mentioned above) for one of its windows. Because I am using a DI container (and because of the kind of data this is), I have to register the service @mm8 proposed as a singleton. That, however, is a little bit problematic in the case (and I do have that case) where I need to open a new window that needs different global data at that time. That would mean I either have to change the lifetime to something like "kind of scoped", or break the single responsibility of the class by adding more fields for different purposes, or create n services for the n possible windows I might need to open. On the first idea, splitting the service: I would like to, because that would mitigate all the problems mentioned above, but it would make sharing the data problematic, because I don't know a reliable way to communicate this global data from the write service to the read service while something asynchronous or parallel is running in a background thread that could trigger the write service to update its data.

You could use a shared service that you inject your view models with. It can, for example, implement two interfaces, one for write operations and one for read-only operations, e.g.:
public interface IReadDataService
{
    object Read();
}

public interface IWriteDataService : IReadDataService
{
    void Write();
}

public class GlobalDataService : IReadDataService, IWriteDataService
{
    public object Read()
    {
        throw new NotImplementedException();
    }

    public void Write()
    {
        throw new NotImplementedException();
    }
}
You would then inject the view models that should have write access with an IWriteDataService (and the other ones with an IReadDataService):
public ViewModel(IWriteDataService dataService) { ... }
This solution makes the code both easy to understand and easy to test.
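To keep a single shared instance behind both interfaces, you can register the same singleton under each contract. A minimal sketch, assuming Microsoft.Extensions.DependencyInjection (any container that supports singleton lifetimes works the same way):

var services = new ServiceCollection();

// one shared instance...
services.AddSingleton<GlobalDataService>();

// ...exposed under both contracts, so readers and writers see the same data
services.AddSingleton<IReadDataService>(sp => sp.GetRequiredService<GlobalDataService>());
services.AddSingleton<IWriteDataService>(sp => sp.GetRequiredService<GlobalDataService>());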

Related

Interfaces: Convert My existing concrete code to an abstract code

I am working on a UWP app. I have a PCL that contains managers and services. My managers interact with my services and provide the output. In my services I use async/await calls for interacting with my API. I've created a dummy solution. The code is as below:
My Dummy Managers:
public class AccountManager
{
    public string uniqueId { get; set; }

    public int GetAccountId()
    {
        Services.AccountServices HelloAccount = new Services.AccountServices();
        return HelloAccount.GenerateAccountId(uniqueId);
    }
}

public class DummyManager
{
    public ICollection<string> GetDeviceNames(int accountId)
    {
        Services.NameService MyNameService = new Services.NameService(accountId);
        return MyNameService.ProvideNames();
    }
}
My Dummy Services:
internal class NameService
{
    public NameService(int Id)
    {
        AccountId = Id;
    }

    public int AccountId = 0;

    public ICollection<string> ProvideNames()
    {
        return new List<string>()
        {
            "Bob",
            "James",
            "Foo",
            "Bar"
        };
    }
}

internal class AccountServices
{
    public int GenerateAccountId(string uniqueID)
    {
        return 11;
    }
}
Now that I have my services and managers in the same structure as I use them, below is how I interact with my public managers while keeping the services internal:
In my UI MainPage CodeBehind:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    DataServices.Managers.AccountManager Hello = new DataServices.Managers.AccountManager();
    Hello.uniqueId = "AsBbCc"; // fetched from another service.
    var id = Hello.GetAccountId();

    DataServices.Managers.DummyManager Dummy = new DataServices.Managers.DummyManager();
    var names = Dummy.GetDeviceNames(id);
}
My question is that currently my MainPage is very tightly coupled to my managers, and even if I use the MVVM pattern, my ViewModel would be tightly coupled to them. How do I add a layer of abstraction? Which of these entities (managers, services, data bank) should be behind an interface that helps provide abstraction? I need help. I've uploaded a dummy solution for the same. Thanks :)
My entire dummy solution, for better understanding.
As shown here, the managers add little (in fact, no) value, so why have them? The book Refactoring explicitly talks about this situation and suggests the Inline Class refactoring.
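For illustration, a rough sketch of what the calling code might look like after inlining the managers (this assumes the services can be made public, and that they live in a DataServices.Services namespace):

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    // the manager indirection is gone; the services are used directly
    var accountService = new DataServices.Services.AccountServices();
    var id = accountService.GenerateAccountId("AsBbCc");

    var nameService = new DataServices.Services.NameService(id);
    var names = nameService.ProvideNames();
}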
How do I add a layer of abstraction?
That is quite a broad question, and depends on various circumstances, most important of which is: Which problem are you hoping to solve by adding a layer of abstraction?
FWIW, my book Dependency Injection in .NET contains a comprehensive MVVM example, although in WPF instead of UWP.

ASP.NET maintaining static variables

Recently we learned about AppDomain recycling in IIS and how it resets static variables to their default values (nulls, 0s, etc.).
We use some static variables that are initialized in a static constructor (first-time initialization of configuration values like "number of decimal places" and "administrator email" that are retrieved from the DB) and are then only read for the rest of the website's execution.
What's the best way of solving this problem? Some possible ideas:
1. Checking if the variable is null/0 at each retrieval (I don't like it because of the possible performance impact, the time spent adding this check to each variable, and the code overhead added to the project).
2. Somehow preventing AppDomain recycling (this reset logic doesn't happen with static variables in Windows Forms; shouldn't it work similarly, being the same language in both environments, at least in how static variables are managed?).
3. Using some other way of holding these variables (though we think that, for values used as a global reference by all users, static variables were the best option performance- and coding-wise).
4. Subscribing to an event that is triggered on those AppDomain recycles, so we can reinitialize all those variables (maybe the best option if recycling can't be prevented...).
Ideas?
I would go with the approach that you don't like.
Checking if the variable is null/0 at each retrieval (I don't like it because of the possible performance impact, the time spent adding this check to each variable, and the code overhead added to the project).
I think it's faster than retrieving from web.config, and you get a typed object. It's not really a performance hit, because you are not going to the database on every retrieval request; you only go to the database (or whatever the source is) when you find the value still set to its default.
Checking for null, wrapped into code:
public interface IMyConfig
{
    string Var1 { get; }
    string Var2 { get; }
}

public class MyConfig : IMyConfig
{
    private string _Var1;
    private string _Var2;

    public string Var1 { get { return _Var1; } }
    public string Var2 { get { return _Var2; } }

    private static object s_SyncRoot = new object();
    private static IMyConfig s_Instance;

    private MyConfig()
    {
        // load _Var1, _Var2 from the database here
    }

    public static IMyConfig Instance
    {
        get
        {
            if (s_Instance != null)
            {
                return s_Instance;
            }
            lock (s_SyncRoot)
            {
                // re-check inside the lock so two racing threads
                // don't both create an instance
                if (s_Instance == null)
                {
                    s_Instance = new MyConfig();
                }
            }
            return s_Instance;
        }
    }
}
Is there any reason why you can't store these values in your web.config file and use ConfigurationManager.AppSettings to retrieve them?
ConfigurationManager.AppSettings["MySetting"] ?? "defaultvalue";
In view of your edit, why not cache the required values when they're first retrieved?
var val = HttpContext.Cache["MySetting"];
if (val == null)
{
    val = ...; // database retrieval logic goes here
    HttpContext.Cache["MySetting"] = val;
}
It sounds like you need a write-through (or write-behind) cache, which can be done with static variables.
Whenever a user changes the value, write it back to the database. Then, whenever the AppPool is recycled (which is a normal occurrence and shouldn't be avoided), the static constructors can read the current values from the database.
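A minimal sketch of that write-through shape (ConfigRepository here is a hypothetical data-access helper, not an existing API):

public static class AppSettings
{
    public static int DecimalPlaces { get; private set; }

    static AppSettings()
    {
        // static constructors run once per AppDomain, so this re-reads
        // the current values after every recycle
        DecimalPlaces = ConfigRepository.LoadDecimalPlaces();
    }

    public static void UpdateDecimalPlaces(int value)
    {
        DecimalPlaces = value;                     // keep the cached copy current
        ConfigRepository.SaveDecimalPlaces(value); // write through to the database
    }
}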
One thing you'll have to consider: If you ever scale out to a web farm, you'll need to have some sort of "trigger" when a shared variable changes so the other servers on the farm can know to retrieve the new values from the server.
Comments on other parts of your question:
Checking if the variable is null/0 at each retrieval (I don't like it because of the possible performance impact, the time spent adding this check to each variable, and the code overhead added to the project).
If you use a write-through cache you won't need this, but in either case the time spent checking a static variable for 0 or null should be negligible.
[AppDomain recycling] doesn't happen with static variables in Windows Forms; shouldn't it work similarly, being the same language in both environments?
No, WebForms and WinForms are completely different platforms with different operating models. Web sites must be able to respond to many (up to millions of) concurrent users; WinForms apps are built for single-user access.
I've resolved this kind of issue by following a pattern similar to this. It enabled me to handle circumstances where the data could change. I set up my ISiteSettingRepository in the bootstrapper. In one application I get the configuration from an XML file, but in others I get it from the database, as and when I need it.
public class ApplicationSettings
{
    public ApplicationSettings()
    {
    }

    public ApplicationSettings(ApplicationSettings settings)
    {
        ApplicationName = settings.ApplicationName;
        EncryptionAlgorithm = settings.EncryptionAlgorithm;
        EncryptionKey = settings.EncryptionKey;
        HashAlgorithm = settings.HashAlgorithm;
        HashKey = settings.HashKey;
        Duration = settings.Duration;
        BaseUrl = settings.BaseUrl;
        Id = settings.Id;
    }

    public string ApplicationName { get; set; }
    public string EncryptionAlgorithm { get; set; }
    public string EncryptionKey { get; set; }
    public string HashAlgorithm { get; set; }
    public string HashKey { get; set; }
    public int Duration { get; set; }
    public string BaseUrl { get; set; }
    public Guid Id { get; set; }
}
Then a "Service" Interface to
public interface IApplicationSettingsService
{
    ApplicationSettings Get();
}

public class ApplicationSettingsService : IApplicationSettingsService
{
    private readonly ISiteSettingRepository _repository;

    public ApplicationSettingsService(ISiteSettingRepository repository)
    {
        _repository = repository;
    }

    public ApplicationSettings Get()
    {
        SiteSetting setting = _repository.GetAll();
        // MapToApplicationSettings is a mapping helper that copies the
        // persisted SiteSetting onto an ApplicationSettings (implementation omitted)
        return MapToApplicationSettings(setting);
    }
}
I would take a totally different approach, one that doesn't involve anything static.
First create a class to strongly-type the configuration settings you're after:
public class MyConfig
{
    public int DecimalPlaces { get; set; }
    public string AdministratorEmail { get; set; }
    //...
}
Then abstract away the persistence layer by creating some repository:
public interface IMyConfigRepository
{
    MyConfig Load();
    void Save(MyConfig settings);
}
The classes that can read and write these settings can then statically declare that they depend on an implementation of this repository:
public class SomeClass
{
    private readonly IMyConfigRepository _repo;

    public SomeClass(IMyConfigRepository repo)
    {
        _repo = repo;
    }

    public void DoSomethingThatNeedsTheConfigSettings()
    {
        var settings = _repo.Load();
        //...
    }
}
Now implement the repository interface the way you want (today you want the settings in a database, tomorrow might be serializing to a .xml file, and next year using a cloud service) and the config interface as you need it.
And you're set: all you need now is a way to bind the interface to its implementation. Here's a Ninject example (written in a NinjectModule-derived class' Load method override):
Bind<IMyConfigRepository>().To<MyConfigSqlRepository>();
Then, you can just swap the implementation for a MyConfigCloudRepository or a MyConfigXmlRepository implementation when/if you ever need one.
Being an ASP.NET application, just make sure you wire up those dependencies in your Global.asax file (at app start-up); then any class that has an IMyConfigRepository constructor parameter will be injected with a MyConfigSqlRepository, which will give you MyConfig objects that you can load and save as you please.
If you're not using an IoC container, then you would just new up the MyConfigSqlRepository at app start-up, and manually inject the instance into the constructors of the types that need it.
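A rough sketch of that manual wiring (assuming MyConfigSqlRepository has a parameterless constructor):

protected void Application_Start()
{
    // composition root: build the object graph by hand at start-up
    IMyConfigRepository configRepository = new MyConfigSqlRepository();
    var someClass = new SomeClass(configRepository);
    // hand someClass to whatever needs it from here
}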
The only thing with this approach is that if you don't already have a DI-friendly app structure, it might mean extensive refactoring: decoupling objects and eliminating the newing-up of dependencies makes unit tests much easier to focus on a single aspect, and dependencies much easier to mock, among other advantages.

Converting an existing instance of a class to a more concrete subclass

Situation: I have a large shrink-wrapped application that my company bought. It is supposed to be extensible, yada, yada. It has a DB, DAL and BLL in the form of SQL and DLLs. It also has an MVC project (the extensible part), but 95% of the "Model" part is in the DAL/BLL libraries.
Problem: I need to extend one of the "Models" located in the BLL. It is a User object with 47 properties, 0 methods and no constructor. What I started with was a simple derivation of their class, like:
public class ExtendedUser : BLL.DTO.User
{
    public bool IsSeller { get; set; }
    public bool IsAdmin { get; set; }
}
This works fine if I just create a new ExtendedUser. However, it is populated by another call into their BLL like:
BLL.DTO.User targetUser = UserClient.GetUserByID(User.Identity.Name, id);
I tried the straightforward brute-force attempt, which of course throws an InvalidCastException:
ExtendedUser targetUser = (ExtendedUser)UserClient.GetUserByID(User.Identity.Name, id);
I am drawing a complete blank on this very simple OO concept. I don't want to write a constructor that accepts the existing User object and then copies each of the properties into my extended object. I know there is a right way to do this. Can someone slap me upside the head and tell me the obvious?
TIA
If you do want to use inheritance, then with 47 properties, something like AutoMapper might help you copy all the values across - http://automapper.codeplex.com/ - this would allow you to use:
// setup
Mapper.CreateMap<BLL.DTO.User, ExtendedUser>();

// use
ExtendedUser extended = Mapper.Map<BLL.DTO.User, ExtendedUser>(user);
Alternatively, you might be better off using aggregation instead of inheritance - e.g.
public class AggregatedUser
{
    public bool IsSeller { get; set; }
    public bool IsAdmin { get; set; }
    public BLL.DTO.User User { get; set; }
}
What about this approach (basically Aggregation):
public sealed class ExtendedUser
{
    public ExtendedUser(BLL.DTO.User legacyUser)
    {
        this.LegacyUser = legacyUser;
    }

    public BLL.DTO.User LegacyUser { get; private set; }
}
I don't want to write a Constructor that accepts the existing User object then copies each of the properties into my extended object.
This is typically the "right" way to do this, unless you have compile-time access to the BLL. The problem is that the cast will never work: an ExtendedUser is a concrete type of User, but not every User is an ExtendedUser, which is what the cast would require in order to succeed.
You can handle this via aggregation (contain the instance of the User as a member), but not directly via inheritance.
This is often handled at compile time via partial classes. If the BLL is set up to create its classes (i.e. User) as partial classes, you can add your own logic in a separate file, which prevents this from being an issue. This is common practice with many larger frameworks, ORMs, etc.
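A sketch of what that looks like (this only works if User is declared partial and you can add a file to the same assembly):

// in the same assembly, alongside the BLL's partial User class
public partial class User
{
    public bool IsSeller { get; set; }
    public bool IsAdmin { get; set; }
}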

How to implement auditing in the business layer

I'm trying to implement basic auditing for a system where users can log in, change their passwords and emails, etc.
The functions I want to audit are all in the business layer, and I would like to create an Audit object that stores the datetime the function was called, along with the result.
I recently attended a conference, and one of the sessions was on well-crafted web applications; I am trying to implement some of the ideas. Basically, I am using an enum to return the result of the function, and a switch statement in the UI layer to update the UI. The functions use early returns, which leaves no single place for creating, setting and saving the audit.
My question is: what approaches do others take when auditing business functions, and what approach would you take if you had a function like mine? (If you say ditch it, I'll listen, but I'll be grumpy.)
The code looks a little like this:
public LoginResultEnum Login(string username, string password)
{
    User user = repo.getUser(username, password);
    if (user.failLogic1) { return failLogic1Enum; }
    if (user.failLogic2) { return failLogic2Enum; }
    if (user.failLogic3) { return failLogic3Enum; }
    if (user.failLogic4) { return failLogic4Enum; }

    user.AddAudit(new Audit(AuditTypeEnum.LoginSuccess));
    user.Save();
    return successEnum;
}
I could expand the if statements to create a new audit in each one, but then the function starts to get messy. I could do the auditing in the UI layer in the switch statement, but that seems wrong.
Is it really bad to stick it all in a try/catch with a finally, and use the finally to create the Audit object and set its information there, thus solving the early-return problem? My impression is that a finally is for cleaning up, not auditing.
My name is David, and I'm just trying to be a better coder. Thanks.
I can't say I have used it, but this seems like a candidate for Aspect-Oriented Programming (AOP). Basically, you can inject code into each method call for stuff like logging/auditing/etc. in an automated fashion.
Separately, a try/catch/finally block isn't ideal, but I would run a cost/benefit analysis to see if it is worth it. If you can reasonably refactor the code cheaply so that you don't have to use it, do that. If the cost is exorbitant, use the try/finally. I think a lot of people get caught up in the "best solution", but time/money are always constraints, so do what "makes sense".
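For what the try/finally shape from the question might look like, here is a sketch (the enum and Audit names are taken from the question's code; AuditTypeFor is a hypothetical helper that maps the result onto an AuditTypeEnum):

public LoginResultEnum Login(string username, string password)
{
    LoginResultEnum result = failLogic1Enum; // assume failure until a check passes
    User user = null;
    try
    {
        user = repo.getUser(username, password);
        // each early return also records the result for the finally block
        if (user.failLogic1) { return result = failLogic1Enum; }
        if (user.failLogic2) { return result = failLogic2Enum; }
        // ... remaining checks ...
        return result = successEnum;
    }
    finally
    {
        if (user != null)
        {
            // the audit is created in exactly one place, whichever return fired
            user.AddAudit(new Audit(AuditTypeFor(result)));
            user.Save();
        }
    }
}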
The issue with an enum is it isn't really extensible. If you add new components later, your Audit framework won't be able to handle the new events.
In our latest system using EF we created a basic POCO for our audit event in the entity namespace:
public class AuditEvent : EntityBase
{
    public string Event { get; set; }
    public virtual AppUser AppUser { get; set; }
    public virtual AppUser AdminUser { get; set; }
    public string Message { get; set; }

    private DateTime _timestamp;
    public DateTime Timestamp
    {
        get { return _timestamp == DateTime.MinValue ? DateTime.UtcNow : _timestamp; }
        set { _timestamp = value; }
    }

    public virtual Company Company { get; set; }
    // etc.
}
In our Task layer, we implemented an abstract base AuditEventTask:
internal abstract class AuditEventTask<TEntity>
{
    internal readonly AuditEvent AuditEvent;

    internal AuditEventTask()
    {
        AuditEvent = InitializeAuditEvent();
    }

    internal void Add(UnitOfWork unitOfWork)
    {
        if (unitOfWork == null)
        {
            throw new ArgumentNullException(Resources.UnitOfWorkRequired_Message);
        }
        new AuditEventRepository(unitOfWork).Add(AuditEvent);
    }

    private AuditEvent InitializeAuditEvent()
    {
        return new AuditEvent { Event = SetEvent(), Timestamp = DateTime.UtcNow };
    }

    internal abstract void Log(UnitOfWork unitOfWork, TEntity entity, string appUserName, string adminUserName);

    protected abstract string SetEvent();
}
Log must be implemented to record the data associated with the event, and SetEvent is implemented to force the derived task to set its event's type:
internal class EmailAuditEventTask : AuditEventTask<Email>
{
    internal override void Log(UnitOfWork unitOfWork, Email email, string appUserName, string adminUserName)
    {
        AppUser appUser = new AppUserRepository(unitOfWork).Find(au => au.Email.Equals(appUserName, StringComparison.OrdinalIgnoreCase));
        AuditEvent.AppUser = appUser;
        AuditEvent.Company = appUser.Company;
        AuditEvent.Message = email.EmailType;
        Add(unitOfWork);
    }

    protected override string SetEvent()
    {
        return AuditEvent.SendEmail;
    }
}
The hiccup here is the internal base task - the base task COULD be public so that later additions to the Task namespace could use it - but overall I think that gives you the idea.
When it comes to implementation, our other tasks determine when logging should occur, so in your case:
AuditEventTask<User> task = null;
if (user.failLogic1) { task = new FailLogin1AuditEventTask(fail 1 params); }
if (user.failLogic2) { task = new FailLogin2AuditEventTask(fail 2 params); }
if (user.failLogic3) { task = new FailLogin3AuditEventTask(etc); }
if (user.failLogic4) { task = new FailLogin4AuditEventTask(etc); }

task.Log(unitOfWork, user, appUserName, adminUserName);
user.Save();

Domain modelling - Implement an interface of properties or POCO?

I'm prototyping a tool that will import files via a SOAP API into a web-based application, and I have modelled what I'm trying to import via C# interfaces so I can wrap the web app's model data in something I can deal with.
public interface IBankAccount
{
    string AccountNumber { get; set; }
    ICurrency Currency { get; set; }
    IEntity Entity { get; set; }
    BankAccountType Type { get; set; }
}

internal class BankAccount : IBankAccount
{
    private readonly SomeExternalImplementation bankAccount;

    internal BankAccount(SomeExternalImplementation bankAccount)
    {
        this.bankAccount = bankAccount;
    }

    // Property implementations
}
I then have a repository that returns collections of IBankAccount (or whatever) and a factory class to create BankAccounts for me should I need them.
My question is: is this approach going to cause me a lot of pain down the line, and would it be better to create POCOs? I want to put all of this in a separate assembly and have a complete separation of data access and business logic, simply because I'm dealing with a moving target here regarding where the data will be stored online.
This is exactly the approach I use, and I've never had any problems with it. In my design, anything that comes out of the data access layer is abstracted as an interface (I refer to them as data transport contracts). In my domain model I then have static methods to create business entities from those data transport objects:
interface IFooData
{
    int FooId { get; set; }
}

public class FooEntity
{
    static public FooEntity FromDataTransport(IFooData data)
    {
        return new FooEntity(data.FooId, ...);
    }
}
It comes in quite handy where your domain model entities gather their data from multiple data contracts:
public class CompositeEntity
{
    static public CompositeEntity FromDataTransport(IFooData fooData, IBarData barData)
    {
        ...
    }
}
In contrast to your design, I don't provide factories to create concrete implementations of the data transport contracts, but rather provide delegates to write the values and let the repository worry about creating the concrete objects:
public class FooDataRepository
{
    public IFooData Insert(Action<IFooData> insertSequence)
    {
        var record = new ConcreteFoo();
        insertSequence.Invoke(record as IFooData);
        this.DataContext.Foos.InsertOnSubmit(record); // assuming LINQ to SQL in this case
        return record as IFooData;
    }
}
usage:
IFooData newFoo = FooRepository.Insert(f =>
{
    f.Name = "New Foo";
});
Although a factory implementation is an equally elegant solution, in my opinion. To answer your question: in my experience with a very similar approach, I've never come up against any major problems, and I think you're on the right track here :)
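For comparison, the factory route from the question might look roughly like this (a sketch; the names mirror the BankAccount example above):

public class BankAccountFactory
{
    public IBankAccount Create(SomeExternalImplementation externalAccount)
    {
        // wrap the external object behind the domain-facing interface
        return new BankAccount(externalAccount);
    }
}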
