Interfaces: Converting my existing concrete code to abstract code - C#

I am working on a UWP app. I have a PCL that has managers and services. My managers interact with my services and provide the output. In my services I use async/await calls for interacting with my API. I've created a dummy solution. The code is as below:
My Dummy Managers:
public class AccountManager
{
    public string uniqueId { get; set; }

    public int GetAccountId()
    {
        Services.AccountServices HelloAccount = new Services.AccountServices();
        return HelloAccount.GenerateAccountId(uniqueId);
    }
}
public class DummyManager
{
    public ICollection<string> GetDeviceNames(int accountId)
    {
        Services.NameService MyNameService = new Services.NameService(accountId);
        return MyNameService.ProvideNames();
    }
}
My Dummy Services:
internal class NameService
{
    public NameService(int Id)
    {
        AccountId = Id;
    }

    public int AccountId = 0;

    public ICollection<string> ProvideNames()
    {
        return new List<string>()
        {
            "Bob",
            "James",
            "Foo",
            "Bar"
        };
    }
}
internal class AccountServices
{
    public int GenerateAccountId(string uniqueID)
    {
        return 11;
    }
}
Now that my services and managers are structured the same way I actually use them, below is how I interact with my public managers while keeping the services internal:
In my UI MainPage CodeBehind:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    DataServices.Managers.AccountManager Hello = new DataServices.Managers.AccountManager();
    Hello.uniqueId = "AsBbCc"; // fetched from another service.
    var id = Hello.GetAccountId();

    DataServices.Managers.DummyManager Dummy = new DataServices.Managers.DummyManager();
    var names = Dummy.GetDeviceNames(id);
}
My question: currently my MainPage is tightly coupled with my managers, and even if I use the MVVM pattern, my ViewModel would then be tightly coupled with my managers instead. How do I add a layer of abstraction? Which of these entities (managers, services, DataBank) should become interfaces to provide that abstraction? I need help. I've uploaded a dummy solution for the same. Thanks :)
My entire dummy solution, for better understanding.

As shown here, the managers add little (in fact: no) value, so why have them? The book Refactoring explicitly discusses this situation and suggests the Inline Class refactoring.
How do I add a layer of abstraction?
That is quite a broad question, and depends on various circumstances, most important of which is: Which problem are you hoping to solve by adding a layer of abstraction?
FWIW, my book Dependency Injection in .NET contains a comprehensive MVVM example, although in WPF instead of UWP.
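That said, to make the Inline Class suggestion concrete, here is a minimal sketch. The names reuse the question's dummy code, the ViewModel class is hypothetical, and the DI wiring that would supply IAccountServices is omitted: the manager is inlined away, and the caller depends only on an interface.

public interface IAccountServices
{
    int GenerateAccountId(string uniqueId);
}

internal class AccountServices : IAccountServices
{
    public int GenerateAccountId(string uniqueId)
    {
        return 11;
    }
}

public class MainPageViewModel
{
    private readonly IAccountServices accountServices;

    public MainPageViewModel(IAccountServices accountServices)
    {
        this.accountServices = accountServices;
    }

    public int GetAccountId(string uniqueId)
    {
        // The manager added no behaviour, so the ViewModel talks
        // to the service through the interface directly.
        return accountServices.GenerateAccountId(uniqueId);
    }
}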

Related

Mediatr Notifications on ViewModel in WPF MVVM

While implementing a WPF application I stumbled on the problem that my application needs some global data in every ViewModel. However, some of the ViewModels only need read access, while others need read/write access to this field. At first I stumbled upon the Microsoft idea of a SessionContext, like so:
public class SessionContext
{
    #region Public Members
    public static string UserName { get; set; }
    public static string Role { get; set; }
    public static Teacher CurrentTeacher { get; set; }
    public static Parent CurrentParent { get; set; }
    public static LocalStudent CurrentStudent { get; set; }
    public static List<LocalGrade> CurrentGrades { get; set; }
    #endregion

    #region Public Methods
    public static void Logon(string userName, string role)
    {
        UserName = userName;
        Role = role;
    }

    public static void Logoff()
    {
        UserName = "";
        Role = "";
        CurrentStudent = null;
        CurrentTeacher = null;
        CurrentParent = null;
    }
    #endregion
}
This isn't (in my opinion at least) nicely testable, and it gets problematic if my global data grows (a thing that could likely happen in this application).
The next thing I found was an implementation of the Mediator pattern from this link. I liked the idea of the design Norbert is using there and thought about implementing something similar for my project. However, in this project I am already using the impressive MediatR NuGet package, which is also a Mediator implementation. So I thought "why reinvent the wheel" if I could just use a nice and well-tested Mediator. But here starts my real question: to send changes to the global data made by other ViewModels to my read-only ViewModels, I would use notifications. That means:
public class ReadOnlyViewModel : INotificationHandler<Notification>
{
    // some members

    // global data
    public string Username { get; private set; }

    public async Task Handle(Notification notification, CancellationToken token)
    {
        Username = notification.Username;
    }
}
The question(s) now:
1. Is this good practice for MVVM? (It's just a feeling that doing this is wrong, because it seems like exposing business logic in the ViewModel.)
2. Is there a better way to separate this, so that my ViewModel doesn't need to inherit 5 to 6 different INotificationHandler<> implementations?
Update:
As clarification of what I want to achieve here:
My goal is to implement a WPF application that manages some global data (let's say a username, as mentioned above) for one of its windows. Because I am using a DI container (and because of the kind of data this is), I have to register the service #mm8 proposed as a singleton. That, however, is a little problematic in the case (and I have that case) where I need to open a new window that needs different global data at that time. That would mean I either change the lifetime to something like "kind of scoped", or break the single responsibility of the class by adding more fields for different purposes, or create n services for the n possible windows I may need to open. On the first idea, splitting the service: I would like to, because that would mitigate all the problems mentioned above, but it would make sharing the data problematic, because I don't know a reliable way to communicate the global data from the write service to the read service while something async or parallel is running in a background thread that could trigger the write service to update its data.
You could use a shared service that you inject into your view models. It could, for example, implement two interfaces, one for write operations and one for read-only operations, e.g.:
public interface IReadDataService
{
    object Read();
}

public interface IWriteDataService : IReadDataService
{
    void Write();
}

public class GlobalDataService : IReadDataService, IWriteDataService
{
    public object Read()
    {
        throw new NotImplementedException();
    }

    public void Write()
    {
        throw new NotImplementedException();
    }
}
You would then inject the view models that should have write access with an IWriteDataService (and the other ones with an IReadDataService):
public ViewModel(IWriteDataService dataService) { ... }
This solution both makes the code easy to understand and easy to test.
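One wiring detail worth calling out. The sketch below assumes Microsoft.Extensions.DependencyInjection, but any container has an equivalent: both interfaces must resolve to the same singleton instance, otherwise the readers and the writers end up looking at two different copies of the data.

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<GlobalDataService>();
services.AddSingleton<IReadDataService>(sp => sp.GetRequiredService<GlobalDataService>());
services.AddSingleton<IWriteDataService>(sp => sp.GetRequiredService<GlobalDataService>());

ServiceProvider provider = services.BuildServiceProvider();
// Both calls below hand back the same GlobalDataService instance.
IReadDataService reader = provider.GetRequiredService<IReadDataService>();
IWriteDataService writer = provider.GetRequiredService<IWriteDataService>();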

C# Processing same object with different "processors" a flyweight pattern?

I've been doing a lot of research on different design patterns and I'm trying to determine the correct way of doing this.
I have an image-uploading MVC app in development that needs to process the image in several different ways, such as creating a thumbnail and saving a database record. Would the best way to approach this be via a flyweight pattern? Using this as an example:
var image = new Image();
List<IProcessor> processors = processorFactory.GetProcessors(ImageType.Jpeg);

foreach (IProcessor processor in processors)
{
    processor.Process(image);
}
I have a second part to this question as well. What if a processor has smaller, related "sub-processors"? An example that I have in my head would be a book generator:
I have a book generator
that has page generators
that has paragraph generators
that has sentence generators
Would this be a flyweight pattern as well? How would I handle the traversal of that tree?
EDIT
I asked this question below but I wanted to add it here:
All the examples that I've seen of the composite pattern seem to relate to the handling of values, while the flyweight pattern seems to deal with processing (or sharing) an object's state. Am I just reading into the examples too much? Would combining the patterns be the solution?
I can at least handle the second part of the question. To expand a tree (or a composite), use simple recursion.
void Recursion(TreeItem parent)
{
    // First call the same function for all the children.
    // This will take us all the way to the bottom of the tree.
    // The foreach loop won't execute when we're at the bottom.
    foreach (TreeItem child in parent.Children)
    {
        Recursion(child);
    }

    // When there are no more children (since we're at the bottom),
    // finally perform the task you want. This will slowly work
    // its way up the entire tree, from the bottom-most items to the top.
    Console.WriteLine(parent.Name);
}
What you're describing could have some flyweights representing each of those nested classes. But in this case that would be more of an implementation detail. In my experience, flyweights are usually called for at the architectural or implementation level, but rarely as an element of design.
Consider this interface and class:
public interface IMyData
{
    IdType MyId { get; }
    byte[] BlobData { get; }
    long SizeOfBlob { get; }
}

public class MyData : IMyData
{
    public IdType MyId { get; private set; }
    public byte[] BlobData { get; set; }
    public long SizeOfBlob { get { return BlobData.LongLength; } }
}
In your multi-tiered application, this object needs to travel from the source database to a manager's iPhone for approval based on the blob size, and then to an accounting system for billing. So instead of transporting the whole thing to the iPhone app, you substitute the flyweight:
public class MyDataFlyWeight : IMyData
{
    public MyDataFlyWeight(IdType myId, long blobSize)
    {
        MyId = myId;
        SizeOfBlob = blobSize;
    }

    public IdType MyId { get; private set; }

    // The blob itself never travels with the flyweight.
    public byte[] BlobData
    {
        get { throw new NotImplementedException(); }
    }

    public long SizeOfBlob { get; private set; }
}
By having both implement IMyData, and by building the system against the interface and not the concrete type (you did this, right?!), you can use MyDataFlyWeight objects in the iPhone app and MyData objects in the rest of the system. All you have to do is properly initialize MyDataFlyWeight with the blob size.
The architecture which calls for an iPhone app would dictate that a flyweight is used within the iPhone app.
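To make the "build against the interface" point concrete, here is a small usage sketch (the method and the threshold are hypothetical): the caller neither knows nor cares which implementation it was handed.

void CheckApproval(IMyData data)
{
    // On the iPhone 'data' is a MyDataFlyWeight; elsewhere it is the
    // full MyData. This code can't tell the difference, and it never
    // touches BlobData, so the flyweight never throws.
    if (data.SizeOfBlob > 10000000)
    {
        Console.WriteLine("Blob " + data.MyId + " needs manager approval.");
    }
}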
In addition, consider the newer Lazy<T> class:
public class MyData : IMyData
{
    public IdType MyId { get; private set; }
    private readonly Lazy<byte[]> _blob;

    public MyData(IdType myId)
    {
        MyId = myId;
        // A field initializer can't reference an instance member like MyId,
        // so the lazy loader is wired up in the constructor instead.
        _blob = new Lazy<byte[]>(() => StaticBlobService.GetBlob(MyId));
    }

    public byte[] BlobData { get { return _blob.Value; } }
    public long SizeOfBlob { get { return BlobData.LongLength; } }
}
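With that in place, nothing is fetched until the blob is actually touched (usage sketch, assuming the constructor shown above and an existing someId):

var data = new MyData(someId);   // cheap: no blob has been loaded yet
long size = data.SizeOfBlob;     // first touch loads it via StaticBlobService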
This is an example of using the flyweight purely as an implementation detail.

Business object with context or not?

Which one is the preferred way to implement a business object (and why)?
Without separate "context"
class Product
{
    public string Code { get; set; }

    public void Save()
    {
        using (IDataService service = IoC.GetInstance<IDataService>())
        {
            service.Save(this);
        }
    }
}
And usage would be:
Product p = new Product();
p.Code = "A1";
p.Save();
With separate "context"
class Product
{
    private IContext context;

    public Product(IContext context)
    {
        this.context = context;
    }

    public string Code { get; set; }

    public void Save()
    {
        this.context.Save(this);
    }
}
And usage would be:
using (IContext context = IoC.GetInstance<IContext>())
{
    Product p = new Product(context);
    p.Code = "A1";
    p.Save();
}
This is all happening in the BL layer (except the usage examples); it has nothing to do with the database etc. IDataService is an interface to the data layer for saving the business object "somewhere". IContext basically wraps IDataService somehow. The actual business objects are more complex, with more properties and references to each other (like Order -> OrderRow <- Product).
My opinion is that the first approach is (too) simple, and the second choice gives more control outside a single business object instance...? Are there any guidelines for something like this?
I personally opt for a third version, where the object itself does not know how to save itself but instead relies on another component to save it. This becomes interesting when there are multiple ways to save an object, say to a database, a JSON stream, or an XML stream. Such objects are usually referred to as serializers.
So in your case, I would go for as simple as this:
class Product
{
    public string Code { get; set; }
}
A serializer for IContext-based saving would be:
class ContextSerializer
{
    public void SaveProduct(Product prod)
    {
        using (IContext context = IoC.GetInstance<IContext>())
        {
            context.Save(prod);
        }
    }
}
usage would be:
public void SaveNewProduct(string code)
{
    var prod = new Product() { Code = code };
    var contextSerializer = new ContextSerializer();
    contextSerializer.SaveProduct(prod);
}
This prevents the object from holding on to the context (the field in your example) and keeps your business objects simple. It also separates concerns.
If you get into the situation where you have inheritance in your business objects, consider the Visitor Pattern.
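For completeness, a minimal sketch of what that could look like here. The subclass and visitor names are hypothetical, but the double dispatch is the point: each subclass routes itself to the right save logic without the serializer switching on types.

interface IProductVisitor
{
    void Visit(Product product);
    void Visit(DiscountedProduct product);
}

class Product
{
    public string Code { get; set; }
    public virtual void Accept(IProductVisitor visitor) { visitor.Visit(this); }
}

class DiscountedProduct : Product // hypothetical subclass
{
    public decimal Discount { get; set; }

    // The override gives double dispatch: the runtime type of the
    // product picks the matching Visit overload.
    public override void Accept(IProductVisitor visitor) { visitor.Visit(this); }
}

class SerializingProductVisitor : IProductVisitor
{
    public void Visit(Product prod)
    {
        using (IContext context = IoC.GetInstance<IContext>())
        {
            context.Save(prod);
        }
    }

    public void Visit(DiscountedProduct prod)
    {
        // Persist the discount-specific data here, then the base product.
        using (IContext context = IoC.GetInstance<IContext>())
        {
            context.Save(prod);
        }
    }
}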

How to implement auditing in the business layer

I'm trying to implement basic auditing for a system where users can log in, change their passwords and emails, etc.
The functions I want to audit are all in the business layer, and I would like to create an Audit object that stores the datetime the function was called, including the result.
I recently attended a conference, and one of the sessions was on well-crafted web applications; I am trying to implement some of the ideas. Basically, I am using an enum to return the result of the function, and a switch statement to update the UI in that layer. The functions return early, which doesn't leave any single point for creating, setting, and saving the audit.
My question is: what approaches do others take when auditing business functions, and what approach would you take if you had a function like mine? (If you say ditch it I'll listen, but I'll be grumpy.)
The code looks a little like this:
// LoginResultEnum is assumed here; the question describes returning an enum.
public LoginResultEnum Login(string username, string password)
{
    User user = repo.getUser(username, password);

    if (user.failLogic1) { return failLogic1Enum; }
    if (user.failLogic2) { return failLogic2Enum; }
    if (user.failLogic3) { return failLogic3Enum; }
    if (user.failLogic4) { return failLogic4Enum; }

    user.AddAudit(new Audit(AuditTypeEnum.LoginSuccess));
    user.Save();
    return successEnum;
}
I could expand the if statements to create a new audit in each one, but then the function starts to get messy. I could do the auditing in the UI layer in the switch statement, but that seems wrong.
Is it really bad to stick it all in a try/catch with a finally, and use the finally to create the Audit object and set its information in there, thus solving the early-return problem? My impression is that a finally is for cleaning up, not auditing.
My name is David, and I'm just trying to be a better coder. Thanks.
I can't say I have used it, but this seems like a candidate for Aspect-Oriented Programming (AOP). Basically, you can inject code into each method call for things like logging/auditing in an automated fashion.
Separately, a try/catch/finally block isn't ideal, but I would run a cost/benefit analysis to see if it is worth it. If you can reasonably refactor the code cheaply so that you don't have to use it, do that. If the cost is exorbitant, use the try/finally. I think a lot of people get caught up in the "best solution", but time/money are always constraints, so do what makes sense.
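If a full AOP framework feels heavy, you can approximate the idea with a small wrapper that audits the outcome in one place, whichever early return fired. This is only a sketch: the Audited helper, the auditRepo field, and the three-argument Audit constructor are assumptions built on the question's code, not an existing API.

public TResult Audited<TResult>(string eventName, Func<TResult> action)
{
    TResult result = action();

    // Single audit point, regardless of which early return fired inside action.
    auditRepo.Save(new Audit(eventName, result.ToString(), DateTime.UtcNow));
    return result;
}

Usage keeps the Login body and all of its early returns untouched:

var outcome = Audited("Login", () => Login(username, password));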
The issue with an enum is that it isn't really extensible. If you add new components later, your audit framework won't be able to handle the new events.
In our latest system using EF we created a basic POCO for our audit event in the entity namespace:
public class AuditEvent : EntityBase
{
    public string Event { get; set; }
    public virtual AppUser AppUser { get; set; }
    public virtual AppUser AdminUser { get; set; }
    public string Message { get; set; }

    private DateTime _timestamp;
    public DateTime Timestamp
    {
        get { return _timestamp == DateTime.MinValue ? DateTime.UtcNow : _timestamp; }
        set { _timestamp = value; }
    }

    public virtual Company Company { get; set; }
    // etc.
}
In our Task layer, we implemented an abstract base AuditEventTask:
internal abstract class AuditEventTask<TEntity>
{
    internal readonly AuditEvent AuditEvent;

    internal AuditEventTask()
    {
        AuditEvent = InitializeAuditEvent();
    }

    internal void Add(UnitOfWork unitOfWork)
    {
        if (unitOfWork == null)
        {
            throw new ArgumentNullException(Resources.UnitOfWorkRequired_Message);
        }

        new AuditEventRepository(unitOfWork).Add(AuditEvent);
    }

    private AuditEvent InitializeAuditEvent()
    {
        return new AuditEvent { Event = SetEvent(), Timestamp = DateTime.UtcNow };
    }

    internal abstract void Log(UnitOfWork unitOfWork, TEntity entity, string appUserName, string adminUserName);

    protected abstract string SetEvent();
}
Log must be implemented to record the data associated with the event, and SetEvent is implemented to force the derived task to set its event type:
internal class EmailAuditEventTask : AuditEventTask<Email>
{
    internal override void Log(UnitOfWork unitOfWork, Email email, string appUserName, string adminUserName)
    {
        AppUser appUser = new AppUserRepository(unitOfWork).Find(au => au.Email.Equals(appUserName, StringComparison.OrdinalIgnoreCase));
        AuditEvent.AppUser = appUser;
        AuditEvent.Company = appUser.Company;
        AuditEvent.Message = email.EmailType;
        Add(unitOfWork);
    }

    protected override string SetEvent()
    {
        return AuditEvent.SendEmail;
    }
}
The hiccup here is the internal base task - the base task COULD be public so that later additions to the Task namespace could use it - but overall I think that gives you the idea.
When it comes to implementation, our other tasks determine when logging should occur, so in your case:
AuditEventTask task;
if (user.failLogic1) { task = new FailLogin1AuditEventTask(fail 1 params); }
if (user.failLogic2) { task = new FailLogin2AuditEventTask(fail 2 params); }
if (user.failLogic3) { task = new FailLogin3AuditEventTask(etc); }
if (user.failLogic4) { task = new FailLogin4AuditEventTask(etc); }
task.Log();
user.Save();

Domain modelling - Implement an interface of properties or POCO?

I'm prototyping a tool that will import files via a SOAP API into a web-based application, and I have modelled what I'm trying to import via C# interfaces so I can wrap the web app's model data in something I can deal with.
public interface IBankAccount
{
    string AccountNumber { get; set; }
    ICurrency Currency { get; set; }
    IEntity Entity { get; set; }
    BankAccountType Type { get; set; }
}

internal class BankAccount : IBankAccount
{
    private readonly SomeExternalImplementation bankAccount;

    public BankAccount(SomeExternalImplementation bankAccount)
    {
        this.bankAccount = bankAccount;
    }

    // Property implementations
}
I then have a repository that returns collections of IBankAccount or whatever and a factory class to create BankAccounts for me should I need them.
My question is: is this approach going to cause me a lot of pain down the line, and would it be better to create POCOs? I want to put all of this in a separate assembly and have a complete separation of data access and business logic, simply because I'm dealing with a moving target here regarding where the data will be stored online.
This is exactly the approach I use, and I've never had any problems with it. In my design, anything that comes out of the data access layer is abstracted as an interface (I refer to them as data transport contracts). In my domain model I then have static methods to create business entities from those data transport objects:
interface IFooData
{
    int FooId { get; set; }
}

public class FooEntity
{
    public static FooEntity FromDataTransport(IFooData data)
    {
        return new FooEntity(data.FooId, ...);
    }
}
It comes in quite handy where your domain model entities gather their data from multiple data contracts:
public class CompositeEntity
{
    public static CompositeEntity FromDataTransport(IFooData fooData, IBarData barData)
    {
        ...
    }
}
In contrast to your design, I don't provide factories to create concrete implementations of the data transport contracts; rather, I provide delegates to write the values and let the repository worry about creating the concrete objects:
public class FooDataRepository
{
    public IFooData Insert(Action<IFooData> insertSequence)
    {
        var record = new ConcreteFoo();
        insertSequence.Invoke(record as IFooData);
        this.DataContext.Foos.InsertOnSubmit(record); // Assuming LINQ to SQL in this case.
        return record as IFooData;
    }
}
usage:
IFooData newFoo = FooRepository.Insert(f =>
{
    f.Name = "New Foo";
});
That said, a factory implementation is an equally elegant solution in my opinion. To answer your question: in my experience with a very similar approach, I've never come up against any major problems, and I think you're on the right track here :)
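For comparison, the factory variant mentioned above could look roughly like this (names are hypothetical; it assumes the data contract also exposes Name, as the usage snippet implies):

public interface IFooDataFactory
{
    IFooData Create(string name);
}

internal class ConcreteFooFactory : IFooDataFactory
{
    public IFooData Create(string name)
    {
        // Callers only ever see the IFooData contract, never ConcreteFoo.
        return new ConcreteFoo { Name = name };
    }
}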
