Business object with context or not? - c#

Which is the preferred way to implement a business object (and why)?
Without separate "context"
class Product
{
public string Code { get; set; }
public void Save()
{
using (IDataService service = IoC.GetInstance<IDataService>())
{
service.Save(this);
}
}
}
And usage would be:
Product p = new Product();
p.Code = "A1";
p.Save();
With separate "context"
class Product
{
private IContext context;
public Product(IContext context)
{
this.context = context;
}
public string Code { get; set; }
public void Save()
{
this.context.Save(this);
}
}
And usage would be:
using (IContext context = IoC.GetInstance<IContext>())
{
Product p = new Product(context);
p.Code = "A1";
p.Save();
}
All of this happens in the BL layer (except the usage examples); it has nothing to do with the database itself. IDataService is the interface to the data layer that saves the business object "somewhere". IContext basically wraps IDataService somehow. The actual business objects are more complex, with more properties and references to each other (like Order -> OrderRow <- Product).
My opinion is that the first approach is (too) simple, and the second gives more control outside a single business object instance...? Are there any guidelines for something like this?

I personally opt for a third version, where the object itself does not know how to save itself but instead relies on another component to save it. This becomes interesting when there are multiple ways to save an object, say to a database, a JSON stream, or an XML stream. Such objects are usually referred to as Serializers.
So in your case, I would go for something as simple as this:
class Product
{
public string Code { get; set; }
}
A serializer for IContext-based saving would be:
class ContextSerializer
{
public void SaveProduct(Product prod)
{
using(IContext context = IoC.GetInstance<IContext>())
{
context.Save(prod);
}
}
}
usage would be:
public void SaveNewProduct(string code)
{
var prod = new Product() { Code = code };
var contextSerializer = new ContextSerializer();
contextSerializer.SaveProduct(prod);
}
This prevents the object from holding on to the context (the field in your example) and keeps your business objects simple. It also separates concerns.
If you get into the situation where you have inheritance in your business objects, consider the Visitor Pattern.
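As a rough sketch of how a visitor-based serializer could look (the Order type and the IContext usage below are illustrative assumptions, not code from the question): each business object accepts a visitor, and the serializer implements one Visit overload per concrete type.
// Hedged sketch of the Visitor pattern applied to saving business objects.
// Order is a hypothetical sibling of Product; IContext is assumed from the question.
interface IBusinessObjectVisitor
{
    void Visit(Product product);
    void Visit(Order order);
}
abstract class BusinessObject
{
    public abstract void Accept(IBusinessObjectVisitor visitor);
}
class Product : BusinessObject
{
    public string Code { get; set; }
    public override void Accept(IBusinessObjectVisitor visitor) => visitor.Visit(this);
}
class Order : BusinessObject
{
    public override void Accept(IBusinessObjectVisitor visitor) => visitor.Visit(this);
}
// One visitor per persistence target; an XML or JSON serializer visitor could sit alongside it.
class ContextSerializerVisitor : IBusinessObjectVisitor
{
    private readonly IContext context;
    public ContextSerializerVisitor(IContext context) { this.context = context; }
    public void Visit(Product product) => context.Save(product);
    public void Visit(Order order) => context.Save(order);
}
Usage would then be businessObject.Accept(new ContextSerializerVisitor(context)), so the business objects still know nothing about how they are saved.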

Related

Best practice to manage entities values and value changing events

A little introduction: we have a complex entity and overgrown business logic related to it, with various fields that we can change and fields that are updated from external project management software (PMS) like MS Project and some others.
The problem is that it's hard to centralize the business logic for changing each field, because changing one field can affect other fields, and some fields are calculated but should only be calculated in certain business scenarios. Also, different synchronization processes use different business logic that depends on the external data of the specific PMS.
At the moment we have the following ways of changing fields in our solution:
Constructor with parameters and private parameterless constructor
public class SomeEntity
{
public string SomeField;
private SomeEntity ()
{
}
public SomeEntity (string someField)
{
SomeField = someField;
}
}
Private set with public method to change field value
public class SomeEntity
{
public string SomeField {get; private set;}
public void SetSomeField(string newValue)
{
// there may be some checks
if (string.IsNullOrEmpty(newValue))
{
throw new Exception();
}
SomeField = newValue;
}
}
Event methods that perform operations and set some fields
public class SomeEntity
{
public string SomeField { get; private set; }
public string SomePublishedField { get; private set; }
public void PublishEntity(string publishValue)
{
SomeField = publishValue;
SomePublishedField = $"{publishValue} {DateTime.Now}";
}
}
Public setters
public class SomeEntity
{
public string SomeField { get; set; }
}
Services that implement business logic:
public class SomeService : ISomeService
{
    private DbContext _dbContext;
    private ISomeApprovalsService _approvalsService;

    public async Task UpdateFromMspAsync(MspSomeEntity mspEntity,
        CancellationToken cancellationToken = default)
    {
        var entity = await _dbContext.SomeEntities
            .Include(e => e.Process)
            .SingleAsync(e => e.MspId == mspEntity.Id, cancellationToken);

        switch (mspEntity.Status)
        {
            case MspStatusEnum.Cancelled:
                entity.Process.State = ProcessStateEnum.Rejected;
                entity.Status = EntityStatusEnum.Stopped;
                break;
            case MspStatusEnum.Accepted:
                _approvalsService.SendApprovals(entity.Process);
                entity.Status = EntityStatusEnum.Finished;
                break;
        }

        await _dbContext.SaveChangesAsync(cancellationToken);
    }
}
State machine inside entity
public class SomeEntity
{
    private StateMachine<TriggerEnum, StateEnum> _stateMachine;

    public SomeEntity()
    {
        ConfigureStateMachine();
    }

    public string SomeField1 { get; set; }
    public string SomeField2 { get; set; }
    public string SomeField3 { get; set; }

    private void ConfigureStateMachine()
    {
        _stateMachine.Configure(StateEnum.Processing)
            .OnEntry(s => SomeField1 = null)
            .Permit(TriggerEnum.Approve, StateEnum.Approved);
        _stateMachine.Configure(StateEnum.Approved)
            .OnEntry(s => SomeField1 = SomeField2 + SomeField3)
            .Permit(TriggerEnum.Publish, StateEnum.Finished)
            .Permit(TriggerEnum.Cancel, StateEnum.Canceled);
        // etc
    }

    public void Trigger(TriggerEnum trigger) => _stateMachine.Fire(trigger);
}
State machine as a service, to prevent business logic from leaking into the entity.
var machine = _services.GetService<IStateMachine<SomeEntity, TriggerEnum>>();
var entity = await _dbContext.SomeEntities.FirstAsync();
IAttachedStateMachine<TriggerEnum> attachedMachine = machine.AttachToEntity(entity);
attachedMachine.Trigger(TriggerEnum.Publish);
Having so many ways of changing values is architecturally wrong, and we want to refactor this, but to change the approach a best practice has to be chosen.
Please share your experience of resolving similar situations.
Update: I found the DDD approach called the "aggregate root". It looks good, but only on paper (in theory), and it works well with simple examples like "User, Customer, ShoppingCart, Order". In practice, for every private setter you end up creating a setter method (like in #2 of my examples), plus different methods for every external system you work with. Not to mention business logic inside a database entity, which violates SOLID's single responsibility principle.
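For readers unfamiliar with the pattern, a minimal aggregate-root sketch (assuming the classic Order/OrderRow example rather than the entity above) looks like this; state changes go through intent-revealing methods that guard the invariants instead of public setters:
// Minimal aggregate-root sketch; Order/OrderRow are illustrative, not from the project above.
using System;
using System.Collections.Generic;
public enum OrderStatus { Draft, Submitted }
public class Order
{
    private readonly List<OrderRow> _rows = new List<OrderRow>();
    public OrderStatus Status { get; private set; } = OrderStatus.Draft;
    public IReadOnlyCollection<OrderRow> Rows => _rows;
    public void AddRow(string productCode, int quantity)
    {
        if (Status != OrderStatus.Draft)
            throw new InvalidOperationException("Rows can only be added to a draft order.");
        _rows.Add(new OrderRow(productCode, quantity));
    }
    public void Submit()
    {
        if (_rows.Count == 0)
            throw new InvalidOperationException("An empty order cannot be submitted.");
        Status = OrderStatus.Submitted;
    }
}
public class OrderRow
{
    public OrderRow(string productCode, int quantity)
    {
        ProductCode = productCode;
        Quantity = quantity;
    }
    public string ProductCode { get; }
    public int Quantity { get; }
}
Whether this scales beyond simple examples is exactly the concern raised in the update above; the sketch only shows what the pattern prescribes.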

Interfaces: Convert My existing concrete code to an abstract code

I am working on a UWP app. I have a PCL that has managers and services. My managers interact with my services and provide the output. In my services I use async/await calls to interact with my API. I've created a dummy solution; the code is below:
My Dummy Managers:
public class AccountManager
{
public string uniqueId { get; set; }
public int GetAccountId()
{
Services.AccountServices HelloAccount = new Services.AccountServices();
return HelloAccount.GenerateAccountId(uniqueId);
}
}
public class DummyManager
{
public ICollection<string> GetDeviceNames(int accountId)
{
Services.NameService MyNameService = new Services.NameService(accountId);
return MyNameService.ProvideNames();
}
}
My Dummy Services:
internal class NameService
{
public NameService(int Id)
{
AccountId = Id;
}
public int AccountId = 0;
public ICollection<string> ProvideNames()
{
return new List<string>()
{
"Bob",
"James",
"Foo",
"Bar"
};
}
}
internal class AccountServices
{
public int GenerateAccountId(string uniqueID)
{
return 11;
}
}
Now that my dummy services and managers have the same structure as the ones I actually use, below is how I interact with my public managers while keeping the services internal:
In my UI MainPage CodeBehind:
protected override void OnNavigatedTo(NavigationEventArgs e)
{
DataServices.Managers.AccountManager Hello = new DataServices.Managers.AccountManager();
Hello.uniqueId = "AsBbCc"; //fetched from another service.
var id = Hello.GetAccountId();
DataServices.Managers.DummyManager Dummy = new DataServices.Managers.DummyManager();
var names = Dummy.GetDeviceNames(id);
}
My question: currently my MainPage is very tightly coupled to my managers, and even if I use the MVVM pattern, my ViewModel would be just as tightly coupled to them. How do I add a layer of abstraction? Which of these entities (managers, services, DataBank) should become an interface that provides the abstraction? I need help. I've uploaded a dummy solution for the same. Thanks :)
My Entire dummy solution for better understanding.
As shown here, the managers add little (in fact: no) value, so why have them? Refactoring explicitly talks about this situation and suggests the Inline Class refactoring.
How do I add a layer of abstraction?
That is quite a broad question, and the answer depends on various circumstances, the most important of which is: which problem are you hoping to solve by adding a layer of abstraction?
FWIW, my book Dependency Injection in .NET contains a comprehensive MVVM example, although in WPF instead of UWP.
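As a rough sketch of one common way to add that abstraction (the interface and class names below are invented for illustration, with the manager layer inlined into the service as the Inline Class refactoring suggests), the ViewModel depends only on an interface and receives the implementation through its constructor:
// Hedged sketch: the ViewModel depends on an abstraction, not on concrete managers/services.
using System.Collections.Generic;
public interface IDeviceDirectory
{
    int GetAccountId(string uniqueId);
    ICollection<string> GetDeviceNames(int accountId);
}
// In the real solution this implementation could stay internal to the library
// and be registered with the container there.
public class DeviceDirectory : IDeviceDirectory
{
    public int GetAccountId(string uniqueId) => 11; // stand-in for AccountServices.GenerateAccountId
    public ICollection<string> GetDeviceNames(int accountId) =>
        new List<string> { "Bob", "James", "Foo", "Bar" };
}
public class MainViewModel
{
    private readonly IDeviceDirectory _directory;
    public MainViewModel(IDeviceDirectory directory)
    {
        _directory = directory;
    }
    public ICollection<string> LoadDeviceNames(string uniqueId)
    {
        var accountId = _directory.GetAccountId(uniqueId);
        return _directory.GetDeviceNames(accountId);
    }
}
The composition root (or a ViewModel locator) decides which IDeviceDirectory implementation to pass in, which is what keeps the page and the ViewModel loosely coupled and testable.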

Manual Creation of Classes Vs. DBML

I'm currently creating objects for an application of mine, and this question came to mind. I know that using DBMLs instead of manually created classes (see the class below) can speed up my application development, but I'm really confused about the other advantages and disadvantages of using DBMLs over manually created classes like the one below. Thanks to everyone who helps. :)
[Serializable]
public class Building
{
public Building()
{
LastEditDate = DateTime.Now.Date;
LastEditUser = GlobalData.CurrentUser.FirstName + " " + GlobalData.CurrentUser.LastName;
}
public int BuildingID { get; set; }
public string BuildingName { get; set; }
public bool IsActive { get; set; }
public DateTime LastEditDate { get; set; }
public string LastEditUser { get; set; }
public static bool CheckIfBuildingNameExists(string buildingName, int buildingID = 0)
{
return BuildingsDA.CheckIfBuildingNameExists(buildingName, buildingID);
}
public static Building CreateTwin(Building building)
{
return CloningUtility.DeepCloner.CreateDeepClone(building);
}
public static List<Building> GetBuildingList()
{
return BuildingsDA.GetBuildingList();
}
public static List<Building> GetBuildingList(bool flag)
{
return BuildingsDA.GetBuildingList(flag).ToList();
}
public static Building SelectBuildingRecord(int buildingId)
{
return BuildingsDA.SelectBuilding(buildingId);
}
public static void InsertBuildingRecord(Building building)
{
BuildingsDA.InsertBuilding(building);
}
public static void UpdateBuildingRecord(Building building)
{
BuildingsDA.UpdateBuilding(building);
}
public static void DeleteBuildingRecord(int building)
{
BuildingsDA.DeleteBuilding(building);
}
}
and my DAL is like this:
internal static class BuildingsDA
{
internal static Building SelectBuilding(int buildingId)
{
SqlCommand commBuildingSelector = ConnectionManager.MainConnection.CreateCommand();
commBuildingSelector.CommandType = CommandType.StoredProcedure;
commBuildingSelector.CommandText = "Rooms.asp_RMS_Building_Select";
commBuildingSelector.Parameters.AddWithValue("BuildingID", buildingId);
SqlDataReader dreadBuilding = commBuildingSelector.ExecuteReader();
if (dreadBuilding.HasRows)
{
dreadBuilding.Read();
Building building = new Building();
building.BuildingID = int.Parse(dreadBuilding.GetValue(0).ToString());
building.BuildingName = dreadBuilding.GetValue(1).ToString();
building.IsActive = dreadBuilding.GetValue(2).ToString() == "Active";
building.LastEditDate = dreadBuilding.GetValue(3).ToString() != string.Empty ? DateTime.Parse(dreadBuilding.GetValue(3).ToString()) : DateTime.MinValue;
building.LastEditUser = dreadBuilding.GetValue(4).ToString();
dreadBuilding.Close();
return building;
}
dreadBuilding.Close();
return null;
}
....................
}
I would also like to know which of the two approaches would be faster. Thanks :)
DBML
Pros:
You can get your job done fast!
Cons:
You can't shape your entity the way you want; for example, if you only need 5 columns from a table that has 10, you will still get all of them (at least their schema), which only matters if you care about data volume.
Your client side will have a dependency on the DAL (Data Access Layer): if you change a property's name or type in the DAL, you need to change it in both the BLL (Business Logic Layer) and the client (Presentation Layer).
If you create the classes manually it may take a bit more time to code, but you get more flexibility. Your client code will not depend on your DAL, and changes to the DAL will not cause problems in the client code.
When creating your model classes manually you can put additional attributes on properties (which cannot be done with DBML) and apply your own data validation (as far as I remember, that is possible with DBML using partial methods).
With many tables and associations, a DBML diagram can become hard to read.
The disadvantage of creating model classes manually is that you have to do all the work the DBML does for you (attributes and a lot of code).
If you want to create model classes manually, take a look at Entity Framework Code First or Fluent NHibernate. Both allow you to create the model easily.
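As a rough sketch of the Code First style (EF 6 conventions; the context name is invented), the entity stays a plain class and the DbContext maps it to a table by convention:
// Minimal Entity Framework Code First sketch.
using System.Data.Entity;
public class Building
{
    public int BuildingID { get; set; }
    public string BuildingName { get; set; }
    public bool IsActive { get; set; }
}
public class RmsContext : DbContext
{
    public DbSet<Building> Buildings { get; set; }
}
// Usage:
// using (var context = new RmsContext())
// {
//     context.Buildings.Add(new Building { BuildingName = "Main Hall", IsActive = true });
//     context.SaveChanges();
// }
You keep full control of the classes (attributes, validation, extra members), and the framework takes care of the mapping and the generated SQL.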

Domain modelling - Implement an interface of properties or POCO?

I'm prototyping a tool that will import files via a SOAP API into a web-based application, and I have modelled what I'm trying to import as C# interfaces so I can wrap the web app's model data in something I can deal with.
public interface IBankAccount
{
string AccountNumber { get; set; }
ICurrency Currency { get; set; }
IEntity Entity { get; set; }
BankAccountType Type { get; set; }
}
internal class BankAccount : IBankAccount
{
private readonly SomeExternalImplementation bankAccount;
internal BankAccount(SomeExternalImplementation bankAccount)
{
this.bankAccount = bankAccount;
}
// Property implementations
}
I then have a repository that returns collections of IBankAccount or whatever and a factory class to create BankAccounts for me should I need them.
My question is, is this approach going to cause me a lot of pain down the line, and would it be better to create POCOs? I want to put all of this in a separate assembly and have a complete separation of data access and business logic, simply because I'm dealing with a moving target here regarding where the data will be stored online.
This is exactly the approach I use and I've never had any problems with it. In my design, anything that comes out of the data access layer is abstracted as an interface (I refer to them as data transport contracts). In my domain model I then have static methods to create business entities from those data transport objects:
interface IFooData
{
int FooId { get; set; }
}
public class FooEntity
{
static public FooEntity FromDataTransport(IFooData data)
{
return new FooEntity(data.FooId, ...);
}
}
It comes in quite handy where your domain model entities gather their data from multiple data contracts:
public class CompositeEntity
{
static public CompositeEntity FromDataTransport(IFooData fooData, IBarData barData)
{
...
}
}
In contrast to your design, I don't provide factories to create concrete implementations of the data transport contracts, but rather provide delegates to write the values and let the repository worry about creating the concrete objects:
public class FooDataRepository
{
public IFooData Insert(Action<IFooData> insertSequence)
{
var record = new ConcreteFoo();
insertSequence.Invoke(record as IFooData);
this.DataContext.Foos.InsertOnSubmit(record); // Assuming LinqSql in this case..
return record as IFooData;
}
}
usage:
IFooData newFoo = FooRepository.Insert(f =>
{
f.Name = "New Foo";
});
Although a factory implementation is an equally elegant solution, in my opinion. To answer your question: in my experience with a very similar approach, I've never come up against any major problems, and I think you're on the right track here :)
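For completeness, the factory variant mentioned above could look roughly like this (ConcreteFoo mirrors the concrete type used in the repository sketch, and an Insert overload taking an IFooData is assumed):
// Hedged sketch: a factory creates the concrete data transport object,
// the caller fills it in, and the repository persists it.
public class ConcreteFoo : IFooData
{
    public int FooId { get; set; }
}
public class FooDataFactory
{
    public IFooData Create() => new ConcreteFoo();
}
// Usage (assuming the repository exposes Insert(IFooData)):
// IFooData newFoo = fooDataFactory.Create();
// newFoo.FooId = 42;
// fooRepository.Insert(newFoo);
Either way the calling code only ever sees the IFooData contract, never the concrete type.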

C#, problem mixing Xml Serialization with Nhibernate

I am working on a program that uses NHibernate to persist objects and XML serialization to import and export data. I can't use the same properties for collections because, for example, NHibernate needs them to be ILists (it has its own implementation of that interface), and I can't serialize interfaces. But as I need both properties to stay synchronized, I thought I could use two different properties over the same field: each property shaped as its framework needs it, and each updating the field accordingly.
So, I have the following field:
private IList<Modulo> modulos;
And the following properties:
[XmlIgnore]
public virtual IList<Modulo> Modulos
{
get { return modulos; }
set { modulos = value; }
}
[XmlArray]
[XmlArrayItem(typeof(Modulo))]
public virtual ArrayList XmlModulos
{
get
{
if (modulos == null) return new ArrayList();
var aux = new ArrayList();
foreach (Modulo m in modulos)
aux.Add(m);
return aux;
}
set
{
modulos = new List<Modulo>();
foreach (object o in value)
modulos.Add((Modulo)o);
}
}
The first one works perfectly, being quite standard, but I have some problems with the second. The getter works great, as I have no problems serializing objects (meaning it correctly reads the information from the field). But when I deserialize, not all the information comes back: the debugger shows that after deserialization the field is not updated (null) and the property is empty (Count = 0).
The obvious solution would be using two unrelated properties, one for each framework, and passing the information manually when needed. But the class structure is quite complicated and I think there should be a simpler way to do this.
Any idea how I can modify my property so it does what I want? Any help will be appreciated.
The short answer is that you can't: when deserializing a collection property, XmlSerializer calls the getter and adds the items to whatever collection it gets back, rather than building a new collection and assigning it through the setter. Your getter returns a fresh, throwaway ArrayList, so the deserialized items are added to that and the backing field is never touched.
Typically you would create a DTO (Data Transfer Object) separate from your NHibernate objects. For example:
public class PersonDto
{
[XmlAttribute(AttributeName = "person-id")]
public int Id { get; set; }
[XmlAttribute(AttributeName = "person-name")]
public string Name{ get; set; }
}
On your DTO you only put the properties that you intend to serialize. You then create a DTO from your domain model when you need to serialize one.
There is a great little library called AutoMapper that makes mapping from your domain objects to your DTOs pretty straightforward. See: http://automapper.codeplex.com/
Here is an example of a person class that supports mapping to the above DTO.
public class Person
{
public virtual int Id { get; set; }
public virtual string Name { get; set; }
static Person()
{
Mapper.CreateMap<PersonDto, Person>();
Mapper.CreateMap<Person, PersonDto>();
}
public Person(PersonDto dto)
{
Mapper.Map<PersonDto, Person>(dto, this);
}
public PersonDto ToPersonDto()
{
var dto = new PersonDto();
Mapper.Map<Person, PersonDto>(this, dto);
return dto;
}
}
