I have a domain in which employees can have a list of roles, and there is a feature for adding a new role. When adding a role, we need to check whether the employee already has a “VP” role; if it is already present, the new role should not be added. This logic needs to live in the Employee domain entity.
I started by adding a method named IsNewRoleAllowed() that returns a Boolean; if it returns true, the business layer inserts the new role into the database.
But to be more naturally object-oriented, I decided to change the Employee object's responsibility by giving it an AddRole function. Instead of returning the Boolean, it performs the role-adding responsibility itself.
I achieved this by receiving an Action<int, int> as a parameter. It is working fine.
QUESTION
Is it a correct practice to pass the DAL method to the entity?
UPDATE
@Thomas Weller added two important points, with which I agree:
Having a role is a pure concept of the BL. It has nothing to do with the DAL.
In this approach, the BL would have a dependency on code that resides in the DAL. The BL should work even when a DAL does not physically exist.
But since I am not using an ORM, how would I modify the code to work like the suggested approach?
REFERENCES
Grouping IDataRecord individual records to a collection
CODE
Domain Entities
public class Employee
{
public int EmployeeID { get; set; }
public string EmployeeName { get; set; }
public List<Role> Roles { get; set; }
//Add Role to Employee
public int AddRole(Role role, Action<int, int> insertMethod)
{
if (!Roles.Any(r => r.RoleName == "VP"))
{
insertMethod(this.EmployeeID, role.RoleID);
return 0;
}
else
{
return -101;
}
}
//IDataRecord provides access to the column values within each row for a DataReader
//IDataRecord is implemented by .NET Framework data providers that access relational databases.
//Factory Method
public static Employee EmployeeFactory(IDataRecord record)
{
var employee = new Employee
{
EmployeeID = (int)record[0],
EmployeeName = (string)record[1],
Roles = new List<Role>()
};
employee.Roles.Add(new Role { RoleID = (int)record[2], RoleName = (string)record[3] });
return employee;
}
}
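For context, here is one way the factory could be consumed without an ORM, in line with the reference above; a sketch in which connectionString and selectSql are placeholders, and the rows are assumed to be ordered by EmployeeID so that each employee's role rows arrive together:
public static List<Employee> GetEmployees()
{
    var employees = new List<Employee>();
    using (var connection = new SqlConnection(connectionString)) // placeholder
    using (var command = new SqlCommand(selectSql, connection))  // placeholder; ORDER BY EmployeeID
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // SqlDataReader implements IDataRecord, so it can be passed to the factory
                var current = employees.LastOrDefault();
                if (current == null || current.EmployeeID != (int)reader[0])
                    employees.Add(Employee.EmployeeFactory(reader));
                else
                    current.Roles.Add(new Role { RoleID = (int)reader[2], RoleName = (string)reader[3] });
            }
        }
    }
    return employees;
}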
BusinessLayer.Manager
public class EmployeeBL
{
public List<Employee> GetEmployeeList()
{
List<Employee> employees = EmployeeRepositoryDAL.GetEmployees();
return employees;
}
public void AddRoleToEmployee(Employee emp, Role role)
{
//Don't trust the incoming Employee object. Use only id from it
Employee employee = EmployeeRepositoryDAL.GetEmployeeByID(emp.EmployeeID);
employee.AddRole(role, EmployeeRepositoryDAL.InsertEmployeeRole);
//EmployeeRepositoryDAL.InsertEmployeeRole(emp.EmployeeID, role.RoleID);
}
}
DAL
public static void InsertEmployeeRole(int empID, int roleID)
{
string commandText = @"INSERT INTO dbo.EmployeeRole VALUES (@empID, @roleID)";
List<SqlParameter> commandParameters = new List<SqlParameter>()
{
new SqlParameter {ParameterName = "#empID",
Value = empID,
SqlDbType = SqlDbType.Int},
new SqlParameter {ParameterName = "#roleID",
Value = roleID,
SqlDbType = SqlDbType.Int}
};
CommonDAL.ExecuteNonQuery(commandText, commandParameters);
}
No. Having a role is a pure concept of the BL in the first place; it has nothing to do with the DAL. Also, in your approach, the BL would have a dependency on code that resides in the DAL, which is the wrong direction. The BL should be persistence agnostic (i.e. it shouldn't depend in any way on anything that happens in the DAL; it should even work when a DAL does not physically exist). Furthermore, the responsibility of the DAL is only to persist objects, not to handle any collections that reside in memory.
Keep it as simple as possible, and just do:
public int AddRole(Role role)
{
if (!Roles.Any(r => r.RoleName == "VP"))
{
Roles.Add(role);
return 0;
}
else
{
return -101;
}
}
... in your Employee class, and let the DAL handle all persistence related questions (if you use an ORM it will do cascading updates anyway).
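The business layer then orchestrates the two steps explicitly; a minimal sketch, reusing the repository method from the question:
public void AddRoleToEmployee(Employee emp, Role role)
{
    //Don't trust the incoming Employee object. Use only the id from it
    Employee employee = EmployeeRepositoryDAL.GetEmployeeByID(emp.EmployeeID);
    // The domain decision stays in the entity...
    if (employee.AddRole(role) == 0)
    {
        // ...and persistence stays behind the BL/DAL boundary.
        EmployeeRepositoryDAL.InsertEmployeeRole(employee.EmployeeID, role.RoleID);
    }
}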
Is it a correct practice to pass the DAL method to the entity?
I avoid injection of DAL logic into my Domain Model.
It is not necessary to update the database the moment a Domain Entity (e.g. Employee) is updated.
The common solution is:
Load the entities to update from the DB into memory (Identity Map, PoEAA).
Create/update/delete the entities in memory.
Save all the changes to the DB.
To track new/dirty/deleted entities, the Unit of Work pattern is usually used:
The Unit Of Work Pattern And Persistence Ignorance
Unit of Work and Repository Design Pattern Implementation
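A minimal sketch of that flow (the interfaces are hypothetical; an ORM's context/session normally plays this role):
// Hypothetical Unit of Work: track changes in memory, flush them once.
public interface IUnitOfWork : IDisposable
{
    void RegisterNew(object entity);
    void RegisterDirty(object entity);
    void RegisterDeleted(object entity);
    void Commit(); // writes all tracked changes to the database in one go
}

// Usage: load, mutate in memory, then save everything at the end.
// using (IUnitOfWork uow = unitOfWorkFactory.Create())
// {
//     var employee = employeeRepository.GetByID(id); // tracked via the identity map
//     employee.AddRole(role);                        // pure in-memory change
//     uow.RegisterDirty(employee);
//     uow.Commit();
// }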
Related
Following is the action that is adding a Loan request to the database:
[HttpPost]
public ActionResult Add(Models.ViewModels.Loans.LoanEditorViewModel loanEditorViewModel)
{
if (!ModelState.IsValid)
return View(loanEditorViewModel);
var loanViewModel = loanEditorViewModel.LoanViewModel;
loanViewModel.LoanProduct = LoanProductService.GetLoanProductById(loanViewModel.LoanProductId); // <-- don't want to add to this table in database
loanViewModel.Borrower = BorrowerService.GetBorrowerById(loanViewModel.BorrowerId); //<-- don't want to add to this table in database
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
loanService.AddNewLoan(loan);
return RedirectToAction("Index");
}
Following is the AddNewLoan() method:
public int AddNewLoan(Models.Loans.Loan loan)
{
loan.LoanStatus = Models.Loans.LoanStatus.PENDING;
_LoanService.Insert(loan);
return 0;
}
And here is the code for Insert()
public virtual void Insert(TEntity entity)
{
if (entity == null)
throw new ArgumentNullException(nameof(entity));
try
{
entity.DateCreated = entity.DateUpdated = DateTime.Now;
entity.CreatedBy = entity.UpdatedBy = GetCurrentUser();
Entities.Add(entity);
context.SaveChanges();
}
catch (DbUpdateException exception)
{
throw new Exception(GetFullErrorTextAndRollbackEntityChanges(exception), exception);
}
}
It successfully adds one row to the Loans table, but it also adds rows to the LoanProduct and Borrower tables, as noted in the comments in the first code block.
I checked for the possibility of multiple calls to this action and to the Insert method, but each is called only once.
UPDATE
I am facing a similar, but opposite, problem here: Entity not updating using Code-First approach.
I think both problems have the same root cause, change tracking. But one adds when it shouldn't, while the other doesn't update.
The following code seems a bit odd:
var loanViewModel = loanEditorViewModel.LoanViewModel;
loanViewModel.LoanProduct = LoanProductService.GetLoanProductById(loanViewModel.LoanProductId); // <-- don't want to add to this table in database
loanViewModel.Borrower = BorrowerService.GetBorrowerById(loanViewModel.BorrowerId); //<-- don't want to add to this table in database
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
You are setting entity references on the view model and then calling AutoMapper. View models should not hold entity references, and AutoMapper will effectively ignore any referenced entities and map only the entity structure being created: it creates new instances based on the data being passed in.
Instead, something like this should work as expected:
// Assuming these will throw if not found? Otherwise assert that these were returned.
var loanProduct = LoanProductService.GetLoanProductById(loanViewModel.LoanProductId);
var borrower = BorrowerService.GetBorrowerById(loanViewModel.BorrowerId);
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
loan.LoanProduct = loanProduct;
loan.Borrower = borrower;
Edit:
The next thing to check is that your services are using the exact same DbContext reference. Are you using dependency injection with an IoC container such as Autofac or Unity? If so, make sure that the DbContext is registered as Instance Per Request or a similar lifetime scope. If the services each effectively new up their own DbContext, then the LoanService DbContext will not know about the instances of the Product and Borrower that were fetched by another service's DbContext.
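For example, with Autofac's ASP.NET MVC integration the registration might look like the following sketch (MyDbContext and the service names are placeholders for this discussion):
var builder = new ContainerBuilder();
// One DbContext instance per web request, shared by every service resolved in that request.
builder.RegisterType<MyDbContext>().AsSelf().InstancePerRequest();
builder.RegisterType<LoanProductService>().AsSelf();
builder.RegisterType<BorrowerService>().AsSelf();
builder.RegisterType<LoanService>().AsSelf();
DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));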
If you are not using a DI library, then you should consider adding one. Otherwise you will need to update your services to accept a single DbContext with each call or leverage a Unit of Work pattern such as Mehdime's DbContextScope to facilitate the services resolving their DbContext from the Unit of Work.
For example to ensure the same DbContext:
using (var context = new MyDbContext())
{
var loanProduct = LoanProductService.GetLoanProductById(context, loanViewModel.LoanProductId);
var borrower = BorrowerService.GetBorrowerById(context, loanViewModel.BorrowerId);
Models.Loans.Loan loan = AutoMapper.Mapper.Map<Models.Loans.Loan>(loanEditorViewModel.LoanViewModel);
loan.LoanProduct = loanProduct;
loan.Borrower = borrower;
LoanService.AddNewLoan(context, loan);
}
If you are sure that the services are all provided the same DbContext instance, then there may be something odd happening in your Entities.Add() method. Honestly, your solution looks to have far too much abstraction around something as simple as a CRUD create-and-associate operation. This looks like a case of premature optimization for DRY without starting with the simplest solution. More simply, the code can scope a DbContext, fetch the applicable entities, create the new instance, associate it, add it to the DbSet, and SaveChanges. There's no benefit to abstracting out calls for rudimentary operations such as fetching a reference by ID.
public ActionResult Add(Models.ViewModels.Loans.LoanEditorViewModel loanEditorViewModel)
{
if (!ModelState.IsValid)
return View(loanEditorViewModel);
var loanViewModel = loanEditorViewModel.LoanViewModel;
using (var context = new AppContext())
{
var loanProduct = context.LoanProducts.Single(x => x.LoanProductId ==
loanViewModel.LoanProductId);
var borrower = context.Borrowers.Single(x => x.BorrowerId == loanViewModel.BorrowerId);
var loan = AutoMapper.Mapper.Map<Loan>(loanEditorViewModel.LoanViewModel);
loan.LoanProduct = loanProduct;
loan.Borrower = borrower;
context.Loans.Add(loan); // without this the new loan is never tracked and never inserted
context.SaveChanges();
}
return RedirectToAction("Index");
}
Sprinkle with some exception handling and it's done and dusted. No layered service abstractions. From there you can aim to make the action testable by using an IoC container like Autofac to manage the context and/or introducing a repository/service layer with a UoW pattern. The above would serve as a minimum viable solution for the action. Any abstraction etc. should be applied afterwards. Sketch out with pencil before cracking out the oils. :)
Using Mehdime's DbContextScope it would look like:
public ActionResult Add(Models.ViewModels.Loans.LoanEditorViewModel loanEditorViewModel)
{
if (!ModelState.IsValid)
return View(loanEditorViewModel);
var loanViewModel = loanEditorViewModel.LoanViewModel;
using (var contextScope = ContextScopeFactory.Create())
{
var loanProduct = LoanRepository.GetLoanProductById(loanViewModel.LoanProductId).Single();
var borrower = LoanRepository.GetBorrowerById(loanViewModel.BorrowerId);
var loan = LoanRepository.CreateLoan(loanViewModel, loanProduct, borrower).Single();
contextScope.SaveChanges();
}
return RedirectToAction("Index");
}
In my case I leverage a repository pattern that uses the DbContextScopeLocator to resolve its ContextScope to get a DbContext. The repo manages fetching data and ensuring that the creation of entities is given all the data necessary to create a complete and valid entity. I opt for a repository-per-controller rather than something like a generic pattern or a repository/service per entity because IMO this better honors the Single Responsibility Principle: the code has only one reason to change (it serves the controller, and is not shared between many controllers with potentially different concerns). Unit tests can mock out the repository to serve the expected data state. Repo get methods return IQueryable so that the consumer logic can determine how it wants to consume the data.
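For illustration, a repository method under that pattern might look like the following sketch (AppContext reuses the earlier placeholder context name; the ambient locator comes from the DbContextScope library):
public class LoanRepository
{
    private readonly IAmbientDbContextLocator _contextLocator;

    public LoanRepository(IAmbientDbContextLocator contextLocator)
    {
        _contextLocator = contextLocator;
    }

    // Resolves the DbContext from the ambient scope created by the caller, so
    // every repository call inside one contextScope shares the same context.
    private AppContext Context
    {
        get { return _contextLocator.Get<AppContext>(); }
    }

    public IQueryable<LoanProduct> GetLoanProductById(int id)
    {
        return Context.LoanProducts.Where(x => x.LoanProductId == id);
    }
}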
Finally, with the help of the link shared by @GertArnold (Duplicate DataType is being created on every Product Creation):
Since all my models inherit a BaseModel class, I modified my Insert method like this:
public virtual void Insert(TEntity entity, params BaseModel[] unchangedModels)
{
if (entity == null)
throw new ArgumentNullException(nameof(entity));
try
{
entity.DateCreated = entity.DateUpdated = DateTime.Now;
entity.CreatedBy = entity.UpdatedBy = GetCurrentUser();
Entities.Add(entity);
if (unchangedModels != null)
{
foreach (var model in unchangedModels)
{
_context.Entry(model).State = EntityState.Unchanged;
}
}
_context.SaveChanges();
}
catch (DbUpdateException exception)
{
throw new Exception(GetFullErrorTextAndRollbackEntityChanges(exception), exception);
}
}
And called it like this:
_LoanService.Insert(loan, loan.LoanProduct, loan.Borrower);
By far the simplest way to tackle this is to add the two primitive foreign key properties to the Loan class, i.e. LoanProductId and BorrowerId. For example like this (I obviously have to guess the types of LoanProduct and Borrower):
public int LoanProductId { get; set; }
[ForeignKey("LoanProductId")]
public Product LoanProduct { get; set; }
public int BorrowerId { get; set; }
[ForeignKey("BorrowerId")]
public User Borrower { get; set; }
Without the primitive FK properties you have so-called independent associations, which can only be set by assigning objects whose state must be managed carefully. Adding the FK properties turns them into foreign key associations, which are much easier to set. AutoMapper will simply set these properties when the names match and you're done.
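If the view model's property names match (LoanProductId, BorrowerId), AutoMapper maps them by convention. To make it explicit that the navigation properties should be left alone for EF to fix up from the FKs, the mapping configuration could look like this sketch (static AutoMapper API; exact syntax varies by version, and LoanViewModel is the assumed view model type):
Mapper.Initialize(cfg => cfg.CreateMap<LoanViewModel, Loan>()
    .ForMember(d => d.LoanProduct, opt => opt.Ignore())
    .ForMember(d => d.Borrower, opt => opt.Ignore()));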
Check Models.Loans.Loan. Is it a joined model of the Loans, LoanProduct and Borrower tables?
You have to add the new entity explicitly:
Loans lentity = new Loans();
lentity.Property = value; // set the properties here
Entities.Add(lentity);
Or:
var lentity = new Loans { FirstName = "William", LastName = "Shakespeare" };
context.Add<Loans>(lentity);
context.SaveChanges();
I'm currently working with ASP.NET Core 1.0 using Entity Framework Core. I have some complex calculations based on data from the database, and I'm not sure how to build a proper architecture using dependency injection without ending up with an anemic domain model (http://www.martinfowler.com/bliki/AnemicDomainModel.html).
(Simplified) Example:
I have the following entities:
public class Project {
public int Id {get;set;}
public string Name {get;set;}
}
public class TimeEntry
{
public int Id {get;set;}
public DateTime Date {get;set;}
public int DurationMinutes {get;set;}
public int ProjectId {get;set;}
public Project Project {get;set;}
}
public class Employee {
public int Id {get;set;}
public string Name {get;set;}
public List<TimeEntry> TimeEntries {get;set;}
}
I want to do some complex calculations to produce a monthly TimeSheet. Because I cannot access the database from within the Employee entity, I calculate the TimeSheet in an EmployeeService.
public class EmployeeService {
private DbContext _db;
public EmployeeService(DbContext db) {
_db = db;
}
public List<CalculatedMonth> GetMonthlyTimeSheet(int employeeId) {
var employee = _db.Employee.Include(x => x.TimeEntries).ThenInclude(x => x.Project).Single(x => x.Id == employeeId);
var result = new List<CalculatedMonth>();
//complex calculation using TimeEntries etc here
return result;
}
}
If I want to get the TimeSheet I need to inject the EmployeeService and call GetMonthlyTimeSheet.
So I end up with a lot of GetThis() and GetThat() methods inside my service, although these methods would fit perfectly into the Employee class itself.
What I want to achieve is something like:
public class Employee {
public int Id {get;set;}
public string Name {get;set;}
public List<TimeEntry> TimeEntries {get;set;}
public List<CalculatedMonth> GetMonthlyTimeSheet() {
var result = new List<CalculatedMonth>();
//complex calculation using TimeEntries etc here
return result;
}
}
public IActionResult GetTimeSheets(int employeeId) {
var employee = _employeeRepository.Get(employeeId);
return Ok(employee.GetMonthlyTimeSheet());
}
...but for that I need to make sure that the list of TimeEntries is populated from the database (EF Core does not support lazy loading). I do not want to .Include(x=>y) everything on every request, because sometimes I just need the employee's name without the time entries, and eager-loading everything would hurt the application's performance.
Can anyone point me in a direction how to architect this properly?
Edit:
One possibility (from the comments of the first answer) would be:
public class Employee {
public int Id {get;set;}
public string Name {get;set;}
public List<TimeEntry> TimeEntries {get;set;}
public List<CalculatedMonth> GetMonthlyTimeSheet() {
if (TimeEntries == null)
throw new PleaseIncludePropertyException(nameof(TimeEntries));
var result = new List<CalculatedMonth>();
//complex calculation using TimeEntries etc here
return result;
}
}
public class EmployeeService {
private DbContext _db;
public EmployeeService(DbContext db) {
_db = db;
}
public Employee GetEmployeeWithoutData(int employeeId) {
return _db.Employee.Single(x => x.Id == employeeId);
}
public Employee GetEmployeeWithData(int employeeId) {
return _db.Employee.Include(x => x.TimeEntries).ThenInclude(x => x.Project).Single(x => x.Id == employeeId);
}
}
public IActionResult GetTimeSheets(int employeeId) {
var employee = _employeeService.GetEmployeeWithData(employeeId);
return Ok(employee.GetMonthlyTimeSheet());
}
Do not try to solve querying problems with your aggregates. Your aggregates are meant to process commands and protect invariants. They form a consistency boundary around a set of data.
Is the Employee object responsible for protecting the integrity of an employee's timesheet? If it isn't, then this data doesn't belong in the Employee class.
Lazy-loading may be fine for CRUD models, but is usually considered an anti-pattern when we design aggregates because those should be as small and cohesive as possible.
Are you taking business decisions based on the calculated result from timesheets? Are there any invariants to protect? Does it matter if the decision was made on stale timesheet data? If the answer to these questions is no, then your calculation is really nothing more than a query.
Placing queries in service objects is fine. These service objects may even live outside the domain model (e.g. in the application layer), but there is no strict rule to follow. Also, you may choose to load a few aggregates in order to access the required data to process the calculations, but it's usually better to go directly to the database. This allows a better separation between your reads and writes (CQRS).
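As a sketch of that read side, assuming TimeEntry carries an explicit EmployeeId foreign key (not shown in the original model) and guessing at CalculatedMonth's shape:
// A read-side query service: no aggregate loading, just a projection for display.
public class TimeSheetQueries
{
    private readonly DbContext _db;

    public TimeSheetQueries(DbContext db) { _db = db; }

    public List<CalculatedMonth> GetMonthlyTimeSheet(int employeeId)
    {
        return _db.Set<TimeEntry>()
            .Where(t => t.EmployeeId == employeeId)          // assumed FK property
            .GroupBy(t => new { t.Date.Year, t.Date.Month })
            .Select(g => new CalculatedMonth                 // assumed properties
            {
                Year = g.Key.Year,
                Month = g.Key.Month,
                TotalMinutes = g.Sum(t => t.DurationMinutes)
            })
            .ToList();
    }
}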
If I understood your question correctly, you can use the trick of injecting a service into your entities that helps them do the job, e.g.:
public class Employee
{
public object GetTimeSheets(ICalculationHelper helper)
{
}
}
Then, in the service that holds the employees, you would obtain it in the constructor and pass it to the employee class for calculations. This service can be a facade, e.g. for getting all the data, performing initialization, or whatever you really need.
As for the TimeEntries, you can get them using a function like this:
private List<TimeEntry> GetTimeEntries(ICalculationHelper helper)
{
if (_entries == null)
{
_entries = helper.GetTimeEntries();
}
return _entries;
}
It depends, of course, on your caching strategy and so on whether this pattern fits you.
Personally, I find it rather easy to work with anemic classes and keep a lot of the business logic in services. I do put some of it in the objects, e.g. calculating FullName out of FirstName and LastName; usually stuff that does not involve other services. It's a matter of preference, though.
I'm curious about best practices when developing an n-tier application with Linq-to-SQL and a WCF service.
In particular, I'm interested in how to return data from two related tables to the presentation tier. Consider the following (much simplified) situation:
Database has tables:
Orders (id, OrderName)
OrderDetails (id, orderid, DetailName)
The middle tier has CRUD methods for OrderDetails. So I need a way to rebuild the entity for attaching to the context for an update or insert when it comes back from the presentation layer.
In the presentation layer I need to display a list of OrderDetails with the corresponding OrderName from the parent table.
There are two approaches for the classes returned from the service:
Use a custom DTO class that encapsulates data from both tables via a projection:
class OrderDetailDTO
{
public int Id { get; set; }
public string DetailName { get; set; }
public string OrderName { get; set; }
}
IEnumerable<OrderDetailDTO> GetOrderDetails()
{
var db = new LinqDataContext();
return (from od in db.OrderDetails
select new OrderDetailDTO
{
Id = od.id,
DetailName = od.DetailName,
OrderName = od.Order.OrderName
}).ToList();
}
Cons: every field that matters to the presentation layer must be assigned in both directions (when returning data, and again when creating a new entity to attach to the context once data comes back from the presentation layer).
Use a customized Linq-to-SQL entity partial class:
partial class OrderDetail
{
[DataMember]
public string OrderName
{
get
{
return this.Order.OrderName; // return the value from the related entity
}
set {}
}
}
IEnumerable<OrderDetail> GetOrderDetails()
{
var db = new LinqDataContext();
var loadOptions = new DataLoadOptions();
loadOptions.LoadWith<OrderDetail>(item => item.Order);
db.LoadOptions = loadOptions;
return (from od in db.OrderDetails
select od).ToList();
}
Cons: the database query will include all columns from the Orders table, and Linq-to-SQL will materialize the whole Order entity, although I need only one field from it.
Sorry for the long story. Maybe I missed something? I will appreciate any suggestions.
I would say use a DTO and AutoMapper; it is not a good idea to expose a DB entity as a data contract.
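A sketch of what that could look like for the OrderDetail example above (static AutoMapper API; exact syntax depends on the version):
// Configure once at startup: entity -> DTO, with the parent's name flattened in.
Mapper.Initialize(cfg => cfg.CreateMap<OrderDetail, OrderDetailDTO>()
    .ForMember(d => d.OrderName, opt => opt.MapFrom(s => s.Order.OrderName)));

IEnumerable<OrderDetailDTO> GetOrderDetails()
{
    using (var db = new LinqDataContext())
    {
        return db.OrderDetails
                 .AsEnumerable()
                 .Select(od => Mapper.Map<OrderDetailDTO>(od)) // Order is loaded per row here
                 .ToList();
    }
}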
Is the use of Linq to SQL a requirement, or are you still at the design stage where you can choose technologies? If the latter, I would suggest using Entity Framework with Self-Tracking Entities (STEs). Then, when you get an entity back from the client, all the client's changes will be handled for you automatically by the STEs; you will just have to call Save. Including related entities is also easy then: (...some query...).Orders.Include(c => c.OrderDetails)
This is my first experience with EF so I'm probably doing something stupid. Any comments on the architecture are welcome.
So I have the typical class of Users. Users have a username and a list of roles:
public class User
{
public string UserID{ get; set; }
public List<Role> Roles { get; set; }
public int Id { get; set; }
public User()
{
Roles = new List<Role>();
}
}
My domain objects live in their own code library along with the interfaces for their repositories. So in this case there would be an IUserRepository with all the CRUD methods plus any specialized data access methods I might need. What I'm trying to do is implement these repository interfaces with EF4 in another class library. Any problems with this design so far?
Now in the db (sql server) I have the typical tables: Users, Roles, and a many-to-many table mapping users to roles UsersRoles.
I have successfully set up most of the CRUD methods in the EF lib. Here is what Save looks like
public void Save(Drc.Domain.Entities.Staff.User member)
{
using (var ctx = new DrcDataContext())
{
var efUser = MapFromDomainObject(member);
if(member.Id < 1)
{
ctx.Users.AddObject(efUser);
}
else
{
ctx.Users.Attach(efUser);
ctx.ObjectStateManager.ChangeObjectState(efUser, EntityState.Modified);
}
ctx.SaveChanges();
member.Id = efUser.UserId;
}
}
Now I'm not sure if this is the proper way of accomplishing this, but it works. However, I run into problems when doing a delete. The problem is with the related tables:
public void Delete(Drc.Domain.Entities.Staff.User member)
{
using (var ctx = new DrcDataContext())
{
var efUser = MapFromDomainObject(member);
ctx.Users.Attach(efUser);
while (efUser.Roles.Count > 0)
{
ctx.ObjectStateManager.ChangeObjectState(efUser.Roles.First(), EntityState.Deleted);
}
ctx.SaveChanges();
ctx.ObjectStateManager.ChangeObjectState(efUser, EntityState.Deleted);
ctx.SaveChanges();
}
}
If I don't delete the roles in the while loop, I get a DELETE conflict with a reference constraint error. If I run the code above, it deletes the proper rows in the many-to-many table, but it also deletes rows in the Roles table. I'm at a dead end now and considering scrapping the ORM idea and writing my repository implementations in good ole reliable ADO.NET.
--Edit: I'm guessing that this is not the correct way to implement repositories with EF. Is it possible to do it without littering your domain with a bunch of EF-centric stuff?
Simply use the standard approach and don't mess around with the entity state:
public void Delete(Drc.Domain.Entities.Staff.User member)
{
using (var ctx = new DrcDataContext())
{
var efUser = MapFromDomainObject(member);
ctx.Users.Attach(efUser);
ctx.Users.DeleteObject(efUser);
ctx.SaveChanges();
}
}
There is usually a cascading delete in the database from the User table to the join table (if you didn't disable it by hand). So deleting the user will delete the corresponding rows in the join table as well (but not the roles of course).
Setting the state of an entity to Deleted is not the same as calling DeleteObject. Setting the state will only mark the parent as deleted and leave the children in an undeleted state in the context, leading to the constraint violation exception. DeleteObject will also mark the children in the context as Deleted and therefore avoid the exception.
I'm building my first enterprise grade solution (at least I'm attempting to make it enterprise grade). I'm trying to follow best practice design patterns but am starting to worry that I might be going too far with abstraction.
I'm trying to build my ASP.NET WebForms app (in C#) as an n-tier application. I've created a data access layer using an XSD strongly-typed DataSet that interfaces with a SQL Server backend. I access the DAL through business layer objects that I've created on a 1:1 basis to the data tables in the dataset (e.g., a UsersBLL class for the Users data table). I'm doing checks inside the BLL to make sure that data passed to the DAL follows the business rules of the application. That's all well and good.
Where I'm getting stuck is the point at which I connect the BLL to the presentation layer. For example, my UsersBLL class deals mostly with whole data tables, as it interfaces with the DAL. Should I now create a separate "User" (singular) class that maps out the properties of a single user, rather than multiple users? That way I wouldn't have to do any searching through data tables in the presentation layer, as I could use the properties exposed by the User class. Or would it be better to somehow handle this inside UsersBLL?
Sorry if this sounds a little complicated... Below is the code from the UsersBLL:
using System;
using System.Data;
using PedChallenge.DAL.PedDataSetTableAdapters;
[System.ComponentModel.DataObject]
public class UsersBLL
{
private UsersTableAdapter _UsersAdapter = null;
protected UsersTableAdapter Adapter
{
get
{
if (_UsersAdapter == null)
_UsersAdapter = new UsersTableAdapter();
return _UsersAdapter;
}
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Select, true)]
public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsers()
{
return Adapter.GetUsers();
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Select, false)]
public PedChallenge.DAL.PedDataSet.UsersDataTable GetUserByUserID(int userID)
{
return Adapter.GetUserByUserID(userID);
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Select, false)]
public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByTeamID(int teamID)
{
return Adapter.GetUsersByTeamID(teamID);
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Select, false)]
public PedChallenge.DAL.PedDataSet.UsersDataTable GetUsersByEmail(string Email)
{
return Adapter.GetUserByEmail(Email);
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Insert, true)]
public bool AddUser(int? teamID, string FirstName, string LastName,
string Email, string Role, int LocationID)
{
// Create a new UsersRow instance
PedChallenge.DAL.PedDataSet.UsersDataTable Users = new PedChallenge.DAL.PedDataSet.UsersDataTable();
PedChallenge.DAL.PedDataSet.UsersRow user = Users.NewUsersRow();
if (UserExists(Adapter.GetUsers(), Email) == true) // check the users already in the database, not the new empty table
return false;
if (teamID == null) user.SetTeamIDNull();
else user.TeamID = teamID.Value;
user.FirstName = FirstName;
user.LastName = LastName;
user.Email = Email;
user.Role = Role;
user.LocationID = LocationID;
// Add the new user
Users.AddUsersRow(user);
int rowsAffected = Adapter.Update(Users);
// Return true if precisely one row was inserted,
// otherwise false
return rowsAffected == 1;
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Update, true)]
public bool UpdateUser(int userID, int? teamID, string FirstName, string LastName,
string Email, string Role, int LocationID)
{
PedChallenge.DAL.PedDataSet.UsersDataTable Users = Adapter.GetUserByUserID(userID);
if (Users.Count == 0)
// no matching record found, return false
return false;
PedChallenge.DAL.PedDataSet.UsersRow user = Users[0];
if (teamID == null) user.SetTeamIDNull();
else user.TeamID = teamID.Value;
user.FirstName = FirstName;
user.LastName = LastName;
user.Email = Email;
user.Role = Role;
user.LocationID = LocationID;
// Update the product record
int rowsAffected = Adapter.Update(user);
// Return true if precisely one row was updated,
// otherwise false
return rowsAffected == 1;
}
[System.ComponentModel.DataObjectMethodAttribute
(System.ComponentModel.DataObjectMethodType.Delete, true)]
public bool DeleteUser(int userID)
{
int rowsAffected = Adapter.Delete(userID);
// Return true if precisely one row was deleted,
// otherwise false
return rowsAffected == 1;
}
private bool UserExists(PedChallenge.DAL.PedDataSet.UsersDataTable users, string email)
{
// Check if user email already exists
foreach (PedChallenge.DAL.PedDataSet.UsersRow userRow in users)
{
if (userRow.Email == email)
return true;
}
return false;
}
}
Some guidance in the right direction would be greatly appreciated!!
Thanks all!
Max
The sort of layering you're trying for usually involves moving away from the DataTable approach to something that uses an instance for (roughly) each row in the database. In other words, the DAL would return either a single User or a collection of Users, depending on which static Load method you call. This means that all of the methods that take a bunch of parameters to represent the user would instead accept a User DTO.
A DAL for users would look something like this:
public static class UserDal
{
public static User Load(int id) { }
public static User Save(User user) { }
public static IEnumerable<User> LoadByDiv(int divId) { }
}
It's static because it has no state. (Arguably, it could have a database connection as its state, but that's not a good idea in most cases, and connection pooling removes any benefit. Others might argue for a singleton pattern.)
It operates at the level of the User DTO class, not DataTable or any other database-specific abstraction. Perhaps the implementation uses a database, perhaps it uses LINQ: the caller need not know either way. Note how it returns an IEnumerable rather than committing to any particular sort of collection.
It is concerned only with data access, not business rules. Therefore, it should be callable only from within a business logic class that deals with users. Such a class can decide what level of access the caller is permitted to have, if any.
DTO stands for Data Transfer Object, which usually amounts to a class containing just public properties. It may well have a dirty flag that is automatically set when properties are changed. There may be a way to explicitly set the dirty flag, but no public way to clear it. Also, the ID is typically read-only (so that it can only be filled in from deserialization).
The DTO intentionally does not contain business logic that attempts to ensure correctness; instead, the corresponding business logic class is what contextually enforces rules. Business logic changes, so if the DTO or DAL were burdened with it, the violation of the single responsibility principle would lead to disasters such as not being able to deserialize an object because its values are no longer considered legal.
The presentation layer can instantiate a User object, fill it in and ask the business logic layer to please call the Save method in the DAL. If the BLL chooses to do this, it will fill in the ID and clear the dirty flag. Using this ID, the BLL can then retrieve persisted instances by calling the DAL's Load-by-ID method.
The DAL always has a Save method and a Load-by-ID method, but it may well have query-based load methods, such as the LoadByDiv example above. It needs to offer whatever methods the BLL requires for efficient operation.
The implementation of the DAL is a secret as far as the BLL and above are concerned. If the backing is a database, there would typically be stored procedures corresponding to the various DAL methods, but this is an implementation detail. In the same way, so is any sort of caching.
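A sketch of such a DTO following that description (the property set is illustrative):
public class User
{
    private string _name;

    // Typically read-only to callers; filled in by the DAL on save/load.
    public int Id { get; internal set; }

    // Dirty flag: set automatically by the property setters; it can be set
    // explicitly, but there is no public way to clear it.
    public bool IsDirty { get; private set; }
    public void MarkDirty() { IsDirty = true; }
    internal void ClearDirty() { IsDirty = false; }

    public string Name
    {
        get { return _name; }
        set { _name = value; IsDirty = true; }
    }
}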
To facilitate your design, you definitely do not want to be pulling back entire data tables and searching through them in the presentation tier. The beauty of a database is that it is indexed to facilitate fast querying of row-level data (i.e. getting a row by an indexed identifier).
Your DAL should expose a method like GetUserByUserID(int userID). You should then expose that method via the BLL, enforcing any needed business logic.
Additionally, I would steer clear of typed DataSets and consider an ORM tool such as Entity Framework.
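As a rough sketch, layered over the UserDal shown earlier (the rule checks are placeholders):
public class UsersBLL
{
    public User GetUserByUserID(int userID)
    {
        // Enforce business rules / caller permissions here, then delegate.
        return UserDal.Load(userID);
    }
}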