Consider the following simplified scenario:
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }

    // restricted
    public string SocialSecurityNumber { get; set; }

    // restricted
    public string MothersMaidenName { get; set; }
}
So, in the application, many users can view Person data. Some users can view all; other users can view only Name and Age.
Having the UI display only authorized data is easy enough on the client side, but I really don't want to even send that data to the client.
I've tried to achieve this by creating a FullPerson : BasicPerson hierarchy (table-per-class-hierarchy). I used two implementations of a StaffRepository to get the desired type, but the necessary casting fails at runtime because of the NH proxies. Of course in the RDBMS any given row in the People table can represent a FullPerson or a BasicPerson, and giving them both the same discriminator value does not work either.
I considered mapping only FullPerson and using the AliasToBean result transformer to filter down to BasicPerson, but I understand this to be a one-way street, whereas I want the full benefit of entity management and lazy loading (though the example above doesn't include collections) in the session.
Another thought I had was to wrap up all the restricted fields into a class and add this as a property. My concerns with this approach are several:
It compromises my domain model,
I'd have to declare the property as a collection (always of 1) in order to have it load lazily, and
I'm not even sure how I'd prevent that lazy collection from loading.
All this feels wrong. Is there a known approach to achieve the desired result?
Clarification:
This is an intranet-only desktop application; the session lives on the client. While I can certainly create an intermediate service layer, I would have to give up lazy loading and change tracking, which I'd really like to keep in place.
First, let me say that I do not think it is NHibernate's responsibility to handle security or to redact data based on it. I think you're overcomplicating this by trying to put it in the data access layer.
I would insert a layer into the service or controller that receives this data request from the client (which shouldn't be the Repository itself) and have it perform the data redaction based on user permissions. So you'd perform the full query to the DB, and then, based on user permissions, the service layer would clear out fields of the result set before returning it over the service connection. It is not the absolute most performant solution, but it is both more secure and more performant than sending all the data to the client and having the client software "censor" it. The Ethernet connection between the DB and service layers of the server architecture can handle far more bandwidth than the Internet connection between the service layer and the client, and in a remote client app you generally have very little control over what the client does with the data; you could be talking to a hacked copy of your software, or a workalike, that doesn't give two flips about user security.
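For illustration, a minimal sketch of such a redaction step; IPersonRepository, IUserContext, and the permission name are all assumptions, not part of the original design:
public class PersonService
{
    private readonly IPersonRepository _repository;   // assumed abstraction
    private readonly IUserContext _userContext;       // assumed permission source

    public PersonService(IPersonRepository repository, IUserContext userContext)
    {
        _repository = repository;
        _userContext = userContext;
    }

    public IList<Person> GetPeople()
    {
        IList<Person> people = _repository.GetAll();

        // Redact restricted fields before the data ever leaves the server.
        if (!_userContext.HasPermission("ViewRestrictedPersonData"))
        {
            foreach (var person in people)
            {
                person.SocialSecurityNumber = null;
                person.MothersMaidenName = null;
            }
        }

        return people;
    }
}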
If network bandwidth between the service and the DB is of high importance, or if a LOT of information is restricted, Linq2NH should be smart enough to let you specify what should or shouldn't be included in query results using a select list:
IQueryable<Person> results;
if (!user.CanSeeRestrictedFields)
{
    results = from p in Repository.AsQueryable<Person>()
              // rest of LINQ statement
              select new Person
              {
                  Name = p.Name,
                  Age = p.Age
              };
}
else
{
    results = from p in Repository.AsQueryable<Person>()
              // rest of LINQ statement
              select new Person
              {
                  Name = p.Name,
                  Age = p.Age,
                  SocialSecurityNumber = p.SocialSecurityNumber,
                  MothersMaidenName = p.MothersMaidenName
              };
}
I do not know if Linq2NH is smart enough to parse conditional operators into SQL; I doubt it, but on the off chance it's possible, you can specify conditional operators in the initializers for the SSN and MMN fields, based on whether the user has rights to see them, allowing you to combine these two queries.
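If it does work, the combined query would look something like the sketch below; note that a captured local like canSeeRestricted is typically evaluated once and passed as a parameter, so the provider may not need to translate the conditional at all:
bool canSeeRestricted = user.CanSeeRestrictedFields; // evaluated once, client side

var results = from p in Repository.AsQueryable<Person>()
              // rest of LINQ statement
              select new Person
              {
                  Name = p.Name,
                  Age = p.Age,
                  SocialSecurityNumber = canSeeRestricted ? p.SocialSecurityNumber : null,
                  MothersMaidenName = canSeeRestricted ? p.MothersMaidenName : null
              };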
I would leave your domain model alone and instead use AutoMapper to map to specific DTOs based on the security level of the current user (or whatever criteria you use to determine access to the specific properties). This should be done in some sort of service layer that sits between your UI and your repositories.
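A rough sketch of that idea, assuming hypothetical BasicPersonDto/FullPersonDto types and the stock AutoMapper Profile API:
// Hypothetical DTOs: only FullPersonDto carries the restricted fields.
public class BasicPersonDto
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class FullPersonDto : BasicPersonDto
{
    public string SocialSecurityNumber { get; set; }
    public string MothersMaidenName { get; set; }
}

// AutoMapper profile; same-named properties map by convention.
public class PersonProfile : Profile
{
    public PersonProfile()
    {
        CreateMap<Person, BasicPersonDto>();
        CreateMap<Person, FullPersonDto>();
    }
}

// In the service layer, pick the DTO by permission.
public object ToDto(Person person, bool canSeeRestricted, IMapper mapper) =>
    canSeeRestricted
        ? mapper.Map<FullPersonDto>(person)
        : (object)mapper.Map<BasicPersonDto>(person);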
Edit:
Based upon your requirements of keeping lazy loading and change tracking in place, perhaps using the proxy pattern to wrap your domain object is a viable alternative? You could wrap your original domain model in a proxy that performs your security checks on each given property. I believe CSLA.NET uses a method like this for field-level security, so it may be worth browsing its source to get some inspiration. You could perhaps take this one step further and use interfaces, implemented by your proxy, that only expose the properties the user has access to.
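A minimal sketch of that proxy idea; the IBasicPerson/IFullPerson interfaces and the IUserContext permission check are assumptions for illustration:
// Hypothetical interfaces: IBasicPerson exposes only unrestricted members.
public interface IBasicPerson
{
    string Name { get; }
    int Age { get; }
}

public interface IFullPerson : IBasicPerson
{
    string SocialSecurityNumber { get; }
    string MothersMaidenName { get; }
}

// The proxy wraps the NHibernate-managed entity, so lazy loading and
// change tracking on the underlying object stay intact.
public class SecurePersonProxy : IFullPerson
{
    private readonly Person _inner;
    private readonly IUserContext _userContext; // hypothetical permission source

    public SecurePersonProxy(Person inner, IUserContext userContext)
    {
        _inner = inner;
        _userContext = userContext;
    }

    public string Name => _inner.Name;
    public int Age => _inner.Age;

    public string SocialSecurityNumber =>
        _userContext.HasPermission("ViewRestricted")
            ? _inner.SocialSecurityNumber
            : throw new UnauthorizedAccessException();

    public string MothersMaidenName =>
        _userContext.HasPermission("ViewRestricted")
            ? _inner.MothersMaidenName
            : throw new UnauthorizedAccessException();
}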
Related
I am building a web application that is a recreation of an older system, and I am trying to build it in an architected, yet pragmatic and maintainable way (unlike the old system). Anyway, I am currently designing the queries for the models in my application. The old system lets developers mark any field, via a boolean, as a searchable value from a table, meaning a single view for maintaining some models' records might contain 20 searchable fields in the front-end, and enabling that only requires ticking a single box.
Now I would like to implement something similar in this new system with C#, with a backend using EF as the data mapper, but I am not sure which approach is the most maintainable. In my current approach the filters are sent by the client as a record that (at most) contains all the possible filterable fields, e.g.:
public record GetOrderQuery
{
    public string OrderReference { get; set; }
    public string OrdererName { get; set; }
    public int? ItemCount { get; set; } // nullable so "not supplied" is representable
    // etc...
}
I am fine with it if the record limits the filters which can be applied (should I instead have the record contain an iterable property of objects with fieldName, fieldValue, and queryType?), but I would like to streamline the actual filtering. Basically, if the client sent any of the above fields in the request (as JSON, and none are required), the filtering is applied to those fields. I am currently thinking that I could implement this with reflection: I try to find a field in the actual model where the property name is the same as in the record, then I construct the predicate for the Where() by chaining expressions.
I construct expressions for each property that has a value in the query and can be found through reflection (a property with the same name), then I link those together using binary expressions, combining all of the filters into a single expression. I am not sure if this is the best approach, or even what a good way to implement it is (performance-wise, maintainability-wise, or just in general). Are there any other ways to implement this? Are there any pitfalls I should look out for, or any resources I should read? Thanks!
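For what it's worth, a sketch of that reflection-plus-expressions approach (equality filters only; type mismatches between query and entity properties would need extra handling):
using System.Linq.Expressions;
using System.Reflection;

public static class QueryFilterBuilder
{
    // Builds "e => e.A == query.A && e.B == query.B" from every non-null
    // property on the query record that has a same-named property on TEntity.
    public static Expression<Func<TEntity, bool>> Build<TEntity, TQuery>(TQuery query)
    {
        ParameterExpression entity = Expression.Parameter(typeof(TEntity), "e");
        Expression body = Expression.Constant(true);

        foreach (PropertyInfo queryProp in typeof(TQuery).GetProperties())
        {
            object value = queryProp.GetValue(query);
            if (value == null) continue; // filter not supplied by the client

            PropertyInfo entityProp = typeof(TEntity).GetProperty(queryProp.Name);
            if (entityProp == null) continue; // no matching model property

            body = Expression.AndAlso(body, Expression.Equal(
                Expression.Property(entity, entityProp),
                Expression.Constant(value, entityProp.PropertyType)));
        }

        return Expression.Lambda<Func<TEntity, bool>>(body, entity);
    }
}

// Usage: dbContext.Orders.Where(QueryFilterBuilder.Build<Order, GetOrderQuery>(query))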
We currently have a Repository layer and an App Service layer.
The Repo gets data from a SQL Server database with Entity Framework.
The App Service layer collects data and does more: sends emails, parses flat files, etc.
Repo Layer
public Task<Sales> GetBySalesId(int salesId)
{
    var salesData = _context.Sales
        .Include(c => c.Customer)
        .FirstOrDefaultAsync(c => c.SalesId == salesId);
    return salesData;
}
Service Layer:
public async Task<SalesDto> GetSalesByIdAppService(int salesId)
{
    var salesData = await _salesRepository.GetBySalesId(salesId);
    var salesDto = _mapper.Map<SalesDto>(salesData);
    return salesDto;
}
This is currently working fine. However, tomorrow one of my colleagues may require more columns that aren't needed in my specific portion of the application.
Here, two more Include calls are added; however, I do not need ProductType or Employee.
New Repo Addition:
public Task<Sales> GetBySalesId(int salesId)
{
    var salesData = _context.Sales
        .Include(c => c.Customer)
        .Include(c => c.ProductType)
        .Include(c => c.Employee)
        .FirstOrDefaultAsync(c => c.SalesId == salesId);
    return salesData;
}
Background: One suggestion is to create another middle domain layer that everyone can utilize. At the API DTO level, everyone can have separate DTOs, which collect only the required class members from the domain. This would essentially entail creating another layer, where each DTO is a subset of the new "Domain" layer.
Another suggestion is to apply serialization only to the columns which are required. I keep hearing about this; however, how can this be done? Is it possible to apply serialization at the API controller without adding another layer? Does Newtonsoft have a tool, or is there any syntax in C#?
API Controller
public async Task<ActionResult<SalesDto>> GetSalesBySalesId(string salesId)
{
    var dto = await _service.GetBySalesId(salesId);
    return Ok(dto);
}
JsonIgnore may not work, because we all share the same DTO, and a property ignored for one area may be required in another part of the application.
Decorate the members of your class which are NOT required in the response with the [JsonIgnore] attribute. JsonIgnore is available in the System.Text.Json.Serialization namespace.
public class SalesDto
{
    [JsonIgnore]
    public string Customer { get; set; }

    public string ProductType { get; set; }
    public string Employee { get; set; }
}
Bind all properties of the model with data, and when you send it to the UI, the Customer property will not be available in the response.
We could fetch all the data from the database and then process it in the presentation layer. GraphQL could be a winner for this scenario, but it needs exploring.
OData as a middle domain layer can be useful to support this type of requirement. It gives the caller some control over the shape of the object graph that is returned. The code to achieve this is too involved to include in a single post; however, these types of requirements are often better solved by implementing an architecture that is designed specifically to deal with them, rather than rolling your own quick fix.
You can also look at GraphQL that allows greater flexibility over the shape of the returned data, however that is a steeper learning curve for the server and client side.
The problem with just using JsonIgnore is that this is a permanent definition on your DTO, which would make it hard to use the same DTO definitions for different applications that might require a different shape/view of the data.
A solution to that problem is then to create a tightly-coupled branch of DTOs for each application that inherits from the base but overrides the properties, decorating them with JsonIgnore attributes, as sketched below.
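A sketch of what that looks like; the names are hypothetical, and it assumes the serializer honors attributes on overridden properties (worth verifying for your serializer version):
// Base DTO with virtual properties, so derived app-specific DTOs can hide
// fields via serializer attributes without touching the shared definition.
public class SalesDtoBase
{
    public virtual string Customer { get; set; }
    public virtual string ProductType { get; set; }
    public virtual string Employee { get; set; }
}

// Hypothetical app-specific variant that never serializes Employee.
public class ReportingSalesDto : SalesDtoBase
{
    [JsonIgnore]
    public override string Employee { get; set; }
}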
You want to avoid tightly coupled scenarios where the UI enforces too much structure on your data model; this will make it harder to maintain your data model and can lead to many other anti-patterns down the track.
OData allows a single set of DTOs that have a consistent structure in the backend and the client end, whilst allowing the client end to reduce/omit/ignore fields that it either does not know about yet or does not want transmitted.
The key here is that the Client now has (some) control over the graph, rather than the API needing to anticipate or tightly define the specific properties that each application MUST consume.
It has a rich, standard, convention-based query language, which means many off-the-shelf products may be able to integrate directly with your API.
Some Pros for consideration:
Single Definition of DTOs
Write once, defining the full structure of your data model that all applications will have access to.
Clients can specify the fields that they need through query projections (a minimal controller sketch follows the cons list below).
Expose Business Logic through the API
This is a feature of all APIs / service layers; however, OData has a standard convention for supporting and documenting custom business Functions and Actions.
Minimal DTOs over the wire
OData serializer can be configured to only transmit properties that have values across the wire
Callers can specify which fields to include in query results
You can configure the default properties and navigation links to send for resource requests when the client does not specify the projection.
PATCH (delta) Updates
OData has a simple mechanism to support partial updates to objects: you only send the changed properties over the wire for any object.
Sure, this is really part of the previous point, but it is a really strong argument for using OData over a standard REST-based API.
You can easily inject business rules into the query validation pipeline (or the execution) to prevent updates to specific fields; this can include role-based or user-based security context rules.
.Net LINQ Support
Whilst the API is accessed via URL conventions, these conventions can be easily mapped to .Net LINQ queries on the client side via the ODataLib or Simple OData Client.
Not all possible LINQ functions and queries are supported, but there is enough there to get the job done.
This means you can easily support C# client side applications.
Versioning can be avoided
This becomes a con as well; however, additive changes to the data model can be easily absorbed into the runtime without having to publish a new version of the API.
All of the above points help allow additive changes to be freely supported by down-level clients without interruptions.
It is possible to also map some old fields across to new definitions in cases where you want to rename fields, or you can provide default implementations for fields that have been removed.
The OData model is a configuration that defines how URL requests are mapped or translated to endpoints on the API controllers, meaning there is a layer where you can change how client requests are mapped to your implementation.
Key Cons to be aware of:
Performance: an OData API uses HTTP, so there is an inherent performance hit when compared to local DLL calls, Windows services, or RPC, even when the API and application are located on the same machine.
This performance hit can be minimised and is usually an acceptable overall cost
GraphQL and most other REST/HTTP-based API architectures will have similar JSON-over-HTTP performance issues.
OData is still a good fit for JSON-over-HTTP remote API hosting scenarios, like JavaScript-based clients.
Poor support for updating related objects
While PATCH is great for single objects, it doesn't work for nested object graphs; to support updates to nested objects, you need to manually keep a repository of changes on the client side and manually call PATCH on each of the nested objects.
The OData protocol does have a provision for batching multiple queries so they can be executed as an atomic operation, but you still have to construct the individual updates.
There is a .NET-based OData client generation tool that can be used to generate a client-side repo to manage this for you.
How often are you expecting your client to send back a rich collection of objects to update in a single hit?
Is it a good idea to allow the client to do this? Does an omitted field from the client mean we should set that property to null, or that we should delete the related data?
Consider creating actions on the API to execute operations that affect multiple records to keep your clients thin and to consolidate logic so that each of your client applications does not have to re-implement the complex logic.
Version Support
Versioning can be a long-term issue if you want to allow destructive changes to your data model and DTOs. While the standard URL conventions support versioning, the code to implement it is still complex.
Versioning can be hard to implement in any API; however, the ability to set default projections for each DTO, with the client able to control its own specific projections, means that the OData model is more resilient to additive-only changes, such as adding more tables or fields.
Additive changes can be implemented without interrupting client applications.
Fixed Schema
Although clients can request specific fields to be sent over the wire for the DTOs, including navigation properties, the client cannot easily request the data to come back in a totally different structure. Clients can only really request that certain fields are omitted from results.
There is support for the $apply URL parameter to return the results of aggregate operations, but this doesn't provide full control over the shape of the results.
GraphQL does address this exact issue, moving the mapping from the API to the client side and giving the client more control over the schema.
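To make the OData suggestion concrete, a minimal sketch using the Microsoft.AspNetCore.OData package; AppDbContext and the Id key on SalesDto are assumptions, not part of the original code:
// Startup configuration: assumes Microsoft.AspNetCore.OData 8.x and a
// SalesDto that exposes an Id property to serve as the entity key.
var modelBuilder = new ODataConventionModelBuilder();
modelBuilder.EntitySet<SalesDto>("Sales");

builder.Services.AddControllers().AddOData(options => options
    .Select().Filter().OrderBy().SetMaxTop(100)
    .AddRouteComponents("odata", modelBuilder.GetEdmModel()));

// Controller: [EnableQuery] applies $select/$filter/$orderby to the
// IQueryable before execution, so only the requested columns are fetched.
public class SalesController : ODataController
{
    private readonly AppDbContext _context; // hypothetical EF Core context

    public SalesController(AppDbContext context) => _context = context;

    [EnableQuery]
    public IQueryable<SalesDto> Get() =>
        _context.Sales.Select(s => new SalesDto
        {
            Id = s.SalesId,
            Customer = s.Customer.Name,
            ProductType = s.ProductType.Name,
            Employee = s.Employee.Name
        });
}

// Example request that leaves Employee off the wire entirely:
// GET /odata/Sales?$select=Customer,ProductType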
The solution for this problem is fairly simple: you can use a half-generic query for this, without changing anything in your DTO.
First, let your repo function take a generic include, like this:
public Task<Sales> GetBySalesId(string salesId, Func<IQueryable<Sales>, IIncludableQueryable<Sales, object>> include = null)
{
    var query = _context.Sales.Where(x => x.SalesId == salesId);
    if (include != null)
        query = include(query);
    var salesData = query.FirstOrDefaultAsync();
    return salesData;
}
This can be used in the service layer like this:
public async Task<Sales> GetById(string salesId)
{
    var result = await _yourRepo.GetBySalesId(salesId,
        include: source => source
            .Include(a => a.SOMETHING)
            .Include(a => a.SOMETHING)
            .ThenInclude(a => a.SOMETHING));
    return result;
}
Now, to specialize the query result, you can either do it based on your token (if you are using authorization in your API), or you can create a number of service functions and call them based on a condition you receive in the controller, like an integer or something.
public async Task<Sales> test(string salesId)
{
    Func<IQueryable<Sales>, IIncludableQueryable<Sales, object>> include = null;
    if (UserRole == "YOU")
        include = q => q.Include(s => s.SOMETHING);
    else if (UserRole == "SomeoneElse")
        include = q => q.Include(s => s.SOMETHING).ThenInclude(x => x.SOMETHINGELSE);

    var result = await _yourRepo.GetBySalesId(salesId, include: include);
    return result;
}
First of all
Your logic is strange: you ask the DB to return all columns and then keep only the few you need, which is inefficient. Imagine you have 20 columns...
var salesData = await _salesRepository.GetBySalesId(salesId);
var salesDto = _mapper.Map<SalesDto>(salesData);
Should a sales repository be able to include customers and products?
It is probably a holy-war subject. Generally speaking, if your architecture won't allow you to switch from a DB to file storage, or from ASP.NET MVC to a console app, it most likely has design flaws (and that might be perfectly OK for your company's current needs).
Summary
You need to make more service methods that actually build results, not just transfer data from the Repo to the caller as-is.
For your case, you need your service to cover more scenarios:
AppService.GetSalesById(salesId)
AppService.GetSalesWithProductsById(salesId)
AppService.GetSalesById(salesId, includeProducts, includeCustomers)
My personal preference is to replace multiple parameters with Commands and make service methods return Results.
If your colleague were to add, say, 2 columns, it is easier to add them to an existing result; if a colleague is writing something new, it's better to introduce a new method and result.
Commands and results
A Command stands for some situation and its variations, and the service ends up looking nice and clean. This approach has been time-tested on one of my projects over the last 10 years. We have switched databases 3 times and gone through several ORMs and 2 UIs. To be specific, we use ICommand and IResult to make it super flexible.
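A sketch of the plumbing implied by that description, before the controller and service examples below (the member names here are guesses, not the author's actual types):
public interface ICommand { }
public interface IResult { }

// One command per scenario; the service signature never changes.
public class GetSalesForUICommand : ICommand
{
    public string SalesId { get; set; }
    public bool IncludeProductType { get; set; }
    public bool IncludeCustomer { get; set; }
}

// One result per scenario as well; callers cast to the concrete type.
public class UISalesTable : IResult
{
    public string Customer { get; set; }
    public string ProductType { get; set; }
}

public interface ISalesService
{
    Task<IResult> GetSales(ICommand command);
}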
API Controller
public async Task<ActionResult<UISalesTable>> GetSalesBySalesId(string salesId)
{
    UISalesTable dto = (UISalesTable) await _service.GetSales(new GetSalesForUICommand
    {
        SalesId = salesId,
        IncludeProductType = true,
        IncludeCustomer = false
    });
    return Ok(dto);
}

public async Task<ActionResult<MonthlySales>> GetSalesReport(string salesId)
{
    MonthlySales dto = (MonthlySales) await _service.GetSales(new GetMonthlySalesReportCommand
    {
        SalesId = salesId,
        // extra filters go here
    });
    return Ok(dto);
}
Service layer
You make as many DTOs as there are results (it costs nothing)
public async Task<UISalesTable> GetSales(GetMonthlySalesReportCommand command)
{
    UISalesTable result = new UISalesTable();

    // good place to use the Builder pattern
    result.SalesByMonths = .....;
    TopProductsCalculator calc = new TopProductsCalculator();
    result.TopProducts = calc.Calculate(command.FromDate, command.ToDate);
    result.Etc = .....;

    return result;
}
It all depends
Unfortunately, there is no recipe. It is always a tradeoff between quality and time-to-market. In recent years I have preferred to keep things simple; I even abandoned the idea of repositories and now work with the DataContext directly, because if I were to switch to, say, MongoDB, I would have to rewrite every single repository method, and that happens only a few times in a lifetime.
We are at the design stage of developing an application.
We decided to implement DDD oriented design.
Our application has a service layer.
Each method in the service has its own task.
Sample
Let's consider a user service.
This method gets all users:
public IEnumerable<User> GetAll()
{
    // codes
}
Now, what if I want to sort all users by name?
Option-1
Use another method
public IEnumerable<User> GetAllAndOrderByAsc()
Or
public IEnumerable<User> GetAllAndOrderByDesc()
A different method for each different situation.
It doesn't look good at all.
Option-2
Continue the query at the API or application level
public IQueryable<User> GetAll()
{
    // codes
}
API or application level:
var users = from u in _userService.GetAll()
            select u;

switch (sortOrder)
{
    case "name_desc":
        users = users.OrderByDescending(u => u.Name);
        break;
    case "name_asc":
        users = users.OrderBy(u => u.Name);
        break;
    default:
        users = users.OrderBy(u => u.Name);
        break;
}
The options that come to my mind are limited to these.
Am I making a mistake somewhere, or have I failed to grasp the logic? Would you help me?
You're not crazy: this is a common problem, and the options you've listed are legitimate, each with pros and cons, and commonly used in practice.
Depending on the context of your app, it's theoretically possible to expose the IQueryable<> all the way to the presentation layer (using OData, for example, if you're running a web service). But I find that it's wiser to exercise a little more control over the API, to avoid the possibility of having someone hit the endpoint with a request that basically downloads your entire database.
In your service layer, you want to think about what use cases you really expect people to use.
It's rare for someone to literally need to see all the users: that might only happen when it's part of an export process, and rather than returning a collection you might want to return an IObservable that pushes values out as they emerge in a stream from the database.
More often, you're going to want to get a page of users at a time to show in the UI. This will probably need to have sorting and searching applied to it as well.
You'll almost certainly want a way to get one user whose ID you already know (maybe one that was picked from the paged list).
You might also expect to need to query based on specific criteria. Maybe people should typically only see "Active" users. Maybe they should be thinking in terms of specific groupings of users.
In my experience, it has worked well to make my Service layer represent these Domain-level assumptions about how the application should think about things, and write interfaces that support the use cases you actually want to support, even if that means you often find yourself adding new Service methods to support new features. For example:
Task<IReadOnlyCollection<User>> GetPageOfActiveUsersAsync(
    PagingOptions pagingOptions,
    SortOptions sortOptions,
    string searchTerm,
    IReadOnlyCollection<int> organizationIds);

Task<User> GetActiveUserByUsernameAsync(string username);
Task<User> GetActiveUserByIdAsync(int userId);
IObservable<User> GetAllUsersForAuditExport();
The project I currently work on uses a class like this:
public class SortSettings
{
    public string SortField { get; set; }
    public SortDirection SortDirection { get; set; } = SortDirection.Asc;
}
Then, depending on what we're querying, we may use reflection, a switch statement, or some other mechanism to pick a SortExpression (Expression<Func<TSource, TResult>>), and then apply the sort like this:
query = sortOptions.SortDirection == SortDirection.Asc
    ? query.OrderBy(sortExpression)
    : query.OrderByDescending(sortExpression);
That logic is fairly easy to extract out into a separate method if you find you're using it in a lot of places.
You'll likely find other common features like Searching and Paging require a similar approach.
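A sketch of that extraction, reusing the SortSettings class above and adding a hypothetical PagingOptions for the paging case:
using System.Linq.Expressions;

public static class QueryableExtensions
{
    // Applies the SortSettings shown above to any IQueryable.
    public static IQueryable<TSource> ApplySort<TSource, TKey>(
        this IQueryable<TSource> query,
        SortSettings settings,
        Expression<Func<TSource, TKey>> sortExpression)
    {
        return settings.SortDirection == SortDirection.Asc
            ? query.OrderBy(sortExpression)
            : query.OrderByDescending(sortExpression);
    }

    // Hypothetical PagingOptions with a 1-based PageNumber and a PageSize.
    public static IQueryable<TSource> ApplyPaging<TSource>(
        this IQueryable<TSource> query, PagingOptions paging)
    {
        return query
            .Skip((paging.PageNumber - 1) * paging.PageSize)
            .Take(paging.PageSize);
    }
}

// Usage: query = query.ApplySort(sortSettings, u => u.Name).ApplyPaging(paging);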
I'm building some sort of a complicated inventory system, accepting offers from suppliers, etc.
Normally I would have basic permissions like Read/Write/Edit/Delete, and I could easily update read and write bool variables on each page to check: if true, do this; else, do that.
But that's not the case. I have some permissions like (Owner, SamePseudoCity), which respectively mean that the user is allowed access only to the records he created, and that only the records belonging to the same PseudoCity as the user are returned.
Currently the UI has local variables with the applicable permissions, and when the UI requests some data from the DB, it calls the BL, which first gets the permissions the user is entitled to and binds them to the UI/page local variables.
It also checks the permission list: if it contains 'Owner', it will get the records created by the UserID; if it contains 'SamePseudoCity', it will get all the records in the same city.
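One way to tighten this up is to push those rules into the query itself rather than into page-local variables, so unauthorized rows never leave the database; a sketch, where Record, CurrentUser, and the permission names mirror the description above but are otherwise assumptions:
public IQueryable<Record> ApplyPermissionFilter(
    IQueryable<Record> records, CurrentUser user, ISet<string> permissions)
{
    bool owner = permissions.Contains("Owner");
    bool sameCity = permissions.Contains("SamePseudoCity");

    if (owner && sameCity)
        return records.Where(r => r.CreatedByUserId == user.Id
                               || r.PseudoCity == user.PseudoCity);
    if (owner)
        return records.Where(r => r.CreatedByUserId == user.Id);
    if (sameCity)
        return records.Where(r => r.PseudoCity == user.PseudoCity);

    return records.Where(r => false); // no applicable permission: nothing visible
}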
I'm not sure if this is a good design and if I can modify it. I know there is no right or wrong here but there's smelly-design, okay-design, and better-design. So I'm just looking for some ideas if someone implemented this before.
It took a lot to explain my problem; if it's still not clear, please let me know and I can post some snippets from my code.
Getting to grips with your requirements
What you need is an authorization framework that is capable enough of handling your requirements. In your post, you mention that you have
Permissions e.g. Read/Write/Edit/Delete
other parameters, e.g. Owner and SamePseudoCity, that mean different things:
the user is allowed access only to the records he created
only the records that belong to the same PseudoCity as the user are returned.
Attribute-based access control
You need to turn to ABAC (attribute-based access control), which will give you the ability to define your requirements using attributes (key-value pairs) and policies. The policies are maintained and evaluated inside a third-party engine called the Policy Decision Point (PDP).
ABAC is a standard defined by NIST. It is an evolution of RBAC (role-based access control). XACML, the eXtensible Access Control Markup Language, is an implementation of ABAC.
You can apply ABAC to different layers in your architecture, from the UI (presentation tier) to the business tier and all the way down to your data tier.
Policies are expressed in ALFA or XACML.
Example:
You first define your attributes:
namespace stackoverflow{
    namespace user{
        attribute identifier{
            category = subjectCat
            id = "user.identifier"
            type = string
        }
        attribute city{
            category = subjectCat
            id = "user.city"
            type = string
        }
    }
    namespace item{
        attribute owner{
            category = resourceCat
            id = "item.owner"
            type = string
        }
        attribute city{
            category = resourceCat
            id = "item.city"
            type = string
        }
    }
    attribute actionId{
        category = actionCat
        id = "actionId"
        type = string
    }
Then you define the policies that use these attributes
    /**
     * Control access to the inventory
     */
    policy inventory{
        apply firstApplicable
        /**
         * Anyone can view a record they own
         */
        rule viewRecord{
            target clause actionId == "view"
            condition user.identifier == item.owner
            permit
        }
        /**
         * Anyone can view a record that is in the same city
         */
        rule viewRecordsSameCity{
            target clause actionId == "view"
            condition user.city == item.city
            permit
        }
    }
}
Next step
You then need to deploy a Policy Decision Point / Policy Server. You can choose from several:
Axiomatics Policy Server (disclaimer: I work for Axiomatics)
SunXACML
Oracle Entitlements Server
WSO2 Identity Server
If you want to apply your policies to both the UI and the database, then you can use a feature called dynamic data masking via an Axiomatics product called Data Access Filter MD.
Update
The OP later commented the following
ABAC" never heard of it and it sounds brilliant but I assume I need access to a dedicated server or a VPS to install the PDP, right ? .. I know this can be too much to ask but I have 3 questions, Can I programmatically change the rules ? Is it possible to implement this scenario where Every product has a pseudo city and every manager also has a pseudo city and managers are only allowed access to their own city products ? is it possible do simple read/write/edit rules and hide and show UI based on that ? –
So first off, let's start with the ABAC architecture and its components:
The PEP is the policy enforcement point, the piece responsible for protecting your apps, APIs, and databases. It enforces authorization decisions.
The PDP is the policy decision point, the piece responsible for evaluating policies and reaching decisions. It processes requests it receives from the PEP and returns authorization decisions (Permit, Deny).
The PAP is the policy administration point, where you define and manage your policies.
The PIP is the policy information point. It's the interface the PDP uses to connect to third-party attribute sources, e.g. a user LDAP, a database, or a web service. The PDP uses the PIP when it needs to know more about the user or the resource.
I assume I need access to a dedicated server or a VPS to install the PDP, right?
Yes, you would install the PDP on a server (or the cloud). It becomes part of your infrastructure.
Can I programmatically change the rules?
Yes you can. The Axiomatics PAP has an API that you can use to upload, export, and create policies programmatically.
Is it possible to implement this scenario, where every product has a pseudo-city and every manager also has a pseudo-city, and managers are only allowed access to their own city's products?
Yes, that is actually what I wrote in my original example, and that is the beauty of ABAC. You write a single policy that works no matter the number of cities: a user can view a record if user.city == record.city.
Is it possible to do simple read/write/edit rules and hide and show UI based on that?
Yes, you can use any number of attributes in your policies. So for instance you could have:
Deny users access to records outside their city
users can view records
a user can edit a record they own
a user can approve a record they do not own
You can use this logic to drive authorization in your UI, your business layer, or even the data layer. So you could ask the PDP:
Can I show the Edit button?
Can I show the Details button?
Can I show the Delete button?
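Purely as an illustration of that flow, a hypothetical PDP client interface a UI could consult before rendering buttons; real XACML PDPs expose a request/response protocol (e.g. the JSON profile of XACML) rather than this exact shape:
// Hypothetical PDP client; a real deployment would marshal these calls into
// XACML request/response messages.
public interface IPolicyDecisionPoint
{
    Task<bool> IsPermittedAsync(string userId, string action, string resourceId);
}

// UI-side usage: derive button visibility from PDP decisions.
public async Task<(bool CanEdit, bool CanViewDetails, bool CanDelete)> GetButtonStatesAsync(
    IPolicyDecisionPoint pdp, string userId, string recordId)
{
    return (
        CanEdit: await pdp.IsPermittedAsync(userId, "edit", recordId),
        CanViewDetails: await pdp.IsPermittedAsync(userId, "view", recordId),
        CanDelete: await pdp.IsPermittedAsync(userId, "delete", recordId));
}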
I have a two part application. One part is a web application (C# 4.0) which runs on a hosted machine with a hosted MSSQL database. That's nice and standard. The other part is a Windows Application that runs locally on our network and accesses both our main database (Advantage) and the web database. The website has no way to access the Advantage database.
Currently this setup works just fine (provided the network is working), but we're now in the process of rebuilding the website and upgrading it from a Web Forms / .NET 2.0 / VB site to an MVC3 / .NET 4.0 / C# site. As part of the rebuild, we're adding a number of new tables where the internal database has all the data and the web database has a subset thereof.
In the internal application, tables in the database are represented by classes which use reflection and attribute flags to populate themselves. For example:
[AdvantageTable("warranty")]
public class Warranty : AdvantageTable
{
[Advantage("id", IsKey = true)]
public int programID;
[Advantage("w_cost")]
public decimal cost;
[Advantage("w_price")]
public decimal price;
public Warranty(int id)
{
this.programID = id;
Initialize();
}
}
The AdvantageTable class's Initialize() method uses reflection to build a query based on all the keys and their values, and then populates each field from the database column specified. Updates work similarly: we call AdvantageTable.Update() on whichever object, and it handles all the database writes. It works quite well, hides all the standard CRUD, and lets us rapidly create new classes when we add a new table. We'd rather not change it, but I'm not going to entirely rule it out if there's a solution that would require it.
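For readers unfamiliar with the pattern, a rough, purely illustrative sketch of how such an attribute-driven Initialize() might build its SELECT; the attribute classes here are stand-ins for the asker's own, not their actual code:
using System;
using System.Linq;
using System.Reflection;

// Hypothetical stand-ins for the asker's attributes.
[AttributeUsage(AttributeTargets.Class)]
public class AdvantageTableAttribute : Attribute
{
    public string Name { get; }
    public AdvantageTableAttribute(string name) => Name = name;
}

[AttributeUsage(AttributeTargets.Field)]
public class AdvantageAttribute : Attribute
{
    public string Column { get; }
    public bool IsKey { get; set; }
    public AdvantageAttribute(string column) => Column = column;
}

public abstract class AdvantageTableBase
{
    // Builds the SELECT used to populate the object; executing it and
    // assigning each column back via FieldInfo.SetValue() is omitted.
    protected string BuildSelect()
    {
        Type type = GetType();
        string table = type.GetCustomAttribute<AdvantageTableAttribute>().Name;

        var mapped = type.GetFields()
            .Select(f => f.GetCustomAttribute<AdvantageAttribute>())
            .Where(a => a != null)
            .ToList();

        string columns = string.Join(", ", mapped.Select(a => a.Column));
        string where = string.Join(" AND ",
            mapped.Where(a => a.IsKey).Select(a => $"{a.Column} = @{a.Column}"));

        return $"SELECT {columns} FROM {table} WHERE {where}";
    }
}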
The web database needs to have this table, but doesn't need the cost data. I could create a separate class that's backed by the web database (via stored procedures, reflection, LINQ to SQL, ADO data objects, etc.), but there may be other functionality in the Warranty object which I want to behave the same way regardless of whether it's called from the website or the internal app, without the need to maintain two sets of code. For example, we might change the logic of how we decide which warranty applies to a product; I want to be able to create and test that in only one place, not two.
So my question is: Can anyone think of a good way to allow this class to sometimes be populated from the Advantage database and sometimes the web database? It's not just a matter of connection strings, because they have two very different methods of access (even aside from the reflection). I considered adding [Web("id")] type tags to the Advantage tags, and only putting them on the fields which exist in the web database to designate its columns, then having a switch of some kind to control which set of logic is used for reading/writing, but I have the feeling that that would get painful (Is this method web-safe? How do I set the flag before instantiating it?). So I have no ideas I like and suspect there's a solution I'm not even aware exists. Any input?
I think the fundamental issue is that you want to put business logic in the Warranty object, which is a data layer object. What you really want to do is have a common data contract (could be an interface in this case) that both data sources support, with logic encapsulated in a separate class/layer that can operate with either data source. This side-steps the issue of having a single data class attempt to operate with two different data sources by establishing a common data contract that your business layer can use, regardless of how the data is pulled.
So, with your example, you might have an AdvantageWarranty and WebWarranty, both of which implement IWarranty. You have a separate WarrantyValidator class that can operate on any IWarranty to tell you whether the warranty is still valid for given conditions. Incidentally, this gives you a nice way to stub out your data if you want to unit test your business logic in the WarrantyValidator class.
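A minimal sketch of that shape, with illustrative names and members:
public interface IWarranty
{
    int ProgramId { get; }
    decimal Price { get; }
}

// Each data source gets its own implementation.
public class AdvantageWarranty : IWarranty
{
    public int ProgramId { get; set; }
    public decimal Price { get; set; }
    public decimal Cost { get; set; } // internal-only field, absent from the web type
}

public class WebWarranty : IWarranty
{
    public int ProgramId { get; set; }
    public decimal Price { get; set; }
}

// Business logic lives once, against the contract.
public class WarrantyValidator
{
    public bool AppliesTo(IWarranty warranty, int productProgramId) =>
        warranty.ProgramId == productProgramId;
}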
The solution I eventually came up with was two-fold. First, I used LINQ to SQL to generate objects for each web table. Then I derived a new class from AdvantageTable, called AdvantageWebTable<TABLEOBJECT>, which contains the web-specific code, and added web-specific attributes. So now the class looks like this:
[AdvantageTable("warranty")]
public class Warranty : AdvantageWebTable<WebObjs.Warranty>
{
[Advantage("id", IsKey = true)][Web("ID", IsKey = true)]
public int programID;
[Advantage("w_cost")][Web("Cost")]
public decimal cost;
[Advantage("w_price")][Web("Price")]
public decimal price;
public Warranty(int id)
{
this.programID = id;
Initialize();
}
}
There are also hooks for populating web-only fields right before saving to the web database, and there will be (but isn't yet, since I haven't needed it) a LoadFromWeb() function which uses reflection to populate the fields.