Restrict access to a specific assembly - C#

I'm working on a WinForms project with SQL Server, split into several assemblies.
The first assembly, Entities, contains DTOs like:
public class Attribution
{
    public short UserId { get; set; }
    public User User { get; set; }
}

public class User
{
    public short Id { get; set; }
}
The second assembly, Repository, accesses the SQL Server database.
The third assembly, Service, is the link between the previous two.
There are other layers, but that is not the point; of course I need the DTOs everywhere in the app.
In SQL Server, Attribution.UserId and User.Id are the same data, located in two separate tables linked by an inner join.
Attribution.UserId must be public because I need access from Repository, Service, etc. But I don't need it in the "logical" part of the app; what I need is Attribution.User.
At the moment I have a UserService class with a GetUser() method, and I call this method to get the user in my AttributionService.GetAttribution() method.
Is there a way to restrict access to the Attribution.UserId property to the Service assembly? Or is it a kind of "good practice violation" to query a User DTO in the AttributionService class?
Many thanks for your recommendations.

One option would be to make the setter of the property internal and then use the InternalsVisibleTo attribute to grant access to internals to the Repository assembly.
Another, less technical and more logical, option would be to make the setter private and let the class's constructor be the only way to set it. That way, your repository can build users, but nobody can modify the ID later.
As a last option, you could create an interface that contains only what the non-repository classes should have access to and pass that around. I'm not a big fan of this, because it means you have to cast it back to your concrete class in the repository, and that basically means your repository is lying (saying it accepts an ISomething, but then throwing if the ISomething is not the exact, concrete Something it expects).
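A sketch of the first option, assuming the assemblies are literally named Entities, Repository and Service (adjust to your real assembly names, and add the public key if the assemblies are strong-named):

```csharp
// In the Entities assembly (e.g. in AssemblyInfo.cs or any source file).
using System.Runtime.CompilerServices;

// Grant Repository and Service access to internal members.
[assembly: InternalsVisibleTo("Repository")]
[assembly: InternalsVisibleTo("Service")]

public class Attribution
{
    // Readable everywhere, but assignable only from Entities,
    // Repository and Service.
    public short UserId { get; internal set; }

    public User User { get; set; }
}

public class User
{
    public short Id { get; set; }
}
```

Note that this restricts who can write UserId; if you want to restrict reading it as well, the whole property would have to be internal, which also hides it from the UI layers.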

Related

Viewmodels vs Domain models vs entities [duplicate]

When I use Web (MVC), I always create a separate layer of classes. These classes are often the same as the DTO classes, but with attributes like [Display(Name = "Street")] and validation. But for a Web API, Display attributes are not necessary, and validation can be handled by FluentValidation. Should an API controller return ViewModel classes, or are DTO classes fine too?
The answer, as always, is... it depends.
If your API is serving multiple clients, apps, etc., then returning DTOs is the better option.
ViewModels are specific to the MVC client and should already be prepared for display, meaning the data should already be formatted in a specific way, some fields may be combined, and they should satisfy whatever requirements the display pages have. They are called ViewModels for a reason. The point is that they are rarely exactly the same as the data the API returns, which should be a bit more generic and follow a certain pattern to make sense to its users.
If your ViewModels are exactly the same and you only have one client, then it's up to you whether you want to create a set of duplicated classes just to avoid having the attributes.
Mapping from DTO to ViewModel and vice versa is not exactly complicated, but the process does introduce one more complication, one more layer.
Don't forget one thing, though. API DTOs are supposed to return the data they have on an entity regardless of the requirements of any UI. Requirements can change anyway, with new fields added or discarded. You're more than likely to leave the API alone when that happens and simply change your ViewModels.
Your ViewModels are specific to a UI page and should contain only the data required by that page. This means that you can end up with multiple ViewModels for the same data; it's just that the display requirements are different for each.
My vote goes towards keeping the ViewModels and DTOs separate, even if, at this point in time, they are exactly the same. Things always change, and this is one of those things you can actually be ready for.
Actually, it depends on the application's architecture how we want to return the response. In this case, yes, we can return DTO classes, but I think that would not be a good approach; instead, we should create separate resource classes that map from the DTOs and are returned to the caller. See the example below:
public class CustomerDTO
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int DepartmentId { get; set; }
}

public class CustomerResource
{
    [JsonProperty("name")]
    public string Name { get; set; }

    [JsonProperty("department")]
    public string Department { get; set; }
}
Suppose we have the CustomerDTO class and we want to return a response in the following JSON format:
{
    "name": "Abc xyz",
    "department": "Testing"
}
So in this case we should have a separate class that is returned as the response to the end user, like the CustomerResource I created. In this scenario we create a mapper that maps the DTO to the resource object.
With this implementation we can also test the resources independently.
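A minimal hand-written mapper in that spirit might look like this; the department name is passed in as a parameter, since the answer does not say how DepartmentId is resolved to a name:

```csharp
public static class CustomerMapper
{
    // Maps the data-layer DTO to the response resource. The department
    // name must be resolved elsewhere (e.g. a lookup by dto.DepartmentId).
    public static CustomerResource ToResource(CustomerDTO dto, string departmentName)
    {
        return new CustomerResource
        {
            Name = dto.Name,
            Department = departmentName
        };
    }
}
```

In a larger project a library such as AutoMapper can replace hand-written mappers like this one, at the cost of the mapping being less explicit.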

Full Anemia - Where can I move this data out of my Model?

I was given a few dozen legacy SQL statements that are each hundred(s) of lines long. Each SQL statement is mapped to code with its own unique POCO in a shared Models project.
For example, the SQL Select Name, Birthday From People has an equivalent POCO in the Models project:
public class BirthdayPerson : SqlResultBase {
    public string Name { get; set; }
    public DateTime Birthday { get; set; }

    //SqlResultBase abstraction:
    public string HardcodedSql { get {
        return "Select Name, Birthday From People";
    }}
}
In my DAL, I have a single generic SQL runner whose <T> represents the POCO for the SQL. So my business logic can call GetSqlResult<BirthdayPerson>():
public IEnumerable<T> GetSqlResult<T>() where T : SqlResultBase, new() {
    return context.Database.SqlQuery<T>((new T()).HardcodedSql);
}
The problem is that my Models library is used across the application, and I don't want SQL exposed across the application in that HardcodedSql property.
This is the architecture I'm using:
First, you have to separate your model (i.e. the POCOs) from the SQL, which actually belongs in the DAL. Inversion of Control is the right way to do this. Instead of a generic SQL runner, it is better to register mappings in the IoC container from abstract repositories (e.g. IRepository<MyPOCO>) to implementations that contain the SQL.
EDIT: To be more concrete, a possible solution:
Place all the SQL in separate file(s) inside the DAL, for example in a set of embedded resource files with a naming convention, e.g. Legacy-{0}.sql where {0} is the name of the POCO.
Create a generic implementation of the legacy repository that uses the POCO name as a key and picks the corresponding Legacy-{0}.sql file from the resource set. Note that there may be other implementations as well that use other data access techniques, like an ORM.
In the composition root, explicitly register all mappings from the legacy POCOs to the legacy implementation: IRepository<MyPOCO1> => LegacyRepo<MyPOCO1>; IRepository<MyPOCO2> => LegacyRepo<MyPOCO2>; etc. Moreover, you may register other mappings from non-legacy entities to other implementations of the repository.
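A sketch of the convention-based legacy repository described in these steps. The interface, class and resource names (IRepository, LegacyRepo, the "Dal.Sql" resource prefix) are illustrative, not from the original project; the SQL execution reuses the context.Database.SqlQuery call from the question:

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.IO;
using System.Linq;

public interface IRepository<T>
{
    IEnumerable<T> GetAll();
}

public class LegacyRepo<T> : IRepository<T> where T : class
{
    private readonly DbContext context;

    public LegacyRepo(DbContext context)
    {
        this.context = context;
    }

    public IEnumerable<T> GetAll()
    {
        // Pick the embedded resource by the Legacy-{POCO} naming convention.
        var resourceName = "Dal.Sql.Legacy-" + typeof(T).Name + ".sql";
        using (var stream = typeof(LegacyRepo<T>).Assembly
                   .GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            var sql = reader.ReadToEnd();
            return context.Database.SqlQuery<T>(sql).ToList();
        }
    }
}
```

The container registration then maps IRepository&lt;BirthdayPerson&gt; to LegacyRepo&lt;BirthdayPerson&gt;, and so on for each legacy POCO.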
The simplest solution would be to make HardcodedSql internal instead of public, so it's only visible within the DAL project. If the DAL is a separate project from the Models project, you could use InternalsVisibleTo to expose it to the DAL. This assumes you can configure your project structure accordingly.
I suggest two possible ways of dealing with the question.
For the first method, I would change how the SQL is accessed and wrap the call in a local method. The class could have a function such as public IEnumerable GetFromSql(); you could pass in a context or create a new one (I am not sure how you have set up EF in your project). That way you never publicly expose the raw SQL: you make it a private variable or local constant and simply access it from within the function.
As a second option (I have actually done this, and it turned out pretty well), I moved all the SQL into views and used EF to access them. That way there was no SQL pollution in my code.
Since the models already exist, the results from calling the views would match the types that you already have.
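With the view-based approach, the existing POCO maps straight onto the view; a minimal EF6 sketch, where the view name vw_BirthdayPeople is an assumption for illustration:

```csharp
using System.Data.Entity;

public class LegacyReportContext : DbContext
{
    public DbSet<BirthdayPerson> BirthdayPeople { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the entity to the database view instead of a table.
        // EF treats it like a read-only table as long as you only query it.
        modelBuilder.Entity<BirthdayPerson>().ToTable("vw_BirthdayPeople");
    }
}
```

Queries then go through LINQ as usual (context.BirthdayPeople.Where(...)), and the SQL lives entirely in the database.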

Add property to POCO class at runtime

I selected ServiceStack OrmLite for my project, which is a purely data-oriented application. I want to allow the end user to create his own object types, defined in an XML format, that will be used to generate classes at runtime using CodeDOM.
I will also be defining some "system" objects required by the application (e.g. User), but I cannot foresee all the properties the end user will use, and therefore I am looking for a way to allow extending the classes I create at design time. A sample is below:
public class User
{
    public Guid Uid { get; set; }
    public String Username { get; set; }
    public String Password { get; set; }
}
The end user wants to have an Email and an Address. He should be able to add the two properties to the class above, so that the whole class becomes the following (which can still be used by OrmLite, since it allows overwriting):
public class User
{
    public Guid Uid { get; set; }
    public String Username { get; set; }
    public String Password { get; set; }
    public String Email { get; set; }
    public String Address { get; set; }
}
I know there is a risk of crashing the system by doing so (if the class is already instantiated), so I am looking for the best way to avoid this issue while meeting the need I have.
It seems that there are two parts to what you're doing here. You need to create types dynamically to support the additional properties. You also need to ensure that you never end up with duplicate types in your AppDomain, i.e. two different definitions of User.
Runtime type generation
The various suggestions already given handle how to create the types. In one project, we had something similar. We created a base class that had the core properties and a dictionary to store the 'extension' properties. Then we used Reflection.Emit to create a derived type that had the desired properties. Each property definition simply read from or wrote to the dictionary in the base class. Since Reflection.Emit entails writing low-level IL code, it seems complex at first. We wrote some sample derived classes in another class library and compiled them. These were examples of what we'd actually need to achieve at runtime. Then we used ildasm.exe to see what code the compiler produced. This made it quite easy to work out how we could generate the same code at runtime.
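The dictionary-backed approach described above can be sketched roughly as follows. All names here (DynamicBase, TypeFactory, GetExtra/SetExtra) are illustrative, not from the original project, and the IL is the pattern you would see from ildasm on a hand-written example:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Reflection.Emit;

// Base class holding the 'extension' values; generated properties
// delegate to these two methods.
public class DynamicBase
{
    private readonly Dictionary<string, object> extra = new Dictionary<string, object>();

    public object GetExtra(string name)
    {
        object value;
        return extra.TryGetValue(name, out value) ? value : null;
    }

    public void SetExtra(string name, object value)
    {
        extra[name] = value;
    }
}

public static class TypeFactory
{
    public static Type CreateDerived(string typeName, IEnumerable<string> propertyNames)
    {
        var asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("Dyn_" + Guid.NewGuid().ToString("N")),
            AssemblyBuilderAccess.Run);
        var module = asm.DefineDynamicModule("Main");
        var tb = module.DefineType(typeName, TypeAttributes.Public, typeof(DynamicBase));

        var getExtra = typeof(DynamicBase).GetMethod("GetExtra");
        var setExtra = typeof(DynamicBase).GetMethod("SetExtra");
        const MethodAttributes attrs =
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig;

        foreach (var name in propertyNames)
        {
            var prop = tb.DefineProperty(name, PropertyAttributes.None, typeof(object), null);

            // get_<name>: return this.GetExtra("<name>");
            var getter = tb.DefineMethod("get_" + name, attrs, typeof(object), Type.EmptyTypes);
            var il = getter.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);
            il.Emit(OpCodes.Ldstr, name);
            il.Emit(OpCodes.Call, getExtra);
            il.Emit(OpCodes.Ret);
            prop.SetGetMethod(getter);

            // set_<name>: this.SetExtra("<name>", value);
            var setter = tb.DefineMethod("set_" + name, attrs, null, new[] { typeof(object) });
            il = setter.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);
            il.Emit(OpCodes.Ldstr, name);
            il.Emit(OpCodes.Ldarg_1);
            il.Emit(OpCodes.Call, setExtra);
            il.Emit(OpCodes.Ret);
            prop.SetSetMethod(setter);
        }

        return tb.CreateType();
    }
}
```

The generated type can then be instantiated with Activator.CreateInstance and its properties accessed via reflection or dynamic.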
Avoiding namespace collisions
Your second challenge is to avoid having duplicate type names. We appended a guid (with invalid characters removed) to the name of each generated type to make sure this never happened. Easy fix, though I don't know whether you could get away with that with your ORM.
If this is server code, you also need to consider the fact that assemblies are never unloaded in .NET. So if you're repeatedly generating new types at runtime, your process will continue to grow. The same will happen in client code, but this may be less of an issue if you don't expect the process to run for an extended period of time.
I said assemblies are not unloaded; however, you can unload an entire AppDomain. So if this is server code you could have the entire operation run in its own appdomain, then tear it down afterwards to ensure that the dynamically created types are unloaded.
Check out ExpandoObject, which provides dynamic language support for doing something like this. You can use it to add additional properties to your POCOs at runtime. Here's a link on using .NET's DLR features: http://msdn.microsoft.com/en-us/library/system.dynamic.expandoobject%28v=vs.100%29.aspx
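A minimal sketch of what that looks like; the member names are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

class ExpandoDemo
{
    static void Main()
    {
        // Members can be attached at runtime through the dynamic view...
        dynamic user = new ExpandoObject();
        user.Username = "jsmith";
        user.Email = "jsmith@example.com"; // added on the fly

        // ...and read back later through the dictionary view.
        var fields = (IDictionary<string, object>)user;
        Console.WriteLine(fields["Email"]); // prints "jsmith@example.com"
    }
}
```

The trade-off versus generated types is that ExpandoObject members are late-bound, so there is no compile-time checking and a plain ORM cannot map them to columns directly.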
Why not use key/value pairs for all its properties, or at least the dynamic ones?
http://msdn.microsoft.com/en-us/library/system.collections.hashtable.aspx
You can do it the way you're describing with Reflection, but it will take a performance hit; the key/value approach also allows removing properties.
The project I'm currently working on has a similar requirement. We have a system already in production, and a client requested additional fields.
We solved this by simply adding a CustomFields property to our model.
public class Model : IHasId<Guid>
{
    [PrimaryKey]
    [Index(Unique = true)]
    public Guid Id { get; set; }

    // Other Fields...

    /// <summary>
    /// A store of extra fields not required by the data model.
    /// </summary>
    public Dictionary<string, object> CustomFields { get; set; }
}
We've been using this for a few weeks with no issues.
An additional benefit we found was that each row could have its own custom fields, so we could handle them on a per-record basis instead of requiring them for every record.
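A usage sketch of that per-record flexibility; the field names here are examples, not from the original system:

```csharp
using System;
using System.Collections.Generic;

class CustomFieldsDemo
{
    static void Main()
    {
        // Each record carries only the extra fields it actually needs.
        var record = new Model
        {
            Id = Guid.NewGuid(),
            CustomFields = new Dictionary<string, object>
            {
                { "Email", "user@example.com" },
                { "Address", "12 Main St" }
            }
        };

        Console.WriteLine(record.CustomFields["Email"]); // prints "user@example.com"
    }
}
```

OrmLite serializes a complex property like this dictionary into a blobbed text column by default, so adding new keys requires no schema change.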

Inheritance in .Net EntityFramework 4.0 for similar database tables

I am looking for a solution in the following problem:
I have many tables that differ from one another in few or no columns (why this happens is not the issue, as I neither designed the database nor can change it).
Example:
Table User: Columns {name, surname, age}
Table OldUser: Columns {name, surname, age, lastDateSeen}
etc.
Is there any way to tell Entity Framework 4.0 in Visual Studio to extend a base class consisting of _name, _surname and _age fields and their corresponding properties, so that I can use that class for batch jobs in the code?
My current solution is to write this class myself and use converters to pass the values from the persistent objects to its objects. It works, but it is not elegant and I don't like it.
I come from a java/hibernate environment where this is basic functionality.
(For future reference, can the same thing be done if I want the classes to implement an interface?)
Thanks in advance.
Since your RDBMS (at least SQL Server 2008 and older) doesn't allow for table inheritance, I would recommend that you do not use inheritance in the DB model in C#. This is especially recommended when you cannot control the design of the tables.
Instead, use an interface if you actually have clients of those classes who will benefit from the abstraction. Not being able to control the design of the DB makes this less valuable, though, because the DB designer could change the tables, thereby making your EF classes no longer implement the interface:
public interface IUser {
    string Name { get; }
    // etc...
}

public class User : IUser {
    public string Name { get; set; }
    // etc...
}

public class OldUser : IUser {
    public string Name { get; set; }
    // rest of IUser
    public DateTime? LastSeenOn { get; set; }
}

How do I work with objects with composition?

My name is Aderson, and at the moment I have a question about composition as it relates to performance. In this model
I have a simple UserBase and DepartmentBase. UserBase has a property of type DepartmentBase, and DepartmentBase has a list property of type DepartmentBase.
When I have an instance of UserBase, it loads information about the department, but then DepartmentBase loads information about its departments too.
Now, when I have a list of UserBase for all users, the process loads again for every user. Is this good practice, or what is the better approach?
(diagram: http://img146.imageshack.us/img146/3949/diagram.jpg)
I don't know if it is a better (or even applicable) approach, but I sometimes make brief versions of objects that I use for references from other objects. The brief version acts as a base class for the full version of the object and will typically contain the information that would be visible in a listing of such objects. It will often not contain lists of other objects, and any references to other classes will usually refer to the brief version of that class. This eliminates some unnecessary data loading, as well as some cases of circular references. Example:
public class DepartmentBrief
{
    public string Name { get; set; }
}

public class Department : DepartmentBrief
{
    public Department()
    {
        Departments = new List<DepartmentBrief>();
    }

    public IEnumerable<DepartmentBrief> Departments { get; private set; }
}

public class UserBase
{
    public DepartmentBrief Department { get; set; }
}
One difference between this approach and having full object references paired with lazy loading is that you will need to explicitly load extra data when it is needed. If you have a UserBase instance, and you need the department list from the Department of that UserBase, you will need to write some code to fetch the Department object that the DepartmentBrief object in UserBase is identifying. This could be considered a downside, but I personally like the fact that it will be clear when looking at the code exactly when it is going to hit the data store.
It depends: if you need all the department data directly after loading the user list, then this is the best approach. If you don't need it immediately, you are better off using lazy loading for the department data. This means you postpone loading the department data until an explicit method (or property) is called.
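One way to get that lazy behavior is to wrap the department load in Lazy&lt;T&gt;; the loader delegate below stands in for whatever data-access call the application already has, and the class shapes are simplified from the question:

```csharp
using System;

public class DepartmentBase
{
    public string Name { get; set; }
}

public class UserBase
{
    private readonly Lazy<DepartmentBase> department;

    public UserBase(Func<DepartmentBase> loadDepartment)
    {
        // Nothing is loaded yet; the delegate is just stored.
        department = new Lazy<DepartmentBase>(loadDepartment);
    }

    // The load runs only the first time this property is read;
    // subsequent reads return the cached instance.
    public DepartmentBase Department
    {
        get { return department.Value; }
    }
}
```

Building a list of UserBase instances then costs nothing per department until some code actually touches a user's Department property.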
