I have a number of tables in my SQL DB that store basic KVP data. They all follow the pattern Id (int), Description (varchar(..)).
I need them all at one stage or another in my MVC application, and I am trying to create a generic method that will take the type of an entity class and return a list of that table's data. I would like to be able to use this method for any number of tables in my database without having to extend the methods I am writing by too much.
Example Data:
Id: 1, Description: Red
Id: 2, Description: Blue
Id: 3, Description: Green
ISet Implementation:
public interface ISet
{
int Id { get; set; }
string Description { get; set; }
}
Possible Method Implementation
public static IList<T> BuildSet<T>()
{
using (var db = new somethingEntities())
return db.Set(typeof(T)).Cast<ISet>().Select(c => new { c.Id, c.Description }).ToList();
}
Usage
var data = BuildSet<YourType>();
Runtime Error
Cannot create a DbSet<ISet> from a non-generic DbSet for objects of type 'YourType'.
Does anyone know how to do this?
Thanks.
I assume that the DbSets are small, as ToList() will fetch all the results in one go?
However, here's a simpler version of your code (you specify the interface ISet in the generic type constraint):
public static IList<ISet> BuildSet<T>()
where T : class, ISet
{
using (var db = new somethingEntities())
return db.Set<T>().ToList();
}
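For example, a usage sketch (Colour here is a hypothetical entity class that implements ISet):
// Hypothetical usage - Colour is assumed to be one of your KVP entity types implementing ISet
IList<ISet> colours = BuildSet<Colour>();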
There's also the option of an extension method for your DbContext:
public static IEnumerable<ISet> BuildSet<T>(this DbContext context) // extension methods must be declared static, inside a static class
where T : class, ISet
{
return context.Set<T>().AsEnumerable();
}
which you can then use within the lifetime of your context, possibly for a more efficient operation. (I know I'm assuming a lot, but I feel it's useful information anyway.)
public void SomeOperation()
{
using (var db = new somethingEntities())
{
foreach (var item in db.BuildSet<SimpleSet>())
{
//do something with item
//SaveChanges() every 100 rows or whatever
}
}
}
Problem solved.
Method Declaration
public static List<ISet> BuildSet<T>() where T : class
{
using (var db = new somethingEntities())
return db.Set<T>().ToList().Cast<ISet>().ToList();
}
Usage
var data = ...BuildSet<YourType>();
Related
I am trying to create an app that can handle multiple database types.
So far I have created my interface like so. It's very simple; all the database will do is load and save a profile:
public interface IDataManager
{
Profile LoadProfile(int profileId);
bool SaveProfile(Profile profile);
bool CreateDatabase();
bool OpenConnection();
bool CloseConnection();
}
and let's just say the Profile class for the above looks like this:
public class Profile
{
public int Id { get; set; }
public string Name { get; set; }
}
My question is: what is the best way to make it so that all the implementations of IDataManager return the same object types?
Here is an example of what I mean by this. (This is not quality code, it's just an example.)
I create an SQLite class that implements IDataManager and then create an instance.
public IDataManager DataManager = new SQLiteDataManager();
Later in the code I want to load a Profile so I call the LoadProfile.
Profile profile = DataManager.LoadProfile(1);
My SQLite implementation of the LoadProfile method looks like this
public Profile LoadProfile(int profileId)
{
// Copied and pasted from a WinRT app
using (var conn = new global::SQLite.Net.SQLiteConnection(new global::SQLite.Net.Platform.WinRT.SQLitePlatformWinRT(), _sqlpath))
{
var tmp = conn.Table<PROFILE>().First(x => x.ID == profileId);
}
// do something and return
}
Now, as you can see, the return type from the query (tmp is of type PROFILE) is not the same type as the LoadProfile method's return type (Profile).
Do I have to convert tmp to Profile? That would mean doing the conversion in every method with a return type, and for every different database implementation. Or is there a better way of doing this?
Hope this makes sense.
If you use Entity Framework, you can just use the provider for the database type you will be using but keep all the API and models the same.
A list of Entity Framework providers for various databases
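To illustrate the idea, here is a minimal code-first sketch (assuming EF 6; the connection-string name is hypothetical, and the provider registered for it in the config file decides which database is actually used):
// The model and the data-access code stay the same for every database type;
// only the connection string / provider configuration changes.
public class ProfileContext : DbContext // requires using System.Data.Entity;
{
    public ProfileContext() : base("ProfilesConnection") { }

    public DbSet<Profile> Profiles { get; set; }
}

// Usage is identical regardless of the underlying database:
using (var db = new ProfileContext())
{
    Profile profile = db.Profiles.Find(1);
}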
I'm trying to get my head around this issue where I am using Entity Framework (6) in an N-tier application. Since data from the repository (which contains all communication with the database) should be used in a higher tier (the UI, services, etc.), I need to map it to DTOs.
In the database, there are quite a few many-to-many relationships going on, so the data structure can/will get complex somewhere along the line of the application's lifetime. What I stumbled upon is that I am repeating the exact same code when writing the repository methods. An example of this is my FirmRepository, which contains a GetAll() method and a GetById(int firmId) method.
In the GetById(int firmId) method, I have the following code (incomplete, since there are a lot more relations that need to be mapped to DTOs):
public DTO.Firm GetById(int id)
{
// Return result
var result = new DTO.Firm();
try
{
// Database connection
using (var ctx = new MyEntities())
{
// Get the firm from the database
var firm = (from f in ctx.Firms
where f.ID == id
select f).FirstOrDefault();
// If a firm was found, start mapping to DTO object
if (firm != null)
{
result.Address = firm.Address;
result.Address2 = firm.Address2;
result.VAT = firm.VAT;
result.Email = firm.Email;
// Map Zipcode and City
result.City = new DTO.City()
{
CityName = firm.City.City1,
ZipCode = firm.City.ZipCode
};
// Map ISO code and country
result.Country = new DTO.Country()
{
CountryName = firm.Country.Country1,
ISO = firm.Country.ISO
};
// Check if this firm has any exclusive parameters
if (firm.ExclusiveParameterType_Product_Firm.Any())
{
var exclusiveParamsList = new List<DTO.ExclusiveParameterType>();
// Map Exclusive parameter types
foreach (var param in firm.ExclusiveParameterType_Product_Firm)
{
// Check if the exclusive parameter type isn't null before proceeding
if (param.ExclusiveParameterType != null)
{
// Create a new exclusive parameter type DTO
var exclusiveParameter = new DTO.ExclusiveParameterType()
{
ID = param.ExclusiveParameterType.ID,
Description = param.ExclusiveParameterType.Description,
Name = param.ExclusiveParameterType.Name
};
// Add the new DTO to the list
exclusiveParamsList.Add(exclusiveParameter);
}
}
// A lot more objects to map....
// Set the list on the result object
result.ExclusiveParameterTypes = exclusiveParamsList;
}
}
}
// Return DTO
return result;
}
catch (Exception e)
{
// Log exception
Logging.Instance.Error(e);
// Simply return null
return null;
}
}
This is just one method. The GetAll() method will then have the exact same mapping logic, which results in duplicated code. Also, when more methods get added, e.g. a Find or Search method, the same mapping needs to be copied again. This is, of course, not ideal.
I have read a lot about the famous AutoMapper framework that can map entities to/from DTOs, but since I have these many-to-many relations it quickly feels bloated with AutoMapper config code. I've also read this article, which makes sense in my eyes: http://rogeralsing.com/2013/12/01/why-mapping-dtos-to-entities-using-automapper-and-entityframework-is-horrible/
Is there any other way of doing this without copy/pasting the same code over and over again?
Thanks in advance!
You can make an extension method on the Firm entity (DB.Firm) like this:
public static class Extensions
{
public static DTO.Firm ToDto(this DB.Firm firm)
{
var result = new DTO.Firm();
result.Address = firm.Address;
result.Address2 = firm.Address2;
//...
return result;
}
}
Then you can convert a DB.Firm object anywhere in your code like firm.ToDto();
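For example, both repository methods from your question could then reuse that single mapping (a sketch, assuming the MyEntities context from the question):
public DTO.Firm GetById(int id)
{
    using (var ctx = new MyEntities())
    {
        var firm = ctx.Firms.FirstOrDefault(f => f.ID == id);
        return firm != null ? firm.ToDto() : null; // the mapping lives in one place
    }
}

public List<DTO.Firm> GetAll()
{
    using (var ctx = new MyEntities())
    {
        return ctx.Firms.AsEnumerable().Select(f => f.ToDto()).ToList();
    }
}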
An alternate strategy is to use a combination of the class constructor and an explicit and/or implicit conversion operator(s). It allows you to cast one user-defined entity to another entity. The feature also has the added benefit of abstracting the process out so you aren't repeating yourself.
In your DTO.Firm class, define either an explicit or implicit operator (note: I am making assumptions about the names of your classes):
public class Firm {
public Firm(DB.Firm firm) {
Address = firm.Address;
Email = firm.Email;
City = new DTO.City() {
CityName = firm.City.City1,
ZipCode = firm.City.ZipCode
};
// etc.
}
public string Address { get; set;}
public string Email { get; set; }
public DTO.City City { get; set; }
// etc.
public static explicit operator Firm(DB.Firm f) {
return new Firm(f);
}
}
You can then use it in your repository code like this:
public DTO.Firm GetById(int id) {
using (var ctx = new MyEntities()) {
var firm = (from f in ctx.Firms
where f.ID == id
select f).FirstOrDefault();
return (DTO.Firm)firm;
}
}
public List<DTO.Firm> GetAll() {
using (var ctx = new MyEntities()) {
return ctx.Firms.AsEnumerable().Select(f => (DTO.Firm)f).ToList(); // Cast<DTO.Firm>() would throw here, since LINQ's Cast does not use user-defined conversion operators
}
}
Here's the reference in MSDN.
About mapping: it does not really matter whether you use AutoMapper or prepare your mappings completely manually in some method (an extension method or an explicit casting operator, as mentioned in other answers) - the point is to have it in one place for reusability.
Just remember - you used the FirstOrDefault method, so you actually called the database for a Firm entity. Now, when you use the properties of this entity, especially collections, they will be lazy loaded. If you have a lot of them (as you suggest in your question), you may face a huge number of additional calls, and that might be a problem, especially in a foreach loop. You may end up with dozens of calls and heavy performance issues just to retrieve one DTO. Just rethink whether you really need to get such a big object with all its relations.
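If the related data is always needed for the DTO, one option (a sketch, assuming EF 6 and the Include extension from System.Data.Entity) is to eager-load the relations in the query so the mapping does not trigger lazy loads:
// Inside GetById, eager-load the relations the mapping will touch:
var firm = ctx.Firms
    .Include(f => f.City)
    .Include(f => f.Country)
    .Include(f => f.ExclusiveParameterType_Product_Firm.Select(p => p.ExclusiveParameterType))
    .FirstOrDefault(f => f.ID == id);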
For me, your problem is much deeper and concerns application architecture. I must say, I personally do not like the repository pattern with Entity Framework, combined with the Unit of Work pattern. It seems to be very popular (at least if you take a look at Google results for the query), but for me it does not fit very well with EF. Of course, it's just my opinion; you may not agree with me. For me it's just building another abstraction over an already implemented Unit of Work (DbContext) and repositories (DbSet objects). I found this article very interesting on this topic. The command/query separation way of doing things seems much more elegant to me, and it also fits the SOLID rules much better.
As I said, it's just my opinion and you may or may not agree with me. But I hope it gives you some perspective here.
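For illustration only, a very small sketch of the query-object style mentioned above (the names IQuery and GetFirmByIdQuery are hypothetical, and the ToDto() mapping from the other answer is reused so the mapping stays in one place):
public interface IQuery<TResult>
{
    TResult Execute(MyEntities ctx);
}

public class GetFirmByIdQuery : IQuery<DTO.Firm>
{
    private readonly int _id;

    public GetFirmByIdQuery(int id)
    {
        _id = id;
    }

    public DTO.Firm Execute(MyEntities ctx)
    {
        // fetch the entity and reuse the single mapping
        var firm = ctx.Firms.FirstOrDefault(f => f.ID == _id);
        return firm != null ? firm.ToDto() : null;
    }
}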
I'm trying to create a way to make a single query against the database and build the right objects for my needs. I mean, I use a SQL query that returns a lot of rows, and then I build the collections based on those database rows. E.g.:
We have a table called People and another table called Phones.
Let's suppose that this is my SQL query, and it will return the results below:
SELECT
P.[Id], P.[Name], PH.[PhoneNumber]
FROM
[dbo].[People] P
INNER JOIN
[dbo].[Phones] PH ON PH.[Person] = P.[Id]
And these are the results returned:
1 NICOLAS (123)123-1234
1 NICOLAS (235)235-2356
So, my class will be:
public interface IModel {
void CastFromReader(IDataReader reader);
}
public class PhoneModel : IModel {
public string PhoneNumber { get; set; }
public PhoneModel() { }
public PhoneModel(IDataReader reader) : this() {
CastFromReader(reader);
}
public void CastFromReader(IDataReader reader) {
PhoneNumber = (string) reader["PhoneNumber"];
}
}
public class PersonModel : IModel {
public int Id { get; set; }
public string Name { get; set; }
public IList<PhoneModel> Phones { get; set; }
public PersonModel() {
Phones = new List<PhoneModel>();
}
public PersonModel(IDataReader reader) : this() {
CastFromReader(reader);
}
public void CastFromReader(IDataReader reader) {
Id = Convert.ToInt32(reader["Id"]);
Name = (string) reader["Name"];
var phone = new PhoneModel();
phone.CastFromReader(reader);
Phones.Add(phone);
// or
Phones.Add(new PhoneModel {
PhoneNumber = (string) reader["PhoneNumber"]
});
}
}
This code will generate a PersonModel object with two phone numbers. That's good so far.
However, I'm struggling to find a good way to deal with this process when I want to manage more tables.
Let's suppose, then, I have a new table called Appointments. It stores the user's appointments to the schedule.
So, adding this table to the query, the result will be:
1 NICOLAS (123)123-1234 17/09/2014
1 NICOLAS (123)123-1234 19/09/2014
1 NICOLAS (123)123-1234 27/09/2014
1 NICOLAS (235)235-2356 17/09/2014
1 NICOLAS (235)235-2356 19/09/2014
1 NICOLAS (235)235-2356 17/09/2014
As you guys can see, the problem is managing the phones and the appointments this way. Can you think of anything that could solve this issue?
Thank you all for the opinions!
You cannot transfer your query result to strongly typed objects without first defining these objects' types. If you want to keep query data in memory, I recommend that you transfer it into objects of a previously defined type at some point.
What follows is therefore not something that I would actually recommend doing. But I want to demonstrate to you a possibility. Judge for yourself.
As I suggested in a previous comment, you can mimic strongly typed DTOs using the Dynamic Language Runtime (DLR), which became available with .NET 4.
Here is an example of a custom DynamicObject type that provides a seemingly strongly-typed façade for an IDataReader.
using System.Data;
using System.Dynamic; // needs assembly references to System.Core & Microsoft.CSharp
using System.Linq;
public static class DataReaderExtensions
{
public static dynamic AsDynamic(this IDataReader reader)
{
return new DynamicDataReader(reader);
}
private sealed class DynamicDataReader : DynamicObject
{
public DynamicDataReader(IDataReader reader)
{
this.reader = reader;
}
private readonly IDataReader reader;
// this method gets called for late-bound member (e.g. property) access
public override bool TryGetMember(GetMemberBinder binder, out object result)
{
int index = reader.GetOrdinal(binder.Name);
result = index >= 0 ? reader.GetValue(index) : null;
return index >= 0;
}
}
}
Then you can use it like this:
using (IDataReader reader = someSqlCommand.ExecuteReader(…))
{
dynamic current = reader.AsDynamic(); // façade representing the current record
while (reader.Read())
{
// the magic will happen in the following two lines:
int id = current.Id; // = reader.GetInt32(reader.GetOrdinal("Id"))
string name = current.Name; // = reader.GetString(reader.GetOrdinal("Name"))
…
}
}
But beware, with this implementation, all you get is a façade for the current record. If you want to keep data of several records in memory, this implementation won't help a lot. For that purpose, you could look into several further possibilities:
Use anonymous objects: cachedRecords.Add(new { current.Id, current.Name });. This is only useful if you access the cachedRecords in the same method where you build it, because the anonymous type used will not be usable outside of the method.
Cache current's data in an ExpandoObject (a sketch follows below).
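For the second option, a minimal sketch (ExpandoObject implements IDictionary<string, object>, so each record can be copied out of the reader and kept in memory):
var cachedRecords = new List<dynamic>();
while (reader.Read())
{
    IDictionary<string, object> record = new ExpandoObject();
    for (int i = 0; i < reader.FieldCount; i++)
    {
        record[reader.GetName(i)] = reader.GetValue(i); // copy the current row's values
    }
    cachedRecords.Add(record);
}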
If you want to manually write a data type for each combination of columns resulting from your queries, then you have a lot of work to do, and you will end up with lots of very similar, but slightly different classes that are hard to name. Note also that these data types should not be treated as something more than what they are: Data Transfer Objects (DTOs). They are not real domain objects with domain-specific behaviour; they should just contain and transport data, nothing else.
What follows are two suggestions, or ideas. I will only scratch at the surface here and not go into too many details; since you haven't asked a very specific question, I won't provide a very specific answer.
1. A better approach might be to determine what domain entity types you've got (e.g. Person, Appointment) and what domain value types you have (e.g. Phone Number), and then build an object model from that:
struct PhoneNumber { … }
partial interface Person
{
int Id { get; }
string Name { get; }
PhoneNumber PhoneNumber { get; }
}
partial interface Appointment
{
DateTime Date { get; }
Person[] Participants { get; }
}
and then have your database code map to these. If, for example, some query returns a Person Id, Person Name, Phone Number, and an Appointment Date, then each attribute will have to be put into the correct entity type, and they will have to be linked together (e.g. via Participants) correctly. Quite a bit of work. Look into LINQ to SQL, Entity Framework, NHibernate or any other ORM if you don't want to do this manually. If your database model and your domain model are too different, even these tools might not be able to make the translation.
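To make the "link them together" step concrete, here is a small, hypothetical sketch that reuses the PersonModel/PhoneModel classes from the question and collapses the flat joined rows into one person with distinct phone numbers (it assumes an open IDataReader over the joined query):
var people = new Dictionary<int, PersonModel>();
while (reader.Read())
{
    int id = Convert.ToInt32(reader["Id"]);
    PersonModel person;
    if (!people.TryGetValue(id, out person))
    {
        // first time we see this person: create it once
        person = new PersonModel { Id = id, Name = (string)reader["Name"] };
        people.Add(id, person);
    }

    string phone = (string)reader["PhoneNumber"];
    if (!person.Phones.Any(p => p.PhoneNumber == phone))
    {
        person.Phones.Add(new PhoneModel { PhoneNumber = phone });
    }
}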
2. If you want to hand-code your data query layer that transforms data into a domain model, you might want to set up your queries in such a way that if they return one attribute A of entity X, and entity X has other attributes B, C, and D, then the query should also return these, such that you can always build a complete domain object from the query result. For example, if a query returned a Person Id and a Person Phone Number, but not the Person Name, you could not build Person objects (as defined above) from the query because the name is missing.
This second suggestion will at least partially save you from having to define lots of very similar DTO types (one per attribute combination). This way, you can have a DTO for a Person record, another for a Phone Number record, another for an Appointment record, perhaps (if needed) another for a combination of Person and Phone Number; but you won't need to distinguish between types such as PersonWithAllAttributes, PersonWithIdButWithoutNameOrPhoneNumber, PersonWithoutIdButWithPhoneNumber, etc. You'll just have Person containing all attributes.
For work, we have specific types of records that come in, but each project has its own implementation. The columns and the like are different, but in general the process is the same (records are broken into batches, batches are assigned, batches are completed, batches are returned, batches are sent out, etc.). Many of the columns are common, too, but sometimes there are name changes (BatchId in one vs. Id in another; [Column("name")] takes care of this issue).
Currently this is what I have for the implementation of the batch assignment functionality with the common components given in the interface:
public interface IAssignment
{
// properties
...
// methods
T GetAssignmentRecord<T>(int UserId, int BatchId) where T : IAssignment;
List<T> GetAssignmentRecords<T>(int UserId) where T : IAssignment;
}
Now I currently have two projects that have batch assignment. Because these are done in Entity Framework, Assignment in Namespace1 and Assignment in Namespace2 are completely different things, but they are bound by certain common components (an ID, an assigned user, checked in, etc.) which drive all of the methods for returning them.
I think my main question is whether I'm doing this incorrectly and whether there is a better way to achieve this, such that I can pipe data into my controllers and have the controllers look somewhat similar from project to project, while having as much of the method work as possible handled automatically (primarily so that a "fix one, fix all" scenario occurs when I need to do updates).
Here's an example of how I'm doing the implementation for namespace1:
public class Assignment
{
...
public T GetAssignmentRecord<T>(int UserId, int BatchId) where T : IAssignment
{
var db = new Database1Context();
return (T) Convert.ChangeType(db.Assignment.Where(c => c.UserId == UserId && c.BatchId == BatchId && c.Assigned).First(), typeof(T));
}
}
In the Controller:
Assignment assignment = new Assignment();
var record = assignment.GetAssignmentRecord<Assignment>(userid, batchid);
// do stuff
The controller code is actually how I'm assuming it would work. I've worked through the Assignment class and now I'm wondering whether I'm doing it the proper way. The reason I feel this may be incorrect is that I'm basically saying "The interface is looking for a generic, I'm getting a strongly typed object from the database using Entity Framework, I'm casting it to a generic, and when I'm making the request, I'm asking for the same strongly typed object that I converted to a generic initially."
Is there a better way of doing this? Or a completely different direction I should be going?
Provided I understood your goal correctly, I'd do it e.g. this way...
interface IAssignment
{
}
interface IRepo<out T> where T : IAssignment
{
T GetAssignmentRecord(int UserId, int BatchId);
IEnumerable<T> GetAssignmentRecords(int UserId);
}
class AssignmentRecord : IAssignment
{
}
class AssignmentWeb : IAssignment
{
}
class RepoDb : IRepo<AssignmentRecord>
{
public AssignmentRecord GetAssignmentRecord(int UserId, int BatchId)
{
//using(var db = new MyDbContext())
//{
// return db.Assignment.Where(c => c.UserId == UserId && c.BatchId == BatchId && c.Assigned).First();
//}
return new AssignmentRecord();
}
public IEnumerable<AssignmentRecord> GetAssignmentRecords(int UserId)
{
//using(var db = new MyDbContext())
//{
// return db.Assignment.Where(c => c.UserId == UserId && c.BatchId == BatchId && c.Assigned);
//}
return new List<AssignmentRecord>
{
new AssignmentRecord(),
new AssignmentRecord(),
new AssignmentRecord(),
new AssignmentRecord(),
new AssignmentRecord(),
new AssignmentRecord(),
new AssignmentRecord(),
new AssignmentRecord(),
};
}
}
class RepoWeb : IRepo<AssignmentWeb>
{
public AssignmentWeb GetAssignmentRecord(int UserId, int BatchId)
{
// fetch it from some web service...
return new AssignmentWeb();
}
public IEnumerable<AssignmentWeb> GetAssignmentRecords(int UserId)
{
//using(var db = new MyDbContext())
//{
// return db.Assignment.Where(c => c.UserId == UserId && c.BatchId == BatchId && c.Assigned);
//}
return new List<AssignmentWeb>
{
new AssignmentWeb(),
new AssignmentWeb(),
new AssignmentWeb(),
};
}
}
class MYController
{
public IRepo<IAssignment> Repository { get; set; } // you can inject this, e.g. via DI
public IAssignment GetAssignment(int userid, int batchid)
{
return Repository.GetAssignmentRecord(userid, batchid);
}
public IEnumerable<IAssignment> GetAllAssignments(int userid)
{
return Repository.GetAssignmentRecords(userid);
}
}
class ProgramAssignment
{
static void Main(string[] args)
{
try
{
var controller = new MYController();
controller.Repository = new RepoDb();
IAssignment assignment = controller.GetAssignment(0, 0);
IEnumerable<IAssignment> all = controller.GetAllAssignments(0);
controller.Repository = new RepoWeb();
assignment = controller.GetAssignment(0, 0);
all = controller.GetAllAssignments(0);
}
catch
{
Console.WriteLine("");
}
}
}
As to why the out - here is some more in my other post...
How to make generic class that contains a Set of only its own type or subtypes as Children?
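In short, a tiny illustration using the types from the sketch above:
// Covariance: because the interface is declared IRepo<out T>, a repository of the concrete
// AssignmentRecord type can be assigned where IRepo<IAssignment> is expected.
IRepo<AssignmentRecord> concrete = new RepoDb();
IRepo<IAssignment> general = concrete; // compiles only thanks to the 'out' modifier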
Assuming that the two Assignment classes have different properties (maybe some additional ones), but some of the properties are the same, and they come from different databases, there are many ways of doing it. But "the best" (for me) is dependency injection.
The activities (methods) in your Assignment class should be moved to a separate "service" class. This increases the modularity of Assignment, as it then becomes just a POCO.
For data access, create a separate class (repository) to retrieve/insert/update/delete your data. An example would be:
public class AssignmentRepository : IAssignmentRepository {
public Assignment GetAssignmentRecord(int userId, int batchId){
// query this project's database and map the result to an Assignment
}
}
public class BatchAssignmentRepository : IAssignmentRepository {
public Assignment GetAssignmentRecord(int userId, int batchId){
// query the other project's database and map the result to an Assignment
}
}
If you ask why there are two repositories instead of one - doesn't that make the code redundant? Yes it does, but you must also consider that it increases modularity. If you change something in BatchAssignment (maybe change a column name, add an additional column, etc.), then you do not need to apply the same change in Assignment, and it saves you from "if batchAssignment else" logic inside.
Usage from the caller will look like this:
IAssignmentService service = new AssignmentService();
IAssignmentRepository repository = new AssignmentRepository();
Assignment a = repository.GetAssignmentRecord(userId, batchId);
service.DoSomething(a);
Think about an adapter layer. That layer should transform the incoming data to a common structure/class that can then be handled consistently, generics notwithstanding. Of course, it also re-transforms on the "outbound" side to the shape expected by the particular databases. This assumes that no data source has data that is undefined in the others, or that you can define valid default values for said missing data.
I imagine you need different adapters for the different projects. Perhaps this is a job for dependency injection. Basically at runtime you fetch the particular code (adapter class) needed.
Introduction to Unity.
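A minimal sketch of what that could look like with Unity (the repository names are borrowed from the answer above and are assumptions about your actual types):
// Assumes the Unity container (Microsoft.Practices.Unity); names are illustrative.
var container = new UnityContainer();

// Each project registers its own adapter/repository implementation:
container.RegisterType<IAssignmentRepository, AssignmentRepository>();

// The consuming code only asks for the interface and never knows which adapter it got:
int userId = 1, batchId = 1;
var repository = container.Resolve<IAssignmentRepository>();
var record = repository.GetAssignmentRecord(userId, batchId);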
We have a specific struct called Measure and we'd like to use this type instead of the database field type e.g. double.
So we have an entity:
public class MyEntity
{
public int MyValue { get; set; }
}
And we have a transfer object:
public class MyDto
{
public Measure MyMeasureValue{ get; set; }
}
If the property types matched, we could just fill our DTOs via projection:
enities.Select(i => new MyDto { MyMeasureValue = new Measure(i.MyValue, _unitsService.GetUnit("km")) });
But since EF does not support such statements, we have to refill this, or load the whole entity:
entities.Select(i => new { MyValue = i.MyValue })
.AsEnumerable()
.Select(i => new MyDto { MyMeasureValue = new Measure(i.MyValue, _unitsService.GetUnit("km")) } );
We want to avoid looping several times in the refill process, especially because there are a lot of properties to fill. Is there a way we can go with the first statement and teach EF to execute the Measure creation (e.g. interception, etc.)?
PS: It is not an option to create an EF complex type and map it!
Thanks Enyra
You may use anonymous types to fetch only some properties of the entity using LINQ to Entities. For instance:
Model1Container container = new Model1Container();
var temp = from o in container.MasterSet
select new
{
x = o.LastModifiedBy,
y = o.LastModifiedDate
};
By the way, instead of mapping DTOs by hand, it is better to use AutoMapper. It has the functionality to map matching property names without explicitly declaring them.
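A minimal sketch of that suggestion (Source and Destination are hypothetical classes with matching property names; the API shown is AutoMapper's instance-based configuration, older versions use the static Mapper class instead):
var config = new MapperConfiguration(cfg => cfg.CreateMap<Source, Destination>());
var mapper = config.CreateMapper();

var source = new Source();
Destination dto = mapper.Map<Destination>(source); // matching property names are mapped by convention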