EF denormalize result of each group join - c#

I have a 1-to-many relationship between a user and his/her schools. I often want to get the primary school for the user (the one with the highest "Type"). This results in having to join the primary school for every query I want to run. A user's schools barely ever change. Are there best practices on how to do this to avoid the constant join? Should I denormalize the models and if so, how? Are there other approaches that are better?
Thanks.
public class User
{
    public int Id { get; set; }
    public virtual IList<UserSchool> UserSchools { get; set; }
    ...
}

public class UserSchool
{
    public int UserId { get; set; }
    public string Name { get; set; }
    public int Type { get; set; }
    ...
}
...
var schools = (from r in _dbcontext.UserSchools
               group r by r.UserId into grp
               select grp.OrderByDescending(x => x.Type).FirstOrDefault());

var results = (from u in _dbcontext.Users
               join us in schools on u.Id equals us.UserId
               select new UserContract
               {
                   Id = u.Id,
                   School = us.Name
               });

In past projects, when I opted to denormalize data, I have denormalized it into separate tables which are updated in the background by the database itself, and tried to keep as much of the process contained in the database software, which handles these things much better. Note that any sort of "run every x seconds" solution will cause a lag in how up-to-date your data is. For something like this, it doesn't sound like the data changes that often, so being a few seconds (or minutes, or days, by the sound of it) out of date is not a big concern. If you're considering denormalization, then retrieval speed must be much more important.
I have never had "hard and fast" criteria for when to denormalize, but in general the data must be:
Accessed often. Like multiple times per page load often. Absolutely critical to the application often. Retrieval time must be paramount.
Time insensitive. If the data you need is changing all the time, and it is critical that the data you retrieve is up-to-the-minute, denormalization will have too much overhead to buy you much benefit.
Either an extremely large data set or the result of a relatively complex query. Simple joins can usually be handled by proper indexing, and maybe an indexed view.
Already optimized as much as possible. We've already tried things like indexed views, reorganizing indexes, rewriting underlying queries, and things are still too slow.
Denormalizing can be very helpful, but it introduces its own headaches, so you want to be very sure that you are ready to deal with those before you commit to it as a solution to your problem.
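If you do decide to denormalize, one application-side variation on the idea (not the database-maintained tables described above) is to keep a copy of the primary school's name on the User row and refresh it whenever the user's schools change. A minimal sketch, with illustrative property and method names:

public class User
{
    public int Id { get; set; }
    public string PrimarySchoolName { get; set; } // denormalized copy of the highest-Type school's name
    public virtual IList<UserSchool> UserSchools { get; set; }
}

// Call this from the code paths that add, remove, or change UserSchool rows, before SaveChanges:
private static void RefreshPrimarySchool(User user)
{
    user.PrimarySchoolName = user.UserSchools
        .OrderByDescending(s => s.Type)
        .Select(s => s.Name)
        .FirstOrDefault();
}

Reads then avoid the join entirely (School = u.PrimarySchoolName), at the cost of keeping the copy in sync everywhere schools are modified.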

Related

Entity Framework LINQ SQL Query Performance

Hello everyone, I'm working on an API that returns a dish with its restaurant details from a database that has restaurants and their dishes.
I'm wondering whether converting the first query below into the second makes it any more efficient:
from res in _context.Restaurant
join resdish in _context.RestaurantDish
on res.Id equals resdish.RestaurantId
where resdish.RestaurantDishId == dishId
Second:
from resdish in _context.RestaurantDish
where resdish.RestaurantDishId == dishId
join res in _context.Restaurant
on resdish.RestaurantId equals res.Id
The reason I'm debating this is that I feel the second version filters down to the single restaurant dish and then joins it, rather than joining all dishes and then filtering.
Is this correct?
You can use a profiler on your database to capture the SQL in both cases, or inspect the SQL that EF generates, and you'll likely find that the SQL in both cases is virtually identical. It boils down to how readers (developers) interpret the intention of the logic.
As far as building efficient queries in EF goes, EF is an ORM meaning it offers to map between an object-oriented model and a relational data model. It isn't just an API to enable translating Linq to SQL. Part of the power for writing simple and efficient queries is through the use of navigation properties and projection. A Dish will be considered the property of a particular Restaurant, while a Restaurant has many Dishes on its menu. This forms a One-to-Many relationship in the database, and navigation properties can map this relationship in your object model:
public class Restaurant
{
    [Key]
    public int RestaurantId { get; set; }
    // ... other fields
    public virtual ICollection<Dish> Dishes { get; set; } = new List<Dish>();
}

public class Dish
{
    [Key]
    public int DishId { get; set; }
    //[ForeignKey(nameof(Restaurant))]
    //public int RestaurantId { get; set; }
    public virtual Restaurant Restaurant { get; set; }
}
The FK property for the Restaurant ID is optional and can be configured to use a Shadow Property (one that EF knows about and generates, but that isn't exposed on the entity). I recommend using shadow properties for FKs mainly to avoid two sources of truth for relationships (dish.RestaurantId and dish.Restaurant.RestaurantId): changing the FK does not automatically update the relationship unless you reload the entity, and updating the relationship does not automatically update the FK until you call SaveChanges.
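For reference, a minimal sketch of configuring that shadow FK (this assumes EF Core's fluent API; the column name here is just an example):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Dish -> Restaurant uses a shadow FK column; no CLR property on Dish is required.
    modelBuilder.Entity<Dish>()
        .HasOne(d => d.Restaurant)
        .WithMany(r => r.Dishes)
        .HasForeignKey("RestaurantId");
}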
Now if you wanted to get a particular dish and its associated restaurant:
var dish = _context.Dishes
    .Include(d => d.Restaurant)
    .Single(d => d.DishId == dishId);
This fetches both entities. Note that there is no need now to manually write Joins like you would with SQL. EF supports Join, but it should only be used in very rare cases where a schema isn't properly normalized/relational and you need to map loosely joined entities/tables. (Such as a table using an "OwnerId" that could join to a "This" or a "That" table based on a discriminator such as OwnerType.)
If you leave off the .Include(d => d.Restaurant) and have lazy loading enabled on the DbContext, then EF will attempt to load the Restaurant automatically if and when the code first accesses dish.Restaurant. This provides a safety net, but it can incur steep performance penalties in many cases, so it should be treated as a safety net, not a crutch.
Eager loading works well when dealing with single entities and their related data where you will need to do things with those relationships. For instance, if I want to load a Restaurant to review and add/remove its dishes, or load a Dish and possibly change its Restaurant. However, eager loading can come at a significant cost in how EF and SQL provide that related data behind the scenes.
By default when you use Include, EF will add an INNER or LEFT join between the associated tables. This creates a Cartesian Product between the involved tables. If you have 100 restaurants that have an average of 30 dishes each and select all 100 restaurants eager loading their dishes, the resulting query is 3000 rows. Now if a Dish has something like Reviews and there are an average of 5 reviews per dish and you eager load Dishes and Reviews, that would be a resultset of every column across all three tables with 15000 rows in total. You can hopefully appreciate how this can grow out of hand pretty fast. EF then goes through that Cartesian and populates the associated entities in the object graph. This can lead to questions about why "my query runs fast in SSMS but slow in EF" since EF can have a lot of work to do, especially if it has been tracking references from restaurants, dishes, and/or reviews to scan through and provide. Later versions of EF can help mitigate this a bit by using query splitting so instead of JOINs, EF can work out to fetch the related data using multiple separate SELECT statements which can execute and process a fair bit faster, but it still amounts to a lot of data going over the wire and needing memory to materialize to work with.
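For reference, the split-query option might look roughly like this (this assumes EF Core 5 or later, and a Reviews collection on Dish as in the numbers above):

var restaurants = _context.Restaurants
    .Include(r => r.Dishes)
        .ThenInclude(d => d.Reviews)
    .AsSplitQuery() // separate SELECTs instead of one large JOIN
    .ToList();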
Most of the time though, you won't need ALL rows, nor ALL columns for each and every related entity. This is where Projection comes in such as using Select. When we pull back our list of restaurants, we might want to list the restaurants in a given city along with their top 5 dishes based on user reviews. We only need the RestaurantId & Name to display in these results, along with the Dish name and # of positive reviews. Instead of loading every column from every table, we can define a view model for Restaurants and Dishes for this summary View, and project the entities to these view models:
public class RestaurantSummaryViewModel
{
    public int RestaurantId { get; set; }
    public string Name { get; set; }
    public ICollection<DishSummaryViewModel> Top5Dishes { get; set; } = new List<DishSummaryViewModel>();
}

public class DishSummaryViewModel
{
    public string Name { get; set; }
    public int PositiveReviewCount { get; set; }
}

var restaurants = _context.Restaurants
    .Where(r => r.City.CityId == cityId)
    .OrderBy(r => r.Name)
    .Select(r => new RestaurantSummaryViewModel
    {
        RestaurantId = r.RestaurantId,
        Name = r.Name,
        Top5Dishes = r.Dishes
            .OrderByDescending(d => d.Reviews.Where(rv => rv.Score > 3).Count())
            .Select(d => new DishSummaryViewModel
            {
                Name = d.Name,
                PositiveReviewCount = d.Reviews.Where(rv => rv.Score > 3).Count()
            })
            .Take(5)
            .ToList()
    }).ToList();
Notice that the above Linq example doesn't use Join or even Include. Provided you follow a basic set of rules so that EF can work out how to translate what you want to project down to SQL, you can accomplish a fair bit and produce far more efficient queries. The above statement would generate SQL that runs across the related tables but only returns the fields needed to populate the desired view models. This allows you to tune indexes based on what data is most commonly needed, and it also reduces the amount of data going across the wire, plus memory usage on both the DB and app servers. Libraries like AutoMapper and its ProjectTo method can simplify the above statements even more: configure how to select into the desired view model once, then replace that whole Select( ... ) with just ProjectTo<RestaurantSummaryViewModel>(config), where "config" is a reference to the AutoMapper configuration that resolves how to turn Restaurants and their associated entities into the desired view model(s).
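For example, the AutoMapper form of the earlier query might look roughly like this, assuming a MapperConfiguration named config that maps Restaurant to RestaurantSummaryViewModel (including the top-5 dishes rule):

using AutoMapper.QueryableExtensions;

var restaurants = _context.Restaurants
    .Where(r => r.City.CityId == cityId)
    .OrderBy(r => r.Name)
    .ProjectTo<RestaurantSummaryViewModel>(config) // projection defined once in the AutoMapper config
    .ToList();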
In any case, this should give you some avenues to explore with EF and what it can bring to the table to produce (hopefully) easy-to-understand and efficient query expressions.

.net ORM that will allow me to extend my code without modifying it (Open Closed Principle)

Disclaimer: Below is my very simplified description of a problem. Please, while reading it, imagine some complicated modular application (an ERP system, Visual Studio, Adobe Photoshop) that evolves over the years, with features added and removed, in the hope that it will not end up as spaghetti code.
Let's say I have the following entity and a corresponding table in the database
class Customer
{
    public int Id { get; set; }
}
then I will use some ORM, build a DataContext, and create my application
static void Main(string[] args)
{
    //Data Layer
    var context = new DataContext();
    IQueryable<Customer> customers = context.Customers;

    //GUI
    foreach (var customer in customers)
        foreach (var property in customer.GetType().GetProperties())
            Console.WriteLine($"{property.Name}:{property.GetValue(customer)}");
}
Application is done, my customer is happy, case closed.
Half a year later my customer asks me to add a Name to the Customer.
I want to do it without touching the previous code.
So first I will create a new entity and a corresponding table in the database and add it to the ORM (please ignore the fact that I'm modifying the same DataContext; this is easy to fix)
class CustomerName
{
    public int Id { get; set; }
    public string Name { get; set; }
}
CustomerName adds a new property to Customer, but to have complete Customer information we need to join them together, so let's try to modify our application without touching the previous code
static void Main(string[] args)
{
    //Data Layer
    var context = new DataContext();
    IQueryable<Customer> customers = context.Customers;

    //new code that doesn't even compile
    customers = from c in customers
                join cn in context.CustomerNames on c.Id equals cn.Id
                select new {c, cn}; //<-- what should be here??

    //GUI
    foreach (var customer in customers)
        foreach (var property in customer.GetType().GetProperties())
            Console.WriteLine($"{property.Name}:{property.GetValue(customer)}");
}
As you can see, I have no idea what to map my information from the join to so that it will still be a valid Customer object.
And no, I cannot use inheritance.
Why?
Because at the same time another developer can be asked for functionality to block a customer and will create the following entity:
class BlockedCustomer
{
    public int Id { get; set; }
    public bool Blocked { get; set; }
}
He will not know anything about CustomerName, therefore he may only depend on Customer, and at runtime both our features will result in something like this:
static void Main(string[] args)
{
    //Data Layer
    var context = new DataContext();
    IQueryable<Customer> customers = context.Customers;

    //new code that doesn't even compile
    customers = from c in customers
                join cn in context.CustomerNames on c.Id equals cn.Id
                select new {c, cn}; //<-- what should be here??

    customers = from c in customers
                join b in context.BlockedCustomers on c.Id equals b.Id
                select new { c, b }; //<-- what should be here??

    //GUI
    foreach (var customer in customers)
        foreach (var property in customer.GetType().GetProperties())
            Console.WriteLine($"{property.Name}:{property.GetValue(customer)}");
}
I have two ideas for how to solve it:
Create some container class that will inherit from Customer and play with casting/converting it to CustomerName or BlockedCustomer when needed.
Something like this:
class CustomerWith<T> : Customer
{
    private T Value;

    public CustomerWith(Customer c, T value) : base(c)
    {
        Value = value;
    }
}
and then
customers = from c in customers
            join cn in context.CustomerNames on c.Id equals cn.Id
            select new CustomerWith<CustomerName>(c, cn);
Use ConditionalWeakTable to store (at the data layer level) the CustomerName and BlockedCustomer associated with a Customer, and modify (once) the UI to be aware of such things.
To my knowledge, both solutions unfortunately require me to write my own LINQ mapper (including change tracking), which I want to avoid.
Do you know any ORM that knows how to handle such requirements?
Or maybe there is a much better/simpler way to write applications without violating the Open/Closed Principle?
Edit - Some clarifications after comments:
I'm talking about properties that have one-to-one relationship with Customer. Usually such properties are added as additional columns in the table.
I want to send only one SQL query to database. So adding 20 such new features/properties/columns shouldn't end up in 20 queries.
What I showed was a simplified version of the application I have in mind. I'm thinking about an app whose dependencies will be structured in the following way: [UI]-->[Business Logic]<--[Data]. All three should be open for extension and closed for modification, but in this question I'm focusing on the [Data] layer. [Business Logic] will ask the [Data] layer (using LINQ) for Customers. So even after extending it, the Customer BL will just ask for Customer (in LINQ), but the extension to [Business Logic] will need CustomerName. The question is how to extend the [Data] layer so that it still returns Customer to the Customer BL, but provides CustomerName to the CustomerName BL, and still sends one query to the database.
When I showed joins as a proposed solution, it may not have been clear that they will not be hard-coded in the [Data] layer; rather, the [Data] layer should know to call some methods that may want to extend the query and which will be registered by the main() module. The main() module is the only one that knows about all dependencies, and for the purpose of this question it's not important how.
I have following entity and corresponding table in database
I think that this basic premise is the root cause for most problems.
To be clear, there's nothing inherently bad about basing an application architecture on an underlying relational database schema. Sometimes, a simple CRUD application is all the stakeholders need.
It's important to realise, though, that whenever you make that choice, application architecture is going to be inherently relational (i.e. not object-oriented).
A fully normalised relational database is characterised by relations. Tables relate to other tables via foreign keys. These relationships are enforced by the database engine. Usually, in a fully normalised database, (almost) all tables are (transitively) connected to all other tables. In other words, everything's connected.
When things are connected, in programming we often call it coupling, and it's something we'd like to avoid.
You want to divide your application architecture into modules, yet still base it on a database where everything's coupled. I know of no practical way of doing that; I don't think it's possible.
ORMs just make this worse. As Ted Neward taught us more than a decade ago, ORMs are the Vietnam of computer science. I gave up on them years ago, because they just make everything worse, including software architecture.
A more promising way to design a complex modular system is CQRS (see e.g. my own attempt at explaining the concept back in 2011).
This would enable you to treat your domain events like naming a customer, or blocking a customer as Commands that encapsulate not only how, but also why something should happen.
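As a purely illustrative sketch (these type names are mine, not the answer's), each feature module could contribute its own command and handler, so adding "block customer" never touches the code that handles "name customer":

using System.Threading;
using System.Threading.Tasks;

// Each module defines and handles only its own commands.
public sealed record NameCustomer(int CustomerId, string Name);
public sealed record BlockCustomer(int CustomerId, string Reason);

public interface ICommandHandler<in TCommand>
{
    Task Handle(TCommand command, CancellationToken cancellationToken);
}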
On the read side, then, you could provide data via (persistent) projections of transactional data. In a CQRS architecture, however, transactional data often fits better with event sourcing, or a document database.
This sort of architecture fits well with microservices, but you can also write larger, modular applications this way.
Relational databases, like OOD, seemed like a good idea for a while, but it has ultimately been my experience that both ideas are dead ends. They don't help us reduce complexity. They don't make things easier to maintain.

Performance hit / Memory consumption when aggregating/filtering navigation properties

Let us say I have the following set of classes:
public class MegaBookCorporation
{
    public int ID { get; private set; }

    public int BooksInStock
    {
        get
        {
            return Stores.Sum(x => x.BooksInStock);
        }
    }

    public virtual ICollection<MegaBookCorporationStore> Stores { get; set; }
}

public class MegaBookCorporationStore
{
    public int ID { get; private set; }

    public string BookStoreName { get; private set; }

    public virtual MegaBookCorporation ManagingCorporation { get; private set; }

    public int BooksInStock
    {
        get
        {
            return Books.Where(x => !x.IsSold).Count();
        }
    }

    public virtual ICollection<Book> Books { get; set; }
}

public class Book
{
    public int IndividualBookTrackerID { get; private set; }

    public virtual MegaBookCorporationStore ManagingStore { get; private set; } // name assumed

    public bool IsSold { get; private set; }

    public DateTime? SellingDate { get; private set; }
}
I had a discussion at work regarding the performance hit involved in retrieving the number of books in a MegaBookCorporation. Two important facts:
1/ We're using EF 6 with Lazy Loading, as suggested by the virtual keywords.
2/ Since every book is tracked individually, the number of Book entries in the database will grow quickly. The table will likely reach hundreds of millions of rows in the long run. We will perhaps be adding up to 100,000 books per day.
The opinion I supported is that the current implementation is fine and that we're not going to run into problems. My understanding is that a SQL statement would be generated to filter the collection when GetEnumerator is called.
The other suggestion made by my coworker is to cache the number of books. That means updating a field "int ComputedNumberOfBooks" whenever the AddBookToStock() or SellBook() methods would be called. This field would need to be repeated and updated in both the Store and Corporation classes. (Then of course we would need to take care of concurrency)
I know adding these fields wouldn't be a big deal, but I really feel bad about this idea. To me it looks like pre-engineering a problem that doesn't exist, and that in my opinion won't exist.
I decided to check again my claims with SO and found 2 contradicting answers :
One saying that the whole Books collection would be pulled to memory, since ICollection only inherits from IEnumerable.
The other saying the opposite: the navigation property will be treated as an IQueryable until it is evaluated. (Why not, since the property is wrapped by a proxy?)
So here are my questions :
1- What is the truth?
2- Even if the whole collection is referenced, don't you think that it's not a big deal since it would be an IEnumerable (low memory usage)?
3- What do you think of the memory consumption / performance hit on this example, and what would be the best way to go?
Thank you
What is the truth?
If you use MegaBookCorporation.BooksInStock to get the total number of books stored, all books are going to be loaded from the database. There is no way the query provider can generate an SQL expression for a property getter's body other than just fetching all the data and evaluating it in-memory.
Even if the whole collection is referenced, don't you think that it's not a big deal since it would be an IEnumerable (low memory usage).
Yes, it's a big deal since it does not scale at all. It has nothing to do with the fact that it's IEnumerable. The problem is fetching all the data before evaluating Count().
What do you think of the memory consumption / performance hit on this example, and what would be the best way to go?
The memory consumption will grow with the number of books stored in the database. Since you only want to get their count, that's clearly a no-go. Here you can see how to do it properly.
The verdict
The truth is that with the properties you defined the whole collection of books is loaded. Here's why.
Ideally, you want to be able to do
var numberOfBooks = context.MegaBookCorporations
    .Where(m => m.ID == someId)
    .Select(m => m.BooksInStock)
    .Single();
If EF were able to turn this into SQL, you'd have a query that only returns an integer and loads no entities into memory whatsoever.
But, unfortunately, EF can't do this. It will throw an exception that there is no SQL translation for BooksInStock.
To circumvent this exception you could do:
var numberOfBooks = context.MegaBookCorporations
    .Where(m => m.ID == someId)
    .Single()
    .BooksInStock;
This dramatically changes things. Single() draws one MegaBookCorporation into memory. Accessing its BooksInStock property triggers lazy loading of MegaBookCorporation.Stores. Subsequently, for each Store the complete Books collections are loaded. Finally, the LINQ operations (x => !x.IsSold, Count, Sum) are applied in memory.
So in this case, the first link is correct. Lazy loading always loads complete collections. Once the collections are loaded, they will not be loaded again.
But the second link is correct too :).
As long as you manage to do everything in one LINQ statement that can be translated into SQL, the navigation properties and predicates will be evaluated in the database and no lazy loading will occur. But then you can't use the BooksInStock properties.
The only way to achieve this is by a LINQ statement like
var numberOfBooks = context.MegaBookCorporations
    .Where(m => m.ID == someId)
    .SelectMany(m => m.Stores)
    .SelectMany(s => s.Books)
    .Count(b => !b.IsSold);
This executes a pretty efficient query with joins and a COUNT, returning only the count.
So unfortunately, your key assumption...
that a SQL statement would be generated to filter the collection when GetEnumerator is called.
is not entirely correct. A SQL statement is generated, but it does not include the filter. With the number of books you mention, this will cause severe performance and memory problems.
So what to do?
Something should be done if you need these counts frequently and you don't want to query them separately all the time. Your coworker's idea, a redundant ComputedNumberOfBooks field in the database could be a solution, but I share your objections.
Redundancy should be avoided at (nearly) all costs. The worst part is that it always requires a client application to keep both sides in sync. Or database triggers.
But talking about the database... If these counts are important and frequently queried, I would introduce a computed column BooksInStock in the MegaBookCorporationStore table. Its formula could simply do the count of books in store. Then you can add this computed column to your entity as a property that is marked as DatabaseGeneratedOption.Computed. No redundancy.
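For illustration, the EF6 side of that suggestion might look roughly like the sketch below; this assumes the database itself defines and maintains the BooksInStock column (for example via a persisted computed column backed by a scalar function), and it would replace the in-memory BooksInStock getter shown earlier:

using System.ComponentModel.DataAnnotations.Schema;

public class MegaBookCorporationStore
{
    public int ID { get; private set; }

    // Read-only from EF's point of view: the value is produced by the database
    // and never written by the application.
    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public int BooksInStock { get; private set; }

    public virtual ICollection<Book> Books { get; set; }
}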

Are 'heavy' aggregate functions in RavenDB advisable?

I'm working on a proof-of-concept timesheet application in C# that allows users to simply enter lots of timesheet records. The proof-of-concept will use RavenDB as storage provider, however the question below is perhaps more related to the nosql concept in general.
A user will typically enter between 1 and about 10 records each working day. Let's just say that for the sake of the discussion there will be a lot of records by the end of the year (tens or hundreds of thousands) for this specific collection.
The model for a record will be defined as:
class TimesheetRecord
{
    public long Id { get; set; }
    public int UserId { get; set; }
    public bool IsApproved { get; set; }
    public DateTime DateFrom { get; set; }
    public DateTime DateTill { get; set; }
    public int? ProjectId { get; set; }
    public int? CustomerId { get; set; }
    public string Description { get; set; }
}
Logically, the application will allow the users, or project managers, to create reports on the fly. Think of on the fly reports like:
Total time spent for a project, customer or user
Time spent for a project, or customer in a certain time span like a week, month or between certain dates
Total amount of hours not approved already, by user - or for all users
Etc.
Of course, it is an option to add additional fields, like integers for weeknumber, month etc. to decrease the amount of crunching needed to filter on date/period. The idea is to basically use Query<T> functions by preference in order to generate the desired data.
In a 'regular' relational table this would all be no problem. With or without normalization this would be a breeze. The proof-of-concept is based on: will it blend as well in a nosql variant? This question comes up because I'm having some doubts after being warned that these 'heavy' aggregate functions (like nested WHERE constraints, SUM, etc.) are not ideal in a document store variant.
Considering all this, I have two questions:
Is this advisable in a nosql variant, specifically RavenDB?
Is the approach correct?
I can imagine storing all the data redundantly, instead of querying on the fly, would be more performant, like adding the hours spent by a certain user to a Project() or Customer() object. This, however, will increase the complexity of updates considerably, not to mention create immense redundant data all over the collections, which in turn seems like a direct violation of separation of concerns and DRY.
Any advise or thoughts would be great!
I'm a big fan of RavenDB, but it is not a silver bullet or golden hammer. It has scenarios for which it is not the best tool for the job, and this is probably one of them.
Specifically, document databases in general, and RavenDB in particular, aren't very applicable when the specific data access patterns are not known. RavenDB has the ability to create Map/Reduce indexes that can do some amazing things with aggregating data, but you have to know ahead of time how you want to aggregate it.
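For illustration only, a Map/Reduce index for "total time per project" might look roughly like the sketch below. It uses RavenDB's AbstractIndexCreationTask; the index and result class names are made up, and exactly which expressions are supported inside index definitions depends on the RavenDB version:

public class TimesheetRecords_TotalHoursByProject
    : AbstractIndexCreationTask<TimesheetRecord, TimesheetRecords_TotalHoursByProject.Result>
{
    public class Result
    {
        public int? ProjectId { get; set; }
        public double TotalHours { get; set; }
    }

    public TimesheetRecords_TotalHoursByProject()
    {
        // Map: one entry per timesheet record with its duration in hours.
        Map = records => from r in records
                         select new
                         {
                             r.ProjectId,
                             TotalHours = (r.DateTill - r.DateFrom).TotalHours
                         };

        // Reduce: sum the hours per project.
        Reduce = results => from r in results
                            group r by r.ProjectId into g
                            select new
                            {
                                ProjectId = g.Key,
                                TotalHours = g.Sum(x => x.TotalHours)
                            };
    }
}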
If you only have need for (let's say) 4 specific views on that data, then you can store that data in Raven, apply Map/Reduce indexes, and you will be able to access those reports with blazing speed because they will be asynchronously updated and always available with great performance, because the data will already be there and nothing has to be crunched at runtime. Of course, then some manager will go "You know what would be really great is if we could also see __." If it's OK that manager's request will require additional development time to create a new Map/Reduce index, UI, etc., then Raven could still be the tool for the job.
However, it sounds like you have a scenario with a table of data that would essentially fit perfectly in Excel, and you want to be able to query that data in crazy ways that cannot be known until run time. In that case, you are better off going with a relational database. They were created specifically for that task and they're great at it.

Best strategies when working with micro ORM?

I started using PetaPOCO and Dapper and they both have their own limitations. But they are so lightning fast compared to Entity Framework that I tend to let go of their limitations.
My question is: is there any micro ORM which lets us define one-to-many, many-to-one and many-to-many relationships concretely? Both Dapper.Net and PetaPOCO implement a kind of hack-ish way to fake these relationships, and moreover they don't scale very well when you have 5-6 joins. If there isn't a single micro ORM that can deal with this, then my second question is: should I accept that these micro ORMs aren't that good at defining relationships and create a new POCO entity for every single type of query that I would be executing that includes these kinds of multi-joins? Can this scale well?
I hope I am clear with my question. If not, let me know.
I generally follow these steps.
I create my viewmodel in such a way that it represents the exact data and format I want to display in a view.
I query straight from the database via PetaPoco on to my view models.
In my branch I have a
T SingleInto<T>(T instance, string sql, params object[] args);
method which takes an existing object and can map columns directly on to it matched by name. This works brilliantly for this scenario.
My branch can be found here if needed.
https://github.com/schotime/petapoco/
they don't even scale very well when you may have 5-6 joins
Yes, they don't, but that is a good thing, because when the system you are building starts to get complex, you are free to do the joins you want, without performance penalties or headaches.
Yes, I miss not having to write all these JOINs, as with Linq2SQL, but then I created a simple tool to write the common joins, so I get the basic SQL for any entity and can build from there.
Example:
[TableName("Product")]
[PrimaryKey("ProductID")]
[ExplicitColumns]
public class Product {
[PetaPoco.Column("ProductID")]
public int ProductID { get; set; }
[PetaPoco.Column("Name")]
[Display(Name = "Name")]
[Required]
[StringLength(50)]
public String Name { get; set; }
...
...
[PetaPoco.Column("ProductTypeID")]
[Display(Name = "ProductType")]
public int ProductTypeID { get; set; }
[ResultColumn]
public string ProductType { get; set; }
...
...
public static Product SingleOrDefault(int id) {
var sql = BaseQuery();
sql.Append("WHERE Product.ProductID = #0", id);
return DbHelper.CurrentDb().SingleOrDefault<Product>(sql);
}
public static PetaPoco.Sql BaseQuery(int TopN = 0) {
var sql = PetaPoco.Sql.Builder;
sql.AppendSelectTop(TopN);
sql.Append("Product.*, ProductType.Name as ProductType");
sql.Append("FROM Product");
sql.Append(" INNER JOIN ProductType ON Product.ProductoTypeID = ProductType.ProductTypeID");
return sql;
}
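Building on that pattern, additional queries can reuse the base SQL and just append a filter. A hypothetical example (Fetch<T> is standard PetaPoco; the method name and the DbHelper class come from the snippet above):

public static List<Product> ByProductType(int productTypeId)
{
    var sql = BaseQuery();
    sql.Append("WHERE Product.ProductTypeID = @0", productTypeId);
    return DbHelper.CurrentDb().Fetch<Product>(sql);
}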
Would QueryFirst help here? You get the speed of micro ORMs, with the added comfort of every-error-a-compile-time-error, plus IntelliSense both for your queries and their output. You define your joins in SQL, as God intended. If typing out join conditions is really bugging you, DBForge might be the answer, and because you're working in SQL, these tools are compatible and you're not locked in.
