Is this Repository pattern efficient with LINQ-to-SQL?

I'm currently reading the book Pro ASP.NET MVC Framework. In the book, the author suggests using a repository pattern similar to the following.
[Table(Name = "Products")]
public class Product
{
    [Column(IsPrimaryKey = true,
            IsDbGenerated = true,
            AutoSync = AutoSync.OnInsert)]
    public int ProductId { get; set; }

    [Column] public string Name { get; set; }
    [Column] public string Description { get; set; }
    [Column] public decimal Price { get; set; }
    [Column] public string Category { get; set; }
}
public interface IProductsRepository
{
    IQueryable<Product> Products { get; }
}

public class SqlProductsRepository : IProductsRepository
{
    private Table<Product> productsTable;

    public SqlProductsRepository(string connectionString)
    {
        productsTable = new DataContext(connectionString).GetTable<Product>();
    }

    public IQueryable<Product> Products
    {
        get { return productsTable; }
    }
}
Data is then accessed in the following manner:
public ViewResult List(string category)
{
    var productsInCategory = (category == null)
        ? productsRepository.Products
        : productsRepository.Products.Where(p => p.Category == category);

    return View(productsInCategory);
}
Is this an efficient means of accessing data? Is the entire table going to be retrieved from the database and filtered in memory or is the chained Where() method going to cause some LINQ magic to create an optimized query based on the lambda?
Finally, what other implementations of the Repository pattern in C# might provide better performance when hooked up via LINQ-to-SQL?

I can understand Johannes' desire to control the execution of the SQL more tightly, and with the implementation of what I sometimes call 'lazy anchor points' I have been able to do that in my app.
I use a combination of custom LazyList<T> and LazyItem<T> classes that encapsulate lazy initialization:
LazyList<T> wraps the IQueryable functionality of an IList collection but makes the most of LinqToSql's deferred-execution features, and
LazyItem<T> wraps a lazy invocation of a single item, using either a LinqToSql IQueryable or a generic Func<T> for executing other code deferred.
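A minimal sketch of the LazyItem<T> idea (simplified; the real classes linked at the end of this answer do more, e.g. support IQueryable sources):

public class LazyItem<T>
{
    private readonly Func<T> loader; // deferred load, e.g. a LinqToSql query
    private T item;
    private bool loaded;

    public LazyItem(Func<T> loader)
    {
        this.loader = loader;
    }

    // The wrapped item; the first access triggers the deferred load.
    public T Inner
    {
        get
        {
            if (!loaded)
            {
                item = loader();
                loaded = true;
            }
            return item;
        }
    }
}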
Here is an example: I have this model object Announcement which may have an attached image or PDF document:
public class Announcement : //..
{
    public int ID { get; set; }
    public string Title { get; set; }
    public AnnouncementCategory Category { get; set; }
    public string Body { get; set; }
    public LazyItem<Image> Image { get; set; }
    public LazyItem<PdfDoc> PdfDoc { get; set; }
}
The Image and PdfDoc classes inherit from a type File that contains the byte[] holding the binary data. This binary data is heavy and I might not always need it returned from the DB every time I want an Announcement. So I want to keep my object graph 'anchored' but not 'populated' (if you like).
So if I do something like this:
Console.WriteLine(anAnnouncement.Title);
...I can do so knowing that I have only loaded from the db the data for the immediate Announcement object. But if on the following line I need to do this:
Console.WriteLine(anAnnouncement.Image.Inner.Width);
...I can be sure that the LazyItem<T> knows how to go and get the rest of the data.
Another great benefit is that these 'lazy' classes can hide the particular implementation of the underlying repository, so I don't necessarily have to be using LinqToSql. I am (using LinqToSql) in the case of the app I'm cutting examples from, but it would be easy to plug in another data source (or even a completely different data layer that perhaps does not use the Repository pattern).
LINQ but not LinqToSql
You will find that sometimes you want to do some fancy LINQ query that happens to barf when execution flows down to the LinqToSql provider. That is because LinqToSql works by translating the effective LINQ query logic into T-SQL code, and sometimes that is not possible.
For example, I have this function that I want an IQueryable result from:
private IQueryable<Event> GetLatestSortedEvents()
{
    // TODO: WARNING: HEAVY SQL QUERY! fix
    return this.GetSortedEvents().ToList()
        .Where(ModelExtensions.Event.IsUpcomingEvent())
        .AsQueryable();
}
Why that code does not translate to SQL is not important; just believe me that the conditions in that IsUpcomingEvent() predicate involve a number of DateTime comparisons that are simply far too complicated for LinqToSql to convert to T-SQL.
By using .ToList(), then the .Where(..) condition, and then .AsQueryable(), I'm effectively telling LinqToSql that I need all of the .GetSortedEvents() items even though I'm then going to filter them. This is an instance where my filter expression will not render to SQL correctly, so I need to filter in memory. This is what I might call the limit of LinqToSql's performance as far as deferred execution and lazy loading go - but I only have a small number of these WARNING: HEAVY SQL QUERY! blocks in my app and I think further smart refactoring could eliminate them completely.
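One way to soften the blow (an illustrative sketch; StartDate is an assumed property) is a cheap pre-filter that LinqToSql can translate, so only a plausible subset is pulled into memory before the untranslatable predicate runs:

private IQueryable<Event> GetLatestSortedEvents()
{
    var cutoff = DateTime.Now.AddDays(-1);
    return this.GetSortedEvents()
        .Where(e => e.StartDate >= cutoff) // translatable: runs as T-SQL
        .ToList()                          // switch to LinqToObjects here
        .Where(ModelExtensions.Event.IsUpcomingEvent()) // in-memory filter
        .AsQueryable();
}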
Finally, LinqToSql can make a fine data access provider in large apps if you want it to. I found that to get the results I want, and to abstract away and isolate certain things, I've needed to add code here and there. And where I want more control over the actual SQL performance from LinqToSql, I've added smarts to get the desired results. So IMHO LinqToSql is perfectly OK for heavy apps that need db query optimization, provided you understand how LinqToSql works. My design was originally based on Rob's Storefront tutorial, so you might find it useful if you need more explanation about my rants above.
And if you want to use those lazy classes above, you can get them here and here.

Is this an efficient means of accessing data? Is the entire table going to be retrieved from the database and filtered in memory or is the chained Where() method going to cause some LINQ magic to create an optimized query based on the lambda?
It is efficient, if you wish to say so. The Repository exposes an IQueryable interface, which basically represents any LINQ data provider (in this case Linq2Sql).
Queries are executed the moment you start iterating over the result.
IQueryable therefore supports query composition. You can add any .Where() or .GroupBy() or .OrderBy() call to a query and it will be satisfied by the database.
If you put an enumeration in your query, such as .ToList(), everything after that will happen in memory (LinqToObjects).
But I think the repository implementation is useless. I want my repository to control query execution, which is impossible when exposing IQueryable.
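To make the execution point concrete, a small illustrative sketch using the repository from the question:

// Nothing has hit the database yet; these lines only compose an expression tree.
IQueryable<Product> query = productsRepository.Products
    .Where(p => p.Category == "Books")
    .OrderBy(p => p.Price);

// The SQL (SELECT ... WHERE ... ORDER BY) is generated and executed here.
List<Product> books = query.ToList();

// By contrast, this pulls the whole table and filters in memory (LinqToObjects):
var inMemory = productsRepository.Products.ToList().Where(p => p.Price > 10m);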

Yes, Linq2Sql will generate magic to make it more efficient. It depends on you using the IQueryable interface. If you want to check, clamp the SQL Profiler on and you can see it generate the appropriate query.
I would recommend introducing a service layer to abstract away your dependency on Linq2Sql.
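A minimal sketch of what such a service layer could look like (names are illustrative, not from the question):

public class ProductService
{
    private readonly IProductsRepository repository;

    public ProductService(IProductsRepository repository)
    {
        this.repository = repository;
    }

    // Callers receive a materialized list; the SQL executes here,
    // inside the service, so nothing downstream depends on Linq2Sql.
    public IList<Product> GetProductsByCategory(string category)
    {
        return repository.Products
            .Where(p => p.Category == category)
            .ToList();
    }
}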

I've also read that book recently and this is the SQL generated when I ran the sample code:
SELECT [t1].[Category]
FROM (
    SELECT DISTINCT [t0].[Category]
    FROM [Products] AS [t0]
) AS [t1]
ORDER BY [t1].[Category]
I don't think you can write anything more efficient given that database. However, in most real databases your categories would be in a separate table to keep things DRY.

Related

Entity Framework 6: virtual collections lazy loaded even explicitly loaded on a query

I have a problem with EF6 when trying to optimize the queries. Consider this class with one collection:
public class Client
{
    // ... a lot of properties
    public virtual List<Country> Countries { get; set; }
}
As you might know, with lazy loading I have the n+1 problem: EF runs an extra query to get the Countries for each client.
I tried to use Linq projections; for example:
return _dbContext.Clients
    .Select(client => new
    {
        client,
        client.Countries
    })
    .ToList()
    .Select(data =>
    {
        data.client.Countries = data.Countries; // Here is the problem
        return data.client;
    })
    .ToList();
Here I'm using two selects: the first for the LINQ projection, so EF can create the SQL, and the second to map the result to a Client class. The reason for that is that I'm using a repository interface, which returns List<Client>.
Although the query is generated with the Countries in it, EF still uses lazy loading when I try to render the whole information (the same n+1 problem). The only way to avoid this is to remove the virtual modifier:
public class Client
{
    // ... a lot of properties
    public List<Country> Countries { get; set; }
}
The issue I have with this solution is that we still want to have this property as virtual. This optimization is only necessary for a particular part of the application, whilst in the other sections we want to keep the lazy loading behaviour.
I don't know how to "inform" EF that this property has already been loaded via the LINQ projection. Is that possible? If not, do we have any other options? The n+1 problem makes the application take several seconds to load around 1000 rows.
Edit
Thanks for the responses. I know I can use the Include() extension to load the collections, but my problem is with some additional optimizations I need to add (I'm sorry for not posting the complete example; I thought the collection issue would be enough):
public class Client
{
    // ... a lot of properties
    public virtual List<Country> Countries { get; set; }
    public virtual List<Action> Actions { get; set; }
    public virtual List<Investment> Investments { get; set; }

    public User LastUpdatedBy
    {
        get
        {
            if (Actions != null)
            {
                return Actions.Last().User; // assumes an Action records the User who made it
            }
            return null; // fallback added so the getter returns on every path
        }
    }
}
If I need to render the clients, the information about the last update, and the number of investments (Count()), then with Include() I practically need to bring all the information from the database. However, if I use a projection like
return _dbContext.Clients
    .Select(client => new
    {
        client,
        client.Countries,
        NumberOfInvestments = client.Investments.Count(), // this is translated to an SQL query
        LastUpdatedBy = client.Audits.OrderByDescending(m => m.Id).FirstOrDefault()
    })
    .ToList()
    .Select(data =>
    {
        // here I map the data back
        return data.client;
    })
    .ToList();
I can reduce the query, getting only the required information (in the case of LastUpdatedBy I need to change the property to a getter/setter one, which is not a big issue, as it's only used for this particular part of the application).
If I use Select() with this approach (projection and then mapping), the Include() section is not considered by EF.
If I understand correctly, you can try this:
_dbContext.Configuration.LazyLoadingEnabled = false; // the EF 6 switch for lazy loading
var clientsWithCountries = _dbContext.Clients
    .Include(c => c.Countries)
    .ToList();
This will fetch Clients, including only their Countries. If you disable lazy loading, no other collection will be loaded by the query, unless you specify an include or projection.
FYI: projection and Include() don't work together; see this answer. If you use a projection it will bypass the include.
https://stackoverflow.com/a/7168225/1876572
I don't know what you want to do; you are using lambda expressions, not LINQ query syntax, and your second Select is unnecessary.
data.client is client, and data.Countries is client.Countries, so data.client.Countries = data.Countries is always true.
If you don't want to lazy load Countries, use _dbContext.Clients.Include("Countries").Where(...) or a Select().
In order to force eager loading of virtual properties you are supposed to use the Include extension method.
Here is a link to MSDN https://msdn.microsoft.com/en-us/library/jj574232(v=vs.113).aspx.
So something like this should work:
return _dbContext.Clients.Include(c=>c.Countries).ToList();
I'm not 100% sure, but I think your issue is that you are still maintaining a queryable for your inner collection through to the end of the query.
This queryable is lazy (because in the model it was lazy), and you haven't done anything to say that this should not be the case; you have simply projected that same lazy queryable into the result set.
I can't tell you off the top of my head what the right answer is, but I would try things along the following lines:
1. Use a projection on the inner queryable too, e.g.:
return _dbContext.Clients
    .Select(client => new
    {
        client,
        Countries = client.Countries.Select(c => c) // or a new Country
    });
2. Put the Include at the end of the query (I'm pretty sure Include applies to the result, not the input; it definitely doesn't work if you put it before a projection), e.g.:
_dbContext.Clients
    .Select(client => new
    {
        client,
        client.Countries
    })
    .Include(c => c.Countries);
3. Try specifying the enumeration inside the projection, e.g.:
_dbContext.Clients
    .Select(client => new
    {
        client,
        Countries = client.Countries.AsEnumerable() // perhaps ToList if it works
    });
I do want to caveat this by saying that I haven't tried any of the above, but I think this will set you on the right path.
A note on lazy loading
IMO there are very few good use cases for lazy loading. It almost always causes too many queries to be generated, unless your user is following a lazy path directly on the model. Use it only with extreme caution, and IMO not at all in request/response (e.g. web) apps.

Performance hit / Memory consumption when aggregating/filtering navigation properties

Let us say I have the following set of classes:
public class MegaBookCorporation
{
    public int ID { get; private set; }

    public int BooksInStock
    {
        get
        {
            return Stores.Sum(x => x.BooksInStock);
        }
    }

    public virtual ICollection<MegaBookCorporationStore> Stores { get; set; }
}
public class MegaBookCorporationStore
{
    public int ID { get; private set; }
    public string BookStoreName { get; private set; }
    public virtual MegaBookCorporation ManagingCorporation { get; private set; }

    public int BooksInStock
    {
        get
        {
            return Books.Where(x => !x.IsSold).Count();
        }
    }

    public virtual ICollection<Book> Books { get; set; }
}
public class Book
{
    public int IndividualBookTrackerID { get; private set; }
    public virtual MegaBookCorporationStore Store { get; private set; } // property name added; it was missing in the original
    public bool IsSold { get; private set; }
    public DateTime? SellingDate { get; private set; }
}
I had a discussion at work regarding the performance hit involved in retrieving BooksInStock on a MegaBookCorporation. Two important facts:
1/ We're using EF 6 with lazy loading, as suggested by the virtual keywords.
2/ Since every book is tracked individually, the number of Book entries in the database will grow quickly. The table will likely reach hundreds of millions of rows in the long run. We will perhaps be adding up to 100,000 books per day.
The opinion I supported is that the current implementation is fine and that we're not going to run into problems. My understanding is that a SQL statement would be generated to filter the collection when GetEnumerator is called.
The other suggestion made by my coworker is to cache the number of books. That means updating an "int ComputedNumberOfBooks" field whenever the AddBookToStock() or SellBook() methods are called. This field would need to be duplicated and updated in both the Store and Corporation classes. (Then of course we would need to take care of concurrency.)
I know adding these fields wouldn't be a big deal, but I really feel bad about this idea. To me it looks like pre-engineering a problem that doesn't exist, and that in my opinion won't exist.
I decided to check my claims again on SO and found two contradicting answers:
One saying that the whole Books collection would be pulled into memory, since ICollection only inherits from IEnumerable.
The other saying the opposite: the navigation property will be treated as an IQueryable until it is evaluated (why not, since the property is wrapped by a proxy).
So here are my questions:
1- What is the truth?
2- Even if the whole collection is referenced, don't you think it's not a big deal, since it would be an IEnumerable (low memory usage)?
3- What do you think of the memory consumption / performance hit in this example, and what would be the best way to go?
Thank you
What is the truth?
If you use MegaBookCorporation.BooksInStock to get the total number of books stored, all books are going to be loaded from the database. There is no way the query provider can generate an SQL expression for a property getter's body other than just fetching all the data and evaluating it in-memory.
Even if the whole collection is referenced, don't you think that it's not a big deal since it would be an IEnumerable (low memory usage).
Yes, it's a big deal since it does not scale at all. It has nothing to do with the fact that it's IEnumerable. The problem is fetching all the data before evaluating Count().
What do you think of the memory consumption / performance hit on this example, and what would be the best way to go?
The memory consumption will grow with the number of books stored in the database. Since you only want to get their count, that's clearly a no-go. Here you can see how to do it properly.
The verdict
The truth is that with the properties you defined the whole collection of books is loaded. Here's why.
Ideally, you want to be able to do
var numberOfBooks = context.MegaBookCorporations
    .Where(m => m.ID == someId)
    .Select(m => m.BooksInStock)
    .Single();
If EF were able to turn this into SQL, you'd have a query that only returns an integer and loads no entities into memory whatsoever.
But, unfortunately, EF can't do this. It will throw an exception that there is no SQL translation for BooksInStock.
To circumvent this exception you could do:
var numberOfBooks = context.MegaBookCorporations
    .Where(m => m.ID == someId)
    .Single()
    .BooksInStock;
This dramatically changes things. Single() draws one MegaBookCorporation into memory. Accessing its BooksInStock property triggers lazy loading of MegaBookCorporation.Stores. Subsequently, for each Store the complete Books collection is loaded. Finally, the LINQ operations (x => !x.IsSold, Count, Sum) are applied in memory.
So in this case, the first link is correct. Lazy loading always loads complete collections. Once the collections are loaded, they will not be loaded again.
But the second link is correct too :).
As long as you manage to do everything in one LINQ statement that can be translated into SQL, the navigation properties and predicates will be evaluated in the database and no lazy loading will occur. But then you can't use the BooksInStock properties.
The only way to achieve this is by a LINQ statement like
var numberOfBooks = context.MegaBookCorporations
    .Where(m => m.ID == someId)
    .SelectMany(m => m.Stores)
    .SelectMany(s => s.Books)
    .Count();
This executes a pretty efficient query with one join and a COUNT, returning only the count.
So unfortunately, your key assumption...
that a SQL statement would be generated to filter the collection when GetEnumerator is called.
...is not entirely correct. A SQL statement is generated, but not one that includes the filter. With the number of books you mention, this will cause severe performance and memory problems.
So what to do?
Something should be done if you need these counts frequently and you don't want to query them separately all the time. Your coworker's idea, a redundant ComputedNumberOfBooks field in the database, could be a solution, but I share your objections.
Redundancy should be avoided at (nearly) all costs. The worst part is that it always requires a client application to keep both sides in sync. Or database triggers.
But talking about the database... if these counts are important and frequently queried, I would introduce a computed column BooksInStock in the MegaBookCorporationStore table. Its formula could simply count the books in the store. Then you can add this computed column to your entity as a property marked as DatabaseGeneratedOption.Computed. No redundancy.
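For illustration, the entity side of that could look like this sketch (EF 6 data annotations; the column formula itself lives in the database, e.g. a scalar function counting the store's unsold books):

public class MegaBookCorporationStore
{
    public int ID { get; private set; }
    public string BookStoreName { get; private set; }

    // Computed by the database; EF reads it back and never tries to write it.
    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public int BooksInStock { get; private set; }

    public virtual ICollection<Book> Books { get; set; }
}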

How to insert an ObservableCollection property to a local sqlite-net db?

I have a quick question about the sqlite-net library, which can be found here: https://github.com/praeclarum/sqlite-net.
The thing is, I have no idea how collections and custom objects will be inserted into the database, and how to convert them back when querying, if needed.
Take this model for example:
public class Subject // enclosing class added for context; the original snippet omitted it
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }

    private string _name; // The name of the subject, e.g. "Physics"
    private ObservableCollection<Lesson> _lessons;
}
Preface: I've not used sqlite-net; rather, I spent some time simply reviewing the source code on the github link posted in the question.
From the first page on the sqlite-net github site, there are two bullet points that should help in some high level understanding:
Very simple methods for executing CRUD operations and queries safely (using parameters) and for retrieving the results of those queries in a strongly typed fashion
In other words, sqlite-net will work well with non-complex models; will probably work best with flattened models.
Works with your data model without forcing you to change your classes. (Contains a small reflection-driven ORM layer.)
In other words, sqlite-net will transform/map the result set of the SQL query to your model; again, will probably work best with flattened models.
Looking at the primary source file, SQLite.cs, there is an InsertAll method and a few overloads that will insert a collection.
When querying for data, you should be able to use the Get<T> method and the Table<T> method, and there is also a Query<T> method you could take a look at. Each should map the results to the type parameter.
Finally, take a look at the examples and tests for a more in-depth look at using the framework.
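For illustration, a small usage sketch based on those methods (assuming a flattened Subject class with public Id and Name properties; the names are mine, not from the question):

// Open (or create) the database file and make sure the table exists.
var db = new SQLiteConnection("subjects.db");
db.CreateTable<Subject>();

// InsertAll persists a plain collection of flattened objects.
db.InsertAll(new[]
{
    new Subject { Name = "Physics" },
    new Subject { Name = "Maths" }
});

// Table<T> gives a queryable view; Get<T> fetches by primary key.
var physics = db.Table<Subject>().Where(s => s.Name == "Physics").FirstOrDefault();
var first = db.Get<Subject>(1);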
I've worked quite a bit with SQLite-net in the past few months (including this presentation yesterday)
how collections, and custom objects will be inserted into the database
I think the answer is they won't.
While it is a very capable database and ORM, SQLite-net is targeting lightweight mobile apps. Because of this lightweight focus, the classes used are generally very simple flattened objects like:
public class Course
{
    public int CourseId { get; set; }
    public string Name { get; set; }
}

public class Lesson
{
    public int LessonId { get; set; }
    public string Name { get; set; }
    public int CourseId { get; set; }
}
If you then need to join these back together, and to handle insertion and deletion of related objects, then that's down to you - the app developer - to handle. There's no auto-tracking of related objects like there is in a larger, more complicated ORM stack.
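Roughly, loading a course together with its lessons becomes two explicit queries that you wire up yourself (illustrative sketch; assumes CourseId is marked as the primary key of Course):

// Two queries, joined by hand; sqlite-net tracks no relationships for you.
var course = db.Get<Course>(courseId);
var lessons = db.Table<Lesson>()
                .Where(l => l.CourseId == course.CourseId)
                .ToList();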
In practice, I've not found this a problem. I find SQLite-net very useful in my mobile apps.

Improving efficiency with Entity Framework

I have been using the Entity Framework with the POCO-first approach. I have pretty much followed the pattern described by Steve Sanderson in his book 'Pro ASP.NET MVC 3 Framework', using a DI container and a DbContext class to connect to SQL Server.
The underlying tables in SQL server contain very large datasets used by different applications. Because of this I have had to create views for the entities I need in my application:
class RemoteServerContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
    public DbSet<Contact> Contacts { get; set; }
    ...

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>().ToTable("vw_Customers");
        modelBuilder.Entity<Order>().ToTable("vw_Orders");
        ...
    }
}
and this seems to work fine for most of my needs.
The problem I have is that some of these views have a great deal of data in them so that when I call something like:
var customers = _repository.Customers().Where(c => c.Location == location).Where(...);
it appears to be bringing back the entire data set, which can take some time before the LINQ query reduces the set to the records I need. This seems very inefficient when the criteria apply to only a few records and I am getting the entire data set back from SQL Server.
I have tried to work around this by using stored procedures, such as:
public IEnumerable<Customer> CustomersThatMatchACriteria(string criteria1, string criteria2, ...) // or an object passed in!
{
    return Database.SqlQuery<Customer>(
        "Exec pp_GetCustomersForCriteria @crit1 = {0}, @crit2 = {1} ...",
        criteria1, criteria2, ...);
}
Whilst this is much quicker, the problem here is that it doesn't return a DbSet, and so I lose all of the connectivity between my objects; e.g. I can't reference any associated objects such as orders or contacts, even if I include their IDs, because the return type is a collection of Customers rather than a DbSet of them.
Does anyone have a better way of getting SQL server to do the querying so that I am not passing loads of unused data around?
var customers = _repository.Customers().Where(c => c.Location == location).Where(...
If Customers() returns IQueryable, this statement alone won't actually be 'bringing back' anything at all - calling Where on an IQueryable gives you another IQueryable, and it's not until you do something that causes query execution (such as ToList, or FirstOrDefault) that anything will actually be executed and results returned.
If however this Customers method returns a collection of instantiated objects, then yes, since you are asking for all the objects you're getting them all.
I've never used either code-first or indeed even the repository pattern, so I don't know what to advise, other than staying in the realm of IQueryable for as long as possible, and only executing the query once you've applied all relevant filters.
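For illustration, 'staying in the realm of IQueryable' might look like this sketch (onlyActive and IsActive are assumed names):

IQueryable<Customer> query = _repository.Customers();

// Keep composing; still no SQL has been sent to the server.
query = query.Where(c => c.Location == location);
if (onlyActive)
    query = query.Where(c => c.IsActive);

// One round trip: the combined WHERE clause runs on SQL Server.
var customers = query.ToList();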
What I would have done to return just a set of data would have been the following:
var customers = (from x in Repository.Customers
                 where <boolean statement> &&/|| <boolean statement>
                 select new { variableName = x.Name, ... })
                .Take(<number of records you need>);
so for instance:
var customers = (from x in _repository.Customers
                 where x.ID == id
                 select new { variableName = x.Name }).Take(1000);
Then iterate through the results to get the data (remember, the LINQ statement returns an IQueryable):
foreach (var data in customers)
{
    string doSomething = data.variableName; // to get data from your query
}
Hope this helps. These aren't exactly the same methods, but I find this approach handy in my code.
Probably it's because your Customers() method in your repository is doing a GetAll() kind of thing and fetching the entire list first. This prevents LINQ and SQL Server from creating smart queries.
I don't know if there's a good workaround for your repository, but if you do something like:
using (var db = new RemoteServerContext())
{
    var custs = db.Customers.Where(...);
}
I think that will be a lot quicker. If your project is small enough, you can do without a repository. Sure, you'll lose an abstraction layer, but with small projects this may not be a big problem.
On the other hand, you could load all Customers in your repository once and use the resulting collection directly (instead of the method-call that fills the list). Beware of adding, removing and modifying Customers though.
You need the LINQ query to return less data, like SQL paging with the TOP function, or do the querying manually in stored procedures. In either case, you need to rewrite your querying mechanism. This is one of the reasons why I didn't use EF: you don't seem to have much control over the code it generates.
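For the paging idea specifically, EF can translate Skip/Take into SQL paging, so something like this sketch keeps filtering and paging on the server (Id is an assumed key property; paging requires a stable ordering):

// Page 3, 50 rows per page: only those 50 rows come back from SQL Server.
var page = db.Customers
    .Where(c => c.Location == location)
    .OrderBy(c => c.Id)
    .Skip(2 * 50)
    .Take(50)
    .ToList();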

Best strategies when working with micro ORM?

I started using PetaPoco and Dapper and they both have their own limitations. But on the other hand, they are so much faster than Entity Framework that I tend to let go of their limitations.
My question is: is there any ORM which lets us define one-to-many, many-to-one and many-to-many relationships concretely? Both Dapper.NET and PetaPoco implement these relationships in a somewhat hack-ish way, and moreover they don't scale very well when you may have 5-6 joins. If there isn't a single micro ORM that can deal with this, then my second question is: should I accept that these micro ORMs aren't that good at defining relationships, and create a new POCO entity for every single type of query that I would be executing that involves these multi-joins? Can this scale well?
I hope I am clear with my question. If not, let me know.
I generally follow these steps:
I create my view model in such a way that it represents the exact data and format I want to display in a view.
I query straight from the database via PetaPoco onto my view models.
In my branch I have a
T SingleInto<T>(T instance, string sql, params object[] args);
method which takes an existing object and can map columns directly onto it, matched by name. This works brilliantly for this scenario.
My branch can be found here if needed.
https://github.com/schotime/petapoco/
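Usage would look something like this (a sketch based on the signature above; the view model and SQL are illustrative):

// Map the query's columns straight onto an existing view model, matched by name.
var details = db.SingleInto(new ProductDetailsViewModel(),
    "SELECT p.Name, pt.Name AS ProductType FROM Product p " +
    "INNER JOIN ProductType pt ON pt.ProductTypeID = p.ProductTypeID " +
    "WHERE p.ProductID = @0", productId);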
they don't even scale very well when you may have 5-6 joins
Yes, they don't, but that is a good thing, because when the system you're building starts to get complex, you are free to do the joins you want, without performance penalties or headaches.
Yes, I miss the days when I didn't need to write all these JOINs with Linq2SQL, but then I created a simple tool to write the common joins, so I get the basic SQL for any entity and then I can build from there.
Example:
[TableName("Product")]
[PrimaryKey("ProductID")]
[ExplicitColumns]
public class Product
{
    [PetaPoco.Column("ProductID")]
    public int ProductID { get; set; }

    [PetaPoco.Column("Name")]
    [Display(Name = "Name")]
    [Required]
    [StringLength(50)]
    public String Name { get; set; }

    ...

    [PetaPoco.Column("ProductTypeID")]
    [Display(Name = "ProductType")]
    public int ProductTypeID { get; set; }

    [ResultColumn]
    public string ProductType { get; set; }

    ...

    public static Product SingleOrDefault(int id)
    {
        var sql = BaseQuery();
        sql.Append("WHERE Product.ProductID = @0", id);
        return DbHelper.CurrentDb().SingleOrDefault<Product>(sql);
    }

    public static PetaPoco.Sql BaseQuery(int TopN = 0)
    {
        var sql = PetaPoco.Sql.Builder;
        sql.AppendSelectTop(TopN);
        sql.Append("Product.*, ProductType.Name as ProductType");
        sql.Append("FROM Product");
        sql.Append("INNER JOIN ProductType ON Product.ProductTypeID = ProductType.ProductTypeID");
        return sql;
    }
}
Would QueryFirst help here? You get the speed of micro ORMs, with the added comfort of every-error-is-a-compile-time-error, plus IntelliSense both for your queries and their output. You define your joins in SQL, as God intended. If typing out join conditions is really bugging you, dbForge might be the answer, and because you're working in SQL, these tools are compatible and you're not locked in.
