I am currently developing a staff management system for my company. The fields may vary and change from time to time, so I have an interface for each field, like this:
public interface IStaffInfoField
{
    // ...
    IQueryable<Staff> Filter(IQueryable<Staff> pList, string pAdditionalData);
    // ...
}
For each field, I implement the Filter method, for example with Name:
class NameStaffInfoField : BaseStaffInfoField
{
    // ...
    public override IQueryable<Staff> Filter(IQueryable<Staff> pList, string pAdditionalData)
    {
        return pList.Where(q => q.Name.Contains(pAdditionalData));
    }
    // ...
}
Now the users want to search with multiple conditions. That part is easy: I just iterate through the list and call Filter. However, they also want an OR condition (say, staff who have name A OR name B, AND Department Name C OR Age 30). Note: the users are end-users and they input the search queries through comboboxes and textboxes.
Can I modify my pattern or the lambda expression somehow to achieve that? Throughout the process I don't keep the original list around to Union it for an OR condition, and I think it would be slow if I saved the expression and used Union for the OR.
The only solution I can think of now is to add a method to the interface that takes a raw SQL WHERE statement. But my entire program hasn't used raw SQL queries yet; is it bad to start using them now?
Since your method returns IQueryable, clients already can use it for arbitrarily complicated queries.
IQueryable<Staff> result = xxx.Filter( .... );
result = result.Where( ... );
if ( ... )
result = result.Where( s => ( s.Age > 30 || s.Salary < 1 ) && s.Whatever == "something" );
IQueryable is very flexible. The query tree is evaluated and translated to SQL only when you start to enumerate the results.
I only wonder why you would need the interface at all. Since your Filter method expects an IQueryable, the client already has the IQueryable! Why would she call your Filter method when she can already apply arbitrarily complicated query operators on her own?
Edit:
After your additional explanation: if I were you, I would create a simple interface that lets users build their own query trees containing OR and AND clauses, plus a simple function that translates that user query tree into a LINQ expression tree.
In other words, do not let end users work at the LINQ expression tree level; that is too abstract and too dangerous a layer of your code to let users touch. But simple abstract trees translated by your code into LINQ trees are safe and easy.
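For example, here is a minimal sketch of that idea (all type names are illustrative, not part of the original code): the UI builds a small condition tree from the comboboxes and textboxes, and a translator folds it into a single predicate.
using System;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical user-facing condition tree built by the search UI.
public abstract class UserCondition { }

public sealed class FieldCondition : UserCondition
{
    // Built per field, e.g. s => s.Name.Contains("A")
    public Expression<Func<Staff, bool>> Predicate;
}

public sealed class BinaryCondition : UserCondition
{
    public bool IsOr;                  // false = AND, true = OR
    public UserCondition Left, Right;
}

public static class ConditionTranslator
{
    public static Expression<Func<Staff, bool>> Translate(UserCondition node)
    {
        var leaf = node as FieldCondition;
        if (leaf != null)
            return leaf.Predicate;

        var bin = (BinaryCondition)node;
        var left = Translate(bin.Left);
        var right = Translate(bin.Right);

        // Rebind the right-hand lambda to the left-hand lambda's parameter
        // so the combined body refers to a single parameter.
        var param = left.Parameters[0];
        var rightBody = new ParameterReplacer(right.Parameters[0], param).Visit(right.Body);

        var body = bin.IsOr
            ? Expression.OrElse(left.Body, rightBody)
            : Expression.AndAlso(left.Body, rightBody);

        return Expression.Lambda<Func<Staff, bool>>(body, param);
    }

    // Small visitor that swaps one parameter expression for another.
    private sealed class ParameterReplacer : ExpressionVisitor
    {
        private readonly ParameterExpression _from, _to;
        public ParameterReplacer(ParameterExpression from, ParameterExpression to)
        {
            _from = from;
            _to = to;
        }
        protected override Expression VisitParameter(ParameterExpression node)
        {
            return node == _from ? _to : base.VisitParameter(node);
        }
    }
}
The resulting predicate can then be applied in one go: staffQuery.Where(ConditionTranslator.Translate(treeBuiltFromTheUi)).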
You can download Albahari's LINQKit. It contains a PredicateBuilder that allows you, among other useful things, to concatenate LINQ expressions with OR in a dynamic way.
var predicate = PredicateBuilder.False<Staff>();
predicate = predicate.Or(s => s.Name.Contains(data));
predicate = predicate.Or(s => s.Age > 30);
return dataContext.Staff.Where(predicate);
You can also download the source code and see how it is implemented.
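If you are curious what is inside, the OR combinator is roughly this (a simplified sketch, not the actual LINQKit source; the real library also provides And/True and the AsExpandable()/Expand() step that inlines the Invoke so EF and LINQ to SQL can translate it):
using System;
using System.Linq.Expressions;

public static class MiniPredicateBuilder
{
    public static Expression<Func<T, bool>> False<T>() { return f => false; }

    public static Expression<Func<T, bool>> Or<T>(
        this Expression<Func<T, bool>> left,
        Expression<Func<T, bool>> right)
    {
        // Invoke the right-hand predicate with the left-hand lambda's parameter,
        // then OR the two bodies together into a new lambda.
        var invoked = Expression.Invoke(right, left.Parameters);
        return Expression.Lambda<Func<T, bool>>(
            Expression.OrElse(left.Body, invoked), left.Parameters);
    }
}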
If your users are end users, and they enter criteria through a UI, you may want to look at a UI control that supports IQueryable. Telerik has a large number of pre-baked controls. In most cases end users interact with a grid and they apply filters to the columns. There are several other vendors that do the same thing.
A second option, if you want to make your life difficult: you could take the input text that the user supplies, parse it into an expression tree, and then map that expression tree onto an IQueryable. If you are not familiar with parsers, this task will be fairly difficult to implement.
Related
I know from the MSDN article "How to: Modify Expression Trees" what an ExpressionVisitor is supposed to do: it modifies expressions.
Their example, however, is pretty unrealistic, so I was wondering why I would need one. Could you name some real-world cases where it would make sense to modify an expression tree? Or, why does it have to be modified at all? From what to what?
It also has many overloads for visiting all kinds of expressions. How do I know when I should use any of them, and what should they return? I have seen people use VisitParameter and return base.VisitParameter(node), while others return Expression.Parameter(..).
There was an issue where the database had fields containing 0 or 1 (numeric), and we wanted to use bools in the application.
The solution was to create a "Flag" object, which wrapped the 0 or 1 and had a conversion to bool. We used it like a bool throughout the application, but when we used it in a .Where() clause, Entity Framework complained that it was unable to call the conversion method.
So we used an expression visitor to change property accesses like .Where(x => x.Property) into .Where(x => x.Property.Value == 1) just before sending the tree to EF.
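A minimal sketch of that kind of rewrite, assuming the implicit Flag-to-bool conversion shows up as a Convert node and that Flag exposes an int Value property (both assumptions, since the original Flag type isn't shown):
using System.Linq.Expressions;

class FlagRewriter : ExpressionVisitor
{
    protected override Expression VisitUnary(UnaryExpression node)
    {
        // EF can't translate the implicit Flag -> bool conversion,
        // so replace Convert(x.Property) with x.Property.Value == 1.
        if (node.NodeType == ExpressionType.Convert
            && node.Type == typeof(bool)
            && node.Operand.Type == typeof(Flag))
        {
            var value = Expression.Property(Visit(node.Operand), "Value");
            return Expression.Equal(value, Expression.Constant(1));
        }
        return base.VisitUnary(node);
    }
}

// Usage: rewrite the predicate before handing it to the provider.
// var fixedPredicate = (Expression<Func<MyEntity, bool>>)new FlagRewriter().Visit(predicate);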
Could you name some real-world cases where it would make sense to modify an expression tree?
Strictly speaking, we never modify an expression tree, because trees are immutable (as seen from the outside, at least; there's no promise that a tree doesn't internally memoise values or otherwise keep mutable private state). It's precisely because they are immutable, and hence we can't just change a node in place, that the visitor pattern makes sense when we want to create a new expression tree that is based on the one we have but different in some particular way (the closest thing we have to modifying an immutable object).
We can find a few within LINQ itself.
In many ways the simplest LINQ provider is the LINQ-to-Objects provider that works on enumerable objects in memory.
When it receives enumerables directly as IEnumerable<T> objects, it's pretty straightforward, in that most programmers could write an unoptimised version of most of the methods fairly quickly. E.g. Where is essentially:
static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> pred)
{
    foreach (T item in source)
        if (pred(item))
            yield return item;
}
And so on. But what about EnumerableQueryable, which implements the IQueryable<T> versions? Since EnumerableQueryable wraps an IEnumerable<T>, we could perform the desired operation on the one or more enumerable objects involved, but we have an expression describing that operation in terms of IQueryable<T> and other expressions for selectors, predicates, etc., whereas what we need is a description of the operation in terms of IEnumerable<T> and delegates for the selectors, predicates, etc.
System.Linq.EnumerableRewriter is an implementation of ExpressionVisitor that does exactly such a rewrite, and the result can then simply be compiled and executed.
Within System.Linq.Expressions itself there are a few implementations of ExpressionVisitor for different purposes. One example is that the interpreter form of compilation can't handle hoisted variables in quoted expressions directly, so it uses a visitor to rewrite them into lookups by index into a dictionary.
As well as producing another expression, an ExpressionVisitor can produce a different kind of result entirely. Again, System.Linq.Expressions has internal examples of this: the debug strings and ToString() of many expression types work by visiting the expression in question.
This can (though it doesn't have to) be the approach used by a database-querying LINQ provider to turn an expression into a SQL query.
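As a toy illustration of a visitor that produces text rather than another tree (handling only a couple of node kinds; a real provider would cover far more and would parameterise constants):
using System.Linq.Expressions;
using System.Text;

class SqlishWriter : ExpressionVisitor
{
    private readonly StringBuilder _sql = new StringBuilder();

    // Walks a predicate body like s.Age > 30 and emits "(Age > 30)".
    public string Write(Expression e) { Visit(e); return _sql.ToString(); }

    protected override Expression VisitBinary(BinaryExpression node)
    {
        _sql.Append("(");
        Visit(node.Left);
        _sql.Append(node.NodeType == ExpressionType.GreaterThan ? " > " :
                    node.NodeType == ExpressionType.AndAlso ? " AND " : " = ");
        Visit(node.Right);
        _sql.Append(")");
        return node;   // nothing is rewritten; the result accumulates in _sql
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        _sql.Append(node.Member.Name);
        return node;
    }

    protected override Expression VisitConstant(ConstantExpression node)
    {
        _sql.Append(node.Value);
        return node;
    }
}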
How do I know when I should use any of them and what should they return?
The default implementation of these methods will:
If the expression can have no child expressions (e.g. the result of Expression.Constant()) then it will return the node back again.
Otherwise visit all the child expressions, and then call Update on the expression in question, passing the results back. Update in turn will either return a new node of the same type with the new children, or return the same node back again if the children weren't changed.
As such, if you don't know that you need to explicitly operate on a node for whatever your purposes are, then you probably don't need to change it. It also means that Update is a convenient way to get a new version of a node for a partial change. Just what "whatever your purposes are" means of course depends on the use case. The most common cases probably go to one extreme or the other, with either just one or two expression types needing an override, or all or nearly all needing one.
(One caveat: if you are examining the children that some nodes keep in a ReadOnlyCollection, such as a BlockExpression's steps and variables or a TryExpression's catch blocks, and you will only sometimes change those children, then when you haven't changed them you are best to check for that yourself. A flaw [recently fixed, but not in any released version yet] means that if you pass the same children to Update in a different collection than the original ReadOnlyCollection, a new expression is created needlessly, which has effects further up the tree. This is normally harmless, but it wastes time and memory.)
The ExpressionVisitor enables the visitor pattern for Expressions.
Conceptually, the problem is that when you navigate an expression tree, all you know is that any given node is an Expression; you don't know specifically what kind of Expression it is. This pattern lets you discover what kind you're working with and specify type-specific handling for different kinds.
When you have an Expression, you just pass it to the visitor's Modify method, which calls Visit. The expression knows its own node type, so the visitor calls back the appropriate override.
Looking at the MSDN example you linked:
public class AndAlsoModifier : ExpressionVisitor
{
    public Expression Modify(Expression expression)
    {
        return Visit(expression);
    }

    protected override Expression VisitBinary(BinaryExpression b)
    {
        if (b.NodeType == ExpressionType.AndAlso)
        {
            Expression left = this.Visit(b.Left);
            Expression right = this.Visit(b.Right);

            // Make this binary expression an OrElse operation instead of an AndAlso operation.
            return Expression.MakeBinary(ExpressionType.OrElse, left, right, b.IsLiftedToNull, b.Method);
        }

        return base.VisitBinary(b);
    }
}
In this example, if the Expression happens to be a BinaryExpression, the visitor calls back the VisitBinary(BinaryExpression b) given in the example. Now you can deal with that BinaryExpression knowing that it's a BinaryExpression. You could also provide other override methods that handle other kinds of Expressions.
It's worth noting that, since this works like overload resolution, visited Expressions call back the best-fitting method. So if there are different kinds of BinaryExpressions, you could write an override for one specific subtype; if another subtype calls back, it just uses the default BinaryExpression handling.
In short, this pattern lets you navigate an expression tree knowing what kinds of Expressions you're working with.
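A short usage sketch in the spirit of the same MSDN example:
Expression<Func<string, bool>> expr =
    name => name.Length > 10 && name.StartsWith("G");

var modifier = new AndAlsoModifier();
Expression modified = modifier.Modify(expr);

Console.WriteLine(expr);      // name => ((name.Length > 10) AndAlso name.StartsWith("G"))
Console.WriteLine(modified);  // name => ((name.Length > 10) OrElse name.StartsWith("G"))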
A specific real-world example I have just encountered occurred when shifting to EF Core and migrating from SQL Server (MS-specific) to SQLite (platform independent).
The existing business logic revolved around a middle-tier/service-layer interface that assumed Full Text Search (FTS) happened auto-magically in the background, which it does with SQL Server. Search-related queries were passed into this tier via expressions, and FTS against a SQL Server store required no additional FTS-specific entities.
I didn't want to change any of this, but with SQLite you have to target a specific virtual table for a full-text search, which would in turn have meant changing all the middle-tier calls to re-target the FTS tables/entities and then joining them to the business entity tables to get a similar result set.
But by sub-classing ExpressionVisitor I was able to intercept the calls in the DAL layer and simply rewrite the incoming expression (or more precisely some of the BinaryExpressions within the overall search expression) to specifically handle SQLite's FTS requirements.
This meant that specialization of the data layer to the data store happened within a single class called from a single place in a repository base class. No other aspects of the application needed to be altered in order to support FTS via EF Core, and any SQLite FTS-related entities could be contained in a single pluggable assembly.
So ExpressionVisitor is really very useful, especially when combined with the whole notion of being able to pass around expression trees as data via various forms of IPC.
I have a project with a large codebase that uses an in-house data access layer to work with the database. However, we want to support OData access to the system. I'm quite comfortable with expression trees in C#. How do I get at something I can parse here in order to get the structure of their actual query?
Is there a way to get an AST out of this thing that I can turn into sql code?
Essentially, you need to implement your own query provider which knows how to translate the expression tree into an underlying query.
A simplified version of a controller method would be:
[ODataRoute("foo")]
public List<Foo> GetFoo(ODataQueryOptions<Foo> queryOptions)
{
    var queryAllFoo = _myQueryProvider.QueryAll<Foo>();
    var modifiedQuery = queryOptions.ApplyTo(queryAllFoo);
    // ApplyTo returns a non-generic IQueryable, so cast back before enumerating
    // (assuming no $select/$expand projection is involved).
    return ((IQueryable<Foo>)modifiedQuery).ToList();
}
However!
This is not trivial; it took me about a month to implement custom OData query processing.
You need to build the EDM model so that the Web API OData layer can process the query and build the right expression trees.
It might involve reflection, creating types at runtime in a dynamic assembly (for the projection), and compiling lambda expressions for the best performance.
The Web API OData component has some limitations, so if you want to get relations working you need to spend much more extra time; in our case we did some custom query-string transformation (before processing) and injected joins into the expression trees where needed.
There are too many details to explain in one answer; it's a long way..
Good luck!
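For the EDM model piece mentioned above, the registration itself is at least short. A minimal sketch, assuming the classic ASP.NET Web API OData packages (the namespaces moved around between versions, e.g. System.Web.OData vs Microsoft.AspNet.OData):
using System.Web.Http;
using System.Web.OData.Builder;
using System.Web.OData.Extensions;

public static class ODataConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<Foo>("foo");   // matches the [ODataRoute("foo")] controller above

        // Exposes the model at /odata and lets ODataQueryOptions<Foo> be bound.
        config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
    }
}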
You can use ODataQueryOptions<T> to get abstract syntax trees for the $filter and $orderby query options. ($skip and $top are also available as parsed integers.) Since you don't need/want LINQ support, you could then simply pass the ASTs to a repository method, which would then visit the ASTs to build up the appropriate SQL stored proc invocation. You will not call ODataQueryOptions.ApplyTo. Here's a sketch:
public IEnumerable<Thing> Get(ODataQueryOptions<Thing> opts)
{
    var filter = opts.Filter.FilterClause.Expression;
    var ordering = opts.OrderBy.OrderByClause.Expression;
    var skip = opts.Skip.Value;
    var top = opts.Top.Value;
    return this.Repository.GetThings(filter, ordering, skip, top);
}
Note that filter and ordering in the above are instances of Microsoft.OData.Core.UriParser.Semantic.SingleValueNode. That class has a convenient Accept<T> method, but you probably do not want your repository to depend on that class directly. That is, you should probably use a helper to produce an intermediate form that is independent of Microsoft's OData implementation.
If this is a common pattern, consider using parameter binding so you can get the various query options directly from the controller method's parameter list.
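As for what the repository helper might do with those ASTs, here is a very rough sketch of walking a filter expression into a SQL WHERE fragment. It assumes the Microsoft.OData.Core node types mentioned above (BinaryOperatorNode, SingleValuePropertyAccessNode, ConstantNode, ConvertNode); the exact namespaces differ between ODataLib versions, only a few operators are handled, and real code must parameterise values rather than inline them:
using System;
using Microsoft.OData.Core.UriParser.Semantic;
using Microsoft.OData.Core.UriParser.TreeNodeKinds;

static string ToWhereFragment(SingleValueNode node)
{
    var convert = node as ConvertNode;                 // unwrap implicit conversions
    if (convert != null)
        return ToWhereFragment(convert.Source);

    var binary = node as BinaryOperatorNode;
    if (binary != null)
    {
        string op;
        switch (binary.OperatorKind)
        {
            case BinaryOperatorKind.Equal:       op = "=";   break;
            case BinaryOperatorKind.GreaterThan: op = ">";   break;
            case BinaryOperatorKind.And:         op = "AND"; break;
            case BinaryOperatorKind.Or:          op = "OR";  break;
            default: throw new NotSupportedException(binary.OperatorKind.ToString());
        }
        return "(" + ToWhereFragment(binary.Left) + " " + op + " " + ToWhereFragment(binary.Right) + ")";
    }

    var property = node as SingleValuePropertyAccessNode;
    if (property != null)
        return property.Property.Name;

    var constant = node as ConstantNode;
    if (constant != null)
        return constant.Value.ToString();              // placeholder: use a SQL parameter instead

    throw new NotSupportedException(node.Kind.ToString());
}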
I have written a custom IQueryProvider class that takes an expression and analyses it against a SQL database (I know I could use Linq2Sql but there are some modifications and tweaks that I need that unfortunately make Linq2Sql unsuitable). The class will identify and do something with the properties that are marked (using attributes) but any that aren't I would like to be able to pass the expression on to a LinqToObject provider and allow it to filter the results after.
For example, suppose I have the following linq expression:
var parents=Context.Parents
.Where(parent=>parent.Name.Contains("T") && parent.Age>18);
The Parents class is a custom class that implements IQueryProvider and IQueryable interfaces, but only the Age property is marked for retrieval, so the Age property will be processed, but the Name property is ignored because it is not marked. After I've finished processing the Age property, I'd like to pass the whole expression to LinqToObjects to process and filter, but I don't know how.
N.B. It doesn't need to remove the Age clause of the expression because the result will be the same even after I've processed it so I will always be able to send the whole expression on to LinqToObjects.
I've tried the following code but it doesn't seem to work:
IEnumerator IEnumerable.GetEnumerator() {
    if (this.expression != null && !this.isEnumerating) {
        this.isEnumerating = true;
        var queryable = this.ToList().AsQueryable();
        var query = queryable.Provider.CreateQuery(this.expression);
        return query.GetEnumerator();
    }
    return this;
}
this.isEnumerating is just a boolean flag set to prevent recursion.
this.expression contains the following:
{value(namespace.Parents`1[namespace.Child]).Where(parent => ((parent.Name.EndsWith("T") AndAlso parent.Name.StartsWith("M")) AndAlso (parent.Test > 0)))}
When I step through the code, despite converting the results to a list, it still uses my custom class for the query. So I figured that because the Parents class was at the beginning of the expression, it was still routing the query back to my provider, so I tried setting this.expression to Arguments[1] of the method call, so that it was:
{parent => ((parent.Name.EndsWith("T") AndAlso parent.Name.StartsWith("M")) AndAlso (parent.Test > 0))}
Which to me looks more like it; however, whenever I pass this into the CreateQuery function, I get the error 'Argument expression is not valid'.
The node type of the expression is now 'Quote', though, and not 'Call', and the method is null. I suspect I just need to make this expression a call expression somehow and it will work, but I'm not sure how.
Please bear in mind that this expression happens to be a where clause, but it may be any kind of expression, and I'd prefer not to have to analyse the expression to see what type it is before passing it into the List query provider.
Perhaps there is a way of stripping off or replacing the Parents class at the root of the original expression with the list provider's queryable, while still leaving it in a state that can be passed straight into the List provider regardless of the type of expression?
Any help on this would be greatly appreciated!
You were so close!
My goal was to avoid having to replicate the full, mind-numbingly convoluted LINQ-to-Objects feature set. And you put me on the right track (thanks!). Here's how to piggy-back on LINQ-to-Objects in a custom IQueryable:
public IEnumerator<T> GetEnumerator() {
    // For my case (a custom object-oriented database engine) I still
    // have an IQueryProvider which builds a "subset" of objects each populated
    // with only "required" fields, as extracted from the expression. IDs,
    // dates, particular strings, what have you. This is "cheap" because it
    // has an indexing system as well.
    var en = ((IEnumerable<T>)this.provider.Execute(this.expression));

    // Copy your internal objects into a list.
    var ar = new List<T>(en);
    var queryable = ar.AsQueryable<T>();

    // This is where we went wrong:
    // queryable.Provider.CreateQuery(this.expression);
    // We can't re-reference the original expression because it will loop
    // right back on our custom IQueryable<>. Instead, swap out the first
    // argument with the List's queryable:
    var mc = (MethodCallExpression)this.expression;
    var exp = Expression.Call(mc.Method,
        Expression.Constant(queryable),
        mc.Arguments[1]);

    // Now the CLR can do all of the heavy lifting
    var query = queryable.Provider.CreateQuery<T>(exp);
    return query.GetEnumerator();
}
Can't believe it took me 3 days to figure out how to avoid reinventing the wheel on LINQ-to-Objects queries.
For a utility I'm working on, the client would like to be able to generate graphic reports on the data that has been collected. I can already generate a couple canned graphs (using ZedGraph, which is a very nice library); however, the utility would be much more flexible if the graphs were more programmable or configurable by the end-user.
TLDR version
I want users to be able to use something like SQL to safely extract and select data from a List of objects that I provide and can describe. What free tools or libraries will help me accomplish this?
Full version
I've given thought to using IronPython, IronRuby, and LuaInterface, but frankly they're all a bit overpowered for what I want to do. My classes are fairly simple, along the lines of:
class Person
{
    public string Name;
    public int HeightInCm;
    public DateTime BirthDate;
    public Weight[] WeighIns;
}

class Weight
{
    public int WeightInKg;
    public DateTime Date;
    public Person Owner;
}
(exact classes have been changed to protect the innocent).
To come up with the data for the graph, the user will choose whether it's a bar graph, scatter plot, etc., and then, to actually obtain the data, I would like to get some kind of list from the user simply entering something SQL-ish along the lines of
SELECT Name, AVG(WeighIns) FROM People
SELECT WeightInKg, Owner.HeightInCm FROM Weights
And as a bonus, it would be nice if you could actually do operations as well:
SELECT WeightInKg, (Date - Owner.BirthDate) AS Age FROM Weights
The DSL doesn't have to be compliant SQL in any way; it doesn't even have to resemble SQL, but I can't think of a more efficient descriptive language for the task.
I'm fine filling in blanks; I don't expect a library to do everything for me. What I would expect to exist (but haven't been able to find in any way, shape, or form) is something like Fluent NHibernate (which I am already using in the project) where I can declare a mapping, something like
var personRequest = Request<Person>();
personRequest.Item("Name", (p => p.Name));
personRequest.Item("HeightInCm", (p => p.HeightInCm));
personRequest.Item("HeightInInches", (p => p.HeightInCm * CM_TO_INCHES));
// ...
var weightRequest = Request<Weight>();
weightRequest.Item("Owner", (w => w.Owner), personRequest); // Indicate a chain to personRequest
// ...
var people = Table<Person>("People", GetPeopleFromDatabase());
var weights = Table<Weight>("Weights", GetWeightsFromDatabase());
// ...
TryRunQuery(userInputQuery);
LINQ is so close to what I want to do, but AFAIK there's no way to sandbox it. I don't want to expose any unnecessary functionality to the end user; meaning I don't want the user to be able to send in and process:
from p in people select (p => { System.IO.File.Delete("C:\\something\\important"); return p.Name })
So does anyone know of any free .NET libraries that allow something like what I've described above? Or is there some way to sandbox LINQ? cs-script is close too, but it doesn't seem to offer sandboxing yet either. I'd be hesitant to expose the NHibernate interface either, as the user should have a read-only view of the data at this point in the usage.
I'm using C# 3.5, and pure .NET solutions would be preferred.
The bottom line is that I'm really trying to avoid writing my own parser for a subset of SQL that would only apply to this single project.
There is a way to sandbox LINQ or even C#: a sandboxed AppDomain. I would recommend you look into accepting and compiling the LINQ in a locked-down domain.
Regarding NHibernate, perhaps you can pass the objects into the domain without exposing NHibernate at all (I don't know how NHibernate works). If that is not possible, perhaps the database connection used within the sandbox can be logged in as a user who is granted only SELECT permissions.
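A minimal sketch of creating such a locked-down domain (classic .NET Framework only; CAS sandboxing of this kind is not available on .NET Core/.NET 5+):
using System;
using System.Security;
using System.Security.Permissions;

static AppDomain CreateQuerySandbox()
{
    var setup = new AppDomainSetup
    {
        ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
    };

    // Grant execution only: no file, registry or network access.
    var permissions = new PermissionSet(PermissionState.None);
    permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

    return AppDomain.CreateDomain("QuerySandbox", null, setup, permissions);
}

// The user-supplied query would be compiled and executed inside the sandbox
// via a MarshalByRefObject proxy, with only plain result data marshalled back.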
Maybe expression trees will come in handy for you.
You could provide simple entry places for:
a) what to select - the user is expected to enter only an expression, probably member and arithmetic expressions; those are subclasses of the Expression class
b) how to filter things - again, only expressions are expected
c) ordering
d) joining?
Expressions don't let the user do File.Delete, because they operate only on your precise domain objects (which probably don't have this functionality). The only things you have to check are whether the parameters of the supplied expressions are of your domain types, and whether their return types are domain types (or generic types in the case of IEnumerable<> or IQueryable<>).
this might prove helpful
I.e., expressions don't let the user write multi-line statements.
Then you build your method chain in code, and voilà, there comes the data (see the sketch below).
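A minimal sketch of that chaining, assuming the Person/Weight classes from the question (the method name and signature are illustrative):
using System;
using System.Linq;
using System.Linq.Expressions;

static class UserQueryRunner
{
    // The UI only ever hands us expression trees over our own domain types,
    // and we chain them onto the data source ourselves.
    public static IQueryable<TResult> Run<T, TKey, TResult>(
        IQueryable<T> source,
        Expression<Func<T, bool>> filter,       // (b) how to filter
        Expression<Func<T, TKey>> ordering,     // (c) ordering
        Expression<Func<T, TResult>> selector)  // (a) what to select
    {
        return source
            .Where(filter)
            .OrderBy(ordering)
            .Select(selector);
    }
}

// Usage:
// var rows = UserQueryRunner.Run(people.AsQueryable(),
//     p => p.HeightInCm > 150,
//     p => p.Name,
//     p => new { p.Name, p.HeightInCm });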
I ended up using a little bit of a different approach. Instead of letting users pick arbitrary fields and make arbitrary graphs, I'm still presenting canned graphs, but I'm using Flee to let the user filter out exactly what data is used in the source of the graph. This works out nicely, because I ended up making a set of mappings from variable names to "accessors", and then using those mappings to inject variables into the user-entered filters. It ended up something like:
List<Mapping<Person>> mappings;
// ...
mappings.Add(new Mapping<Person>("Weight", p => p.Weight, "The person's weight (in pounds)"));
// ...
foreach (var m in mappings)
{
context.Variables[m.Name] = m.Accessor(p);
}
// ...
And you can even give an expression context an "owner" (think Ruby's instance_eval, where the code is executed with the scope of the specified object as this); then the user can even enter a filter like Weight > InputNum("The minimum weight to see"), and they will be prompted accordingly when the filter is executed, because I've defined an InputNum method in the owning class.
I feel like it was a good balance between effort involved and end result. I would recommend Flee to anyone who has a need to parse simple statements, especially if you need to extend those statements with your own variables and functions as well.
Ok, understand that I come from ColdFusion, so I tend to think of things in a CF sort of way, and C# and CF are about as different as can be in general approach.
So the problem is: I want to pull a "table" (that's how I think of it) of data from a SQL database via LINQ, and then I want to do some computations on it in memory. This "table" contains 6 or 7 values of a couple different types.
Right now, my solution is to do the LINQ query using a generic List of a custom type. So my example is the relevance table. I pull out some data that I want to evaluate, starting with .Contains. It appears that .Contains wants to compare the whole object or nothing. So I can use it if I have a List<string>, but if I have a List<ReferenceTableEntry>, where ReferenceTableEntry is my custom type, I would need to implement IEquatable and tell the compiler what exactly "Equals" means.
While this doesn't seem unreasonable, it does seem like a long way to go for a simple problem, so I have a sneaking suspicion that my approach is flawed from the get-go.
If I want to use LINQ and .Contains, is implementing the interface the only way? It seems like there should just be a way to say which field to operate on. Is there another collection type besides List that has this ability? I have started using List a lot for this, and while I have looked and looked, I see some other, but not necessarily superior, approaches.
I'm not looking for some fine point of performance or compactness or readability, just wondering whether I am using a Phillips-head screwdriver on a hex screw. If my approach is a "decent" one, but not the best, of course I'd like to know a better one, but just knowing that it's in the ballpark would give me a little "Yeah! I'm not stupid!" and I would at least finish what I am doing before switching to another method.
Hope I explained that well enough. Thanks for your help.
What exactly is it you want to do with the table? It isn't clear. However, the standard LINQ (-to-Objects) methods will be available on any typed collection (including List<T>), allowing the full range of Where, First, Any, All, etc.
So: what is it you are trying to do? If you had the table, what value(s) do you want?
As a guess (based on the Contains stuff) - do you just want:
bool found = table.Any(x => x.Foo == foo); // or someObj.Foo
?
There are overloads for some of the methods in the List<T> class that take a delegate (optionally in the form of a lambda expression), which you can use to specify what field to look for.
For example, to look for the item where the Id property is 42:
ReferenceTableEntry found = theList.Find(r => r.Id == 42);
The found variable will have a reference to the first item that matches, or null if no item matched.
There are also some LINQ extension methods that take a delegate or an expression. This will do the same as the Find method:
ReferenceTableEntry found = theList.FirstOrDefault(r => r.Id == 42);
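If you specifically want to keep using Contains, there is also an Enumerable.Contains overload that takes an IEqualityComparer<T>, so the "what does Equals mean" logic can live in a small comparer class instead of on the entity itself. A sketch, assuming ReferenceTableEntry is a class with an int Id property:
using System.Collections.Generic;

class ReferenceTableEntryIdComparer : IEqualityComparer<ReferenceTableEntry>
{
    public bool Equals(ReferenceTableEntry x, ReferenceTableEntry y)
    {
        if (x == null || y == null) return ReferenceEquals(x, y);
        return x.Id == y.Id;
    }

    public int GetHashCode(ReferenceTableEntry obj)
    {
        return obj.Id.GetHashCode();
    }
}

// bool exists = theList.Contains(someEntry, new ReferenceTableEntryIdComparer());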
Ok, so if I'm reading this correctly, you want to use the Contains method. When using it with collections of objects (such as ReferenceTableEntry), you need to be careful, because what you're asking is whether the collection contains an object that IS the same as the object you're comparing against.
If you use the .Find() or .FindAll() method you can specify the criteria that you want to match on using an anonymous method.
So, for example, if you want to find all ReferenceTableEntry records in your list that have an Id greater than 1, you could do something like this:
List<ReferenceTableEntry> listToSearch = //populate list here
var matches = listToSearch.FindAll(x => x.Id > 1);
matches will be a list of ReferenceTableEntry records that have an Id greater than 1.
Having said all that, it's not completely clear that this is what you're trying to do.
Here is the LINQ query involved that creates the object I am talking about, and the problem line is:
.Where (searchWord => queryTerms.Contains(searchWord.Word))
List<queryTerm> queryTerms = MakeQueryTermList();

public static List<RelevanceTableEntry> CreateRelevanceTable(List<queryTerm> queryTerms)
{
    SearchDataContext myContext = new SearchDataContext();
    var productRelevance = (from pwords in myContext.SearchWordOccuranceProducts
                            where (myContext.SearchUniqueWords
                                       .Where(searchWord => queryTerms.Contains(searchWord.Word))
                                       .Select(searchWord => searchWord.Id)).Contains(pwords.WordId)
                            orderby pwords.WordId
                            select new { pwords.WordId, pwords.Weight, pwords.Position, pwords.ProductId });
}
This query returns a list of WordIds that match the submitted search string (when queryTerms was a List<string> and contained just the words, that worked fine because, as an answerer mentioned before, they were the same type of objects). My custom type here is queryTerm, which contains WordId, ProductId, Position, and Weight, and queryTerms is a List of them. From there I go about calculating the relevance by doing various operations on the created object: sum Weight by product, use position matches to bump up weights, etc. My point in keeping this separate was that the rules for doing those operations will change, but the basic factors involved will not. I would have rather it be even MORE separate (I'm still learning, I don't want to get fancy), but the rules for local versus interpreted LINQ queries seem to trip me up when I do.
Since CF has supported queries of queries forever, that's how I tend to lean. Pull the data you need from the db, then do your operations (which includes queries with Aggregate functions) on the in-memory table.
I hope that makes it more clear.