I have a problem with the Roslyn method DescendantNodes(). With this line:
var blockNodes = root.DescendantNodes(n => n.IsKind(SyntaxKind.Block))
When I use a lambda expression this way, it works in Debug mode. But when I build the DLL and hook it up to a project as an analyzer, it doesn't work. It only works when I rewrite it like this:
var nodes = root.DescendantNodes();
var blockNodes = nodes.Where(n => n.IsKind(SyntaxKind.Block));
Where is the problem, and how can I fix it?
I don't know why debug mode works differently for you. However, I think you may be using the method DescendantNodes incorrectly.
The function passed to DescendantNodes is a predicate that determines whether the walk down the syntax tree collecting descendants continues into a given node's children. If the function returns false for a node, nothing further down that path of the syntax tree is returned.
This is very different from using the LINQ Where method, which filters the full set of descendants to only the nodes that match the predicate.
For example, the first form might find all the blocks nested directly inside a method body, but it would miss any block that is part of another kind of statement, because those statements are not themselves blocks and so are never descended into. The second form (the Where method) considers every node under the root and therefore finds all blocks.
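To make the difference concrete, here is a minimal sketch (assuming a project that references Microsoft.CodeAnalysis.CSharp; the parsed source text is made up for the example):

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class DescendantNodesDemo
{
    static void Main()
    {
        SyntaxNode root = CSharpSyntaxTree.ParseText(@"
class C
{
    void M()
    {
        if (true) { int x = 1; }
    }
}").GetRoot();

        // The lambda here is the descendIntoChildren predicate: it decides whether the
        // walk continues below a node, so subtrees rooted in non-block nodes are skipped
        // entirely rather than filtered.
        var viaPredicate = root.DescendantNodes(n => n.IsKind(SyntaxKind.Block)).ToList();

        // Here every descendant is enumerated first and then filtered, so every block
        // node anywhere in the tree is returned.
        var viaWhere = root.DescendantNodes()
                           .Where(n => n.IsKind(SyntaxKind.Block))
                           .ToList();

        Console.WriteLine($"predicate: {viaPredicate.Count}, where: {viaWhere.Count}");
    }
}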
I am trying to create a function that evaluates a DynamicLinq expression. While the expression itself is valid, the Parameter objects it has available to use may not always be what it needs.
I would like some method of checking if I have all the Parameters available that the expression needs before actually executing it. Currently the best option I have found is to wrap it in a try-catch and ignore the missing param exception.
var ValidLambdaExpression = "ObjectType.Attribute == \"testvalue\" && ObjectType2.Attribute == \"testvalue2\"";
var paramObjects = new List<System.Linq.Expressions.ParameterExpression>();
var p = System.Linq.Expressions.Expression.Parameter(typeof(ObjectType), "ObjectType");
paramObjects.Add(p);
var lam = System.Linq.Dynamic.DynamicExpression.ParseLambda(paramObjects.ToArray(), null, ValidLambdaExpression);
//var lambody = System.Linq.Dynamic.DynamicExpression.Parse(null, ValidLambdaExpression);
//var lam = System.Linq.Expressions.Expression.Lambda(lambody, paramObjects);
var result = DataValidation.ToBoolean(lam.Compile().DynamicInvoke(data));
In the block of code above, the ValidLambdaExpression variable may reference objects that do not exist in the data array. If that happens, both the ParseLambda and Parse lines blow up. I have not found any way of parsing the lambda and then checking for missing parameters, or even listing the required parameters.
This block will blow up with the error:
ParseException -> Unknown identifier 'ObjectType2'
At execution time, paramObjects is built dynamically; it is not hard-coded, so I do not know in advance which objects will be put into it.
Does anyone have a better (in terms of speed) method of validating which parameters the lambda needs before parsing it?
It seems as though no one had a solution for this problem, so here is the workaround I ended up with.
At the time the Lambda expression was being built, I knew which objects were being put into it, even if I did not have that information by the time I needed to use the expression.
So I ended up prefixing the expression with a CSV list of the object names, so that I could get access to them in the method that was using the expression.
The expression ended up looking like this:
ObjType1,ObjType2,ObjType3|ObjType1.Attribute == ObjType2.Attribute
Then I wrote a wrapper around the DynamicExpressionParser that parses this string and makes some intelligent decisions based on it before trying to invoke the expression.
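For reference, a rough sketch of what that wrapper might look like (the TryParse name, the availableObjects dictionary, and the error handling are my assumptions, not the original code; it reuses the same System.Linq.Dynamic ParseLambda call shown above):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

static class PrefixedExpressionParser
{
    // Expects "ObjType1,ObjType2|ObjType1.Attribute == ObjType2.Attribute".
    public static bool TryParse(
        string prefixedExpression,
        IDictionary<string, object> availableObjects,
        out LambdaExpression lambda,
        out string[] missing)
    {
        lambda = null;

        var parts = prefixedExpression.Split(new[] { '|' }, 2);
        var requiredNames = parts[0].Split(',').Select(s => s.Trim()).ToArray();
        var body = parts[1];

        // Decide up front whether every required object is available, instead of
        // letting ParseLambda throw "Unknown identifier".
        missing = requiredNames.Where(name => !availableObjects.ContainsKey(name)).ToArray();
        if (missing.Length > 0)
            return false;

        var parameters = requiredNames
            .Select(name => Expression.Parameter(availableObjects[name].GetType(), name))
            .ToArray();

        lambda = System.Linq.Dynamic.DynamicExpression.ParseLambda(parameters, null, body);
        return true;
    }
}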
I'm currently using LINQ to load a list of files into XDocuments, like so:
var fileNames = new List<string> { @"C:\file.xml" };
var xDocs = fileNames.Select(XDocument.Load);
var xDocs2 = xDocs.ToList(); // Crashes here
If I deliberately 'lock' one of the files with another process, the IOException is only thrown when I actually start to look at the XDocuments I've been generating, i.e. when ToList() is called.
Can anyone explain why this is, and how best to handle this error? I'd like to have access to the working XDocuments still, if possible.
Can anyone explain why this is
As many have pointed out, this is because of the so-called deferred execution of many LINQ methods. For instance, the Enumerable.Select method documentation states:
This method is implemented by using deferred execution. The immediate return value is an object that stores all the information that is required to perform the action. The query represented by this method is not executed until the object is enumerated either by calling its GetEnumerator method directly or by using foreach in Visual C# or For Each in Visual Basic.
while the Enumerable.ToList documentation says:
The ToList<TSource>(IEnumerable<TSource>) method forces immediate query evaluation and returns a List that contains the query results. You can append this method to your query in order to obtain a cached copy of the query results.
So XDocument.Load is actually executed for each file name only during the ToList call. I guess that covers the why part.
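To see the effect in isolation, here is a minimal, self-contained sketch (not the original XDocument code) in which the failure only surfaces when ToList enumerates the query:

using System;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var inputs = new[] { "ok", "boom" };

        // No exception here: Select only builds the query; the lambda has not run yet.
        var query = inputs.Select(s =>
        {
            if (s == "boom") throw new InvalidOperationException(s);
            return s.ToUpper();
        });

        try
        {
            var list = query.ToList(); // the lambda runs now, so the exception is thrown here
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("Thrown during ToList: " + ex.Message);
        }
    }
}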
and how best to handle this error? I'd like to have access to the working XDocuments still, if possible.
I don't know what "best" means in this context, but if you want to ignore the errors and include the "working XDocuments", then you can use something like this:
var xDocs = fileNames.Select(fileName =>
{
    try { return XDocument.Load(fileName); }
    catch { return null; }
});
and then either append .Where(doc => doc != null), or account for null documents when processing the list.
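For instance, a minimal usage sketch of the first option:

var loadedDocs = xDocs.Where(doc => doc != null).ToList();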
This is because LINQ's Select returns an IEnumerable, and the elements are only produced when you materialize it, for example by converting the IEnumerable to a List. Only then do you actually go through all of your elements.
I'm trying to understand why I can't inspect a returned value from a LINQ query when on a breakpoint. Expanding the results view simply says "Children Could Not Be Evaluated".
On the other hand enumerating with a foreach in code or using a ToList does let the collection be inspected in the debugger.
I would have thought, as it does in most other scenarios, that expanding the results view in the debugger is equivalent to a ToList on the collection, which is why I'm expecting it to work. The only thing that is a little different is that I'm calling from an EXE into a DLL, the DLL being where the objects are defined and where the initial query is built and returned. But I can't see it being that.
var timeboxes = assetRepo.ActiveTimeboxes();
// This can't be evaluated in the debugger
var stories = timeboxes.SelectMany(c => assetRepo.AllStories(c));
// This can be inspected in the debugger
var executedStories = stories.ToList();
It's not possible to debug this from VS, but there is LINQPad, which can sometimes help you.
Your best option is to split your query into small statements.
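For example, a rough sketch of that splitting for the code in the question (ActiveTimeboxes and AllStories are the repository methods shown above), materializing each step so the debugger has concrete lists to expand:

// Each ToList() forces the query to run, so the results view shows real elements.
var timeboxes = assetRepo.ActiveTimeboxes().ToList();
var stories = timeboxes.SelectMany(c => assetRepo.AllStories(c)).ToList();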
I just came across the concept of expression trees which I have heard multiple times. I just want to understand what is meant by an expression tree and its purpose.
I would love it if someone could also direct me to simple explanations and samples of use.
An Expression Tree is a data structure that contains Expressions, which are basically code. So it is a tree structure that represents a calculation you may make in code. These pieces of code can then be executed by "running" the expression tree over a set of data.
A great thing about expression trees is that you can build them up in code; that is, you build executable code (or a sequence of steps) in code. You can also modify the code before you execute it by replacing expressions with other expressions.
An Expression then represents a function, such as (int x) => x * x.
See also http://blogs.msdn.com/b/charlie/archive/2008/01/31/expression-tree-basics.aspx
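For a small, self-contained illustration (the class and variable names are only for the sample):

using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // An ordinary delegate: compiled code you can only invoke.
        Func<int, int> square = x => x * x;
        Console.WriteLine(square(5));           // 25

        // An expression tree: a data structure describing the same calculation,
        // which you can inspect or rewrite before turning it into code.
        Expression<Func<int, int>> squareExpr = x => x * x;
        Console.WriteLine(squareExpr.Body);     // (x * x)

        // The same tree built by hand...
        var p = Expression.Parameter(typeof(int), "x");
        var built = Expression.Lambda<Func<int, int>>(Expression.Multiply(p, p), p);

        // ...and compiled back into executable code.
        Console.WriteLine(built.Compile()(5));  // 25
    }
}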
What is the main purpose of the extension method Single()?
I know it will throw an exception if more than one element in the sequence matches the predicate, but I still don't understand in which context it could be useful.
Edit:
I do understand what Single is doing, so you don't need to explain in your answer what this method does.
It's useful for declaratively stating:
I want the single element in the list and if more than one item matches then something is very wrong
There are many times when programs need to reduce a set of elements to the one that is interesting based on a particular predicate. If more than one matches, it indicates an error in the program. Without the Single method, a program would need to traverse parts of the potentially expensive list more than once.
Compare
Item i = someCollection.Single(thePredicate);
To
Contract.Requires(someCollection.Where(thePredicate).Count() == 1);
Item i = someCollection.First(thePredicate);
The latter requires two statements and iterates a potentially expensive list twice. Not good.
Note: yes, First is potentially faster because it only has to iterate the enumeration up to the first element that matches; the rest of the elements are of no consequence. On the other hand, Single must consider the entire enumeration. If multiple matches are of no consequence to your program and indicate no programming error, then yes, use First.
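A quick sketch of the behavioural difference (the numbers array is just sample data):

var numbers = new[] { 1, 2, 2, 3 };

var one = numbers.Single(n => n == 1);   // returns 1: exactly one element matches
var two = numbers.First(n => n == 2);    // returns 2: stops at the first match

// numbers.Single(n => n == 2);          // throws InvalidOperationException: more than one match
// numbers.Single(n => n == 9);          // throws InvalidOperationException: no match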
Using Single allows you to document your expectations on the number of results, and to fail early, fail hard if they are wrong. Unless you enjoy long debugging sessions for their own sake, I'd say it's enormously useful for increasing the robustness of your code.
Every LINQ operator returns a sequence, i.e. an IEnumerable<T>. To get an actual element, you need one of the First, Last, or Single methods; you use the latter if you know for sure the sequence contains only one element. An example would be a 1:1 ID:Name mapping in a database.
Single returns a single instance of the class/object, not a collection. It is very handy when you fetch a single record by Id; I never expect more than one row.
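For example, with a hypothetical db.Users source:

var user = db.Users.Single(u => u.Id == requestedId); // exactly one row expected, otherwise throws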