I am not sure if this is a typical Stack Overflow question, but I am working on an application that must constantly evaluate certain conditions (for example, whether a certain variable's value is over a threshold). The conditions can change at any time, preferably from outside the code.
People have suggested I use an expression parser, but I still don't understand what advantage one provides over the basic mathematical operations .NET already gives me.
Can you recommend a good .NET expression parser?
I think you need Dynamic LINQ. You can pass the conditions as strings.
Here is a blog post about that by ScottGu: http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx
I found this via a similar question: Dynamic WHERE clause in LINQ
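As a minimal sketch of what that looks like, assuming the System.Linq.Dynamic.Core NuGet package (a maintained descendant of the library from ScottGu's post); the condition string could just as well come from a config file or a UI:
using System.Linq;
using System.Linq.Dynamic.Core; // assumed NuGet package: System.Linq.Dynamic.Core
// The condition is plain data, so it can change without recompiling.
string condition = "Value > @0";
var readings = new[] { new { Value = 3.0 }, new { Value = 42.0 } }.AsQueryable();
var overThreshold = readings.Where(condition, 10.0).ToList(); // keeps only 42.0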
An expression parser would offer more flexibility. Your expressions could be written as formulas in strings and treated as application data instead of hardcoded classes/methods/whatever.
You could do things like:
// Assign an action to an expression given as a string
ExpressionObserver.Add("(a+b+c)/2 > x-y", () => { DoSomething(); });
Or:
// Replace the old expression by something written by the user in the UI
someExpressionActionAssignment.Expression = MyLineEdit1.Text;
But I do not know if the added complexity of all this really pays off in your case. If you only have a few simple expressions then it's probably overkill.
I just saw this bit of code that has a count++ side effect in the .GroupBy key selector. (originally here).
object[,] data; // This contains all the data.
int count = 0;
// Flattens the 2D array cell by cell, then reassembles the rows by
// integer-dividing a counter that mutates inside the key selector.
List<string[]> dataList = data.Cast<string>()
                              .GroupBy(x => count++ / data.GetLength(1))
                              .Select(g => g.ToArray())
                              .ToList();
This terrifies me because I have no idea how many times the implementation will invoke the key selector function, nor whether the function is guaranteed to be applied to each item in order. I realize that, in practice, the implementation may very well call the function once per item, in order, but I have never assumed that was guaranteed, so I am paranoid about depending on that behaviour -- especially given what may happen on other platforms, in future implementations, or after translation or deferred execution by other LINQ providers.
As it pertains to a side effect in the key selector, are we offered some kind of written guarantee, in terms of a LINQ specification or something, as to how many times the key selector function will be invoked, and in what order?
Please, before you mark this question as a duplicate: I am looking for a citation from documentation or a specification that says, one way or the other, whether this is undefined behaviour.
For what it's worth, I would have written this kind of query the long way: first performing a select query with a selector that takes an index, creating an anonymous object that includes the index and the original data, grouping by that index, and finally selecting the original data back out of the anonymous object. That seems more like a correct way of doing functional programming, and more like something that could be translated to a server-side query. The side effect in the key selector just seems wrong to me -- against the principles of both LINQ and functional programming -- so I would assume there is no guarantee specified and that this may very well be undefined behaviour. Is it?
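To make that concrete, here is roughly what I mean, as a sketch (using the Select overload that passes along the element's index, so nothing outside the query is mutated):
int width = data.GetLength(1);
List<string[]> dataList = data.Cast<string>()
                              .Select((item, index) => new { item, index })
                              .GroupBy(x => x.index / width)
                              .Select(g => g.Select(x => x.item).ToArray())
                              .ToList();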
I realize this question may be difficult to answer if the documentation and LINQ specification actually say nothing regarding side effects in these functions. I want to know specifically whether:
Specs say it's permissible and how. (I doubt it)
Specs say it's undefined behaviour (I suspect this is true and am looking for a citation)
Specs say nothing. (Sloppy spec, if you ask me, but it would be nice to know if others have searched for language regarding side-effects and also come up empty. Just because I can't find it doesn't mean it doesn't exist.)
According to the official C# Language Specification, on page 203, we can read:
12.17.3.1 The C# language does not specify the execution semantics of query expressions. Rather, query expressions are translated into invocations of methods that adhere to the query-expression pattern (§12.17.4). Specifically, query expressions are translated into invocations of methods named Where, Select, SelectMany, Join, GroupJoin, OrderBy, OrderByDescending, ThenBy, ThenByDescending, GroupBy, and Cast. These methods are expected to have particular signatures and return types, as described in §12.17.4. These methods may be instance methods of the object being queried or extension methods that are external to the object. These methods implement the actual execution of the query.
From looking at the source code of GroupBy in corefx on GitHub, it does seem that the key selector is indeed called once per element, in the order the underlying IEnumerable yields them. I would in no way consider this a guarantee, though.
In my view, any IEnumerable which cannot safely be enumerated multiple times is a big red flag that you may want to reconsider your design choices. An interesting issue that can arise from this: if you view the contents of this IEnumerable in the Visual Studio debugger, you will probably break your code, since the debugger's enumeration causes the count variable to go up.
The reason this code hasn't exploded so far is probably that the IEnumerable is never stored anywhere; .ToList is called right away, so there is no risk of multiple enumeration (again, with the caveat about viewing it in the debugger and so on).
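As a small self-contained illustration of why re-enumeration is dangerous here (the numbers are made up for the demo):
using System;
using System.Linq;
int count = 0;
var groups = Enumerable.Range(0, 4).GroupBy(x => count++ / 2);
Console.WriteLine(groups.Count()); // 2 groups, keys 0 and 1; count is now 4
Console.WriteLine(groups.Count()); // still 2 groups, but the keys are now 2 and 3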
I have a use case in Q# where I have a qubit register qs and need to apply the CNOT gate to every qubit except the first one, using the first one as the control. Using a for loop I can do it as follows:
for (i in 1..Length(qs)-1) {
CNOT(qs[0], qs[i]);
}
Now, I wanted to give it a more functional flavor and tried instead to do something like:
ApplyToEach(q => CNOT(qs[0], q), qs[1..Length(qs)-1]);
The Q# compiler does not accept an expression like this, informing me that it encountered an unexpected code fragment. That's not too informative for my taste. Some documents claim that Q# supports anonymous functions à la C#, hence the attempt above. Can anybody point me to a correct usage of lambdas in Q#, or dispel my false belief?
At the moment, Q# doesn't support lambda functions and operations (though that would be a great feature request to file at https://github.com/microsoft/qsharp-compiler/issues/new/choose). That said, you can get a lot of the functional flavor that you get from lambdas by using partial application. In your example, for instance, I could also write that for loop as:
ApplyToEach(CNOT(Head(qs), _), Rest(qs));
Here, since CNOT has type (Qubit, Qubit) => Unit is Adj + Ctl, filling in one of the two inputs as CNOT(Head(qs), _) results in an operation of type Qubit => Unit is Adj + Ctl.
Partial application is a very powerful feature, and is used all throughout the Q# standard libraries to provide a functional way to build up quantum programs. If you're interested in learning more, I recommend checking out the docs at https://learn.microsoft.com/quantum/language/expressions#callable-invocation-expressions.
I am looking for an algorithm or approach to evaluate mathematical expressions that are given as strings. The expressions contain standard mathematical components but also custom functions. I am looking to implement said algorithm in C#/.NET.
I am aware that Roslyn allows me to evaluate an expression of the kind
"var value = 3+5*11-Math.Sqrt(9);"
I am also familiar with how to use "node rewriting" to avoid requiring variable declarations or fully qualified function names, and to allow omission of the trailing semicolon, in order to evaluate
"value = 3+5*11-Sqrt(9)"
However, what I want to implement on top of this is to offer custom script functions such as
"value = Ratio(A,B)", where Ratio is a custom function that divides each element in vector A by each element in vector B and returns a same length vector.
or
"value = Sma(A, 10)", where Sma is a custom function that calculates the simple moving average of vector/timeseries A with a lookback window of 10.
Ideally I want to get to the ability to provide more complexity such as
"value = Ratio(A,B) * Pi + 0.5 * Spread(C,D) + Sma(E, lookback)", whereby the parsing engine would respect operator precedence and build a parsing tree in order to fetch values, required to evaluate the expression.
I can't wrap my head around how I could solve this kind of problem with Roslyn.
What other approaches are out there to get me started or am I missing features that Roslyn offers that may assist in solving this problem?
Assuming that all your expressions are valid C# expressions, you can make use of Roslyn in multiple ways.
First, you could use Roslyn only for parsing. SyntaxFactory.ParseExpression would give you the syntax tree of an expression. Note that your first example (var v = expr;) is not an expression but a variable declaration; however, v = expr is an expression, namely an AssignmentExpressionSyntax. You could then traverse this AST and do with each node whatever you want; basically, you'd write an interpreter. The benefit of this approach is that you don't have to write your own parser, walking an AST is very simple, and the approach is flexible, since defining what to do with "unknown" methods is entirely up to you.
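A minimal sketch of that first approach (assuming a reference to the Microsoft.CodeAnalysis.CSharp package; Ratio and Pi are just the placeholder names from the question):
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
// Parse the expression into a syntax tree without compiling anything.
var expr = SyntaxFactory.ParseExpression("value = Ratio(A, B) * Pi");
var assignment = (AssignmentExpressionSyntax)expr;
// Walk the right-hand side and pick out the custom function calls;
// a real interpreter would recurse and evaluate each node instead.
foreach (var call in assignment.Right.DescendantNodesAndSelf()
                                     .OfType<InvocationExpressionSyntax>())
{
    Console.WriteLine(call.Expression); // prints: Ratio
}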
Second, you could use Roslyn for evaluation too. This can be done in multiple flavors: either by putting together a valid C# file and compiling it into an assembly, or by going through the Scripting API. This approach basically requires a class library containing the implementations of all your extra methods, like Sma, Spread, and so on. But those would also be needed in some form for the first approach, so it's not really extra effort.
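A rough sketch of the scripting flavor (assuming the Microsoft.CodeAnalysis.CSharp.Scripting package and top-level statements; the ScriptGlobals class and its Ratio method are illustrative stand-ins for your library):
using System.Linq;
using Microsoft.CodeAnalysis.CSharp.Scripting;
// The script sees the public members of ScriptGlobals as if they were globals.
var result = await CSharpScript.EvaluateAsync<double[]>(
    "Ratio(A, B)",
    globals: new ScriptGlobals { A = new[] { 1.0, 2.0 }, B = new[] { 2.0, 4.0 } });
// result is { 0.5, 0.5 }
public class ScriptGlobals
{
    public double[] A, B;
    // Element-wise ratio of two same-length vectors.
    public double[] Ratio(double[] x, double[] y) => x.Zip(y, (a, b) => a / b).ToArray();
}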
If the only goal is to evaluate the expression, then I would go with the second approach. If there are extra requirements (which you haven't mentioned), like being able to, say, produce a simplified form of an expression, then I'd consider the first.
If you find a library that does exactly what you need (and the perf is good, and you don't mind the dependency on 3rd-party tools, ...), I'd go with that. MathParser.org-mXparser, suggested in the comments, seems to be pretty much what you're looking for.
What I really need is a method for identifying whether an XPath 2.0 expression refers to element(s) or attribute(s).
Suppose a method with the following prototype:
XPathResultType IdentifyXPathResultType(string xpath)
For input like
//Parent[@id='1']/Children/child[@name]
the method should return something like
XPathResultType.Element
For input like
//Parent[@id='1']/Children/child/@name
the method should return something like
XPathResultType.Attribute
The above example is simple to implement, but XPath 2.0 has many features, so a smart parser would need to be implemented.
Is there any library or facility in JavaScript that can accomplish such a thing?
I really need this for the client side, in JavaScript.
Solutions in C# for .NET are also acceptable.
XPath 2.0 defines an instance of operator. This way you can test something like
//Parent[@id='1']/Children/child/@name instance of attribute()
which in this case will yield true. Depending on what XPath parser you are using, you might be able to define your own function, which could then test for the different node kinds and return an appropriate response item.
If not, you could also run the XPath several times, each time testing against a different type. However, this sounds less than optimal, so the first option is preferred.
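As a sketch of the method from the question built on that idea: IdentifyXPathResultType and the enum are the question's own hypothetical names, and EvaluateXPath2 is a stand-in for whatever XPath 2.0 engine you actually call (Saxon, etc.):
class XPathKindProbe
{
    public enum XPathResultType { Element, Attribute, Other }

    public XPathResultType IdentifyXPathResultType(string xpath)
    {
        // "+" means "a sequence of one or more", so multi-node results also match.
        if (EvaluateXPath2($"({xpath}) instance of attribute()+"))
            return XPathResultType.Attribute;
        if (EvaluateXPath2($"({xpath}) instance of element()+"))
            return XPathResultType.Element;
        return XPathResultType.Other;
    }

    // Hypothetical helper: wire this up to an XPath 2.0 engine, evaluating
    // each probe against the same context document as the original expression.
    private bool EvaluateXPath2(string probe) =>
        throw new System.NotImplementedException();
}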
If you use Saxon as your XPath parser then after parsing the expression you will be able to dig down to the Expression object which has a getItemType() method giving the information you are looking for. Details depend on what platform you are running on.
I have these two lines that do exactly the same thing but are written differently. Which is the better practice, and why?
firstRecordDate = (DateTime)(from g in context.Datas
select g.Time).Min();
firstRecordDate = (DateTime)context.Datas.Min(x => x.Time);
There is no semantic difference between method syntax and query syntax. In addition, some queries, such as those that retrieve the number of elements that match a specified condition, or that retrieve the element that has the maximum value in a source sequence, can only be expressed as method calls.
http://msdn.microsoft.com/en-us/library/bb397947.aspx
Also look here: .NET LINQ query syntax vs method chain
It comes down to what you are comfortable with and what you find more readable.
The second one uses a lambda expression. I like it as it is compact and easier to read (although some will find the former easier to read).
Also, the first is better suited if you have a SQL background.
I'd say go with whatever is most readable or understandable with regard to your development team. Come back in a year or so and see if you can still remember that LINQ... well, this particular LINQ is obviously simple, so that's moot :-)
Best practice is also quite opinionated; you aren't going to get one answer here. In this case, I'd go for the second item because it's concise and I can personally read and understand it faster than the first, though only slightly faster.
I personally much prefer using lambda expressions. As far as I know there is no real difference; as you say, you can do exactly the same thing both ways. We all agreed to use lambdas, as they are easy to read, follow, and pick up for people who don't like SQL.
There is absolutely no difference in terms of the results, assuming you do actually write equivalent statements in each format.
Go for the most readable one for any given query. Complex queries with joins and many where clauses, etc., are often easier to write/read in the LINQ query syntax, but really simple ones like context.Employees.SingleOrDefault(e => e.Id == empId) are easier using the method-chaining syntax. There's no general "one is better" rule, and two people may have a difference of opinion for any given example.
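To illustrate the point about joins, here is a small sketch (the employees/departments collections are made-up sample data):
using System;
using System.Linq;
var employees = new[] { new { Name = "Ann", DeptId = 1 }, new { Name = "Bob", DeptId = 2 } };
var departments = new[] { new { Id = 1, Dept = "Sales" }, new { Id = 2, Dept = "IT" } };
// Query syntax: the join reads almost like SQL.
var q = from e in employees
        join d in departments on e.DeptId equals d.Id
        where d.Dept == "Sales"
        select e.Name;
// Method syntax: the same join needs explicit key selectors and a
// temporary pair object to carry both range variables along.
var m = employees.Join(departments, e => e.DeptId, d => d.Id, (e, d) => new { e, d })
                 .Where(x => x.d.Dept == "Sales")
                 .Select(x => x.e.Name);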
There is no semantic difference between the two statements. Which you choose is purely a matter of style preference.
Do you need the explicit cast in either of them? Isn't Time already a DateTime?
Personally I prefer the second approach, as I find the extension method syntax more familiar than the LINQ query syntax, but it is really just personal preference; they perform the same.
The second one, written to look more exactly like the first, would be context.Datas.Select(x => x.Time).Min(). So you can see how the way you wrote it, with Min(x => x.Time), might be slightly more efficient, because you only have one operation instead of two.
The query comprehension syntax is actually compiled down to a series of calls to the extension methods, which means that the two syntaxes are semantically identical. Whichever style you prefer is the one you should use.
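As a self-contained illustration of that translation (the sample array stands in for the question's context.Datas):
using System;
using System.Linq;
var datas = new[] { new { Time = new DateTime(2020, 1, 1) }, new { Time = new DateTime(2021, 1, 1) } };
// The compiler translates this query expression...
var q1 = from g in datas select g.Time;
// ...into exactly this call chain before anything executes:
var q2 = datas.Select(g => g.Time);
Console.WriteLine(q1.Min()); // prints the earlier date
Console.WriteLine(q2.Min()); // identical result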