Can you please guide me on how I can test my lambda expressions inside the Razor view engine by setting a breakpoint?
For example, I have the code below:
@(Html.DropDownList("Condition4",
    new SelectList(Model
        .Conditions
        .Where(c =>
            c.TxCondition.TxConditionTypeId == Model
                .ConditionTypes
                .Single(ct => ct.TxConditionType.ConditionTypeCode == "Region")
                .TxConditionType
                .TxConditionTypeId),
        "TxCondition.TxConditionId",
        "ConditionTitle",
        Model.SearchCondition.Condition4),
    "All"))
At the breakpoint I tried evaluating the following expression using the QuickWatch window, but I got the error "Expression cannot contain lambda expressions":
Model.Conditions.Where(c => c.TxCondition.TxConditionTypeId == 1)
Can you please guide me on how to test lambda expressions in an MVC Razor view?
Thank you so much for your time and help.
Debugging and lambdas are always tricky to deal with.
A user asked this question: Visual Studio debugging "quick watch" tool and lambda expressions, and it was explained that anonymous functions are actually very complex and require a lot of work on the compiler's part. That's why you can't really put them into QuickWatch or similar debugger windows.
I can't really solve your problem, but I'd like to suggest a slightly different approach.
In MVC, views are supposed to be dumb; they shouldn't really be "doing stuff". What I mean is that they shouldn't concern themselves with creating variables, performing logic, or selecting and instantiating objects. Instead, they should simply take the objects given to them and attempt to display them.
This forces you to put all of those things elsewhere in your codebase. Appropriate use of good architecture, layering, and separation of concerns will help you organise things, including business logic. Furthermore, when a lambda gets a little complex, I'd suggest dividing it into pieces so that it's easier to debug and step through:
// Each intermediate result can now be inspected in the debugger.
var available = someCollection.Where(x => x.IsAvailable);
var myObject = available.SingleOrDefault(x => x.SomeString == "aValue");
You can break your lambda expression apart in order to inspect it (this may not be the exact Razor syntax):
var conditionTypeId = Model
    .ConditionTypes
    .Single(ct => ct.TxConditionType.ConditionTypeCode == "Region")
    .TxConditionType
    .TxConditionTypeId;

var selectListContent = Model
    .Conditions
    .Where(c => c.TxCondition.TxConditionTypeId == conditionTypeId)
    .ToList();
@(Html.DropDownList("Condition4",
    new SelectList(selectListContent, "TxCondition.TxConditionId", "ConditionTitle", Model.SearchCondition.Condition4),
    "All"))
Take a look at the .ToList() after the Where clause: it materialises the query, so you can inspect the contents of the result list while debugging. It also adds some readability to your code (other developers will thank you, as will your future self).
Keeping conditionTypeId in a separate variable also means it is evaluated only once, rather than once per element of Conditions.
In C#, I have a collection of objects that I want to transform to a different type. This conversion, which I would like to do with LINQ Select(), requires multiple operations in sequence. To perform these operations, is it better to chain together multiple Select() queries like
resultSet.Select(stepOneDelegate).Select(stepTwoDelegate).Select(stepThreeDelegate);
or instead to perform these three steps in a single call?
resultSet.Select(item => stepThree(stepTwo(stepOne(item))));
Note: The three steps themselves are not necessarily functions. They are meant to be a concise demonstration of the problem. If that has an effect on the answer please include that information.
Any performance difference would be negligible, but the definitive answer to that is simply "test it". The real question is readability: which one makes it easier to understand and grasp what is going on?
Cases where I have needed to project on a projection include working with EF LINQ expressions where I ultimately need to do something that isn't supported by EF's query translation, so I materialize a projection (usually to an anonymous type) and then finish the expression before selecting the final output. In these cases you would need something like the first example.
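A sketch of that pattern, with hypothetical dbContext, Orders, and OrderSummary names standing in for whatever your model uses:

// First projection is simple enough for EF to translate into SQL;
// AsEnumerable() materializes it, and everything after runs as
// LINQ-to-Objects, free to use C# that EF cannot translate.
var summaries = dbContext.Orders
    .Select(o => new { o.Id, o.CreatedUtc })   // translated to SQL
    .AsEnumerable()                            // switch to in-memory LINQ
    .Select(x => new OrderSummary
    {
        Id = x.Id,
        Age = DateTime.UtcNow - x.CreatedUtc   // arbitrary C# expression
    })
    .ToList();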
Personally, I'd probably just stick to the first scenario, as to me it is easy to understand what is going on, and it easily supports additions such as filtering with Where or ordering with OrderBy. Something like the second scenario only really comes up when I'm building a composite model:
.Select(x => new OuterModel
{
    Id = x.Id,
    InnerModel = x.Inner.Select(i => new InnerModel
    {
        // ...
    })
}) // ...
In most cases though this can be handled through Automapper.
I'd be wary of any code that I felt needed to chain a lot of Select expressions, as it smells like trying to do too much in one expression chain. Making something easy to understand, even if it involves a few extra steps and adds a few milliseconds to the execution, is far better than risking the bugs someone introduces because they misunderstood complex-looking code.
I have a use case in Q# where I have a qubit register qs and need to apply the CNOT gate to every qubit except the first one, using the first one as control. Using a for loop I can do it as follows:
for (i in 1..Length(qs)-1) {
CNOT(qs[0], qs[i]);
}
Now, I wanted to give it a more functional flavor and tried instead to do something like:
ApplyToEach(q => CNOT(qs[0], q), qs[1..Length(qs)-1]);
The Q# compiler does not accept an expression like this, informing me that it encountered an unexpected code fragment, which is not too informative for my taste. Some documents claim that Q# supports anonymous functions à la C#, hence the attempt above. Can anybody point me to a correct usage of lambdas in Q#, or dispel my false belief?
At the moment, Q# doesn't support lambda functions and operations (though that would be a great feature request to file at https://github.com/microsoft/qsharp-compiler/issues/new/choose). That said, you can get a lot of the functional flavor that you get from lambdas by using partial application. In your example, for instance, I could also write that for loop as:
ApplyToEach(CNOT(Head(qs), _), Rest(qs));
Here, since CNOT has type (Qubit, Qubit) => Unit is Adj + Ctl, filling in one of the two inputs as CNOT(Head(qs), _) results in an operation of type Qubit => Unit is Adj + Ctl.
Partial application is a very powerful feature, and is used all throughout the Q# standard libraries to provide a functional way to build up quantum programs. If you're interested in learning more, I recommend checking out the docs at https://learn.microsoft.com/quantum/language/expressions#callable-invocation-expressions.
I am not sure if this is a typical Stack Overflow question, but I am working on an application where I need to constantly examine some conditions (for example, whether a certain variable's value is over a threshold). The conditions can be changed at any time, preferably from outside the code.
People have suggested I should be using an expression parser, but I still don't understand what advantage one provides over the basic mathematical operations .NET already gives me.
Do you recommend a good .NET expression parser?
I think you need Dynamic LINQ. You can pass the conditions as strings.
Here is a blog post about that by ScottGu: http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx
I found this by a similar question: Dynamic WHERE clause in LINQ
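As a sketch of what that looks like (assuming the System.Linq.Dynamic.Core NuGet package, and with readings, Value, Threshold, and Enabled as hypothetical names for your data):

using System.Linq;
using System.Linq.Dynamic.Core;

// The condition is ordinary string data; it could come from a
// config file, a database, or the UI rather than compiled code.
string condition = "Value > Threshold && Enabled";

var matches = readings.AsQueryable()
    .Where(condition)   // parsed and compiled at runtime
    .ToList();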
An expression parser would offer more flexibility. Your expressions could be written as formulas in strings, making them application data instead of hardcoded classes/methods/whatever.
You could do things like:
// Assign an action to an expression given as a string
ExpressionObserver.Add("(a+b+c)/2 > x-y", () => { DoSomething(); });
Or:
// Replace the old expression by something written by the user in the UI
someExpressionActionAssignment.Expression = MyLineEdit1.Text;
But I do not know if the added complexity of all this really pays off in your case. If you only have a few simple expressions then it's probably overkill.
I'm using a few functions like
ICollection<ICache> caches = new HashSet<ICache>();
ICollection<T> Matches<T>(string dependentVariableName)
{
return caches
.Where(x => x.GetVariableName() == dependentVariableName)
.Where(x => typeof(T).IsAssignableFrom(x.GetType()))
.Select(x => (T) x)
.ToList();
}
in my current class design. They work wonderfully from an architecture perspective: I can arbitrarily add objects of various related types (in this case, ICaches) and retrieve them as collections of concrete types.
An issue is that the framework here is a scientific package, and these sorts of functions lie on very hot code paths that get called thousands of times over a few-minute period. The result (profiler screenshot not reproduced here) is that functions like the above are the main consumers of COMDelegate::DelegateConstruct.
As you can see from the relative distribution of the sample percentages, this isn't a deal breaker, but it would be fantastic to reduce the overhead a bit!
Thanks in advance.
1) I don't see how the code you posted is related to the performance data; the functions listed don't look like they are called from this code at all. So I can't really answer your question, except to say that maybe you are interpreting the performance report wrong.
2) Don't call .ToList() at the end; just return the IEnumerable<T>. That will help performance. Only call ToList() when you really do need a list that you can later add to, remove from, or sort.
3) I don't have enough context, but it seems this method could be eliminated by making use of the dynamic keyword.
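A sketch of point 2 applied to the method from the question (as a side simplification, OfType<T>() folds the IsAssignableFrom check and the cast into a single call):

// Deferred execution: nothing is filtered or allocated until the
// caller actually enumerates the result.
IEnumerable<T> Matches<T>(string dependentVariableName)
{
    return caches
        .Where(x => x.GetVariableName() == dependentVariableName)
        .OfType<T>();
}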
Can someone help me change the following to select the unique Models from the Product table?
var query = from Product in ObjectContext.Products.Where(p => p.BrandId == BrandId & p.ProdDelOn == null)
            orderby Product.Model
            select Product;
I'm guessing that you still want to filter based on your existing Where() clause. I think this should take care of it for you (and will include the ordering as well):
var query = ObjectContext.Products
.Where(p => p.BrandId == BrandId && p.ProdDelOn == null)
.Select(p => p.Model)
.Distinct()
.OrderBy(m => m);
But, depending on how you read the post, it could also be that you're trying to get a single unique Model out of the results, which is a different query:
var model = ObjectContext.Products
.Where(p => p.BrandId == BrandId && p.ProdDelOn == null)
.Select(p => p.Model)
.First();
Change the & to && and add the following line:
query = query.Distinct();
I'm afraid I can't answer the question - but I want to comment on it nonetheless.
IMHO, this is an excellent example of what's wrong with the direction the .NET Framework has been going in the last few years. I cannot stand LINQ, nor do I feel too warmly about extension methods, anonymous methods, lambda expressions, and so on.
Here's why: I have yet to see a situation where any of these things actually contributes anything to solving real-world programming problems. LINQ is certainly no replacement for SQL, so you (or at least the project) still need to master that. Writing the LINQ statements is not any simpler than writing the SQL, but it does add run-time processing to build an expression tree and "compile" it into an SQL statement. Now, if you could solve complex problems more easily with LINQ than with SQL directly, or if it meant you didn't need to also know SQL, or if you could trust LINQ to produce good-enough SQL all the time, it might still be worth using. But NONE of these preconditions are met, so I'm left wondering what the benefit is supposed to be.
Of course, in good old-fashioned SQL the statement would be
SELECT DISTINCT [Model]
FROM [Product]
WHERE [BrandID] = #brandID AND [ProdDelOn] IS NULL
ORDER BY [Model]
In many cases the statements can easily be generated with dev tools and encapsulated in stored procedures. This would perform better, but I'll grant that for many things the performance difference between LINQ and the more straightforward stored procedures would be totally irrelevant. (On the other hand, performance problems do have a tendency to sneak in, as we devs often work with unrealistically small amounts of data, in environments that have little in common with those hosting our software in real production systems.) But the advantages of just not using LINQ are HUGE:
1) Fewer skills required (since you must use SQL anyway)
2) All data access can be performed in one way (since you need SQL anyway)
3) Some control over HOW to get data and not just what
4) Less chance of being rightfully accused of writing bloatware (more efficient)
Similar things can be said about many of the language features introduced since C# 2.0, though I do appreciate and use some of them. The var keyword with type inference is great for initializing locals; there's not much use in stating the same type twice on the same line. But let's not pretend it helps one bit when you have a problem to solve. The same goes for anonymous types: nested private types served the same purpose with hardly any more code, and I've found NO use for the feature since trying it out when it was new and shiny.

Extension methods ARE in fact just plain old utility methods, and I have yet to hear any good explanation of why one should use the SAME syntax for instance methods and for static methods invoked on another class. This actually means that adding a method to a class, with no build warnings or errors, can break an application. (In case you doubt it: if you had an extension method Bar() for your Foo type, then the day you introduce an instance method with the same signature, Foo.Bar() invokes a completely different implementation, which may or may not do something similar to what your extension method Bar() did. It will build, and it can break at runtime.)
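A minimal sketch of that shadowing hazard, using the Foo and Bar names from the text (both types are hypothetical):

static class FooExtensions
{
    // The original extension method; callers write foo.Bar().
    public static string Bar(this Foo f) => "extension";
}

class Foo
{
    // Added later: instance methods always win over extension methods
    // during overload resolution, so on the next rebuild every existing
    // foo.Bar() call silently binds here instead.
    public string Bar() => "instance";
}

// var foo = new Foo();
// foo.Bar();  // now invokes the instance method, not the extension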
Sorry to rant like this, and maybe there is a better place to post it than in response to a question. But I really think anyone starting out with LINQ is wasting their time, unless it's in preparation for an MS certification exam, which AFAIU is also a bit removed from reality.