Orchard client-side validation - how SHOULD it look/work? - C#

I'm attempting to use client-side validation in an admin page in Orchard. I've been successful at making it work using the techniques discussed in this question, but after doing some digging in the Orchard source and online, it seems to me that commenting out these lines
// Register localized data annotations
ModelValidatorProviders.Providers.Clear();
ModelValidatorProviders.Providers.Add(new LocalizedModelValidatorProvider());
is subverting some built-in Orchard functionality which allows for localized error strings. At this point, having these lines in or out of OrchardStarter.cs is the only difference between validation working and not working for me.
What I'm hoping for is some guidance on this, maybe from the Orchard team. If these lines have to be out in order for validation to work, why are they there in the first place? If they are there for a good reason, what am I (and others) doing wrong in our attempts to get client-side validation working? I'm happy to post code samples if need be, although it's a pretty standard ViewModel with data annotations. Thanks.

The lines are there to replace the DataAnnotationsModelValidatorProvider (DAMVP) with Orchard's own implementation, which allows localizing the validation messages the Orchard way. The way it does this is by replacing e.g. [Required] with [LocalizedRequired] before passing control on to the DAMVP. Note that DAMVP does get to do its job - but only after Orchard has "messed" with the attributes.
The problem is that DAMVP uses the type of the Attribute to apply client validation attributes. And now it won't find e.g. RequiredAttribute, because it's been replaced by LocalizedRequiredAttribute. So it won't know what - if any - client validation attributes it should add.
So, commenting out the lines will make you lose Orchard's localization. Leaving them in will make you lose client validation.
One workaround that might work (I haven't looked through Orchard's code enough, and don't have the means to test at the moment) would be to make DAMVP aware of Orchard's Localized attributes and what to do with them.
DAMVP has a static RegisterAdapter() method for the purpose of adding new client rules for attributes. It takes the type of the attribute and the type of the Client-side adapter (the class that takes care of adding client attributes) to use.
So, something like the following might work:
In OrchardStarter.cs:
// Leave the LocalizedModelValidatorProvider lines uncommented/intact
// These are the four attributes Orchard replaces. Register the standard
// client adapters for them:
DataAnnotationsModelValidatorProvider.RegisterAdapter(
    typeof(LocalizedRegularExpressionAttribute),
    typeof(RegularExpressionAttributeAdapter)
);
DataAnnotationsModelValidatorProvider.RegisterAdapter(
    typeof(LocalizedRequiredAttribute),
    typeof(RequiredAttributeAdapter)
);
DataAnnotationsModelValidatorProvider.RegisterAdapter(
    typeof(LocalizedRangeAttribute),
    typeof(RangeAttributeAdapter)
);
DataAnnotationsModelValidatorProvider.RegisterAdapter(
    typeof(LocalizedStringLengthAttribute),
    typeof(StringLengthAttributeAdapter)
);
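(For reference: DataAnnotationsModelValidatorProvider and the four *AttributeAdapter types above live in System.Web.Mvc, the same namespace OrchardStarter.cs already uses for ModelValidatorProviders, so no extra using directive should be needed.)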
As for the official word, it would seem this hasn't worked since localized validation was introduced in 1.3, and the impact is considered low: http://orchard.codeplex.com/workitem/18269
So, at the moment, it appears the official answer to the question title is, "it shouldn't".

Related

How to exchange claims based security with header based according to best-practice in .NET Core 3.1?

NB: it's not equivalent to that question - it's too old and covers other versions/platforms.
We're setting the authorization policy like so. It works, and I was kind of proud of the solution.
options.AddPolicy("ClaimPolicy", config =>
{
config.RequireAssertion(context =>
{
ClaimsPrincipal user = context.User;
Claim awo = user.Claims.Single(a=>a.Type=="awo");
bool authorized = user.IsInRoleAwo("management");
return authorized;
});
});
Now the requirement has changed. The AWO value will still be used, but it won't be part of the access token as a claim. Instead, it will be provided to us in a request header. Personally, I find this intuitively suspicious, but I couldn't word my reluctance in technical terms, other than that it's unconventional and lets the requester manipulate the security parameter freely. I failed to be sufficiently convincing and had to back off. (Undeniably, one's inability to motivate a choice suggests it wasn't such a great one.)
The new version looks like this. The problem is that I can't find any information about the headers, the request, or the HTTP context in there. There are fields for the user and for the resource. That's it.
options.AddPolicy("HeaderPolicy", config =>
{
config.RequireAssertion(context =>
{
var resource = context.Resource;
// now what?!
return true;
});
});
I was hoping that maybe I could obtain the route and pick the AWO value from there (as it's part of the REST path). I've located something in the resource field that can be cast as (Microsoft.AspNetCore.Routing.RouteEndpoint)context.Resource, and from there I can obtain .RoutePattern.RawText. Regrettably, that produces the actual pattern, i.e. DemoApi/mixed/{awo}/data, and not the value that was passed in.
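To illustrate the dead end, this is roughly what that attempt looks like (a sketch reconstructed from what I described above):
config.RequireAssertion(context =>
{
    // With endpoint routing, the resource is the endpoint itself.
    var endpoint = (Microsoft.AspNetCore.Routing.RouteEndpoint)context.Resource;
    // This yields the route template, e.g. "DemoApi/mixed/{awo}/data",
    // and not the value bound to {awo} for the current request.
    string pattern = endpoint.RoutePattern.RawText;
    return true;
});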
Should I bounce back to not relying on headers to pass me the info used by Authorize when picking the policy? Or is there a best-practice approach to getting info on what headers were part of the request (or at least to learning the path, including the passed parameters)?
I've got a suggestion to create custom middleware to put between AddAuthentication() and AddAuthorization(), and possibly a custom authorization handler. I see the much higher complexity as a caveat, and I can't tell if it's a wise (or even feasible) solution. No blogs discuss it, as far as my googling can see. There's an extensive article on this on MSDN, but it doesn't treat the SPA/headers case, and it also seems to me to provide the functionality I've already implemented with RequireAssertion(...). There was also a suggestion to apply API key requirements, but that seems to me like weaker security than the access tokens (obtainable via the authorization code flow) that we're trying to achieve.
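If I've understood the handler suggestion correctly, it would look roughly like this (a sketch, not verified; AwoHeaderRequirement and the awo header name are placeholder names of mine, and it presumes services.AddHttpContextAccessor() has been called):
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;

// Placeholder requirement for the header-based check.
public class AwoHeaderRequirement : IAuthorizationRequirement { }

public class AwoHeaderHandler : AuthorizationHandler<AwoHeaderRequirement>
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public AwoHeaderHandler(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, AwoHeaderRequirement requirement)
    {
        HttpContext httpContext = _httpContextAccessor.HttpContext;
        if (httpContext != null)
        {
            // Headers (and Request.RouteValues) are reachable here, unlike
            // in the bare RequireAssertion context.
            string awo = httpContext.Request.Headers["awo"];
            if (!string.IsNullOrEmpty(awo)) // plus whatever "management" check applies
                context.Succeed(requirement);
        }
        return Task.CompletedTask;
    }
}

// Registration:
// services.AddSingleton<IAuthorizationHandler, AwoHeaderHandler>();
// options.AddPolicy("HeaderPolicy", p => p.Requirements.Add(new AwoHeaderRequirement()));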

How to format/style ///<summary> in Web API 2

Maybe this isn't even possible, but it seems silly that I can't figure it out (nor can I find anything conclusive after searching).
With an MVC/C# Web API 2 project, your controllers can be documented using something like:
///<summary>
///This is something really cool that you should use. I want <b>this bold</b>.
///</summary>
[HttpPost]
public MyResponse MyMethod(SomeInput input)
{
....
}
When the API runs, the project automatically builds the help site, and I can see the above endpoint/method and its description (the <summary> text), but I've yet to figure out how to do any sort of styling of the summary. It appears that the HTML tags get stripped from the help page's output. Notice in my example above, I have "this bold". I'm not so much concerned about bold, but more interested in being able to use unordered lists (<ul>) and other basic HTML tags to do some real basic formatting.
Is this even possible?
Is there a trick to it?
Is there some other markup/formatting I should be using?
Note - the actual endpoint I'm trying to document at the moment happens to be a MIME multipart form, and the framework won't document those out of the box. To get around this, I've created some helper methods in HelpPageConfigurationExtensions (to determine if the current endpoint view is one that requires custom documentation), changes in HelpPageApiModel.cshtml to determine whether it should show the stock documentation or the custom docs, a helper library that contains the custom doc information, and a series of helper functions that use some reflection to rapidly build HTML tables for the rest of the help page's documentation (e.g. the request and response objects). I'm mentioning this because maybe I just need to further extend my custom doc library to include (hard-code) the <summary> value, and then in the view I can just @Html.Raw it -- as opposed to trying to get the actual method's <summary> to output with formatting.
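From what I can tell, the stripping happens in the scaffolded Areas/HelpPage/XmlDocumentationProvider.cs, whose tag-reading helper returns the node's plain-text value. A speculative tweak (untested here, and member names may differ between HelpPage versions) would be to preserve the inner XML instead:
private static string GetTagValue(XPathNavigator parentNode, string tagName)
{
    XPathNavigator node = parentNode.SelectSingleNode(tagName);
    // The stock helper returns node.Value.Trim(), which flattens <b>, <ul>, etc.
    // InnerXml keeps the markup so the view can render it via @Html.Raw(...).
    return node == null ? null : node.InnerXml.Trim();
}
The view side would then need @Html.Raw around the documentation string instead of letting Razor encode it.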
Thoughts?
Thanks!

nunit - set Order attribute from custom attribute of Test method

Let's say we have a custom attribute:
[Precondition(1, "Some precondition")]
This would implement [Test, Order(1), Description("Some precondition")]
Can I access and modify the Order attribute (or create one) for this method?
I can modify the Description and Author, but Order is not a possibility.
I have tried:
1. context.Test.Properties["Order"][0] = order;
2. method.CustomAttributes.GetEnumerator(), walking the stack frames with
Object[] attributes = method.GetCustomAttributes(typeof(PreconditionAttribute), false);
if (attributes.Length >= 1) {...}
3. OrderAttribute orderAttribute = (OrderAttribute)Attribute.GetCustomAttribute(i, typeof(OrderAttribute));
orderAttribute.Order = _order;
but Order is read-only. If I try orderAttribute.Order = new OrderAttribute(myOrd), it doesn't do anything.
I have two answers to offer. One is in the vein of "don't do this" and the other is about how to do it. Just for fun, I'm putting both answers up separately so they can compete with one another. This one is about why I don't think this is a good idea.
It's easy enough to write either
[Test, Order(1), Description("xxx")] or the equivalent...
[Test(Description="xxx"), Order(1)]
The proposed attribute gives users a second way to specify order, making it possible to assign two different orders to a test. Which of the two attributes wins the day depends on (1) how each one is implemented, (2) the order in which the attributes are listed and (3) the platform on which you are running. For all practical purposes, it's non-deterministic.
Keeping the two things separate allows devs to decide which they need independently... which is why NUnit keeps them separate.
Using the standard attributes means that devs can rely on the NUnit documentation to tell them what the attributes do. If you implement your own attribute, you should document both what it does on its own and what it does in the presence of the standard attributes... As stated above, that's difficult to predict.
I know this isn't a real answer in SO terms, but it's not pure opinion either. There are real technical issues in providing the kind of solution you want. I'd love to see what people think of it in comparison with the "how to" I'm going to post next.
See my prior answer first! If you really want to do this, here's the how-to...
In order to combine the action of two existing attributes, you need equivalent code to those two attributes.
In this case both are extremely simple and both have about the same amount of code. DescriptionAttribute is based on PropertyAttribute so some of its code is hidden. OrderAttribute has a bit more logic because it checks to make sure the order has not already been set. Ultimately, both of them have code that implements the IApplyToTest interface.
Because they are both simple, I would copy the code, in order to avoid relying on implementation details that could change. Start with the slightly more complete OrderAttribute. Change its name. Modify the ApplyToTest method to set the description. You're done!
It will look something like this, depending on the names you use for properties...
public void ApplyToTest(Test test)
{
    if (!test.Properties.ContainsKey(PropertyNames.Order))
        test.Properties.Set(PropertyNames.Order, Order);
    test.Properties.Set(PropertyNames.Description, Description);
}
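Fleshed out, and using only NUnit's public framework types, the combined attribute might look something like this (a sketch; the property names are up to you, and [Test] is still needed on the method, since this attribute doesn't make it a test by itself):
using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class PreconditionAttribute : NUnitAttribute, IApplyToTest
{
    public int Order { get; }
    public string Description { get; }

    public PreconditionAttribute(int order, string description)
    {
        Order = order;
        Description = description;
    }

    public void ApplyToTest(Test test)
    {
        // Same guard as NUnit's own OrderAttribute: don't overwrite an
        // order that has already been set.
        if (!test.Properties.ContainsKey(PropertyNames.Order))
            test.Properties.Set(PropertyNames.Order, Order);
        test.Properties.Set(PropertyNames.Description, Description);
    }
}

// Usage:
// [Test, Precondition(1, "Some precondition")]
// public void SomeTest() { ... }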
A comment on what you tried...
There is no reason to think that creating an attribute in your code will do anything. NUnit has no way to know about those attributes. Your attribute cannot modify the code so that the test magically has other attributes. The only way Attributes communicate with NUnit is by having their interfaces (like IApplyToTest) called. And only attributes actually present in the code will receive such a call.

Model Validation in Web API

I have to validate three things when a consumer of my API tries to do an update on a customer.
Prevent the customer from being updated if:
The first name or last name is blank
For a certain country, if the customer's inner collection of X is empty, then throw an exception. X is hard to explain, so just assume it's some collection. For all other countries, X doesn't apply / will always be empty. But if it's a certain country, then X is required. So it's almost a conditional required attribute. A customer belongs to a country, so it's figured out from the JSON being sent.
Prevent the customer from being updated if some conditions in the database are true.
So basically I'm stuck with the following problem, and I wanted some advice on the most appropriate way to solve it:
Do I create an action filter to do the validation on the customer entity before the saving takes place? Or would it be better to create a custom validation attribute derived from ValidationAttribute and override the IsValid member function?
Basically it's a question of saying
if (first name is empty, if x, if y, etc.) vs (!ModelState.IsValid)
and then using IsValid to make the custom attributes do the work.
It seems like validation attributes are best for "simple" validation, e.g. a required field. But once you start getting into things like "I need to look at my database, or analyze the HTTP request header for custom values, and based on that, mark the model invalid", it almost seems wrong to do this sort of stuff so close to the entity.
Thoughts?
Thanks!
I like FluentValidation a lot: https://github.com/JeremySkinner/FluentValidation
As you mentioned, the built-in validation attributes are limited. For complex validations, you're better off implementing your own attributes or using a library like this.
One thing I like about FluentValidation is that it operates at the model level rather than the field level, meaning you can use related fields' values in a validation. For example:
RuleFor(customer => customer.Discount).NotEqual(0).When(customer => customer.HasDiscount);
(Code excerpt taken from the project's wiki page.)
It's also extensible so you can develop your own custom validators on top of this library as well.
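Applied to your customer scenario, a conditional rule could look something like this (a sketch; Customer, Country and X are stand-ins for your actual model, and "XX" for the specific country):
using FluentValidation;

public class CustomerValidator : AbstractValidator<Customer>
{
    public CustomerValidator()
    {
        RuleFor(c => c.FirstName).NotEmpty();
        RuleFor(c => c.LastName).NotEmpty();
        // X is required only for one specific country; elsewhere it stays empty.
        RuleFor(c => c.X).NotEmpty().When(c => c.Country == "XX");
    }
}
Your third, database-dependent rule can live on the same validator via Must()/MustAsync() predicates, which keeps all three checks in one place.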

Parsing an Auto-Generated .NET Date Object with JavaScript/jQuery

There are some posts on this, but not an answer to this specific question.
The server is returning this: "/Date(1304146800000)/"
I would like to not change the server-side code at all and instead parse the date included in the .NET-generated JSON object. This doesn't seem that hard, because it looks like it's almost there. Yet there doesn't seem to be a quick fix, at least in these forums.
From previous posts it sounds like this can be done using REGEX but REGEX and I are old enemies that coldly stare at each other across the bar.
Is this the only way? If so, can someone point me to a REGEX reference that is appropriate to this task?
Regards,
Guido
The link from Robert is good, but we should strive to answer the question here, not just post links.
Here's a quick function that does what you need: http://jsfiddle.net/Aaa6r/
function deserializeDotNetDate(dateStr) {
    var matches = /\/Date\((\d*)\)\//.exec(dateStr);
    if (!matches) {
        return null;
    }
    return new Date(parseInt(matches[1], 10));
}
deserializeDotNetDate("/Date(1304146800000)/");
Since you're using jQuery, I've extended its $.parseJSON() functionality so it's able to do this conversion for you automatically and transparently.
It converts not only .NET dates but ISO dates as well. ISO dates are supported by native JSON converters in all major browsers, but they work only one way, because the JSON spec doesn't define a date data type.
Read all the details (don't want to copy blog post content here because it would be too much) in my blog post and get the code as well. The idea is still the same: change jQuery's default $.parseJSON() behaviour so it can detect .Net and ISO dates and converts them automatically when parsing JSON data. This way you don't have to traverse your parsed objects and convert dates manually.
How is it used?
$.parseJSON(yourJSONstring, true);
See the additional parameter? It makes sure that all your existing code works as expected without any change. But if you do provide the additional parameter and set it to true, the parser will detect dates and convert them accordingly.
Why is this solution better than manual conversion (as suggested by Juan)?
Because you lower the risk of the human factor: forgetting to convert some variable in your object tree (objects can be deep and wide).
Because your code is under development, and if you change some server-side part that returns JSON to the client (renaming variables, adding new ones, removing existing ones, etc.), you have to remember those manual conversions on the client side as well. If the conversion happens automatically, you don't have to think (or do anything) about it.
Those are the top two reasons off the top of my head.
When overriding jQuery functionality feels wrong
If you don't want to override the existing $.parseJSON() functionality, you can minimally change the code and rename the extension to $.parseJSONwithdates(), then always use your own function when parsing JSON. But you may have a problem when you set your Ajax calls to dataType: "json", which automatically calls the original parser. If you use this setting, you will have to override jQuery's existing functionality.
The good thing is that you don't change the original jQuery library code file. You put this extension in a separate file and use it at will. Some pages may use it, others may not. But it's wise to use it everywhere; otherwise you run into the same human-factor problem of forgetting to include the extension. Just include the extension in some global JavaScript file (or master page/template) you may be using.
