I have a requirement to omit null-valued fields from the response altogether.
For a normal Web API response I can do this by modifying the JsonFormatter serialization settings:
config.Formatters.JsonFormatter.SerializationSettings
.NullValueHandling = NullValueHandling.Ignore;
But that does not seem to work once I switch to OData.
Here are my files:
WebApi.config:
public static void Register(HttpConfiguration config)
{
var builder = new ODataConventionModelBuilder();
var workerEntitySet = builder.EntitySet<Item>("Values");
config.Routes.MapODataRoute("Default", "api", builder.GetEdmModel());
}
Item Model:
public class Item
{
public int Id { get; set; }
public string Name { get; set; }
public string OptionalField { get; set; }
}
ValuesController:
public class ValuesController : EntitySetController<Item, int>
{
public static List<Item> items = new List<Item>()
{
new Item { Id = 1, Name = "name1", OptionalField = "Value Present" },
new Item { Id = 3, Name = "name2" }
};
[Queryable(AllowedQueryOptions = AllowedQueryOptions.All)]
public override IQueryable<Item> Get()
{
return items.AsQueryable();
}
[Queryable]
protected override Item GetEntityByKey(int id)
{
return items.Single(i => i.Id == id);
}
}
Here is the response I get for GET: api/Values.
{
"odata.metadata":"http://localhost:28776/api/$metadata#Values",
"value":[
{
"Id":1,
"Name":"name1",
"OptionalField":"Value Present"
},
{
"Id":3,
"Name":"name2",
"OptionalField":null
}
]
}
But I do not want elements with null values present in the response: in the response above, "OptionalField" should not appear in the second item, since its value is null. I need this to happen in the response itself; I do not want users to have to query for non-null values only.
In ODataLib v7 things changed drastically around these sorts of customisations thanks to Dependency Injection (DI).
This advice is for anyone who has upgraded to ODataLib v7, who may have implemented the previously accepted answers.
If you have the Microsoft.OData.Core NuGet package v7 or later, then this applies to you :). If you are still using older versions, then use the code provided by @stas-natalenko, but please DO NOT stop inheriting from ODataController...
We can globally override the DefaultODataSerializer so that null values are omitted from all Entity and Complex value serialized outputs using the following steps:
Define your custom Serializer that will omit properties with null values
Inherit from Microsoft.AspNet.OData.Formatter.Serialization.ODataResourceSerializer
/// <summary>
/// OData entity serializer that omits null properties from the response
/// </summary>
public class IngoreNullEntityPropertiesSerializer : ODataResourceSerializer
{
public IngoreNullEntityPropertiesSerializer(ODataSerializerProvider provider)
: base(provider) { }
/// <summary>
/// Only return properties that are not null
/// </summary>
/// <param name="structuralProperty">The EDM structural property being written.</param>
/// <param name="resourceContext">The context for the entity instance being written.</param>
/// <returns>The property to be written by the serializer; a null return value will effectively skip this property.</returns>
public override Microsoft.OData.ODataProperty CreateStructuralProperty(Microsoft.OData.Edm.IEdmStructuralProperty structuralProperty, ResourceContext resourceContext)
{
var property = base.CreateStructuralProperty(structuralProperty, resourceContext);
return property.Value != null ? property : null;
}
}
Define a Provider that will determine when to use our custom Serializer
Inherit from Microsoft.AspNet.OData.Formatter.Serialization.DefaultODataSerializerProvider
/// <summary>
/// Provider that selects the IngoreNullEntityPropertiesSerializer that omits null properties on resources from the response
/// </summary>
public class IngoreNullEntityPropertiesSerializerProvider : DefaultODataSerializerProvider
{
private readonly IngoreNullEntityPropertiesSerializer _entityTypeSerializer;
public IngoreNullEntityPropertiesSerializerProvider(IServiceProvider rootContainer)
: base(rootContainer) {
_entityTypeSerializer = new IngoreNullEntityPropertiesSerializer(this);
}
public override ODataEdmTypeSerializer GetEdmTypeSerializer(Microsoft.OData.Edm.IEdmTypeReference edmType)
{
// Support for Entity types AND Complex types
if (edmType.Definition.TypeKind == EdmTypeKind.Entity || edmType.Definition.TypeKind == EdmTypeKind.Complex)
return _entityTypeSerializer;
else
return base.GetEdmTypeSerializer(edmType);
}
}
Now we need to inject this into your container builder. The specifics will vary depending on your version of .NET; for many older projects this will be where you map the ODataServiceRoute, usually located in your Startup.cs or WebApiConfig.cs:
builder => builder
.AddService(ServiceLifetime.Singleton, sp => model)
// Injected our custom serializer to override the current ODataSerializerProvider
// .AddService<{Type of service to Override}>({service lifetime}, sp => {return your custom implementation})
.AddService<Microsoft.AspNet.OData.Formatter.Serialization.ODataSerializerProvider>(ServiceLifetime.Singleton, sp => new IngoreNullEntityPropertiesSerializerProvider(sp));
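For context, here is a rough sketch of how that lambda might plug into a classic ASP.NET Web API registration, assuming the Microsoft.AspNet.OData 7.x MapODataServiceRoute overload that takes a container-builder action (the route name and prefix simply mirror the question):
public static void Register(HttpConfiguration config)
{
    var builder = new ODataConventionModelBuilder();
    builder.EntitySet<Item>("Values");
    IEdmModel model = builder.GetEdmModel();
    config.MapODataServiceRoute("Default", "api", containerBuilder => containerBuilder
        // register the EDM model for this route
        .AddService(ServiceLifetime.Singleton, sp => model)
        // swap in the null-omitting serializer provider
        .AddService<Microsoft.AspNet.OData.Formatter.Serialization.ODataSerializerProvider>(
            ServiceLifetime.Singleton, sp => new IngoreNullEntityPropertiesSerializerProvider(sp)));
}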
And there you have it; re-execute your query and you should get the following:
{
"odata.metadata":"http://localhost:28776/api/$metadata#Values",
"value":[
{
"Id":1,
"Name":"name1",
"OptionalField":"Value Present"
},
{
"Id":3,
"Name":"name2"
}
]
}
This is a very handy solution that can significantly reduce the data consumption of many data entry applications based on OData services.
NOTE: At this point in time, this technique must be used to override any of the following default services (as defined in OData.Net - Dependency Injection Support):
Service Default Implementation Lifetime Prototype?
-------------------------- -------------------------- ---------- ---------
IJsonReaderFactory DefaultJsonReaderFactory Singleton N
IJsonWriterFactory DefaultJsonWriterFactory Singleton N
ODataMediaTypeResolver ODataMediaTypeResolver Singleton N
ODataMessageReaderSettings ODataMessageReaderSettings Scoped Y
ODataMessageWriterSettings ODataMessageWriterSettings Scoped Y
ODataPayloadValueConverter ODataPayloadValueConverter Singleton N
IEdmModel EdmCoreModel.Instance Singleton N
ODataUriResolver ODataUriResolver Singleton N
UriPathParser UriPathParser Scoped N
ODataSimplifiedOptions ODataSimplifiedOptions Scoped Y
UPDATE: How to handle lists or complex types
Another common scenario is to exclude complex types from the output if all of their properties are null, especially now that we do not include the null properties. We can override the WriteObjectInline method in the IngoreNullEntityPropertiesSerializer for this:
public override void WriteObjectInline(object graph, IEdmTypeReference expectedType, Microsoft.OData.ODataWriter writer, ODataSerializerContext writeContext)
{
if (graph != null)
{
// special case, nullable Complex Types, just skip them if there is no value to write
if (expectedType.IsComplex() && graph.GetType().GetProperty("Instance")?.GetValue(graph) == null
&& (bool?)graph.GetType().GetProperty("UseInstanceForProperties")?.GetValue(graph) == true)
{
// skip properties that are null, especially if they are wrapped in generic types or explicitly requested by an expander
}
else
{
base.WriteObjectInline(graph, expectedType, writer, writeContext);
}
}
}
Q: And if we need to omit null list properties as well?
If you wanted to use the same logic to exclude all null lists as well, you could remove the expectedType.IsComplex() clause:
// special case, nullable Complex Types, just skip them if there is no value to write
if (graph.GetType().GetProperty("Instance")?.GetValue(graph) == null
&& (bool?)graph.GetType().GetProperty("UseInstanceForProperties")?.GetValue(graph) == true)
{
// skip properties that are null, especially if they are wrapped in generic types or explicitly requested by an expander
}
I don't recommend this for lists that are navigation properties: navigation properties will only be included in the output if they are explicitly requested in an $expand clause, or by other convention-based logic you may have that does the same thing. An empty or null array in the output might be significant for some client-side logic as confirmation that the requested property data was loaded but there is no data to return.
I know it does not look logical in any way, but simply adding the DefaultODataSerializerProvider and DefaultODataDeserializerProvider to the list of formatters did the trick for me:
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
//...
var odataFormatters = System.Web.OData.Formatter.ODataMediaTypeFormatters.Create(
System.Web.OData.Formatter.Serialization.DefaultODataSerializerProvider.Instance,
System.Web.OData.Formatter.Deserialization.DefaultODataDeserializerProvider.Instance);
config.Formatters.AddRange(odataFormatters);
}
}
UPDATE
Since the global formatters modification didn't work correctly for me, I chose a different way.
First I stepped away from the ODataController and inherited my controller from ApiController using a custom ODataFormatting attribute:
[ODataRouting]
[CustomODataFormatting]
public class MyController : ApiController
{
...
}
public class CustomODataFormattingAttribute : ODataFormattingAttribute
{
public override IList<System.Web.OData.Formatter.ODataMediaTypeFormatter> CreateODataFormatters()
{
return System.Web.OData.Formatter.ODataMediaTypeFormatters.Create(
new CustomODataSerializerProvider(),
new System.Web.OData.Formatter.Deserialization.DefaultODataDeserializerProvider());
}
}
The formatting attribute replaces the DefaultODataSerializerProvider with a modified one:
public class CustomODataSerializerProvider : DefaultODataSerializerProvider
{
public override ODataEdmTypeSerializer GetEdmTypeSerializer(IEdmTypeReference edmType)
{
if(edmType.Definition.TypeKind == EdmTypeKind.Entity)
return new CustomODataEntityTypeSerializer(this);
else
return base.GetEdmTypeSerializer(edmType);
}
}
And lastly, the custom serializer filters out structural properties with null values:
public class CustomODataEntityTypeSerializer : System.Web.OData.Formatter.Serialization.ODataEntityTypeSerializer
{
public CustomODataEntityTypeSerializer(ODataSerializerProvider provider)
: base(provider) { }
public override ODataProperty CreateStructuralProperty(IEdmStructuralProperty structuralProperty, EntityInstanceContext entityInstanceContext)
{
var property = base.CreateStructuralProperty(structuralProperty, entityInstanceContext);
return property.Value != null ? property : null;
}
}
This doesn't seem to me like the best possible solution, but this is the one I found.
All the methods are the same; I only made changes to WebApiConfig:
var odataFormatters = ODataMediaTypeFormatters.Create(new CustomODataSerializerProvider(), new DefaultODataDeserializerProvider());
config.Formatters.InsertRange(0, odataFormatters);
This helped me solve the problem.
There is a sample here for creating 100% dynamic OData models in Microsoft.AspNetCore.OData 8.x. However, in our case we have an existing model that we are happy with, but we want to add custom fields to it.
In other words, we want an OData model with entities that have some fixed columns/properties and some dynamically-generated columns/properties that come from the database, like this:
public class ODataEntity
{
[Key]
public int Id { get; set; }
public string Name { get; set; } = "";
// From the perspective of clients like Power BI, this should produce
// a series of additional columns (the columns are the same on all
// instances, but the schema can change at any time)
public Dictionary<string, object> CustomFields { get; set; }
}
To my tremendous surprise, the key-value pairs in CustomFields become properties in the JSON output (i.e. there is no CustomFields column; its contents are inserted into the parent object). However, the custom fields are not recognized by Power BI.
I assume that this is because there is no metadata for the custom fields in https://.../odata/$metadata. So my question is:
How can I modify the following code so that the custom columns are included in the IEdmModel?
static IEdmModel GetEdmModel(params CustomFieldDef[] customFields)
{
var builder = new ODataConventionModelBuilder() {
Namespace = "Namespace",
ContainerName = "Container", // no idea what this is for
};
builder.EntitySet<ODataEntity>("objects");
return builder.GetEdmModel();
}
public class CustomFieldDef
{
public string FieldName;
public Type Type;
}
How can I modify the following startup code so that the IEdmModel is regenerated every time https://.../odata/$metadata is accessed?
IMvcBuilder mvc = builder.Services.AddControllers();
mvc.AddOData(opt => opt.AddRouteComponents("odata", GetEdmModel())
.Select().Filter().OrderBy().Count().Expand().SkipToken());
Regarding the first question, there are two basic approaches that one could take:
1. Dynamic Everything
Use the 100% dynamic approach that is used in the ODataDynamicModel sample. This approach is difficult, especially if you already have a working model, because (i) the way the sample code works is difficult to understand, and (ii) you have to completely rewrite your code, or use reflection to help generate both the schema ($metadata) and the output data.
2. Modify the EdmModel
This is the approach I'm taking in this answer.
Step 1: change the model
The IEdmModel returned by ODataConventionModelBuilder.GetEdmModel is mutable (and actually has type EdmModel), so you can add custom fields to it.
static IEdmModel GetEdmModel(params CustomFieldDef[] customFields)
{
var builder = new ODataConventionModelBuilder() {
Namespace = "Namespace",
ContainerName = "Container", // no idea what this is for
};
builder.EntitySet<ODataEntity>("objects");
var model = (EdmModel) builder.GetEdmModel();
IODataTypeMapper mapper = model.GetTypeMapper();
foreach (var edmType in model.SchemaElements.OfType<EdmEntityType>()) {
if (edmType.Name == nameof(ODataEntity)) {
foreach (var field in customFields) {
var typeRef = mapper.GetEdmTypeReference(model, field.Type);
edmType.AddStructuralProperty(field.FieldName, typeRef);
}
}
}
return model;
}
Step 2: Obtain the custom fields
Normally you call mvc.AddOData in ConfigureServices in Startup.cs, passing it a lambda that will create the IEdmModel. But wait, are your custom field definitions stored in your database? If so, how can you access the database from inside ConfigureServices? Don't worry: the lambda passed to AddOData is called after Configure, so it is possible to access the database or any other required service with some slightly hacky code. This code also installs a class called ODataResourceSerializerForCustomFields which is the subject of step 3:
public void ConfigureServices(IServiceCollection services)
{
...
IMvcBuilder mvc = services.AddControllers(...);
// OData Configuration
mvc.AddOData(opt => {
var scopeProvider = _serviceProvider!.CreateScope().ServiceProvider;
var cfdm = scopeProvider.GetRequiredService<CustomFieldDefManager>();
var edmModel = GetEdmModel(cfdm.GetAll());
opt.AddRouteComponents("odata", edmModel, services => {
services.AddScoped<ODataResourceSerializer,
ODataResourceSerializerForCustomFields>();
}).Select().Filter().OrderBy().Count().Expand().SkipToken();
});
...
}
IServiceProvider? _serviceProvider;
public void Configure(IApplicationBuilder app, ...)
{
_serviceProvider = app.ApplicationServices;
...
}
Of course, this code is not dynamic: it generates the EdmModel only once on startup. I will figure out later how to make this dynamic.
Step 3: Stop it from crashing
Microsoft.AspNetCore.OData.dll isn't designed to support custom fields in the EdmModel. It's a young product, you understand, only version 8.0. As soon as you add a custom field, you'll get an InvalidOperationException like "The EDM instance of type '[ODataEntity Nullable=True]' is missing the property 'ExampleCustomField'.", because the library assumes that all properties in the EdmModel are real CLR properties.
I found a way around this problem by overriding a few methods of ODataResourceSerializer. But first, define an IHasCustomFields interface and make sure that any OData entity with custom fields implements this interface:
public interface IHasCustomFields
{
public Dictionary<string, object?> CustomFields { get; set; }
}
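As an illustration (my own sketch, not from the original code), the ODataEntity from the question would then implement it like this; note the dictionary value type becomes object? to match the interface:
public class ODataEntity : IHasCustomFields
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; } = "";
    // the key/value pairs that get flattened into the serialized resource
    public Dictionary<string, object?> CustomFields { get; set; } = new();
}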
Now let's add ODataResourceSerializerForCustomFields, which uses special behavior when this interface is present.
/// <summary>
/// This class modifies the behavior of ODataResourceSerializer to
/// stop it from crashing when the EdmModel contains custom fields.
/// Note: these modifications are designed for simple custom fields
/// (e.g. string, bool, DateTime).
/// </summary>
public class ODataResourceSerializerForCustomFields : ODataResourceSerializer
{
public ODataResourceSerializerForCustomFields(IODataSerializerProvider serializerProvider)
: base(serializerProvider) { }
IHasCustomFields? _hasCustomFields;
HashSet<string>? _realProps;
public override Task WriteObjectInlineAsync(
object graph, IEdmTypeReference expectedType,
ODataWriter writer, ODataSerializerContext writeContext)
{
_hasCustomFields = null;
if (graph is IHasCustomFields hasCustomFields) {
_hasCustomFields = hasCustomFields;
var BF = BindingFlags.Public | BindingFlags.Instance;
_realProps = graph.GetType().GetProperties(BF).Select(p => p.Name).ToHashSet();
}
return base.WriteObjectInlineAsync(graph, expectedType, writer, writeContext);
}
public override ODataResource CreateResource(
SelectExpandNode selectExpandNode, ResourceContext resourceContext)
{
return base.CreateResource(selectExpandNode, resourceContext);
}
public override ODataProperty CreateStructuralProperty(
IEdmStructuralProperty structuralProperty, ResourceContext resourceContext)
{
// Bypass the base class if the current property doesn't physically exist
if (_hasCustomFields != null && !_realProps!.Contains(structuralProperty.Name)) {
_hasCustomFields.CustomFields.TryGetValue(structuralProperty.Name, out object? value);
return new ODataProperty {
Name = structuralProperty.Name,
Value = ToODataValue(value)
};
}
return base.CreateStructuralProperty(structuralProperty, resourceContext);
}
public static ODataValue ToODataValue(object? value)
{
if (value == null)
return new ODataNullValue();
if (value is DateTime date)
value = (DateTimeOffset)date;
return new ODataPrimitiveValue(value);
}
// The original implementation of this method can't be prevented from
// crashing, so replace it with a modified version based on the original
// source code. This version is simplified to avoid calling `internal`
// methods that are inaccessible, but as a result I'm not sure that it
// behaves quite the same way. If properties that aren't in the EdmModel
// aren't needed in the output, the method body is optional and deletable.
public override void AppendDynamicProperties(ODataResource resource,
SelectExpandNode selectExpandNode, ResourceContext resourceContext)
{
if (_hasCustomFields == null) {
base.AppendDynamicProperties(resource, selectExpandNode, resourceContext);
return;
}
if (!resourceContext.StructuredType.IsOpen || // non-open type
(!selectExpandNode.SelectAllDynamicProperties && selectExpandNode.SelectedDynamicProperties == null)) {
return;
}
IEdmStructuredType structuredType = resourceContext.StructuredType;
IEdmStructuredObject structuredObject = resourceContext.EdmObject;
object value;
if (structuredObject is IDelta delta) {
value = ((EdmStructuredObject)structuredObject).TryGetDynamicProperties();
} else {
PropertyInfo dynamicPropertyInfo = resourceContext.EdmModel.GetDynamicPropertyDictionary(structuredType);
if (dynamicPropertyInfo == null || structuredObject == null ||
!structuredObject.TryGetPropertyValue(dynamicPropertyInfo.Name, out value) || value == null) {
return;
}
}
IDictionary<string, object> dynamicPropertyDictionary = (IDictionary<string, object>)value;
// Build a HashSet to store the declared property names.
// It is used to make sure the dynamic property name is different from all declared property names.
HashSet<string> declaredPropertyNameSet = new HashSet<string>(resource.Properties.Select(p => p.Name));
List<ODataProperty> dynamicProperties = new List<ODataProperty>();
// To test SelectedDynamicProperties == null is enough to filter the dynamic properties.
// Because if SelectAllDynamicProperties == true, SelectedDynamicProperties should be null always.
// So `selectExpandNode.SelectedDynamicProperties == null` covers `SelectAllDynamicProperties == true` scenario.
// If `selectExpandNode.SelectedDynamicProperties != null`, then we should test whether the property is selected or not using "Contains(...)".
IEnumerable<KeyValuePair<string, object>> dynamicPropertiesToSelect =
dynamicPropertyDictionary.Where(x => selectExpandNode.SelectedDynamicProperties == null || selectExpandNode.SelectedDynamicProperties.Contains(x.Key));
foreach (KeyValuePair<string, object> dynamicProperty in dynamicPropertiesToSelect) {
if (string.IsNullOrEmpty(dynamicProperty.Key))
continue;
if (declaredPropertyNameSet.Contains(dynamicProperty.Key))
continue;
dynamicProperties.Add(new ODataProperty {
Name = dynamicProperty.Key,
Value = ToODataValue(dynamicProperty.Value)
});
}
if (dynamicProperties.Count != 0)
resource.Properties = resource.Properties.Concat(dynamicProperties);
}
}
Say I have the following model and controller:
public class InputModel
{
public string Thing { get; set; }
public DateTime AnotherThing { get; set; }
}
public class ThingController : ControllerBase
{
public Task DoTheThing([FromQuery] int foo, [FromQuery] InputModel input)
{
// Elided.
}
}
The problem is that when the Swagger documentation is generated for this controller, the following inputs are listed for DoTheThing:
foo: int
Thing: string
AnotherThing: DateTime
Note how the last two inputs start with uppercase, because that's how they are defined in their model. I want them to start with lowercase to be consistent with the non-complex parameters passed to the controller method (remember, ASP.NET model binding doesn't care about casing).
The easy way to do this is to either have those properties be named starting with lowercase on the model, or apply the FromQuery and/or FromBody attribute on them. I don't want to do either of these things because the former is just nasty, and the latter is applying behaviour to properties, when I need that behaviour to be applied on a case-by-case basis.
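For concreteness, the attribute-on-property variant I'm ruling out would look something like this (a sketch; [FromQuery(Name = ...)] renames the bound parameter, but it couples the model to a binding source):
public class InputModel
{
    [FromQuery(Name = "thing")]
    public string Thing { get; set; }
    [FromQuery(Name = "anotherThing")]
    public DateTime AnotherThing { get; set; }
}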
Ideally I'd like to be able to write something like the following (which currently doesn't work because Swashbuckle doesn't seem to know/care about the DisplayName or Display attributes):
public class InputModel
{
[DisplayName("thing")]
public string Thing { get; set; }
[DisplayName("anotherThing")]
public DateTime AnotherThing { get; set; }
}
However, I'd be happy with any solution that allows me to "rename" the model properties without changing their names.
I have looked at Swashbuckle.AspNetCore.Annotations but it doesn't appear to provide this functionality.
To force all parameters to be lowercase, use the obviously-named but poorly-documented DescribeAllParametersInCamelCase method:
services.AddSwaggerGen(o =>
{
...
o.DescribeAllParametersInCamelCase();
...
});
(This ended up being somewhat of an XY question, because what I wanted was a way to force all params to be described as lowercase, but couldn't find a way to do it, so I asked for a general way to modify param names.)
You can achieve that with an IDocumentFilter in ASP.NET Core. I am not sure whether this will work in classic ASP.NET, but the solution should be similar.
The document filter iterates over all parameters and lower-cases the first letter.
using Swashbuckle.AspNetCore.Swagger;
using Swashbuckle.AspNetCore.SwaggerGen;
/// <summary>
/// A DocumentFilter that lowers the first letter of the query parameters.
/// </summary>
public class NameDocumentFilter : IDocumentFilter
{
#region explicit interfaces
/// <inheritdoc />
public void Apply(SwaggerDocument swaggerDoc, DocumentFilterContext context)
{
if (swaggerDoc.Paths.Count <= 0)
{
return;
}
foreach (var path in swaggerDoc.Paths.Values)
{
ToLower(path.Parameters);
// Edit this list if you want other operations.
var operations = new List<Operation>
{
path.Get,
path.Post,
path.Put
};
operations.FindAll(x => x != null)
.ForEach(x => ToLower(x.Parameters));
}
}
#endregion
#region methods
/// <summary>
/// Lowers the first letter of a parameter name.
/// </summary>
private static void ToLower(IList<IParameter> parameters)
{
if (parameters == null)
{
return;
}
foreach (var param in parameters)
{
// limit the renaming only to query parameters
if (!param.In.Equals("query", StringComparison.OrdinalIgnoreCase))
{
continue;
}
// shouldn't happen, just to make sure
if (string.IsNullOrWhiteSpace(param.Name))
{
continue;
}
param.Name = param.Name[0]
.ToString()
.ToLower() + param.Name.Substring(1);
}
}
#endregion
}
Then register the document filter in the Swagger configuration:
services.AddSwaggerGen(
c =>
{
c.SwaggerDoc(
"v1",
new Info
{
Title = "My WebSite",
Version = "v1"
});
c.DocumentFilter<NameDocumentFilter>();
});
I adjusted this code from a sample that describes enum parameters, but the same idea works for renaming.
Using Swashbuckle.AspNetCore in an ASP.NET Core webapp, we have response types like:
public class DateRange
{
[JsonConverter(typeof(IsoDateConverter))]
public DateTime StartDate {get; set;}
[JsonConverter(typeof(IsoDateConverter))]
public DateTime EndDate {get; set;}
}
When using Swashbuckle to emit the swagger API JSON, this becomes:
{ ...
"DateRange": {
"type": "object",
"properties": {
"startDate": {
"format": "date-time",
"type": "string"
},
"endDate": {
"format": "date-time",
"type": "string"
}
}
}
...
}
The problem here is that DateTime is a value type, and can never be null; but the emitted Swagger API JSON doesn't tag the 2 properties as required. This behavior is the same for all other value types: int, long, byte, etc - they're all considered optional.
To complete the picture, we're feeding our Swagger API JSON to dtsgenerator to generate typescript interfaces for the JSON response schema. e.g. the class above becomes:
export interface DateRange {
startDate?: string; // date-time
endDate?: string; // date-time
}
Which is clearly incorrect. After digging into this a little bit, I've concluded that dtsgenerator is doing the right thing in making non-required properties nullable in typescript. Perhaps the swagger spec needs explicit support for nullable vs required, but for now the 2 are conflated.
I'm aware that I can add a [Required] attribute to every value-type property, but this spans multiple projects and hundreds of classes, is redundant information, and would have to be maintained. All non-nullable value type properties cannot be null, so it seems incorrect to represent them as optional.
Web API, Entity Framework, and Json.net all understand that value type properties cannot be null; so a [Required] attribute is not necessary when using these libraries.
I'm looking for a way to automatically mark all non-nullable value types as required in my swagger JSON to match this behavior.
If you're using C# 8.0+ and have nullable reference types enabled, then the answer can be even easier. Assuming it is an acceptable division that all non-nullable types are required, and all other types that are explicitly defined as nullable are not, then the following schema filter will work.
public class RequireNonNullablePropertiesSchemaFilter : ISchemaFilter
{
/// <summary>
/// Add to model.Required all properties where Nullable is false.
/// </summary>
public void Apply(OpenApiSchema model, SchemaFilterContext context)
{
var additionalRequiredProps = model.Properties
.Where(x => !x.Value.Nullable && !model.Required.Contains(x.Key))
.Select(x => x.Key);
foreach (var propKey in additionalRequiredProps)
{
model.Required.Add(propKey);
}
}
}
The Apply method will loop through each model property, checking whether Nullable is false and adding those properties to the list of required objects. From observation it appears that Swashbuckle does a fine job of setting the Nullable property based on whether the type is nullable. If you don't trust it, you could always use reflection to produce the same effect.
As with other schema filters don't forget to add this one in your Startup class as well as the appropriate Swashbuckle extensions to handle nullable objects.
services.AddSwaggerGen(c =>
{
/*...*/
c.SchemaFilter<RequireNonNullablePropertiesSchemaFilter>();
c.SupportNonNullableReferenceTypes(); // Sets Nullable flags appropriately.
c.UseAllOfToExtendReferenceSchemas(); // Allows $ref enums to be nullable
c.UseAllOfForInheritance(); // Allows $ref objects to be nullable
});
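If you would rather take the reflection route than trust Swashbuckle's Nullable flags, a rough sketch might look like the following. This is my own assumption-laden version: it needs .NET 6+ for NullabilityInfoContext (System.Reflection) and assumes the default camelCase schema property names.
public class RequireNonNullableReflectionSchemaFilter : ISchemaFilter
{
    private static readonly NullabilityInfoContext _nullability = new NullabilityInfoContext();
    public void Apply(OpenApiSchema model, SchemaFilterContext context)
    {
        if (context.Type == null || model.Properties == null)
            return;
        foreach (var property in context.Type.GetProperties())
        {
            // NotNull covers non-nullable value types and non-nullable reference types alike
            var info = _nullability.Create(property);
            if (info.WriteState != NullabilityState.NotNull && info.ReadState != NullabilityState.NotNull)
                continue;
            var key = char.ToLowerInvariant(property.Name[0]) + property.Name.Substring(1);
            if (model.Properties.ContainsKey(key) && !model.Required.Contains(key))
                model.Required.Add(key);
        }
    }
}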
I found a solution for this: I was able to implement a Swashbuckle ISchemaFilter that does the trick. Implementation is:
/// <summary>
/// Makes all value-type properties "Required" in the schema docs, which is appropriate since they cannot be null.
/// </summary>
/// <remarks>
/// This saves effort + maintenance from having to add <c>[Required]</c> to all value type properties; Web API, EF, and Json.net already understand
/// that value type properties cannot be null.
///
/// More background on the problem solved by this type: https://stackoverflow.com/questions/46576234/swashbuckle-make-non-nullable-properties-required </remarks>
public sealed class RequireValueTypePropertiesSchemaFilter : ISchemaFilter
{
private readonly CamelCasePropertyNamesContractResolver _camelCaseContractResolver;
/// <summary>
/// Initializes a new <see cref="RequireValueTypePropertiesSchemaFilter"/>.
/// </summary>
/// <param name="camelCasePropertyNames">If <c>true</c>, property names are expected to be camel-cased in the JSON schema.</param>
/// <remarks>
/// I couldn't figure out a way to determine if the swagger generator is using <see cref="CamelCaseNamingStrategy"/> or not;
/// so <paramref name="camelCasePropertyNames"/> needs to be passed in since it can't be determined.
/// </remarks>
public RequireValueTypePropertiesSchemaFilter(bool camelCasePropertyNames)
{
_camelCaseContractResolver = camelCasePropertyNames ? new CamelCasePropertyNamesContractResolver() : null;
}
/// <summary>
/// Returns the JSON property name for <paramref name="property"/>.
/// </summary>
/// <param name="property"></param>
/// <returns></returns>
private string PropertyName(PropertyInfo property)
{
return _camelCaseContractResolver?.GetResolvedPropertyName(property.Name) ?? property.Name;
}
/// <summary>
/// Adds non-nullable value type properties in a <see cref="Type"/> to the set of required properties for that type.
/// </summary>
/// <param name="model"></param>
/// <param name="context"></param>
public void Apply(Schema model, SchemaFilterContext context)
{
foreach (var property in context.SystemType.GetProperties())
{
string schemaPropertyName = PropertyName(property);
// This check ensures that properties that are not in the schema are not added as required.
// This includes properties marked with [IgnoreDataMember] or [JsonIgnore] (should not be present in schema or required).
if (model.Properties?.ContainsKey(schemaPropertyName) == true)
{
// Value type properties are required,
// except: Properties of type Nullable<T> are not required.
var propertyType = property.PropertyType;
if (propertyType.IsValueType
&& ! (propertyType.IsConstructedGenericType && (propertyType.GetGenericTypeDefinition() == typeof(Nullable<>))))
{
// Properties marked with [Required] are already required (don't require it again).
if (! property.CustomAttributes.Any(attr =>
{
var t = attr.AttributeType;
return t == typeof(RequiredAttribute);
}))
{
// Make the value type property required
if (model.Required == null)
{
model.Required = new List<string>();
}
model.Required.Add(schemaPropertyName);
}
}
}
}
}
}
To use, register it in your Startup class:
services.AddSwaggerGen(c =>
{
c.SwaggerDoc(c_swaggerDocumentName, new Info { Title = "Upfront API", Version = "1.0" });
c.SchemaFilter<RequireValueTypePropertiesSchemaFilter>(/*camelCasePropertyNames:*/ true);
});
This results in the DateRange type above becoming:
{ ...
"DateRange": {
"required": [
"startDate",
"endDate"
],
"type": "object",
"properties": {
"startDate": {
"format": "date-time",
"type": "string"
},
"endDate": {
"format": "date-time",
"type": "string"
}
}
},
...
}
In the swagger JSON schema, and:
export interface DateRange {
startDate: string; // date-time
endDate: string; // date-time
}
in the dtsgenerator output. I hope this helps someone else.
I was able to achieve the same effect as the accepted answer using the following schema filter and Swashbuckle 5.4.1:
public class RequireValueTypePropertiesSchemaFilter : ISchemaFilter
{
private readonly HashSet<OpenApiSchema> _valueTypes = new HashSet<OpenApiSchema>();
public void Apply(OpenApiSchema model, SchemaFilterContext context)
{
if (context.Type.IsValueType)
{
_valueTypes.Add(model);
}
if (model.Properties != null)
{
foreach (var prop in model.Properties)
{
if (_valueTypes.Contains(prop.Value))
{
model.Required.Add(prop.Key);
}
}
}
}
}
This relies on the fact that the ISchemaFilter must be applied to the simple schemas of each property before it can be applied to the complex schema that contains those properties - so all we have to do is keep track of the simple schemas that relate to a ValueType, and if we later encounter a schema that has one of those ValueType schemas as a property, we can mark that property name as required.
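Registration follows the same pattern as the other filters; as far as I can tell Swashbuckle reuses a single filter instance across the document, which is what lets the HashSet above carry state between calls. A minimal sketch:
services.AddSwaggerGen(c =>
{
    c.SchemaFilter<RequireValueTypePropertiesSchemaFilter>();
});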
I struggled with a similar problem for several days before I realized two important things.
A property's nullability and its requiredness are completely orthogonal concepts and should not be conflated.
As great as C#'s new nullable feature is at helping you avoid null reference exceptions, it's still a compile-time feature. As far as the CLR is concerned, and therefore as far as the reflection API is concerned, all strings (indeed all reference types) are always nullable. Period.
The second point really caused problems for any schema filter I wrote because, regardless of whether I typed something as string or string?, the context parameter of the Apply function always had its MemberInfo.Nullable property set to true.
So I came up with the following solution.
First, create Nullable attribute.
using System;
[AttributeUsage(AttributeTargets.Property)]
public class NullableAttribute : Attribute {
public NullableAttribute(bool Property = true, bool Items = false) {
this.Property = Property;
this.Items = Items;
}
public bool Property { get; init; }
public bool Items { get; init; }
}
Next, create NullableSchemaFilter.
using MicroSearch.G4Data.Models;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;
public class NullableSchemaFilter : ISchemaFilter {
public void Apply(OpenApiSchema schema, SchemaFilterContext context) {
var attrs = context.MemberInfo?.GetInlineAndMetadataAttributes();
if (attrs != null) {
foreach (var attr in attrs) {
var nullableAttr = attr as NullableAttribute;
if (nullableAttr != null) {
schema.Nullable = nullableAttr.Property;
if (schema.Items != null)
schema.Items.Nullable = nullableAttr.Items;
}
}
}
}
}
And, of course, you have to add the schema filter in your startup code.
services.AddSwaggerGen(config => {
config.SchemaFilter<NullableSchemaFilter>();
});
The Nullable attribute takes two optional boolean parameters:
Property controls if the property itself is nullable.
Items controls if items in an array are nullable. Obviously, this only applies to properties that are arrays.
Examples:
// these all express a nullable string
string? Name { get; set; }
[Nullable] string? Name { get; set; }
[Nullable(true)] string? Name { get; set; }
[Nullable(Property: true)] string? Name { get; set; }
// non-nullable string
[Nullable(false)] string Name { get; set; }
[Nullable(Property: false)] string Name { get; set; }
// non-nullable array of non-nullable strings
[Nullable(false)] string[] Names { get; set; }
[Nullable(Property: false, Items: false)] string[] Names { get; set; }
// nullable array of non-nullable strings
[Nullable(Property: true, Items: false)] string[]? Names { get; set; }
// non-nullable array of nullable strings
[Nullable(Property: false, Items: true)] string?[] Names { get; set; }
// nullable array of nullable strings
[Nullable(Property: true, Items: true)] string?[]? Names { get; set; }
The [Required] attribute can be freely used together with the [Nullable] attribute when necessary, i.e. this does what you would expect:
[Nullable][Required] string? Name { get; set; }
I am using .NET 5 and Swashbuckle.AspNetCore 6.2.3.
Let me suggest a solution based on JSON Schema.
This scheme is described in the RFC, so it should work as a common solution: https://json-schema.org/latest/json-schema-validation.html#rfc.section.6.1.1
public class AssignPropertyRequiredFilter : ISchemaFilter
{
public void Apply(Schema schema, SchemaFilterContext context)
{
if (schema.Properties == null || schema.Properties.Count == 0)
{
return;
}
var typeProperties = context.SystemType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
foreach (var property in schema.Properties)
{
if (IsSourceTypePropertyNullable(typeProperties, property.Key))
{
continue;
}
// "null", "boolean", "object", "array", "number", or "string"), or "integer" which matches any number with a zero fractional part.
// see also: https://json-schema.org/latest/json-schema-validation.html#rfc.section.6.1.1
switch (property.Value.Type)
{
case "boolean":
case "integer":
case "number":
AddPropertyToRequired(schema, property.Key);
break;
case "string":
switch (property.Value.Format)
{
case "date-time":
case "uuid":
AddPropertyToRequired(schema, property.Key);
break;
}
break;
}
}
}
private bool IsNullable(Type type)
{
return Nullable.GetUnderlyingType(type) != null;
}
private bool IsSourceTypePropertyNullable(PropertyInfo[] typeProperties, string propertyName)
{
return typeProperties.Any(info => info.Name.Equals(propertyName, StringComparison.OrdinalIgnoreCase)
&& IsNullable(info.PropertyType));
}
private void AddPropertyToRequired(Schema schema, string propertyName)
{
if (schema.Required == null)
{
schema.Required = new List<string>();
}
if (!schema.Required.Contains(propertyName))
{
schema.Required.Add(propertyName);
}
}
}
Or you can try this one
public class AssignPropertyRequiredFilter : ISchemaFilter {
public void Apply(Schema schema, SchemaRegistry schemaRegistry, Type type) {
var requiredProperties = type.GetProperties()
.Where(x => x.PropertyType.IsValueType)
.Select(t => char.ToLowerInvariant(t.Name[0]) + t.Name.Substring(1));
if (schema.required == null) {
schema.required = new List<string>();
}
schema.required = schema.required.Union(requiredProperties).ToList();
}
}
and use
services.AddSwaggerGen(c =>
{
...
c.SchemaFilter<AssignPropertyRequiredFilter>();
});
I am using HttpPatch to partially update an object. To get that working I am using the Delta and Patch methods from OData (mentioned here: What's the currently recommended way of performing partial updates with Web API?). Everything seems to be working fine, but I noticed that the mapper is case sensitive; when the following object is passed, the properties get the updated values:
{
"Title" : "New title goes here",
"ShortDescription" : "New text goes here"
}
But when I pass the same object with lower-case or camel-case properties, Patch doesn't work - the new value is not going through, so it looks like there is a problem with deserialization and property mapping, i.e. mapping "shortDescription" to "ShortDescription".
Is there a config section that will ignore case sensitivity using Patch?
FYI:
On output I have camel-case properties (following REST best practices) using the following formatter:
//formatting
JsonSerializerSettings jss = new JsonSerializerSettings();
jss.ContractResolver = new CamelCasePropertyNamesContractResolver();
config.Formatters.JsonFormatter.SerializerSettings = jss;
//sample output
{
"title" : "First",
"shortDescription" : "First post!"
}
My model classes, however, follow C#/.NET formatting conventions:
public class Entry {
public string Title { get; set;}
public string ShortDescription { get; set;}
//rest of the code omitted
}
Short answer: no, there is no config option to undo the case sensitivity (as far as I know).
Long answer: I had the same problem as you today, and this is how I worked around it.
I found it incredibly annoying that it had to be case sensitive, so I decided to do away with the whole OData part, since it is a huge library that we are abusing.
An example of this implementation can be found on my GitHub.
I decided to implement my own patch method, since that is the muscle that we are actually lacking. I created the following abstract class:
public abstract class MyModel
{
public void Patch(Object u)
{
var props = from p in this.GetType().GetProperties()
let attr = p.GetCustomAttribute(typeof(NotPatchableAttribute))
where attr == null
select p;
foreach (var prop in props)
{
var val = prop.GetValue(this, null);
if (val != null)
prop.SetValue(u, val);
}
}
}
Then I make all my model classes inherit from *MyModel*. Note the line where I use *let*; I will explain that later. Now you can remove the Delta from your controller action and just make it Entry again, as with the PUT method, e.g.
public IHttpActionResult PatchUser(int id, Entry newEntry)
You can still use the patch method the way you used to:
var entry = dbContext.Entries.SingleOrDefault(p => p.ID == id);
newEntry.Patch(entry);
dbContext.SaveChanges();
Now, let's get back to the line
let attr = p.GetCustomAttribute(typeof(NotPatchableAttribute))
I found it a security risk that just any property would be able to be updated with a patch request. For example, you might not want the ID to be changeable by a patch. I created a custom attribute to decorate my properties with: the NotPatchable attribute:
public class NotPatchableAttribute : Attribute {}
You can use it just like any other attribute:
public class User : MyModel
{
[NotPatchable]
public int ID { get; set; }
[NotPatchable]
public bool Deleted { get; set; }
public string FirstName { get; set; }
}
With this in place, the Deleted and ID properties cannot be changed through the patch method.
I hope this solves it for you as well. Do not hesitate to leave a comment if you have any questions.
I added a screenshot of me inspecting the props in a new MVC 5 project. As you can see, the Result view is populated with the Title and ShortDescription.
It can be done quite easily with a custom contract resolver that inherits from CamelCasePropertyNamesContractResolver and implements the CreateContract method so that it looks at the concrete type for the Delta and gets the actual property name instead of using the one that comes from the JSON. An abstract is below:
public class DeltaContractResolver : CamelCasePropertyNamesContractResolver
{
protected override JsonContract CreateContract(Type objectType)
{
// This class special cases the JsonContract for just the Delta<T> class. All other types should function
// as usual.
if (objectType.IsGenericType &&
objectType.GetGenericTypeDefinition() == typeof(Delta<>) &&
objectType.GetGenericArguments().Length == 1)
{
var contract = CreateDynamicContract(objectType);
contract.Properties.Clear();
var underlyingContract = CreateObjectContract(objectType.GetGenericArguments()[0]);
var underlyingProperties =
underlyingContract.CreatedType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
foreach (var property in underlyingContract.Properties)
{
property.DeclaringType = objectType;
property.ValueProvider = new DynamicObjectValueProvider()
{
PropertyName = this.ResolveName(underlyingProperties, property.PropertyName),
};
contract.Properties.Add(property);
}
return contract;
}
return base.CreateContract(objectType);
}
private string ResolveName(PropertyInfo[] properties, string propertyName)
{
var prop = properties.SingleOrDefault(p => p.Name.Equals(propertyName, StringComparison.OrdinalIgnoreCase));
if (prop != null)
{
return prop.Name;
}
return propertyName;
}
}
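To wire this up, you would point the JSON formatter at the custom resolver instead of the stock CamelCasePropertyNamesContractResolver from the question (a sketch; note that CreateDynamicContract is a protected member inherited from DefaultContractResolver, while DynamicObjectValueProvider is a custom value provider that is not shown here):
JsonSerializerSettings jss = new JsonSerializerSettings();
jss.ContractResolver = new DeltaContractResolver();
config.Formatters.JsonFormatter.SerializerSettings = jss;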
Can I automatically validate complex child objects when validating a parent object and include the results in the populated ICollection<ValidationResult>?
If I run the following code:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
namespace ConsoleApplication1
{
public class Person
{
[Required]
public string Name { get; set; }
public Address Address { get; set; }
}
public class Address
{
[Required]
public string Street { get; set; }
[Required]
public string City { get; set; }
[Required]
public string State { get; set; }
}
class Program
{
static void Main(string[] args)
{
Person person = new Person
{
Name = null,
Address = new Address
{
Street = "123 Any St",
City = "New York",
State = null
}
};
var validationContext = new ValidationContext(person, null, null);
var validationResults = new List<ValidationResult>();
var isValid = Validator.TryValidateObject(person, validationContext, validationResults);
Console.WriteLine(isValid);
validationResults.ForEach(r => Console.WriteLine(r.ErrorMessage));
Console.ReadKey(true);
}
}
}
I get the following output:
False
The Name field is required.
But I was expecting something similar to:
False
The Name field is required.
The State field is required.
I offered a bounty for a better child-object validation solution but didn't get any takers. Ideally it would handle:
validating child objects to an arbitrary depth
handling multiple errors per object
correctly identifying the validation errors on the child object fields.
I'm still surprised the framework doesn't support this.
Issue - Model Binder Order
This is, unfortunately, the standard behavior of Validator.TryValidateObject which
does not recursively validate the property values of the object
As pointed out in Jeff Handley's article on Validating Object and Properties with the Validator, by default, the validator will validate in order:
Property-Level Attributes
Object-Level Attributes
Model-Level implementation IValidatableObject
The problem is, at each step of the way...
If any validators are invalid, Validator.ValidateObject will abort validation and return the failure(s)
Issue - Model Binder Fields
Another possible issue is that the model binder will only run validation on objects that it has decided to bind. For example, if you don't provide inputs for fields within complex types on your model, the model binder won't need to check those properties at all because it hasn't called the constructor on those objects. According to Brad Wilson's great article on Input Validation vs. Model Validation in ASP.NET MVC:
The reason we don't "dive" into the Address object recursively is that there was nothing in the form that bound any values inside of Address.
Solution - Validate Object at the same time as Properties
One way to solve this problem is to convert object-level validations to property level validation by adding a custom validation attribute to the property that will return with the validation result of the object itself.
Josh Carroll's article on Recursive Validation Using DataAnnotations provides an implementation of one such strategy (originally in this SO question). If we want to validate a complex type (like Address), we can add a custom ValidateObject attribute to the property, so it is evaluated in the first step:
public class Person {
[Required]
public String Name { get; set; }
[Required, ValidateObject]
public Address Address { get; set; }
}
You'll need to add the following ValidateObjectAttribute implementation:
public class ValidateObjectAttribute: ValidationAttribute {
protected override ValidationResult IsValid(object value, ValidationContext validationContext) {
var results = new List<ValidationResult>();
var context = new ValidationContext(value, null, null);
Validator.TryValidateObject(value, context, results, true);
if (results.Count != 0) {
var compositeResults = new CompositeValidationResult(String.Format("Validation for {0} failed!", validationContext.DisplayName));
results.ForEach(compositeResults.AddResult);
return compositeResults;
}
return ValidationResult.Success;
}
}
public class CompositeValidationResult: ValidationResult {
private readonly List<ValidationResult> _results = new List<ValidationResult>();
public IEnumerable<ValidationResult> Results {
get {
return _results;
}
}
public CompositeValidationResult(string errorMessage) : base(errorMessage) {}
public CompositeValidationResult(string errorMessage, IEnumerable<string> memberNames) : base(errorMessage, memberNames) {}
protected CompositeValidationResult(ValidationResult validationResult) : base(validationResult) {}
public void AddResult(ValidationResult validationResult) {
_results.Add(validationResult);
}
}
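If you want to surface the nested failures (for logging, or to push them into ModelState), you could flatten the composite results with a small helper like this (my own sketch, not part of the referenced article):
static IEnumerable<ValidationResult> Flatten(IEnumerable<ValidationResult> results)
{
    foreach (var result in results)
    {
        yield return result;
        // recurse into nested results produced by ValidateObjectAttribute
        var composite = result as CompositeValidationResult;
        if (composite != null)
        {
            foreach (var nested in Flatten(composite.Results))
                yield return nested;
        }
    }
}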
Solution - Validate Model at the Same time as Properties
For objects that implement IValidatableObject, when we check the ModelState, we can also check to see if the model itself is valid before returning the list of errors. We can add any errors we want by calling ModelState.AddModelError(field, error). As specified in How to force MVC to Validate IValidatableObject, we can do it like this:
[HttpPost]
public ActionResult Create(Model model) {
if (!ModelState.IsValid) {
var errors = model.Validate(new ValidationContext(model, null, null));
foreach (var error in errors)
foreach (var memberName in error.MemberNames)
ModelState.AddModelError(memberName, error.ErrorMessage);
return View(model);
}
// continue with the normal save/redirect path when the model is valid
return RedirectToAction("Index");
}
Also, if you want a more elegant solution, you can write the code once by providing your own custom model binder implementation in Application_Start() with ModelBinderProviders.BinderProviders.Add(new CustomModelBinderProvider()). There are good implementations here and here.
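The linked implementations are the ones to follow; purely as a hedged sketch of the overall shape (System.Web.Mvc, with the namespace check and binder names invented here), a provider plus binder might look like this. Note that the default binder already runs its own DataAnnotations validation, so a real version would need to avoid duplicating errors.
public class CustomModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(Type modelType)
    {
        // only handle your own view models; returning null falls back to the default binder
        return modelType.Namespace != null && modelType.Namespace.StartsWith("MyApp.ViewModels")
            ? new RecursiveValidationModelBinder()
            : null;
    }
}
public class RecursiveValidationModelBinder : DefaultModelBinder
{
    protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        base.OnModelUpdated(controllerContext, bindingContext);
        var model = bindingContext.Model;
        if (model == null)
            return;
        // re-run DataAnnotations validation (including any [ValidateObject]-style recursion) and surface the errors
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(model, new ValidationContext(model, null, null), results, true);
        foreach (var result in results)
            foreach (var member in result.MemberNames.DefaultIfEmpty(string.Empty))
                bindingContext.ModelState.AddModelError(member, result.ErrorMessage);
    }
}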
I also ran into this, and found this thread. Here's a first pass:
namespace Foo
{
using System.ComponentModel.DataAnnotations;
using System.Linq;
/// <summary>
/// Attribute class used to validate child properties.
/// </summary>
/// <remarks>
/// See: http://stackoverflow.com/questions/2493800/how-can-i-tell-the-data-annotations-validator-to-also-validate-complex-child-pro
/// Apparently the Data Annotations validator does not validate complex child properties.
/// To do so, slap this attribute on a your property (probably a nested view model)
/// whose type has validation attributes on its properties.
/// This will validate until a nested <see cref="System.ComponentModel.DataAnnotations.ValidationAttribute" />
/// fails. The failed validation result will be returned. In other words, it will fail one at a time.
/// </remarks>
public class HasNestedValidationAttribute : ValidationAttribute
{
/// <summary>
/// Validates the specified value with respect to the current validation attribute.
/// </summary>
/// <param name="value">The value to validate.</param>
/// <param name="validationContext">The context information about the validation operation.</param>
/// <returns>
/// An instance of the <see cref="T:System.ComponentModel.DataAnnotations.ValidationResult"/> class.
/// </returns>
protected override ValidationResult IsValid(object value, ValidationContext validationContext)
{
var isValid = true;
var result = ValidationResult.Success;
var nestedValidationProperties = value.GetType().GetProperties()
.Where(p => IsDefined(p, typeof(ValidationAttribute)))
.OrderBy(p => p.Name);//Not the best order, but at least known and repeatable.
foreach (var property in nestedValidationProperties)
{
var validators = GetCustomAttributes(property, typeof(ValidationAttribute)) as ValidationAttribute[];
if (validators == null || validators.Length == 0) continue;
foreach (var validator in validators)
{
var propertyValue = property.GetValue(value, null);
result = validator.GetValidationResult(propertyValue, new ValidationContext(value, null, null));
if (result == ValidationResult.Success) continue;
isValid = false;
break;
}
if (!isValid)
{
break;
}
}
return result;
}
}
}
You will need to make your own validator attribute (eg, [CompositeField]) that validates the child properties.
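For illustration, a usage sketch applying the HasNestedValidationAttribute above to the Person/Address model from the question (my example, not from the original answer):
public class Person
{
    [Required]
    public string Name { get; set; }
    // Address's own [Required] members are now checked when Person is validated
    [HasNestedValidation]
    public Address Address { get; set; }
}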