Swagger in .NET doesn't understand separated spec files - C#

I created a Web API in .NET that serves Swagger docs from my own OpenAPI spec. The spec is split across separate files joined together with $ref, and Swagger sometimes fails to resolve the references; I don't know what to do. I serve the spec as static files:
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(Path.Combine(builder.Environment.ContentRootPath, "Swaggers/v1")),
    RequestPath = "/Swagger/v1",
    ServeUnknownFileTypes = true
});
post:
  requestBody:
    description: ''
    content:
      application/json:
        schema:
          $ref: '../Schemas/NewItemInputDto.yml'
        examples:
          '201':
            value:
              importDate: '12.4'
              code: '123'
  responses:
    '201':
      description: 'Success created items'
      content:
        application/json:
          schema:
            $ref: '../Schemas/Response.yml'
          example:
            httpStatusCode: 201
            isSuccess: true
The $ref works for schema, but not for example. For instance:
default:
  description: 'Other codes'
  content:
    application/json:
      schema:
        allOf:
          - $ref: '../BaseSchemas/ApiResult.yml'
      example:
        $ref: '../BaseExamples/UnexceptedExample.yml'
Thank you for your help.

Related

Add custom attribute to OpenAPI specification file and swagger in .net core web api

I have a .NET Core 5 Web API project (C#) where I've added and configured Swagger.Net. Everything works fine, but now the client has asked me to add a "custom attribute" to the OAS file to specify that the APIs are not yet ready in production:
x-acme-production-ready=false
Until now I have always provided the JSON file automatically produced by Swagger.
How can I produce the OAS file with a structure like this:
openapi: "3.0.0"
# REQUIRED - Formal commitments from the API focal point
x-acme-api-commitments:
  api-100: We commit to providing a proper valid OpenAPI (swagger) specification file for each API change.....
# REQUIRED - List of versions changes
x-acme-api-changelog:
  - version: 1.0.0
    changes: Add GET /example
  - version: 1.1.0
    changes: Add POST /example
info:
  # REQUIRED - Functional descriptive name of the API.
  title: ACME - Basic template API
The above file looks like a text representation of the JSON, so it may be enough to add the custom field x-acme-production-ready to the JSON, but how can I add it programmatically?
********* UPDATE ***********
Looking at the specification above, this custom field should be added at the same level as the "info" tag in the Swagger JSON:
openapi: "3.0.1",
x-acme-production-ready: "true",
info: {
  title: "my-app-title",
  version: "v1.0"
},
servers: [
  {
    url: "https://localhost:44370"
  }
],
paths: {...}
I have added the class CustomModelDocumentFilter to my project, but I can't understand how and where to call it, and how to use it to add that field in that position.
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;
using System.Collections.Generic;

namespace MyApp.Swagger
{
    public class CustomModelDocumentFilter : IDocumentFilter
    {
        public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context)
        {
            swaggerDoc....
        }
    }
}
In my startup I have:
services.AddSwaggerGen(c =>
{
    c.DocumentFilter<Swagger.CustomModelDocumentFilter>();
    c.SwaggerDoc("v1.0", new OpenApiInfo { Title = "my app title", Version = "v1.0", Description = "my app description." });
    string xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
    string xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
});
If you use Swashbuckle.AspNetCore, you can use a document filter to customise the OpenAPI document.
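A minimal sketch of such a filter, assuming Swashbuckle.AspNetCore with the Microsoft.OpenApi types (the extension name is taken from the question; the value is illustrative):

```csharp
using Microsoft.OpenApi.Any;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;

namespace MyApp.Swagger
{
    public class CustomModelDocumentFilter : IDocumentFilter
    {
        public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context)
        {
            // Top-level extensions are serialized at the root of the document,
            // i.e. at the same level as "info", which is the position shown above.
            swaggerDoc.Extensions.Add("x-acme-production-ready", new OpenApiBoolean(false));
        }
    }
}
```

You don't call the filter yourself: registering it with c.DocumentFilter&lt;Swagger.CustomModelDocumentFilter&gt;() inside AddSwaggerGen, as your startup already does, is enough; Swashbuckle invokes Apply each time the document is generated.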

Azure publish: Failed to update API in Azure

I'm trying to publish my .NET Core 3.1 web app to Azure from Visual Studio.
Visual Studio fails on the 'Starting to update your API' step; this is the output from Visual Studio:
Build started...
1>------ Build started: Project: WebApplication_XXX, Configuration: Release Any CPU ------
1>WebApplication_XXX -> C:\Users\YYYY\source\repos\WebApplication_XXX\WebApplication_XXX\bin\Release\netcoreapp3.1\WebApplication_XXX.dll
1>Done building project "WebApplication_XXX.csproj".
2>------ Publish started: Project: WebApplication_XXX, Configuration: Release Any CPU ------
WebApplication_XXX -> C:\Users\YYYY\source\repos\WebApplication_XXX\WebApplication_XXX\bin\Release\netcoreapp3.1\WebApplication_XXX.dll
npm install
npm run build -- --prod
> webapplication_xxx#0.0.0 build C:\Users\YYYY\source\repos\WebApplication_XXX\WebApplication_XXX\ClientApp
> ng build "--prod"
Generating ES5 bundles for differential loading...
ES5 bundle generation complete.
....
Publish Succeeded.
Web App was published successfully https://xxxxxxx.azurewebsites.net/
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
========== Publish: 1 succeeded, 0 failed, 0 skipped ==========
Starting to update your API
Generating swagger file to 'C:\Users\YYYY\source\repos\WebApplication_XXX\WebApplication_XXX\bin\Release\netcoreapp3.1\swagger.json'.
Failed to update your API in Azure.
I then check the Azure portal and find errors in the 'Create API or Update API' JSON:
...
"properties": {
    "statusCode": "BadRequest",
    "serviceRequestId": "*****",
    "statusMessage": "{\"error\":{\"code\":\"ValidationError\",\"message\":\"One or more fields contain incorrect values:\",\"details\":[{\"code\":\"ValidationError\",\"target\":\"representation\",\"message\":\"Parsing error(s): JSON is valid against no schemas from 'oneOf'. Path 'securityDefinitions.Bearer', line 2841, position 15.\"},{\"code\":\"ValidationError\",\"target\":\"representation\",\"message\":\"Parsing error(s): The input OpenAPI file is not valid for the OpenAPI specificate https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md (schema https://github.com/OAI/OpenAPI-Specification/blob/master/schemas/v2.0/schema.json).\"}]}}",
    "eventCategory": "Administrative",
    "entity": "/subscriptions/*****/resourceGroups/XXX/providers/Microsoft.ApiManagement/service/WebApplicationXXXapi/apis/WebApplicationXXX",
    "message": "Microsoft.ApiManagement/service/apis/write",
    "hierarchy": "*****"
},
...
So I open the generated swagger.json from 'C:\Users\YYYY\source\repos\WebApplication_XXX\WebApplication_XXX\bin\Release\netcoreapp3.1\swagger.json' in the Swagger editor and get the same error:
Structural error at securityDefinitions.Bearer
should have required property 'type'
missingProperty: type
because the Bearer security definition is empty in the JSON file:
securityDefinitions:
  Bearer: {
  }
If I make the following change in the Swagger editor, it is happy:
securityDefinitions:
  Bearer: {
    type: apiKey,
    name: "JWT Authentication",
    in: "header"
  }
In my application's Startup.cs I have:
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "XXX API", Version = "v1" });
    var securityScheme = new OpenApiSecurityScheme
    {
        Name = "JWT Authentication",
        Description = "Enter JWT Bearer token **_only_**",
        In = ParameterLocation.Header,
        Type = SecuritySchemeType.Http,
        Scheme = "bearer", // must be lower case
        BearerFormat = "JWT",
        Reference = new OpenApiReference
        {
            Id = JwtBearerDefaults.AuthenticationScheme,
            Type = ReferenceType.SecurityScheme
        }
    };
    c.AddSecurityDefinition(securityScheme.Reference.Id, securityScheme);
    c.AddSecurityRequirement(new OpenApiSecurityRequirement
    {
        { securityScheme, new string[] { } }
    });
});
What am I missing? Shouldn't the code in Startup.cs add the security definition when generating the swagger.json file?
We have the same problem. For now we have worked around it by disabling the API update during publish (I use VS2022 and .NET 6.0).
In the PublishProfiles/...pubxml, change the UpdateApiOnPublish parameter to false:
<UpdateApiOnPublish>false</UpdateApiOnPublish>
Can you check whether a target framework is missing; also check your NuGet package dependencies.
A JwtSecurityTokenHandler class that generates a token needs to be implemented. To understand the correct workflow for a JWT implementation, see this JWT Authentication Tutorial with Example API.
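If the publish step validates the generated file as Swagger 2.0 (as the error message suggests), a possible workaround is to declare the scheme in the apiKey form that the Swagger editor accepted above. This is only a sketch against Swashbuckle's AddSecurityDefinition, not verified against your project:

```csharp
// Sketch: an apiKey-style definition carries an explicit type that survives
// serialization into Swagger 2.0's securityDefinitions.
c.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
{
    Name = "Authorization",            // the header that carries the token
    Type = SecuritySchemeType.ApiKey,  // serializes as "type: apiKey"
    In = ParameterLocation.Header,
    Description = "Enter 'Bearer {token}'"
});
```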

How to add a certificate to a pod/container's cert store when the root filesystem is read-only

I have a .NET Core application which is dockerized and running in a Kubernetes cluster (AKS).
I want to apply the securityContext readOnlyRootFilesystem = true to satisfy the requirement that an immutable (read-only) root filesystem be enforced for containers.
securityContext:
  privileged: false
  readOnlyRootFilesystem: true
In the .NET Core app I want to read a TLS certificate and add it to the pod/container's certificate store. To do this, I have the code below in startup:
var cert = new X509Certificate2(Convert.FromBase64String(File.ReadAllText(Environment.GetEnvironmentVariable("cert_path"))));
AddCertificate(cert, StoreName.Root);
The problem is that when I set readOnlyRootFilesystem = true, I get the error below from the app:
EXCEPTION: System.Security.Cryptography.CryptographicException: The X509 certificate could not be added to the store.
---> System.IO.IOException: Read-only file system
at System.IO.FileSystem.CreateDirectory(String fullPath)
It's saying that it can't add the certificate because of the read-only file system. Is there a way to overcome this problem?
Update
If I set emptyDir: {}, I get the error below. Where should I add it?
spec.template.spec.volumes[0].csi: Forbidden: may not specify more than 1 volume type
volumeMounts:
  - name: secrets-store
    mountPath: /app/certs
securityContext:
  privileged: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1000
volumes:
  - name: secrets-store
    emptyDir: {}
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: azure-kvname
At the location you have defined as the path of the cert store, attach a volume that is not read-only. If you only want that data to last as long as the pod exists, an emptyDir type volume will fit the bill nicely.
For example, if you are creating pods with a deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ct
  name: ct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ct
  template:
    metadata:
      labels:
        app: ct
    spec:
      containers:
        - image: myapp
          name: myapp
          env:
            - name: cert_path
              value: /etc/certstore
          securityContext:
            readOnlyRootFilesystem: true
You could set up the emptyDir as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ct
  name: ct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ct
  template:
    metadata:
      labels:
        app: ct
    spec:
      containers:
        - image: myapp
          name: myapp
          env:
            - name: cert_path
              value: /etc/certstore
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /etc/certstore
              name: certstore
      volumes:
        - name: certstore
          emptyDir: {}
Other types of volumes would work as well. If you want these certificates to persist as your pods are cycled, a persistentVolumeClaim could be used to get you a persistent volume.
The emptyDir won't be read-only, but the rest of the container's root filesystem will be, which should satisfy your security requirement.
Generally, the solution suggested by programmerq looks fine.
As for this error:
spec.template.spec.volumes[0].csi: Forbidden: may not specify more than 1 volume type
Once upon a time I got this error because I had tried to use a specific volume type, then went back to emptyDir using kubectl apply. However, the old volume still existed on the server side, and kubectl tried to combine the two volume specifications, which caused the problem.
On this site you can find an explanation:
If the admission plugin is turned on, the administrator may specify a default StorageClass. All PVCs that have no storageClassName can be bound only to PVs of that default. Specifying a default StorageClass is done by setting the annotation storageclass.kubernetes.io/is-default-class equal to "true" in a StorageClass object. If the administrator does not specify a default, the cluster responds to PVC creation as if the admission plugin were turned off. If more than one default is specified, the admission plugin forbids the creation of all PVCs.
See also this question.
Make sure your volume is created correctly and the old one no longer exists, then try the solution suggested by programmerq.
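Note also that in the snippet from the update, the single volume entry named secrets-store carries both emptyDir: {} and a csi block, which is exactly what "may not specify more than 1 volume type" forbids. A sketch splitting them into two separate volumes (names are taken from the question, except cert-store, which is illustrative):

```yaml
volumes:
  - name: secrets-store            # CSI-backed, read-only secret material
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: azure-kvname
  - name: cert-store               # writable scratch space for the cert store
    emptyDir: {}
```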

Using Kubernetes ingress with two services and internal web API routing

I have two ASP.NET Core applications, app1 and app2. Inside these apps I have routes as defined in this simplified code:
app1:
endpoints.MapGet("/app1ep1", async context =>
{
    await context.Response.WriteAsync("x1");
});
endpoints.MapGet("/app1ep2", async context =>
{
    await context.Response.WriteAsync("x2");
});
app2:
endpoints.MapGet("/app2ep1", async context =>
{
    await context.Response.WriteAsync("y1");
});
endpoints.MapGet("/app2ep2", async context =>
{
    await context.Response.WriteAsync("y2");
});
I am trying, without success, to define an ingress rule that applies the following routing:
myhost.com/app1/app1ep1 should route to the service app1, with internal routing to its app1ep1 endpoint
myhost.com/app2/app2ep1 should route to the service app2, with internal routing to its app2ep1 endpoint
and so on; please comment if extra clarification is required.
My ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: myhost.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: myservice1
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: myservice2
                port:
                  number: 80
The actual result is that the services are found, but I get a 404 error. In other words, browsing to myhost.com/app1/app1ep1 (or any other combination) routes to the service (myservice1), but the internal route is lost.
Is there any way to fix this?
Thanks for helping
Edit:
I noticed another problem. I tried to reduce the problem to a single service, so my ingress now looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: myhost.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: myservice1
                port:
                  number: 80
I also added this route to the "myservice1" app:
endpoints.MapGet("/", async context =>
{
    await context.Response.WriteAsync("x");
});
Going to myhost.com/app1 returns 404. That again means the app is found, but the route isn't matched inside the application (although I defined the "/" route).
Maybe this information can help discover the problem.
I have solved this issue with the following ingress rule. The rewrite-target annotation strips the /app1 or /app2 prefix before the request is forwarded, so myhost.com/app1/app1ep1 reaches the app as /app1ep1 (the second capture group, $2):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: myhost.com
      http:
        paths:
          - path: /app1(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: myservice1
                port:
                  number: 80
          - path: /app2(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: myservice2
                port:
                  number: 80

Accessing a Kubernetes service from a C# Docker container

I am trying to access the Kubernetes API from a C# container running inside the same Kubernetes cluster.
I have a YAML file for a Python-based pod, and I want to create that pod programmatically from a C# .NET Core container running in the same cluster as the Python one. I found the Kubernetes API client for .NET Core and wrote the code below to list pods:
using System;
using k8s;

namespace simple
{
    internal class PodList
    {
        private static void Main(string[] args)
        {
            var config = KubernetesClientConfiguration.InClusterConfig();
            IKubernetes client = new Kubernetes(config);
            Console.WriteLine("Starting Request!");
            var list = client.ListNamespacedPod("default");
            foreach (var item in list.Items)
            {
                Console.WriteLine(item.Metadata.Name);
            }
            if (list.Items.Count == 0)
            {
                Console.WriteLine("Empty!");
            }
        }
    }
}
This code fails with Forbidden ("Operation returned an invalid status code 'Forbidden'").
Using BuildConfigFromConfigFile instead of InClusterConfig works in the local environment. Is there anything I missed?
Edited
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-serviceaccount
  namespace: api
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: api
  name: test-role
rules:
  - apiGroups: ["", "apps", "batch"]
    # "" indicates the core API group
    resources: ["deployments", "namespaces", "cronjobs"]
    verbs: ["get", "list", "update", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-binding
  namespace: api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
  - kind: ServiceAccount
    name: test-serviceaccount
    namespace: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2019-07-04T16:05:43Z"
  generation: 4
  labels:
    app: test-console
    tier: middle-end
  name: test-console
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: test-console
      tier: middle-end
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: "2019-07-04T16:05:43Z"
      labels:
        app: test-console
        tier: middle-end
    spec:
      serviceAccountName: test-serviceaccount
      containers:
        - image: test.azurecr.io/tester:1.0.0
          imagePullPolicy: Always
          name: test-console
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: pull
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
C# code
client.CreateNamespacedCronJob(jobmodel, "testnamesapce");
Cron job:
'apiVersion': 'batch/v1beta1',
'kind': 'CronJob',
'metadata': {
  'creationTimestamp': '2020-08-04T06:29:19Z',
  'name': 'forcaster-cron',
  'namespace': 'testnamesapce'
},
InClusterConfig uses the default service account of the namespace where you deploy the pod. By default, that service account has no RBAC permissions, which leads to the Forbidden error.
The reason it works in the local environment is that it uses credentials from the kubeconfig file, which most of the time are admin credentials with root-level RBAC permissions on the cluster.
You need to define a Role and attach it to the service account using a RoleBinding.
If you are deploying the pod in the default namespace, the RBAC below should work. (Note that the test-role from your edit would not be enough on its own: its resources list includes deployments, namespaces, and cronjobs, but not pods.)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myrole
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
Once you apply the above RBAC, you can check the service account's permissions with the command below:
kubectl auth can-i list pods --as=system:serviceaccount:default:default -n default
yes
