I have a .NET application divided into microservices. One of the microservices is the orchestrator and authenticator: it communicates with three other microservices, and the intercommunication is done via REST.
The code is fully functional locally, and each microservice was also containerized with Docker and tested locally (also bug-free).
The problems arise when we try to test our services on GCP: the error is always the same, a NullReferenceException, as can be seen in the image below:
We have defined environment variables in the code that match the .yml file used for the deployment.
This is the .yml that we are using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orchauth
  namespace: default
  labels:
    app: orchauth
spec:
  strategy:
    type: Recreate
  replicas: 2
  selector:
    matchLabels:
      app: orchauth
  template:
    metadata:
      labels:
        app: orchauth
    spec:
      serviceAccountName: default
      containers:
      - name: orchauth
        image: "eu.gcr.io/boomoseries-api/orchmicroservice"
        ports:
        - containerPort: 80
        env:
        - name: SEARCH_HOST
          value: "http://search.default"
        - name: USERS_HOST
          value: "http://users.default"
        - name: PREFS_HOST
          value: "http://prefs.default"
---
apiVersion: v1
kind: Service
metadata:
  name: orchauth
spec:
  selector:
    app: orchauth
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
The other microservices are implemented similarly in the file.
We also read the microservices' environment variables in the code, correctly:
private static readonly string microservicesBaseURL = Environment.GetEnvironmentVariable("SEARCH_HOST");
private static readonly string microservicesBaseURL = Environment.GetEnvironmentVariable("USERS_HOST");
private static readonly string microservicesBaseURL = Environment.GetEnvironmentVariable("PREFS_HOST");
This is the ingress .yml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: boomoseries-ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: orchauth
      port:
        number: 80
The ingress communicates with the microservice OrchAuth, which is the one giving the error.
The error occurs in the request: from my understanding, the request variable is null, and the NullReferenceException occurs there. But I can't understand why, since locally and containerized the service works with no issues. The code is presented below (the class SearchRESTComunicationServiceBooks):
public async Task<List<BookDTO>> ObtainBooksByRating(string type, double minRating)
{
    List<BookDTO> booksDtos = new();
    // Makes the requests to the different microservices
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    var request = httpClient.GetAsync(microservicesBaseURL + "?type=" + type + "&minRating=" + minRating);
    stopwatch.Stop();
    System.Diagnostics.Debug.WriteLine(stopwatch.ElapsedMilliseconds);
    // Get the responses
    var response = request.Result;
    var responseString = await response.Content.ReadAsStringAsync();
    List<BookDTO> deserializedBook = JsonConvert.DeserializeObject<List<BookDTO>>(responseString);
    foreach (var item in deserializedBook)
    {
        booksDtos.Add(item);
    }
    return booksDtos;
}
Note: I am fully aware that the code has bad implementation practices, but that is not the main concern. Thank you for taking the time to read.
So the problem was that in my .yml file I was only configuring my services to use port 80 (HTTP). In my startup, I was using HSTS, meaning that there would always be a redirect to the HTTPS port (which was configured as 443). After adding this port to the .yml file, the problem was solved.
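For reference, this is roughly what the change looks like; the sketch below assumes the application listens for HTTPS on port 443 inside the container:

```yaml
# Deployment: expose the HTTPS port alongside HTTP
ports:
- containerPort: 80
- containerPort: 443
---
# Service: forward both ports to the pods
ports:
- name: http
  protocol: TCP
  port: 80
  targetPort: 80
- name: https
  protocol: TCP
  port: 443
  targetPort: 443
```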
Related
I have a .NET Core application which is dockerized and running in a Kubernetes cluster (AKS).
I want to apply securityContext readOnlyRootFilesystem = true to satisfy the requirement "Immutable (read-only) root filesystem should be enforced for containers."
securityContext:
  privileged: false
  readOnlyRootFilesystem: true
For the .NET Core app I want to read a TLS certificate and add it to the pod/container's certificate store; to do this, in startup I have the code below:
var cert = new X509Certificate2(Convert.FromBase64String(File.ReadAllText(Environment.GetEnvironmentVariable("cert_path"))));
AddCertificate(cert, StoreName.Root);
The problem is that when I set readOnlyRootFilesystem = true, I get the error below from the app:
EXCEPTION: System.Security.Cryptography.CryptographicException: The X509 certificate could not be added to the store.
---> System.IO.IOException: Read-only file system
at System.IO.FileSystem.CreateDirectory(String fullPath)
It's saying that on a read-only file system it can't add the certificate. Is there a way to overcome this problem?
Update
If I set emptyDir: {}, I get the error below. Where should I add it?
spec.template.spec.volumes[0].csi: Forbidden: may not specify more than 1 volume type
volumeMounts:
- name: secrets-store
  mountPath: /app/certs
securityContext:
  privileged: false
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  runAsUser: 1000
volumes:
- name: secrets-store
  emptyDir: {}
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: azure-kvname
At the location you have defined as the path of the cert store, attach a volume that is not read-only. If you only want that data to last as long as the pod exists, an emptyDir type volume will fit the bill nicely.
For example, if you are creating pods with a deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ct
  name: ct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ct
  template:
    metadata:
      labels:
        app: ct
    spec:
      containers:
      - image: myapp
        name: myapp
        env:
        - name: cert_path
          value: /etc/certstore
        securityContext:
          readOnlyRootFilesystem: true
You could set up the emptyDir as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ct
  name: ct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ct
  template:
    metadata:
      labels:
        app: ct
    spec:
      containers:
      - image: myapp
        name: myapp
        env:
        - name: cert_path
          value: /etc/certstore
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /etc/certstore
          name: certstore
      volumes:
      - name: certstore
        emptyDir: {}
Other types of volumes would work as well. If you wanted to persist these certificates as your pods are cycled, then a persistentVolumeClaim could be used to get you a persistent volume.
The emptyDir won't be read-only, but the rest of the container's root filesystem will be, which should satisfy your security requirement.
Generally, the solution suggested by programmerq looks fine.
As for this error:
spec.template.spec.volumes[0].csi: Forbidden: may not specify more than 1 volume type
Once upon a time I got this error because I had tried to start a specific volume type but then went back to emptyDir using kubectl apply. However, the old volume still existed on the server side, and kubectl tried to combine the two volume specifications, which caused the problem.
On this site you can find an explanation:
If the admission plugin is turned on, the administrator may specify a default StorageClass. All PVCs that have no storageClassName can be bound only to PVs of that default. Specifying a default StorageClass is done by setting the annotation storageclass.kubernetes.io/is-default-class equal to "true" in a StorageClass object. If the administrator does not specify a default, the cluster responds to PVC creation as if the admission plugin were turned off. If more than one default is specified, the admission plugin forbids the creation of all PVCs.
See also this question.
Make sure your volume is created correctly and the old one no longer exists, and then try the solution suggested by programmerq.
I have two ASP.NET Core applications app1 and app2. Inside these apps I have routes as defined in this simplified code:
app1:
endpoints.MapGet("/app1ep1", async context =>
{
    await context.Response.WriteAsync("x1");
});
endpoints.MapGet("/app1ep2", async context =>
{
    await context.Response.WriteAsync("x2");
});
app2:
endpoints.MapGet("/app2ep1", async context =>
{
    await context.Response.WriteAsync("y1");
});
endpoints.MapGet("/app2ep2", async context =>
{
    await context.Response.WriteAsync("y2");
});
I am trying to define without success an ingress rule that will apply the following routings:
myhost.com/app1/app1ep1 should route to the service app1 and then internally to app1ep1
myhost.com/app2/app2ep1 should route to the service app2 and then internally to app2ep1
and so on; please comment if extra clarification is required
My ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: myservice1
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: myservice2
            port:
              number: 80
The actual result is that the services are found, but I get a 404 error. In other words, browsing to myhost.com/app1/app1ep1 (or any other combination) routes to the service (myservice1), but then the internal route is lost.
Is there any way to fix this?
Thanks for helping
Edit:
I am noticing some other problem. I tried to reduce the problem to a single service. So my ingress now looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: myservice1
            port:
              number: 80
Also added this controller to the "myservice1" app:
endpoints.MapGet("/", async context =>
{
    await context.Response.WriteAsync("x");
});
Going to myhost.com/app1 returns 404. That again means that the app is found but the route isn't found in the application (although I defined the "/" route).
Maybe this information can help discover the problem.
I have solved this issue with the following ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - path: /app1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: myservice1
            port:
              number: 80
      - path: /app2(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: myservice2
            port:
              number: 80
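To illustrate why this works: the rewrite-target of /$2 means the backend receives only the second capture group of the path regex, so the /app1 prefix is stripped before the request reaches the app. A rough Python sketch of the rewrite behavior (for illustration only; the real rewriting happens inside the nginx ingress controller):

```python
import re

# Path pattern from the ingress rule above; nginx rewrites the URI to "/$2",
# i.e. to the second capture group of this regex.
pattern = re.compile(r"/app1(/|$)(.*)")

def rewrite(path: str) -> str:
    """Approximate what rewrite-target: /$2 does for the /app1 rule."""
    match = pattern.match(path)
    return "/" + match.group(2) if match else path

print(rewrite("/app1/app1ep1"))  # the backend sees /app1ep1
print(rewrite("/app1"))          # the backend sees /
```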
I am trying to access the Kubernetes API from a C# container running within the Kubernetes cluster.
I have a YAML file for a Python container, and I want to create that pod programmatically from a C# .NET Core container running in the same cluster. I found the Kubernetes API client for .NET Core and wrote the code below to list pods:
using System;
using k8s;

namespace simple
{
    internal class PodList
    {
        private static void Main(string[] args)
        {
            var config = KubernetesClientConfiguration.InClusterConfig();
            IKubernetes client = new Kubernetes(config);
            Console.WriteLine("Starting Request!");

            var list = client.ListNamespacedPod("default");
            foreach (var item in list.Items)
            {
                Console.WriteLine(item.Metadata.Name);
            }
            if (list.Items.Count == 0)
            {
                Console.WriteLine("Empty!");
            }
        }
    }
}
This code throws a Forbidden error ("Operation returned an invalid status code 'Forbidden'").
Using BuildConfigFromConfigFile instead of InClusterConfig, the code works in the local environment. Is there anything I missed?
Edited
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-serviceaccount
  namespace: api
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: api
  name: test-role
rules:
- apiGroups: ["", "apps", "batch"]
  # "" indicates the core API group
  resources: ["deployments", "namespaces", "cronjobs"]
  verbs: ["get", "list", "update", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-binding
  namespace: api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-serviceaccount
  namespace: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2019-07-04T16:05:43Z"
  generation: 4
  labels:
    app: test-console
    tier: middle-end
  name: test-console
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: test-console
      tier: middle-end
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: "2019-07-04T16:05:43Z"
      labels:
        app: test-console
        tier: middle-end
    spec:
      serviceAccountName: test-serviceaccount
      containers:
      - image: test.azurecr.io/tester:1.0.0
        imagePullPolicy: Always
        name: test-console
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: pull
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
C# code
client.CreateNamespacedCronJob(jobmodel, "testnamesapce");
cron job
'apiVersion': 'batch/v1beta1',
'kind': 'CronJob',
'metadata': {
    'creationTimestamp': '2020-08-04T06:29:19Z',
    'name': 'forcaster-cron',
    'namespace': 'testnamesapce'
},
InClusterConfig uses the default service account of the namespace where you are deploying the pod. By default that service account has no RBAC permissions, which leads to the Forbidden error.
The reason it works in the local environment is that there it uses the credentials from the kubeconfig file, which most of the time are admin credentials with root-level RBAC permissions on the cluster.
You need to define a Role and attach it to the service account using a RoleBinding.
So if you are deploying the pod in the default namespace, the RBAC below should work.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myrole
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
Once you apply the above RBAC, you can check the permissions of the service account using the command below:
kubectl auth can-i list pods --as=system:serviceaccount:default:default -n default
yes
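If, as in the code in the question, you also need to create CronJobs, the Role's rule list can be extended along these lines; this is a sketch assuming the pod still runs in the default namespace (CronJobs live in the batch API group):

```yaml
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: ["batch"]
  resources: ["cronjobs"]
  verbs: ["get", "list", "create"]
```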
I have the following ingress.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    app: ingress
spec:
  rules:
  - host:
    http:
      paths:
      - path: /apistarter(/|$)(.*)
        backend:
          serviceName: svc-aspnetapistarter
          servicePort: 5000
      - path: //apistarter(/|$)(.*)
        backend:
          serviceName: svc-aspnetapistarter
          servicePort: 5000
After deploying my ASP.NET Core 2.2 API application and navigating to http://localhost/apistarter/, the browser debugger console shows errors loading the static content and JavaScript. In addition, navigating to http://localhost/apistarter/swagger/index.html results in
Fetch error Not Found /swagger/v2/swagger.json
I am using the SAME ingress for multiple microservices with different path prefixes. It is running on my local Kubernetes cluster using microk8s, not on any cloud provider yet. I have checked out How to configure an ASP.NET Core multi microservice application and Azure AKS ingress routes so that it doesn't break resources in the wwwroot folder and https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1, but none of these helps.
Follow these steps to run your code:
ingress: remove URL-rewriting from ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  labels:
    app: ingress
spec:
  rules:
  - host:
    http:
      paths:
      - path: /apistarter # <---
        backend:
          serviceName: svc-aspnetapistarter
          servicePort: 5000
deployment: pass an environment variable with the path base in the deployment's .yml
apiVersion: apps/v1
kind: Deployment
# ..
spec:
  # ..
  template:
    # ..
    spec:
      # ..
      containers:
      - name: test01
        image: test.io/test:dev
        # ...
        env:
        # define custom Path Base (it should be the same as 'path' in the Ingress)
        - name: API_PATH_BASE # <---
          value: "apistarter"
program: enable loading environment variables in Program.cs
var builder = new WebHostBuilder()
    .UseContentRoot(Directory.GetCurrentDirectory())
    // ..
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        // ..
        config.AddEnvironmentVariables(); // <---
        // ..
    })
// ..
startup: apply UsePathBaseMiddleware in Startup.cs
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    private readonly IConfiguration _configuration;

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        var pathBase = _configuration["API_PATH_BASE"]; // <---
        if (!string.IsNullOrWhiteSpace(pathBase))
        {
            app.UsePathBase($"/{pathBase.TrimStart('/')}");
        }

        app.UseStaticFiles(); // <-- StaticFilesMiddleware must follow UsePathBaseMiddleware
        // ..
        app.UseMvc();
    }
    // ..
}
I am currently facing an issue using ASP.NET Web API with a NopCommerce website, which I need to connect to my cross-platform application.
The code breaks when I call the API. It produces an exception at:
var scope = EngineContext.Current.ContainerManager.Scope()
in the task.cs file.
I have tried debugging the code, but the exception is not generated consistently; sometimes it appears and sometimes not. I did, however, find some points where it breaks:
var sis = EngineContext.Current.Resolve<StoreInformationSettings>().
I have tried loading it until it succeeds, i.e., putting it inside a while statement:
var sis = EngineContext.Current.Resolve<StoreInformationSettings>();
while (sis == null)
{
    sis = EngineContext.Current.Resolve<StoreInformationSettings>();
}
And similarly in a few more instances. This implies it loads, but perhaps after a delay.
There was an instruction on the NopCommerce forum to update the Autofac package, which I did.
All the Autofac NuGets were tried individually in Nop.Web.
With reference to the link here, I have tried fixing it, but it didn't solve my problem.
I came to the conclusion that it has some issue with the Autofac DI settings (not sure). Or is it that NopCommerce is built not to support APIs?
As Mr. Rigueira suggested, I have tried to register the route from the route configuration class inside my plugin itself.
The route configuration for the API is here:
public void RegisterRoutes(RouteCollection routes)
{
    GlobalConfiguration.Configure(config =>
    {
        config.MapHttpAttributeRoutes();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { controller = "TestApi", id = RouteParameter.Optional }
        );
    });
}
Still, it does not seem to make any difference.
I tried registering the route from the plugin in the normal pattern:
public void RegisterRoutes(RouteCollection routes)
{
    routes.MapRoute(
        "WebApi",
        "api/{controller}/{action}/{id}",
        new { controller = "WebApi", action = "", id = RouteParameter.Optional },
        new[] { "Nop.Plugin.WebApi.EduAwake.Controllers" }
    );
}
Still no go.
Here's the error I get:
Multiple types were found that match the controller named 'Topic'. This can happen if the route that services this request ('api/{controller}/{action}/{id}') does not specify namespaces to search for a controller that matches the request. If this is the case, register this route by calling an overload of the 'MapRoute' method that takes a 'namespaces' parameter.
Line 13: <div class="col-md-12 padding-0" id="home-topic" style="margin-left:-14px;">
Line 14: <div id="home-topic-content" style="width:104%;">
Line 15: @Html.Action("TopicBlock", "Topic", new { systemName = "HomePageText" })
Line 16: </div>
Line 17: </div>
in index.cshtml.
This seems to be a hopeless situation; I have been working on this for months!
Checking your code would be advisable but, most likely, you have bypassed NopCommerce when registering Web API, and now you're trying to use dependencies before they have been registered in the usual flow. That's why they eventually appear while you're waiting in the loop (I can't imagine how you found such a workaround anyway...).
As a solution, you should integrate Web API using a plugin rather than by modifying the original source code. I did it in the past, and it is not hard. There used to be a couple of samples for that, if I recall correctly.