I have a very simple Docker setup: a React web app (admin panel) and a .NET Core customer API, both connected to a bridge network. I am building a health check monitor.
Both containers are connected, and I am able to ping via IP and container name without any issues. When I curl the health check endpoint, it works fine.
However, when I fetch the same URL from the admin-app React code, it throws net::ERR_NAME_NOT_RESOLVED. Please see the code here:
async loadData() {
  try {
    // await the promise so network errors actually land in the catch block;
    // a bare .then() would let rejections escape the try/catch
    const response = await fetch('http://customer-api:5000/api/hc');
    console.log(response);
  } catch (e) {
    console.log(e);
  }
}
What I did so far:
ensured both are in the same network
the port is correct
the customer API is accessible from both an external browser and the other container (verified as sketched below)
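For reference, that last check can be run from inside the admin container, something like this (a sketch; it assumes the admin container is named admin-app and uses the busybox wget that ships with the Alpine base image):

docker exec -it admin-app sh
# inside the container:
wget -qO- http://customer-api:5000/api/hc   # container-to-container; should print the health payload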
Admin app Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY admin-app/package.json ./
COPY admin-app/package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY ./admin-app ./
# start app
CMD ["npm", "start"]
Customer API Dockerfile:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
ENV WEB_PORT=5000 \
ASPNETCORE_URLS=http://+:5000 \
ASPNETCORE_ENVIRONMENT="Local"
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["customerprofile/Figg.Customer.Api/Figg.Customer.Api.csproj", "Figg.Customer.Api/"]
COPY ["core/Figg.Shared/Figg.Shared.csproj", "Figg.Shared/"]
COPY ["core/Figg.Core/Figg.Core.csproj", "Figg.Core/"]
COPY ["customerprofile/Figg.Customer.Domain/Figg.Customer.Domain.csproj", "Figg.Customer.Domain/"]
RUN dotnet restore "Figg.Customer.Api/Figg.Customer.Api.csproj"
COPY . .
WORKDIR "/src/customerprofile/Figg.Customer.Api"
RUN dotnet build "Figg.Customer.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Figg.Customer.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
EXPOSE $WEB_PORT
ENTRYPOINT ["dotnet", "Figg.Customer.Api.dll"]
Any thoughts?
Ping works:
/app # ping customer-api
PING customer-api (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.062 ms
64 bytes from 172.21.0.2: seq=1 ttl=64 time=0.049 ms
64 bytes from 172.21.0.2: seq=2 ttl=64 time=0.068 ms
64 bytes from 172.21.0.2: seq=3 ttl=64 time=0.049 ms
64 bytes from 172.21.0.2: seq=4 ttl=64 time=0.094 ms
I found the reason why it's not reaching my container, but how do I solve this? Please see the pic here; that should explain it.
Your Docker containers are running on your Host (your own system).
Docker published the following ports to the Host:
8092 for the ClientAPI (the first number in the "8092:3000" string)
3000 for the AdminApp (the first 3000 in the "3000:3000" string; that's why I always advise using different ports, so you don't get confused)
BTW: where is this 5000 you mentioned coming from? I guess that is the "3000" in the "8092:3000" string. Typo?
The host can reach localhost:3000 (admin-app) and localhost:8092 (ClientAPI).
If one container needs to reach another container, it needs to address the service name and the internal port.
In your picture: the AdminApp can reach the ClientAPI at customer-api:3000 (or 5000? Typo? :-) )
If you client-render the AdminApp, then the browser needs an address for the ClientAPI in the outside world (outside of the Host). That means reaching the host (your computer running the containers), and that host can pass the request through to customer-api:3000.
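In practice that means the fetch in the React code has to target a published host port rather than the container name. A minimal sketch, assuming the ClientAPI is published on host port 8092 as in the picture and keeping the health endpoint from the question:

async loadData() {
  try {
    // resolved by the browser on the host, not on the Docker bridge network
    const response = await fetch('http://localhost:8092/api/hc');
    console.log(response);
  } catch (e) {
    console.log(e);
  }
}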
Conclusion: you need a proxy server (Nginx?).
Put the Nginx either on the Host (not advisable) or in a separate container next to the other two.
In the last (and best) option, you give each container an IP address and let Nginx proxy (the verb) requests to the right container.
We have this running in our own environment, and Nginx is servicing about six different containers.
On the Host we have a Docker (external) network which passes every request for ports 80 and 443 on to the proxy server in the Nginx container, and that container passes every request (based on the server_name in the Nginx config file) on to the right container.
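To make that concrete, a minimal Nginx config for such a proxy container could look roughly like this (a sketch only; the server_name values are hypothetical, and the upstream names and ports are assumptions based on the setup above):

# /etc/nginx/conf.d/default.conf inside the Nginx container
server {
    listen 80;
    server_name api.example.local;           # hypothetical hostname for the API

    location / {
        # Docker's embedded DNS resolves the service name on the shared network
        proxy_pass http://customer-api:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name admin.example.local;         # hypothetical hostname for the admin panel

    location / {
        proxy_pass http://admin-app:3000;
        proxy_set_header Host $host;
    }
}

Only the Nginx container then needs ports 80/443 published on the host; the other two containers stay internal to the Docker network.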
Related
Currently, I have an app (.NET Core 3.1) which is listening on port 40003. Now I'm trying to put it into a container. I created a Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 40003
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["myProject/myProject/myProject.csproj", "myProject/myProject/"]
RUN dotnet restore "myProject/myProject/myProject.csproj" --configfile ./NuGet.Config
COPY . .
WORKDIR "/src/myProject/myProject"
RUN dotnet build "myProject.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myProject.csproj" -c Release -o /app/publish
FROM base AS final
ENV ASPNETCORE_URLS=http://*:40003
ENV ASPNETCORE_ENVIRONMENT="development"
EXPOSE 40003
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myProject.dll"]
As you can see, I exposed port 40003 and set the ASPNETCORE_URLS variable.
I run it:
docker run -it --rm --name myTag -p 40003:40003 myTag
When I log into the container:
docker exec -it myTag bash
and execute:
wget http://localhost:40003
I get the expected result, so I believe that my component inside the container is alive and listening.
However, when I try to go to
http://localhost:40003
from my local browser, I receive ERR_EMPTY_RESPONSE.
Can you give some advice? Thank you in advance.
Edit: ===================
I posted the complete Dockerfile (I added the first three lines, which define the base image).
Yes, I'm sure that I posted to HTTP (not HTTPS).
http://localhost:40003 returns the correct response, which is a custom "selftest".
On non-development environments this endpoint is blocked. Something like:
[HttpGet]
[HttpPost]
public JsonResult<DiagnosticsResult> ServerStatus()
The response: I really got no response. No 200, 404, etc.; simply nothing.
Also, I'm now logging the result of the self tests. Below is what I see when I'm inside the container:
INFORMATION 2021-11-30 09:15:48,738 [Omdc.SelfHost.Base.Tests.MinimumDiskSpaceTest.Run] MinimumDiskSpaceTest starting.
INFORMATION 2021-11-30 09:15:48,741 [Omdc.SelfHost.Base.Tests.MinimumDiskSpaceTest.Run] Drive: /, Volume label: /, Total available space: 82%
But when I request from a local browser, there are neither console logs nor HTTP responses.
So my request does not reach the endpoint(?)
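A few checks could narrow down where the request dies (a sketch, assuming the container is still running under the name myTag as above):

docker port myTag                 # confirm 40003/tcp is actually published as 0.0.0.0:40003
docker logs myTag                 # Kestrel should report listening on http://[::]:40003, not localhost
curl -v http://localhost:40003    # from the host, not from inside the container

If Kestrel only binds localhost inside the container, the published port has nothing reachable behind it, which can show up in the browser as an empty response.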
When creating a new .NET 5 Web API project, you can find this in the generated launchSettings.json file:
"applicationUrl": "https://localhost:5001;http://localhost:5000",
So when running the project you can call the Api endpoint via
GET https://localhost:5001/weatherforecast
When adding Docker support for the project, you might create a Dockerfile like:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet
WORKDIR /app
EXPOSE 80
COPY --from=build /app/out .
ENTRYPOINT [ "dotnet", "Api.dll" ]
Most sample files expose port 80 in that file. When running the API in a Docker container via
docker run -p 8080:80 my-image
I have to change the call to
GET http://localhost:8080/weatherforecast
So I call port 8080, which maps to port 80 internally, but how does port 80 map to port 5000 to forward the request to the API? It works "somehow". Would someone mind explaining this container networking?
As the docs say, the launchSettings.json file is used only for local development:
The launchSettings.json file:
Is only used on the local development machine.
Is not deployed.
Contains profile settings.
As for the Docker image: it has the environment variable ASPNETCORE_URLS set to http://+:80, so the app listens on port 80, and there is no mapping happening between 80 and 5000.
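In other words, 5000 never enters the picture inside the container. If you did want the container to listen on 5000, one way (a sketch) would be to override the variable at run time and map to it:

docker run -e ASPNETCORE_URLS=http://+:5000 -p 8080:5000 my-image
# the app now listens on 5000 inside the container, and
# GET http://localhost:8080/weatherforecast still works from the host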
I'm working my way through tutorials on dockerizing the WeatherForecast API web template from ASP.NET Core:
1. https://code.visualstudio.com/docs/containers/quickstart-aspnet-core
2. https://code.visualstudio.com/docs/containers/docker-compose
I had to start from here, because I wasn't getting a new image to build using the tutorial here: https://docs.docker.com/compose/aspnet-mssql-compose/
"1" works, which is great. However, "2" will not work on the localhost:5000/WeatherForecast port as advertised, and I'm having some trouble debugging why after many reviews of the available docs.
I should note that, in creating the templated app from the command line, I chose the --no-https option.
I then used docker ps to bring up the PORTS. The web app is using 5000/tcp, 0.0.0.0:32779->80/tcp. When I substitute 32779 for 5000, I get the API string returned instead!
I know I'm missing something within docker-compose and could use some extra eyes on it. Thank you!
EDIT: For reference, the files below were generated by my VSCode editor.
1. I ran dotnet new webapi --no-https.
2. I then brought up the VSCode "command palette" and ran Docker: Add Dockerfiles to Workspace, selected 'yes' for the inclusion of a docker-compose.yml file, and chose Linux. I also chose to use port 5000. I use Fedora 30.
4. I run dotnet build from the project root in the terminal.
5. If I run plain docker commands and make the ports explicit, it works as advertised, but if I run docker-compose -f <yml-file> up -d --build, it does not.
I just re-read this and find it annoying that I'm stuck within VSCode to fix the issue (according to the docs):
By default Docker will assign a randomly chosen host port to a port exposed by a container (the container port). In this case the exposed (container) port is 5000, but it will be exposed on the host via a random port, such as 32737.
You can use specific port on the host by changing the Docker run options used by docker-run: debug task (defined in .vscode/tasks.json file). For example, if you want to use the same port (5000) to expose the service, the docker-run: debug task definition would look like this:
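The JSON example from those docs did not survive the copy above; it looks roughly like this (the project name and port are placeholders matching this question, so treat the exact values as assumptions):

{
  "type": "docker-run",
  "label": "docker-run: debug",
  "dependsOn": ["docker-build"],
  "dockerRun": {
    "ports": [
      { "containerPort": 5000, "hostPort": 5000 }
    ]
  },
  "netCore": {
    "appProject": "${workspaceFolder}/aspdotnet_docker2.csproj",
    "enableDebugging": true
  }
}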
a. docker-compose.yml
# Please refer to https://aka.ms/HTTPSinContainer on how to set up an
# HTTPS developer certificate for your ASP.NET Core service.
version: '3.4'

services:
  aspdotnetdocker2:
    image: aspdotnetdocker2
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000
b. Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["aspdotnet_docker2.csproj", "./"]
RUN dotnet restore "./aspdotnet_docker2.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "aspdotnet_docker2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "aspdotnet_docker2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "aspdotnet_docker2.dll"]
Have a look at the docker-compose docs:
SHORT SYNTAX
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
So try:
ports:
  - "5000:<port in ASPNETCORE_URLS>"
  # e.g.
  # - "5000:80"
  # - "44388:443"
I know this is such a noob problem, but I am having trouble understanding how to get my .NET Core website to connect to my MySQL container. Some background: the MySQL instance and the .NET Core website are in separate containers. I have already started the MySQL container and set up the root account. I am using Entity Framework inside the .NET Core project.
I created the MySql container using this statement:
docker run --name mysql_container -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root mysql
Below is the Dockerfile that Visual Studio generated for me.
So what do I tell my .NET Core program the IP address of the MySQL container is, if the IP can change?
Inside the .NET Core program:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    var connection = $"Server={GetDBAddress()};Database=myDataBase;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}
If I write the GetDBAddress function, what goes in there? I cannot simply return localhost, because the database is in another Docker container. Right now I am trying to use localhost, and I get connection refused, yet I am able to connect to the MySQL db using Workbench.
Also, I am not sure, but can these two setups be combined into a single file? I think they're called docker-compose files, maybe?
Dockerfile
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ["ToDoService/ToDoService.csproj", "ToDoService/"]
RUN dotnet restore "ToDoService/ToDoService.csproj"
COPY . .
WORKDIR "/src/ToDoService"
RUN dotnet build "ToDoService.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "ToDoService.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ToDoService.dll"]
If you've launched MySQL exposing the ports, you should be able to reach it by connecting to localhost on port 3306.
Otherwise, as you suggested, there is the possibility of setting up a docker-compose file. This file usually contains all the configuration your application needs to run. So, for example, a suitable configuration for your application (note: I'm assuming you're using MySQL 5.7, since you haven't specified a version) could be:
version: '3.3'
services: # list of services composing your application
  db: # the service hosting your MySQL instance
    image: mysql:5.7 # the image and tag docker will pull from Docker Hub
    volumes: # this section allows you to configure persistence across multiple restarts
      - db_data:/var/lib/mysql
    restart: always # if the db somehow crashes, restart it
    environment: # env variables, you usually set these to override existing ones
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: todoservice
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  todoservice: # your application service
    build: ./ # this tells docker-compose not to pull from Docker Hub, but to build from the Dockerfile it will find in ./
    restart: always
    depends_on: # your service will not start before the db service is running, but this doesn't assure you that the database will be ready to accept incoming connections (so your application could crash until the db initializes itself)
      - db
volumes:
  db_data: # this tells docker-compose to save your data in a generic docker volume. You can see existing volumes by typing 'docker volume ls'
To launch and deploy your application, you now need to type in a terminal:
docker-compose up
This will bring up your deployment. Note that no ports are exposed here: only your service will be able to access the database, at db:3306 (you don't need to refer to it by IP; you can reach other services using the service name).
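So, back in the question's ConfigureServices, GetDBAddress() can simply return the service name. A minimal sketch against the compose file above (the database name is taken from MYSQL_DATABASE):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // "db" is the compose service name; Docker's embedded DNS resolves it
    var connection = "Server=db;Port=3306;Database=todoservice;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}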
For debugging purposes, you can still expose the db ports by adding these lines under image:
ports:
  - "3306:3306"
Note that this host port has to be free (no other system service using it), otherwise the entire deployment will fail.
Final note: since docker-compose tries to avoid rebuilding your images every time you bring up the service, to force it to build a new one you have to append --build to the docker-compose up command.
To bring down your deployment, just use docker-compose down. To delete all the persistent data related to your deployment (i.e., to start with a fresh db), append the -v flag to that command.
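Put together, the usual lifecycle looks like:

docker-compose up --build   # force a rebuild of the images, then bring the stack up
docker-compose down         # stop and remove the containers
docker-compose down -v      # same, but also delete the named volumes (fresh db)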
Hope it helps!
I'm attempting to run an ASP.NET Core application on a Raspberry Pi using Docker, and I think I have the main parts down. I have a repository on GitHub that is a simplistic ASP.NET Core project. I have set up an account on Docker Cloud which builds every time I push to my GitHub repo.
I have docker pulled my repository onto my Pi.
I run the command:
docker run -d -p 8080:80 joro550/radiusnet --network=host
and I can see that it is running.
But when I go to my Pi's IP address on port 8080, nothing loads.
When I've been searching around for this, people have suggested adding these flags (which I have tried, with the same results):
adding --network=host to the docker run command
adding -it to the docker run command
adding EXPOSE 80 to the Dockerfile
I think at this point I'm at a bit of a loss as to how to access this thing.
The Docker documentation does suggest running
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" myapp
if you're using Windows 10 Nano containers, which I don't believe I am, but when I run this command I get a resounding <no value>.
Cutting it back to docker inspect -f "{{ .NetworkSettings.IPAddress }}" myapp gives me an IP address different from my Pi's internal IP address, which I've tried on port 8080 with the same result.
Doing a curl on both addresses gives me the same result: connection refused.
Here's my Dockerfile for anyone interested:
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY /src ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out -r linux-arm
# build runtime image
FROM microsoft/dotnet:2.0.0-runtime-stretch-arm32v7
WORKDIR /app
COPY --from=build-env /app/src/RadiusNet.Web/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "RadiusNet.Web.dll"]
If any more information is needed, please ask; I'm pretty new to Docker, so I just did a bit of a knowledge dump of my current situation.
Link to the GitHub project (if it's needed): https://github.com/joro550/RadiusNet
Any help at this point will be greatly appreciated.
Cutting it back to docker inspect -f "{{ .NetworkSettings.IPAddress }}" myapp gives me an IP address different from my Pi's internal IP address, which I've tried on port 8080 with the same result.
Try curling ports 5000 and 80 with this IP address from inside the Raspberry Pi, rather than 8080. Plus, are you sure you are exposing the right port? You have EXPOSE 80, but port 8080 is mapped to 5000, and docker ps shows no mapping for 80.
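Concretely, something along these lines, run on the Pi itself (a sketch; CONTAINER_IP stands for whatever the docker inspect command printed):

docker inspect -f "{{ .NetworkSettings.IPAddress }}" myapp   # get CONTAINER_IP
curl -v http://CONTAINER_IP:80     # the port the Dockerfile EXPOSEs
curl -v http://CONTAINER_IP:5000   # the ASP.NET Core default, in case Kestrel bound there instead

Whichever one answers tells you which port Kestrel is actually listening on, and therefore what the -p host mapping should target.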