I have deployed a multi-container App Service in Azure using a Docker Compose file (preview).
The web application exposes port 80, so if I go to https://myproj.azurewebsites.net/ the web application is displayed (it's the only public port available).
What I would like to do now is send an HTTP request from the web app to my API, which is another container hosted internally. I've tried different URLs but I'm unable to make a successful request.
The API container starts successfully, so there is nothing wrong there; I just don't know which address to use.
OrderApi Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["MyProj.OrderApi/MyProj.OrderApi.csproj", "MyProj.OrderApi/"]
COPY ["nuget.config", "MyProj.OrderApi/"]
RUN dotnet restore "v.OrderApi/MyProj.OrderApi.csproj" --configfile "MyProj.OrderApi/nuget.config"
RUN rm "MyProj.OrderApi/nuget.config"
COPY . .
WORKDIR "/src/MyProj.OrderApi"
RUN dotnet build "MyProj.OrderApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProj.OrderApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProj.OrderApi.dll"]
Docker compose file used in production (Azure)
version: '3.4'
services:
web:
image: myproj.azurecr.io/myproj/web
ports:
- '80:80'
orderapi:
image: myproj.azurecr.io/myproj/orderapi
Endpoints tested
https://localhost/orderapi/api - ERR_CONNECTION_REFUSED
http://localhost/orderapi/api - blocked by CORS policy (I'm certain CORS is configured correctly in the API)
https://host.docker.internal/orderapi/api - ERR_CONNECTION_TIMED_OUT
orderapi/api - adds the current site as the base URL, so it tries to send a request to https://mysite.azurewebsites.net/orderapi/api
http://host.docker.internal/orderapi/api - "This request has been blocked; the content must be served over HTTPS"
host.docker.internal/orderapi/api - adds the current site as the base URL, so it tries to send a request to https://mysite.azurewebsites.net/host.docker.internal/orderapi/api
Unfortunately, Azure Web App for Containers can only expose one container to the Internet. This means that if you deploy multiple containers in the Web App, you can only access one of them from the Internet. See the details under "How do I know which container is internet accessible" in Multi-container with Docker Compose.
Update:
When you define the service like this in your docker-compose file:
redis:
  image: redis
  container_name: rediscache
Then here is an example of code that connects to the redis container:
[Route("api/[controller]")]
[ApiController]
public class NewsController : ControllerBase
{
// GET api/values
[HttpGet]
public async Task<ActionResult<IEnumerable<string>>> Get()
{
ConnectionMultiplexer connection = await ConnectionMultiplexer.ConnectAsync("redis");
var db = connection.GetDatabase();
//code to parse the XML
}
}
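Applying the same pattern to the compose file in the question, server-side code in the web container could reach the orderapi container by its service name. This is only a hedged sketch: it assumes orderapi listens on port 80 inside the compose network (as its Dockerfile suggests), and the route api/orders is hypothetical.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class OrderApiClient
{
    // "orderapi" is the compose service name; it resolves only inside the
    // container network, never from the user's browser.
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("http://orderapi/")
    };

    public async Task<string> GetOrdersAsync()
    {
        // Hypothetical route; replace with whatever your API actually exposes.
        var response = await Http.GetAsync("api/orders");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}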
In the docker-compose file, the orderapi service doesn't expose any port. If it doesn't expose a port, there is no way it can receive requests.
Usually, the browser gets the HTML and then sends requests directly to the API.
Maybe orderapi should expose another port (for example 88) so the browser can consume data from the API through that port.
Related
Currently, I have an app (.NET Core 3.1) which is listening on port 40003. Now I'm trying to put it into a container. I created this Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 40003
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["myProject/myProject/myProject.csproj", "myProject/myProject/"]
RUN dotnet restore "myProject/myProject/myProject.csproj" --configfile ./NuGet.Config
COPY . .
WORKDIR "/src/myProject/myProject"
RUN dotnet build "myProject.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myProject.csproj" -c Release -o /app/publish
FROM base AS final
ENV ASPNETCORE_URLS=http://*:40003
ENV ASPNETCORE_ENVIRONMENT="development"
EXPOSE 40003
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myProject.dll"]
As you can see, I exposed port 40003 and set the ASPNETCORE_URLS variable.
I run it:
docker run -it --rm --name myTag -p 40003:40003 myTag
When I log into the container:
docker exec -it myTag bash
and execute:
wget http://localhost:40003
I get the expected result. So I believe that my component inside the container is alive and listening.
However, when I try to go to
http://localhost:40003
from my local browser, I receive ERR_EMPTY_RESPONSE.
Can you give some advice? Thank you in advance.
Edit:
I posted the complete Dockerfile (added the first 3 lines, which define the base image).
Yes, I'm sure that I'm posting to HTTP (not HTTPS).
http://localhost:40003 returns the correct response, which is a custom "self test".
On a non-development environment this endpoint is blocked. Something like:
[HttpGet]
[HttpPost]
public JsonResult<DiagnosticsResult> ServerStatus()
The response: I really get no response. No 200, 404, etc., simply nothing.
Also, I'm now logging the result of the self tests. This is what I see when I'm inside the container:
INFORMATION 2021-11-30 09:15:48,738 [Omdc.SelfHost.Base.Tests.MinimumDiskSpaceTest.Run] MinimumDiskSpaceTest starting.
INFORMATION 2021-11-30 09:15:48,741 [Omdc.SelfHost.Base.Tests.MinimumDiskSpaceTest.Run] Drive: /, Volume label: /, Total available space: 82%
But when I make the request from a local browser there are neither console logs nor HTTP responses.
So my request does not seem to reach the endpoint(?)
I have a very simple Docker setup - a React web app (admin panel) and a .NET Core customer API, connected to a bridge network. I am actually creating a health check monitor.
Both containers are connected and I am able to ping via IP and container name without any issues. When I curl the health check endpoint it works fine.
However, when I do a fetch to the same URL from the admin-app React code, it throws net::ERR_NAME_NOT_RESOLVED. Please see the code here:
async loadData() {
  try {
    fetch('http://customer-api:5000/api/hc')
      .then(function (response) {
        console.log(response)
      });
  } catch (e) {
    console.log(e);
  }
}
What I did so far:
ensured both are in the same network
the port is correct
the customer API is accessible from both an external browser and the container
admin-app Dockerfile
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY admin-app/package.json ./
COPY admin-app/package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY ./admin-app ./
# start app
CMD ["npm", "start"]
customer-api Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
ENV WEB_PORT=5000 \
ASPNETCORE_URLS=http://+:5000 \
ASPNETCORE_ENVIRONMENT="Local"
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["customerprofile/Figg.Customer.Api/Figg.Customer.Api.csproj", "Figg.Customer.Api/"]
COPY ["core/Figg.Shared/Figg.Shared.csproj", "Figg.Shared/"]
COPY ["core/Figg.Core/Figg.Core.csproj", "Figg.Core/"]
COPY ["customerprofile/Figg.Customer.Domain/Figg.Customer.Domain.csproj", "Figg.Customer.Domain/"]
RUN dotnet restore "Figg.Customer.Api/Figg.Customer.Api.csproj"
COPY . .
WORKDIR "/src/customerprofile/Figg.Customer.Api"
RUN dotnet build "Figg.Customer.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Figg.Customer.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
EXPOSE $WEB_PORT
ENTRYPOINT ["dotnet", "Figg.Customer.Api.dll"]
Any thoughts?
Ping works:
/app # ping customer-api
PING customer-api (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.062 ms
64 bytes from 172.21.0.2: seq=1 ttl=64 time=0.049 ms
64 bytes from 172.21.0.2: seq=2 ttl=64 time=0.068 ms
64 bytes from 172.21.0.2: seq=3 ttl=64 time=0.049 ms
64 bytes from 172.21.0.2: seq=4 ttl=64 time=0.094 ms
I found the reason why it's not reaching my container - but how do I solve this? Please see the pic here; that explains it.
Your Docker containers are running on your Host (your own system).
Docker published the following ports to the Host:
8092 for the ClientAPI (the first number in "8092:3000")
3000 for the AdminApp (that is the first 3000 in the "3000:3000" string; that's why I always advise using different ports, to avoid confusion)
BTW: where is the 5000 you mentioned coming from? I guess it is the "3000" in the "8092:3000" string. A typo?
The host can reach localhost:3000 (admin-app) and localhost:8092 (ClientAPI).
If one container needs to reach another container, it needs to address the service and the internal port.
In your picture: The AdminApp can reach the ClientAPI with customer-api:3000 (or 5000? Typo? :-) )
If you Client-Render the AdminApp, then it needs an address for the ClientAPI in the outside world (outside of the Host). That means reaching the host (your computer running the containers) and that host can put the request through to customer-api:3000.
Conclusion: You need a ProxyServer (Nginx?).
Put the Nginx either on the Host (not advisable) or as a separate Container next to the other 2.
In the last (and best) option, you give each container an IP address and let Nginx proxy (the verb) requests to the right container.
We have this running here in our own environment and Nginx is servicing about 6 different containers.
On the Host we have a docker (external) network which passes every request for ports 80 and 443 on to the ProxyServer in the Nginx container. And that container passes every request (based on the server_name in the Nginx config file) on to the right container.
I'm working my way through tutorials on dockerizing the WeatherForecast API web template from ASP.NET Core:
https://code.visualstudio.com/docs/containers/quickstart-aspnet-core
https://code.visualstudio.com/docs/containers/docker-compose
I had to start from here, because I wasn't getting a new image to build using the tutorial here: https://docs.docker.com/compose/aspnet-mssql-compose/
"1" works, which is great. However, "2" will not work on the localhost:5000/WeatherForecast port as advertised, and I'm having some trouble debugging why after many reviews of the available docs.
I should make a note that in creating the templated app from the command line, I did choose the --no-https option.
I then used docker ps to bring up the PORTS. The web app is using 5000/tcp, 0.0.0.0:32779->80/tcp. When I use 32779 instead of 5000, I get the API string returned!
I know I'm missing something within docker-compose and could use some extra eyes on it. Thank you!
EDIT: For reference, the files below were generated by my VSCode editor.
1. I ran dotnet new webapi --no-https
2. I then brought up the VSCode "command palette" and ran Docker: Add Dockerfiles to Workspace, selected 'yes' for the inclusion of a docker-compose.yml file, and chose Linux. I also chose to use port 5000. I use Fedora 30.
4. I run dotnet build from the project root in the terminal.
5. If I run docker commands and make the ports explicit it will work as advertised, but if I run docker-compose -f <yml-file> up -d --build, it will not.
I just re-read this and find it annoying that I'm stuck within VSCode to fix the issue (according to the docs)
By default Docker will assign a randomly chosen host port to a port exposed by a container (the container port). In this case the exposed (container) port is 5000, but it will be exposed on the host via a random port, such as 32737.
You can use specific port on the host by changing the Docker run options used by docker-run: debug task (defined in .vscode/tasks.json file). For example, if you want to use the same port (5000) to expose the service, the docker-run: debug task definition would look like this:
a. docker-compose.yml
# Please refer https://aka.ms/HTTPSinContainer on how to setup an
# https developer certificate for your ASP .NET Core service.
version: '3.4'
services:
  aspdotnetdocker2:
    image: aspdotnetdocker2
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000
b. Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["aspdotnet_docker2.csproj", "./"]
RUN dotnet restore "./aspdotnet_docker2.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "aspdotnet_docker2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "aspdotnet_docker2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "aspdotnet_docker2.dll"]
Have a look at docker-compose docs:
SHORT SYNTAX
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
So try:
ports:
  - "5000:<port in ASPNETCORE_URLS>"
  # e.g.
  # - "5000:80"
  # - "44388:443"
I know this is such a noob problem, but I am having trouble understanding how to get my .NET Core website to connect to my MySQL container. Some background: both MySQL and the .NET Core website are in their own separate containers. I have already started the MySQL container and set up the root account. I am using Entity Framework inside the .NET Core project.
I created the MySQL container using this statement:
docker run --name mysql_container -d -p 3306:3306
Below is the dockerfile that Visual Studio generated for me.
So what do I tell my .NET Core program the IP address of the MySQL container is, if the IP can change?
Inside the .NET Core program:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var connection = $"Server={GetDBAddress()};Database=myDataBase;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}
If I write the GetDBAddress function, what goes in there? I cannot simply return localhost because it's another Docker container, right? As of right now I am trying to use localhost and I get connection refused, but I am able to connect to the MySQL db using Workbench.
Also, I am not sure, but can these two setups be combined into a single file? I think they're called docker-compose files.
Dockerfile
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ["ToDoService/ToDoService.csproj", "ToDoService/"]
RUN dotnet restore "ToDoService/ToDoService.csproj"
COPY . .
WORKDIR "/src/ToDoService"
RUN dotnet build "ToDoService.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "ToDoService.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ToDoService.dll"]
If you've launched MySQL exposing its port, you should be able to reach it from localhost on port 3306.
Otherwise, as you suggested, there is the possibility to set up a docker-compose file. This file usually contains all the configuration your application needs to run. For example, a suitable configuration for your application (note: I'm assuming you're using MySQL 5.7 since you haven't specified a version) could be:
version: '3.3'
services: # list of services composing your application

  db: # the service hosting your MySQL instance
    image: mysql:5.7 # the image and tag docker will pull from docker hub
    volumes: # this section allows you to configure persistence across restarts
      - db_data:/var/lib/mysql
    restart: always # if the db crashes somehow, restart it
    environment: # env variables, you usually set these to override existing ones
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: todoservice
      MYSQL_USER: root
      MYSQL_PASSWORD: root

  todoservice: # your application service
    build: ./ # this tells docker-compose not to pull from docker hub, but to build from the Dockerfile it will find in ./
    restart: always
    depends_on: # set a dependency between your service and the database: your application will not run if the db service is not running, but this doesn't assure you that the database will be ready to accept incoming connections (so your application could crash until the db initializes itself)
      - db

volumes:
  db_data: # this tells docker-compose to save your data in a generic docker volume. You can see existing volumes by typing 'docker volume ls'
To launch and deploy your application, you now need to type this in a terminal:
docker-compose up
This will bring up your deployment. Note that no ports are exposed here: only your service will be able to access the database at db:3306 (you don't need to refer to it by IP; you can reach other services using their service names).
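To make that concrete, here is a hedged sketch of the ConfigureServices method from the question pointed at the db service above; the database name and credentials simply mirror the environment variables in the compose file, so adjust them to your own values:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // "db" is the compose service name; Docker's embedded DNS resolves it to the
    // MySQL container on the shared network, so no IP address is needed.
    var connection = "Server=db;Port=3306;Database=todoservice;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}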
For debug purposes, you can still open your db ports by adding these lines under image:
ports:
  - "3306:3306"
Note that this port has to be free (no other system service may be using it), otherwise the entire deployment will fail.
Final note: since docker-compose tries to avoid rebuilding your images every time you bring up the service, to force it to build a new one you have to append --build to the docker-compose up command.
To bring down your deployment just use docker-compose down. To delete all the persistent data related to your deployment (i.e. starting with a fresh db), append the -v flag to the previous command.
Hope it helps!
I am in the process of migrating an API from Windows .NET Full Framework (4.6.1) to ASP.NET Core.
I was able to spin up a container in our Rancher environment via the following Dockerfile:
root#a84db3bdc3cf:/app# cat Dockerfile
FROM microsoft/aspnetcore:1.1
ARG source=.
WORKDIR /app
EXPOSE 80
COPY $source .
ENTRYPOINT ["dotnet", "myapp.dll"]
I have noticed that anywhere my code uses HttpClient to make a call to an HTTPS URL, the call fails.
The Message in the InnerException reads:
SSL connect error
Has anyone seen this, and if so, do you know if there is some extra configuration required for the container to be able to do HTTP operations via HTTPS? It seems to work fine with HTTP.
EDIT:
My application doesn't run over HTTPS. It is hosted over HTTP. The code in my application is trying to make a call to a remote API that is hosted over HTTPS, and has a valid certificate.
.NET Core on Linux uses libcurl to provide the implementation of HttpClient.
You can test whether that's working for you like this:
docker run microsoft/aspnetcore-build:2.0 curl https://server
and if it doesn't, try this to see if you have the same problem as I did:
docker run microsoft/aspnetcore-build:2.0 curl -k https://server
For me, the problem was that part of the certificate chain was missing from the cert store for the site I was connecting to.
I modified the Dockerfile like so:
# build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
COPY missing-cert.crt /usr/share/ca-certificates
RUN echo missing-cert.crt >> /etc/ca-certificates.conf
RUN update-ca-certificates
ENTRYPOINT ["dotnet", "myaspdotnetapp.dll"]
And it works for me
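If you want to confirm from C# (rather than curl) that the failure is certificate-related, here is a hedged diagnostic sketch along the lines of curl -k. It assumes .NET Core 2.0 or later, and it is for debugging only, never for production - the proper fix is adding the missing certificate as shown in the Dockerfile above.
using System;
using System.Net.Http;

class SslProbe
{
    static void Main()
    {
        var handler = new HttpClientHandler
        {
            // Accept any server certificate - the C# counterpart of `curl -k`.
            // If the request succeeds only with this enabled, the certificate
            // chain (or local cert store) is the problem.
            ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true
        };

        using (var client = new HttpClient(handler))
        {
            // "https://server" is a placeholder, as in the curl commands above.
            var response = client.GetAsync("https://server").GetAwaiter().GetResult();
            Console.WriteLine(response.StatusCode);
        }
    }
}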
I think you have to expose 443 instead of 80, as 443 is the default port for HTTPS:
root#a84db3bdc3cf:/app# cat Dockerfile
FROM microsoft/aspnetcore:1.1
ARG source=.
WORKDIR /app
EXPOSE 443
COPY $source .
ENTRYPOINT ["dotnet", "myapp.dll"]
But note that you should make sure that your container contains a certificate:
docker container ssl certificates