Communicating with Docker Service through Published Port - c#

I'm trying to run multiple instances of a Docker image on a single node and send requests to the node allowing Docker to load balance between the instances.
If I use the Run command shown below, the container behaves as expected: I can send a request from another machine on port 80 and the request is serviced by the container. However, if I spin up a service with the Service command shown below, I do get 5 replicated tasks running, but the request only returns a 404 error.
How can I communicate with the service through my exposed port?
This sample includes an ASP.NET Core 2.0 API that returns a Guid unique to the instance of the app.
Controller
using Microsoft.AspNetCore.Mvc;
using System;

namespace MinimalDockerTest.Controllers {
    [Route("api/[controller]")]
    public class NodeController : Controller {
        [HttpGet]
        public IActionResult Get() {
            return Ok(NodeId);
        }

        private static Guid NodeId {
            get;
        } = Guid.NewGuid();
    }
}
Dockerfile
#Context is binary output folder, i.e. bin/publishoutput
FROM microsoft/aspnetcore:2.0-nanoserver-sac2016
EXPOSE 80
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "MinimalDockerTest.dll"]
Build Command
docker build -t minimaltest .
Run Command
docker run -p 80:80 --name minimaltest minimaltest
Service Command
docker service create -p 80:80 --replicas 5 --name minimaltest minimaltest
Request
GET: http://node_ip/api/node
System
Windows 10 1703 Build 15063.0
Docker CE 17.12.0-ce-win46 (15048)
Edit
Found some good info here on SO.
I believe you need to publish the port in "host" mode (learn.microsoft.com/en-us/virtualization/windowscontainers/…). Also, it will be a one-to-one port mapping between the running container and the host, so you will not be able to run several containers on the same port. Routing mesh is not working on Windows yet.
Source
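For reference, host-mode publishing uses the long --publish syntax; this is a sketch based on the comment above, not a command from the original question, and with mode=host only one task per node can bind port 80:
docker service create --publish mode=host,target=80,published=80 --name minimaltest minimaltest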
Now the question remains: when will mesh routing be supported?

Make sure you aren't pulling a different image from the registry. By default, docker service will resolve the image name from the registry so that it will work on multiple nodes in the cluster, while docker run will use the image as it exists on the local node. You can disable this resolution process with the --no-resolve-image option:
docker service create -p 80:80 --replicas 5 --name minimaltest \
--no-resolve-image minimaltest

Apparently routing mesh is not available for Windows 10 version 1703; it was made the default option on Windows with version 1709.
Box Boat
MS Blog
SO

Related

Host ASP.NET Core in Linux with Apache Docker

I am referencing this article on Microsoft's documentation:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-apache?view=aspnetcore-3.1
Has anyone tried to accomplish these steps in a Docker container?
I have been at this for a couple of days and I can't get the kestrel-helloapp.service file to start my application automatically when I run the container.
After running the container I am able to manually go into it and start my application with dotnet WebApplication3.dll.
I am under the impression that this should happen automatically after enabling the service file.
The only way I am able to get it to work is by adding this to the Dockerfile:
ENTRYPOINT ["dotnet" ,"WebApplication3.dll"]
But when I do this it causes the Apache server to not start up automatically.
Here is my Dockerfile:
FROM centos:7
# install sudo and dotnet sdk
RUN yum install sudo -y
RUN sudo rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm
RUN yum install epel-release -y
RUN yum install dnf -y
RUN sudo dnf install dotnet-sdk-3.1 -y
# copy app files over
COPY ["./publish/", "/var/www/helloapp/publish/"]
# install apache and enable it
RUN sudo yum -y install httpd mod_ssl
RUN systemctl enable httpd.service
RUN yum install initscripts -y
RUN sudo service httpd configtest
# copy and enable service file
COPY ["./kestrel-helloapp.service", "/etc/systemd/system/"]
RUN sudo systemctl enable kestrel-helloapp.service
# start apache
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
EXPOSE 80
docker run command:
docker run -v "C:\Users\Nick\source\repos\docker-testing\version1\helloapp.conf:/etc/httpd/conf.d/helloapp.conf" -e "ASPNETCORE_URLS=http://+:8080" -p 80:80 -p 8080:8080 -t version1
helloapp.conf file:
<VirtualHost *:*>
RequestHeader set "X-Forwarded-Proto" expr="http"
</VirtualHost>
<VirtualHost *:8080>
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:5000/
ProxyPassReverse / http://127.0.0.1:5000/
ServerName www.example.com
ServerAlias *.example.com
ErrorLog /var/log/httpd/helloapp-error.log
CustomLog /var/log/httpd/helloapp-access.log common
</VirtualHost>
kestrel-helloapp.service file:
[Unit]
Description=Example .NET Web MVC App running on CentOS 7
[Service]
WorkingDirectory=/var/www/helloapp/publish
ExecStart=/usr/local/bin/dotnet /var/www/helloapp/publish/WebApplication3.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
User=apache
Environment=ASPNETCORE_ENVIRONMENT=Production
[Install]
WantedBy=multi-user.target
I know the configuration is correct because everything works fine when I start the application manually. The service file just seems to be not starting the application on boot.
Any help would be greatly appreciated. Thanks!
I've run into a similar problem (just with an Ubuntu base image). The problem you are likely experiencing is that only one process is launched as the container's entry point, and that process is not the system daemon (systemd on CentOS 7). As a result, your services are not started as you described, because they would normally be launched on a run-level change by that same system daemon.
In my opinion, not launching the system daemon is a good thing as you want to minimize services running inside of your container.
On the other hand, you might actually want multiple services inside the container. A practical solution to your problem is to write an entry point shell script and launch the services you want to run in parallel with your main application. In my case, I wanted a customized Jenkins image, whose base image has an entry point of /usr/local/bin/jenkins.sh. You can find this in the Dockerfile of the base image you are using.
I've replaced the original entry point:
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
with:
ENTRYPOINT ["/bin/tini", "--", "/docker-entrypoint.sh"]
Where the content of /docker-entrypoint.sh is:
#! /bin/bash
/usr/bin/cron & # This is the additional service I wanted in the background
/usr/local/bin/jenkins.sh
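Applied to the Dockerfile in the question, a minimal entry point script could look like the sketch below (paths reused from the question; the script name /docker-entrypoint.sh is just a placeholder). It starts Kestrel in the background and keeps httpd in the foreground so the container has a long-running main process:
#!/bin/bash
# Start the ASP.NET Core app in the background (path taken from the question's service file)
dotnet /var/www/helloapp/publish/WebApplication3.dll &
# Run Apache in the foreground so it keeps the container alive
exec /usr/sbin/httpd -D FOREGROUND
The Dockerfile would then COPY this script in, mark it executable, and point ENTRYPOINT at it instead of at /usr/sbin/httpd.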

ASP.NET docker-compose app not coming up on assigned PORT

I'm working my way through tutorials on dockerizing the WeatherForecast API web template from ASP.NET Core:
https://code.visualstudio.com/docs/containers/quickstart-aspnet-core
https://code.visualstudio.com/docs/containers/docker-compose
I had to start from here, because I wasn't getting a new image to build using the tutorial here: https://docs.docker.com/compose/aspnet-mssql-compose/
"1" works, which is great. However, "2" will not work on the localhost:5000/WeatherForecast port as advertised, and I'm having some trouble debugging why after many reviews of the available docs.
I should make a note that in creating the templated app from the command line, I did choose the --no-https option.
I then used docker ps to check the PORTS column. The web app shows 5000/tcp, 0.0.0.0:32779->80/tcp. When I use 32779 instead of 5000, I get the API string returned!
I know I'm missing something within docker-compose and could use some extra eyes on it. Thank you!
EDIT: For reference, the files below were generated by my VSCode editor.
1. I ran dotnet new webapi --no-https
2. I then brought up the VSCode "command palette" and ran Docker: Add Dockerfiles to Workspace, selecting 'yes' for the inclusion of a docker-compose.yml file and Linux as the platform. I also chose to use port 5000. I use Fedora 30.
4. I run dotnet build from the project root in the terminal.
5. If I run docker commands directly and make the ports explicit it will work as advertised, but if I run docker-compose -f <yml-file> up -d --build, it will not.
I just re-read this and find it annoying that I'm stuck within VSCode to fix the issue (according to the docs)
By default Docker will assign a randomly chosen host port to a port exposed by a container (the container port). In this case the exposed (container) port is 5000, but it will be exposed on the host via a random port, such as 32737.
You can use a specific port on the host by changing the Docker run options used by the docker-run: debug task (defined in the .vscode/tasks.json file). For example, if you want to use the same port (5000) to expose the service, the docker-run: debug task definition would look like this:
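Roughly, that task definition in .vscode/tasks.json would look like the sketch below (field names follow the VS Code Docker extension; the project path is an assumption based on the file names later in this question):
{
  "type": "docker-run",
  "label": "docker-run: debug",
  "dependsOn": ["docker-build"],
  "dockerRun": {
    "ports": [
      { "containerPort": 5000, "hostPort": 5000 }
    ]
  },
  "netCore": {
    "appProject": "${workspaceFolder}/aspdotnet_docker2.csproj"
  }
}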
a. docker-compose.yml
# Please refer https://aka.ms/HTTPSinContainer on how to setup an
# https developer certificate for your ASP .NET Core service.
version: '3.4'

services:
  aspdotnetdocker2:
    image: aspdotnetdocker2
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000
b. Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["aspdotnet_docker2.csproj", "./"]
RUN dotnet restore "./aspdotnet_docker2.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "aspdotnet_docker2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "aspdotnet_docker2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "aspdotnet_docker2.dll"]
Have a look at docker-compose docs:
SHORT SYNTAX
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
So try:
ports:
  - "5000:<port in ASPNETCORE_URLS>"
  # e.g.
  # - "5000:80"
  # - "44388:443"

Azure app service multi container - http request to internal container (docker compose)

I have deployed a multicontainer app service in Azure using a docker compose file (preview).
The webapplication exposes port 80, so if I go to https://myproj.azurewebsites.net/ the webapplication is displayed, (it's the only public port available).
What I would like to do now is to be able to send an HTTP request from the web application to my API, which is another container hosted internally. I've tried different URLs but I'm unable to make a successful request.
The API container starts successfully, so there is nothing wrong there; I just don't know which address to use.
OrderApi Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["MyProj.OrderApi/MyProj.OrderApi.csproj", "MyProj.OrderApi/"]
COPY ["nuget.config", "MyProj.OrderApi/"]
RUN dotnet restore "v.OrderApi/MyProj.OrderApi.csproj" --configfile "MyProj.OrderApi/nuget.config"
RUN rm "MyProj.OrderApi/nuget.config"
COPY . .
WORKDIR "/src/MyProj.OrderApi"
RUN dotnet build "MyProj.OrderApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProj.OrderApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProj.OrderApi.dll"]
Docker compose file used in production (Azure)
version: '3.4'

services:
  web:
    image: myproj.azurecr.io/myproj/web
    ports:
      - '80:80'
  orderapi:
    image: myproj.azurecr.io/myproj/orderapi
Endpoints tested
https://localhost/orderapi/api - ERR_CONNECTION_REFUSED
http://localhost/orderapi/api - blocked by CORS policy (I'm certain cors is configured correctly in the api)
https://host.docker.internal/orderapi/api - ERR_CONNECTION_TIMED_OUT
orderapi/api - adds the current site as the base URL, so it tries to send a request to https://mysite.azurewebsites.net/orderapi/api
http://host.docker.internal/orderapi/api - This request has been blocked; the content must be served over HTTPS
host.docker.internal/orderapi/api - adds the current site as the base URL, so it tries to send a request to https://mysite.azurewebsites.net/host.docker.internal/orderapi/api
Unfortunately, Azure Web App for Containers can only expose one container to the Internet. This means that if you deploy multiple containers in the Web App, only one of them is accessible from the Internet. See the details under "How do I know which container is internet accessible" in Multi-container with Docker Compose.
Update:
When you set the service like this in your docker-compose file:
redis:
  image: redis
  container_name: rediscache
Then here is an example of connecting to the redis container from code:
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using StackExchange.Redis;

[Route("api/[controller]")]
[ApiController]
public class NewsController : ControllerBase
{
    // GET api/values
    [HttpGet]
    public async Task<ActionResult<IEnumerable<string>>> Get()
    {
        // connect by service name; Docker resolves "redis" to the container
        ConnectionMultiplexer connection = await ConnectionMultiplexer.ConnectAsync("redis");
        var db = connection.GetDatabase();
        //code to parse the XML
    }
}
In the docker-compose file, the orderapi service doesn't expose any port. If it doesn't expose any port, there is no way it can receive requests.
Usually, the browser gets the HTML and then sends requests directly to the API.
Maybe orderapi should expose another port (for example 88) so the browser can consume data from the API on that port, as sketched below.
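A compose file along those lines might look like this (a sketch only, using the image names from the question; note that per the other answer, Azure's multi-container preview may still only route a single public port):
version: '3.4'
services:
  web:
    image: myproj.azurecr.io/myproj/web
    ports:
      - '80:80'
  orderapi:
    image: myproj.azurecr.io/myproj/orderapi
    ports:
      - '88:80'   # hypothetical second published port; the OrderApi container listens on 80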

Connect to MySQL container from Web Api .Net Core Container? How to get Ip Address?

I know this is such a noob problem, but I am having trouble understanding how to get my .NET Core website to connect to my MySQL container. Some background: the MySQL instance and the .NET Core website are in separate containers. I have already started the MySQL container and set up the root account. I am using Entity Framework inside the .NET Core project.
I created the MySql container using this statement:
docker run --name mysql_container -d -p 3306:3306
Below is the dockerfile that Visual Studio generated for me.
So how do I tell my .NET Core program what the IP address of the MySQL container is, if the IP can change?
Inside of .Net Core Program:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var connection = $"Server={GetDBAddress()};Database=myDataBase;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}
If I write the GetDBAddress function, what goes in there? I can't simply return localhost because it's another Docker container, right? Right now I am using localhost and I get connection refused, yet I am able to connect to the MySQL db using Workbench.
Also, I am not sure, but can these two setups be combined into one file? I think they're called docker-compose files.
Dockerfile
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ["ToDoService/ToDoService.csproj", "ToDoService/"]
RUN dotnet restore "ToDoService/ToDoService.csproj"
COPY . .
WORKDIR "/src/ToDoService"
RUN dotnet build "ToDoService.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "ToDoService.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ToDoService.dll"]
If you've launched MySQL with its port published, you should be able to reach it from the host at localhost, port 3306.
Otherwise, as you suggested, you can set up a docker-compose file. This file usually contains all the configuration your application needs to run. For example, a suitable configuration for your application (note: I'm assuming you're using MySQL 5.7 since you haven't specified a version) could be:
version: '3.3'

services:              # list of services composing your application
  db:                  # the service hosting your MySQL instance
    image: mysql:5.7   # the image and tag docker will pull from docker hub
    volumes:           # this section allows you to configure persistence across multiple restarts
      - db_data:/var/lib/mysql
    restart: always    # if the db crashes somehow, restart it
    environment:       # env variables, you usually set these to override existing ones
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: todoservice
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  todoservice:         # your application service
    build: ./          # tells docker-compose not to pull from docker hub, but to build from the Dockerfile it finds in ./
    restart: always
    depends_on:        # dependency between your service and the database: your application will not run if the db service is not running, but this doesn't guarantee the database is ready to accept incoming connections (so your application could crash until the db initializes itself)
      - db
volumes:
  db_data:             # tells docker-compose to save your data in a named docker volume. You can see existing volumes by typing 'docker volume ls'
To launch and deploy your application, now you need to type in a terminal:
docker-compose up
This will bring up your deployment. Note that no database ports are published here: only your service will be able to access the database at db:3306 (you don't need to refer to it by IP; you can reach other services using the service name).
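In the asker's ConfigureServices, that means the connection string can simply use the service name db as the server. A sketch, with the credentials and database name taken from the compose file above:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // "db" is the compose service name; Docker's embedded DNS resolves it to the MySQL container
    var connection = "Server=db;Database=todoservice;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}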
For debug purposes, you can still publish the db port by adding these lines under image:
ports:
  - "3306:3306"
Note that this port has to be free (no other system services are using it), otherwise the entire deployment will fail.
Final note: since docker-compose avoids rebuilding your images every time you bring up the service, you have to append --build to the docker-compose up command to force a new build.
To bring down your deployment, just use docker-compose down. To delete all the persistent data related to it (i.e. to start with a fresh db), append the -v flag to that command.
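For quick reference, the commands described above are (run from the directory containing docker-compose.yml):
docker-compose up --build   # build the images and start both services
docker-compose down         # stop and remove the containers
docker-compose down -v      # also remove the named volume (start with a fresh database)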
Hope it helps!

Cannot run asp.net 5 from docker

I have followed this guide: Running ASP.NET 5 applications in Linux Containers with Docker, and I cannot get it to work on my Windows PC or my Linux server. My Dockerfile looks like this:
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
EXPOSE 5000/tcp
ENTRYPOINT ["dnx", "-p", "project.json", "web"]
I then ran docker build -t myapp . followed by docker run -d -p 80:5000 myapp. It says it is running, but I cannot open the website in the browser. I know that on Windows you are supposed to find the IP address of the virtual machine the containers actually run on by using docker-machine ip default, which turned out to be 192.168.99.100, but when I navigated to http://192.168.99.100 I just got the generic "This webpage is not available" error message. I have also tried variations of the docker run command, such as docker run -it -p 80:5000 myapp and docker run -p 80:5000 myapp, as well as different ports, such as docker run -d -p 5000:5000 myapp, but nothing seems to work.
I have tried this both on my Windows machine and on my Linux server, and neither works.
I am able to run dnx web without docker and everything works as expected.
Take a look at my answer here: ASP.NET 5.0 beta 8 in Docker doesn't start
Essentially, Docker is forwarding requests to your container on the 0.0.0.0 network interface, but Kestrel is only listening on localhost by default.
So yes, the requests are being passed off to your docker container, but they are not being accepted by the Kestrel webserver. For that reason, you need to override the server.urls property as others have posted:
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:5000"]
You should then see:
Now listening on: http://0.0.0.0:5000
when running your container. You can also do a quick docker ps command to verify that 0.0.0.0 is in fact the network interface that Docker is forwarding requests for.
I also wrote a bit about how to get ASP.NET 5 running on Docker on Windows - it's a bit more involved since not only does Docker have to forward requests to the container, but we have to get VirtualBox to pass off requests to the Docker virtual machine boot2docker (typically called default in Virtual Box) before Docker can hand them off to our container.
Post is here: http://dotnetliberty.com/index.php/2015/10/25/asp-net-5-running-in-docker-on-windows/
For a more complete understanding of your app environment, please post your project.json file and the beta version of ASP.NET you are working with.
For now, you can try cleaning up your Dockerfile by taking the "project.json" and "-p" arguments out of the ENTRYPOINT instruction, removing tcp from the EXPOSE command, and finally specifying the "--server.urls" argument in the ENTRYPOINT instruction so that it uses 0.0.0.0 instead of the default localhost, as follows:
FROM microsoft/aspnet
COPY . /project
WORKDIR /project
RUN ["dnu", "restore"]
EXPOSE 5000
ENTRYPOINT ["dnx", "web", "--server.urls"]
Alternatively, you can try dropping the EXPOSE command altogether and specifying the port, 5000, directly in the ENTRYPOINT instruction as follows:
FROM microsoft/aspnet
COPY . /project
WORKDIR /project
RUN ["dnu", "restore"]
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:500"]
Either way you would then build your container and run it using something like the following:
$ docker run -it -p 80:5000 myapp
For anyone having this issue now in RC2, the project.json commands section no longer exists. You have to update Program.cs by chaining in .UseUrls("http://0.0.0.0:5000"). You can change 5000 to whatever port you want here.
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .UseUrls("http://0.0.0.0:5000")
            .Build();

        host.Run();
    }
}
You can find a working, step-by-step tutorial for Docker and ASP.NET Core RC1 here:
https://www.sesispla.net/blog/language/en/2015/12/recipe-asp-net-5-net-core-to-the-docker-container/
The tricky part you are probably missing is to modify the web command in your project.json as follows:
"commands": {
"web": "Microsoft.AspNet.Server.Kestrel --server.urls http://0.0.0.0:5000"
},
By default, Kestrel only accepts localhost connections. With this change you allow connections from any source.
