I have a simple C# app that connects to PostgreSQL.
I would like to create an image with this app and just run it with Docker.
Everything is ok when I use:
$ docker build
$ docker run postgres
$ docker run my_app
Everything is also fine when I use Compose from the application directory:
$ docker-compose build
$ docker-compose up
But is there any way to use docker-compose with an image that I built previously?
I would like to publish this image to my repo so somebody else from my team can just download and run it (app + database).
When I run docker-compose build and then docker-compose run my_app, I get an exception while connecting to the database:
dbug: Npgsql.NpgsqlConnection[3]
Opening connection to database 'POSTGRES_USER' on server 'tcp://postgres:5432'.
Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. (No such device or address) ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: No such device or address
My current docker-compose.yml file:
version: '2'
services:
  web:
    container_name: 'postgrescoreapp'
    image: 'postgrescoreapp'
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/var/www/postgrescoreapp
    ports:
      - "5001:5001"
    depends_on:
      - "postgres"
    networks:
      - postgrescoreapp-network
  postgres:
    container_name: 'postgres'
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - postgrescoreapp-network
networks:
  postgrescoreapp-network:
    driver: bridge
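For context, the app opens its connection with Npgsql using the Compose service name postgres as the host. A simplified sketch of that connection (the credentials and database name here are placeholders, not the real ones):

using Npgsql;

// Simplified sketch: the host is the Compose service name "postgres", which is only
// resolvable from containers attached to postgrescoreapp-network.
// Username, password, and database below are placeholder values.
var connectionString = "Host=postgres;Port=5432;Username=postgres;Password=password;Database=postgres";
using (var connection = new NpgsqlConnection(connectionString))
{
    connection.Open();
}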
You should build the image with a name of the form (registryName:registryPort)/imagename:version:
$ docker build -t myRegistry.example.com:5000/myApp:latest .
$ docker build -t myRegistry.example.com:5000/myDb:latest .
Now add these lines to the docker-compose file:
  Myapp:
    image: myRegistry.example.com:5000/myApp:latest
  MyDb:
    image: myRegistry.example.com:5000/myDb:latest
And then push them:
$ docker push myRegistry.example.com:5000/myApp:latest
$ docker push myRegistry.example.com:5000/myDb:latest
Your teammates should now be able to pull them:
$ docker pull myRegistry.example.com:5000/myApp:latest
$ docker pull myRegistry.example.com:5000/myDb:latest
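Once the images are pulled, and with a compose file that references them by image (as in the snippet above) rather than building locally, your teammates can start the whole stack the usual way:
$ docker-compose up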
Yes, you can use a previously created image from a repository via Docker Compose.
Example:
version: '2'
services:
  app:
    image: farhad/my_app
    ports:
      - "80:80"
    networks:
      - testnetwork
  postgres:
    image: postgres:latest
    networks:
      - testnetwork
networks:
  testnetwork:
    external: true
Example explanation:
I'm creating a container named app from my repository image and another container named postgres from the official library postgres image.
Please note that you need to build and push your custom image to the repository first.
I'm using a user-defined external network here, so you need to create testnetwork before running docker-compose up.
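The external network can be created ahead of time with the standard Docker command:
$ docker network create testnetwork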
I know this looks like a duplicate, but I checked all the other questions and none of them solved my problem.
So, this is my docker-compose.yml file:
version: '3.8'
services:
  #db:
  #  image: postgres
  messageBroker:
    image: rabbitmq:management
    hostname: "messageBroker"
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 5s
      timeout: 15s
      retries: 3
    networks:
      - services-network
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: "admin"
      RABBITMQ_DEFAULT_PASS: "password"
  serviceDiscovery:
    image: steeltoeoss/eureka-server
    hostname: eureka-server
    networks:
      - services-network
    ports:
      - "8761:8761"
  order-api:
    image: ${DOCKER_REGISTRY-}orderapi
    hostname: orderapi
    environment:
      - Eureka__Client__ServiceUrl=http://serviceDiscovery:8761/eureka/
      - Eureka__Client__ShouldRegisterWithEureka=true
      - Eureka__Client__ValidateCertificates=false
    networks:
      - services-network
    depends_on:
      - serviceDiscovery
    build:
      context: .
      dockerfile: Services/Order/Dockerfile
    links:
      - "serviceDiscovery"
  product-api:
    image: ${DOCKER_REGISTRY-}productapi
    hostname: productapi
    restart: on-failure
    environment:
      - Eureka__Client__ServiceUrl=http://serviceDiscovery:8761/eureka/
      - Eureka__Client__ShouldRegisterWithEureka=true
      - Eureka__Client__ValidateCertificates=false
    networks:
      - services-network
    depends_on:
      messageBroker:
        condition: service_healthy
      serviceDiscovery:
        condition: service_started
    build:
      context: .
      dockerfile: Services/Products/Dockerfile
    links:
      - "serviceDiscovery"
      - "messageBroker"
networks:
  services-network:
This is the config file where I connect to RabbitMQ:
using System;
using RabbitMQ.Client;

namespace MessageBroker;

public static class MessageBrokerConfig
{
    public static IModel ChannelConfig()
    {
        // Connect to the broker via its Compose host name "messageBroker".
        var channel = new ConnectionFactory { Uri = new Uri("amqp://admin:password@messageBroker:5672") }
            .CreateConnection()
            .CreateModel();
        return channel;
    }
}
But when I run docker-compose up, I still get this error:
product-api_1 | Unhandled exception. RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
product-api_1 | ---> System.AggregateException: One or more errors occurred. (Connection failed)
product-api_1 | ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
product-api_1 | ---> System.Net.Sockets.SocketException (111): Connection refused
The product service registers with the service discovery without a problem, and I followed almost the same steps for the broker connection.
I know the problem isn't the rabbitmq container taking time to be ready, because I can connect to it from my machine. Every time the product service fails to launch it restarts, but no matter how long it takes, I still get this error. The logs of the messageBroker container show it's healthy (and if it weren't, I would not be able to access it from my machine).
I don't have any other ideas. I've been on this problem for 3 days already and I'm going crazy. I checked tutorials, followed the steps, and nothing.
Solved, guys! The configuration was correct. However, whenever I created a new container, the image was never rebuilt. So even though I had changed the code, the container was still running the image built from the first version of the code, with "localhost" as the hostname. All I did was delete the image of the service, and Docker created a new one from the correct code.
I see this as a temporary solution, since the image has to be rebuilt every time the code changes and a new container is created. But that is another subject, and I don't think it will be hard to handle; passing --build to docker compose is probably enough. I will look into it later.
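In other words, forcing Compose to rebuild the image on startup should avoid running a stale one:
$ docker-compose up --build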
Your docker-compose file does not have the networking set up correctly, and IMO you don't need the links. Here is a minimal docker-compose file that worked for me: I removed the links, and I also removed the service discovery, which isn't in play for connectivity between the RabbitMQ client and the broker.
version: '3.8'
services:
  #db:
  #  image: postgres
  messageBroker:
    image: rabbitmq:management
    hostname: "messageBroker"
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 5s
      timeout: 15s
      retries: 3
    networks:
      - services-network
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: "admin"
      RABBITMQ_DEFAULT_PASS: "password"
  product-api:
    image: ${DOCKER_REGISTRY-}productapi
    hostname: productapi
    restart: on-failure
    environment:
      - Eureka__Client__ServiceUrl=http://serviceDiscovery:8761/eureka/
      - Eureka__Client__ShouldRegisterWithEureka=true
      - Eureka__Client__ValidateCertificates=false
    networks:
      - services-network
    depends_on:
      messageBroker:
        condition: service_healthy
    build:
      context: client
      dockerfile: Dockerfile
networks:
  services-network:
In the docker-compose file, something is missing from the network definition; put it like this:
networks:
  services-network:
    driver: bridge
I have 3 services that I am attempting to integrate with Dapr within my local Docker instance. Each of the services is running in a Linux container.
version: '3.4'
networks:
  NextWare:
    external: true
services:
  nextware.daprtest.service1.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DAPR_GRPC_PORT=40001
    ports:
      - "40001:40001" # Dapr instances communicate over gRPC so we need to expose the gRPC port
      - "4001:80"
    depends_on:
      - redis
      - placement
    networks:
      - NextWare
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
  nextware.daprtest.service1.api-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
      "-app-id", "nextware.daprtest.service1.api",
      "-app-port", "3001",
      "-dapr-grpc-port", "40001",
      "-metrics-port", "9091",
      "-placement-host-address", "placement:50006", # Dapr's placement service can be reached via the Docker DNS entry
      "-components-path", "/components"]
    volumes:
      - "./components/nextware.daprtest.service1.api:/components" # Mount our components folder for the runtime to use
    depends_on:
      - nextware.daprtest.service1.api
    network_mode: "service:nextware.daprtest.service1.api" # Attach the sidecar to the service1 network namespace
  nextware.daprtest.service2.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DAPR_GRPC_PORT=40003
    ports:
      - "40003:40003" # Dapr instances communicate over gRPC so we need to expose the gRPC port
      - "4003:80"
    depends_on:
      - redis
      - placement
    networks:
      - NextWare
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
  nextware.daprtest.service2.api-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
      "-app-id", "nextware.daprtest.service2.api",
      "-app-port", "3003",
      "-dapr-grpc-port", "40003",
      "-metrics-port", "9093",
      "-placement-host-address", "placement:50006" # Dapr's placement service can be reached via the Docker DNS entry
      ]
    volumes:
      - "./components/nextware.daprtest.service2.api:/components" # Mount our components folder for the runtime to use
    depends_on:
      - nextware.daprtest.service2.api
    network_mode: "service:nextware.daprtest.service2.api" # Attach the sidecar to the service2 network namespace
  nextware.daprtest.service3.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DAPR_GRPC_PORT=40005
    ports:
      - "40005:40005" # Dapr instances communicate over gRPC so we need to expose the gRPC port
      - "4005:80"
    depends_on:
      - redis
      - placement
    networks:
      - NextWare
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
  nextware.daprtest.service3.api-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
      "-app-id", "nextware.daprtest.service3.api",
      "-app-port", "3005",
      "-dapr-grpc-port", "40005",
      "-metrics-port", "9095",
      "-placement-host-address", "placement:50006" # Dapr's placement service can be reached via the Docker DNS entry
      ]
    volumes:
      - "./components/nextware.daprtest.service3.api:/components" # Mount our components folder for the runtime to use
    depends_on:
      - nextware.daprtest.service3.api
    network_mode: "service:nextware.daprtest.service3.api" # Attach the sidecar to the service3 network namespace
Running docker ps I see the following containers...
After the services are up and running, I attempt to invoke the following code from Service3 to Service1:
var request = _mapper.Map<UpdateRequest>(integrationCommand);
await _daprClient.InvokeMethodAsync("nextware.daprtest.service1.api", "Update", request, cancellationToken);
I am getting the following exception...
Is the RequestUri correct ...
"http://127.0.0.1:3500/v1.0/invoke/nextware.daprtest.service1.api/method/Update"
Given this is the DaprClient which is invoking the sidecar, I assume the above RequestUri is correct.
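For what it's worth, that URI can also be exercised with a plain HttpClient to separate the sidecar from the SDK. A rough sketch (the sidecar HTTP port and payload come from the URI and code above; everything else is just a hypothetical debugging aid):

using System;
using System.Net.Http;
using System.Net.Http.Json;

// Hypothetical manual equivalent of the DaprClient call above, useful for seeing the raw
// status code and body the sidecar returns for the invoke request.
var http = new HttpClient();
var response = await http.PostAsJsonAsync(
    "http://127.0.0.1:3500/v1.0/invoke/nextware.daprtest.service1.api/method/Update",
    request);
Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");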
What am I missing?
The reason for the error was a malformed ID and method name.
I moved from Docker Compose to Tye and hit the same issue.
Then I altered the invocation as per this screenshot.
While this still did not call through to the Update method, it no longer returned the 500 exception.
Instead I was getting the following exception, which I was just as stumped to resolve:
The remote certificate is invalid according to the validation procedure: RemoteCertificateNameMismatch, RemoteCertificateChainErrors
This was due to the following startup configuration setting, which I commented out, and everything worked fine after that ...
Dapr Rocks!
I am quite new to RabbitMQ and I am working on a POC to run a C# solution that uses RabbitMQ in a Docker container.
I managed to write the docker-compose.yml and Dockerfile and to run RabbitMQ. However, my solution cannot reach the RabbitMQ host. I think I might be missing some configuration, but I cannot tell what.
docker-compose.yml
version: '3.4'
services:
  rabbit-sender:
    image: rabbit-sender
    container_name: rabbit-sender
    build:
      context: ../SenderRabitMQ
      dockerfile: debug.Dockerfile
    env_file: common.env
    networks:
      - rabbitPoc
    expose:
      - "80"
  rabbit-receiver:
    image: rabbit-receiver
    container_name: rabbit-receiver
    build:
      context: ../ReceiveRabitMQ
      dockerfile: debug.Dockerfile
    env_file: common.env
    networks:
      - rabbitPoc
    expose:
      - "80"
  rabbitmq:
    image: rabbitmq:3.7.15
    hostname: rabbitmq
    build:
      context: rabbit
      dockerfile: debug.Dockerfile
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
debug.Dockerfile
# Install RabbitMQ
FROM ubuntu:14.04.1
CMD docker pull dockerfile/rabbitmq
CMD docker build -t="dockerfile/rabbitmq" github.com/dockerfile/rabbitmq
FROM dotnet-core-sdk-2.1-debug:latest AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY SenderRabitMQ/SenderRabitMQ.csproj SenderRabitMQ/
RUN dotnet restore SenderRabitMQ/SenderRabitMQ.csproj
# Copy everything else and build
COPY ./ ./
RUN dotnet publish SenderRabitMQ/SenderRabitMQ.csproj -c Debug -o out --no-restore
# Build runtime image
FROM dotnet-core-aspnet-2.1-debug:latest
WORKDIR /app
COPY --from=build-env /app/SenderRabitMQ/out .
ENTRYPOINT ["dotnet", "SenderRabitMQ.dll"]
RUN command
docker run --hostname myrabbit rabbitmq:3
Connecting to RabbitMQ
var factory = new ConnectionFactory() { HostName = "myrabbit:5672" , DispatchConsumersAsync = true };
This is the error received when running the RabbitSender that's supposed to post a message to the queue.
rabbit-sender | Unhandled Exception: RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
 ---> System.AggregateException: One or more errors occurred. (Connection failed)
 ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
 ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: Connection refused 127.0.0.1:5672
Your docker compose sets the RabbitMQ service host name to be rabbitmq and not myrabbit (which is what you're trying to connect to). Try this instead:
var factory = new ConnectionFactory() { HostName = "rabbitmq", Port = 5672, DispatchConsumersAsync = true };
You will also need the rabbitmq section of the docker-compose file to be on the same network as the other services:
rabbitmq:
  image: rabbitmq:3.7.15
  hostname: rabbitmq
  build:
    context: rabbit
    dockerfile: debug.Dockerfile
  ports:
    - "5672:5672"
    - "15672:15672"
  networks:
    - rabbitPoc
  volumes:
    - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
Hope that helps!
You should use
HostName = "http://host.docker.internal:5672"
or
HostName = "host.docker.internal:5672"
instead of
HostName = "myrabbit:5672"
The reason is:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows.
https://docs.docker.com/docker-for-windows/networking/
In the Docker container, at the place where you make the connection to the locally set up RabbitMQ, you need to give the host as follows:
host.docker.internal
It will work.
I am trying to call an API on my .NET Core application which saves something into my database. When I run my application normally it is fine, but when I run it with Docker, Entity Framework shows this error:
Unable to connect to any of the specified MySQL hosts.
here is my docker-compose.yml file:
version: '3.0'
services:
  db:
    image: mysql:latest
    command: --lower_case_table_names=1
    environment:
      MYSQL_DATABASE: onlinesmlogs
      MYSQL_USER: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "true"
    volumes:
      - dbdata:/var/lib/mysql
      - _MySQL_Init_Script:/docker-entrypoint-initdb.d
    restart: always
  onlinesm.logs:
    image: onlinesm
    build:
      context: .
    ports:
      - "8080:80"
volumes:
  dbdata: {}
  _MySQL_Init_Script: {}
and here is my connection string:
"server=127.0.0.1;port=3306; Database=test; Uid=root; Pwd="
I am running Kafka using a Docker Compose file:
version: "2"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- 2181:2181
kafka:
container_name: kafka
image: wurstmeister/kafka
ports:
- 9092:9092
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.1.109:9092
KAFKA_CREATE_TOPICS: "test:1:1"
However, when I try to produce a message using my C# client with "bootstrap.servers: 172.16.1.109:9092", it times out.
Is there another setting that I need?
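For reference, the producing side looks roughly like this (using Confluent.Kafka here is an assumption about which client library is in play; the topic and message are placeholders):

using System;
using Confluent.Kafka;

// Minimal producer pointed at the advertised listener from the compose file above.
// Confluent.Kafka and the "test" topic are assumptions for illustration.
var config = new ProducerConfig { BootstrapServers = "172.16.1.109:9092" };
using (var producer = new ProducerBuilder<Null, string>(config).Build())
{
    var result = producer.ProduceAsync("test", new Message<Null, string> { Value = "hello" })
        .GetAwaiter().GetResult();
    Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
}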