I am running Kafka using a Docker Compose file:
version: "2"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181:2181
  kafka:
    container_name: kafka
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.1.109:9092
      KAFKA_CREATE_TOPICS: "test:1:1"
However, when I try to produce a message from my C# client, with "bootstrap.servers: 172.16.1.109:9092", it times out.
Is there another setting that I need?
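One setting that often matters here, sketched under the assumption that 172.16.1.109 is the Docker host's address and is reachable from the client machine: the wurstmeister image also honors KAFKA_LISTENERS, which controls what the broker binds to inside the container, while KAFKA_ADVERTISED_LISTENERS is the address handed back to clients after the initial bootstrap connection. Binding to all interfaces while advertising the host IP would look like:

```yaml
kafka:
  container_name: kafka
  image: wurstmeister/kafka
  ports:
    - 9092:9092
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # Bind on all interfaces inside the container
    KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    # Address returned to clients; must be reachable from the client machine
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.16.1.109:9092
    KAFKA_CREATE_TOPICS: "test:1:1"
```

If the advertised address does not match something the client can actually reach, the initial connection succeeds but subsequent produce requests time out.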
I know this looks duplicated, but I checked all the other questions and none of them solved my problem.
So, this is my docker-compose.yml file:
version: '3.8'
services:
  #db:
  #  image: postgres
  messageBroker:
    image: rabbitmq:management
    hostname: "messageBroker"
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 5s
      timeout: 15s
      retries: 3
    networks:
      - services-network
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: "admin"
      RABBITMQ_DEFAULT_PASS: "password"
  serviceDiscovery:
    image: steeltoeoss/eureka-server
    hostname: eureka-server
    networks:
      - services-network
    ports:
      - "8761:8761"
  order-api:
    image: ${DOCKER_REGISTRY-}orderapi
    hostname: orderapi
    environment:
      - Eureka__Client__ServiceUrl=http://serviceDiscovery:8761/eureka/
      - Eureka__Client__ShouldRegisterWithEureka=true
      - Eureka__Client__ValidateCertificates=false
    networks:
      - services-network
    depends_on:
      - serviceDiscovery
    build:
      context: .
      dockerfile: Services/Order/Dockerfile
    links:
      - "serviceDiscovery"
  product-api:
    image: ${DOCKER_REGISTRY-}productapi
    hostname: productapi
    restart: on-failure
    environment:
      - Eureka__Client__ServiceUrl=http://serviceDiscovery:8761/eureka/
      - Eureka__Client__ShouldRegisterWithEureka=true
      - Eureka__Client__ValidateCertificates=false
    networks:
      - services-network
    depends_on:
      messageBroker:
        condition: service_healthy
      serviceDiscovery:
        condition: service_started
    build:
      context: .
      dockerfile: Services/Products/Dockerfile
    links:
      - "serviceDiscovery"
      - "messageBroker"
networks:
  services-network:
This is the config file I use to connect to RabbitMQ:
using RabbitMQ.Client;

namespace MessageBroker;

public static class MessageBrokerConfig
{
    public static IModel ChannelConfig()
    {
        // "messageBroker" resolves via Docker's internal DNS to the RabbitMQ container
        var channel = new ConnectionFactory { Uri = new Uri("amqp://admin:password@messageBroker:5672") }
            .CreateConnection()
            .CreateModel();
        return channel;
    }
}
But when I run docker-compose up I still get the error:
product-api_1 | Unhandled exception. RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
product-api_1 | ---> System.AggregateException: One or more errors occurred. (Connection failed)
product-api_1 | ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
product-api_1 | ---> System.Net.Sockets.SocketException (111): Connection refused
The product service can register with the service discovery without a problem, and I followed almost the same steps for the message broker.
I know the problem isn't the rabbitmq container taking time to be ready, because I can connect from my own machine. Every time the product service fails to launch it restarts, but no matter how long it takes, I still get this error. The log of the messageBroker container shows it's healthy (and if it weren't, I would not be able to access it from my machine).
I don't have any other ideas; I've been on this problem for 3 days already and I'm going crazy. I checked tutorials and followed the steps, and nothing.
Solved, guys! The configuration was correct. However, whenever I created a new container, the image was not rebuilt, so even after I changed the code it still ran the image built from the first version of the code, which used "localhost" as the hostname. So the only thing I did was delete the service's image, and Docker built a new one from the correct code.
I see this as a temporary solution, since the image has to be rebuilt every time the code changes. But that is another subject, and I don't think it will be hard to fix; the --build argument to docker-compose up is probably enough. I will give it attention later.
Your docker-compose file does not have networking set up correctly. IMO you don't need the links. Here is a minimal docker-compose file that worked for me: I removed the links, and I removed the service discovery, which isn't in play here for connectivity between the RabbitMQ client and the broker.
version: '3.8'
services:
  #db:
  #  image: postgres
  messageBroker:
    image: rabbitmq:management
    hostname: "messageBroker"
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 5s
      timeout: 15s
      retries: 3
    networks:
      - services-network
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: "admin"
      RABBITMQ_DEFAULT_PASS: "password"
  product-api:
    image: ${DOCKER_REGISTRY-}productapi
    hostname: productapi
    restart: on-failure
    environment:
      - Eureka__Client__ServiceUrl=http://serviceDiscovery:8761/eureka/
      - Eureka__Client__ShouldRegisterWithEureka=true
      - Eureka__Client__ValidateCertificates=false
    networks:
      - services-network
    depends_on:
      messageBroker:
        condition: service_healthy
    build:
      context: client
      dockerfile: Dockerfile
networks:
  services-network:
Something is missing from the networks section of the docker-compose file; define it like this:
networks:
  services-network:
    driver: bridge
I have a web API project. I set up the log file to be created in the current directory with the name logs.txt (the configuration is inside appsettings.json).
I run the web API in a Docker container; the API works (I can make calls) but it doesn't create a log file.
The Dockerfile was created by Visual Studio.
This is the docker-compose file. How do I tell Docker to create the log file?
version: '2.0'
networks:
  web-net:
    name: web-net
services:
  web-database:
    image: postgres:latest
    container_name: web-database
    environment:
      POSTGRES_DB: web
      POSTGRES_USER: web
      POSTGRES_PASSWORD: web
    networks:
      - web-net
    restart: on-failure
    ports:
      - "5431:5432"
  web-api:
    build:
      context: .
      dockerfile: WebApi/Dockerfile
    container_name: web-api
    networks:
      - web-net
    restart: on-failure
    ports:
      - "80:80"
    depends_on:
      - web-database
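One hedged way to get the log file onto the host: Docker will not surface a file written inside the container unless that path is bind-mounted. Assuming you point the logger in appsettings.json at a dedicated folder such as /app/logs/logs.txt, a bind mount on the web-api service makes the file appear under ./logs on the host:

```yaml
web-api:
  build:
    context: .
    dockerfile: WebApi/Dockerfile
  volumes:
    # Hypothetical mapping: assumes the app writes to /app/logs/logs.txt;
    # adjust the container-side path to wherever the logger actually writes.
    - ./logs:/app/logs
```

Without the mount the file is still created, but only inside the container's writable layer, where it disappears when the container is removed.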
I have 3 services that I am attempting to integrate with Dapr within my local Docker instance. Each of the services runs in a Linux container.
version: '3.4'
networks:
  NextWare:
    external: true
services:
  nextware.daprtest.service1.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DAPR_GRPC_PORT=40001
    ports:
      - "40001:40001" # Dapr instances communicate over gRPC, so we need to expose the gRPC port
      - "4001:80"
    depends_on:
      - redis
      - placement
    networks:
      - NextWare
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
  nextware.daprtest.service1.api-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
      "-app-id", "nextware.daprtest.service1.api",
      "-app-port", "3001",
      "-dapr-grpc-port", "40001",
      "-metrics-port", "9091",
      "-placement-host-address", "placement:50006", # Dapr's placement service can be reached via the Docker DNS entry
      "-components-path", "/components"]
    volumes:
      - "./components/nextware.daprtest.service1.api:/components" # Mount our components folder for the runtime to use
    depends_on:
      - nextware.daprtest.service1.api
    network_mode: "service:nextware.daprtest.service1.api" # Attach the sidecar to the service's network namespace
  nextware.daprtest.service2.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DAPR_GRPC_PORT=40003
    ports:
      - "40003:40003" # Dapr instances communicate over gRPC, so we need to expose the gRPC port
      - "4003:80"
    depends_on:
      - redis
      - placement
    networks:
      - NextWare
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
  nextware.daprtest.service2.api-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
      "-app-id", "nextware.daprtest.service2.api",
      "-app-port", "3003",
      "-dapr-grpc-port", "40003",
      "-metrics-port", "9093",
      "-placement-host-address", "placement:50006" # Dapr's placement service can be reached via the Docker DNS entry
      ]
    volumes:
      - "./components/nextware.daprtest.service2.api:/components" # Mount our components folder for the runtime to use
    depends_on:
      - nextware.daprtest.service2.api
    network_mode: "service:nextware.daprtest.service2.api" # Attach the sidecar to the service's network namespace
  nextware.daprtest.service3.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DAPR_GRPC_PORT=40005
    ports:
      - "40005:40005" # Dapr instances communicate over gRPC, so we need to expose the gRPC port
      - "4005:80"
    depends_on:
      - redis
      - placement
    networks:
      - NextWare
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
  nextware.daprtest.service3.api-dapr:
    image: "daprio/daprd:edge"
    command: [
      "./daprd",
      "-app-id", "nextware.daprtest.service3.api",
      "-app-port", "3005",
      "-dapr-grpc-port", "40005",
      "-metrics-port", "9095",
      "-placement-host-address", "placement:50006" # Dapr's placement service can be reached via the Docker DNS entry
      ]
    volumes:
      - "./components/nextware.daprtest.service3.api:/components" # Mount our components folder for the runtime to use
    depends_on:
      - nextware.daprtest.service3.api
    network_mode: "service:nextware.daprtest.service3.api" # Attach the sidecar to the service's network namespace
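The compose file above depends on redis and placement services that are not shown. For completeness, here is a minimal sketch of what those typically look like in a self-hosted Dapr compose setup; the image tags and ports are assumptions based on the common Dapr samples, not the asker's actual configuration:

```yaml
  placement:
    image: "daprio/dapr"
    # Run the placement service on the port the sidecars point at
    command: ["./placement", "-port", "50006"]
    ports:
      - "50006:50006"
    networks:
      - NextWare
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    networks:
      - NextWare
```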
Running docker ps, I see the following containers:
After the services are up and running I attempt to invoke the following code from Service3 to Service1 ...
var request = _mapper.Map<UpdateRequest>(integrationCommand);
await _daprClient.InvokeMethodAsync("nextware.daprtest.service1.api", "Update", request, cancellationToken);
I am getting the following exception...
Is the RequestUri correct ...
"http://127.0.0.1:3500/v1.0/invoke/nextware.daprtest.service1.api/method/Update"
Given this is the DaprClient which is invoking the sidecar, I assume the above RequestUri is correct.
What am I missing?
The reason for the error was a malformed ID and method name.
I moved to Tye from Docker Compose and hit the same issue.
Then I altered the invocation as per this screenshot.
While this still did not call through to the Update method, it no longer returned the 500 exception.
Instead I got the following exception, which I was just as stumped to resolve...
The remote certificate is invalid according to the validation procedure: RemoteCertificateNameMismatch, RemoteCertificateChainErrors
This was due to the following startup configuration setting, which I commented out; everything worked fine after that...
Dapr rocks!
Here is the relevant part of the docker-compose file:
ecommerce.supplier:
  container_name: supplier
  image: ${DOCKER_REGISTRY-}ecommercesupplier
  build:
    context: .
    dockerfile: ECommerce.Supplier/Dockerfile
  ports:
    - "5000:80"
    - "5001:443"
  env_file: ECommerce.Common/Common.env
  environment:
    - ConnectionStrings__DefaultConnection=Server=host.docker.internal\MSSQLSERVER01;Database=ECommerceSupplier;User Id=sa;Password=123456;MultipleActiveResultSets=true
  restart: on-failure
  volumes:
    - ./.aspnet/supplier/DataProtection-Keys:/root/.aspnet/DataProtection-Keys
  networks:
    - ecommerce-network
Here is the part of the Dockerfile for this service that exposes the ports. The file was auto-generated by Visual Studio.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
Here is docker ps:
So what is the result of all this? When I hit http://localhost:5000/api/V1/Supplier/getlist with a GET request and a proper token, I receive a response. But if I try the same endpoint on https://localhost:5001, I receive "Could not get any response" in Postman.
How can I make it work on https://localhost:5001? My DevOps level is beginner, so if you need additional information, please explain carefully what exactly you need.
Side note: this is happening for all the services in the screenshot.
It looks like the problem is that, inside your docker container, the application is not listening on 443.
Maybe these two links may help you:
https://learn.microsoft.com/aspnet/core/security/docker-https?view=aspnetcore-3.1
https://learn.microsoft.com/aspnet/core/security/docker-compose-https?view=aspnetcore-3.1
I would give a look at this part:
version: '3.4'
services:
  webapp:
    image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
    ports:
      - 80
      - 443
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ~/.aspnet/https:/https:ro
especially the ASPNETCORE_URLS entry under environment.
Hope this is helpful.
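Applied to the ecommerce.supplier service from the question, the additions might look roughly like this. This is only a sketch: the certificate path, file name, and password are placeholders for whatever dev certificate you export (for example with dotnet dev-certs https):

```yaml
ecommerce.supplier:
  ports:
    - "5000:80"
    - "5001:443"
  environment:
    # Tell Kestrel to listen on both schemes inside the container
    - ASPNETCORE_URLS=https://+:443;http://+:80
    # Placeholder password and path; match your exported dev certificate
    - ASPNETCORE_Kestrel__Certificates__Default__Password=password
    - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
  volumes:
    # Mount the folder holding the exported .pfx read-only into the container
    - ~/.aspnet/https:/https:ro
```

EXPOSE 443 in the Dockerfile only documents the port; without ASPNETCORE_URLS and a certificate, Kestrel never actually listens on it.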
I have a simple C# app that connects to PostgreSQL.
I would like to create an image with this app and just run it with Docker.
Everything is OK when I use:
$ docker build
$ docker run postgres
$ docker run my_app
Additionally, everything is OK when I use Compose from the application directory:
$ docker-compose build
$ docker-compose up
But is there any way to use docker-compose with an image that I built previously?
I would like to publish this image to my repo so somebody else from my team can just download and run this image (app + database).
When I do compose build and then compose run my_app, I get an exception while connecting to the database:
dbug: Npgsql.NpgsqlConnection[3]
Opening connection to database 'POSTGRES_USER' on server 'tcp://postgres:5432'.
Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.AggregateException: One or more errors occurred. (No such device or address) ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: No such device or address
My current docker-compose.yml file:
version: '2'
services:
  web:
    container_name: 'postgrescoreapp'
    image: 'postgrescoreapp'
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/var/www/postgrescoreapp
    ports:
      - "5001:5001"
    depends_on:
      - "postgres"
    networks:
      - postgrescoreapp-network
  postgres:
    container_name: 'postgres'
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - postgrescoreapp-network
networks:
  postgrescoreapp-network:
    driver: bridge
You should build the image with a name of the form (registryName:registryPort)/imageName:version:
$ docker build -t myRegistry.example.com:5000/myApp:latest .
$ docker build -t myRegistry.example.com:5000/myDb:latest .
Now add these lines to the docker-compose file:
Myapp:
  image: myRegistry.example.com:5000/myApp:latest
MyDb:
  image: myRegistry.example.com:5000/myDb:latest
And then push it:
$ docker push myRegistry.example.com:5000/myApp:latest
$ docker push myRegistry.example.com:5000/myDb:latest
Your teammate should now be able to pull it:
$ docker pull myRegistry.example.com:5000/myApp:latest
$ docker pull myRegistry.example.com:5000/myDb:latest
Yes, you can use a previously built image from a repository with docker-compose.
Example:
version: '2'
services:
  app:
    image: farhad/my_app
    ports:
      - "80:80"
    networks:
      - testnetwork
  postgres:
    image: postgres:latest
    networks:
      - testnetwork
networks:
  testnetwork:
    external: true
Example explanation:
I'm creating a container named app from my repository image and another container named postgres from the official postgres library image.
Please note you need to build and push your custom image to the repository first.
I'm using a user-defined network here; you need to create testnetwork before running docker-compose up.
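If creating the network by hand is a burden, a hedged alternative is to drop external: true and let Compose create and manage the network itself:

```yaml
networks:
  testnetwork:
    driver: bridge
```

With external: true, Compose expects the network to already exist (docker network create testnetwork); without it, Compose creates a project-scoped network on docker-compose up.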