Docker - dotnet restore fails connection to private feeds - c#

Calling dotnet restore <project> from my Dockerfile is resulting in a NU1301: Unable to load the service index for source error. I've been going through many of the suggested similar questions and continue to have issues. Here is as much info about the things I've tried as I can provide:
Docker Engine has its DNS set to "8.8.8.8"
Using Linux containers
RUN ping google.com succeeds (so I can reach the internet)
Works perfectly fine hitting the nuget.org feed
The nuget.config file currently has credentials in it just to get this working
This will be removed for a different approach once I get this working
These are the same credentials (username/PAT) that I use during development on my host machine
RUN curl <nuget_feed_url> succeeds
Running the restore command with --verbosity detailed doesn't provide any error messages beyond the NU1301 above
Here is the section of the Dockerfile in question
FROM mcr.microsoft.com/dotnet/aspnet:6.0.13 AS base
# Create dockeruser in base layer
RUN addgroup --system --gid 1000 dockergroup \
&& adduser --system --uid 1000 --ingroup dockergroup --shell /bin/sh dockeruser
WORKDIR /app
EXPOSE 8080
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
# Arguments are required in each stage in order to get the correct value
WORKDIR /src
COPY ["src/Nucleus.LumberYard.API/", "Nucleus.LumberYard.API/"]
COPY ["./nuget.config", "./nuget.config"]
WORKDIR "/src/Nucleus.LumberYard.API"
#COPY [".editorconfig", "./"]
RUN dotnet restore "Nucleus.LumberYard.API.csproj"
RUN dotnet build "Nucleus.LumberYard.API.csproj" -c Release --no-restore
Environment info
Docker Desktop v3.3.1
Docker v20.10.5

Based on my understanding you have the following scenario:
a .NET 6 application with some references to nuget packages
some nuget packages are taken from the usual Nuget public repository, some others are taken from a private nuget feed
you are distributing your application via a docker image, and during the docker build process you want to run a dotnet restore command targeting one of your csproj files
the dotnet restore command fails because the dotnet cli is unable to talk with your private nuget feed
I encountered the very same situation with the project I'm working on. We have a private NuGet feed hosted in Azure DevOps and we too had some trouble figuring out how to solve this.
First of all, let's clarify the root cause of the problem.
You did the right thing verifying that you are able to reach the nuget feed from your build machine, via the curl command you mentioned.
What is actually failing is the authentication between your build machine and the private nuget feed.
The first thing you need is a personal access token with read permissions for your nuget feed. You can follow this guide to create the personal access token you need.
Once you have the token, you need to provide it to the dotnet cli.
There are several ways to do so; I'm going to explain the one that worked for us.
Instead of adding the nuget source to the nuget.config file, we register it via a cli command.
I'm quite sure there is a way to do exactly the same thing via the nuget.config file (see here for more details).
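For reference, here is a rough, untested sketch of what that nuget.config-based approach could look like, using the same placeholder feed URL and credentials as in the command below:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- placeholder URL; replace with your private feed's index.json -->
    <add key="private-feed" value="https://foo.bar.com/something/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <!-- the element name must match the source key above -->
    <private-feed>
      <add key="Username" value="whatever" />
      <add key="ClearTextPassword" value="my-personal-access-token" />
    </private-feed>
  </packageSourceCredentials>
</configuration>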
This is the cli command we use inside of our docker file:
RUN dotnet nuget add source https://foo.bar.com/something/nuget/v3/index.json -u "whatever" -p "my-personal-access-token" --store-password-in-clear-text --valid-authentication-types "basic"
Notice that:
https://foo.bar.com/something/nuget/v3/index.json is the absolute URL pointing to the index of your private nuget feed
the username can be whatever you like. You do need to provide a value, but I didn't notice any difference even when putting a random string like whatever there
the fictitious value my-personal-access-token must be substituted with the personal access token you created in the first step
Here you can find the full reference for the dotnet nuget add source command.
After registering this source with the dotnet cli, you will be able to run your dotnet restore command with no errors.
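To avoid baking the PAT into the image or into nuget.config, one option is to pass it in as a build argument. Here is a minimal, untested sketch based on the Dockerfile from the question; the FEED_ACCESSTOKEN argument name and the feed URL are placeholders of my own:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
# The PAT is supplied at build time: docker build --build-arg FEED_ACCESSTOKEN=<your PAT> .
ARG FEED_ACCESSTOKEN
WORKDIR /src
COPY ["src/Nucleus.LumberYard.API/", "Nucleus.LumberYard.API/"]
# Register the private feed before restoring; the username value is not actually checked.
RUN dotnet nuget add source "https://foo.bar.com/something/nuget/v3/index.json" \
    --name private-feed \
    --username whatever \
    --password "$FEED_ACCESSTOKEN" \
    --store-password-in-clear-text \
    --valid-authentication-types basic
RUN dotnet restore "Nucleus.LumberYard.API/Nucleus.LumberYard.API.csproj"
Keep in mind that build arguments are not a secure way to handle secrets (they can surface in the image history), so for anything beyond local experiments Docker BuildKit secrets or the Azure Artifacts Credential Provider are worth a look.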
Hope this helps!

Related

Unable to obtain lock file access (Dotnet CLI)

I use Dev Containers to attach to a container and debug. And it was working just fine.
Recently however it shows this error when I hit F5 or run dotnet build from VS Code's terminal:
/usr/share/dotnet/sdk/7.0.102/NuGet.targets(132,5): error : Unable to obtain lock file access on '/tmp/NuGetScratch/lock/fcd2970c0c875310ff5855562ac5f3954170bddb' for operations on '/Crm/AdminApi/obj/project.nuget.cache'. This may mean that a different user or administrator is holding this lock and that this process does not have permission to access it. If no other process is currently performing an operation on this file it may mean that an earlier NuGet process crashed and left an inaccessible lock file, in this case removing the file '/tmp/NuGetScratch/lock/fcd2970c0c875310ff5855562ac5f3954170bddb' will allow NuGet to continue. [/Crm/AdminApi/Api.csproj]
And it shows the above message after trying to re-install every dependency, even though NuGet has already cached those dependencies.
It works if:
1- I open a root bash using docker exec -it container_name bash and run dotnet build
2- I open a non root bash, and simply run sudo dotnet build
3- I open a VS Code terminal (which shows vscode as the user) and run sudo dotnet build
I tried sudo chown -R vscode:vscode /tmp/NuGetScratch/ as mentioned in dotnet error : Unable to obtain lock file access on '/tmp/NuGetScratch/lock/ and here, but that did not change anything. I then tried sudo chmod -R 777 /tmp/NuGetScratch and again no results. I even verified that the owner is changed using ls /tmp -lah | grep NuGet and these are the results:
drwxrwxrwx 1 vscode vscode 4.0K Feb 9 10:43 NuGetScratch
What else can I do?
This was recently logged as an issue: https://github.com/NuGet/Home/issues/12420
Possible workarounds listed from https://github.com/NuGet/Home/issues/12420#issuecomment-1423774814:
Run dotnet nuget locals temp -c to clear the /tmp/NuGetScratch (If there is a sticky bit in /tmp permissions then it won't work)
Set environment variable NUGET_ConcurrencyUtils_DeleteOnClose to 1 before running restore, so the lock files will be cleared after restore (This change is only available in NuGet 6.2 and above, probably .NET 7 preview6 and above)
Set environment variable NUGET_SCRATCH to a path; this variable overrides the default temp folder (again, this is only applied in NuGet 6.2 and later, probably .NET 7 preview6 and above)
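For that third workaround, a small sketch of what it could look like in the dev container's Dockerfile; the /home/vscode/.nuget-scratch path is just an example and assumes the vscode user from the question:
# Point NuGet's scratch area at a folder owned by the non-root user,
# instead of the shared /tmp/NuGetScratch.
ENV NUGET_SCRATCH=/home/vscode/.nuget-scratch
RUN mkdir -p /home/vscode/.nuget-scratch \
    && chown -R vscode:vscode /home/vscode/.nuget-scratch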

Azure devops private nuget takes a long time to find package

We are using Azure DevOps pipelines. We have a step that publishes our private NuGet package to Azure Artifacts. However, the build breaks at later steps because the NuGet package (published in the earlier step) is not found. The strange thing is that after the package is published, I can see it in the package manager console, in Visual Studio, and even in the Artifacts section in Azure DevOps. But for some reason, the pipeline doesn't find the package. After 30-50 minutes, I re-run the pipeline and then it finds the package.
What could be happening that makes it take so long for the pipeline to find my package?
Edit 1:
This is my yaml for the step with error
- script: |
    pwd && ls -la
    dotnet restore "$(solution_path)" $(nuget_args)
    dotnet publish -c Release -o $(System.DefaultWorkingDirectory)/bin "$(main_project_path)"
    mkdir artifact
    cp -r $(System.DefaultWorkingDirectory)/bin artifact/bin
  displayName: Build Application
The error is:
/data/vstsagent/user/389/s/src/MyProject.csproj : error NU1102: Unable to find package MyPackage with version (>= 2.1.0) [/data/vstsagent/user/389/s/src/MySolution.sln]
/data/vstsagent/user/389/s/src/MyProject.csproj : error NU1102:   - Found 28 version(s) in MyPrivateRepository [ Nearest version: 2.1.0-preview.6 ] [/data/vstsagent/user/389/s/src/MySolution.sln]
It is not clear which feed the dotnet restore command is using.
Confirm that you have configured your custom NuGet feed in nuget.config and that it is actually being used.
Try adding the --verbosity detailed switch to the dotnet restore command and check whether your feed is used for restoring packages.
Use the --no-cache option so dotnet restore doesn't use the local cache.
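Putting those suggestions together, the restore step could look roughly like this ($(solution_path) comes from the question's YAML; the nuget.config path is an assumption on my part):
dotnet restore "$(solution_path)" \
    --configfile nuget.config \
    --verbosity detailed \
    --no-cache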

Connect to MySQL container from Web Api .Net Core Container? How to get Ip Address?

I know this is such a noob problem, but I am having trouble understanding how to get my .NET Core website to connect to my MySQL container. Some background: the MySQL database and the .NET Core website are in separate containers. I have already started the MySQL container and set up the root account. I am using Entity Framework inside the .NET Core project.
I created the MySql container using this statement:
docker run --name mysql_container -d -p 3306:3306
Below is the dockerfile that Visual Studio generated for me.
So what do I tell my .NET Core program the IP address of the MySQL container is, if the IP can change?
Inside of .Net Core Program:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    var connection = $"Server={GetDBAddress()};Database=myDataBase;Uid=root;Pwd=root;";
    services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));
}
If I write the GetDBAddress function, what goes in there? I cannot simply return localhost, because the database lives in another Docker container. As of right now I am trying to use localhost and I get connection refused, yet I am able to connect to the MySQL db using Workbench.
Also, I am not sure, but can these two setups be combined into a single file? I think they're called docker-compose files.
Dockerfile
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY ["ToDoService/ToDoService.csproj", "ToDoService/"]
RUN dotnet restore "ToDoService/ToDoService.csproj"
COPY . .
WORKDIR "/src/ToDoService"
RUN dotnet build "ToDoService.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "ToDoService.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ToDoService.dll"]
Since you've launched MySQL publishing port 3306, you can reach it from the host at localhost:3306 (which is why MySQL Workbench works), but localhost inside your web app's container refers to that container itself, not to the MySQL container.
As you suggested, the cleaner option is to set up a docker-compose file. This file usually contains all the configuration your application needs to run. For example, a suitable configuration for your application (note: I'm assuming you're using MySQL 5.7 since you haven't specified a version) could be:
version: '3.3'
services: # list of services composing your application
  db: # the service hosting your MySQL instance
    image: mysql:5.7 # the image and tag docker will pull from Docker Hub
    volumes: # this section allows you to configure persistence across multiple restarts
      - db_data:/var/lib/mysql
    restart: always # if the db crashes somehow, restart it
    environment: # env variables, you usually set these to override existing ones
      MYSQL_ROOT_PASSWORD: root # configures the root account; note that MYSQL_USER must not be set to root in the official image
      MYSQL_DATABASE: todoservice
  todoservice: # your application service
    build: ./ # this tells docker-compose not to pull from Docker Hub, but to build from the Dockerfile it will find in ./
    restart: always
    depends_on: # set a dependency between your service and the database: your application will not run if the db service is not running, but this does not guarantee that the database is ready to accept incoming connections (so your application could crash until the db initializes itself)
      - db
volumes:
  db_data: # this tells docker-compose to save your data in a named docker volume. You can see existing volumes by typing 'docker volume ls'
To launch and deploy your application, now you need to type in a terminal:
docker-compose up
This will bring up your deploy. Note that no ports are exposed here: only your service will be able to access the database from db:3306 (you don't need to refer by IP, but you can reach other services using the service name).
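Concretely, in the ConfigureServices method from the question that means the connection string can use the service name db as the host, along these lines (the database name and credentials are taken from the compose file above):
// Reach the MySQL service through its docker-compose service name instead of an IP.
var connection = "Server=db;Port=3306;Database=todoservice;Uid=root;Pwd=root;";
services.AddDbContext<ToDoContext>(options => options.UseMySQL(connection));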
For debug purposes, you can still publish the db ports by adding these lines under image:
    ports:
      - "3306:3306"
Note that this port has to be free (no other system services are using it), otherwise the entire deployment will fail.
Final note: since docker-compose will try to avoid rebuilding your images every time you bring up the service, to force it to build a new one you have to append --build to the docker-compose up command.
To bring down your deploy just use docker-compose down. To delete all the persistent data related to your deploy (i.e. starting with a new db) append the -v flag at the end of the previous command.
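For reference, the commands mentioned above:
docker-compose up --build   # (re)build the images and start the whole stack
docker-compose down         # stop and remove the containers and network
docker-compose down -v      # same, but also delete the db_data volume (i.e. start with a fresh db)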
Hope it helps!

No executable found matching command "dotnet-/../.dll" when running dotnet core docker image in Azure Web App on Linux

1. Background
I'm currently working on the following build/deployment pipeline:
Github (https://github.com/devedse/DeveMazeGeneratorCore)
Travis Build (https://travis-ci.org/devedse/DeveMazeGeneratorCore/jobs/196910720)
dotnet restore
dotnet build
dotnet publish
docker create image
docker publish image to hub
Docker image hub (https://hub.docker.com/r/devedse/devemazegeneratorcore/)
Use Azure Web App on Linux to execute deployment (http://devemazegeneratorcoredocker.azurewebsites.net/api/mazes/MazePath/512/512)
2. Problem
Whenever I push something to the Github repository, a build is kicked off and steps 1-3 are executed correctly.
However, the website on Azure is unreachable.
I used SCM to browse to the debug console (https://devemazegeneratorcoredocker.scm.azurewebsites.net/DebugConsole/Default.cshtml (for future reference)), found the log files that were generated by Docker, and then used the following commands to read them:
cat docker_128_err.log
cat docker_128_out.log
The out log showed the following results (which seem correct):
Login Succeeded
latest: Pulling from devedse/devemazegeneratorcore
5040bd298390: Already exists
fce5728aad85: Already exists
76610ec20bf5: Already exists
51ee4768b31d: Already exists
4dc55ff439a1: Already exists
9cb727c7d7a0: Already exists
2bea08464ad0: Pulling fs layer
2bea08464ad0: Verifying Checksum
2bea08464ad0: Download complete
2bea08464ad0: Pull complete
Digest: sha256:647f3db3daa3330b7eb109a1c604e5bd403c2c7089b3c18c5e9249a9805d3a4d
Status: Downloaded newer image for devedse/devemazegeneratorcore:latest
Login Succeeded
latest: Pulling from devedse/devemazegeneratorcore
Digest: sha256:647f3db3daa3330b7eb109a1c604e5bd403c2c7089b3c18c5e9249a9805d3a4d
Status: Image is up to date for devedse/devemazegeneratorcore:latest
The error log however, shows the following errors:
2017-01-31T13:11:46.757760723Z No executable found matching command "dotnet-/home/DeveMazeGeneratorCoreWebPublish/DeveMazeGeneratorWeb.dll"
The strange thing is that whenever I run the image locally, it all works fine:
docker run -it --rm -p 0.0.0.0:5001:80 devedse/devemazegeneratorcore:latest
Somehow there seems to be a difference between running a Docker image on a Linux machine in Azure and running it on my local Docker installation, which runs Docker images on the default VM that gets installed with Docker for Windows.
3. Configuration files used:
.travis.yml: (https://github.com/devedse/DeveMazeGeneratorCore/blob/master/.travis.yml)
Dockerfile: (https://github.com/devedse/DeveMazeGeneratorCore/blob/master/Scripts/Docker/Dockerfile)
4. Summary
So, summarizing, it seems that running Docker on Azure is executed in a different manner than when doing this locally. Does anyone have an idea of what this could be/how to solve it?
Again, (just for easy reference), the error:
2017-01-31T13:11:46.757760723Z No executable found matching command "dotnet-/home/DeveMazeGeneratorCoreWebPublish/DeveMazeGeneratorWeb.dll"
Modify your image to put your application bits somewhere other than /home.
/home is where Azure App Service on Linux bind-mounts the persistent site volume, which is a disk that is shared across instances and is persisted between restarts.
You don't have to use it (you may not have any use for it when running your own image), but anything in your image's /home will disappear at runtime.
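As an illustration, the final stage of the image could look something like this (untested sketch: the base image tag is an assumption on my part, and the folder/DLL names are taken from the error message in the question):
# Assumed base image for an ASP.NET Core 1.x app; adjust to whatever the original Dockerfile uses.
FROM microsoft/aspnetcore:1.1
# Keep the published output out of /home, which App Service replaces with its persistent volume.
WORKDIR /app
COPY DeveMazeGeneratorCoreWebPublish/ .
EXPOSE 80
ENTRYPOINT ["dotnet", "DeveMazeGeneratorWeb.dll"]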

How to include dependencies in .NET Core app docker image?

I'm trying to build a .NET Core app docker image. But I can't figure out how I'm supposed to get the project's NuGet dependencies into the image.
For simplicity's sake, I've created a .NET Core console application:
using System;
using Newtonsoft.Json;

namespace ConsoleCoreTestApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine($"Hello World: {JsonConvert.False}");
        }
    }
}
It just has one NuGet dependency on Newtonsoft.Json. When I run the app from Visual Studio, everything works fine.
However, when I create a Docker image from the project and try to execute the app from there, it can't find the dependency:
# dotnet ConsoleCoreTestApp.dll
Error: assembly specified in the dependencies manifest was not found -- package: 'Newtonsoft.Json', version: '9.0.1', path: 'lib/netstandard1.0/Newtonsoft.Json.dll'
This is to be expected because Newtonsoft.Json.dll is not being copied by Visual Studio to the output folder.
Here's the Dockerfile I'm using:
FROM microsoft/dotnet:1.0.0-core
COPY bin/Debug /app
Is there a recommended way of dealing with this problem?
I don't want to run dotnet restore inside of the container (as I don't want to re-download all dependencies every time the container runs).
I guess I could add a RUN dotnet restore entry to the Dockerfile, but then I couldn't use microsoft/dotnet:<version>-core as the base image anymore.
And I couldn't find a way to make Visual Studio copy all dependencies into the output folder (like it does with regular .NET Framework projects).
After some more reading I finally figured it out.
Instead of dotnet build you run:
dotnet publish
This will place all files (including dependencies) in a publish folder, and that folder can then be used directly with a microsoft/dotnet:<version>-core image.
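A minimal sketch of that flow, reusing the base image and project name from the question (the output folder name is my own choice):
# On the host / build machine:
#   dotnet publish -c Release -o publish
#
# Dockerfile: copy the framework-dependent publish output (app DLL plus NuGet dependencies).
FROM microsoft/dotnet:1.0.0-core
WORKDIR /app
COPY publish/ .
ENTRYPOINT ["dotnet", "ConsoleCoreTestApp.dll"]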
I wrote a tutorial on this recently. The Dockerfile contents I used were (slightly modified to remove the ASP.NET Core bits):
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
ENTRYPOINT ["dotnet", "run"]
When you run docker build, it uses the Dockerfile as a "recipe" to build the image. It'll run dotnet restore and dotnet build first, then package everything up into the image. The resulting image has everything the app needs to run on any Docker host.
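Building and running it then comes down to (the image name is arbitrary):
docker build -t consolecoretestapp .
docker run --rm consolecoretestapp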
