I have a project developed with .NET Core and C#, running on Docker, that has to call a few functions on a DLL developed with C++.
The problem is: when I run my project without Docker, on Windows using Visual Studio Code, the code runs smoothly, but when I run it on Docker, in a Linux container, the code throws an error when it tries to execute the DLL function.
I already tried copying the .dll file to the /lib folder and to the parent folder of the project, and neither worked. I started to doubt that the problem is the file not being found, and some research suggested it could be related to file permissions, so I ran chmod a+wrx on the .dll file, also with no success.
This is my Dockerfile configuration:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 80
RUN apt-get update \
&& apt-get install -y --allow-unauthenticated \
libc6-dev \
libgdiplus \
libx11-dev \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y poppler-utils
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
COPY . .
RUN dotnet restore --configfile Nuget.config -nowarn:msb3202,nu1503
RUN dotnet publish -c Release -o ./out
FROM base AS final
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MdeGateway.dll"]
This is the code that tries to access the DLL function:
[DllImport("MyDll.dll")]
private static extern int dllfunction(Int32 argc, IntPtr[] argv);
public static void CallDll(string[] args)
{
IntPtr[] argv = ArrayToArgs(args);
dllfunction(args.Length, argv);
FreeMemory(args, argv);
}
The error occurs when the line 'dllfunction(args.Length, argv);' is executed.
The exact message is:
"Unable to load shared library 'MyDll.dll' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libMyDll.dll: cannot open shared object file: No such file or directory"
Also, if someone can teach me how to set the LD_DEBUG environment variable I would appreciate it.
I have a project developed with .NET Core and C#, running on Docker, that has to call a few functions on a DLL developed with C++. The problem is: when I run my project without Docker, on Windows using Visual Code, the code runs smoothly, but when I run on Docker, on a Linux container, the code throws an error when trying to execute the DLL function.
If I am reading this right, you have a C++ application that you compiled to a .dll (on Windows). You can DllImport this .dll on Windows, but not on Linux (container). Is that right?
Are you aware that C++ code compiled into a .dll (shared library) is a Windows-specific artifact? Unmanaged code is architecture- and platform-specific: an unmanaged .dll compiled for x64 won't run on arm64, and an unmanaged .dll compiled on Windows won't run on Linux.
Linux (and Linux containers, such as in Docker) can't use a .dll built from unmanaged code on Windows. Linux needs the unmanaged (C++) code to be compiled into a shared library (.so file) for DllImport (and the underlying dlopen calls) to work, ideally built on the same platform as the container it will run in.
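For example, a minimal sketch of the portable P/Invoke pattern (assuming you have the C++ source and can rebuild it for Linux; the file names here are placeholders):

// Rebuild the native code inside the Linux build stage first, e.g.:
//   g++ -shared -fPIC -o libMyDll.so MyDll.cpp
// Then declare the import without a file extension: the runtime probes
// for MyDll.dll on Windows and libMyDll.so on Linux, so one declaration
// works on both platforms. The .so must sit next to the app (or on
// LD_LIBRARY_PATH) inside the container.
[DllImport("MyDll")]
private static extern int dllfunction(Int32 argc, IntPtr[] argv);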
The Mono docs cover one particular implementation of DllImport and give more background on how this works on Linux:
https://www.mono-project.com/docs/advanced/pinvoke/
(But keep in mind that Mono != .NET Core. It should still give you some more background information.)
This does not solve the OP's main problem, but it helps answer the second question:
Also, if someone can teach me how to set the LD_DEBUG environment variable I would appreciate it.
I am facing a similar issue and was also struggling to understand what to do with this LD_DEBUG environment variable. It turns out that it controls the verbosity of the debugging output from the dynamic linker on Linux.
Following the advice here, running LD_DEBUG=help cat in a Linux terminal will list all the valid options for setting LD_DEBUG.
Additional useful resources:
Linux Apps Debugging Techniques - The Dynamic Linker
Linux LD.SO man page
Quoting from the LD.SO man page mentioned above:
LD_DEBUG (since glibc 2.1)
Output verbose debugging information about operation of
the dynamic linker. The content of this variable is one
or more of the following categories, separated by colons,
commas, or (if the value is quoted) spaces:
help Specifying help in the value of this variable does
not run the specified program, and displays a help
message about which categories can be specified in
this environment variable.
all Print all debugging information (except statistics
and unused; see below).
bindings
Display information about which definition each
symbol is bound to.
files Display progress for input file.
libs Display library search paths.
reloc Display relocation processing.
scopes Display scope information.
statistics
Display relocation statistics.
symbols
Display search paths for each symbol look-up.
unused Determine unused DSOs.
versions
Display version dependencies.
Since glibc 2.3.4, LD_DEBUG is ignored in secure-execution
mode, unless the file /etc/suid-debug exists (the content
of the file is irrelevant).
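For the question above, one way to actually turn this on is to set the variable on the container, for example (the image name is a placeholder):

docker run -e LD_DEBUG=libs <your-image>
# or, while debugging, bake it into the final stage of the Dockerfile:
ENV LD_DEBUG=libs

The loader then prints its library search paths to stderr, which shows exactly where it looked for libMyDll.dll and its dependencies.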
Related
I'm trying to pass a couple of environment variables from an .env file in the root of the project to the Dockerfile's CMD. I've read through the numerous other similar questions here on SO but nothing has made this work. What am I doing wrong?
.env (in root of project)
TOKEN=AToken
DECK=ADeck
docker-compose.yml (in root of project)
version: "3"
services:
discord-initiative-savageworlds:
build:
context: .
environment:
- TOKEN=${TOKEN}
- DECK=${DECK}
And the Dockerfile (in root of project):
FROM ubuntu:latest
ARG TOKEN
ARG DECK
...Run Installers and Stuff...
COPY . .
RUN dotnet publish
RUN cd DiscordInitiative/bin/Debug/net5.0/publish
RUN echo "${TOKEN}"
CMD ./DiscordInitiative --token="${TOKEN}" --deck="${DECK}"
Running docker-compose up builds the project successfully, but the CMD shows that the environment variables are not being replaced:
Step 12/13 : RUN echo "${TOKEN}"
---> Running in 7096ef978516
Removing intermediate container 7096ef978516
---> 6985b62bb75e
Step 13/13 : CMD ./DiscordInitiative --token="${TOKEN}" --deck="${DECK}"
The project & branch this is in is on GitHub here.
Thanks for any tips!
The .env file and the environment directive in your docker-compose file are Docker Compose concepts that apply at runtime, not at image build time. Your configuration injects those environment variables into the built, running container, but it does not inform the build environment.
(Edit: Upon reflection, this may be what you actually want.)
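A quick way to convince yourself of that (a sketch, assuming the compose file above) is to print the variables from a container started with the service's configuration:

docker-compose run discord-initiative-savageworlds printenv TOKEN DECK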
You could fully leverage the ARG instruction in your Dockerfile by changing your docker-compose to say:
version: "3"
services:
discord-initiative-savageworlds:
build:
context: .
args:
- TOKEN=${TOKEN}
- DECK=${DECK}
This will effectively set your 'environment' at build time, and will enable your Dockerfile to function as pasted above.
Edit:
I suspected there might be things in the redacted portion of the Dockerfile that could throw wrenches. The TL;DR from our comment thread is that your Dockerfile would need to change, too:
FROM ubuntu:latest
...Run Installers and Stuff...
COPY . .
RUN dotnet publish
RUN cd DiscordInitiative/bin/Debug/net5.0/publish
ARG TOKEN
ARG DECK
RUN echo "${TOKEN}"
CMD ./DiscordInitiative --token="${TOKEN}" --deck="${DECK}"
Moving those ARG instructions as near to the bottom as possible means they are set (from your docker-compose.yml) after the RUN instructions and other steps whose cached layers would otherwise be invalidated every time the argument values change.
Edit 2:
In reality, though, what you're asking for is variables that are available in the container at runtime (when CMD is executing). So, your question would lead me to believe that you don't need the ARGs at all, and that your container would function as expected.
Also, if you eliminate the ARGs you can get rid of the RUN echo ..., as that would never return anything at that point in the build process.
Injecting ARGs is definitely not what you want to do with sensitive values, as the arguments become part of your image layers and are subject to snooping. Injecting sensitive values into the environment at runtime is the much-preferred pattern (though I wouldn't say the most preferred).
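If you do go the runtime-only route, a sketch of the trimmed Dockerfile (keeping the elided steps from the question; note WORKDIR instead of RUN cd, since each RUN executes in a fresh shell):

FROM ubuntu:latest
...Run Installers and Stuff...
COPY . .
RUN dotnet publish
WORKDIR /DiscordInitiative/bin/Debug/net5.0/publish
CMD ./DiscordInitiative --token="${TOKEN}" --deck="${DECK}"

The shell-form CMD expands ${TOKEN} and ${DECK} from the container environment that docker-compose injects at startup.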
I'm trying to install the MATLAB runtime on a Docker image along with the project I'm working on. The project is an engine that runs a variety of measurements based on what is given to it, and many of those measurements use MATLAB. When I run the Docker container, though, I get an error that the "MWArray assembly failed to be initialized" or that a MATLAB DLL is missing.
I'm trying to run this in Docker for Windows due to a company requirement, and I have been unable to get the Dockerfile to recognize the MCR. Below is the code I've been playing with to get the MCR onto a Docker image.
FROM mcr.microsoft.com/dotnet/framework/runtime:4.7.2-windowsservercore-ltsc2019
ADD http://ssd.mathworks.com/supportfiles/downloads/R2017b/deployment_files/R2017b/installers/win64/MCR_R2017b_win64_installer.exe C:\\MCR_R2017b_win64_installer.zip
# Use PowerShell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Unpack ZIP contents to installation folder
RUN Expand-Archive C:\\MCR_R2017b_win64_installer.zip -DestinationPath C:\\MCR_INSTALLER
# Run the setup command for a non-interactive installation of MCR
RUN Start-Process C:\MCR_INSTALLER\bin\win64\setup.exe -ArgumentList '-mode silent', '-agreeToLicense yes' -Wait
# Remove ZIP and installation folder after setup is complete
RUN Remove-Item -Force -Recurse C:\\MCR_INSTALLER, C:\\MCR_R2017b_win64_installer.zip
WORKDIR /app
COPY /Project/bin/Debug/*.dll ./
COPY /Project/bin/Debug/Project.exe .
ENTRYPOINT ["C:\\app\\Project.exe"]
Edit: I think I've found a working solution, following the idea from the other answer about ltsc2019 not working with MATLAB R2017b. The following base image has worked with R2017b inside Docker:
FROM mcr.microsoft.com/windows:1809
Windows Server 2019 is not supported by MATLAB R2017b, and support for it was not introduced until MATLAB R2019a.
For MATLAB R2017b you’ll need Windows Server 2016.
That’s not to say there may not be other issues as well.
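If you want to stay on the .NET Framework image from the question, that would mean switching the base to a Server 2016 variant, along the lines of (assuming such a tag is available for your framework version):

FROM mcr.microsoft.com/dotnet/framework/runtime:4.7.2-windowsservercore-ltsc2016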
This issue has taken me a whole day; I really thought it would be simple at first.
I have a host machine (Windows 10) with Docker Desktop for Windows installed.
From the host machine, I would like to use docker run to start a container which contains some simple code to run.
Here is the code (which is built in the container), this is a .NET core Console app (suppose its built name is console.dll):
static void Main(string[] args)
{
Console.WriteLine("Running...");
_execTest();
Console.WriteLine("Finished!");
Console.ReadLine();
}
static void _execTest()
{
var sharedFilePath = Path.Combine(Environment.CurrentDirectory, "Temp", "test.exe");
var si = new ProcessStartInfo(sharedFilePath);
si.RedirectStandardOutput = false;
si.RedirectStandardError = false;
si.RedirectStandardInput = false;
Console.WriteLine("Starting ..." + sharedFilePath);
var p = Process.Start(si);
p.WaitForExit();
}
The main code just starts another program named test.exe. This program is placed in the shared folder Temp (which is established when calling docker run, by mounting a host folder into the container).
Here is the code for the test.exe (which is just a .NET console app):
static void Main(string[] args)
{
Console.WriteLine("Something went wrong!");
Console.Write("Welldone!");
}
So I expect that all messages written by test.exe using Console should be directed back to the parent context (which should share the same STDOUT).
I've tested this by running the container's code directly using dotnet console.dll, and I can see the messages (from test.exe) printed as expected.
However, after deploying console.dll to an image (console) and trying the following command to run the container:
docker run --rm -v D:\SourceFolder:C:\app\Temp console
Then the messages (from test.exe) are not printed. Only the messages written directly in the parent context are printed (Running..., Starting... and Finished!).
You can see that the command above uses -v to mount the host folder D:\SourceFolder to the folder C:\app\Temp in the container.
And the test.exe is put in D:\SourceFolder.
I'm sure that the container's code can access this file via the shared folder.
This is weird and hard to diagnose.
Without sharing messages back and forth between the container and the host, running Docker like this is just useless.
I hope someone here can give me some suggestions so that I can sort this out. Thanks!
UPDATE:
If I use cmd.exe (which already exists in the Docker image) with the argument /?, then I can see its output. So it looks like this is some problem with executing an EXE shared via the mounted folder.
However, I've tried copying the shared file to a local folder in the container first and running that file instead, and I hit the same issue. So it may be a problem with the test.exe file itself? How ridiculous.
UPDATE: thanks to @jazzdelightsme for his helpful suggestion about checking the ExitCode: the container environment is actually missing something and cannot start test.exe correctly. I've tried compiling test.exe targeting the lowest .NET Framework version, 2.0, but I still get the same error. Here is the Dockerfile's content, which should provide some info about the container's environment:
FROM microsoft/dotnet:2.1-runtime-nanoserver-1709 AS base
WORKDIR /app
FROM microsoft/dotnet:2.1-sdk-nanoserver-1709 AS build
WORKDIR /src
COPY ConsoleApp/ConsoleApp.csproj ConsoleApp/
RUN dotnet restore ConsoleApp/ConsoleApp.csproj
COPY . .
WORKDIR /src/ConsoleApp
RUN dotnet build ConsoleApp.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish ConsoleApp.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ConsoleApp.dll"]
A general troubleshooting thing to check is the exit code of the process. This will often give a clue what the problem is.
In this case, the exit code was STATUS_DLL_NOT_FOUND. This alone may be enough of a clue if you understand your app's dependencies, because you can just manually examine the container and figure out what is missing.
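In the question's _execTest that is a one-line addition (a sketch; the hex formatting just makes NTSTATUS values easy to recognize):

var p = Process.Start(si);
p.WaitForExit();
// 0xC0000135 is STATUS_DLL_NOT_FOUND
Console.WriteLine($"test.exe exited with 0x{p.ExitCode:X8}");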
If you don't know what is missing, a direct way to debug is by using the Windows Debuggers and turning on "Show Loader Snaps". Info about getting Windows Debuggers here. You can xcopy them into the container. You would use a command line like C:\Debuggers\cdb.exe -xe "ld ntdll" test.exe, which launches test.exe under the debugger, stopping as soon as ntdll.dll is loaded (which is earlier than normal). Once it stops, you run !gflag +sls to turn on loader snaps, then run g to continue execution. Examining the spew should tell you what is missing or failing to load.
In this particular case, STATUS_DLL_NOT_FOUND is likely because test.exe is a .NET Framework app, but the full .NET Framework is not present in the nanoserver image.
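Since the image in the question already carries the .NET Core 2.1 runtime, one way out (a sketch, not the only fix) is to rebuild test.exe as a .NET Core console app and launch it through the dotnet host instead:

// assumes test was republished as a .NET Core app, producing test.dll
var sharedFilePath = Path.Combine(Environment.CurrentDirectory, "Temp", "test.dll");
var si = new ProcessStartInfo("dotnet", $"\"{sharedFilePath}\"");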
I've been searching for quite some time now, and can't seem to find an answer to this problem. Found only two questions/answers on SO and they still don't answer this question (https://stackoverflow.com/search?q=netcore+publish+mac+app).
I'm working with DotNetCore on Mac, using Visual Studio as the IDE. The app is a Console App, not an ASP.Net app, simple "Hello World" app in C#:
...
Console.Writeline("Hello World");
...
So here's the question... To run the app, I know I can use the "dotnet" command. I'm trying to build/publish the app as you normally would on Windows by creating an .exe file, but now on Mac by creating a native binary file.
I have found zero documentation on how to do this and deploy the application as a self-contained app that can run independently, without having to launch the program with the "dotnet" command. Maybe I'm looking in the wrong places, but I haven't found anything in Microsoft's documentation either; it all points to documentation for building ASP.NET apps on .NET Core.
Any suggestions?
Found the answer by looking at the "dotnet publish" options:
dotnet publish -c Release --self-contained -r osx.10.13-x64
Where --self-contained includes all required libraries, and -r specifies the runtime target.
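The publish output then contains a native apphost you can launch directly (a sketch; HelloWorld and the netcoreapp2.1 folder name are placeholders that depend on your project name and target framework):

./bin/Release/netcoreapp2.1/osx.10.13-x64/publish/HelloWorld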
$ dotnet publish -c Release --self-contained -a x64
Determining projects to restore...
Restored /Users/walshca/code/temp/MutexThrow/MutexThrow.csproj (in 155 ms).
MutexThrow -> /Users/walshca/code/temp/MutexThrow/bin/Release/net6.0/osx-x64/MutexThrow.dll
MutexThrow -> /Users/walshca/code/temp/MutexThrow/bin/Release/net6.0/osx-x64/publish/
dotnet publish docs
Then I run ./bin/Release/net6.0/osx-x64/publish/MutexThrow
This didn't specify the --output cli arg, so you can see in the build output it defaulted to [project_file_folder]/bin/[configuration]/[framework]/[runtime]/publish/
(In dotnet 6.0 instead of -r runtime target, you can specify --arch x86 and it uses the default RID for your system.)
If your project props set a different build output, you can find the executable by enumerating files by their Unix file permissions:
$ gci -r -file | ? UnixMode -match 'x$' | % FullName
/Users/walshca/code/temp/MutexThrow/obj/Release/net6.0/osx-x64/apphost
/Users/walshca/code/temp/MutexThrow/bin/Release/net6.0/osx-x64/MutexThrow
/Users/walshca/code/temp/MutexThrow/bin/Release/net6.0/osx-x64/publish/MutexThrow
I'm exploring the feasibility of running a C# Kinect Visual Gesture Program (something like Continuous Gesture Basics project https://github.com/angelaHillier/ContinuousGestureBasics-WPF) inside of a Docker for Windows container.
Is it even theoretically possible to run C# Kinect code in a Docker for Windows container?
If the answer to 1 is yes, here are some extra details:
I'm using the microsoft/dotnet-framework:4.7 image as a basis and my initial Dockerfile looks like this:
FROM microsoft/dotnet-framework:4.7
ADD . /home/gesture
WORKDIR /home/gesture
Build the image:
$ docker build -t kinect .
Turn on container:
$ docker run -dit --name kinectContainer kinect
Attach to a powershell session to monkey around:
$ docker exec -it kinectContainer powershell
When I attempt to run my gesture application from the Docker container I get the following error (which is expected since no Kinect SDK was installed in the container):
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'Microsoft.Kinect, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Reference assemblies should not be loaded for execution. They can only be loaded in the Reflection-only loader context. (Exception from HRESULT: 0x80131058) ---> System.BadImageFormatException: Cannot load a reference assembly for execution.
--- End of inner exception stack trace ---
at GestureDetector.GestureDetectorApp..ctor()
At this point, the big question is how to install the Kinect v2 SDK [KinectSDK-v2.0_1409-Setup.exe] or the Kinect v2 runtime [KinectRuntime-v2.0_1409-Setup.exe] in the container.
The installers have a EULA, and according to some clever University of Wisconsin folks, there is a technique to extract the installers using WiX's dark.exe decompiler (https://social.msdn.microsoft.com/Forums/en-US/a5b04520-e437-48e3-ba22-e2cdb46b4d62/silent-install-installation-instructions?forum=kinectsdk), e.g.:
$ & 'C:\Program Files (x86)\WiX Toolset v3.11\bin\dark.exe' C:\installerwork\KinectRuntime-v2.0_1409-Setup.exe -x c:\installerwork\kinect_sdk_installersfiles
The issue I ran into when I got to the underlying MSI files is that there is no option to run them silently using msiexec.
I've figured out that the runtime installer (KinectRuntime-x64.msi, extracted from the Kinect v2 SDK) makes at least the following changes to the filesystem:
Creates a folder "Kinect" in C:\Windows\System32 and adds 3 files to System32:
k4wcll.dll
kinect20.dll
microsoft._kinect.dll
The three files in System32 should be the 64-bit versions (the installer appears to carry x86 and x64 versions of all three).
Replicating those changes by hand does not lead to success on the host machine let alone in the container.
It's currently unclear what other registry/system changes are occurring with the installer (and whether or not that would get us over the goal line in the Docker container)
Any ideas about how to proceed from here?
In short, no. Docker on Windows does not have the ability to tunnel/map hardware; on Linux, it does, via the --device= option.
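For contrast, on a Linux host that mapping looks like this (the device path is illustrative):

docker run --device=/dev/bus/usb/001/004 <linux-image>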
As @VonC has stated, you will need to use a Windows VM (this could be Hyper-V, or you can use VirtualBox) so that you can provide the Kinect hardware via the tunneling method (add/connect device). Without this, there is no way for your container, VM-based or not, to access the hardware of the Windows host machine.
Another approach would be to try installing Kinect in a Windows Server VM and detecting the exact changes brought by that installation.
See for instance "How can I find out what modifications a program’s installer makes?" and a tool like ZSoft Uninstaller 2.5.
Once you have determined exactly what files/registry/variables are impacted by the installation process, you can replicate that in a Dockerfile.
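A sketch of what that replication might look like once the file list is known (purely illustrative; the OP's experiments above suggest the three DLLs alone are not sufficient, so expect to add registry imports and other files uncovered by the snapshot diff):

FROM mcr.microsoft.com/dotnet/framework/runtime:4.7.2-windowsservercore-ltsc2019
# files the Kinect runtime installer would have placed (x64 versions)
COPY kinect-files/k4wcll.dll C:/Windows/System32/
COPY kinect-files/kinect20.dll C:/Windows/System32/
COPY kinect-files/microsoft._kinect.dll C:/Windows/System32/
# registry changes captured from the snapshot diff, e.g.:
# RUN reg import C:/kinect-runtime.reg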