I'm exploring the feasibility of running a C# Kinect Visual Gesture Program (something like Continuous Gesture Basics project https://github.com/angelaHillier/ContinuousGestureBasics-WPF) inside of a Docker for Windows container.
1. Is this even theoretically possible (running a C# Kinect program in a Docker for Windows container)?
2. If the answer to 1 is yes, here are some extra details:
I'm using the microsoft/dotnet-framework:4.7 image as a basis and my initial Dockerfile looks like this:
FROM microsoft/dotnet-framework:4.7
ADD . /home/gesture
WORKDIR /home/gesture
Build the image:
$ docker build -t kinect .
Turn on container:
$ docker run -dit --name kinectContainer kinect
Attach to a powershell session to monkey around:
$ docker exec -it kinectContainer powershell
When I attempt to run my gesture application from the Docker container I get the following error (which is expected since no Kinect SDK was installed in the container):
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'Microsoft.Kinect, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Reference assemblies should not be loaded for execution. They can only be loaded in the Reflection-only loader context. (Exception from HRESULT: 0x80131058) ---> System.BadImageFormatException: Cannot load a reference assembly for execution.
--- End of inner exception stack trace ---
at GestureDetector.GestureDetectorApp..ctor()
At this point, the big question is how to install the Kinect v2 SDK [KinectSDK-v2.0_1409-Setup.exe] or the Kinect v2 runtime [KinectRuntime-v2.0_1409-Setup.exe] in the container.
The installers have a EULA and, according to some clever University of Wisconsin folks, there is a technique to extract the installers using WiX's dark.exe decompiler (https://social.msdn.microsoft.com/Forums/en-US/a5b04520-e437-48e3-ba22-e2cdb46b4d62/silent-install-installation-instructions?forum=kinectsdk).
For example:
$ & 'C:\Program Files (x86)\WiX Toolset v3.11\bin\dark.exe' C:\installerwork\KinectRuntime-v2.0_1409-Setup.exe -x c:\installerwork\kinect_sdk_installersfiles
The issue I ran into once I got to the underlying MSI files is that there is no option to run them silently using msiexec.
I've figured out that the runtime installer (KinectRuntime-x64.msi, extracted from the Kinect v2 SDK) makes at least the following changes to the filesystem:
Creates a folder "Kinect" in C:\Windows\System32 and adds 3 files to System32:
k4wcll.dll
kinect20.dll
microsoft._kinect.dll
Those three files in System32 should be the 64-bit versions (the installer appears to contain both x86 and x64 variants of those 3).
Replicating those changes by hand does not lead to success on the host machine let alone in the container.
It's currently unclear what other registry/system changes are occurring with the installer (and whether or not that would get us over the goal line in the Docker container)
Any ideas about how to proceed from here?
In short, no. Docker on Windows does not have the ability to tunnel/map hardware; on Linux it does, via the --device= option.
As @VonC has stated, you will need to use a Windows VM (this could be Hyper-V, or you can use VirtualBox) and then provide the Kinect hardware via the tunneling method (add/connect device). Without this there is no way for your container, VM-based or not, to access the hardware of the Windows host machine.
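For illustration, mapping a host device into a Linux container looks like this (the device path is only an example, not the actual Kinect device, and myimage is a placeholder image name):
$ docker run -dit --device=/dev/bus/usb/001/004 myimage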
Another approach would be to try and install Kinect in a Windows Server VM, and detect the exact changes brought by said installation.
See for instance "How can I find out what modifications a program’s installer makes?" and a tool like ZSoft Uninstaller 2.5.
Once you have determined exactly what files/registry/variables are impacted by the installation process, you can replicate that in a Dockerfile.
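As a rough sketch only (based on the filesystem changes listed above; the kinect-files staging folder and the kinect.reg export are hypothetical, and the registry changes would still have to be discovered and scripted), that Dockerfile could look like:
FROM microsoft/dotnet-framework:4.7
# folder the Kinect runtime installer creates
RUN mkdir C:\Windows\System32\Kinect
# the three x64 DLLs extracted from KinectRuntime-x64.msi, staged next to the Dockerfile
COPY kinect-files/ C:/Windows/System32/
# hypothetical: registry keys exported from a machine where the installer ran
# RUN reg import C:\kinect.reg
ADD . /home/gesture
WORKDIR /home/gesture
As noted above, replicating the files by hand did not succeed on the host, so this sketch only mechanizes the copy step; the registry side still needs investigation.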
Related
I am trying to build bindings to a C library. The API (mostly the sizes of types) differs for each operating system, so I want to put #if MACOS or similar macros in the code to isolate the differences.
I tried adding very specific target frameworks like this: <TargetFrameworks>net6.0-macos12;net6.0-windows7</TargetFrameworks>, which seems to be what I want.
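For reference, the pattern I have in mind (a sketch, not code from my project) is to turn the OS-specific TFMs into compilation symbols in the .csproj and branch on them in the source:
<PropertyGroup Condition="$(TargetFramework.Contains('-macos'))">
  <DefineConstants>$(DefineConstants);MACOS</DefineConstants>
</PropertyGroup>
and then in the code:
#if MACOS
    // macOS-specific type sizes / P/Invoke signatures go here
#else
    // definitions for the other platforms
#endif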
In the Docker image I executed dotnet workload restore and some macOS things were installed, but later when I ran dotnet pack or dotnet build, the command failed with:
/usr/share/dotnet/packs/Microsoft.macOS.Sdk/13.0.784/tools/msbuild/macOS/Xamarin.Shared.targets(1400,3): error MSB4018: The "FindItemWithLogicalName" task failed unexpectedly. [/project/MyProject/MyProject.csproj]
/usr/share/dotnet/packs/Microsoft.macOS.Sdk/13.0.784/tools/msbuild/macOS/Xamarin.Shared.targets(1400,3): error MSB4018: System.DllNotFoundException: Unable to load shared library '/usr/lib/libc.dylib' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: /usr/lib/libc.dylib: cannot open shared object file: No such file or directory [/project/MyProject/MyProject.csproj]
/usr/share/dotnet/packs/Microsoft.macOS.Sdk/13.0.784/tools/msbuild/macOS/Xamarin.Shared.targets(1400,3): error MSB4018: at Xamarin.Utils.PathUtils.realpath(String path, IntPtr buffer) [/project/MyProject/MyProject.csproj]
The question is: how can I build such packages in a Docker container?
I published my C# .NET 5.0 code to Azure Functions (Windows) and I'm getting this weird error message:
2021-06-21T01:56:53.465 [Error] Executed 'Function1' (Failed, Id=fdefdbba-49a7-44ad-8082-841d2941d90b, Duration=169ms)Unable to load DLL 'libgmp-10.dll' or one of its dependencies: The specified module could not be found. (0x8007007E)
I tried to look at the \wwwroot files in the Azure Functions console but then I get this error:
3 [main] ls (8392) C:\Program Files\Git\usr\bin\ls.exe: *** fatal error - Couldn't set directory to \\?\PIPE\ temporarily.
Any hints?
It seems that the deployment is not done correctly.
libgmp-10.dll is a DLL (Dynamic Link Library) file of the kind referred to as essential system files of the Windows OS. It usually contains a set of procedures and driver functions which may be used by Windows.
Please delete the Azure Function, then re-create it and deploy fresh code by following Develop and publish .NET 5 functions using Azure Functions or, if you are using Azure DevOps, Setting up a CI/CD pipeline for Azure Functions.
Let me know if you have any follow up questions.
I have a project developed with .NET Core and C#, running on Docker, that has to call a few functions on a DLL developed with C++.
The problem is: when I run my project without Docker, on Windows using Visual Code, the code runs smoothly, but when I run on Docker, on a Linux container, the code throws an error when trying to execute the DLL function.
I already tried copying the .dll file to the /lib folder and moving it to the parent folder of the project, and none of that worked. I started to wonder whether the problem is that the file is not found and, after doing some research, I saw that it could be related to file permissions, so I ran chmod a+wrx on the .dll file, also without success.
This is my Dockerfile configuration:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 80
RUN apt-get update \
&& apt-get install -y --allow-unauthenticated \
libc6-dev \
libgdiplus \
libx11-dev \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y poppler-utils
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
COPY . .
RUN dotnet restore --configfile Nuget.config -nowarn:msb3202,nu1503
RUN dotnet publish -c Release -o ./out
FROM base AS final
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MdeGateway.dll"]
This is the code that tries to access the DLL function:
[DllImport("MyDll.dll")]
private static extern int dllfunction(Int32 argc, IntPtr[] argv);
public static void CallDll(string[] args)
{
IntPtr[] argv = ArrayToArgs(args);
dllfunction(args.Length, argv);
FreeMemory(args, argv);
}
The error occurs when the line 'dllfunction(args.Length, argv);' is executed.
The exact message is:
"Unable to load shared library 'MyDll.dll' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libMyDll.dll: cannot open shared object file: No such file or directory"
Also, if someone can teach me how to set the LD_DEBUG environment variable I would appreciate it.
I have a project developed with .NET Core and C#, running on Docker, that has to call a few functions on a DLL developed with C++. The problem is: when I run my project without Docker, on Windows using Visual Code, the code runs smoothly, but when I run on Docker, on a Linux container, the code throws an error when trying to execute the DLL function.
If I am reading this right, you have a C++ application that you compiled to a .dll (on Windows). You can DllImport this .dll on Windows, but not on Linux (container). Is that right?
Are you aware that C++ code compiled into a .dll (shared library) is a Windows-specific thing? Unmanaged code is architecture- and platform-specific: an unmanaged .dll compiled for x64 won't run on arm64, and an unmanaged .dll compiled on Windows won't run on Linux.
Linux (and Linux containers, such as in Docker) can't use a .dll built from unmanaged code on Windows. Linux needs the unmanaged (C++) code to be compiled into a shared library (.so file) for DllImport (and the underlying dlopen calls) to work, ideally built on the same platform as the container it will be running in.
The mono docs cover an (one particular) implementation of DllImport and give more background on how this works on Linux:
https://www.mono-project.com/docs/advanced/pinvoke/
(But keep in mind that Mono != .NET Core. It should still give you some more background information.)
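As a rough sketch (assuming you have the C++ sources, here called mydll.cpp, and that dllfunction is exported with C linkage), the native code would be rebuilt for Linux and then loaded by base name, so the runtime can probe for libMyDll.so on Linux and MyDll.dll on Windows:
g++ -shared -fPIC -o libMyDll.so mydll.cpp

[DllImport("MyDll")]  // no extension: the runtime adds .dll on Windows and lib/.so on Linux when probing
private static extern int dllfunction(Int32 argc, IntPtr[] argv);
The resulting .so then needs to end up next to the published output (or in a directory on the loader's search path) in the final image.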
This does not solve the OP's problem, but it helps answer his second question:
Also, if someone can teach me how to set the LD_DEBUG environment variable I would appreciate it.
I am facing a similar issue, and I was also struggling to understand what to do with this LD_DEBUG environment variable. It turns out that it controls the verbosity of the debugging output of Unix's dynamic linker.
Following the advice here, running LD_DEBUG=help cat in a linux terminal will give you all the valid options for setting LD_DEBUG.
Additional useful resources:
Linux Apps Debugging Techniques - The Dynamic Linker
Linux LD.SO man page
Quoting from LD.SO man page mentioned above:
LD_DEBUG (since glibc 2.1)
Output verbose debugging information about operation of
the dynamic linker. The content of this variable is one
or more of the following categories, separated by colons,
commas, or (if the value is quoted) spaces:
help Specifying help in the value of this variable does
not run the specified program, and displays a help
message about which categories can be specified in
this environment variable.
all Print all debugging information (except statistics
and unused; see below).
bindings
Display information about which definition each
symbol is bound to.
files Display progress for input file.
libs Display library search paths.
reloc Display relocation processing.
scopes Display scope information.
statistics
Display relocation statistics.
symbols
Display search paths for each symbol look-up.
unused Determine unused DSOs.
versions
Display version dependencies.
Since glibc 2.3.4, LD_DEBUG is ignored in secure-execution
mode, unless the file /etc/suid-debug exists (the content
of the file is irrelevant).
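For example, to trace the library search while running the OP's entry point (MdeGateway.dll, from the Dockerfile above), something like this can be used; libs is just one of the categories listed above, and LD_DEBUG_OUTPUT redirects the very verbose output to a file (with the process id appended):
$ LD_DEBUG=libs dotnet MdeGateway.dll
$ LD_DEBUG=libs LD_DEBUG_OUTPUT=/tmp/ld_debug dotnet MdeGateway.dll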
I have a Telegram bot written in C# using .NET Core 2.1. Now I am trying to run this application in a Docker container on a Raspberry Pi. During the run I save a configuration file using the System.Runtime.Serialization.Formatters.Binary formatter. According to this question, the binary formatter should be included in .NET Core 2.1, right? But writing with the binary formatter fails with the exception below (reading seems to work, although the file is incomplete since writing always fails):
Exception when writing the modules: System.TypeInitializationException: The type initializer for 'System.Runtime.Serialization.Formatters.Binary.Converter' threw an exception. ---> System.IO.FileNotFoundException: Could not load file or assembly 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. The system cannot find the file specified.
I can run the application on Windows 10, in a container in Docker for Windows 10, and directly on Hypriot Linux on the ARM-based Raspberry Pi. The exception only happens in the Docker container on the Raspberry Pi.
Any idea how I can solve this? Will I have to switch to a different serializer even though the binary formatter is supposed to be included in .NET Core 2.1? I come from Java, where the easiest built-in approach is binary serialization, so I just used it in C# as well.
You can find the whole project in this GitHub project. The file dockerARM contains the Docker configuration for Docker on the Raspberry Pi. If you want to run the application you need a Telegram bot API key. The whole docker-compose build log, including the exception, can be found here. There are a bunch of errors in the dotnet publish step; I can't say I understand them, but the step succeeds.
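If switching serializers does turn out to be necessary, I imagine a JSON-based replacement would look roughly like this (sketch only; Newtonsoft.Json is not currently referenced by the project, and BotConfig is a hypothetical stand-in for the module configuration I persist):
using System.IO;
using Newtonsoft.Json;

// hypothetical configuration type standing in for the persisted modules
public class BotConfig
{
    public string ApiKey { get; set; }
}

public static class ConfigStore
{
    // write the configuration to a JSON file instead of a binary stream
    public static void Save(string path, BotConfig config) =>
        File.WriteAllText(path, JsonConvert.SerializeObject(config, Formatting.Indented));

    // read the configuration back from the JSON file
    public static BotConfig Load(string path) =>
        JsonConvert.DeserializeObject<BotConfig>(File.ReadAllText(path));
}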
In my program I have this simple code:
using System;
using System.Data;
using Mono.Data.SqliteClient;
....
IDbConnection cnx = new SqliteConnection("URI=file:reestr.db");
cnx.Open();
....
And this is how I compile it:
$ mcs Test.cs -r:System.Data.dll -r:mono.data.sqliteclient.dll
It compiles OK. But when I run it with ./Test.exe, I get these error messages:
Missing method .ctor in assembly ....
Unhandled Exception:
System.IO.FileNotFoundException: Could not load file or assembly 'Mono.Data.SqliteClient, Version=2.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or one of its dependencies.
File name: 'Mono.Data.SqliteClient, Version=2.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756'
I'm not sure what I'm doing wrong here and how to repair it.
PS. I'm using Ubuntu as my OS.
It appears that Mono.Data.SqliteClient can not find the native SQLite binaries:
Prerequisites
If you do not have SQLite, download it. There are binaries for Windows and Linux. You can put the .dll or .so alongside your application binaries, or in a system-wide library path.
Ref: http://www.mono-project.com/docs/database-access/providers/sqlite/
To obtain pre-compiled native binaries (or source) for your platform:
http://www.sqlite.org/download.html
Also if you have the SQLite native shared libraries installed, are they available via dlopen? If not, you can assign the LD_LIBRARY_PATH env. var so Mono can find them at runtime.
Linux Shared Library Search Path
From the dlopen(3) man page, the necessary shared libraries needed by the program are searched for in the following order:
1. A colon-separated list of directories in the user's LD_LIBRARY_PATH environment variable. This is a frequently-used way to allow native shared libraries to be found by a CLI program.
2. The list of libraries cached in /etc/ld.so.cache. /etc/ld.so.cache is created by editing /etc/ld.so.conf and running ldconfig(8). Editing /etc/ld.so.conf is the preferred way to search additional directories, as opposed to using LD_LIBRARY_PATH, as this is more secure (it's more difficult to get a trojan library into /etc/ld.so.cache than it is to insert it into LD_LIBRARY_PATH).
3. /lib, followed by /usr/lib.
Ubuntu Notes:
$ sudo apt-get install sqlite
$ ls -1 /usr/lib/libsqlite*
/usr/lib/libsqlite.so.0
/usr/lib/libsqlite.so.0.8.6
$ export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH
$ mono ./Test.exe
I solved the problem on my Mac this way: right-click Mono.Data.Sqlite under References and enable Copy Local. This makes Mono copy the DLL to the debug folder, and your application will find the library.
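If you prefer editing the project file by hand, the usual MSBuild equivalent of Copy Local is marking the reference as Private (a sketch, not taken from the original project):
<Reference Include="Mono.Data.Sqlite">
  <Private>True</Private>
</Reference>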
Note: Sorry for my bad English.