I am using a C# library I found that runs some remote Powershell scripts for Outlook Live. As a proof-of-concept that it works, I made a unit test that I can run (and step through for debugging) which merely calls a public static method inside this library which behind the scenes opens a remote Powershell session and runs a script.
This works just great in one of our Solutions, but it does not work when I run it in another Solution, even though both Solutions contain the same two projects and test classes. One Solution gives me an exception:
There were errors in loading the format data file:
Microsoft.PowerShell, , C:\Users\xxxx.xxxx\AppData\Local\Temp\tmp_8c84e626-4399-420b-b874-9feeb3b1e195_tjlxhzzr.rlr\tmp_8c84e626-4399-420b-b874-9feeb3b1e195_tjlxhzzr.rlr.format.ps1xml : File skipped because of the following validation exception: File C:\Users\xxxx.xxxx\AppData\Local\Temp\tmp_8c84e626-4399-420b-b874-9feeb3b1e195_tjlxhzzr.rlr\tmp_8c84e626-4399-420b-b874-9feeb3b1e195_tjlxhzzr.rlr.format.ps1xml cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details..
One attempt I made was to modify the PSCommand inside the C# library to have Unrestricted set, and that did not solve the problem. However, if I open the x86 Powershell and run Set-ExecutionPolicy Unrestricted, my test will run in both Solutions. Changing the execution policy in the x64 version of Powershell had no effect on either Solution.
Is there some type of permissions setting that is specific to a Solution? Neither the Web.Config nor the Global.asax should matter since I'm not loading any pages, so I don't know what else would take effect, given that I'm running the same unit test in both Solutions with the same test runner (TestDriven.Net).
Any ideas? Thanks.
Running as administrator usually sets the policy in HKLM; otherwise it is set in HKCU.
On a 64-bit OS you need to run Set-ExecutionPolicy for the 32-bit and 64-bit PowerShell hosts separately.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell
To check:
cd "HKLM:\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell"
Get-ItemProperty .
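If changing the machine-wide policy is not an option, you can also scope the policy to just the hosting process from the C# side before invoking the script. This is a minimal sketch, assuming you can reach the library's System.Management.Automation pipeline; Set-ExecutionPolicy -Scope Process affects only that process, regardless of x86/x64, and the script path is hypothetical:

using System.Management.Automation;

// Sketch: bypass the machine execution policy for this process only.
using (var ps = PowerShell.Create())
{
    ps.AddCommand("Set-ExecutionPolicy")
      .AddParameter("ExecutionPolicy", "Bypass")
      .AddParameter("Scope", "Process")
      .AddParameter("Force");
    ps.Invoke();
    ps.Commands.Clear();

    // ...now run the remote script the way the library does.
    ps.AddScript(@"& 'C:\path\to\script.ps1'"); // hypothetical path
    ps.Invoke();
}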
I have a Powershell script that automates the setup of PCs for a company. These PCs are not connected to any network during setup, so the script has to be manually transferred to and started on each PC. The script has thus far been started with the help of a batch file which starts Powershell as admin with execution policy Bypass, and it works as intended.
However, I have created a simple GUI in C# WPF where the operator enters all the relevant information needed for the script to set up the PC. The script is then started from C# by running the batch file, and it works, but there is a problem...
When starting from C#, the script is not allowed to open secpol.msc to change some network policies; it does not recognize secpol.msc as a valid cmdlet. But when the script is started via the batch file, or from the ISE, secpol.msc works. Below is the code used to start secpol.msc.
add-type -AssemblyName System.Windows.Forms
add-type -AssemblyName microsoft.VisualBasic
secpol.msc
I get this error when starting from the C# GUI.
ERROR: An error has occurred [System.Management.Automation.CommandNotFoundException: The term 'secpol.msc' is not recognized as the name of a cmdlet, function, script file, or operable program.
Here is the code for the batch file.
@ECHO OFF
SET ThisScriptsDirectory=%~dp0
SET PowerShellScriptPath=%ThisScriptsDirectory%IPC-Bot-Main.ps1
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""%PowerShellScriptPath%""' -Verb RunAs}"
It does not seem that the script is allowed to change registry keys either.
I could work around the secpol.msc problem by opening gpedit.msc and navigating from there. But since it can't change registry keys either, I have to solve this problem.
I'm suspecting that the script is opened in another scope when run from C# rather than the batch file.
Has anyone encountered this problem before or something similar?
EDIT:
Here is the C# code I use to run the batch file.
string strExeFilePath = System.Reflection.Assembly.GetExecutingAssembly().Location;
string path = System.IO.Path.GetDirectoryName(strExeFilePath);
Process.Start(path + @"\sources\StartAssist.bat");
EDIT:
I think I should clarify: the C# GUI opens and runs just fine on the VM from the executable, and the script also starts as it should, but the script does not have the same capabilities when started from the GUI as it does when run from just the batch file.
Thank you for all the comments and ideas! It made me think in different ways. Much appreciated!
It seems the problem was in the project's Build properties in C#. I unchecked the option "Prefer 32-bit" and apparently that solved the issue.
If anyone has an idea as to why this could cause the issue, I would love to hear it. A likely explanation: with "Prefer 32-bit" checked, the GUI runs as a 32-bit process on 64-bit Windows, and everything it launches inherits WOW64 redirection - C:\Windows\System32 is redirected to SysWOW64, where secpol.msc does not exist, and some HKLM writes are redirected to Wow6432Node.
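For anyone who has to stay 32-bit: on 64-bit Windows a 32-bit process can reach the real System32 through the special Sysnative alias, which exists precisely to bypass the file system redirection described above. A sketch, untested against this exact setup; the script path is the one from the question:

using System;
using System.Diagnostics;
using System.IO;

static class NativeLauncher
{
    // Sysnative is a virtual directory visible only to 32-bit processes on
    // 64-bit Windows; it maps to the real (64-bit) System32.
    static string NativeSystemDir()
    {
        return !Environment.Is64BitProcess && Environment.Is64BitOperatingSystem
            ? Path.Combine(Environment.GetEnvironmentVariable("windir"), "Sysnative")
            : Environment.SystemDirectory;
    }

    // Launch the 64-bit PowerShell elevated, so secpol.msc and HKLM writes
    // resolve natively instead of being redirected to SysWOW64/Wow6432Node.
    public static void StartScript(string scriptPath)
    {
        Process.Start(new ProcessStartInfo
        {
            FileName = Path.Combine(NativeSystemDir(), @"WindowsPowerShell\v1.0\powershell.exe"),
            Arguments = "-NoProfile -ExecutionPolicy Bypass -File \"" + scriptPath + "\"",
            Verb = "runas",        // elevate, as the batch file did
            UseShellExecute = true // required for Verb to apply
        });
    }
}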
I am really at beginner level with cmdlets and PowerShell. I am invoking cmdlets from C# using the PowerShell API, and I have seen some strange behaviour. While in other threads on Stack Overflow people explicitly import modules using Import-Module or the ImportPSModule method, I can see that if a module is available under $env:PSModulePath it is loaded automatically. Is this behaviour the default, or the result of some configuration I am overlooking? I am running unit tests in the MSTest environment.
I am using the following code.
var _powerShellInstance = System.Management.Automation.PowerShell.Create();
_powerShellInstance.AddCommand("Connect-AzureRmAccount", false);
var output = _powerShellInstance.Invoke();
// Invoking my cmdlets
_powerShellInstance.AddCommand("Get-LinkParameter", false);
The above command automatically loads the assembly from C:\Windows\System32\WindowsPowerShell\v1.0\Modules\. I have not created any runspace and have no session configuration; it just loads automatically. I need to confirm exactly how PowerShell and the runspace behave here, because I need to know how to install my cmdlets on a production machine, and how unit tests on that machine will find and load them.
While it is good practice to explicitly load the modules you want using Import-Module, since Powershell 3.0, if a module is available at one of the locations listed in $env:PSModulePath, it is automatically loaded when one of its cmdlets is invoked. Below is a breakdown of the different paths:
User Modules
$modulePath = "${env:UserProfile}\Documents\WindowsPowerShell\Modules"
Modules installed here are only available to the current user's Powershell session, and by default modules installed using Install-Module are saved here.
All User Modules
$modulePath = "${env:ProgramFiles}\WindowsPowerShell\Modules"
Modules installed here are available in any user's Powershell session.
System Modules
$modulePath = "${env:SystemRoot}\system32\WindowsPowerShell\v1.0\Modules"
Modules installed here are available system wide to any Powershell session, but this location should be left for Windows to manage. Typically, you do not want to install your own modules here.
Adding Additional Module Paths
You can add additional paths to $env:PSModulePath similarly to how you would modify the $env:PATH variable for resolving executable paths. It is simply a semicolon (;) delimited string of directories where modules reside, and if a module is available at any path in $env:PSModulePath, Powershell will know where to find it. In fact, you may see that other installed tools have added their own paths to $env:PSModulePath. A few examples of programs/toolsets which do this are Microsoft SQL Studio, Microsoft System Center - Operations Manager, and the Chef Development Kit.
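In a C# host like the one in the question, the same idea works if you edit the process's copy of the variable before creating the PowerShell instance; auto-loading will then discover cmdlets in your own directory too. A minimal sketch, where C:\MyProduct\Modules is a hypothetical path:

using System;
using System.Management.Automation;

// Sketch: expose a custom module directory to auto-loading, for this
// process only. The directory path is made up for illustration.
var paths = Environment.GetEnvironmentVariable("PSModulePath");
Environment.SetEnvironmentVariable("PSModulePath", paths + @";C:\MyProduct\Modules");

using (var ps = PowerShell.Create())
{
    // Auto-loads from C:\MyProduct\Modules\<ModuleName> just like the
    // standard locations, with no Import-Module needed.
    ps.AddCommand("Get-LinkParameter");
    var results = ps.Invoke();
}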
Importing a Module Not On the Path
Command auto-loading only works for modules under $env:PSModulePath, although Import-Module itself will also accept a full path to a module. Alternatively, you can temporarily edit $env:PSModulePath to contain a directory with the module you want to load. For example, if you wanted to import a module named TestModule from some arbitrary path:
$env:PSModulePath += ';C:\Path\To\Temporary\ModuleDirectory'
Import-Module TestModule
where TestModule exists as a sub-folder of C:\Path\To\Temporary\ModuleDirectory
You do not need to back out the module path change when your Powershell session ends, as the change above is temporary. Consequently, you would need to modify $env:PSModulePath in every session; so if TestModule is something you want available at all times, either copy it to one of the other directories in $env:PSModulePath or permanently add C:\Path\To\Temporary\ModuleDirectory to the PSModulePath environment variable.
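If you are hosting Powershell from C#, there is also InitialSessionState.ImportPSModule, which accepts module names or full paths and preloads them into every runspace created from that session state; a sketch reusing the TestModule path above:

using System.Management.Automation;
using System.Management.Automation.Runspaces;

// Sketch: preload a module from an arbitrary path into the session state,
// instead of editing PSModulePath.
var iss = InitialSessionState.CreateDefault();
iss.ImportPSModule(new[] { @"C:\Path\To\Temporary\ModuleDirectory\TestModule" });

using (var ps = PowerShell.Create(iss))
{
    ps.AddCommand("Get-Command").AddParameter("Module", "TestModule");
    foreach (var cmd in ps.Invoke())
        System.Console.WriteLine(cmd);
}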
A Note About UNC Paths
You can also add UNC (network) paths to $env:PSModulePath. However, I believe any remote module scripts will still be subject to the Powershell ExecutionPolicy set on the system.
And a Bit More About Installing Modules
By default, Install-Module installs a module to the User Modules directory, but you can control this a bit with the -Scope parameter. For example, the following commands will change the location a module is installed into:
# Install to the current user location (default behavior if scope unspecified)
Install-Module -Scope CurrentUser $moduleName
# Install to the all users location (requires elevated permissions)
Install-Module -Scope AllUsers $moduleName
Unfortunately, those are the only two locations PowerShell will assist you in installing modules to. The System modules are critical to the operation of PowerShell and should not be modified by end users, and additional paths added to $env:PSModulePath are likely managed by software outside of PowerShell, typically by MSIs or other installers.
Additionally, if you write software that ships with one or more PowerShell modules, it's good practice to have your installer add a new directory to the system's %PSModulePath% and drop the module there, rather than installing to the standard AllUsers or CurrentUser paths, as those are really meant for the end user to manage at their whim. Have your software update process update the module in this case. This has the benefit of preventing the module from being accidentally modified, removed, or replaced with an incompatible version.
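A custom action in an installer might register such a directory machine-wide like this (a sketch; the install location is hypothetical, writing the Machine target requires elevation, and already-running processes will not see the change until restarted):

using System;
using System.Linq;

class RegisterModulePath
{
    // Sketch: permanently append a product-specific module directory to the
    // machine-wide PSModulePath, skipping it if it is already present.
    static void Register()
    {
        const string dir = @"C:\Program Files\MyProduct\Modules";
        var current = Environment.GetEnvironmentVariable(
            "PSModulePath", EnvironmentVariableTarget.Machine) ?? "";
        if (!current.Split(';').Contains(dir, StringComparer.OrdinalIgnoreCase))
            Environment.SetEnvironmentVariable(
                "PSModulePath", current.TrimEnd(';') + ";" + dir,
                EnvironmentVariableTarget.Machine);
    }
}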
I have a simple application using a product activation system offered by cryptlex (cryptlex.com).
The program works correctly on my computer, but when I try to run the program on another machine it returns this error:
I've already made sure that the DLL is inside the executable's folder, and everything looks OK.
When I remove all the cryptlex parts, the program works perfectly on any machine (x86/x64).
I used Dependency Walker to check for errors and found these two in the executable that uses cryptlex:
Windows 7 64-bit,
.NET Version: 4.0
You can use Process Monitor to record all file activity of the program. Set a filter for your executable. After reproducing the error, save the log as an XML file.
Then run ProcMon Analyzer (note: I'm the author of it). It will analyze the file and give a list of DLLs that were not found.
You could also do that manually, but note that some DLLs may not be found at first, but later be found when looking in the %PATH% environment variable etc. The tool will remove all those entries which have PATH NOT FOUND first but SUCCESS later.
While the DLL is present, have you checked the bitness?
Most C# projects default to building against Any CPU - if the DLL is specific to a bitness (i.e. x86 or x64) then the program may pick the wrong bitness on end machines (usually x86) but the right one on your machine (x64). This is usually best resolved by building separate x86 and x64 versions; it's messier, but only .NET itself is good at using the Any CPU paradigm.
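One quick way to verify this on the failing machine is to log the bitness right before the first call into the native DLL (a throwaway diagnostic, nothing cryptlex-specific):

using System;

// A 32-bit process needs the x86 native DLL; a 64-bit process the x64 one.
Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
Console.WriteLine("Pointer size:   " + (IntPtr.Size * 8) + "-bit");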
The exception should have details about which DLL in particular was not found - maybe look closer?
GPSVC and IESHIMS missing should not be a problem; as indicated by the hourglass, they're deferred dependencies anyway.
I have a C# project that generates an EXE file. Now, I'm in a "secure" corporate environment, where I can compile my project but cannot execute the EXE file.
As a Java programmer, I'm wondering whether there is a way to compile the C# project into something that is not an EXE file, but a CIL file, and then execute the CIL file with something that corresponds to java.exe in the .NET world.
EDIT in response to comments:
I can run exe files that have been installed by a package manager
Yes, I know the corporate policy is stupid.
Well, this should be pretty easy.
.NET executables are simply DLLs like any other - the main difference being the executable format itself, and the fact that EXE files have an entry point, while DLLs don't.
It also means that you can load the EXE into memory exactly the same way as you would with a DLL:
Assembly.LoadFrom("SomeExe.exe");
You're already half way there - now we just need to find and execute the entry point. And unsurprisingly, this is also pretty trivial:
var assembly = Assembly.LoadFrom("SomeExe.exe");
assembly.EntryPoint.Invoke(null, null);
For most applications, this should work perfectly fine; for some, you'll have to make sure the thread you're using to invoke the entry point is STA or MTA respectively (Thread.TrySetApartmentState if you're starting a new thread).
It might need tweaking for some applications, but it shouldn't be too hard to fix.
So you can just make some bootstrap application ("interpreter") that only really contains these two lines of code. If you can't get even that approved, and you really need something as an "official package", try some .NET application that allows you to execute arbitrary code - for example, LINQPad, or PowerShell.
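Putting it together, a minimal bootstrapper might look like this (a sketch; the STA choice and the argument pass-through are assumptions to adapt per application):

using System;
using System.Linq;
using System.Reflection;
using System.Threading;

class Bootstrapper
{
    static void Main(string[] args)
    {
        // args[0] is the target EXE; anything after it is forwarded.
        var assembly = Assembly.LoadFrom(args[0]);
        var entryPoint = assembly.EntryPoint;

        var thread = new Thread(() =>
        {
            // The target's Main may take no parameters or a string[].
            var parameters = entryPoint.GetParameters().Length == 0
                ? null
                : new object[] { args.Skip(1).ToArray() };
            entryPoint.Invoke(null, parameters);
        });
        thread.SetApartmentState(ApartmentState.STA); // many GUI apps expect STA
        thread.Start();
        thread.Join();
    }
}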
EDIT:
This does have limitations, of course, and it does introduce some extra setup work:
The bootstrapper has to target the same or higher version of .NET Framework. .NET Portable might be particularly tricky, but I assume you have that well under control. It also has to have the same bitness (if specified explicitly).
You need to run the debugging through the bootstrapper. That actually isn't all too hard - just go to project properties, debug and select "Start external program".
The bootstrapper has to run under full trust conditions - it's necessary for reflection to work. On most systems, this simply means you have to have the exe as a local file (e.g. not from a network share). Tools like LINQPad will run under full trust by default.
The application must not depend on Assembly.GetEntryAssembly. This isn't used all that often, so it shouldn't be a problem. Quite a few similar issues are also non-issues here, since you're building the application you're trying to run yourself.
I am running the HelloMvc sample application from the command line using k web. I have tried to run it using the different environments available to me using kvm use -runtime. When I change the controller and hit F5 (or Ctrl+F5) in the browser, the code is not recompiled automatically and the page does not change. What am I doing wrong?
Active Version       Runtime Architecture
------ -------       ------- ------------
       1.0.0-alpha3  svr50   x86
       1.0.0-alpha3  svrc50  x86
       1.0.0-alpha4  CLR     x86
  *    1.0.0-alpha4  CoreCLR x86
Running dnx web from your command line only starts your host. To get the automatic recompilation goodness something needs to watch the files for changes and restart your host if any changes are detected. To accomplish this use the --watch flag and run your web command like this:
dnx --watch web
Currently this just shuts down your host when a change is detected, so you need something that restarts it once that happens. IIS Express does this for you if you run your project from Visual Studio 14.
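If you are not running under IIS Express, the restart loop is trivial to write yourself; a C# sketch of the idea (the dnx command line is the one from above):

using System;
using System.Diagnostics;

class WatchLoop
{
    static void Main()
    {
        while (true)
        {
            // With --watch, the host exits as soon as a source file changes,
            // so restarting it in a loop gives continuous recompilation.
            var host = Process.Start(new ProcessStartInfo
            {
                FileName = "dnx",
                Arguments = "--watch web",
                UseShellExecute = false
            });
            host.WaitForExit();
            Console.WriteLine("Change detected - restarting...");
        }
    }
}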
Your best bet for this workflow outside of Visual Studio is a JavaScript build tool or npm scripts. I would recommend looking into the gulp-aspnet-k plugin if you want continuous recompilation on file changes while working outside of VS14; it seems to be the best way to accomplish that without IIS Express that I have found. The plugin is/was Windows-specific, but looking at the code should get you started. :)
Glenn F. Henriksen has written a very nice wrapper for nodemon, called kmon. Try that out as well; the kmon GitHub repository has all the instructions you need.
Based on the gulp plugin linked to by AndersNS, there's a bit of powershell you can use to automatically restart the application:
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "for(;;) { Write-Output \"Starting...\"; k --watch web }"
If you stick this into a batch file (e.g. run.cmd) you can easily start the application, keep it running and automatically restart and rebuild on file changes.
Make sure you adjust the k command line if you want to use a target other than web.