In my .NET Core web API, I'm using the OpenHtmlToPdf NuGet package to render HTML documents to PDF. The implementation works fine locally, but not in the production Kubernetes environment, where I'm getting the following message in the logs:
Permission denied
After several workarounds I was able to find out that the OpenHtmlToPdf library uses Path.GetTempPath() and Guid.NewGuid() to create a temp file, and the permission denied error seems to occur when it tries to access that temp path. This is the relevant snippet from the library (see the OpenHtmlToPdf git repo):
// inside TemporaryPdf class
public static string TemporaryFilePath()
{
    return Path.Combine(Path.GetTempPath(), "OpenHtmlToPdf", TemporaryPdf.TemporaryFilename());
}

private static string TemporaryFilename()
{
    return Guid.NewGuid().ToString("N") + ".pdf";
}
To determine the temp path, I added the following line to my application, and it prints File path is: /tmp/.
Console.WriteLine("File path is:" + Path.GetTempPath());
But in the Kubernetes pods, the /tmp/ folder has rwx permissions for all users.
My question is: after the API is deployed, do the NuGet packages use a separate temp path? How exactly can I identify it?
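One quick way to check whether the pod's effective user can really write where the library writes is a small probe at startup. This is only a diagnostic sketch (assuming System and System.IO are imported); the subdirectory name mirrors the library snippet above and the probe file is throwaway:
// Diagnostic sketch: try to create and write to the same temp subdirectory
// that OpenHtmlToPdf combines from Path.GetTempPath().
var tempDir = Path.Combine(Path.GetTempPath(), "OpenHtmlToPdf");
try
{
    Directory.CreateDirectory(tempDir);
    var probeFile = Path.Combine(tempDir, Guid.NewGuid().ToString("N") + ".pdf");
    File.WriteAllText(probeFile, "probe");
    File.Delete(probeFile);
    Console.WriteLine($"Temp dir '{tempDir}' is writable.");
}
catch (Exception ex)
{
    Console.WriteLine($"Cannot write to '{tempDir}': {ex}");
}
If this probe fails inside the pod but succeeds locally, the problem is the container's permissions on /tmp rather than anything specific to the NuGet package.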
Related
I am using an Azure Function App v3 and I am reading an Excel file from the Data folder in the same project.
I have set Copy always for the Excel file and it's been copied into the bin folder.
So in code I'm referring to the path as
var binDirectory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
but after publishing, the Data folder is outside the bin folder and my code is unable to find the Excel file.
Path of my Excel file after publish:
C:\home\site\wwwroot\Data
Path my code is looking at:
C:\home\site\wwwroot\bin\Data
Kudu Path
Is there a generic way to make the path work both locally and after publishing to Azure?
Any help is appreciated!! Thanks in advance.
Try adding an ExecutionContext context parameter to your function method, then use the code below:
public static void Run(/* other parameters */, ExecutionContext context)
{
    string templatePath = Path.Combine(context.FunctionAppDirectory, "Data");
}
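Applied to the Excel file from the question, the same idea would look roughly like this; the function name, trigger, and workbook file name are placeholders, since the question doesn't give them:
[FunctionName("ReadExcelFromData")]
public static void Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
                       ExecutionContext context, ILogger log)
{
    // context.FunctionAppDirectory is the deployed function app root, so the
    // "Data" folder resolves both locally and after publishing to Azure.
    string excelPath = Path.Combine(context.FunctionAppDirectory, "Data", "MyWorkbook.xlsx"); // placeholder name
    log.LogInformation("Reading Excel from {path}", excelPath);
}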
I'm having some issues trying to access a folder outside the application root folder with an ASP.NET application deployed on Linux.
The application is deployed at the moment (for testing purposes) in /home/pbl/projects/pbl-web/.
I want to access a folder on an external hard drive I've mounted on the system; it's located at /mnt/ExtDist/Data.
I'm using a PhysicalFileProvider with the following path: /mnt/ExtDist/Data. This path is configured in the app.settings.json file and retrieved through the IConfiguration configuration variable.
Here's a part of the code in the Startup.cs file:
public void ConfigureServices(IServiceCollection services)
{
    ...
    var imagePath = configuration.GetSection("PhotoManagementSettings")["ImagesFolderPath"];
    var rootPath = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location);
    this.imagePhysicalFileProvider = new PhysicalFileProvider(imagePath);
I've tried different approaches with no luck so far:
passing the absolute path
passing the relative path and combining it with the rootPath variable (see the code above).
The PhysicalFileProvider is getting me the following error:
Unhandled exception. System.IO.DirectoryNotFoundException: /mnt/ExtDist/Data/
Testing the code on Windows and giving it an absolute path such as "C:\Test" works fine.
So there's something weird on Linux that is failing, but I cannot understand why. Any clues?
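To narrow it down, I could log what the process actually sees before constructing the provider; a quick diagnostic sketch along these lines (not a fix):
// Diagnostic sketch: confirm the configured path exists and is readable
// from the app's point of view before handing it to PhysicalFileProvider.
Console.WriteLine($"Configured image path: '{imagePath}'");
Console.WriteLine($"Directory.Exists: {Directory.Exists(imagePath)}");
if (Directory.Exists(imagePath))
{
    foreach (var entry in Directory.EnumerateFileSystemEntries(imagePath))
        Console.WriteLine(entry);
}
this.imagePhysicalFileProvider = new PhysicalFileProvider(imagePath);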
Thanks in advance
Paolo
I have created an Azure Function which has a dependency on an ini file.
public class DataProcessingFunction
{
    [FunctionName("DataProcessingFunction")]
    public async Task Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        string iniFolderPath = $@"{Directory.GetCurrentDirectory()}\Ini\";
        string iniFileName = "Sample.ini";
        var iniConfig = FileManager.ReadFile(iniFolderPath, iniFileName);
    }
}
I have selected the Copy if newer option in Visual Studio while publishing the code to the Azure Function.
I have also tried selecting Embedded Resource, but I am not able to find the file.
I get an exception
File not found.
The Add/Upload option in the Azure portal is disabled because I am publishing the function from Visual Studio.
Question: Do I need to upload the file to blob storage and then refer to it in the code?
The final conclusion is that the file was successfully uploaded; the problem is that an error occurred while reading the path. It seems that using Directory.GetCurrentDirectory() in Azure is not reliable.
I just tried it and Directory.GetCurrentDirectory() returned the wrong path in Azure (I printed it out and it showed "D:\Program Files (x86)\SiteExtensions\Functions\2.0.12961\32bit", which is obviously not the current folder), and eventually it fails to find the Sample.ini file. Since the function is your own, you can set the path to something like "D:\\home\\site\\wwwroot\\Ini\\Sample.ini". This should read the Sample.ini file.
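If hardcoding the drive letter feels fragile, one alternative sketch derives the same path from the HOME environment variable, which Azure App Service and Azure Functions normally set to the site root (treat the exact layout as an assumption for your hosting plan):
// Sketch: build the path from the HOME environment variable (typically D:\home
// on Windows plans) and fall back to the current directory for local runs.
string home = Environment.GetEnvironmentVariable("HOME");
string root = string.IsNullOrEmpty(home)
    ? Directory.GetCurrentDirectory()
    : Path.Combine(home, "site", "wwwroot");
string iniPath = Path.Combine(root, "Ini", "Sample.ini");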
Here is the way to do it.
ExecutionContext context; // You can modify your function method to take an additional parameter of type ExecutionContext
public static async Task Run(/* other parameters */, ILogger log, ExecutionContext context)
Then build the path by combining the function app directory: "Templates" is a directory in the function app project that is deployed along with the rest of the code, and emails.html is a file within Templates (for each of these files you have to set Copy always or Copy if newer in the properties). Instead of Templates you can have your .ini file.
string templatePath = Path.Combine(context.FunctionAppDirectory, "Templates", "emails.html");
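Applied to the question's ini file, the same pattern would look roughly like this; FileManager.ReadFile is the helper from the question, and the folder name assumes the Ini folder is published alongside the code with Copy always / Copy if newer set:
[FunctionName("DataProcessingFunction")]
public async Task Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log, ExecutionContext context)
{
    // Resolve the Ini folder from the deployed function app root instead of
    // Directory.GetCurrentDirectory(); the trailing separator is kept in case
    // FileManager concatenates the folder and file name as plain strings.
    string iniFolderPath = Path.Combine(context.FunctionAppDirectory, "Ini") + Path.DirectorySeparatorChar;
    var iniConfig = FileManager.ReadFile(iniFolderPath, "Sample.ini");
}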
I have an SSIS package, zip.dtsx. This package runs successfully on serverA. I copied the package to serverB; however, when I try to run zip.dtsx on serverB, it fails.
zip.dtsx just reads a file in a source folder, compresses it, saves the compressed file to a different folder, then deletes the original file in the source folder.
After some investigation, I figured out that if I comment out the part of the C# script task that deletes the file in the source folder, the package runs successfully.
I need to delete the file in the source folder; otherwise, the file will just be repeatedly loaded into the database. I've already re-added the script task references as suggested here, but I still cannot get the File.Delete to run successfully.
public void Main()
{
    String sourcePath = Convert.ToString(Dts.Variables["SourcePath"].Value);
    String namePart = Convert.ToString(Dts.Variables["NamePart"].Value);
    String destinationPath = Convert.ToString(Dts.Variables["DestinationPath"].Value);

    FileStream sourceFile = File.OpenRead(sourcePath + namePart);
    FileStream destFile = File.Create(destinationPath + namePart);
    GZipStream compStream = new GZipStream(destFile, CompressionMode.Compress);

    try
    {
        int theByte = sourceFile.ReadByte();
        while (theByte != -1)
        {
            compStream.WriteByte((byte)theByte);
            theByte = sourceFile.ReadByte();
        }
    }
    finally
    {
        compStream.Dispose();
        sourceFile.Close();
        destFile.Close();
        File.Delete(sourcePath + namePart);
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
UPDATE:
After trying the exact same code here and finding that it did delete my file in the source folder, I updated my code to follow the way the file was deleted in the link. However, it still did not work. Below is how I updated my code.
String sourcePath = Convert.ToString(Dts.Variables["SourcePath"].Value);
String namePart = Convert.ToString(Dts.Variables["NamePart"].Value);
String destinationPath = Convert.ToString(Dts.Variables["DestinationPath"].Value);

FileStream sourceFile = File.OpenRead(sourcePath + namePart);
FileStream destFile = File.Create(destinationPath + namePart);
GZipStream compStream = new GZipStream(destFile, CompressionMode.Compress);

try
{
    int theByte = sourceFile.ReadByte();
    while (theByte != -1)
    {
        compStream.WriteByte((byte)theByte);
        theByte = sourceFile.ReadByte();
    }
}
finally
{
    compStream.Dispose();
    sourceFile.Close();
    destFile.Close();
    FileInfo currFileInfo = new FileInfo(sourcePath + namePart);
    currFileInfo.Delete();
}
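As a side note, the same copy/compress/delete flow can be written with using blocks so every stream is guaranteed to be closed before the delete runs. This is only a sketch of an alternative structure; as the answer below explains, the root cause in this case turned out to be folder permissions rather than the code itself:
public void Main()
{
    string sourcePath = Convert.ToString(Dts.Variables["SourcePath"].Value);
    string namePart = Convert.ToString(Dts.Variables["NamePart"].Value);
    string destinationPath = Convert.ToString(Dts.Variables["DestinationPath"].Value);

    string sourceFile = Path.Combine(sourcePath, namePart);
    string destFile = Path.Combine(destinationPath, namePart);

    // The using blocks dispose the streams (and flush the gzip footer) before File.Delete runs.
    using (FileStream input = File.OpenRead(sourceFile))
    using (FileStream output = File.Create(destFile))
    using (GZipStream compStream = new GZipStream(output, CompressionMode.Compress))
    {
        input.CopyTo(compStream);
    }

    File.Delete(sourceFile);

    Dts.TaskResult = (int)ScriptResults.Success;
}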
I've finally figured this out.
To explain the whole situation: we have a fully running SQL Server, serverA. We want to duplicate it on serverB so that we can have a test environment on serverB. The necessary databases are already restored on serverB; all that's left are the SSIS packages and SQL Server Agent jobs.
The main package (main.dtsx) that loads the contents of the files to the DB was failing.
From Integration Services Catalog -> Catalog folder -> right click -> Reports -> All Executions, I learned that it is failing in Zip.dtsx, which is called from main.dtsx. Zip.dtsx compresses the file that was accessed, archives it, and deletes it from the source folder.
After playing around with the script task in Zip.dtsx (an idea I got from Kannan Kandasamy's comments), I figured out that it's the File.Delete() call where my script task fails. Instantly, one would think that it is a permission issue.
My first mistake was that I kept playing with the script task while executing Zip.dtsx via right click -> Execute Task in Visual Studio. I kept getting the runtime error screencaptured in my previous post without realizing that I was getting it because I was using a package variable passed by main.dtsx to Zip.dtsx. I got hung up on this until I figured out why the script task runs successfully when I replace the variables with my hardcoded path names.
My second mistake was to replace the package variables of Zip.dtsx with my hardcoded paths. Eventually I realized that the folder accessed by Zip.dtsx is a local folder on serverB, while I was running the SQL Server Agent jobs from my local machine, say machineA. So I shared serverB's local folders with my user account. For some reason, that messed up my package badly: it no longer saw the files in the folder, so the package "succeeded" only because it found the folder empty.
The main solution:
I reverted the changes I made to Zip.dtsx, removed the sharing of serverB's local folders with my user account, and instead gave NT Service\SQL$Instance full control over the source folder where the script task deletes the files. See this link to check how to add the SQLServer$Instance account to the folder permission settings.
Other issues I encountered are:
When transferring packages between different servers and the packages fail, I think this is also important. I did this step as well, but on serverB I couldn't find Microsoft.SQLServer.ManagedDTS.dll, so I copied the dll from serverA to serverB. Also check your system path to see which version of the Microsoft SQL Server tools is used by default and make sure that is what you reference in your script task.
At some point while investigating, I encountered an error when viewing the Integration Services Catalog's All Executions report. I solved this by going to the C:\Temp folder: when I double-clicked to open it, a dialog appeared saying I currently do not have access, and I continued by clicking the 'Continue' button in the dialog. After that, the All Executions report was working again.
The error I got from SSMS is,
TITLE: Microsoft SQL Server Management Studio

An error occurred during local report processing. (Microsoft.ReportViewer.WinForms)

------------------------------
ADDITIONAL INFORMATION:

The definition of the report '' is invalid. (Microsoft.ReportViewer.Common)

An unexpected error occurred while compiling expressions. Native compiler return value: '[BC2001] file 'C:\Windows\TEMP\mpgc21o3.0.vb' could not be found'. (Microsoft.ReportViewer.Common)

------------------------------
BUTTONS:
OK
I'm not sure how much of this really mattered in my case, but I'm sharing it in case someone runs into the same problem. I am a beginner in MS SQL Server, so I think this might help someone who is also a beginner like me.
I'm probably being blind, but this has me stumped. The following unit test fails on my local machine (but not on our build server) with a System.UnauthorizedAccessException.
[TestMethod()]
public void UserPreferences_LocalApplicationDataPath_PathAlwaysExists()
{
    // setup: get the proposed path and then delete the bottom folder and any existing files
    string logPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
    string actual = System.IO.Path.Combine(logPath, @"Autoscribe\Matrix\Gemini");

    // if the folder is there, first delete it
    if (Directory.Exists(actual))
    {
        // log some information about what is going on
        Console.WriteLine("Attempting to delete " + actual + " as user " + Environment.UserName);
        Directory.Delete(actual, true); // <- THROWS EXCEPTION HERE
        Console.WriteLine("Deleted");
    }

    // action
    actual = UserPreferencesManager.LocalApplicationDataPath;

    // assert that getting the path forces it to be created.
    Assert.IsTrue(Directory.Exists(actual));
}
The reported Environment.UserName value is the Windows user who 'owns' the local AppData folder. Why can't the folder be deleted? Oddly, I don't have this problem on all of the machines the test runs on (all Windows 7).
You may not be able to delete the folder locally because you still have other applications running that access files in it. Maybe you have Notepad open to check a log file in that folder?
The build server does not have these problems because there are no users or processes actually "working" on the machine.
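If the blocker is a stray handle from the test run itself (a logger still writing, an open editor, an antivirus scan), one workaround is to retry the delete a couple of times before giving up. This helper is only a sketch and is not part of the original test:
// Sketch: retry Directory.Delete in case another process briefly holds a handle
// inside the folder; the last attempt lets the exception propagate so the test still fails loudly.
private static void DeleteDirectoryWithRetry(string path, int attempts = 3)
{
    for (int i = 0; i < attempts; i++)
    {
        try
        {
            if (Directory.Exists(path))
                Directory.Delete(path, true);
            return;
        }
        catch (UnauthorizedAccessException) when (i < attempts - 1)
        {
            System.Threading.Thread.Sleep(500); // give whatever holds the handle a moment to let go
        }
        catch (IOException) when (i < attempts - 1)
        {
            System.Threading.Thread.Sleep(500);
        }
    }
}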