How can I access a "local" path in an Azure Function? - c#

I have written an Azure Function to create leads in Zoho CRM. I've gotten great guidance from members here, and I have one last hurdle to get over. The Zoho API writes to a location it calls the Resource Path, which has to be a local path. For example, running the function in VS, I can use a path to My Documents but not a system path to the temp folder or any other location. I've tried several system paths, all of which throw an error. The actual error is a generic API error, but Zoho support has told me it's an access issue.
string path = context.FunctionAppDirectory;   // rejected with the generic API error
string path = System.IO.Path.GetTempPath();   // rejected with the generic API error
In the last conversation I had with support on this, they explained it as follows:
However, upon having further discussions based on your questions with my development team, I was told that it is mandatory for the resource path to be a local path (Documents path in our case) and that it is not possible to retrieve a file stored on cloud and use it with the local SDK.
So I THINK my question is: can I create something that looks like a local path to use in my Azure Function? I was reading about mounting an Azure file share as a drive letter (https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows) but it seems to only work with Azure Windows servers rather than Azure Functions.
Any suggestions are appreciated.

Using the temp directory is usually the way to go for most scenarios, but I'm unsure why that doesn't work with the Zoho API.
An option that you could try is to write data to %HOME% or $HOME on Windows or Linux plans, respectively. If you are on Linux, you could also consider mounting a file share.
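As a rough illustration of that idea, here is a minimal sketch (the helper name and the data\zoho subfolder are made up, and it assumes an in-process C# function where %HOME% is set by the App Service host; locally in VS it falls back to the temp path):

using System;
using System.IO;

public static class LocalPathHelper
{
    // Returns a directory inside the Function App that behaves like a local path.
    public static string GetWritableDirectory()
    {
        string home = Environment.GetEnvironmentVariable("HOME");
        string root = string.IsNullOrEmpty(home)
            ? Path.GetTempPath()                    // running locally, e.g. debugging in VS
            : Path.Combine(home, "data", "zoho");   // e.g. D:\home\data\zoho on a Windows plan
        Directory.CreateDirectory(root);            // make sure the folder exists
        return root;
    }
}

Whether the Zoho SDK will accept a path like that is exactly the open question here, so treat it as something to try rather than a confirmed fix.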

I got on Microsoft Q&A to ask and was told it wasn't possible in Azure Functions. As you indicate, it would have to be on some kind of server hosting with a drive letter mounted.

Related

Create custom code to work around missing rename action on Azure Logic App FTP Connector

We are converting an existing C# console app to an Azure solution.
Pretty basic need:
Store files as blobs that need to be FTP'd to a client.
Insert messages into a Service Bus Topic with a JSON format for file name etc.
Have a logic app subscribe to the topic and FTP the file.
The FTP server is our customer's and I cannot change its setup.
It starts processing files as soon as they land there and match a certain naming style.
In the past, all I did was put temp_ at the front of the file name, create the file, then rename it.
Perfect.
We already have this working using standard .NET code, but now I am trying to get it to work using an Azure Logic App as detailed above.
I am using the 'Create File' action under the FTP connector. If I create the file using its proper name, it fails because the FTP server grabs the file before Azure is done creating it.
There is no rename functionality in the FTP connector -- can somebody please tell me I am missing something??
Joe
UPDATE 5-24-18
From Microsoft Tech Support:
After looking further into this, this is not going to be doable using the out-of-the-box FTP connector.
The Logic Apps workaround would be to do this from custom code. You could use the FTP connector's Create File task to create the file on the FTP server using the temporary name. Then, you could call a custom Azure Function that would log into the FTP server and rename the file. This would require you to create a custom function.
See the following link for more details on calling Azure Functions from Logic Apps:
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-azure-functions
I am going to attempt to do this and, if I can get it to work, post it as an answer.
If anybody wants to take a swing at helping me that would be great!
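In case it helps, here is a bare-bones sketch of what such a rename function might look like, using an HTTP-triggered C# function and FtpWebRequest (the host name, credentials, and query parameter names are placeholders, not real values):

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class RenameFtpFile
{
    [FunctionName("RenameFtpFile")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req, ILogger log)
    {
        string oldName = req.Query["oldName"];   // e.g. temp_myfile.txt
        string newName = req.Query["newName"];   // e.g. myfile.txt

        // Point the request at the existing (temporary) file and ask the server to rename it.
        var request = (FtpWebRequest)WebRequest.Create($"ftp://ftp.example.com/{oldName}");
        request.Method = WebRequestMethods.Ftp.Rename;
        request.Credentials = new NetworkCredential("ftpUser", "ftpPassword");
        request.RenameTo = newName;

        using (var response = (FtpWebResponse)await request.GetResponseAsync())
        {
            log.LogInformation($"Rename completed: {response.StatusDescription}");
        }

        return new OkResult();
    }
}

The Logic App would then call this function (via the Azure Functions action described in the link above) right after the FTP connector's Create File step.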
The bigger question I guess here is "why"?
Why go through all this? Why not just use a C# function right from the beginning? The answer is partially just to have the experience of using Service Bus / Topics, etc. Plus, if they ever DO update their FTP connector, it will be an easy fix then. But I am still open to other answers or options.
BTW - please vote for that change here if you think it would be helpful:
https://feedback.azure.com/forums/287593-logic-apps/suggestions/19499953-add-rename-action-to-sftp-ftp-storage-etc-connect

Map Google Drive to Network Drive or Local Drive

I am attempting to connect to my Google Drive using C# and the Google Drive API and then map that as a network or local drive. There are other programs I know that do this like NetDrive (which is extremely useful and robust), but I am looking to create something on my own. I have created a project in the developer console and have been able to connect to Drive using my application and do various read and upload operations, so I know that particular portion is ok. Access and permissions all seem to be set. I just have no idea where to start when it comes to mapping that storage as a usable drive in Windows. Any advice would be most helpful, thank you very much!
There are two basic components for implementing a NetDrive/WebDrive type of solution. What you are looking at is the creation of an Installable File System and Network Provider.
The Network Provider, or NP, is the user-mode component that handles the network layers, including the mapping and unmapping of the drive letter, along with lots of other fairly complicated UNC/network stuff. To get an idea of what you are in for, check out the Win32 WNet*() API; you will need to implement all of the WNet*() calls specifically for your IFS and 'network'.
When you are done, you'll probably have the ability to do a "net use \\MyWebDrive\" in DOS and Map Network Drive in Windows Explorer. You might also be able to use Windows Explorer to enumerate the contents of the remote file system.
However, now you need to make sure that all third-party applications can access your network drive...to do that, you want to implement the Win32 file system API, such as CreateFile(), ReadFile(), WriteFile(), CloseHandle(), FindFirstFile(), etc.
To do this, you can write an Installable File System Driver (FSD) to handle all I/O calls from user-mode applications wanting to read/write files on that mapped network drive. This will most likely be a kernel-mode component...a signed/certified file system device driver...probably written in old-school C and maybe even utilizing TDI, depending on how you want to do your network I/O.
Microsoft is becoming much more strict about installing 3rd party kernel mode drivers and network providers. The WebDrive file system driver is now securely signed using a Microsoft based TLS certificate and our Network Provider has been registered with the Microsoft Windows SDK team as a legitimate Network Provider for the Windows platform.
Once you get these pieces in place, you'll then want to think about Caching. Direct I/O through your NP/FSD over the wire to Google is not practical, so you'll need an intermediate caching system on your local drive. There are lots of ways to do that, too many to go into here. However, just keep in mind that you may have multiple user mode applications reading and writing to your network drive simultaneously (or one app like WinWord which opens multiple file handles), and you'll need to be able to handle all those requests with proper locking and ACLs, and then map those changes and access rules to the remote server.
Don't lose faith...what you are looking to do is possible as WebDrive and NetDrive have shown, but it's not really a project that can be knocked out in a few weekends. I'm not sure about the author of NetDrive, but we've been developing WebDrive full time since 1997. It seems that every Windows Patch changes something and every new version of Adobe/Office/XYZ does something quirky with IO calls that makes us pull our hair out.
Note: There's also another way to implement this beast which may get around the FSD, it's the DropBox strategy. Using a temporary folder on your local hard drive, leverage Directory Change Notifications in a User Mode application to monitor file changes in the folder and dynamically synchronize the changes to the remote end. GoogleDrive and a lot of the other online storage companies do it this way because it's quick-&-easy; however, if many changes occur in a short period, a Change Notification could get lost in Windows Messaging and data might get trashed.
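A bare-bones C# illustration of that sync-folder approach using FileSystemWatcher (the cache folder path is a placeholder, and the QueueUpload/QueueRename helpers are hypothetical; a real implementation needs queuing and retry logic precisely because notifications can be dropped):

using System;
using System.IO;

class SyncFolderWatcher
{
    static void Main()
    {
        // Placeholder local folder that mirrors the remote drive.
        var watcher = new FileSystemWatcher(@"C:\GoogleDriveCache")
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName | NotifyFilters.LastWrite
        };

        watcher.Created += (s, e) => QueueUpload(e.FullPath);
        watcher.Changed += (s, e) => QueueUpload(e.FullPath);
        watcher.Renamed += (s, e) => QueueRename(e.OldFullPath, e.FullPath);
        watcher.EnableRaisingEvents = true;

        Console.ReadLine();   // keep the process alive while watching
    }

    // Hypothetical helpers: in a real app these would push work onto a queue
    // that a background worker drains against the Google Drive API.
    static void QueueUpload(string path) => Console.WriteLine($"upload: {path}");
    static void QueueRename(string oldPath, string newPath) => Console.WriteLine($"rename: {oldPath} -> {newPath}");
}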
I realize this is a lot to digest, but it's doable...it's cool stuff; good luck!
I suggest that before you start coding, you take time to thoroughly understand Google Drive and map its capabilities to/from Windows. Some sample points of impedance:
Folders in Drive aren't folders at all.
A file in Drive = the metadata; content is optional.
Drive has a lot of metadata that doesn't map to NTFS (e.g. properties).
Will applicable files be converted to Google Docs, or stored as-is?
How will you map revisions?
Permissions.
There are almost certainly more; this is just off the top of my head. Your app needs to make decisions regarding all of these aspects. Generally, Drive offers more capabilities than NTFS, so provided you are simply using it as a backup repository, you should be OK.

Opening files locally from web application

I recently added a way for my web application (ASP.NET written in C#) to go to a folder which contains a bunch of spreadsheets and import them into SQL Server tables. I set the folders and file names using an admin table so it knows how to handle each file, which table each should go to, and so on. It even keeps track of the file dates and times so it ignores anything that isn't new since the last time it imported them. Very cool, but it only works on my development machine, most likely because the path is easily recognized there.
I'd like others to be able to do this, but I can't seem to get the web application to access a pre-arranged path on the user's local machine. Now I'm assuming this is normal (we shouldn't be able to have a web application reach into someone's machine and grab files!), but is there some way to do it, either using a known path or by having the user select the local folder? Would it be easier if I put the files in a folder within the site?
Dana
If I understand your question correctly, the approach is that you want a user to type in a local file path and you process it.
This will not work through a website, and from a security perspective that is wise, as you point out. So unless you install some client application on the local machine, it is not possible.
You will need a file-upload dialog, have the user explicitly locate the files for you and click upload, and then process them on the server.
Some other strategies here:
https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications
but it still requires the user to select them manually.
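For illustration, a minimal upload action in ASP.NET MVC along those lines (the controller name, target folder, and the commented import call are made up; the idea is that the existing import routine runs against the uploaded copy on the server instead of a path on the user's machine):

using System.IO;
using System.Web;
using System.Web.Mvc;

public class ImportController : Controller
{
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase spreadsheet)
    {
        if (spreadsheet == null || spreadsheet.ContentLength == 0)
            return new HttpStatusCodeResult(400, "No file selected");

        // Save into a folder inside the site that the app pool account can write to.
        string target = Path.Combine(Server.MapPath("~/App_Data/Imports"),
                                     Path.GetFileName(spreadsheet.FileName));
        spreadsheet.SaveAs(target);

        // Hypothetical: reuse the existing import-to-SQL routine against the saved copy.
        // SpreadsheetImporter.ImportToSql(target);

        return RedirectToAction("Index");
    }
}

The matching form needs enctype="multipart/form-data" and a file input named spreadsheet.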

Windows 8 app: accessing local and network files

I started programming Windows 8 apps in C#, and I need help figuring out how to access files from within my app.
Premise: from what I have learned so far, I know that by default an app can access only certain folders (such as LocalFolder). One way to overcome this is to use a FileOpenPicker once and then use the FutureAccessList to access the file programmatically.
Now my situation:
my PC is connected to a domain network
my app has the following capabilities: Enterprise Authentication, Internet (Client & Server), Private Networks (Client & Server)
Let's say that I have a file named im.jpg in C:\ and im2.jpg in a network share called share. I (my domain account) have access to those files.
How can I access those files from my app without the use of FileOpenPicker?
So far, I have used WebRequest and WebResponse to download a file from an internet site (no problems) and to access a file located in the LocalFolder of my app (still no problems), but now, if I do something like this:
WebRequest c = WebRequest.Create(@"file://C:\im.jpg");
WebResponse r = await c.GetResponseAsync();
I get a System.Net.WebException telling me that I'm not authorized to access the file.
How can I use Enterprise Authentication to provide my domain credentials to the app? I didn't find much documentation about this on the MSDN, but from the little I found I think this capability is intended to be used in a situation like this. Also, is WebRequest the right path to follow? Should I try to access those files in other ways?
Thanks, Daniele
No matter how you try to access the files from a Windows Store app, the permissions still apply, i.e. you can't access a random file on a disk or a file share if the user didn't grant you access beforehand using a FileOpenPicker or a FolderPicker.
The best you can do is probably getting the user to access the root (of a disk or a share) using a FolderPicker and storing a reference to it in FutureAccessList. This way you'll be able to access all the files on that drive without any user intervention in the future.
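Roughly, that pattern looks like this inside an async method (a sketch; error handling is omitted and the "driveRoot" token name is arbitrary):

using Windows.Storage;
using Windows.Storage.AccessCache;
using Windows.Storage.Pickers;

// First run: let the user pick the root once and remember it.
var picker = new FolderPicker();
picker.FileTypeFilter.Add("*");   // a filter must be added before showing the picker
StorageFolder root = await picker.PickSingleFolderAsync();
if (root != null)
{
    StorageApplicationPermissions.FutureAccessList.AddOrReplace("driveRoot", root);
}

// Later runs: reopen the same folder without any user interaction.
StorageFolder cached = await StorageApplicationPermissions.FutureAccessList.GetFolderAsync("driveRoot");
StorageFile file = await cached.GetFileAsync("im.jpg");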
Having a service as a workaround should work for shares (why not just use HTTP to allow file download instead?) but not for the local file system because your app can't access local network resources (i.e. services on localhost) unless you're running it from VS or using CheckNetIsolation.exe by hand.
I don't know about Enterprise Authentication, but opening a file is usually done using StorageFile::GetFileFromPathAsync. That also works with UNC path names when you have permission.
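For instance, inside an async method (a sketch; the server name is a placeholder, and it assumes the Private Networks and Enterprise Authentication capabilities are declared and the account has rights on the share):

StorageFile shared = await StorageFile.GetFileFromPathAsync(@"\\server\share\im2.jpg");
// From here the file can be read like any other StorageFile, e.g. with FileIO.ReadBufferAsync(shared).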

In C#, how can I access a fileshare on a domain from outside a domain?

I've got a webserver where people upload files. What I need to do is take those files and write them to a file share on the Active Directory domain. The problem -- the webserver is not on the domain.
So, what is the best way to do this? I would have thought this would be easy, something along the lines of: create a connection with some credentials and do it. But apparently not. The closest I've found is impersonation with WindowsIdentity.Impersonate, but everything I've read says that is a bad idea in a production environment.
Any ideas? I'm working on a solution that FTPs the files, but that's unsatisfying and only a fallback plan.
I'm using C# and .NET 4.0 in (obviously) a Windows environment.
Edit: I should point out that I can't run servers (or services) that access the outside on that domain. The FTPing is a temporary workaround.
I would have another program, probably a Windows service, pick up the files from the web server's file location and move them to the directory on the Active Directory domain. I would probably have this process run from the location the files are being copied to. Make them available in a share on the web server, visible only to the process's user and admins.
I think that an FTP solution is better than using a Windows share; however, I would think a web service of some type would be the best option for an inter-domain file exchange. That said, if you've got it working with WindowsIdentity.Impersonate -- why not use it? In what context did you read that it was a bad idea?
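For reference, the usual WindowsIdentity.Impersonate pattern on .NET 4.0 looks roughly like this (a sketch; the domain, account, password, and share path are placeholders, and LOGON32_LOGON_NEW_CREDENTIALS is the logon type commonly used to present alternate credentials to a remote machine outside your own domain):

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Principal;

class ShareCopy
{
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool LogonUser(string user, string domain, string password,
                                 int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    const int LOGON32_LOGON_NEW_CREDENTIALS = 9;   // credentials used only for outbound network access
    const int LOGON32_PROVIDER_WINNT50 = 3;

    static void CopyToShare(string localPath)
    {
        IntPtr token;
        if (!LogonUser("svc_upload", "CORPDOMAIN", "password",
                       LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
            throw new InvalidOperationException("LogonUser failed");

        try
        {
            // While impersonating, outbound calls (like writing to the share) use the domain credentials.
            using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
            {
                File.Copy(localPath,
                          Path.Combine(@"\\fileserver.corp.local\uploads", Path.GetFileName(localPath)),
                          true);
            }
        }
        finally
        {
            CloseHandle(token);
        }
    }
}

The usual objection is having to store credentials somewhere and remembering to undo the impersonation on every code path, which may be the "bad idea in production" concern mentioned above.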
Is there any way that you can map this file share as a network drive? If you can do that, you don't need to manage security, and it will be super easy to access these files as if they were local.
