I have my files stored in Azure Files, and here are the requirements:
Users should be able to view the documents without downloading them locally.
This works for PDFs but not for other MIME types.
I tried setting Content-Type and Content-Disposition in the Azure file properties, and also an iframe, but no luck.
Users should be able to edit the documents online without downloading them.
I don't think this is possible with Azure alone; maybe I have to integrate with OneDrive? Correct me if I am wrong.
I would really appreciate any inputs/thoughts.
Not sure if this is a viable option, but using Storage Accounts in Azure you can map file shares as network drives on any client machine, so users would be able to access the files via File Explorer.
This link covers the basic steps in setting it up.
Unfortunately, anyone who wishes to use this feature needs to be on Windows 8 (or above) to map the network drive successfully, as Azure Files uses SMB 3.0.
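The mapping itself is a one-line command on the client. A sketch, assuming a storage account named mystorageacct and a share named docs (both placeholders; substitute your own names and key):

```shell
# Placeholders: mystorageacct / docs. Requires Windows 8+/Server 2012+ (SMB 3.0)
# and outbound TCP 445 open to Azure.
ACCOUNT="mystorageacct"
SHARE="docs"
UNC="\\\\${ACCOUNT}.file.core.windows.net\\${SHARE}"
echo "$UNC"
# On the Windows client you would then run (key is a placeholder):
#   net use Z: \\mystorageacct.file.core.windows.net\docs /u:AZURE\mystorageacct <storage-account-key>
```

The username is always AZURE\<storage-account-name> and the password is one of the storage account keys.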
If this option is a no go I will delete the post.
I am accessing an Azure Files share via UNC. I have it mounted in a Windows VM and I am able to access/read the files. However, I also need to read, and act on, the metadata set on those files.
As far as I know, metadata are custom key-value pairs that can be stored on an Azure file share, folder, or file. A different application sets them via the REST API SDKs.
So, is there any way to get/set that custom metadata through the share mounted in the VM?
I am using a C# program that lists the files in the share to find newly uploaded ones. Checking the last-modified date works, but I still need to filter on a specific metadata key to prevent double processing.
Metadata set via the REST API can only be accessed via REST; it is not available through SMB.
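One workaround is to keep the SMB mount for reading file contents but make a side call to the REST API just for the metadata. A minimal sketch of the filtering step (in Python for brevity); fetch_metadata is injected, so in real use it would wrap an SDK call such as ShareFileClient.get_file_properties().metadata from the azure-storage-file-share package, and the "processed" key name is purely hypothetical:

```python
# Sketch: pick out files that still need processing, based on a custom
# "processed" metadata key (hypothetical key name). fetch_metadata(name)
# returns the file's metadata dict; in production it would call the Azure
# Files REST API, since SMB does not expose this metadata at all.
def files_to_process(file_names, fetch_metadata):
    pending = []
    for name in file_names:
        meta = fetch_metadata(name) or {}
        if meta.get("processed") != "true":
            pending.append(name)
    return pending
```

The same injected callable would wrap a set-metadata REST call after processing, so the next scan skips the file.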
I'm trying to add a link in a project that will open a tutorial that explain how to use the system.
The tutorial is only relevant to people who work in the company, so I figured it would be best to place it on our company network drive and just redirect to that file from the web client. That way it is also easy to change the tutorial without redeploying the system.
The problem is that Chrome blocks the web client from opening a local file.
But if I remember correctly, there are websites that can open local PDF files, so why can't I open the tutorial? It is basically a presentation (in Flash, I think, but that's not really important) that you open by opening an HTML file.
Is there any alternative solution for this?
I'm also okay with Chrome asking the user to confirm that he wants the file to be opened, or something like that.
But I don't want to add the files directly to my project code, because that would require redeploying the whole project just to update the tutorial.
I'm using ASP.NET Framework on the backend and Angular on the frontend.
No person with a brain would use a browser that lets a web site open local files. You mean you come to my site to watch cat videos, and all the while the browser is opening your banking files or stealing your Excel sheet marked "my passwords"? Not even close! A browser cannot open local files, period; the security risk is too high.
You can of course EXPOSE a folder to the web server, and then ANY file in that folder becomes part of the web site and, MORE important, part of the web server's URL mapping.
So you can have your typical site in, say, inetpub\wwwroot. Then ONLY valid URLs to files in that web site folder are allowed.
And say you have some big file server on your network full of files? Well, then you can add to the site what is called a virtual folder.
So, say the web site is on
c:\inetpub\wwwroot
So, any valid URL typed in by a user (or launched by your code) is now
http://mysite/default.aspx
The above file of course maps to
c:\inetpub\wwwroot\Default.aspx.
So, say you have a folder on the network full of pdf's you want users to see/view/use?
Well, say that folder is:
\\SERVER5\PdfFileArchive
So, that folder is NOT part of the web site. What you do is add a virtual folder to the ASP.NET site. Let's call it MyPdfs.
You map MyPdfs to the above \\SERVER5\PdfFileArchive.
So, now your URL becomes this:
http://mysite/MyPdfs/help.pdf
So, the browser can NOT look at local files on your computer (and would you really think it is okay for my web site to rummage around on YOUR local computer? No!).
So you can have some folder sitting on your local network. And if the web site is on that SAME network, then you can add a virtual folder to the web site. That "external" folder then becomes part of the web site, mapped to a web URL, which lets you use hyperlinks, or even lets the user type in ANY URL like:
http://mysite/MyPdfs/HowToCookPotatoes.pdf
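The URL-to-path translation a virtual folder performs is essentially a prefix swap. A sketch in Python, using the hypothetical folder names from above, just to illustrate the mapping (IIS does this for you; this is not how you would configure it):

```python
# Sketch of what a virtual folder mapping does: a URL prefix is exchanged
# for a real file-system path. Names are illustrative only.
VIRTUAL_FOLDERS = {
    "/MyPdfs": "\\\\SERVER5\\PdfFileArchive",
}

def map_url_to_path(url_path):
    """Translate an incoming URL path to a Windows path, like Server.MapPath."""
    for prefix, root in VIRTUAL_FOLDERS.items():
        if url_path.startswith(prefix + "/"):
            rest = url_path[len(prefix):].replace("/", "\\")
            return root + rest
    return None  # not under any virtual folder -> no file served
```

Anything outside the configured prefixes resolves to nothing, which is exactly why the browser cannot reach arbitrary files.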
In fact, in many cases you do NOT want users to be able to type in URLs to these files at all.
Well, keep in mind that code behind (the .NET code running on the web server) is really the same as .NET desktop code. That code behind can open/read/use any file anywhere on the network, including files on your desktop. But that assumes the web server is on the SAME network as you, and that it even has rights to YOUR desktop folder (which is not the default; most desktops only share a public folder).
So, code behind on that web server can open, read, and process any file. But you will NOT be able to use a hyperlink, or a valid URL, to get at those files.
Since a company often does NOT want to expose that big huge PDF folder to the wild and crazy internet, what is commonly done is that you do NOT create a virtual folder and do NOT map URLs to that internal company folder. Instead, you have the code behind OPEN + READ and then STREAM the file down to the browser. You can google and find hundreds of examples; just search for "stream a pdf to browser". But again, keep in mind that this trick is still limited to code behind on the server being able to directly open the file. The file can be anywhere on the network as long as the web server has rights to read/open/use it. Users of the web site will have no URL to type in, but if the code behind has rights to open such files, then web code can be written to "dish out", or so-called "stream", the file to the browser.
So keep in mind the concept of a web URL (a valid web path name) versus code behind, which does NOT use URLs to open and read files; it uses plain-Jane regular Windows file path names.
Of course, if you have a virtual folder and a URL exposed to end users, the code behind often STILL needs to process/open/copy or do whatever with the URL the user typed in. That's where Server.MapPath comes in: it translates the URL value into a full internal path name.
So
Code behind = ALWAYS needs a full, valid Windows path name.
URL = web folders (including virtual folders that point to server folders). These URLs can be used in hyperlinks and web navigation, and even allow the user to type in a full valid URL to a given pdf in that folder.
So while you can't, for all practical purposes, have a browser read/get/use local files on YOUR computer, you can certainly set up a folder on some server (even the web server) that has all those PDFs, pictures, or whatever you want. And with a mapped virtual folder, web users can then consume those files.
Or, if you want to keep things locked down and don't want users typing in URLs to files that might not belong to them? Then you can of course maintain a list of files, say in a database. The code behind can read such files 100% directly with a full valid internal path name and then PUSH (stream) the file out to the user.
Here is an example of such code:
Stream PDF to browser?
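The streaming approach boils down to: open the file by its internal Windows path, then copy it to the response in chunks. A language-neutral sketch in Python (in ASP.NET the write target would be Response.OutputStream, and you would also set Content-Type and Content-Disposition headers):

```python
# Sketch of "stream a file to the browser": the server opens the file by its
# internal path (never exposed as a URL) and copies it to the response in
# chunks, so large PDFs never have to fit in memory all at once.
def stream_file(path, write_chunk, chunk_size=64 * 1024):
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            write_chunk(chunk)  # in ASP.NET: Response.OutputStream.Write(...)
            total += len(chunk)
    return total  # bytes sent
```

Because the path never appears in any URL, users cannot reach files the code behind does not explicitly choose to stream.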
Q1: Where do you think is the right place to put a SQLite database file (database.sqlite) in Azure Web App file system? For example:
D:\home\data\database.sqlite
D:\home\site\database.sqlite
D:\home\site\wwwroot\database.sqlite
other?
Q2: What else should be taken into consideration in order to make sure that the database file won't be accessible to public users as well as not being accidentally overwritten during deployments or when the app is scaled up/down? (The Web App is configured for deployments from a Local Git Repository)
Q3: Where to learn more about the file system used in Azure App Service, the official source URL? E.g. how it's shared between multiple VMs within a single Web App, how does it work when the App is scaled up/down, what's the difference between D:\home (persistent) vs D:\local (non-persistent)...
Note that SQLite does not work in Azure Blob Storage, so that one is not an option. Please, don't suggest alternative storage solutions, this question is specifically about SQLite.
References
Appropriate Uses For SQLite
In a Web App, your app is deployed to d:\home\site\wwwroot. This is the area where you may write files. As an example, the Ghost deployment writes its SQLite database to d:\home\site\wwwroot\content\data\ghost.db (easy to see if you open the Kudu console via yourapp.scm.azurewebsites.net).
This file area is shared amongst your web app instances, similar to an SMB file share, but specific to web apps (and different from Azure's File Service).
The content under wwwroot is durable, unless you delete your app service. Scaling up/down impacts the amount of space available. (I have no idea what happens if you scale down and the smaller size has less disk space than what you're consuming already).
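Regarding Q2, one way to keep the file out of both the public URL space and the Git deployment area is to put it under d:\home but outside wwwroot (d:\home\data was one of the options in Q1). A sketch, assuming the persistent d:\home root is exposed through the HOME environment variable as on App Service; the tempdir fallback is only for running the sketch elsewhere:

```python
import os
import sqlite3
import tempfile

# Sketch: keep the SQLite file under the persistent d:\home area but OUTSIDE
# wwwroot, so it is not served to public users and not overwritten by Git
# deployments. HOME is set on Azure App Service; the fallback is local-only.
def open_database(filename="database.sqlite"):
    home = os.environ.get("HOME") or tempfile.mkdtemp()
    data_dir = os.path.join(home, "data")
    os.makedirs(data_dir, exist_ok=True)
    conn = sqlite3.connect(os.path.join(data_dir, filename))
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    return conn
```

Since d:\home is shared between instances, be aware that SQLite's file locking over a shared volume limits concurrent writers.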
I would say the best location would be the App_Data folder under site\wwwroot. Create the folder if it doesn't exist.
Web Apps can connect to storage accounts, so you can in fact use blob storage and connect it to your web app. In terms of learning more, look at the appropriate page of the documentation.
In your Web App settings you can then select which storage account to use. You can find this under Settings > Data Connections where you can select Storage from the drop down box.
I'm having issues converting my intranet page to a PDF file. I tried 2 solutions which actually work, though each has issues.
Solution 1:
I used wkhtmltopdf.exe tool. I was able to make it work on my local machine.
However, when I deployed it to our server it stopped working, and eventually I noticed that it doesn't work with intranet sites. When I tried extranet sites, it worked.
Solution 2:
I took an alternative approach: get the HTML of the page and let wkhtmltopdf.exe convert that to PDF. This also works; however, the page I'm trying to convert is database driven, so the data, including images, was missing when it was converted to PDF.
Please help if there's a way to make wkhtmltopdf.exe work with intranet sites (solution 1), or
a way to retrieve the whole page, including data and images, when converting it to PDF (solution 2).
Thank you very much!
it stopped working until I notice that it's not working with intranet sites.
That is not an exhaustive problem report. I have done this by rendering the view to a string and then converting that string to a PDF with wkhtmltopdf.
Rendering the view to a string: Render a view as a string
I did not include wkhtmltopdf directly; rather, I used the TuesPechkin NuGet package: https://github.com/tuespetre/TuesPechkin
I would say to look at the permissions available. Intranet sites normally have different permission levels than a public facing site. It could be that the public facing sites have permissions that have been applied to the .exe such as the IIS_IUSR account to enable it to work with anonymous guest accounts, but lack the permissions needed in an intranet which often uses the domain user account of the logged in user to authenticate resources.
For the wkhtmltopdf software to generate PDFs on your intranet server, you need to have the two files msvcp120.dll and msvcr120.dll (the Visual C++ 2013 runtime) in the same folder as wkhtmltopdf.exe. Hope this helps.
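For solution 2, one way to sidestep the intranet-authentication problem entirely is to render the page to an HTML string server side (with the database-driven content already in it) and pipe that string into wkhtmltopdf on stdin. A sketch in Python; "-" as the input argument and --enable-local-file-access are standard wkhtmltopdf usage, but the exe location is an assumption:

```python
import subprocess

# Sketch: feed already-rendered HTML to wkhtmltopdf via stdin ("-"), so the
# tool never has to authenticate against the intranet site itself.
# --enable-local-file-access lets locally referenced resources resolve.
def build_command(out_path, exe="wkhtmltopdf"):
    return [exe, "--enable-local-file-access", "-", out_path]

def html_to_pdf(html, out_path, exe="wkhtmltopdf"):
    subprocess.run(build_command(out_path, exe),
                   input=html.encode("utf-8"), check=True)
```

Images referenced by relative URL will still be missing unless you rewrite them to absolute URLs or embed them as base64 data URIs before conversion.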
I'm developing a mobile app to share some content between users and I'm facing a weird problem.
Currently, what the app does is to allow the users to download some files from the web and store them on their OneDrive account.
The problem is that I need to download the file from the web first and then upload it to OneDrive, which means I'm wasting double the bandwidth for each file (OneDrive does not allow uploading from a remote URL).
The other required feature is to upload a file from OneDrive to my Azure storage, so, basically, I need my Azure service to work with both upload/download from/to OneDrive.
I can't find anything useful online, but I think I have a solution for the OneDrive-to-Azure scenario:
Get the file ID using the LiveSDK on my phone
Build a download link for the given file
Send the link to the Azure Mobile Service
Download the file into Azure Storage
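The last two steps above can be sketched as: the phone only sends a download link, and the Azure side pulls the bytes and stores them, so the file never round-trips through the device. fetch and store are injected here; in a real service, fetch could be an HTTP GET of the OneDrive link, and store a write to Azure Blob Storage (for example BlobClient.upload_blob, or a server-side start_copy_from_url so Azure pulls the URL itself). Both are assumptions about the SDK, not something verified here:

```python
# Sketch of the "send link, let Azure pull it" flow. fetch(url) -> bytes,
# store(name, data) persists into Azure Storage; both are injected so the
# flow itself is runnable without credentials.
def pull_to_storage(download_url, fetch, store):
    data = fetch(download_url)
    name = download_url.rsplit("/", 1)[-1] or "file.bin"
    store(name, data)
    return name, len(data)
```

The Azure-to-OneDrive direction would mirror this: the service fetches the blob and pushes it through the OneDrive upload API, again without touching the phone's bandwidth.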
I've not tried it yet because I don't have Azure access yet (I need to register for the trial). I'm not sure this will work, and even if it does, I still need to figure out the Azure-to-OneDrive direction.
Do you guys have any clues?
This thing is really driving me insane :\