Retrieve files from a public cloud folder - C#

We are creating a beta version of a C# console app for our clients, in which they just paste a public folder/file URL from Google Drive, OneDrive, Dropbox, etc. In the back end we need to retrieve the file and process it...
I just wanted to know how we retrieve those cloud files without any prompt for authentication (the given URL will be public, so it should not ask for an ID or password).
Any help from you experts?

With OneDrive, you can use the "shares" API to retrieve the file behind a sharing link without authentication.
You just need to encode the sharing URL correctly and then pass that to the API endpoint. The details of the encoding are in the OneDrive API documentation, but it's just URL-safe base64 encoding.
GET https://api.onedrive.com/shares/{encoded_sharing_url}/root/content
The API will return the content of the file.
Edit: I got the URL slightly wrong. The /shares/ API returns a "sharing root" which looks somewhat like a drive object. To access the actual shared file, you need to add /root before the /content part of the path. I've updated this above.
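For what it's worth, a minimal C# sketch of that flow might look like the following. The URL-safe base64 step is as described above; the "u!" token prefix comes from the OneDrive docs, so verify it (and the endpoint) against the current documentation, and note the sharing link here is just a placeholder.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class OneDriveShareDownload
{
    // Encode a public sharing URL the way the shares API expects:
    // URL-safe base64 with padding stripped, plus the "u!" prefix from the docs.
    static string EncodeSharingUrl(string sharingUrl)
    {
        string base64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(sharingUrl));
        return "u!" + base64.TrimEnd('=').Replace('/', '_').Replace('+', '-');
    }

    static async Task Main()
    {
        string sharingUrl = "https://1drv.ms/..."; // placeholder public sharing link
        string encoded = EncodeSharingUrl(sharingUrl);

        using (var http = new HttpClient())
        {
            // /shares/{token}/root/content returns the shared file's bytes.
            byte[] bytes = await http.GetByteArrayAsync(
                "https://api.onedrive.com/shares/" + encoded + "/root/content");
            Console.WriteLine("Downloaded " + bytes.Length + " bytes");
        }
    }
}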

Related

Image src pointing to a file outside my project folder in Web Forms

I'm looking to do exactly this:
set the src property in the view to a URL outside of the MVC3 project
Fine, but in Web Forms?
I tried simply putting the path as a string into the src of the image:
<asp:Image ID="imgInside" runat="server" src="\\serverName.com\dfs$\APPL-ADM\FichiersDev\MandatsInfo\SAR220-2020_1.jpg" >
Obviously not working, so I made src point to this function I wrote, like so:
<asp:Image runat="server" Width="160px" src='<%# getImage(Container.DataItem as MandatMobile.DAL.MandatsEcoleCC_Result) %>' ></asp:Image>
In the back-end C#:
protected Byte[] getImage(MandatsEcoleCC_Result p)
{
    using (MandatsDatas db = new MandatsDatas())
    {
        GROUPE_ARTICLE g = db.GROUPE_ARTICLE.First(t => t.ID_GROUPE == p.ID_GROUPE);
        if (string.IsNullOrWhiteSpace(g.image))
            return null;

        FileStream fs = null;
        try
        {
            // UNC path to the file share that holds the images
            fs = new FileStream(@"\\serverName.com\dfs$\APPL-ADM\FichiersDev\MandatsInfo\" + g.MANDAT.NO_MANDAT + g.image, FileMode.Open, FileAccess.Read);
        }
        catch
        {
        }
        BinaryReader br = new BinaryReader(fs);
        return br.ReadBytes((int)fs.Length);
    }
}
Still not working. I've been searching, but I just can't figure it out and I'm stuck trying all sorts of nonsense.
Well, you are confusing two things:
Code-behind:
Any time you run code that uses a file, you are writing 100% server-side code. As such, any file path is a proper, fully qualified Windows path name. It has ZERO to do with web URLs.
Read the above a dozen times. Your code does not use URL path names - end of story.
Web site:
Any time you reference a file, a picture, script files or anything in the web site? You MUST use a URL based on the path names of the web site, and more specifically path names that resolve to the folders that make up the site.
root:
\Pictures (say, a folder in the web site's folder list that holds pictures).
So a src, or ANY URL in the web site? They do NOT use Windows path names like code-behind does.
So, if there is a cat.png picture in the Pictures folder? Then your URL will be this:
www.mywebsite.com/Pictures/cat.png
If you write code to read/load/see/use that cat.png picture? Then in code you convert from that external URL to a fully qualified, standard Windows path name (with backslashes).
So, in code-behind, if you want to read or do something with the above file?
You use:
    Dim strFile As String
    ' MapPath takes a site-relative (virtual) path, not a full URL with the domain
    strFile = Server.MapPath("~/Pictures/cat.png")
MapPath will now return a fully qualified Windows server path, e.g.:
    c:\inetpub\wwwroot\mysite\Pictures\cat.png
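(Since the question's code-behind is C#, the equivalent call there would look something like this; the path is just an example.)

    // Resolve a site-relative virtual path to a physical folder on the web server
    string strFile = Server.MapPath("~/Pictures/cat.png");
    // e.g. c:\inetpub\wwwroot\mysite\Pictures\cat.png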
Ok, so now we realize that to use a VALID link to pictures on the web site, we MUST use a valid URL.
So, what happens if, say, we have a network-connected SAN drive or some other huge server on the network with massive storage, and it holds our pictures for the site?
Say:
\\SANSERVER\WebPictures\cat.png
Well, obviously that file folder can't be used in a URL. ONLY URLs within the web site's folder path can be used. And this is a good thing: when I go to www.amazon.com, it is a VERY good thing I can't type in a URL to get at their internal accounting file server and steal all the credit card information of all their customers.
So, now, how can I get at that cat.png and turn it into a valid URL?
There are two ways:
One:
You make the decision to expose and INCLUDE the above path as part of the web site. This is typically done with what is called a virtual folder. You need IIS, and during development with IIS + Visual Studio it is a "pain" to set up such paths. But if you have the full version of IIS, then you can add the virtual folder to the web site through the IIS user interface tools.
So, you add a virtual folder called MyPictures, and it will be mapped to:
\\SANSERVER\WebPictures
So, now the web site URL becomes:
www.mycoolsite.com/MyPictures/cat.png
And in code, if you do a Server.MapPath on the above URL, it will return this:
\\SANSERVER\WebPictures\cat.png
Ok, next issue:
I don't want to expose that other folder to the web site. I don't want a valid URL, and I don't even want users to be able to type in, say, this:
www.mycoolsite.com/MyPictures/doggie.png
So, if you DO expose another folder or add a folder to the web site hierarchy, then users ARE FREE to type in a URL that will resolve to that other folder (assuming you have added a virtual folder to the web site).
Now, with valid URL resolution, you can place markup on the web site and provide valid, full URL path names to the picture or whatever else the site needs.
However, let's say that for reasons of security I do NOT want that other server exposed as a URL?
Well, if it is NOT exposed as a valid web URL folder, then you can NOT put in a valid URL - it's that simple.
However, that doesn't mean the code-behind can't read/load/open that file on the server. In fact, the web site's code-behind can often read any file on the server, and indeed any file anywhere on the network that the web server can reach. And as noted, code-behind does not use URLs, and does not use forward slashes "/" for the file - it uses a plain-Jane, old-fashioned, fully qualified Windows path name.
Since the code-behind can read darn near any file and do anything it wants?
Ok, then how can we get the code-behind to dish out a file, or send that file to the web site?
Two simple ways:
Your code-behind could read the cat.png file and copy it to a folder that is part of the web server's folder layout. Once done, you can provide a valid URL. However, with a huge picture library, that would be painful.
And in some cases the picture might come from database rows that store pictures, and once again no valid path name exists for the web site.
So, what you can do is read the file in code-behind and then "stream" the data directly to the web page.
When you stream content from code-behind, you don't care about nor even require a valid URL, because the code-behind is pumping the object data (in this case the picture cat.png) directly to the web browser. This is often done because your pictures don't even exist in a file, or it is simply not practical to include that folder in the web site's folder list for reasons of security.
As noted, if this was/is just a folder of pictures OUTSIDE of the folders for the web site? Well then, 99% of the time, adding a mapped folder (a virtual folder) to the web site that points to the picture drive is commonly done, and is practical.
However, you might have a HUGE library of pictures on a big file server, and a database with keywords for searching the pictures, where the database row stores a valid path name to the drive/server that holds all the pictures in a hodgepodge folder hierarchy that is not practical to expose as web-based URLs.
So, how to stream a file?
Your code is close, but you need to include additional information. And unfortunately the server can't stream the file down in a 100% binary format this way.
So, say we drag + drop an Image control onto the form. You have this:
<asp:Image ID="Image1" runat="server" />
So, now in code-behind, to stream + set the picture to one on the hard drive?
You can use this:
    Dim strFile As String = "c:\Test4\pcards.bmp"
    Me.Image1.ImageUrl = Gimage2(strFile)
Now of course no URL path name to the above Test4 folder exists.
Gimage2 just reads the file as a byte array and then converts it to a base64-encoded string (a data: URI).
Function Gimage2(strPath As String) As String
    Dim PicData As Byte() = File.ReadAllBytes(strPath)
    ' GetExtension returns ".bmp" - drop the leading dot for the MIME subtype
    Dim ContentType As String = "image/" & Path.GetExtension(strPath).TrimStart("."c)
    Return "data:" & ContentType & ";base64," & Convert.ToBase64String(PicData, 0, PicData.Length)
End Function
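Since the question's code-behind is C#, a rough equivalent of that helper (same idea: read the bytes, return a data: URI for ImageUrl) might look like this; the page class name is just a placeholder.

using System;
using System.IO;

public partial class MandatsPage : System.Web.UI.Page   // placeholder page class
{
    // Read a file from any path the server can reach and return a data: URI
    // that can be assigned to Image.ImageUrl - no web-accessible folder needed.
    protected string GetImageDataUri(string path)
    {
        byte[] picData = File.ReadAllBytes(path);
        // Path.GetExtension returns ".jpg" - drop the dot for the MIME subtype.
        string contentType = "image/" + Path.GetExtension(path).TrimStart('.');
        return "data:" + contentType + ";base64," + Convert.ToBase64String(picData);
    }
}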
So I spent some time on a long post. The reason is that you attempted to use a URL with standard Windows backslashes, and that means that in your mind you are using the concept of a Windows full path name and seriously confusing it with a URL path name. Failure to make this distinction will cause you years of pain and suffering. You must be BEYOND CRYSTAL clear on this concept of a URL versus a file name in code-behind. They are two VERY different things.
If that additional folder is "ok" to expose to the web site? Then create a virtual folder.
That means:
www.mycoolsite.com/MyPictures/dog.png
could in fact point to ANY mapped folder on your server. This means the web server will require permissions to that folder, and in most cases a user (or your code) can then type in and use a full web path name to the picture.
However, as noted, for PDF documents and many other types of files it is out of the question to have a valid URL and a mapped folder. So you can use the 100% file-based approach as above: read the file as bytes, and then stream + output the file to the browser.
You can even do a Response.Write and pump the file out directly to the browser, but then again you don't have much control over where it ends up. Do realize that pumping out a string as base64 data as per the above can and will cause some bloat and expansion in the size of the string sent to be rendered as a picture. So for a simple image - sure, that's ok. But for a larger, high-quality, high-resolution image, I don't recommend sending the picture as a base64 string, due to the expansion that string will result in.
I ended up putting a function in another MVC project that works correctly to retrieve images.
So my src path now points to a URL instead of a file on a server path.
src='https://NameOf_MVC_webSite.csdn.qc.ca/imageBank/ForMandat?name=' + (Container.DataItem as MandatMobile.DAL.MandatsEcoleCC_Result).image
Dirty solution using another deployed app that has a (better / easy to use / functional) framework
But this is not an "OK" solution

What should I be saving locally when I use Azure blob storage?

I'm using Azure Blob Storage to allow users to upload files from a web app.
I've got them uploading into a container, but I'm not sure what would be best to save on the web app's database since there are multiple options.
There is a GUID for the file, but also a URL.
The URL can be used to get to the file directly, but is there a risk that it could change?
If I store the file GUID I can use that to get the other details of the file using an API, but of course that's an extra step compared to the URL.
I'm wondering what best practices are. Do you just store the URL and be done with it? Do you store the GUID and always make an extra call whenever a page loads to get the current URL? Do you store both? Is the URL something constant that can act just as good as a GUID?
Any suggestions are appreciated!
If you upload any file to Azure Blob Storage, it will give you a URL to access it, which contains three parts:
{blob base url}/{container name}/{file name}
e.g.
https://storagesamples.blob.core.windows.net/sample-container/logfile.txt
So you can save the blob base URL and container name in a config file, and only the file name part in the database.
At run time you can build the whole URL and return it to the user.
That way, if you change the blob base URL or the container, you only need to change it in the config file.
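A minimal sketch of that run-time step, assuming the base URL and container name sit in appSettings under placeholder keys:

using System;
using System.Configuration;

static class BlobUrlBuilder
{
    // Rebuild the blob URL from config values plus the file name stored in the database.
    public static string GetBlobUrl(string fileName)
    {
        string baseUrl   = ConfigurationManager.AppSettings["BlobBaseUrl"];        // e.g. https://storagesamples.blob.core.windows.net
        string container = ConfigurationManager.AppSettings["BlobContainerName"];  // e.g. sample-container
        return baseUrl + "/" + container + "/" + Uri.EscapeDataString(fileName);
    }
}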

How do I download documents from AtTask?

I'm working on a continuing API project. The current issue at hand is being able to download my data from the AtTask server in precisely the folder structure it exists in on the AtTask servers. I've got the folder creation working nicely; the data types between Document, Document Folder and Document Version seem to be pretty clear. I am a little disillusioned by the fact that the extension isn't in the document object (that I have to refer to the document VERSION for that)... but I can see some of the reason for that from a design perspective.
The issue I'm running into now is that I need to get the file content. I originally thought, from the API documentation, that I'd be able to get to the file contents the same way the documentation recommends uploading them -- through the handle. Unfortunately, neither document nor docv seems to support accessing the handle except to write a new file.
So that leaves the "download URL" as the remaining option. If I build the URL strings from the API calls using my browser, I get a URL like https://attaskURL/document/download?ID=xxxx (and can also get the versionID and such). If I paste the URL into the browser where I'm logged in to the user interface of AtTask, it works fine and I can download the file. If, instead, I use my C# code to do so, I get the login page returned as a stream to download instead of my actual file, because I'm not authenticated. I've tried creating a network credential and attaching it to the request with the username and password, but to no avail.
I imagine there are a couple of ways to solve this problem -- the easy one being finding a way to "log in" to the download site through code (which doesn't seem to be the usual network credential object in C#), OR finding a way to access the file contents through the API.
Appreciate your thoughts!
It looks like you can use the download URL if you put a session id in the URL. The details on getting a session id are here (basically just call login and a session id is returned in JSON):
http://developers.attask.com/api-docs/#Authentication
Then cram it on the end of your document download URL:
https://yourcompany.attask-ondemand.com/document/download?ID=xxxx&sessionID=abc1234
I've given this a quick test and I'm able to access a document.
You can use the downloadURL and a sessionID IF you are not using SAML authentication.
I have tried it both ways and using SAML will redirect you to the login page.
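A rough C# sketch of that flow is below. The login path and the JSON field names follow the AtTask API docs linked above, but treat them (and the placeholder credentials and document ID) as assumptions to verify against your API version.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class AtTaskDownload
{
    static async Task Main()
    {
        using (var http = new HttpClient { BaseAddress = new Uri("https://yourcompany.attask-ondemand.com") })
        {
            // 1. Log in and read the session ID out of the JSON response.
            string loginJson = await http.GetStringAsync(
                "/attask/api/login?username=me@example.com&password=secret");   // placeholder credentials
            string sessionId = (string)JObject.Parse(loginJson)["data"]["sessionID"];

            // 2. Append the session ID to the download URL and fetch the file bytes.
            byte[] fileBytes = await http.GetByteArrayAsync(
                "/document/download?ID=xxxx&sessionID=" + sessionId);
            Console.WriteLine("Downloaded " + fileBytes.Length + " bytes");
        }
    }
}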

How to create usable URL for opening document?

The general problem: I have some code that needs a URL to a PDF file. It seems to work for URLs I find online, but not the ones I create myself.
For example, when I use a random URL from Xamarin it works fine, but when I try to generate a URL from either DropBox or Amazon Cloud Drive it does not work.
Example URLs:
These links open harmless PDF files. Please try it:
Xamarin (works fine)
DropBox (does not work)
Amazon Cloud Drive (does not work)
As you see, in a browser (I have used Chrome to test) you will get the PDF documents to open, but not without some kind of context (except for the Xamarin one).
The code: I am developing in MonoTouch and I am using a component called mTouch PDF Reader. The code is simply:
var documentViewController = new DocumentViewController (1, "Some name here", "http://someurlhere.pdf");
ActivateController (documentViewController);
This opens a nice PDF reader inside my app, but, as I can't use my own created URLs this does not help me. This is a 3rd party library so I can't look at the code. By the way, when I use one of my URLs, the code crashes with a System.NullReferenceException with this stacktrace:
MonoTouch.Foundation.NSArray.FromNativeObjects (items={MonoTouch.UIKit.UIViewController[1]}, count=1) in /Developer/MonoTouch/Source/monotouch/src/shared/Foundation/NSArray.cs:109
MonoTouch.Foundation.NSArray.FromNativeObjects (items={MonoTouch.UIKit.UIViewController[1]}) in /Developer/MonoTouch/Source/monotouch/src/shared/Foundation/NSArray.cs:96
MonoTouch.Foundation.NSArray.FromNSObjects (items={MonoTouch.UIKit.UIViewController[1]}) in /Developer/MonoTouch/Source/monotouch/src/shared/Foundation/NSArray.cs:48
MonoTouch.UIKit.UIPageViewController.SetViewControllers (viewControllers={MonoTouch.UIKit.UIViewController[1]}, direction=MonoTouch.UIKit.UIPageViewControllerNavigationDirection.Forward, animated=false, completionHandler={MonoTouch.UIKit.UICompletionHandler}) in /Developer/MonoTouch/Source/monotouch/src/UIKit/UIPageViewController.g.cs:144
mTouchPDFReader.Library.Views.Core.DocumentViewController.ViewDidLoad () in
MonoTouch.UIKit.UIApplication.UIApplicationMain () in
MonoTouch.UIKit.UIApplication.Main (args={string[0]}, principalClassName=(null), delegateClassName="AppDelegate") in /Developer/MonoTouch/Source/monotouch/src/UIKit/UIApplication.cs:38
Exam936.Application.Main (args={string[0]}) in /Users/EdGriMac/Dropbox/Quiz/Code/Exam926/Exam936/Main.cs:16
The frustration:
Is there a specific way to create URLs that work in this way? It does seem like DropBox does something different as it sort of iFrames the document or something. I don't know what Amazon Cloud Drive does. What has Xamarin done? Is it, as pointed out in the comments, because of http vs https?
I am completely lost. Am I missing something simple? Do you have any other way to create URLs to suggest? Googling this is really difficult as I continue to hit examples of how to share a URL in DropBox and so on...
By the way, I do not want to have the documents as part of the app as this means I will have to create a new version of the app just to change something in a document.
Update 1: I have added links above. I will try some other suggestions later and will leave more updates. Thanks in advance for any further suggestions!
Update 2: I have used Fiddler to look at the response for each of the URLs. The Xamarin URL has Content-Type: application/pdf, while both DropBox and Amazon Cloud Drive have Content-Type: text/html; charset=UTF-8. This explains a lot. I will try andersr's suggestion later today, as I do have a web server to put files on.
Update 3: When I put the PDF file on my Amazon EC2 server and created a virtual directory under my web site in IIS, the URL of my website + virtual directory + filename worked! It turns out the Content-Type had to be application/pdf for the mTouch PDF Reader to open it through a URL.
Thanks everyone for your help!
It seems to me that the first two URLs link directly to the PDF files, but the latter one, i.e. the one on Amazon Cloud Drive, links to a page which in turn links to the PDF. I suggest the following potential solutions:
Find a reliable way to extract the direct URL to the document on Cloud Drive. The link to the document is not the one you provided, but this: link. Perhaps Amazon has documentation on how you can avoid the HTML interface in order to retrieve your file; I am not familiar with Cloud Drive at all. Note that the URL provided has a time-limited token attached to it.
Host the document on infrastructure you have more control over, i.e. set up your own web server and host the documents there. Alternatively, use another cloud storage provider that gives you the ability to link to files directly.
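Building on Update 2 above, one way to screen a candidate URL before handing it to the PDF reader is to check the Content-Type header it returns. A small sketch:

using System;
using System.Net.Http;
using System.Threading.Tasks;

static class PdfUrlCheck
{
    // Returns true only if the server reports the URL as an actual PDF,
    // rather than an HTML landing page like the DropBox / Cloud Drive share pages.
    public static async Task<bool> IsDirectPdfUrl(string url)
    {
        using (var http = new HttpClient())
        using (var response = await http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
        {
            string mediaType = response.Content.Headers.ContentType != null
                ? response.Content.Headers.ContentType.MediaType
                : null;
            return string.Equals(mediaType, "application/pdf", StringComparison.OrdinalIgnoreCase);
        }
    }
}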

IIS Virtual Folder URL encryption

We have a c# asp.net web application that, amongst other things, allows users to download previously uploaded files such as PDF's, Word docs etc. The asp.net app is served up via an IIS6 server and the file resources live on a different server.
When the user requests a file (i.e. click a button on the web form), we stream the file back to their browser, changing the ContentType appropriately.
This seemed a good way to avoid going down the IIS virtual folder route for serving up the file resources - which we had concerns about due to the potential for users to hack the URL, i.e. with a URL like https://mydomain/myresource/clientid/myreport.docx, a savvy user could have a good stab at guessing alternative client ids and document names.
The trouble with streaming a Word document to the browser is that when the browser hands it to Word, Word treats it as a brand-new doc, which means the original document's properties and margin info are lost.
Our users store metadata information in the Word doc properties, so this solution is not acceptable to them.
Serving up via IIS virtual folders solves that problem, but introduces the URL security problem.
So my questions are ...
Does anyone know how we can use URL encryption/decryption (or obfuscation) with IIS Virtual folders?
Or does anyone know of any open source projects that do a similar job.
Or does anyone have any suggestions on how to go about writing our own implementation of virtual folders, but with encrypted URLs?
Many thanks in advance.
ps. our web app is delivered over https
Sorry guys, in my question, I have made some incorrect assumptions.
What I am trying to do is persist the properties stored on a Word document when it is delivered from the server (using either Response.TransmitFile or a virtual folder) to a client browser.
I set up a test scenario with an IIS virtual folder and dropped a docx file (that I know contains info in the title & subject properties) in my virtual folder's physical path.
I pointed my browser at the virtual folder alias and the browser popped up its message to either open or save the doc.
If I choose to save it, the saved docx still has the properties intact.
If I choose to open it first and then save it from Word, the saved docx has lost the properties.
So I think I need to post a different question!
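For reference, the streaming path mentioned above (Response.TransmitFile) typically looks something like the sketch below; sending Content-Disposition: attachment makes the browser save the original bytes unchanged, which matches the case above where the docx properties survived. The path and file name are placeholders.

// In the web form's code-behind (e.g. a download button's click handler).
protected void btnDownload_Click(object sender, EventArgs e)
{
    string path = @"\\fileserver\resources\clientid\myreport.docx";   // placeholder path

    Response.Clear();
    Response.ContentType =
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
    // "attachment" prompts a save instead of opening in the browser / Word plug-in.
    Response.AddHeader("Content-Disposition", "attachment; filename=myreport.docx");
    Response.TransmitFile(path);   // streams the file bytes without modifying them
    Response.End();
}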
You may find that the ClaimsAuthorizationManager class in "Windows Identity Foundation" does what you want. You get to implement whatever logic you like to determine who can download what without using "directory security".
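As a very rough illustration of that suggestion (not the poster's code): you subclass ClaimsAuthorizationManager and put the "who may download what" rule in CheckAccess. The claim type and the path rule below are made-up placeholders; the class lives in Microsoft.IdentityModel.Claims in the original WIF and in System.Security.Claims from .NET 4.5 onward.

using System.Linq;
using System.Security.Claims;

public class DocumentAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        // The requested resource (e.g. the document path) arrives as a claim.
        string resource = context.Resource.First().Value;

        // Placeholder rule: users may only download documents under their own client id.
        Claim clientId = context.Principal.FindFirst("clientid");
        return clientId != null && resource.Contains("/" + clientId.Value + "/");
    }
}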
