I'm trying to download a public shared file from Google Drive using C#.
Here's the code I'm currently using:
DriveService.Files.Get(fileId);
Where the fileID is taken from the URL
https://drive.google.com/file/d/{ fileID }/view?usp=sharing
Now this all seems like it should work no problem, but I'm getting a file not found error every time.
I've done this before when listing the files in a shared public folder, and I got that working with this query:
ListRequest request = service.Files.List();
request.Q = $"'{ folderID }' in parents";
request.Fields = "files(mimeType,id,modifiedTime,name,version,originalFilename)";
The in parents clause is what made that one work, but I can't think of a similar way to make the Get query work. It seems like it should just work given the right ID and the right permissions.
I'm definitely logged in correctly, as I'm able to download other files, so I know that's not the problem either.
Any help would be greatly appreciated.
The File Resource returned by request.Execute(); contains a WebContentLink property. It is a link for downloading the content using cookie based authentication. In cases where the content is shared publicly, the content can be downloaded without any credentials.
Or you could just do (in API v2):
var request = MyService.Files.Get(FileID);
var stream = new System.IO.MemoryStream();
try
{
    request.Download(stream);
    using (var file = new System.IO.FileStream(PathToSave, System.IO.FileMode.Create, System.IO.FileAccess.Write))
    {
        stream.WriteTo(file);
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message, "Error Occurred:", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
You won't have a download link if the file you are trying to download is in a native Google Docs format. If that's the case, you must look for the ExportLinks.
You will have several export links, so you will have to choose which format suits you best.
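As a minimal sketch of that (Drive API v2, reusing the MyService, FileID and PathToSave placeholders from the snippet above; requesting a PDF export is just one choice among the available formats):

```csharp
// Sketch: pick an export link for a native Google Docs file (Drive API v2).
// MyService, FileID and PathToSave are the same placeholders as above.
var docFile = MyService.Files.Get(FileID).Execute();
string exportUrl;
if (docFile.ExportLinks != null &&
    docFile.ExportLinks.TryGetValue("application/pdf", out exportUrl))
{
    // Download the exported content through the service's authenticated client.
    using (var download = MyService.HttpClient.GetStreamAsync(exportUrl).Result)
    using (var file = new System.IO.FileStream(PathToSave, System.IO.FileMode.Create, System.IO.FileAccess.Write))
    {
        download.CopyTo(file);
    }
}
```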
The issue ended up being related to permission scopes when authenticating through GoogleWebAuthorizationBroker.AuthorizeAsync().
I'd played with these previously, and I thought it just didn't work, but it turns out you have to delete any previous credentials for these changes to actually work. See the comments in google's sample.
To read public files I had to add the following permission to my scopes:
DriveService.Scope.DriveReadonly
Hopefully nobody else will stare at their screen for hours now.
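For reference, a minimal sketch of authorizing with that scope (the client-secret file name and data-store name are placeholders, and GoogleClientSecrets.FromStream may be GoogleClientSecrets.Load in older library versions); remember to delete any previously stored token so the new scope actually takes effect:

```csharp
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;
using Google.Apis.Util.Store;

// Sketch: authorize with DriveReadonly so public files become readable.
// "client_secret.json" and "DriveTokenStore" are placeholder names.
UserCredential credential;
using (var secretStream = new FileStream("client_secret.json", FileMode.Open, FileAccess.Read))
{
    credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
        GoogleClientSecrets.FromStream(secretStream).Secrets,
        new[] { DriveService.Scope.DriveReadonly },   // the scope that fixed it
        "user",
        CancellationToken.None,
        new FileDataStore("DriveTokenStore"));        // delete old tokens here first
}

var service = new DriveService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,
    ApplicationName = "Drive Download Sample"
});
```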
I want to get all images I have saved in my storage folder. There is a way to get the metadata and the download link for a single image, but what if I want the download links for all of my images?
My small code so far:
try
{
    FirebaseStorage storage = new FirebaseStorage("*****.appspot.com");
    var starsRef = storage.Child("reports").Child($"{public_id}").Child("report_picture_01.png");
    string link = await starsRef.GetDownloadUrlAsync();
    MessageBox.Show(link);
}
catch (FirebaseStorageException ex)
{
    MessageBox.Show(ex.Message);
}
Similar questions:
How to get all images stored in firebase storage folder?
Download a file from Firebase using C#
One possible solution exists: rename all your image files with numbers, then iterate through your folder and check whether files with those numbered names exist. But is there a smarter and faster way to solve this problem? I've tried several methods and it still doesn't work.
I am trying to get a public shareable link for files uploaded to Google Drive using Google Drive API v3. I am able to upload the files and get the FileId, but I am not able to add a Permission and a shareable link to the uploaded files. I have searched numerous answers on the Internet and StackOverflow, but did not find anything that would help me solve the problem. I have tried solutions that were provided for Java and Python, but I am getting this error:
Google.GoogleApiException: 'The service drive has thrown an exception. HttpStatusCode is Forbidden. The resource body includes fields which are not directly writable.'
Here's my code:
public async Task UploadFileAsync(Stream file, string fileName, string fileMime, string folder, string fileDescription)
{
    DriveService driveService = GetService();
    var fileMetaData = new Google.Apis.Drive.v3.Data.File()
    {
        Name = fileName,
        Description = fileDescription,
        MimeType = fileMime,
        Parents = new string[] { folder },
    };
    var request = driveService.Files.Create(fileMetaData, file, fileMime);
    request.Fields = "id, webContentLink";
    var response = await request.UploadAsync(cancellationToken);
    if (response.Status != UploadStatus.Completed)
        throw response.Exception;
    var permission = new Permission { AllowFileDiscovery = true, Id = "id, webContentLink", Type = "anyone", Role = "reader" };
    var createRequest = driveService.Permissions.Create(permission, request.ResponseBody.Id);
    createRequest.Fields = "id, webContentLink";
    await createRequest.ExecuteAsync();
    Debug.WriteLine("Link: " + request.ResponseBody.WebContentLink);
}
I am getting the link in the statement request.ResponseBody.WebContentLink, but the permissions are not set on the file. Hence the file is not shared and the link does not work. Is there anything I am doing wrong?
Got it working after correcting the Permission initialization. It seems that Permission.Id is not writable (hence the error The resource body includes fields which are not directly writable.). Thus, I removed the assignment of a value to Permission.Id.
Hence the correct way would be
var permission = new Permission { AllowFileDiscovery = true, Type = "anyone", Role = "reader" };
var fileId = request.ResponseBody.Id;
await driveService.Permissions.Create(permission, fileId).ExecuteAsync();
In this way we can set permissions on the uploaded file and make it shareable through code. The rest of the code is correct, and we can then get the downloadable link from request.ResponseBody.WebContentLink.
The shareable link created through the Google Drive web application is not something that you can create or get via the Google Drive API.
The WebViewLink, like the download links, can only be used by someone who has permissions on the file, so you will need to add the user as a reader on the file if you want them to be able to use that link.
API flow to upload a file and grant access to a user:
Create the file via the file.create method, uploading the content.
Then perform a permissions.create, adding the user who you would like to have access to download the file.
Then do a file.get to get the WebContentLink, and the user should be able to access the file via their Google Drive account.
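A rough sketch of that flow with the v3 .NET client (driveService is assumed to be an authorized DriveService; the file name and email address are placeholders):

```csharp
// Sketch of the three-step flow above (Drive API v3 .NET client).
// "report.pdf" and the email address are placeholders.
var metadata = new Google.Apis.Drive.v3.Data.File { Name = "report.pdf" };
Google.Apis.Drive.v3.Data.File uploaded;
using (var stream = new FileStream("report.pdf", FileMode.Open))
{
    var createRequest = driveService.Files.Create(metadata, stream, "application/pdf");
    createRequest.Fields = "id";
    await createRequest.UploadAsync();
    uploaded = createRequest.ResponseBody;
}

// Grant the target user read access.
var permission = new Permission { Type = "user", Role = "reader", EmailAddress = "user@example.com" };
await driveService.Permissions.Create(permission, uploaded.Id).ExecuteAsync();

// Fetch the link the user can now open.
var getRequest = driveService.Files.Get(uploaded.Id);
getRequest.Fields = "webContentLink, webViewLink";
var shared = await getRequest.ExecuteAsync();
Debug.WriteLine(shared.WebContentLink);
```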
I'm attempting to make a basic .NET API for managing a collection of media (images and videos).
I have configured the webroot to be a folder called "site", and within that folder is a folder called "media" where these files are stored. I can access a test media file that is saved in /site/media/Smush.jpg by loading https://localhost:5001/site/media/smush.jpg - this serves the image as expected.
I have created a method that receives a POST request containing form data from my frontend, and this method saves the file to the webroot using a filestream, code below:
[HttpPost]
[Route("/media/add")]
public async Task<HttpResponseMessage> MediaAdd()
{
    try
    {
        //get the form
        var form = HttpContext.Request.Form;
        //if there's a route, add it into the filepath, otherwise leave it out and have the filepath go straight to media (this prevents an exception if route is blank)
        string filePath = form["route"] == "" ? Path.Combine(_hostingEnvironment.WebRootPath, "media") : Path.Combine(_hostingEnvironment.WebRootPath, "media", form["route"]);
        //get the first (should be only) image - DO WE WANT TO BE ABLE TO ADD MULTIPLE IMAGES? PROBABLY TBH
        IFormFile image = form.Files.First();
        if (image.Length > 0)
        {
            //check the directory exists - create it if not
            if (!Directory.Exists(filePath))
            {
                Directory.CreateDirectory(filePath);
            }
            using (Stream fileStream = new FileStream(Path.Combine(filePath, form["filename"]), FileMode.Create))
            {
                await image.CopyToAsync(fileStream);
                return new HttpResponseMessage(HttpStatusCode.OK);
            }
        }
        else
        {
            return new HttpResponseMessage(HttpStatusCode.BadRequest);
        }
    }
    catch (Exception e)
    {
        return new HttpResponseMessage(HttpStatusCode.BadRequest);
    }
}
My frontend submits a route, filename and the media file, and this is used to save the image. This all works fine; I can submit an image with the path "test" and the name "test.jpg", and the API correctly stores the file at /site/media/test/test.jpg. I can view the file in the solution and see a preview of the image, as with Smush.jpg.
However, attempting to load https://localhost:5001/site/media/test/test.jpg results in a 404. Why is this the case? Can I not add files into the webroot through code and have them be accessible as static files as if I added them to the solution in my IDE? Are there any alternative ways of handling this?
I am using .NET 5.0, and have
app.UseStaticFiles(); in Configure() in Startup.cs.
Sorry if this is a duplicate, but I couldn't find anything else like this.
EDIT:
On checking things again, it seems like rather than my files being at https://localhost:5001/site/media, they are simply in https://localhost:5001/media. I am not sure how I was able to access Smush.jpg at https://localhost:5001/site/media/Smush.jpg before.
It seems as though the webroot is not included as part of a URL to access files within it.
As it is now, I have got what I was looking for it to do.
Well, first a security concern, as #Heinzi also pointed out...
string filePath = form["route"] == "" ? Path.Combine(_hostingEnvironment.WebRootPath, "media") : Path.Combine(_hostingEnvironment.WebRootPath, "media", form["route"]);
What if the user sends form.route == "../../" and, instead of an image, overwrites the appsettings.json file? Keep that in mind if you're planning to release this code to a production environment, and make sure you only accept image files.
On the other hand, if you are serving static files from a folder other than wwwroot, you need to configure it explicitly.
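For example, a sketch of serving an extra folder alongside wwwroot (the "StaticFiles" folder name and "/static" prefix are placeholders):

```csharp
// Sketch (Startup.Configure): serve static files from a folder other than wwwroot.
// "StaticFiles" and "/static" are placeholder names.
app.UseStaticFiles(); // still serves wwwroot as before
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(
        Path.Combine(env.ContentRootPath, "StaticFiles")),
    RequestPath = "/static"
});
```

With this, a file saved at StaticFiles/media/test.jpg would be reachable at https://localhost:5001/static/media/test.jpg.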
Why the 404
It makes sense: you are under the controller/action routes. When you request https://localhost:5001/site/media/test/test.jpg, the framework tries to find a media controller with a test action; it is not looking for static files on the filesystem. Since no such controller/action pair exists, nothing is found and a 404 is returned.
If you saved the files under a path outside of the mapped routes, such as https://localhost:5001/static/media/test.jpg, you would be able to access them.
Look inside your code for:
MapHttpRoute
Which is used to configure how to identify controller actions which are not decorated with the [Route] attribute.
Security concern
When you want to upload a file, you should consider a better solution than one that directly accesses your filesystem.
Possible options:
Blob storage on the cloud
Database blobs
Don't forget to sanitize the input with an antivirus or some similar solution.
I am currently working on a 'download file' implementation using Web API 2.
However, as the files that can be downloaded are NOT stored in the database, I am passing in the full file path as the parameter for identification.
It seems the problem with this approach is that the filePath contains characters that are invalid for a URI... Has anyone got any suggestions to resolve this or an alternate approach?
Download file method:
[HttpGet]
[Route("Files/{*filePath}")]
public HttpResponseMessage Get([FromUri]string filePath)
{
    try
    {
        var file = new FileInfo(filePath);
        byte[] bytes = System.IO.File.ReadAllBytes(filePath);
        var result = Request.CreateResponse(HttpStatusCode.OK);
        result.Content = new ByteArrayContent(bytes);
        result.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
        result.Content.Headers.ContentDisposition.FileName = file.Name; // FileInfo.Name already includes the extension
        return result;
    }
    catch (Exception ex)
    {
        return Request.CreateResponse(HttpStatusCode.InternalServerError, ex);
    }
}
Requiring the client to put the full path in the URI (even if it were encoded so that it only contains valid characters for the URI) implies that you may be publishing these paths somewhere... this is not a great idea for a few reasons:
Security - full Path Disclosure and associated Relative Path Traversal
i.e. what's to stop someone passing in the path to a sensitive file (e.g. your web.config file) and potentially obtaining information that could assist with attacking your system?
Maintainability
Clients may maintain a copy of a URI for reuse or distribution - what happens if the file paths change? Some related conversation on this topic here: Cool URIs don't change
My suggestion - you don't have to put the files themselves in a database, but put a list of files in a database, and use a unique identifier in the URL (e.g. perhaps a slug or GUID). Look up the identifier in the database to discover the path, then return that file.
This ensures:
Nobody can read a file that you haven't indexed and determined is safe to be downloaded
If you move the files you can update the database and client URIs will not change
And to respond to your original question, you can easily ensure the unique identifier is only made up of URI safe characters
Once you have the database, over time you may also find it useful to maintain other metadata there, such as who uploaded each file, when, who downloaded it, and so on.
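As a sketch, the lookup-by-identifier endpoint might look like this (the _fileRepository field and its GetPathById method are hypothetical; any database lookup that maps the identifier to a vetted path would do):

```csharp
[HttpGet]
[Route("Files/{id}")]
public HttpResponseMessage Get(string id)
{
    // Hypothetical repository that maps a GUID/slug to a vetted file path.
    string path = _fileRepository.GetPathById(id);
    if (path == null)
        return Request.CreateResponse(HttpStatusCode.NotFound);

    var result = Request.CreateResponse(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(System.IO.File.ReadAllBytes(path));
    result.Content.Headers.ContentDisposition =
        new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
        {
            FileName = new FileInfo(path).Name
        };
    return result;
}
```

Since the identifier never leaves your control, clients cannot request arbitrary paths, and you can move the files later by updating only the database.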
I have a windows service application that needs to download pdf files from different public web sites and save them locally to a folder on the server
I tried to use System.Net.WebClient to perform the download like this
client = new WebClient();
client.DownloadFile(new Uri(fileLink, UriKind.Absolute), destination);
destination is the full path and file name where I need to save the file, for example: \\server-name\downloads\file123.pdf
fileLink is the url to the pdf file
One of the links I am trying to save is: https://www.wvmmis.com/WV%20Medicaid%20Provider%20SanctionedExclusion/WV%20Medicaid%20Exclusions%20-%20June%202016.pdf
The code works but the file that is saved is corrupted and cannot be opened by Acrobat reader or any pdf reader.
If you click the link above and do save as and save the page locally to a pdf, then you can open it fine. So the problem is not that the pdf is really corrupted, but WebClient is not saving it right.
Is there any configuration I can do to the WebClient that causes it to save the file correctly, or is there another way to do it that does save it right ?
Thank you
I wrote something similar a long time ago:
try
{
    WebRequest request = WebRequest.Create(url);
    using (WebResponse response = request.GetResponse())
    {
        string originalFileName = response.ResponseUri.AbsolutePath.Substring(response.ResponseUri.AbsolutePath.LastIndexOf("/") + 1);
        using (Stream streamWithFileBody = response.GetResponseStream())
        using (Stream output = File.OpenWrite(@"C:\MyPath\" + originalFileName))
        {
            streamWithFileBody.CopyTo(output);
        }
        Console.WriteLine("Downloaded : " + originalFileName);
    }
}
catch (Exception ex)
{
    Console.WriteLine("Unable to Download : " + ex.ToString());
}
After trying all the examples that I found online without luck, I finally figured out a way to do this. I am posting my answer here in case someone else runs into the same problem.
I used the Selenium FirefoxDriver to navigate to the page that contains the link, then I find the link and click it. I created a Firefox profile that downloads the pdf file type directly instead of opening it.
FirefoxDriver driver = new FirefoxDriver(myProfile);
driver.Navigate().GoToUrl(pageUrl);
driver.FindElement(By.LinkText(linkText)).Click();
You can also find the link by href or id too, but in my case I needed to find it by text.
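For completeness, a sketch of such a profile (the download directory is a placeholder; the preference names are standard Firefox settings, and older Selenium .NET versions accept the profile directly in the FirefoxDriver constructor as shown above):

```csharp
// Sketch: a Firefox profile that saves PDFs to disk instead of opening them.
// The download directory is a placeholder path.
var myProfile = new FirefoxProfile();
myProfile.SetPreference("browser.download.folderList", 2);        // 2 = use custom dir
myProfile.SetPreference("browser.download.dir", @"C:\Downloads"); // placeholder
myProfile.SetPreference("browser.helperApps.neverAsk.saveToDisk", "application/pdf");
myProfile.SetPreference("pdfjs.disabled", true);                  // skip the built-in PDF viewer

// Then, as in the snippet above:
FirefoxDriver driver = new FirefoxDriver(myProfile);
```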