I am creating a WPF application and I am using the Google Drive API for uploading and downloading files. As Google Drive provides revision history for files, I also want to implement this in my project and get a detailed list of revisions for a file. Is there some kind of event for this? Can anyone tell me how this works and how I can implement it in my application? And how can I revert to a previous version of a file?
I found the answer to the above and also mentioned it in the comments.
I want to use revision history in a scenario like this:
I have uploaded a document of around 500 MB (or more) to Google Drive. Another user downloads it to their PC, changes 2-3 lines in the document, and then uploads it again. Instead of uploading the entire document, I want only the changed version to be merged into the already uploaded document, as it is time consuming to download the same 500 MB document, make some small changes, and then upload the entire document again.
How can I achieve this in .NET?
You can try calling the Google Drive web API:
List of revisions
GET /files/{fileId}/revisions
Retrieving a particular revision
GET /files/{fileId}/revisions/{revisionId}
More details at:
https://developers.google.com/drive/v2/reference/#Revisions
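If you are using the Google .NET client library instead of raw HTTP, the same two calls are exposed through the DriveService. Here is a minimal sketch, assuming the Google.Apis.Drive.v2 NuGet package and an already-authorized service (fileId and revisionId are placeholders):

using System;
using Google.Apis.Drive.v2;
using Google.Apis.Drive.v2.Data;

// 'service' is an authorized DriveService; OAuth 2.0 setup is omitted.
public static void ListRevisions(DriveService service, string fileId)
{
    // Equivalent of GET /files/{fileId}/revisions
    RevisionList revisions = service.Revisions.List(fileId).Execute();
    foreach (Revision revision in revisions.Items)
    {
        Console.WriteLine("Revision {0} modified {1}", revision.Id, revision.ModifiedDate);
    }
}

public static Revision GetRevision(DriveService service, string fileId, string revisionId)
{
    // Equivalent of GET /files/{fileId}/revisions/{revisionId}
    return service.Revisions.Get(fileId, revisionId).Execute();
}

As for reverting, one common approach is to download the content of the older revision and upload it again so it becomes the current version of the file.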
I need to figure out a way to let my users download several PDF files (sometimes thousands) from Azure Blob Storage. I know that I can download the files in parallel, and that would make things quicker, but the issue here is that the user could have thousands of PDF files to download, and that isn't at all reasonable.
Also, I can't download the files to another server, zip them, and let the user download them from there, as that would be incredibly inefficient for me.
Is there a way to create a zip of the files and let the user download that (other than the way above)? I saw other questions on this topic but none gave an answer/solution that suits my needs.
What would be the absolute best way to do this? Or is there another way to perform this task?
Thank you in advance.
Since no one gave an answer, and I see more posts about this on Stack Overflow and other sites, I decided to share my solution here (I can't share code, because reasons...).
Firstly, as of today, 04-09-2020, there is still no support for bulk download from Azure Blob Storage as a zip (or other format) directly from Azure to the client, without routing the download flow through a server that does the organizing and zipping.
The problem I had...
I needed to download (several) files from Azure Blob Storage, zip them (maybe organize them by folders), and prompt the client to download them in bulk, without any download data passing through the server and without filling the client's downloads folder with scattered files.
During my research I thought about doing everything on the client side in JavaScript, in memory, and letting the client download it from there, but that could be quite memory expensive since my downloads could be in the GB size range.
The solution...
Then I came across a JavaScript library called StreamSaver. This library writes the files with streams, directly to the client's machine, meaning the memory expense is much lower.
Luckily, this library also allows organizing the files inside the 'download directory' that will be prompted to the user, and even lets me zip that directory before prompting the user to download it, meaning this one library solved almost all my problems.
Now I only have a webmethod, called from JavaScript, that returns all the Azure SAS URLs to download from (sketched below); the rest is all JavaScript on the client.
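The webmethod itself is essentially just SAS generation. A rough sketch of that idea (account, container, method names, and expiry are illustrative, not the original code; this assumes the Azure.Storage.Blobs v12 SDK):

using System;
using System.Collections.Generic;
using System.Web.Services;
using Azure.Storage;
using Azure.Storage.Sas;

public partial class DownloadsPage : System.Web.UI.Page
{
    // Hypothetical page method: returns one short-lived, read-only SAS URL per blob.
    [WebMethod]
    public static List<string> GetDownloadUrls(List<string> blobNames)
    {
        var credential = new StorageSharedKeyCredential("myaccount", "accountKey");
        var urls = new List<string>();
        foreach (string blobName in blobNames)
        {
            var sasBuilder = new BlobSasBuilder
            {
                BlobContainerName = "documents",
                BlobName = blobName,
                Resource = "b", // "b" = an individual blob
                ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
            };
            sasBuilder.SetPermissions(BlobSasPermissions.Read);
            string sas = sasBuilder.ToSasQueryParameters(credential).ToString();
            urls.Add("https://myaccount.blob.core.windows.net/documents/" + blobName + "?" + sas);
        }
        return urls;
    }
}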
TL;DR:
Used the StreamSaver JavaScript library to download, organize and zip all the files on the client side and then prompt the user to download the result, using only a webmethod to get all the URLs which are to be downloaded.
This solution works (from what I've tested) in at least these browsers:
Chrome;
Firefox;
Opera;
Edge (Chromium)
Problems I came across using the StreamSaver library...
There are a few drawbacks/problems with the library:
1st: Safari doesn't support it! More info about this here.
2nd: StreamSaver only allows zipping files smaller than 4 GB; this could be worked around by using yet another library for zipping...
One of the many things that SharePoint does extremely well is that, when you have versioning enabled for files uploaded to a Document Library, every time you save changes to a file it saves only the difference from the previous version of the file to the Content Database, NOT the whole file again.
I am trying to duplicate that behavior with standard C# code against either a file-system folder in Windows or a SQL database blob field. Does anyone have any ideas or pointers on how SharePoint accomplishes this and how it can be done outside of SharePoint?
SharePoint uses a technique called data "shredding" to store each change to a given file. Unfortunately, I don't think you will find enough technical detail to truly reproduce what they are doing, but you might be able to devise a reasonable approximation with your own design.
When shredded, the data associated with a file such as Document.docx is distributed across a set of BLOBs associated with the file. The independent BLOBs are each assigned a unique ID (offset) to enable reconstruction in the correct order when requested by a user.
Each document "shred" is stored in a SQL database table named DocStreams. Each BLOB carries a numerical Id that identifies its place in the source file when the shreds are coalesced. When a client updates a file, only the shredded BLOB that corresponds to the change is updated, with the update occurring on the database server as opposed to the Web server.
For more details on shredded storage, see:
http://download.microsoft.com/download/9/6/6/9661DAC2-393D-445A-BDC1-E60743B1231E/Shredded%20Storage%20in%20SharePoint%202013.pdf
https://jeremythake.com/the-truth-behind-shredded-storage-in-sharepoint-2013-a84ec047f28e
https://www.c-sharpcorner.com/UploadFile/91b369/shredded-storage-in-sharepoint-2013/
I have an ASP.NET website that stores large numbers of files such as videos. I want an easy way to allow the user to download all the files in a single package. I was thinking about creating ZIP files dynamically.
All the examples I have seen involve creating the file before it is downloaded, but potentially terabytes of information will be downloaded, and therefore the user would have a long wait. Apparently ZIP files store all the information regarding what is in the archive at the end of the file.
My idea is to dynamically create the file as it is downloaded. That way the user could click download, the download would start immediately, and no space on the server would be needed for pre-packaging, since the contents would be copied over uncompressed, sequentially. The final part of the file would contain the information on the contents of what has been downloaded.
Has anyone had any experience with this? Does anyone know a better way of doing it? At the moment I can't see any pre-made utilities for doing this, but I believe it will work. If one doesn't exist then I'm thinking that I will have to read the ZIP file format specification and write my own code... something that will take more time than I was intending to spend on this.
https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
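For what it's worth, the idea described here is exactly how the ZipArchive class in System.IO.Compression behaves in Create mode: entries are written sequentially to a (possibly non-seekable) stream, and the central directory is appended when the archive is disposed. A minimal ASP.NET sketch under that assumption (file paths and names are placeholders):

using System.IO;
using System.IO.Compression;
using System.Web;

public static void StreamZip(HttpResponse response, string[] filePaths)
{
    response.ContentType = "application/zip";
    response.AddHeader("Content-Disposition", "attachment; filename=files.zip");
    response.BufferOutput = false; // stream to the client as the zip is built

    // Create mode writes each entry sequentially and emits the
    // central directory at the end of the stream on Dispose.
    using (var archive = new ZipArchive(response.OutputStream, ZipArchiveMode.Create, leaveOpen: true))
    {
        foreach (string path in filePaths)
        {
            // NoCompression means the files are simply copied through sequentially.
            ZipArchiveEntry entry = archive.CreateEntry(Path.GetFileName(path), CompressionLevel.NoCompression);
            using (Stream entryStream = entry.Open())
            using (FileStream file = File.OpenRead(path))
            {
                file.CopyTo(entryStream);
            }
        }
    }
}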
For the past 3 days I've been trying to create an upload system for multiple, possibly large, files with progress bars.
I've been roaming the web relentlessly for the past few days, and I can say I am now familiar with most of the difficulties.
Sadly, all the solutions I've found online are not written in C# or VBScript; in fact, most of them are written in PHP.
I wouldn't mind switching to another language, but the entire website is written in VB.NET and, for the sake of coherence, I thought it might be best to stick with it.
File uploads:
Problem 1 - progress bar:
I understand file uploads will not work with AJAX, since the AJAX response will only occur after the file has completed its upload.
I understand there is a solution using iframes, but I cannot seem to find any online examples (preferably using VB.NET or C#).
I understand there is another alternative using Flash. How?
I also understand people are mostly against using iframes, but I can't find what the reason might be.
Problem 2 - Multiple Files:
I can have multiple-file support with HTML5. Great, but IE doesn't support it? Well... IE users will just have to upload one file at a time.
Problem 3 - Large files:
How?
I heard something about chunking and blobs, but these are still just random gibberish words to me. Can somebody explain the meaning and the implementation?
References to reading material are much appreciated, even though, if it's on the web, I've probably already read it in my search for a solution.
#DevlshOne has a decent thread with some good information.
Here are the three basic requirements for what I did:
Create a Silverlight app for client-side access and upload control (use the app of your choice).
Create an HttpHandler to receive the data in chunks and manage requests.
Create the database backend to handle the files.
Silverlight worked well because I was already in VB (ASP.NET). When used in-browser, as opposed to out-of-browser, the ASP.NET session was shared with Silverlight, so there was no need to have additional security/login measures. Silverlight also allowed me to limit what file types could be selected and allow the user to select multiple files from the same folder.
The Silverlight app grabs the files selected by the user, displays them for editing of certain properties, and then begins the upload when the user clicks the 'upload' button. This sets off a number of threads that each upload chunks of data to the httphandler. The HttpHandler and Silverlight app send and receive in chunks, with the HttpHandler always sending an OK or ERROR message when the request has been processed for the uploaded chunk.
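A bare-bones sketch of that kind of handler (parameter names like fileId and chunkIndex are illustrative, and for brevity it writes chunks to a temp file rather than the temp table described below):

using System;
using System.IO;
using System.Web;

// Receives one chunk per request, writes it at its offset, and replies
// OK or ERROR as described above.
public class ChunkUploadHandler : IHttpHandler
{
    private const int ChunkSize = 512 * 1024; // must match the client's chunk size

    public void ProcessRequest(HttpContext context)
    {
        try
        {
            string fileId = context.Request.QueryString["fileId"];
            int chunkIndex = int.Parse(context.Request.QueryString["chunkIndex"]);

            string tempPath = Path.Combine(context.Server.MapPath("~/App_Data/uploads"), fileId + ".tmp");

            // Seek to the chunk's offset so chunks may arrive out of order
            // (the Silverlight app uploads them from several threads).
            using (var output = new FileStream(tempPath, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite))
            {
                output.Seek((long)chunkIndex * ChunkSize, SeekOrigin.Begin);
                context.Request.InputStream.CopyTo(output);
            }

            context.Response.Write("OK");
        }
        catch (Exception)
        {
            context.Response.Write("ERROR");
        }
    }

    public bool IsReusable { get { return true; } }
}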
Our specific implementation of file uploading also required some database properties (fields) to be filled out by the user, so we also had inputs for those properties and uploaded them to the server with the file data.
An in-browser Silverlight app can also have parameters passed into it through the html, so I do this with settings like 'max chunk size' or 'max thread count'. I can change the setting in the database and have it apply to all users.
The database backend is basically a few stored procedures (insert your data management preference here) that control the flow of the logic. One table holds completed files (no file data), and a second holds the temp files that are in progress of being uploaded. One stored procedure initiates a new file record in the temp table and processes additional chunk uploads, and another controls the migration of the completely uploaded file from the temp table to the completed table. (A piece of VB code in the HttpHandler migrates the actual binary file data from the temp table to a physical file.)
This seems pretty complex, but the most difficult part would be the interaction with the handler and passing the chunks around (response/requests, uploading successive chunks, etc.). I left out a lot of information, but this is the basic implementation.
I am trying to make a tool for backup/restore of documents from a Google account.
Backup is easy and I have no problems with it. But I have two unsolved questions about restore:
1) Is it possible to upload a new version of an existing document? When I upload a document, it appears as a separate copy.
I found this was discussed already here: Upload and replace file in given folder on Google Docs using .net api, but it seems the suggestion was just to remove the old version before uploading the new one, and the Id of the document will be changed. Is this correct?
2) Google Docs has a limit on the size of documents that can be converted into its internal format: http://docs.google.com/support/bin/answer.py?hl=en&answer=37603. So it is possible to create a large document, save it to the local computer, and then Google Docs will refuse to convert it because the document's size is over the limit. In such a case it is possible to upload the document without conversion, but it then becomes un-editable via the web site. Is there some workaround for this situation?
Unable to upload large files to Google Docs - here the advice is to break the document into small pieces before uploading and link them together afterwards. But maybe there are other ideas?
1. Is it possible to upload a new version of an existing document? When I upload a document, it appears as a separate copy.
Yes, this is possible. We call it "upload & replace", as you've noticed. There is no need to remove the existing version first. The following link describes how to do this in the protocol:
http://code.google.com/apis/documents/docs/3.0/developers_guide_protocol.html#UpdatingMetadataAndContent
From the .NET client library, what you need to do is attach an input stream to the Update() request. The method header for what you need is here:
http://code.google.com/p/google-gdata/source/browse/trunk/clients/cs/src/core/service.cs#554
Create a stream containing your new file content, and just pass that in. That should be it!
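In other words, something like this rough sketch, assuming the Update(Uri, Stream, contentType, slug) overload that the linked source shows (how the edit-media URI is obtained here is schematic, not the library's documented pattern):

using System;
using System.IO;
using Google.GData.Client;
using Google.GData.Documents;

// 'service' is an authenticated DocumentsService and 'entry' is the
// existing DocumentEntry to replace (queried beforehand; details omitted).
public static void ReplaceContent(DocumentsService service, DocumentEntry entry, string localPath)
{
    using (FileStream stream = File.OpenRead(localPath))
    {
        // Schematic: in practice, use the entry's edit-media link here.
        Uri editMediaUri = new Uri(entry.EditUri.Content);

        // Uploads the new content against the existing entry, so the
        // document keeps its Id instead of appearing as a separate copy.
        service.Update(editMediaUri, stream, "application/msword", Path.GetFileName(localPath));
    }
}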
2. Google Docs has a limit for size... Is there some workaround for this situation?
Unfortunately, there is currently no way to circumvent the size limitations of converted documents. They must be uploaded as unconverted files and thus are not editable in the Google Docs user interface.