Allowing users to download files from Azure Blob Storage as a single zip file - C#

I have multiple files in my blob storage. I also have an Azure database table which stores URLs that point to the individual files in blob storage.
On my web form the user can specify search criteria. The page then searches for files that match the condition and shows a single link to download all of the matching files as one zip file.
I have my search results returned as a list, for example:
List searchresults
This search result will contain multiple URLs, e.g.
searchresults.url = "https://myblobstorage.blob.windows.net/xyz/mymusic.mp3"
If there are matching records, the page shows a single download link, so that the user can click on the link and download the matching files together as a single zip file.
I am able to generate the searchresults list with the required file URLs pointing to the files in my Azure blob container.
Now my question:
Is there a way I can generate a zip file by looping through the searchresults list, grabbing the files from blob storage, and producing a single zip file for the user to download? Give me your suggestions or some sample code to achieve this functionality.
When the user clicks on the link, it should fetch all the files from the corresponding URLs in the search results list, generate a single zip file, and download it to the user's machine.

You need to download the blobs to your server, generate a zip, and push that to the client.
You can use the following lib to generate the zip file: http://dotnetzip.codeplex.com/
The blobs you can download to your server via the .NET SDK provided with the Azure framework. It should be a simple solution, really.
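A hedged sketch of that flow, assuming an ASP.NET WebForms code-behind, the classic Microsoft.WindowsAzure.Storage client, and DotNetZip (Ionic.Zip). The searchResults list, its Url property, and connectionString mirror the question and are placeholder names:

    using System;
    using System.IO;
    using Ionic.Zip;                           // DotNetZip
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    // Streams every matching blob into one zip and writes it to the response.
    // searchResults, result.Url and connectionString are placeholders.
    protected void DownloadZip_Click(object sender, EventArgs e)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();

        Response.Clear();
        Response.ContentType = "application/zip";
        Response.AddHeader("Content-Disposition", "attachment; filename=searchresults.zip");

        using (var zip = new ZipFile())
        {
            foreach (var result in searchResults)
            {
                // Turn the stored URL back into a blob reference.
                var blob = new CloudBlockBlob(new Uri(result.Url), client.Credentials);

                var buffer = new MemoryStream();
                blob.DownloadToStream(buffer);     // blob -> server memory
                buffer.Position = 0;

                zip.AddEntry(Path.GetFileName(blob.Name), buffer);
            }
            zip.Save(Response.OutputStream);       // finished zip -> client
        }
        Response.End();
    }

For large result sets it may be worth writing entries straight to Response.OutputStream with DotNetZip's ZipOutputStream instead of buffering each blob in memory first.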

Related

Downloading a file from JFrog Artifactory using its URL

I'm using JFrog Artifactory and want to download a file using C# and WebClient. The URL is like /filename.zip
But it downloads as an HTML page. When I open it, it says the container is damaged, and the HTML says "you need to enable JavaScript".
How can I fix that?
The Retrieve Folder or Repository Archive API allows downloading an archive file (supports zip/tar/tar.gz/tgz) containing all the artifacts that reside under the specified path (folder or repository root). However, it does not support filtering by properties.
The Artifactory CLI supports concurrently downloading multiple files. It also supports downloading files which match a set of property values. The CLI, however, will use multiple HTTP requests to do so.
A third option would be developing a custom user plugin which allows downloading an archive of artifacts matching a set of properties. An execution user plugin can be executed as a REST API call. There is a sample plugin in the JFrogDev GitHub account which can serve as a good starting point. This plugin allows downloading the content of a directory as an archive.
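For the first option, a hedged C# sketch against the Retrieve Folder or Repository Archive endpoint; the host, repository name, folder path, and credentials below are all placeholders:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ArtifactoryArchiveDownload
    {
        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // Basic-auth credentials; replace with your own user/password.
                var token = Convert.ToBase64String(
                    System.Text.Encoding.ASCII.GetBytes("user:password"));
                http.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", token);

                // Archive endpoint: /api/archive/download/{repo}/{path}?archiveType=zip
                var url = "https://artifactory.example.com/artifactory" +
                          "/api/archive/download/my-repo/some/folder?archiveType=zip";

                byte[] archive = await http.GetByteArrayAsync(url);
                File.WriteAllBytes("folder.zip", archive);
            }
        }
    }

Getting an HTML page back instead of an archive usually means the request hit a UI route or failed authentication, which would also explain the "damaged" zip in the question.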

How to compare between local and cloud file Azure

I have a local site where I can add or delete attachments. After I add an attachment, it gets uploaded to Azure Blob storage, but I get no direct information back except the names of the files currently attached. I am looking for an efficient mechanism to compare files between their local and cloud instances. For example, if a user adds files A and B, they get uploaded to Azure. If the user then edits A and re-uploads it, I need to compare the contents of the file between local and Azure and re-upload if there is a change. Also, if the user deletes file B, I need another check that file A was not edited. So far I have thought about comparing the stream contents. Are there any other efficient ways to do this?
Bryan gave the right direction. I would use Event Grid to generate the MD5 hash, then store it in a key-value store. Then, before uploading the new version, just look up the stored entry and compare the two MD5 values.
Here are some useful links that use S3, but they could give you some insights:
-How to compare versions of an Amazon S3 object?
-https://github.com/micnews/s3-diff
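For the comparison step itself (leaving the Event Grid part aside), a hedged sketch against the classic storage SDK: the blob service keeps a base64 ContentMD5 property when the hash was set at upload, so no download is needed to compare:

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using Microsoft.WindowsAzure.Storage.Blob;

    // Returns true when the local file's MD5 differs from the hash the blob
    // service stored at upload time, i.e. when a re-upload is needed.
    static bool NeedsReupload(string localPath, CloudBlockBlob blob)
    {
        blob.FetchAttributes();   // refreshes blob.Properties, incl. ContentMD5
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(localPath))
        {
            string localHash = Convert.ToBase64String(md5.ComputeHash(stream));
            // ContentMD5 is base64 and only present if it was set during upload.
            return !string.Equals(localHash, blob.Properties.ContentMD5);
        }
    }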

Generate id for files based on file content

I am using an upload mechanism for users to upload files, and the user is supposed to receive three different files extracted from the uploaded one, in a group identified by the uploaded file's name.
I am trying to map the output files to their parent file.
How can I generate something unique that can be linked to the files so that they are easily associated with their group?
I am using C# and SQL Server FILESTREAM to store the files in the database.
The limitation I am facing is that I cannot rename the file provided by the user.
Can someone help me out here?
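One hedged sketch of such an id: hash the uploaded content itself with SHA-256 and store the digest as a group key next to the FILESTREAM rows, so nothing needs renaming (the method name is illustrative):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    // Derives a stable, content-based group id for an uploaded file; the three
    // extracted files can then be saved under the same key.
    static string ComputeGroupId(Stream uploadedFile)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(uploadedFile);
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }

Note that two identical uploads get the same id; if each upload must be its own group regardless of content, a GUID column avoids that.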

Not seeing all blobs when using the Azure List Blobs Container API

I am trying to find a simple way to list log files which I am storing in an Azure Blob container so developers and admins can easily get to dev log information. I am following the information in this API doc https://msdn.microsoft.com/en-us/library/dd135734.aspx but when I go to
https://-my-storage-url-.blob.core.windows.net/dev?comp=list&include={snapshots,metadata,uncommittedblobs,copy}&maxresults=1000
I see one file listed, which is a block blob, but the log files I have generated, which are of type append blob, are not showing. How can I construct this API call to include append blobs?
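One thing to check is the service version: append blobs only exist from service version 2015-02-21 onward, so a request without a recent x-ms-version header will not return them. For comparison, a hedged sketch using the classic .NET storage SDK, which sends a current version automatically; connectionString is a placeholder:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    // Flat-lists every blob in the "dev" container, printing each blob's type.
    static void ListDevBlobs(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("dev");

        foreach (IListBlobItem item in container.ListBlobs(null, useFlatBlobListing: true))
        {
            var blob = item as CloudBlob;
            if (blob != null)
                Console.WriteLine("{0} ({1})", blob.Name, blob.Properties.BlobType);
        }
    }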

Uploading files to site / datatable

I'm trying to make a little site for a few people. Basically, what I'm looking for is that users can register on this site, log on, and upload files. I want the files to upload to a new folder that is named after the user id, for example:
User Tommy registers and gets user id 1234.
Then his upload folder will be
http://www.site.com/users/1234/upload/
He is the only one that has access to this data and can delete it.
Another way would be storing this info in SQL; I'm using MS SQL 2008. Can I save the uploaded attachments right in the data table, or should I have one account with full rights that creates the folders for the users and saves the direct link in the database?
I'm going for a little, very light version of dropbox.com, but only in browser form.
And if you go the folder route,
System.IO's FileInfo and DirectoryInfo will give you all the info you need about the contents of a folder and the files within it.
And don't forget Server.MapPath() :)
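A minimal sketch of that folder route in WebForms, assuming a FileUpload control named FileUpload1 and per-user folders under ~/users; where the user id comes from is left as a placeholder:

    using System;
    using System.IO;

    // Saves an upload into a per-user folder like ~/users/1234/upload/.
    protected void Upload_Click(object sender, EventArgs e)
    {
        string userId = "1234";   // placeholder: read this from the logged-in user
        string folder = Server.MapPath("~/users/" + userId + "/upload/");

        Directory.CreateDirectory(folder);   // no-op if the folder already exists

        if (FileUpload1.HasFile)
            FileUpload1.SaveAs(Path.Combine(folder, Path.GetFileName(FileUpload1.FileName)));
    }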
Take a look at FILESTREAM in SQL Server 2008, which is designed to efficiently store unstructured data.
BOL: FILESTREAM Overview
