Merging files on Amazon S3 - C#

I have an application where I want to merge two files present on S3 into a third file. I thought of using copy-part requests within a multipart upload. Below is the code.
AmazonS3Config config = new AmazonS3Config();
AmazonS3 s3Client = new AmazonS3Client(accessKeyID, secretAccessKey, config);

// List to store the copy part responses.
List<CopyPartResponse> copyResponses = new List<CopyPartResponse>();

InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest()
    .WithBucketName(targetBucket)
    .WithKey(targetObjectKey);
InitiateMultipartUploadResponse initResponse = s3Client.InitiateMultipartUpload(initiateRequest);
String uploadId = initResponse.UploadId;

try
{
    // Get the size of the first source object.
    GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest();
    metadataRequest.BucketName = sourceBucket;
    metadataRequest.Key = sourceObjectKey1;
    GetObjectMetadataResponse metadataResponse = s3Client.GetObjectMetadata(metadataRequest);
    long objectSize1 = metadataResponse.ContentLength; // in bytes

    // Get the size of the second source object.
    GetObjectMetadataRequest metadataRequest2 = new GetObjectMetadataRequest();
    metadataRequest2.BucketName = sourceBucket;
    metadataRequest2.Key = sourceObjectKey2;
    GetObjectMetadataResponse metadataResponse2 = s3Client.GetObjectMetadata(metadataRequest2);
    long objectSize2 = metadataResponse2.ContentLength; // in bytes

    long bytePosition = 0;

    // Part 1: the full byte range of the first source object.
    CopyPartRequest copyRequest1 = new CopyPartRequest()
        .WithDestinationBucket(targetBucket)
        .WithDestinationKey(targetObjectKey)
        .WithSourceBucket(sourceBucket)
        .WithSourceKey(sourceObjectKey1)
        .WithUploadID(uploadId)
        .WithFirstByte(bytePosition)
        .WithLastByte(objectSize1 - 1)
        .WithPartNumber(1);
    copyResponses.Add(s3Client.CopyPart(copyRequest1));

    // Part 2: the full byte range of the second source object.
    CopyPartRequest copyRequest2 = new CopyPartRequest()
        .WithDestinationBucket(targetBucket)
        .WithDestinationKey(targetObjectKey)
        .WithSourceBucket(sourceBucket)
        .WithSourceKey(sourceObjectKey2)
        .WithUploadID(uploadId)
        .WithFirstByte(bytePosition)
        .WithLastByte(objectSize2 - 1)
        .WithPartNumber(2);
    copyResponses.Add(s3Client.CopyPart(copyRequest2));

    CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest()
        .WithBucketName(targetBucket)
        .WithKey(targetObjectKey)
        .WithUploadId(initResponse.UploadId)
        .WithPartETags(GetETags(copyResponses));
    CompleteMultipartUploadResponse completeUploadResponse =
        s3Client.CompleteMultipartUpload(completeRequest);
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
}
But it throws an exception on the last call, CompleteMultipartUpload. This is the S3 exception: "Your proposed upload is smaller than the minimum allowed size".
Whereas if I only upload copyRequest1, it works fine.
Any help is appreciated!!
Regards,
Haseena

Did you manage to solve the problem? It seems that it can't be done using the S3 API.

It is not possible to merge small uploaded files this way using the S3 API: in a multipart upload, every part except the last must be at least 5 MB, so CompleteMultipartUpload rejects a first copied part smaller than that. (copyRequest1 alone works because a sole part is also the last part, which has no minimum.) I am using FTP to download and merge instead.
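If you'd rather stay on the S3 API, files below the 5 MB part minimum can still be merged client-side by downloading both objects and re-uploading the concatenation. A minimal sketch using the same s3Client, assuming both files fit comfortably in memory:

// Download each source object and append it to one in-memory stream.
var merged = new MemoryStream();
foreach (string key in new[] { sourceObjectKey1, sourceObjectKey2 })
{
    GetObjectRequest getRequest = new GetObjectRequest();
    getRequest.BucketName = sourceBucket;
    getRequest.Key = key;
    using (GetObjectResponse getResponse = s3Client.GetObject(getRequest))
    {
        getResponse.ResponseStream.CopyTo(merged);
    }
}
merged.Position = 0;

// Upload the concatenation as the target object.
PutObjectRequest putRequest = new PutObjectRequest();
putRequest.BucketName = targetBucket;
putRequest.Key = targetObjectKey;
putRequest.InputStream = merged;
s3Client.PutObject(putRequest);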

Related

Firebase Bucket

I'm trying to upload an image to Firebase Storage, but I'm running into a problem with it.
var storage = StorageClient.Create();
string studentImageRef = bucket + 33333 + ".jpg";
var obj = storage.GetObject(studentImageRef, studentImageRef);
var downloadUrl = obj.MediaLink;
await storage.UploadObjectAsync(obj, null, new UploadObjectOptions { PredefinedAcl = PredefinedObjectAcl.PublicRead });
// Add the download URL to the student's information
studentsInfo.ImageUrl = downloadUrl;
This is the error I encountered.
Error: The service storage has thrown an exception. HttpStatusCode is NotFound. The specified bucket does not exist
And yes, I tried several times and made sure the bucket name is correct, but the error still persists.
I figured out how to fix this problem with the code below. (In my original code I was passing studentImageRef as both the bucket name and the object name, so the client was looking for a bucket that does not exist.)
// Convert image to stream
var stream = new MemoryStream();
imageFile.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg);
stream.Position = 0;

// Initialize Google Cloud Storage
var storage = StorageClient.Create();
string studentImageRef = "my-bucket.appspot.com/" + Lname_FnameGetter + ".jpg";
await storage.UploadObjectAsync("my-bucket.appspot.com", studentImageRef, "image/jpeg", stream, new UploadObjectOptions { PredefinedAcl = PredefinedObjectAcl.PublicRead });

// Get the download URL for the image
var obj = storage.GetObject("my-bucket.appspot.com", studentImageRef);
var downloadUrl = obj.MediaLink;

// Add the download URL to the student's information
studentsInfo.ImageUrl = downloadUrl;
The code above solved the problem.
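For reference, a more conventional shape for the same upload, as a minimal sketch: the bucket argument is the bare bucket name (Firebase default buckets follow the <project-id>.appspot.com pattern), and any folder-like prefix belongs in the object name. The names here ("my-bucket.appspot.com", "students/student.jpg") are placeholders.

var storage = StorageClient.Create();
using (var stream = File.OpenRead("student.jpg"))
{
    // Bucket name only; the object name carries the path-like prefix.
    var obj = await storage.UploadObjectAsync(
        "my-bucket.appspot.com",
        "students/student.jpg",
        "image/jpeg",
        stream,
        new UploadObjectOptions { PredefinedAcl = PredefinedObjectAcl.PublicRead });
    string downloadUrl = obj.MediaLink;
}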

How to zip multiple S3 Objects in a single zip file and move it to another folder in the same bucket using C#

I'm trying to write a Lambda function that zips all the S3 objects in the Download folder into a single zip file and then moves that zip file to the BatchDownload folder in the same S3 bucket.
ListObjectsRequest downloadS3Object = new ListObjectsRequest
{
    BucketName = sample,
    Prefix = download
};
ListObjectsResponse downloadResponse = s3Client.ListObjectsAsync(downloadS3Object).Result;
List<string> downloadS3ObjectKeys = downloadResponse.S3Objects
    .Where(x => !string.IsNullOrEmpty(Path.GetFileName(x.Key)))
    .Select(s3Object => s3Object.Key)
    .ToList();
foreach (string downloadS3ObjectKey in downloadS3ObjectKeys)
{
    ListObjectsRequest checkBatchDownload = new ListObjectsRequest
    {
        BucketName = sample,
        Prefix = batchDownload
    };
    ListObjectsResponse s3ObjectResponse = s3Client.ListObjectsAsync(checkBatchDownload).Result;
    bool IsArchived = false;
    if (s3ObjectResponse.S3Objects.Count <= 0)
    {
        PutObjectRequest createBatchFolder = new PutObjectRequest()
        {
            BucketName = sample,
            Key = batchDownload
        };
        s3Client.PutObjectAsync(createBatchFolder);
    }
}
In the above code I'm getting all the objects from the Download folder and then looping through each of the object keys. I don't understand how to zip all the objects into a single zip file. Is there a better way to do this without getting the object keys separately?
Can you please help with the code to zip all the objects of the Download folder into a zip file and move that file to a new folder?
I'm not sure why you appear to be calling ListObjects again inside the loop, as well as re-uploading the same objects, but it doesn't seem right.
It seems you want to download all your objects, place them in a zip archive, and re-upload it.
So you need something like the following:
var downloadS3Object = new ListObjectsRequest
{
    BucketName = sample,
    Prefix = download
};
var downloadResponse = await s3Client.ListObjectsAsync(downloadS3Object);
List<string> downloadS3ObjectKeys = downloadResponse.S3Objects
    .Where(x => !string.IsNullOrEmpty(Path.GetFileName(x.Key)))
    .Select(s3Object => s3Object.Key)
    .ToList();

var stream = new MemoryStream();
using (var zip = new ZipArchive(stream, ZipArchiveMode.Update, true))
{
    foreach (string downloadS3ObjectKey in downloadS3ObjectKeys)
    {
        var getObject = new GetObjectRequest
        {
            BucketName = sample,
            Key = downloadS3ObjectKey,
        };
        var entry = zip.CreateEntry(downloadS3ObjectKey);
        using (var zipStream = entry.Open())
        using (var objectResponse = await s3Client.GetObjectAsync(getObject))
        using (var objectStream = objectResponse.ResponseStream)
        {
            // Copy into the entry's stream, not the ZipArchive itself.
            await objectStream.CopyToAsync(zipStream);
        }
    }
}
stream.Position = 0; // reset the MemoryStream to the beginning

var createBatchFolder = new PutObjectRequest()
{
    BucketName = sample,
    Key = batchDownload,
    InputStream = stream,
};
await s3Client.PutObjectAsync(createBatchFolder);
Note the use of using to dispose the archive, the object responses and their streams. Also, do not use .Result, as you may deadlock; use await instead.
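One caveat: buffering the whole archive in a MemoryStream is fine for small batches, but Lambda memory is limited, so for larger batches you could back the archive with a file under /tmp, Lambda's writable scratch space. A sketch of that variation, with the file name as a placeholder:

string zipPath = Path.Combine("/tmp", "batch.zip"); // placeholder file name
using (var fileStream = new FileStream(zipPath, FileMode.Create, FileAccess.ReadWrite))
{
    using (var zip = new ZipArchive(fileStream, ZipArchiveMode.Create, true))
    {
        // ... create the entries exactly as above ...
    }
    fileStream.Position = 0;
    await s3Client.PutObjectAsync(new PutObjectRequest
    {
        BucketName = sample,
        Key = batchDownload,
        InputStream = fileStream,
    });
}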

Google Cloud Storage: Getting object from hierarchical / sub-folder structure

I have created a hierarchical structure for managing files in a bucket, planning to create a folder for each month, e.g. dec-2017. There will be more than 10k PDF files in each folder.
I have written C# code for getting objects from the bucket. This code works fine for accessing files at the root of the bucket, but I am having issues accessing files in a folder, i.e. my-bucket/dec-2017/test.pdf is not accessible with this code.
Refer to my bucket structure here.
I am using the following code. Has anyone done this before?
if (_storageService == null)
{
    _storageService = CreateStorageClient();
}
ObjectsResource.GetRequest downloadRequest = null;
//string objectName = "dec-2017%2Ftest.pdf";
string objectName = "dec-2017/test.pdf";
downloadRequest = new ObjectsResource.GetRequest(_storageService, bucketName, objectName);
MemoryStream stream = new MemoryStream();
downloadRequest.Download(stream);
byte[] bytes = stream.ToArray();
Please check the sample code below:
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Storage.V1;

// Write a small test file to upload.
string localPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Test.csv");
File.WriteAllText(localPath, "test");

// BucketConnector and ConfigurationParameters are helpers from my own project.
GoogleCredential credential = null;
BucketConnector bucketConnector = new BucketConnector();
credential = bucketConnector.ConnectStream();
var storageClient = StorageClient.Create(credential);
string folderPath = ConfigurationParameters.FOLDER_NAME_IN_BUCKET;
using (FileStream stream = File.OpenRead(localPath))
{
    string objectName = folderPath + Path.GetFileName(localPath);
    storageClient.UploadObject(bucketName, objectName, null, stream);
}
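The sample above uploads into a folder; for the original question (downloading my-bucket/dec-2017/test.pdf), here is a minimal sketch with the Google.Cloud.Storage.V1 client, assuming "my-bucket" is your bucket name. GCS has no real folders: the object name is the full, unencoded string "dec-2017/test.pdf" (no %2F).

var storage = StorageClient.Create();
using (var stream = new MemoryStream())
{
    // The object name carries the folder prefix verbatim.
    storage.DownloadObject("my-bucket", "dec-2017/test.pdf", stream);
    byte[] bytes = stream.ToArray();
}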

Upload thumbnail image to Vimeo via API call in C#

I am trying to set the thumbnail for a pull video upload done through the Vimeo API. I am developing this for a C# Windows service, and please note that there are no official libraries for this. For the moment I use this library. I am able to successfully upload the video by following the Vimeo documentation; however, when I try to upload an image to be the thumbnail of a video, I get an issue. According to the Vimeo picture upload documentation, in step 2, I need to upload my thumbnail image via a PUT request. It says that I need to do the following:
PUT https://i.cloud.vimeo.com/video/518016424
.... binary data of your file in the body ....
I can't figure out how to do this. I can get the binary data of the image by using
byte[] byte_array_of_image = File.ReadAllBytes(file);
but how can I send this data to the API and get a response (with or without using the library)? If it helps, here is my code for the video and thumbnail upload done so far.
var vc = VimeoClient.ReAuthorize(
    accessToken: ConfigurationManager.AppSettings["ACCESS_TOKEN"],
    cid: ConfigurationManager.AppSettings["API_KEY"],
    secret: ConfigurationManager.AppSettings["API_SECRET"]
);
string temporary_video_dir = ConfigurationManager.AppSettings["TEMP_VIDEO_URL"];
Dictionary<string, string> automatic_pull_parameters = new Dictionary<string, string>();
automatic_pull_parameters.Add("type", "pull");
automatic_pull_parameters.Add("link", temporary_video_dir);
var video_upload_request = vc.Request("/me/videos", automatic_pull_parameters, "POST");
string uploaded_URI = video_upload_request["uri"].ToString();
string video_id = uploaded_URI.Split('/')[2];
Library.WriteErrorLog("Succesfully uploaded Video in test folder. Returned Vimeo ID for video: " + video_id);
var picture_resource_request = vc.Request("/videos/" + video_id + "/pictures", null, "POST");
string picture_resource_link = picture_resource_request["uri"].ToString();
//Library.WriteErrorLog("uri: " + picture_resource_link);
// Note: File.ReadAllBytes only reads from the local file system; it cannot fetch a URL.
byte[] binary_image_data = File.ReadAllBytes("http://testclient.xitech.com.au/Videos/Images/Closing_2051.jpg");
string thumbnail_upload_link = picture_resource_link.Split('/')[4];
Please help! Stuck for hours now.
WebClient has a method called UploadData that fits like a glove. Below is an example of what you can do.
WebClient wb = new WebClient();
wb.Headers.Add("Authorization", "Bearer " + AccessToken); // note the space after "Bearer"
// Download the image bytes, then PUT them to the upload link returned by the
// POST /videos/{video_id}/pictures call (its "link" field).
var file = wb.DownloadData(new Uri("http://testclient.xitech.com.au/Videos/Images/Closing_2051.jpg"));
var asByteArrayContent = wb.UploadData(new Uri(picture_upload_link), "PUT", file);
var asStringContent = Encoding.UTF8.GetString(asByteArrayContent);
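An equivalent sketch with HttpClient, since WebClient is deprecated in newer .NET; uploadLink is assumed to be the same upload link from the pictures POST:

using (var http = new HttpClient())
{
    http.DefaultRequestHeaders.Authorization =
        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);

    // Fetch the image bytes, then PUT them as the raw request body.
    byte[] imageBytes = await http.GetByteArrayAsync("http://testclient.xitech.com.au/Videos/Images/Closing_2051.jpg");
    var content = new ByteArrayContent(imageBytes);
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("image/jpeg");
    HttpResponseMessage response = await http.PutAsync(uploadLink, content);
    response.EnsureSuccessStatusCode();
}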
Reference post: Vimeo API C# - Uploading a video. The answer there is not upvoted, but it's worth trying; it worked well in my case.
See the code below:
public ActionResult UploadChapterVideoVimeo(HttpPostedFileBase file, string productID = "")
{
    try
    {
        if (file != null)
        {
            var authCheck = Task.Run(async () => await vimeoClient.GetAccountInformationAsync()).Result;
            if (authCheck.Name != null)
            {
                BinaryContent binaryContent = new BinaryContent(file.InputStream, file.ContentType);
                binaryContent.OriginalFileName = file.FileName;

                // Derive the chunk size from the upload's length, rounded down
                // to whole megabytes, with a 1 MB floor for small files.
                int chunkSize = 1048576;
                int contentLengthInMb = file.ContentLength / (1024 * 1024);
                if (contentLengthInMb > 1)
                {
                    chunkSize = contentLengthInMb * 1048576;
                }

                var d = Task.Run(async () => await vimeoClient.UploadEntireFileAsync(binaryContent, chunkSize, null)).Result;
                vmodel.chapter_vimeo_url = "VIMEO-" + d.ClipUri;
            }
            return RedirectToAction("ProductBuilder", "Products", new { productId = EncryptedProductID, message = "Successfully uploaded video", type = 1 });
        }
    }
    catch (Exception exc)
    {
        return RedirectToAction("ProductBuilder", "Products", new { productId = EncryptedProductID, message = "Failed to upload video " + exc.Message, type = 0 });
    }
    return null;
}

Downloading multiple files from FTP server

I have multiple files on an FTP server. I do not know the names of these files, except that they are all .xml files.
How do I programmatically download these files using .Net's FtpWebRequest?
Thanks.
Most likely you'll have to issue a directory listing command that lists all the files, then go through each one, downloading it.
Here is some info on getting a directory listing.
http://msdn.microsoft.com/en-us/library/ms229716.aspx
Take a look at the ListDirectory function. It's the equivalent of the NLIST command in FTP.
You'll probably want to use an existing library like this one rather than write your own.
var __output = new List<string>();
FtpWebRequest __request = (FtpWebRequest)FtpWebRequest.Create(__requestLocation);
__request.Method = WebRequestMethods.Ftp.ListDirectory;
var __response = (FtpWebResponse)__request.GetResponse();
using (StreamReader __directoryList = new StreamReader(__response.GetResponseStream()))
{
    string ___line = __directoryList.ReadLine();
    while (___line != null)
    {
        if (!String.IsNullOrEmpty(___line)) { __output.Add(___line); }
        ___line = __directoryList.ReadLine();
    }
}
Getting the target file...
FtpWebRequest __request = null;
FtpWebResponse __response = null;
byte[] __fileBuffer = null;
byte[] __outputBuffer = null;

__request = (FtpWebRequest)FtpWebRequest.Create(__requestLocation);
__request.Method = WebRequestMethods.Ftp.DownloadFile;
__response = (FtpWebResponse)__request.GetResponse();
using (MemoryStream __outputStream = new MemoryStream())
{
    using (Stream __responseStream = __response.GetResponseStream())
    {
        __fileBuffer = new byte[BLOCKSIZE];
        int ___readCount = __responseStream.Read(__fileBuffer, 0, BLOCKSIZE);
        while (___readCount > 0)
        {
            __outputStream.Write(__fileBuffer, 0, ___readCount);
            ___readCount = __responseStream.Read(__fileBuffer, 0, BLOCKSIZE);
        }
    }
    __outputStream.Position = 0;
    __outputBuffer = new byte[__outputStream.Length];
    // Truncate the buffer to only the bytes actually read; store into the output buffer.
    Array.Copy(__outputStream.GetBuffer(), __outputBuffer, __outputStream.Length);
}
try { __response.Close(); } catch { }
__request = null;
__response = null;
return __outputBuffer;
Ripped out of some other code I have, so it probably won't compile and run directly.
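For a runnable end-to-end version, here is a minimal sketch tying the listing and download steps together, assuming ftp://example.org/ is your server URL and c:\data is the destination (add credentials via request.Credentials if needed):

var listRequest = (FtpWebRequest)WebRequest.Create("ftp://example.org/");
listRequest.Method = WebRequestMethods.Ftp.ListDirectory;

// Collect the names of all .xml files in the server directory.
var names = new List<string>();
using (var listResponse = (FtpWebResponse)listRequest.GetResponse())
using (var reader = new StreamReader(listResponse.GetResponseStream()))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        if (line.EndsWith(".xml", StringComparison.OrdinalIgnoreCase))
        {
            names.Add(line);
        }
    }
}

// Download each file to the local directory.
foreach (string name in names)
{
    var downloadRequest = (FtpWebRequest)WebRequest.Create("ftp://example.org/" + name);
    downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
    using (var downloadResponse = (FtpWebResponse)downloadRequest.GetResponse())
    using (var remote = downloadResponse.GetResponseStream())
    using (var local = File.Create(Path.Combine(@"c:\data", name)))
    {
        remote.CopyTo(local);
    }
}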
I don't know if FtpWebRequest is a strict requirement. If you can use a third-party component, the following code would accomplish your task:
// create client, connect and log in
Ftp client = new Ftp();
client.Connect("ftp.example.org");
client.Login("username", "password");

// download all files in the current directory on the server which match
// the "*.xml" mask to the 'c:\data' directory
client.GetFiles("*.xml", @"c:\data", FtpBatchTransferOptions.Default);
client.Disconnect();
The code uses Rebex FTP which can be downloaded here.
Disclaimer: I'm involved in the development of this product.
