I have the following code that uploads an image to Azure Blob Storage. I would like to encrypt the image data before uploading it to the blob. I already have a helper class for encrypting and decrypting that I can use by calling AESEncryption.Encrypt("plainText", "key", salt).
I'm just trying to figure out how to integrate my encryption method into the code. Also, I'm guessing that once it's encrypted, instead of calling blob.UploadFromFile() I will be calling blob.UploadFromByteArray().
public override Task ExecutePostProcessingAsync()
{
    try
    {
        // Upload the files to Azure blob storage and remove them from local disk
        foreach (var fileData in this.FileData)
        {
            var filename = BuildFilename(Path.GetExtension(fileData.Headers.ContentDisposition.FileName.Trim('"')));

            // Retrieve reference to a blob
            var blob = _container.GetBlockBlobReference(filename);
            blob.Properties.ContentType = fileData.Headers.ContentType.MediaType;
            blob.UploadFromFile(fileData.LocalFileName, FileMode.Open);
            File.Delete(fileData.LocalFileName);

            Files.Add(new FileDetails
            {
                ContentType = blob.Properties.ContentType,
                Name = blob.Name,
                Size = blob.Properties.Length,
                Location = blob.Uri.AbsoluteUri
            });
        }
    }
    catch (Exception)
    {
        throw; // rethrow without resetting the stack trace ("throw ex;" would reset it)
    }
    return base.ExecutePostProcessingAsync();
}
As I see it, you could do it 3 ways:
1. Encrypt the file beforehand and then upload that encrypted file.
2. As you mentioned, read the file into a byte array, encrypt that byte array, and upload it using the UploadFromByteArray method (sketched just below).
3. Similar to #2, but instead of uploading a byte array, rely on streams and upload using the UploadFromStream method.
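Here's a minimal sketch of option #2, reusing the loop from your code above. Note that the byte[] overload of AESEncryption.Encrypt and the key/salt variables are assumptions on my part - your helper shows a string-based signature, so you may need to add such an overload (or Base64 the bytes first):

// Sketch only: assumes a hypothetical AESEncryption.Encrypt(byte[], string, byte[])
// overload plus key/salt variables in scope - adapt to your real helper.
foreach (var fileData in this.FileData)
{
    var filename = BuildFilename(Path.GetExtension(
        fileData.Headers.ContentDisposition.FileName.Trim('"')));

    var blob = _container.GetBlockBlobReference(filename);
    blob.Properties.ContentType = fileData.Headers.ContentType.MediaType;

    // Read the local file into memory and encrypt it before anything leaves the machine.
    byte[] plainBytes = File.ReadAllBytes(fileData.LocalFileName);
    byte[] cipherBytes = AESEncryption.Encrypt(plainBytes, key, salt);

    // Upload the ciphertext instead of uploading the file directly.
    blob.UploadFromByteArray(cipherBytes, 0, cipherBytes.Length);
    File.Delete(fileData.LocalFileName);
}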
I am using a C# Console Application (.NET Core 3.1) to read a load of image files from Azure Blob Storage and produce thumbnails of those images. The new images are to be saved back to Azure, and the Blob ID stored in our database. How do I find the ID of the items saved? Here is the command:
Azure.Response<BlobContentInfo> blobs = containerClient.UploadBlob(fileName, outStream);
I can't seem to find it in the returned object:
https://learn.microsoft.com/en-us/dotnet/api/azure.storage.blobs.models.blobcontentinfo?view=azure-dotnet
My original pictures were created and saved using PowerApps, where the control does indeed return the Blob ID - see below:
Set(
    gblSentBlob,
    AzureBlobStorage.CreateFile(
        Text(gblAzureFileContainer),
        GUID() & ".jpg",
        camControl.Photo
    )
);
If(
    !IsEmpty(gblSentBlob),
    Notify("Picture saved to Azure storage: " & gblSentBlob.DisplayName);
    UpdateContext({locFileName: gblSentBlob.DisplayName});
    UpdateContext({locAzureStorageID: Text(gblSentBlob.Id)}); // <- *** this is the Blob ID ***
    UpdateContext({locSavedToAzure: true})
);
Here, the AzureBlobStorage.CreateFile function returns an object which contains the ID I am looking for.
How do I get this ID in my Console Application?
A typical Blob ID looks like this:
JTJmc2hpcmVibG9iY29udGFpbmVyJTJmNTk3MzQ4NGYtNGVhNy00NzJkLTkyMzQtYWIwNzM5NWNlOGRiLmpwZw==
I can then retrieve the images for display using the following (in PowerApps)
AzureBlobStorage.GetFileContent(ThisItem.BlobStorageID)
My full code:
var blobClient = containerClient.GetBlobClient(blobName);
using Stream stream = await blobClient.OpenReadAsync();
Image myImage = Image.FromStream(stream);
Image myThumbnail = PictureProcessor.returnThumbnail(myImage);
// now save this image
string guid = Guid.NewGuid().ToString();
string fileName = guid + ".jpg";
//create a memory stream ready for the rescaled image
Stream outStream = new MemoryStream();
myThumbnail.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);
Console.WriteLine(
"Length = {0}, Position = {1}\n",
outStream.Length.ToString(),
outStream.Position.ToString());
outStream.Position = 0;
Azure.Response<BlobContentInfo> blobs = containerClient.UploadBlob(fileName, outStream);
Console.WriteLine("blobs RETURN OBJECT: " + blobs.ToString());
Console.WriteLine("blobs GetRawResponse: " + blobs.GetRawResponse());
Console.ReadKey();
When I decoded this from Base64 and then into UTF-8:
JTJmc2hpcmVibG9iY29udGFpbmVyJTJmNTk3MzQ4NGYtNGVhNy00NzJkLTkyMzQtYWIwNzM5NWNlOGRiLmpwZw==
I got this:
%2fshireblobcontainer%2f5973484f-4ea7-472d-9234-ab07395ce8db.jpg
So your "Blob ID" appears to be the Base64-encoded representation of the UTF-8 (or 7-bit ASCII?) representation of the URL-encoded string value you're passing into AzureBlobStorage.CreateFile's second parameter.
So do this:
String powerAppsBlobId = "JTJmc2hpcmVibG9iY29udGFpbmVyJTJmNTk3MzQ4NGYtNGVhNy00NzJkLTkyMzQtYWIwNzM5NWNlOGRiLmpwZw==";
Byte[] blobIdBytes = Convert.FromBase64String( powerAppsBlobId );
String urlEncodedBlobName = Encoding.UTF8.GetString( blobIdBytes );
String actualBlobName = Uri.UnescapeDataString( urlEncodedBlobName );
Console.WriteLine( actualBlobName );
This program will then print this:
/shireblobcontainer/5973484f-4ea7-472d-9234-ab07395ce8db.jpg
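Going the other way - if you want your Console Application to produce the same ID that PowerApps would - here is a minimal sketch of the reverse, assuming PowerApps keeps using this scheme (containerName and blobName are placeholders taken from the examples above):

using System;
using System.Text;

string containerName = "shireblobcontainer"; // from the decoded sample above
string blobName = fileName;                  // e.g. the "{guid}.jpg" you uploaded

// Note: the decoded sample uses lowercase "%2f" for the slashes and leaves the
// rest of the name unescaped, so this builds the string the same way rather than
// calling Uri.EscapeDataString (which emits uppercase "%2F").
string urlEncoded = "%2f" + containerName + "%2f" + blobName;
string powerAppsStyleBlobId = Convert.ToBase64String(Encoding.UTF8.GetBytes(urlEncoded));

Console.WriteLine(powerAppsStyleBlobId);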
After posting my first answer (with the Base64 decoding) I took a look at the documentation for the Azure Blob connector for PowerApps and I see that the BlobMetadata.Name and/or BlobMetadata.Path values should contain the full blob-name too.
I don't know why you pointed to the documentation for Azure.Storage.Blobs.Models as that's not for use by PowerApps.
So a better idea is to store the BlobMetadata.Path value when you upload the blob from within PowerApps so your Console application can get to it - this is just in case PowerApps changes its "algorithm" for generating those Base64 Blob IDs (e.g. they could start including the blob version date+time, or shared access signatures, for example).
So change your code to this:
Set(
    gblSentBlob,
    AzureBlobStorage.CreateFile(
        Text(gblAzureFileContainer),
        GUID() & ".jpg",
        camControl.Photo
    )
);
If(
    !IsEmpty(gblSentBlob),
    Notify("Picture saved to Azure storage: " & gblSentBlob.DisplayName);
    UpdateContext({locFileName: gblSentBlob.DisplayName});
    UpdateContext({locAzureStorageID: Text(gblSentBlob.Id)});
    UpdateContext({locAzureStoragePath: Text(gblSentBlob.Path)}); // <--- Add this here
    UpdateContext({locSavedToAzure: true})
);
I tried the code below to upload a file to an Azure blob container, but the uploaded file ends up corrupted.
public async void UploadFile(Stream memoryStream, string fileName, string containerName)
{
    try
    {
        memoryStream.Position = 0;
        CloudBlockBlob file = GetBlockBlobContainer(containerName).GetBlockBlobReference(fileName);
        file.Metadata["FileType"] = Path.GetExtension(fileName);
        file.Metadata["Name"] = fileName;
        await file.UploadFromStreamAsync(memoryStream).ConfigureAwait(false);
    }
    catch (Exception)
    {
        throw; // rethrow without resetting the stack trace
    }
}
How can I resolve it? I'm unable to open the Excel file that was uploaded to the blob using the above code - Excel reports the file as corrupted. Here is the calling code:
Stream streamData = ConvertDataSetToByteArray(sourceTable); // sourceTable is the DataTable
streamData.Position = 0;
UploadFile(streamData, "ABCD.xlsx", "sampleBlobContainer"); // calling logic to upload stream to blob
private Stream ConvertDataSetToByteArray(DataTable dataTable)
{
    StringBuilder sb = new StringBuilder();

    IEnumerable<string> columnNames = dataTable.Columns.Cast<DataColumn>()
        .Select(column => column.ColumnName);
    sb.AppendLine(string.Join(",", columnNames));

    foreach (DataRow row in dataTable.Rows)
    {
        IEnumerable<string> fields = row.ItemArray.Select(field => field.ToString());
        sb.AppendLine(string.Join(",", fields));
    }

    var myByteArray = System.Text.Encoding.UTF8.GetBytes(sb.ToString());
    var streamData = new MemoryStream(myByteArray);
    return streamData;
}
Your code above creates a .csv file, not an .xlsx file. You can easily test this out by building a similar comma-separated file yourself and renaming it to .xlsx, to replicate what you do - Excel will complain that the file format and extension don't match.
You have two solutions:
You either need to build an actual .xlsx file - you can do this with the https://github.com/JanKallman/EPPlus package, for example (see the sketch below)
or
You need to save your file as a .csv, because that's what it really is.
The fact that you upload it to Azure blob storage is completely irrelevant here - there's no issue with the upload.
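If you go the EPPlus route, here is a minimal sketch of building a real workbook from your DataTable. This assumes EPPlus 5+, which requires declaring a license context; drop that line on 4.x:

using System.Data;
using System.IO;
using OfficeOpenXml; // EPPlus NuGet package

private Stream ConvertDataTableToXlsx(DataTable dataTable)
{
    // EPPlus 5+ requires a license context; remove this line for EPPlus 4.x.
    ExcelPackage.LicenseContext = LicenseContext.NonCommercial;

    using (var package = new ExcelPackage())
    {
        var sheet = package.Workbook.Worksheets.Add("Sheet1");

        // Writes the column headers plus every row, starting at cell A1.
        sheet.Cells["A1"].LoadFromDataTable(dataTable, true);

        // GetAsByteArray saves and closes the package in one call.
        return new MemoryStream(package.GetAsByteArray());
    }
}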
Since the stream is instantiated outside this method, I assume the file is handled there and added to the stream; however, here you are resetting the position of the stream to 0, thus invalidating the file.
First of all, are you sure the file got corrupted? Save both the MemoryStream contents and the blob to local files and compare them. You could also save the MemoryStream contents to a file and use UploadFromFileAsync.
To check for actual corruption you should calculate the content's MD5 hash in advance and compare it with the blob's hash after upload.
To calculate the stream's MD5 hash use ComputeHash.
var hasher = MD5.Create();
memoryStream.Position = 0;
var originalHash = Convert.ToBase64String(hasher.ComputeHash(memoryStream));
To get the client to calculate and store a blob hash, you need to set the BlobRequestOptions.StoreBlobContentMD5 option while uploading:
memoryStream.Position = 0;
var options = new BlobRequestOptions()
{
    StoreBlobContentMD5 = true
};
await file.UploadFromStreamAsync(memoryStream, null, options, null).ConfigureAwait(false);
To retrieve and check the uploaded hash, use FetchAttributes or FetchAttributesAsync and compare the BlobProperties.ContentMD5 value with the original:
file.FetchAttributes();
var blobHash = file.Properties.ContentMD5;
if (blobHash != originalHash)
{
    // Ouch! Retry perhaps?
}
It seems that your method doesn't have any fatal problems; I suspect something has gone wrong in your stream conversion.
This is my code:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;

namespace ConsoleApp7
{
    class Program
    {
        public static class Util
        {
            public static async Task UploadFile(Stream memoryStream, string fileName, string containerName)
            {
                memoryStream.Position = 0;
                var storageAccount = CloudStorageAccount.Parse("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
                var blockBlob = storageAccount.CreateCloudBlobClient()
                    .GetContainerReference(containerName)
                    .GetBlockBlobReference(fileName);

                // Await the upload - an unawaited async void call can let the
                // process exit before the upload completes.
                await blockBlob.UploadFromStreamAsync(memoryStream);
            }
        }

        static void Main(string[] args)
        {
            // Open the file
            FileStream fileStream = new FileStream("C:\\Users\\bowmanzh\\Desktop\\Book1.xlsx", FileMode.Open);

            // Read the byte[] of the file
            byte[] bytes = new byte[fileStream.Length];
            fileStream.Read(bytes, 0, bytes.Length);
            fileStream.Close();

            // Turn the byte[] into a Stream
            Stream stream = new MemoryStream(bytes);
            Util.UploadFile(stream, "Book2.xlsx", "test").GetAwaiter().GetResult();

            Console.WriteLine("Hello World!");
            Console.ReadLine();
        }
    }
}
I am using the SSH.NET library and have written a simple method for uploading files to a server over SFTP, as below:
using (var client = new Renci.SshNet.SftpClient(host, port, username, password))
{
    client.Connect();
    Console.WriteLine("Connected to {0}", host);

    using (var fileStream = new FileStream(uploadfile, FileMode.Open))
    {
        client.BufferSize = 4 * 1024; // bypass Payload error on large files
        client.UploadFile(fileStream, Path.GetFileName(uploadfile));
    }
}
How can I retrieve the status of the transfer back from the server? I need to know whether the files were transferred successfully.
Can a TRY...CATCH work to retrieve the status back from the server?
Thank you,
Try replacing your UploadFile line with the one below. This provides a callback to the function you are calling; the callback is the lambda in the brackets, with o being a ulong holding the number of bytes written so far.
client.UploadFile(fileStream, Path.GetFileName(uploadfile), (o) =>
{
Console.WriteLine(o);
});
EDIT:
The above is equivalent to this:
//I might be called multiple times during the upload process.
public void OnStatusUpdate(ulong bytesWritten)
{
Console.WriteLine(bytesWritten);
}
...
//later
client.UploadFile(fileStream, Path.GetFileName(uploadfile), OnStatusUpdate);
They are calling YOUR function, and your function cannot be called without a value being passed to it.
There are two options that could work:
Use the Action<ulong> uploadCallback parameter of UploadFile(Stream input, string path, Action<ulong> uploadCallback). This can be used to check the number of bytes that have been uploaded, which you can compare against the size of the file you are sending.
Use SftpFileSytemInformation GetStatus(string path) on the path of the file you have uploaded, to check whether the file exists and, again, its size on disk. A sketch combining these checks follows.
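For example, here is a minimal sketch combining both ideas: the upload callback reports progress, and afterwards the remote file's size is compared with the local one. I'm using SSH.NET's Exists and GetAttributes calls here rather than GetStatus; host, port, username, password and uploadfile are the variables from your question:

using (var client = new Renci.SshNet.SftpClient(host, port, username, password))
{
    client.Connect();
    var localInfo = new FileInfo(uploadfile);
    string remoteName = Path.GetFileName(uploadfile);

    using (var fileStream = new FileStream(uploadfile, FileMode.Open))
    {
        // The callback receives the running count of bytes uploaded so far.
        client.UploadFile(fileStream, remoteName,
            bytesSent => Console.WriteLine("Uploaded {0} of {1} bytes",
                                           bytesSent, localInfo.Length));
    }

    // Verify: the remote file should exist and match the local file's size.
    if (client.Exists(remoteName) &&
        client.GetAttributes(remoteName).Size == localInfo.Length)
    {
        Console.WriteLine("Upload verified.");
    }
}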
I'm testing how to upload to AWS using SDK with a sample .txt file from a web app. The file uploads to the Bucket, but the downloaded file from the bucket is just an empty Notepad document without the text from the original uploaded file. I'm new to working with streams, so I'm not sure what could be wrong here. Does anyone see why the data wouldn't be sent in the transfer request? Thanks in advance!
using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest1))
{
    // Save file to bucket
    using (FileStream txtFileStream = (FileStream)UploadedHttpFileBase.InputStream)
    {
        try
        {
            TransferUtility fileTransferUtility = new TransferUtility();
            fileTransferUtility.Upload(txtFileStream, bucketLocation,
                UploadedHttpFileBase.FileName);
        }
        catch (Exception e)
        {
            e.Message.ToString();
        }
    }
}
EDIT:
Both TransferUtility and PutObjectRequest/PutObjectResponse/AmazonS3Client.PutObject saved a blank text file. Then, after having some trouble instantiating a new FileStream, a MemoryStream used after resetting the starting position to zero still saved a blank text file. Any ideas?
New Code:
using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest1))
{
    Stream saveableStream = new MemoryStream();
    using (Stream source = (Stream)UploadedHttpFileBase.InputStream)
    {
        source.Position = 0;
        source.CopyTo(saveableStream);
    }

    // Save file to bucket
    try
    {
        PutObjectRequest request = new PutObjectRequest
        {
            BucketName = bucketLocation,
            Key = UploadedHttpFileBase.FileName,
            InputStream = saveableStream
        };
        PutObjectResponse response = client.PutObject(request);
    }
    catch (Exception e)
    {
        e.Message.ToString();
    }
}
Most probably TransferUtility doesn't work well with temporary upload files. Try copying your input stream somewhere else first (e.g. into another, not-so-temporary file, or even a MemoryStream if you're sure it won't give you an OutOfMemoryException at some point). Another option is to get rid of TransferUtility and use the low-level AmazonS3Client.PutObject, which gives you finer control over the Stream's lifetime (and don't forget that you'll need to implement some retrying, as the S3 API is prone to returning random temporary errors).
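Here is a minimal sketch of that suggestion, reusing the names from your question (bucketLocation, UploadedHttpFileBase). The key detail is rewinding the MemoryStream before handing it to PutObject:

using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USWest1))
using (var buffer = new MemoryStream())
{
    // Copy the upload into a stream we fully control.
    UploadedHttpFileBase.InputStream.CopyTo(buffer);
    buffer.Position = 0; // rewind, or S3 stores zero bytes

    var request = new PutObjectRequest
    {
        BucketName = bucketLocation,
        Key = UploadedHttpFileBase.FileName,
        InputStream = buffer
    };
    PutObjectResponse response = client.PutObject(request);
    Console.WriteLine("S3 returned HTTP {0}", response.HttpStatusCode);
}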
The answer had to do with nesting, which is still a little beyond my understanding, and not with the code posted here being inherently wrong. This code ran after an initial StreamReader had checked the first line of the text file to decide whether or not to save the file. After moving the upload code out of the while loop doing the ReadLines, the upload worked. Everything works as it's supposed to now that the validation is reorganized so that there's no need for the nested Stream or MemoryStream.
I am trying to create a controller that will allow me to save images into my database. So far I have this bit of code:
/// <summary>
/// Handles an upload
/// </summary>
/// <returns></returns>
[HttpPost]
[Route("")]
public async Task<IHttpActionResult> Upload()
{
    // If the request is not of multipart content, then return a bad request
    if (!Request.Content.IsMimeMultipartContent())
        return BadRequest("Your form must be of type multipartcontent.");

    // Get our provider
    var provider = new MultipartFormDataStreamProvider(ConfigurationManager.AppSettings["UploadFolder"]);

    // Upload our file
    await Request.Content.ReadAsMultipartAsync(provider);

    // Get our file
    var file = provider.Contents.First();
    var bytes = await file.ReadAsByteArrayAsync();

    // Using a MemoryStream
    using (var stream = new MemoryStream(bytes))
    {
        stream.Seek(0, SeekOrigin.Begin);

        // Create the data
        var data = "data:image/gif;base64," + Convert.ToBase64String(stream.ToArray());

        // Return the data
        return Ok(data);
    }
}
But it isn't working.
When I get into the using block I get an error message:
"Error while copying content to a stream."
"Cannot access a closed file."
Does anyone know why?
The reason this is happening is that MultipartFormDataStreamProvider closes and disposes the uploaded files' streams after it has written the uploaded data to the file location you provided when you passed it into the constructor: ConfigurationManager.AppSettings["UploadFolder"].
To access the files that have been uploaded, you need to read the file data from disk at the uploaded file location. So in your example the code needs to be this:
// Read the first file from the file data collection:
var fileupload = provider.FileData.First();

// Get the temp name and path that MultipartFormDataStreamProvider used to save the file:
var temppath = fileupload.LocalFileName;

// Now read the file's data from the temp location.
var bytes = File.ReadAllBytes(temppath);
Additionally, if you're handling very small files you can instead use MultipartMemoryStreamProvider. This stores the file data in memory and should work as expected. Be warned, though: if you are handling large files (25 MB+), it's wise to stream to disk first, otherwise you might get out-of-memory exceptions as .NET tries to hold the whole file in memory.
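A minimal sketch of that in-memory variant of your action (same route and checks as your original), assuming the uploads stay small:

[HttpPost]
[Route("")]
public async Task<IHttpActionResult> Upload()
{
    if (!Request.Content.IsMimeMultipartContent())
        return BadRequest("Your form must be of type multipartcontent.");

    // Parts are buffered in memory instead of being written to disk,
    // so the content streams are never closed underneath you.
    var provider = new MultipartMemoryStreamProvider();
    await Request.Content.ReadAsMultipartAsync(provider);

    // Each uploaded part is available directly as HttpContent.
    var file = provider.Contents.First();
    var bytes = await file.ReadAsByteArrayAsync();

    var data = "data:image/gif;base64," + Convert.ToBase64String(bytes);
    return Ok(data);
}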