Uploading file to azure file share from azure function app - c#

I have an Azure Function app and an Azure file share. I want to upload a file from the internet, given its URL, to my Azure file share storage.
private ShareFileClient GetShareFile(string filePath)
{
string connectionString = "My connection string";
return new ShareFileClient(connectionString, "temp", filePath);
}
ShareFileClient destFile = GetShareFile(filePath);
// Start the copy operation
await destFile.StartCopyAsync(new Uri(DownloadUrl));
But this code is not working as expected. It throws the following error:
Unauthorized
RequestId:000db1ff-801a-000a-0602-b24449000000
Time:2020-11-03T16:58:36.5697281Z
Status: 401 (Unauthorized)
ErrorCode: CannotVerifyCopySource
Any help will be highly appreciated.

Here is some simple code to write and read a file:
string con_str = "DefaultEndpointsProtocol=https;AccountName=0730bowmanwindow;AccountKey=xxxxxx;EndpointSuffix=core.windows.net";
string sharename = "test";
string filename = "test.txt";
string directoryname = "testdirectory";
ShareServiceClient shareserviceclient = new ShareServiceClient(con_str);
ShareClient shareclient = shareserviceclient.GetShareClient(sharename);
ShareDirectoryClient sharedirectoryclient = shareclient.GetDirectoryClient(directoryname);
//write data.
ShareFileClient sharefileclient_in = sharedirectoryclient.CreateFile(filename,1000);
string filecontent_in = "This is the content of the file.";
byte[] byteArray = Encoding.UTF8.GetBytes(filecontent_in);
MemoryStream stream1 = new MemoryStream(byteArray);
stream1.Position = 0;
sharefileclient_in.Upload(stream1);
//read data.
ShareFileClient sharefileclient_out = sharedirectoryclient.GetFileClient(filename);
Stream stream2 = sharefileclient_out.Download().Value.Content;
StreamReader reader = new StreamReader(stream2);
string filecontent_out = reader.ReadToEnd();
The above code works fine on my side; you just need to convert the file to a stream first.
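For the original question, CannotVerifyCopySource generally means the storage service itself cannot read the copy source URL: the source must be publicly accessible or carry a SAS token. An alternative is to download the file inside the function and upload the stream yourself. A minimal sketch, assuming the URL is publicly readable; the connection string, share name, and file name are placeholders:

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Azure.Storage.Files.Shares;

class UploadFromUrlSketch
{
    static async Task Main()
    {
        string connectionString = "<your connection string>";
        string downloadUrl = "<source file url>";

        using var http = new HttpClient();
        // Buffer the download so the length is known when creating the share file.
        using var ms = new MemoryStream(await http.GetByteArrayAsync(downloadUrl));

        var file = new ShareFileClient(connectionString, "temp", "myfile.bin");
        await file.CreateAsync(ms.Length); // a share file must be created with its size
        ms.Position = 0;
        await file.UploadAsync(ms);
    }
}
```

Buffering into a MemoryStream keeps the sketch simple; for very large files you would stream in chunks instead.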

Related

C# - Trying to grab and save a pdf into my database

In my C# application, I am trying to make it able to pull a PDF and save it, so that end users can press a button and pull up that PDF while they are in the application. But when I copy the content to the FileStream, it creates the PDF, but it is just blank, with nothing from the original. What am I doing wrong?
The PDFs could also have pictures on them, and I don't think the way I'm doing it would allow those to be carried over.
Microsoft.Win32.OpenFileDialog openFileDialog = new Microsoft.Win32.OpenFileDialog();
bool? response = openFileDialog.ShowDialog();
var fileContent = string.Empty;
var filestream = openFileDialog.OpenFile();
using (StreamReader reader = new StreamReader(filestream))
{
fileContent = reader.ReadToEnd();
}
// make folder path
string FolderPath = "ProjectPDFs\\";
string RootPath = "X:\\Vents-US Inventory";
DirectoryInfo FolderDir = new DirectoryInfo(Path.Combine(RootPath, FolderPath));
Directory.CreateDirectory(FolderDir.ToString());
string filePath = "";
string FileName = openFileDialog.SafeFileName;
if (fileContent.Length > 0)
{
filePath = Path.Combine(FolderDir.ToString(), FileName);
using (Stream fileStream = new FileStream(filePath, FileMode.Create, FileAccess.Write))
{
byte[] bytestream = Encoding.UTF8.GetBytes(fileContent);
Stream stream = new MemoryStream(bytestream);
stream.CopyTo(fileStream);
}
}
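The likely cause is that reading the PDF through StreamReader and re-encoding it with Encoding.UTF8 destroys the binary data; copying the raw bytes preserves it, pictures included. A minimal sketch of that fix, with placeholder paths:

```csharp
using System.IO;

class PdfCopySketch
{
    // Copy the PDF as raw bytes instead of decoding it as text,
    // which is what corrupts binary content such as images.
    public static void CopyBinary(string sourcePath, string destPath)
    {
        byte[] bytes = File.ReadAllBytes(sourcePath);
        File.WriteAllBytes(destPath, bytes);
    }
}
```

File.Copy would work equally well; the point is that no text decoding happens in between.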

Problem with azure blob storage encoding when uploading a file

I'm uploading files to Azure Blob Storage with the .Net package specifying the encoding iso-8859-1. The stream seems ok in Memory but when I upload to the blob storage it ends with corrupted characters that seems that could not be converted to that encoding. It would seem as if the file gets storaged in a corrupted state and when I download it again and check it the characters get all messed up. Here is the code I'm using.
public static async Task<bool> UploadFileFromStream(this CloudStorageAccount account, string containerName, string destBlobPath, string fileName, Stream stream, Encoding encoding)
{
if (account is null) throw new ArgumentNullException(nameof(account));
if (string.IsNullOrEmpty(containerName)) throw new ArgumentException("message", nameof(containerName));
if (string.IsNullOrEmpty(destBlobPath)) throw new ArgumentException("message", nameof(destBlobPath));
if (stream is null) throw new ArgumentNullException(nameof(stream));
stream.Position = 0;
CloudBlockBlob blob = GetBlob(account, containerName, $"{destBlobPath}/{fileName}");
blob.Properties.ContentType = FileUtils.GetFileContentType(fileName);
using var reader = new StreamReader(stream, encoding);
var ct = await reader.ReadToEndAsync();
await blob.UploadTextAsync(ct, encoding ?? Encoding.UTF8, AccessCondition.GenerateEmptyCondition(), new BlobRequestOptions(), new OperationContext());
return true;
}
This is the file just before uploading it
<provinciaDatosInmueble>Sevilla</provinciaDatosInmueble>
<inePoblacionDatosInmueble>969</inePoblacionDatosInmueble>
<poblacionDatosInmueble>Valencina de la Concepción</poblacionDatosInmueble>
and this is the file after the upload
<provinciaDatosInmueble>Sevilla</provinciaDatosInmueble>
<inePoblacionDatosInmueble>969</inePoblacionDatosInmueble>
<poblacionDatosInmueble>Valencina de la Concepci�n</poblacionDatosInmueble>
The encoding I pass in the parameter is ISO-8859-1. Does anybody know why Blob Storage seems to ignore the encoding I'm specifying? Thanks in advance!
We were able to achieve this using Azure.Storage.Blobs instead of WindowsAzure.Storage, which is a legacy Storage SDK. Below is the code that worked for us.
class Program
{
static async Task Main(string[] args)
{
string sourceContainerName = "<Source_Container_Name>";
string destBlobPath = "<Destination_Path>";
string fileName = "<Source_File_name>";
MemoryStream stream = new MemoryStream();
BlobServiceClient blobServiceClient = new BlobServiceClient("<Your_Connection_String>");
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient(sourceContainerName);
BlobClient blobClientSource = containerClient.GetBlobClient(fileName);
BlobClient blobClientDestination = containerClient.GetBlobClient(destBlobPath);
// Reading From Blob
var line =" ";
if (await blobClientSource.ExistsAsync())
{
var response = await blobClientSource.DownloadAsync();
using (StreamReader streamReader = new StreamReader(response.Value.Content))
{
line = await streamReader.ReadToEndAsync();
}
}
// Writing To Blob
var content = Encoding.UTF8.GetBytes(line);
using (var ms = new MemoryStream(content))
blobClientDestination.Upload(ms);
}
}
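Note that the code above re-encodes the text as UTF-8. If the goal is to keep the file in ISO-8859-1, a sketch of an alternative is to encode the bytes with that encoding yourself and upload them raw, so the SDK never re-encodes the string; the BlobClient is assumed to be already constructed:

```csharp
using System.IO;
using System.Text;
using Azure.Storage.Blobs;

class Iso88591UploadSketch
{
    public static void Upload(BlobClient blob, string text)
    {
        // Encode explicitly as ISO-8859-1 and upload the raw bytes;
        // the blob then stores exactly these bytes, with no re-encoding.
        Encoding iso = Encoding.GetEncoding("iso-8859-1");
        using var ms = new MemoryStream(iso.GetBytes(text));
        blob.Upload(ms, overwrite: true);
    }
}
```

Whatever encoding is chosen for upload must also be used when reading the file back, or characters like "ó" will again appear corrupted.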

C# big filedownload resumable from azure blob storage

I really need some rubber ducking...
I have a file that is at least 2.3 GiB.
I am currently downloading this file to a temp directory.
But when the download is interrupted (connection error, or windows crash) I want the user to resume download where it stopped. And not download the whole file all over again.
The code works in the sense that it continues downloading the file, but I see that the download stream starts from the beginning again. That means the file ends up being (2.3 GiB + the number of bytes downloaded previously), which of course corrupts my file.
I used the following snippet to resume downloading, hoping the stream would resume where it stopped:
localStream.Seek(positionInFile, SeekOrigin.Begin);
Any ideas on what I am missing here?
Here is my code.
BlobContainerClient containerClient = new BlobContainerClient(connectionString, container);
var blobClient = containerClient.GetBlobClient(downloadFile);
fullOutputPath = createOutputFilePath(updateFileUri.OriginalString, outputFolder);
downloadFileInfo = new FileInfo(fullOutputPath);
var response = blobClient.Download(cts.Token);
contentLength = response.Value.ContentLength;
if (contentLength.HasValue && contentLength.Value > 0)
{
if (_fileSystemService.FileExists(fullOutputPath))
{
from = downloadFileInfo.Length;
to = contentLength;
if (from == to)
{
//file is already downloaded
//skip it
progress.Report(1);
return;
}
fileMode = FileMode.Open;
positionInFile = downloadFileInfo.Length;
}
using FileStream localStream = _fileSystemService.CreateFile(fullOutputPath, fileMode, FileAccess.Write);
localStream.Seek(positionInFile, SeekOrigin.Begin);
bytesDownloaded = positionInFile;
double dprog = ((double)bytesDownloaded / (double)(contentLength.Value + positionInFile));
do
{
bytesRead = await response.Value.Content.ReadAsync(buffer, 0, buffer.Length, cts.Token);
await localStream.WriteAsync(buffer, 0, bytesRead, cts.Token);
await localStream.FlushAsync();
bytesDownloaded += bytesRead;
dprog = ((double)bytesDownloaded / (double)(contentLength.Value + positionInFile));
progress.Report(dprog);
} while (bytesRead > 0);
}
I did some tests for you; in my case, I used a .txt file to demo your requirement.
In that .txt file I placed an end mark, and I created a local file that ends with the same end mark, to emulate a download that was interrupted and will continue from storage.
This is my code for a quick demo:
static void Main(string[] args)
{
string containerName = "container name";
string blobName = ".txt file name";
string storageConnStr = "storage account conn str";
string localFilePath = @"local file path";
var localFileStream = new FileStream(localFilePath, FileMode.Append);
var localFileLength = new FileInfo(localFilePath).Length;
localFileStream.Seek(localFileLength, SeekOrigin.Begin);
var blobServiceClient = new BlobServiceClient(storageConnStr);
var blobClient = blobServiceClient.GetBlobContainerClient(containerName).GetBlobClient(blobName);
var stream = blobClient.Download(new Azure.HttpRange(localFileLength)).Value.Content;
var contentString = new StreamReader(stream).ReadToEnd();
Console.WriteLine(contentString);
localFileStream.Write(Encoding.ASCII.GetBytes(contentString));
localFileStream.Flush();
}
Result: only the content after the end mark was downloaded, and it was appended to the local .txt file.
Please let me know if you have any more questions.
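The key point in the demo is passing an HttpRange so the service only returns the missing bytes. For a large binary file like the 2.3 GiB one in the question, the same idea can be sketched without any text decoding (StreamReader/ASCII would corrupt binary data); the BlobClient and path are assumed to be set up as in the question:

```csharp
using System.IO;
using Azure;
using Azure.Storage.Blobs;

class ResumeDownloadSketch
{
    public static void Resume(BlobClient blob, string localFilePath)
    {
        long already = File.Exists(localFilePath)
            ? new FileInfo(localFilePath).Length
            : 0;
        using FileStream local = new FileStream(localFilePath, FileMode.Append);
        // Ask the service for bytes [already, end) only, then append raw bytes.
        using Stream remote = blob.Download(new HttpRange(already)).Value.Content;
        remote.CopyTo(local);
    }
}
```

Without the HttpRange, Download always starts at byte 0, which is why the original code appended the whole blob again.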

Certain big '.xlsx' extension files failed to open after downloaded via SftpClient

I am trying to download file from a remote linux server to my local computer using SftpClient.
Here is my code to download the file
public MemoryStream DownloadFile2(string path)
{
var connectionInfo = _taskService.GetBioinformaticsServerConnection();
MemoryStream fileStream = new MemoryStream();
using (SftpClient client = new SftpClient(connectionInfo))
{
client.ConnectionInfo.Timeout = TimeSpan.FromSeconds(200);
client.Connect();
client.DownloadFile(path, fileStream);
fileStream.Seek(0, SeekOrigin.Begin);
var response = new MemoryStream(fileStream.GetBuffer());
return fileStream;
}
}
And here is the controller that called above method.
public FileResult DownloadFile(string fullPath, string fileName)
{
if (!string.IsNullOrEmpty(fileName))
{
fullPath = string.Concat(fullPath, "/", fileName);
}
var ms = _reportAPI.DownloadFile2(fullPath);
var ext = Path.GetExtension(fullPath);
if (ext == ".xlsx")
{
return File(ms, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", fileName);
}
return File(ms, "application/octet-stream", fileName);
}
I have managed to do it for most of the files; however, for certain large '.xlsx' files, when I try to open them, I receive the error below.
On IIS Express I can still open the file after clicking the 'Yes' button, but on normal IIS it fails to open even after clicking 'Yes'.
For other types of files, or smaller Excel files, it works as expected.
Any idea how I can modify my code to solve this issue?
I was able to resolve this by modifying my code as below
public MemoryStream DownloadFile2(string path)
{
var connectionInfo = _taskService.GetBioinformaticsServerConnection();
MemoryStream fileStream = new MemoryStream();
byte[] fileBytes = null;
using (SftpClient client = new SftpClient(connectionInfo))
{
client.ConnectionInfo.Timeout = TimeSpan.FromSeconds(200);
client.Connect();
client.DownloadFile(path, fileStream);
fileBytes = fileStream.ToArray();
var response = new MemoryStream(fileBytes);
return response;
}
}
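The likely reason this fix works: MemoryStream.GetBuffer() returns the stream's whole internal buffer, including unused allocated capacity (trailing zero bytes), while ToArray() copies only the bytes actually written. The extra padding bytes are what break strict formats like .xlsx. A small demonstration:

```csharp
using System.IO;

class BufferVsToArray
{
    public static (int bufferLength, int arrayLength) Compare()
    {
        var ms = new MemoryStream();
        ms.Write(new byte[] { 1, 2, 3 }, 0, 3); // write 3 bytes
        // GetBuffer() length is the allocated capacity (at least 3, typically larger);
        // ToArray() length is exactly the 3 bytes written.
        return (ms.GetBuffer().Length, ms.ToArray().Length);
    }
}
```

The original code also returned `fileStream` instead of the copied `response` stream; returning the ToArray-backed stream avoids both problems.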

c# files downloaded with httpwebrequest and cookies get corrupted

I am trying to make a program which is able to download files by URI (URL) using HttpWebRequest and cookies (for credential information, to keep login status).
I can download files with the following code, but the files get corrupted after being downloaded.
When I download an xlsx file (from the web page) into a text file on my local drive, I see some of the numbers and words from the original file in the corrupted file, so I assume I have reached the right file.
However, when I download the xlsx file (from the web page) to an xlsx file on my local drive, it fails to open, saying:
excel cannot open the file 'filename.xlsx' because the file format or
file extension is not valid. Verify that the file has not been
corrupted and that the file extension matches the format of the file.
Is there any way I can keep the original file content fully intact after downloading?
I attach a part of the result content as well.
private void btsDownload_Click(object sender, EventArgs e)
{
try
{
string filepath1 = @"PathAndNameofFile.txt";
string sTmpCookieString = GetGlobalCookies(webBrowser1.Url.AbsoluteUri);
HttpWebRequest fstRequest = (HttpWebRequest)WebRequest.Create(sLinkDwPage);
fstRequest.Method = "GET";
fstRequest.CookieContainer = new System.Net.CookieContainer();
fstRequest.CookieContainer.SetCookies(webBrowser1.Document.Url, sTmpCookieString);
HttpWebResponse fstResponse = (HttpWebResponse)fstRequest.GetResponse();
StreamReader sr = new StreamReader(fstResponse.GetResponseStream());
string sPageData = sr.ReadToEnd();
sr.Close();
string sViewState = ExtractInputHidden(sPageData, "__VIEWSTATE");
string sEventValidation = this.ExtractInputHidden(sPageData, "__EVENTVALIDATION");
string sUrl = ssItemLinkDwPage;
HttpWebRequest hwrRequest = (HttpWebRequest)WebRequest.Create(sUrl);
hwrRequest.Method = "POST";
string sPostData = "__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=" + sViewState + "&__EVENTVALIDATION=" + sEventValidation + "&Name=test" + "&Button1=Button";
ASCIIEncoding encoding = new ASCIIEncoding();
byte[] bByteArray = encoding.GetBytes(sPostData);
hwrRequest.ContentType = "application/x-www-form-urlencoded";
Uri convertedURI = new Uri(ssDwPage);
hwrRequest.CookieContainer = new System.Net.CookieContainer();
hwrRequest.CookieContainer.SetCookies(convertedURI, sTmpCookieString);
hwrRequest.ContentLength = bByteArray.Length;
Stream sDataStream = hwrRequest.GetRequestStream();
sDataStream.Write(bByteArray, 0, bByteArray.Length);
sDataStream.Close();
using (WebResponse response = hwrRequest.GetResponse())
{
using (sDataStream = response.GetResponseStream())
{
StreamReader reader = new StreamReader(sDataStream);
{
string sResponseFromServer = reader.ReadToEnd();
FileStream fs = File.Open(filepath1, FileMode.OpenOrCreate, FileAccess.Write);
Byte[] info = encoding.GetBytes(sResponseFromServer);
fs.Write(info, 0, info.Length);
fs.Close();
reader.Close();
sDataStream.Close();
response.Close();
}
}
}
}
catch
{
MessageBox.Show("Error");
}
}
StreamReader is for dealing with text data. Using it corrupts your binary data (the Excel file).
Write sDataStream directly to the file instead. For example:
sDataStream.CopyTo(fs);
PS: I prepared a test case (using similar logic) to show how your code doesn't work
var binaryData = new byte[] { 128,255 };
var sr = new StreamReader(new MemoryStream(binaryData));
var str3 = sr.ReadToEnd();
var newData = new ASCIIEncoding().GetBytes(str3); //<-- 63,63
Just compare binaryData with newData
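Applied to the question's download step, the fix can be sketched as streaming the response straight into the file with CopyTo, with no text decoding in between. The URL and path are placeholders, and the cookie/POST setup from the question is omitted for brevity:

```csharp
using System.IO;
using System.Net;

class BinaryDownloadSketch
{
    public static void Download(string url, string filePath)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (Stream body = response.GetResponseStream())
        using (FileStream fs = File.Open(filePath, FileMode.Create, FileAccess.Write))
        {
            // Copy raw bytes; binary formats like .xlsx stay intact.
            body.CopyTo(fs);
        }
    }
}
```

The round-trip through StreamReader.ReadToEnd and ASCIIEncoding.GetBytes in the original code maps every non-ASCII byte to '?' (63), which is exactly what the test case above demonstrates.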
