Uploading string as text file to SkyDrive? - c#

I'm trying to use C# with the Live Connect API to upload a blank (or one that says "test") text file to SkyDrive. The code I have so far:
LiveConnectClient client = await LiveSignin();
string folderID = await getFolder(client);
client.BackgroundUploadAsync(folderID, "pins.txt", "", OverwriteOption.Rename);
where LiveSignin() is a function that handles the sign-in and returns a LiveConnectClient, and getFolder(LiveConnectClient client) is a function that gets the ID of the folder I'm trying to upload to.
That code throws an error about the blank string (the third parameter on the last line) having to be a "Windows.Storage.Streams.IInputStream", but I can't find any documentation on how to convert a string to an IInputStream, or, for that matter, much documentation on IInputStream at all.
With earlier versions of the Windows Runtime/Live Connect (on another project) I had used:
byte[] byteArray = System.Text.Encoding.Unicode.GetBytes(Doc);
MemoryStream stream = new MemoryStream(byteArray);
App.client.UploadCompleted += client_UploadCompleted;
App.client.UploadAsync(roamingSettings.Values["folderID"].ToString(), docTitle.Text + ".txt", stream);
but that throws a lot of errors now (most of them because UploadAsync has been replaced with BackgroundUploadAsync).
So, is there a way to convert a string to an IInputStream, or do I not even need to use an IInputStream? If my method just doesn't work, how would one upload a blank text file to SkyDrive from a C# Metro app? (developing in Visual Studio 2012 Express on the evaluation of Windows 8 Enterprise, if that makes much of a difference)
EDIT: I finally found "Stream.AsInputStream", but now I'm getting the same error as this
An unhandled exception of type 'System.AccessViolationException'
occurred in Windows.Foundation.winmd
Additional information: Attempted to read or write protected memory.
This is often an indication that other memory is corrupt
The code is now:
LiveConnectClient client = await LiveSignin();
string folderID = await getFolder(client);
Stream OrigStream = new System.IO.MemoryStream(System.Text.UTF8Encoding.UTF8.GetBytes("test"));
LiveOperationResult result = await client.BackgroundUploadAsync(folderID, "pins.txt", OrigStream.AsInputStream(), OverwriteOption.Rename);

Hi,
I had the same problem today and, as far as I can see, the only solution is to write your text into a local file first and then upload that file.
My solution looks like this:
var tmpFile = await ApplicationData.Current.LocalFolder.CreateFileAsync(
    "tmp.txt", CreationCollisionOption.ReplaceExisting);
using (var writer = new StreamWriter(await tmpFile.OpenStreamForWriteAsync()))
{
    await writer.WriteAsync("File content");
}
var operationResult = await client.BackgroundUploadAsync(
    folderId, tmpFile.Name, tmpFile, OverwriteOption.Overwrite);
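If you don't want the temporary file hanging around afterwards, an optional follow-up (not part of the original answer, just a sketch reusing the tmpFile variable from above) is to delete it once the upload has finished:
// Optional cleanup: remove the staging file after the upload completes.
await tmpFile.DeleteAsync();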

Related

Slow SelectPDF conversion after publish

I want to convert HTML code to PDF, so I use the SelectPdf library. My code is:
var converter = new HtmlToPdf();
var today = DateTime.UtcNow;
var fileName = $"test - {today}";
var doc = converter.ConvertHtmlString(html);
using var ms = new MemoryStream();
ms.Position = 0;
doc.Save(ms);
var res = ms.ToArray();
doc.Close();
return File(res, "application/pdf", fileName);
I tested using localhost and everything works well; the conversion is always fast (no more than 5 seconds).
The problem starts when I publish to the server: after the method executes, it sometimes (not always) returns an error 500
Failed to load resource: the server responded with a status of 500 ()
Message: "Conversion error: Navigation timeout."
Is there a way to always get a fast result? I know I can extend the page load time with:
converter.Options.MaxPageLoadTime = 120;
But I want the conversion to be fast; two minutes for a simple HTML to PDF conversion is too much.
If it works locally and you are sometimes getting a timeout on the server, it is likely that your HTML contains a file reference (e.g. JavaScript, CSS or an image) that is not available to the server at that moment.
Make sure the external references in your HTML are always accessible to your server.
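One way to rule external references out (just a sketch; the local logo.png path and the URL being replaced are made-up examples, not something from the question) is to inline such resources as data URIs before handing the HTML to the converter, so the server never has to fetch them:
// Embed an image directly in the HTML as a base64 data URI so that
// ConvertHtmlString does not need network access to render it.
var imageBytes = System.IO.File.ReadAllBytes(@"wwwroot\images\logo.png"); // assumed local path
var dataUri = "data:image/png;base64," + Convert.ToBase64String(imageBytes);
html = html.Replace("https://example.com/images/logo.png", dataUri); // assumed external URL
var doc = converter.ConvertHtmlString(html);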

Getting big data through SignalR - Blazor

I have a component library that uses JS code to generate an image as a base64 string, and the image needs to be transferred to C#. The image size is larger than MaximumReceiveMessageSize.
Can I get the value of the MaximumReceiveMessageSize property in C#? I need a way to correctly split the picture into chunks, or some other way to transfer it.
My component can be used in a Wasm or Server application. I can't change the value of the MaximumReceiveMessageSize property.
Thanks
Using a stream, as described in Stream from JavaScript to .NET in the Microsoft docs, solved my problem.
From the docs:
In JavaScript:
function streamToDotNet() {
    return new Uint8Array(10000000);
}
In C# code:
var dataReference = await JS.InvokeAsync<IJSStreamReference>("streamToDotNet");
using var dataReferenceStream = await dataReference.OpenReadStreamAsync(maxAllowedSize: 10_000_000);
var outputPath = Path.Combine(Path.GetTempPath(), "file.txt");
using var outputFileStream = File.OpenWrite(outputPath);
await dataReferenceStream.CopyToAsync(outputFileStream);
In the preceding example: JS is an injected IJSRuntime instance. The dataReferenceStream is written to disk (file.txt) at the current user's temporary folder path (GetTempPath).
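Applied to the image scenario in the question, the C# side could look roughly like this (a sketch: generateImageBytes is a made-up name for a JS function that builds the image and returns it as a Uint8Array, and the 50 MB limit is an assumption). The point is that the streamed bytes end up in memory rather than in a temporary file:
// Ask JS for the image data; the function name is hypothetical and would wrap
// whatever currently produces the base64 string, returning a Uint8Array instead.
var imageReference = await JS.InvokeAsync<IJSStreamReference>("generateImageBytes");
// Stream the data into memory; maxAllowedSize is an assumed upper bound for the image,
// independent of the hub's MaximumReceiveMessageSize.
using var imageStream = await imageReference.OpenReadStreamAsync(maxAllowedSize: 50_000_000);
using var buffer = new MemoryStream();
await imageStream.CopyToAsync(buffer);
byte[] imageBytes = buffer.ToArray();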

Change bitrate of MP3 using .Net Core Azure Functions

I am trying to create an Azure Function using NAudio / NLayer.NAudioSupport, in which I can pass in a URL (such as https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_1MG.mp3), read the bytes, and return the mp3 file converted to 128Kbps. However, I'm currently getting an exception stating:
System.Private.CoreLib: Exception while executing function : ReduceMp3Bitrate. NAudio.Lame: unsupported encoding format MpegLayer3 (Parameter format).
Here's my current code (the exception occurs on line 5):
const string linkUrl = @"https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_1MG.mp3";
var audioFile = new HttpClient().GetByteArrayAsync(linkUrl);
await using var ms = new MemoryStream(await audioFile);
var audioReader = new Mp3FileReader(ms);
await using var audioWriter = new LameMP3FileWriter(@"C:\temp\test.mp3", audioReader.Mp3WaveFormat, LAMEPreset.ABR_128);
await audioReader.CopyToAsync(audioWriter);
I think the issue is with audioReader.Mp3WaveFormat, but I'm not sure why this would be the issue, as it's returning MpegLayer3.
I've also tried running this on a .Net Framework 4.7 console application running on Windows (taking Azure Functions out the equation) and it still doesn't work.
Change the audioReader.Mp3WaveFormat to audioReader.WaveFormat (in line 5).
Explanation: As mentioned in https://github.com/naudio/NAudio/blob/master/NAudio/Wave/WaveStreams/Mp3FileReader.cs#L28 Mp3WaveFormat is NOT the output format of the Mp3FileReader stream. That is present in WaveFormat.
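For reference, here is the question's snippet with just that one change applied (URL and output path as in the question, everything else unchanged):
const string linkUrl = @"https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_1MG.mp3";
var audioFile = new HttpClient().GetByteArrayAsync(linkUrl);
await using var ms = new MemoryStream(await audioFile);
var audioReader = new Mp3FileReader(ms);
// WaveFormat (the decoded output format) instead of Mp3WaveFormat:
await using var audioWriter = new LameMP3FileWriter(@"C:\temp\test.mp3", audioReader.WaveFormat, LAMEPreset.ABR_128);
await audioReader.CopyToAsync(audioWriter);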

Adding zip file as Content in Web API response doubling file size on download

I am saving zip files to an AWS S3 bucket. I am now trying to create a C# .NET API that will allow me to download a specified key from the bucket and return it in the Content of an HttpResponseMessage.
I've referred to the following question to set up my response for zip files: How to send a zip file from Web API 2 HttpGet
I have modified the code in the previous question so that it instead reads from a TransferUtility stream.
The problem is that I am running into an error when trying to extract or view the file, which looks like the following:
The response I am getting back from the API looks like:
The relevant code looks like:
[HttpGet, Route("GetFileFromS3Bucket")]
public HttpResponseMessage GetFileFromS3Bucket(string keyName)
{
HttpResponseMessage response = new HttpResponseMessage();
string bucketName = "myBucket";
RegionEndpoint bucketRegion = RegionEndpoint.ARegion;
IAmazonS3 s3Client;
s3Client = new AmazonS3Client(bucketRegion);
try
{
var fileTransferUtility = new TransferUtility(s3Client);
var stream = fileTransferUtility.OpenStream(bucketName, keyName);
response.Content = new StreamContent(stream);
response.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
response.Content.Headers.ContentDisposition.FileName = keyName + ".zip";
response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/zip");
response.StatusCode = HttpStatusCode.OK;
}
catch (Exception e)
{
response.Content = new StringContent("Something went wrong, error: " + e.Message);
response.StatusCode = HttpStatusCode.InternalServerError;
}
return response;
}
Results of troubleshooting:
The file from the Web API comes out with nearly double the expected size based on what is in S3. This is consistent across different files
Changing the bucket to be publicly accessible did not help (setting since reverted to not allowing public access)
Changing the file type to XML did not display a nicely formatted error (there was a suggestion that you may receive an XML response if an error was provided from S3)
Downloading the S3 stream and saving it directly to a file resulted in the correct file size. Seems safe to say the stream from S3 is not the problem
It appears that there is a problem with the way the HttpResponseMessage is handling the zip file. I'm unsure whether the problem is actually on the server side, or whether it is up to the client to parse the data and Swagger is simply incapable of doing that. Any help would be greatly appreciated.
Update 1
I do not believe this string is Base64 encoded as the result I got from converting the stream to a string is the following:
I've updated the code sample with the two lines showing the conversion from a stream to string.
Update 2
I've confirmed the issue is with how the response is handling the stream, or something in the response itself. Downloading the file stream from S3 and saving to a new file on the local computer resulted in a valid file that opened as expected.
Update 3
Link to GDrive folder with testing files: https://drive.google.com/drive/folders/1q_N3NTHz5E_nebtBQJHor3HfqUZWhGgd?usp=sharing
I unfortunately can't provide access to the original file as it contains sensitive data. The provided files are still causing the same problem however.
Interesting to note that the test file came out looking like:
The underscores on either side of the filename are quite strange.
I am running the following relevant packages:
Update 4
I've found the following UTF8 references in various files:
File: configuration91.svcinfo
I could not find anything that said anything about 'responseEncoding' anywhere in the project.
I am going to throw an answer up, because what's happening to you is unorthodox. I use S3 for many things and have done what you are doing with no problems in the past. To ensure that I am mimicking what you are doing, I duplicated your code:
[HttpGet, Route("GetFileFromS3Bucket/{keyName}")]
public HttpResponseMessage GetFileFromS3Bucket(string keyName)
{
string bucketName = "testzipfilesagain";
string awsAccessKey = "AKIAJ********A3QHOUA";
string awsSecretKey = "IYUJ9Gy2wFCQ************dCq5suFS";
IAmazonS3 client = new AmazonS3Client(awsAccessKey, awsSecretKey, RegionEndpoint.USEast1);
var fileTransferUtility = new TransferUtility(client);
var stream = fileTransferUtility.OpenStream(bucketName, "md5.zip");
var resp = new HttpResponseMessage();
resp.Content = new StreamContent(stream);
resp.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
resp.Content.Headers.ContentDisposition.FileName = keyName + ".zip";
resp.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/zip");
resp.StatusCode = HttpStatusCode.OK;
return resp;
}
These are the packages I have installed:
<ItemGroup>
    <PackageReference Include="AWSSDK.S3" Version="3.3.111.37" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.WebApiCompatShim" Version="2.2.0" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="5.5.1" />
</ItemGroup>
Everything runs perfectly well.
Trying to troubleshoot your code is going to be fruitless because it works perfectly fine; there is something wrong with your environment.
So this isn't an answer to your question, but an answer to how you can try to solve the issue at hand and get past this:
Make sure your nuget packages are up to date
Do you have any middleware injected in your pipeline? If so, what?
Post your startup.cs -- maybe something is out of order in your Configure routine.
Could you start a brand new project and try your code in that?
Can you try a small 5KB zip file and post the original and the corrupt so we can look?
I would love to get to the bottom of this as I really like to solve these types of problems.
EDIT 1
So I looked at the zip files and they have been run through a UTF-8 encoding process. If you take your original zip file and run this code on it:
var goodBytes = File.ReadAllBytes("Some test to upload to S3.zip");
var badBytes = File.ReadAllBytes("_Some test to upload to S3.zip.zip_");
File.WriteAllText("Some test to upload to S3.zip.utf8", Encoding.UTF8.GetString(goodBytes));
var utf8EncodedGoodBytes = File.ReadAllBytes("Some test to upload to S3.zip.utf8");
var identical = badBytes.SequenceEqual(utf8EncodedGoodBytes);
The result is that identical comes back as true, i.e. the corrupt file is exactly the UTF-8 encoded version of the original.
I am going to do some research and figure out what could be causing your stream to become UTF-8 encoded. Is there anything in your config that sets a responseEncoding? Can you search your entire solution for anything that resembles "utf" or "utf8" or "utf-8"?
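One more check that might narrow it down (just a sketch; the URL and file names are assumptions based on the code above): download the zip through the API with HttpClient, which performs no text decoding, and compare it byte-for-byte against a copy of the original file. If the bytes match, the corruption happens in the client you are downloading with (e.g. the Swagger UI); if not, it happens on the server side.
// Assumed local API URL and key name; adjust to your setup.
var apiBytes = await new HttpClient().GetByteArrayAsync("https://localhost:5001/GetFileFromS3Bucket?keyName=test");
var originalBytes = File.ReadAllBytes("test.zip");
Console.WriteLine(apiBytes.SequenceEqual(originalBytes)
    ? "API returns the bytes unchanged; look at the client"
    : "Bytes are already altered when they leave the API");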

Opening a pdf file in a Cordova App via web service

I have a Cordova app that uses a C# webService to communicate with a SQL database.
This works great.
My problem is that I have some pdf documents on the server with the local filePath held in the database and I need to open these in the app.
I have done a similar thing before where the documents had a URL where they could be reached so they just open, but in this case there is no external access to the file.
So my question is this....how do I best get the file from the server to the app to open it?
I don't need to store the file on the device, just open it so it can be read.
I would be really grateful if someone could steer me in the right direction as I have no clue as to the best method for achieving what I'm after.
*****UPDATE******
Right, I don't think I'm a million miles away, but I have a feeling I'm doing something fundamentally wrong.
I'm creating a byte[] using:
byte[] bytes = System.IO.File.ReadAllBytes(filepath);
which arrives in the app as a really long string.
In the app, I'm getting that string and using the following to reconstitute it as a file:
var bytes = new Uint8Array(data);
saveByteArray("mytest.txt", data);
function base64ToArrayBuffer(base64) {
    var binaryString = window.atob(base64);
    var binaryLen = binaryString.length;
    var bytes = new Uint8Array(binaryLen);
    for (var i = 0; i < binaryLen; i++) {
        var ascii = binaryString.charCodeAt(i);
        bytes[i] = ascii;
    }
    return bytes;
}
function saveByteArray(reportName, byte) {
    var blob = new Blob([byte], {type: "application/txt"});
    var link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    var fileName = reportName;
    link.download = fileName;
    link.click();
}
This will either create an empty file or a corrupt one.
Can anyone help with this please?
A fresh pair of eyes would be gratefully received.
Thanks
What you could do is add a new endpoint to your C# web service backend, download the file from this endpoint, store it locally, and display it from your app.
Behind the endpoint, it would use the file location from the database and get the file content as a stream from where the pdf is stored. This stream of data would be placed in a JSON result object as an array of bytes. Finally, your app would have to get this JSON object, then build the pdf file from the array of bytes and the file name.
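A rough sketch of what such an endpoint could look like (the route and the GetFilePathFromDatabase helper are hypothetical placeholders; the idea is simply to read the stored file and return its bytes as base64 inside a JSON object, as described above):
[HttpGet, Route("GetPdf/{documentId}")]
public IHttpActionResult GetPdf(int documentId)
{
    // Hypothetical helper that looks up the local file path held in the database.
    string filePath = GetFilePathFromDatabase(documentId);
    byte[] fileBytes = System.IO.File.ReadAllBytes(filePath);
    // Return the file name and the base64-encoded content as JSON.
    return Ok(new
    {
        fileName = System.IO.Path.GetFileName(filePath),
        content = Convert.ToBase64String(fileBytes)
    });
}
On the app side, the base64ToArrayBuffer helper from the question could then decode content, and the resulting Uint8Array (rather than the raw string) is what should be passed to saveByteArray, with the blob type set to "application/pdf".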
Hope it helps.
