I am using the Amazon S3 low-level API to upload a large video file, following this link.
When I upload the file, it gives me this exception:
The XML you provided was not well-formed or did not validate against our published schema
I checked the inner exception, and it says this:
The remote server returned an error: (400) Bad Request.
at this line:
CompleteMultipartUploadResponse completeUploadResponse =
s3Client.CompleteMultipartUpload(completeRequest);
and this is how I am creating my S3 client:
IAmazonS3 s3Client = new AmazonS3Client(accesskey, secretKey, Amazon.RegionEndpoint.USEast1);
I also tried changing the bucket name to the form bucketname/filename.mp4, but then it gives this exception:
The specified upload id is not valid
I also tried some other files (doc and pdf), and they give the same XML exception.
Is there a good alternative approach for uploading large video files (around 200-500 MB)?
I ran into a similar problem during a multipart upload using the sample code in the docs. I found that the ETag list is mandatory for the CompleteMultipartUpload call, which the documentation sample leaves out.
This link has a better explanation of the multipart upload process: s3 multipart upload
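For reference, here is a minimal sketch of the low-level flow with the ETag list wired in. The credentials, bucketName, keyName, and filePath are placeholders, and 5 MB is S3's documented minimum part size for every part except the last:

using System;
using System.Collections.Generic;
using System.IO;
using Amazon.S3;
using Amazon.S3.Model;

var s3Client = new AmazonS3Client(accessKey, secretKey, Amazon.RegionEndpoint.USEast1);

var initResponse = s3Client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
{
    BucketName = bucketName,
    Key = keyName
});

var partETags = new List<PartETag>();
long contentLength = new FileInfo(filePath).Length;
long partSize = 5 * 1024 * 1024;

try
{
    long filePosition = 0;
    for (int partNumber = 1; filePosition < contentLength; partNumber++)
    {
        UploadPartResponse uploadResponse = s3Client.UploadPart(new UploadPartRequest
        {
            BucketName = bucketName,
            Key = keyName,
            UploadId = initResponse.UploadId,
            PartNumber = partNumber,
            PartSize = Math.Min(partSize, contentLength - filePosition),
            FilePosition = filePosition,
            FilePath = filePath
        });

        // Collect every part's ETag; without this list, CompleteMultipartUpload
        // sends an empty part manifest and S3 responds with the
        // "XML you provided was not well-formed" 400 error.
        partETags.Add(new PartETag(partNumber, uploadResponse.ETag));
        filePosition += partSize;
    }

    s3Client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = keyName,
        UploadId = initResponse.UploadId,
        PartETags = partETags
    });
}
catch (Exception)
{
    // Abort on failure so the orphaned parts don't keep accruing storage charges.
    s3Client.AbortMultipartUpload(new AbortMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = keyName,
        UploadId = initResponse.UploadId
    });
    throw;
}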
I used to send archives to S3 (around 100-300 MB). My code looked like this:
var s3Client = new AmazonS3Client(accessKey, secretKey);
s3Client.PutObject(new PutObjectRequest
{
    BucketName = destinationBucketName,
    FilePath = myFilePath,
    Key = Path.GetFileName(myFilePath)
});
That's basically it. I had a retry policy and exception handling around that, but this is the core. A simple PutObject call, without any multipart upload, works fine for such file sizes.
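If you do grow past what a single PutObject comfortably handles, the SDK's high-level TransferUtility is worth a look; a minimal sketch, reusing the same placeholder names as above:

using Amazon.S3;
using Amazon.S3.Transfer;

var s3Client = new AmazonS3Client(accessKey, secretKey);

// TransferUtility picks single-part or multipart upload based on file size
// and handles the part/ETag bookkeeping and per-part retries itself.
var transferUtility = new TransferUtility(s3Client);
transferUtility.Upload(myFilePath, destinationBucketName);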
Related
I am trying to download the response from this URL:
http://www.youtube.com/get_video_info?html5=1&video_id=sFwcLC5HC9I
The file it returns can be downloaded from a browser, but when I try to save it with the C# WebClient I only get an error message:
errorcode=180&status=fail&reason=HTTP+is+not+supported.
Is there any other way to download the file from the API without using HTTP?
What I have tried (a is an instance of WebClient):
byte[] policko = a.DownloadData("http://www.youtube.com/get_video_info?html5=1&video_id=sFwcLC5HC9I");
a.DownloadFile("http://www.youtube.com/get_video_info?html5=1&video_id=sFwcLC5HC9I", "filename");
a.DownloadString("http://www.youtube.com/get_video_info?html5=1&video_id=sFwcLC5HC9I");
The response you got indicates that HTTP is not supported for this API call. The next natural choice is HTTPS.
https://www.youtube.com/get_video_info?html5=1&video_id=sFwcLC5HC9I
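With WebClient that is a one-character change to the scheme, e.g. (reusing the a instance from the question):

// Same call as before, just over HTTPS, which this endpoint accepts.
string info = a.DownloadString("https://www.youtube.com/get_video_info?html5=1&video_id=sFwcLC5HC9I");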
I am trying out the OneDrive Graph API to upload files to my OneDrive folder.
Using the regular upload works fine.
I'm also testing the resumable upload, which is used for large files. But this is where I'm getting a strange response.
I'm following this link for how to do it: https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_createuploadsession.
First I create an upload session using "https://graph.microsoft.com/v1.0/me/drive/items/xxxxxxxxxx:/filename.txt:/createUploadSession".
This gives me back an uploadUrl value, something like "https://api.onedrive.com/rup/xxxxxxxxxxxxx"
I then make a PUT request to that URL with the correct headers.
The response I receive is a 400 (bad request) with the following text (including the HTML):
<h2>Our services aren't available right now</h2><p>We're working to restore all services as soon as possible. Please check back soon.</p>Ref A: 235A863C95DC45BE98688D905A7DB3C1 Ref B: BUH01EDGE0107 Ref C: 2018-08-28T18:56:52Z
I have been getting this for 3 days now and I can't seem to get hold of any support from Microsoft. According to this website, everything is running: https://portal.office.com/servicestatus
Does anyone know why I'm getting this error?
I found the cause for the error.
I received the error because I provided the authentication token in the header.
For small file uploads it is required, but for large file uploads it is not; in fact, including it is what triggered the error.
I was using the same code for PUT, POST, and GET requests, where I only pass in the URL and HTTP content, and I would always add the auth headers.
Still, it is a very strange error response to receive for adding unneeded headers.
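For anyone hitting the same thing, here is a rough sketch of the fragment PUT without the Authorization header. It assumes uploadUrl is the value returned by createUploadSession, and that the fragment bytes, offset, and total file length come from your own chunking code:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

async Task UploadFragmentAsync(HttpClient client, string uploadUrl, byte[] fragment, long offset, long totalLength)
{
    var content = new ByteArrayContent(fragment);
    content.Headers.ContentLength = fragment.Length;
    // Content-Range tells the upload session which slice of the file this is.
    content.Headers.ContentRange = new ContentRangeHeaderValue(offset, offset + fragment.Length - 1, totalLength);

    // Note: no Authorization header; the uploadUrl is already pre-authenticated.
    HttpResponseMessage response = await client.PutAsync(uploadUrl, content);
    response.EnsureSuccessStatusCode();
}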
I want to download a 68 kB zip file using WebClient, and I get this error:
C# System.Net.WebException: 'Too many automatic redirections were attempted.'
Regarding this being flagged as a duplicate post: the solution explained in Why i'm getting exception: Too many automatic redirections were attempted on webclient? does not make my code below work and download the zip file.
How can I edit the question to explain this better?
My code:
var url_from = "http://www1.caixa.gov.br/listaweb/Lista_imoveis_RJ.zip";
var _to = @"F:\folder\file.zip";
using (var client = new WebClient())
{
client.DownloadFile(url_from, _to);
}
I tried async approaches too, but they produced an empty zip file.
Like this: How do I download zip file in C#?
and this: How can I download a ZIP file from a URL using C#?
This is caused by a bad server implementation. If you use Fiddler, you'll see that the server redirects both HTTPS and HTTP connections to the same HTTP URL, adding a security=true cookie.
Calling over HTTP is particularly funny:
The first HTTP redirects to an HTTPS URL
The HTTPS redirects back to the original HTTP with the security=true cookie
If the cookie isn't there, the loop starts again
This means that:
There's no security. Anything can intercept that call and alter or replace the contents of the file. Hope you don't try to download this file over WiFi!
The server will cause an infinite redirection loop unless you store the cookie or add it yourself.
WebClient can't store cookies. It's an obsolete class, created back when downloading pages and files was all that was needed. All of its functionality and much more is provided by the HttpClient class.
In this case, though, you can add the cookie as a header yourself, avoid the redirections, and still download the file over HTTPS:
var url_from = "https://www1.caixa.gov.br/listaweb/Lista_imoveis_RJ.zip";
using (var client = new System.Net.WebClient())
{
client.Headers.Add(System.Net.HttpRequestHeader.Cookie, "security=true");
client.DownloadFile(url_from, _to);
}
This will result in a single call and download the file over HTTPS.
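If you would rather move to HttpClient as suggested above, a rough equivalent carries the cookie through a CookieContainer (same placeholder path as before; run inside an async method):

using System;
using System.IO;
using System.Net;
using System.Net.Http;

var cookies = new CookieContainer();
cookies.Add(new Uri("https://www1.caixa.gov.br"), new Cookie("security", "true"));

using (var handler = new HttpClientHandler { CookieContainer = cookies })
using (var client = new HttpClient(handler))
{
    // The handler attaches the cookie, so the redirect loop never starts.
    byte[] data = await client.GetByteArrayAsync("https://www1.caixa.gov.br/listaweb/Lista_imoveis_RJ.zip");
    File.WriteAllBytes(@"F:\folder\file.zip", data);
}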
I'm trying to upload files (images & video) to an AWS CloudFront distribution that points to an S3 bucket.
Currently, I can use the HttpClient to GET and PUT files using signed URLs generated via the CloudFront SDK.
using (HttpClient client = new HttpClient())
using (var stream = File.OpenRead(filepath))
using (HttpResponseMessage response = await client.PutAsync(url, new StreamContent(stream)))
{
response.EnsureSuccessStatusCode();
}
I originally tried a POST, but that didn't work at all (it timed out after 30 seconds). From this SO answer I found that I need to add client.DefaultRequestHeaders.Add("x-amz-acl", "bucket-owner-full-control"); to give the object ACL permissions, so the bucket owner can access it via the console.
I know I can upload to S3 using the AWS S3 SDK and I could enable transfer acceleration, though the AWS FAQ states that CloudFront is a better choice when uploading smaller files or datasets (< 1GB).
I've found the CloudFront documentation vague, wrong, or non-existent for anything other than the initial setup of the CloudFront distribution.
Is the above method the correct way to upload files to S3 via CloudFront, or is there an optimised, more robust way (e.g. multipart uploads, so larger files can be resumed)? I want to optimise this for uploading video, so answers focused on that would be appreciated.
AWS Support suggested the following response, in case it helps someone:
... three possible solutions. The first being that you can PUT objects to S3 via a signed URL to the S3 origin. The second option, PUTing the file through CF into S3 via S3 pre-signed URL. The third and most favorable option, using the Transfer Acceleration Endpoint to PUT the object into S3.
From my understanding, the FAQ stated that using CF instead of the TA endpoint is better for files smaller than 1 GB because TA is optimized for larger files. However, there are many factors that can influence the performance, I suggest testing both methods to see which service works best for your environment.
They also mention that multipart uploads are much more complex to do through CF:
It's going to be much more difficult if you need to use CloudFront signed URL for other reasons. You will need to use the Multipart Upload APIs (InitiateMultipartUpload, UploadPart, CompleteMultipartUpload) and sign them accordingly. Unfortunately we don't have any documentation or steps on how to do this. You can find more information on the Multipart Upload process here [2].
I highly recommend using the TransferUtility and S3 Transfer Acceleration endpoints if possible.
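For reference, a minimal sketch of that favored third option: TransferUtility pointed at the Transfer Acceleration endpoint. This assumes acceleration is already enabled on the bucket, and the credentials, region, bucket, and path are placeholders:

using Amazon.S3;
using Amazon.S3.Transfer;

var config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1,
    UseAccelerateEndpoint = true // route the upload through the s3-accelerate endpoint
};

using (var s3Client = new AmazonS3Client(accessKey, secretKey, config))
{
    // TransferUtility handles multipart splitting and per-part retries,
    // which is what you want for resumable large video uploads.
    var transferUtility = new TransferUtility(s3Client);
    transferUtility.Upload(@"C:\videos\myvideo.mp4", "my-bucket");
}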
I am sending a file as part of the FormData from AngularJS to a .NET Web API as follows:
AngularJS:
var cabinetFormData = new FormData();
cabinetFormData.append('file', file);
I send the above FormData as a parameter in the service call to the .NET Web API.
.NET:
var httpRequest = HttpContext.Current.Request;
var fileRequest = httpRequest.Files[0];
While receiving the request on the server side, fileRequest.FileName always shows up as "blob" for image files. The rest of the content shows up fine, and I get proper file names for other formats like .pdf and .xml. I have checked the input, and it's sending all the FormData.
What am I doing wrong?
I would post this as a comment, but I don't have the rep yet.
If you're using Firefox when you see this issue, these links might help you out:
Uploaded file comes in as blob if not on localhost? asp.net mvc4 using IIS express
https://groups.google.com/forum/#!topic/jquery-fileupload/RjfHLX2_EeM
:)
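If you need a stopgap while the client side is sorted out, one hedged server-side fallback (assuming the ASP.NET Web API handler from the question) is to synthesize a name whenever the browser sent the generic "blob":

using System;
using System.Web;

var httpRequest = HttpContext.Current.Request;
var fileRequest = httpRequest.Files[0];

string fileName = fileRequest.FileName;
if (string.Equals(fileName, "blob", StringComparison.OrdinalIgnoreCase))
{
    // Derive an extension from the MIME type (e.g. "image/png" -> ".png")
    // and generate a unique name, since the original one was lost in transit.
    string extension = "." + fileRequest.ContentType.Split('/')[1];
    fileName = Guid.NewGuid().ToString("N") + extension;
}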