I am trying to upload a large file to Google Drive using a resumable upload. Here is the code flow.
Step 1: Create the file on Google Drive using the Drive service and initiate the resumable upload session with a PUT request:
String fileID = _DriveService.Files.Insert(googleFileBody).Execute().Id;

// Initiating the resumable upload session
String UploadUrl = null;
String _putUrl = "https://www.googleapis.com/upload/drive/v2/files/" + fileID + "?uploadType=resumable";
HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(_putUrl);
httpRequest.Headers["Authorization"] = "Bearer " + AccessToken;
httpRequest.Method = "PUT";
requestStream = httpRequest.GetRequestStream();
_webResponse = (HttpWebResponse)httpRequest.GetResponse();
if (_webResponse.StatusCode == HttpStatusCode.OK)
{
    // Response OK: the Location header carries the session URI used for the chunk uploads
    UploadUrl = _webResponse.Headers["Location"].ToString();
}
Step 2: Upload chunks to the UploadUrl. The byte array is a multiple of 256 KB, and this function is called in a loop for every chunk:
private void AppendFileData(byte[] chunk)
{
    try
    {
        HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(UploadUrl);
        httpRequest.ContentLength = chunk.Length;
        httpRequest.Headers["Content-Range"] = "bytes " + startOffset + "-" + endOffset + "/" + sourceFileSize;
        httpRequest.ContentType = MimeType;
        httpRequest.Method = "PUT";
        MemoryStream stream = new MemoryStream(chunk);
        using (System.IO.Stream requestStream = httpRequest.GetRequestStream())
        {
            stream.CopyTo(requestStream);
            requestStream.Flush();
            requestStream.Close();
        }
        HttpWebResponse httpResponse = (HttpWebResponse)(httpRequest.GetResponse());
        // Throws: System.Net.WebException: The remote server returned an error: (308) Resume Incomplete.
        //   at System.Net.HttpWebRequest.GetResponse()
        // No data is getting appended to the file, yet the loop keeps
        // executing the append for the remaining chunks.
    }
    catch (System.Net.WebException ex)
    {
    }
}
For my last chunk, which is not a multiple of 256 KB, I am getting this error:
System.Net.WebException: The remote server returned an error: (400) Bad Request.
at System.Net.HttpWebRequest.GetResponse()
What am I doing wrong in this code? Please suggest.
Thanks in advance,
Mayuresh.
Try checking whether the last chunk is passed with its correct size, rather than as the entire array, as stated in this forum. Ali notes there that "one potential issue is this: if you are sending a byte array which is half empty for the last request (i.e. the buffer has been read in less than the chunk size)." Here is a sample implementation of resumable upload. Hope this helps.
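In other words, if the file is read into a fixed 256 KB buffer, the final Read will usually fill only part of it, and sending the whole buffer makes the Content-Range end offset overshoot sourceFileSize, which matches the 400 on the last chunk. Here is a minimal sketch of the read loop, reusing AppendFileData and the offsets from the question (sourceFilePath is a hypothetical variable):

// Read the source in 256 KB chunks; trim the last buffer to the bytes actually read.
const int chunkSize = 256 * 1024;
byte[] buffer = new byte[chunkSize];
int bytesRead;
using (FileStream source = File.OpenRead(sourceFilePath))
{
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        byte[] chunk = buffer;
        if (bytesRead < buffer.Length)
        {
            // Last chunk: send only the valid bytes, not the half-empty array.
            chunk = new byte[bytesRead];
            Array.Copy(buffer, chunk, bytesRead);
        }
        // Advance startOffset/endOffset before each call so that
        // endOffset = startOffset + chunk.Length - 1 in the Content-Range header.
        AppendFileData(chunk);
    }
}

Note also that a 308 (Resume Incomplete) response is the server's normal acknowledgement of an intermediate chunk; HttpWebRequest surfaces it as a WebException, so it should be caught and treated as success for every chunk except the last, not silently swallowed.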
I have a valid download URL for a file located in Google Firebase Storage, and I'm trying to download the file into my application, which is written in C#, via an HTTP GET request. However, the request fails with the error "WebException: The remote server returned an error: (400) Bad Request." I would really appreciate it if you could point me towards what I'm doing wrong. Thank you in advance for your help! Here is a reference to the Google Firebase documentation: https://firebase.google.com/docs/storage/web/download-files
Below is the code I am using to download the file:
private void downloadFrame()
{
    // Extract
    try
    {
        // construct HTTP GET request
        HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(**link address**);
        httpRequest.Method = "GET";
        httpRequest.ContentType = "text/xml; encoding='utf-8'";

        // send the HTTP request and get the HTTP response from the webserver
        HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse();
        Stream httpResponseStream = httpResponse.GetResponseStream();

        // define buffer and buffer size
        int bufferSize = 1024;
        byte[] buffer = new byte[bufferSize];
        int bytesRead = 0;

        // read from response and write to file
        FileStream fileStream = File.Create("frame.pcm");
        while ((bytesRead = httpResponseStream.Read(buffer, 0, bufferSize)) != 0)
        {
            fileStream.Write(buffer, 0, bytesRead);
        } // end while
    }
    catch (WebException we)
    {
        Debug.Log(we.Response.ToString());
    }
}
I realize this is a very old question, but I hit it when I had the same issue and it took me quite a while to find an answer.
The download link I had from Firebase was for a PDF file and contained an escaped "/" character as "%2F". This was being changed back into a "/" on the fly, causing the error.
The answer I required was to alter this behaviour by adding an entry to the web.config file as described here: https://stackoverflow.com/a/12170132
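For reference, the entry in question (reproduced from memory of that linked answer, so verify against it before relying on it) tells the UriParser not to unescape encoded slashes and dots:

<!-- web.config / app.config, inside <configuration>; sketch based on the linked answer -->
<uri>
  <schemeSettings>
    <add name="http" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
    <add name="https" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
  </schemeSettings>
</uri>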
I could then use a simple WebClient to download the file like this:
string targetFileName = @"C:\Temp\Target.pdf";
using (WebClient client = new WebClient())
{
    Uri downloadURI = new Uri(<Firebase download URL>);
    client.DownloadFile(downloadURI, targetFileName);
}
Note: By default, Cloud Storage buckets require Firebase Authentication to download files. You can change your Firebase Security Rules for Cloud Storage to allow unauthenticated access: https://firebase.google.com/docs/storage/web/download-files
Change this and try again.
I've seen threads on this issue, but my problem is particularly confusing. I have a free 2-million-character subscription and a valid client ID and secret. When I run my code, I get to call the API a few times successfully (the most I've seen is 75 consecutive successful calls). Then every other call returns a Bad Request response: The remote server returned an error: (400) Bad Request.
I create the token once with my credentials and never create it again. I loop through a file, parse it, and submit every parsed string for translation by calling the API. It seems that I reach some sort of limit that I'm not aware of.
When looking at my account, it doesn't seem to be counting the characters I've already translated against my quota, which would make me highly suspicious that I have the wrong credentials when creating the token, but I have quadruple-checked that and everything seems to be OK.
Any guidance on what I may be missing here would be much appreciated.
Here's the code that creates the token. I do think, though, that there may be an unknown limitation of the free subscription that I'm not aware of.
static void gettoken()
{
    // Get access token
    string clientID = "my client id";
    string clientSecret = "my secret";
    String strTranslatorAccessURI = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";
    String strRequestDetails = string.Format("grant_type=client_credentials&client_id={0}&client_secret={1}&scope=http://api.microsofttranslator.com", clientID, clientSecret);

    System.Net.WebRequest webRequest = System.Net.WebRequest.Create(strTranslatorAccessURI);
    webRequest.ContentType = "application/x-www-form-urlencoded";
    webRequest.Method = "POST";
    byte[] bytes = System.Text.Encoding.ASCII.GetBytes(strRequestDetails);
    webRequest.ContentLength = bytes.Length;
    using (System.IO.Stream outputStream = webRequest.GetRequestStream())
    {
        outputStream.Write(bytes, 0, bytes.Length);
    }

    System.Net.WebResponse webResponse = webRequest.GetResponse();
    System.Runtime.Serialization.Json.DataContractJsonSerializer serializer = new System.Runtime.Serialization.Json.DataContractJsonSerializer(typeof(AdmAccessToken));
    AdmAccessToken token = (AdmAccessToken)serializer.ReadObject(webResponse.GetResponseStream());
    MyGlobals.headerValue = "Bearer " + token.access_token;
}
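For reference, AdmAccessToken above is just a DataContract matching the OAuth token response. A minimal sketch of the shape the deserializer needs (field names follow the standard OAuth2 token response, so treat this as an assumption):

[System.Runtime.Serialization.DataContract]
public class AdmAccessToken
{
    [System.Runtime.Serialization.DataMember]
    public string access_token { get; set; }
    [System.Runtime.Serialization.DataMember]
    public string token_type { get; set; }
    [System.Runtime.Serialization.DataMember]
    public string expires_in { get; set; }
    [System.Runtime.Serialization.DataMember]
    public string scope { get; set; }
}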
And here's the code that calls the API itself. I call the API method from a loop.
static void RunBing(string sterm)
{
    // Submit the translation request
    string txtToTranslate = sterm;
    string uri = "http://api.microsofttranslator.com/v2/Http.svc/Translate?text=" + txtToTranslate + "&from=en&to=es";
    System.Net.WebRequest translationWebRequest = System.Net.WebRequest.Create(uri);
    translationWebRequest.Headers.Add("Authorization", MyGlobals.headerValue);

    System.Net.WebResponse response = null;
    try
    {
        response = translationWebRequest.GetResponse();
    }
    catch (Exception e)
    {
        Console.WriteLine("Term failed: " + sterm);
        Console.WriteLine(e);
        return;
    }

    System.IO.Stream stream = response.GetResponseStream();
    System.Text.Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
    System.IO.StreamReader translatedStream = new System.IO.StreamReader(stream, encode);
    System.Xml.XmlDocument xTranslation = new System.Xml.XmlDocument();
    xTranslation.LoadXml(translatedStream.ReadToEnd());
    MyGlobals.xlation = xTranslation.InnerText;
}
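One detail worth flagging in the code above: sterm is concatenated into the query string unencoded, so any parsed term containing characters such as '&', '#', '%', or '+' will malform the request, which is consistent with intermittent 400s in an otherwise working loop. A hedged sketch of escaping the term first (escapedTerm is my own name):

// Percent-encode reserved characters before building the request URI.
string escapedTerm = Uri.EscapeDataString(sterm);
string uri = "http://api.microsofttranslator.com/v2/Http.svc/Translate?text="
    + escapedTerm + "&from=en&to=es";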
After several successful calls to the API, I start to get the following message:
System.Net.WebException: The remote server returned an error: (400) Bad Request.
at System.Net.HttpWebRequest.GetResponse()
at Translate.TranslateText.Program.RunBing(String sterm)
After reading and googling about HttpClient, I have the impression that this component is not suitable for uploading large files or content to REST services.
It seems that if the upload takes longer than the established timeout, the transmission will fail. Does that make sense? What does this timeout mean?
Getting progress information also seems hard, or requires add-ons.
So my questions are: is it possible to solve these two issues without too much hassle? Otherwise, what's the best approach when working with large content and REST services?
Yes, if the upload takes longer than the Timeout, the upload will fail. This is a limitation of HttpClient. The most robust solution to this problem is the one Thomas Levesque has written an article about and linked in his comments on your question: you have to use HttpWebRequest instead of HttpClient.
If you want to get progress messages, open the file as a FileStream and manually iterate through it, copying bytes in increments onto the (upload) request stream. As you go, you can calculate your progress relative to the file size.
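As a rough illustration of that approach (the method name, the 64 KB slice size, and the progress callback are my own, not from the article):

// Copy 'source' to 'destination' in 64 KB slices, reporting fractional
// progress after each write; totalBytes would be fileStream.Length.
static void CopyWithProgress(System.IO.Stream source, System.IO.Stream destination,
                             long totalBytes, Action<double> reportProgress)
{
    byte[] buffer = new byte[64 * 1024];
    long written = 0;
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        destination.Write(buffer, 0, read);
        written += read;
        reportProgress((double)written / totalBytes);
    }
}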
TL's code example follows, but be sure to read the article:
long UploadFile(string path, string url, string contentType)
{
    // Build request
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = WebRequestMethods.Http.Post;
    request.AllowWriteStreamBuffering = false;
    request.ContentType = contentType;
    string fileName = Path.GetFileName(path);
    request.Headers["Content-Disposition"] = string.Format("attachment; filename=\"{0}\"", fileName);

    try
    {
        // Open source file
        using (var fileStream = File.OpenRead(path))
        {
            // Set content length based on source file length
            request.ContentLength = fileStream.Length;
            // Get the request stream with the default timeout
            // (GetRequestStreamWithTimeout / GetResponseWithTimeout are extension
            // methods defined in the article)
            using (var requestStream = request.GetRequestStreamWithTimeout())
            {
                // Upload the file with no timeout
                fileStream.CopyTo(requestStream);
            }
        }

        // Get response with the default timeout, and parse the response body
        using (var response = request.GetResponseWithTimeout())
        using (var responseStream = response.GetResponseStream())
        using (var reader = new StreamReader(responseStream))
        {
            string json = reader.ReadToEnd();
            var j = JObject.Parse(json);
            return j.Value<long>("Id");
        }
    }
    catch (WebException ex)
    {
        if (ex.Status == WebExceptionStatus.Timeout)
        {
            LogError(ex, "Timeout while uploading '{0}'", fileName);
        }
        else
        {
            LogError(ex, "Error while uploading '{0}'", fileName);
        }
        throw;
    }
}
I am trying to upload from an HTTP stream directly to S3, without storing it in memory or as a file first. I am already doing this with Rackspace Cloud Files as an HTTP-to-HTTP copy, but the AWS authentication is beyond me, so I am trying to use the SDK instead.
The problem is that the upload stream is failing with this exception:
"This stream does not support seek operations."
I've tried PutObject and TransferUtility.Upload; both fail with the same thing.
Is there any way to stream into S3 as the stream comes in, rather than buffering the whole thing into a MemoryStream or FileStream? Or are there any good examples of doing the authentication for an S3 request using HttpWebRequest, so I can duplicate what I do with Cloud Files?
Edit: or is there a helper function in the AWSSDK for generating the Authorization header?
CODE:
This is the failing S3 part (both methods included for completeness):
string uri = RSConnection.StorageUrl + "/" + container + "/" + file.SelectSingleNode("name").InnerText;
var req = (HttpWebRequest)WebRequest.Create(uri);
req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
req.Method = "GET";
using (var resp = req.GetResponse() as HttpWebResponse)
{
    using (Stream stream = resp.GetResponseStream())
    {
        Amazon.S3.Transfer.TransferUtility trans = new Amazon.S3.Transfer.TransferUtility(S3Client);
        trans.Upload(stream, config.Element("root").Element("S3BackupBucket").Value, container + file.SelectSingleNode("name").InnerText);

        // Use EITHER the above OR the below
        PutObjectRequest putReq = new PutObjectRequest();
        putReq.WithBucketName(config.Element("root").Element("S3BackupBucket").Value);
        putReq.WithKey(container + file.SelectSingleNode("name").InnerText);
        putReq.WithInputStream(Amazon.S3.Util.AmazonS3Util.MakeStreamSeekable(stream));
        putReq.WithMetaData("content-length", file.SelectSingleNode("bytes").InnerText);
        using (S3Response putResp = S3Client.PutObject(putReq))
        {
        }
    }
}
And this is how I do it successfully from S3 to Cloud Files:
using (GetObjectResponse getResponse = S3Client.GetObject(new GetObjectRequest().WithBucketName(bucket.BucketName).WithKey(file.Key)))
{
    using (Stream s = getResponse.ResponseStream)
    {
        // We can stream right from S3 to CF, no need to store in memory or on the filesystem.
        var req = (HttpWebRequest)WebRequest.Create(uri);
        req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
        req.Method = "PUT";
        req.AllowWriteStreamBuffering = false;
        if (req.ContentLength == -1L)
            req.SendChunked = true;
        using (Stream stream = req.GetRequestStream())
        {
            byte[] data = new byte[32768];
            int bytesRead = 0;
            while ((bytesRead = s.Read(data, 0, data.Length)) > 0)
            {
                stream.Write(data, 0, bytesRead);
            }
            stream.Flush();
            stream.Close();
        }
        req.GetResponse().Close();
    }
}
As no one answering seems to have done it, I spent the time working it out based on guidance from Steve's answer.
In answer to the question "are there any good examples of doing the authentication for an S3 request using HttpWebRequest, so I can duplicate what I do with Cloud Files?", here is how to generate the auth header manually:
string today = String.Format("{0:ddd,' 'dd' 'MMM' 'yyyy' 'HH':'mm':'ss' 'zz00}", DateTime.Now);

string stringToSign = "PUT\n" +
    "\n" +
    file.SelectSingleNode("content_type").InnerText + "\n" +
    "\n" +
    "x-amz-date:" + today + "\n" +
    "/" + strBucketName + "/" + strKey;

Encoding ae = new UTF8Encoding();
HMACSHA1 signature = new HMACSHA1(ae.GetBytes(AWSSecret));
string encodedCanonical = Convert.ToBase64String(signature.ComputeHash(ae.GetBytes(stringToSign)));
string authHeader = "AWS " + AWSKey + ":" + encodedCanonical;

string uriS3 = "https://" + strBucketName + ".s3.amazonaws.com/" + strKey;
var reqS3 = (HttpWebRequest)WebRequest.Create(uriS3);
reqS3.Headers.Add("Authorization", authHeader);
reqS3.Headers.Add("x-amz-date", today);
reqS3.ContentType = file.SelectSingleNode("content_type").InnerText;
reqS3.ContentLength = Convert.ToInt32(file.SelectSingleNode("bytes").InnerText);
reqS3.Method = "PUT";
Note the added x-amz-date header, as HttpWebRequest sends the date in a different format from what AWS is expecting.
From there it was just a case of repeating what I was already doing.
Take a look at the Amazon S3 Authentication Tool for Curl. From that web page:
"Curl is a popular command-line tool for interacting with HTTP services. This Perl script calculates the proper signature, then calls Curl with the appropriate arguments."
You could probably adapt it or its output for your use.
I think the problem is that, according to the AWS documentation, Content-Length is required, and you don't know what the length is until the stream has finished.
(I would guess the Amazon.S3.Util.AmazonS3Util.MakeStreamSeekable routine reads the whole stream into memory to get around this problem, which makes it unsuitable for your scenario.)
What you can do is read the file in chunks and upload them using a multipart upload; see the sketch after the PS below.
PS, I assume you know the C# source for the AWSSDK for dotnet is on Github.
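A hedged sketch of that multipart approach (sourceStream, bucketName, and key are hypothetical variables; class and property names are the AWSSDK multipart API as I recall it, and exact request shapes vary across SDK versions):

// using Amazon.S3; using Amazon.S3.Model;
// using System.Collections.Generic; using System.IO;
const int partSize = 5 * 1024 * 1024; // S3 minimum for every part except the last
var init = S3Client.InitiateMultipartUpload(new InitiateMultipartUploadRequest
{
    BucketName = bucketName,
    Key = key
});
var partETags = new List<PartETag>();
byte[] buffer = new byte[partSize];
int partNumber = 1;
int read;
// NB: on a network stream, Read can return less than partSize before the end;
// a production version should keep reading until the buffer is full.
while ((read = sourceStream.Read(buffer, 0, buffer.Length)) > 0)
{
    using (var partStream = new MemoryStream(buffer, 0, read))
    {
        var partResponse = S3Client.UploadPart(new UploadPartRequest
        {
            BucketName = bucketName,
            Key = key,
            UploadId = init.UploadId,
            PartNumber = partNumber,
            PartSize = read,
            InputStream = partStream
        });
        partETags.Add(new PartETag(partNumber, partResponse.ETag));
    }
    partNumber++;
}
S3Client.CompleteMultipartUpload(new CompleteMultipartUploadRequest
{
    BucketName = bucketName,
    Key = key,
    UploadId = init.UploadId,
    PartETags = partETags
});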
This is a true hack (which would probably break with a new implementation of the AWSSDK), and it requires knowledge of the length of the file being requested, but it works if you wrap the response stream with this class (from a gist) as shown below:
long length = fileLength;
// You can get the file length in several ways. I am uploading from a Dropbox
// link, so they give me the length along with the URL. Alternatively, you can
// perform a HEAD request and read the Content-Length header.
string uri = RSConnection.StorageUrl + "/" + container + "/" + file.SelectSingleNode("name").InnerText;
var req = (HttpWebRequest)WebRequest.Create(uri);
req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
req.Method = "GET";
using (var resp = req.GetResponse() as HttpWebResponse)
{
    using (Stream stream = resp.GetResponseStream())
    {
        // I haven't tested this path
        Amazon.S3.Transfer.TransferUtility trans = new Amazon.S3.Transfer.TransferUtility(S3Client);
        trans.Upload(new HttpResponseStream(stream, length), config.Element("root").Element("S3BackupBucket").Value, container + file.SelectSingleNode("name").InnerText);

        // Use EITHER the above OR the below
        // I have tested this with Dropbox data
        PutObjectRequest putReq = new PutObjectRequest();
        putReq.WithBucketName(config.Element("root").Element("S3BackupBucket").Value);
        putReq.WithKey(container + file.SelectSingleNode("name").InnerText);
        putReq.WithInputStream(new HttpResponseStream(stream, length));
        // These are necessary for really large files to work
        putReq.WithTimeout(System.Threading.Timeout.Infinite);
        putReq.WithReadWriteTimeout(System.Threading.Timeout.Infinite);
        using (S3Response putResp = S3Client.PutObject(putReq))
        {
        }
    }
}
The hack is overriding the Position and Length properties: returning 0 from the Position getter, making the Position setter a no-op, and returning the known length from Length.
I recognize that this might not work if you don't have the length, or if the server providing the source does not support HEAD requests and Content-Length headers. I also realize it might not work if the reported Content-Length or the supplied length doesn't match the actual length of the file.
In my test, I also supply the Content-Type to the PutObjectRequest, but I don't think that is necessary.
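Based on that description, a rough sketch of what the wrapper class might look like (the actual gist may differ; the name mirrors the usage above):

using System;
using System.IO;

class HttpResponseStream : Stream
{
    private readonly Stream _inner; // the non-seekable HTTP response stream
    private readonly long _length;  // the externally known content length

    public HttpResponseStream(Stream inner, long length)
    {
        _inner = inner;
        _length = length;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return true; } }    // pretend to be seekable
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return _length; } }  // report the known length
    public override long Position { get { return 0; } set { } } // get returns 0, set is a no-op

    public override int Read(byte[] buffer, int offset, int count)
    {
        return _inner.Read(buffer, offset, count);
    }

    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { return 0; } // assumption: never actually called
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}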
As sgmoore said, the problem is that the content length isn't obtainable from the HTTP response stream, which is not seekable. However, HttpWebResponse does have a ContentLength property available, so you can actually form the HTTP request to S3 yourself instead of using the Amazon library.
Here's another Stack Overflow question that managed to do that, with what looks like full code to me.
I am currently creating a C# application to tie into a PHP/MySQL online system. The application needs to send POST data to scripts and get the response.
When I send the following data
username=test&password=test
I get the following timings...
Starting request at 22/04/2010 12:15:42
Finished creating request : took 00:00:00.0570057
Transmitting data at 22/04/2010 12:15:42
Transmitted the data : took 00:00:06.9316931 <<--
Getting the response at 22/04/2010 12:15:49
Getting response 00:00:00.0360036
Finished response 00:00:00.0360036
Entire call took 00:00:07.0247024
As you can see, it is taking 6 seconds to actually send the data to the script. I have done further testing by sending data from telnet and by sending POST data from a local file to the URL, and neither takes even a second, so this is not a problem with the hosted script on the site.
Why is it taking 6 seconds to transmit the data when it is two simple strings?
I use a custom class to send the data
class httppostdata
{
    WebRequest request;
    WebResponse response;

    public string senddata(string url, string postdata)
    {
        var start = DateTime.Now;
        Console.WriteLine("Starting request at " + start.ToString());

        // create the request to the url passed in the parameters
        request = (WebRequest)WebRequest.Create(url);
        // set the method to post
        request.Method = "POST";
        // convert the post data into a byte array
        byte[] byteData = Encoding.UTF8.GetBytes(postdata);
        // set the content type and the content length
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = byteData.Length; // byte count, not string length

        var end1 = DateTime.Now;
        Console.WriteLine("Finished creating request : took " + (end1 - start));

        var start2 = DateTime.Now;
        Console.WriteLine("Transmitting data at " + start2.ToString());
        // get the request stream and write the data to it
        Stream dataStream = request.GetRequestStream();
        dataStream.Write(byteData, 0, byteData.Length);
        dataStream.Close();
        var end2 = DateTime.Now;
        Console.WriteLine("Transmitted the data : took " + (end2 - start2));

        // get the response
        var start3 = DateTime.Now;
        Console.WriteLine("Getting the response at " + start3.ToString());
        response = request.GetResponse();
        //Console.WriteLine(((WebResponse)response).StatusDescription);
        dataStream = response.GetResponseStream();
        StreamReader reader = new StreamReader(dataStream);
        var end3 = DateTime.Now;
        Console.WriteLine("Getting response " + (end3 - start3));

        // read the response
        string serverresponse = reader.ReadToEnd();
        var end3a = DateTime.Now;
        Console.WriteLine("Finished response " + (end3a - start3));
        Console.WriteLine("Entire call took " + (end3a - start));
        //Console.WriteLine(serverresponse);

        reader.Close();
        dataStream.Close();
        response.Close();
        return serverresponse;
    }
}
And to call it I use
private void btnLogin_Click(object sender, EventArgs e)
{
    if (txtUsername.Text.Length < 3 || txtPassword.Text.Length < 3)
    {
        MessageBox.Show("Missing your username or password.");
    }
    else
    {
        string postdata = "username=" + txtUsername.Text +
                          "&password=" + txtPassword.Text;
        httppostdata myPost = new httppostdata();
        string response = myPost.senddata("http://www.domainname.com/scriptname.php", postdata);
        MessageBox.Show(response);
    }
}
Make sure you explicitly set the Proxy property of the WebRequest to null, or it will try to auto-detect the proxy settings, which can take some time.
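For example, right after the WebRequest.Create call in senddata:

// Skip automatic proxy detection, which can add several seconds to the first request.
request.Proxy = null;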
Chances are that, because in your test you only call this once, the delay you see is the C# code being JIT-compiled.
A better test would be to call this twice, discard the timings from the first call, and see if they improve.
An even better test would be to discard the first set of timings and then run this many times and take an average, although for a very loose "indicative" view this is probably not necessary.
As an aside, for this sort of timing you are better off using the System.Diagnostics.Stopwatch class rather than System.DateTime.
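For instance, a minimal sketch:

var sw = System.Diagnostics.Stopwatch.StartNew();
// ... perform the request ...
sw.Stop();
Console.WriteLine("Entire call took " + sw.Elapsed);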
[EDIT]
Also, noting Mant101's suggestion about proxies: if setting the proxy to null fails to resolve things, you may wish to set up Fiddler and point your request at Fiddler as its proxy. This would allow you to intercept the actual HTTP calls, so you can get a better breakdown of the HTTP call timings themselves from outside the framework.