At the moment I use a C# app to download a picture from a known URL and upload it to an AWS S3 server.
using (WebClient c = new WebClient())
{
    // Download the source image into memory.
    var data = c.DownloadData(ad.PhotoUrl);
    var s3 = new AmazonS3Client(RegionEndpoint.EUWest1);

    using (var sourceStream = new MemoryStream(data))
    using (var bitmap = new Bitmap(sourceStream))
    using (var uploadStream = new MemoryStream())
    {
        // Re-encode as JPEG into a separate stream and rewind it before uploading.
        bitmap.Save(uploadStream, ImageFormat.Jpeg);
        uploadStream.Position = 0;

        var titledRequest = new PutObjectRequest
        {
            CannedACL = S3CannedACL.PublicRead,
            InputStream = uploadStream,
            BucketName = myBucket,
            Key = keyName
        };
        s3.PutObject(titledRequest);
    }
}
It works fine, but I would like to avoid this approach: I have a constant, huge flow of images to store and delete, so it would kill my bandwidth.
Is it possible to bypass the "download" part? I mean, can I ask the AWS S3 server to download the image located at ad.PhotoUrl on its own? C# is not required for the remote request. I would just like to know whether it's possible so I can dig a little to find a solution.
To put it simply, I just want to say to AWS S3: "Hey, can you download this image and store it for me?" instead of: "Hey, here is the image I just downloaded, can you store it?"
S3 cannot do this (it really only does file storage), but you can solve it with a Lambda function that initiates the download and pushes the file into S3. The Lambda function in turn can be invoked via the AWS SDK or an API Gateway HTTP request.
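A minimal sketch of what such a Lambda function could look like in C#, assuming the AWSSDK.S3 and Amazon.Lambda.Core NuGet packages; the TransferRequest type, its property names, and the region are invented for illustration, and the usual assembly-level LambdaSerializer attribute is omitted for brevity:
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Amazon;
using Amazon.Lambda.Core;
using Amazon.S3;
using Amazon.S3.Model;

// Hypothetical request shape the function would receive.
public class TransferRequest
{
    public string SourceUrl { get; set; }
    public string BucketName { get; set; }
    public string Key { get; set; }
}

public class Function
{
    private static readonly HttpClient http = new HttpClient();
    private static readonly IAmazonS3 s3 = new AmazonS3Client(RegionEndpoint.EUWest1);

    public async Task FunctionHandler(TransferRequest request, ILambdaContext context)
    {
        // Lambda downloads the image itself, so your own server's bandwidth is untouched.
        using (var source = await http.GetStreamAsync(request.SourceUrl))
        using (var buffer = new MemoryStream())
        {
            await source.CopyToAsync(buffer);
            buffer.Position = 0;

            await s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = request.BucketName,
                Key = request.Key,
                InputStream = buffer,
                CannedACL = S3CannedACL.PublicRead
            });
        }
    }
}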
Environment: VS project, .NET, C#
I've implemented uploading documents to my Firebase Storage Bucket via the example in the link below:
How to Upload File to Firebase Storage in .Net C# Windows Form?
I'm trying to find documentation on how to use the same library/functionality to read a file that I've manually uploaded to my Bucket.
In essence: how to 'peek' or 'read' a file that is already on Storage? I basically want to query data inside an existing csv file.
So far I've found documentation only here, which doesn't provide much in terms of a possible solution, at least as far as I can understand it...
Firebase Storage Introduction
There is seemingly more related information on the same page under the 'Firebase Store' section, but that isn't the same as Firebase Storage :/
Any ideas?
Looking at the docs, it seems you can open files by downloading them.
var client = StorageClient.Create();
// Create a bucket with a globally unique name
var bucketName = Guid.NewGuid().ToString();
var bucket = client.CreateBucket(projectId, bucketName);
// Upload some files
var content = Encoding.UTF8.GetBytes("hello, world");
var obj1 = client.UploadObject(bucketName, "file1.txt", "text/plain", new MemoryStream(content));
var obj2 = client.UploadObject(bucketName, "folder1/file2.txt", "text/plain", new MemoryStream(content));
// List objects
foreach (var obj in client.ListObjects(bucketName, ""))
{
    Console.WriteLine(obj.Name);
}
// Download file
using (var stream = File.OpenWrite("file1.txt"))
{
    client.DownloadObject(bucketName, "file1.txt", stream);
}
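Since a Firebase Storage bucket is a regular Google Cloud Storage bucket, the same StorageClient can also read a CSV you uploaded manually. A minimal sketch, assuming a placeholder bucket and object name; the comma split is naive and just for illustration:
// Read an existing CSV object into memory instead of writing it to disk.
// "your-bucket" and "data/report.csv" are placeholders for your own bucket and object.
var client = StorageClient.Create();
using (var stream = new MemoryStream())
{
    client.DownloadObject("your-bucket", "data/report.csv", stream);
    stream.Position = 0;

    using (var reader = new StreamReader(stream))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            var fields = line.Split(',');   // naive split; use a CSV library for quoted fields
            Console.WriteLine(fields[0]);
        }
    }
}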
In short, I need to detect a webpage's GET requests programmatically.
The long story is that my company is currently trying to write a small installer for a piece of proprietary software that installs another piece of software.
To get this other piece of software, I realize it's as simple as calling the download link through C#'s lovely WebClient class (Dir is just the Temp directory in AppData/Local):
using (WebClient client = new WebClient())
{
    client.DownloadFile("[download link]", Dir.FullName + "\\setup.exe");
}
However, the page the installer comes from is not a direct download page. The actual download link is subject to change (our company's specific installer might be hosted on a different download server another time around).
To get around this, I realized that I can just monitor the GET requests the page makes and dynamically grab the URL from there.
So, I know what I'm going to do, but I was just wondering: is there a built-in part of the language that allows you to see what requests a page has made? Or do I have to write this functionality myself, and if so, what would be a good starting point?
I think I'd do it like this. First download the HTML contents of the download page (the page that contains the link to download the file). Then scrape the HTML to find the download link URL. And finally, download the file from the scraped address.
using (WebClient client = new WebClient())
{
    // Get the website HTML.
    string html = client.DownloadString("http://[website that contains the download link]");

    // Scrape the HTML to find the download URL (see below).

    // Download the desired file.
    client.DownloadFile(downloadLink, Dir.FullName + "\\setup.exe");
}
For scraping the download URL from the website I'd recommend using the HTML Agility Pack. See here for getting started with it.
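As a rough sketch of that scraping step with HtmlAgilityPack (the XPath below is a placeholder; inspect the real download page to find the right selector):
// Scrape the download URL from the HTML fetched above.
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);

// e.g. the first anchor whose href ends in ".exe" -- adjust to the page's actual markup.
var linkNode = doc.DocumentNode.SelectSingleNode("//a[contains(@href, '.exe')]");
string downloadLink = linkNode?.GetAttributeValue("href", "");

if (string.IsNullOrEmpty(downloadLink))
    throw new InvalidOperationException("Download link not found on the page.");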
I think you have to write your own "media handler", which returns an HttpResponseMessage.
e.g. with Web API 2:
[HttpGet]
[AllowAnonymous]
[Route("route")]
public HttpResponseMessage GetFile([FromUri] string path)
{
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new StreamContent(new FileStream(path, FileMode.Open, FileAccess.Read));

    string fileName = Path.GetFileNameWithoutExtension(path);
    string disposition = "attachment";
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue(disposition) { FileName = fileName + Path.GetExtension(path) };
    result.Content.Headers.ContentType = new MediaTypeHeaderValue(MimeMapping.GetMimeMapping(path));
    return result;
}
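For completeness, a caller could then fetch the file from that route, for example like this (the host name, route prefix, and path value are placeholders):
// Hypothetical client-side call to the endpoint above; remember to URL-encode the path parameter.
using (WebClient client = new WebClient())
{
    string url = "https://myserver/api/route?path=" + Uri.EscapeDataString(@"C:\files\report.pdf");
    client.DownloadFile(url, @"C:\Downloads\report.pdf");
}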
I have a website built in C# and MVC4 where users can upload large files, which are in turn sent to Amazon S3.
It is intermittent but I keep getting the following error:
"An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full 54.231.236.12:443"
I am currently passing an InputStream to the PutObject request for S3. Does anyone have any recommendations as to why the PutObject would be failing on these larger files?
Below is the code that I am using to send the HttpPostedFileBase to Amazon:
HttpPostedFileBase hpf = Request.Files[0] as HttpPostedFileBase;
if (hpf.ContentLength > 0)
{
    string accessKey = ConfigurationManager.AppSettings["Amazon_Access_Key"];
    string secretKey = ConfigurationManager.AppSettings["Amazon_Secret_Key"];
    AmazonS3Client client;
    var filePath = UserID + "/" + hpf.FileName;
    client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest1);
    PutObjectRequest request = new PutObjectRequest();
    request.BucketName = "MyBucket";
    request.CannedACL = S3CannedACL.PublicRead;
    request.Key = filePath;
    request.InputStream = hpf.InputStream;
    client.PutObject(request);
}
return Json(new { message = "chunk uploaded", name = name });
I'm not sure what the main issue is with uploading your large file to S3, but from the error message you posted it seems that you were trying to upload one big file in one shot. I suggest you upload your large file in chunks. I found some links that could help you achieve it:
Streaming
Multi part Upload
Multi part Concept
Multi part API
Forum about S3 Chunking
Hopefully these links help you!
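As a rough sketch of the multipart route using the SDK's TransferUtility, which splits large uploads into parts for you (the credentials, bucket name, key, and part size below mirror the question's code and are only assumptions):
// Requires the Amazon.S3.Transfer namespace from the AWS SDK for .NET.
var client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.USWest1);
using (var transferUtility = new TransferUtility(client))
{
    var uploadRequest = new TransferUtilityUploadRequest
    {
        BucketName = "MyBucket",
        Key = filePath,
        InputStream = hpf.InputStream,
        CannedACL = S3CannedACL.PublicRead,
        PartSize = 10 * 1024 * 1024   // upload in 10 MB parts
    };

    transferUtility.Upload(uploadRequest);
}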
I'm using the AWS SDK package from NuGet to download files from S3. This involves creating a GetObject request. Amazon has an example of how to do this in their documentation, although I'm actually using the async version of the method.
My code to download a file looks something like this:
using (var client = new AmazonS3Client(accessKey, secretAccessKey, RegionEndpoint.USEast1))
{
    var request = new GetObjectRequest
    {
        BucketName = "my-bucket",
        Key = "file.exe"
    };

    using (var response = await client.GetObjectAsync(request))
    {
        response.WriteResponseStreamToFile(@"C:\Downloads\file.exe");
    }
}
This works; it downloads the file successfully. However, it seems like a little bit of a black box, in that I never really know how long it's going to take to download the file. What I'm hoping to do is get some sort of Progress event so that I can display a nice WPF ProgressBar and watch the download progress. This means I would need to know the size of the file and the number of bytes downloaded, and I'm not sure if there's a way to do that with the AWS SDK.
You can do:
using (var response = client.GetObject(request))
{
    response.WriteObjectProgressEvent += Response_WriteObjectProgressEvent;
    response.WriteResponseStreamToFile(@"C:\Downloads\file.exe");
}

private static void Response_WriteObjectProgressEvent(object sender, WriteObjectProgressArgs e)
{
    Debug.WriteLine($"Transferred: {e.TransferredBytes}/{e.TotalBytes} - Progress: {e.PercentDone}%");
}
Can you hook into the WriteObjectProgressEvent? If you subscribe to this event, your handler will be called multiple times during the download. It will receive the number of bytes downloaded and remaining, so you can build a progress indicator.
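As a rough sketch of that same event wired into the async call from the question (Report is a stand-in for whatever updates your ProgressBar):
using (var response = await client.GetObjectAsync(request))
{
    response.WriteObjectProgressEvent += (sender, e) =>
    {
        // e.TotalBytes is the object size, e.TransferredBytes the bytes written so far.
        Report(e.PercentDone);   // e.g. marshal to the UI thread and update a WPF ProgressBar
    };

    await response.WriteResponseStreamToFileAsync(@"C:\Downloads\file.exe", false, CancellationToken.None);
}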
I am using https://github.com/rackspace/csharp-cloudfiles to build a command-line tool to upload files to Rackspace Cloud Files.
The thing is that I don't know how to track upload progress (there doesn't seem to be any kind of event for it).
Here's the code:
// Create credentials, client and connection
var creds = new UserCredentials(username, apiKey);
CF_Client client = new CF_Client();
Connection conn = new CF_Connection(creds, client);
conn.Authenticate();
// Get container and upload file
var container = new CF_Container(conn, client, containerName);
var obj = new CF_Object(conn, container, client, remoteFileName);
obj.WriteFromFile(localFilePath);
It doesn't look like there's one built in, no, but you could probably add your own.
An alternative would be to measure the input; if you look at the source you'll see that WriteFromFile is effectively just
Dictionary<string, string> headers = new Dictionary<string, string>();
using (Stream stream = System.IO.File.OpenRead(localFilePath))
{
    obj.Write(stream, headers);
}
so you could wrap the stream you pass to Write in another stream class that measures total-bytes-read progress (there are a few around if you search, or it'd be easy enough to write yourself; see the sketch below). If you did want to add progress notifications back from their code, you'd need to add them to the wrapped OpenStack Client object, but that shouldn't be too hard either.
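A minimal sketch of such a wrapper stream, plus how it might be used with the code above (only Read reports progress; everything else delegates to the wrapped stream, and the console output is just a placeholder for your own progress display):
using System;
using System.Collections.Generic;
using System.IO;

public class ProgressStream : Stream
{
    private readonly Stream inner;
    private readonly Action<long> onProgress;
    private long totalRead;

    public ProgressStream(Stream inner, Action<long> onProgress)
    {
        this.inner = inner;
        this.onProgress = onProgress;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = inner.Read(buffer, offset, count);
        totalRead += read;
        onProgress(totalRead);   // report bytes consumed so far
        return read;
    }

    // Delegate the remaining members to the wrapped stream; this wrapper is read-only.
    public override bool CanRead => inner.CanRead;
    public override bool CanSeek => inner.CanSeek;
    public override bool CanWrite => false;
    public override long Length => inner.Length;
    public override long Position { get { return inner.Position; } set { inner.Position = value; } }
    public override void Flush() { inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { return inner.Seek(offset, origin); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}

// Usage with the CF_Object from the snippet above:
using (var file = File.OpenRead(localFilePath))
using (var progress = new ProgressStream(file, bytes => Console.WriteLine(bytes + "/" + file.Length + " bytes read")))
{
    obj.Write(progress, new Dictionary<string, string>());
}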