Track progress when uploading a file to Rackspace Cloud Files in C#

I am using https://github.com/rackspace/csharp-cloudfiles to build a command-line tool to upload files to Rackspace Cloud Files.
The thing is that I don't know how to track upload progress (there doesn't seem to be any kind of event or something).
Here's the code:
// Create credentials, client and connection
var creds = new UserCredentials(username, apiKey);
CF_Client client = new CF_Client();
Connection conn = new CF_Connection(creds, client);
conn.Authenticate();
// Get container and upload file
var container = new CF_Container(conn, client, containerName);
var obj = new CF_Object(conn, container, client, remoteFileName);
obj.WriteFromFile(localFilePath);

It doesn't look like there's one built in, no, but you could probably add your own.
An alternative would be to measure the input; if you look at the source you'll see that WriteFromFile is effectively just
Dictionary<string,string> headers = new Dictionary<string,string>();
using(Stream stream = System.IO.File.OpenRead(localFilePath))
{
obj.Write(stream, headers);
}
so you could wrap the stream you pass to Write in another stream class that measures total-bytes-read progress (there are a few around if you search, or it'd be easy enough to write yourself). If you did want to add progress notifications back from their code, you'd need to add them to the wrapped OpenStack Client object, but that shouldn't be too hard either.
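For example, a minimal read-progress wrapper might look like this (a sketch; ProgressStream and its callback are names made up for illustration, not part of the csharp-cloudfiles library):

public class ProgressStream : Stream
{
    private readonly Stream _inner;
    private readonly Action<long> _onProgress; // receives total bytes read so far
    private long _totalRead;

    public ProgressStream(Stream inner, Action<long> onProgress)
    {
        _inner = inner;
        _onProgress = onProgress;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = _inner.Read(buffer, offset, count);
        _totalRead += read;
        _onProgress(_totalRead);
        return read;
    }

    // Everything else delegates to (or is unsupported on) the wrapped stream.
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { _inner.Position = value; }
    }
    public override void Flush() { _inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}

Then, instead of calling obj.WriteFromFile(localFilePath), open the stream yourself and report progress as it is consumed:

using (var stream = File.OpenRead(localFilePath))
using (var progress = new ProgressStream(stream, total =>
    Console.WriteLine("{0}/{1} bytes sent", total, stream.Length)))
{
    obj.Write(progress, new Dictionary<string, string>());
}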

Related

How to send a continuous stream to an ASP.NET Core REST API

I am trying to send a continuous stream from a C# application to an ASP.NET Core REST API.
I define a continuous stream as, for example, someone talking into a microphone with the sound being sent directly (without being saved to a local file) to the REST API to be saved to a file.
I have been searching a lot on Google for something like that and so far I could not find anything really useful.
I have been trying to emulate it by sending a large file (297MB).
This is what I have so far for the client side:
string TARGETURL = "http://localhost:58000/api/file/";
string filePath = @"G:\Voice\Samples\The Monkey's Paw.wav";
byte[] fileContent = File.ReadAllBytes(filePath);
var dummyStream = new MemoryStream(fileContent);
var inputData = new StreamContent(dummyStream);
HttpResponseMessage response = this._httpClient.PostAsync(TARGETURL, inputData).Result;
HttpContent result = response.Content;
if (response.IsSuccessStatusCode)
{
string contents = result.ReadAsStringAsync().Result;
}
else
{
// do something
}
And for the server side:
[Route("")]
[HttpPost]
public async Task<JsonResult> Post()
{
Dictionary<string, object> rv = new Dictionary<string, object>();
try
{
string file = Path.Combine(@"G:\Voice\Samples\dummy.txt");
using (FileStream fs = new FileStream(file, FileMode.Create, FileAccess.Write,
FileShare.None, 4096, useAsync: true))
{
await Request.Body.CopyToAsync(fs);
}
// complete the transaction
rv.Add("success", true);
rv.Add("error", "");
}
catch(Exception ex)
{
}
return Json(rv);
}
When I am sending the file, the server throws the following exception:
The request's Content-Length 304137380 is larger than the request body size limit 30000000.
I know that I could increase the body size limit, but that's not a long-term solution, as the stream length could grow beyond any limit I set.
That's why I am trying to find a solution that sends the stream in chunks for the server to rebuild and write to a file.
What you probably want to do is use a different network stack. A web application is always going to try to fit everything into HTTP, which is a very specific way to communicate, and REST is built on top of those ideas as well. Things on the Internet are generally thought of as documents with references, and REST is an extension of that idea.
It does however sit on top of some other great technologies that might suit your need better.
There's nothing to stop you using the internet, but maybe you need to look at a UDP- or TCP-level implementation. Be aware that you will still be sending information in packets; there is no such thing as a constant stream of bits on the internet. A sound wave in the real world is an infinite thing, but computers are rubbish at that.
Maybe start by taking a look at using sockets and a library like NAudio.
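As a very rough sketch of that idea (the host and port are hypothetical; NAudio's WaveInEvent raises a DataAvailable event with each captured buffer, which you can write straight to a TCP connection instead of a file):

TcpClient client = new TcpClient("localhost", 9000); // hypothetical receiver endpoint
NetworkStream network = client.GetStream();

var waveIn = new WaveInEvent(); // from NAudio.Wave
waveIn.DataAvailable += (s, e) =>
{
    // Forward each captured audio buffer as soon as it arrives; nothing is written to disk.
    network.Write(e.Buffer, 0, e.BytesRecorded);
};
waveIn.StartRecording();

The receiving end would read from its socket in a loop and append to a file; you would need to add your own framing or a stop signal on top of the raw bytes.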

Can I monitor the progress of an S3 download using the AWS SDK?

I'm using the AWS SDK package from Nuget to download files from S3. This involves creating a GetObject request. Amazon has an example of how to do this in their documentation, although I'm actually using the async version of the method.
My code to download a file looks something like this:
using (var client = new AmazonS3Client(accessKey, secretAccessKey, RegionEndpoint.USEast1))
{
var request = new GetObjectRequest
{
BucketName = "my-bucket",
Key = "file.exe"
};
using (var response = await client.GetObjectAsync(request))
{
response.WriteResponseStreamToFile(@"C:\Downloads\file.exe");
}
}
This works; it downloads the file successfully. However, it seems like a little bit of a black box, in that I never really know how long it's going to take to download the file. What I'm hoping to do is get some sort of Progress event so that I can display a nice WPF ProgressBar and watch the download progress. This means I would need to know the size of the file and the number of bytes downloaded, and I'm not sure if there's a way to do that with the AWS SDK.
You can do:
using (var response = client.GetObject(request))
{
response.WriteObjectProgressEvent += Response_WriteObjectProgressEvent;
response.WriteResponseStreamToFile(@"C:\Downloads\file.exe");
}
private static void Response_WriteObjectProgressEvent(object sender, WriteObjectProgressArgs e)
{
Debug.WriteLine($"Transferred: {e.TransferredBytes}/{e.TotalBytes} - Progress: {e.PercentDone}%");
}
Can you hook into the WriteObjectProgressEvent event? If you subscribe to it, your handler will be called multiple times during the download. It receives the number of bytes transferred and the total, so you can build a progress indicator.
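If the download runs off the UI thread, the event will fire there too, so updates to a WPF ProgressBar need a dispatcher hop; something like this (a sketch, assuming a ProgressBar named downloadProgress on the window):

private void Response_WriteObjectProgressEvent(object sender, WriteObjectProgressArgs e)
{
    Dispatcher.Invoke(() =>
    {
        downloadProgress.Maximum = e.TotalBytes;
        downloadProgress.Value = e.TransferredBytes;
    });
}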

How to recreate a Zip File from a MemoryStream

I've looked around for a solution to my issue but no-one seems to be aiming for quite what I'm trying to achieve.
My problem is this: I have zip files stored in Azure Blob storage, and for security's sake we have a Web API 2 controller action that provisions these zip files rather than allowing direct downloading. This action retrieves the blob and downloads it to a stream so that it can be packaged within an HttpResponseMessage.
All of the above works; however, when I attempt to recreate the zip file, I'm informed it's corrupted. For now I'm just attempting to have the server (running on localhost) create the zip file, whereas the endgame is to have remote client applications do it (I'm fairly certain the solution to my issue on the server would be the same).
public class FileActionResult : IHttpActionResult
{
private HttpRequestMessage _request;
private ICloudBlob _blob;
public FileActionResult(HttpRequestMessage request, ICloudBlob blob)
{
_request = request;
_blob = blob;
}
public async Task<HttpResponseMessage> ExecuteAsync(System.Threading.CancellationToken cancellationToken)
{
var fileStream = new MemoryStream();
await _blob.DownloadToStreamAsync(fileStream);
var byteTest = new byte[fileStream.Length];
var test = fileStream.Read(byteTest, 0, (int)fileStream.Length);
try
{
File.WriteAllBytes(@"C:\testPlatFiles\test.zip", byteTest);
}
catch(ArgumentException ex)
{
var a = ex;
}
var response = _request.CreateResponse(HttpStatusCode.Accepted);
response.Content = new StreamContent(fileStream);
response.Content.Headers.ContentLength = _blob.Properties.Length;
response.Content.Headers.ContentType = new MediaTypeHeaderValue(_blob.Properties.ContentType);
//set the fileName
response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
{
FileName = _blob.Name,
Size = _blob.Properties.Length
};
return response;
}
}
I've looked into zip libraries to see if any offer a solution for converting the stream of a zip back into a zip file, but all I can find is reading zip files into streams, or creating them in order to provision a file download rather than a file create.
Any help would be much appreciated, thank you.
Use DotNetZip. Its ZipFile class has a static factory method that should do what you want: ZipFile.Read(Stream zipStream) reads the given stream as a zip file and gives you back a ZipFile instance (which you can use for whatever).
However, if your Stream contains the raw zip data and all you want to do is persist it to disk, you should just be able to write the bytes straight to disk.
If you're getting 'zip file corrupted' errors, I'd look at the content encoding used to send the data to Azure and the content encoding it's sent back with. You should be sending it up to Azure with a content type of application/zip or application/octet-stream and possibly adding metadata to the Azure blob entry to send it down the same way.
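One more thing worth checking in the code above: after DownloadToStreamAsync completes, the MemoryStream's Position sits at the end of the data, so the subsequent fileStream.Read call reads zero bytes into byteTest, and the StreamContent serves nothing as well. Rewinding the stream may be all that's missing; a sketch of the same steps with the rewind added:

var fileStream = new MemoryStream();
await _blob.DownloadToStreamAsync(fileStream);

// Rewind: DownloadToStreamAsync leaves Position at the end of the stream.
fileStream.Position = 0;
var byteTest = new byte[fileStream.Length];
fileStream.Read(byteTest, 0, (int)fileStream.Length);
File.WriteAllBytes(@"C:\testPlatFiles\test.zip", byteTest);

// Rewind again before handing the same stream to StreamContent.
fileStream.Position = 0;
response.Content = new StreamContent(fileStream);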
Edited To Note: DotNetZip used to live at Codeplex. Codeplex has been shut down. The old archive is still available at Codeplex. It looks like the code has migrated to Github:
https://github.com/DinoChiesa/DotNetZip. Looks to be the original author's repo.
https://github.com/haf/DotNetZip.Semverd. This looks to be the currently maintained version. It's also packaged up and available via NuGet at https://www.nuget.org/packages/DotNetZip/

HttpClient Upload Multiple Files - can I avoid server bottleneck?

So I currently have 3 sets of small files being generated (think about 1-2 per second per set), which I'd like to upload to a server in a timely manner. The files aren't large, but due to the number of them, it seems to clog up the server. Here's my current implementation:
System.Net.Http.HttpRequestMessage request = new System.Net.Http.HttpRequestMessage(System.Net.Http.HttpMethod.Post, _serverUri);
MultipartFormDataContent form = new MultipartFormDataContent();
request.Content = form;
HttpContent content1 = new ByteArrayContent(filePart1);
content1.Headers.ContentDisposition = new ContentDispositionHeaderValue("filePart1");
content1.Headers.ContentDisposition.FileName = "filePart1";
form.Add(content1);
HttpContent content2 = new ByteArrayContent(filePart2);
content2.Headers.ContentDisposition = new ContentDispositionHeaderValue("filePart2");
content2.Headers.ContentDisposition.FileName = "filePart2";
form.Add(content2);
_httpClient.SendAsync(request).ContinueWith((response) =>
{
ProcessResponse(response.Result);
});
Where filePart1 and filePart2 represent a file from one of the data sets; there are three of these, one for each set of files, running concurrently.
What I'd like to know is if there's a better way to accomplish this, since the current method seems to clog up the server with its bombardment of files. Would it be possible to open a stream for each set of files and send each file as a chunk? Should I wait for the server to respond before sending the next file?

Silverlight Loading Reference Data On Demand from a 'dumb' server

I have a text file with a list of 300,000 words and the frequency with which they occur. Each line is in the format Word:FrequencyOfOccurrence.
I want this information to be accessible from within the C# code. I can't hard-code the list since it is too long, and I'm not sure how to go about accessing it from a file on the server. I'd ideally like the information to be downloaded only if it's used (to save on bandwidth), but this is not a high priority as the file is not too big and internet speeds are always increasing.
It doesn't need to be useable for binding.
The information does not need to be editable once the project has been built.
Here is another alternative. Zip the file up and stick it in the ClientBin folder next to the application XAP. Then, at the point in the app where the content is needed, do something like this:
public void GetWordFrequencyResource(Action<string> callback)
{
WebClient client = new WebClient();
client.OpenReadCompleted += (s, args) =>
{
try
{
var zipRes = new StreamResourceInfo(args.Result, null);
var txtRes = Application.GetResourceStream(zipRes, new Uri("WordFrequency.txt", UriKind.Relative));
string result = new StreamReader(txtRes.Stream).ReadToEnd();
callback(result);
}
catch
{
callback(null); //Fetch failed.
}
};
client.OpenReadAsync(new Uri("WordFrequency.zip", UriKind.Relative));
}
Usage:-
var wordFrequency = new Dictionary<string, int>();
GetWordFrequencyResource(s =>
{
// Code here to burst string into dictionary.
});
// Note: this code runs asynchronously with the building of the dictionary;
// don't attempt to use the dictionary here.
The above code allows you to store the file in an efficient zip format, but not in the XAP itself; hence you can download it on demand. It makes use of the fact that a XAP is a zip file, so Application.GetResourceStream, which is designed to pull resources from XAP files, can be used on a zip file.
BTW, I'm not actually suggesting you use a dictionary; I'm just using one as a simple example. In reality I would imagine the file is in sorted order. If that is the case, you could use a KeyValuePair<string, int> for each entry, but create a custom collection type that holds them in an array or List and then use binary search methods to index into it, as sketched below.
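A sketch of that sorted-array idea (the type is hypothetical; it assumes the entries are loaded already sorted by word using ordinal comparison):

public class WordFrequencyLookup
{
    private readonly KeyValuePair<string, int>[] _entries; // sorted by Key

    public WordFrequencyLookup(KeyValuePair<string, int>[] sortedEntries)
    {
        _entries = sortedEntries;
    }

    public int GetFrequency(string word)
    {
        // Standard binary search over the sorted array.
        int lo = 0, hi = _entries.Length - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;
            int cmp = string.CompareOrdinal(_entries[mid].Key, word);
            if (cmp == 0) return _entries[mid].Value;
            if (cmp < 0) lo = mid + 1;
            else hi = mid - 1;
        }
        return 0; // word not in the list
    }
}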
Based on your comments, you could download the word list file if you are required to have a very thin server layer. The XAP file containing your Silverlight application is nothing more than a ZIP file with all the referenced files for your Silverlight client layer. Try adding the word list as content that gets compiled into the XAP and see how big the file gets. Text usually compresses really well. In general, though, you'll want to be friendly with your users in how much memory your application consumes. Loading a huge text file into memory, in addition to everything else your app needs, may ultimately make your app a resource hog.
A better practice, in general, would be to call a web service. The service would perform whatever lookup logic you need. Here's a blog post from a quick search that should get you started (this was written for SL2 but should apply the same for SL3):
Calling web services with Silverlight 2
Even better would be to store your list in SQL Server. It will be much easier and quicker to query.
You could create a WCF service on the server side that will send the data to the Silverlight application. Once you retrieve the information you could cache it in-memory inside the client. Here's an example of calling a WCF service method from Silverlight.
Another possibility is to embed the text file into the Silverlight assembly that is deployed to the client:
using (var stream = Assembly.GetExecutingAssembly()
.GetManifestResourceStream("namespace.data.txt"))
using (var reader = new StreamReader(stream))
{
string data = reader.ReadToEnd();
// Do something with the data
}
