I wanted to check the upload speed of the system.
void CheckUploadSpeed()
{
    // Note: don't dispose the WebClient in a using block here; UploadDataAsync
    // returns immediately, and disposing the client tears down the pending upload.
    var wc = new WebClient();

    IPv4InterfaceStatistics ipis = networkInterface.GetIPv4Statistics();
    BytesSentb4Upload = ipis.BytesSent;

    byte[] fileBytes;
    using (FileStream stream = File.OpenRead(string.Format("{0}speedtext.txt", path))) // speedtext.txt is a 5 MB file
    {
        fileBytes = new byte[stream.Length];
        stream.Read(fileBytes, 0, fileBytes.Length);
    }

    // Subscribe before starting the upload so no progress events are missed.
    wc.UploadProgressChanged += new UploadProgressChangedEventHandler(UploadProgressCallback);
    wc.UploadDataCompleted += wc_UploadDataCompleted;

    startTime = Environment.TickCount;
    wc.UploadDataAsync(new Uri("http://www.example.com/"), fileBytes);
    InternetSpeedResult = "Data upload started. Uploading 5MB file";
}
And in the UploadProgressChanged handler:
void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e)
{
    InternetSpeedResult = "Checking Upload Speed ... ";
    double endTime = Environment.TickCount;
    double secs = Math.Round(Math.Floor(endTime - startTime) / 1000, 0); // elapsed seconds
    if (secs >= 30)
    {
        UploadComplete(sender, e);
    }
}
This code mostly does what I need, but it does not give consistent results every time. Since I am relying on the total bytes sent within a fixed period of time, the numbers naturally vary. When the speed is very low (less than 512 Kbps) or very high (greater than 20 Mbps), it does not report the expected upload rate.
What should I do in the code so that I can rely on the results?
Is there any other approach to check the upload speed?
What approach should I take when the speed is very low (less than 512 Kbps) or very high (greater than 20 Mbps)?
I would not call this a solution; it is a small workaround that may help you get upload-speed figures you can rely on:
Increase the size of the uploaded file to somewhere around 100 MB.
Change the threshold to secs >= 60.
Now, whatever the speed of the network is: if it is low, the code measures for a full minute and reports the upload speed from that; if it is high, then either the whole 100 MB is transferred and you get the upload speed from the completed transfer, or you still get the speed after 60 seconds.
With a high-speed connection the problem is that the bandwidth is not at its optimum when the transfer begins; the speed ramps up as data starts flowing. Changing the cutoff to 60 seconds and increasing the file size therefore gives you results in both cases.
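On top of that, the UploadProgressChangedEventArgs already tells you how many bytes of this particular upload have gone out, which avoids mixing in other traffic the way the interface-level statistics do. A minimal sketch of the handler along those lines (reusing the startTime, InternetSpeedResult, and UploadComplete members from the question; the 60-second cutoff is the workaround described above):

void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e)
{
    double elapsedSecs = (Environment.TickCount - startTime) / 1000.0;
    if (elapsedSecs <= 0) return; // avoid division by zero on the first callback

    // e.BytesSent counts only this upload, unlike IPv4InterfaceStatistics.BytesSent.
    double kbps = (e.BytesSent * 8) / (elapsedSecs * 1000);
    InternetSpeedResult = string.Format("Upload speed so far: {0:F1} Kbps", kbps);

    if (elapsedSecs >= 60 || e.BytesSent == e.TotalBytesToSend)
        UploadComplete(sender, e);
}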
I have a C# Azure function that reads file content from Blob storage and writes it to an Azure Data Lake destination. The code works perfectly fine with large files (~8 MB and above), but with small files the destination file is written with 0 bytes. I tried changing the chunk size to a lower number and the parallel threads to 1, but the behavior remains the same. I am running the code from Visual Studio 2017.
Please find the code snippet I am using below. I have gone through the documentation on Parallel.ForEach limitations but didn't come across anything specific to file size issues. (https://learn.microsoft.com/en-us/dotnet/standard/parallel-programming/potential-pitfalls-in-data-and-task-parallelism)
int bufferLength = 1 * 1024 * 1024; // 1 MB chunk
long blobRemainingLength = blob.Properties.Length;
var outPutStream = new MemoryStream();
Queue<KeyValuePair<long, long>> queues = new Queue<KeyValuePair<long, long>>();
long offset = 0;
while (blobRemainingLength > 0)
{
    long chunkLength = Math.Min(bufferLength, blobRemainingLength);
    queues.Enqueue(new KeyValuePair<long, long>(offset, chunkLength));
    offset += chunkLength;
    blobRemainingLength -= chunkLength;
}
Console.WriteLine("Number of Queues: " + queues.Count);
Parallel.ForEach(queues,
    new ParallelOptions()
    {
        // Gets or sets the maximum number of concurrent tasks
        MaxDegreeOfParallelism = 10
    }, (queue) =>
    {
        using (var ms = new MemoryStream())
        {
            blob.DownloadRangeToStreamAsync(ms, queue.Key, queue.Value).GetAwaiter().GetResult();
            // mystream is the Azure Data Lake destination stream, created elsewhere.
            // Note: Parallel.ForEach gives no ordering guarantee, so chunks can be
            // appended out of order under this lock.
            lock (mystream)
            {
                var bytes = ms.ToArray();
                Console.WriteLine("Processing on thread {0}", Thread.CurrentThread.ManagedThreadId);
                mystream.Write(bytes, 0, bytes.Length);
            }
        }
    });
Appreciate all the help!!
I found the issue with my code. The ADL stream writer was not flushed and disposed properly. After adding the necessary code, parallelization works fine with both small and large files.
Thanks for the suggestions!!
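For anyone hitting the same symptom: the essence of the fix is to flush and dispose the Data Lake output stream once Parallel.ForEach has returned. A minimal sketch, assuming mystream is the destination stream from the snippet above (the exact creation API is not shown here):

// Hypothetical wrap-up after Parallel.ForEach completes. Without it, a small
// file's bytes can sit entirely in the writer's buffer, leaving a 0-byte file.
mystream.Flush();
mystream.Dispose();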
I know this has been asked before, but the marked solution is not correct. Usually this article is marked as the solution: https://learn.microsoft.com/en-us/archive/blogs/kwill/asynchronous-parallel-blob-transfers-with-progress-change-notification-2-0
It works and gives an actual progress value, but not real-time progress (and in some cases it gives a totally wrong value). Let me explain:
It reports progress on the local read buffer, so when I upload something, my first "uploaded value" is the total size of the read buffer. In my case this buffer is 4 MB, so every file smaller than 4 MB shows as completed in 0 seconds on the progress bar, while the real upload takes its actual time to complete.
Also, if you kill your connection just before the upload starts, it reports the first buffer size as actual progress; for my 1 MB file I get 100% progress while disconnected.
I found another article with another solution that reads the HTTP response from Azure every time a single block upload completes, but I need my blocks to be 4 MB (since the maximum block count for a single file is 50,000), and it is not a perfect solution even with a small block size.
The first article overrides the Stream class and creates a ProgressStream class with a ProgressChanged event that is triggered every time a read is done. Is there some way to know the actual uploaded bytes when that ProgressChanged is triggered?
You can do this using code similar to https://learn.microsoft.com/en-us/archive/blogs/kwill/asynchronous-parallel-block-blob-transfers-with-progress-change-notification (version 1.0 of the blog post you referenced), but instead of calling m_Blob.PutBlock you would upload the block with an HttpWebRequest, writing the request stream in chunks so you can report progress as the bytes actually go onto the wire. This introduces a lot more code complexity, and you would have to add some additional error handling.
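A rough sketch of that idea, with hypothetical names throughout (the block URL with its SAS token, the chunk size, and the progress callback are all placeholders; the required x-ms-* headers are omitted):

// Hypothetical: PUT one block via HttpWebRequest, reporting progress per chunk.
static void UploadBlockWithProgress(string blockUrlWithSas, byte[] blockData, Action<long> onBytesSent)
{
    var request = (HttpWebRequest)WebRequest.Create(blockUrlWithSas);
    request.Method = "PUT";
    request.ContentLength = blockData.Length;
    // Disable buffering so writes reflect network progress, not buffer fills.
    request.AllowWriteStreamBuffering = false;

    using (Stream requestStream = request.GetRequestStream())
    {
        const int chunkSize = 64 * 1024;
        for (int offset = 0; offset < blockData.Length; offset += chunkSize)
        {
            int count = Math.Min(chunkSize, blockData.Length - offset);
            requestStream.Write(blockData, offset, count);
            onBytesSent(offset + count); // cumulative bytes written so far
        }
    }
    using (var response = (HttpWebResponse)request.GetResponse()) { /* 201 Created expected */ }
}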
The alternative would be to download the Storage Client Library source code from GitHub and modify the block upload methods to track and report progress. The challenge you will face here is that you will have to make these same changes to every new version of the SCL if you plan on staying up to date with the latest fixes.
I must admit I didn't check whether everything is exactly as you want it, but here are my two cents on uploading with progress indication.
public async Task UploadVideoFilesToBlobStorage(List<VideoUploadModel> videos, CancellationToken cancellationToken)
{
    var blobTransferClient = new BlobTransferClient();
    // register events
    blobTransferClient.TransferProgressChanged += BlobTransferClient_TransferProgressChanged;
    // files
    _videoCount = _videoCountLeft = videos.Count;
    foreach (var video in videos)
    {
        var blobUri = new Uri(video.SasLocator);
        // create the sasCredentials
        var sasCredentials = new StorageCredentials(blobUri.Query);
        // get the URL without sasCredentials, so only path and filename
        var blobUriBaseFile = new Uri(blobUri.GetComponents(UriComponents.SchemeAndServer | UriComponents.Path,
            UriFormat.UriEscaped));
        // get the URL without the filename (needed for BlobTransferClient; seems like an issue to me)
        var blobUriBase = new Uri(blobUriBaseFile.AbsoluteUri.Replace("/" + video.Filename, ""));
        var blobClient = new CloudBlobClient(blobUriBaseFile, sasCredentials);
        // upload using a stream; the other overload of UploadBlob forces the online filename to be the local filename
        using (FileStream fs = new FileStream(video.FilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            await blobTransferClient.UploadBlob(blobUriBase, video.Filename, fs, null, cancellationToken, blobClient,
                new NoRetry(), "video/x-msvideo");
        }
        _videoCountLeft -= 1;
    }
    blobTransferClient.TransferProgressChanged -= BlobTransferClient_TransferProgressChanged;
}
private void BlobTransferClient_TransferProgressChanged(object sender, BlobTransferProgressChangedEventArgs e)
{
    // TimeRemaining.TotalSeconds, not .Seconds: .Seconds is only the seconds
    // component of the TimeSpan, not the whole remaining time.
    Console.WriteLine("progress, seconds remaining: " + e.TimeRemaining.TotalSeconds);
    double bytesTransfered = e.BytesTransferred;
    double bytesTotal = e.TotalBytesToTransfer;
    double thisProcent = bytesTransfered / bytesTotal;
    double procent = thisProcent;
    // divide by the number of videos
    int videosUploaded = _videoCount - _videoCountLeft;
    if (_videoCountLeft > 0)
    {
        procent = (thisProcent + videosUploaded) / _videoCount;
    }
    procent = procent * 100; // to a real percentage
    UploadProgressChangedEvent?.Invoke((int)procent, videosUploaded, _videoCount);
}
Microsoft.WindowsAzure.MediaServices.Client.BlobTransferClient should actually be able to do concurrent uploads: it has NumberOfConcurrentTransfers and ParallelTransferThreadCount properties, but there is no method for uploading multiple files yet, and I am not sure how to use them.
There is a bug in this BlobTransferClient: when uploading using the localFile parameter, it will use the filename of that file, while I gave permissions on a specific file name in the SasLocator.
This example shows how to upload from a client (not from the server), so we don't need any CloudMediaContext, which is usually the case. Everything about SasLocators can be found here.
We are using the NAudio stack, written in C#, and trying to capture audio in exclusive mode with PCM at 8 kHz and 16 bits per sample.
In the following function:
private void InitializeCaptureDevice()
{
    if (initialized)
        return;

    long requestedDuration = REFTIMES_PER_MILLISEC * 100;
    if (!audioClient.IsFormatSupported(AudioClientShareMode.Shared, WaveFormat) &&
        !audioClient.IsFormatSupported(AudioClientShareMode.Exclusive, WaveFormat))
    {
        throw new ArgumentException("Unsupported Wave Format");
    }

    var streamFlags = GetAudioClientStreamFlags();
    audioClient.Initialize(AudioClientShareMode.Shared,
        streamFlags,
        requestedDuration,
        requestedDuration,
        this.waveFormat,
        Guid.Empty);

    int bufferFrameCount = audioClient.BufferSize;
    this.bytesPerFrame = this.waveFormat.Channels * this.waveFormat.BitsPerSample / 8;
    this.recordBuffer = new byte[bufferFrameCount * bytesPerFrame];
    Debug.WriteLine(string.Format("record buffer size = {0}", this.recordBuffer.Length));
    initialized = true;
}
We configure the WaveFormat to (8000, 1) before calling this function and request a period of 100 ms.
We expected the system to allocate a 1600-byte buffer and a 100 ms interval, as requested. But we noticed the following:
1. The system set audioClient.BufferSize to 4800 frames, making this.recordBuffer an array of 9600 bytes (a buffer for 600 ms, not 100 ms).
2. The thread goes to sleep and then gets 2400 samples (4800 bytes), not frames of 1600 bytes as expected.
Any idea what is going on there?
You say you are capturing audio in exclusive mode, but in the example code you call the Initialize method with AudioClientShareMode.Shared. It strikes me as very unlikely that shared mode will let you work at 8 kHz. Unlike the wave... APIs, WASAPI does no resampling for you on playback or capture, so the sound card itself must be operating at the sample rate you specify.
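In other words, the Initialize call would need to pass the exclusive-mode flag to match the intent. A one-line sketch against the question's own fields (whether the device accepts 8 kHz in exclusive mode still depends on the hardware):

// Ask WASAPI for exclusive access so the 8 kHz / 16-bit format is used as-is;
// in shared mode the engine runs at its own mix format instead.
audioClient.Initialize(AudioClientShareMode.Exclusive,
    streamFlags,
    requestedDuration,
    requestedDuration,
    this.waveFormat,
    Guid.Empty);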
I am using this code to extract a chunk from a file:
// info is a FileInfo object pointing to the file
var percentSplit = info.Length * 50 / 100; // extract 50% of the file
var bytes = new byte[percentSplit];
var fileStream = File.OpenRead(fileName);
fileStream.Read(bytes, 0, bytes.Length);
fileStream.Dispose();
File.WriteAllBytes(splitName, bytes);
Is there any way to speed up this process?
Currently for a 530 MB file it takes around 4 - 5 seconds. Can this time be improved?
There are several cases to your question, and none of them is language-specific.
Here are some things to consider:
What is the file system of the source/destination file?
Do you want to keep the original source file?
Do they lie on the same drive?
In C#, you will hardly find a method faster than File.Copy, which invokes the CopyFile function of the Win API internally. However, because the split is fifty percent, the following code might not be faster: it copies the whole file and then truncates the destination to the desired length.
var info = new FileInfo(fileName);
var percentSplit = info.Length * 50 / 100; // extract 50% of the file
File.Copy(info.FullName, splitName);
using (var outStream = File.OpenWrite(splitName))
    outStream.SetLength(percentSplit);
Further, if
you don't need to keep the original source after the file is split,
the destination drive is the same as the source, and
you are not using a crypto/compression enabled file system,
then the best thing you can do is not copy the file at all.
For example, if your source file lies on a FAT or FAT32 file system, what you can do is:
create a new directory entry (or entries) for the newly split parts of the file,
let each entry point to the cluster of its target part,
set the correct file size for each entry, and
check for cross-links and avoid them.
If your file system is NTFS, you might need to spend a long time studying the spec.
Good luck!
var percentSplit = (int)(info.Length * 50 / 100); // extract 50% of the file
var buffer = new byte[8192];
using (Stream input = File.OpenRead(info.FullName))
using (Stream output = File.OpenWrite(splitName))
{
    int bytesRead = 1;
    while (percentSplit > 0 && bytesRead > 0)
    {
        bytesRead = input.Read(buffer, 0, Math.Min(percentSplit, buffer.Length));
        output.Write(buffer, 0, bytesRead);
        percentSplit -= bytesRead;
    }
    output.Flush();
}
The flush may not be needed, but it doesn't hurt. Interestingly, changing the loop from a while to a do-while had a big hit on performance; I suppose the generated IL is not as fast. My PC ran the original code in 4-6 seconds, while the code above ran in about 1 second.
I get better results when reading/writing in chunks of a few megabytes. Performance also changes depending on the size of the chunk.
FileInfo info = new FileInfo(@"C:\source.bin");
long split = info.Length * 50 / 100;
long chunk = 8000000; // ~8 MB chunks
long count = 0;
DateTime start = DateTime.Now;
using (BinaryReader br = new BinaryReader(File.OpenRead(info.FullName)))
using (BinaryWriter bw = new BinaryWriter(File.OpenWrite(@"C:\split.bin")))
{
    while (count < split)
    {
        if (count + chunk > split)
        {
            chunk = split - count; // last partial chunk
        }
        bw.Write(br.ReadBytes((int)chunk));
        count += chunk;
    }
}
Console.WriteLine(DateTime.Now - start);
I want to measure the current download speed. I'm sending a huge file over TCP. How can I capture the transfer rate every second? If I use IPv4InterfaceStatistics or a similar method, I capture the device transfer rate instead of the file transfer rate; the problem is that it counts all data going through the network device, not just the single file I transfer.
How can I capture the file transfer rate? I'm using C#.
Since you don't have control over how much the stream reads at once, you can take a timestamp before and after a stream read and then calculate the speed from the bytes received or sent:
using System.IO;
using System.Diagnostics;

// some code here...

Stopwatch stopwatch = new Stopwatch();
byte[] buffer = new byte[1024]; // 1 KB buffer
long totalBytes = 0; // running total, useful for an overall average

// Beginning of the loop
stopwatch.Restart();
// Read at offset 0 of the buffer; the offset parameter indexes into the
// buffer, not into the stream, so it must not grow past the buffer size.
int actualReadBytes = myStream.Read(buffer, 0, buffer.Length);
stopwatch.Stop();
totalBytes += actualReadBytes;

// We have read 'actualReadBytes' bytes in 'stopwatch.ElapsedMilliseconds' ms;
// bits per millisecond equals kilobits per second.
if (stopwatch.ElapsedMilliseconds > 0)
{
    long speed = (actualReadBytes * 8L) / stopwatch.ElapsedMilliseconds; // Kbps
}
// End of the loop
You should put the Stream.Read in a try/catch and handle read exceptions. The same applies when writing to streams and calculating the speed; just these two lines change:
myStream.Write(buffer, 0, buffer.Length);
long speed = (buffer.Length * 8L) / stopwatch.ElapsedMilliseconds; // Kbps
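If you want one figure per second rather than a sample per read, a small variation (names are hypothetical) accumulates the bytes moved and reports each time roughly a second has elapsed:

// Hypothetical per-second aggregation around the same transfer loop.
long bytesThisSecond = 0;
Stopwatch secondTimer = Stopwatch.StartNew();

// Inside the loop, after each successful Read/Write:
bytesThisSecond += actualReadBytes;
if (secondTimer.ElapsedMilliseconds >= 1000)
{
    // bits per millisecond equals kilobits per second
    double kbps = (bytesThisSecond * 8.0) / secondTimer.ElapsedMilliseconds;
    Console.WriteLine("Current speed: {0:F1} Kbps", kbps);
    bytesThisSecond = 0;
    secondTimer.Restart();
}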