How to Upload Data to Azure Storage in chunks/blocks? - C#

I'm uploading images to Azure Storage and need to implement chunking to handle large images.
But I'm getting an exception like "Offset and length out of bounds".
int fileSize = imageModel.ImageByteArray.Length;
int blockSize = 100 * 1024, numberOfBlocks = 0;

if (fileSize < blockSize)
    blockSize = fileSize;

if (fileSize % blockSize == 0)
    numberOfBlocks = fileSize / blockSize;
else
    numberOfBlocks = fileSize / blockSize + 1;

byte[] blockArray = new byte[blockSize];

for (int i = 0; i < numberOfBlocks; i++)
{
    Array.Copy(imageModel.ImageByteArray, i * blockSize, blockArray, 0, blockSize);
    await blob.UploadFromByteArrayAsync(blockArray, i, imageModel.ImageByteArray.Length);
    Progress = (float)i / numberOfBlocks;
}

var ResponseURL = blob.Uri.OriginalString;
Can anyone help me with the issue?

This code always copies blockSize bytes:
Array.Copy(imageModel.ImageByteArray, i * blockSize, blockArray, 0, blockSize);
However, the last block may not be a full blockSize:
if (fileSize % blockSize == 0)
    numberOfBlocks = fileSize / blockSize;
else
    numberOfBlocks = fileSize / blockSize + 1;
To fix this, you'll need to handle the last block specially. It may be a full blockSize bytes or it may be less.
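A minimal sketch of how the loop could look with the last block handled; it assumes the same fileSize, blockSize, numberOfBlocks, imageModel and blob as in the question, and only changes how many bytes are copied and uploaded each iteration:

for (int i = 0; i < numberOfBlocks; i++)
{
    // The final block may contain fewer than blockSize bytes.
    int currentBlockSize = Math.Min(blockSize, fileSize - i * blockSize);

    byte[] blockArray = new byte[currentBlockSize];
    Array.Copy(imageModel.ImageByteArray, i * blockSize, blockArray, 0, currentBlockSize);

    // Start at index 0 of blockArray and upload exactly currentBlockSize bytes.
    await blob.UploadFromByteArrayAsync(blockArray, 0, currentBlockSize);

    Progress = (float)(i + 1) / numberOfBlocks;
}

Note that UploadFromByteArrayAsync overwrites the blob's content on every call, so this loop by itself would leave only the last block stored; to genuinely upload block by block with the same SDK you would typically stage each piece with PutBlockAsync and then commit the block list with PutBlockListAsync.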

Related

C# - upload file by chunks - bad last chunk size

I am trying to upload large files to a 3rd-party service in chunks, but I have a problem with the last chunk. The last chunk should always be smaller than 5 MB, yet every chunk, including the last one, ends up the same size: 5 MB.
My code:
int chunkSize = 1024 * 1024 * 5;
using (Stream streamx = new FileStream(file.Path, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[chunkSize];
    int bytesRead = 0;
    long bytesToRead = streamx.Length;

    while (bytesToRead > 0)
    {
        int n = streamx.Read(buffer, 0, chunkSize);
        if (n == 0) break;

        // do work on buffer...
        // uploading chunk ....
        var partRequest = HttpHelpers.InvokeHttpRequestStream
        (
            new Uri(endpointUri + "?partNumber=" + i + "&uploadId=" + UploadId),
            "PUT",
            partHeaders,
            buffer
        ); // upload buffer

        bytesRead += n;
        bytesToRead -= n;
    }
    streamx.Dispose();
}
The buffer is uploaded to the 3rd-party service.

Solved: someone posted updated code in a comment but deleted it a few seconds later, and that comment contained the solution. I added the following right after the if (n == 0) check; it resizes the last, incomplete chunk to the correct size:

// Let's resize the last incomplete buffer
if (n != buffer.Length)
    Array.Resize(ref buffer, n);

Thank you all. Here is the full working code:
int chunkSize = 1024 * 1024 * 5;
using (Stream streamx = new FileStream(file.Path, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[chunkSize];
    int bytesRead = 0;
    long bytesToRead = streamx.Length;

    while (bytesToRead > 0)
    {
        int n = streamx.Read(buffer, 0, chunkSize);
        if (n == 0) break;

        // Let's resize the last incomplete buffer
        if (n != buffer.Length)
            Array.Resize(ref buffer, n);

        // do work on buffer...
        // uploading chunk ....
        var partRequest = HttpHelpers.InvokeHttpRequestStream
        (
            new Uri(endpointUri + "?partNumber=" + i + "&uploadId=" + UploadId),
            "PUT",
            partHeaders,
            buffer
        ); // upload buffer

        bytesRead += n;
        bytesToRead -= n;
    }
}
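An alternative that avoids resizing the shared buffer (Array.Resize allocates a new array when it shrinks) is to copy only the bytes actually read into a right-sized array for the final part. A sketch of just the loop body, using the same HttpHelpers call, endpointUri, partHeaders, i and UploadId as above:

int n = streamx.Read(buffer, 0, chunkSize);
if (n == 0) break;

// Send only the n bytes that were actually read; full chunks reuse the buffer as-is.
byte[] part = buffer;
if (n != buffer.Length)
{
    part = new byte[n];
    Array.Copy(buffer, part, n);
}

var partRequest = HttpHelpers.InvokeHttpRequestStream
(
    new Uri(endpointUri + "?partNumber=" + i + "&uploadId=" + UploadId),
    "PUT",
    partHeaders,
    part
); // upload exactly n bytes

bytesRead += n;
bytesToRead -= n;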

Find bytes from an offset

So I have this:
public static long FindPosition(Stream stream, byte[] byteSequence)
{
    if (byteSequence.Length > stream.Length)
        return -1;

    byte[] buffer = new byte[byteSequence.Length];

    using (BufferedStream bufStream = new BufferedStream(stream, byteSequence.Length))
    {
        int i;
        while ((i = bufStream.Read(buffer, 0, byteSequence.Length)) == byteSequence.Length)
        {
            if (byteSequence.SequenceEqual(buffer))
                return bufStream.Position - byteSequence.Length;
            else
                bufStream.Position -= byteSequence.Length - PadLeftSequence(buffer, byteSequence);
        }
    }
    return -1;
}

private static int PadLeftSequence(byte[] bytes, byte[] seqBytes)
{
    int i = 1;
    while (i < bytes.Length)
    {
        int n = bytes.Length - i;
        byte[] aux1 = new byte[n];
        byte[] aux2 = new byte[n];
        Array.Copy(bytes, i, aux1, 0, n);
        Array.Copy(seqBytes, aux2, n);
        if (aux1.SequenceEqual(aux2))
            return i;
        i++;
    }
    return i;
}
This works perfectly for getting the offset at which a specific sequence of bytes occurs, but now I want to do the inverse: get the bytes at a specific offset.
How can I do that?
Try this example:
long offset = 100L;    // Offset
int bytesCount = 20;   // Number of bytes to read
byte[] buffer = new byte[bytesCount];

stream.Seek(offset, SeekOrigin.Begin);  // Set offset from the beginning of the stream
stream.Read(buffer, 0, bytesCount);     // Read bytesCount bytes from that offset
More detail about Seek and Read
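If what you need is to continue searching for a byte sequence starting at a given offset, you can also combine the two: seek first, then reuse the FindPosition method from the question. A minimal sketch (FindPositionFrom is a hypothetical helper name; it assumes a seekable stream):

// Hypothetical helper: look for byteSequence starting at startOffset
// instead of at the stream's current position.
public static long FindPositionFrom(Stream stream, byte[] byteSequence, long startOffset)
{
    stream.Seek(startOffset, SeekOrigin.Begin); // start the scan at startOffset
    return FindPosition(stream, byteSequence);  // reuse the method from the question
}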

What am I doing wrong when parsing a wav file?

I'm trying to parse a wav file. I'm not sure if there can be multiple data chunks in a wav file, but I originally assumed there was only 1 since the wav file format description I was reading only mentioned there being 1.
But I noticed that the subchunk2size was very small (like 26) when the wav file being parsed was something like 36MB and the sample rate was 44100.
So I tried to parse it assuming there were multiple chunks, but after the 1st chunk, there was no subchunk2id to be found.
To go chunk by chunk, I was using the below code
int chunkSize = System.BitConverter.ToInt32(strm, 40);
int widx = 44; //wav data starts at the 44th byte
//strm is a byte array of the wav file
while(widx < strm.Length)
{
widx += chunkSize;
if(widx < 1000)
{
//log "data" or "100 97 116 97" for the subchunkid
//This is only getting printed the 1st time though. All prints after that are garbage
Debug.Log( strm[widx] + " " + strm[widx+1] + " " + strm[widx+2] + " " + strm[widx+3]);
}
if(widx + 8 < strm.Length)
{
widx += 4;
chunkSize = System.BitConverter.ToInt32(strm, widx);
widx += 4;
}else
{
widx += 8;
}
}
A .wav file has 3 chunks:
Each chunk starts with a 4-byte chunk ID followed by a 4-byte chunk size.
The first chunk is the "RIFF" chunk. After the ID and size it contains 8 bytes: the file size (4 bytes) and the name of the format (4 bytes, usually "WAVE").
The next chunk is the "fmt " chunk (the space in the chunk name is important). It contains the audio format (2 bytes), the number of channels (2 bytes), the sample rate (4 bytes), the byte rate (4 bytes), the block align (2 bytes) and the bits per sample (2 bytes).
The third and last chunk is the data chunk. It holds the actual sample data, i.e. the amplitudes of the samples. Its 4-byte size field gives the number of bytes of data.
You can find further explanations of the properties of a .wav file here.
From this knowledge I have already created the following class:
public sealed class WaveFile
{
    // privates
    private int fileSize;
    private string format;
    private int fmtChunkSize;
    private int audioFormat;
    private int numChannels;
    private int sampleRate;
    private int byteRate;
    private int blockAlign;
    private int bitsPerSample;
    private int dataSize;
    private int[][] data; // One array per channel

    // publics
    public int FileSize => fileSize;
    public string Format => format;
    public int FmtChunkSize => fmtChunkSize;
    public int AudioFormat => audioFormat;
    public int NumChannels => numChannels;
    public int SampleRate => sampleRate;
    public int ByteRate => byteRate;
    public int BitsPerSample => bitsPerSample;
    public int DataSize => dataSize;
    public int[][] Data => data;

    public WaveFile(string path)
    {
        FileStream fs = File.OpenRead(path);
        LoadChunk(fs); // read RIFF chunk
        LoadChunk(fs); // read fmt chunk
        LoadChunk(fs); // read data chunk
        fs.Close();
    }

    private void LoadChunk(FileStream fs)
    {
        ASCIIEncoding Encoder = new ASCIIEncoding();

        byte[] bChunkID = new byte[4];
        fs.Read(bChunkID, 0, 4);
        string sChunkID = Encoder.GetString(bChunkID);

        byte[] ChunkSize = new byte[4];
        fs.Read(ChunkSize, 0, 4);

        if (sChunkID.Equals("RIFF"))
        {
            fileSize = BitConverter.ToInt32(ChunkSize, 0);
            byte[] Format = new byte[4];
            fs.Read(Format, 0, 4);
            this.format = Encoder.GetString(Format);
        }

        if (sChunkID.Equals("fmt "))
        {
            fmtChunkSize = BitConverter.ToInt32(ChunkSize, 0);

            byte[] audioFormat = new byte[2];
            fs.Read(audioFormat, 0, 2);
            this.audioFormat = BitConverter.ToInt16(audioFormat, 0);

            byte[] numChannels = new byte[2];
            fs.Read(numChannels, 0, 2);
            this.numChannels = BitConverter.ToInt16(numChannels, 0);

            byte[] sampleRate = new byte[4];
            fs.Read(sampleRate, 0, 4);
            this.sampleRate = BitConverter.ToInt32(sampleRate, 0);

            byte[] byteRate = new byte[4];
            fs.Read(byteRate, 0, 4);
            this.byteRate = BitConverter.ToInt32(byteRate, 0);

            byte[] blockAlign = new byte[2];
            fs.Read(blockAlign, 0, 2);
            this.blockAlign = BitConverter.ToInt16(blockAlign, 0);

            byte[] bitsPerSample = new byte[2];
            fs.Read(bitsPerSample, 0, 2);
            this.bitsPerSample = BitConverter.ToInt16(bitsPerSample, 0);
        }

        if (sChunkID.Equals("data"))
        {
            dataSize = BitConverter.ToInt32(ChunkSize, 0);
            data = new int[this.numChannels][];
            byte[] temp = new byte[dataSize];

            for (int i = 0; i < this.numChannels; i++)
            {
                data[i] = new int[this.dataSize / (numChannels * bitsPerSample / 8)];
            }

            for (int i = 0; i < data[0].Length; i++)
            {
                for (int j = 0; j < numChannels; j++)
                {
                    if (fs.Read(temp, 0, blockAlign / numChannels) > 0)
                    {
                        // 2 bytes per sample means 16-bit samples; otherwise assume 32-bit
                        if (blockAlign / numChannels == 2)
                            data[j][i] = BitConverter.ToInt16(temp, 0);
                        else
                            data[j][i] = BitConverter.ToInt32(temp, 0);
                    }
                }
            }
        }
    }
}
Needed using-directives:
using System;
using System.IO;
using System.Text;
This class reads all chunks byte by byte and sets the properties. You just have to instantiate it and it will expose all properties of the selected wave file.
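Usage is then just a matter of constructing the class and reading its properties; a quick illustration (the path is only a placeholder):

WaveFile wave = new WaveFile(@"C:\temp\example.wav"); // placeholder path

Console.WriteLine("Channels: " + wave.NumChannels);
Console.WriteLine("Sample rate: " + wave.SampleRate + " Hz");
Console.WriteLine("Bits per sample: " + wave.BitsPerSample);
Console.WriteLine("Samples per channel: " + wave.Data[0].Length);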
In the reference you added, I don't see any mention of the chunk size being repeated for each data chunk...
Try something like this:
int chunkSize = System.BitConverter.ToInt32(strm, 40);
int widx = 44; // wav data starts at the 44th byte
// strm is a byte array of the wav file
while (widx < strm.Length)
{
    if (widx < 1000)
    {
        // log "data" or "100 97 116 97" for the subchunkid
        Debug.Log(strm[widx] + " " + strm[widx + 1] + " " + strm[widx + 2] + " " + strm[widx + 3]);
    }
    widx += chunkSize;
}
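For reference, a more general way to walk the chunks is to read each 4-byte chunk ID and 4-byte size in turn and skip over the payload, rather than assuming fixed offsets. A minimal sketch, assuming strm is the whole file as a byte array like in the question:

// Walk every RIFF sub-chunk: 4-byte ID, 4-byte little-endian size, then the payload.
int pos = 12; // skip "RIFF" (4) + file size (4) + "WAVE" (4)
while (pos + 8 <= strm.Length)
{
    string chunkId = System.Text.Encoding.ASCII.GetString(strm, pos, 4);
    int size = System.BitConverter.ToInt32(strm, pos + 4);

    Debug.Log(chunkId + " (" + size + " bytes)");

    // Move past the header and the payload; chunk payloads are padded to an even length.
    pos += 8 + size + (size % 2);
}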

NAudio WAV file Frequency to Decibels

I'm using NAudio to generate a WAV file. The WAV file contains environmental noise (detected and recorded via a microphone).
I need to process this file to show the average loudness (dB) for different frequency bands.
I have read a lot about 1/3 octave band analysis, where the frequency windows are 31, 62, 125, 250, 500 Hz and so on, and where you get an average loudness for each window.
This is exactly what I want to do, but HOW to achieve it seems confusing.
What I have done so far (following an NAudio tutorial) is to read a WAV file and process it. Here is the code:
private void RenderFile()
{
    using (WaveFileReader reader = new WaveFileReader(this.voiceRecorderState.ActiveFile))
    {
        this.samplesPerSecond = reader.WaveFormat.SampleRate;
        SampleAggregator.NotificationCount = reader.WaveFormat.SampleRate / 10;
        // Sample rate is 44100

        byte[] buffer = new byte[1024];
        WaveBuffer waveBuffer = new WaveBuffer(buffer);
        waveBuffer.ByteBufferCount = buffer.Length;
        int bytesRead;

        do
        {
            bytesRead = reader.Read(waveBuffer, 0, buffer.Length);
            int samples = bytesRead / 2;
            double sum = 0;
            for (int sample = 0; sample < samples; sample++)
            {
                if (bytesRead > 0)
                {
                    sampleAggregator.Add(waveBuffer.ShortBuffer[sample] / 32768f);
                    double sample1 = waveBuffer.ShortBuffer[sample] / 32768.0;
                    sum += (sample1 * sample1);
                }
            }
            double rms = Math.Sqrt(sum / (SampleAggregator.NotificationCount));
            double decibel = (double)(20 * Math.Log10(rms));
            if (calculatedBCount == 0)
            {
                dBList.Add(decibel);
                // System.Diagnostics.Debug.WriteLine(decibel.ToString() + " in dB");
            }
        } while (bytesRead > 0);

        int totalSamples = (int)reader.Length / 2;
        TotalWaveFormSamples = totalSamples / sampleAggregator.NotificationCount;
        calculatedBCount++;
        SelectAll();
        // System.Diagnostics.Debug.WriteLine("Average Noise: " + avg.ToString() + " dB");
    }
    audioPlayer.LoadFile(this.voiceRecorderState.ActiveFile);
}

public int Read(byte[] buffer, int offset, int count)
{
    if (waveBuffer == null || waveBuffer.MaxSize < count)
    {
        waveBuffer = new WaveBuffer(count);
    }
    int bytesRead = source.Read(waveBuffer, 0, count);
    if (bytesRead > 0) bytesRead = count;

    int frames = bytesRead / sizeof(float); // MRH: was count
    float pitch = pitchDetector.DetectPitch(waveBuffer.FloatBuffer, frames);
    PitchList.Add(pitch);

    return frames * 4;
}
Using a 5-second WAV file with the two methods above, I get a list of pitches and a list of decibel values.
The decibels list contains 484 values, such as:
-56.19945639
-55.13139952
-55.06947441
-56.70789076
-57.24140093
-55.98546603
-55.67407176
-55.53060998
-55.98480268
-54.85796943
-57.00735818
-55.64980974
-57.07235475
PitchList contains 62 values which include:
75.36621
247.631836
129.199219
75.36621
96.8994141
96.8994141
86.13281
75.36621
129.199219
107.666016
How can I use these results to identify the average loudness for 31 Hz, 62 Hz, 125 Hz, 250 Hz and so on?
Am I doing something wrong, or maybe everything wrong?
Please correct me if I am wrong, but I'm afraid you cannot convert Hz to dB, because there is no direct relation between them.
Hz is a measure of frequency and dB is a measure of amplitude, sort of like kilograms versus meters.
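For what it's worth, the value the question's RenderFile method computes is a level in dB relative to full scale, derived from the RMS of the normalized samples; no frequency in Hz enters that formula. A minimal restatement of that calculation, assuming a block of samples already scaled to the range -1..1:

// Level in dB (relative to full scale) for a block of normalized samples.
// Mirrors the rms / decibel lines in RenderFile above.
static double BlockLevelInDecibels(double[] samples)
{
    double sum = 0;
    foreach (double s in samples)
        sum += s * s;

    double rms = Math.Sqrt(sum / samples.Length);
    return 20 * Math.Log10(rms); // 0 dB = full scale; quieter blocks are more negative
}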

Splitting audio into left and right channels into separate files

I'm having trouble separating the channel buffers into new files.
Here is the code for extracting each channel's buffer:
int samplesDesired = 10000;

byte[] buffer = new byte[samplesDesired * 4];
short[] left = new short[samplesDesired];
short[] right = new short[samplesDesired];

using (WaveFileReader pcm = new WaveFileReader(filePath))
{
    int bytesRead = pcm.Read(buffer, 0, 10000);
    int index = 0;

    for (int i = 0; i < bytesRead / 4; i++)
    {
        left[i] = BitConverter.ToInt16(buffer, index);
        index += 2;
        right[i] = BitConverter.ToInt16(buffer, index);
        index += 2;
    }
}
And here is how I try to create a file from the gathered buffers:
using (var leftChannelFile = new WaveFileWriter("test.wav", new WaveFormat()))
{
    leftChannelFile.WriteSamples(left, 0, left.Length);
}
The problem is that when I try to play the resulting file, it is 0 seconds long and 19.5 KB in size. Any idea why that is happening?
Either you set the header of the resulting file to indicate that it is one channel (mono),
OR
you add a second, empty channel to your result file (all zero bytes).
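A minimal sketch of the first option using NAudio, assuming the left array from the question; the key change is constructing the WaveFormat as 16-bit mono (44100 is assumed here; in practice use the reader's pcm.WaveFormat.SampleRate):

// Write the left channel as a 16-bit, single-channel (mono) WAV file.
using (var leftChannelFile = new WaveFileWriter("left.wav", new WaveFormat(44100, 16, 1)))
{
    leftChannelFile.WriteSamples(left, 0, left.Length);
}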
