Split an Avro file and upload to REST - C#

I have created some Avro files. I can use the following command to convert them to JSON, just to check whether the files are OK:
java -jar avro-tools-1.8.2.jar tojson FileName.avro > outputfilename.json
Now I have some big Avro files, and the REST API I'm trying to upload to has size limitations, so I am trying to upload them in chunks using streams.
The following sample, which just reads from the original file in chunks and copies the chunks to another Avro file, creates the new file perfectly:
using System;
using System.IO;

class Test
{
    public static void Main()
    {
        // Specify a file to read from and a file to create.
        string pathSource = @"D:\BDS\AVRO\filename.avro";
        string pathNew = @"D:\BDS\AVRO\test\filenamenew.avro";

        try
        {
            using (FileStream fsSource = new FileStream(pathSource,
                FileMode.Open, FileAccess.Read))
            {
                byte[] buffer = new byte[(20 * 1024 * 1024) + 100];
                long numBytesToRead = fsSource.Length;
                int numBytesRead = 0;

                using (FileStream fsNew = new FileStream(pathNew,
                    FileMode.Append, FileAccess.Write))
                {
                    while (numBytesToRead > 0)
                    {
                        int bytesRead = fsSource.Read(buffer, 0, buffer.Length);
                        byte[] actualbytes = new byte[bytesRead];
                        Array.Copy(buffer, actualbytes, bytesRead);

                        // Read may return anything from 0 to buffer.Length.
                        // Break when the end of the file is reached.
                        if (bytesRead == 0)
                            break;

                        numBytesRead += bytesRead;
                        numBytesToRead -= bytesRead;
                        fsNew.Write(actualbytes, 0, actualbytes.Length);
                    }
                }
            }
        }
        catch (FileNotFoundException ioEx)
        {
            Console.WriteLine(ioEx.Message);
        }
    }
}
How do I know this creates a valid Avro file? Because the earlier command to convert to JSON works again, i.e.
java -jar avro-tools-1.8.2.jar tojson filenamenew.avro > outputfilename.json
However, when I use the same code but, instead of copying to another file, call a REST API, the file gets uploaded; but when I download that same file from the server and run the command above to convert it to JSON, it says "Not a Data file".
So obviously something is getting corrupted, and I am struggling to figure out what.
This is the snippet:
string filenamefullyqualified = path + filename;
Stream stream = System.IO.File.Open(filenamefullyqualified, FileMode.Open, FileAccess.Read, FileShare.None);

long? position = 0;
byte[] buffer = new byte[(20 * 1024 * 1024) + 100];
long numBytesToRead = stream.Length;
int numBytesRead = 0;

do
{
    var content = new MultipartFormDataContent();
    int bytesRead = stream.Read(buffer, 0, buffer.Length);
    byte[] actualbytes = new byte[bytesRead];
    Array.Copy(buffer, actualbytes, bytesRead);
    if (bytesRead == 0)
        break;

    //Append Data
    url = String.Format("https://{0}.dfs.core.windows.net/raw/datawarehouse/{1}/{2}/{3}/{4}/{5}?action=append&position={6}", datalakeName, filename.Substring(0, filename.IndexOf("_")), year, month, day, filename, position.ToString());

    numBytesRead += bytesRead;
    numBytesToRead -= bytesRead;

    ByteArrayContent byteContent = new ByteArrayContent(actualbytes);
    content.Add(byteContent);

    method = new HttpMethod("PATCH");
    request = new HttpRequestMessage(method, url)
    {
        Content = content
    };
    request.Headers.Add("Authorization", "Bearer " + accesstoken);

    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();

    position = position + request.Content.Headers.ContentLength;
    Array.Clear(buffer, 0, buffer.Length);
} while (numBytesToRead > 0);

stream.Close();
I have looked through the forum threads but haven't come across anything that deals with splitting Avro files.
I have a hunch that my "content" for the HTTP request isn't right. What is it that I am missing?
If you need more details, I will be happy to provide them.

I have found the problem now. The problem was MultipartFormDataContent. When an Avro file is uploaded that way, the multipart encoding adds extra text (content type headers, boundary markers, etc.) and, for reasons I do not know, removes many lines.
So the solution was to upload the chunk as a ByteArrayContent directly, and not add it to a MultipartFormDataContent as I was doing earlier.
Here is the snippet, almost identical to the one in the question, except that I no longer use MultipartFormDataContent:
string filenamefullyqualified = path + filename;
Stream stream = System.IO.File.Open(filenamefullyqualified, FileMode.Open, FileAccess.Read, FileShare.None);

long? position = 0;
byte[] buffer = new byte[(20 * 1024 * 1024) + 100];
long numBytesToRead = stream.Length;
int numBytesRead = 0;

do
{
    //var content = new MultipartFormDataContent();   // no longer used
    int bytesRead = stream.Read(buffer, 0, buffer.Length);
    byte[] actualbytes = new byte[bytesRead];
    Array.Copy(buffer, actualbytes, bytesRead);
    if (bytesRead == 0)
        break;

    //Append Data
    url = String.Format("https://{0}.dfs.core.windows.net/raw/datawarehouse/{1}/{2}/{3}/{4}/{5}?action=append&position={6}", datalakeName, filename.Substring(0, filename.IndexOf("_")), year, month, day, filename, position.ToString());

    numBytesRead += bytesRead;
    numBytesToRead -= bytesRead;

    ByteArrayContent byteContent = new ByteArrayContent(actualbytes);
    //content.Add(byteContent);                        // no longer used

    method = new HttpMethod("PATCH");
    request = new HttpRequestMessage(method, url)
    {
        Content = byteContent
    };
    request.Headers.Add("Authorization", "Bearer " + accesstoken);

    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();

    position = position + request.Content.Headers.ContentLength;
    Array.Clear(buffer, 0, buffer.Length);
} while (numBytesToRead > 0);

stream.Close();
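One detail worth adding: the Data Lake Storage Gen2 REST API only stages data on each action=append call; the file is not committed until a final PATCH with action=flush is sent, where position equals the total number of bytes appended. That flush call is not shown in the snippet above, so the following is only a minimal sketch, reusing the client, accesstoken, and URL variables from it:
// After the append loop: commit the staged data with action=flush.
// position must equal the total number of bytes appended (numBytesRead here).
string flushUrl = String.Format(
    "https://{0}.dfs.core.windows.net/raw/datawarehouse/{1}/{2}/{3}/{4}/{5}?action=flush&position={6}",
    datalakeName, filename.Substring(0, filename.IndexOf("_")), year, month, day, filename, numBytesRead);

var flushRequest = new HttpRequestMessage(new HttpMethod("PATCH"), flushUrl)
{
    Content = new ByteArrayContent(new byte[0]) // the flush call carries no body
};
flushRequest.Headers.Add("Authorization", "Bearer " + accesstoken);

var flushResponse = await client.SendAsync(flushRequest);
flushResponse.EnsureSuccessStatusCode();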

But streaming record by record will not be able to handle the Avro file as a whole in one transaction. We may end up with partial success if, for example, some records fail.
It would be great to have a small tool that can split Avro files based on a threshold number of records.
The Spark-based split-by-partition technique does let you split a data set into a pre-defined number of files, but it does not allow splitting based on the number of records; i.e., I do not want an Avro file with more than 500 records.
So we have to devise batching logic based on the heap size the application can comfortably handle, together with a two-phase commit, to handle transactions.
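For what it's worth, a record-count splitter can be sketched with the Apache.Avro package (DataFileReader / DataFileWriter from the Avro.File and Avro.Generic namespaces). This is only an outline, not tested code, and the exact API surface may differ slightly between library versions; each part it produces is a complete Avro container file carrying the original schema:
using System;
using Avro;
using Avro.File;
using Avro.Generic;

class AvroSplitter
{
    // Splits source.avro into files of at most maxRecords records each.
    public static void Split(string sourcePath, int maxRecords)
    {
        using (IFileReader<GenericRecord> reader = DataFileReader<GenericRecord>.OpenReader(sourcePath))
        {
            Schema schema = reader.GetSchema();
            int part = 0, count = 0;
            IFileWriter<GenericRecord> writer = NewPartWriter(sourcePath, schema, part);
            while (reader.HasNext())
            {
                if (count == maxRecords)
                {
                    // Current part is full: close it and start the next one.
                    writer.Close();
                    part++;
                    count = 0;
                    writer = NewPartWriter(sourcePath, schema, part);
                }
                writer.Append(reader.Next());
                count++;
            }
            writer.Close();
        }
    }

    private static IFileWriter<GenericRecord> NewPartWriter(string sourcePath, Schema schema, int part)
    {
        // Hypothetical naming scheme: filename_part0.avro, filename_part1.avro, ...
        string path = sourcePath.Replace(".avro", "_part" + part + ".avro");
        return DataFileWriter<GenericRecord>.OpenWriter(new GenericDatumWriter<GenericRecord>(schema), path);
    }
}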

Related

C# - Is there a limit to the size of an httpWebRequest stream?

I am trying to build an application that downloads a small binary file (20-25 KB) from a custom web server using HttpWebRequest.
This is the server-side code:
Stream UpdateRequest = context.Request.InputStream;
byte[] UpdateContent = new byte[context.Request.ContentLength64];
UpdateRequest.Read(UpdateContent, 0, UpdateContent.Length);

//check if update is necessary
String remoteVersion = "";
for (int i = 0; i < UpdateContent.Length; i++)
{
    remoteVersion += (char)UpdateContent[i];
}

byte[] UpdateRequestResponse;
if (remoteVersion == remotePluginVersion)
{
    UpdateRequestResponse = new byte[1];
    UpdateRequestResponse[0] = 0; //respond with a single byte set to 0 if no update is required
}
else
{
    FileInfo info = new FileInfo(Path.Combine(Directory.GetCurrentDirectory(), "remote logs", "PointAwarder.dll"));
    UpdateRequestResponse = File.ReadAllBytes(Path.Combine(Directory.GetCurrentDirectory(), "remote logs", "PointAwarder.dll"));
    //respond with the updated file otherwise
}

//this byte is past the threshold and will not be the same in the version the client receives
Console.WriteLine("5000th byte: " + UpdateRequestResponse[5000]);

//send the response
context.Response.ContentLength64 = UpdateRequestResponse.Length;
context.Response.OutputStream.Write(UpdateRequestResponse, 0, UpdateRequestResponse.Length);
context.Response.Close();
After this the array UpdateRequestResponse contains the entire file and has been sent to the client.
The client runs this code:
//create the request
WebRequest request = WebRequest.Create(url + "pluginUpdate");
request.Method = "POST";

//create a byte array of the current version
byte[] requestContentTemp = version.ToByteArray();
int count = 0;
for (int i = 0; i < requestContentTemp.Length; i++)
{
    if (requestContentTemp[i] != 0)
    {
        count++;
    }
}
byte[] requestContent = new byte[count];
for (int i = 0, j = 0; i < requestContentTemp.Length; i++)
{
    if (requestContentTemp[i] != 0)
    {
        requestContent[j] = requestContentTemp[i];
        j++;
    }
}

//send the current version
request.ContentLength = requestContent.Length;
Stream dataStream = request.GetRequestStream();
dataStream.Write(requestContent, 0, requestContent.Length);
dataStream.Close();

//get and read the response
WebResponse response = request.GetResponse();
Stream responseStream = response.GetResponseStream();
byte[] responseBytes = new byte[response.ContentLength];
responseStream.Read(responseBytes, 0, (int)response.ContentLength);
responseStream.Close();
response.Close();

//if the response contained a single 0 we are up to date, otherwise write the content of the response to file
if (responseBytes[0] != 0 || response.ContentLength > 1)
{
    BinaryWriter writer = new BinaryWriter(File.Open(Path.Combine(Directory.GetCurrentDirectory(), "ServerPlugins", "PointAwarder.dll"), FileMode.Create));
    writer.BaseStream.Write(responseBytes, 0, responseBytes.Length);
    writer.Close();
    TShockAPI.Commands.HandleCommand(TSPlayer.Server, "/reload");
}
The byte array responseBytes on the client should be identical to the array UpdateRequestResponse on the server, but it isn't: after about 4000 bytes, every subsequent byte is set to 0 rather than what it should be (responseBytes[3985] is the last non-zero byte).
Does this happen because HttpWebRequest has a size limit? I can't see any bug in my code that could be causing it, and the same code works in other instances where I only have to pass around short sequences of data (fewer than 100 bytes).
The MSDN pages don't mention any size limit like this.
It's not that it has any artificial limit; this is a byproduct of the streaming nature of what you're attempting to do. I have a feeling the following line is the offender:
responseStream.Read(responseBytes, 0, (int)response.ContentLength);
I've had this issue in the past (with TCP streams): it doesn't read all of the content into the array, because not all of it has been sent over the wire yet. This is what I would try instead.
for (int i = 0; i < response.ContentLength; i++)
{
    responseBytes[i] = (byte)responseStream.ReadByte();
}
That way, it will make sure to read all the way until the end of the stream.
EDIT
usr's BinaryReader based solution is much more efficient. Here is the relevant solution:
BinaryReader binReader = new BinaryReader(responseStream);
const int bufferSize = 4096;
byte[] responseBytes;
using (MemoryStream ms = new MemoryStream())
{
    byte[] buffer = new byte[bufferSize];
    int count;
    while ((count = binReader.Read(buffer, 0, buffer.Length)) != 0)
        ms.Write(buffer, 0, count);
    responseBytes = ms.ToArray();
}
You are assuming that Read is reading as many bytes as you request. But the requested count is just an upper limit. You must tolerate reading small chunks.
You can use var bytes = new BinaryReader(myStream).ReadBytes(count); to read an exact number. Don't call ReadByte too often because that is very CPU intensive.
The best solution would be to step away from the fairly manual HttpWebRequest and use HttpClient or WebClient. All of this is automated for you and you get back a byte[].
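For example, with HttpClient the manual read loop disappears entirely (a minimal sketch, assuming url holds the download address and System.Net.Http is referenced):
using (var client = new HttpClient())
{
    // GetByteArrayAsync buffers the entire response body and returns it as a byte[].
    byte[] responseBytes = await client.GetByteArrayAsync(url);
}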

Using PushStreamContent to upload from an HTTPClient

I would like to upload a large amount of data to a web server from a client machine. I jumped right to PushStreamContent so I could write directly to the stream, as the results vary in size and can be rather large.
The flow is as follows:
User runs query > Reader Ready Event Fires > Begin Upload
Once the ready event is fired, the listener picks it up and iterates over the result set, uploading the data as a multipart form:
Console.WriteLine("Query ready, uploading");
byte[] buffer = new byte[1024], form = new byte[200];
int offset = 0, byteCount = 0;
StringBuilder rowBuilder = new StringBuilder();
string builderS;

var content = new PushStreamContent(async (stream, httpContent, transportContext) =>
//using (System.IO.Stream stream = new System.IO.FileStream("test.txt", System.IO.FileMode.OpenOrCreate))
{
    int bytes = 0;
    string boundary = createFormBoundary();
    httpContent.Headers.Remove("Content-Type");
    httpContent.Headers.TryAddWithoutValidation("Content-Type", "multipart/form-data; boundary=" + boundary);
    await stream.WriteAsync(form, 0, form.Length);
    form = System.Text.Encoding.UTF8.GetBytes(createFormElement(boundary, "file"));
    await stream.WriteAsync(form, 0, form.Length);

    await Task.Run(async () =>
    {
        foreach (var row in rows)
        {
            for (int i = 0; i < row.Length; i++)
            {
                rowBuilder.Append(row[i].Value);
                if (i + 1 < row.Length)
                    rowBuilder.Append(',');
                else
                    rowBuilder.Append("\r\n");
            }
            builderS = rowBuilder.ToString();
            rowBuilder.Clear();
            byteCount = System.Text.Encoding.UTF8.GetByteCount(builderS);
            bytes += byteCount;
            if (offset + byteCount > buffer.Length)
            {
                await stream.WriteAsync(buffer, 0, offset);
                offset = 0;
                if (byteCount > buffer.Length)
                {
                    System.Diagnostics.Debug.WriteLine("Expanding buffer to {0} bytes", byteCount);
                    buffer = new byte[byteCount];
                }
            }
            offset += System.Text.Encoding.UTF8.GetBytes(builderS, 0, builderS.Length, buffer, offset);
        }
    });

    await stream.WriteAsync(buffer, 0, offset);
    form = System.Text.Encoding.UTF8.GetBytes(boundary);
    await stream.WriteAsync(form, 0, form.Length);
    await stream.FlushAsync(); //pretty sure this does nothing
    System.Diagnostics.Debug.WriteLine("Wrote {0}.{1} megabytes of data", bytes / 1000000, bytes % 1000000);
});
I think the code above would work great if I were the server; just adding stream.Close(); would finish it. However, since I am the client here, closing the stream causes an error (TaskCanceled). Waiting to read doesn't do anything either, I presume because PushStreamContent doesn't end the request unless I explicitly close the stream. That said, writing to a file produces exactly what I expect to be uploaded, so everything writes perfectly.
Any ideas on what I can do here? I might be totally misusing PushStreamContent, but it seems like this should be an appropriate use case.
So the solution is a little confusing at first but it seems to make sense and perhaps more importantly, it works:
using (var content = new MultipartFormDataContent())
{
    var pushContent = new PushStreamContent(async (stream, httpContent, transportContext) =>
    {
        //do the stream writing stuff
        stream.Close();
    });
    content.Add(pushContent);
    //post, put, etc. content here
}
This works because the stream passed to the PushStreamContent method is not the actual request stream, it's a stream handled by the HttpClient, just like adding a file to a request stream. As a result, closing it signals the end of input for this part of the HttpContent and allows the request to be finalized.
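Put together, the client-side call looks roughly like this (a sketch of the same pattern; uploadUrl, the part name, and the file name are illustrative placeholders, and the stream-writing body is elided as in the snippet above):
using (var client = new HttpClient())
using (var content = new MultipartFormDataContent())
{
    var pushContent = new PushStreamContent(async (stream, httpContent, transportContext) =>
    {
        // ... write the rows to the stream as in the question ...
        stream.Close(); // ends this part of the multipart body, not the whole request
    });
    content.Add(pushContent, "file", "results.csv");

    var response = await client.PostAsync(uploadUrl, content);
    response.EnsureSuccessStatusCode();
}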

Can't download complete image file from skydrive using REST API

I'm working on a quick wrapper for the SkyDrive API in C#, but I'm running into issues with downloading a file. The first part of the file comes through fine, but then differences start to appear and shortly thereafter everything becomes zeros. I'm fairly sure it's just me not reading the stream correctly.
This is the code I'm using to download the file:
public const string ApiVersion = "v5.0";
public const string BaseUrl = "https://apis.live.net/" + ApiVersion + "/";

public SkyDriveFile DownloadFile(SkyDriveFile file)
{
    string uri = BaseUrl + file.ID + "/content";
    byte[] contents = GetResponse(uri);
    file.Contents = contents;
    return file;
}

public byte[] GetResponse(string url)
{
    checkToken();
    Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
    request.Method = WebRequestMethods.Http.Get;

    WebResponse response = request.GetResponse();
    Stream responseStream = response.GetResponseStream();
    byte[] contents = new byte[response.ContentLength];
    responseStream.Read(contents, 0, (int)response.ContentLength);
    return contents;
}
(The question included two images: the original image being downloaded, and the corrupted copy that actually comes back.)
These two images lead me to believe that I'm not waiting for the response to finish coming through, because the content length matches the size of the image I'm expecting, but I'm not sure how to make my code wait for the entire response to come through, or even whether that's the approach I need to take.
Here's my test code in case it's helpful
[TestMethod]
public void CanUploadAndDownloadFile()
{
    var api = GetApi();
    SkyDriveFolder folder = api.CreateFolder(null, "TestFolder", "Test Folder");
    SkyDriveFile file = api.UploadFile(folder, TestImageFile, "TestImage.png");
    file = api.DownloadFile(file);
    api.DeleteFolder(folder);

    byte[] contents = new byte[new FileInfo(TestImageFile).Length];
    using (FileStream fstream = new FileStream(TestImageFile, FileMode.Open))
    {
        fstream.Read(contents, 0, contents.Length);
    }
    using (FileStream fstream = new FileStream(TestImageFile + "2", FileMode.CreateNew))
    {
        fstream.Write(file.Contents, 0, file.Contents.Length);
    }

    Assert.AreEqual(contents.Length, file.Contents.Length);
    bool sameData = true;
    for (int i = 0; i < contents.Length && sameData; i++)
    {
        sameData = contents[i] == file.Contents[i];
    }
    Assert.IsTrue(sameData);
}
It fails at Assert.IsTrue(sameData);
This is because you don't check the return value of responseStream.Read(contents, 0, (int)response.ContentLength);. Read doesn't ensure that it will read response.ContentLength bytes. Instead it returns the number of bytes read. You can use a loop or stream.CopyTo there.
Something like this:
WebResponse response = request.GetResponse();
MemoryStream m = new MemoryStream();
response.GetResponseStream().CopyTo(m);
byte[] contents = m.ToArray();
As LB already said, you need to continue to call Read() until you have read the entire stream.
Although Stream.CopyTo will copy the entire stream, it does not ensure that it reads the number of bytes expected. The following method addresses this and raises an IOException if it does not read the length specified:
public static void Copy(Stream input, Stream output, long length)
{
    byte[] bytes = new byte[65536];
    long bytesRead = 0;
    int len = 0;
    while (0 != (len = input.Read(bytes, 0, Math.Min(bytes.Length, (int)Math.Min(int.MaxValue, length - bytesRead)))))
    {
        output.Write(bytes, 0, len);
        bytesRead = bytesRead + len;
    }
    output.Flush();
    if (bytesRead != length)
        throw new IOException();
}
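Used from GetResponse, it would look roughly like this (hypothetical wiring, reusing the request variable from the question):
WebResponse response = request.GetResponse();
byte[] contents;
using (Stream responseStream = response.GetResponseStream())
using (MemoryStream ms = new MemoryStream())
{
    // Copy exactly ContentLength bytes, or throw if the stream ends early.
    Copy(responseStream, ms, response.ContentLength);
    contents = ms.ToArray();
}
return contents;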

Reading(/Writing) Files in C#

I recently wanted to track the progress of an HttpWebRequest upload. So I started small, with a buffered read of a simple text file. I then discovered that a simple call like
File.ReadAllText("text.txt");
becomes something like the code below, with all the streams, readers, writers, etc. Or can some of these be removed? Also, the code below is not working; maybe I did something wrong. What is the way to read (I guess write will be similar) into a buffer so that I can track progress, assuming the stream is not local, e.g. a WebRequest?
byte[] buffer = new byte[2560]; // 20KB Buffer, btw, how should I decide the buffer size?
int bytesRead = 0, read = 0;

FileStream inStream = new FileStream("./text.txt", FileMode.Open, FileAccess.Read);
MemoryStream outStream = new MemoryStream();
BinaryWriter outWriter = new BinaryWriter(outStream);

// I am getting "Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection."
// inStream.Length = 9335092
// bytesRead = 2560
// buffer.Length = 2560
while ((read = inStream.Read(buffer, bytesRead, buffer.Length)) > 0)
{
    outWriter.Write(buffer);
    //outStream.Write(buffer, bytesRead, buffer.Length);
    bytesRead += read;
    Debug.WriteLine("Progress: " + bytesRead / inStream.Length * 100 + "%");
}
outWriter.Flush();
txtLog.Text = outStream.ToString();
Update: Solution
byte[] buffer = new byte[2560];
int bytesRead = 0, read = 0;

FileStream inStream = File.OpenRead("text.txt");
MemoryStream outStream = new MemoryStream();

while ((read = inStream.Read(buffer, 0, buffer.Length)) > 0)
{
    outStream.Write(buffer, 0, read); // write only the bytes actually read
    bytesRead += read;
    Debug.WriteLine((double)bytesRead / inStream.Length * 100);
}
inStream.Close();
outStream.Close();
should probably be
outWriter.Write(buffer, 0, read);
Since you seem to be reading text (although I could be wrong), it seems that your program could be a lot simpler if you read character by character instead of calling the standard Read():
BinaryReader reader = new BinaryReader(File.Open("./text.txt", FileMode.Open));
MemoryStream outStream = new MemoryStream();
StreamWriter outWriter = new StreamWriter(outStream);

while (reader.BaseStream.Position < reader.BaseStream.Length)
{
    outWriter.Write(reader.ReadChar());
    Debug.WriteLine("Progress: " + (double)reader.BaseStream.Position / reader.BaseStream.Length * 100 + "%");
}
outWriter.Close();
txtLog.Text = outStream.ToString();
Since you only need to check the progress of the upload operation, you can just check the size of the file using a FileInfo object.
The FileInfo class has a property called Length that returns the file size in bytes. I'm not sure whether it gives the current size while the file is still being written, but I think it's worth a try, as it is simpler and more efficient than the method you are using.
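For instance (a minimal sketch, assuming the path of the file being uploaded is known):
// FileInfo.Length reports the file size in bytes at the time the FileInfo was created or refreshed.
var info = new FileInfo("./text.txt");
Debug.WriteLine("Size so far: " + info.Length + " bytes");
info.Refresh(); // refresh before re-reading Length if the file may have grown in the meantime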

C# 4.0: Convert pdf to byte[] and vice versa

How do I convert a pdf file to a byte[] and vice versa?
// loading bytes from a file is very easy in C#. The built in System.IO.File.ReadAll* methods take care of making sure every byte is read properly.
// note that for Linux, you will not need the c: part
// just swap out the example folder here with your actual full file path
string pdfFilePath = "c:/pdfdocuments/myfile.pdf";
byte[] bytes = System.IO.File.ReadAllBytes(pdfFilePath);
// munge bytes with whatever pdf software you want, i.e. http://sourceforge.net/projects/itextsharp/
// bytes = MungePdfBytes(bytes); // MungePdfBytes is your custom method to change the PDF data
// ...
// make sure to cleanup after yourself
// and save back - System.IO.File.WriteAll* makes sure all bytes are written properly - this will overwrite the file, if you don't want that, change the path here to something else
System.IO.File.WriteAllBytes(pdfFilePath, bytes);
using (FileStream fs = new FileStream("sample.pdf", FileMode.Open, FileAccess.Read))
{
    byte[] bytes = new byte[fs.Length];
    int numBytesToRead = (int)fs.Length;
    int numBytesRead = 0;
    while (numBytesToRead > 0)
    {
        // Read may return anything from 0 to numBytesToRead.
        int n = fs.Read(bytes, numBytesRead, numBytesToRead);

        // Break when the end of the file is reached.
        if (n == 0)
        {
            break;
        }

        numBytesRead += n;
        numBytesToRead -= n;
    }
    numBytesToRead = bytes.Length;
}
Easiest way:
byte[] buffer;
using (Stream stream = new FileStream("file.pdf", FileMode.Open, FileAccess.Read))
{
    buffer = new byte[stream.Length];
    stream.Read(buffer, 0, buffer.Length); // note: a single Read may return fewer bytes; File.ReadAllBytes is safer
}
using (Stream stream = new FileStream("newFile.pdf", FileMode.Create, FileAccess.Write))
{
    stream.Write(buffer, 0, buffer.Length);
}
Or something along these lines...
