I want to write a JToken to a stream asynchronously, and I referred to JToken.WriteToAsync does not write to JsonWriter.
However, the stream output is ?[], while the ToString() output is []. Why does the stream contain extra bytes at the beginning?
My code is below:
static async Task Main(string[] args)
{
    JArray arr = new JArray();
    //var c = JToken.FromObject("abc");
    //arr.Add(c);
    var stream = new MemoryStream();
    await using (var requestWriter = new StreamWriter(stream, System.Text.Encoding.UTF8, leaveOpen: true))
    {
        var jsonWriter = new JsonTextWriter(requestWriter);
        try
        {
            await arr.WriteToAsync(jsonWriter);
        }
        finally
        {
            await jsonWriter.CloseAsync();
        }
        Console.WriteLine(System.Text.Encoding.UTF8.GetString(stream.GetBuffer(), 0, checked((int)stream.Length)));
        Console.WriteLine(arr.ToString());
    }
}
Why is the stream output not correct?
The Json.NET version is 13.0.1.
Summary
Your problem has nothing to do with asynchronous writing. Your problem is that Encoding.UTF8:
returns a UTF8Encoding object that provides a Unicode byte order mark (BOM).
The extra ? you are seeing is that BOM. To prevent the BOM from being written, use new UTF8Encoding(false) when writing. Or, you could just do new StreamWriter(stream, leaveOpen: true), as the StreamWriter constructors use a BOM-less UTF-8 encoding by default.
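For example, either of these produces BOM-less UTF-8 output (a minimal sketch; the variable names are illustrative, and the full corrected code is under Details below):
// Option 1: explicitly request UTF-8 without a BOM.
var writerNoBom = new StreamWriter(stream, new UTF8Encoding(false), leaveOpen: true);

// Option 2: rely on StreamWriter's default encoding, which is already UTF-8 without a BOM.
var writerDefault = new StreamWriter(stream, leaveOpen: true);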
Details
Your problem can be reproduced more simply as follows:
JArray arr = new JArray();
var stream = new MemoryStream();
using (var requestWriter = new StreamWriter(stream, System.Text.Encoding.UTF8, leaveOpen: true))
using (var jsonWriter = new JsonTextWriter(requestWriter))
{
    arr.WriteTo(jsonWriter);
}
var resultJson = Encoding.UTF8.GetString(stream.GetBuffer(), 0, checked((int)stream.Length));
Console.WriteLine(BitConverter.ToString(stream.GetBuffer(), 0, checked((int)stream.Length)));
Console.WriteLine(resultJson);
Console.WriteLine(arr.ToString());
Assert.AreEqual(arr.ToString(), resultJson);
The assertion fails with the following message:
NUnit.Framework.AssertionException: Expected string length 2 but was 3. Strings differ at index 0.
And with the following output from BitConverter.ToString():
EF-BB-BF-5B-5D
Demo fiddle here.
The 5B-5D are the brackets, but what are the three preamble bytes EF-BB-BF? A quick search shows them to be the UTF-8 byte order mark. Since RFC 8259 specifies that Implementations MUST NOT add a byte order mark (U+FEFF) to the beginning of a networked-transmitted JSON text, you should omit the BOM by using new UTF8Encoding(false). Thus your code should look like:
JArray arr = new JArray();
var stream = new MemoryStream();
await using (var requestWriter = new StreamWriter(stream, new UTF8Encoding(false), leaveOpen: true))
{
    var jsonWriter = new JsonTextWriter(requestWriter);
    try
    {
        await arr.WriteToAsync(jsonWriter);
    }
    finally
    {
        await jsonWriter.CloseAsync();
    }
}
var resultJson = Encoding.UTF8.GetString(stream.GetBuffer(), 0, checked((int)stream.Length));
Console.WriteLine(BitConverter.ToString(stream.GetBuffer(), 0, checked((int)stream.Length)));
Console.WriteLine(resultJson);
Console.WriteLine(arr.ToString());
Assert.AreEqual(arr.ToString(), resultJson);
Demo fiddle #2 here.
Related
var orders = new List<Order>();
....
orders.Add(...)
string csvstring;
using (var ms = new MemoryStream())
using (var wr = new StreamWriter(ms, Encoding.UTF8))
using (var csvWriter = new CsvWriter(wr, CultureInfo.InvariantCulture, false))
{
    csvWriter.WriteRecords(orders);
    csvstring = Encoding.UTF8.GetString(ms.ToArray());
}
And then
sftp.WriteAllText(fileNameAbsolutePath, csvstring, Encoding.UTF8);
The content of the file created on SFTP has "feff" at the beginning:
" orders.csv: text/plain; charset=utf-8".
This is the first part of the problem. What I am looking for is to convert this UTF-8 to ISO-8859-1, as the charset expected in the final file is ISO-8859-1.
Maybe I should do something like this?
byte[] bytesSS = Encoding.Convert(Encoding.UTF8, Encoding.GetEncoding("ISO-8859-1"), Encoding.UTF8.GetBytes(csvstring));
string s1 = Encoding.GetEncoding("ISO-8859-1").GetString(bytesSS, 0, bytesSS.Length);
I tried to google "<feff>", but I didn't quite get the concept of a BOM or how to fix this.
I have no idea which SFTP class you use, as .NET itself doesn't have an SFTP client. I'll assume you use this one simply because it came first in a Google search for sftp WriteAllText.
If you want to create a file with a specific encoding, specify it in the StreamWriter constructor instead of UTF8:
using (var ms = new MemoryStream())
using (var wr = new StreamWriter(ms, Encoding.GetEncoding("ISO-8859-1")))
using (var csvWriter = new CsvWriter(wr, CultureInfo.InvariantCulture, false))
{
    csvWriter.WriteRecords(orders);
}
On the other hand, UTF8 and Latin1 (or any codepage) use the exact same values for characters in the range 0-127. If you want to send only English text, there won't be any difference no matter which encoding you use. If the actual requirement is to create a UTF8 file without a BOM, you can specify it by using the appropriate UTF8Encoding constructor:
var utf8NoBom = new UTF8Encoding(false);
using (var ms = new MemoryStream())
using (var wr = new StreamWriter(ms, utf8NoBom))
using (var csvWriter = new CsvWriter(wr, CultureInfo.InvariantCulture, false))
{
    csvWriter.WriteRecords(orders);
}
All SFTP clients have (or should have) a way to upload data using a stream. This means you can use Stream.CopyTo to copy data from the memory stream to the upload stream. Assuming OpenWrite is available, you can modify the code to:
using (var ms = new MemoryStream())
{
    using (var wr = new StreamWriter(ms, Encoding.GetEncoding("ISO-8859-1")))
    using (var csvWriter = new CsvWriter(wr, CultureInfo.InvariantCulture, false))
    {
        csvWriter.WriteRecords(orders);
    }
    ms.Position = 0;
    using (var stream = sftp.OpenWrite(somePath))
    {
        ms.CopyTo(stream);
    }
}
When CsvHelper completes, the MemoryStream's position is at the end of the stream, so CopyTo wouldn't copy anything. By setting ms.Position = 0 you move the position back to the start of the stream.
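If the surrounding code is asynchronous, the same idea works with CopyToAsync (a sketch under the same assumption that the SFTP client exposes OpenWrite; uploadStream is just an illustrative name):
ms.Position = 0;
using (var uploadStream = sftp.OpenWrite(somePath))
{
    // Copy the buffered CSV bytes to the SFTP upload stream asynchronously.
    await ms.CopyToAsync(uploadStream);
}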
My initial requirement is to let the user download a file generated from an object list; for that I found this solution: https://stackoverflow.com/a/49207997/11178128.
But the problem is that when it comes to this line,
bin = stream.ToArray();
nothing has been written to the stream, so bin comes back as an empty array.
What could be the problem?
Also, I'm making my web API available through a Windows service, and for some reason System.Web.HttpContext.Current.Response gives me null. Any idea why that might be?
Thanks in advance.
This is the code I have so far:
List<Device> devices;
using (StreamReader r = new StreamReader(String.Format(@"{0}\deviceList.json", savefilePath)))
{
    string json = r.ReadToEnd();
    devices = JsonConvert.DeserializeObject<List<Device>>(json);
}
byte[] bin;
//String.Format(@"{0}\devices.csv", savefilePath)
using (MemoryStream stream = new MemoryStream())
using (TextWriter textWriter = new StreamWriter(stream))
using (CsvWriter csv = new CsvWriter(textWriter))
{
    csv.Configuration.ShouldQuote = (field, context) => false;
    csv.WriteRecords(devices);
    bin = stream.ToArray();
}
This is related to another question, CsvHelper not writing anything to memory stream.
You just need to change your using statements so that the StreamWriter gets flushed before calling stream.ToArray():
List<Device> devices;
using (StreamReader r = new StreamReader(String.Format(@"{0}\deviceList.json", savefilePath)))
{
    string json = r.ReadToEnd();
    devices = JsonConvert.DeserializeObject<List<Device>>(json);
}
byte[] bin;
//String.Format(@"{0}\devices.csv", savefilePath)
using (MemoryStream stream = new MemoryStream())
{
    using (TextWriter textWriter = new StreamWriter(stream))
    using (CsvWriter csv = new CsvWriter(textWriter))
    {
        csv.Configuration.ShouldQuote = (field, context) => false;
        csv.WriteRecords(devices);
    }
    bin = stream.ToArray();
}
Actually, after a bit of struggling, I found that I was missing this line:
textWriter.Flush();
As mentioned in the reply below, I had to flush the textWriter object in order for the data to be written to the stream. Here is the working code:
byte[] data;
using (MemoryStream stream = new MemoryStream())
using (TextWriter textWriter = new StreamWriter(stream))
using (CsvWriter csv = new CsvWriter(textWriter))
{
    csv.Configuration.RegisterClassMap<DeviceMap>();
    csv.Configuration.ShouldQuote = (field, context) => false;
    csv.WriteRecords(values);
    textWriter.Flush();
    data = stream.ToArray();
}
return data;
using (var ms = new MemoryStream())
{
    using (var writer = new StreamWriter(ms))
    using (var csv = new CsvWriter(writer))
    {
        csv.WriteRecords(dbresponse);
    } // the closing brace here is important! It flushes the StreamWriter.
    ms.ToArray(); // or ms.GetBuffer()
}
Now ms.ToArray() will contain the data from CsvHelper.
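If you then want the CSV as a string, a minimal sketch (assuming the StreamWriter's default UTF-8 encoding was used) would be:
// csvText is an illustrative name; ms is the MemoryStream from the snippet above.
string csvText = Encoding.UTF8.GetString(ms.ToArray());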
For writing individual fields - for example a list of strings, which won't work with the WriteRecords method - you will need to use WriteField. I am just submitting this here because this trifling issue caused me no small amount of pain.
Here is an async example:
var result = await GetListOfString();
using (var ms = new MemoryStream())
{
    using (var writer = new StreamWriter(ms))
    using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
    {
        foreach (var value in result)
        {
            csv.WriteField(value);
            await csv.NextRecordAsync();
        }
        await writer.FlushAsync();
        return ms.ToArray();
    }
}
Right now I'm using XmlTextWriter to convert a MemoryStream object into a string, but I want to know whether there is a faster method to serialize a MemoryStream to a string.
I am following the code given here for serialization: http://www.eggheadcafe.com/articles/system.xml.xmlserialization.asp
Edited
Stream to String
ms.Position = 0;
using (StreamReader sr = new StreamReader(ms))
{
    string content = sr.ReadToEnd();
    SaveInDB(ms);
}
String to Stream
string content = GetFromContentDB();
byte[] byteArray = Encoding.ASCII.GetBytes(content);
MemoryStream ms = new MemoryStream(byteArray);
byte[] outBuf = ms.GetBuffer(); //error here
using (MemoryStream stream = new MemoryStream())
{
    stream.Position = 0;
    var sr = new StreamReader(stream);
    string myStr = sr.ReadToEnd();
}
You can't use GetBuffer when you use the MemoryStream(byte[]) constructor.
MSDN quote:
This constructor does not expose the underlying stream. GetBuffer throws UnauthorizedAccessException.
You must use the MemoryStream(byte[], int, int, bool, bool) constructor and set publiclyVisible = true in order to use GetBuffer.
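A minimal sketch of that overload, reusing the byteArray from the question (the last argument is publiclyVisible):
// MemoryStream(byte[] buffer, int index, int count, bool writable, bool publiclyVisible)
MemoryStream ms = new MemoryStream(byteArray, 0, byteArray.Length, writable: true, publiclyVisible: true);
byte[] outBuf = ms.GetBuffer(); // no longer throws UnauthorizedAccessException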
In VB.NET I used this:
Dim TempText = System.Text.Encoding.UTF8.GetString(TempMemoryStream.ToArray())
The same approach may apply in C#.
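For example, a sketch of the C# equivalent (the variable names simply mirror the VB snippet above):
var tempText = System.Text.Encoding.UTF8.GetString(tempMemoryStream.ToArray());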
I have (several) WebAPI actions which load QuickFix logs from the database (via EF) and use this private method to return them as CSV:
private HttpResponseMessage BuildCsvResponse<T>(T[] entries, Func<T, string> row, string fileName)
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new MemoryStream();
    var writer = new StreamWriter(stream);
    var i = entries.Length;
    foreach (var entry in entries)
    {
        i--;
        writer.WriteLine(row(entry)); // simply call to overridden ToString() method
    }
    stream.Seek(0, SeekOrigin.Begin);
    stream.Flush();
    response.Content = new StreamContent(stream);
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = fileName,
    };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/csv");
    return response;
}
The problem is that the content is never loaded to the end and is cut off at a random character not far from the end. Why could this happen?
Maybe it is important: all log strings contain the delimiter 0x01.
You need to Flush your StreamWriter's internal buffers before you touch the underlying stream.
The best approach is to tell your StreamWriter to leave the stream open by using another constructor overload. You can then safely dispose the StreamWriter, causing it to flush its buffer, while your MemoryStream instance stays open and doesn't get disposed.
Notice that you need to pick an encoding that matches your HTTP content response. I chose UTF-8 here; adapt accordingly.
var stream = new MemoryStream();
// notice the true as last parameter, false is the default.
using (var writer = new StreamWriter(stream, Encoding.UTF8, 8192, true))
{
    var i = entries.Length;
    foreach (var entry in entries)
    {
        i--;
        writer.WriteLine(row(entry)); // simply call to overridden ToString() method
    }
}
// your StreamWriter has now flushed its buffer and left the stream open
stream.Seek(0, SeekOrigin.Begin);
// calling Flush on the stream was never needed so I removed that.
response.Content = new StreamContent(stream);
I am trying to download a .json blob that I have stored in a container in Azure Storage and use Newtonsoft.Json to deserialize it into an object.
I am doing this by calling:
(CloudBlockBlob) blob.DownloadToStream(stream);
However, instead of writing the stream to a file in the local app directory, I want to return the JSON object by doing Json(result).
This is what I have tried:
using (var stream = new MemoryStream())
{
    blob.DownloadToStream(stream);
    var serializer = new JsonSerializer();
    using (var sr = new StreamReader(stream))
    {
        using (var jsonTextReader = new JsonTextReader(sr))
        {
            result = serializer.Deserialize(jsonTextReader);
        }
    }
}
In the end, my jsonTextReader variable is empty and the resulting object is null.
What can I do to accomplish this?
Thank you
Both the question and accepted answer start by copying the entire stream into a MemoryStream, which is effectively a big byte array in memory. This step is unnecessary - it's more memory-efficient to stream the blob data directly to the object without buffering the bytes first:
using (var stream = await blob.OpenReadAsync())
using (var sr = new StreamReader(stream))
using (var jr = new JsonTextReader(sr))
{
    result = JsonSerializer.CreateDefault().Deserialize<T>(jr);
}
Please reset the stream's position to 0 after reading the blob into the stream. So your code would be:
using (var stream = new MemoryStream())
{
    blob.DownloadToStream(stream);
    stream.Position = 0; // resetting stream's position to 0
    var serializer = new JsonSerializer();
    using (var sr = new StreamReader(stream))
    {
        using (var jsonTextReader = new JsonTextReader(sr))
        {
            var result = serializer.Deserialize(jsonTextReader);
        }
    }
}
In case you don't care for streaming and want a short and concise way:
var json = await blockBlob.DownloadTextAsync();
var myObject = JsonConvert.DeserializeObject<MyObject>(json);