I would like to keep a medium-sized JSON file (about 14 MB) from the hard disk in memory. I found an easy way to do it:
public static void Load(string path = CacheFilePath)
{
    path = Path.GetFullPath(path);
    if (!File.Exists(path))
        return;

    JsonSerializer serializer = new JsonSerializer();
    using (FileStream s = File.Open(path, FileMode.Open))
    using (StreamReader sr = new StreamReader(s))
    using (JsonReader reader = new JsonTextReader(sr))
    {
        while (reader.Read())
        {
            JObject Data = serializer.Deserialize<JObject>(reader);
        }
    }
}
Everything works, but this JObject takes almost 360 MB of RAM. Is there any JSON setting I can use to reduce this value?
Or maybe there is another way to keep the data in memory as JSON?
A quick look at the file (an example; the real file has about 230k entries):
{
  "OrderValidationData": {
    "Count": 2,
    "Name": "OrderValidationData",
    "Keys": [
      "WAL22999-96",
      "HEL5DA 193 175-111",
      ...
    ],
    "Data": {
      "WAL22999-96": {
        "MinimalNetPrice": 10.00,
        "VatRate": 12.00
      },
      "HEL5DA 193 175-111": {
        "MinimalNetPrice": 10.00,
        "VatRate": 12.00
      },
      ...
    }
  }
}
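For comparison, one common way to cut the footprint of a file shaped like this is to stream it and deserialize each entry into a small typed object, instead of holding everything as `JObject`/`JToken` instances. A sketch, not from the original post: `ValidationEntry` and the `"Data"`-scanning logic are illustrative assumptions based on the sample above.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// Hypothetical POCO matching one entry under "Data"; two decimals are far
// cheaper per entry than a JObject full of boxed JTokens.
public class ValidationEntry
{
    public decimal MinimalNetPrice { get; set; }
    public decimal VatRate { get; set; }
}

public static class StreamingLoader
{
    // Walks the JSON token stream and deserializes only the entries under
    // the "Data" property, one small object at a time.
    public static Dictionary<string, ValidationEntry> LoadData(string path)
    {
        var result = new Dictionary<string, ValidationEntry>();
        var serializer = new JsonSerializer();
        using (var sr = new StreamReader(path))
        using (var reader = new JsonTextReader(sr))
        {
            while (reader.Read())
            {
                if (reader.TokenType == JsonToken.PropertyName && (string)reader.Value == "Data")
                {
                    reader.Read(); // advance to the StartObject of "Data"
                    while (reader.Read() && reader.TokenType == JsonToken.PropertyName)
                    {
                        string key = (string)reader.Value;
                        reader.Read(); // advance to the entry's StartObject
                        result[key] = serializer.Deserialize<ValidationEntry>(reader);
                    }
                }
            }
        }
        return result;
    }
}
```

Whether this helps depends on what you need the data for; if you must query arbitrary paths afterwards, a typed dictionary like this is still usually much smaller than the equivalent `JObject` tree.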
I have this original code:
public async Task<ActionResult> Chunk_Upload_Save(IEnumerable<IFormFile> files, string metaData)
{
    if (metaData == null)
    {
        return await Save(files);
    }

    MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(metaData));
    JsonSerializer serializer = new JsonSerializer();
    ChunkMetaData chunkData;
    using (StreamReader streamReader = new StreamReader(ms))
    {
        chunkData = (ChunkMetaData)serializer.Deserialize(streamReader, typeof(ChunkMetaData));
    }

    string path = String.Empty;
    // The Name of the Upload component is "files"
    if (files != null)
    {
        foreach (var file in files)
        {
            path = Path.Combine(WebHostEnvironment.WebRootPath, "App_Data", chunkData.FileName);
            //AppendToFile(path, file);
        }
    }

    FileResult fileBlob = new FileResult();
    fileBlob.uploaded = chunkData.TotalChunks - 1 <= chunkData.ChunkIndex;
    fileBlob.fileUid = chunkData.UploadUid;
    return Json(fileBlob);
}
I converted it using only System.Text.Json.* to this:
public async Task<ActionResult> Chunk_Upload_Save(IEnumerable<IFormFile> files, string metaData)
{
    if (metaData == null)
    {
        return await Save(files);
    }

    var ms = new MemoryStream(Encoding.UTF8.GetBytes(metaData));
    ChunkMetaDataModel chunkData;
    using (var streamReader = new StreamReader(ms))
    {
        // Here is the issue
        chunkData = (ChunkMetaDataModel) await JsonSerializer.DeserializeAsync(streamReader, typeof(ChunkMetaDataModel));
    }

    // The Name of the Upload component is "files"
    if (files != null)
    {
        foreach (var file in files)
        {
            Path.Combine(hostEnvironment.WebRootPath, "App_Data", chunkData!.FileName);
            //AppendToFile(path, file);
        }
    }

    var fileBlob = new FileResultModel
    {
        uploaded = chunkData!.TotalChunks - 1 <= chunkData.ChunkIndex,
        fileUid = chunkData.UploadUid
    };
    return Json(fileBlob);
}
I get the error:
Argument 1: cannot convert from 'System.IO.StreamReader' to 'System.IO.Stream'.
By Argument 1, VS is pointing to the streamReader parameter and it's this line:
chunkData = (ChunkMetaData)serializer.Deserialize(streamReader, typeof(ChunkMetaData));
How do I convert this to the System.Text.Json API?
System.Text.Json is designed to deserialize most efficiently from UTF-8 byte sequences rather than UTF-16 strings, so there is no overload that deserializes from a StreamReader. Instead, deserialize directly from the MemoryStream ms using the following:
chunkData = await JsonSerializer.DeserializeAsync<ChunkMetaDataModel>(ms);
Notes:
There is no reason to use async deserialization when deserializing from a MemoryStream. Instead use synchronous deserialization like so:
chunkData = JsonSerializer.Deserialize<ChunkMetaDataModel>(ms);
And since you already have a string metaData containing the JSON to be deserialized, you can deserialize directly from it using the Deserialize<TValue>(ReadOnlySpan<Char>, JsonSerializerOptions) overload:
chunkData = JsonSerializer.Deserialize<ChunkMetaDataModel>(metaData);
System.Text.Json will do the UTF16 to UTF8 conversion for you internally using memory pooling.
If you really must deserialize from a StreamReader for some reason (e.g. incremental integration of System.Text.Json with legacy code), see Reading string as a stream without copying for suggestions on how to do this.
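Putting the note above together as a compile-ready sketch (the property names on `ChunkMetaDataModel` are assumptions here; adjust them to whatever the client actually sends):

```csharp
using System.Text.Json;

// Hypothetical model; property names are assumed to match the incoming JSON.
public class ChunkMetaDataModel
{
    public string UploadUid { get; set; }
    public string FileName { get; set; }
    public long TotalChunks { get; set; }
    public long ChunkIndex { get; set; }
}

public static class MetaDataParser
{
    // System.Text.Json matches property names case-sensitively by default;
    // pass new JsonSerializerOptions { PropertyNameCaseInsensitive = true }
    // if the client sends camelCase names.
    public static ChunkMetaDataModel Parse(string metaData) =>
        JsonSerializer.Deserialize<ChunkMetaDataModel>(metaData);
}
```

Note the default case-sensitivity: if deserialization silently yields default values, a casing mismatch between the JSON and the model is the first thing to check.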
I need to save the information from an input page into a JSON file and output it on another page that reads from the JSON file. I've tried many things, and what seemed to work for me is using the LocalApplicationData special folder.
Now I don't quite understand how I can output the information, or check whether the data was even written correctly.
I previously used a StreamReader to read the JSON file and put its contents into a ListView, but this doesn't work when the file is in the special folder; it says "stream can't be null". The commented-out code is from my previous attempts.
Code:
ListPageVM (Read Page)
private ObservableCollection<MainModel> data;
public ObservableCollection<MainModel> Data
{
    get { return data; }
    set { data = value; OnPropertyChanged(); }
}

public ListPageVM()
{
    var assembly = typeof(ListPageVM).GetTypeInfo().Assembly;
    Stream stream = assembly.GetManifestResourceStream(Path.Combine(System.Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "eintraege.json") /*"SaveUp.Resources.eintraege.json"*/);
    //var file = Path.Combine(System.Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "eintraege.json");
    using (var reader = new StreamReader(stream))
    {
        var json = reader.ReadToEnd();
        List<MainModel> dataList = JsonConvert.DeserializeObject<List<MainModel>>(json);
        data = new ObservableCollection<MainModel>(dataList);
    }
}
MainPageVM (Write Page)
public Command Einfügen
{
    get
    {
        return new Command(() =>
        {
            // Data ins Json
            _mainModels.Add(DModel);
            Datum = DateTime.Now.ToString("dd.MM.yyyy");
            //var assembly = typeof(ListPageVM).GetTypeInfo().Assembly;
            //FileStream stream = new FileStream("SaveUp.Resources.eintraege.json", FileMode.OpenOrCreate, FileAccess.Write);
            var file = Path.Combine(System.Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "eintraege.json");
            //Stream stream = assembly.GetManifestResourceStream("SaveUp.Resources.eintraege.json");
            if (!File.Exists(file))
            {
                File.Create(file);
            }
            using (var writer = File.AppendText(file))
            {
                string data = JsonConvert.SerializeObject(_mainModels);
                writer.WriteLine(data);
            }
        });
    }
}
You are trying to read and write embedded resources, not files. That won't work. Instead, do this:
var path = Path.Combine(System.Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "eintraege.json");
File.WriteAllText(path, myjson);
To read the data back:
var json = File.ReadAllText(path);
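Expanded into a complete round trip (a sketch; the MainModel properties shown here are assumptions, substitute your real ones):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// Hypothetical model; use your real MainModel properties.
public class MainModel
{
    public string Beschreibung { get; set; }
    public string Datum { get; set; }
}

public static class JsonStorage
{
    static string GetPath() => Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "eintraege.json");

    // Overwrite the whole file each time; File.AppendText (as in the question)
    // would pile up several JSON documents that can't be parsed as one list.
    public static void Save(List<MainModel> items) =>
        File.WriteAllText(GetPath(), JsonConvert.SerializeObject(items));

    public static List<MainModel> Load()
    {
        string path = GetPath();
        if (!File.Exists(path))
            return new List<MainModel>();
        return JsonConvert.DeserializeObject<List<MainModel>>(File.ReadAllText(path));
    }
}
```

Note that File.WriteAllText also creates the file if it doesn't exist, so the File.Exists/File.Create dance from the question isn't needed on the write path.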
So I need to decompress a BZip2 file, use its data, and then remove the decompressed file. The issue is that my method doesn't work when the BZip2 file is too large.
Been using:
using ICSharpCode.SharpZipLib.BZip2;
This is what I've been trying to do:
private List<JObject> listWithObjects;

private void DecompressBzip2File(string bzipPath)
{
    string tempPath = Path.GetRandomFileName();
    FileStream fs = new FileStream(bzipPath, FileMode.Open);
    using (FileStream decompressedStream = File.Create(tempPath))
    {
        BZip2.Decompress(fs, decompressedStream, true);
    }
    LoadJson(tempPath);
    File.Delete(tempPath);
}

private void LoadJson(string tempPath)
{
    List<JObject> jsonList = new List<JObject>();
    using (StreamReader file = new StreamReader(tempPath))
    {
        string line;
        while ((line = file.ReadLine()) != null)
        {
            JObject jObject = JObject.Parse(line);
            jsonList.Add(jObject);
        }
    }
    listWithObjects = jsonList;
}
It works with a .bz2 of ~14 MB, but with a .bz2 of ~900 MB my program just stops (I get no error message, and my RAM usage goes crazy). I read something about buffer size but couldn't figure out how to use it.
Does anyone have a tip on how I could decompress a large BZip2 file? Could you chunk the file into smaller pieces?
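For what it's worth, one way to sidestep both the temp file and the giant `List<JObject>` is to decompress and parse in a single streaming pass. A sketch using SharpZipLib's `BZip2InputStream` (the `processLine` callback is a hypothetical stand-in for whatever you do with each object):

```csharp
using System;
using System.IO;
using ICSharpCode.SharpZipLib.BZip2;
using Newtonsoft.Json.Linq;

public static class Bzip2JsonReader
{
    // Streams the archive line by line; only one decompressed line and one
    // JObject are alive at a time, regardless of archive size.
    public static void ReadLines(string bzipPath, Action<JObject> processLine)
    {
        using (var fs = File.OpenRead(bzipPath))
        using (var bz = new BZip2InputStream(fs))
        using (var reader = new StreamReader(bz))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.Length > 0)
                    processLine(JObject.Parse(line));
            }
        }
    }
}
```

This only helps if the downstream code can consume the objects one at a time; if it genuinely needs all ~900 MB of parsed objects at once, the memory pressure moves rather than disappears.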
I'm using Xsd2Code to serialize my object in order to generate an XML file.
It works fine, but when the file contains a lot of data I get an OutOfMemoryException. Here's the code I use to serialize my object:
/// <summary>
/// Serializes current EntityBase object into an XML document
/// </summary>
/// <returns>string XML value</returns>
public virtual string Serialize() {
    System.IO.StreamReader streamReader = null;
    System.IO.MemoryStream memoryStream = null;
    try {
        memoryStream = new System.IO.MemoryStream();
        Serializer.Serialize(memoryStream, this);
        memoryStream.Seek(0, System.IO.SeekOrigin.Begin);
        streamReader = new System.IO.StreamReader(memoryStream);
        return streamReader.ReadToEnd();
    }
    finally {
        if (streamReader != null) {
            streamReader.Dispose();
        }
        if (memoryStream != null) {
            memoryStream.Dispose();
        }
    }
}
My question: how can I extend the memory buffer, or how can I avoid such an exception?
Regards.
You don't show the complete ToString() output of the OutOfMemoryException so it's hard to say for sure how much this will help, but one possibility would be to write directly to a StringWriter without creating an intermediate MemoryStream, like so:
public virtual string Serialize()
{
    return this.Serialize(Serializer);
}
Using the extension method:
public static class XmlSerializerExtensions
{
    class NullEncodingStringWriter : StringWriter
    {
        // Returning null suppresses the "encoding" attribute in the XML declaration.
        public override Encoding Encoding { get { return null; } }
    }

    public static string Serialize<T>(this T obj, XmlSerializer serializer = null, bool indent = true)
    {
        if (serializer == null)
            serializer = new XmlSerializer(obj.GetType());
        // Precisely emulate the output of http://referencesource.microsoft.com/#System.Xml/System/Xml/Serialization/XmlSerializer.cs,2c706ead96e5c4fb
        // - Indent by 2 characters
        // - Suppress output of the "encoding" tag.
        using (var textWriter = new NullEncodingStringWriter())
        {
            using (var xmlWriter = new XmlTextWriter(textWriter))
            {
                if (indent)
                {
                    xmlWriter.Formatting = Formatting.Indented;
                    xmlWriter.Indentation = 2;
                }
                serializer.Serialize(xmlWriter, obj);
            }
            return textWriter.ToString();
        }
    }
}
You might also consider eliminating the formatting and indentation to save more string memory by setting indent = false.
This will reduce your peak memory footprint somewhat, since it completely eliminates the need to have a large MemoryStream in memory at the same time as the resulting string. It won't reduce your peak memory requirement enormously, however, since the memory taken by the MemoryStream will have been proportional to the memory taken by the final XML string.
Beyond that, I can only suggest trying to stream directly to your database.
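If the string itself isn't actually needed (for example, the XML ultimately lands in a file or a database), streaming the serializer's output straight to disk avoids building any large in-memory buffer at all. A sketch (the `Order` type is a hypothetical stand-in for your generated class):

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Serialization;

// Hypothetical sample type; substitute your Xsd2Code-generated class.
public class Order
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class XmlFileSerializer
{
    // Streams straight to disk, so the full XML string never lives in memory.
    public static void SerializeToFile<T>(T obj, string path)
    {
        var serializer = new XmlSerializer(typeof(T));
        var settings = new XmlWriterSettings { Indent = true, IndentChars = "  " };
        using (var writer = XmlWriter.Create(path, settings))
        {
            serializer.Serialize(writer, obj);
        }
    }
}
```

The same pattern works with any writable Stream, so it can also feed a network or database stream directly instead of a file.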
I'm using Json.Net to consume some seekable streams.
// reset the input stream, in case it was previously read
inputStream.Position = 0;
using (var textReader = new StreamReader(inputStream))
{
    using (var reader = new JsonTextReader(textReader))
    {
        deserialized = serializer.Deserialize(reader, expectedType);
    }
}
However, this method "consumes" the stream, meaning the first contained valid JSON token is removed from the stream.
That is very annoying, and meaningless: a stream's Position exists precisely so that consumption can be emulated, and "reading" generally implies "not modifying".
Of course, I can dump the stream into a MemoryStream to protect my precious source stream, but that's a huge overhead, especially when doing trial-and-error on a deserialization.
If there is a way to just "read" and not "read-and-consume", thanks for your help; I could not find documentation about it (and I hope this post will help others to google the solution ^^).
JsonTextReader is a forward-only reader, meaning it cannot be set back to a position earlier in the JSON to re-read a portion of it, even if the underlying stream supports seeking. However, the reader does not actually "consume" the stream, as you said. If you set the CloseInput property on the reader to false to prevent it from closing the underlying reader and stream when it is disposed, you can position the stream back to the beginning and open a new reader on the same stream to re-read the JSON. Here is a short program to demonstrate reading the same stream twice:
class Program
{
    static void Main(string[] args)
    {
        string json = @"{ ""name"": ""foo"", ""size"": ""10"" }";
        MemoryStream inputStream = new MemoryStream(Encoding.UTF8.GetBytes(json));
        JsonSerializer serializer = new JsonSerializer();
        using (var textReader = new StreamReader(inputStream))
        {
            for (int i = 0; i < 2; i++)
            {
                inputStream.Position = 0;
                using (var reader = new JsonTextReader(textReader))
                {
                    reader.CloseInput = false;
                    Widget w = serializer.Deserialize<Widget>(reader);
                    Console.WriteLine("Name: " + w.Name);
                    Console.WriteLine("Size: " + w.Size);
                    Console.WriteLine();
                }
            }
        }
    }
}

class Widget
{
    public string Name { get; set; }
    public int Size { get; set; }
}
Output:
Name: foo
Size: 10
Name: foo
Size: 10
Fiddle: https://dotnetfiddle.net/fftZV7
A stream may be consumed once read. The solution could be to copy it to a memory or file stream first, as below:
MemoryStream ms = new MemoryStream();
inputStream.CopyTo(ms);
ms.Position = 0;
using (var textReader = new StreamReader(ms))
    (...)
Please let me know if it works.