Deserialize a JSON array stream one item at a time - C#

I serialize an array of large objects to a JSON HTTP response stream. Now I want to deserialize these objects from the stream one at a time. Are there any C# libraries that will let me do this? I've looked at Json.NET, but it seems I'd have to deserialize the complete array of objects at once.
[{large json object},{large json object}.....]
Clarification: I want to read one JSON object from the stream at a time and deserialize it.

In order to read the JSON incrementally, you'll need to use a JsonTextReader in combination with a StreamReader. But you don't necessarily have to read all the JSON manually from the reader. You should be able to leverage the LINQ-to-JSON API to load each large object from the reader so that you can work with it more easily.
For a simple example, say I had a JSON file that looked like this:
[
  {
    "name": "foo",
    "id": 1
  },
  {
    "name": "bar",
    "id": 2
  },
  {
    "name": "baz",
    "id": 3
  }
]
Code to read it incrementally from the file might look something like the following. (In your case you would replace the FileStream with your response stream.)
using (FileStream fs = new FileStream(@"C:\temp\data.json", FileMode.Open, FileAccess.Read))
using (StreamReader sr = new StreamReader(fs))
using (JsonTextReader reader = new JsonTextReader(sr))
{
    while (reader.Read())
    {
        if (reader.TokenType == JsonToken.StartObject)
        {
            // Load each object from the stream and do something with it
            JObject obj = JObject.Load(reader);
            Console.WriteLine(obj["id"] + " - " + obj["name"]);
        }
    }
}
Output of the above would look like this:
1 - foo
2 - bar
3 - baz
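If you have a concrete type for the array items, the same loop can hand each object straight to a JsonSerializer instead of going through JObject. A minimal sketch, assuming a hypothetical Person class matching the sample JSON above:
// Hypothetical POCO matching the sample objects
public class Person
{
    public string name { get; set; }
    public int id { get; set; }
}

// ...
var serializer = new JsonSerializer();
using (StreamReader sr = new StreamReader(stream)) // your response stream
using (JsonTextReader reader = new JsonTextReader(sr))
{
    while (reader.Read())
    {
        if (reader.TokenType == JsonToken.StartObject)
        {
            // Deserialize only the current object; the reader is left positioned
            // just after it, so the loop continues with the next array item.
            Person p = serializer.Deserialize<Person>(reader);
            Console.WriteLine(p.id + " - " + p.name);
        }
    }
}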

I have simplified one of the samples/tests of my parser/deserializer to answer this question's use case more straightforwardly.
Here's the test data:
https://github.com/ysharplanguage/FastJsonParser/tree/master/JsonTest/TestData
(cf. fathers.json.txt)
And here's the sample code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
// Our stuff
using System.Text.Json;
//...
public class FathersData
{
    public Father[] fathers { get; set; }
}

public class Someone
{
    public string name { get; set; }
}

public class Father : Someone
{
    public int id { get; set; }
    public bool married { get; set; }
    // Lists...
    public List<Son> sons { get; set; }
    // ... or arrays for collections, that's fine:
    public Daughter[] daughters { get; set; }
}

public class Child : Someone
{
    public int age { get; set; }
}

public class Son : Child
{
}

public class Daughter : Child
{
    public string maidenName { get; set; }
}
//...
static void FilteredFatherStreamTestSimplified()
{
    // Get our parser:
    var parser = new JsonParser();

    // (Note this will be invoked thanks to the "filters" dictionary below)
    Func<object, object> filteredFatherStreamCallback = obj =>
    {
        Father father = (obj as Father);
        // Output only the individual fathers that the filters decided to keep
        // (i.e., when obj.Type equals typeof(Father)), but don't output (even once)
        // the resulting array (i.e., when obj.Type equals typeof(Father[])):
        if (father != null)
        {
            Console.WriteLine("\t\tId : {0}\t\tName : {1}", father.id, father.name);
        }
        // Do not project the filtered data in any specific way otherwise,
        // just return it deserialized as-is:
        return obj;
    };

    // Prepare our filter, and thus:
    // 1) we want only the last five (5) fathers (array index in the resulting "Father[]" >= 29,995),
    //    (assuming we somehow have prior knowledge that the total count is 30,000)
    // and for each of them,
    // 2) we're interested in deserializing them with only their "id" and "name" properties
    var filters =
        new Dictionary<Type, Func<Type, object, object, int, Func<object, object>>>
        {
            // We don't care about anything but these 2 properties:
            {
                typeof(Father), // Note the type
                (type, obj, key, index) =>
                    ((key as string) == "id" || (key as string) == "name") ?
                        filteredFatherStreamCallback :
                        JsonParser.Skip
            },
            // We want to pick only the last 5 fathers from the source:
            {
                typeof(Father[]), // Note the type
                (type, obj, key, index) =>
                    (index >= 29995) ?
                        filteredFatherStreamCallback :
                        JsonParser.Skip
            }
        };

    // Read, parse, and deserialize fathers.json.txt in a streamed fashion,
    // and using the above filters, along with the callback we've set up:
    using (var reader = new System.IO.StreamReader(FATHERS_TEST_FILE_PATH))
    {
        FathersData data = parser.Parse<FathersData>(reader, filters);

        System.Diagnostics.Debug.Assert
        (
            (data != null) &&
            (data.fathers != null) &&
            (data.fathers.Length == 5)
        );
        foreach (var i in Enumerable.Range(29995, 5))
            System.Diagnostics.Debug.Assert
            (
                (data.fathers[i - 29995].id == i) &&
                !String.IsNullOrEmpty(data.fathers[i - 29995].name)
            );
    }
    Console.ReadKey();
}
The rest of the bits are available here:
https://github.com/ysharplanguage/FastJsonParser
HTH,

This is my solution (combined from different sources, but mainly based on Brian Rogers' solution) for converting a huge JSON file (an array of objects) to an XML file, for any generic object.
JSON looks like this:
{
  "Order": [
    { order object 1 },
    { order object 2 },
    { ... },
    { order object 10000 }
  ]
}
Output XML:
<Order>...</Order>
<Order>...</Order>
<Order>...</Order>
C# code:
XmlWriterSettings xws = new XmlWriterSettings { OmitXmlDeclaration = true };

using (StreamWriter sw = new StreamWriter(xmlFile))
using (FileStream fs = new FileStream(jsonFile, FileMode.Open, FileAccess.Read))
using (StreamReader sr = new StreamReader(fs))
using (JsonTextReader reader = new JsonTextReader(sr))
{
    //sw.Write("<root>");
    while (reader.Read())
    {
        if (reader.TokenType == JsonToken.StartArray)
        {
            while (reader.Read())
            {
                if (reader.TokenType == JsonToken.StartObject)
                {
                    JObject obj = JObject.Load(reader);
                    XmlDocument doc = JsonConvert.DeserializeXmlNode(obj.ToString(), "Order");
                    sw.Write(doc.InnerXml); // one line of XML: <Order>...</Order>
                    sw.Write("\n");
                    // This approach does not produce a strictly valid XML document.
                    // Uncomment the <root> writes at the beginning and end to make it valid XML.
                }
            }
        }
    }
    //sw.Write("</root>");
}

With Cinchoo ETL, an open-source library, you can parse large JSON efficiently with a low memory footprint, since the objects are constructed and returned in a stream-based pull model:
using (var p = new ChoJSONReader(** YOUR JSON FILE **))
{
    foreach (var rec in p)
    {
        Console.WriteLine($"Name: {rec.name}, Id: {rec.id}");
    }
}
For more information, please visit the CodeProject article.
Hope it helps.

I know that the question is old, but it appears in Google searches, and I needed the same thing recently. Another way to deal with stream deserialization is to use JsonSerializer.DeserializeAsyncEnumerable.
Usage looks like:
await using (var readStream = File.Open(filePath, FileMode.Open, FileAccess.Read, FileShare.Read))
{
    await foreach (T item in JsonSerializer.DeserializeAsyncEnumerable<T>(readStream))
    {
        // do something with the item
    }
}
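As far as I know, DeserializeAsyncEnumerable was introduced with .NET 6's System.Text.Json; it streams the elements of a top-level JSON array one at a time, reading from the stream on demand instead of buffering the whole document (T above stands in for your item type).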

Related

Deserializing JSON with numbers as field names using JsonSerializer

I need to deserialize this weird JSON (see the sample below). I've seen some deserialization hints using Dictionary<>, etc., but the problem is that "parameters" contains different data than the previous keys.
Can I somehow get it to work using the JsonSerializer deserializer, without foreach loops and other suspicious implementations? I do need the data from "data" in my application.
Here's some of my code:
using var client = new WebClient();
var json = client.DownloadString(GetJsonString());
var invoicesData = JsonSerializer.Deserialize<JsonMyData>(json, options);
If using Newtonsoft is necessary I might start using it.
With Newtonsoft you can parse and access arbitrary JSON documents, even ones that can't reasonably be deserialized into a .NET object. So something like:
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System;

namespace ConsoleApp35
{
    class Program
    {
        static void Main(string[] args)
        {
            var json = @"
            {
                ""myData"" :
                {
                    ""0"" : { ""data"": { ""A"":1,""B"":2} },
                    ""1"" : { ""data"": { ""A"":1,""B"":2} },
                    ""2"" : { ""data"": { ""A"":1,""B"":2} },
                    ""3"" : { ""data"": { ""A"":1,""B"":2} },
                    ""parameters"" : { ""p"":""a""}
                },
                ""status"":{ }
            }";
            var foo = JObject.Parse(json);
            var a = foo["myData"]["1"]["data"];
            Console.WriteLine(a);
            Console.WriteLine("Hit any key to continue");
            Console.ReadKey();
        }
    }
}
I think you should really consider using Newtonsoft.Json instead of the default JsonSerializer; it is much easier to use in situations like this.
If you want to process this without foreach loops and to access the data in a list-like format, I would suggest using a Dictionary. With a dictionary you can use objects as values, which compensates for the mix of numeric keys (0, 1, 2, ...) and word keys (parameters).
// Classes to deserialize the data we need.
public class MyObject
{
    [JsonProperty("data")]
    public Data Data { get; set; }
}

public class Data
{
    public int A { get; set; }
    public int B { get; set; }
}
Usage in Main
// Read in the JSON
var myData = JsonConvert.DeserializeObject<dynamic>(jsonString)["myData"];
// Convert to a dictionary
Dictionary<string, dynamic> dataAsObjects = myData.ToObject<Dictionary<string, dynamic>>();

string searchFor = "3";
dataAsObjects.TryGetValue(searchFor, out dynamic obj);
if (obj != null)
{
    // Keys that parse as integers ("0", "1", "2", ...) hold data objects;
    // any other key (e.g. "parameters") holds different data.
    if (int.TryParse(searchFor, out _))
    {
        MyObject myObject = obj.ToObject<MyObject>();
        Console.WriteLine($"A:{myObject.Data.A} - B:{myObject.Data.B}");
    }
    else
    {
        // I am not clear on what your parameters class looks like.
        MyParameters myParams = obj.ToObject<MyParameters>();
    }
}
Output
A:1 - B:2
With this method you can access either the numbered entries or the parameters element.
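If you'd rather avoid dynamic, a roughly equivalent sketch using Newtonsoft's LINQ-to-JSON types (reusing the MyObject class above; jsonString and the searchFor value are just placeholders):
JObject root = JObject.Parse(jsonString);
// Expose "myData" as a dictionary keyed by the JSON property names.
var entries = root["myData"].ToObject<Dictionary<string, JObject>>();

string searchFor = "3";
// Numeric keys ("0", "1", ...) hold data objects; "parameters" holds something else.
if (entries.TryGetValue(searchFor, out JObject entry) && int.TryParse(searchFor, out _))
{
    MyObject myObject = entry.ToObject<MyObject>();
    Console.WriteLine($"A:{myObject.Data.A} - B:{myObject.Data.B}");
}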

Append to the last node of an XML file - C#

Each time I get a request from a user, I have to serialize and append it to an existing XML file like this:
<LogRecords>
  <LogRecord>
    <Message>Some message</Message>
    <SendTime>2017-12-13T22:04:40.1109661+01:00</SendTime>
    <Sender>Sender</Sender>
    <Recipient>Name</Recipient>
  </LogRecord>
  <LogRecord>
    <Message>Some message too</Message>
    <SendTime>2017-12-13T22:05:08.5720173+01:00</SendTime>
    <Sender>sender</Sender>
    <Recipient>name</Recipient>
  </LogRecord>
</LogRecords>
Currently I'm serializing the data this way (which works fine):
// "record" here is the LogRecord instance being appended
var stringwriter = new StringWriter();
var serializer = new XmlSerializer(record.GetType());
serializer.Serialize(stringwriter, record);
var smsxmlStr = stringwriter.ToString();

var smsRecordDoc = new XmlDocument();
smsRecordDoc.LoadXml(smsxmlStr);
var smsElement = smsRecordDoc.DocumentElement;

var smsLogFile = new XmlDocument();
smsLogFile.Load("LogRecords.xml");
var serialize = smsLogFile.CreateElement("LogRecord");
serialize.InnerXml = smsElement.InnerXml;
smsLogFile.DocumentElement.AppendChild(serialize);
smsLogFile.Save("LogRecords.xml");
And the properties class
[XmlRoot("LogRecords")]
public class LogRecord
{
    public string Message { get; set; }
    public DateTime SendTime { get; set; }
    public string Sender { get; set; }
    public string Recipient { get; set; }
}
But what I want to do is load the file, navigate to its last element/node, append a new List<LogRecord>, and save, so I can easily deserialize it later.
I have tried various ways using XPath select methods like SelectSingleNode and SelectNodes, but since I am a junior C# developer I haven't managed to make them work properly. Does anyone have an idea how to serialize and append properly?
Thank you
Your approach (and most of the answers given to date) relies on having the whole log file in memory in order to append more records to it. As the log file grows, this could cause issues (such as OutOfMemoryException errors) down the road. Your best bet is to use an approach that streams the data from the original file into a new file. While there might be a few bugs in my untested code, the approach would look something like the following:
// What you are serializing (placeholder - use your actual record instance)
var obj = default(object);

using (var reader = XmlReader.Create("LogRecords.xml"))
using (var writer = XmlWriter.Create("LogRecords2.xml"))
{
    // Start the log file
    writer.WriteStartElement("LogRecords");
    while (reader.Read())
    {
        // When you see a record in the original file, copy it to the output
        if (reader.NodeType == XmlNodeType.Element && reader.LocalName == "LogRecord")
        {
            writer.WriteNode(reader.ReadSubtree(), false);
        }
    }
    // Add your additional record(s) to the output
    var serializer = new XmlSerializer(obj.GetType());
    serializer.Serialize(writer, obj);
    // Close the tag
    writer.WriteEndElement();
}

// Replace the original file with the new file.
System.IO.File.Delete("LogRecords.xml");
System.IO.File.Move("LogRecords2.xml", "LogRecords.xml");
Another idea to consider: does the log file need to be a valid XML file (with the <LogRecords> tag at the start and finish)? If you omit the root tag, you could simply append new records at the bottom of the file (which should be very efficient). You can still read the XML in .NET by creating an XmlReader with the right ConformanceLevel. For example:
var settings = new XmlReaderSettings()
{
    ConformanceLevel = ConformanceLevel.Fragment
};
using (var reader = XmlReader.Create("LogRecords.xml", settings))
{
    // Do something with the records here
}
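To complete the picture, the writing side could then append each new record as its own fragment. A minimal, untested sketch, assuming the LogRecord class from the question (its [XmlRoot("LogRecords")] attribute is overridden so each fragment is written as <LogRecord>):
var record = new LogRecord
{
    Message = "Some message",
    SendTime = DateTime.Now,
    Sender = "sender",
    Recipient = "name"
};

var writerSettings = new XmlWriterSettings
{
    ConformanceLevel = ConformanceLevel.Fragment, // no root element, no XML declaration
    Indent = true
};

// FileMode.Append drops the new fragment at the end of the existing file.
using (var stream = new FileStream("LogRecords.xml", FileMode.Append))
using (var writer = XmlWriter.Create(stream, writerSettings))
{
    var serializer = new XmlSerializer(typeof(LogRecord), new XmlRootAttribute("LogRecord"));
    serializer.Serialize(writer, record);
}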
Try using XML LINQ:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;
using System.Xml.Linq;

namespace ConsoleApplication1
{
    class Program
    {
        const string FILENAME = @"c:\temp\test.xml";

        static void Main(string[] args)
        {
            XDocument doc = XDocument.Load(FILENAME);
            LogRecord record = doc.Descendants("LogRecord").Select(x => new LogRecord()
            {
                Message = (string)x.Element("Message"),
                SendTime = (DateTime)x.Element("SendTime"),
                Sender = (string)x.Element("Sender"),
                Recipient = (string)x.Element("Recipient")
            }).OrderByDescending(x => x.SendTime).FirstOrDefault();
        }
    }

    public class LogRecord
    {
        public string Message { get; set; }
        public DateTime SendTime { get; set; }
        public string Sender { get; set; }
        public string Recipient { get; set; }
    }
}
You can do it using XDocument like this:
XDocument doc = XDocument.Load("LogRecords.xml");

// Append node
XElement logRecord = new XElement("LogRecord");
XElement message = new XElement("Message");
message.Value = "Message";
XElement sendTime = new XElement("SendTime");
sendTime.Value = "SendTime";
XElement sender = new XElement("Sender");
sender.Value = "Sender";
XElement recipient = new XElement("Recipient");
recipient.Value = "Recipient";

logRecord.Add(message);
logRecord.Add(sendTime);
logRecord.Add(sender);
logRecord.Add(recipient);
doc.Element("LogRecords").Add(logRecord);
// End append node

doc.Save("LogRecords.xml");

Is there a size limit for a property to be serialized?

I'm working against an interface that requires an XML document. So far I've been able to serialize most of the objects using XmlSerializer. However, there is one property that is proving problematic. It is supposed to be a collection of objects that wrap a document. The document itself is encoded as a base64 string.
The basic structure is like this:
// snipped out of a parent object
public List<Document> DocumentCollection { get; set; }
// end snip

public class Document
{
    public string DocumentTitle { get; set; }
    public Code DocumentCategory { get; set; }
    /// <summary>
    /// Base64 encoded file
    /// </summary>
    public string BinaryDocument { get; set; }
    public string DocumentTypeText { get; set; }
}
The problem is that smaller values work fine, but if the document is too big the serializer just skips over that document item in the collection.
Is there some limitation that I'm bumping up against?
Update: I changed
public string BinaryDocument { get; set; }
to
public byte[] BinaryDocument { get; set; }
and I'm still getting the same result. The smaller document (~150kb) is serializing just fine, but the rest aren't. To be clear, it's not just the value of the property, it's the entire containing Document object that gets dropped.
UPDATE 2:
Here's the serialization code with a simple repro, taken from a console project I put together. The problem is that this code works fine in the test project. I'm having difficulty packing the full object structure in here, because it's near impossible to use the actual objects in a test case given the complexity of filling the fields, so I tried to cut down the code from the main application. The populated object goes into the serialization code with the DocumentCollection filled with four Documents and comes out with one Document.
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new DocumentContainer();
            var docs = new List<Document>();
            foreach (var f in Directory.GetFiles(@"E:\Software Projects\DA\Test Documents"))
            {
                var fileStream = new MemoryStream(File.ReadAllBytes(f));
                var doc = new Document
                {
                    BinaryDocument = fileStream.ToArray(),
                    DocumentTitle = Path.GetFileName(f)
                };
                docs.Add(doc);
            }
            container.DocumentCollection = docs;

            var serializer = new XmlSerializer(typeof(DocumentContainer));
            var ms = new MemoryStream();
            var writer = XmlWriter.Create(ms);
            serializer.Serialize(writer, container);
            writer.Flush();
            ms.Seek(0, SeekOrigin.Begin);

            var reader = new StreamReader(ms, Encoding.UTF8);
            File.WriteAllText(@"C:\temp\testexport.xml", reader.ReadToEnd());
        }
    }

    public class Document
    {
        public string DocumentTitle { get; set; }
        public byte[] BinaryDocument { get; set; }
    }

    // test class
    public class DocumentContainer
    {
        public List<Document> DocumentCollection { get; set; }
    }
}
XmlSerializer has no limit on the length of a string it can serialize.
.NET, however, has a maximum string length of int.MaxValue. Furthermore, since internally a string is implemented as a contiguous memory buffer, in a 32-bit process you're likely to be unable to allocate a string anywhere near that large due to process space fragmentation. And since a C# base64 string requires roughly 2.67 times the memory of the byte[] array from which it was created (1.33 for the encoding, times 2 since the .NET char type is actually two bytes), you might be getting an OutOfMemoryException encoding a large binary document as a complete base64 string, then swallowing and ignoring it, leaving the BinaryDocument property null.
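To put that factor in perspective, by that arithmetic a single 300 MB binary document would need roughly 300 × 2.67 ≈ 800 MB of contiguous memory just for its base64 string, before any XML around it is even written.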
That being said, there is no reason for you to manually encode your binary documents into base64, because XmlSerializer does this for you automatically. I.e. if I serialize the following class:
public class Document
{
    public string DocumentTitle { get; set; }
    public Code DocumentCategory { get; set; }
    public byte[] BinaryDocument { get; set; }
    public string DocumentTypeText { get; set; }
}
I get the following XML:
<Document>
  <DocumentTitle>my title</DocumentTitle>
  <DocumentCategory>Default</DocumentCategory>
  <BinaryDocument>AAECAwQFBgcICQoLDA0ODxAREhM=</BinaryDocument>
  <DocumentTypeText>document text type</DocumentTypeText>
</Document>
As you can see, BinaryDocument is base64 encoded. Thus you should be able to keep your binary documents in the more compact byte[] representation and still get the XML output you want.
Even better, under the covers, XmlWriter uses System.Xml.Base64Encoder to do this. This class encodes its inputs in chunks, thereby avoiding the excessive memory use and potential out-of-memory exceptions described above.
I can't reproduce the problem you are having. Even with individual files as large as 267 MB to 1.92 GB, I'm not seeing any elements being skipped. The only problem I am seeing is that the temporary var ms = new MemoryStream(); exceeds its 2 GB buffer limit eventually, whereupon an exception gets thrown. I replaced this with a direct stream, and that problem went away:
using (var stream = File.Open(outputPath, FileMode.Create, FileAccess.ReadWrite))
That being said, your design will eventually run up against memory limits for a sufficiently large number of sufficiently large files, since you load all of them into memory before serializing. If this is happening, somewhere in your production code you may be catching and swallowing the OutOfMemoryException without realizing it, leading to the problem you are seeing.
As an alternative, I would suggest a streaming solution where you incrementally copy each file's contents to the XML output from within XmlSerializer by making your Document class implement IXmlSerializable:
public class Document : IXmlSerializable
{
    public string DocumentPath { get; set; }

    public string DocumentTitle
    {
        get
        {
            if (DocumentPath == null)
                return null;
            return Path.GetFileName(DocumentPath);
        }
    }

    const string DocumentTitleName = "DocumentTitle";
    const string BinaryDocumentName = "BinaryDocument";

    #region IXmlSerializable Members

    System.Xml.Schema.XmlSchema IXmlSerializable.GetSchema()
    {
        return null;
    }

    void ReadXmlElement(XmlReader reader)
    {
        if (reader.Name == DocumentTitleName)
            DocumentPath = reader.ReadElementContentAsString();
    }

    void IXmlSerializable.ReadXml(XmlReader reader)
    {
        reader.ReadXml(null, ReadXmlElement);
    }

    void IXmlSerializable.WriteXml(XmlWriter writer)
    {
        writer.WriteElementString(DocumentTitleName, DocumentTitle ?? "");
        if (DocumentPath != null)
        {
            try
            {
                using (var stream = File.OpenRead(DocumentPath))
                {
                    // Write the start element if the file was successfully opened
                    writer.WriteStartElement(BinaryDocumentName);
                    try
                    {
                        var buffer = new byte[6 * 1024];
                        int read;
                        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                            writer.WriteBase64(buffer, 0, read);
                    }
                    finally
                    {
                        // Write the end element even if an error occurred while streaming the file.
                        writer.WriteEndElement();
                    }
                }
            }
            catch (Exception ex)
            {
                // You could log the exception as an element or as a comment, as you prefer.
                // Log as a comment:
                writer.WriteComment("Caught exception with message: " + ex.Message);
                writer.WriteComment("Exception details:");
                writer.WriteComment(ex.ToString());
                // Log as an element:
                writer.WriteElementString("ExceptionMessage", ex.Message);
                writer.WriteElementString("ExceptionDetails", ex.ToString());
            }
        }
    }

    #endregion
}

// test class
public class DocumentContainer
{
    public List<Document> DocumentCollection { get; set; }
}
public static class XmlSerializationExtensions
{
    public static void ReadXml(this XmlReader reader, Action<IList<XAttribute>> readXmlAttributes, Action<XmlReader> readXmlElement)
    {
        if (reader.NodeType != XmlNodeType.Element)
            throw new InvalidOperationException("reader.NodeType != XmlNodeType.Element");
        if (readXmlAttributes != null)
        {
            var attributes = new List<XAttribute>(reader.AttributeCount);
            while (reader.MoveToNextAttribute())
            {
                attributes.Add(new XAttribute(XName.Get(reader.Name, reader.NamespaceURI), reader.Value));
            }
            // Move the reader back to the element node.
            reader.MoveToElement();
            readXmlAttributes(attributes);
        }
        if (reader.IsEmptyElement)
        {
            reader.Read();
            return;
        }
        reader.ReadStartElement(); // Advance to the first sub element of the wrapper element.
        while (reader.NodeType != XmlNodeType.EndElement)
        {
            if (reader.NodeType != XmlNodeType.Element)
                // Comment, whitespace
                reader.Read();
            else
            {
                using (var subReader = reader.ReadSubtree())
                {
                    while (subReader.NodeType != XmlNodeType.Element) // Read past XmlNodeType.None
                        if (!subReader.Read())
                            break;
                    if (readXmlElement != null)
                        readXmlElement(subReader);
                }
                reader.Read();
            }
        }
        // Move past the end of the wrapper element
        reader.ReadEndElement();
    }
}
Then use it as follows:
public static void SerializeFilesToXml(string directoryPath, string xmlPath)
{
    var docs = from file in Directory.GetFiles(directoryPath)
               select new Document { DocumentPath = file };
    var container = new DocumentContainer { DocumentCollection = docs.ToList() };
    using (var stream = File.Open(xmlPath, FileMode.Create, FileAccess.ReadWrite))
    using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Indent = true, IndentChars = " " }))
    {
        new XmlSerializer(container.GetType()).Serialize(writer, container);
    }
    Debug.WriteLine("Wrote " + xmlPath);
}
Using the streaming solution, when serializing 4 files of around 250 MB each, my memory use went up by 0.8 MB. Using the original classes, my memory went up by 1022 MB.
Update
If you need to write your XML to a memory stream, be aware that the C# MemoryStream has a hard maximum stream length of int.MaxValue (i.e. 2 GB), because the underlying storage is simply a byte array. In a 32-bit process the effective maximum length will be much smaller; see OutOfMemoryException while populating MemoryStream: 256MB allocation on 16GB system.
To programmatically check to see if your process is actually 32 bit, see How to determine programmatically whether a particular process is 32-bit or 64-bit. To change to 64 bit, see What is the purpose of the “Prefer 32-bit” setting in Visual Studio 2012 and how does it actually work?.
If you are sure you are running in 64 bit mode and are still exceeding the hard size limits of a MemoryStream, perhaps see alternative to MemoryStream for large data volumes or MemoryStream replacement?.

Error serializing MultiValueDictionary<string,string> with protobuf-net

I am using MultiValueDictionary<string,string> in my project (C#, VS2012, .NET 4.5), which is a great help if you want to have multiple values for each key, but I can't serialize this object with protobuf-net.
I have serialized Dictionary<string,string> with protobuf-net with ease and speed, and MultiValueDictionary inherits from that generic type, so logically there should be no problem serializing it with the same protocol.
Does anyone know a workaround?
This is the error message when I execute my code:
System.InvalidOperationException: Unable to resolve a suitable Add method for System.Collections.Generic.IReadOnlyCollection
Do you really need a dictionary?
If you have fewer than 10,000 items in your dictionary, you can also use a modified list of a data type:
public class YourDataType
{
    public string Key;
    public string Value1;
    public string Value2;
    // add some various data here...
}

public class YourDataTypeCollection : List<YourDataType>
{
    public YourDataType this[string key]
    {
        get
        {
            return this.FirstOrDefault(o => o.Key == key);
        }
        set
        {
            YourDataType old = this[key];
            if (old != null)
            {
                int index = this.IndexOf(old);
                this.RemoveAt(index);
                this.Insert(index, value);
            }
            else
            {
                // Add the new value (the original snippet added "old" here,
                // which would insert null instead of the new item).
                this.Add(value);
            }
        }
    }
}
Use the list like this:
YourDataTypeCollection data = new YourDataTypeCollection();
// add some values like this:
data.Add(new YourDataType() { Key = "key", Value1 = "foo", Value2 = "bar" });
// or like this:
data["key2"] = new YourDataType() { Key = "key2", Value1 = "hello", Value2 = "world" };
// or implement your own method for adding data in the YourDataTypeCollection class

XmlSerializer xser = new XmlSerializer(typeof(YourDataTypeCollection));
// to export data
using (FileStream fs = File.Create("YourFile.xml"))
{
    xser.Serialize(fs, data);
}
// to import data
using (FileStream fs = File.Open("YourFile.xml", FileMode.Open))
{
    data = (YourDataTypeCollection)xser.Deserialize(fs);
}
string value1 = data["key"].Value1;
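If you'd still prefer protobuf-net's speed and compactness over XML, the same list-based design should serialize once the data type is annotated. A minimal sketch, assuming protobuf-net's attribute model (untested):
using ProtoBuf; // protobuf-net

[ProtoContract]
public class YourDataType
{
    [ProtoMember(1)] public string Key;
    [ProtoMember(2)] public string Value1;
    [ProtoMember(3)] public string Value2;
}

// YourDataTypeCollection derives from List<YourDataType>, which protobuf-net
// treats as a repeated field, so the collection itself needs no attributes.

// to export data
using (FileStream fs = File.Create("YourFile.bin"))
{
    Serializer.Serialize(fs, data);
}
// to import data
using (FileStream fs = File.OpenRead("YourFile.bin"))
{
    data = Serializer.Deserialize<YourDataTypeCollection>(fs);
}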

Why can I no longer deserialize this data?

I am using the following code to serialize some data and save it to file:
MemoryStream stream = new MemoryStream();
DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(Item));
Item item = ((Item)list.SelectedItems[0].Tag);
serializer.WriteObject(stream, item);

var filepath = Program.appDataPath + list.SelectedItems[0].Group.Name + ".group";
stream.Position = 0;
using (FileStream fileStream = new FileStream(filepath, FileMode.Create))
{
    stream.WriteTo(fileStream);
}
And later on, I'm trying to read back that data from file and insert it into ListView:
private void OpenFiles()
{
    // DEBUG ONLY:
    // Read into memorystream and filestream.
    Console.WriteLine("Attempting to open note.");
    bool canLoad = false;
    foreach (string file in Directory.GetFiles(Program.appDataPath))
    {
        if (file.EndsWith(".group"))
        {
            MemoryStream stream = new MemoryStream();
            DataContractJsonSerializer serializer =
                new DataContractJsonSerializer(typeof(List<Item>));
            using (FileStream fileStream = new FileStream(file, FileMode.Open))
            {
                fileStream.CopyTo(stream);
            }
            stream.Position = 0;
            //List<Withdrawal> tempWithList = new List<Withdrawal>();
            foreach (Item item in (List<Item>)serializer.ReadObject(stream))
            {
                Console.WriteLine(item.Title + " " + item.Group.Name);
                Item.Items.Add(item);
            }
            //Console.WriteLine("Got file \{file}");
            //if (file.EndsWith(".group"))
            //{
            //    Console.WriteLine("File is a group.");
            //    MemoryStream stream = new MemoryStream();
            //    DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(List<Item>));
            //    using (FileStream fileStream = new FileStream(file, FileMode.Open))
            //    {
            //        fileStream.CopyTo(stream);
            //    }
            //    Console.WriteLine("Got stream");
            //    stream.Position = 0;
            //    try
            //    {
            //        Item.Items = (List<Item>)serializer.ReadObject(stream);
            //        Console.WriteLine("WTF?");
            //    }
            //    catch (Exception exception)
            //    {
            //        Console.WriteLine(exception.Message);
            //    }
            //    Console.WriteLine(Item.Items.Count);
            //    canLoad = true;
            //}
            //else Console.WriteLine("File is not a group.");
        }
        if (canLoad)
        {
            //list.Items.Clear();
            foreach (Item item in Item.Items)
            {
                ListViewGroup group = new ListViewGroup(item.Group.Name);
                list.Groups.Add(group);
                list.Items.Add(new ListViewItem(item.Title, group));
                Console.WriteLine(item.Title + " " + item.Group.Name);
            }
        }
    }
}
Now, the above exact code works in an older program (a few months old), but it's not working in this new program, and I have no idea why. I have set breakpoints EVERYWHERE, and that has proven to be kind of pointless in this case.
One thing I did learn from setting a breakpoint is that even though the stream contains the expected data, the very next second, when it gets added to the list, it is NULL. There is nothing in the list. I've run out of ideas, and Google wasn't much help.
Group.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.Threading.Tasks;

namespace Notes
{
    [DataContract]
    public class Group
    {
        [DataMember]
        public string Name { get; set; }
    }
}
Item.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.Threading.Tasks;

namespace Notes
{
    [DataContract]
    [Serializable]
    public class Item : Note
    {
        [DataMember]
        public static List<Item> Items = new List<Item>();

        [DataContract]
        public enum ItemType
        {
            Group,
            Note
        }

        [DataMember]
        public ItemType Type { get; set; }

        [DataMember]
        public int Index { get; set; }
    }
}
Note.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.Threading.Tasks;

namespace Notes
{
    [DataContract]
    public class Note
    {
        [DataMember]
        public string Title { get; set; }

        [DataMember]
        public string Content { get; set; }

        [DataMember]
        public Group Group;

        [DataContract]
        public enum NoteImportance
        {
            Important,
            Neutral,
            NotImportant
        }

        [DataMember]
        public NoteImportance Importance { get; set; }

        [DataMember]
        public bool Protected { get; set; }
    }
}
How can I deserialize these objects (read them from file and get them into a List or ListView)? I've done this before, but for some reason it's not working anymore.
Any help would be appreciated.
When you create a .group file, you serialize a single Item:
DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(Item));
// And later
serializer.WriteObject(stream, item);
But when you deserialize the contents of a .group file, you try to deserialize a List<Item>:
DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(List<Item>));
// And later
foreach (Item item in (List<Item>)serializer.ReadObject(stream))
{
    Item.Items.Add(item);
}
Those types don't match. But in order to deserialize the data you previously serialized, they need to match - or at least, the deserialized type cannot be a collection if the serialized type was not, because collections are serialized as JSON arrays while other classes are serialized as JSON objects (name/value pairs).
Since it looks like each .group file has a single item, and there are many .group files in the directory you are scanning, you probably just want to do
var serializer = new DataContractJsonSerializer(typeof(Item));
// And later
var item = (Item)serializer.ReadObject(stream);
if (item != null)
    Item.Items.Add(item);
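Alternatively, if you want to keep deserializing a List<Item>, make the writing side match by serializing the whole list instead of a single item. A sketch under that assumption (itemList standing in for whatever List<Item> you want to persist):
var serializer = new DataContractJsonSerializer(typeof(List<Item>));
using (var fileStream = new FileStream(filepath, FileMode.Create))
{
    // Write a JSON array so it matches the reading side's List<Item>.
    serializer.WriteObject(fileStream, itemList);
}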
