Got a little bit of a problem. I have a program that builds an ObservableCollection of Users. The User has a FirstName, LastName, and Image. I can add the user to the observable collection, but I also want to save the collection and load it every time I reopen the program.
My problem is that while it's fairly easy to save a first name and last name, the writer can't write the image to the XML file. Is there any way around this?
Here's what I have so far:
the observable collection:
ObservableCollection<VendorClass> ProfileList = new ObservableCollection<VendorClass>();
the problematic writer:
XmlSerializer xs = new XmlSerializer(typeof(ObservableCollection<VendorClass>));

using (StreamWriter wr = new StreamWriter("vendors.xml")) // Data/customers.xml
{
    xs.Serialize(wr, ProfileList);
}
Any ideas? And if there does exist a solution to write in an image, is there a viable way to read it out again?
XmlSerializer can't serialize or deserialize WPF image types like BitmapImage. It is, however, able to (de)serialize byte arrays. So you may add a byte[] ImageBuffer property to your User class, which contains the binary image data. You would then also set the XmlIgnore attribute on the Image property to suppress its (de)serialization, and set XmlElement("Image") on the ImageBuffer property to (de)serialize it as <Image>...</Image>.
public class User
{
    public string FirstName { get; set; }

    public string LastName { get; set; }

    [XmlIgnore]
    public BitmapSource Image { get; set; }

    [XmlElement("Image")]
    public byte[] ImageBuffer
    {
        get
        {
            byte[] imageBuffer = null;

            if (Image != null)
            {
                using (var stream = new MemoryStream())
                {
                    var encoder = new PngBitmapEncoder(); // or some other encoder
                    encoder.Frames.Add(BitmapFrame.Create(Image));
                    encoder.Save(stream);
                    imageBuffer = stream.ToArray();
                }
            }

            return imageBuffer;
        }
        set
        {
            if (value == null)
            {
                Image = null;
            }
            else
            {
                using (var stream = new MemoryStream(value))
                {
                    var decoder = BitmapDecoder.Create(stream,
                        BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
                    Image = decoder.Frames[0];
                }
            }
        }
    }
}
This approach has also been suggested for properties of type Bitmap in this answer.
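Loading the collection back on startup is then just the mirror image of your save code (a minimal sketch, assuming the element type is the User class above and the same "vendors.xml" path as in the question):

var xs = new XmlSerializer(typeof(ObservableCollection<User>));
using (var reader = new StreamReader("vendors.xml"))
{
    var profileList = (ObservableCollection<User>)xs.Deserialize(reader);
}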
You would base64 encode the image to convert it to a string and then write that into a CDATA section. See How do you serialize a string as CDATA using XmlSerializer?
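A minimal sketch of that pattern, adapted from the linked question (the ImageBytes and ImageCData property names are illustrative):

[XmlIgnore]
public byte[] ImageBytes { get; set; }

[XmlElement("Image")]
public XmlCDataSection ImageCData
{
    get
    {
        return ImageBytes == null
            ? null
            : new XmlDocument().CreateCDataSection(Convert.ToBase64String(ImageBytes));
    }
    set
    {
        ImageBytes = value == null ? null : Convert.FromBase64String(value.Value);
    }
}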
I want to add an image file directly to SQL Server, receiving id, carId, and uploadDate as JSON and imagePath as a file byte array.
How can I do that? What should I change?
The default model binder cannot bind an uploaded file to a byte array property; it would just be set to null.
As @viveknuna mentioned, if possible, you can try to use IFormFile to process or save the uploaded file.
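A minimal sketch of the IFormFile route (the model and action names here are illustrative, not from the question):

public class CarImageModel
{
    public int Id { get; set; }
    public int CarId { get; set; }
    public DateTime UploadDate { get; set; }
    public IFormFile ImagePath { get; set; }
}

[HttpPost]
public IActionResult Upload([FromForm] CarImageModel model)
{
    using (var ms = new MemoryStream())
    {
        model.ImagePath.CopyTo(ms);
        byte[] fileBytes = ms.ToArray(); // save to the database from here
    }
    return Ok();
}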
Besides, if you really want to bind the selected file to a byte array, you can try to implement and use a custom model binder, like below.
public class ImageToByteArrayModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        if (bindingContext == null)
        {
            throw new ArgumentNullException(nameof(bindingContext));
        }

        // ...
        // implement it based on your actual requirement
        // code logic here
        // ...

        var file = bindingContext.ActionContext.HttpContext.Request.Form.Files["ImagePath"];
        if (file?.Length > 0)
        {
            byte[] fileBytes;
            using (var ms = new MemoryStream())
            {
                file.CopyTo(ms);
                fileBytes = ms.ToArray();
            }
            bindingContext.Result = ModelBindingResult.Success(fileBytes);
        }

        return Task.CompletedTask;
    }
}
Apply the ModelBinder attribute to the model property:
[ModelBinder(BinderType = typeof(ImageToByteArrayModelBinder))]
public byte[] ImagePath { get; set; }
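With that in place, a hypothetical controller action receiving the posted form gets the populated byte array (the model and action names are illustrative):

[HttpPost]
public IActionResult Create([FromForm] CarImageViewModel model)
{
    // model.ImagePath was filled by ImageToByteArrayModelBinder;
    // save id, carId, uploadDate and the image bytes to SQL Server from here
    return Ok();
}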
I am looking for a fast and simple way to implement this paradigm:
MyByteArray mb = new MyByteArray();
mb.Add<byte>(bytevalue);
mb.Add<float>(floatvalue);
mb.Add<string>(str);
mb.Add<MyClass>(object);
And then get a byte[] from mb to send it as a byte packet via an RPC call (to be decoded on the other side using the same technique).
I've found MemoryStream, but it looks like too much overhead for this simple operation.
Can you help me? Thank you.
What you are looking for is BinaryWriter. It still needs a Stream to write to, for purely logical reasons, and the only Stream that fits your need is MemoryStream.
Are you worried about the performance overhead? You can create your MemoryStream from an existing byte array:
byte[] buffer = new byte[1024];

using (var memoryStream = new MemoryStream(buffer))
{
    using (var binaryWriter = new BinaryWriter(memoryStream))
    {
        binaryWriter.Write(1.2F);  // float
        binaryWriter.Write(1.9D);  // double
        binaryWriter.Write(1);     // integer
        binaryWriter.Write("str"); // string
    }
}

// buffer is filled with your data now.
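Decoding on the other side mirrors this with BinaryReader; the values must be read back in the same order they were written (a sketch under that assumption):

using (var memoryStream = new MemoryStream(buffer))
using (var binaryReader = new BinaryReader(memoryStream))
{
    float f = binaryReader.ReadSingle();   // 1.2F
    double d = binaryReader.ReadDouble();  // 1.9D
    int i = binaryReader.ReadInt32();      // 1
    string s = binaryReader.ReadString();  // "str"
}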
A tricky way to achieve this is to use a combination of built-in .NET classes:
class Program
{
    static void Main(string[] args)
    {
        Program program = new Program();
        var listBytes = new List<byte>();
        // AddRange rather than Add: CastToBytes returns a byte[], not a single byte
        listBytes.AddRange(program.CastToBytes("test"));
        listBytes.AddRange(program.CastToBytes(5));
    }
Note
for a custom object you have to define an implicit operator specifying how its properties, or the whole object, should be converted
    public byte[] CastToBytes<T>(T value)
    {
        // this will cover most of the primitive types
        if (typeof(T).IsValueType)
        {
            return BitConverter.GetBytes((dynamic)value);
        }

        if (typeof(T) == typeof(string))
        {
            return Encoding.UTF8.GetBytes((dynamic)value);
        }

        // for a custom object you have to define the rules
        var formatter = new BinaryFormatter();
        using (var memoryStream = new MemoryStream())
        {
            formatter.Serialize(memoryStream, value);
            // ToArray() rather than GetBuffer(): GetBuffer() returns the whole
            // underlying buffer, including unused trailing bytes
            return memoryStream.ToArray();
        }
    }
}
This looks like a case for Protocol Buffers; you could look at protobuf-net.
First, let's decorate the classes.
[ProtoContract]
class User
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2)]
    public string Name { get; set; }
}

[ProtoContract]
class Message
{
    [ProtoMember(1)]
    public byte Type { get; set; }

    [ProtoMember(2)]
    public float Value { get; set; }

    [ProtoMember(3)]
    public User Sender { get; set; }
}
Then we create our message.
var msg = new Message
{
    Type = 1,
    Value = 1.1f,
    Sender = new User
    {
        Id = 8,
        Name = "user"
    }
};
And now, we can use ProtoBuf's serializer to do all our work.
// memory
using (var mem = new MemoryStream())
{
    Serializer.Serialize<Message>(mem, msg);
    // ToArray() rather than GetBuffer(), which would include unused trailing bytes
    var bytes = mem.ToArray();
}

// file
using (var file = File.Create("message.bin"))
    Serializer.Serialize<Message>(file, msg);
I'm working against an interface that requires an XML document. So far I've been able to serialize most of the objects using XmlSerializer. However, there is one property that is proving problematic. It is supposed to be a collection of objects that wrap a document. The document itself is encoded as a base64 string.
The basic structure is like this:
// snipped out of a parent object
public List<Document> DocumentCollection { get; set; }
// end snip

public class Document
{
    public string DocumentTitle { get; set; }
    public Code DocumentCategory { get; set; }

    /// <summary>
    /// Base64 encoded file
    /// </summary>
    public string BinaryDocument { get; set; }

    public string DocumentTypeText { get; set; }
}
The problem is that smaller values work fine, but if the document is too big the serializer just skips over that document item in the collection.
Is there some limitation that I'm bumping up against?
Update: I changed
public string BinaryDocument { get; set; }
to
public byte[] BinaryDocument { get; set; }
and I'm still getting the same result. The smaller document (~150kb) is serializing just fine, but the rest aren't. To be clear, it's not just the value of the property, it's the entire containing Document object that gets dropped.
UPDATE 2:
Here's the serialization code with a simple repro, taken from a console project I put together. The problem is that this code works fine in the test project. I'm having difficulty packing the full object structure in here, because it's near impossible to use the actual objects in a test case given the complexity of filling the fields, so I tried to cut down the code in the main application. The populated object goes into the serialization code with the DocumentCollection filled with four Documents and comes out with one Document.
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Serialization;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            var container = new DocumentContainer();
            var docs = new List<Document>();

            foreach (var f in Directory.GetFiles(@"E:\Software Projects\DA\Test Documents"))
            {
                var fileStream = new MemoryStream(File.ReadAllBytes(f));
                var doc = new Document
                {
                    BinaryDocument = fileStream.ToArray(),
                    DocumentTitle = Path.GetFileName(f)
                };
                docs.Add(doc);
            }

            container.DocumentCollection = docs;

            var serializer = new XmlSerializer(typeof(DocumentContainer));
            var ms = new MemoryStream();
            var writer = XmlWriter.Create(ms);
            serializer.Serialize(writer, container);
            writer.Flush();
            ms.Seek(0, SeekOrigin.Begin);

            var reader = new StreamReader(ms, Encoding.UTF8);
            File.WriteAllText(@"C:\temp\testexport.xml", reader.ReadToEnd());
        }
    }

    public class Document
    {
        public string DocumentTitle { get; set; }
        public byte[] BinaryDocument { get; set; }
    }

    // test class
    public class DocumentContainer
    {
        public List<Document> DocumentCollection { get; set; }
    }
}
XmlSerializer has no limit on the length of a string it can serialize.
.NET, however, has a maximum string length of int.MaxValue. Furthermore, since a string is internally implemented as a contiguous memory buffer, in a 32-bit process you're likely to be unable to allocate a string anywhere near that large, due to process space fragmentation. And since a base64 string in C# requires roughly 2.67 times the memory of the byte[] array from which it was created (1.33 for the encoding, times 2 since the .NET char type is two bytes), you might be getting an OutOfMemoryException while encoding a large binary document as a complete base64 string, then swallowing and ignoring it, leaving the BinaryDocument property null.
That being said, there is no reason for you to manually encode your binary documents into base64, because XmlSerializer does this for you automatically. I.e. if I serialize the following class:
public class Document
{
    public string DocumentTitle { get; set; }
    public Code DocumentCategory { get; set; }
    public byte[] BinaryDocument { get; set; }
    public string DocumentTypeText { get; set; }
}
I get the following XML:
<Document>
    <DocumentTitle>my title</DocumentTitle>
    <DocumentCategory>Default</DocumentCategory>
    <BinaryDocument>AAECAwQFBgcICQoLDA0ODxAREhM=</BinaryDocument>
    <DocumentTypeText>document text type</DocumentTypeText>
</Document>
As you can see, BinaryDocument is base64 encoded. Thus you should be able to keep your binary documents in a more compact byte [] representation and still get the XML output you want.
Even better, under the covers, XmlWriter uses System.Xml.Base64Encoder to do this. This class encodes its inputs in chunks, thereby avoiding the excessive memory use and potential out-of-memory exceptions described above.
I can't reproduce the problem you are having. Even with individual files as large as 267 MB to 1.92 GB, I'm not seeing any elements being skipped. The only problem I am seeing is that the temporary var ms = new MemoryStream(); exceeds its 2 GB buffer limit eventually, whereupon an exception gets thrown. I replaced this with a direct stream, and that problem went away:
using (var stream = File.Open(outputPath, FileMode.Create, FileAccess.ReadWrite))
That being said, your design will eventually run up against memory limits for a sufficiently large number of sufficiently large files, since you load all of them into memory before serializing. If this is happening, somewhere in your production code you may be catching and swallowing the OutOfMemoryException without realizing it, leading to the problem you are seeing.
As an alternative, I would suggest a streaming solution where you incrementally copy each file's contents to the XML output from within XmlSerializer by making your Document class implement IXmlSerializable:
public class Document : IXmlSerializable
{
    public string DocumentPath { get; set; }

    public string DocumentTitle
    {
        get
        {
            if (DocumentPath == null)
                return null;
            return Path.GetFileName(DocumentPath);
        }
    }

    const string DocumentTitleName = "DocumentTitle";
    const string BinaryDocumentName = "BinaryDocument";

    #region IXmlSerializable Members

    System.Xml.Schema.XmlSchema IXmlSerializable.GetSchema()
    {
        return null;
    }

    void ReadXmlElement(XmlReader reader)
    {
        if (reader.Name == DocumentTitleName)
            DocumentPath = reader.ReadElementContentAsString();
    }

    void IXmlSerializable.ReadXml(XmlReader reader)
    {
        reader.ReadXml(null, ReadXmlElement);
    }

    void IXmlSerializable.WriteXml(XmlWriter writer)
    {
        writer.WriteElementString(DocumentTitleName, DocumentTitle ?? "");
        if (DocumentPath != null)
        {
            try
            {
                using (var stream = File.OpenRead(DocumentPath))
                {
                    // Write the start element if the file was successfully opened
                    writer.WriteStartElement(BinaryDocumentName);
                    try
                    {
                        var buffer = new byte[6 * 1024];
                        int read;
                        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                            writer.WriteBase64(buffer, 0, read);
                    }
                    finally
                    {
                        // Write the end element even if an error occurred while streaming the file.
                        writer.WriteEndElement();
                    }
                }
            }
            catch (Exception ex)
            {
                // You could log the exception as an element or as a comment, as you prefer.
                // Log as a comment:
                writer.WriteComment("Caught exception with message: " + ex.Message);
                writer.WriteComment("Exception details:");
                writer.WriteComment(ex.ToString());
                // Log as an element:
                writer.WriteElementString("ExceptionMessage", ex.Message);
                writer.WriteElementString("ExceptionDetails", ex.ToString());
            }
        }
    }

    #endregion
}

// test class
public class DocumentContainer
{
    public List<Document> DocumentCollection { get; set; }
}
public static class XmlSerializationExtensions
{
    public static void ReadXml(this XmlReader reader, Action<IList<XAttribute>> readXmlAttributes, Action<XmlReader> readXmlElement)
    {
        if (reader.NodeType != XmlNodeType.Element)
            throw new InvalidOperationException("reader.NodeType != XmlNodeType.Element");

        if (readXmlAttributes != null)
        {
            var attributes = new List<XAttribute>(reader.AttributeCount);
            while (reader.MoveToNextAttribute())
            {
                attributes.Add(new XAttribute(XName.Get(reader.Name, reader.NamespaceURI), reader.Value));
            }
            // Move the reader back to the element node.
            reader.MoveToElement();
            readXmlAttributes(attributes);
        }

        if (reader.IsEmptyElement)
        {
            reader.Read();
            return;
        }

        reader.ReadStartElement(); // Advance to the first sub-element of the wrapper element.
        while (reader.NodeType != XmlNodeType.EndElement)
        {
            if (reader.NodeType != XmlNodeType.Element)
                // Comment, whitespace
                reader.Read();
            else
            {
                using (var subReader = reader.ReadSubtree())
                {
                    while (subReader.NodeType != XmlNodeType.Element) // Read past XmlNodeType.None
                        if (!subReader.Read())
                            break;
                    if (readXmlElement != null)
                        readXmlElement(subReader);
                }
                reader.Read();
            }
        }

        // Move past the end of the wrapper element
        reader.ReadEndElement();
    }
}
Then use it as follows:
public static void SerializeFilesToXml(string directoryPath, string xmlPath)
{
    var docs = from file in Directory.GetFiles(directoryPath)
               select new Document { DocumentPath = file };
    var container = new DocumentContainer { DocumentCollection = docs.ToList() };

    using (var stream = File.Open(xmlPath, FileMode.Create, FileAccess.ReadWrite))
    using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Indent = true, IndentChars = " " }))
    {
        new XmlSerializer(container.GetType()).Serialize(writer, container);
    }
    Debug.WriteLine("Wrote " + xmlPath);
}
Using the streaming solution, when serializing 4 files of around 250 MB each, my memory use went up by 0.8 MB. Using the original classes, my memory went up by 1022 MB.
Update
If you need to write your XML to a memory stream, be aware that the C# MemoryStream has a hard maximum stream length of int.MaxValue (i.e. 2 GB), because the underlying storage is simply a byte array. In a 32-bit process the effective max length will be much smaller; see OutOfMemoryException while populating MemoryStream: 256MB allocation on 16GB system.
To programmatically check to see if your process is actually 32 bit, see How to determine programmatically whether a particular process is 32-bit or 64-bit. To change to 64 bit, see What is the purpose of the “Prefer 32-bit” setting in Visual Studio 2012 and how does it actually work?.
If you are sure you are running in 64 bit mode and are still exceeding the hard size limits of a MemoryStream, perhaps see alternative to MemoryStream for large data volumes or MemoryStream replacement?.
I'm trying to read the metadata of an mp3 file stored in IsolatedStorage using TagLib.
I know TagLib normally only takes a file path as input, but as WP uses a sandboxed environment, I need to use a stream.
Following this tutorial (http://www.geekchamp.com/articles/reading-and-writing-metadata-tags-with-taglib) I created an IFileAbstraction implementation:
public class SimpleFile
{
    public SimpleFile(string Name, Stream Stream)
    {
        this.Name = Name;
        this.Stream = Stream;
    }

    public string Name { get; set; }
    public Stream Stream { get; set; }
}

public class SimpleFileAbstraction : TagLib.File.IFileAbstraction
{
    private SimpleFile file;

    public SimpleFileAbstraction(SimpleFile file)
    {
        this.file = file;
    }

    public string Name
    {
        get { return file.Name; }
    }

    public System.IO.Stream ReadStream
    {
        get { return file.Stream; }
    }

    public System.IO.Stream WriteStream
    {
        get { return file.Stream; }
    }

    public void CloseStream(System.IO.Stream stream)
    {
        stream.Position = 0;
    }
}
Normally I would now be able to do this:
using (IsolatedStorageFileStream filestream = new IsolatedStorageFileStream(name, FileMode.OpenOrCreate, FileAccess.ReadWrite, store))
{
    filestream.Write(data, 0, data.Length);

    // read id3 tags and add
    SimpleFile newfile = new SimpleFile(name, filestream);
    TagLib.Tag tags = TagLib.File.Create(newfile);
}
The problem is that TagLib.File.Create still doesn't want to accept the SimpleFile object.
How do I make this work?
Your code doesn't compile because TagLib.File.Create wants an IFileAbstraction on input, and you're giving it a SimpleFile instance, which doesn't implement the interface. Here's one way to fix it:
// read id3 tags and add
SimpleFile file1 = new SimpleFile(name, filestream);
SimpleFileAbstraction file2 = new SimpleFileAbstraction(file1);
TagLib.File tagFile = TagLib.File.Create(file2); // Create returns a TagLib.File, not a TagLib.Tag
TagLib.Tag tags = tagFile.Tag;
Don't ask me why we need the SimpleFile class instead of passing a name and stream directly into SimpleFileAbstraction - it was in your sample.
I want to be able to save any C# object in a single column of a SQL database table. I am not clear how to convert the object into a varbinary or get it back from a varbinary. My SystemContextObjects table has an OptionValue column that is Varbinary(max).
var dc1 = new DataContextDataContext();
var optionUpdate = dc1.SystemContextObjects.SingleOrDefault(o => o.OptionId == OptionId && o.OptionKey == OptionKey);
if (optionUpdate != null)
{
    optionUpdate.OptionValue = Value; // <===== NEED HELP HERE...
    optionUpdate.DateCreated = DateTime.Now;
    optionUpdate.PageName = PageName;

    var ChangeSet = dc1.GetChangeSet();
    if (ChangeSet.Updates.Count > 0)
    {
        dc1.SubmitChanges();
        return;
    }
}
You can use a binary serializer for this, e.g. the BinaryFormatter, but your classes must be serializable and marked as such. Here is a simple example:
You have a simple Person class and mark it as serializable:
[Serializable]
public class Person
{
    public string Name { get; set; }
    public string Address { get; set; }
}
You can then serialize it using a memory stream to extract a byte array representing the object:
Person p = new Person() { Name = "Fred Fish", Address = "2 Some Place" };

using (MemoryStream ms = new MemoryStream())
{
    BinaryFormatter formatter = new BinaryFormatter();
    formatter.Serialize(ms, p);
    ms.Position = 0;
    byte[] personData = ms.ToArray(); // here's your data!
}
To re-create a Person object from a byte array, you use deserialization, which works similarly:
byte[] personData = ...

using (MemoryStream ms = new MemoryStream(personData))
{
    BinaryFormatter formatter = new BinaryFormatter();
    Person p = (Person)formatter.Deserialize(ms);
}
I wound up using JSON to accomplish this. I serialize/deserialize the class to/from a string and store that. Works fine.
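For reference, a minimal sketch of that JSON approach (assuming Newtonsoft.Json and the Person class above; OptionValue is the varbinary column from the question, which LINQ to SQL exposes as a Binary that is implicitly convertible from byte[]):

// serialize to JSON and store the UTF-8 bytes in the varbinary column
string json = JsonConvert.SerializeObject(p);
optionUpdate.OptionValue = Encoding.UTF8.GetBytes(json);

// read it back
string restoredJson = Encoding.UTF8.GetString(optionUpdate.OptionValue.ToArray());
Person restored = JsonConvert.DeserializeObject<Person>(restoredJson);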