Stream object list as a file from an API - C#

Is it possible for an API endpoint to stream data from an IQueryable populated by Entity Framework (backed by a SQL database) and return it as a CSV file?
What I have so far: I loop through all the items in my list and write them to a temporary file, then stream that file back as the result of my GET API call.
Directory.CreateDirectory($"{Environment.CurrentDirectory}/TmpData");
string tmpFileName = $"{Environment.CurrentDirectory}/TmpData/{Guid.NewGuid().ToString()}.csv";
using (FileStream file = new FileStream(tmpFileName, FileMode.CreateNew))
{
using (StreamWriter fileStream = new StreamWriter(file))
{
LookupItem liTmp = new();
await fileStream.WriteAsync(nameof(liTmp.LookupItemId));
await fileStream.WriteAsync(",");
await fileStream.WriteAsync(nameof(liTmp.Code));
await fileStream.WriteAsync(",");
await fileStream.WriteAsync(nameof(liTmp.Label));
await fileStream.WriteLineAsync();
foreach (var li in items)
{
await fileStream.WriteAsync(li.LookupItemId.ToString());
await fileStream.WriteAsync(",");
await fileStream.WriteAsync(li.Code?.ToString());
await fileStream.WriteAsync(",");
await fileStream.WriteAsync(li.Label?.ToString());
await fileStream.WriteLineAsync();
}
}
}
this.Response.StatusCode = 200;
this.Response.Headers.Add(HeaderNames.ContentDisposition, $"attachment; filename=\"{request.LookupTableType} Data {DateTime.Now:yyyy-MM-dd HH-mm-ss}.csv\"");
this.Response.Headers.Add(HeaderNames.ContentType, "application/octet-stream");
var outputStream = this.Response.Body;
const int bufferSize = 1 << 10;
var buffer = new byte[bufferSize];
// dispose the input stream before deleting the temp file, or the delete will fail on the open handle
using (var inputStream = new FileStream(tmpFileName, FileMode.Open, FileAccess.Read))
{
    while (true)
    {
        var bytesRead = await inputStream.ReadAsync(buffer, 0, bufferSize);
        if (bytesRead == 0) break;
        await outputStream.WriteAsync(buffer, 0, bytesRead);
    }
}
await outputStream.FlushAsync();
System.IO.File.Delete(tmpFileName);
return new EmptyResult();
This works fine and I get a CSV file back, but is it possible to write the data straight to the output stream rather than creating a temp file? I don't want to load all the items into memory, because I would like to use this method on large data sets so users can download the table data.
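For reference, a minimal sketch of the direct approach, assuming ASP.NET Core and EF Core 3+ (AsAsyncEnumerable keeps the provider streaming rows instead of buffering the result set, and await foreach needs C# 8). Values containing commas or quotes would still need proper CSV escaping.
this.Response.StatusCode = 200;
this.Response.Headers.Add(HeaderNames.ContentDisposition, "attachment; filename=\"export.csv\"");
this.Response.Headers.Add(HeaderNames.ContentType, "text/csv");
// leaveOpen: true so disposing the writer does not close the response stream
using (var writer = new StreamWriter(this.Response.Body, Encoding.UTF8, bufferSize: 4096, leaveOpen: true))
{
    await writer.WriteLineAsync($"{nameof(LookupItem.LookupItemId)},{nameof(LookupItem.Code)},{nameof(LookupItem.Label)}");
    await foreach (var li in items.AsAsyncEnumerable())
    {
        // one row is materialized at a time; nothing is buffered to disk or in memory
        await writer.WriteLineAsync($"{li.LookupItemId},{li.Code},{li.Label}");
    }
    await writer.FlushAsync();
}
return new EmptyResult();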

You can add the EPPlus package. I have tested the code below; it works for me.
using dotnetcoreMVC.Models;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using OfficeOpenXml;
using OfficeOpenXml.Style;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
namespace dotnetcoreMVC.Controllers
{
public class ForTestController : Controller
{
public IActionResult ExportData()
{
//TODO read data from db
// Mock data
List<TestModel> li = new List<TestModel>();
for (int i = 0; i < 10; i++)
{
TestModel m = new TestModel();
m.id = 1+i;
m.name = "test name"+ i;
li.Add(m);
}
var data = li;
if (data?.Any() != true)
{
return new ContentResult() { Content = "no data" };
}
ExcelPackage.LicenseContext = LicenseContext.NonCommercial;
using (var ep = new ExcelPackage())
{
using (var worksheet = ep.Workbook.Worksheets.Add("export data for test"))
{
var x = 1;
var y = 1;
var columnTitles = new List<string>()
{
"id",
"alias"
};
foreach (var columnTitle in columnTitles)
{
var cell = worksheet.Cells[x, y++];
cell.Style.Font.Bold = true;
cell.Style.HorizontalAlignment = ExcelHorizontalAlignment.Center;
cell.Style.VerticalAlignment = ExcelVerticalAlignment.Center;
cell.Value = columnTitle;
}
foreach (var item in data)
{
x++;
y = 1;
var cell = worksheet.Cells[x, y++];
cell.Value = item.id;
cell = worksheet.Cells[x, y++];
cell.Value = item.name;
}
using (var stream = new MemoryStream())
{
ep.SaveAs(stream);
return new FileContentResult(stream.ToArray(), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
{
FileDownloadName = Guid.NewGuid() + "test.xlsx" // EPPlus produces an .xlsx workbook, not a .csv
};
}
}
}
}
}
public class TestModel {
public int id { get; set; }
public string name { get; set; }
}
}
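If you'd rather avoid the intermediate ToArray() copy of the whole workbook, ExcelPackage.SaveAs accepts any writable stream, so (a sketch of the tail end of the action above) you can hand the MemoryStream to a FileStreamResult, which disposes it for you:
using (var ep = new ExcelPackage())
{
    // ... build the worksheet as above ...
    var stream = new MemoryStream();
    ep.SaveAs(stream);
    stream.Position = 0; // rewind before handing the stream off
    return new FileStreamResult(stream, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
    {
        FileDownloadName = Guid.NewGuid() + "test.xlsx"
    };
}
Note that the whole workbook still lives in memory either way; EPPlus is not a row-by-row streaming writer.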

Related

RedGate ANTS memory profiler, Failed to create compatible device context

What does this error mean? Failed to create compatible device context
We have an ASPX page which calls a web service. The web service in turn calls a third-party API from Hyland. The error happens inside that third-party API.
I have contacted their API support, but they are slow to respond, and since this happens in production I am wondering if there is a fix, or some way to lessen the impact. Is it memory related?
I got a dump and opened it in the ANTS memory profiler (a dump, not a snapshot), looking for memory leaks. What does this mean? Is red bad? Does it really have 589 instances? I clicked on the System.Byte[] 2.31 MB entry and see this instance list.
Why would it have so many (4,120) entries in the instance list? Is it really creating all these instances?
Could this be the problem? I have a routine like the one below. An ASP.NET web page calls a web service, and the web service calls Provider.GetDocumentData().
public static byte[] GetDocumentData(Application obApp, ParametersList parameters, ref PropertiesList Properties, ref KeywordPropertiesList Keywords)
{
var type = "";
var BUFFER_SIZE = 65536;
var byteData = new byte[BUFFER_SIZE];
long documentId = 0;
PageRangeSet pageRangeSet = null;
var docQuery = obApp.Core.CreateDocumentQuery();
var pageRanges = "";
//get the content type and DocumentId from parameters
foreach (var parameter in parameters)
{
    if (parameter.Key == GetDocumentDataRequest.DocumentID)
    {
        documentId = Convert.ToInt64(parameter.Value);
    }
    if (parameter.Key == GetDocumentDataRequest.PageRange)
    {
        pageRanges = parameter.Value;
    }
    if (parameter.Key.Contains(GetDocumentDataRequest.ContentType))
    {
        type = parameter.Value;
    }
}
var document = obApp.Core.GetDocumentByID(documentId);
if (document != null)
{
if (String.IsNullOrEmpty(pageRanges))
{
pageRanges = "1-10000";
}
if (type == "text/plain")
{
var textProvider = obApp.Core.Retrieval.Text;
var documentdata = document.DefaultRenditionOfLatestRevision;
pageRangeSet = textProvider.CreatePageRangeSet(pageRanges);
var newpagecharacter = System.Text.Encoding.UTF8.GetBytes(new[]
{
'\f'
});
var newPageStream = new MemoryStream(newpagecharacter);
using (var ms = new MemoryStream())
{
using (var pageDataList = textProvider.GetPages(documentdata, pageRangeSet))
{
for (var i = 0; i < pageDataList.Count; i++)
{
using (var newStream = new MemoryStream())
{
var pageData = pageDataList[i];
pageData.Stream.CopyTo(newStream);
newStream.Seek(0, System.IO.SeekOrigin.Begin);
newPageStream.Seek(0, System.IO.SeekOrigin.Begin);
newStream.CopyTo(ms);
newPageStream.CopyTo(ms);
}
}
byteData = ms.ToArray();
}
}
}
else if (type == "application/pdf")
{
var pdfProvider = obApp.Core.Retrieval.PDF;
var documentdata = document.DefaultRenditionOfLatestRevision;
var pageData = pdfProvider.GetDocument(documentdata);
using (var ms = new MemoryStream())
{
pageData.Stream.CopyTo(ms);
byteData = ms.ToArray();
}
}
else if (type == "image/tiff")
{
var imageProvider = obApp.Core.Retrieval.Image;
var documentdata = document.DefaultRenditionOfLatestRevision;
var pageData = imageProvider.GetDocument(documentdata);
using (var ms = new MemoryStream())
{
pageData.Stream.CopyTo(ms);
byteData = ms.ToArray();
}
}
else
{
var defaultProvider = obApp.Core.Retrieval.Default;
var documentdata = document.DefaultRenditionOfLatestRevision;
pageRangeSet = defaultProvider.CreatePageRangeSet(pageRanges);
var newpagecharacter = System.Text.Encoding.UTF8.GetBytes(new[]
{
'\f'
});
var newPageStream = new MemoryStream(newpagecharacter);
using (var ms = new MemoryStream())
{
using (var pageDataList = defaultProvider.GetPages(documentdata, pageRangeSet))
{
for (var i = 0; i < pageDataList.Count; i++)
{
using (var newStream = new MemoryStream())
{
var pageData = pageDataList[i];
pageData.Stream.CopyTo(newStream);
newStream.Seek(0, System.IO.SeekOrigin.Begin);
newPageStream.Seek(0, System.IO.SeekOrigin.Begin);
newStream.CopyTo(ms);
newPageStream.CopyTo(ms);
}
}
byteData = ms.ToArray();
}
}
}
}
return byteData;
}
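As a side note on the byte[] churn: independent of anything the Hyland API does internally, this routine pre-allocates a 64 KB byte[] on every call (which is then overwritten in every branch that finds data), builds a fresh MemoryStream per page, and calls ToArray() on top, so thousands of short-lived byte[] instances in a dump are plausible from this code alone. A sketch of a leaner page-concatenation helper, under the assumption that pages arrive as streams as in the snippet above:
using System.Collections.Generic;
using System.IO;
using System.Text;

static class PageConcat
{
    // one shared separator instead of a new MemoryStream per page
    static readonly byte[] FormFeed = Encoding.UTF8.GetBytes("\f");

    public static byte[] ConcatPages(IEnumerable<Stream> pageStreams)
    {
        using (var ms = new MemoryStream())
        {
            foreach (var page in pageStreams)
            {
                page.CopyTo(ms);                        // page content straight into the result
                ms.Write(FormFeed, 0, FormFeed.Length); // page break, no intermediate buffers
            }
            return ms.ToArray(); // the only per-call byte[] allocation
        }
    }
}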

Storing text file in Oracle DB as BLOB cuts off the end of the file

I'm generating a text file in a process which, at the end, loops through a list of strings that were fed to it and, via a MemoryStream and StreamWriter, converts that list to a byte[]. The byte[] is then saved to an Oracle database using a BLOB datatype. It works for the majority of the data (typically thousands of lines; I've had anywhere between 5,000 and 40,000, and it's the same result regardless), but I have a specific message that goes at the end, and it's always missing. Generally the last line that does end up in the file is cut off halfway.
The function that generates the byte[]:
public byte[] GenerateFileData()
{
var fileData = new byte[0];
using (var ms = new MemoryStream())
{
using (var sw = new StreamWriter(ms))
{
Messages.ForEach(x => sw.WriteLine(x)); // Messages is a list of strings in this class
fileData = ms.ToArray();
}
}
return fileData;
}
The function that saves the byte[] to the database:
public void SaveLogFile(int entityId, byte[] fileData)
{
using (var context = new SomeDBContext())
{
var entity = context.SomeEntity.FirstOrDefault(x => x.Id == entityId);
if(entity != null)
{
entity.LOG_FILE = fileData;
context.SaveChanges();
}
}
}
And lastly, the function that turns the data into a file:
[HttpGet]
public FileResult GetLogFile(int id = 0)
{
var fileData = new byte[0];
using (var context = new SomeDbContext())
{
var entity = context.SomeEntity.FirstOrDefault(x => x.Id == id);
fileData = entity.LOG_FILE;
}
var fileName = "SomethingSomething" + id.ToString();
return File(fileData, "text/plain", fileName);
}
Try getting the MemoryStream content after the writer is closed, as in this code:
public byte[] GenerateFileData()
{
var fileData = new byte[0];
using (var ms = new MemoryStream())
{
using (var sw = new StreamWriter(ms))
{
Messages.ForEach(x => sw.WriteLine(x)); // Messages is a list of strings in this class
}
// Disposing the StreamWriter (end of the using block above) flushes its buffered
// text into the MemoryStream; ToArray() works even after the stream is closed.
fileData = ms.ToArray();
}
return fileData;
}
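An equivalent fix that keeps the writer in scope is to flush it explicitly before reading the buffer (a sketch; the leaveOpen flag and the encoding are choices, not requirements):
public byte[] GenerateFileData()
{
    using (var ms = new MemoryStream())
    using (var sw = new StreamWriter(ms, Encoding.UTF8, bufferSize: 1024, leaveOpen: true))
    {
        Messages.ForEach(x => sw.WriteLine(x));
        sw.Flush();          // push the writer's buffered text into the MemoryStream
        return ms.ToArray(); // now includes the final lines
    }
}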

Transfer files directly from FTP to Azure File Storage without keeping them locally in memory or disk

I have to transfer files from FTP to Azure File Storage. My code works fine, but it transfers the files through memory, which is not best practice: first I read the stream into a byte array in memory, then I upload that output to Azure File Storage.
I know it would be better to stream this asynchronously, but I don't know whether that is possible or how to do it.
My code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using System.Configuration;
using Microsoft.WindowsAzure.Storage.File;
using System.IO;
using Microsoft.Azure;
using System.Net;
namespace TransferFtpToAzure
{
class Program
{
public static void Main(string[] args)
{
List<FileName> sourceFileList = new List<FileName>();
List<FileName> targetFileList = new List<FileName>();
string targetShareReference = ConfigurationManager.AppSettings["AzureShare"];
string targetDirectoryReference = ConfigurationManager.AppSettings["Environment"] + "/" + Enums.AzureFolders.Mos + "/" + Enums.AzureFolders.In;
string sourceURI = (ConfigurationManager.AppSettings["FtpConnectionString"] + ConfigurationManager.AppSettings["Environment"].ToUpper() +"/"+ Enums.FtpFolders.Mos + "/").Replace("\\","/");
string sourceUser = ConfigurationManager.AppSettings["FtpServerUserName"];
string sourcePass = ConfigurationManager.AppSettings["FtpServerPassword"];
getFileLists(sourceURI, sourceUser, sourcePass, sourceFileList, targetShareReference, targetDirectoryReference, targetFileList);
Console.WriteLine(sourceFileList.Count + " files found!");
targetFileList.Sort(); // BinarySearch in CheckLists requires a sorted list
CheckLists(sourceFileList, targetFileList);
Console.WriteLine(sourceFileList.Count + " unique files on sourceURI" + Environment.NewLine + "Attempting to move them.");
foreach (var file in sourceFileList)
{
try
{
CopyFile(file.fName, sourceURI, sourceUser, sourcePass, targetShareReference, targetDirectoryReference);
}
catch
{
Console.WriteLine("There was move error with : " + file.fName);
}
}
}
public class FileName : IComparable<FileName>
{
public string fName { get; set; }
public int CompareTo(FileName other)
{
return fName.CompareTo(other.fName);
}
}
public static void CheckLists(List<FileName> sourceFileList, List<FileName> targetFileList)
{
for (int i = 0; i < sourceFileList.Count; i++)
{
if (targetFileList.BinarySearch(sourceFileList[i]) >= 0) // BinarySearch returns a zero-based index, so 0 is a valid match
{
sourceFileList.RemoveAt(i);
i--;
}
}
}
public static void getFileLists(string sourceURI, string sourceUser, string sourcePass, List<FileName> sourceFileList, string targetShareReference, string targetDirectoryReference, List<FileName> targetFileList)
{
string line = "";
/////////Source FileList
FtpWebRequest sourceRequest;
sourceRequest = (FtpWebRequest)WebRequest.Create(sourceURI);
sourceRequest.Credentials = new NetworkCredential(sourceUser, sourcePass);
sourceRequest.Method = WebRequestMethods.Ftp.ListDirectory;
sourceRequest.UseBinary = true;
sourceRequest.KeepAlive = false;
sourceRequest.Timeout = -1;
sourceRequest.UsePassive = true;
FtpWebResponse sourceResponse = (FtpWebResponse)sourceRequest.GetResponse();
//Creates a list (fileList) of the file names
using (Stream responseStream = sourceResponse.GetResponseStream())
{
using (StreamReader reader = new StreamReader(responseStream))
{
line = reader.ReadLine();
while (line != null)
{
var fileName = new FileName
{
fName = line
};
sourceFileList.Add(fileName);
line = reader.ReadLine();
}
}
}
/////////////Target FileList
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
//var test = fileClient.ListShares();
CloudFileShare fileShare = fileClient.GetShareReference(targetShareReference);
if (fileShare.Exists())
{
CloudFileDirectory rootDirectory = fileShare.GetRootDirectoryReference();
if (rootDirectory.Exists())
{
CloudFileDirectory customDirectory = rootDirectory.GetDirectoryReference(targetDirectoryReference);
if (customDirectory.Exists())
{
var fileCollection = customDirectory.ListFilesAndDirectories().OfType<CloudFile>();
foreach (var item in fileCollection)
{
var fileName = new FileName
{
fName = item.Name
};
targetFileList.Add(fileName);
}
}
}
}
}
public static void CopyFile(string fileName, string sourceURI, string sourceUser, string sourcePass, string targetShareReference, string targetDirectoryReference)
{
try
{
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(sourceURI + fileName);
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Credentials = new NetworkCredential(sourceUser, sourcePass);
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
Upload(fileName, ToByteArray(responseStream), targetShareReference, targetDirectoryReference);
responseStream.Close();
}
catch
{
Console.WriteLine("There was an error with :" + fileName);
}
}
public static Byte[] ToByteArray(Stream stream)
{
MemoryStream ms = new MemoryStream();
byte[] chunk = new byte[4096];
int bytesRead;
while ((bytesRead = stream.Read(chunk, 0, chunk.Length)) > 0)
{
ms.Write(chunk, 0, bytesRead);
}
return ms.ToArray();
}
public static bool Upload(string FileName, byte[] Image, string targetShareReference, string targetDirectoryReference)
{
try
{
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
//var test = fileClient.ListShares();
CloudFileShare fileShare = fileClient.GetShareReference(targetShareReference);
if (fileShare.Exists())
{
CloudFileDirectory rootDirectory = fileShare.GetRootDirectoryReference();
if (rootDirectory.Exists())
{
CloudFileDirectory customDirectory = rootDirectory.GetDirectoryReference(targetDirectoryReference);
if (customDirectory.Exists())
{
var cloudFile = customDirectory.GetFileReference(FileName);
using (var stream = new MemoryStream(Image, writable: false))
{
cloudFile.UploadFromStream(stream);
}
}
}
}
return true;
}
catch
{
return false;
}
}
}
}
If I understand you correctly, you want to avoid storing the file in memory between the download and upload.
For that see:
Azure function to copy files from FTP to blob storage.
Using an Azure Storage file share, this is the only way it worked for me without loading the entire ZIP into memory. I tested with a 3 GB ZIP file (with thousands of files, or with one big file inside) and memory/CPU usage was low and stable. I hope it helps!
var zipFiles = _directory.ListFilesAndDirectories()
.OfType<CloudFile>()
.Where(x => x.Name.ToLower().Contains(".zip"))
.ToList();
foreach (var zipFile in zipFiles)
{
using (var zipArchive = new ZipArchive(zipFile.OpenRead()))
{
foreach (var entry in zipArchive.Entries)
{
if (entry.Length > 0)
{
CloudFile extractedFile = _directory.GetFileReference(entry.Name);
using (var entryStream = entry.Open())
{
byte[] buffer = new byte[16 * 1024];
using (var ms = extractedFile.OpenWrite(entry.Length))
{
int read;
while ((read = entryStream.Read(buffer, 0, buffer.Length)) > 0)
{
ms.Write(buffer, 0, read);
}
}
}
}
}
}
}
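For the plain (non-ZIP) case in the question, the same idea applies to a single FTP download: pass the FTP response stream straight to the Azure file instead of going through ToByteArray. A sketch, assuming the FTP server reports a content length (Azure Files needs the size up front; if ContentLength is -1, issue a WebRequestMethods.Ftp.GetFileSize request first):
public static void StreamFtpFileToAzure(string fileName, string sourceURI, string sourceUser, string sourcePass, CloudFileDirectory targetDirectory)
{
    var request = (FtpWebRequest)WebRequest.Create(sourceURI + fileName);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.Credentials = new NetworkCredential(sourceUser, sourcePass);

    using (var response = (FtpWebResponse)request.GetResponse())
    using (var ftpStream = response.GetResponseStream())
    using (var azureStream = targetDirectory.GetFileReference(fileName).OpenWrite(response.ContentLength))
    {
        ftpStream.CopyTo(azureStream, 81920); // fixed-size buffer; nothing accumulates in memory
    }
}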

Upload zipped file to Dropbox using C#

I am trying to upload a zipped file to Dropbox using an access token. The code below works for an unzipped file:
private static async Task FileUploadToDropbox(string filePath, string fileName, byte[] fileContent)
{
var client = new DropboxClient("Access Token");
const int chunkSize = 1024;
using (var stream = new MemoryStream(fileContent))
{
int numChunks = (int)Math.Ceiling((double)stream.Length / chunkSize);
byte[] buffer = new byte[chunkSize];
string sessionId = null;
for (var idx = 0; idx < numChunks; idx++)
{
var byteRead = stream.Read(buffer, 0, chunkSize);
using (MemoryStream memStream = new MemoryStream(buffer, 0, byteRead))
{
if (idx == 0)
{
var result = await client.Files.UploadSessionStartAsync(body: memStream);
sessionId = result.SessionId;
}
else
{
UploadSessionCursor cursor = new UploadSessionCursor(sessionId, (ulong)(chunkSize * idx));
if (idx == numChunks - 1)
{
await client.Files.UploadSessionFinishAsync(cursor, new CommitInfo(filePath + "/" + fileName), memStream);
}
else
{
await client.Files.UploadSessionAppendV2Async(cursor, body: memStream);
}
}
}
}
}
}
But when I try to upload a zipped file using this code, it uploads an empty zip file to Dropbox. I am reading the zipped file as a byte array and passing it to the method above. Although the file size remains the same, when I download the file and try to extract it, it says that the zip file is empty.
private static async Task FileUploadToDropbox(string filePath, string fileName, string fileSource)
{
using (var dbx = new DropboxClient("access Token"))
using (var fs = new FileStream(fileSource, FileMode.Open, FileAccess.Read))
{
var updated = await dbx.Files.UploadAsync(
(filePath + "/" + fileName), WriteMode.Overwrite.Instance, body: fs);
}
}
The method above worked for me.
Please try this:
/// <summary>
/// Function to import local file to dropbox.
/// </summary>
public static async Task<bool> WriteFileToDropBox()
{
try
{
//Connecting with dropbox.
var file = "File path at dropbox";
using (var dbx = new DropboxClient("Access Token"))
using (var fs = new FileStream("Path of file to be uploaded.")
{
var updated = await dbx.Files.UploadAsync(file, WriteMode.Add.Instance, body: fs);
}
return true;
}
catch (Exception err)
{
MessageBox.Show(err.Message);
return false;
}
}
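As an aside, the chunked version in the question also has an edge case that produces broken uploads: when the file fits in a single chunk (numChunks == 1), UploadSessionStartAsync runs but the session is never finished, so nothing is committed. A sketch of a guard before the loop:
if (numChunks == 1)
{
    // small file: skip the session API and upload directly
    await client.Files.UploadAsync(filePath + "/" + fileName, WriteMode.Overwrite.Instance, body: stream);
    return;
}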

Serializing a ConcurrentBag of XAML

I have, in my code, a ConcurrentBag<Point3DCollection>.
I'm trying to figure out how to serialize it. Of course I could iterate through it or wrap it in a provider-model class, but I wonder if it's already been done.
The Point3DCollections themselves are potentially quite large, and could stand to be compressed to speed up reading and writing to and from disk, but the response times I need are largely on the user-interface scale. In other words, I prefer a binary format over a XAML-text format, for performance reasons. (There is a nice XAML-text serializer as part of the Helix 3D CodeProject, but it's slower than I'd like.)
Is this a use case where I'm left rolling my own serializer, or is there something out there that's already packaged for this kind of data?
Here are some extension methods that handle string and binary serialization of Point3DCollection bags. As I said in my comment, I don't think there is a single best way of doing this in all cases, so you might want to try both. Also note that they take a Stream parameter as input, so you can chain them with GZipStream or DeflateStream.
public static class Point3DExtensions
{
public static void StringSerialize(this ConcurrentBag<Point3DCollection> bag, Stream stream)
{
if (bag == null)
throw new ArgumentNullException("bag");
if (stream == null)
throw new ArgumentNullException("stream");
StreamWriter writer = new StreamWriter(stream);
Point3DCollectionConverter converter = new Point3DCollectionConverter();
foreach (Point3DCollection coll in bag)
{
// we need to use the english locale as the converter needs that for parsing...
string line = (string)converter.ConvertTo(null, CultureInfo.GetCultureInfo("en-US"), coll, typeof(string));
writer.WriteLine(line);
}
writer.Flush();
}
public static void StringDeserialize(this ConcurrentBag<Point3DCollection> bag, Stream stream)
{
if (bag == null)
throw new ArgumentNullException("bag");
if (stream == null)
throw new ArgumentNullException("stream");
StreamReader reader = new StreamReader(stream);
Point3DCollectionConverter converter = new Point3DCollectionConverter();
do
{
string line = reader.ReadLine();
if (line == null)
break;
bag.Add((Point3DCollection)converter.ConvertFrom(line));
// NOTE: could also use this:
//bag.Add(Point3DCollection.Parse(line));
}
while (true);
}
public static void BinarySerialize(this ConcurrentBag<Point3DCollection> bag, Stream stream)
{
if (bag == null)
throw new ArgumentNullException("bag");
if (stream == null)
throw new ArgumentNullException("stream");
BinaryWriter writer = new BinaryWriter(stream);
writer.Write(bag.Count);
foreach (Point3DCollection coll in bag)
{
writer.Write(coll.Count);
foreach (Point3D point in coll)
{
writer.Write(point.X);
writer.Write(point.Y);
writer.Write(point.Z);
}
}
writer.Flush();
}
public static void BinaryDeserialize(this ConcurrentBag<Point3DCollection> bag, Stream stream)
{
if (bag == null)
throw new ArgumentNullException("bag");
if (stream == null)
throw new ArgumentNullException("stream");
BinaryReader reader = new BinaryReader(stream);
int count = reader.ReadInt32();
for (int i = 0; i < count; i++)
{
int pointCount = reader.ReadInt32();
Point3DCollection coll = new Point3DCollection(pointCount);
for (int j = 0; j < pointCount; j++)
{
coll.Add(new Point3D(reader.ReadDouble(), reader.ReadDouble(), reader.ReadDouble()));
}
bag.Add(coll);
}
}
}
And a little console app test program to play with:
static void Main(string[] args)
{
Random rand = new Random(Environment.TickCount);
ConcurrentBag<Point3DCollection> bag = new ConcurrentBag<Point3DCollection>();
for (int i = 0; i < 100; i++)
{
Point3DCollection coll = new Point3DCollection();
bag.Add(coll);
for (int j = rand.Next(10); j < rand.Next(100); j++)
{
Point3D point = new Point3D(rand.NextDouble(), rand.NextDouble(), rand.NextDouble());
coll.Add(point);
}
}
using (FileStream stream = new FileStream("test.bin", FileMode.Create))
{
bag.StringSerialize(stream); // or Binary
}
ConcurrentBag<Point3DCollection> newbag = new ConcurrentBag<Point3DCollection>();
using (FileStream stream = new FileStream("test.bin", FileMode.Open))
{
newbag.StringDeserialize(stream); // or Binary
foreach (Point3DCollection coll in newbag)
{
foreach (Point3D point in coll)
{
Console.WriteLine(point);
}
Console.WriteLine();
}
}
}
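Since the extension methods take a Stream, the compression chaining mentioned above is a small change in the test program. A sketch using GZipStream (System.IO.Compression), with bag being the ConcurrentBag from the test:
using (var file = new FileStream("test.bin.gz", FileMode.Create))
using (var gzip = new GZipStream(file, CompressionLevel.Optimal))
{
    bag.BinarySerialize(gzip); // compressed on the way out
}
var restored = new ConcurrentBag<Point3DCollection>();
using (var file = new FileStream("test.bin.gz", FileMode.Open))
using (var gzip = new GZipStream(file, CompressionMode.Decompress))
{
    restored.BinaryDeserialize(gzip); // decompressed on the way in
}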
Compression could potentially take advantage of repeated coordinates. Serializers will often use references for repeated objects as well, although I'm not sure many are set up to work with structs (like Point3D). Anyhow, here are some examples of how to serialize this. To use the standard formatters, you need to convert the data to something most of them support: a list or array. The code below uses the NuGet packages NUnit and Json.NET.
using Newtonsoft.Json;
using Newtonsoft.Json.Bson;
using NUnit.Framework;
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;
using System.Text;
using System.Windows.Media.Media3D;
namespace DemoPoint3DSerialize
{
[TestFixture]
class Tests
{
[Test]
public void DemoBinary()
{
// this shows how to convert the collections to arrays and serialize with BinaryFormatter
var collection = CreateCollection();
var data = collection.Select(c => c.ToArray()).ToList(); // switch to serializable types
var formatter = new BinaryFormatter();
using (var ms = new MemoryStream())
{
formatter.Serialize(ms, data);
Trace.WriteLine("Binary of Array Size: " + ms.Position);
ms.Position = 0;
var dupe = (List<Point3D[]>)formatter.Deserialize(ms);
var result = new ConcurrentBag<Point3DCollection>(dupe.Select(r => new Point3DCollection(r)));
VerifyEquality(collection, result);
}
}
[Test]
public void DemoString()
{
// this shows how to convert them all to strings
var collection = CreateCollection();
IEnumerable<IList<Point3D>> tmp = collection;
var strings = collection.Select(c => c.ToString()).ToList();
Trace.WriteLine("String Size: " + strings.Sum(s => s.Length)); // eh, 2x for Unicode
var result = new ConcurrentBag<Point3DCollection>(strings.Select(r => Point3DCollection.Parse(r)));
VerifyEquality(collection, result);
}
[Test]
public void DemoDeflateString()
{
// this shows how to convert them all to strings
var collection = CreateCollection();
var formatter = new BinaryFormatter(); // not really helping much here
var strings = collection.Select(c => c.ToString()).ToList();
using (var ms = new MemoryStream())
{
using (var def = new DeflateStream(ms, CompressionLevel.Optimal, true))
{
formatter.Serialize(def, strings);
}
Trace.WriteLine("Deflate Size: " + ms.Position);
ms.Position = 0;
using (var def = new DeflateStream(ms, CompressionMode.Decompress))
{
var stringsDupe = (IList<string>)formatter.Deserialize(def);
var result = new ConcurrentBag<Point3DCollection>(stringsDupe.Select(r => Point3DCollection.Parse(r)));
VerifyEquality(collection, result);
}
}
}
[Test]
public void DemoStraightJson()
{
// this uses Json.NET
var collection = CreateCollection();
var formatter = new JsonSerializer();
using (var ms = new MemoryStream())
{
using (var stream = new StreamWriter(ms, new UTF8Encoding(true), 2048, true))
using (var writer = new JsonTextWriter(stream))
{
formatter.Serialize(writer, collection);
}
Trace.WriteLine("JSON Size: " + ms.Position);
ms.Position = 0;
using (var stream = new StreamReader(ms))
using (var reader = new JsonTextReader(stream))
{
var result = formatter.Deserialize<List<Point3DCollection>>(reader);
VerifyEquality(collection, new ConcurrentBag<Point3DCollection>(result));
}
}
}
[Test]
public void DemoBsonOfArray()
{
// this uses Json.NET
var collection = CreateCollection();
var formatter = new JsonSerializer();
using (var ms = new MemoryStream())
{
using (var stream = new BinaryWriter(ms, new UTF8Encoding(true), true))
using (var writer = new BsonWriter(stream))
{
formatter.Serialize(writer, collection);
}
Trace.WriteLine("BSON Size: " + ms.Position);
ms.Position = 0;
using (var stream = new BinaryReader(ms))
using (var reader = new BsonReader(stream, true, DateTimeKind.Unspecified))
{
var result = formatter.Deserialize<List<Point3DCollection>>(reader); // doesn't seem to read out that concurrentBag
VerifyEquality(collection, new ConcurrentBag<Point3DCollection>(result));
}
}
}
private ConcurrentBag<Point3DCollection> CreateCollection()
{
var rand = new Random(42);
var bag = new ConcurrentBag<Point3DCollection>();
for (int i = 0; i < 10; i++)
{
var collection = new Point3DCollection();
for (int j = 0; j < i + 10; j++)
{
var point = new Point3D(rand.NextDouble(), rand.NextDouble(), rand.NextDouble());
collection.Add(point);
}
bag.Add(collection);
}
return bag;
}
private class CollectionComparer : IEqualityComparer<Point3DCollection>
{
public bool Equals(Point3DCollection x, Point3DCollection y)
{
return x.SequenceEqual(y);
}
public int GetHashCode(Point3DCollection obj)
{
return obj.GetHashCode();
}
}
private void VerifyEquality(ConcurrentBag<Point3DCollection> collection, ConcurrentBag<Point3DCollection> result)
{
var first = collection.OrderBy(c => c.Count);
var second = result.OrderBy(c => c.Count); // compare against the deserialized result, not the original again
Assert.IsTrue(first.SequenceEqual(second, new CollectionComparer()));
}
}
}
Use protobuf-net. protobuf-net is an open-source .NET implementation of Google's Protocol Buffers binary serialization format, and can be used as a replacement for the BinaryFormatter serializer. It is probably the fastest solution and the easiest to implement.
Here is a link to the main wiki for protobuf-net. On the left you'll find downloads for all of the most up-to-date binaries.
https://code.google.com/p/protobuf-net/
Here is a great article to read first to get a feel for how it works.
http://wallaceturner.com/serialization-with-protobuf-net
Here is a link to a discussion on the project's wiki about your specific problem; the answer is at the bottom of the page. That's where I got the code below, substituting details from your post.
https://code.google.com/p/protobuf-net/issues/detail?id=354
I haven't used it myself, but it looks like a very good fit for your stated needs. From what I gather, your code would end up as some variation of this:
[ProtoContract]
public class MyClass {
    public ConcurrentBag<Point3DCollection> Points { get; set; }
    [ProtoMember(1)]
    private Point3DCollection[] Items
    {
        get { return Points.ToArray(); }
        // assign to Points (not Items), otherwise the setter would recurse forever
        set { Points = new ConcurrentBag<Point3DCollection>(value); }
    }
}
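A round-trip would then look something like this (a sketch; Serializer is protobuf-net's static entry point, and the file name is arbitrary):
var data = new MyClass { Points = bag }; // bag: the ConcurrentBag<Point3DCollection> to persist
using (var file = File.Create("points.bin"))
{
    Serializer.Serialize(file, data);
}
using (var file = File.OpenRead("points.bin"))
{
    var roundTripped = Serializer.Deserialize<MyClass>(file);
}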
I wish you the best of luck. Take care.
For a large amount of data, why not consider SQLite or another small database system, which can store structured data in a file?
I have seen many 3D programs use a database to store structure along with relations, which allows them to partially insert/update/delete data.
A benefit of SQLite/databases is multithreaded serialization to improve speed, although you need to do a little work to enable multithreaded SQLite connections; alternatively you can use LocalDB of SQL Express, or even SQL Compact.
Some of the workload of loading data can also be done through queries, which the database will index nicely. Most of this can happen on a background worker without interfering with the user interface.
SQLite has limited multithreading support, which is explained here: http://www.sqlite.org/threadsafe.html
SQL Compact is thread safe and can be installed without admin privileges. You can use Entity Framework as well.
