protobuf-net Serialize a nested list of objects using SerializeWithLengthPrefix - c#

I'm currently trying to serialize the following data structure using protobuf-net:
[ProtoContract]
public class Recording
{
    [ProtoMember(1)]
    public string Name;
    [ProtoMember(2)]
    public List<Channel> Channels;
}

[ProtoContract]
public class Channel
{
    [ProtoMember(1)]
    public string ChannelName;
    [ProtoMember(2)]
    public List<float> DataPoints;
}
I have a fixed number of 12 channels, but the number of data points per channel can get very big (up to the GB range across all channels).
Therefore (and because the data is a continuous stream) I don't want to read and save the structure for a whole recording at once, but to utilise SerializeWithLengthPrefix (and DeserializeItems) so the data can be saved continuously as well.
My question is, is it even possible to do this with such a nested structure or do I have to flatten it?
I've seen the examples for a list in the first hierarchy level, but none for my specific case.
Also, is there any benefit if I write the data points as "chunks" of 10, 100, ... values (i.e. using a list of chunks, something like List<List<float>>, instead of a flat List<float>) over serializing them directly?
Thanks in advance for your help
Tobias

The key challenge in what you are trying to do is that it is heavily stream-based internally to each object. protobuf-net can work that way, but it is not trivial. There is also the issue that you want to interleave data from a single channel over multiple fragments, which is not an idiomatic protobuf layout. So the core object-materializer code probably doesn't do quite what you want, i.e. treat the data as an open stream that is never fully loaded into memory, for both read and write.
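For context, the idiomatic length-prefix route (SerializeWithLengthPrefix / DeserializeItems) wants the repeated element at the top level, i.e. the flattened layout the question asks about. A minimal sketch of that alternative, assuming a flattened fragment type (ChannelFragment is an invented name, not from the question):

using System.Collections.Generic;
using System.IO;
using ProtoBuf;

[ProtoContract]
public class ChannelFragment
{
    [ProtoMember(1)]
    public string ChannelName;
    [ProtoMember(2, IsPacked = true)]
    public List<float> DataPoints;
}

static class FlattenedStreaming
{
    // Writes each fragment as its own length-prefixed message (field 1),
    // so only one fragment is ever held in memory at a time.
    public static void Write(Stream dest, IEnumerable<ChannelFragment> fragments)
    {
        foreach (var fragment in fragments)
            Serializer.SerializeWithLengthPrefix(dest, fragment, PrefixStyle.Base128, 1);
    }

    // DeserializeItems is lazy: fragments are materialized one at a time.
    public static IEnumerable<ChannelFragment> Read(Stream source)
    {
        return Serializer.DeserializeItems<ChannelFragment>(source, PrefixStyle.Base128, 1);
    }
}

With that layout, the 12 channels would be reassembled by grouping fragments on ChannelName while (or after) reading.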
That said: if you want to keep the nested layout, you can use the raw reader/writer API to achieve streaming. You should probably compare and contrast with similar code using BinaryWriter / BinaryReader, but essentially the following works:
using ProtoBuf;
using System;
using System.Collections.Generic;
using System.IO;

static class Program
{
    static void Main()
    {
        var path = "big.blob";
        WriteFile(path);
        int channelTotal = 0, pointTotal = 0;
        foreach (var channel in ReadChannels(path))
        {
            channelTotal++;
            pointTotal += channel.Points.Count;
        }
        Console.WriteLine("Read: {0} points in {1} channels", pointTotal, channelTotal);
    }

    private static void WriteFile(string path)
    {
        string[] channels = { "up", "down", "top", "bottom", "charm", "strange" };
        var rand = new Random(123456);
        int totalPoints = 0, totalChannels = 0;
        using (var encoder = new DataEncoder(path, "My file"))
        {
            for (int i = 0; i < 100; i++)
            {
                var channel = new Channel
                {
                    Name = channels[rand.Next(channels.Length)]
                };
                int count = rand.Next(1, 50);
                var data = new List<float>(count);
                for (int j = 0; j < count; j++)
                    data.Add((float)rand.NextDouble());
                channel.Points = data;
                encoder.AddChannel(channel);
                totalPoints += count;
                totalChannels++;
            }
        }
        Console.WriteLine("Wrote: {0} points in {1} channels; {2} bytes", totalPoints, totalChannels, new FileInfo(path).Length);
    }

    public class Channel
    {
        public string Name { get; set; }
        public List<float> Points { get; set; }
    }

    public class DataEncoder : IDisposable
    {
        private Stream stream;
        private ProtoWriter writer;

        public DataEncoder(string path, string recordingName)
        {
            stream = File.Create(path);
            writer = new ProtoWriter(stream, null, null);
            if (recordingName != null)
            {
                // field 1: the recording name
                ProtoWriter.WriteFieldHeader(1, WireType.String, writer);
                ProtoWriter.WriteString(recordingName, writer);
            }
        }

        public void AddChannel(Channel channel)
        {
            // field 2, group-encoded: one fragment per AddChannel call
            ProtoWriter.WriteFieldHeader(2, WireType.StartGroup, writer);
            var channelTok = ProtoWriter.StartSubItem(null, writer);
            if (channel.Name != null)
            {
                ProtoWriter.WriteFieldHeader(1, WireType.String, writer);
                ProtoWriter.WriteString(channel.Name, writer);
            }
            var list = channel.Points;
            if (list != null)
            {
                switch (list.Count)
                {
                    case 0:
                        // nothing to write
                        break;
                    case 1:
                        ProtoWriter.WriteFieldHeader(2, WireType.Fixed32, writer);
                        ProtoWriter.WriteSingle(list[0], writer);
                        break;
                    default:
                        // packed encoding: a length-prefixed block of fixed32 values
                        ProtoWriter.WriteFieldHeader(2, WireType.String, writer);
                        var dataToken = ProtoWriter.StartSubItem(null, writer);
                        ProtoWriter.SetPackedField(2, writer);
                        foreach (var val in list)
                        {
                            ProtoWriter.WriteFieldHeader(2, WireType.Fixed32, writer);
                            ProtoWriter.WriteSingle(val, writer);
                        }
                        ProtoWriter.EndSubItem(dataToken, writer);
                        break;
                }
            }
            ProtoWriter.EndSubItem(channelTok, writer);
        }

        public void Dispose()
        {
            using (writer) { if (writer != null) writer.Close(); }
            writer = null;
            using (stream) { if (stream != null) stream.Close(); }
            stream = null;
        }
    }

    private static IEnumerable<Channel> ReadChannels(string path)
    {
        using (var file = File.OpenRead(path))
        using (var reader = new ProtoReader(file, null, null))
        {
            while (reader.ReadFieldHeader() > 0)
            {
                switch (reader.FieldNumber)
                {
                    case 1:
                        Console.WriteLine("Recording name: {0}", reader.ReadString());
                        break;
                    case 2: // each "2" instance represents a different "Channel" or a channel switch
                        var channelToken = ProtoReader.StartSubItem(reader);
                        int floatCount = 0;
                        List<float> list = new List<float>();
                        Channel channel = new Channel { Points = list };
                        while (reader.ReadFieldHeader() > 0)
                        {
                            switch (reader.FieldNumber)
                            {
                                case 1:
                                    channel.Name = reader.ReadString();
                                    break;
                                case 2:
                                    switch (reader.WireType)
                                    {
                                        case WireType.String: // packed array - multiple floats
                                            var dataToken = ProtoReader.StartSubItem(reader);
                                            while (ProtoReader.HasSubValue(WireType.Fixed32, reader))
                                            {
                                                list.Add(reader.ReadSingle());
                                                floatCount++;
                                            }
                                            ProtoReader.EndSubItem(dataToken, reader);
                                            break;
                                        case WireType.Fixed32: // simple float
                                            list.Add(reader.ReadSingle());
                                            floatCount++; // got 1
                                            break;
                                        default:
                                            Console.WriteLine("Unexpected data wire-type: {0}", reader.WireType);
                                            break;
                                    }
                                    break;
                                default:
                                    Console.WriteLine("Unexpected field in channel: {0}/{1}", reader.FieldNumber, reader.WireType);
                                    reader.SkipField();
                                    break;
                            }
                        }
                        ProtoReader.EndSubItem(channelToken, reader);
                        yield return channel;
                        break;
                    default:
                        Console.WriteLine("Unexpected field in recording: {0}/{1}", reader.FieldNumber, reader.WireType);
                        reader.SkipField();
                        break;
                }
            }
        }
    }
}

Related

What is the proper way to make a queue inside a class?

I'm trying to make a queue of orders for my clients, but the last added value replaces all the values in the queue.
Debugging the code, I've seen that when a value is enqueued it overrides all the other values in the queue.
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Starting...");
        byte[] dataInit = new byte[] { 0x00 };
        Client clientTest = new Client();
        for (int i = 0; i <= 5; i++)
        {
            dataInit[0]++;
            Console.WriteLine("Adding Order - i = {0}; Order: {1}.", i, BitConverter.ToString(dataInit));
            clientTest.AddOrder(dataInit);
            Console.WriteLine("Peeking Order - i = {0}; Order: {1}", i, BitConverter.ToString(clientTest.PeekOrder()));
        }
        for (int i = 0; i <= 5; i++)
        {
            Console.WriteLine("Removing order - i = {0}; Order: {1}.", i, BitConverter.ToString(clientTest.RemoveOrder()));
        }
        Console.WriteLine("Press Any Key...");
        Console.Read();
    }

    class ClientOrder
    {
        public byte[] Order;
        public ClientOrder(byte[] data)
        {
            Order = data;
        }
    }

    class Client
    {
        public Queue<ClientOrder> ClientOrders = new Queue<ClientOrder>();
        public void AddOrder(byte[] orderToAdd)
        {
            ClientOrders.Enqueue(new ClientOrder(orderToAdd));
        }
        public byte[] RemoveOrder()
        {
            ClientOrder toReturn = ClientOrders.Dequeue();
            return toReturn.Order;
        }
        public byte[] PeekOrder()
        {
            ClientOrder toReturn = ClientOrders.Peek();
            return toReturn.Order;
        }
    }
}
I expected the queue to hold the values in order [01-06], but the actual output is {06,06,06,06,06,06} (the last value added).
You are sharing the same reference to the byte[]: every ClientOrder in the queue points at the same array, so each in-place update appears to replace all the queued values. You should make a copy when creating the ClientOrder. The easy way is using LINQ, but there are other possibilities.
public ClientOrder(byte[] data)
{
    Order = data.ToArray(); // copies the array, so later writes to 'data' don't affect this order
}
or the other way around, as Jeff said.
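For completeness, a sketch of that caller-side variant (my reading of the alternative): allocate a fresh array on every iteration, so no two orders ever share a buffer:

for (int i = 0; i <= 5; i++)
{
    byte[] data = new byte[] { (byte)(i + 1) }; // new array each pass; nothing shared
    clientTest.AddOrder(data);
}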

Editing a line in a file by its number [duplicate]

This question already has answers here:
Edit a specific Line of a Text File in C#
(6 answers)
Closed 5 years ago.
I have to write an implementation of an array that stores its values on the hard drive instead of in RAM (I know how stupid it sounds, but it's intended to teach us how different sorting algorithms work on RAM and hard drive). This is what I've written so far:
class HDDArray : IEnumerable<int>
{
    private string filePath;

    public int this[int index]
    {
        get
        {
            using (var reader = new StreamReader(filePath))
            {
                string line = reader.ReadLine();
                for (int i = 0; i < index; i++)
                {
                    line = reader.ReadLine();
                }
                return Convert.ToInt32(line);
            }
        }
        set
        {
            using (var fs = File.Open(filePath, FileMode.OpenOrCreate, FileAccess.ReadWrite))
            {
                var reader = new StreamReader(fs);
                var writer = new StreamWriter(fs);
                for (int i = 0; i < index; i++)
                {
                    reader.ReadLine();
                }
                writer.WriteLine(value);
                writer.Dispose();
            }
        }
    }

    public int Length
    {
        get
        {
            int length = 0;
            using (var reader = new StreamReader(filePath))
            {
                while (reader.ReadLine() != null)
                {
                    length++;
                }
            }
            return length;
        }
    }

    public HDDArray(string file)
    {
        filePath = file;
        if (File.Exists(file))
            File.WriteAllText(file, String.Empty);
        else
            File.Create(file).Dispose();
    }

    public IEnumerator<int> GetEnumerator()
    {
        using (var reader = new StreamReader(filePath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return Convert.ToInt32(line);
            }
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
The problem I'm facing is that when trying to edit a line (in the set portion of the indexer) I end up adding a new line instead of editing the old one (it's pretty obvious why, I just can't figure out how to fix it).
Your array is designed to work with integers. Such a class is quite easy to create, because every value has a fixed length of 4 bytes.
class HDDArray : IEnumerable<int>, IDisposable
{
    readonly FileStream stream;
    readonly BinaryWriter writer;
    readonly BinaryReader reader;

    public HDDArray(string file)
    {
        stream = new FileStream(file, FileMode.Create, FileAccess.ReadWrite);
        writer = new BinaryWriter(stream);
        reader = new BinaryReader(stream);
    }

    public int this[int index]
    {
        get
        {
            stream.Position = index * 4; // each int occupies exactly 4 bytes
            return reader.ReadInt32();
        }
        set
        {
            stream.Position = index * 4;
            writer.Write(value); // overwrites the 4 bytes in place
        }
    }

    public int Length
    {
        get
        {
            return (int)stream.Length / 4;
        }
    }

    public IEnumerator<int> GetEnumerator()
    {
        stream.Position = 0;
        // compare positions rather than PeekChar(): PeekChar tries to decode a
        // character and can throw on arbitrary binary data
        while (stream.Position < stream.Length)
            yield return reader.ReadInt32();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public void Dispose()
    {
        reader?.Dispose();
        writer?.Dispose();
        stream?.Dispose();
    }
}
Since the size of each array element is known, we can move around the stream simply by changing its Position property.
BinaryWriter and BinaryReader are very convenient for writing and reading numbers.
Opening a stream is a fairly heavy operation, so we do it once, when the class is created. At the end of the work you need to clean up after yourself, which is why I implemented the IDisposable interface.
Usage:
HDDArray arr = new HDDArray("test.dat");
Console.WriteLine("Length: " + arr.Length);
for (int i = 0; i < 10; i++)
    arr[i] = i;
Console.WriteLine("Length: " + arr.Length);
foreach (var n in arr)
    Console.WriteLine(n);
// Console.WriteLine(arr[20]); // Exception!
arr.Dispose(); // release resources
I stand to be corrected, but I don't think there is an easy way to rewrite a specific line in place, so you will probably find it easier to rewrite the whole file, modifying that line.
You could change your set code as follows:
set
{
    var allLinesInFile = File.ReadAllLines(filePath);
    allLinesInFile[index] = value.ToString(); // the indexer value is an int, the lines are strings
    File.WriteAllLines(filePath, allLinesInFile);
}
It goes without saying that there should be some safety checks in there, to verify that the file exists and that index < allLinesInFile.Length.
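A sketch of those checks, reusing the names from the snippet above:

set
{
    if (!File.Exists(filePath))
        throw new FileNotFoundException("Backing file is missing", filePath);
    var allLinesInFile = File.ReadAllLines(filePath);
    if (index < 0 || index >= allLinesInFile.Length)
        throw new ArgumentOutOfRangeException(nameof(index));
    allLinesInFile[index] = value.ToString();
    File.WriteAllLines(filePath, allLinesInFile);
}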
I think that for the sake of a sorting-algorithms homework you needn't bother with memory-size issues.
Of course, please add a check that the file exists before reading.
Note: line counting in this example starts from 0.
// lineToEditNmb and lineToWrite are assumed to be supplied by the caller
string[] lines = File.ReadAllLines(filePath);
using (StreamWriter writer = new StreamWriter(filePath))
{
    for (int currentLineNmb = 0; currentLineNmb < lines.Length; currentLineNmb++)
    {
        if (currentLineNmb == lineToEditNmb)
        {
            writer.WriteLine(lineToWrite);
            continue;
        }
        writer.WriteLine(lines[currentLineNmb]);
    }
}

Write and Read an Array to a Binary File

I have an array whose elements each consist of 1 string value and 2 int values: name, index and score. I would like to write it to a binary file.
I have attached the array code below; how could I write this to a file?
Player[] playerArr = new Player[10];
int index = 0;
index = index + 1; // when a new player is added the index is increased by one
Player p = new Player(txtName3.Text, index, Convert.ToInt16(txtScore.Text)); // set the values of the object p
p.refName = txtName3.Text; // set refName to be the string value that is entered in txtName
p.refTotalScore = Convert.ToInt16(txtScore.Text);
playerArr[index] = p; // set the p object to be equal to a position inside the array
I would also like to sort the array's elements for output in descending order of score. How could this be done?
The file handling code I have so far is:
private static void WriteToFile(Player[] playerArr, int size)
{
    Stream sw;
    BinaryFormatter bf = new BinaryFormatter();
    try
    {
        sw = File.Open("Players.bin", FileMode.Create);
        bf.Serialize(sw, playerArr[0]);
        sw.Close();
        sw = File.Open("Players.bin", FileMode.Append);
        for (int x = 1; x < size; x++)
        {
            bf.Serialize(sw, playerArr[x]);
        }
        sw.Close();
    }
    catch (IOException e)
    {
        MessageBox.Show("" + e.Message);
    }
}

private int ReadFromFile(Player[] playerArr)
{
    int size = 0;
    Stream sr;
    try
    {
        sr = File.OpenRead("Players.bin");
        BinaryFormatter bf = new BinaryFormatter();
        try
        {
            while (sr.Position < sr.Length)
            {
                playerArr[size] = (Player)bf.Deserialize(sr);
                size++;
            }
            sr.Close();
        }
        catch (SerializationException e)
        {
            sr.Close();
            return size;
        }
        return size;
    }
    catch (IOException e)
    {
        MessageBox.Show("\n\n\tFile not found" + e.Message);
    }
    finally
    {
        lstLeaderboard2.Items.Add("");
    }
    return size;
}
For the first part, you need to mark your class as Serializable, like this:
[Serializable]
public class Player
It's fine to Append to a new file, so you can change your code to this:
sw = File.Open(@"C:\Players.bin", FileMode.Append);
for (int x = 0; x < size; x++)
{
    bf.Serialize(sw, playerArr[x]);
}
sw.Close();
(with the appropriate exception handling, and you'll obviously need to amend this if the file might already exist).
For the second part, you can sort an array like this using LINQ (since you want descending order of score, use OrderByDescending):
var sortedList = playerArr.OrderByDescending(p => p.Score);
If you require an array as output, do this:
var sortedArray = playerArr.OrderByDescending(p => p.Score).ToArray();
(Here, Score is the name of the property on the Player class by which you want to sort.)
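As an aside, BinaryFormatter can also serialize the whole array in one call, which round-trips the element count for free. A minimal sketch (not from the original answer; file name assumed, requires System.Runtime.Serialization.Formatters.Binary and System.IO):

private static void WriteAllToFile(Player[] players)
{
    var bf = new BinaryFormatter();
    using (Stream s = File.Open("Players.bin", FileMode.Create))
        bf.Serialize(s, players); // one call serializes the array and its length
}

private static Player[] ReadAllFromFile()
{
    var bf = new BinaryFormatter();
    using (Stream s = File.OpenRead("Players.bin"))
        return (Player[])bf.Deserialize(s);
}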
If you'd like any more help, you'll need to be more specific about the problem!

Read double value from a file C#

I have a txt file in the following format:
0.32423 1.3453 3.23423
0.12332 3.1231 9.23432432
9.234324234 -1.23432 12.23432
...
Each line has three double values. There are more than 10000 lines in this file. I can use StreamReader.ReadLine and String.Split, then convert the pieces.
I want to know whether there is any faster method to do it.
Best Regards,
StreamReader.ReadLine, String.Split and Double.TryParse sounds like a good solution here.
No need for improvement.
There may be some little micro-optimisations you can perform, but the way you've suggested sounds about as simple as you'll get.
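One such micro-optimisation (a sketch, assuming the input always uses '.' as the decimal separator) is to parse against the invariant culture, so the current thread's culture is never consulted:

// inside the parsing loop; requires "using System.Globalization;"
double value = double.Parse(bit, NumberStyles.Float, CultureInfo.InvariantCulture);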
10000 lines shouldn't take very long - have you tried it and found you've actually got a performance problem? For example, here are two short programs - one creates a 10,000 line file and the other reads it:
CreateFile.cs:
using System;
using System.IO;

public class Test
{
    static void Main()
    {
        Random rng = new Random();
        using (TextWriter writer = File.CreateText("test.txt"))
        {
            for (int i = 0; i < 10000; i++)
            {
                writer.WriteLine("{0} {1} {2}", rng.NextDouble(),
                                 rng.NextDouble(), rng.NextDouble());
            }
        }
    }
}
ReadFile.cs:
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

public class Test
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        using (TextReader reader = File.OpenText("test.txt"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                string[] bits = line.Split(' ');
                foreach (string bit in bits)
                {
                    double value;
                    if (!double.TryParse(bit, out value))
                    {
                        Console.WriteLine("Bad value");
                    }
                }
            }
        }
        sw.Stop();
        Console.WriteLine("Total time: {0}ms",
                          sw.ElapsedMilliseconds);
    }
}
On my netbook (which admittedly has an SSD in it) it only takes 82ms to read the file. I would suggest that's probably not a problem :)
I would suggest reading all your lines at once with
string[] lines = System.IO.File.ReadAllLines(fileName);
This would ensure that the I/O is done with maximum efficiency. You would have to measure (profile), but I would expect the conversions to take far less time.
Your method is already good!
You can improve it by writing a ReadLine-style function that returns an array of doubles, and reuse this function in other programs.
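A minimal sketch of such a helper (the name and the space separator are assumptions):

// Returns the doubles from the next line, or null at end of file.
static double[] ReadDoubles(TextReader reader)
{
    string line = reader.ReadLine();
    if (line == null) return null;
    string[] parts = line.Split(' ');
    double[] values = new double[parts.Length];
    for (int i = 0; i < parts.Length; i++)
        values[i] = double.Parse(parts[i]);
    return values;
}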
The following solution is a little bit slower (see the benchmarks at the end), but it's nicer to read. It should also be more memory-efficient, because only the current word is buffered at a time (instead of the whole file or line).
Reading arrays is an additional feature in this reader, which assumes that the size of the array always comes first as an int value.
IParsable is another feature that makes it easy to implement Parse methods for various types.
// requires: using System.IO; using System.Text;
class StringStreamReader {
    private StreamReader sr;

    public StringStreamReader(StreamReader sr) {
        this.sr = sr;
        this.Separator = ' ';
    }

    private StringBuilder sb = new StringBuilder();

    public string ReadWord() {
        eol = false;
        sb.Clear();
        char c;
        while (!sr.EndOfStream) {
            c = (char)sr.Read();
            if (c == Separator) break;
            if (IsNewLine(c)) {
                eol = true;
                char nextch = (char)sr.Peek();
                while (IsNewLine(nextch)) {
                    sr.Read(); // consume all newlines
                    nextch = (char)sr.Peek();
                }
                break;
            }
            sb.Append(c);
        }
        return sb.ToString();
    }

    private bool IsNewLine(char c) {
        return c == '\r' || c == '\n';
    }

    public int ReadInt() {
        return int.Parse(ReadWord());
    }

    public double ReadDouble() {
        return double.Parse(ReadWord());
    }

    public bool EOF {
        get { return sr.EndOfStream; }
    }

    public char Separator { get; set; }

    bool eol;
    public bool EOL {
        get { return eol || sr.EndOfStream; }
    }

    public T ReadObject<T>() where T : IParsable, new() {
        var obj = new T();
        obj.Parse(this);
        return obj;
    }

    public int[] ReadIntArray() {
        int size = ReadInt();
        var a = new int[size];
        for (int i = 0; i < size; i++) {
            a[i] = ReadInt();
        }
        return a;
    }

    public double[] ReadDoubleArray() {
        int size = ReadInt();
        var a = new double[size];
        for (int i = 0; i < size; i++) {
            a[i] = ReadDouble();
        }
        return a;
    }

    public T[] ReadObjectArray<T>() where T : IParsable, new() {
        int size = ReadInt();
        var a = new T[size];
        for (int i = 0; i < size; i++) {
            a[i] = ReadObject<T>();
        }
        return a;
    }

    internal void NextLine() {
        eol = false;
    }
}

interface IParsable {
    void Parse(StringStreamReader r);
}
It can be used like this (inside a class implementing IParsable):
public void Parse(StringStreamReader r) {
    double x = r.ReadDouble();
    int y = r.ReadInt();
    string z = r.ReadWord();
    double[] arr = r.ReadDoubleArray();
    MyParsableObject o = r.ReadObject<MyParsableObject>();
    MyParsableObject[] oarr = r.ReadObjectArray<MyParsableObject>();
}
I did some benchmarking, comparing StringStreamReader with some other approaches, already proposed (StreamReader.ReadLine and File.ReadAllLines). Here are the methods I used for benchmarking:
private static void Test_StringStreamReader(string filename) {
    var sw = new Stopwatch();
    sw.Start();
    using (var sr = new StreamReader(new FileStream(filename, FileMode.Open, FileAccess.Read))) {
        var r = new StringStreamReader(sr);
        r.Separator = ' ';
        while (!r.EOF) {
            var dbls = new List<double>();
            while (!r.EOF) {
                dbls.Add(r.ReadDouble());
            }
        }
    }
    sw.Stop();
    Console.WriteLine("elapsed: {0}", sw.Elapsed);
}

private static void Test_ReadLine(string filename) {
    var sw = new Stopwatch();
    sw.Start();
    using (var sr = new StreamReader(new FileStream(filename, FileMode.Open, FileAccess.Read))) {
        var dbls = new List<double>();
        while (!sr.EndOfStream) {
            string line = sr.ReadLine();
            string[] bits = line.Split(' ');
            foreach (string bit in bits) {
                dbls.Add(double.Parse(bit));
            }
        }
    }
    sw.Stop();
    Console.WriteLine("elapsed: {0}", sw.Elapsed);
}

private static void Test_ReadAllLines(string filename) {
    var sw = new Stopwatch();
    sw.Start();
    string[] lines = System.IO.File.ReadAllLines(filename);
    var dbls = new List<double>();
    foreach (var line in lines) {
        string[] bits = line.Split(' ');
        foreach (string bit in bits) {
            dbls.Add(double.Parse(bit));
        }
    }
    sw.Stop();
    Console.WriteLine("Test_ReadAllLines: {0}", sw.Elapsed);
}
I used a file with 1,000,000 lines of double values (3 values per line). The file is located on an SSD and each test was repeated multiple times in release mode. These are the results (on average):
Test_StringStreamReader: 00:00:01.1980975
Test_ReadLine: 00:00:00.9117553
Test_ReadAllLines: 00:00:01.1362452
So, as mentioned, StringStreamReader is a bit slower than the other approaches. For 10,000 lines, the performance is around (120ms / 95ms / 100ms).

Ideas to manage IP and Ports in Key/Value format?

I'm looking for a good and fast way to manage IP addresses and ports in a file: sort of a DB table with 2 columns, IP and Port, but in a file, without using a DB.
It has to support adding, deleting and updating. I don't care about concurrency.
Below is some code to accomplish your task. I tried to stick strictly to the point, so maybe something is missing.
I'd create a "Record" class to hold IP/port pairs:
class Record : IPEndPoint, IComparable<Record>
{
    internal long Offset { get; set; }
    public bool Deleted { get; internal set; }

    public Record() : base(0, 0)
    {
        Offset = -1;
        Deleted = false;
    }

    public int CompareTo(Record other)
    {
        // equal only when both address and port match; otherwise order by
        // address first, then by port
        if (this.Address.Equals(other.Address) && this.Port == other.Port)
            return 0;
        else if (this.Address.Equals(other.Address))
            return this.Port.CompareTo(other.Port);
        else
            return
                BitConverter.ToInt32(this.Address.GetAddressBytes(), 0).CompareTo(
                BitConverter.ToInt32(other.Address.GetAddressBytes(), 0));
    }
}

class RecordComparer : IComparer<Record>
{
    public int Compare(Record x, Record y)
    {
        return x.CompareTo(y);
    }
}
...And a "DatabaseFile" class to manage datafile interaction.
class DatabaseFile : IDisposable
{
    private FileStream file;
    private static int RecordSize = 7;
    private static byte[] Deleted = new byte[] { 42 };
    private static byte[] Undeleted = new byte[] { 32 };

    public DatabaseFile(string filename)
    {
        file = new FileStream(filename,
            FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
    }

    public IEnumerable<Record> Locate(Predicate<Record> record)
    {
        file.Seek(0, SeekOrigin.Begin);
        while (file.Position < file.Length)
        {
            long offset = file.Position;
            byte[] buffer = new byte[DatabaseFile.RecordSize];
            file.Read(buffer, 0, DatabaseFile.RecordSize);
            Record current = Build(offset, buffer);
            if (record.Invoke(current))
                yield return current;
        }
    }

    public void Append(Record record)
    {
        // should I look for duplicated values? i dunno
        file.Seek(0, SeekOrigin.End);
        record.Deleted = false;
        record.Offset = file.Position;
        Write(record);
    }

    public void Delete(Record record)
    {
        if (record.Offset == -1) return;
        file.Seek(record.Offset, SeekOrigin.Begin);
        record.Deleted = true;
        Write(record);
    }

    public void Update(Record record)
    {
        if (record.Offset == -1)
        {
            Append(record);
        }
        else
        {
            file.Seek(record.Offset, SeekOrigin.Begin);
            Write(record);
        }
    }

    private void Write(Record record)
    {
        file.Write(GetBytes(record), 0, DatabaseFile.RecordSize);
    }

    private Record Build(long offset, byte[] data)
    {
        byte[] ipAddress = new byte[4];
        Array.Copy(data, 1, ipAddress, 0, ipAddress.Length);
        return new Record
        {
            Offset = offset,
            Deleted = (data[0] == DatabaseFile.Deleted[0]),
            Address = new IPAddress(ipAddress),
            Port = BitConverter.ToInt16(data, 5)
        };
    }

    private byte[] GetBytes(Record record)
    {
        byte[] returnValue = new byte[DatabaseFile.RecordSize];
        Array.Copy(
            record.Deleted ? DatabaseFile.Deleted : DatabaseFile.Undeleted, 0,
            returnValue, 0, 1);
        Array.Copy(record.Address.GetAddressBytes(), 0,
            returnValue, 1, 4);
        Array.Copy(BitConverter.GetBytes(record.Port), 0,
            returnValue, 5, 2);
        return returnValue;
    }

    public void Pack()
    {
        long freeBytes = 0;
        byte[] buffer = new byte[RecordSize];
        Queue<long> deletes = new Queue<long>();
        file.Seek(0, SeekOrigin.Begin);
        while (file.Position < file.Length)
        {
            long offset = file.Position;
            file.Read(buffer, 0, RecordSize);
            if (buffer[0] == Deleted[0])
            {
                deletes.Enqueue(offset);
                freeBytes += RecordSize;
            }
            else
            {
                if (deletes.Count > 0)
                {
                    deletes.Enqueue(offset);
                    file.Seek(deletes.Dequeue(), SeekOrigin.Begin);
                    file.Write(buffer, 0, RecordSize);
                    file.Seek(offset + RecordSize, SeekOrigin.Begin);
                }
            }
        }
        file.SetLength(file.Length - freeBytes);
    }

    public void Sort()
    {
        int offset = -RecordSize; // lazy method
        List<Record> records = this.Locate(r => true).ToList();
        records.Sort(new RecordComparer());
        foreach (Record record in records)
        {
            record.Offset = offset += RecordSize;
            Update(record);
        }
    }

    public void Dispose()
    {
        if (file != null)
            file.Close();
    }
}
Below is a working example:
static void Main(string[] args)
{
    List<IPEndPoint> endPoints = new List<IPEndPoint>(
        new IPEndPoint[]{
            new IPEndPoint(IPAddress.Parse("127.0.0.1"), 80),
            new IPEndPoint(IPAddress.Parse("69.59.196.211"), 80),
            new IPEndPoint(IPAddress.Parse("74.125.45.100"), 80)
        });
    using (DatabaseFile dbf = new DatabaseFile("iptable.txt"))
    {
        foreach (IPEndPoint endPoint in endPoints)
            dbf.Append(new Record {
                Address = endPoint.Address,
                Port = endPoint.Port });
        Record stackOverflow = dbf.Locate(r =>
            Dns.GetHostEntry(r.Address)
               .HostName.Equals("stackoverflow.com")).FirstOrDefault();
        if (stackOverflow != null)
            dbf.Delete(stackOverflow);
        Record google = dbf.Locate(r =>
            r.Address.ToString() == "74.125.45.100").First();
        google.Port = 443;
        dbf.Update(google);
        foreach (Record http in dbf.Locate(r =>
            !r.Deleted && r.Port == 80))
            Console.WriteLine(http.ToString());
    }
    Console.ReadLine();
}
dBase III, I miss you.
Well, that was fun, thank you!
EDIT 1: Added Pack() and lazy Sort() code;
EDIT 2: Added missing IComparable/IComparer implementation
I personally would go for
192.100.10.1:500:20-21
192.100.10.2:27015-27016:80
where the first part is the IP and everything after a : is a port. We can also represent a range with -, and if we want to be very fancy about it we can introduce a trailing u to mark the port type, UDP vs TCP, for example:
192.100.10.2:27015-27016:80:90u
An explode()-style split would handle the above quite easily.
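In C#, String.Split plays the role of explode(); a sketch of parsing one such line under the format described above (all names are assumptions):

// requires: using System; using System.Linq;
string line = "192.100.10.2:27015-27016:80:90u";
string[] parts = line.Split(':');
string ip = parts[0];
foreach (string token in parts.Skip(1))
{
    bool udp = token.EndsWith("u");              // trailing 'u' marks a UDP port
    string range = udp ? token.TrimEnd('u') : token;
    string[] bounds = range.Split('-');          // "27015-27016" is a range
    int first = int.Parse(bounds[0]);
    int last = bounds.Length > 1 ? int.Parse(bounds[1]) : first;
    Console.WriteLine("{0} ports {1}-{2} ({3})", ip, first, last, udp ? "udp" : "tcp");
}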
As for inserting, deleting and updating: we can simply create a class structure such as
// (C++-style pseudocode)
struct port {
    int portnum;
    char type;
    port(int portnum = 0, char type = 't') {
        this->portnum = portnum; this->type = type;
    }
};

class Ip {
public:
    string Ip_str;
    list<port> prt;
};
And then your main could look like:
int main() {
    list<Ip> Ips;
    // Read the list from the file and update the list.
    // Sort, delete, update the list.
    // Rewrite the list back into the file in the order mentioned above.
    return 0;
}
The easiest way is probably to create a small class that contains your IP and port:
class IpAddress
{
    public string IP;
    public int port;
}
and then create a List<IpAddress> of them. You can then use XML serialization and deserialization to write your list to a file and read it back.
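A minimal sketch of that save/load step (assumes IpAddress is declared public, as XmlSerializer only handles public types; the file name is also an assumption):

// requires: using System.Collections.Generic; using System.IO; using System.Xml.Serialization;
var addresses = new List<IpAddress> { new IpAddress { IP = "192.168.1.1", port = 80 } };

var serializer = new XmlSerializer(typeof(List<IpAddress>));
using (var stream = File.Create("addresses.xml"))
    serializer.Serialize(stream, addresses);

List<IpAddress> loaded;
using (var stream = File.OpenRead("addresses.xml"))
    loaded = (List<IpAddress>)serializer.Deserialize(stream);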
The .NET BCL does not offer what you are looking for, since you want to query against a file without loading it into memory first, with support for add/remove. So you'd either have to roll your own embedded database, or you could simply use something like SQLite http://www.sqlite.org/
IP and port is a one-to-many relationship, so I would consider something like this:
\t192.168.1.1\r\n25\r\n26\r\n\t192.168.1.2\r\n2\r\n80\r\n110
where \t is a tab and \r\n is a carriage return followed by a newline.
So when you parse: if you hit a tab character, you know everything from there to the newline is an IP address; then everything between subsequent newlines is a port number for that IP address, until you hit another tab, which starts a new IP address. That's simple and fast, but not as human-readable.
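A sketch of that parse (file name assumed; format exactly as described above):

// requires: using System.Collections.Generic; using System.IO;
var table = new Dictionary<string, List<int>>();
string currentIp = null;
foreach (string line in File.ReadLines("ipports.txt"))
{
    if (line.StartsWith("\t"))                 // tab => the rest of the line is an IP
    {
        currentIp = line.TrimStart('\t');
        table[currentIp] = new List<int>();
    }
    else if (currentIp != null && line.Length > 0)
        table[currentIp].Add(int.Parse(line)); // otherwise it's a port for the current IP
}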
This has nothing to do with IPs and ports: the problem is that, as far as I know, Windows does not allow you to insert or remove bytes in/from the middle of a file.
