how to optimize data load from binary file - c#

I have a little-endian binary file containing ~250,000 values of var1 followed by the same number of values of var2. I need to write a method that reads the file and returns a DataSet with those values in the columns var1 and var2.
I am using the MiscUtil library, mentioned here on SO multiple times; for details see also: will there be an update on MiscUtil for .Net 4?
Thanks a lot, Jon Skeet, for making it available. :)
I have the following code working; I am interested in better ideas on how to minimize the loops needed to read from the file and to populate the DataTable. Any suggestions?
private static DataSet parseBinaryFile(string filePath)
{
    var result = new DataSet();
    var table = result.Tables.Add("Data");
    table.Columns.Add("Index", typeof(int));
    table.Columns.Add("rain", typeof(float));
    table.Columns.Add("gnum", typeof(float));
    const int samplesCount = 259200; // 720 * 360
    float[] vRain = new float[samplesCount];
    float[] vStations = new float[samplesCount];
    try
    {
        if (string.IsNullOrWhiteSpace(filePath) || !File.Exists(filePath))
        {
            throw new ArgumentException(string.Format("Unable to open the file: '{0}'", filePath));
        }
        // at this point filePath is valid and exists...
        using (FileStream fs = new FileStream(filePath, FileMode.Open))
        {
            // We are using the library found here: http://www.yoda.arachsys.com/csharp/miscutil/
            var reader = new MiscUtil.IO.EndianBinaryReader(MiscUtil.Conversion.LittleEndianBitConverter.Little, fs);
            int i = 0;
            while (reader.BaseStream.Position < reader.BaseStream.Length)
            {
                // Read Data
                float buffer = reader.ReadSingle();
                if (i < samplesCount)
                {
                    vRain[i] = buffer;
                }
                else
                {
                    vStations[i - samplesCount] = buffer;
                }
                ++i;
            }
            Console.WriteLine("number of reads was: {0}", (i / 2).ToString("N0"));
        }
        for (int j = 0; j < samplesCount; ++j)
        {
            table.Rows.Add(new object[] { j + 1, vRain[j], vStations[j] });
        }
    }
    catch (Exception exc)
    {
        Debug.WriteLine(exc.Message);
    }
    return result;
}

Option #1
Read the entire file into memory (or Memory Map it) and loop once.
Option #2
Add all the data table rows as you read the var1 section with a placeholder value for var2. Then fix-up the data table as you read the var2 section.
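For Option #1, a minimal sketch (assuming the file really is two back-to-back blocks of 259,200 little-endian singles, and that MiscUtil's EndianBitConverter.Little.ToSingle(byte[], int) overload is available; the method name is illustrative):

private static DataSet parseBinaryFileInMemory(string filePath)
{
    var result = new DataSet();
    var table = result.Tables.Add("Data");
    table.Columns.Add("Index", typeof(int));
    table.Columns.Add("rain", typeof(float));
    table.Columns.Add("gnum", typeof(float));

    const int samplesCount = 259200; // 720 * 360
    byte[] bytes = File.ReadAllBytes(filePath); // one read for the whole file

    // var1 occupies the first block, var2 starts right after it
    int var2Offset = samplesCount * sizeof(float);
    for (int j = 0; j < samplesCount; ++j)
    {
        float rain = MiscUtil.Conversion.EndianBitConverter.Little.ToSingle(bytes, j * sizeof(float));
        float gnum = MiscUtil.Conversion.EndianBitConverter.Little.ToSingle(bytes, var2Offset + j * sizeof(float));
        table.Rows.Add(j + 1, rain, gnum);
    }
    return result;
}

On a little-endian machine the plain System.BitConverter.ToSingle calls would give the same result, but the MiscUtil converter keeps the endianness explicit.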

Related

Text file to Excel

I have a text file containing table contents such as the following:
|ID |SN| | Date | Code |Comp|Source| Format |Unit|BuyQTY|DoneQTY|YetQTY|Late
21C011 5 1080201 BAO-99 高雄 10P056 5X3X5M/R RBDC-18865LA M 10000 7000 3000 1
21C006 1 1080201 BAO-99 高雄 20A001 5X8X2M/R 高廠軟 Q 料 M 60000 40000 20000 1
21C002 6 1080201 BAO-99 高雄 10W013 5X1X5M/R PVC+UV M 202000 100500 101500
21C006 4 1080212 BAO-99 高雄 10P038 4X5X5M/R DIGI PACK M 255000 255000
21C006 5 1080212 BAO-99 高雄 10P039 4X6X5M/R DIGI PACK 295000 295000
21C006 6 1080212 BAO-99 高雄 10P040 4X2X5M/R DIGI PACK M 114000 114000
21C006 7 1080212 BAO-99 高雄 10P041 4X9X5M/R DIGI PACK M 49500 49500
Notice that there are many missing values and varying lengths in the "Format" column.
I tried to read it into Excel, like the following:
Because of the missing values and the varying Format lengths, I can NOT just simply use Split.
I tried to use Graphics.MeasureString() to get the width of the substring up to certain lengths.
For example, a width between 125 and 140 would be "Unit".
But because of the Chinese characters and spaces, the results are all "crooked"!
I can never get the values into the right columns!
Could somebody please be so kind as to teach me how I could get this done correctly!?
Much appreciated!!!
Update:
Because I'm writing a program for somebody else to do this task, I CAN'T ask him to modify the original text in Notepad++ or any other software.
I also can NOT ask him to import it into Excel and set the column widths!
It's ALL for their convenience!!!
So I apologize VERY MUCH if I can NOT make life any easier!!!
PS. The Chinese characters are BIG5.
The following is the code I use to parse the text file into a DataGridView:
float[] colLens = new float[] { 137, 161, 301, 359, 400, 510, 760, 804, 872, 944, 1010, 1035, 1050 };
Graphics g = CreateGraphics();
string[] str = File.ReadAllLines(ofd.FileName, Encoding.GetEncoding("BIG5"));
for (int i = 0; i < str.Count(); i++)
{
    int c = 0;
    DataGridViewRow row = new DataGridViewRow();
    row.CreateCells(dgvMain);
    int d = -1;
    for (int j = 1; j < str[i].Length; j++)
    {
        string s = str[i].Substring(0, j);
        SizeF size = g.MeasureString(s, new Font("細明體", 12));
        for (int k = d + 1; k < colLens.Count() - 1; k++)
        {
            if (size.Width < colLens[k]) break;
            else if (size.Width < colLens[k + 1])
            {
                d = k;
                row.Cells[d].Value = str[i].Substring(c, j - c);
                c = j;
                break;
            }
        }
    }
    dgvMain.Rows.Add(row);
}
Chinese encodings are variable length, whether Big5 or GB18030. This means that X is stored as a single byte but 高 is stored as two bytes. It seems that this file has a fixed byte length per field, not a fixed character length.
This means that code that expects a fixed character length won't be able to read this file easily. That includes Excel and probably every CSV handling library or code.
In the worst case, you can read the bytes directly from a file stream. Each set of bytes can be converted to a string using Encoding.GetString. You can get the Big5 encoding with Encoding.GetEncoding(950).
Encoding _big5 = Encoding.GetEncoding(950);
byte[] _buffer = new byte[90];

// offset is the field's byte offset from the start of the record; because the
// fields are read in order, the stream is already positioned there, and the
// bytes are stored at the same offset inside _buffer.
public string GetField(FileStream stream, int offset, int length)
{
    var read = stream.Read(_buffer, offset, length);
    if (read > 0)
    {
        return _big5.GetString(_buffer, offset, read);
    }
    else
    {
        return "";
    }
}
//Quick & dirty way to skip to the end of the current line
public void SkipToLineEnd(FileStream stream)
{
    int c;
    while ((c = stream.ReadByte()) > -1)
    {
        if (c == '\n')
        {
            return;
        }
    }
}
You can construct a record from a line this way:
public MyRecord GetNextRecord(FileStream stream)
{
    var record = new MyRecord
    {
        Id = GetField(stream, 0, 9),
        ...
        //6 bytes, not just 4
        Comp = GetField(stream, 28, 6),
        ..
        //Start from 50, 16 bytes
        Format = GetField(stream, 50, 16)
    };
    SkipToLineEnd(stream);
    return record;
}
You can write an iterator method that reads records this way until it reaches the end of the file. A quick & dirty way to do that is to check whether the Position of the stream is so close to the end that no full record can be produced, e.g.:
public IEnumerable<MyRecord> GetRecords(FileStream stream, int recordLength)
{
    while (stream.Position < stream.Length - recordLength)
    {
        yield return GetNextRecord(stream);
    }
}
And use it like this:
var records = GetRecords(myStream, 96);
foreach (var record in records)
{
    ....
}
This will take care of trailing newlines and possibly broken last lines.
To skip the header lines, just call SkipToLineEnd() as many times as needed.
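For example, assuming the file starts with two header lines (the count here is illustrative):

// skip the header lines once, before reading the first record
for (int i = 0; i < 2; i++)
{
    SkipToLineEnd(stream);
}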
You could use a library like EPPlus to generate an Excel file directly from this, e.g.:
using (var p = new ExcelPackage())
{
    var ws = p.Workbook.Worksheets.Add("MySheet");
    ws.Cells.LoadFromCollection(records);
    p.SaveAs(new FileInfo(@"c:\workbooks\myworkbook.xlsx"));
}
My two cents:
First use "Split" and read from origin up to source column, then from late to unit (note "reverse" order). What is left is format.
IE, if using fixed columns and ONLY format gives problems,
var colsIdToSource = line.left(200); //Assuming 200 is the sum of cols up to source
var colsUnitToLate = line.right(150); //idem from unit to late
var formatColumn = line.substring(200, line.length-150); // May need to adjust a char or less
Then you process the known columns.
Good luck :)
This task is not as easy as it looks. The code below works with the posted input; it may need a few adjustments.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Data;
namespace ConsoleApplication100
{
class Program
{
const string FILENAME = @"c:\temp\test.csv";
static void Main(string[] args)
{
//|ID |SN| | Date | Code |Comp|Source| Format |Unit|BuyQTY|DoneQTY|YetQTY|Late
DataTable dt = new DataTable();
dt.Columns.Add("ID", typeof(string));
dt.Columns.Add("SN", typeof(string));
dt.Columns.Add("Date", typeof(string));
dt.Columns.Add("Code", typeof(string));
dt.Columns.Add("Comp", typeof(string));
dt.Columns.Add("Source", typeof(string));
dt.Columns.Add("Format", typeof(string));
dt.Columns.Add("Unit", typeof(string));
dt.Columns.Add("BuyQTY", typeof(int));
dt.Columns.Add("DoneQTY", typeof(int));
dt.Columns.Add("YetQTY", typeof(int));
dt.Columns.Add("Late", typeof(int));
StreamReader reader = new StreamReader(FILENAME, Encoding.Unicode);
string line = "";
int lineCount = 0;
while((line = reader.ReadLine()) != null)
{
if ((++lineCount > 1) && (line.Trim().Length > 0))
{
string leader = line.Substring(0, 30).Trim();
string source = line.Substring(31, 16).Trim();
string trailer = line.Substring(48).TrimStart();
string format = trailer.Substring(0, 12).TrimStart();
trailer = trailer.Substring(12).Trim();
DataRow newRow = dt.Rows.Add();
string[] splitLeader = leader.Split(new char[] {' '}, StringSplitOptions.RemoveEmptyEntries);
newRow["ID"] = splitLeader[0].Trim();
newRow["SN"] = splitLeader[1].Trim();
newRow["Date"] = splitLeader[2].Trim();
newRow["Code"] = splitLeader[3].Trim();
newRow["Comp"] = splitLeader[4].Trim();
newRow["Source"] = source;
newRow["Format"] = format;
newRow["Unit"] = trailer.Substring(0,4).Trim();
newRow["BuyQTY"] = int.Parse(trailer.Substring(4, 8));
string doneQTYstr = trailer.Substring(12, 8).Trim();
if (doneQTYstr.Length > 0)
{
newRow["DoneQTY"] = int.Parse(doneQTYstr);
}
if (trailer.Length <= 28)
{
newRow["YetQTY"] = int.Parse(trailer.Substring(20));
}
else
{
newRow["YetQTY"] = int.Parse(trailer.Substring(20,8));
newRow["late"] = int.Parse(trailer.Substring(28));
}
}
}
}
}
}

Large data table to multiple csv files of specific size in .net

I have one large data table of some millions of records. I need to export it into multiple CSV files of a specific size. For example, if I choose a file size of 5 MB, then on export the DataTable should be written to 4 CSV files of 5 MB each, with the last file's size possibly varying due to the remaining records. I went through many solutions here, and also had a look at the CsvHelper library, but they all deal with splitting large files into multiple CSVs, not with exporting an in-memory data table to multiple CSV files based on a specified file size. I want to do this in C#. Any help in this direction would be great.
Thanks
Jay
Thanks @H.G.Sandhagen and @jdweng for the inputs. Currently I have written the following code, which does the work needed. I know it is not perfect; some enhancements can surely be made, and it could be more efficient if we could pre-determine the length from the data table item array, as pointed out by Nick.McDermaid. For now, I will go with this code to unblock myself and will post the final optimized version when I have it coded.
public void WriteToCsv(DataTable table, string path, int size)
{
    int fileNumber = 0;
    StreamWriter sw = new StreamWriter(string.Format(path, fileNumber), false);
    //headers
    for (int i = 0; i < table.Columns.Count; i++)
    {
        sw.Write(table.Columns[i]);
        if (i < table.Columns.Count - 1)
        {
            sw.Write(",");
        }
    }
    sw.Write(sw.NewLine);
    foreach (DataRow row in table.AsEnumerable())
    {
        sw.WriteLine(string.Join(",", row.ItemArray.Select(x => x.ToString())));
        if (sw.BaseStream.Length > size) // Time to create new file!
        {
            sw.Close();
            sw.Dispose();
            fileNumber++;
            sw = new StreamWriter(string.Format(path, fileNumber), false);
        }
    }
    sw.Close();
}
I had a similar problem, and this is how I solved it with CsvHelper.
The answer could easily be adapted to use a DataTable as the source.
public void SplitCsvTest()
{
var inventoryRecords = new List<InventoryCsvItem>();
for (int i = 0; i < 100000; i++)
{
inventoryRecords.Add(new InventoryCsvItem { ListPrice = i + 1, Quantity = i + 1 });
}
const decimal MAX_BYTES = 5 * 1024 * 1024; // 5 MB
List<byte[]> parts = new List<byte[]>();
using (var memoryStream = new MemoryStream())
{
using (var streamWriter = new StreamWriter(memoryStream))
using (var csvWriter = new CsvWriter(streamWriter))
{
csvWriter.WriteHeader<InventoryCsvItem>();
csvWriter.NextRecord();
csvWriter.Flush();
streamWriter.Flush();
var headerSize = memoryStream.Length;
foreach (var record in inventoryRecords)
{
csvWriter.WriteRecord(record);
csvWriter.NextRecord();
csvWriter.Flush();
streamWriter.Flush();
if (memoryStream.Length > (MAX_BYTES - headerSize))
{
parts.Add(memoryStream.ToArray());
memoryStream.SetLength(0);
memoryStream.Position = 0;
csvWriter.WriteHeader<InventoryCsvItem>();
csvWriter.NextRecord();
}
}
if (memoryStream.Length > headerSize)
{
parts.Add(memoryStream.ToArray());
}
}
}
for(int i = 0; i < parts.Count; i++)
{
var part = parts[i];
File.WriteAllBytes($"C:/Temp/Part {i + 1} of {parts.Count}.csv", part);
}
}
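For reference, a rough sketch of that adaptation with a DataTable as the source (the method name is made up; the header row is built from the column names with WriteField/NextRecord, and the splitting logic is otherwise unchanged):

public List<byte[]> SplitDataTableToCsvParts(DataTable table, decimal maxBytes)
{
    var parts = new List<byte[]>();
    using (var memoryStream = new MemoryStream())
    using (var streamWriter = new StreamWriter(memoryStream))
    using (var csvWriter = new CsvWriter(streamWriter))
    {
        // header row from the DataTable's column names
        foreach (DataColumn column in table.Columns)
        {
            csvWriter.WriteField(column.ColumnName);
        }
        csvWriter.NextRecord();
        csvWriter.Flush();
        streamWriter.Flush();
        var headerSize = memoryStream.Length;

        foreach (DataRow row in table.AsEnumerable())
        {
            foreach (var item in row.ItemArray)
            {
                csvWriter.WriteField(item?.ToString());
            }
            csvWriter.NextRecord();
            csvWriter.Flush();
            streamWriter.Flush();
            if (memoryStream.Length > (maxBytes - headerSize))
            {
                parts.Add(memoryStream.ToArray());
                memoryStream.SetLength(0);
                memoryStream.Position = 0;
                foreach (DataColumn column in table.Columns)
                {
                    csvWriter.WriteField(column.ColumnName);
                }
                csvWriter.NextRecord();
            }
        }
        if (memoryStream.Length > headerSize)
        {
            parts.Add(memoryStream.ToArray());
        }
    }
    return parts;
}

The File.WriteAllBytes loop from the answer above can then be reused unchanged to write the parts to disk.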

Intersect and Union in byte array of 2 files

I have 2 files: the first is the source file and the second is the destination file.
Below is my code to Intersect and Union the two files using byte arrays.
FileStream frsrc = new FileStream("Src.bin", FileMode.Open);
FileStream frdes = new FileStream("Des.bin", FileMode.Open);
int length = 24; // get file length
byte[] src = new byte[length];
byte[] des = new byte[length]; // create buffer
int Counter = 0; // actual number of bytes read
int subcount = 0;
while (frsrc.Read(src, 0, length) > 0)
{
    try
    {
        Counter = 0;
        frdes.Position = subcount * length;
        while (frdes.Read(des, 0, length) > 0)
        {
            var data = src.Intersect(des);
            var data1 = src.Union(des);
            Counter++;
        }
        subcount++;
        Console.WriteLine(subcount.ToString());
    }
    catch (Exception ex)
    {
    }
}
It works fine and is very fast.
But now the problem is that I want the counts, and when I use the code below it becomes very slow.
var data = src.Intersect(des).Count();
var data1 = src.Union(des).Count();
So, is there any solution for that?
If yes, then please let me know as soon as possible.
Thanks
Intersect and Union are not the fastest operations. The reason you see it being fast is that you never actually enumerate the results!
Both return an enumerable, not the actual results of the operation. You're supposed to go through that and enumerate the enumerable, otherwise nothing happens - this is called "deferred execution". Now, when you do Count, you actually enumerate the enumerable, and incur the full cost of the Intersect and Union - believe me, the Count itself is relatively trivial (though still an O(n) operation!).
You'll need to make your own methods, most likely. You want to avoid the enumerable overhead, and more importantly, you'll probably want a lookup table.
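A minimal sketch of that lookup-table idea for two byte buffers (variable names are illustrative): it gives the same numbers as src.Intersect(des).Count() and src.Union(des).Count(), because both LINQ methods operate on distinct values, but without building enumerables for every block.

// presence tables: does a given byte value occur in each buffer?
bool[] inSrc = new bool[256];
bool[] inDes = new bool[256];
foreach (byte b in src) inSrc[b] = true;
foreach (byte b in des) inDes[b] = true;

int intersectCount = 0; // distinct byte values present in both buffers
int unionCount = 0;     // distinct byte values present in at least one buffer
for (int v = 0; v < 256; v++)
{
    if (inSrc[v] && inDes[v]) intersectCount++;
    if (inSrc[v] || inDes[v]) unionCount++;
}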
A few points: the comment // get file length is misleading as it is the buffer size. Counter is not the number of bytes read, it is the number of blocks read. data and data1 will end up with the result of the last block read, ignoring any data before them. That is assuming that nothing goes wrong in the while loop - you need to remove the try structure to see if there are any errors.
What you can do is count the number of occurrences of each byte in each file; then if the count of a byte in all the files is greater than zero it is a member of the intersection of the files, and if the count of a byte in any file is greater than zero it is a member of the union of the files.
It is just as easy to write the code for more than two files as it is for two files, whereas LINQ is easy for two but a little bit more fiddly for more than two. (I put in a comparison with using LINQ in a naïve fashion for only two files at the end.)
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var file1 = @"C:\Program Files (x86)\Electronic Arts\Crysis 3\Bin32\Crysis3.exe"; // 26MB
var file2 = @"C:\Program Files (x86)\Electronic Arts\Crysis 3\Bin32\d3dcompiler_46.dll"; // 3MB
List<string> files = new List<string> { file1, file2 };
var sw = System.Diagnostics.Stopwatch.StartNew();
// Prepare array of counters for the bytes
var nFiles = files.Count;
int[][] count = new int[nFiles][];
for (int i = 0; i < nFiles; i++)
{
count[i] = new int[256];
}
// Get the counts of bytes in each file
int bufLen = 32768;
byte[] buffer = new byte[bufLen];
int bytesRead;
for (int fileNum = 0; fileNum < nFiles; fileNum++)
{
using (var sr = new FileStream(files[fileNum], FileMode.Open, FileAccess.Read))
{
bytesRead = bufLen;
while (bytesRead > 0)
{
bytesRead = sr.Read(buffer, 0, bufLen);
for (int i = 0; i < bytesRead; i++)
{
count[fileNum][buffer[i]]++;
}
}
}
}
// Find which bytes are in any of the files or in all the files
var inAny = new List<byte>(); // union
var inAll = new List<byte>(); // intersect
for (int i = 0; i < 256; i++)
{
Boolean all = true;
for (int fileNum = 0; fileNum < nFiles; fileNum++)
{
if (count[fileNum][i] > 0)
{
if (!inAny.Contains((byte)i)) // avoid adding same value more than once
{
inAny.Add((byte)i);
}
}
else
{
all = false;
}
};
if (all)
{
inAll.Add((byte)i);
};
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
// Display the results
Console.WriteLine("Union: " + string.Join(",", inAny.Select(x => x.ToString("X2"))));
Console.WriteLine();
Console.WriteLine("Intersect: " + string.Join(",", inAll.Select(x => x.ToString("X2"))));
Console.WriteLine();
// Compare to using LINQ.
// N/B. Will need adjustments for more than two files.
var srcBytes1 = File.ReadAllBytes(file1);
var srcBytes2 = File.ReadAllBytes(file2);
sw.Restart();
var intersect = srcBytes1.Intersect(srcBytes2).ToArray().OrderBy(x => x);
var union = srcBytes1.Union(srcBytes2).ToArray().OrderBy(x => x);
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine("Union: " + String.Join(",", union.Select(x => x.ToString("X2"))));
Console.WriteLine();
Console.WriteLine("Intersect: " + String.Join(",", intersect.Select(x => x.ToString("X2"))));
Console.ReadLine();
}
}
}
The counting-the-byte-occurrences method is roughly five times faster than the LINQ method on my computer, even though the timing for the LINQ method excludes loading the files, and over a range of file sizes (a few KB to a few MB).

C# task async await smart card - UI thread blocked

I'm new to C#, and I'm trying to use task/async/await for a WinForms GUI. I've read so many tutorials about it, but all of them implement tasks differently. Some tasks use functions, and others just contain the code to execute inline. Some use Task.Run() or just await. Furthermore, all the examples I've seen are of functions that are included in the UI class. I'm trying to run functions that are in classes used within my UI. I'm just really confused now, and don't know what's right/wrong.
What I'm trying to do is write a file to an EEPROM, using the SpringCard API / PC/SC library. I parse the file into packets and write it to the smart card. I also want to update a status label and progress bar. A lot of things can go wrong. I have flags set in the smart card, and right now I just have a while loop running until it reads a certain flag, which will obviously stall the program if it's forever waiting for a flag.
I guess I'm just confused about how to set it up. Help. I've tried using Tasks. Here is my code so far.
/* Initialize open file dialog */
OpenFileDialog ofd = new OpenFileDialog();
ofd.Multiselect = false;
ofd.Filter = "BIN Files (.bin)|*.bin|HEX Files (.hex)|*.hex";
ofd.InitialDirectory = "C:";
ofd.Title = "Select File";
//Check open file dialog result
if (ofd.ShowDialog() != DialogResult.OK)
{
if (shade != null)
{
shade.Dispose();
shade = null;
}
return;
}
//progform.Show();
Progress<string> progress = new Progress<string>();
file = new ATAC_File(ofd.FileName);
try
{
cardchannel.DisconnectReset();
Task upgrade = upgradeASYNC();
if(cardchannel.Connect())
{
await upgrade;
}
else
{
add_log_text("Connection to the card failed");
MessageBox.Show("Failed to connect to the card in the reader : please check that you don't have another application running in background that tries to work with the smartcards in the same time");
if (shade != null)
{
shade.Dispose();
shade = null;
}
cardchannel = null;
}
}
private async Task upgradeASYNC()
{
int i = 0;
int totalpackets = 0;
add_log_text("Parsing file into packets.");
totalpackets = file.parseFile();
/*progress.Report(new MyTaskProgressReport
{
CurrentProgressAmount = i,
TotalProgressAmount = totalpackets,
CurrentProgressMessage = "Sending upgrade file..."
});*/
ST_EEPROMM24LR64ER chip = new ST_EEPROMM24LR64ER(this, cardchannel, file, EEPROM.DONOTHING);
bool writefile = chip.WriteFileASYNC();
if(writefile)
{
add_log_text("WRITE FILE OK.");
}
else
{
add_log_text("WRITE FILE BAD.");
}
}
In the file class:
public int parseFile()
{
FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);
BinaryReader br = new BinaryReader(fs);
FileInfo finfo = new FileInfo(filename);
int readbytecount = 0;
int packetcount = 0;
int numofbytesleft = 0;
byte[] hash = new byte[4];
byte[] packetinfo = new byte[4];
byte[] filechunk = null;
/* Read file until all file bytes read */
while (size_int > readbytecount)
{
//Initialize packet array
filechunk = new byte[MAXDATASIZE];
//read into byte array of max write size
if (packetcount < numoffullpackets)
{
//Initialize packet info array
packetinfo[0] = (byte)((size_int + 1) % 0x0100); //packetcountlo
packetinfo[1] = (byte)((size_int + 1) / 0x0100); //packetcounthi
packetinfo[2] = (byte)((packetcount + 1) / 0x0100); //packetcounthi
packetinfo[3] = (byte)((packetcount + 1) % 0x0100); //packetcountlo
//read bytes from file into packet array
bytesread = br.Read(filechunk, 0, MAXDATASIZE);
//add number of bytes read to readbytecount
readbytecount += bytesread;
}
//read EOF into byte array of size smaller than max write size
else if (packetcount == numoffullpackets)
{
//find out how many bytes left to read
numofbytesleft = size_int - (MAXDATASIZE * numoffullpackets);
//Initialize packet info array
packetinfo[0] = (byte)((size_int + 1) / 0x0100); //packetcounthi
packetinfo[1] = (byte)((size_int + 1) % 0x0100); //packetcountlo
packetinfo[2] = (byte)((packetcount + 1) / 0x0100); //packetcounthi
packetinfo[3] = (byte)((packetcount + 1) % 0x0100); //packetcountlo
//Initialize array and add byte padding, MAXWRITESIZE-4 because the other 4 bytes will be added when we append the CRC
//filechunk = new byte[numofbytesleft];
for (int j = 0; j < numofbytesleft; j++)
{
//read byte from file
filechunk[j] = br.ReadByte();
//add number of bytes read to readbytecount
readbytecount++;
}
for (int j = numofbytesleft; j < MAXDATASIZE; j++)
{
filechunk[j] = 0xFF;
}
}
else
{
MessageBox.Show("ERROR");
}
//calculate crc32 on byte array
int i = 0;
foreach (byte b in crc32.ComputeHash(filechunk))
{
hash[i++] = b;
}
//Append hash to filechunk to create new byte array named chunk
byte[] chunk = new byte[MAXWRITESIZE];
Buffer.BlockCopy(packetinfo, 0, chunk, 0, packetinfo.Length);
Buffer.BlockCopy(filechunk, 0, chunk, packetinfo.Length, filechunk.Length);
Buffer.BlockCopy(hash, 0, chunk, (packetinfo.Length + filechunk.Length), hash.Length);
//Add chunk to byte array list
packetcount++;
PacketBYTE.Add(chunk);
}
parseCMD();
return PacketBYTE.Count;
}
In the EEPROM class:
public bool WriteFileASYNC()
{
int blocknum = ATAC_CONSTANTS.RFBN_RFstartwrite;
byte[] response = null;
CAPDU[] EEPROMcmd = null;
int packetCount = 0;
log("ATTEMPT: Read response funct flag.");
do
{
StopRF();
Thread.SpinWait(100);
StartRF();
log("ATTEMPT: Write function flag.");
while (!WriteFlag(ATAC_CONSTANTS.RFBN_functflag, EEPROM.UPLOADAPP)) ;
} while (ReadFunctFlag(ATAC_CONSTANTS.RFBN_responseflag, 0) != EEPROM.UPLOADAPP);
for (int EEPROMcount = 0; EEPROMcount < file.CmdBYTE.Count; EEPROMcount++)
{
string temp = "ATTEMPT: Write EEPROM #" + EEPROMcount.ToString();
log(temp);
EEPROMcmd = file.CmdBYTE[EEPROMcount];
while (EEPROMcmd[blocknum] != null)
{
if (blocknum % 32 == 0)
{
string tempp = "ATTEMPT: Write packet #" + packetCount.ToString();
log("ATTEMPT: Write packet #");
packetCount++;
}
do
{
response = WriteBinaryASYNC(EEPROMcmd[blocknum]);
} while (response == null);
blocknum++;
}
log("ATTEMPT: Write packet flag.");
while (!WriteFlag(ATAC_CONSTANTS.RFBN_packetflag, ATAC_CONSTANTS.RFflag)) ;
log("ATTEMPT: Write packet flag.");
do
{
StopRF();
Thread.SpinWait(300);
StartRF();
} while (!ReadFlag(ATAC_CONSTANTS.RFBN_packetresponseflag, ((blocknum/32) - 1)*(EEPROMcount+1)));
blocknum = ATAC_CONSTANTS.RFBN_RFstartwrite;
}
return true;
}
Tasks are not threads, and calling an async method does not run it on another thread.
When you write this:
Task upgrade = upgradeASYNC();
you are simply starting upgradeASYNC on the current (UI) thread.
When you write this:
await upgrade;
you are only waiting for that task to finish (before going to the next instruction).
And this method
private async Task upgradeASYNC()
returns a Task object only because you added the async keyword. But in the body of this method there is no await, so it just runs synchronously from start to finish on the UI thread, which is why the UI is blocked.
I don't have time to rewrite your code; I leave that to another Stack Overflow user. You should learn and work harder ;)
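A minimal sketch of one common pattern for this situation (the handler name btnUpgrade_Click is a placeholder and this is not the SpringCard API; the other names come from the code above): push the blocking card I/O onto a thread-pool thread with Task.Run, and report status back through IProgress<string>, whose callback runs on the UI thread.

// Somewhere in the form (UI thread): the event handler stays async void,
// progress callbacks run on the UI thread because Progress<T> captures it.
private async void btnUpgrade_Click(object sender, EventArgs e)
{
    var progress = new Progress<string>(msg => add_log_text(msg));
    try
    {
        bool ok = await Task.Run(() => UpgradeBlocking(progress));
        add_log_text(ok ? "WRITE FILE OK." : "WRITE FILE BAD.");
    }
    catch (Exception ex)
    {
        add_log_text("Upgrade failed: " + ex.Message);
    }
}

// Runs on a thread-pool thread; safe to block and poll here without freezing the UI.
private bool UpgradeBlocking(IProgress<string> progress)
{
    progress.Report("Parsing file into packets.");
    int totalPackets = file.parseFile();
    progress.Report(string.Format("Sending {0} packets...", totalPackets));
    var chip = new ST_EEPROMM24LR64ER(this, cardchannel, file, EEPROM.DONOTHING);
    return chip.WriteFileASYNC(); // the existing blocking write/poll loop
}

With this shape, the polling loops inside WriteFileASYNC can stay as they are without freezing the form, and reporting packet counts through the same IProgress<string> (or a Progress<int> bound to the progress bar) replaces the commented-out progress.Report block.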

How can i write and read using a BinaryWriter?

I have this code which works when writing a binary file:
using (BinaryWriter binWriter =
    new BinaryWriter(File.Open(f.fileName, FileMode.Create)))
{
    for (int i = 0; i < f.histogramValueList.Count; i++)
    {
        binWriter.Write(f.histogramValueList[(int)i]);
    }
    binWriter.Close();
}
And this code to read back from the DAT file on the hard disk:
fileName = Options_DB.get_histogramFileDirectory();
if (File.Exists(fileName))
{
    BinaryReader binReader =
        new BinaryReader(File.Open(fileName, FileMode.Open));
    try
    {
        //byte[] testArray = new byte[3];
        int pos = 0;
        int length = (int)binReader.BaseStream.Length;
        binReader.BaseStream.Seek(0, SeekOrigin.Begin);
        while (pos < length)
        {
            long[] l = new long[256];
            for (int i = 0; i < 256; i++)
            {
                if (pos < length)
                    l[i] = binReader.ReadInt64();
                else
                    break;
                pos += sizeof(Int64);
            }
            list_of_histograms.Add(l);
        }
    }
    catch
    {
    }
    finally
    {
        binReader.Close();
    }
}
But what I want to do is add three more lists to the writing code, like this:
binWriter.Write(f.histogramValueList[(int)i]);
binWriter.Write(f.histogramValueListR[(int)i]);
binWriter.Write(f.histogramValueListG[(int)i]);
binWriter.Write(f.histogramValueListB[(int)i]);
But the first thing is: how can I write all of this and mark each list in the file itself, with a string or something, so that when I'm reading the file back I can load each list into its own new list?
The second thing is: how do I read the file back now so that each list gets added to a new list?
Right now it's easy: I'm writing one list, then reading it and adding it to a list.
But now I've added three more lists, so how can I do it?
Thanks.
To get the answer, think about how to get the number of items in the list that you've just serialized.
Cheat code: write the number of items in the collection before the items. When reading, do the reverse.
writer.Write(items.Count());
// write items.Count() items.
Reading:
int count = reader.ReadInt32();
items = new List<ItemType>();
// read count item objects and add to items collection.
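A minimal sketch of that idea applied to the four lists above (assuming they are List<long>, which matches the ReadInt64 calls in the reading code; the helper names are made up):

// writing: length-prefix each list so the reader knows where one ends and the next begins
using (var binWriter = new BinaryWriter(File.Open(f.fileName, FileMode.Create)))
{
    WriteList(binWriter, f.histogramValueList);
    WriteList(binWriter, f.histogramValueListR);
    WriteList(binWriter, f.histogramValueListG);
    WriteList(binWriter, f.histogramValueListB);
}

// reading: the lists come back in the same order they were written
using (var binReader = new BinaryReader(File.Open(fileName, FileMode.Open)))
{
    List<long> values = ReadList(binReader);
    List<long> valuesR = ReadList(binReader);
    List<long> valuesG = ReadList(binReader);
    List<long> valuesB = ReadList(binReader);
}

static void WriteList(BinaryWriter writer, List<long> items)
{
    writer.Write(items.Count);      // Int32 count prefix
    foreach (long item in items)
    {
        writer.Write(item);         // Int64 values
    }
}

static List<long> ReadList(BinaryReader reader)
{
    int count = reader.ReadInt32();
    var items = new List<long>(count);
    for (int i = 0; i < count; i++)
    {
        items.Add(reader.ReadInt64());
    }
    return items;
}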
