Getting multiple images from a single stream piped from ffmpeg stdout - C#

I start a process to retrieve a few frames from a video file with ffmpeg,
ffmpeg -i "<videofile>.mp4" -frames:v 10 -f image2pipe pipe:1
and pipe the images to stdout -
var cmd = Process.Start(p);
var stream = cmd.StandardOutput.BaseStream;
var img = Image.FromStream(stream);
Getting the first image this way works, but how do I get all of them?

OK, this was gobsmackingly easy; I'm kind of embarrassed I asked here. I'll post the answer in case it helps anyone else.
The first few bytes in the stream are repeated every time a new image starts. I guessed the first 8 would do, and voilà.
static IEnumerable<Image> GetThumbnails(Stream stream)
{
    byte[] allImages;
    using (var ms = new MemoryStream())
    {
        stream.CopyTo(ms);
        allImages = ms.ToArray();
    }
    // the first 8 bytes of the first image serve as the start-of-image signature
    var bof = allImages.Take(8).ToArray();
    var prevOffset = -1;
    foreach (var offset in GetBytePatternPositions(allImages, bof))
    {
        if (prevOffset > -1)
            yield return GetImageAt(allImages, prevOffset, offset);
        prevOffset = offset;
    }
    if (prevOffset > -1)
        yield return GetImageAt(allImages, prevOffset, allImages.Length);
}
static Image GetImageAt(byte[] data, int start, int end)
{
    // Image.FromStream requires the stream to remain open for the lifetime
    // of the Image, so the MemoryStream must not be disposed here.
    var ms = new MemoryStream(data, start, end - start);
    return Image.FromStream(ms);
}
static IEnumerable<int> GetBytePatternPositions(byte[] data, byte[] pattern)
{
    var dataLen = data.Length;
    var patternLen = pattern.Length - 1;
    int scanData = 0;
    int scanPattern = 0;
    while (scanData < dataLen)
    {
        if (pattern[0] == data[scanData])
        {
            scanPattern = 1;
            scanData++;
            // guard against running off the end of the data mid-pattern
            while (scanData < dataLen && pattern[scanPattern] == data[scanData])
            {
                if (scanPattern == patternLen)
                {
                    yield return scanData - patternLen;
                    break;
                }
                scanPattern++;
                scanData++;
            }
        }
        scanData++;
    }
}
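For completeness, wiring the helper above to the ffmpeg process might look like this (the file name is a placeholder; RedirectStandardOutput is what makes the pipe readable from C#):

```csharp
var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-i \"video.mp4\" -frames:v 10 -f image2pipe pipe:1",
    RedirectStandardOutput = true,
    UseShellExecute = false
};
using (var cmd = Process.Start(psi))
{
    // GetThumbnails buffers all of stdout first, then yields each frame
    foreach (var img in GetThumbnails(cmd.StandardOutput.BaseStream))
    {
        // use each frame, e.g. img.Save(...)
    }
    cmd.WaitForExit();
}
```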

Related

Reading and finding texts in a file

I'm reading string data from a file. When I search the data I've read, the value I want does not seem to exist. Can you help with this?
The word I'm trying to search is: GTA:SA:MP
The code I use is:
static byte[] ReadFile(string filePath)
{
byte[] buffer;
FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read);
try
{
int length = (int)fileStream.Length; // get file length
buffer = new byte[length]; // create buffer
int count; // actual number of bytes read
int sum = 0; // total number of bytes read
// read until Read method returns 0 (end of the stream has been reached)
while ((count = fileStream.Read(buffer, sum, length - sum)) > 0)
sum += count; // sum is a buffer offset for next reading
}
finally
{
fileStream.Close();
}
return buffer;
}
static void Main(string[] args)
{
byte[] data = ReadFile(@"FILE.exe");
string result = Encoding.ASCII.GetString(data);
if (result.Contains("GTA:SA:MP"))
{
Console.WriteLine("Found");
}
else
{
Console.WriteLine("Not found");
}
Console.ReadLine();
}
The output I get is: Not found
You've got a couple of problems. As others have pointed out, if your source is bytes, then you should compare bytes, not strings; otherwise you have encoding issues. The second issue is that you're using a buffer but not checking for boundary conditions - the case where the pattern you're searching for is split across the buffer-size boundary. One simple way to handle this is to treat the source as a stream and just check byte by byte. I'll include an example using a simple state machine made from local functions.
I used local functions just because it seemed fun; you can do this in a myriad of ways.
static void Main(string[] _)
{
byte[] target = Encoding.UTF8.GetBytes("2:30pm");
long offsetInSource = 0;
int indexOfTarget = 0;
long current = 0;
bool found = false;
Func<byte, byte, bool> match = CheckStart;
using (BinaryReader reader = new BinaryReader(File.Open("foo.txt", FileMode.Open)))
{
while (current < reader.BaseStream.Length)
{
var b = reader.ReadByte();
var t = target[indexOfTarget];
if (match(t, b))
{
found = true;
break;
}
++current;
}
}
if (found)
{
Console.WriteLine($"Found matching pattern at: {offsetInSource}");
}
else
{
Console.WriteLine("Did not find pattern");
}
bool CheckStart(byte t, byte b)
{
if (t == b)
{
offsetInSource = current;
if (++indexOfTarget == target.Length)
return true;
match = CheckRest;
}
return false;
}
bool CheckRest(byte t, byte b)
{
if (t == b)
{
if (++indexOfTarget == target.Length)
return true;
}
else
{
indexOfTarget = 0;
match = CheckStart;
}
return false;
}
}
}
If your file is huge, you can read the file as text in 500-character chunks (for example), store each chunk in a string variable, and search for your phrase in that variable. If the phrase is not found, read another 500 characters at an offset of 450 (500 - 50) and search again. Loop until the phrase is found or EOF is reached.
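A sketch of that overlapping-window idea (the chunk size is arbitrary; the overlap must be at least the phrase length minus one, or a match straddling a chunk boundary will be missed):

```csharp
using System;
using System.IO;

public static class PhraseSearch
{
    // Reads the file in fixed-size chunks, carrying a small tail between
    // chunks so a phrase split across a boundary is still found.
    public static bool ContainsPhrase(string path, string phrase, int chunkSize = 500)
    {
        int overlap = phrase.Length - 1;
        string carry = string.Empty;
        using (var sr = new StreamReader(path))
        {
            var buffer = new char[chunkSize];
            int read;
            while ((read = sr.Read(buffer, 0, chunkSize)) > 0)
            {
                string window = carry + new string(buffer, 0, read);
                if (window.Contains(phrase))
                    return true;
                // keep only the last (phrase.Length - 1) chars for the next window
                carry = window.Length > overlap
                    ? window.Substring(window.Length - overlap)
                    : window;
            }
        }
        return false;
    }
}
```

Note this treats the file as text; for a binary like FILE.exe, the byte-wise state machine above avoids encoding pitfalls.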

How to read/write to a file which is over the size barrier of 2 GB?

So, I don't have a single idea how to get through this situation. I have a function that patches a file by replacing the values I want, but the file I am trying to patch is about 4.5 GB.
Here is the function :
private static readonly byte[] PatchFind = { 0x74, 0x72, 0x79 };
private static readonly byte[] PatchReplace = { 0x79, 0x72, 0x74 };
private static bool DetectPatch(byte[] sequence, int position)
{
if (position + PatchFind.Length > sequence.Length) return false;
for (int p = 0; p < PatchFind.Length; p++)
{
if (PatchFind[p] != sequence[position + p]) return false;
}
return true;
}
private static void PatchFile(string originalFile, string patchedFile)
{
// Ensure target directory exists.
var targetDirectory = Path.GetDirectoryName(patchedFile);
if (targetDirectory == null) return;
Directory.CreateDirectory(targetDirectory);
// Read file bytes.
byte[] fileContent = File.ReadAllBytes(originalFile);
// Detect and patch file.
for (int p = 0; p < fileContent.Length; p++)
{
if (!DetectPatch(fileContent, p)) continue;
for (int w = 0; w < PatchFind.Length; w++)
{
fileContent[p + w] = PatchReplace[w];
}
}
// Save it to another location.
File.WriteAllBytes(patchedFile, fileContent);
}
So how can I make the function work with 2 GB+ files? Any help would be appreciated.
Your problem is that File.ReadAllBytes will not open files longer than Int32.MaxValue bytes. Loading an entire file into memory just to scan it for a pattern is a bad design no matter how big the file is. You should open the file as a stream and use the scanner pattern to step through the file, replacing bytes that match your pattern. A rather simplistic implementation using BinaryReader:
static void PatchStream(Stream source, Stream target
, IList<byte> searchPattern, IList<byte> replacementPattern)
{
using (var input = new BinaryReader(source))
using (var output = new BinaryWriter(target))
{
var buffer = new Queue<byte>();
while (true)
{
if (buffer.Count < searchPattern.Count)
{
if (input.BaseStream.Position < input.BaseStream.Length)
buffer.Enqueue(input.ReadByte());
else
break;
}
else if (buffer.Zip(searchPattern, (b, s) => b == s).All(c => c))
{
foreach (var b in replacementPattern)
output.Write(b);
buffer.Clear();
}
else
{
output.Write(buffer.Dequeue());
}
}
foreach (var b in buffer)
output.Write(b);
}
}
You can call it on files with code like:
PatchStream(new FileInfo(...).OpenRead(),
new FileInfo(...).OpenWrite(),
new[] { (byte)'a', (byte)'b', (byte)'c' },
new[] { (byte)'A', (byte)'B', (byte)'C' });
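One caveat with the call above: FileInfo.OpenWrite opens the target with FileMode.OpenOrCreate, so if the patched output is shorter than an existing target file, stale bytes are left at the end. Creating the target with FileMode.Create avoids that (paths here are just examples):

```csharp
using (var source = File.OpenRead("original.bin"))
using (var target = File.Open("patched.bin", FileMode.Create, FileAccess.Write))
{
    PatchStream(source, target,
        new[] { (byte)'a', (byte)'b', (byte)'c' },
        new[] { (byte)'A', (byte)'B', (byte)'C' });
}
```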

Play video frame by frame performance issues

I want to play a video (mostly .mov with Motion JPEG) frame by frame with a changing framerate. I have a function that gives me a frame number, and then I have to jump there. It will mostly move in one direction but can skip a few frames from time to time; also, the velocity is not constant.
So I have a timer asking every 40ms about a new framenumber and setting the new position.
My first approach now is with DirectShow.Net (Interop.QuartzTypeLib). Therefore I render and open the video and set it to pause to draw the picture in the graph
FilgraphManagerClass media = new FilgraphManagerClass();
media.RenderFile(FileName);
media.pause();
Now I will just set a new position
media.CurrentPosition = framenumber * media.AvgTimePerFrame;
Since the video is in pause mode, it will then draw every requested new position (frame). This works, but really slowly... the video keeps stuttering and lagging, and it's not the video source; there are enough frames recorded to play a fluent video.
With some performance tests I found out that the LAV codec is the bottleneck here. It is not included directly in my project; since this is a DirectShow player, it is pulled in through the codec pack I installed on my PC.
Ideas:
Using the LAV codec myself, directly in C#. I searched, but it seems everyone is using DirectShow, building their own filters rather than using existing ones directly in a project.
Instead of seeking or setting the time, can I get single frames just by frame number and simply draw them?
Is there a completely different way to achieve what I want to do?
Background:
This project has to be a train simulator. We recorded real-time videos of trains driving from inside the cockpit and know which frame corresponds to which position. Now my C# program calculates the position of the train as a function of time and acceleration, gives back the appropriate frame number, and draws that frame.
Additional Information:
There is another project (not written by me) in C/C++ that uses DirectShow and avcodec-LAV directly, in a similar way to mine, and it works fine! That's why I had the idea to use a codec/filter like avcodec-lav myself. But I can't find an interop or interface to work with it from C#.
Thanks everyone for reading this and trying to help! :)
Obtaining a specific frame by seeking the filter graph (the entire pipeline) is pretty slow, since every seek operation involves the following behind the scenes: flushing everything, possibly re-creating worker threads, seeking to the first key frame/splice point/clean point/I-frame before the requested time, then decoding from the found position and skipping frames until the originally requested time is reached.
Overall, the method works well when you scrub paused video, or retrieve specific still frames. When however you try to play this as smooth video, it eventually causes significant part of the effort to be wasted and spent on seeking within video stream.
Solutions here are:
re-encode video to remove or reduce temporal compression (e.g. Motion JPEG AVI/MOV/MP4 files)
whenever possible prefer to skip frames and/or re-timestamp them according to your algorithm instead of seeking
have a cache of decoded video frames and pick from there, populating it as necessary in a worker thread
The latter two are unfortunately hard to achieve without advanced filter development (where continuous decoding without interruption by seeking operations is the key to achieving decent performance). With basic DirectShow.Net you only have basic control over streaming and hence the first item from the list above.
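The frame-cache idea from the list above can be sketched independently of DirectShow; DecodeFrame here is a placeholder for whatever actually produces a bitmap for a given frame number:

```csharp
using System;
using System.Collections.Concurrent;
using System.Drawing;
using System.Linq;

public class FrameCache
{
    private readonly ConcurrentDictionary<int, Bitmap> cache = new ConcurrentDictionary<int, Bitmap>();
    private readonly Func<int, Bitmap> decodeFrame; // hypothetical decoder

    public FrameCache(Func<int, Bitmap> decodeFrame) { this.decodeFrame = decodeFrame; }

    // Called from a worker thread: decode ahead of the playback position.
    public void Prefetch(int startFrame, int count)
    {
        for (int n = startFrame; n < startFrame + count; n++)
            cache.TryAdd(n, decodeFrame(n));
    }

    // Called from the UI timer: a cache hit avoids an expensive seek+decode.
    public Bitmap GetFrame(int frameNumber)
    {
        Bitmap bmp = cache.TryRemove(frameNumber, out var cached) ? cached : decodeFrame(frameNumber);
        // evict frames we've already driven past
        foreach (var old in cache.Keys.Where(k => k < frameNumber).ToList())
            if (cache.TryRemove(old, out var stale)) stale.Dispose();
        return bmp;
    }
}
```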
Wanted to post a comment instead of an answer, but don't have the reputation. I think you're heading in the wrong direction with DirectShow. I've been messing with Motion JPEG for a few years now between C# & Android, and have gotten great performance with built-in .NET code (for converting a byte array to a JPEG frame) and a bit of multi-threading. I can easily achieve over 30 fps from multiple devices, with each device running in its own thread.
Below is an older version of my motion-jpeg parser from my C# app 'OmniView'. To use, just send the network stream to the constructor, and receive the OnImageReceived event. Then you can easily save the frames to the hard-drive for later use (perhaps with the filename set to the timestamp for easy lookup). For better performance though, you will want to save all of the images to one file.
using OmniView.Framework.Helpers;
using System;
using System.IO;
using System.Text;
using System.Windows.Media.Imaging;
namespace OmniView.Framework.Devices.MJpeg
{
public class MJpegStream : IDisposable
{
private const int BUFFER_SIZE = 4096;
private const string tag_length = "Content-Length:";
private const string stamp_format = "yyyyMMddHHmmssfff";
public delegate void ImageReceivedEvent(BitmapImage img);
public delegate void FrameCountEvent(long frames, long failed);
public event ImageReceivedEvent OnImageReceived;
public event FrameCountEvent OnFrameCount;
private bool isHead, isSetup;
private byte[] buffer, newline, newline_src;
private int imgBufferStart;
private Stream data_stream;
private MemoryStream imgStreamA, imgStreamB;
private int headStart, headStop;
private long imgSize, imgSizeTgt;
private bool useStreamB;
public volatile bool EnableRecording, EnableSnapshot;
public string RecordPath, SnapshotFilename;
private string boundary_tag;
private bool tagReadStarted;
private bool enableBoundary;
public volatile bool OututFrameCount;
private long FrameCount, FailedCount;
public MJpegStream() {
isSetup = false;
imgStreamA = new MemoryStream();
imgStreamB = new MemoryStream();
buffer = new byte[BUFFER_SIZE];
newline_src = new byte[] {13, 10};
}
public void Init(Stream stream) {
this.data_stream = stream;
FrameCount = FailedCount = 0;
startHeader(0);
}
public void Dispose() {
if (data_stream != null) data_stream.Dispose();
if (imgStreamA != null) imgStreamA.Dispose();
if (imgStreamB != null) imgStreamB.Dispose();
}
//=============================
public void Process() {
if (isHead) processHeader();
else {
if (enableBoundary) processImageBoundary();
else processImage();
}
}
public void Snapshot(string filename) {
SnapshotFilename = filename;
EnableSnapshot = true;
}
//-----------------------------
// Header
private void startHeader(int remaining_bytes) {
isHead = true;
headStart = 0;
headStop = remaining_bytes;
imgSizeTgt = 0;
tagReadStarted = false;
}
private void processHeader() {
int t = BUFFER_SIZE - headStop;
headStop += data_stream.Read(buffer, headStop, t);
int nl;
//
if (!isSetup) {
byte[] new_newline;
if ((nl = findNewline(headStart, headStop, out new_newline)) >= 0) {
string tag = Encoding.UTF8.GetString(buffer, headStart, nl - headStart);
if (tag.StartsWith("--")) boundary_tag = tag;
headStart = nl+new_newline.Length;
newline = new_newline;
isSetup = true;
return;
}
} else {
while ((nl = findData(newline, headStart, headStop)) >= 0) {
string tag = Encoding.UTF8.GetString(buffer, headStart, nl - headStart);
if (!tagReadStarted && tag.Length > 0) tagReadStarted = true;
headStart = nl+newline.Length;
//
if (!processHeaderData(tag, nl)) return;
}
}
//
if (headStop >= BUFFER_SIZE) {
string data = Encoding.UTF8.GetString(buffer, headStart, headStop - headStart);
throw new Exception("Invalid Header!");
}
}
private bool processHeaderData(string tag, int index) {
if (tag.StartsWith(tag_length)) {
string val = tag.Substring(tag_length.Length);
imgSizeTgt = long.Parse(val);
}
//
if (tag.Length == 0 && tagReadStarted) {
if (imgSizeTgt > 0) {
finishHeader(false);
return false;
}
if (boundary_tag != null) {
finishHeader(true);
return false;
}
}
//
return true;
}
private void finishHeader(bool enable_boundary) {
int s = shiftBytes(headStart, headStop);
enableBoundary = enable_boundary;
startImage(s);
}
//-----------------------------
// Image
private void startImage(int remaining_bytes) {
isHead = false;
imgBufferStart = remaining_bytes;
Stream imgStream = getStream();
imgStream.Seek(0, SeekOrigin.Begin);
imgStream.SetLength(imgSizeTgt);
imgSize = 0;
}
private void processImage() {
long img_r = (imgSizeTgt - imgSize - imgBufferStart);
int bfr_r = Math.Max(BUFFER_SIZE - imgBufferStart, 0);
int t = (int)Math.Min(img_r, bfr_r);
int s = data_stream.Read(buffer, imgBufferStart, t);
int x = imgBufferStart + s;
appendImageData(0, x);
imgBufferStart = 0;
//
if (imgSize >= imgSizeTgt) processImageData(0);
}
private void processImageBoundary() {
int t = Math.Max(BUFFER_SIZE - imgBufferStart, 0);
int s = data_stream.Read(buffer, imgBufferStart, t);
//
int nl, start = 0;
int end = imgBufferStart + s;
while ((nl = findData(newline, start, end)) >= 0) {
int tag_length = boundary_tag.Length;
if (nl+newline.Length+tag_length > BUFFER_SIZE) {
appendImageData(start, nl+newline.Length - start);
start = nl+newline.Length;
continue;
}
//
string v = Encoding.UTF8.GetString(buffer, nl+newline.Length, tag_length);
if (v == boundary_tag) {
appendImageData(start, nl - start);
int xstart = nl+newline.Length + tag_length;
int xsize = shiftBytes(xstart, end);
processImageData(xsize);
return;
} else {
appendImageData(start, nl+newline.Length - start);
}
start = nl+newline.Length;
}
//
if (start < end) {
int end_x = end - newline.Length;
if (start < end_x) {
appendImageData(start, end_x - start);
}
//
shiftBytes(end - newline.Length, end);
imgBufferStart = newline.Length;
}
}
private void processImageData(int remaining_bytes) {
if (EnableSnapshot) {
EnableSnapshot = false;
saveSnapshot();
}
//
try {
BitmapImage img = createImage();
if (EnableRecording) recordFrame();
if (OnImageReceived != null) OnImageReceived.Invoke(img);
FrameCount++;
}
catch (Exception) {
// output frame error ?!
FailedCount++;
}
//
if (OututFrameCount && OnFrameCount != null) OnFrameCount.Invoke(FrameCount, FailedCount);
//
useStreamB = !useStreamB;
startHeader(remaining_bytes);
}
private void appendImageData(int index, int length) {
// Write() takes (buffer, offset, count), so the image grows by "length"
// bytes - not by (length - index).
Stream imgStream = getStream();
imgStream.Write(buffer, index, length);
imgSize += length;
}
//-----------------------------
private void recordFrame() {
string stamp = DateTime.Now.ToString(stamp_format);
string filename = RecordPath+"\\"+stamp+".jpg";
//
ImageHelper.Save(getStream(), filename);
}
private void saveSnapshot() {
Stream imgStream = getStream();
//
imgStream.Position = 0;
Stream file = File.Open(SnapshotFilename, FileMode.Create, FileAccess.Write);
try {imgStream.CopyTo(file);}
finally {file.Close();}
}
private BitmapImage createImage() {
Stream imgStream = getStream();
imgStream.Position = 0;
return ImageHelper.LoadStream(imgStream);
}
//-----------------------------
private Stream getStream() {return useStreamB ? imgStreamB : imgStreamA;}
private int findNewline(int start, int stop, out byte[] data) {
for (int i = start; i < stop; i++) {
if (i < stop-1 && buffer[i] == newline_src[0] && buffer[i+1] == newline_src[1]) {
data = newline_src;
return i;
} else if (buffer[i] == newline_src[1]) {
data = new byte[] {newline_src[1]};
return i;
}
}
data = null;
return -1;
}
private int findData(byte[] data, int start, int stop) {
int data_size = data.Length;
for (int i = start; i < stop-data_size; i++) {
if (findInnerData(data, i)) return i;
}
return -1;
}
private bool findInnerData(byte[] data, int buffer_index) {
int count = data.Length;
for (int i = 0; i < count; i++) {
if (data[i] != buffer[buffer_index+i]) return false;
}
return true;
}
private int shiftBytes(int start, int end) {
int c = end - start;
for (int i = 0; i < c; i++) {
buffer[i] = buffer[end-c+i];
}
return c;
}
}
}

C# task async await smart card - UI thread blocked

I'm new to C#, and I'm trying to use task/async/await for a WinForms GUI. I've read so many tutorials about it, but all of them implement tasks differently. Some tasks use functions, and others just put in the code to execute. Some use Task.Run() or just await. Furthermore, all the examples I've seen are of functions included in the UI class. I'm trying to run functions that are in classes within my UI. I'm just really confused now and don't know what's right or wrong.
What I'm trying to do is write a file to an EEPROM, using the SpringCard API / PC/SC library. I parse the file into packets and write them to the smart card. I also want to update a status label and progress bar. A lot of things can go wrong. I have flags set in the smart card, and right now I just have a while loop running until it reads a certain flag, which will obviously stall the program if it waits forever for a flag.
I guess I'm just confused about how to set it up. Help. I've tried using Tasks. Here is my code so far.
/* Initialize open file dialog */
OpenFileDialog ofd = new OpenFileDialog();
ofd.Multiselect = false;
ofd.Filter = "BIN Files (.bin)|*.bin|HEX Files (.hex)|*.hex";
ofd.InitialDirectory = "C:";
ofd.Title = "Select File";
//Check open file dialog result
if (ofd.ShowDialog() != DialogResult.OK)
{
if (shade != null)
{
shade.Dispose();
shade = null;
}
return;
}
//progform.Show();
Progress<string> progress = new Progress<string>();
file = new ATAC_File(ofd.FileName);
try
{
cardchannel.DisconnectReset();
Task upgrade = upgradeASYNC();
if(cardchannel.Connect())
{
await upgrade;
}
else
{
add_log_text("Connection to the card failed");
MessageBox.Show("Failed to connect to the card in the reader : please check that you don't have another application running in background that tries to work with the smartcards in the same time");
if (shade != null)
{
shade.Dispose();
shade = null;
}
cardchannel = null;
}
}
private async Task upgradeASYNC()
{
int i = 0;
int totalpackets = 0;
add_log_text("Parsing file into packets.");
totalpackets = file.parseFile();
/*progress.Report(new MyTaskProgressReport
{
CurrentProgressAmount = i,
TotalProgressAmount = totalpackets,
CurrentProgressMessage = "Sending upgrade file..."
});*/
ST_EEPROMM24LR64ER chip = new ST_EEPROMM24LR64ER(this, cardchannel, file, EEPROM.DONOTHING);
bool writefile = chip.WriteFileASYNC();
if(writefile)
{
add_log_text("WRITE FILE OK.");
}
else
{
add_log_text("WRITE FILE BAD.");
}
}
In the file class:
public int parseFile()
{
FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);
BinaryReader br = new BinaryReader(fs);
FileInfo finfo = new FileInfo(filename);
int readbytecount = 0;
int packetcount = 0;
int numofbytesleft = 0;
byte[] hash = new byte[4];
byte[] packetinfo = new byte[4];
byte[] filechunk = null;
/* Read file until all file bytes read */
while (size_int > readbytecount)
{
//Initialize packet array
filechunk = new byte[MAXDATASIZE];
//read into byte array of max write size
if (packetcount < numoffullpackets)
{
//Initialize packet info array
packetinfo[0] = (byte)((size_int + 1) % 0x0100); //packetcountlo
packetinfo[1] = (byte)((size_int + 1) / 0x0100); //packetcounthi
packetinfo[2] = (byte)((packetcount + 1) / 0x0100); //packetcounthi
packetinfo[3] = (byte)((packetcount + 1) % 0x0100); //packetcountlo
//read bytes from file into packet array
bytesread = br.Read(filechunk, 0, MAXDATASIZE);
//add number of bytes read to readbytecount
readbytecount += bytesread;
}
//read EOF into byte array of size smaller than max write size
else if (packetcount == numoffullpackets)
{
//find out how many bytes left to read
numofbytesleft = size_int - (MAXDATASIZE * numoffullpackets);
//Initialize packet info array
packetinfo[0] = (byte)((size_int + 1) / 0x0100); //packetcounthi
packetinfo[1] = (byte)((size_int + 1) % 0x0100); //packetcountlo
packetinfo[2] = (byte)((packetcount + 1) / 0x0100); //packetcounthi
packetinfo[3] = (byte)((packetcount + 1) % 0x0100); //packetcountlo
//Initialize array and add byte padding, MAXWRITESIZE-4 because the other 4 bytes will be added when we append the CRC
//filechunk = new byte[numofbytesleft];
for (int j = 0; j < numofbytesleft; j++)
{
//read byte from file
filechunk[j] = br.ReadByte();
//add number of bytes read to readbytecount
readbytecount++;
}
for (int j = numofbytesleft; j < MAXDATASIZE; j++)
{
filechunk[j] = 0xFF;
}
}
else
{
MessageBox.Show("ERROR");
}
//calculate crc32 on byte array
int i = 0;
foreach (byte b in crc32.ComputeHash(filechunk))
{
hash[i++] = b;
}
//Append hash to filechunk to create new byte array named chunk
byte[] chunk = new byte[MAXWRITESIZE];
Buffer.BlockCopy(packetinfo, 0, chunk, 0, packetinfo.Length);
Buffer.BlockCopy(filechunk, 0, chunk, packetinfo.Length, filechunk.Length);
Buffer.BlockCopy(hash, 0, chunk, (packetinfo.Length + filechunk.Length), hash.Length);
//Add chunk to byte array list
packetcount++;
PacketBYTE.Add(chunk);
}
parseCMD();
return PacketBYTE.Count;
}
In the EEPROM class:
public bool WriteFileASYNC()
{
int blocknum = ATAC_CONSTANTS.RFBN_RFstartwrite;
byte[] response = null;
CAPDU[] EEPROMcmd = null;
int packetCount = 0;
log("ATTEMPT: Read response funct flag.");
do
{
StopRF();
Thread.SpinWait(100);
StartRF();
log("ATTEMPT: Write function flag.");
while (!WriteFlag(ATAC_CONSTANTS.RFBN_functflag, EEPROM.UPLOADAPP)) ;
} while (ReadFunctFlag(ATAC_CONSTANTS.RFBN_responseflag, 0) != EEPROM.UPLOADAPP);
for (int EEPROMcount = 0; EEPROMcount < file.CmdBYTE.Count; EEPROMcount++)
{
string temp = "ATTEMPT: Write EEPROM #" + EEPROMcount.ToString();
log(temp);
EEPROMcmd = file.CmdBYTE[EEPROMcount];
while (EEPROMcmd[blocknum] != null)
{
if (blocknum % 32 == 0)
{
string tempp = "ATTEMPT: Write packet #" + packetCount.ToString();
log("ATTEMPT: Write packet #");
packetCount++;
}
do
{
response = WriteBinaryASYNC(EEPROMcmd[blocknum]);
} while (response == null);
blocknum++;
}
log("ATTEMPT: Write packet flag.");
while (!WriteFlag(ATAC_CONSTANTS.RFBN_packetflag, ATAC_CONSTANTS.RFflag)) ;
log("ATTEMPT: Write packet flag.");
do
{
StopRF();
Thread.SpinWait(300);
StartRF();
} while (!ReadFlag(ATAC_CONSTANTS.RFBN_packetresponseflag, ((blocknum/32) - 1)*(EEPROMcount+1)));
blocknum = ATAC_CONSTANTS.RFBN_RFstartwrite;
}
return true;
}
Tasks are not threads, and that distinction is the root of your problem.
When you write this:
Task upgrade = upgradeASYNC();
you are not starting a new thread; the body of upgradeASYNC begins executing immediately, on the calling (UI) thread.
When you write this:
await upgrade;
you are only waiting for that task to finish (before going to the next instruction).
And this method
private async Task upgradeASYNC()
returns a Task object only because you added the async keyword. But there is no await in its body, so it runs entirely synchronously, and that is why your UI thread is blocked.
I don't have time to rewrite your code; I'll leave that to another Stack Overflow user. You should learn and work harder ;)
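For what it's worth, the usual shape of the fix is to push the blocking work onto a thread-pool thread and report progress back via IProgress&lt;T&gt;, which marshals callbacks onto the UI thread. This is only a sketch under the assumption that the SpringCard calls may run off the UI thread; statusLabel and WriteFileBlocking are hypothetical names, not part of the question's code:

```csharp
// Offload the blocking smart-card work with Task.Run; report status with
// Progress<T> so label updates happen back on the UI thread.
private async Task UpgradeAsync(string fileName)
{
    var progress = new Progress<string>(msg => statusLabel.Text = msg);
    bool ok = await Task.Run(() => WriteFileBlocking(fileName, progress));
    add_log_text(ok ? "WRITE FILE OK." : "WRITE FILE BAD.");
}

private bool WriteFileBlocking(string fileName, IProgress<string> progress)
{
    progress.Report("Parsing file into packets...");
    // parse and write packets here; polling the card's flags in this
    // background thread keeps the UI responsive
    return true;
}
```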

How to read large file and split by "\r\n"

I have a large file, >200 MB. The file is a CSV file from an external party, but sadly I cannot just read the file line by line, as \r\n is used to define a new line.
Currently I am reading in all the lines using this approach:
var file = File.ReadAllText(filePath, Encoding.Default);
var lines = Regex.Split(file, @"\r\n");
for (int i = 0; i < lines.Length; i++)
{
string line = lines[i];
...
}
How can I optimize this? After calling ReadAllText on my 225MB file, the process is using more than 1GB RAM. Is it possible to use a streaming approach in my case, where I need to split the file using my \r\n pattern?
EDIT1:
Your solutions using File.ReadLines and a StreamReader will not work, as they treat a lone \n as a line break too. I need to split the file using only my \r\n pattern. Reading the file using my code results in 758,371 lines (which is correct), whereas a normal line count results in more than 1.5 million.
SOLUTION
public static IEnumerable<string> ReadLines(string path)
{
const string delim = "\r\n";
using (StreamReader sr = new StreamReader(path))
{
StringBuilder sb = new StringBuilder();
while (!sr.EndOfStream)
{
for (int i = 0; i < delim.Length; i++)
{
Char c = (char)sr.Read();
sb.Append(c);
if (c != delim[i])
break;
if (i == delim.Length - 1)
{
sb.Remove(sb.Length - delim.Length, delim.Length);
yield return sb.ToString();
sb = new StringBuilder();
break;
}
}
}
if (sb.Length>0)
yield return sb.ToString();
}
}
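The solution above can then be consumed lazily, one \r\n-delimited record at a time (the path is a placeholder):

```csharp
foreach (var line in ReadLines(@"C:\data\export.csv"))
{
    // each "line" may itself contain lone \n characters - that's the point
}
```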
You can use File.ReadLines, which returns IEnumerable<string>, instead of loading the whole file into memory.
foreach (var line in File.ReadLines(filePath, Encoding.Default)
    .Where(l => !String.IsNullOrEmpty(l)))
{
}
Using a StreamReader, it will be easy.
using (StreamReader sr = new StreamReader(path))
{
foreach (string line in GetLine(sr))
{
//
}
}
IEnumerable<string> GetLine(StreamReader sr)
{
while (!sr.EndOfStream)
yield return new string(GetLineChars(sr).ToArray());
}
IEnumerable<char> GetLineChars(StreamReader sr)
{
    if (sr.EndOfStream)
        yield break;
    // the delimiter is a real carriage-return/line-feed pair, so compare
    // against '\r' and '\n' (not against literal backslash characters)
    var c1 = sr.Read();
    if (c1 == '\r')
    {
        var c2 = sr.Read();
        if (c2 == '\n')
            yield break; // end of this "\r\n"-delimited line
        yield return (char)c1;
        if (c2 != -1)
            yield return (char)c2;
    }
    else
        yield return (char)c1;
}
Use StreamReader to read file line by line:
using (StreamReader sr = new StreamReader(filePath))
{
while (true)
{
string line = sr.ReadLine();
if (line == null)
break;
}
}
How about
StreamReader sr = new StreamReader(path);
while (!sr.EndOfStream)
{
string line = sr.ReadLine();
}
Using the stream reader approach means the whole file won't get loaded into memory.
This was my lunch break :)
Set MAXREAD to the amount of data you want in memory at a time - for example when using a foreach, since I'm using yield return. Use the code at your own risk; I've only tried it on smaller sets of data :)
Your usage would be something like:
foreach (var row in new StreamReader(FileName).SplitByChar(new char[] {'\r','\n'}))
{
// Do something awesome! :)
}
And the extension method like this:
public static class FileStreamExtensions
{
    public static IEnumerable<string> SplitByChar(this StreamReader stream, char[] splitter)
    {
        const int MAXREAD = 1024 * 1024;
        var chars = new List<char>(MAXREAD);
        var bytes = new char[MAXREAD];
        while (!stream.EndOfStream)
        {
            int read = stream.Read(bytes, 0, MAXREAD);
            int lastStop = 0;
            for (int i = 0; i < read; i++)
            {
                // only test for the splitter where it fits inside this chunk;
                // a splitter spanning two chunks is still not detected
                if (bytes[i] == splitter[0] && i + splitter.Length <= read)
                {
                    var assume = true;
                    for (int p = 1; p < splitter.Length; p++)
                    {
                        assume &= splitter[p] == bytes[i + p];
                    }
                    if (assume)
                    {
                        chars.AddRange(bytes.Skip(lastStop).Take(i - lastStop));
                        var res = new String(chars.ToArray());
                        chars.Clear();
                        yield return res;
                        i += splitter.Length - 1;
                        lastStop = i + 1;
                    }
                }
            }
            // carry only the bytes actually read past the last split point
            chars.AddRange(bytes.Skip(lastStop).Take(read - lastStop));
        }
        if (chars.Count > 0)
            yield return new String(chars.ToArray());
    }
}

Categories

Resources