Read last line in open file [duplicate] - c#

I'm fairly new to all this, but I feel like I'm pretty close to making this work; I just need a little help! I want to create a DLL that can read and return the last line of a file that is open in another application. This is what my code looks like, I just don't know what to put in the while statement.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;

namespace SharedAccess
{
    public class ReadShare
    {
        static void Main(string path)
        {
            FileStream stream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            StreamReader reader = new StreamReader(stream);
            while (!reader.EndOfStream)
            {
                //What goes here?
            }
        }
    }
}
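A minimal sketch of what could go inside that loop, keeping the FileShare.ReadWrite so the file stays readable while the other application has it open; it simply remembers the last line read:

string lastLine = null;
while (!reader.EndOfStream)
{
    // Keep reading; whatever was read last is the last line.
    string line = reader.ReadLine();
    if (line != null)
    {
        lastLine = line;
    }
}
// lastLine is null if the file was empty.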

To read the last line:
var lastLine = File.ReadLines("YourFileName").Last();
If it's a large file:
public static String ReadLastLine(string path)
{
    return ReadLastLine(path, Encoding.ASCII, "\n");
}

public static String ReadLastLine(string path, Encoding encoding, string newline)
{
    int charsize = encoding.GetByteCount("\n");
    byte[] buffer = encoding.GetBytes(newline);
    using (FileStream stream = new FileStream(path, FileMode.Open))
    {
        long endpos = stream.Length / charsize;
        // Walk backwards from the end of the file until the newline sequence is found.
        for (long pos = charsize; pos < endpos; pos += charsize)
        {
            stream.Seek(-pos, SeekOrigin.End);
            stream.Read(buffer, 0, buffer.Length);
            if (encoding.GetString(buffer) == newline)
            {
                // Everything after this position is the last line.
                buffer = new byte[stream.Length - stream.Position];
                stream.Read(buffer, 0, buffer.Length);
                return encoding.GetString(buffer);
            }
        }
    }
    return null;
}
I referred to this question:
How to read only last line of big text file

File.ReadLines should work for you:
var value = File.ReadLines("yourFile.txt").Last();

Related

Best way to read a short array from disk in C#?

I have to write 4 GB short[] arrays to disk and read them back, so I have found a function to write the arrays, but I am struggling to write the code to read the array from disk. I normally code in other languages, so please forgive me if my attempt is a bit pathetic so far:
using UnityEngine;
using System.Collections;
using System.IO;

public class RWShort : MonoBehaviour
{
    public static void WriteShortArray(short[] values, string path)
    {
        using (FileStream fs = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
        {
            using (BinaryWriter bw = new BinaryWriter(fs))
            {
                foreach (short value in values)
                {
                    bw.Write(value);
                }
            }
        }
    } //Above is fine, here is where I am confused:

    public static short[] ReadShortArray(string path)
    {
        byte[] thisByteArray = File.ReadAllBytes(path);
        short[] thisShortArray = new short[thisByteArray.Length / 2];
        for (int i = 0; i < 10; i += 2)
        {
            thisShortArray[i] = ? convert from byte array;
        }
        return thisShortArray;
    }
}
Shorts are two bytes, so you have to read two bytes at a time. I'd also recommend using yield return, as shown below, so that you aren't trying to pull everything into memory in one go. Though if you need all of the shorts together at once, that won't help you; it depends on what you're doing with them.
void Main()
{
    short[] values = new short[] {
        1, 999, 200, short.MinValue, short.MaxValue
    };
    WriteShortArray(values, @"C:\temp\shorts.txt");

    foreach (var shortInfile in ReadShortArray(@"C:\temp\shorts.txt"))
    {
        Console.WriteLine(shortInfile);
    }
}
public static void WriteShortArray(short[] values, string path)
{
    using (FileStream fs = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
    {
        using (BinaryWriter bw = new BinaryWriter(fs))
        {
            foreach (short value in values)
            {
                bw.Write(value);
            }
        }
    }
}
public static IEnumerable<short> ReadShortArray(string path)
{
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    using (BinaryReader br = new BinaryReader(fs))
    {
        byte[] buffer = new byte[2];
        while (br.Read(buffer, 0, 2) > 0)
            yield return (short)(buffer[0] | (buffer[1] << 8));
    }
}
You could also define it this way, taking advantage of the BinaryReader:
public static IEnumerable<short> ReadShortArray(string path)
{
    using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    using (BinaryReader br = new BinaryReader(fs))
    {
        while (br.BaseStream.Position < br.BaseStream.Length)
            yield return br.ReadInt16();
    }
}
Memory-mapping the file is your friend. There's a MemoryMappedViewAccessor.ReadInt16 function that will allow you to read the data, typed as short, directly out of the OS disk cache, as well as a Write() overload that accepts an Int16, and ReadArray and WriteArray functions if you are calling code that needs a traditional .NET array.
Overview of using Memory-mapped files in .NET on MSDN
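A sketch of the memory-mapped approach described above; the file name and array contents are made up for illustration:

using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class MmfShortDemo
{
    static void Main()
    {
        short[] values = { 1, 999, 200, short.MinValue, short.MaxValue };
        string path = @"C:\temp\shorts.bin"; // hypothetical path

        // Write: map a new file big enough for the whole array.
        using (var mmf = MemoryMappedFile.CreateFromFile(
            path, FileMode.Create, null, values.Length * sizeof(short)))
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.WriteArray(0, values, 0, values.Length);
        }

        // Read: map the existing file and pull the shorts back out.
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (var accessor = mmf.CreateViewAccessor())
        {
            short[] readBack = new short[values.Length];
            accessor.ReadArray(0, readBack, 0, readBack.Length);
            Console.WriteLine(string.Join(", ", readBack));
        }
    }
}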
If you want to do it with ordinary file I/O, use a block size of 1 or 2 megabytes and the Buffer.BlockCopy function to move data en masse between byte[] and short[], and use the FileStream functions that accept a byte[]. Forget about BinaryWriter or BinaryReader, forget about doing 2 bytes at a time.
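For the ordinary file I/O route, the Buffer.BlockCopy shuffle between short[] and byte[] might look like this (block size and names are placeholders, not from the original answer):

using System;
using System.IO;

class BlockCopyDemo
{
    const int BlockBytes = 1 << 20; // 1 MB blocks, per the suggestion above

    public static void WriteShorts(short[] values, string path)
    {
        byte[] buffer = new byte[BlockBytes];
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            int offsetBytes = 0;
            int totalBytes = values.Length * sizeof(short);
            while (offsetBytes < totalBytes)
            {
                int chunk = Math.Min(BlockBytes, totalBytes - offsetBytes);
                // Copy a block of shorts into the byte buffer, then write it.
                Buffer.BlockCopy(values, offsetBytes, buffer, 0, chunk);
                fs.Write(buffer, 0, chunk);
                offsetBytes += chunk;
            }
        }
    }

    public static short[] ReadShorts(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            short[] values = new short[fs.Length / sizeof(short)];
            byte[] buffer = new byte[BlockBytes];
            int offsetBytes = 0;
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Copy the block of bytes back into the short array.
                Buffer.BlockCopy(buffer, 0, values, offsetBytes, read);
                offsetBytes += read;
            }
            return values;
        }
    }
}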
It's also possible to do the I/O directly into a .NET array with the help of p/invoke; see my answer using ReadFile and passing the FileStream object's SafeFileHandle property here. But even though this has no extra copies, it still shouldn't keep up with the memory-mapped ReadArray and WriteArray calls.

Move position in FileStream (C#)

I have a txt file like this
#header1
#header2
#header3
....
#headerN
ID Value Pvalue
a 0.1 0.002
b 0.2 0.002
...
My code tries to parse it:
FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
......
Table t = Table.Load(fs);
What I want is to set the start position of the stream right before "ID", so I can feed the stream to the code and build a new table, but I am not sure what the correct way to do this is.
Thanks in advance
Ideally, you should convert Table.Load to take an IEnumerable<string> or at least a StreamReader, not a raw Stream.
If this is not an option, you can read the whole file into memory, skip its header, and write the result into MemoryStream:
MemoryStream stream = new MemoryStream();
using (var writer = new StreamWriter(stream, Encoding.UTF8, 1024, leaveOpen: true))
{
    // Skip the leading "#..." header lines and copy the rest.
    foreach (var line in File.ReadLines(fileName).SkipWhile(s => s.StartsWith("#")))
    {
        writer.WriteLine(line);
    }
}
// leaveOpen: true keeps the MemoryStream usable after the writer is disposed.
stream.Position = 0;
Table t = Table.Load(stream);
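If Table.Load could be given an overload that accepts an IEnumerable<string>, as suggested above, the header skip becomes a one-liner (a sketch; that overload is hypothetical):

// Hypothetical Table.Load overload taking the data lines directly.
Table t = Table.Load(File.ReadLines(fileName).SkipWhile(s => s.StartsWith("#")));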
Try this code
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace ConsoleApplication57
{
    class Program
    {
        const string file = "";

        static void Main(string[] args)
        {
            FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
            StreamReader reader = new StreamReader(fs);
            string inputline = "";
            State state = State.FIND_HEADER;
            while ((inputline = reader.ReadLine()) != null)
            {
                switch (state)
                {
                    case State.FIND_HEADER:
                        if (inputline.StartsWith("#header"))
                        {
                            state = State.READ_TABLE;
                        }
                        break;
                    case State.READ_TABLE:
                        Table t = Table.Load(fs);
                        break;
                }
            }
        }

        enum State
        {
            FIND_HEADER,
            READ_TABLE
        }
    }
}
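If the goal really is to reposition the original FileStream rather than copy the remainder into a MemoryStream, one option is to compute the byte offset of the "ID" line first and Seek to it. A sketch, assuming a UTF-8 file without a BOM and "\n" or "\r\n" line endings:

using System;
using System.IO;
using System.Text;

static long FindTableOffset(string file)
{
    byte[] all = File.ReadAllBytes(file);
    string text = Encoding.UTF8.GetString(all);
    int charIndex = 0;
    foreach (string line in text.Split('\n'))
    {
        if (line.TrimEnd('\r').StartsWith("ID"))
        {
            // Byte offset of the start of the "ID" line.
            return Encoding.UTF8.GetByteCount(text.Substring(0, charIndex));
        }
        charIndex += line.Length + 1; // +1 for the '\n' consumed by Split
    }
    return 0; // "ID" line not found; start at the beginning
}

// Usage:
// FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
// fs.Seek(FindTableOffset(file), SeekOrigin.Begin);
// Table t = Table.Load(fs);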

Multiple file in one Stream, custom stream

According to the answer here, I want to write multiple file streams into one stream, as follows:
4 bytes reserved for the length of each stream
each stream's content written after its length (after those 4 bytes)
at the end the stream will be something like this
Stream = File1 len + File1 stream content + File2 len + File2 stream content + ....
Example code:
result = new ExportResult_C()
{
    PackedStudy = packed.ToArray(),
    Stream = new MemoryStream()
};
string[] zipFiles = Directory.GetFiles(zipRoot);
foreach (string fileN in zipFiles)
{
    MemoryStream outFile = new MemoryStream(File.ReadAllBytes(fileN));
    MemoryStream len = new MemoryStream(4);
    //initiate outFile len to 4 byte push it to main stream
    //Then push outFile stream to main stream
    //Continue and do this for another file
}
//For test Save stream to file(s)
//For test Save stream to file(s)
Is it a good idea? I really don't know how those comments can be turned into lines of code.
Thanks in advance.
Try this
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            byte[] testMessage = Encoding.UTF8.GetBytes("The quick brown fox jumped over the lazy dog");

            MemoryStream outFile = new MemoryStream();
            BinaryWriter writer = new BinaryWriter(outFile);
            // Write ten length-prefixed copies of the test message.
            for (int i = 0; i < 10; i++)
            {
                writer.Write(BitConverter.GetBytes(testMessage.Length), 0, 4);
                writer.Write(testMessage, 0, testMessage.Length);
            }
            writer.Flush();

            outFile.Position = 0;
            BinaryReader reader = new BinaryReader(outFile, Encoding.UTF8);
            // Read them back: a 4-byte length, then that many bytes of content.
            while (outFile.Position < outFile.Length)
            {
                int size = reader.ReadInt32();
                byte[] data = reader.ReadBytes(size);
            }
        }
    }
}
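Applied to the loop from the question, the same length-prefix pattern might look like this (a sketch; ExportResult_C, packed, and zipRoot come from the question):

result = new ExportResult_C()
{
    PackedStudy = packed.ToArray(),
    Stream = new MemoryStream()
};
using (var writer = new BinaryWriter(result.Stream, Encoding.UTF8, leaveOpen: true))
{
    foreach (string fileN in Directory.GetFiles(zipRoot))
    {
        byte[] content = File.ReadAllBytes(fileN);
        writer.Write(BitConverter.GetBytes(content.Length), 0, 4); // 4-byte length prefix
        writer.Write(content, 0, content.Length);                  // file content
    }
}
result.Stream.Position = 0; // rewind so the consumer can read from the start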
I think there is a better solution, which I posted as an answer to my own question here:
multiple files' bytes are serialized into one stream, and on the client side it is deserialized into a class of byte arrays.
See here, it may be useful.
But I have accepted @jdweng's solution, and I appreciate his attention and help.

Is there a better way to read and modify text lines and write them into an output stream?

I'm currently trying to read a file, modify a few placeholders within it, and then write the file into an output stream. As it's the output stream for a page response in ASP.NET, I'm using the OutputStream.Write method there (the file is an attachment in the end).
Originally I had:
using (FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    byte[] bytes = new byte[4096];
    int tmpReadBytes;
    long readBytes = 0;
    while (readBytes < fs.Length)
    {
        tmpReadBytes = fs.Read(bytes, 0, bytes.Length);
        if (tmpReadBytes > 0)
        {
            readBytes += tmpReadBytes;
            page.Response.OutputStream.Write(bytes, 0, tmpReadBytes);
        }
    }
}
After thinking things over I came up with the following:
foreach (string line in File.ReadLines(filename))
{
    string modifiedLine = line.Replace("#PlaceHolder#", "NewValue");
    byte[] modifiedByteArray = System.Text.Encoding.UTF8.GetBytes(modifiedLine);
    page.Response.OutputStream.Write(modifiedByteArray, 0, modifiedByteArray.Length);
}
But it looks inefficient, especially with the conversions. So my question is: is there any better way of doing this?
As a note, the file itself is not very big; it's a text file of about 3-4 KB.
You don't need to handle the bytes yourself.
If you know the file is and always will be small:
this.Response.Write(File.ReadAllText("path").Replace("old", "new"));
otherwise:
using (var stream = new FileStream("path", FileMode.Open))
{
    using (var streamReader = new StreamReader(stream))
    {
        while (streamReader.Peek() != -1)
        {
            this.Response.Write(streamReader.ReadLine().Replace("old", "new"));
        }
    }
}
To get the lines in a string array:
string[] lines = File.ReadAllLines(file);
To alter the lines, use a loop.
for (int i = 0; i < lines.Length; i++)
{
    lines[i] = lines[i].Replace("#PlaceHolder#", "NewValue");
}
And to save the new text, first create a string with all the lines.
string output = "";
foreach(string line in lines)
{
output+="\n"+line;
}
And then save the string to the file.
File.WriteAllText(file,output);
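To avoid building the output with repeated string concatenation, string.Join does the same thing in one call (a minor variation on the loop above, without the leading newline it produces):

// Join the modified lines and write them back in one step.
File.WriteAllText(file, string.Join("\n", lines));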

C# Decompress .GZip to file

I have this Code
using System.IO;
using System.IO.Compression;
...
UnGzip2File("input.gz","output.xls");
which runs this procedure. It runs without error, but afterwards input.gz is empty and the created output.xls is also empty. At the start, input.gz was 12 MB. What am I doing wrong? Or do you have a better/working solution?
public static void UnGzip2File(string inputPath, string outputPath)
{
    FileStream inputFileStream = new FileStream(inputPath, FileMode.Create);
    FileStream outputFileStream = new FileStream(outputPath, FileMode.Create);
    using (GZipStream gzipStream = new GZipStream(inputFileStream, CompressionMode.Decompress))
    {
        byte[] bytes = new byte[4096];
        int n;
        // To be sure the whole file is correctly read,
        // you should call FileStream.Read method in a loop,
        // even if in the most cases the whole file is read in a single call of FileStream.Read method.
        while ((n = gzipStream.Read(bytes, 0, bytes.Length)) != 0)
        {
            outputFileStream.Write(bytes, 0, n);
        }
    }
    outputFileStream.Dispose();
    inputFileStream.Dispose();
}
Opening the FileStream with FileMode.Create will overwrite the existing file, as documented here. This causes the input file to be truncated to empty before you try to decompress it, which in turn leads to an empty output file.
Below is a working code sample. Note that it is async; this can be changed by leaving out async/await, calling the regular CopyTo method, and changing the return type to void.
public static async Task DecompressGZip(string inputPath, string outputPath)
{
    using (var input = File.OpenRead(inputPath))
    using (var output = File.OpenWrite(outputPath))
    using (var gz = new GZipStream(input, CompressionMode.Decompress))
    {
        await gz.CopyToAsync(output);
    }
}
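For reference, the synchronous form the answer describes would be:

// Synchronous alternative to the async version above.
public static void DecompressGZip(string inputPath, string outputPath)
{
    using (var input = File.OpenRead(inputPath))
    using (var output = File.OpenWrite(outputPath))
    using (var gz = new GZipStream(input, CompressionMode.Decompress))
    {
        gz.CopyTo(output);
    }
}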
