Antlr4: How to parse only one part of the file - c#

Is it possible to parse only let's say the first half of the file with antlr4?
I am parsing large files and I am using UnbufferedCharStream and UnbufferedTokenStream.
I am not building a parse tree, and I am using parse actions instead of the visitor/listener patterns. With these changes I was able to save a significant amount of RAM and improve the parse speed.
However, it still takes around 15 seconds to parse the whole file. The parsed file is divided into two sections: the first half of the file has metadata, the second one the actual data. The majority of the time is spent in the data section, as there are more than 3 million lines to parse; the metadata section has only around 20,000 lines. Is it possible to parse only the first half, which would improve parse speed significantly? Is it possible to inject EOF manually after the metadata section?
How about dividing the file into two?

How about programmatically extracting only the part you want to parse and writing it to a new tmp.extension file that you then parse? It could look like this:
System.IO.File.WriteAllText(@"C:\Users\Path\tmp.extension", text);
After the parsing you can delete the tmp file, and the original stays as it is:
System.IO.File.Delete(@"C:\Users\Path\tmp.extension");
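A minimal sketch of that idea, assuming the data section starts at a known marker line (the "DATA" marker and the paths are placeholders, not from the question):

using System.IO;
using System.Linq;

// Copy only the metadata lines into the temp file.
var metaLines = File.ReadLines(@"C:\Users\Path\original.extension")
                    .TakeWhile(line => !line.StartsWith("DATA"));
File.WriteAllLines(@"C:\Users\Path\tmp.extension", metaLines);

// ... run the ANTLR parser over tmp.extension here ...

File.Delete(@"C:\Users\Path\tmp.extension");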

ANTLR4 creates recursive-descent parsers, with parse functions that can be invoked directly. Assume you have a grammar like this:
grammar t;
start: meta data EOF;
meta: x y z;
data: a b c+;
Your natural entry point would be the start rule (in your case that would be the rule for the entire file). But it's also possible to only invoke rule meta, which in your case could be the header part of the file. If you don't end this rule with EOF, your parser will just consume enough input to parse this particular part of the entire file.
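With the C# target that could look like this (a minimal sketch; tLexer/tParser are the classes generated from the grammar above, and the file name is a placeholder):

using Antlr4.Runtime;

// Parse only the "meta" rule; the rest of the input is never consumed.
var input = CharStreams.fromPath("big-file.txt");
var lexer = new tLexer(input);
var tokens = new CommonTokenStream(lexer);
var parser = new tParser(tokens);
var metaContext = parser.meta(); // returns as soon as x y z have been matched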

So, I was able to find a solution. I overrode the Emit method of the generated lexer so that it detects the beginning of the second section and manually injects an EOF token, like this:
public override IToken Emit()
{
    string tokenText = base.Text;
    // "DATA" marks the start of the data section; pretend the file ends here
    if (this.metaDataOnly && tokenText == "DATA")
        return base.EmitEOF();
    return base.Emit();
}
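For context, a hypothetical wiring of that flag (the class names, the metaDataOnly field, the entry rule, and the stream setup are my assumptions, not the question's exact code):

using System.IO;
using Antlr4.Runtime;

// Hypothetical setup; the generated class names depend on the grammar name.
var input = new UnbufferedCharStream(new StreamReader("big-file.txt"));
var lexer = new MyLexer(input);
lexer.metaDataOnly = true;                          // Emit() above turns "DATA" into EOF
lexer.TokenFactory = new CommonTokenFactory(true);  // copy text out of the unbuffered stream

var tokens = new UnbufferedTokenStream(lexer);
var parser = new MyParser(tokens) { BuildParseTree = false };
parser.metaSection(); // whatever rule matches the metadata part in your grammar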

Related

Convert text of a C# project into 1 text file

So I'm doing Google Code Jam, and for their new format I have to upload my code as a single text file.
I like writing my code as properly constructed classes across multiple files, even under time pressure (I find I gain more in clarity and my own debugging speed than I lose in overhead), and I want to re-use the common code.
Once I've got my code finished I have to convert from a series of classes in multiple files, to a single file.
Currently I'm just manually copying and pasting all the files' text into a single file, and then manually massaging the usings and namespaces to make it all work.
Is there a better option?
Ideally a tool that will JustDoIt for me?
Alternatively, is there some predictable algorithm I could implement that wouldn't require any manual tweaks?
Write your classes so that all "using"s are inside "namespace"
Write a script which collects all *.cs files and concatenates them
This is probably not the most optimal way to do this, but here is an algorithm that can do what you need (a rough C# sketch follows the steps):
loop through every file and grab every line starting with "using" -> write them to a temp file/buffer
check for duplicates and remove them
get the position of the first '{' after the charsequence "namespace"
get the position of the last '}' in the file
append the text in between these two positions onto a temp file/buffer
append the second file/buffer to the first one
write out the merged buffer
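A rough C# sketch of those steps. One deviation: it keeps the "namespace" keyword and braces from each file so the output compiles, rather than only the text between the braces; the output file name is a placeholder.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

var usings = new HashSet<string>();   // duplicates removed automatically
var bodies = new StringBuilder();

foreach (var path in Directory.EnumerateFiles(".", "*.cs"))
{
    var text = File.ReadAllText(path);

    // steps 1-2: collect distinct "using ...;" lines
    foreach (var rawLine in File.ReadLines(path))
    {
        var line = rawLine.Trim();
        if (line.StartsWith("using ") && line.EndsWith(";"))
            usings.Add(line);
    }

    // steps 3-5: keep everything from "namespace" to the last '}'
    int nsStart = text.IndexOf("namespace");
    int end = text.LastIndexOf('}');
    if (nsStart >= 0 && end > nsStart)
        bodies.AppendLine(text.Substring(nsStart, end - nsStart + 1));
}

// steps 6-7: usings first, then all the namespace blocks
File.WriteAllText("merged.cs",
    string.Join(Environment.NewLine, usings)
    + Environment.NewLine + Environment.NewLine + bodies);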
This is quite subjective. I see the algorithm as the following pseudocode:
usingLines = new HashSet<string>();
newFile = new StringBuilder();
foreach (file in listOfFiles)
{
    var textFromFile = file.ReadToEnd();
    usingLines.UnionWith(textFromFile.GetUsings()); // the HashSet drops duplicates
    newFile.Append(textFromFile.GetBody());         // everything below the usings
}
result = string.Join(newLine, usingLines) + newFile;
// As a result it will have something like this
// using usingsfromFirstFile;
// using usingsfromSecondFile;
//
// namespace FirstFileNamespace
// {
// ...
// }
//
// namespace SecondFileNamespace
// {
// ...
// }
But keep in mind that this approach can lead to conflicts if two different namespaces contain classes with the same names. To solve that you either fix it manually, or get rid of the using directives and use fully qualified type names instead.
Also, these links may be useful:
Merge files,
Merge file in Java

Search multiple XML files for string

I have a folder with 400k+ XML documents, and many more to come, each named 'ID'.xml, and each belongs to a specific user. In a SQL Server database I have the 'ID' from the XML file matched with a userID, which is how I interconnect the XML documents with users. A user can have any number of XML documents attached (but let's say a maximum of around 10k documents).
All XML-documents have a few common elements, but the structure can vary a little.
Now, each user will need to search the XML documents belonging to her, and what I've tried so far (looping through each file and reading it with a StreamReader) is too slow. I don't care if it reads and matches the whole file with attributes and so on, or just the text in each element. What should be returned in the first place is a list of the IDs from the filenames.
What is the fastest and smartest methods here, if any?
I think LINQ-to-XML is probably the direction you want to go.
Assuming you know the names of the tags that you want, you would be able to do a search for those particular elements and return the values.
var xDoc = XDocument.Load("yourFile.xml");
var result = from dec in xDoc.Descendants()
             where dec.Name == "tagName"
             select dec.Value;
result would then contain an IEnumerable of the values of any XML element whose name matches "tagName".
The query could also be written like this:
var result = from dec in xDoc.Descendants("tagName")
             select dec.Value;
or this:
var result = xDoc.Descendants("tagName").Select(tag => tag.Value);
The output would be the same, it is just a different way to filter based on the element name.
You'll have to open each file that contains relevant data, and if you don't know which files contain it, you'll have to open all that may match. So the only performance gain would be in the parsing routine.
When parsing XML, if speed is the requirement, you could use the XmlReader, as it performs far better than the other parsers (most read the entire XML file before you can query them). The fact that it is forward-only should not be a limitation for this case.
If parsing takes about as long as the disk I/O, you could try parsing files in parallel, so one thread could wait for a file to be read while the other parses the loaded data. I don't think you can make that big a win there, though.
Also what is "too slow" and what is acceptable? Would this solution of many files become slower over time?
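A sketch combining the two suggestions above: a forward-only XmlReader for speed, run per file inside Parallel.ForEach. The folder path, element name "tagName", and search text are placeholders.

using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;
using System.Xml;

var matchingIds = new ConcurrentBag<string>();

Parallel.ForEach(Directory.EnumerateFiles(@"C:\XmlFolder", "*.xml"), file =>
{
    using (var reader = XmlReader.Create(file))
    {
        while (reader.Read())
        {
            // forward-only scan; stop at the first matching element
            if (reader.NodeType == XmlNodeType.Element
                && reader.Name == "tagName"
                && reader.ReadElementContentAsString().Contains("searchText"))
            {
                matchingIds.Add(Path.GetFileNameWithoutExtension(file)); // the ID
                break;
            }
        }
    }
});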
Use LINQ to XML.
Check out this article over at MSDN.
XDocument doc = XDocument.Load(@"C:\file.xml");
And don't forget that reading so many files will always be slow; you may try writing a multi-threaded program...
If I understood correctly, you don't want to open each XML file for a particular user because it's too slow, whether you are using LINQ to XML or some other method.
Have you considered saving some values (tags) in both the XML file and the relational database, together with the XML ID?
In that case you could search for the values in the DB first and select only the XML files that contain the searched values.
For example, a table like:
ID       | tagName1 | tagName2
xmlDocID | value1   | value2
My other question is: why have you chosen to store the XML documents in the file system? If you are using SQL Server 2005/2008, it has very good support for storing and searching through XML columns (it can even index some values in the XML).
Are you just looking for files that have a specific string in the content somewhere?
WARNING - Not a pure .NET solution. If this scares you, then stick with the other answers. :)
If that's what you're doing, another alternative is to get something like grep to do the heavy lifting for you. Shell out to it with the "-l" argument to specify that you are only interested in filenames, and you are on to a winner (for more usage examples, see this link).
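From C#, shelling out could look roughly like this (the grep invocation, search text, and folder path are illustrative):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "grep",
    // -l: print only the names of files containing a match
    Arguments = "-l -r \"searchText\" \"C:\\XmlFolder\"",
    RedirectStandardOutput = true,
    UseShellExecute = false
};

using (var process = Process.Start(psi))
{
    string fileList = process.StandardOutput.ReadToEnd(); // one filename per line
    process.WaitForExit();
}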
L.B has already made a valid point.
This is a case where Lucene.Net (or any indexer) would be a must. It would give you steady (very fast) performance in all searches, and handling very large amounts of arbitrary data is one of the primary benefits of indexers.
Or is there any reason why you wouldn't use Lucene?
Lucene.NET (and Lucene) support incremental indexing. If you can re-open the index for reading every so often, then you can keep adding documents to the index all day long -- your searches will be up-to-date with the last time you re-opened the index for searching.
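A minimal indexing sketch along those lines against the Lucene.Net 4.8 API (the index path, field names, and the xmlText variable are my assumptions):

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

var dir = FSDirectory.Open("xml-index");
var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);

using (var writer = new IndexWriter(dir, config))
{
    var doc = new Document
    {
        new StringField("id", "1234", Field.Store.YES),    // the filename ID
        new TextField("content", xmlText, Field.Store.NO)  // searchable text
    };
    writer.AddDocument(doc);
    writer.Commit(); // searchers re-opened after this see the new document
}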

Text File Mapping

I have text files that always come in the same text format (I do not have an xsd for them).
I want to map the data from them to a class.
Is there some standard way to do so, other than writing string parsers or some complicated regexes?
I really do not want to go with hand-written text parsers, because several people are working on this and it would probably take each of us time to understand what the others are doing.
Example
Thanks.
If you have a special format, you need your own parser for sure.
If the format is a standard one like XML, YAML, JSON, CSV, etc., a parsing library will always be available in your language.
UPDATE
From the sample you provided, the format looks more like an INI file, but with custom entries. Maybe you could extend NINI.
Solution:
Change the format of that file to a standard format, like a tab-delimited or comma-separated CSV file.
Then use one of the many libraries out there to read such files, or import them into a database and use an ORM like Entity Framework to read them.
Assuming you cannot change the incoming file format to something more machine-readable, then you will probably need to write your own custom parser. The best way to do it would be to create classes to represent and store all of the different kinds of data, using the appropriate data formats for each field (custom enums, DateTime, Version, etc.)
Try to compartmentalize the code. For example, take these lines here:
272 298 9.663 18.665 -90.000 48 0 13 2 10 5 20009 1 2 1 257 "C4207" 0 0 1000 0 0
This could be a single class or struct. Its constructor could accept the above string as a parameter, and each value could be parsed into a different member. The same class could have a Save() or ToString() method that converts all the values back to a string if needed.
Then the parent class would simply contain an array of the above structure, based on how many entries are in the file.
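A hedged sketch of such a line class (the class name, field names, and indices are guesses, since the real meaning of each column isn't specified):

using System;
using System.Globalization;

public class DataLine
{
    private readonly string[] parts;

    public DataLine(string line)
    {
        // split on any whitespace, dropping empty entries
        parts = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
    }

    // a few typed accessors as examples; names and indices are assumptions
    public int Id => int.Parse(parts[0]);                                     // 272
    public double X => double.Parse(parts[2], CultureInfo.InvariantCulture);  // 9.663
    public string Label => parts[16].Trim('"');                               // "C4207"

    public override string ToString() => string.Join(" ", parts);
}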

How to read a text file into a List in C#

I have a text file that has the following format:
1234
ABC123 1000 2000
The first integer value is a weight, and the next line has three values: a product code, a weight and a cost, with a space between each value; this line can be repeated any number of times.
I have been able to read the text file, store the first value from the first line in a variable, and then read the subsequent lines into an array and from there into a list, using ReadLine() and Split(' ').
To me this seems an inefficient way of doing it, and I have been trying to find a way to read from the second line down, where the product codes, weights and costs are listed, straight into a list without going through an array. My list contains objects that store only the weight and cost, not the product code.
Does anyone know how to read in a text file, take in some values from the file straight into a list control?
Thanks
What you do is correct. There is no generalized way of doing it: you have described the algorithm for your format, and that has to be coded or parameterized somehow.
Since your text file isn't as structured as a CSV file, this kind of manual parsing is probably your best bet.
C# doesn't have a Scanner class like Java, so what you want doesn't exist in the BCL, though you could write your own.
The other answers are correct - there's no generalized solution for this.
If you've got a relatively small file, you can use File.ReadAllLines(), which will at least get rid of a lot of cruft code, since it immediately converts the file to a string array for you.
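Building on that, a sketch of going straight from the lines to a list (the Item class, its members, and the file name are assumptions matching the question's weight/cost object):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical item holding only weight and cost, as in the question.
class Item
{
    public int Weight;
    public decimal Cost;
}

class Program
{
    static void Main()
    {
        string[] lines = File.ReadAllLines("input.txt");
        int firstWeight = int.Parse(lines[0]); // the lone value on the first line

        List<Item> items = lines
            .Skip(1)                    // skip the weight-only first line
            .Select(l => l.Split(' '))  // [code, weight, cost]
            .Select(p => new Item { Weight = int.Parse(p[1]), Cost = decimal.Parse(p[2]) })
            .ToList();

        Console.WriteLine($"{firstWeight}: {items.Count} items");
    }
}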
If you don't want to parse strings from the file and reserve additional memory for the split strings, you can use a binary format to store your information in the file. Then you can use the BinaryReader class with methods like ReadInt32(), ReadDouble() and others, which is more efficient than reading and parsing characters.
But one thing: a binary format is not human-readable, so it will be difficult to edit the file in an editor. Programmatically, though, it's no problem.
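If you did move to a binary layout, the reading side could look like this sketch (the layout and file name here are made up for illustration):

using System.IO;

using (var reader = new BinaryReader(File.OpenRead("input.bin")))
{
    int count = reader.ReadInt32();      // number of records
    for (int i = 0; i < count; i++)
    {
        int weight = reader.ReadInt32(); // no string splitting or parsing needed
        double cost = reader.ReadDouble();
    }
}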

How to write a file format handler

Today I'm cutting video at work (yea me!), and I came across a strange video format: an MOD file with a companion MOI file.
I found this article online from the wiki, and I wanted to write a file format handler, but I'm not sure how to begin.
I want to write a file format handler to read the information files. Has anyone ever done this, and how would I begin?
Edit:
Thanks for all the suggestions, I'm going to attempt this tonight, and I'll let you know. The MOI files are not very large, maybe 5KB in size at most (I don't have them in front of me).
You're in luck in that the MOI format at least spells out the file definition. All you need to do is read in the file and interpret the results based on the file definition.
Following the definition, you should be able to create a class that reads and interprets a file and exposes all of the fields the format defines as properties of the appropriate types.
Reading the file requires opening the file and generally reading it on a byte-by-byte progression, such as:
using (FileStream fs = File.OpenRead("path-to-your-file")) {
    while (true) {
        int b = fs.ReadByte();
        if (b == -1) {
            break;
        }
        // Interpret byte or bytes here....
    }
}
Per the wiki article's referenced PDF, it looks like someone already reverse engineered the format. From the PDF, here's the first entry in the format:
Hex-Address: 0x00
Data Type: 2 Byte ASCII
Value (Hex): "V6"
Meaning: Version
So, a simplistic implementation could pull the first 2 bytes of data from the file stream and convert to ASCII, which would provide a property value for the Version.
Next entry in the format definition:
Hex-Address: 0x02
Data Type: 4 Byte Unsigned Integer
Value (Hex):
Meaning: Total size of MOI-file
Interpreting the next 4 bytes and converting to an unsigned int would provide a property value for the MOI file size.
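Putting those two entries together, a reading sketch might look like this (the file name is a placeholder, and the byte order should be checked against the spec; the assumption here is big-endian):

using System;
using System.IO;
using System.Text;

using (var reader = new BinaryReader(File.OpenRead("clip.MOI")))
{
    // 0x00: 2-byte ASCII version, e.g. "V6"
    string version = Encoding.ASCII.GetString(reader.ReadBytes(2));

    // 0x02: 4-byte unsigned integer, total size of the MOI file
    byte[] sizeBytes = reader.ReadBytes(4);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(sizeBytes); // assuming the file stores big-endian values
    uint totalSize = BitConverter.ToUInt32(sizeBytes, 0);
}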
Hope this helps.
If the files were very large and just needed to be streamed in, I would create a new reader object that uses an UnmanagedMemoryStream to read the information in.
I've done a lot of different file format processing like this. More recently, I've taken to making a lot of my readers more functional where reading tends to use 'yield return' to return read only objects from the file.
However, it all depends on what you want to do. If you are trying to create a general purpose format for use in other applications or create an API, you probably want to conform to an existing standard. If however you just want to get data into your own application, you are free to do it however you want. You could use a binaryreader on the stream and construct the information you need within your app, or get the reader to return objects representing the contents of the file.
The one thing I would recommend: make sure it implements IDisposable, and wrap it in a using!
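To illustrate the 'yield return' style together with the IDisposable advice (the fixed record size and raw byte records are placeholders for whatever the real format defines):

using System.Collections.Generic;
using System.IO;

static IEnumerable<byte[]> ReadRecords(string path, int recordSize)
{
    // the using block disposes the stream even if the caller stops iterating early
    using (var fs = File.OpenRead(path))
    {
        var buffer = new byte[recordSize];
        while (fs.Read(buffer, 0, recordSize) == recordSize)
            yield return (byte[])buffer.Clone(); // hand out a copy, not the shared buffer
    }
}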
