I'm testing ANTLR 4 with C# as the target language.
The Definitive ANTLR 4 Reference says:
Actions are arbitrary chunks of code written in the target language
(the language in which ANTLR generates code) enclosed in {...}. We can
do whatever we want in these actions as long as they are valid target
language statements
However, I get an error if I place a '?' inside {...}
This works:
| ID '(' exprList? ')' { $result = creator.CreateFunctionCall( $ID, null, $exprList.result ); }
But if I add a question mark to take care of the optional exprList, ANTLR (not C#) gives an error:
| ID '(' exprList? ')' { $result = creator.CreateFunctionCall( $ID, null, $exprList?.result ); }
Error ANT02 error(67): Expr.g4:4:156: missing attribute access on rule
reference exprList in $exprList
Is this an error in ANTLR? Or can you use an escape code or similar?
ANTLR scans actions for $rule references and expects an attribute access (a '.') immediately after $exprList, so the C# '?.' operator trips ANTLR's action translator before the code ever reaches the C# compiler; that is what the "missing attribute access" error is about. Try something like this instead:
| ID '(' exprList ')' { $result = creator.CreateFunctionCall( $ID, null, $exprList.result ); }
| ID '(' ')' { $result = creator.CreateFunctionCall( $ID, null, null ); }
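In context, the split might look roughly like this; it is only a sketch, since the question doesn't show the full rule: the rule name expr and the return type IExpression are assumptions.
expr returns [IExpression result]   // rule name and return type are assumed
    : ID '(' exprList ')' { $result = creator.CreateFunctionCall( $ID, null, $exprList.result ); }
    | ID '(' ')'          { $result = creator.CreateFunctionCall( $ID, null, null ); }
    // ... other alternatives ...
    ;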
I'm trying to parse C# preprocessor directives with ANTLR 4 instead of ignoring them. I'm using the grammar mentioned here: https://github.com/antlr/grammars-v4/tree/master/csharp
This is my addition (for now I'm focusing only on pp_conditional):
pp_directive
: Pp_declaration
| pp_conditional
| Pp_line
| Pp_diagnostic
| Pp_region
| Pp_pragma
;
pp_conditional
: pp_if_section (pp_elif_section | pp_else_section | pp_conditional)* pp_endif
;
pp_if_section:
SHARP 'if' conditional_or_expression statement_list
;
pp_elif_section:
SHARP 'elif' conditional_or_expression statement_list
;
pp_else_section:
SHARP 'else' (statement_list | pp_if_section)
;
pp_endif:
SHARP 'endif'
;
I added its entry here:
block
: OPEN_BRACE statement_list? CLOSE_BRACE
| pp_directive
;
I'm getting this error:
line 19:0 mismatched input '#if TEST\n' expecting '}'
when I use the following test case:
if (!IsPostBack){
#if TEST
ltrBuild.Text = "**TEST**";
#else
ltrBuild.Text = "**LIVE**";
#endif
}
The problem is that a block is composed of either '{' statement_list? '}' or a pp_directive. In this specific case, it chooses the first, because the first token it sees is a { (after the if condition). Now it is expecting to maybe see a statement_list and then a }, but what it finds is #if TEST, a pp_directive.
What do we have to do? Make pp_directive a statement. Since we know statement_list: statement+;, we look for the statement rule and add pp_directive to it:
statement
: labeled_statement
| declaration_statement
| embedded_statement
| pp_directive
;
And it should work fine. However, we must also decide whether the pp_directive alternative in your block rule should be removed, and it should be. I'll leave it to you to find out why, but here's a test case that's ambiguous:
if (!IsPostBack)
#pragma X
else {
}
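With pp_directive moved into statement, the pp_directive alternative can be dropped from block, which then goes back to its original shape (a sketch, using the token names of the referenced grammar):
block
    : OPEN_BRACE statement_list? CLOSE_BRACE
    ;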
I'm using ANTLR version 4 to create a compiler. The first phase was the lexer: I created a "CompilerLexer.g4" file and put the lexer rules in it. It works fine.
CompilerLexer.g4:
lexer grammar CompilerLexer;
INT : 'int' ; //1
FLOAT : 'float' ; //2
BEGIN : 'begin' ; //3
END : 'end' ; //4
To : 'to' ; //5
NEXT : 'next' ; //6
REAL : 'real' ; //7
BOOLEAN : 'bool' ; //8
.
.
.
NOTEQUAL : '!=' ; //46
AND : '&&' ; //47
OR : '||' ; //48
POW : '^' ; //49
ID : [a-zA-Z]+ ; //50
WS
: ' ' -> channel(HIDDEN) //50
;
Now it is time for phase 2, the parser. I created a "CompilerParser.g4" file and put the grammar rules in it, but I get dozens of warnings and errors.
CompilerParser.g4:
parser grammar CompilerParser;
options { tokenVocab = CompilerLexer; }
STATEMENT : EXPRESSION SEMIC
| IFSTMT
| WHILESTMT
| FORSTMT
| READSTMT SEMIC
| WRITESTMT SEMIC
| VARDEF SEMIC
| BLOCK
;
BLOCK : BEGIN STATEMENTS END
;
STATEMENTS : STATEMENT STATEMENTS*
;
EXPRESSION : ID ASSIGN EXPRESSION
| BOOLEXP
;
RELEXP : MODEXP (GT | LT | EQUAL | NOTEQUAL | LE | GE | AND | OR) RELEXP
| MODEXP
;
.
.
.
VARDEF : (ID COMA)* ID COLON VARTYPE
;
VARTYPE : INT
| FLOAT
| CHAR
| STRING
;
compileUnit
: EOF
;
Warning and errors:
implicit definition of token 'BLOCK' in parser
implicit definition of token 'BOOLEXP' in parser
implicit definition of token 'EXP' in parser
implicit definition of token 'EXPLIST' in parser
lexer rule 'BLOCK' not allowed in parser
lexer rule 'EXP' not allowed in parser
lexer rule 'EXPLIST' not allowed in parser
lexer rule 'EXPRESSION' not allowed in parser
I have dozens of these warnings and errors. What is the cause?
General questions: What is the difference between using a combined grammar and using the lexer and parser separately? How should I join separate grammar and lexer files?
Lexer rules start with a capital letter, and parser rules start with a lowercase letter. In a parser grammar, you can't define tokens. And since ANTLR treats all your upper-cased rules as lexer rules, it produces these errors/warnings.
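As a sketch, here is a lower-cased, self-contained subset of the parser grammar; it assumes CompilerLexer also defines SEMIC, COMA, COLON, CHAR and STRING, which the question references but does not show:
parser grammar CompilerParser;
options { tokenVocab = CompilerLexer; }
block      : BEGIN statements END ;
statements : statement+ ;                 // matches the same input as STATEMENT STATEMENTS*
statement  : vardef SEMIC
           | block
           ;
vardef     : (ID COMA)* ID COLON vartype ;
vartype    : INT | FLOAT | CHAR | STRING ;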
EDIT
user2998131 wrote:
General questions: What is the difference between using a combined grammar and using the lexer and parser separately?
Separating the lexer and parser rules keeps things organized. Also, when creating separate lexer and parser grammars, you can't (accidentally) put literal tokens inside your parser grammar; you need to define all tokens in your lexer grammar. This makes it apparent which lexer rules get matched before others, and you can't make typos inside recurring literal tokens:
grammar P;
r1 : 'foo' r2;
r2 : r3 'foo '; // added an accidental space after 'foo'
But when you have a parser grammar, you can't make that mistake. You will have to use the lexer rule that matches 'foo':
parser grammar P;
options { tokenVocab=L; }
r1 : FOO r2;
r2 : r3 FOO;
lexer grammar L;
FOO : 'foo';
user2998131 wrote:
How should I join separate grammar and lexer files?
Just like you do in your parser grammar: you point to the proper tokenVocab inside the options { ... } block.
Note that you can also import grammars, which is something different: https://github.com/antlr/antlr4/blob/master/doc/grammars.md#grammar-imports
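A minimal sketch of such an import; the grammar names here are made up for illustration:
// CommonLexerRules.g4 : a hypothetical shared lexer grammar
lexer grammar CommonLexerRules;
ID : [a-zA-Z]+ ;
WS : [ \t\r\n]+ -> skip ;
// MyLang.g4 : imports those rules instead of pointing at a tokenVocab
grammar MyLang;
import CommonLexerRules;
file : ID* EOF ;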
I have a very simple ANTLR parser building in Visual Studio 2012. It works. But when it builds the grammar file, it emits a warning for every token, saying that the token is already defined. What could be causing this?
Here is the grammar file SimpleCalc.g4:
grammar SimpleCalc;
options {
language=CSharp2;
}
tokens {
PLUS,
MINUS,
TIMES,
DIV
}
@members {
}
expr : term ( (PLUS|MINUS) term )* ;
term : factor ( ( TIMES|DIV ) factor )* ;
factor : NUMBER ;
DIV : '/';
PLUS : '+';
TIMES: '*';
MINUS: '-';
NUMBER : (DIGIT)+ {System.Console.WriteLine("Found number"); };
WHITESPACE: ( '\t' | ' ' | '\r' | '\n' | '\u000C' )+ -> skip ;
fragment DIGIT : '0'..'9';
And here are the warnings:
[path]\SimpleCalc.g4(8,3): warning AC0108: token name 'PLUS' is already defined
[path]\SimpleCalc.g4(9,3): warning AC0108: token name 'MINUS' is already defined
[path]\SimpleCalc.g4(10,3): warning AC0108: token name 'TIMES' is already defined
[path]\SimpleCalc.g4(11,3): warning AC0108: token name 'DIV' is already defined
The tokens {...} block declares PLUS, MINUS, TIMES and DIV, and the lexer rules further down define those same tokens again; that is what the warnings point at. I would get rid of the unnecessary tokens {...} block.
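A tokens {...} block is meant for token types that have no lexer rule of their own, for example imaginary tokens created by custom lexer code; a sketch unrelated to this grammar:
tokens { INDENT, DEDENT }   // no lexer rules for these; custom lexer code emits them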
My task is to create an ANTLR grammar to analyse C# source code files and extract the class hierarchy. Then I will use it to generate a class diagram.
I wrote rules to parse namespaces, class declarations and method declarations. Now I have a problem with skipping method bodies. I don't need to parse them, because the bodies are useless for my task.
I wrote simple rule:
body:
'{' .* '}'
;
but it does not work properly when the method looks like:
void foo()
{
...
{
...
}
...
}
The rule matches the first brace, which is OK, then it matches
...
{
...
as 'any' (.*), and then the third brace as the final brace, which is not OK, and the rule ends.
Could anybody help me write a proper rule for method bodies? As I said before, I don't want to parse them, only to skip them.
UPDATE:
Here is the solution to my problem, strongly based on Adam12's answer:
body:
'{' ( ~('{' | '}') | body)* '}'
;
You have to use recursive rules that match parentheses pairs.
rule1 : '('
(
nestedParan
| (~')')*
)
')';
nestedParan : '('
(
nestedParan
| (~')')*
)
')';
This code assumes you are handling this in the parser, so strings and comments are already excluded. ANTLR doesn't allow negating multiple alternatives in parser rules, so the code above relies on the fact that alternatives are tried in order. It should give a warning that alternatives 1 and 2 both match '(' and thus choose the first alternative, which is what we want.
You can handle the recursion of (nested) blocks in your lexer. The trick is to let your class definition also include the opening { so that the entire contents of the class are not gobbled up by this recursive lexer rule.
A quick demo that is without a doubt incomplete, but a decent start for "fuzzy" parsing/lexing a Java source file (or, with slight modifications, a C# one):
grammar T;
parse
: (t=. {System.out.printf("%-15s '%s'\n", tokenNames[$t.type], $t.text.replace("\n", "\\n"));})* EOF
;
Skip
: (StringLiteral | CharLiteral | Comment) {skip();}
;
PackageDecl
: 'package' Spaces Ids {setText($Ids.text);}
;
ClassDecl
: 'class' Spaces Id Spaces? '{' {setText($Id.text);}
;
Method
: Id Spaces? ('(' {setText($Id.text);}
| /* no method after all! */ {skip();}
)
;
MethodOrStaticBlock
: Block {skip();}
;
Any
: . {skip();}
;
// fragments
fragment Spaces
: (' ' | '\t' | '\r' | '\n')+
;
fragment Ids
: Id ('.' Id)*
;
fragment Id
: ('a'..'z' | 'A'..'Z' | '_') ('a'..'z' | 'A'..'Z' | '_' | '0'..'9')*
;
fragment Block
: '{' ( ~('{' | '}' | '"' | '\'' | '/')
| {input.LA(2) != '/'}?=> '/'
| StringLiteral
| CharLiteral
| Comment
| Block
)*
'}'
;
fragment Comment
: '/*' .* '*/'
| '//' ~('\r' | '\n')*
;
fragment CharLiteral
: '\'' ('\\\'' | ~('\\' | '\'' | '\r' | '\n'))+ '\''
;
fragment StringLiteral
: '"' ('\\"' | ~('\\' | '"' | '\r' | '\n'))* '"'
;
I ran the generated parser against the following Java source file:
/*
... package NO.PACKAGE; ...
*/
package foo.bar;
public final class Mu {
static String x;
static {
x = "class NotAClass!";
}
void m1() {
// {
while(true) {
double a = 2.0 / 2;
if(a == 1.0) { break; } // }
/* } */
}
}
static class Inner {
int m2 () {return 42; /*comment}*/ }
}
}
which produced the following output:
PackageDecl 'foo.bar'
ClassDecl 'Mu'
Method 'm1'
ClassDecl 'Inner'
Method 'm2'
The ANTLR website describes two approaches to implementing "include" directives. The first approach is to recognize the directive in the lexer and include the file lexically (by pushing the CharStream onto a stack and replacing it with one that reads the new file); the second is to recognize the directive in the parser, launch a sub-parser to parse the new file, and splice in the AST generated by the sub-parser. Neither of these are quite what I need.
In the language I'm parsing, recognizing the directive in the lexer is impractical for a few reasons:
There is no self-contained character pattern that always means "this is an include directive". For example, Include "foo"; at top level is an include directive, but in Array bar --> Include "foo"; or Constant Include "foo"; the word Include is an identifier.
The name of the file to include may be given as a string or as a constant identifier, and such constants can be defined with arbitrarily complex expressions.
So I want to trigger the inclusion from the parser. But to perform the inclusion, I can't launch a sub-parser and splice the AST together; I have to splice the tokens. It's legal for a block to begin with { in the main file and be terminated by } in the included file. A file included inside a function can even close the function definition and start a new one.
It seems like I'll need something like the first approach but at the level of TokenStreams instead of CharStreams. Is that a viable approach? How much state would I need to keep on the stack, and how would I make the parser switch back to the original token stream instead of terminating when it hits EOF? Or is there a better way to handle this?
==========
Here's an example of the language, demonstrating that blocks opened in the main file can be closed in the included file (and vice versa). Note that the # before Include is required when the directive is inside a function, but optional outside.
main.inf:
[ Main;
print "This is Main!";
if (0) {
#include "other.h";
print "This is OtherFunction!";
];
other.h:
} ! end if
]; ! end Main
[ OtherFunction;
One possibility is, for each Include statement, to let your parser create a new instance of your lexer and insert the tokens that lexer creates at the index the parser is currently at (see the insertTokens(...) method in the @parser::members block).
Here's a quick demo:
Inform6.g
grammar Inform6;
options {
output=AST;
}
tokens {
STATS;
F_DECL;
F_CALL;
EXPRS;
}
@parser::header {
import java.util.Map;
import java.util.HashMap;
}
@parser::members {
private Map<String, String> memory = new HashMap<String, String>();
private void putInMemory(String key, String str) {
String value;
if(str.startsWith("\"")) {
value = str.substring(1, str.length() - 1);
}
else {
value = memory.get(str);
}
memory.put(key, value);
}
private void insertTokens(String fileName) {
// possibly strip quotes from `fileName` in case it's a Str-token
try {
CommonTokenStream thatStream = new CommonTokenStream(new Inform6Lexer(new ANTLRFileStream(fileName)));
thatStream.fill();
List extraTokens = thatStream.getTokens();
extraTokens.remove(extraTokens.size() - 1); // remove EOF
CommonTokenStream thisStream = (CommonTokenStream)this.getTokenStream();
thisStream.getTokens().addAll(thisStream.index(), extraTokens);
} catch(Exception e) {
e.printStackTrace();
}
}
}
parse
: stats EOF -> stats
;
stats
: stat* -> ^(STATS stat*)
;
stat
: function_decl
| function_call
| include
| constant
| if_stat
;
if_stat
: If '(' expr ')' '{' stats '}' -> ^(If expr stats)
;
function_decl
: '[' id ';' stats ']' ';' -> ^(F_DECL id stats)
;
function_call
: Id exprs ';' -> ^(F_CALL Id exprs)
;
include
: Include Str ';' {insertTokens($Str.text);} -> /* omit statement from AST */
| Include id ';' {insertTokens(memory.get($id.text));} -> /* omit statement from AST */
;
constant
: Constant id expr ';' {putInMemory($id.text, $expr.text);} -> ^(Constant id expr)
;
exprs
: expr (',' expr)* -> ^(EXPRS expr+)
;
expr
: add_expr
;
add_expr
: mult_expr (('+' | '-')^ mult_expr)*
;
mult_expr
: atom (('*' | '/')^ atom)*
;
atom
: id
| Num
| Str
| '(' expr ')' -> expr
;
id
: Id
| Include
;
Comment : '!' ~('\r' | '\n')* {skip();};
Space : (' ' | '\t' | '\r' | '\n')+ {skip();};
If : 'if';
Include : 'Include';
Constant : 'Constant';
Id : ('a'..'z' | 'A'..'Z') ('a'..'z' | 'A'..'Z' | '0'..'9')+;
Str : '"' ~'"'* '"';
Num : '0'..'9'+ ('.' '0'..'9'+)?;
main.inf
Constant IMPORT "other.h";
[ Main;
print "This is Main!";
if (0) {
Include IMPORT;
print "This is OtherFunction!";
];
other.h
} ! end if
]; ! end Main
[ OtherFunction;
Main.java
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;
import org.antlr.stringtemplate.*;
public class Main {
public static void main(String[] args) throws Exception {
// create lexer & parser
Inform6Lexer lexer = new Inform6Lexer(new ANTLRFileStream("main.inf"));
Inform6Parser parser = new Inform6Parser(new CommonTokenStream(lexer));
// print the AST
DOTTreeGenerator gen = new DOTTreeGenerator();
StringTemplate st = gen.toDOT((CommonTree)parser.parse().getTree());
System.out.println(st);
}
}
To run the demo, do the following on the command line:
java -cp antlr-3.3.jar org.antlr.Tool Inform6.g
javac -cp antlr-3.3.jar *.java
java -cp .:antlr-3.3.jar Main
The output you'll see corresponds to the following AST: