Longest match lexer option to mimic the Unix tool Lex #1490

Open · wants to merge 2 commits into master
Conversation

@AiStudent commented Nov 17, 2024

Hi. I added a longest-match lexer for the LALR parser as a complement, which mimics the behavior of the Unix tool Lex and libraries such as flex. The behavior in question is: pick the longest match found; if there are multiple longest matches, precedence between the terminals follows the order in which they are defined.

This also means that the terminals are not sorted according to the priority rules defined in grammar.md:

  1. Highest priority first (priority is specified as: TERM.number: ...)
  2. Length of match (for regexps, the longest theoretical match is used)
  3. Length of literal / pattern definition
  4. Name

By not relying on the above rules for precedence, it is possible to use grammars written for Lex and its derivatives directly in Lark.

Below follows an example using longest_match:

from lark import Lark

parser = Lark(r"""
    start: AB -> ab
        | AC -> ac

    AB: /a(b)*/
    AC: /a(c)*/
""", parser='lalr', lexer='longest_match')

data = 'ac'
tree = parser.parse(data)
print(tree)

This is a grammar that neither the basic nor the contextual lexer can handle. Both will match AB, and the contextual lexer will not try AC, since AB is a possible token to parse from start. There is no way for the programmer to set terminal precedence in this scenario so that both "ab" and "ac" tokenize correctly. Using basic yields:

lark.exceptions.UnexpectedCharacters: No terminal matches 'c' in the current parser context, at line 1 col 2

ac
 ^
Expected one of: 
	* AB
	* AC

Using contextual yields:

lark.exceptions.UnexpectedCharacters: No terminal matches 'c' in the current parser context, at line 1 col 2

ac
 ^
Expected one of: 
	* <END-OF-FILE>

Previous tokens: Token('AB', 'a')

Regarding the implementation, it consists of a new lexer and a new scanner: LongestMatchLexer, which inherits from BasicLexer, and LongestMatchScanner, which attempts to match every terminal at the current position and yields the longest match. (Not optimal - but it's an option.)
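The strategy described above can be sketched in plain Python (a hedged illustration of the longest-match idea, not the PR's actual LongestMatchScanner; the terminal list mirrors the AB/AC example):

```python
import re

# Terminals in definition order, as in the example grammar.
TERMINALS = [
    ('AB', re.compile(r'a(b)*')),
    ('AC', re.compile(r'a(c)*')),
]

def longest_match(text, pos=0):
    # Try every terminal at the current position; keep only a
    # strictly longer match, so ties fall to the terminal defined first.
    best = None  # (name, matched_text)
    for name, pattern in TERMINALS:
        m = pattern.match(text, pos)
        if m and (best is None or len(m.group(0)) > len(best[1])):
            best = (name, m.group(0))
    return best

def tokenize(text):
    pos, tokens = 0, []
    while pos < len(text):
        tok = longest_match(text, pos)
        if tok is None:
            raise ValueError(f'No terminal matches {text[pos]!r} at {pos}')
        tokens.append(tok)
        pos += len(tok[1])
    return tokens
```

On 'ac', AB matches only 'a' while AC matches 'ac', so the longer AC match wins regardless of definition order; on a bare 'a' both match one character and AB wins by being defined first.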

It seems some other users have attempted to use longest matches (as I did when I first used Lark):
#370
#1463

Edit:
An issue with using earley instead is that it may not yield the desired result for ambiguous grammars, such as the example below (a simplification of a grammar that worked with a Lex derivative):

from lark import Lark

grammar = r"""
    start: INTEGER "." INTEGER -> eproj
        | DOUBLE -> double

    INTEGER: /\d+/
    DOUBLE: /\d+\.\d+/
"""

parser = Lark(grammar, start='start', parser='earley')
result = parser.parse("1.2")
print(result)

Which yields:

Tree('eproj', [Token('INTEGER', '1'), Token('INTEGER', '2')])
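For comparison, a longest-match tokenizer (a rough Python sketch of the behavior this PR proposes, not Lark code) would pick DOUBLE here, because it is the longest match at position 0 regardless of definition order:

```python
import re

# Hypothetical sketch: the terminals from the grammar above, in
# definition order (INTEGER before DOUBLE; DOT stands in for the
# anonymous "." terminal).
TERMINALS = [
    ('INTEGER', re.compile(r'\d+')),
    ('DOT', re.compile(r'\.')),
    ('DOUBLE', re.compile(r'\d+\.\d+')),
]

def longest_match(text, pos=0):
    # Keep only a strictly longer match, so ties fall to the
    # terminal defined first.
    best = None
    for name, pattern in TERMINALS:
        m = pattern.match(text, pos)
        if m and (best is None or len(m.group(0)) > len(best[1])):
            best = (name, m.group(0))
    return best

print(longest_match('1.2'))  # DOUBLE wins over INTEGER
```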

@MegaIng (Member) commented Nov 17, 2024

IMO, it is better to just write your grammar correctly, which I am pretty sure is always possible. If your grammar is not ambiguous, you get nice performance guarantees (i.e. O(n) with LALR). We cannot give those guarantees for ambiguous grammars, and this solution makes performance worse in all situations.
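For instance, one way to rewrite the AB/AC example so it stays unambiguous for the existing LALR lexer (a sketch of the kind of adjustment meant here, not taken from the thread) is to factor the shared prefix out of the competing terminals:

```
start: "a" BS -> ab
     | "a" CS -> ac

BS: /b+/
CS: /c+/
```

With the prefix in its own terminal, the lexer never has to choose between overlapping patterns; note this variant requires at least one b or c, since a bare "a" was ambiguous between the two alternatives in the original grammar anyway.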

Note that if you really don't want to adjust your grammar, you can just create a custom lexer and pass it to Lark - but I don't think it should be within the library itself.

@erezsh (Member) commented Nov 18, 2024

I share @MegaIng 's concerns regarding the performance of this lexing method.

If we were to include this behavior as an official lexer, I think it makes more sense to specify a subset of "competing" terminals, rather than the entire set of terminals. i.e. that only (AC, AB) will be evaluated for length, and not every terminal in the grammar.

Another thing that is maybe worth pointing out - regexes are technically capable of solving this particular example using a single match:

>>> re.match('a(b|c)*', 'ac')
<re.Match object; span=(0, 2), match='ac'>

If there was a way to manually (or even automatically?) merge these terminals, and later discern which one was matched, I believe that would address the performance issues, while still supporting this behavior.
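That suggestion can be sketched as follows (a hypothetical illustration, not an existing Lark feature): match once with a merged regex, then discern which terminal it was by re-checking the matched text against the original patterns in definition order. Here the merged pattern is written as an exact union of AB and AC, so every merged match maps back to exactly one terminal:

```python
import re

# Exact union of AB: /a(b)*/ and AC: /a(c)*/ in a single pattern
# (a variant of the a(b|c)* idea above, tightened so it matches
# only strings one of the two terminals accepts).
MERGED = re.compile(r'a(?:b+|c+)?')

# Original competing terminals, in definition order, anchored with \Z
# so the discernment step must match the whole token.
DISCERN = [
    ('AB', re.compile(r'a(b)*\Z')),
    ('AC', re.compile(r'a(c)*\Z')),
]

def match_merged(text, pos=0):
    # One regex match instead of one attempt per terminal...
    m = MERGED.match(text, pos)
    if not m:
        return None
    value = m.group(0)
    # ...then a cheap post-hoc check to recover which terminal matched.
    for name, pattern in DISCERN:
        if pattern.match(value):
            return name, value
    return None
```

On 'a', which both terminals accept, the discernment loop returns AB because it is defined first, preserving the definition-order precedence discussed above.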
