Markup Lexer
inyoka.markup.lexer
Tokenizes our wiki markup. The lexer is implemented as a scanner with an internal stack of states, inspired by Pygments.
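The scanner-with-a-stack approach can be pictured with a minimal sketch. This is not Inyoka's actual implementation; the rule table, token names, and `lex` function below are hypothetical and only illustrate the mechanism: the rules of the state on top of the stack are tried in order, and a matching rule may emit a token and push or pop states.

```python
import re

def lex(text, states):
    """Tiny stack-based scanner: try each rule of the current (topmost)
    state in order; a matching rule may emit a token and push or pop a
    state. Unmatched characters fall through as plain text."""
    stack = ['root']
    pos = 0
    while pos < len(text):
        for pattern, token, action in states[stack[-1]]:
            m = re.compile(pattern).match(text, pos)
            if m:
                if token:
                    yield token, m.group()
                if action == 'pop':
                    stack.pop()
                elif action:
                    stack.append(action)
                pos = m.end()
                break
        else:
            yield 'text', text[pos]
            pos += 1

# hypothetical rules for '''bold''' markup
states = {
    'root': [(r"'''", 'strong_begin', 'strong')],
    'strong': [(r"'''", 'strong_end', 'pop')],
}
tokens = list(lex("'''bold'''", states))
```

Entering the `strong` state on the opening marker and popping it on the closing marker is what lets nested constructs be matched with flat regular expressions.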
- copyright:
2007-2024 by the Inyoka Team, see AUTHORS for more details.
- license:
BSD, see LICENSE for more details.
- class inyoka.markup.lexer.Lexer
- tokenize(string)
Resolve quotes and tokenize them quote by quote, each in an isolated environment.
- inyoka.markup.lexer.astuple(token)
Yield the groups together as one tuple.
- inyoka.markup.lexer.bygroups(*args)
Create a callback that yields one token per regex capture group.
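A bygroups-style helper pairs each capture group of a match with its own token type. The following is a hedged sketch of the general technique (the `link_scheme`/`link_target` token names and the pattern are invented for illustration, not taken from Inyoka):

```python
import re

def bygroups(*tokens):
    """Return a callback that, given a regex match, yields one
    (token, text) pair per capture group, skipping empty groups."""
    def callback(match):
        for token, value in zip(tokens, match.groups()):
            if value:
                yield token, value
    return callback

# hypothetical rule: emit two tokens from one pattern
link = re.compile(r'\[(\w+):([^\]]+)\]')
cb = bygroups('link_scheme', 'link_target')
pairs = list(cb(link.match('[wiki:Startseite]')))
```

This lets a single regex rule emit several differently typed tokens without splitting it into multiple rules.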
- inyoka.markup.lexer.escape(text)
Escape the given text.
- inyoka.markup.lexer.fallback()
Pop the current state from the stack without emitting a token.
- class inyoka.markup.lexer.include
Tells the lexer to include tokens from another set.
- inyoka.markup.lexer.iter_rules(x)
Yield the rules of x, recursively expanding include entries.
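Resolving include entries amounts to a recursive flattening step. A simplified sketch of that idea, with made-up ruleset names and string placeholders standing in for real rules:

```python
class include(str):
    """Marker entry: pull in the rules of another named ruleset."""

def iter_rules(name, rulesets):
    """Yield the rules of the named ruleset, recursively expanding
    include markers into the rules they refer to."""
    for entry in rulesets[name]:
        if isinstance(entry, include):
            yield from iter_rules(entry, rulesets)
        else:
            yield entry

# hypothetical rulesets sharing the 'inline' rules
rulesets = {
    'inline': ['bold_rule', 'italic_rule'],
    'root': ['heading_rule', include('inline')],
}
flat = list(iter_rules('root', rulesets))
```

Sharing common rules via include avoids duplicating them in every state that needs them.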
- class inyoka.markup.lexer.rule(regexp, token=None, enter=None, silententer=None, switch=None, leave=0)
This represents a parsing rule.
- enter
- leave
- match
- silententer
- switch
- token
- class inyoka.markup.lexer.ruleset(*args)
A ruleset holds a collection of rules. If a ruleset is still on the stack at the end of parsing, a name_end token without a value is emitted. If you don't want this behavior, use the helperset.
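The end-of-input behavior can be sketched like this: once the input is exhausted, every state still on the stack gets a value-less closing token, innermost first. This is a simplified illustration with hypothetical state names, not the library's code:

```python
def close_open_states(stack):
    """Emit a value-less '<name>_end' token for every state still on
    the stack once the input is exhausted, innermost first."""
    while len(stack) > 1:          # the 'root' state stays put
        yield stack.pop() + '_end', None

stack = ['root', 'strong', 'emphasized']
tokens = list(close_open_states(stack))
```

Closing dangling states this way guarantees that every begin token in the output stream has a matching end token, even for unterminated markup.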
- inyoka.markup.lexer.switch(state)
Switch to another state when this rule matches.
- inyoka.markup.lexer.tokenize_block(string, _escape_hint=None)
Tokenize a single block. The normal tokenize function uses it to lex quotes and regular markup in isolation, so that breakage in one block does not affect outer areas.
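The isolation idea can be sketched as follows: split the input into blocks and lex each one independently, so a failure is confined to the block it occurs in. Everything here is hypothetical (the `_lex_block` failure condition, the `error_text` token, the blank-line splitting), chosen only to show the containment pattern:

```python
def _lex_block(block):
    """Hypothetical per-block lexer: rejects unbalanced ''' markers."""
    if block.count("'''") % 2:
        raise ValueError('unterminated markup')
    return [('text', block)]

def tokenize(text):
    """Lex each paragraph-separated block in isolation so breakage
    in one block does not affect the others."""
    for block in text.split('\n\n'):
        try:
            yield from _lex_block(block)
        except ValueError:
            yield 'error_text', block   # degrade only this block

tokens = list(tokenize("good\n\n'''broken\n\nalso good"))
```

The surrounding blocks still tokenize normally even though the middle block is malformed.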