deepfold.runner package¶
Submodules¶
deepfold.runner.config_utils module¶
deepfold.runner.interpreter module¶
deepfold.runner.main module¶
deepfold.runner.parser module¶
- class deepfold.runner.parser.Lexer(debug=False, **kwargs)[source]¶
Bases: object
- states = (('iterable', 'inclusive'), ('string', 'exclusive'), ('escaped', 'exclusive'), ('graph', 'inclusive'), ('stoi', 'exclusive'))¶
- t_DECIMAL_POINT = '\\x2E'¶
- t_DIGITS = '[\\x30-\\x39]+'¶
- t_E = '[\\x45\\x65]'¶
- t_FALSE = '\\x66\\x61\\x6c\\x73\\x65'¶
- t_MINUS = '\\x2D'¶
- t_NAME_SEPARATOR = '\\x3A'¶
- t_NULL = '\\x6e\\x75\\x6c\\x6c'¶
- t_PLUS = '\\x2B'¶
- t_SEMICOLON = '\\x3B'¶
- t_TRUE = '\\x74\\x72\\x75\\x65'¶
- t_VALUE_SEPARATOR = '\\x2C'¶
- t_ZERO = '\\x30'¶
- t_escaped_ignore = ''¶
- t_graph_VALUE_SEPARATOR = ','¶
- t_graph_ignore = ' \t\r'¶
- t_ignore = ' \t\r'¶
- t_stoi_ignore = ' \t\r'¶
- t_string_ignore = ''¶
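The token regexes above spell their characters as hexadecimal escapes. Decoding a few of them in plain Python (no lexer required) shows they are the ordinary punctuation and keywords of a JSON-style grammar:

>>> '\x2E'    # t_DECIMAL_POINT
'.'
>>> '\x3A'    # t_NAME_SEPARATOR
':'
>>> '\x2C'    # t_VALUE_SEPARATOR
','
>>> '\x66\x61\x6c\x73\x65'    # t_FALSE
'false'
>>> '\x74\x72\x75\x65'    # t_TRUE
'true'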
- tokenize(data, *args, **kwargs)[source]¶
Invoke the lexer on an input string and return the list of tokens.
This is relatively inefficient and should only be used for testing/debugging as it slurps up all tokens into one list.
- Parameters:
data – The input to be tokenized.
- Returns:
A list of LexTokens
- tokens = ['BEGIN_ARRAY', 'BEGIN_OBJECT', 'END_ARRAY', 'END_OBJECT', 'NAME_SEPARATOR', 'VALUE_SEPARATOR', 'QUOTATION_MARK', 'FALSE', 'TRUE', 'NULL', 'DECIMAL_POINT', 'DIGITS', 'E', 'MINUS', 'PLUS', 'ZERO', 'UNESCAPED', 'ESCAPE', 'REVERSE_SOLIDUS', 'SOLIDUS', 'BACKSPACE_CHAR', 'FORM_FEED_CHAR', 'LINE_FEED_CHAR', 'CARRIAGE_RETURN_CHAR', 'TAB_CHAR', 'UNICODE_HEX', 'BEGIN_EDGE_LIST', 'END_EDGE_LIST', 'EDGE_SEP', 'STOI_SEP', 'NUM_SYM_SEP', 'ID', 'SEMICOLON', 'NEWLINE', 'MODEL', 'ENTITY', 'PREDICT', 'USING', 'STOI', 'PAIR', 'SAMPLE', 'GRAPH', 'LET', 'IN']¶
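A minimal usage sketch for the lexer. Only the constructor and tokenize() signatures above are documented; the input statement is a hypothetical example of the PREDICT syntax, inferred from the token list:

from deepfold.runner.parser import Lexer

lexer = Lexer(debug=False)
# tokenize() slurps every LexToken into one list; per the docstring,
# prefer it for testing/debugging rather than production use.
for tok in lexer.tokenize('predict "target" stoi s using m'):
    print(tok.type, tok.value)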
- class deepfold.runner.parser.Parser(lexer=None, debug=False, **kwargs)[source]¶
Bases: object
- p_char(p)[source]¶
char : UNESCAPED | ESCAPE QUOTATION_MARK | ESCAPE REVERSE_SOLIDUS | ESCAPE SOLIDUS | ESCAPE BACKSPACE_CHAR | ESCAPE FORM_FEED_CHAR | ESCAPE LINE_FEED_CHAR | ESCAPE CARRIAGE_RETURN_CHAR | ESCAPE TAB_CHAR
- p_predict(p)[source]¶
predict : PREDICT string STOI stoi_list USING id | PREDICT string STOI stoi_list USING id pred_options | PREDICT id STOI stoi_list USING id | PREDICT id STOI stoi_list USING id pred_options
- p_statements(p)[source]¶
statements : statement | statements NEWLINE statement | statements SEMICOLON statement
- tokens = ['BEGIN_ARRAY', 'BEGIN_OBJECT', 'END_ARRAY', 'END_OBJECT', 'NAME_SEPARATOR', 'VALUE_SEPARATOR', 'QUOTATION_MARK', 'FALSE', 'TRUE', 'NULL', 'DECIMAL_POINT', 'DIGITS', 'E', 'MINUS', 'PLUS', 'ZERO', 'UNESCAPED', 'ESCAPE', 'REVERSE_SOLIDUS', 'SOLIDUS', 'BACKSPACE_CHAR', 'FORM_FEED_CHAR', 'LINE_FEED_CHAR', 'CARRIAGE_RETURN_CHAR', 'TAB_CHAR', 'UNICODE_HEX', 'BEGIN_EDGE_LIST', 'END_EDGE_LIST', 'EDGE_SEP', 'STOI_SEP', 'NUM_SYM_SEP', 'ID', 'SEMICOLON', 'NEWLINE', 'MODEL', 'ENTITY', 'PREDICT', 'USING', 'STOI', 'PAIR', 'SAMPLE', 'GRAPH', 'LET', 'IN']¶
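A sketch of driving the parser. The Parser constructor signature is documented above; the parse() entry point is an assumption (PLY's yacc-generated parsers conventionally expose one, but it is not documented on this page) and the input statement is hypothetical:

from deepfold.runner.parser import Lexer, Parser

parser = Parser(lexer=Lexer(), debug=False)
# parse() is assumed, not documented here; statements may be chained
# with NEWLINE or SEMICOLON per the statements rule above.
result = parser.parse('predict "target" stoi s using m')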