Class Reference: %SQL.Util.RowType (%SYS namespace)



Properties: 8 | Methods: 27


Properties: column, currentLexeme, debugMode, source, sourceLine, token, tokenPtr, transitionStack

Methods: %AddToSaveSet, %ClassIsLatestVersion, %ClassName, %ConstructClone, %DispatchClassMethod, %DispatchGetModified, %DispatchGetProperty, %DispatchMethod, %DispatchSetModified, %DispatchSetMultidimProperty, %DispatchSetProperty, %Extends, %GetParameter, %IsA, %IsModified, %New, %NormalizeObject, %ObjectModified, %OriginalNamespace, %PackageName, %RemoveFromSaveSet, %SerializeObject, %SetModified, %ValidateObject, GenerateProperties, consumeWhite, delimitedToken, lookAheadSkipWhite, lookAheadToken, macroDefs, nestedStringLiteral, nextToken, parse, parseArray, parseFile, s1, s1c, s2, s3, s4, s5, s6, s6c, s7, s8, simpleKeywordValue, terminatedKeywordValue, tokenizer


• property column as %RawString [ MultiDimensional ];
This is a temporary structure, perhaps. Normally the output from a parser is a parse tree; this is a parse tree of sorts.
column - number of columns
column(n) - column name
column(n,1) - column SQL type
column(n,2) - type
column(n,2,) - value of type parameter
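The subscript layout above can be modeled with a small mapping keyed by subscript tuples. This is a hypothetical Python sketch (the class itself is ObjectScript, and the helper names `column_count` and `sql_type` are illustrative, not part of the class):

```python
# Hypothetical model of the multidimensional `column` array described above.
# Tuple keys mimic ObjectScript subscripts: (), (n), (n,1), (n,2,p).
column = {}
column[()] = 2                 # column - number of columns
column[(1,)] = "Name"          # column(1) - column name
column[(1, 1)] = "VARCHAR"     # column(1,1) - column SQL type
column[(1, 2, 1)] = 50         # column(1,2,p) - value of type parameter p
column[(2,)] = "Age"
column[(2, 1)] = "INTEGER"

def column_count(col):
    """Return the number of columns (the root node's value)."""
    return col[()]

def sql_type(col, n):
    """Return 'TYPE(p1,...)' for column n, appending any type parameters."""
    params = sorted(k[2] for k in col if len(k) == 3 and k[0] == n and k[1] == 2)
    base = col[(n, 1)]
    if params:
        return "%s(%s)" % (base, ",".join(str(col[(n, 2, p)]) for p in params))
    return base
```

With the sample data, `sql_type(column, 1)` reconstructs `VARCHAR(50)` from the base type and its parameter subscripts.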
• property currentLexeme as %RawString;
The current lexeme value. It may be a composite of several tokens.
• property debugMode as %Integer [ InitialExpression = $$$CompileDebugMode ];
• property source as %Stream.Object;
The source is the text to be parsed. Files are bound to a file stream object, arrays are copied to a global stream object.
• property sourceLine as %Integer;
The current sourceLine number. Used for error reporting.
• property token as %RawString [ MultiDimensional ];
This is an array of tokens. It is managed entirely by tokenizer() and is accessed only through public accessor methods, including nextToken.
• property tokenPtr as %Integer;
The pointer to the current token in the source stream
• property transitionStack as %Integer;
The state transition stack. Each production pushes the next state onto this stack. If continuation is necessary, the production first pushes a return state onto the transition stack. States are simple integers, 1 being the initial state. Continuation states are the same integer followed by a "c".
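The push-return-state-then-next-state scheme described above can be sketched as a small driver loop. This is a hypothetical Python model of the mechanism, not the class's actual ObjectScript code:

```python
# Hypothetical sketch of a transition-stack driver: each production pushes
# its successor state; a production that needs continuation first pushes its
# return state (e.g. "1c") so it runs after the pushed state completes.
def run(handlers, initial="1"):
    stack = [initial]          # state 1 is the initial state
    trace = []
    while stack:
        state = stack.pop()
        trace.append(state)
        for nxt in handlers[state]():   # each handler returns states to push
            stack.append(nxt)
    return trace

# Example: state 1 pushes its continuation "1c" first, then state 2, so
# execution proceeds 1 -> 2 -> 1c.
handlers = {
    "1":  lambda: ["1c", "2"],
    "2":  lambda: [],
    "1c": lambda: [],
}
```

Because the return state is pushed before the next state, it is popped last, which is exactly the continuation ordering the property description implies.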


• classmethod GenerateProperties(pClass As %Dictionary.ClassDefinition, ByRef pColumn, ByRef pSequence As %Integer = 0)
Generates property definitions on pClass from the row type metadata contained in pColumn.
• method consumeWhite()
• method delimitedToken(pBegin, pEnd)
Extract a delimited token from the lexeme stream. If the current lexeme is the pBegin value then consume all tokens from the current position until the pEnd token is found, provided it is not nested and not inside a string.
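The nesting and string rules described above can be illustrated with a short scan over a token list. This is a hypothetical Python sketch of the technique, not the method's actual implementation:

```python
# Hypothetical sketch of delimited-token extraction: starting at a pBegin
# token, consume tokens until a pEnd that is neither nested nor inside a
# quoted string, and return everything in between plus the remaining tokens.
def delimited_token(tokens, p_begin="(", p_end=")", quote='"'):
    if not tokens or tokens[0] != p_begin:
        return None, tokens
    depth, in_string, out = 1, False, []
    i = 1
    while i < len(tokens):
        t = tokens[i]
        if t == quote:
            in_string = not in_string       # quotes toggle string mode
        elif not in_string:
            if t == p_begin:
                depth += 1                  # nested delimiter: not the end
            elif t == p_end:
                depth -= 1
                if depth == 0:
                    return "".join(out), tokens[i + 1:]
        out.append(t)
        i += 1
    return None, tokens  # unterminated: no matching pEnd found
```

A nested `(b)` and a quoted `")"` are both skipped over, so only the balanced, unquoted closer terminates the scan.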
• method lookAheadSkipWhite(pLexPtr)
This function looks ahead to the first non-white token.
• method lookAheadToken(pLexPtr)
• classmethod macroDefs()
• method nestedStringLiteral(pQuoteChar As %String = """")
• method nextToken(pTerminators, pStrip=0, pStringAsToken As %Integer = 0, pQuoteChars As %String = """")
This should really use regular expressions, matching the longest possible lexeme against the valid lookaheads. For now, just look for a terminator in the pTerminators string. If pStringAsToken is true then look for a leading quote character; if one is found, invoke nestedStringLiteral to consume all tokens up to the ending quote and return the nested string as a single token.
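The terminator scan and strings-as-tokens behavior can be sketched as follows. This is a hypothetical Python model; in particular it does not reproduce nestedStringLiteral's handling of nested or doubled quotes:

```python
# Hypothetical sketch of the terminator scan described above: return the
# text up to the first terminator character, treating a quoted run as a
# single token when strings-as-tokens is requested.
def next_token(src, terminators=" ,;", string_as_token=True, quote='"'):
    if not src:
        return "", ""
    if string_as_token and src[0] == quote:
        # Consume up to the closing quote and return the string as one
        # token (no nested-quote handling in this simplified sketch).
        end = src.find(quote, 1)
        if end != -1:
            return src[:end + 1], src[end + 1:]
    for i, ch in enumerate(src):
        if ch in terminators:
            return src[:i], src[i:]
    return src, ""
```

A terminator inside the quoted run does not split the token, which is the point of the pStringAsToken option.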
• method parse(ByRef pColumns As %RawString)
• classmethod parseArray(pSource As %RawString, ByRef pColumns As %RawString)
• classmethod parseFile(pFilename, ByRef pColumns As %RawString)
• method s1()
STATE: 1 - The initial state production line. LOOKAHEADS: rowtype | objecttype | rowtype_body
• method s1c()
STATE: 1c - continuation of the initial state
• method s2()
STATE: 2 - ROW LOOKAHEADS: rowtype_body
• method s3()
STATE: 3 - OBJECT LOOKAHEADS: objecttype_body
• method s4()
STATE: 4 - rowtype_body ::= LOOKAHEADS: sql_identifier | WHITESPACE | COMMA | right_paren
• method s5()
STATE: 5 - objecttype_body ::= LOOKAHEADS: object_identifier | WHITESPACE | COMMA | right_paren
• method s6()
STATE: 6 - field_definition ::= field_name data_type LOOKAHEADS: sql_identifier ( ::= ::= | )
• method s6c()
• method s7()
STATE: 7 - property_definition ::= LOOKAHEADS:
• method s8()
STATE: 8 - datatype ::= | | | | LOOKAHEADS:
• method simpleKeywordValue(pDefault="")
• method terminatedKeywordValue(pTerminator=";", pIgnoreString=0)
This function scans the source until the token pTerminator is encountered in the stream. If pIgnoreString is TRUE then pTerminator can be found anywhere; otherwise pTerminator is only recognized if it is not in a quoted string.
• method tokenizer()
• method transition()