Signed-off-by: Emil Velikov <[email protected]>
Acked-by: Matt Turner <[email protected]>
Acked-by: Jose Fonseca <[email protected]>
Signed-off-by: Chris Forbes <[email protected]>
There were two problems with the way this script used sed on OS X:

1. The OS X sed doesn't interpret "\r" in a replacement list as a
   carriage-return character, (instead it was inserting a literal
   'r' character).

   We fix this by putting an actual ^M character into the source of
   the script, (rather than a two-character escape sequence hoping
   for sed to do the right thing).

2. When generating the test files with LF-CR ("\n\r") newlines, the
   OS X sed was adding an undesired final newline ("\n") at the end
   of the file.

   We avoid this by first using sed to add the ^M before the
   newlines, then using tr to swap the \r and \n characters. This
   way, sed never sees any lines ending with anything but \n, so it
   doesn't get confused and doesn't add any bogus extra newlines.
Tested-by: Vinson Lee <[email protected]>
Vinson's testing confirmed that this patch fixes FreeBSD as well.
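
For illustration, the pipeline described above looks roughly like this, (a
sketch with invented file names; in the real script the carriage return in
the sed expression is a literal ^M character, since OS X sed doesn't
interpret "\r"):

    # Step 1: append a CR to every line. Lines still end in \n here, so
    # sed never sees a line ending in anything but \n.
    # (^M stands for a literal carriage-return character in the script.)
    sed -e 's/$/^M/' < input.c > tmp      # lines now end with \r\n
    # Step 2: swap \r and \n throughout, yielding \n\r line endings
    # with no bogus extra newline at the end of the file.
    tr '\r\n' '\n\r' < tmp > output.c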
I noticed that with /bin/sh on Mac OS X, "echo -n" does not work as
desired, (it actually prints "-n" rather than suppressing the final
newline). There is a /bin/echo that could be used (it actually works)
instead of the builtin echo.
But I decided it's more robust to just use printf rather than
hardcoding /bin/echo into the script.
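
A minimal before-and-after sketch, (the string is just a placeholder):

    # The builtin echo in some shells treats "-n" as text, not an option:
    echo -n "foo"        # may print "-n foo" plus a newline
    # printf is specified by POSIX and behaves consistently everywhere:
    printf "%s" "foo"    # prints "foo" with no trailing newline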
With two tests both numbered 118, there was a confusing off-by-two difference
between the last test number and the total number of tests (as reported by
glcpp-test).
With this rename, there's only an off-by-one difference left, (which is easy
to understand given the zero-based test numbering).
Reviewed-by: Ian Romanick <[email protected]>
Here is some additional stress testing of nested macros where the expansion
of macros involves commas, (and whether those commas are interpreted as
argument separators or not in subsequent function-like macro calls).
Credit to the GCC documentation that directed my attention toward this issue:
https://gcc.gnu.org/onlinedocs/gcc-3.2/cpp/Argument-Prescan.html
Fixing the bug required only removing code from glcpp. When first testing the
details of expansions involving commas, I had come to the mistaken conclusion
that an expanded comma should never be treated as an argument separator, (so
had introduced the rather ugly COMMA_FINAL token to represent this).
In fact, an expanded comma should be treated as a separator, (as tested here),
and this treatment can be avoided by judicious use of parentheses (as also
tested here).
With this simple removal of the COMMA_FINAL token, the behavior of glcpp
matches that of gcc's preprocessor for all of these hairy cases.
Reviewed-by: Ian Romanick <[email protected]>
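
A sketch in the spirit of the GCC documentation's example, (hypothetical
input; glcpp here is the standalone preprocessor binary, which reads from
stdin):

    cat > prescan.c <<'EOF'
    #define inner(x) (1 + (x))
    #define outer(x) inner(x)
    #define UNPROTECTED a,b
    #define PROTECTED (a,b)
    outer(PROTECTED)
    outer(UNPROTECTED)
    EOF
    glcpp < prescan.c
    # outer(PROTECTED) expands to (1 + ((a,b))): the parentheses protect
    # the comma. outer(UNPROTECTED) rescans to inner(a,b), so the expanded
    # comma separates two arguments and inner is invoked incorrectly.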
Beyond just listing this in the TESTS variable in Makefile.am, only minor
changes were needed to make this work. The primary issue is that the build
system runs the test script from a different directory than the script
itself. So we have to use the $srcdir variable to find the test input files.
Using $srcdir in this way also ensures that this test works when using an
out-of-tree build.
Reviewed-by: Ian Romanick <[email protected]>
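
The pattern is roughly the following, (a sketch; variable and path names are
illustrative):

    # automake exports $srcdir when running TESTS; default it so the
    # script can still be run by hand from its own directory.
    : "${srcdir:=.}"
    for test in "$srcdir"/tests/*.c; do
        # inputs come from the source tree, results go to the build tree
        glcpp < "$test" > "$(basename "$test").out"
    done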
The (optional) test-specific command-line arguments to be passed to glcpp are
embedded within the source files of some tests, and glcpp-test uses grep to
extract them.
Of course, grep is line-based and looks for the native line-separator to
determine line boundaries. So, for files using non-native line separators,
grep was getting quite confused and passing bogus arguments to glcpp.
Fix this by canonicalizing the line separators in the source file prior to
using grep.
With this commit, the glcpp-test-cr-lf tests pass entirely:

    \r:   143/143 tests pass
    \r\n: 143/143 tests pass
    \n\r: 143/143 tests pass
Reviewed-by: Ian Romanick <[email protected]>
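
One way to do that canonicalization, (a sketch; treat the marker name and
details as illustrative):

    # Turn every \r into \n before grep sees the file. This handles \r,
    # \r\n, and \n\r files alike; any extra blank lines are harmless here.
    args=$(tr '\r' '\n' < "$test" | grep 'glcpp-args:' | sed -e 's/.*glcpp-args: *//')
    glcpp $args < "$test"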
The GLSL specification says that either carriage-return, line-feed, or both
together can be used to terminate lines. Further, it says that when used
together, the pair of terminators shall be interpreted as a single line.
This final requirement has not been respected by glcpp up until now, (it has
been emitting two newlines for every CR+LF pair).
Here, we fix the lexer by using a regular expression for NEWLINE that eats
up both "\r\n" (or even "\n\r") if possible before also considering a single
'\n' or a single '\r' as a line terminator.
Before this commit, the test results are as follows:

    \r:   135/143 tests pass
    \r\n:   4/143 tests pass
    \n\r:   4/143 tests pass

After this commit, the test results are as follows:

    \r:   135/143 tests pass
    \r\n: 140/143 tests pass
    \n\r: 139/143 tests pass
So, obviously, a dramatic improvement.
Reviewed-by: Ian Romanick <[email protected]>
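
The effect is easy to see with __LINE__, (assuming the standalone glcpp
binary):

    # A \r\n pair must count as a single terminator, so __LINE__ on the
    # second line should expand to 2. Before this fix it expanded to 3.
    printf 'first line\r\n__LINE__\r\n' | glcpp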
The GLSL specification has a very broad definition of what is a
newline. Namely, it can be the carriage-return character, '\r', the newline
character, '\n', or any combination of the two, (though in combination, the
two are treated as a single newline).
Here, we add a new test-runner, glcpp-test-cr-lf, that, for each possible
line-termination combination, runs through the existing test suite with all
source files modified to use those line-termination characters. Instead of
using the .expected files for this, this script assumes that the regular test
suite has been run already and expects the output to match the .out
files. This avoids getting 4 test failures for any one bug, and instead will
hopefully only report bugs actually related to the line-termination
characters.
The new testing is not yet integrated into "make check". For that, some
munging of the testdir option will be necessary, (to support "make check" with
out-of-tree builds). For now, the scripts can just be run directly by hand.
Reviewed-by: Ian Romanick <[email protected]>
Prior to this commit, the following snippet would trigger an error in glcpp:

    #define FOO defined BAR
    #if FOO
    #endif
The problem was that support for the "defined" operator was implemented within
the grammar, (where the parser was parsing the tokens of the condition
itself). But what is required is to interpret the "defined" operator that
results after macro expansion is performed.
I could not find any fix for this case by modifying the grammar alone. The
difficulty is that outside of the grammar we already have a recursive function
that performs macro expansion (_glcpp_parser_expand_token_list) and that
function itself must be augmented to be made aware of the semantics of the
"defined" operator.
The reason we can't simply handle "defined" outside of the recursive expansion
function is that not only must we scan for any "defined" operators in the
original condition (before any macro expansion occurs); but at each level of
the recursive expansion, we must again scan the list of tokens resulting from
expansion and handle "defined" before entering the next level of recursion to
further expand macros.
And of course, all of this is context dependent. The evaluation of "defined"
operators must only happen when we are handling preprocessor conditionals,
(#if and #elif) and not when performing any other expansion, (such as in the
main body).
To implement this, we add a new "mode" parameter to all of the expansion
functions to specify whether resulting DEFINED tokens should be evaluated or
ignored.
One side benefit of this change is that an ugly wart in the grammar is
removed. We previously had "conditional_token" and "conditional_tokens"
productions that were basically copies of "pp_token" and "pp_tokens" but with
added productions for the various forms of DEFINED operators. With the new
code here, those ugly copy-and-paste productions are eliminated from the
grammar.
A new "make check" test is added to stress-test the code here.
This commit fixes the following Khronos GLES3 CTS tests:

    conditional_inclusion.basic_2_vertex
    conditional_inclusion.basic_2_fragment
Reviewed-by: Ian Romanick <[email protected]>
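
The snippet from the commit message, extended into a runnable check,
(hypothetical file name):

    cat > defined-macro.c <<'EOF'
    #define FOO defined BAR
    #if FOO
    yes
    #endif
    #define BAR 1
    #if FOO
    yes again
    #endif
    EOF
    glcpp < defined-macro.c
    # Neither #if is an error. The first is false (BAR is not yet
    # defined), so only "yes again" survives preprocessing.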
Previously, we were passing these through, just like any other pragma. But the
downstream compiler was tripping up on them. It seems easier to swallow these
in the preprocessor and not pass them on at all rather than fixing the
downstream compiler.
This fixes the following Khronos GLES3 CTS tests:

    preprocessor.pragmas.pragma_vertex
    preprocessor.pragmas.pragma_fragment
Reviewed-by: Ian Romanick <[email protected]>
Previously, the #pragma directive was swallowing an entire line, (including
the final newline). At that time it was appropriate for it to increment the
line count.
More recently, our handling of #pragma changed to not include the newline. But
the code to increment yylineno stuck around. This was causing __LINE__ to be
increased by one more than desired for every #pragma.
Remove the bogus, extra increment, and add a test for this case.
Reviewed-by: Ian Romanick <[email protected]>
This new "make check" test stresses the support from the last two commits,
(to ensure that '#' is correctly interpreted as the null directive,
regardless of any whitespace or comments on the same line).
Reviewed-by: Ian Romanick <[email protected]>
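
The sort of input being stress-tested, (a sketch):

    cat > null-directive.c <<'EOF'
    #
        #
    # /* a comment after the null directive */
    /* a comment before */ #
    EOF
    glcpp < null-directive.c
    # All four lines are null directives; none of them is an error.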
This simply tests the previous commit, (that #define followed by a comment
will still generate the expected "#define without macro name" error message).
Reviewed-by: Ian Romanick <[email protected]>
This ensures that the previous commit indeed generates the expected error
message when a "#define" directive is not followed by anything except for a
newline.
Reviewed-by: Ian Romanick <[email protected]>
Previously, glcpp would emit an error like this if <EOF> happened to occur
immediately after the "#define", but in general would just get confused,
(leading to un-helpful error messages).
To fix things to generate a clean error message, we do a few things:

1. Don't require horizontal whitespace immediately after #define.
2. Add a production for the error case, (DEFINE_TOKEN followed
   immediately by a NEWLINE token).
3. Make the lexer reset to the <INITIAL> state after every NEWLINE.
This 3rd point prevents the lexer from getting so confused and generating
further spurious errors in the file because it was stuck in the <DEFINE> start
condition.
We also drop the similar error message from the <EOF> rule since the
newly-added rule will have already printed the error message.
Reviewed-by: Ian Romanick <[email protected]>
This test is written to exercise a bug which I recently wrote, (but
fortunately caught and fixed before ever committing it).
For the curious:
The bug happened when the NEWLINE_CATCHUP code didn't actually return the
NEWLINE token (due to the skipping). This resulted in the lexer continuing
on through all the subsequent rules while still in the NEWLINE_CATCHUP start
condition, (which then triggered the internal-compiler-error catch-all
rule).
What is intended is for the return of the NEWLINE token to start a new
iteration of the lexer loop, at which time the NEWLINE_CATCHUP-handling code
will reset from the <NEWLINE_CATCHUP> to the <INITIAL> start condition.
Reviewed-by: Jordan Justen <[email protected]>
At one point while rewriting the lexing rule for pre-processing numbers, I
made it a bit too aggressive and within a replacement list sucked up a
parameter name that appeared immediately after a period. This caused the
parameter name to be unreplaced when the macro was expanded.
It was in some piglit tests that I originally found this issue. Here, I'm
adding a test to "make check" to ensure that this behavior remains correct.
Reviewed-by: Jordan Justen <[email protected]>
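
The behavior being guarded, roughly, (hypothetical test input):

    cat > member.c <<'EOF'
    #define GET(s,field) s.field
    GET(foo,bar)
    EOF
    glcpp < member.c
    # Must print "foo.bar": "field" is a macro parameter and must be
    # replaced even though it appears immediately after a period.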
These operators aren't defined for preprocessor expressions, so we never
implemented them. This led them to be misinterpreted as strings of unary
'+' or '-' operators.
In fact, what is actually desired is to generate an error if these operators
appear in any preprocessor condition.
So this commit looks like it is strictly adding support for these
operators. And it is supporting them as far as passing them through to the
subsequent compiler, (which was already happening anyway).
What's less apparent in the commit is that with these tokens now being lexed,
but with no change to the grammar for preprocessor expressions, these
operators will now trigger errors there.
A new "make check" test is added to verify the desired behavior.
This commit fixes the following Khronos GLES3 CTS tests:

    invalid_op_1_vertex
    invalid_op_1_fragment
    invalid_op_2_vertex
    invalid_op_2_fragment
Reviewed-by: Jordan Justen <[email protected]>
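
For instance, (a sketch of both sides of the behavior):

    cat > incr.c <<'EOF'
    int i;
    i++;
    --i;
    #if 1 ++ 2
    #endif
    EOF
    glcpp < incr.c
    # "i++" and "--i" pass through to the subsequent compiler unchanged,
    # but the "++" inside the #if condition now triggers an error.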
This will emit an error for something like:

    #define FOO(x,x) ...
Obviously, it's not a legal thing to do, and it's easy to check.
Add a "make check" test for this as well.
This fixes the following Khronos GLES3 CTS tests:

    invalid_function_definitions.unique_param_name_vertex
    invalid_function_definitions.unique_param_name_fragment
Reviewed-by: Jordan Justen <[email protected]>
It's legal (though highly bizarre) for a pre-processor directive to look like
this:

    # /* why? */ define FOO bar
This behavior comes about since the specification defines separate logical
phases in a precise order, and comment-removal occurs in a phase before the
identification of directives.
Our implementation does not use an actual separate phase for comment removal,
so some extra care is necessary to correctly parse this. What we want is for
'#' to introduce a directive iff it is the first token on a line, (ignoring
whitespace and comments). Previously, we had a lexical rule that worked only
for whitespace (not comments) with the following regular expression to find a
directive-introducing '#' at the beginning of a line:
    HASH    ^{HSPACE}*#{HSPACE}*

In this commit, we switch to instead use a simple literal match of '#' to
return a HASH_TOKEN token, and add a new <HASH> start condition that applies
whenever the HASH_TOKEN is the first non-space token of a line. This requires
a new bit of state: first_non_space_token_this_line.
This approach has a couple of implications on the glcpp parser:
1. The parser now sees two separate tokens, (such as HASH_TOKEN and
HASH_DEFINE) where it previously saw one token (HASH_DEFINE) for
the sequence "#define". This is a straightforward change throughout
the grammar.
2. The parser may now see a SPACE token before the HASH_TOKEN token of
a directive. Previously the lexical regular expression for {HASH}
would eat up the space and there would be no SPACE token.
This second implication is a bit of a nuisance for the parser. It causes a
SPACE token to appear in a production of the grammar with the following two
definitions of a control_line:

    control_line
    SPACE control_line

This is really ugly, since normally a space would simply be a token
separator, so it wouldn't appear in the tokens of a production. This leads to
a further problem with interleaved spaces and comments:

    /* ... */ /* ... */ #define /* ..*/

For this, we must not return several consecutive SPACE tokens, or else we
would need an arbitrary number of new productions:

    SPACE SPACE control_line
    SPACE SPACE SPACE control_line
    ad nauseam
To avoid this problem, in this commit we also change the lexer to emit only a
single SPACE token for any series of consecutive spaces, (whether from actual
whitespace or comments). For this compression, we add a new bit of parser
state: last_token_was_space. And we also update the expected results of all
necessary test cases for the new compression of space tokens.
Fortunately, the compression of spaces should not lead to any semantic changes
in terms of what the eventual GLSL compiler sees.
So there's a lot happening in this commit, (particularly for such a tiny
feature). But fortunately, the lexer itself is looking cleaner than ever. The
only ugly bit is all the state updating, but it is at least isolated to a
single shared function.
Of course, a new "make check" test is added for the new feature, (directives
with comments and whitespace interleaved in many combinations).
And this commit fixes the following Khronos GLES3 CTS tests:

    function_definition_with_comments_vertex
    function_definition_with_comments_fragment
Reviewed-by: Jordan Justen <[email protected]>
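
The bizarre-but-legal case from the commit message, as a runnable check,
(hypothetical file name):

    cat > hash-comment.c <<'EOF'
    # /* why? */ define FOO bar
    FOO
    EOF
    glcpp < hash-comment.c
    # The comment does not stop '#' from introducing a directive, so FOO
    # is defined and the second line expands to "bar".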
For the first line we were initializing the column to 1, but for all
subsequent lines we were initializing the column to 0. The column number is
advanced for each token read before any error message is printed. So the 0
value is the correct initialization, (so that the first column is reported as
column 1).
With this extremely minor change, many of the .expected files are updated such
that error messages for the first line now have the correct column number in
them.
Reviewed-by: Jordan Justen <[email protected]>
It makes more sense to print the directive name with the preceding '#'.
Reviewed-by: Jordan Justen <[email protected]>
The glcpp parser is line-based, so it needs to see a NEWLINE token at the end
of each line. This poses a problem for files that end without a final newline.
Previously, the lexer for glcpp punted in this case by unconditionally
returning a NEWLINE token at end-of-file, (causing most files to have an extra
blank line at the end). Here, we refine this by lexing end-of-file as a
NEWLINE token only if the immediately preceding token was not a NEWLINE token.
The patch is a minor change that only looks huge for two reasons:

1. Almost all glcpp test result ".expected" files are updated to drop
   the extra newline.
2. All return statements from the lexer are adjusted to use a new
   RETURN_TOKEN macro that tracks the last-token-was-a-newline state.
Reviewed-by: Jordan Justen <[email protected]>
The glcpp implementation has long had code to support a file that ends without
a final newline. But we didn't have a "make check" test for this.
Additionally, the <EOF> action was restricted only to the <INITIAL> state so
it would fail to get invoked if the EOF was encountered in the <COMMENT> or
the <DEFINE> case. Neither of these was a bug, per se, since EOF in either
of these cases is an error anyway, (either "unterminated comment" or
"missing macro name for #define").
But with the new explicit support for these cases, we now generate clean
error messages in these cases, (rather than "unexpected $end" from before).
Reviewed-by: Jordan Justen <[email protected]>
The recent addition of an error for "#define followed by a non-identifier"
was a bit too aggressive, since it used a regular expression in the lexer to
flag any character that's not legal as the first character of an identifier.
But we need to allow comments to appear here, (since we aren't removing
comments in a preliminary pass). So we refine the error here to only flag
characters that could not be an identifier, nor a comment, nor whitespace.
We also augment the existing comment support to be active in the <DEFINE>
state as well.
Reviewed-by: Jordan Justen <[email protected]>
Previously, if the preprocessor encountered a #define with a non-identifier,
such as:

    #define 123 456
The lexer had no explicit rules to match non-identifiers in the <DEFINE> start
state. Because of this, flex's default rule was being invoked, (printing
characters to stdout), and all text was being discarded by the compiler until
the next identifier. As one can imagine, this led to all sorts of interesting
and surprising results.
Fix this by adding an explicit rule complementing the existing
identifier-based rules that should catch all non-identifiers after #define and
reliably give a well-formatted error message.
A new test is added to "make check" to ensure this bug stays fixed.
This commit also fixes the following Khronos GLES3 CTS test:

    define_non_identifier_vertex

(The "fragment" variant was passing earlier only because the preprocessor was
behaving so randomly and causing the compilation to fail. It's lucky, in fact,
that the "vertex" version successfully compiled so we could find and fix this
bug.)
Reviewed-by: Jordan Justen <[email protected]>
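
The failing input, as a quick check, (assuming the standalone glcpp binary):

    printf '#define 123 456\n' | glcpp
    # Expected: a single well-formatted error message, rather than flex's
    # default rule echoing stray characters to stdout.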
This test simply has one of each directive, all of which are preceded by a
single space character.
The glcpp lexer and parser use the space_tokens state bit to avoid emitting
tokens for spaces while parsing a directive. Previously, this bit was only
being set again by the first non-space token following a directive.
This led to a bug where a space, (or a comment that should emit a space),
immediately following a directive, (optionally separated by newlines), would
be omitted from the output.
Here we fix the bug by also setting the space_tokens bit whenever we lex a
newline in the standard start conditions.
The lexer was insisting that there be at least one character after "#pragma"
and before the end of the line. This caused an error for a line consisting
only of "#pragma", which violates at least the following sentence from the
GLSL ES Specification 3.00.4:

    The scope as well as the effect of the optimize and debug pragmas is
    implementation-dependent except that their use must not generate an
    error. [Page 12 (Page 28 of PDF)]
and likely the following sentence from that specification and also in
GLSLangSpec 4.30.6:

    If an implementation does not recognize the tokens following #pragma,
    then it will ignore that pragma.
Add a "make check" test to ensure no future regressions.
This change fixes at least part of the following Khronos GLES3 CTS test:

    preprocessor.pragmas.pragma_vertex
Reviewed-by: Kenneth Graunke <[email protected]>
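
The case in question, (a sketch):

    printf '#pragma\nfloat f;\n' | glcpp
    # A line consisting only of "#pragma" must not generate an error,
    # and "float f;" still reaches the compiler proper.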
We've always warned about this case, but a recent conformance test expects
this to be an error that causes compilation to fail. Make it so.
Also add a "make check" test to ensure these errors are generated.
This fixes the following Khronos GLES3 conformance tests:

    invalid_conditionals.tokens_after_ifdef_vertex
    invalid_conditionals.tokens_after_ifdef_fragment
    invalid_conditionals.tokens_after_ifndef_vertex
    invalid_conditionals.tokens_after_ifndef_fragment
Reviewed-by: Kenneth Graunke <[email protected]>
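
For example, (hypothetical input):

    cat > ifdef-garbage.c <<'EOF'
    #define FOO
    #ifdef FOO garbage
    #endif
    EOF
    glcpp < ifdef-garbage.c
    # The extra token after "#ifdef FOO" is now a hard error, where
    # previously it only produced a warning.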
While writing the previous commit message, I just felt bad documenting the
shortcoming of the change, (that undefined macro names would not be reported
in error messages).
Fix this by preserving the first-encountered undefined macro name and
reporting that in any resulting error message.
Reviewed-by: Kenneth Graunke <[email protected]>
The GLSL ES Specification 3.00.4 says:

    #if, #ifdef, #ifndef, #else, #elif, and #endif are defined to operate
    as for C++ except for the following:
    ...
    • Undefined identifiers not consumed by the defined operator do not
      default to '0'. Use of such identifiers causes an error.
    [Page 11 (page 127 of the PDF file)]
as well as:

    The semantics of applying operators in the preprocessor match those
    standard in the C++ preprocessor with the following exceptions:

    • The 2nd operand in a logical and ('&&') operation is evaluated if
      and only if the 1st operand evaluates to non-zero.
    • The 2nd operand in a logical or ('||') operation is evaluated if
      and only if the 1st operand evaluates to zero.

    If an operand is not evaluated, the presence of undefined identifiers
    in the operand will not cause an error.
(Note that neither of these deviations from C++ preprocessor behavior apply
to non-ES GLSL, at least as of specification version 4.30.6.)

The first portion of this, (generating an error for an undefined macro in an
#if condition), as well as the second, (short-circuiting to squelch such
errors), was not implemented previously, but is implemented in this commit.
A test is added for "make check" to ensure this behavior.
Note: The change as implemented does make the error message a bit less
precise, (it just states that an undefined macro was encountered, but not the
name of the macro).
This commit fixes the following Khronos GLES3 conformance tests:

    undefined_identifiers.valid_undefined_identifier_1_vertex
    undefined_identifiers.valid_undefined_identifier_1_fragment
    undefined_identifiers.valid_undefined_identifier_2_vertex
    undefined_identifiers.valid_undefined_identifier_2_fragment
Reviewed-by: Kenneth Graunke <[email protected]>
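
A sketch of both rules at work, (this assumes the standalone glcpp binary
applies ES semantics based on the #version directive):

    cat > short-circuit.c <<'EOF'
    #version 300 es
    #if defined FOO && BAR
    #endif
    #if BAR
    #endif
    EOF
    glcpp < short-circuit.c
    # The first #if is fine: FOO is undefined, so "&&" short-circuits and
    # BAR is never evaluated. The second #if is an error, because the
    # undefined identifier BAR does not default to 0 in GLSL ES.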
The preprocessor defines a notion of a "preprocessing number" that starts
with either a digit or a decimal point, and continues with zero or more
digits, decimal points, identifier characters, or the sign symbols
('-' and '+').
Prior to this change, preprocessing numbers were lexed as some
combination of OTHER and IDENTIFIER tokens. This had the problem of
causing undesired macro expansion in some cases.
We add tests to ensure that the undesired macro expansion does not
happen in cases such as:

    #define e +1
    #define xyz -2
    int n = 1e;
    int p = 1xyz;
In either case these macro definitions have no effect after this
change, so that the numeric literals, (whether valid or not), will be
passed on as-is from the preprocessor to the compiler proper.
This fixes the following Khronos GLES3 CTS tests:

    preprocessor.basic.correct_phases_vertex
    preprocessor.basic.correct_phases_fragment
v2. Thanks to Anuj Phogat for improving the original regular expression,
(which accepted a '+' or '-' anywhere, where these are only allowed after
one of [eEpP]). I also expanded the test to exercise this.
v3. Also fixed regular expression to require at least one digit at the
beginning (after an optional period). Otherwise, a string such as ".xyz" was
getting sucked up as a preprocessing number, (where obviously this should be a
field access). Again, I expanded the test to exercise this.
Reviewed-by: Anuj Phogat <[email protected]>
Previously, a line such as:

    #else garbage

would flag an error if it followed "#if 0", but not if it followed "#if 1".
We fix this by setting a new bit of state (lexing_else) that allows the lexer
to defer switching to the <SKIP> start state until after the NEWLINE following
the #else directive.
A new test case is added for:

    #if 1
    #else garbage
    #endif
which was untested before, (and did not generate the desired error).
This fixes the following Khronos GLES3 CTS tests:

    tokens_after_else_vertex
    tokens_after_else_fragment
Reviewed-by: Matt Turner <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
Previously, the test suite was expecting the compiler to allow a redefinition
of a macro with whitespace added, but gcc is more strict: it allows changes
in the amount of whitespace, but insists that whitespace exist, or not, in
exactly the same places.

See: https://gcc.gnu.org/onlinedocs/cpp/Undefining-and-Redefining-Macros.html:

    These definitions are effectively the same:

        #define FOUR (2 + 2)
        #define FOUR (2    +    2)
        #define FOUR (2 /* two */ + 2)

    but these are not:

        #define FOUR (2 + 2)
        #define FOUR ( 2+2 )
        #define FOUR (2 * 2)
        #define FOUR(score,and,seven,years,ago) (2 + 2)
This change adjusts the existing "redefine-macro-legitimate" test to work with
the more strict understanding, and adds a new "redefine-whitespace" test to
verify that changes in the position of whitespace are flagged as errors.
Reviewed-by: Anuj Phogat <[email protected]>
Currently verifying that an #undef of __FILE__, __LINE__, or __VERSION__ will
generate an error.
Reviewed-by: Anuj Phogat <[email protected]>
After preprocessing by glcpp, all adjacent spaces were replaced by a single
one, and the GLSL parser received a column-shifted shader source. This
negatively affected AST location setup and produced wrong error messages
for heavily-spaced shaders.
Reviewed-by: Kenneth Graunke <[email protected]>
Section 3.3 (Preprocessor) of the GLSL 1.30 spec (and later) and the
GLSL ES spec (all versions) say:

    "All macro names containing two consecutive underscores ( __ ) are
    reserved for future use as predefined macro names. All macro names
    prefixed with "GL_" ("GL" followed by a single underscore) are also
    reserved."
The intention is that names containing __ are reserved for internal use
by the implementation, and names prefixed with GL_ are reserved for use
by Khronos. Since every extension adds a name prefixed with GL_ (i.e.,
the name of the extension), that should be an error. Names simply
containing __ are dangerous to use, but should be allowed. In similar
cases, the C++ preprocessor specification says, "no diagnostic is
required."
Per the Khronos bug mentioned below, a future version of the GLSL
specification will clarify this.
Signed-off-by: Ian Romanick <[email protected]>
Cc: "9.2 10.0 10.1" <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Tested-by: Kenneth Graunke <[email protected]>
Reviewed-by: Anuj Phogat <[email protected]>
Tested-by: Darius Spitznagel <[email protected]>
Cc: Tapani Pälli <[email protected]>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71870
Bugzilla: Khronos #11702
This is the innocent-looking but killer test case to verify the bug fixed in
the preceding commit.
Reviewed-by: Jordan Justen <[email protected]>
The preprocessor currently accepts multiple else/elif-groups
per if-section. The GLSL-preprocessor is defined by the C++
specification, which defines the following parse-rule:

    if-section:
        if-group elif-groups(opt) else-group(opt) endif-line
This clearly only allows a single else-group, that has to come
after any elif-groups.
So let's modify the code to follow the specification. Add test
to prevent regressions.
Reviewed-by: Ian Romanick <[email protected]>
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Carl Worth <[email protected]>
Cc: 10.0 <[email protected]>
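
The shape of the rejected input, (a sketch):

    cat > double-else.c <<'EOF'
    #if 0
    one
    #else
    two
    #else
    three
    #endif
    EOF
    glcpp < double-else.c
    # Expected: an error on the second #else, since an if-section may
    # contain only one else-group, after any elif-groups.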
definition)
The preprocessor has always replaced multi-line comments with a single space
character, (as required by the specification), but as of commit
bd55ba568b301d0f764cd1ca015e84e1ae932c8b the lexer also emitted a NEWLINE
token for each newline within the comment, (in order to preserve line
numbers).
The emitting of NEWLINE tokens within the comment broke the rule of "replace a
multi-line comment with a single space" as could be exposed by code like the
following:
    #define FOO a/*
    */b
    FOO
Prior to commit bd55ba568b301d0f764cd1ca015e84e1ae932c8b, this code defined
the macro FOO as "a b" as desired. Since that commit, this code instead
defines FOO as "a" and leaves a stray "b" in the output.
In this commit, we fix this by not emitting the NEWLINE tokens while lexing
the comment, but instead merely counting them in the commented_newlines
variable. Then, when the lexer next encounters a non-commented newline it
switches to a NEWLINE_CATCHUP state to emit as many NEWLINE tokens as
necessary (so that subsequent parsing stages still generate correct line
numbers).
Of course, it would have been more clear if we could have written a loop to
emit all the newlines, but flex conventions prevent that, (we must use
"return" for each token we emit).
It similarly would have been clear to have a new rule restricted to the
<NEWLINE_CATCHUP> state with an action much like the body of this if
condition. The problem with that is that this rule must not consume any
characters. It might be possible to write a rule that matches a single
lookahead of any character, but then we would also need an additional rule
to handle the <EOF> case, where there are no additional characters available
for the lookahead to match.
Given those considerations, and given that the SKIP-state manipulation already
involves a code block at the top of the lexer function, before any rules, it
seems best to me to go with the implementation here which adds a similar
pre-rule code block for the NEWLINE_CATCHUP.
Finally, this commit also changes the expected output of a few, existing glcpp
tests. The change here is that the space character resulting from the
multi-line comment is now emitted before the newlines corresponding to that
comment. (Previously, the newlines were emitted first, and the space character
afterward.)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=72686
Reviewed-by: Kenneth Graunke <[email protected]>
Reviewed-by: Ian Romanick <[email protected]>
To trigger the bug, it suffices to have a line-continuation followed by
a newline and then a non-line-continuation backslash.
Reviewed-by: Kenneth Graunke <[email protected]>
Removing the subdirectory recursion provides a small speed up.
Tested-by: Andreas Boll <[email protected]>
First we test that line continuations are honored within a comment, (as
recently changed in glcpp), then we test that line continuations can be
disabled via an option within the context. This is tested via the new support
for a test-specific command-line option passed to glcpp.
Reviewed-by: Kenneth Graunke <[email protected]>
This will allow the test exercising disabled line continuations to arrange
for the --disable-line-continuations argument to be passed to the standalone
glcpp.
Reviewed-by: Kenneth Graunke <[email protected]>
This test file is very similar to test 113-line-and-file-macros but uses token
pasting for cleaner quiz answers (without spaces between the digits). This
test passes thanks to the recent addition of support for pasting INTEGER
tokens, (but would have failed without that).
(Note that this test is distinct from test 059-token-pasting-integer which
pastes integers parsed from the source. Those are parsed to INTEGER_STRING
tokens and are already pasted correctly as verified by that test. The only way
to generate the INTEGER tokens which currently fail to paste is with an
internal define such as __LINE__ that results in an integer.)
Reviewed-by: Matt Turner <[email protected]>
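
The essence of the new test, (hypothetical input):

    cat > paste-line.c <<'EOF'
    #define CONCAT(a,b) a ## b
    CONCAT(__LINE__,__LINE__)
    EOF
    glcpp < paste-line.c
    # __LINE__ expands to the INTEGER token 2 (twice), and pasting the
    # two INTEGER tokens must yield 22.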
The current code lets a few invalid pastes through, such as a string pasted
onto the end of an integer. Extend the invalid-paste test to catch some of
these.
Reviewed-by: Matt Turner <[email protected]>
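
One of the newly-caught cases, roughly:

    printf '#define PASTE(a, b) a ## b\nPASTE(1, "two")\n' | glcpp
    # Expected: an invalid-paste error; a string literal cannot be
    # pasted onto the end of an integer.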