Commit messages

now contains 3 static tables. The first table is a single, large string of
all the enum names. The second table is an array, sorted by enum name, of
indexes into the string table and the matching enum value. The separate string
table is used to eliminate relocs (and save space) in the compiled file.
The third table is an array, sorted by enum value, of indexes into the
second table.
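For illustration, the layout described above amounts to something like the
following (the table and struct names here are assumptions, not necessarily
what gl_enums.py actually emits):

/* 1. One large string holding every enum name, back to back. */
static const char enum_string_table[] =
   "GL_POINTS\0"
   "GL_TEXTURE0\0"
   "GL_TEXTURE0_ARB\0"
   /* ...all other enum names... */;

/* 2. Sorted by enum name: the offset of each name in the string table plus
 *    the matching enum value.  Storing offsets instead of pointers is what
 *    eliminates the relocs. */
struct enum_elt {
   unsigned short offset;   /* index into enum_string_table */
   unsigned int n;          /* the GLenum value */
};

static const struct enum_elt all_enums[] = {
   {  0, 0x0000 },   /* GL_POINTS */
   { 10, 0x84C0 },   /* GL_TEXTURE0 */
   { 22, 0x84C0 },   /* GL_TEXTURE0_ARB */
   /* ... */
};

/* 3. Sorted by enum value, one entry per value: an index into all_enums[]
 *    that selects the "best" name for that value. */
static const unsigned short reduced_enums[] = {
   0,   /* 0x0000 -> GL_POINTS */
   1,   /* 0x84C0 -> GL_TEXTURE0, not GL_TEXTURE0_ARB */
   /* ... */
};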
The [name, enum] table contains all of the enums, but the table sorted by
enum value does not; it holds only one entry per enum value. For enum
values that have multiple names (e.g., 0x84C0 is both GL_TEXTURE0_ARB and
GL_TEXTURE0), only an index to the "best" name appears in the table.
gl_enums.py gives precedence to "core" GL names, followed by ARB versions,
then EXT versions, and finally vendor versions (i.e., anything that doesn't
fall into one of the previous categories). Filtering the unneeded entries
out of this table not only guarantees determinism in the generated tables
but also saves 364 entries.
The optimizations outlined above reduced the size of the stripped enums.o
(on x86) from ~80KB to ~53KB.
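A lookup in either direction then becomes a binary search. A rough sketch,
building on the hypothetical tables above (the helper names are made up):

#include <stdlib.h>
#include <string.h>

static int compare_by_name(const void *key, const void *element)
{
   const struct enum_elt *elt = (const struct enum_elt *) element;
   return strcmp((const char *) key, enum_string_table + elt->offset);
}

static int compare_by_value(const void *key, const void *element)
{
   unsigned value = *(const unsigned *) key;
   unsigned n = all_enums[*(const unsigned short *) element].n;
   return (value < n) ? -1 : (value > n) ? 1 : 0;
}

/* Name -> value, e.g. "GL_TEXTURE0" -> 0x84C0, via the name-sorted table. */
static int enum_from_name(const char *name, unsigned *value)
{
   const struct enum_elt *elt =
      bsearch(name, all_enums, sizeof(all_enums) / sizeof(all_enums[0]),
              sizeof(all_enums[0]), compare_by_name);
   if (elt == NULL)
      return 0;
   *value = elt->n;
   return 1;
}

/* Value -> "best" name, e.g. 0x84C0 -> "GL_TEXTURE0", via the value-sorted
 * index table. */
static const char *name_from_enum(unsigned value)
{
   const unsigned short *idx =
      bsearch(&value, reduced_enums,
              sizeof(reduced_enums) / sizeof(reduced_enums[0]),
              sizeof(reduced_enums[0]), compare_by_value);
   return (idx != NULL) ? enum_string_table + all_enums[*idx].offset : NULL;
}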
The internal organization of gl_enums.py was also heavily modified.
Previously, enums were stored in an unsorted list of (essentially)
[value, name] tuples. This list was sorted with a user-specified compare
function (which is very slow in most Python implementations) to generate a
table sorted by enum value, and then sorted again, with another
user-specified compare function, to generate a table sorted by name.
Enums are now stored in a dictionary, called enum_table, with the enum value
as the key. Each dictionary element is a list of [name, priority] pairs.
The priority is determined as described above. The table sorted by enum
value is generated by sorting the keys of enum_table (i.e., very fast). The
tables sorted by name are generated by creating a list, called name_table,
of [name, enum value] pairs. This table can then be sorted by doing
name_table.sort() (i.e., very fast).
The result is a fair amount more Python code, but execution time was reduced
from ~14 seconds to ~2 seconds.

ARB_fragment_program_shadow, ARB_vertex_program, NV_fragment_program,
NV_fragment_program_option, NV_fragment_program2, NV_vertex_program,
NV_vertex_program1_1, NV_vertex_program2, NV_vertex_program2_option,
NV_vertex_program3, and ATI_text_fragment_shader.

setup_single_request, and setup_vendor_request to the global functions
__glXReadPixelReply, __glXReadReply, __glXSetupSingleRequest, and
__glXSetupVendorRequest. This will make it easier to add handcoded Single /
VendorPrivate / VendorPrivateWithReply functions.
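As a sketch of the kind of hand-coded function this enables, roughly
following the pattern of the generated protocol code (the exact prototypes
of the __glX* helpers are assumptions here, not copied from the real
headers):

GLboolean __indirect_glIsEnabled(GLenum cap)
{
   __GLXcontext *const gc = __glXGetCurrentContext();
   Display *const dpy = gc->currentDpy;
   GLboolean retval = GL_FALSE;
   const GLuint cmdlen = 4;              /* payload: one 32-bit enum */

   if (dpy != NULL) {
      /* Lock the display and emit the GLXSingle request header. */
      GLubyte *const pc =
         __glXSetupSingleRequest(gc, X_GLsop_IsEnabled, cmdlen);

      memcpy(pc, &cap, 4);

      /* Fetch the 32-bit reply value; no bulk reply data follows. */
      retval = (GLboolean) __glXReadReply(dpy, 0, NULL, GL_FALSE);

      UnlockDisplay(dpy);
      SyncHandle();
   }
   return retval;
}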
changed. Other drivers don't need to do this because they're swapping
modified textures out of texture memory, which implies a timestamp
update.

heap aging, similar to the way it's done in the i810 and i855 drivers.
This avoids idling the engine on every texture upload.
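The idea, in a very rough and entirely hypothetical sketch (none of these
names come from the actual i810/i855 code):

struct tex_heap {
   unsigned last_age;   /* engine age when this heap's memory was last used */
};

/* Assumed hardware helpers: read the most recently retired "age" (e.g. a
 * breadcrumb the engine writes back) and wait until a given age is reached. */
extern unsigned hw_completed_age(void);
extern void hw_wait_for_age(unsigned age);

static void wait_for_heap(struct tex_heap *heap)
{
   /* Only stall if the engine has not yet passed the point at which this
    * heap's memory was last referenced. */
   if (hw_completed_age() < heap->last_age)
      hw_wait_for_age(heap->last_age);
}

static void upload_texture(struct tex_heap *heap, unsigned current_age)
{
   wait_for_heap(heap);            /* usually a no-op instead of a full idle */
   /* ... copy the texture image into heap memory ... */
   heap->last_age = current_age;   /* stamp the heap for next time */
}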
This is needed for multitexturing to work properly.

and some new debugging stuff.

some comments.

parenthesis. Can you see it? HINT: Anything texture-related should now work
slightly better. And yes, it took me several hours to find it.

Be a bit more useful about the sync message after flushing command buffers.
Add an "allmsg" debug name that enables all log messages but does not
enable syncing.
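A minimal sketch of how such a debug-name table might look (the names and
bit values here are hypothetical, not the driver's actual flags):

#include <stdlib.h>
#include <string.h>

#define DEBUG_TEXTURE  0x01
#define DEBUG_STATE    0x02
#define DEBUG_IOCTL    0x04
#define DEBUG_PRIMS    0x08
#define DEBUG_SYNC     0x10   /* idle the engine after every command buffer */

/* "allmsg": every log category, but deliberately not DEBUG_SYNC. */
#define DEBUG_ALLMSG   (DEBUG_TEXTURE | DEBUG_STATE | DEBUG_IOCTL | DEBUG_PRIMS)

static const struct {
   const char *name;
   unsigned flag;
} debug_names[] = {
   { "tex",    DEBUG_TEXTURE },
   { "state",  DEBUG_STATE   },
   { "ioctl",  DEBUG_IOCTL   },
   { "prims",  DEBUG_PRIMS   },
   { "sync",   DEBUG_SYNC    },
   { "allmsg", DEBUG_ALLMSG  },
};

/* Accumulate the flags named in an environment variable, e.g.
 * RADEON_DEBUG="prims allmsg". */
static unsigned parse_debug_names(const char *env_var)
{
   const char *s = getenv(env_var);
   unsigned flags = 0;
   size_t i;

   if (s != NULL) {
      for (i = 0; i < sizeof(debug_names) / sizeof(debug_names[0]); i++) {
         if (strstr(s, debug_names[i].name) != NULL)
            flags |= debug_names[i].flag;
      }
   }
   return flags;
}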
waiting for the engine to idle. There's no way for another buffer to
become free anyway once the engine is idle.
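In other words, one wait-and-retry after idling is enough; a hypothetical
sketch of that logic (none of these names are the driver's real ones):

struct r300_context;
struct r300_dma_buffer;
extern struct r300_dma_buffer *try_get_dma_buffer(struct r300_context *rmesa);
extern void wait_for_engine_idle(struct r300_context *rmesa);

static struct r300_dma_buffer *get_dma_buffer(struct r300_context *rmesa)
{
   struct r300_dma_buffer *buf = try_get_dma_buffer(rmesa);

   if (buf == NULL) {
      /* Let the engine finish its outstanding work, which releases any
       * buffers it still holds, then try exactly once more. */
      wait_for_engine_idle(rmesa);
      buf = try_get_dma_buffer(rmesa);
      /* If this also failed, looping further is pointless: the engine is
       * idle, so no additional buffers can become free.  Return NULL and
       * let the caller bail out of the upload. */
   }
   return buf;
}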
so that we no longer leak DMA buffers (plus, this just might fix some
state-setting related problems, if there were any - but that's unlikely).
Update the DRM to cope with cmdbuf->nbox == 0.
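One way a dispatcher can cope with an empty cliprect list is to make sure
the buffer is still consumed and released; a purely illustrative sketch with
made-up names, not the actual DRM change:

struct cliprect;   /* hypothetical stand-ins for the real DRM structures */
struct cmdbuf {
   int nbox;
   struct cliprect *boxes;
};
extern void emit_cliprect(const struct cliprect *box);
extern void emit_commands(struct cmdbuf *cmdbuf);
extern void discard_buffer(struct cmdbuf *cmdbuf);

static void dispatch_cmdbuf(struct cmdbuf *cmdbuf)
{
   int i;

   /* Replay the buffer once per cliprect; with no cliprects there is simply
    * nothing to draw. */
   for (i = 0; i < cmdbuf->nbox; i++) {
      emit_cliprect(&cmdbuf->boxes[i]);
      emit_commands(cmdbuf);
   }

   /* The important part: the buffer is released even when nbox == 0,
    * instead of being rejected or leaked. */
   discard_buffer(cmdbuf);
}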
anything (r300_get_num_verts returns 0).

Added a more verbose comment about nr_released_bufs in r300_context.h.

whitespace before preprocessor commands.
Please, can you try to keep the warnings down? Try running 'make -s'
sometime to see just how bad an offender the current code is.

Also, put the hash in preprocessor directives at the beginning of the line
to fix error messages.
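Some compilers (and strict traditional preprocessors) only accept the '#'
in the first column, so any indentation belongs after the hash; for example
(the macro itself is just an illustration):

#include <stdio.h>

/* Indenting the hash, as in "    #ifdef DEBUG", draws warnings about
 * whitespace before the directive and hard errors from pickier tools.
 * Keep the hash in column one and indent after it to show nesting. */
#ifdef DEBUG
#  define DBG(msg)  fprintf(stderr, "%s\n", (msg))
#else
#  define DBG(msg)  do { } while (0)
#endif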
any lockups or other issues. Tests with one object using elts should pass.
Introducing more than one object will cause indices to mix up, as far as I
can see. A DRM update is needed for this code to work!

and a fix for the glxinfo problem.