author    Tom Stellard <[email protected]>    2012-01-06 17:38:37 -0500
committer Tom Stellard <[email protected]>    2012-04-13 10:32:06 -0400
commit    a75c6163e605f35b14f26930dd9227e4f337ec9e (patch)
tree      0263219cbab9282896f874060bb03d445c4de891 /src/gallium/drivers/radeon/AMDIL789IOExpansion.cpp
parent    e55cf4854d594eae9ac3f6abd24f4e616eea894f (diff)
radeonsi: initial WIP SI code
This commit adds initial support for acceleration
on SI chips. egltri is starting to work.
The SI/R600 llvm backend is currently included in mesa
but that may change in the future.
The plan is to write a single gallium driver and
use gallium to support X acceleration.
This commit contains patches from:
Tom Stellard <[email protected]>
Michel Dänzer <[email protected]>
Alex Deucher <[email protected]>
Vadim Girlin <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
The following commits were squashed in:
======================================================================
radeonsi: Remove unused winsys pointer
This was removed from r600g in commit:
commit 96d882939d612fcc8332f107befec470ed4359de
Author: Marek Olšák <[email protected]>
Date: Fri Feb 17 01:49:49 2012 +0100
gallium: remove unused winsys pointers in pipe_screen and pipe_context
A winsys is already a private object of a driver.
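As a rough illustration of that point (not part of this commit, sketched in C with
assumed struct and field names), a driver keeps its winsys handle in its own
screen subclass rather than in the generic pipe_screen it embeds:

/* Minimal sketch, assuming a driver-private screen struct named si_screen;
 * the generic pipe_screen no longer carries a winsys pointer. */
struct si_screen {
   struct pipe_screen base;    /* generic gallium screen, first member so casts work */
   struct radeon_winsys *ws;   /* winsys handle, private to the driver */
};

static inline struct si_screen *si_screen(struct pipe_screen *pscreen)
{
   /* Downcast from the generic screen to the driver's own type. */
   return (struct si_screen *)pscreen;
}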
======================================================================
radeonsi: Copy color clamping CAPs from r600
Not sure if the values of these CAPs are correct for radeonsi, but the
same changes were made to r600g in commit:
commit bc1c8369384b5e16547c5bf9728aa78f8dfd66cc
Author: Marek Olšák <[email protected]>
Date: Mon Jan 23 03:11:17 2012 +0100
st/mesa: do vertex and fragment color clamping in shaders
For ARB_color_buffer_float. Most hardware can't do it and st/mesa is
the perfect place for a fallback.
The exceptions are:
- r500 (vertex clamp only)
- nv50 (both)
- nvc0 (both)
- softpipe (both)
We also have to take into account that r300 can do CLAMPED vertex colors only,
while r600 can do UNCLAMPED vertex colors only. The difference can be expressed
with the two new CAPs.
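As a rough sketch (not from this commit), a screen for r600-class hardware would
report the two CAPs from its get_param hook along these lines; the function name
and the radeonsi values are assumptions, only the r600 behaviour described above
is reflected:

/* Hypothetical get_param excerpt; values follow the text above:
 * r600-class hardware writes UNCLAMPED vertex colors only. */
static int si_get_param(struct pipe_screen *pscreen, enum pipe_cap param)
{
   switch (param) {
   case PIPE_CAP_VERTEX_COLOR_UNCLAMPED:
      return 1;   /* hardware can leave vertex colors unclamped */
   case PIPE_CAP_VERTEX_COLOR_CLAMPED:
      return 0;   /* st/mesa must clamp in the vertex shader when required */
   default:
      return 0;
   }
}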
======================================================================
radeonsi: Remove PIPE_CAP_OUTPUT_READ
This CAP was dropped in commit:
commit 04e324008759282728a95a1394bac2c4c2a1a3f9
Author: Marek Olšák <[email protected]>
Date: Thu Feb 23 23:44:36 2012 +0100
gallium: remove PIPE_SHADER_CAP_OUTPUT_READ
r600g is the only driver which has made use of it. The reason the CAP was
added was to fix some piglit tests when the GLSL pass lower_output_reads
didn't exist.
However, not removing output reads breaks the fallback for glClampColorARB,
which assumes outputs are not readable. The fix would be non-trivial
and my personal preference is to remove the CAP, considering that reading
outputs is uncommon and that we can now use lower_output_reads to fix
the issue that the CAP was supposed to work around in the first place.
======================================================================
radeonsi: Add missing parameters to rws->buffer_get_tiling() call
This was changed in commit:
commit c0c979eebc076b95cc8d18a013ce2968fe6311ad
Author: Jerome Glisse <[email protected]>
Date: Mon Jan 30 17:22:13 2012 -0500
r600g: add support for common surface allocator for tiling v13
Tiled surfaces have all kinds of alignment constraints that need to
be met. Instead of duplicating all this code between the ddx and
mesa, use common code in libdrm_radeon; this also ensures that both
the ddx and mesa compute those alignments in the same way.
v2 fix evergreen
v3 fix compressed textures and work around a cube texture issue by
disabling 2D array mode for cubemaps (need to check whether r7xx and
newer are also affected by the issue)
v4 fix texture arrays
v5 fix evergreen and newer, split surface value computation from
mipmap tree generation so that we can get the values directly from the
ddx
v6 final fix to the evergreen tile split value
v7 fix the mipmap offset to avoid using random values, use color view
and depth view to address different layers as the hardware does some
magic rotation depending on the layer
v8 fix COLOR_VIEW on r6xx for linear array mode, use COLOR_VIEW on
evergreen, align bytes per pixel to a multiple of a dword
v9 fix handling of stencil on evergreen, half fix for compressed
textures
v10 fix evergreen compressed textures, proper support for stencil
tile split. Fix a stencil issue when the array mode was cleared by
the kernel, always program the stencil bo. On evergreen the depth
buffer bo needs to be big enough to hold depth buffer + stencil
buffer, as even with stencil disabled things get written there.
v11 rebase on top of mesa, fix a pitch issue with 1d surfaces on evergreen,
the old ddx overestimated those. Fix the linear case when pitch*height < 64.
Fix r300g.
v12 Fix linear case when pitch*height < 64 for old path, adapt to
libdrm API change
v13 add libdrm check
Signed-off-by: Jerome Glisse <[email protected]>
======================================================================
radeonsi: Remove PIPE_TRANSFER_MAP_PERMANENTLY
This was removed in commit:
commit 62f44f670bb0162e89fd4786af877f8da9ff607c
Author: Marek Olšák <[email protected]>
Date: Mon Mar 5 13:45:00 2012 +0100
Revert "gallium: add flag PIPE_TRANSFER_MAP_PERMANENTLY"
This reverts commit 0950086376b1c8b7fb89eda81ed7f2f06dee58bc.
It was decided to refactor the transfer API instead of adding workarounds
to address the performance issues.
======================================================================
radeonsi: Handle PIPE_VIDEO_CAP_PREFERED_FORMAT.
Reintroduced in commit 9d9afcb5bac2931d4b8e6d1aa571e941c5110c90.
======================================================================
radeonsi: nuke the fallback for vertex and fragment color clamping
Ported from r600g commit c2b800cf38b299c1ab1c53dc0e4ea00c7acef853.
======================================================================
radeonsi: don't expose transform_feedback2 without kernel support
Ported from r600g commit 15146fd1bcbb08e44a1cbb984440ee1a5de63d48.
======================================================================
radeonsi: Handle PIPE_CAP_GLSL_FEATURE_LEVEL.
Ported from r600g part of commit 171be755223d99f8cc5cc1bdaf8bd7b4caa04b4f.
======================================================================
radeonsi: set minimum point size to 1.0 for non-sprite non-aa points.
Ported from r600g commit f183cc9ce3ad1d043bdf8b38fd519e8f437714fc.
======================================================================
radeonsi: rework and consolidate stencilref state setting.
Ported from r600g commit a2361946e782b57f0c63587841ca41c0ea707070.
======================================================================
radeonsi: cleanup setting DB_SHADER_CONTROL.
Ported from r600g commit 3d061caaed13b646ff40754f8ebe73f3d4983c5b.
======================================================================
radeonsi: Get rid of register masks.
Ported from r600g commits
3d061caaed13b646ff40754f8ebe73f3d4983c5b..9344ab382a1765c1a7c2560e771485edf4954fe2.
======================================================================
radeonsi: get rid of r600_context_reg.
Ported from r600g commits
9344ab382a1765c1a7c2560e771485edf4954fe2..bed20f02a771f43e1c5092254705701c228cfa7f.
======================================================================
radeonsi: Fix regression from 'Get rid of register masks'.
======================================================================
radeonsi: optimize r600_resource_va.
Ported from r600g commit 669d8766ff3403938794eb80d7769347b6e52174.
======================================================================
radeonsi: remove u8,u16,u32,u64 types.
Ported from r600g commit 78293b99b23268e6698f1267aaf40647c17d95a5.
======================================================================
radeonsi: merge r600_context with r600_pipe_context.
Ported from r600g commit e4340c1908a6a3b09e1a15d5195f6da7d00494d0.
======================================================================
radeonsi: Miscellaneous context cleanups.
Ported from r600g commits
e4340c1908a6a3b09e1a15d5195f6da7d00494d0..621e0db71c5ddcb379171064a4f720c9cf01e888.
======================================================================
radeonsi: add a new simple API for state emission.
Ported from r600g commits
621e0db71c5ddcb379171064a4f720c9cf01e888..f661405637bba32c2cfbeecf6e2e56e414e9521e.
======================================================================
radeonsi: Also remove sbu_flags member of struct r600_reg.
Requires using sid.h instead of r600d.h for the new CP_COHER_CNTL definitions,
so some code needs to be disabled for now.
======================================================================
radeonsi: Miscellaneous simplifications.
Ported from r600g commits 38bf2763482b4f1b6d95cd51aecec75601d8b90f and
b0337b679ad4c2feae59215104cfa60b58a619d5.
======================================================================
radeonsi: Handle PIPE_CAP_QUADS_FOLLOW_PROVOKING_VERTEX_CONVENTION.
Ported from commit 8b4f7b0672d663273310fffa9490ad996f5b914a.
======================================================================
radeonsi: Use a fake reloc to sleep for fences.
Ported from r600g commit 8cd03b933cf868ff867e2db4a0937005a02fd0e4.
======================================================================
radeonsi: adapt to get_query_result interface change.
Ported from r600g commit 4445e170bee23a3607ece0e010adef7058ac6a11.
Diffstat (limited to 'src/gallium/drivers/radeon/AMDIL789IOExpansion.cpp')
-rw-r--r-- | src/gallium/drivers/radeon/AMDIL789IOExpansion.cpp | 723
1 files changed, 723 insertions, 0 deletions
diff --git a/src/gallium/drivers/radeon/AMDIL789IOExpansion.cpp b/src/gallium/drivers/radeon/AMDIL789IOExpansion.cpp
new file mode 100644
index 00000000000..cf5afb9d195
--- /dev/null
+++ b/src/gallium/drivers/radeon/AMDIL789IOExpansion.cpp
@@ -0,0 +1,723 @@
+//===-- AMDIL789IOExpansion.cpp - TODO: Add brief description -------===//
+//
+// The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//==-----------------------------------------------------------------------===//
+//
+// @file AMDIL789IOExpansion.cpp
+// @details Implementation of the IO expansion class for 789 devices.
+//
+#include "AMDILCompilerErrors.h"
+#include "AMDILCompilerWarnings.h"
+#include "AMDILDevices.h"
+#include "AMDILGlobalManager.h"
+#include "AMDILIOExpansion.h"
+#include "AMDILKernelManager.h"
+#include "AMDILMachineFunctionInfo.h"
+#include "AMDILTargetMachine.h"
+#include "AMDILUtilityFunctions.h"
+#include "llvm/CodeGen/MachineConstantPool.h"
+#include "llvm/CodeGen/MachineInstr.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/DerivedTypes.h"
+#include "llvm/Support/DebugLoc.h"
+#include "llvm/Value.h"
+
+using namespace llvm;
+AMDIL789IOExpansion::AMDIL789IOExpansion(TargetMachine &tm
+    AMDIL_OPT_LEVEL_DECL)
+: AMDILIOExpansion(tm AMDIL_OPT_LEVEL_VAR)
+{
+}
+
+AMDIL789IOExpansion::~AMDIL789IOExpansion() {
+}
+
+const char *AMDIL789IOExpansion::getPassName() const
+{
+  return "AMDIL 789 IO Expansion Pass";
+}
+// This code produces the following pseudo-IL:
+// mov r1007, $src.y000
+// cmov_logical r1007.x___, $flag.yyyy, r1007.xxxx, $src.xxxx
+// mov r1006, $src.z000
+// cmov_logical r1007.x___, $flag.zzzz, r1006.xxxx, r1007.xxxx
+// mov r1006, $src.w000
+// cmov_logical $dst.x___, $flag.wwww, r1006.xxxx, r1007.xxxx
+void
+AMDIL789IOExpansion::emitComponentExtract(MachineInstr *MI,
+    unsigned flag, unsigned src, unsigned dst, bool before)
+{
+  MachineBasicBlock::iterator I = *MI;
+  DebugLoc DL = MI->getDebugLoc();
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::VEXTRACT_v4i32), AMDIL::R1007)
+    .addReg(src)
+    .addImm(2);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_Y_i32), AMDIL::R1007)
+    .addReg(flag)
+    .addReg(AMDIL::R1007)
+    .addReg(src);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::VEXTRACT_v4i32), AMDIL::R1006)
+    .addReg(src)
+    .addImm(3);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_Z_i32), AMDIL::R1007)
+    .addReg(flag)
+    .addReg(AMDIL::R1006)
+    .addReg(AMDIL::R1007);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::VEXTRACT_v4i32), AMDIL::R1006)
+    .addReg(src)
+    .addImm(4);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_W_i32), dst)
+    .addReg(flag)
+    .addReg(AMDIL::R1006)
+    .addReg(AMDIL::R1007);
+
+}
+// We have a 128 bit load but a 8/16/32bit value, so we need to
+// select the correct component and make sure that the correct
+// bits are selected. For the 8 and 16 bit cases we need to
+// extract from the component the correct bits and for 32 bits
+// we just need to select the correct component.
+ void
+AMDIL789IOExpansion::emitDataLoadSelect(MachineInstr *MI)
+{
+  MachineBasicBlock::iterator I = *MI;
+  DebugLoc DL = MI->getDebugLoc();
+  emitComponentExtract(MI, AMDIL::R1008, AMDIL::R1011, AMDIL::R1011, false);
+  if (getMemorySize(MI) == 1) {
+    // This produces the following pseudo-IL:
+    // iand r1006.x___, r1010.xxxx, l14.xxxx
+    // mov r1006, r1006.xxxx
+    // iadd r1006, r1006, {0, -1, 2, 3}
+    // ieq r1008, r1006, 0
+    // mov r1011, r1011.xxxx
+    // ishr r1011, r1011, {0, 8, 16, 24}
+    // mov r1007, r1011.y000
+    // cmov_logical r1007.x___, r1008.yyyy, r1007.xxxx, r1011.xxxx
+    // mov r1006, r1011.z000
+    // cmov_logical r1007.x___, r1008.zzzz, r1006.xxxx, r1007.xxxx
+    // mov r1006, r1011.w000
+    // cmov_logical r1011.x___, r1008.wwww, r1006.xxxx, r1007.xxxx
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1006)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(3));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i32), AMDIL::R1006)
+      .addReg(AMDIL::R1006);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::ADD_v4i32), AMDIL::R1006)
+      .addReg(AMDIL::R1006)
+      .addImm(mMFI->addi128Literal(0xFFFFFFFFULL << 32,
+            (0xFFFFFFFEULL | (0xFFFFFFFDULL << 32))));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::IEQ_v4i32), AMDIL::R1008)
+      .addReg(AMDIL::R1006)
+      .addImm(mMFI->addi32Literal(0));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i32), AMDIL::R1011)
+      .addReg(AMDIL::R1011);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHRVEC_v4i32), AMDIL::R1011)
+      .addReg(AMDIL::R1011)
+      .addImm(mMFI->addi128Literal(8ULL << 32, 16ULL | (24ULL << 32)));
+    emitComponentExtract(MI, AMDIL::R1008, AMDIL::R1011, AMDIL::R1011, false);
+  } else if (getMemorySize(MI) == 2) {
+    // This produces the following pseudo-IL:
+    // ishr r1007.x___, r1010.xxxx, 1
+    // iand r1008.x___, r1007.xxxx, 1
+    // ishr r1007.x___, r1011.xxxx, 16
+    // cmov_logical r1011.x___, r1008.xxxx, r1007.xxxx, r1011.xxxx
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHR_i32), AMDIL::R1007)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(1));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1008)
+      .addReg(AMDIL::R1007)
+      .addImm(mMFI->addi32Literal(1));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHR_i32), AMDIL::R1007)
+      .addReg(AMDIL::R1011)
+      .addImm(mMFI->addi32Literal(16));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_i32), AMDIL::R1011)
+      .addReg(AMDIL::R1008)
+      .addReg(AMDIL::R1007)
+      .addReg(AMDIL::R1011);
+  }
+}
+// This function does address calculations modifications to load from a vector
+// register type instead of a dword addressed load.
+ void
+AMDIL789IOExpansion::emitVectorAddressCalc(MachineInstr *MI, bool is32bit, bool needsSelect)
+{
+  MachineBasicBlock::iterator I = *MI;
+  DebugLoc DL = MI->getDebugLoc();
+  // This produces the following pseudo-IL:
+  // ishr r1007.x___, r1010.xxxx, (is32bit) ? 2 : 3
+  // iand r1008.x___, r1007.xxxx, (is32bit) ? 3 : 1
+  // ishr r1007.x___, r1007.xxxx, (is32bit) ? 2 : 1
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHR_i32), AMDIL::R1007)
+    .addReg(AMDIL::R1010)
+    .addImm(mMFI->addi32Literal((is32bit) ? 0x2 : 3));
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1008)
+    .addReg(AMDIL::R1007)
+    .addImm(mMFI->addi32Literal((is32bit) ? 3 : 1));
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHR_i32), AMDIL::R1007)
+    .addReg(AMDIL::R1007)
+    .addImm(mMFI->addi32Literal((is32bit) ? 2 : 1));
+  if (needsSelect) {
+    // If the component selection is required, the following
+    // pseudo-IL is produced.
+    // mov r1008, r1008.xxxx
+    // iadd r1008, r1008, (is32bit) ? {0, -1, -2, -3} : {0, 0, -1, -1}
+    // ieq r1008, r1008, 0
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i32), AMDIL::R1008)
+      .addReg(AMDIL::R1008);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::ADD_v4i32), AMDIL::R1008)
+      .addReg(AMDIL::R1008)
+      .addImm(mMFI->addi128Literal((is32bit) ? 0xFFFFFFFFULL << 32 : 0ULL,
+            (is32bit) ? 0xFFFFFFFEULL | (0xFFFFFFFDULL << 32) :
+            -1ULL));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::IEQ_v4i32), AMDIL::R1008)
+      .addReg(AMDIL::R1008)
+      .addImm(mMFI->addi32Literal(0));
+  }
+}
+// This function emits a switch statement and writes 32bit/64bit
+// value to a 128bit vector register type.
+ void
+AMDIL789IOExpansion::emitVectorSwitchWrite(MachineInstr *MI, bool is32bit)
+{
+  MachineBasicBlock::iterator I = *MI;
+  uint32_t xID = getPointerID(MI);
+  assert(xID && "Found a scratch store that was incorrectly marked as zero ID!\n");
+  // This section generates the following pseudo-IL:
+  // switch r1008.x
+  // default
+  //   mov x1[r1007.x].(is32bit) ? x___ : xy__, r1011.x{y}
+  //   break
+  // case 1
+  //   mov x1[r1007.x].(is32bit) ? _y__ : __zw, r1011.x{yxy}
+  //   break
+  // if is32bit is true, case 2 and 3 are emitted.
+  // case 2
+  //   mov x1[r1007.x].__z_, r1011.x
+  //   break
+  // case 3
+  //   mov x1[r1007.x].___w, r1011.x
+  //   break
+  // endswitch
+  DebugLoc DL;
+  BuildMI(*mBB, I, MI->getDebugLoc(), mTII->get(AMDIL::SWITCH))
+    .addReg(AMDIL::R1008);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::DEFAULT));
+  BuildMI(*mBB, I, DL,
+      mTII->get((is32bit) ? AMDIL::SCRATCHSTORE_X : AMDIL::SCRATCHSTORE_XY)
+      , AMDIL::R1007)
+    .addReg(AMDIL::R1011)
+    .addImm(xID);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::BREAK));
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::CASE)).addImm(1);
+  BuildMI(*mBB, I, DL,
+      mTII->get((is32bit) ? AMDIL::SCRATCHSTORE_Y : AMDIL::SCRATCHSTORE_ZW), AMDIL::R1007)
+    .addReg(AMDIL::R1011)
+    .addImm(xID);
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::BREAK));
+  if (is32bit) {
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::CASE)).addImm(2);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHSTORE_Z), AMDIL::R1007)
+      .addReg(AMDIL::R1011)
+      .addImm(xID);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::BREAK));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::CASE)).addImm(3);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHSTORE_W), AMDIL::R1007)
+      .addReg(AMDIL::R1011)
+      .addImm(xID);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::BREAK));
+  }
+  BuildMI(*mBB, I, DL, mTII->get(AMDIL::ENDSWITCH));
+
+}
+ void
+AMDIL789IOExpansion::expandPrivateLoad(MachineInstr *MI)
+{
+  MachineBasicBlock::iterator I = *MI;
+  bool HWPrivate = mSTM->device()->usesHardware(AMDILDeviceInfo::PrivateMem);
+  if (!HWPrivate || mSTM->device()->isSupported(AMDILDeviceInfo::PrivateUAV)) {
+    return expandGlobalLoad(MI);
+  }
+  if (!mMFI->usesMem(AMDILDevice::SCRATCH_ID)
+      && mKM->isKernel()) {
+    mMFI->addErrorMsg(amd::CompilerErrorMessage[MEMOP_NO_ALLOCATION]);
+  }
+  uint32_t xID = getPointerID(MI);
+  assert(xID && "Found a scratch load that was incorrectly marked as zero ID!\n");
+  if (!xID) {
+    xID = mSTM->device()->getResourceID(AMDILDevice::SCRATCH_ID);
+    mMFI->addErrorMsg(amd::CompilerWarningMessage[RECOVERABLE_ERROR]);
+  }
+  DebugLoc DL;
+  // These instructions go before the current MI.
+  expandLoadStartCode(MI);
+  switch (getMemorySize(MI)) {
+  default:
+    // Since the private register is a 128 bit aligned, we have to align the address
+    // first, since our source address is 32bit aligned and then load the data.
+    // This produces the following pseudo-IL:
+    // ishr r1010.x___, r1010.xxxx, 4
+    // mov r1011, x1[r1010.x]
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SHR_i32), AMDIL::R1010)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(4));
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHLOAD), AMDIL::R1011)
+      .addReg(AMDIL::R1010)
+      .addImm(xID);
+    break;
+  case 1:
+  case 2:
+  case 4:
+    emitVectorAddressCalc(MI, true, true);
+    // This produces the following pseudo-IL:
+    // mov r1011, x1[r1007.x]
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHLOAD), AMDIL::R1011)
+      .addReg(AMDIL::R1007)
+      .addImm(xID);
+    // These instructions go after the current MI.
+    emitDataLoadSelect(MI);
+    break;
+  case 8:
+    emitVectorAddressCalc(MI, false, true);
+    // This produces the following pseudo-IL:
+    // mov r1011, x1[r1007.x]
+    // mov r1007, r1011.zw00
+    // cmov_logical r1011.xy__, r1008.xxxx, r1011.xy, r1007.zw
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHLOAD), AMDIL::R1011)
+      .addReg(AMDIL::R1007)
+      .addImm(xID);
+    // These instructions go after the current MI.
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::VEXTRACT_v2i64), AMDIL::R1007)
+      .addReg(AMDIL::R1011)
+      .addImm(2);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::CMOVLOG_i64), AMDIL::R1011)
+      .addReg(AMDIL::R1008)
+      .addReg(AMDIL::R1011)
+      .addReg(AMDIL::R1007);
+    break;
+  }
+  expandPackedData(MI);
+  expandExtendLoad(MI);
+  BuildMI(*mBB, I, MI->getDebugLoc(),
+      mTII->get(getMoveInstFromID(
+          MI->getDesc().OpInfo[0].RegClass)),
+      MI->getOperand(0).getReg())
+    .addReg(AMDIL::R1011);
+}
+
+
+ void
+AMDIL789IOExpansion::expandConstantLoad(MachineInstr *MI)
+{
+  MachineBasicBlock::iterator I = *MI;
+  if (!isHardwareInst(MI) || MI->memoperands_empty()) {
+    return expandGlobalLoad(MI);
+  }
+  uint32_t cID = getPointerID(MI);
+  if (cID < 2) {
+    return expandGlobalLoad(MI);
+  }
+  if (!mMFI->usesMem(AMDILDevice::CONSTANT_ID)
+      && mKM->isKernel()) {
+    mMFI->addErrorMsg(amd::CompilerErrorMessage[MEMOP_NO_ALLOCATION]);
+  }
+
+  DebugLoc DL;
+  // These instructions go before the current MI.
+  expandLoadStartCode(MI);
+  switch (getMemorySize(MI)) {
+  default:
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SHR_i32), AMDIL::R1010)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(4));
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::CBLOAD), AMDIL::R1011)
+      .addReg(AMDIL::R1010)
+      .addImm(cID);
+    break;
+  case 1:
+  case 2:
+  case 4:
+    emitVectorAddressCalc(MI, true, true);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::CBLOAD), AMDIL::R1011)
+      .addReg(AMDIL::R1007)
+      .addImm(cID);
+    // These instructions go after the current MI.
+    emitDataLoadSelect(MI);
+    break;
+  case 8:
+    emitVectorAddressCalc(MI, false, true);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::CBLOAD), AMDIL::R1011)
+      .addReg(AMDIL::R1007)
+      .addImm(cID);
+    // These instructions go after the current MI.
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::VEXTRACT_v2i64), AMDIL::R1007)
+      .addReg(AMDIL::R1011)
+      .addImm(2);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::VCREATE_v2i32), AMDIL::R1008)
+      .addReg(AMDIL::R1008);
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::CMOVLOG_i64), AMDIL::R1011)
+      .addReg(AMDIL::R1008)
+      .addReg(AMDIL::R1011)
+      .addReg(AMDIL::R1007);
+    break;
+  }
+  expandPackedData(MI);
+  expandExtendLoad(MI);
+  BuildMI(*mBB, I, MI->getDebugLoc(),
+      mTII->get(getMoveInstFromID(
+          MI->getDesc().OpInfo[0].RegClass)),
+      MI->getOperand(0).getReg())
+    .addReg(AMDIL::R1011);
+  MI->getOperand(0).setReg(AMDIL::R1011);
+}
+
+ void
+AMDIL789IOExpansion::expandConstantPoolLoad(MachineInstr *MI)
+{
+  if (!isStaticCPLoad(MI)) {
+    return expandConstantLoad(MI);
+  } else {
+    uint32_t idx = MI->getOperand(1).getIndex();
+    const MachineConstantPool *MCP = MI->getParent()->getParent()
+      ->getConstantPool();
+    const std::vector<MachineConstantPoolEntry> &consts
+      = MCP->getConstants();
+    const Constant *C = consts[idx].Val.ConstVal;
+    emitCPInst(MI, C, mKM, 0, isExtendLoad(MI));
+  }
+}
+
+ void
+AMDIL789IOExpansion::expandPrivateStore(MachineInstr *MI)
+{
+  MachineBasicBlock::iterator I = *MI;
+  bool HWPrivate = mSTM->device()->usesHardware(AMDILDeviceInfo::PrivateMem);
+  if (!HWPrivate || mSTM->device()->isSupported(AMDILDeviceInfo::PrivateUAV)) {
+    return expandGlobalStore(MI);
+  }
+  if (!mMFI->usesMem(AMDILDevice::SCRATCH_ID)
+      && mKM->isKernel()) {
+    mMFI->addErrorMsg(amd::CompilerErrorMessage[MEMOP_NO_ALLOCATION]);
+  }
+  uint32_t xID = getPointerID(MI);
+  assert(xID && "Found a scratch store that was incorrectly marked as zero ID!\n");
+  if (!xID) {
+    xID = mSTM->device()->getResourceID(AMDILDevice::SCRATCH_ID);
+    mMFI->addErrorMsg(amd::CompilerWarningMessage[RECOVERABLE_ERROR]);
+  }
+  DebugLoc DL;
+  // These instructions go before the current MI.
+  expandStoreSetupCode(MI);
+  switch (getMemorySize(MI)) {
+  default:
+    // This section generates the following pseudo-IL:
+    // ishr r1010.x___, r1010.xxxx, 4
+    // mov x1[r1010.x], r1011
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SHR_i32), AMDIL::R1010)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(4));
+    BuildMI(*mBB, I, MI->getDebugLoc(),
+        mTII->get(AMDIL::SCRATCHSTORE), AMDIL::R1010)
+      .addReg(AMDIL::R1011)
+      .addImm(xID);
+    break;
+  case 1:
+    emitVectorAddressCalc(MI, true, true);
+    // This section generates the following pseudo-IL:
+    // mov r1002, x1[r1007.x]
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHLOAD), AMDIL::R1002)
+      .addReg(AMDIL::R1007)
+      .addImm(xID);
+    emitComponentExtract(MI, AMDIL::R1008, AMDIL::R1002, AMDIL::R1002, true);
+    // This section generates the following pseudo-IL:
+    // iand r1003.x, r1010.x, 3
+    // mov r1003, r1003.xxxx
+    // iadd r1000, r1003, {0, -1, -2, -3}
+    // ieq r1000, r1000, 0
+    // mov r1002, r1002.xxxx
+    // ishr r1002, r1002, {0, 8, 16, 24}
+    // mov r1011, r1011.xxxx
+    // cmov_logical r1002, r1000, r1011, r1002
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1003)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(3));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i32), AMDIL::R1003)
+      .addReg(AMDIL::R1003);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::ADD_v4i32), AMDIL::R1001)
+      .addReg(AMDIL::R1003)
+      .addImm(mMFI->addi128Literal(0xFFFFFFFFULL << 32,
+            (0xFFFFFFFEULL | (0xFFFFFFFDULL << 32))));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::IEQ_v4i32), AMDIL::R1001)
+      .addReg(AMDIL::R1001)
+      .addImm(mMFI->addi32Literal(0));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i32), AMDIL::R1002)
+      .addReg(AMDIL::R1002);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHRVEC_v4i32), AMDIL::R1002)
+      .addReg(AMDIL::R1002)
+      .addImm(mMFI->addi128Literal(8ULL << 32, 16ULL | (24ULL << 32)));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i32), AMDIL::R1011)
+      .addReg(AMDIL::R1011);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_v4i32), AMDIL::R1002)
+      .addReg(AMDIL::R1001)
+      .addReg(AMDIL::R1011)
+      .addReg(AMDIL::R1002);
+    if (mSTM->device()->getGeneration() == AMDILDeviceInfo::HD4XXX) {
+      // This section generates the following pseudo-IL:
+      // iand r1002, r1002, 0xFF
+      // ishl r1002, r1002, {0, 8, 16, 24}
+      // ior r1002.xy, r1002.xy, r1002.zw
+      // ior r1011.x, r1002.x, r1002.y
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_v4i32), AMDIL::R1002)
+        .addReg(AMDIL::R1002)
+        .addImm(mMFI->addi32Literal(0xFF));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHL_v4i32), AMDIL::R1002)
+        .addReg(AMDIL::R1002)
+        .addImm(mMFI->addi128Literal(8ULL << 32, 16ULL | (24ULL << 32)));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v2i64), AMDIL::R1002)
+        .addReg(AMDIL::R1002).addReg(AMDIL::R1002);
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1002).addReg(AMDIL::R1002);
+    } else {
+      // This section generates the following pseudo-IL:
+      // mov r1001.xy, r1002.yw
+      // mov r1002.xy, r1002.xz
+      // ubit_insert r1002.xy, 8, 8, r1001.xy, r1002.xy
+      // mov r1001.x, r1002.y
+      // ubit_insert r1011.x, 16, 16, r1002.y, r1002.x
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::LHI_v2i64), AMDIL::R1001)
+        .addReg(AMDIL::R1002);
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::LLO_v2i64), AMDIL::R1002)
+        .addReg(AMDIL::R1002);
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::UBIT_INSERT_v2i32), AMDIL::R1002)
+        .addImm(mMFI->addi32Literal(8))
+        .addImm(mMFI->addi32Literal(8))
+        .addReg(AMDIL::R1001)
+        .addReg(AMDIL::R1002);
+      BuildMI(*mBB, I, DL,
+          mTII->get(AMDIL::LHI), AMDIL::R1001)
+        .addReg(AMDIL::R1002);
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::UBIT_INSERT_i32), AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(16))
+        .addImm(mMFI->addi32Literal(16))
+        .addReg(AMDIL::R1001)
+        .addReg(AMDIL::R1002);
+    }
+    emitVectorAddressCalc(MI, true, false);
+    emitVectorSwitchWrite(MI, true);
+    break;
+  case 2:
+    emitVectorAddressCalc(MI, true, true);
+    // This section generates the following pseudo-IL:
+    // mov r1002, x1[r1007.x]
+    BuildMI(*mBB, I, DL,
+        mTII->get(AMDIL::SCRATCHLOAD), AMDIL::R1002)
+      .addReg(AMDIL::R1007)
+      .addImm(xID);
+    emitComponentExtract(MI, AMDIL::R1008, AMDIL::R1002, AMDIL::R1002, true);
+    // This section generates the following pseudo-IL:
+    // ishr r1003.x, r1010.x, 1
+    // iand r1003.x, r1003.x, 1
+    // ishr r1001.x, r1002.x, 16
+    // cmov_logical r1002.x, r1003.x, r1002.x, r1011.x
+    // cmov_logical r1001.x, r1003.x, r1011.x, r1001.x
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHR_i32), AMDIL::R1003)
+      .addReg(AMDIL::R1010)
+      .addImm(mMFI->addi32Literal(1));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1003)
+      .addReg(AMDIL::R1003)
+      .addImm(mMFI->addi32Literal(1));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHR_i32), AMDIL::R1001)
+      .addReg(AMDIL::R1002)
+      .addImm(mMFI->addi32Literal(16));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_i32), AMDIL::R1002)
+      .addReg(AMDIL::R1003)
+      .addReg(AMDIL::R1002)
+      .addReg(AMDIL::R1011);
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::CMOVLOG_i32), AMDIL::R1001)
+      .addReg(AMDIL::R1003)
+      .addReg(AMDIL::R1011)
+      .addReg(AMDIL::R1001);
+    if (mSTM->device()->getGeneration() == AMDILDeviceInfo::HD4XXX) {
+      // This section generates the following pseudo-IL:
+      // iand r1002.x, r1002.x, 0xFFFF
+      // iand r1001.x, r1001.x, 0xFFFF
+      // ishl r1001.x, r1002.x, 16
+      // ior r1011.x, r1002.x, r1001.x
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1002)
+        .addReg(AMDIL::R1002)
+        .addImm(mMFI->addi32Literal(0xFFFF));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_i32), AMDIL::R1001)
+        .addReg(AMDIL::R1001)
+        .addImm(mMFI->addi32Literal(0xFFFF));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHL_i32), AMDIL::R1001)
+        .addReg(AMDIL::R1001)
+        .addImm(mMFI->addi32Literal(16));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_OR_i32), AMDIL::R1011)
+        .addReg(AMDIL::R1002).addReg(AMDIL::R1001);
+    } else {
+      // This section generates the following pseudo-IL:
+      // ubit_insert r1011.x, 16, 16, r1001.y, r1002.x
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::UBIT_INSERT_i32), AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(16))
+        .addImm(mMFI->addi32Literal(16))
+        .addReg(AMDIL::R1001)
+        .addReg(AMDIL::R1002);
+    }
+    emitVectorAddressCalc(MI, true, false);
+    emitVectorSwitchWrite(MI, true);
+    break;
+  case 4:
+    emitVectorAddressCalc(MI, true, false);
+    emitVectorSwitchWrite(MI, true);
+    break;
+  case 8:
+    emitVectorAddressCalc(MI, false, false);
+    emitVectorSwitchWrite(MI, false);
+    break;
+  };
+}
+ void
+AMDIL789IOExpansion::expandStoreSetupCode(MachineInstr *MI)
+{
+  MachineBasicBlock::iterator I = *MI;
+  DebugLoc DL;
+  if (MI->getOperand(0).isUndef()) {
+    BuildMI(*mBB, I, DL, mTII->get(getMoveInstFromID(
+        MI->getDesc().OpInfo[0].RegClass)), AMDIL::R1011)
+      .addImm(mMFI->addi32Literal(0));
+  } else {
+    BuildMI(*mBB, I, DL, mTII->get(getMoveInstFromID(
+        MI->getDesc().OpInfo[0].RegClass)), AMDIL::R1011)
+      .addReg(MI->getOperand(0).getReg());
+  }
+  expandTruncData(MI);
+  if (MI->getOperand(2).isReg()) {
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::ADD_i32), AMDIL::R1010)
+      .addReg(MI->getOperand(1).getReg())
+      .addReg(MI->getOperand(2).getReg());
+  } else {
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::MOVE_i32), AMDIL::R1010)
+      .addReg(MI->getOperand(1).getReg());
+  }
+  expandAddressCalc(MI);
+  expandPackedData(MI);
+}
+
+
+void
+AMDIL789IOExpansion::expandPackedData(MachineInstr *MI)
+{
+  MachineBasicBlock::iterator I = *MI;
+  if (!isPackedData(MI)) {
+    return;
+  }
+  DebugLoc DL;
+  // If we have packed data, then the shift size is no longer
+  // the same as the load size and we need to adjust accordingly
+  switch(getPackedID(MI)) {
+  default:
+    break;
+  case PACK_V2I8:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi64Literal(0xFFULL | (0xFFULL << 32)));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHL_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addImm(mMFI->addi64Literal(8ULL << 32));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1011);
+    }
+    break;
+  case PACK_V4I8:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_v4i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(0xFF));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHL_v4i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi128Literal(8ULL << 32, (16ULL | (24ULL << 32))));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v2i64), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1011);
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1011);
+    }
+    break;
+  case PACK_V2I16:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(0xFFFF));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHL_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi64Literal(16ULL << 32));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v2i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1011);
+    }
+    break;
+  case PACK_V4I16:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::BINARY_AND_v4i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(0xFFFF));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::SHL_v4i32), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi64Literal(16ULL << 32));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::HILO_BITOR_v4i16), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1011);
+    }
+    break;
+  case UNPACK_V2I8:
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::USHRVEC_i32), AMDIL::R1012)
+      .addReg(AMDIL::R1011)
+      .addImm(mMFI->addi32Literal(8));
+    BuildMI(*mBB, I, DL, mTII->get(AMDIL::LCREATE), AMDIL::R1011)
+      .addReg(AMDIL::R1011).addReg(AMDIL::R1012);
+    break;
+  case UNPACK_V4I8:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::VCREATE_v4i8), AMDIL::R1011)
+        .addReg(AMDIL::R1011);
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::USHRVEC_v4i8), AMDIL::R1011)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi128Literal(8ULL << 32, (16ULL | (24ULL << 32))));
+    }
+    break;
+  case UNPACK_V2I16:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::USHRVEC_i32), AMDIL::R1012)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(16));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::LCREATE), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1012);
+    }
+    break;
+  case UNPACK_V4I16:
+    {
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::USHRVEC_v2i32), AMDIL::R1012)
+        .addReg(AMDIL::R1011)
+        .addImm(mMFI->addi32Literal(16));
+      BuildMI(*mBB, I, DL, mTII->get(AMDIL::LCREATE_v2i64), AMDIL::R1011)
+        .addReg(AMDIL::R1011).addReg(AMDIL::R1012);
+    }
+    break;
+  };
+}