path: root/OpenAL32/Include
Commit message | Author | Date | Files | Lines
* Remove unused channel enums | Chris Robinson | 2014-11-23 | 1 | -2/+0
* Remove the cube+diamond virtual layout | Chris Robinson | 2014-11-23 | 1 | -1/+1
* Partially revert "Use a different method for HRTF mixing" | Chris Robinson | 2014-11-23 | 2 | -1/+11
  The sound localization with virtual channel mixing was just too poor, so
  while it's more costly to do per-source HRTF mixing, it's unavoidable if you
  want good localization. This is only partially reverted because having the
  virtual channel is still beneficial, particularly with B-Format rendering
  and effect mixing, which otherwise skip HRTF processing. As before, the
  number of virtual channels can potentially be customized, specifying more or
  fewer channels depending on the system's needs.
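As a rough illustration of the per-source approach this revert returns to, the sketch below convolves one mono source directly with a left/right HRIR pair at mix time. It is not OpenAL Soft's mixer; the HRIR length, function name, and the lack of filter history between calls are simplifications for brevity.

    /* Hedged sketch: per-source HRTF mixing in the style described above.
     * Not the actual OpenAL Soft code; HRIR_LENGTH is an assumption and no
     * convolution history is carried across buffer boundaries. */
    #include <stddef.h>

    #define HRIR_LENGTH 32

    static void mix_source_hrtf(const float *src, size_t frames,
                                const float lhrir[HRIR_LENGTH],
                                const float rhrir[HRIR_LENGTH],
                                float *out_left, float *out_right)
    {
        for(size_t i = 0;i < frames;i++)
        {
            for(size_t j = 0;j < HRIR_LENGTH && j <= i;j++)
            {
                /* Accumulate into the stereo mix so multiple sources sum. */
                out_left[i]  += src[i-j] * lhrir[j];
                out_right[i] += src[i-j] * rhrir[j];
            }
        }
    }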
* Attempt to use BS2B when using headphones without HRTF | Chris Robinson | 2014-11-22 | 1 | -1/+0
* Rework HRTF decision logic | Chris Robinson | 2014-11-22 | 1 | -0/+1
  This way takes into account a new stereo-mode config option, which when set
  to "headphones" will default to using HRTF. Eventually the device will also
  be able to specify if headphones are being used.
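For reference, the stereo-mode option described above would be set in alsoft.conf roughly as follows. The [general] section name and value spelling are assumptions based on the project's usual config layout; alsoftrc.sample is the authoritative reference.

    # Hypothetical alsoft.conf excerpt; section and value names are assumed.
    [general]
    stereo-mode = headphones   # treat stereo output as headphones, defaulting to HRTF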
* Remove an unused macro | Chris Robinson | 2014-11-22 | 1 | -3/+0
* Rename Voice's NumChannels to OutChannels | Chris Robinson | 2014-11-22 | 1 | -1/+1
* Store the number of output channels in the voice | Chris Robinson | 2014-11-22 | 1 | -0/+1
* Remove an unnecessary union container | Chris Robinson | 2014-11-22 | 1 | -3/+1
* Use a different method for HRTF mixing | Chris Robinson | 2014-11-22 | 2 | -28/+33
  This new method mixes sources normally into a 14-channel buffer with the
  channels placed all around the listener. HRTF is then applied to the
  channels given their positions and written to a 2-channel buffer, which gets
  written out to the device.

  This method has the benefit that HRTF processing becomes more scalable. The
  costly HRTF filters are applied to the 14-channel buffer after the mix is
  done, turning it into a post-process with a fixed overhead. Mixing sources
  is done with normal non-HRTF methods, so increasing the number of playing
  sources only incurs normal mixing costs. Another benefit is that it improves
  B-Format playback since the soundfield gets mixed into speakers covering all
  three dimensions, which then get filtered based on their locations.

  The main downside to this is that the spatial resolution of the HRTF dataset
  does not play a big role anymore. However, the hope is that with
  ambisonics-based panning, the perceptual position of panned sounds will
  still be good. It is also an option to increase the number of virtual
  channels for systems that can handle it, or maybe even decrease it for
  weaker systems.
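The fixed-overhead post-process described above can be sketched roughly as below. This is not the code from the commit; the channel count, HRIR length, buffer size, and names are placeholders, and filter history between buffers is ignored.

    /* Hedged sketch of HRTF as a post-process: each virtual channel is
     * convolved with the HRIR for its direction and summed into the stereo
     * output. Names and sizes are assumptions, not OpenAL Soft's. */
    #include <stddef.h>

    #define NUM_VIRTUAL   14
    #define HRIR_LENGTH   32
    #define BUFFER_FRAMES 1024

    static void hrtf_post_process(const float virt[NUM_VIRTUAL][BUFFER_FRAMES],
                                  const float hrirs[NUM_VIRTUAL][2][HRIR_LENGTH],
                                  float *left, float *right, size_t frames)
    {
        for(int c = 0;c < NUM_VIRTUAL;c++)
        {
            for(size_t i = 0;i < frames;i++)
            {
                for(size_t j = 0;j < HRIR_LENGTH && j <= i;j++)
                {
                    left[i]  += virt[c][i-j] * hrirs[c][0][j];
                    right[i] += virt[c][i-j] * hrirs[c][1][j];
                }
            }
        }
    }

The cost of this loop depends only on the number of virtual channels and the HRIR length, not on how many sources were mixed, which is the scalability benefit the commit message describes.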
* Allocate the DryBuffer dynamically | Chris Robinson | 2014-11-21 | 1 | -1/+1
* Remove the unused angle and elevation from the device channel config | Chris Robinson | 2014-11-15 | 1 | -2/+0
* Remove the unused wide-stereo option | Chris Robinson | 2014-11-08 | 1 | -3/+0
* Move a declaration | Chris Robinson | 2014-11-07 | 1 | -1/+2
* Pass the output device channel count to ALeffectState::process | Chris Robinson | 2014-11-07 | 2 | -2/+6
* Rename speakers to channels, and remove an old incorrect comment | Chris Robinson | 2014-11-07 | 1 | -4/+2
* Use a separate macro for the max output channel count | Chris Robinson | 2014-11-07 | 2 | -12/+11
* Fix 5.1 surround sound | Chris Robinson | 2014-11-07 | 1 | -2/+2
  Apparently, 5.1 surround sound is supposed to use the "side" channels, not
  the back channels, and we've been wrong this whole time. That means the
  "5.1 Side" is actually the correct 5.1 setup, and using the back channels is
  anomalous. Additionally, this means the 5.1 buffer format should also use
  the side channels instead of the back channels.

  A final note: the 5.1 mixing coefficients are changed so both use the
  original 5.1 surround sound set (with the surround channels at +/-110
  degrees). So the only difference now between 5.1 "side" and 5.1 "back" is
  the channel labels.
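To make the +/-110 degree point above concrete, a standard 5.1 layout places its speakers roughly as follows. This table is illustrative, based on the common ITU-style arrangement, and is not data taken from the commit.

    /* Illustrative 5.1 speaker placement; angles in degrees, 0 = front,
     * negative = listener's left. Only the surround pair's label ("side" vs
     * "back") differs between the two 5.1 configurations discussed above. */
    static const struct { const char *name; float angle; } Speakers51[6] = {
        { "front-left",     -30.0f },
        { "front-right",     30.0f },
        { "front-center",     0.0f },
        { "lfe",              0.0f },  /* no meaningful direction */
        { "surround-left",  -110.0f },
        { "surround-right",  110.0f },
    };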
* Remove the channel name from ChannelConfig | Chris Robinson | 2014-11-05 | 1 | -1/+0
* Use a method to set omni-directional channel gains | Chris Robinson | 2014-11-04 | 1 | -14/+7
* Support B-Format source rotation with AL_ORIENTATION | Chris Robinson | 2014-10-31 | 2 | -3/+5
* Rename the source's Orientation to Direction | Chris Robinson | 2014-10-31 | 1 | -1/+1
* Add preliminary AL_EXT_BFORMAT support | Chris Robinson | 2014-10-31 | 3 | -1/+14
  Currently missing the AL_ORIENTATION source property. Gain stepping also
  does not work.
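A minimal usage sketch for the AL_EXT_BFORMAT extension added above, assuming 16-bit four-channel (W, X, Y, Z) interleaved data. The helper name and sample layout are assumptions, and error handling is omitted.

    #include <AL/al.h>

    /* Returns a buffer filled with 3D B-Format audio, or 0 if the extension
     * is unavailable. The format token is queried by name since it comes
     * from AL_EXT_BFORMAT rather than core OpenAL. */
    static ALuint load_bformat3d(const ALshort *wxyz, ALsizei num_bytes, ALsizei srate)
    {
        ALuint buffer = 0;
        ALenum format;

        if(!alIsExtensionPresent("AL_EXT_BFORMAT"))
            return 0;
        format = alGetEnumValue("AL_FORMAT_BFORMAT3D_16");

        alGenBuffers(1, &buffer);
        alBufferData(buffer, format, wxyz, num_bytes, srate);
        return buffer;
    }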
* Make alcSuspendContext and alcProcessContext batch updates | Chris Robinson | 2014-10-12 | 1 | -0/+3
  This behavior better matches Creative's hardware drivers and Rapture3D's
  OpenAL driver. A compatibility environment variable is provided to restore
  the old no-op behavior for any app that behaves badly from this change (set
  __ALSOFT_SUSPEND_CONTEXT to "ignore"). If too many apps have a problem with
  this, the default behavior may need to be changed to ignore, with the env
  var providing an option to defer/batch instead.
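A short usage sketch of the batching behavior described above: the two property changes are deferred while the context is suspended and take effect together when processing resumes. The helper name is made up for illustration.

    #include <AL/al.h>
    #include <AL/alc.h>

    /* Update a source's position and velocity as one combined change. */
    static void move_source(ALCcontext *context, ALuint source,
                            const ALfloat pos[3], const ALfloat vel[3])
    {
        alcSuspendContext(context);  /* begin deferring state changes */
        alSource3f(source, AL_POSITION, pos[0], pos[1], pos[2]);
        alSource3f(source, AL_VELOCITY, vel[0], vel[1], vel[2]);
        alcProcessContext(context);  /* apply the deferred changes together */
    }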
* Add a helper to search for a channel index by name | Chris Robinson | 2014-10-02 | 1 | -0/+17
* Store default speaker configurations in a struct | Chris Robinson | 2014-10-02 | 1 | -11/+14
* Make ComputeAngleGains use ComputeDirectionalGains | Chris Robinson | 2014-10-02 | 2 | -5/+6
* Don't use ComputeAngleGains for SetGains | Chris Robinson | 2014-10-02 | 1 | -1/+5
* Use an ambisonics-based panning method | Chris Robinson | 2014-09-30 | 2 | -0/+13
  For mono sources, third-order ambisonics is utilized to generate panning
  gains. The general idea is that a panned mono sound can be encoded into
  b-format ambisonics as:

    w[i] = sample[i] * 0.7071;
    x[i] = sample[i] * dir[0];
    y[i] = sample[i] * dir[1];
    ...

  and subsequently rendered using:

    output[chan][i] = w[i] * w_coeffs[chan] +
                      x[i] * x_coeffs[chan] +
                      y[i] * y_coeffs[chan] +
                      ...;

  By reordering the math, channel gains can be generated by doing:

    gain[chan] = 0.7071 * w_coeffs[chan] +
                 dir[0] * x_coeffs[chan] +
                 dir[1] * y_coeffs[chan] +
                 ...;

  which then get applied as normal:

    output[chan][i] = sample[i] * gain[chan];

  One of the reasons to use ambisonics for panning is that it provides
  arguably better reproduction for sounds emanating from between two speakers.
  As well, this makes it easier to pan in all 3 dimensions, with for instance
  a "3D7.1" or 8-channel cube speaker configuration by simply providing the
  necessary coefficients (this will need some work since some methods still
  use angle-based panpot, particularly multi-channel sources).

  Unfortunately, the math to reliably generate the coefficients for a given
  speaker configuration is too costly to do at run-time. They have to be
  pre-generated based on a pre-specified speaker arrangement, which means the
  config options for tweaking speaker angles are no longer supportable.
  Eventually I hope to provide config options for custom coefficients, which
  can either be generated and written in manually, or via alsoft-config from
  user-specified speaker positions.

  The current default set of coefficients were generated using the MATLAB
  scripts (compatible with GNU Octave) from the excellent Ambisonic Decoder
  Toolbox, at https://bitbucket.org/ambidecodertoolbox/adt/
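As a compilable restatement of the reordered math above, the first-order sketch below fills a per-channel gain array from a direction vector and per-speaker decoder coefficients. The coefficient arrays are placeholders, not the toolbox-generated values, and the real implementation uses third-order terms.

    #include <stddef.h>

    /* First-order version of gain[chan] = 0.7071*w + dir.x*x + dir.y*y + dir.z*z.
     * The *_coeffs arrays are assumed per-speaker decoder coefficients. */
    static void compute_panning_gains(const float dir[3],
                                      const float w_coeffs[], const float x_coeffs[],
                                      const float y_coeffs[], const float z_coeffs[],
                                      float gains[], size_t num_channels)
    {
        for(size_t c = 0;c < num_channels;c++)
            gains[c] = 0.7071f * w_coeffs[c] +
                       dir[0]  * x_coeffs[c] +
                       dir[1]  * y_coeffs[c] +
                       dir[2]  * z_coeffs[c];
    }

Each output sample is then just sample[i] * gains[c], the same per-channel multiply used by any other panning method.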
* Combine some fields into a struct | Chris Robinson | 2014-09-10 | 1 | -3/+6
* Invert the ChannelOffsets array | Chris Robinson | 2014-09-10 | 1 | -1/+2
* Remove the GetLatency method from the old BackendFuncs | Chris Robinson | 2014-09-08 | 1 | -4/+0
* Convert the winmm backend to the new backend API | Chris Robinson | 2014-09-08 | 1 | -3/+0
* Make the fontsound's buffer and link fields atomic | Chris Robinson | 2014-09-03 | 1 | -2/+3
* Make the buffer's pack and unpack properties atomic | Chris Robinson | 2014-09-03 | 1 | -2/+2
* Remove a couple unnecessary typedefs | Chris Robinson | 2014-08-24 | 1 | -3/+0
* Convert the wave writer backend to the new API | Chris Robinson | 2014-08-24 | 1 | -3/+0
* Use al_malloc/al_free for default allocators | Chris Robinson | 2014-08-24 | 1 | -2/+2
* Rename activesource to voice | Chris Robinson | 2014-08-21 | 3 | -10/+10
* Use an array of objects for active sources instead of pointers | Chris Robinson | 2014-08-21 | 1 | -1/+1
* Use a NULL source for inactive activesources | Chris Robinson | 2014-08-21 | 2 | -5/+12
  Also only access the activesource's source field once per update.
* ALC_SOFT_pause_device is finished | Chris Robinson | 2014-08-12 | 1 | -10/+0
* Use atomics for the device and context list heads | Chris Robinson | 2014-08-01 | 1 | -2/+2
* Make the source's buffer queue head and current queue item atomic | Chris Robinson | 2014-07-31 | 1 | -5/+5
* Use generic atomics in more places | Chris Robinson | 2014-07-22 | 2 | -3/+3
* Add macros for generic atomic functionality | Chris Robinson | 2014-07-22 | 2 | -2/+2
* Make some functions static | Chris Robinson | 2014-07-20 | 1 | -2/+0
* Add a source radius property that determines the directionality of a sound | Chris Robinson | 2014-07-11 | 1 | -0/+2
  At 0 distance from the listener, the sound is omni-directional. As the
  source and listener become 'radius' units apart, the sound becomes more
  directional. With HRTF, an omni-directional sound is handled using 0-delay,
  pass-through filter coefficients, which are blended with the real delay and
  coefficients as needed to become more directional.
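A hedged usage sketch of the property described above, assuming it is exposed through the AL_EXT_SOURCE_RADIUS extension. The token is looked up by name rather than taken from a header, and the helper name is made up for illustration.

    #include <AL/al.h>

    /* Set the radius within which the sound remains (mostly) omni-directional;
     * it becomes fully directional once the listener is 'radius' units away. */
    static void set_source_radius(ALuint source, float radius)
    {
        if(alIsExtensionPresent("AL_EXT_SOURCE_RADIUS"))
        {
            ALenum param = alGetEnumValue("AL_SOURCE_RADIUS");
            alSourcef(source, param, radius);
        }
    }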
* Store 4 modulators per map entry | Chris Robinson | 2014-07-06 | 1 | -0/+1
* Regroup and reorganize some macros | Chris Robinson | 2014-07-06 | 1 | -40/+57