!lsbFirst; Add unit tests

from T::java_class()

shall only proceed if !running

user services

... without passing java class type nor ctor name, just relying on ctor code.

lock on modifying count_down() and synchronizing wait*()
See similar commit 4d67b951ef9a119c1610c4faf1b9ab9c79a69167 regarding jau::latch.
Using condition_variable requires us to hold the same mutex lock on modifying count_down() and synchronizing wait*(),
i.e. complying with the pattern described in <https://en.cppreference.com/w/cpp/thread/condition_variable>:
"Even if the shared variable is atomic,
it must be modified under the mutex in order to correctly publish the modification to the waiting thread."
Notable: The mutex lock shall only be held during modification and
the corresponding condition_variable wait-loop.
The notify*() shall still be issued w/o lock, i.e. outside the mutex scope,
to avoid an inefficient double-lock at wait*().
This patch reverts the change of commit 075af3920c7e187655041fccd41dbd9f2c739adb
in regards to the removed mutex lock on modifying the read or write position.
This patch fixes a potential 'wait_for( <timeout> )' maximum waiting time,
where the modification of the condition's variable (the position) is not properly recognized
when the ringbuffer is full (at write) or empty (at read).
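For illustration, a minimal sketch of the described locking pattern (hypothetical names, not the actual jau::ringbuffer code): the position is modified while the mutex is held, the waiter loops on the condition_variable under the same mutex, and notify_all() is issued only after the lock has been released.

```cpp
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Hypothetical sketch of the locking pattern described above,
// not the actual jau::ringbuffer implementation.
struct write_side {
    std::mutex              mtx;       // same mutex for modification and wait
    std::condition_variable cv;
    std::size_t             writePos = 0;

    void publish_write(std::size_t newPos) {
        {
            std::lock_guard<std::mutex> lock(mtx);  // modify under the mutex to correctly
            writePos = newPos;                      // publish the change to waiting threads
        }                                           // release the lock first, ...
        cv.notify_all();                            // ... then notify w/o holding it
    }

    // Reader side: wait until new data is available or the timeout expired.
    template<typename Rep, typename Period>
    bool wait_for_data(std::size_t readPos, std::chrono::duration<Rep, Period> timeout) {
        std::unique_lock<std::mutex> lock(mtx);     // same mutex as publish_write()
        return cv.wait_for(lock, timeout, [&]{ return writePos != readPos; });
    }
};
```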
another timeout_duration for sporadic wake-ups w/o condition nor timeout hit

modifying count_down() and synchronizing wait*()
Using condition_variable requires us to hold the same mutex lock on modifying count_down() and synchronizing wait*().
This is required so the modification of the shared variable in count_down()
cannot slip past the synchronization in wait().
The now-fixed bug led to sporadic maximum timeouts within latch::wait_for(..),
even though the condition had been reached beforehand, incl. notification.
Notable: The mutex lock shall only be held during modification and
the corresponding condition_variable wait-loop.
The notify*() shall still be issued w/o lock, i.e. outside the mutex scope,
to avoid inefficient double-lock at wait*().
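A simplified latch sketch following the same rule (illustration only, not the actual jau::latch source): count_down() modifies the counter under the mutex, wait() loops under the same mutex, and notify_all() happens outside the lock.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Simplified latch sketch; illustration only, not the jau::latch implementation.
class simple_latch {
    std::mutex              mtx;
    std::condition_variable cv;
    std::size_t             count;

  public:
    explicit simple_latch(std::size_t n) noexcept : count(n) {}

    void count_down() {
        bool reached_zero;
        {
            std::lock_guard<std::mutex> lock(mtx);  // modify the counter under the mutex,
            reached_zero = ( 0 == --count );        // so a concurrent wait() cannot miss it
        }
        if( reached_zero ) {
            cv.notify_all();                        // notify outside the mutex scope
        }
    }

    void wait() {
        std::unique_lock<std::mutex> lock(mtx);     // same mutex as count_down()
        cv.wait(lock, [&]{ return 0 == count; });
    }
};
```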
manual '.load()' using jau::to_string() etc.

with timeout duration value

This latch implementation uses a size_t counter, exceeding std::latch's ptrdiff_t.
Since we use memory_order_seq_cst for the counter atomic, try_wait() shall always return the correct result.
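A hypothetical fragment of what this claim boils down to (not the jau::latch source): with the default memory_order_seq_cst, a plain load of the counter is sufficient for a correct try_wait() answer.

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical fragment; the counter uses the default memory_order_seq_cst.
std::atomic<std::size_t> count{0};

bool try_wait() noexcept {
    return 0 == count.load();   // seq_cst load by default
}
```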
re-block of notified wait() thread.
Holding the same lock at notify_all() as the waiting thread would wake up the waiting thread,
only for it to block again as it attempts to re-acquire the lock.
Hence the lock is removed there.
A lock is also not required for the readPos or writePos fields,
as they are atomics of memory_order_seq_cst.

Add notion of an operating threading mode for more efficiency (see the sketch below):
- One producer-thread and one consumer-thread (default)
- Multiple producer-threads and multiple consumer-threads
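A purely illustrative sketch of such an operating mode (the enum and member names below are invented, not the actual ringbuffer API): the single-producer/single-consumer mode can skip the per-side locking that the multi-threaded mode requires.

```cpp
#include <mutex>

// Purely illustrative sketch; the enum and member names are invented here
// and are not the actual jau::ringbuffer API.
enum class threading_mode {
    one_producer_one_consumer,      // default: no same-side locking required
    multi_producer_multi_consumer   // producers (and consumers) serialize among themselves
};

template<typename T>
class ringbuffer_sketch {
    threading_mode mode;
    std::mutex     producer_mtx;    // only used in the multi-producer mode

    void put_impl(const T& /*value*/) { /* enqueue the element */ }

  public:
    explicit ringbuffer_sketch(threading_mode m) noexcept : mode(m) {}

    void put(const T& value) {
        if( threading_mode::multi_producer_multi_consumer == mode ) {
            std::lock_guard<std::mutex> lock(producer_mtx);  // serialize concurrent producers
            put_impl(value);
        } else {
            put_impl(value);                                  // single producer: no extra lock
        }
    }
};
```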
copyIntoImpl()
Ensure consistent behavior and same optimization.

`use_memset` non-type template param use `use_memcpy`
- ringbuffer API change: Drop the whole `NullValue` *angle*, simplifying:
  - Drop `Value_type [get|peek]*()`, use `bool [get|peek]*(Value_type&)` instead (see the sketch below).
  - Use the `bool` return to determine success and the `Value_type` reference as storage.
  - Drop the `NullValue_type` template type param and the `NullValue_type` ctor argument.
  - Simplifies and unifies single and multi get and put, as well as testing (motivation).
- ringbuffer: Drop the `use_memset` non-type template param,
  simply use `use_memcpy`, having the same *TriviallyCopyable* semantics.
  - Favor ::memcpy over ::memmove if applicable;
    don't confuse this with our `use_memmove` semantics :)
- Use a proper 'void*' cast to lose const'ness; drop the non-required 'void*' cast for the source (memmove).
- Use global namespace ::memmove and ::explicit_bzero
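A usage sketch of the changed getter shape (hypothetical function name; it works with any buffer exposing `bool get(Value_type&)` and is not tied to the exact ringbuffer template signature):

```cpp
#include <cstdio>

// Hypothetical usage sketch of the changed getter shape.
template<typename Ring>
void drain(Ring& rb) {
    int value;
    while( rb.get(value) ) {          // the bool return reports success,
        std::printf("%d\n", value);   // the element is delivered via the reference
    }
    // The loop ends once the buffer reports empty, i.e. get() returned false.
}
```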
'void*' cast for source (memmove); Use global namespace ::memmove and ::explicit_bzero

memory ops are either std:: or global namespace ::

move-semantics, instead of copy-semantics (std::initializer_list)
Using `std::initializer_list` requires us to *copy* the given value_type objects into this darray.
This is due to `std::initializer_list` exposing its iterator as `const`.
Initially I hacked the `std::initializer_list` ctor to cast away 'const', which practically worked.
However, this is not as designed: the `std::initializer_list` storage has been produced by
the compiler, and stealing it may lead to undefined behavior (UB).
Hence we need a self-made construct, like std::make_shared<>(..).
Inspired by Tristan Brindle's <https://tristanbrindle.com/posts/beware-copies-initializer-list>,
I added similar, more enhanced functionality to darray and cow_darray:
'template <typename... Args>
constexpr void push_back_list(Args&&... args)'
push_back_list() moves the whole argument list
into our array as an atomic operation.
The latter is more important for cow_darray of course.
Storage space for all elements is adjusted
and all elements are added.
The outer template 'make_[cow_]darray<..>(..)'
simply creates the array instance with the desired size
and passes the argument list to push_back_list().
Since we can't properly have the array's Value_type deduced
if used as a template type argument, this template
passes the First and all Next arguments (pack) in a dedicated fashion.
After constraining the template to having the same type for all arguments,
we use the First type for the array definition.
This version can only handle argument lists of size 2 or greater.
Hence a complementary template for one argument only has been added.
Features used here:
- C++11 template parameter pack
- C++17 fold expression
Tested with Direct-BT.
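A simplified, stand-alone sketch of the described mechanism, using std::vector as stand-in storage and invented names (tiny_array, make_tiny_array); the actual darray/cow_darray code differs in its details.

```cpp
#include <cstdio>
#include <type_traits>
#include <utility>
#include <vector>

// Simplified stand-in for the described push_back_list()/make_*darray() mechanism.
template<typename T>
struct tiny_array {
    std::vector<T> storage;

    // Moves/forwards the whole argument list into the array (C++17 fold expression).
    template<typename... Args>
    void push_back_list(Args&&... args) {
        storage.reserve(storage.size() + sizeof...(Args));        // adjust storage once
        ( storage.push_back( std::forward<Args>(args) ), ... );   // add all elements
    }
};

// Constrains all arguments to the same (decayed) type and uses the first
// argument's type for the array definition, as described above.
template<typename First, typename... Next>
tiny_array<std::decay_t<First>> make_tiny_array(First&& first, Next&&... next) {
    static_assert( ( std::is_same_v<std::decay_t<First>, std::decay_t<Next>> && ... ),
                   "all arguments must be of the same type" );
    tiny_array<std::decay_t<First>> result;
    result.push_back_list( std::forward<First>(first), std::forward<Next>(next)... );
    return result;
}

int main() {
    auto a = make_tiny_array(1, 2, 3);             // all elements forwarded in one call
    std::printf("size %zu\n", a.storage.size());   // -> size 3
    return 0;
}
```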
path, push_back must happen on new storage

low-memory, PRINTF for top ctor, dtor, ..

free. Memory already bzero'ed when object got removed.

storage for more efficient placement-new ctor/dtor (like darray)

sec_mem -> use_secmem; Add Value_type traits for uses_memmove and use_secmem.
Change also applied to cow_darray<>.
Added full API doc for these Non-Type Template Parameters (NTTP).
Since `use_realloc` has been removed, all use cases must be validated for the changed template parameters.
Users can now add the following typedefs to avoid spelling out the NTTP (see the sketch below):
- typedef std::true_type container_memmove_compliant;
- typedef std::true_type enforce_secmem;
^^ these will be queried via compile-time traits and set the default values of the NTTP.
+++
darray::grow_storage_move(Size_type) now returns the maximum
of growth * old_capacity and new_capacity,
i.e. not dropping the `golden rule` growth factor.
When clearing a darray instance (e.g. after std::move), only the iterators
and hence the heap pointer are nulled - not the allocator (this was a bug).
Added darray::shrink_to_fit()
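A sketch of how a user value type might declare the two typedefs mentioned above (the class and member names are invented for illustration):

```cpp
#include <cstdint>
#include <type_traits>

// Sketch of a user value type opting into the traits named above; the container
// is then expected to pick these typedefs up via compile-time trait queries and
// use them as the NTTP default values.
class TinyKey {
  public:
    // declare the type memmove-compliant (uses_memmove optimization)
    typedef std::true_type container_memmove_compliant;

    // request secure memory erasure of freed slots (use_secmem)
    typedef std::true_type enforce_secmem;

    explicit TinyKey(std::uint64_t v) noexcept : value(v) {}
    std::uint64_t get() const noexcept { return value; }

  private:
    std::uint64_t value;
};
```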
'constexpr if': if constexpr ( .. )
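A generic illustration of the C++17 'constexpr if' form referred to above (not the specific call sites of this commit): the branch not taken is discarded at compile time.

```cpp
#include <string.h>
#include <type_traits>

// Replace a runtime branch with a compile-time branch on a type trait.
template<typename T>
void copy_one(T* dst, const T* src) {
    if constexpr ( std::is_trivially_copyable_v<T> ) {
        ::memcpy(dst, src, sizeof(T));   // byte-wise copy for trivially copyable types
    } else {
        *dst = *src;                     // otherwise fall back to copy assignment
    }
}
```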
order

freeSlots()

-> size(); getCapacity() -> capacity()
Also throw IllegalArgumentException in the POctets ctor taking a source buffer w/ size to copy, if the buffer is nullptr while size > 0.
std::exception); Generalize our Exception structure w/ ExceptionBase

endian::big at ctor
One important detail which got totally lost when moving *Octets from direct_bt to jau
is the endian awareness.
This change allows using jau::*Octets with either endian::little (for direct_bt BT data)
or endian::big (e.g. for eth/tcp/ip networking stack).
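A generic sketch of what endian awareness at construction enables (invented names, not the actual jau::*Octets API): the byte order is stored once and the same accessor serves little-endian BT payloads as well as big-endian network payloads.

```cpp
#include <cstddef>
#include <cstdint>

// Invented illustration, not the jau::*Octets code.
enum class endian_mode { little, big };

struct octet_view {
    std::uint8_t* data;
    endian_mode   byte_order;   // fixed at construction, as described above

    void put_uint16(std::size_t offset, std::uint16_t v) const noexcept {
        if( endian_mode::little == byte_order ) {
            data[offset]     = static_cast<std::uint8_t>(v);        // LSB first
            data[offset + 1] = static_cast<std::uint8_t>(v >> 8);
        } else {
            data[offset]     = static_cast<std::uint8_t>(v >> 8);   // MSB first
            data[offset + 1] = static_cast<std::uint8_t>(v);
        }
    }
};
```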