path: root/include
Entries: commit message | author | date | files changed | lines (-removed/+added)
* Add Java/C++ hexStringBytes(..); Fix Java's bytesHexString(..) path !lsbFirst; Add unit tests
  Sven Gothel, 2022-01-27; 1 file changed, -0/+14
* helper_jni: Add convert_instance_to_jobject(..) with given jclass, different from T::java_class()
  Sven Gothel, 2022-01-27; 1 file changed, -0/+16
* Fixes for clang++ 11.0.1-2
  Sven Gothel, 2022-01-25; 1 file changed, -1/+5
* service_runner: Add facility for optional singleton sighandler; start() shall only proceed if !running
  Sven Gothel, 2022-01-17; 1 file changed, -2/+39
* jau::server_runner: Fix API doc; Have get_name() return const reference only. [v0.7.8]
  Sven Gothel, 2022-01-13; 1 file changed, -3/+3
* Added `jau::service_runner`, a reusable dedicated thread performing custom user services [v0.7.7]
  Sven Gothel, 2022-01-12; 1 file changed, -0/+140
* Add helper_jni.hpp: convert_vector_uniqueptr_to_jarraylist(..) variant without passing java class type nor ctor name, just relying on ctor code. [v0.7.6]
  Sven Gothel, 2022-01-04; 1 file changed, -0/+54
* jau::ringbuffer: Using condition_variable requires us to hold same mutex lock on modifying count_down() and synchronizing wait*()
  Sven Gothel, 2021-12-05; 1 file changed, -18/+12

  See similar commit 4d67b951ef9a119c1610c4faf1b9ab9c79a69167 regarding jau::latch.

  Using condition_variable requires us to hold the same mutex lock on modifying count_down() and
  synchronizing wait*(), i.e. complying with the pattern described in
  <https://en.cppreference.com/w/cpp/thread/condition_variable>:
  "Even if the shared variable is atomic, it must be modified under the mutex in order to
  correctly publish the modification to the waiting thread."

  Notable: The mutex lock shall only be held during modification and the corresponding
  condition_variable wait-loop. The notify*() shall still be issued w/o lock, i.e. out of mutex
  scope, to avoid an inefficient double-lock at wait*().

  This patch reverts the change of commit 075af3920c7e187655041fccd41dbd9f2c739adb in regards to
  the removed mutex lock on modifying the read or write position.

  This patch fixes a potential 'wait_for( <timeout> )' maximum waiting time, where the
  modification of the condition variable (position) is not properly recognized when the ringbuffer
  is full (at write) or empty (at read).
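  For illustration, a minimal sketch of the cited cppreference pattern, independent of jaulib's
  actual ringbuffer/latch code: the shared counter is modified under the mutex, the wait-loop
  holds the same mutex, and notify_all() is issued outside the lock. Type and member names below
  are illustrative only.

      #include <condition_variable>
      #include <cstddef>
      #include <mutex>

      class counter_sync {
        private:
          std::mutex mtx;
          std::condition_variable cv;
          std::size_t count = 1;

        public:
          void count_down() {
              {
                  std::unique_lock<std::mutex> lock(mtx); // modify shared state under the mutex ...
                  if( count > 0 ) { --count; }            // ... so the waiter can't miss the update
              }
              cv.notify_all();                            // notify w/o holding the lock: no double-lock at wait*()
          }

          void wait() {
              std::unique_lock<std::mutex> lock(mtx);     // same mutex guards the wait-loop
              cv.wait(lock, [this]{ return 0 == count; });
          }
      };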
* jau:latch::wait_for(..): Reuse absolute timeout time_point, don't wait for another timeout_duration for sporadic wake-ups w/o condition nor timeout hit
  Sven Gothel, 2021-12-05; 1 file changed, -3/+6
* jau::latch: Using condition_variable requires us to hold same mutex lock on modifying count_down() and synchronizing wait*()
  Sven Gothel, 2021-12-05; 1 file changed, -4/+3

  Using condition_variable requires us to hold the same mutex lock on modifying count_down() and
  synchronizing wait*(). This is required to not slip the modification of the shared variable in
  count_down() while synchronizing in wait().

  The fixed bug led to sporadic maximum timeouts within latch::wait_for(..), even though the
  condition had been reached beforehand, incl. notification.

  Notable: The mutex lock shall only be held during modification and the corresponding
  condition_variable wait-loop. The notify*() shall still be issued w/o lock, i.e. out of mutex
  scope, to avoid an inefficient double-lock at wait*().
* ordered_atomic: Add to_string(const ordered_atomic<>&), allowing to skip manual '.load()' using jau::to_string() etc.
  Sven Gothel, 2021-11-17; 1 file changed, -0/+6
* latch: Extend with wait_for() and arrive_and_wait_for(), i.e. add variants with timeout duration value
  Sven Gothel, 2021-11-16; 1 file changed, -2/+116
* latch: Fix and add unit test
  Sven Gothel, 2021-11-16; 1 file changed, -6/+10
* latch: Move into namespace jau
  Sven Gothel, 2021-11-16; 1 file changed, -66/+70
* Add jau::latch implementation for C++17, inspired by std::latch C++20
  Sven Gothel, 2021-11-16; 1 file changed, -0/+112

  This latch implementation uses a size_t counter, exceeding std::latch's ptrdiff_t. Since we use
  memory_order_seq_cst for the counter atomic, try_wait() shall always return the correct result.
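  A usage sketch, assuming jau::latch follows the std::latch-style interface this commit describes
  (count ctor, count_down(), wait()); the header path is likewise an assumption.

      #include <jau/latch.hpp>   // assumed header location
      #include <thread>

      void run_three_workers() {
          jau::latch done(3);                                   // size_t counter, per the commit message
          std::thread t1( [&]() { /* work */ done.count_down(); } );
          std::thread t2( [&]() { /* work */ done.count_down(); } );
          std::thread t3( [&]() { /* work */ done.count_down(); } );
          done.wait();                                          // returns once the counter reached zero
          t1.join(); t2.join(); t3.join();
      }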
* ringbuffer: Remove locking mutex before notify_all, leading to pessimistic re-block of notified wait() thread.
  Sven Gothel, 2021-11-16; 1 file changed, -6/+12

  Holding the same lock @ notify_all as the waiting thread would lead to waking up the waiting
  thread, which would only be blocked again as it attempts to re-acquire the lock. Hence removing it.

  A lock is also not required for the readPos or writePos field, as it is an atomic of
  memory_order_seq_cst.
* helper_jni: Add checkAndGetObject(..)
  Sven Gothel, 2021-11-15; 1 file changed, -0/+13
* Fix Doxygen <tt> on '#### `something`', remove the code-qualifier
  Sven Gothel, 2021-11-03; 3 files changed, -7/+8
* ringbuffer: Refine API doc [v0.7.1]
  Sven Gothel, 2021-11-03; 1 file changed, -7/+10
* ringbuffer: Add notion of operating threading mode for more efficiency [v0.7.0]
  Sven Gothel, 2021-11-02; 1 file changed, -232/+470

  Add notion of operating threading mode for more efficiency:
  - One producer-thread and one consumer-thread (default)
  - Multiple producer-threads and multiple consumer-threads
* ringbuffer::moveIntoImpl(): Apply same 'integral, memcpy' path as for copyIntoImpl()
  Sven Gothel, 2021-11-02; 1 file changed, -1/+9

  Ensure consistent behavior and same optimization.
* darray/ringbuffer::get_info(): Add type info
  Sven Gothel, 2021-10-31; 2 files changed, -2/+6
* ringbuffer API change: Drop whole `NullValue` *angle*, simplifying; Drop `use_memset` non-type template param, use `use_memcpy`
  Sven Gothel, 2021-10-31; 1 file changed, -221/+158

  - ringbuffer API change: Drop whole `NullValue` *angle*, simplifying
    - Drop `Value_type [get|peek]*()`, use `bool [get|peek]*(Value_type&)` instead.
    - Use `bool` return to determine success and `Value_type` reference as storage.
    - Drop `NullValue_type` template type param and `NullValue_type` ctor argument.
    - Simplifies and unifies single and multi get and put, as well as testing (motivation).
  - ringbuffer: Drop `use_memset` non-type template param, simply use `use_memcpy` having same
    semantics of *TriviallyCopyable*
    - Favor ::memcpy over ::memmove if applicable; don't confuse with our `use_memmove` semantics :)
  - Use proper 'void*' cast to lose const'ness, drop non-required 'void*' cast for source (memmove)
  - Use global namespace ::memmove and ::explicit_bzero
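  A sketch of the changed call pattern: `bool get(Value_type&)` reports success and writes into a
  caller-provided reference. Only the get()/peek() signature shape comes from the commit; the
  header path and template arguments shown are assumptions.

      #include <jau/ringbuffer.hpp>   // assumed header location

      void drain_one(jau::ringbuffer<int, jau::nsize_t>& rb) {  // template params are an assumption
          int value;
          if( rb.get(value) ) {
              // success: `value` holds the dequeued element
          } else {
              // ring was empty: `value` is left untouched
          }
      }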
* darray: Use proper 'void*' cast to lose const'ness, drop non-required 'void*' cast for source (memmove); Use global namespace ::memmove and ::explicit_bzero
  Sven Gothel, 2021-10-31; 1 file changed, -8/+10
* darray: Enhance use_memmove API doc
  Sven Gothel, 2021-10-31; 1 file changed, -10/+18
* TOctets: Add convenient memmove*, memset* and bzero* methods; Ensure all memory ops are either std:: or global namespace ::
  Sven Gothel, 2021-10-28; 1 file changed, -6/+34
* Add darray and cow_darray construction with initializer list using move-semantics, instead of copy-semantics (std::initializer_list)
  Sven Gothel, 2021-10-24; 2 files changed, -1/+225

  Using `std::initializer_list` requires us to *copy* the given value_type objects into this
  darray, since `std::initializer_list` exposes its iterator as `const`. Initially I hacked the
  `std::initializer_list` ctor to cast away 'const', which practically worked. However, this is
  not as designed: the `std::initializer_list` storage has been produced by the compiler, and
  stealing it may lead to undefined behavior (UB). Hence we require a self-made construct, like
  std::make_shared<>(..).

  Inspired by Tristan Brindle's <https://tristanbrindle.com/posts/beware-copies-initializer-list>,
  I added similar, more enhanced functionality to darray and cow_darray:

    template <typename... Args> constexpr void push_back_list(Args&&... args)

  push_back_list() moves the whole argument list into our array as an atomic operation; the latter
  is more important for cow_darray of course. Storage space for all elements is adjusted and all
  elements are added.

  The outer template 'make_[cow_]darray<..>(..)' simply creates the array instance with the
  desired size and passes the argument list to push_back_list(). Since we can't properly have the
  array's Value_type deduced if used as a template type argument, this template passes the First
  and all Next arguments (pack) in a dedicated fashion. After constraining the template to having
  the same type for all arguments, we use the First type for the array definition. This version
  can only handle argument lists of size 2 or greater; hence a complementary template for one
  argument only has been added.

  Features used here:
  - C++11 template parameter pack
  - C++17 fold expression

  Tested with Direct-BT.
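  The fold-expression technique named above, illustrated on a small std::vector-backed stand-in
  rather than jau::darray/cow_darray themselves; only the push_back_list() name and signature
  shape are taken from the commit.

      #include <string>
      #include <utility>
      #include <vector>

      template <typename T>
      struct tiny_array {                    // stand-in container, not jau::darray
          std::vector<T> store;

          template <typename... Args>
          void push_back_list(Args&&... args) {
              store.reserve(store.size() + sizeof...(Args));         // make room for all elements at once
              ( store.push_back( std::forward<Args>(args) ), ... );  // C++17 fold expression: move each argument in
          }
      };

      void example() {
          tiny_array<std::string> a;
          // All three temporaries are moved in, avoiding the std::initializer_list copies described above.
          a.push_back_list(std::string("x"), std::string("y"), std::string("z"));
      }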
* Fix cow_darray::push_back( InputIt first, InputIt last ): On storage growth path, push_back must happen on new storage
  Sven Gothel, 2021-10-24; 1 file changed, -1/+1
* darray: Split JAU_DEBUG print for low- and high-level, PRINTF0 for low-memory, PRINTF for top ctor, dtor, ..
  Sven Gothel, 2021-10-24; 1 file changed, -13/+19
* ringbuffer: Fix string usage in constexpr func (alloc -> abort case)
  Sven Gothel, 2021-10-24; 1 file changed, -4/+2
* cow_darray, darray: Use unique debug PRINTF macro names
  Sven Gothel, 2021-10-23; 2 files changed, -77/+78
* use_secmem darray, ringbuffer: Remove redundant bzero on [re]allocation and free. Memory already bzero'ed when object got removed.
  Sven Gothel, 2021-10-23; 2 files changed, -18/+0
* octets.hpp: Use unique TRACE macro name
  Sven Gothel, 2021-10-23; 1 file changed, -20/+20
* Apply same Non-Type Template Parameter of darray; Drop new[] C++ array storage for more efficient placement-new ctor/dtor (like darray)
  Sven Gothel, 2021-10-23; 1 file changed, -98/+154
* darray<> Non-Type Template Parameter: Remove use_realloc (fully deduced), sec_mem -> use_secmem; Add Value_type traits for uses_memmove and use_secmem
  Sven Gothel, 2021-10-23; 4 files changed, -49/+145

  Change also applied to cow_darray<>. Added full API doc for these Non-Type Template Parameters
  (NTTP). Since `use_realloc` has been removed, all use cases must be validated for the changed
  template parameters.

  Users can now add the following typedef's to not use the NTTP:
  - typedef std::true_type container_memmove_compliant;
  - typedef std::true_type enforce_secmem;
  These will be queried via compile-time traits and set the default values of the NTTP.

  darray::grow_storage_move(Size_type) now returns the maximum of growth * old_capacity and
  new_capacity, i.e. not dropping the `golden rule` growth factor.

  Clearing a darray instance (e.g. from std::move) only nulls the iterator and hence the heap
  pointer - not the allocator (was a bug).

  Added darray::shrink_to_fit().
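  A sketch of the opt-in typedefs quoted above, placed on an element type; the struct itself is
  hypothetical, only the two typedef names stem from the commit message.

      #include <cstdint>
      #include <type_traits>

      struct Record {                       // hypothetical element type
          std::uint64_t id;
          std::uint8_t  key[16];

          // Queried via compile-time traits by jau::darray / jau::cow_darray to set the
          // NTTP defaults: memmove-based relocation and secure memory erasure on free.
          typedef std::true_type container_memmove_compliant;
          typedef std::true_type enforce_secmem;
      };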
* callocator: Cover more std typedef's, standard, C++17 and deprecated C++17
  Sven Gothel, 2021-10-23; 1 file changed, -3/+14
* darray + ringbuffer: Use explicit 'constexpr if': if constexpr ( .. )
  Sven Gothel, 2021-10-22; 3 files changed, -27/+28
* endian conversion [le|be|cpu]_to_[le|be|cpu]() and bit_cast(): Use explicit 'constexpr if': if constexpr ( .. )
  Sven Gothel, 2021-10-22; 2 files changed, -51/+51
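  A generic illustration of the `if constexpr` pattern for such an endian conversion, using C++20
  std::endian as a stand-in for jaulib's own endian detection; not the library's actual code.

      #include <bit>       // std::endian (C++20), used here as a stand-in
      #include <cstdint>

      constexpr std::uint16_t bswap16(const std::uint16_t v) noexcept {
          return static_cast<std::uint16_t>( (v >> 8) | (v << 8) );
      }

      constexpr std::uint16_t le_to_cpu(const std::uint16_t v) noexcept {
          if constexpr ( std::endian::native == std::endian::little ) {
              return v;            // branch resolved at compile time: no-op on little-endian hosts
          } else {
              return bswap16(v);   // byte-swap on big-endian hosts
          }
      }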
* POctet: Add explicit copy-ctor with given capacity; Add TROOctets default ctor
  Sven Gothel, 2021-10-21; 1 file changed, -0/+26
* POctet copy-ctor API doc: Mention source.size() -> capacity of new instance
  Sven Gothel, 2021-10-21; 1 file changed, -6/+10
* octets: Fix TRACE_PRINT fmt usage
  Sven Gothel, 2021-10-21; 1 file changed, -1/+1
* POctets API doc: Mention using native byte order
  Sven Gothel, 2021-10-21; 1 file changed, -7/+7
* EUI48[Sub] C++/Java: Better API doc re byte order, mention using native byte order
  Sven Gothel, 2021-10-21; 1 file changed, -10/+6
* EUI48Sub: Add required endian conversion for byte stream ctor (C++ and Java) [v0.5.0]
  Sven Gothel, 2021-10-05; 2 files changed, -19/+18
* EUI48[Sub]: Add endian awareness, also fixes indexOf() semantics (C++ and Java)
  Sven Gothel, 2021-10-05; 2 files changed, -15/+115
* ringbuffer: Use std names for sizes: getSize() -> size(); getFreeSlots() -> freeSlots()
  Sven Gothel, 2021-10-05; 1 file changed, -32/+32
* jau:*Octets: Adding elaborated API doc; Use std names for sizes: getSize() -> size(); getCapacity() -> capacity()
  Sven Gothel, 2021-10-05; 1 file changed, -117/+248

  Also throw IllegalArgumentException on POctets ctor passing source buffer w/ size to copy,
  if nullptr + size>0.
* Have OutOfMemoryError being derived from std::bad_alloc (not std::exception); Generalize our Exception structure w/ ExceptionBase
  Sven Gothel, 2021-10-04; 2 files changed, -11/+35
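  Why deriving from std::bad_alloc matters, shown with a stand-in type: generic allocation-failure
  handlers now also catch the library's OutOfMemoryError. Only the class name and its base are
  taken from the commit; the rest is illustrative.

      #include <iostream>
      #include <new>

      struct OutOfMemoryError : public std::bad_alloc {   // stand-in mirroring the commit's new hierarchy
          const char* what() const noexcept override { return "OutOfMemoryError"; }
      };

      int main() {
          try {
              throw OutOfMemoryError();
          } catch (const std::bad_alloc& e) {             // existing bad_alloc handlers now apply
              std::cerr << "allocation failure: " << e.what() << std::endl;
          }
          return 0;
      }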
* Fix: be_to_cpu(uint16_t const n)
  Sven Gothel, 2021-10-04; 1 file changed, -1/+1
* jau:*Octets: Expose endian awareness, pass either endian::little or endian::big at ctor
  Sven Gothel, 2021-10-03; 1 file changed, -65/+137

  One important detail which got totally lost moving *Octets from direct_bt to jau is the endian
  awareness. This change allows using jau::*Octets with either endian::little (for direct_bt BT
  data) or endian::big (e.g. for the eth/tcp/ip networking stack).
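  A hypothetical usage sketch of passing the byte order at construction; the exact *Octets ctor
  signatures are not verified here and the arguments shown are assumptions, only the choice of
  jau::endian::little vs. jau::endian::big comes from the commit.

      #include <jau/octets.hpp>   // assumed header location

      void example() {
          // Hypothetical ctor arguments (capacity + byte order); actual POctets ctors may differ.
          jau::POctets bt_data(  64, jau::endian::little );  // direct_bt BT payloads: little-endian
          jau::POctets net_data( 64, jau::endian::big    );  // eth/tcp/ip style payloads: big-endian
          (void)bt_data;
          (void)net_data;
      }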