| author | lloyd <[email protected]> | 2008-04-12 03:08:18 +0000 |
|---|---|---|
| committer | lloyd <[email protected]> | 2008-04-12 03:08:18 +0000 |
| commit | 66d92bc063a4cbb69e4242a15c3a90daa3db069e (patch) | |
| tree | f48af6779692e324cbee3ee64cdf45c98a619f5f /include/parsing.h | |
| parent | 21669116db5ccb075d92a24af763f7b8c7a32976 (diff) | |
Remove Config::option_as_u32bit - the only advantage it had over calling
to_u32bit on the return value of Config::option was that it passed the
value through parse_expr, which did some simple evaluation tricks so you
could write 64*1024. That does not seem worth the cost in code, especially
because most of the values so controlled are probably never changed.
Making them compile-time constants opens up additional optimizations in
the source as well as by the compiler.
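For context, here is a minimal sketch of the kind of evaluation parse_expr performed - products of integer literals such as "64*1024". This is an illustration written for this note, not the removed Botan code; the name parse_expr_sketch is invented:

```cpp
#include <cstdint>
#include <iostream>
#include <string>

typedef std::uint32_t u32bit; // Botan's 32-bit unsigned alias

// Illustrative stand-in for the removed parse_expr: evaluate a
// simple product expression like "64*1024" by splitting on '*'
// and multiplying the integer terms together.
u32bit parse_expr_sketch(const std::string& expr)
   {
   u32bit product = 1;
   std::string::size_type pos = 0;

   while(pos < expr.size())
      {
      const std::string::size_type star = expr.find('*', pos);
      const std::string term = (star == std::string::npos) ?
         expr.substr(pos) : expr.substr(pos, star - pos);

      product *= static_cast<u32bit>(std::stoul(term)); // throws on bad input

      if(star == std::string::npos)
         break;
      pos = star + 1;
      }

   return product;
   }

int main()
   {
   std::cout << parse_expr_sketch("64*1024") << "\n"; // prints 65536
   return 0;
   }
```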
Remove the pkcs8_tries config option. Hardcode that value to 3 instead.
I want to rewrite that code in the relatively near future and all that will
(hopefully) go away.
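The call-site effect of both changes might look like the following before/after sketch; the option names and constant names here are hypothetical, chosen only to illustrate the pattern, and are not taken from the commit:

```cpp
#include <cstdint>

typedef std::uint32_t u32bit; // as in Botan

// Before (illustrative): runtime config lookups, where the config
// string could be an expression like "64*1024":
//   u32bit buffer_size = config.option_as_u32bit("base/buffer_size");
//   u32bit pkcs8_tries = config.option_as_u32bit("pkcs8_tries");

// After (illustrative): compile-time constants the compiler can fold
// and the source can rely on directly:
static const u32bit DEFAULT_BUFFER_SIZE = 64 * 1024;
static const u32bit PKCS8_TRIES = 3;
```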
Diffstat (limited to 'include/parsing.h')
| -rw-r--r-- | include/parsing.h | 1 |

1 file changed, 0 insertions, 1 deletion
```diff
diff --git a/include/parsing.h b/include/parsing.h
index 93eb8c279..9c9128d33 100644
--- a/include/parsing.h
+++ b/include/parsing.h
@@ -19,7 +19,6 @@
 std::vector<std::string> parse_algorithm_name(const std::string&);
 std::vector<std::string> split_on(const std::string&, char);
 std::vector<u32bit> parse_asn1_oid(const std::string&);
 bool x500_name_cmp(const std::string&, const std::string&);
-u32bit parse_expr(const std::string&);
 /*************************************************
 * String/Integer Conversions                     *
```