Yea, you encounter this a lot at companies with very old codebases. Don't use "chrono" because we have our own date/time types that were made before chrono even existed. Don't use standard library containers because we have our own containers that date back to before the STL was even stable.
I wonder how many of these (or the Google style guide rules) would make sense for a project starting today from a blank .cpp file. Probably not many of them.
The specification over-specifies std::unordered_map so that no good modern hash table type could implement this specification, but then under-specifies std::deque so that the MSVC std::deque is basically useless in practice. It requires (really, in the standard) that std::vector<bool> is a bitset, even though that makes no sense. It sometimes feels as though nobody on WG21 has any idea what they're doing, which is wild.
For one, its iterators won't be invalidated.
It's an adaptor which uses two other containers (typically std::vector) to manage the sorted keys and their associated values: the keys are kept sorted, and each value is stored at the corresponding position in its own separate std::vector. If you already have sorted data, or close enough, this type can be created almost for free yet has similar affordances to std::map; if you don't, you're likely to find the performance unacceptable.
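A minimal sketch of that trade-off, assuming a C++23 standard library that ships std::flat_map in <flat_map>:

    #include <flat_map>   // C++23
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Bulk construction from data that is already sorted and unique
        // skips the sort entirely; this is the "almost for free" case.
        std::vector<int> keys{1, 2, 3};
        std::vector<std::string> vals{"one", "two", "three"};
        std::flat_map<int, std::string> m(std::sorted_unique,
                                          std::move(keys), std::move(vals));

        // Lookup has the same affordances as std::map.
        auto it = m.find(2);
        (void)it;

        // But each one-off insert in the middle shifts both vectors,
        // which is where the performance can become unacceptable.
        m.emplace(0, "zero");
    }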
Flashback to last job. Wrote their own containers. Opaque.
You ask for an item from it, you get back a void pointer. It's a pointer to the item. You ask for the previous, or the next, and you give back that void pointer (because it then goes through the data to find that one again, to know from where you want the next or previous) and get a different void pointer. No random access. You had to start with the special function which would give you the first item and go from there.
They screwed up the end, or the beginning, depending on what you were doing, so you wouldn't get back a null pointer if there was no next or previous. You had to separately check for that.
It was called an iterator, but it wasn't one: an iterator is something for iterating over a container, and this thing was the container. It didn't have actual iterators either.
When I opened it up, inside there was an actual container. Templated, so you could choose the real inside container. The default was a QList (as in Qt 4.7.4). The million-line codebase contained no other uses; it was always just the default. They took a QList and wrapped it inside a machine that only dealt in void pointers, stripping away almost all functionality, safety, and the ability to use the standard algorithms.
I suspect but cannot prove that the person who did this was a heavy C programmer in the 1980s. I do not know but suspect that this person first encountered variable data type containers that did this sort of thing (a search for "generic linked list in C" gives some ideas, for example) and when they had to move on to C++, learned just enough C++ to recreate what they were used to. And then made it the fundamental container class in millions of lines of code.
That's scarier than the container craziness you mention.
The STL makes you pay for ABI stability whether you want it or not. For some use cases this doesn't matter, and there are some "proven" parts of the STL that need a lot of justification for substitution, yada yada std::vector and std::string.
But it's not uncommon to see unordered_map substituted with, say, sparsehash or robin_map, and in C++ libraries creating interfaces that allow for API-compatible alternatives to use of the STL is considered polite, if not necessarily ubiquitous.
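A sketch of what that politeness can look like, with a hypothetical SymbolTable type (nothing here is a specific library's interface):

    #include <string>
    #include <unordered_map>

    // Hypothetical library type: callers can swap in any container with
    // unordered_map's type parameters (e.g. absl::flat_hash_map) without
    // the library depending on the STL type directly.
    template <template <class...> class MapT = std::unordered_map>
    class SymbolTable {
    public:
        void define(const std::string& name, int value) { table_[name] = value; }

        const int* lookup(const std::string& name) const {
            auto it = table_.find(name);
            return it == table_.end() ? nullptr : &it->second;
        }

    private:
        MapT<std::string, int> table_;
    };

    // SymbolTable<> uses the STL; SymbolTable<absl::flat_hash_map> doesn't.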
Some notable exceptions: we'd have allowed std::shared_ptr<T> and <chrono>. We might also have allowed <thread> and friends.
That also depends on how standalone the project is. Self-contained projects may be better off with depending on standard library and popular third-party libraries, but if a project integrates with other internal components, it's better to stick to internal libraries, as they likely have workarounds and special functionality specific to the company and its development workflow.
I know, I know, the long run doesn't exist in today's investor-dominated scenarios. Code modernization is a fairytale. So far I've seen no exception in my limited set of experiences (though with various codebases going back to the early 90's with patchy upgrades here and there, looking like an old coat fixed many, many times with patches of various sizes, materials, and colours).
There are cases, however, when the migration costs are significant enough that even those benefits aren't really enough. Migrating our date/time stuff to <chrono> seemed like one of those.
C++ is very flexible, with a lot of very mature tooling and incredibly broad platform support. If you're writing some web server to run on the hardware of your choosing, then sure, that doesn't matter. But if you're writing something deeply integrated with platform/OS interfaces, or graphics, or needs to support some less common platforms, then C++ is often your only practical option for combining expressiveness and performance.
Not to mention that Rust advocates keep forgetting their compiler is partially written in C++ (LLVM/GCC).
I'll trade you wxWidgets for FLTK.
Trying to defer to native widget rendering per platform was a mistake, and every time I've touched wxWidgets in the past decade and a half I've regretted it.
FLTK on the other hand is ugly as sin, but I've found it reliable enough to not act in surprising ways, and it's also small enough that you can vendor it and build it alongside your program.
Maybe it could be used to remove Rust dependency on C++ infrastructure (LLVM/GCC), and bring Cranelift up to speed.
Or why bother with Rust, when the LLM gets to generate C++ code with best practices.
While I like Rust, I think AI as the next abstraction step in programming has kind of taken its relevance away, now that computer-assisted programming is part of the workflow.
/s of course... for now, but not for long.
But sure, if you trust it to have written C++ to a higher standard than the experts, then go for it.
But if I have to produce reams of code I’d much rather have it be reviewed by rustc than clang. rustc may take longer to satisfy, but it’ll be worth it because I won’t be responsible for horrors in production.
You’re happy to be responsible for buffer overflows written by an LLM? I’m not, which is why I prefer a language where it’s not possible.
> Use char and unprefixed character literals. Non-UTF-8 encodings are rare enough in Chromium that the value of distinguishing them at the type level is low, and char8_t* is not interconvertible with char* (what ~all Chromium, STL, and platform-specific APIs use), so using u8 prefixes would obligate us to insert casts everywhere. If you want to declare at a type level that a block of data is string-like and not an arbitrary binary blob, prefer std::string[_view] over char*.
If your codebase has those guarantees, go ahead and use it.
True, but sizeof(char) is defined to be 1. In section 7.6.2.5:
"The result of sizeof applied to any of the narrow character types is 1"
In fact, the narrow character types are the only types in the standard whose size is fully pinned down rather than implementation-defined.
So the only way that a C++ implementation can conform to the standard and have a char type that is not 8 bits is if the size of a byte is not 8 bits. There are historical systems that meet that constraint but no modern systems that I am aware of.
[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/n49...
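For code that wants to enforce that assumption outright, the usual guard is a one-liner:

    #include <climits>

    // Fail the build outright on any platform where a byte is not 8 bits.
    static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");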
The C++ standard explicitly says that it has the same size, signedness, and alignment as unsigned char, but it's a distinct type. So it's pretty useless, and badly named.
Have you read the standard? It says: "The result of sizeof applied to any of the narrow character types is 1." Here, "narrow character types" includes char and char8_t. So technically they aren't guaranteed to be 8 bits, but they are guaranteed to be one byte.
The poster you replied to has read the standard correctly.
But instead we get this mess. I guess it's because there's too much Microsoft in the standard and they are the only ones not having UTF-8 everywhere in Windows yet.
But it isn't really needed anyway: I'm using plain char/std::string for UTF-8 (with helper functions for the 1% of cases where I need codepoints) and it works fine. But starting with C++20 it's getting annoying, because I have to reinterpret_cast to the useless u8 versions.
Second, there is also no way to guarantee that a string encodes valid UTF-8; it could just be whatever.
You can still just use std::string to store valid encoded UTF-8, you just have to be a little bit careful. And functions like codepoints_size() are pretty fringe: unless you're doing specialized Unicode transformations, it's more typical to just treat strings as opaque byte slices in a typical C++ application.
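The reinterpret_cast annoyance mentioned above usually gets wrapped in a tiny helper; a sketch (the as_chars name is mine, not from any library):

    #include <string_view>

    // C++20 u8"" literals produce char8_t, but nearly every API still takes
    // char. Centralizing the one reinterpret_cast keeps the noise down.
    // (Reading char8_t data through a char pointer is fine, since char may
    // alias anything.)
    inline std::string_view as_chars(std::u8string_view s) {
        return {reinterpret_cast<const char*>(s.data()), s.size()};
    }

    // usage: api_taking_string_view(as_chars(u8"héllo"));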
Texas Instruments' compiler seems to be celebrating C++14 support: https://www.ti.com/tool/C6000-CGT
CrossCore Embedded Studio apparently supports C++11 if you pass a switch requesting it, though this FAQ answer suggests the underlying standard library is still C++03: https://ez.analog.com/dsp/software-and-development-tools/cce...
Everything I've found CodeWarrior related suggests that it is C++03-only: https://community.nxp.com/pwmxy87654/attachments/pwmxy87654/...
Aside from that, from what I can tell, those esoteric architectures are being phased out in favor of running DSP workloads on Cortex-M, which is just ARM.
I'd love it if someone who was more familiar with DSP workloads would chime in, but it really does seem that trying to be the language for all possible and potential architectures might not be the right play for C++ in 202x.
Besides, it's not like those old standards or compilers are going anywhere.
But for those devices you use a limited subset of language features anyway, and you'd be better off not linking the C++ stdlib, or even the C stdlib at all (so junior developers don't have room to do stupid things ;))
Back when I worked in the embedded space, chips like ZSP were around that used 16-bit bytes. I am twenty years out of date on that space though.
Anyway, I think there are two takeaways:
1. There probably do exist non-8-bit-byte architectures targeted by compilers that provide support for at-least-somewhat-recent C++ versions
2. Such cases are certainly rare
Where that leaves things, in terms of what the C++ standard should specify, I don't know. IIRC JF Bastien or one of the other Apple folks who've driven things like "two's complement is the only integer representation C++ supports" tried to push for "bytes are 8 bits" and got shot down?
Portability was always a selling point of C++. I'd personally advise those who find it uncomfortable to choose a different PL, perhaps Rust.
Judging by the lack of modern C++ in these crufty embedded compilers, maybe modern C++ is throwing too much good effort after bad. C++03 isn't going away, and it's not like these compilers always stuck to the standard anyway in terms of runtime type information, exceptions, and full template support.
Besides, I would argue that the selling point of C++ wasn't portability per se, but the fact that it was largely compatible with existing C codebases. It was embrace, extend, extinguish in language form.
Being conservative with features and deliberately not implementing them are two different things. Some embedded compilers go through certification to be allowed for use in producing mission-critical software. Chasing features is prohibitively expensive, for no obvious benefit. I'd bet that in the 2030s most embedded compilers will support C++14 or even 17. Good enough for me.
There is no version of the C++ standard that lacks features like exceptions, RTTI, and fully functional templates.
If the compiler isn't implementing all of a particular standard then it's not standard C++. If an implementation has no interest in standard C++, why give those implementations a seat at the table in the first place? Those implementations can continue on with their C++ fork without mandating requirements to anyone else.
C++ implementations have historically been driven by practicalities, and have violated the standard on a regular basis when it was deemed useful.
> Those implementations can continue on with their C++ fork without mandating requirements to anyone else.
Then they will diverge too much, as happened with countless other languages, like Lisp.
ISO/IEC 9899:2024 section 7.30
> char8_t which is an unsigned integer type used for 8-bit characters and is the same type as unsigned char;
Because "it supports Unicode" is not an expected use case for a non-8-bit DSP?
Do you have a link to a single one that does support it?
I suspect if I dug into the archives, I'd find a discussion on cxx@ with some comments about how doing this would result in some esoteric risk. If I was still on the Chrome team I'd go looking and see if it made sense to reraise the issue now; I know we had at least one stable branch security bug this caused.
Unfortunately, the cast in the reverse direction (taking char data and accessing it through a char8_t* pointer) is undefined behavior: char, unsigned char, and std::byte are allowed to alias anything, but char8_t is not aliasing-exempt.
The problem is that too many people drank too much koolaid and try to parrot everything to the letter without understanding the bigger picture.
The best example would be Kubernetes. Employed by many orgs that have 20 devs and 50 services.
Reasonable summary. There's some massive NIH syndrome going on.
Another piece is that a lot of stuff that makes sense in the open source world does not make sense in the context of the giant google3 monorepo with however many billions of lines of code all in one pile.
The bulk of the restrictions are justified as "Banned in the Google Style Guide."
In turn, the Google Style Guide bans most of the features because they can't/won't refactor most of their legacy code to catch up with post-C++0x features.
So even then these guidelines are just a reflection of making sure things stay safe for upstream and downstream consumers of Google's largely unmaintained codebase.
That's different from an inability to refactor.
That triggered a flash of feeling extremely old realizing we broke ground on this codebase 20 years ago this year!
A good decision. I tried to use it once and realized that it can't even work with UTF-8 properly. It's a mystery for me how such flawed design was standardized at all.
Also, regular expressions predate UTF-8.
https://chromium.googlesource.com/chromium/src/+/main/styleg...
Key takeaway => "Things would probably be different if we had to do it all over again from scratch."
"On their face, the benefits of using exceptions outweigh the costs, especially in new projects. However, for existing code, the introduction of exceptions has implications on all dependent code. If exceptions can be propagated beyond a new project, it also becomes problematic to integrate the new project into existing exception-free code. Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.
Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project. The conversion process would be slow and error-prone. We don't believe that the available alternatives to exceptions, such as error codes and assertions, introduce a significant burden.
Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. Because we'd like to use our open-source projects at Google and it's difficult to do so if those projects use exceptions, we need to advise against exceptions in Google open-source projects as well. Things would probably be different if we had to do it all over again from scratch."
Some languages have even innovated new kinds of exceptions that you can throw but that you are admonished should almost certainly never be caught.
Unfortunately even this is also better for tool support, a problem that using a bunch of macros solves. It's cool and good when a variable gets declared inside the guts of some macro expansion (and, critically, escapes those guts).
You can't invoke a function and pretend it'll never fail.
Also, try/catch with long try blocks and the error handling at the very end is just bad. Which of the statements in the try is throwing? Perhaps even multiple? Each should be handled individually and immediately.
It remains to be seen how market-relevant Zig turns out to be.
For example:
> The <filesystem> header, which does not have sufficient support for testing, and suffers from inherent security vulnerabilities.
Source code should all be UTF-8 natively, letting you directly write UTF-8 text between quotes.
Exactly their rationale.
These literals are a solution in search of a problem ... which is real but has a much better solution.
They’d rather see it done the same way it would’ve been in any other similar language than with a language specific feature.
There are also portability concerns in mind given that projects like Chromium have to be easily portable across a vast amount of platforms (this shows with things like long long which is also on the list).
I would have made C++ modules be a slightly modified namespace.
The short summary, though, is that no toolchain yet has a bulletproof implementation, though everybody at least has enough to let people kick the tires a bit.
In Chromium's UI code, we have a lot of APIs that deal with coordinates, which may be in either px or dp. And we have a lot of code that needs to hardcode various constants, e.g. layout values for different dialogs. IMO, it's sane to have UDL support here, e.g. `20_px` (at least if we had separate types to represent these two things, which we don't... don't get me started).
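A sketch of what that UDL support could look like, with hypothetical Px/Dp wrapper types (which, per the above, Chromium doesn't actually have):

    // Hypothetical strong types for the two coordinate spaces.
    struct Px { int value; };
    struct Dp { int value; };

    constexpr Px operator""_px(unsigned long long v) { return Px{static_cast<int>(v)}; }
    constexpr Dp operator""_dp(unsigned long long v) { return Dp{static_cast<int>(v)}; }

    // A hardcoded layout constant now carries its unit in the type,
    // and mixing up px and dp becomes a compile error, not a layout bug.
    constexpr Px kDialogPadding = 20_px;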
But that's a footgun: there are niche cases where this crazy stunt actually is correct (ZSTs are one), but in general the compiler was in effect licensed to cause absolute mayhem, because what you've got isn't a Doodad at all, which isn't what you wanted. So they did three things:
1. Made a new type wrapper for this purpose: a MaybeUninit<Doodad> might not be a Doodad yet, so we can initialize it later, and until then the compiler won't set everything on fire; but it's the same size and shape as a Doodad.
2. Marked core::mem::uninitialized deprecated. If you use it now the compiler warns you that you shouldn't do that.
3. De-fanged it by, despite its name, scrawling the bit pattern 0x01 over all the memory. The compiler can see this has some particular value, and for common types it's even valid: a bool will be true, a NonZeroU32 will be 1, and so on. This is slow and probably not what you intended, but you were warned that calling this function was still a bad idea, so...
Allow and Expect are levels where it's OK that a diagnostic happened, but with Expect if the diagnostic expected was not found then we get another diagnostic telling us that our expectation wasn't fulfilled.
Warn and Force-Warn are levels where we get a warning but compilation results in an executable anyway. Force-warn is a level where you can't tell the compiler not to emit this warning.
Deny and Forbid are levels where the diagnostic is reported and compilation fails, so we do not get an executable. Forbid, like Force-warn, cannot be overridden with compiler flags.
C# similarly has old warts that are discouraged now. .NET Framework is a great example (completely different from modern c#, which used to be called "dotnet core"). WPF and MAUI are also examples. Or when "dynamic" was used as a type escape hatch before the type system advanced to not need it. ASP being incompatible with ASP.NET, the list goes on.
They're just languages, there's no reason to pretend they're perfect.
Almost all of this is incorrect or comparing apples to oranges.
.net framework and .net core are runtime and standard library implementations, not languages. C# is a language that can target either runtime or both. Framework is still supported today, and you can still use most modern C# language features in a project targeting it. WPF and Maui are both still supported and widely used. ASP predates .net; C# was never a supported language in it. ASP.net core has largely replaced ASP.net, but that's again a library and framework, not a language feature.
Dynamic in C# and the DLR are definitely not widely used, because it's both difficult to use safely and doesn't fit well with the dominant paradigm of the language. If you're looking for standard library warts, BinaryFormatter would have been an excellent example.
I use it when deserializing unknown message types.
Dynamic is largely unnecessary, and it was unnecessary even when it was introduced.
ASP and ASP.NET are completely unrelated. ASP was designed to allow dynamic webpages to be written in VBScript (like CGI). This is not something you want to do in modern languages.
Unlike C# that has both delegates and lambdas, for example. Also finalizers and IDisposable.
I guess the difference is it's rarely "dangerous" or "hard to reason about" using the old features unlike what I see in the C++ list. Java replaces things with better things and momentum shifts behind them kind of naturally because the better things are objectively better.
The instanceof[0] operator is typically banned from use in application code and often frowned upon in library implementations.
0 - https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.htm...
Once code such as the below is rejected in a code review, the next version often submitted is an "if-else-if" ladder using `instanceof` (which is effectively the same thing).
TheReturnType r = null;
try {
    // Abuses ClassCastException as a type test.
    DomainType1 instance = (DomainType1) obj;
    r = ...
} catch (ClassCastException ex) {}
if (r == null) {
    try {
        DomainType2 instance = (DomainType2) obj;
        r = ...
    } catch (ClassCastException ex) {}
}
if (r == null) {
    try {
        DomainType3 instance = (DomainType3) obj;
        r = ...
    } catch (ClassCastException ex) {}
}
...
And yes, I have seen the above scenario played out in a professional setting.

I guess you would get some push back at review time if you used auto_ptr.
> Never seen this being banned. Whats the reason?
Its usage encodes a priori assumptions about what a super-type could be, often expressed in an "if-else-if tree", thus making the logic needlessly brittle and difficult to maintain.
Library logic sometimes needs to use this construct (I'd argue those abstractions need to be rethought, however), but an application which does so exhibits a failure in domain modelling, IMHO.
Reflection - unless you really need to do something fancy almost certainly a very bad idea in normal application code.
Other than that it’s either just limiting yourself to a specific JVM version or telling people not to use old syntax/patterns that were replaced with better options long ago.
* `synchronized` blocks
* raw types
* anonymous inner classes
* checked exceptions
as well as specific uses of otherwise fine language features:
* "AbstractX" style abstract classes when static and default interface methods suffice
* old-style POJO classes when records are a better choice
Exactly. Simplicity was an explicit design goal of Java.
Most items in your list are fine actually, including raw types and inner classes.
Regarding checked exceptions, it is unfortunate that people don't fully understand it, see here: https://mckoder.medium.com/the-achilles-heel-of-c-why-its-ex...
They are clearly not against them per se. It simply wasn't practical for them to introduce exceptions into their codebase.
And I think a lot of the cons of exceptions are handled in languages like F#, etc. If f calls g which calls h, and h throws an exception, the compiler will require you to deal with it somehow in g (either handle or explicitly propagate).
> the compiler will require you to deal with it somehow in g
I agree, this is the sensible solution.
Also error codes are part of the signature of a function, which exceptions aren't.
enum class [[nodiscard]] Error {
    Ok,
    NoMem,
    // ...
};
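Continuing the snippet above: with the enum itself marked [[nodiscard]], every function returning it warns on an ignored result, no per-function annotation needed:

    Error do_work();  // any function returning Error is now nodiscard

    void caller() {
        do_work();                    // warning: ignoring a [[nodiscard]] value
        if (do_work() != Error::Ok) { // fine: the result is inspected
            // handle the failure
        }
    }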
[1]: https://en.cppreference.com/w/cpp/language/attributes/nodisc...

[1] https://en.cppreference.com/w/c/language/attributes/nodiscar... (C attribute: nodiscard, since C23)

Exceptions in high-level languages avoid many of these issues by virtue of being much further away from the metal. It is a mis-feature for a systems language. C++ was originally used for a lot of high-level application code where exceptions might make sense that you would never use C++ for today.
I don't think this is true. There is A LOT of C++ for GUI applications, video games, all kinds of utilities, scientific computing, and others. In fact, I find that the transition from native GUI toolkits in C/C++ to "modern" alternatives has led to a regression in UI performance in general. Desktop programs performed better 20 years ago, when everything was written in Win32, Qt, GTK and others and people did not rely on bloated Web toolkits for desktop development. Even today you can really feel how much more snappy and robust "old school" programs are relative to Electron and whatnot.
You could handwave this objection by saying it's not "really" low level or that "nothing" was an exaggeration, but at that point it seems like we'd be back to the original question of why it's wrong to say that this isn't a primary use case for C++.
I can assure you: most C++ software is not written for low-level work.
> exceptions can introduce nasty edge cases that are difficult to detect and reason about.
That's true, except for languages that ensure you can't simply forget that something deep down the stack can throw an exception.
BTW, I'm not saying C++'s exceptions are in any way good. My point is that exceptions are bad in C++, and not necessarily bad in general.
Sometimes it is not safe to unwind the stack. The language is not relevant. Not everything that touches your address space is your code or your process.
Exception handlers must have logic and infrastructure to detect these unsafe conditions and then rewrite the control flow to avoid the unsafety. This both adds overhead to the non-exceptional happy path and makes the code flow significantly uglier.
The underlying cause still exists when you don't use exceptions but the code for reasoning about it is highly localized and usually has no overhead because you already have the necessary context to deal with it cleanly.
This is where garbage-collected languages shine.
Exceptions are more robust, not less.
So clean that there's no stack trace information to go with it, making the exception postmortem damn near useless.
Everyone (except Go devs) knows that those are the worst. Exceptions are better, but still less reliable than Result.
https://home.expurple.me/posts/rust-solves-the-issues-with-e...
The ultimate ideal for rare errors is almost certainly some form of exception system, but I don't think any language has quite perfected it.
Only when you don't need the Ok value from the Result (in other words, only when you have Result<(), E>). You can't get any other Ok(T) out of thin air in the Err case. You must handle (exclude) the Err case in order to unwrap the T and proceed with it.
> It also litters your code with branches, so not ideal for either I-cache or performance.
That's simply an implementation/ABI issue. See https://github.com/iex-rs/iex/
Language semantics-wise, Result and `?` are superior to automatically propagated exceptions.
Where people use things like anyhow.[0]
> forces you to handle it.

By writing `?`. And we get poor man's exceptions.
No, it's not quite the same. Checked exceptions force you to deal with them one way or another. When you use `?` and `anyhow`, you just mark a call of a fallible function as such (which is a plus, but it's the only plus), and don't think even for a second about handling it.
If anything, `?` is better for actual "handling". It's explicit and can be questioned in a code review, while checked exceptions auto-propagate quietly, you don't see where it happens and where a local `catch` would be more appropriate. See the "Can you guess" section of the post. It discusses this.
Interestingly, Microsoft C / C++ compiler does support structured exception handling (SEH). It's used even in NT kernel and drivers. I'm not saying it's the same thing as C++ exceptions, since it's designed primarily for handling hardware faults and is simplified, but still shares some core principles (guarded region, stack unwinding, etc). So a limited version of exception handling can work fine even in a thing like an OS kernel.
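For reference, a minimal sketch of the MSVC-specific syntax (__try/__except are Microsoft extensions, not standard C++):

    #include <windows.h>
    #include <cstdio>

    int main() {
        __try {
            // Guarded region: a hardware fault here (a null write) becomes a
            // structured exception instead of an immediate crash.
            int* p = nullptr;
            *p = 42;
        }
        __except (EXCEPTION_EXECUTE_HANDLER) {
            // The filter said "handle it"; control resumes here.
            std::printf("caught exception: 0x%lx\n", GetExceptionCode());
        }
        return 0;
    }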
There are two main limitations. First, the compiler currently has no idea what can be safely unwound; you could likely annotate objects to provide this information. Second, there is currently no way to tell the compiler what to do when an object in the call stack may not be unwound safely.
A lot of error handling code in C++ systems code essentially provides this but C++ exceptions can't use any of this information so it is applied manually.
e.g. https://www.javacodegeeks.com/2026/01/javas-checked-exceptio...
Google’s reasons for banning exceptions are historical, not technical. Sadly, this decision got enshrined in Google C++ Style Guide. The guide is otherwise pretty decent and is used by a lot of projects, but this particular part is IMO a disservice to the larger C++ ecosystem.
In such a scenario there's no error recovery; the software is expected to shut down and raise a loud error.
No, that's what assertions or contracts are for.
Most exceptions are supposed to be handled. The alternatives to exceptions in C++ are error codes and `std::expected`. They are used for errors that are expected to happen (even if they may be exceptional). You just shouldn't use exceptions for control flow. (I'm looking at you, Python :)
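A sketch of that std::expected style (C++23; parse_port is a made-up example function):

    #include <expected>
    #include <string>

    // The "expected to happen" failure travels in the return type rather
    // than via throw.
    std::expected<int, std::string> parse_port(const std::string& s) {
        if (s.empty()) return std::unexpected("empty input");
        int value = 0;
        for (char c : s) {
            if (c < '0' || c > '9') return std::unexpected("not a number");
            value = value * 10 + (c - '0');
            if (value > 65535) return std::unexpected("port out of range");
        }
        return value;
    }

    // The caller has to branch; the error path is visible at the call site:
    //   if (auto port = parse_port(input)) use(*port);
    //   else log(port.error());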
... That seems like a pretty accurate description of how exception handling mechanisms are implemented under the hood. :)
And - very often, you would _not_ shut down. Examples:
* Failure/error in an individual operation or action does not invalidate all others in the set of stuff to be done.
* Failure/error regarding the interaction with one user does not mean the interaction with other users also has to fail.
* Some things can be retried after failing, and may succeed later: I/O; things involving resource use, etc.
* Some actions have more than one way to perform them, with the calling code not being able to know a priori whether all of them are appropriate. So it tries one of them; if that fails, tries another; etc.
I like the idea of an exception as a way to blow out of the current context so that something else can catch it and handle it in a generic manner. I don't like the idea of an exception to hide errors or for conditional logic, because then you have to know what is handling it all. Much easier to handle it there and then, use a type-safe equivalent (like a Maybe or Either monad), or just blow that shit up as soon as you can't recover from the unexpected.
There are things you can't do easily in C++ without using exceptions, like handling errors that happen in a constructor and handling when `new` cannot alloc memory. Plus, a lot of the standard library relies on exceptions. And of course there's the stylistic argument of clearly separating error-handling from the happy-path logic.
I won't argue that it's popular to ban them, though. And often for good reasons.
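On the constructor point, a minimal sketch: a constructor has no return channel, so a throw is the natural way to report that the object never came into existence.

    #include <stdexcept>
    #include <string>

    class Config {
    public:
        explicit Config(const std::string& path) {
            // No return value exists here; throwing is the only direct way
            // to report that no valid Config was ever constructed.
            if (!load(path)) {
                throw std::runtime_error("cannot load config: " + path);
            }
        }

    private:
        // Stub so the sketch is self-contained; a real loader goes here.
        bool load(const std::string& path) { return !path.empty(); }
    };

    // Without exceptions you'd need a factory returning optional/expected,
    // or two-phase init with an is_valid() check.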
On banning exceptions: "Things would probably be different if we had to do it all over again from scratch."
https://google.github.io/styleguide/cppguide.html#Exceptions
Anyway, about these C++ conventions - to each software house its own I guess. I don't think banning exceptions altogether is appropriate; and I don't see the great benefit of using abseil (but feel free to convince me it's really that good.)
https://stackoverflow.com/a/42588384/1593077
But you can and should use non-std state-of-the-art hashmaps without living in the world of abseil.
Chromium uses hundreds of other open source libraries. Is there something about Abseil specifically that sticks in your craw?
The rest looks very reasonable, like avoiding locale-hell.
Some of it is likely options that sand rough edges off of the standard lib, which is reasonable.