Attempt to implement them manually and you'll see how hard it is.
PLUS, not only do you need to account for the general solution, but also for what could be best given the current data set.
And you can't compile it statically (dynamically, sure).
And it should work interactively, so hopefully the optimization itself finishes faster than just running the query would.
P.S.: Joins are normally the focus, but other constructs are also challenging. For example, just deciding whether to use indexes, and which ones to pick, can be challenging when you have dozens of predicates.
And yes, your optimizer should (eventually!) survive being fed hundreds of joins, predicates, aggregates, sorts and arbitrary expressions.
* I worked on the optimizer of a database. EVERYTHING is tricky!
So it's possible that the full optimisation problem over all join tree shapes is "easy", even though an output-constrained version of it is NP-hard... But I think that's unlikely. Having an NP-hard constrained variant like this strongly suggests that the original problem is itself NP-hard, and I suspect this could be shown by some other reduction.
> with selectivity a function of only the two immediate child nodes in the join
This should be "with selectivity a function of the rightmost leaves of the two child subtrees", so that it still makes sense for general ("bushy") join trees. (I know, I'm talking to myself... But I needed to write this down to convince myself that the original unconstrained problem wasn't just the (very easy) minimum spanning tree problem in disguise.)
For example, there's a class of join algorithms called 'worst-case optimal' - which is not a great name, but basically means that these algorithms run in time proportional to the worst-case output size. These algorithms ditch the two-at-a-time approach that databases typically use and join multiple relations at the same time.
One example is the leapfrog triejoin, which was part of the LogicBlox database.
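To make that concrete, here's a minimal single-machine sketch (not LogicBlox's actual implementation, and using hash indexes rather than sorted tries) of a generic worst-case-optimal join on the classic triangle query Q(a, b, c) = R(a, b) JOIN S(b, c) JOIN T(a, c): instead of joining two relations at a time, it binds one variable at a time and intersects the candidate values from every relation that mentions it.

    from collections import defaultdict

    def index_by_first(rel):
        # Map first attribute -> set of second attributes.
        idx = defaultdict(set)
        for x, y in rel:
            idx[x].add(y)
        return idx

    def triangle_join(R, S, T):
        R_a = index_by_first(R)   # a -> {b : R(a, b)}
        S_b = index_by_first(S)   # b -> {c : S(b, c)}
        T_a = index_by_first(T)   # a -> {c : T(a, c)}
        out = []
        for a in set(R_a) & set(T_a):       # bind a: must occur in R and T
            for b in R_a[a] & set(S_b):     # bind b: extends R(a, b), occurs in S
                for c in S_b[b] & T_a[a]:   # bind c: satisfies S(b, c) and T(a, c)
                    out.append((a, b, c))
        return out

    R = [(1, 2), (1, 3), (2, 3)]
    S = [(2, 3), (3, 1), (3, 4)]
    T = [(1, 3), (2, 1), (1, 4)]
    print(sorted(triangle_join(R, S, T)))   # [(1, 2, 3), (1, 3, 4), (2, 3, 1)]

The structural difference from a binary plan is that no pairwise intermediate like R JOIN S is ever materialised, which is where the worst-case guarantee comes from.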
I was also missing a mention of "sideways information passing", though some of these methods are exactly that.
I wonder whether the company consults the literature or whether they just fiddle about, mostly reinventing the wheel.
The really hard problem is estimating the cost of each plan once you've generated it, which necessarily must happen by some sort of heuristics combined with statistics. In particular: If you want to join A, B and C, the cost of (A JOIN B) JOIN C versus A JOIN (B JOIN C) can differ by many orders of magnitude, depending on the size of the intermediate products. And both the cost effects and any misestimation tend to compound through the plan as you add more tables and predicates.
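As a toy illustration of how big that gap can get (all table sizes and selectivities below are invented, and the cost model is just "sum of estimated rows produced", with |X JOIN Y| estimated as |X| * |Y| * selectivity):

    sizes = {"A": 1_000_000, "B": 1_000_000, "C": 10}
    sel_ab = 1e-5   # A joins B on a non-unique key
    sel_bc = 1e-6   # C is a tiny, very selective dimension on B

    def est(left_rows, right_rows, selectivity):
        # Textbook estimate: |X JOIN Y| ~= |X| * |Y| * selectivity
        return left_rows * right_rows * selectivity

    # Plan 1: (A JOIN B) JOIN C
    ab = est(sizes["A"], sizes["B"], sel_ab)      # 10,000,000 intermediate rows
    plan1 = ab + est(ab, sizes["C"], sel_bc)      # ~10,000,100 rows produced in total

    # Plan 2: A JOIN (B JOIN C)
    bc = est(sizes["B"], sizes["C"], sel_bc)      # 10 intermediate rows
    plan2 = bc + est(bc, sizes["A"], sel_ab)      # ~110 rows produced in total

    print(f"(A JOIN B) JOIN C: {plan1:,.0f}")
    print(f"A JOIN (B JOIN C): {plan2:,.0f}")

Same final result either way, but roughly five orders of magnitude of difference in intermediate work under these made-up numbers, and a misestimated selectivity at the bottom feeds straight into every join above it.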
Optimize for the p99/p99.9/worst case scenarios. Minimize unpredictability in performance where possible, even if it comes at a small cost of median/average performance. Your SREs will thank you.
There _are_ tons of corner cases that you need to address since there are some super-hard problems in there (in particular, robust cardinality estimation of join outputs is a problem so hard that most of academia barely wants to touch it, despite its huge importance), but it doesn't need to be this bad.
More generally, there are algorithms for multi-way joins (with some theoretical guarantees), but they tend to perform worse in practice than just a set of binary joins with a good implementation.
It's called cogroup in Spark and similar architectures.
It does a group-by to convert data into the format (key_col_1, ... key_col_n) -> [(other_col_1, ... other_col_n), ...]
This is useful and ergonomic in itself for lots of use-cases. A lot of Spark and similar pipelines do this just to make things easier to manipulate.
It's also especially useful if you cogroup each side before the join, which gives you the key column and two arrays of matching rows, one for each side of the join.
A quick search says it's called "group join" in academia. I'm sure I've bumped into it under another name in other DB engines, but I can't remember it right now.
One advantage of this is that it uses bounded memory. It doesn't actually iterate over the cartesian product of non-unique keys. In fact, the whole join can be done on pointers into the sides of the join, rather than shuffling and writing the values themselves.
My understanding is that a lot of big data distributed query engines do this, at least in mixer nodes. Then the discussion becomes how late they actually expand the product - are they able to communicate the cogrouped format to the next step in the plan or must they flatten it? Etc.
(In SQL big data engines sometimes you do this optimisation explicitly e.g. doing SELECT key, ARRAY_AGG(value) FROM ... on each side before join. But things are nicer when it happens transparently under the hood and users get the speedup without the boilerplate and brittleness and fear that it is a deoptimisation when circumstances change in the future.)
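For anyone who hasn't used it, here's a minimal single-machine sketch of the cogroup-then-join idea described above (in the spirit of Spark's cogroup, not its actual API; the column and function names are made up): group each side by key, hand downstream one (key, left_rows, right_rows) record per key, and only expand the per-key product if some consumer actually needs flat rows.

    from collections import defaultdict
    from itertools import product

    def cogroup(left, right, key):
        groups = defaultdict(lambda: ([], []))
        for row in left:
            groups[row[key]][0].append(row)
        for row in right:
            groups[row[key]][1].append(row)
        # One record per key: the key plus two arrays of matching rows.
        for k, (lrows, rrows) in groups.items():
            yield k, lrows, rrows

    def expand_inner_join(cogrouped):
        # Flattening is deferred until here: this is the only place the
        # per-key cartesian product is actually materialised.
        for _, lrows, rrows in cogrouped:
            for l, r in product(lrows, rrows):
                yield {**l, **r}

    orders = [{"cust": 1, "order_id": 10}, {"cust": 1, "order_id": 11}]
    custs  = [{"cust": 1, "name": "ada"}, {"cust": 2, "name": "bob"}]

    for rec in cogroup(orders, custs, "cust"):
        print(rec)    # key, matching order rows, matching customer rows
    for row in expand_inner_join(cogroup(orders, custs, "cust")):
        print(row)    # flat joined rows, only produced when asked for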
Just look at when SAS programmers are advised to use a merge or a format.
Even the choice of hash join vs. merge join really depends on your data's cardinality (read: sizes), indices, etc.
EDIT: Other comments also point out that there are non-general joins that are already NP-hard to optimize. You really want all the educated guesses you can get.
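As a back-of-the-envelope sketch of the kind of educated guess being described (the cost constants and formulas below are invented; real optimizers use much richer models that also account for memory, spilling and skew):

    import math

    def hash_join_cost(n_build, n_probe):
        # Build a hash table on the smaller input, probe with the larger.
        return 1.5 * n_build + 1.0 * n_probe

    def merge_join_cost(n_left, n_right, left_sorted, right_sorted):
        cost = n_left + n_right                          # the merge pass itself
        if not left_sorted:
            cost += n_left * math.log2(max(n_left, 2))   # pay for a sort first
        if not right_sorted:
            cost += n_right * math.log2(max(n_right, 2))
        return cost

    def pick_join(n_left, n_right, left_sorted=False, right_sorted=False):
        h = hash_join_cost(min(n_left, n_right), max(n_left, n_right))
        m = merge_join_cost(n_left, n_right, left_sorted, right_sorted)
        return ("hash", h) if h <= m else ("merge", m)

    print(pick_join(1_000_000, 1_000_000))              # hash wins: sorting both sides is expensive
    print(pick_join(1_000_000, 1_000_000, True, True))  # merge wins: indexes deliver sorted input

Which branch wins flips entirely on facts about the data and the available indexes, which is exactly why the estimates matter so much.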
Requires some dynamic SQL to construct, but the beauty is that you can use the SQL engine for this solution:
select top 1 *
from (
    select
        t1.id, t2.id, ..., tn.id,
        sum(t1.cost + t2.cost + ... + tn.cost) as total_cost
    from join_options t1
    cross join join_options t2
    ...
    cross join join_options tn
    group by t1.id, t2.id, ..., tn.id
) t0
order by t0.total_cost
On paper, join order is a combinatorial search over equivalent expressions. In reality, you’re optimizing over three very non-relational constraints: data distribution (where bytes actually live), cardinality estimation (how wrong your stats are), and memory/network contention (what everyone else is running). That’s why so many OLAP setups quietly give up and denormalize: not because joins are conceptually hard, but because getting good enough plans under bad stats and skewed data is brutally hard and very user-visible when it fails.
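To make the "combinatorial search" half concrete, here's a minimal sketch of the textbook dynamic program over subsets of relations (bushy plans, cross products allowed, cost counted as the sum of estimated rows produced). The cardinalities and pairwise selectivities are made up, and in real systems they are exactly the estimates that the constraints above corrupt.

    from itertools import combinations

    rels = {"A": 1_000_000, "B": 1_000_000, "C": 10, "D": 50_000}
    join_sel = {
        frozenset(("A", "B")): 1e-5,
        frozenset(("B", "C")): 1e-6,
        frozenset(("C", "D")): 1e-2,
    }

    def selectivity(left, right):
        # Independence assumption: multiply the selectivity of every join
        # predicate connecting the two sides (1.0 = plain cross product).
        s = 1.0
        for a in left:
            for b in right:
                s *= join_sel.get(frozenset((a, b)), 1.0)
        return s

    # best[S] = (cost, est_rows, plan_string) for the cheapest plan joining set S
    best = {frozenset([r]): (0.0, n, r) for r, n in rels.items()}
    names = list(rels)

    for size in range(2, len(names) + 1):
        for combo in combinations(names, size):
            S = frozenset(combo)
            for k in range(1, size // 2 + 1):
                for left in combinations(combo, k):
                    L, R = frozenset(left), S - frozenset(left)
                    cl, nl, pl = best[L]
                    cr, nr, pr = best[R]
                    rows = nl * nr * selectivity(L, R)
                    cost = cl + cr + rows        # charge for the rows produced
                    if S not in best or cost < best[S][0]:
                        best[S] = (cost, rows, f"({pl} JOIN {pr})")

    cost, rows, plan = best[frozenset(names)]
    print(plan, f"cost={cost:,.0f}", f"est_rows={rows:,.0f}")

Even this toy version is exponential in the number of relations, which is why real optimizers typically cap the exhaustive search and fall back to greedy or randomized heuristics once the table count gets large, and why they still live or die by the quality of those row estimates.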
What’s interesting about systems like StarRocks, ClickHouse, DuckDB, etc is that they’re implicitly making a bet: “we can push the optimizer and execution engine far enough that normalized schemas become operationally cheaper than the hacks (wide tables, pre-joined materializations, bespoke streaming DAGs).” If that bet holds, the real win isn’t just faster joins, it’s shifting complexity back from application-specific pipelines into a general-purpose optimizer that can be improved once and benefit everyone.
The irony is that the more powerful the optimizer, the more your “logical” schema becomes a performance API surface. A small change in constraints, stats collection, or distribution keys can be worth more than any new feature, but it’s also harder to reason about than “this table is pre-joined.” So we’re trading one kind of complexity (manual denormalization and backfills) for another (making the cost model and distribution-aware planner smart enough to not shoot you in the foot).
There is too much heuristic fiddling involved, and way too many niche algorithms that get cobbled together into an optimiser.
It's as if we're missing the theory to actually solve this stuff, so instead we're hobbling along, covering as many corner cases as we can and completely missing some elegant and profound beauty.