We ran a small experiment: we sampled real, active local businesses (with websites, Google profiles, and years in operation) and checked whether they appeared in AI-generated answers.
92% didn’t show up at all.
What stood out was that when businesses did appear consistently, it wasn’t random. The models seemed to have a clearer, more structured understanding of who the business was, what it did, and when it should be recommended.
That led us to build Chatalyst — a way for businesses to intentionally define how they’re represented inside AI systems, instead of relying on models to infer it from scattered web signals.
It’s not ads, SEO, or a directory. It’s closer to providing AI with a clean, machine-readable source of truth: what a business does, who it’s for, what it should (and shouldn’t) say, and when it’s a good fit.
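For illustration, here’s a rough sketch of what such a profile could look like. This is a simplified example using schema.org’s LocalBusiness vocabulary plus a couple of invented, underscore-prefixed fields; it’s not our actual format.

    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Plumbing Co.",
      "description": "Residential plumbing repair and emergency call-outs.",
      "areaServed": "Springfield metro area",
      "knowsAbout": ["water heater replacement", "leak detection"],
      "_idealFit": "homeowners with urgent residential plumbing problems",
      "_doNotClaim": ["commercial or industrial work", "24/7 availability"],
      "_note": "Underscore-prefixed fields are invented for this example; the rest are standard schema.org properties."
    }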
I’m curious how others here think about:
• AI as a discovery surface vs. traditional search
• Whether businesses should have a first-class presence inside LLMs
• What defensibility looks like as discovery formats standardize
Happy to answer questions or dig into the methodology.
You’re in production. The change is “simple”. A small UPDATE or DELETE with a WHERE clause you’ve read over multiple times.
Still, right before hitting enter, there’s that pause.
Not because you don’t know SQL. Not because you didn’t think it through. But because you know:
• If this goes wrong, it’s on you
• Rollback isn’t always clean or instant
• And the safest option is often… “don’t touch it”
In reality, I’ve seen people deal with this by:
• Manually backing up data “just in case” (sketched below)
• Having someone else stare at the query with them
• Restricting who’s allowed to run anything at all
• Or simply avoiding fixing things directly in prod
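The first of those usually looks something like this in practice. A minimal sketch, assuming PostgreSQL and made-up table/column names:

    BEGIN;

    -- Keep a copy of the rows about to change, "just in case"
    CREATE TABLE orders_backup_2024_06_01 AS
        SELECT * FROM orders
        WHERE customer_id = 42 AND status = 'pending';

    -- Check the blast radius before touching anything
    SELECT count(*) FROM orders
    WHERE customer_id = 42 AND status = 'pending';

    -- The actual change, with the exact same WHERE clause
    UPDATE orders
    SET status = 'cancelled'
    WHERE customer_id = 42 AND status = 'pending';

    -- If the reported row count matches the check above:
    COMMIT;   -- or ROLLBACK; if anything looks off

Wrapping it in a transaction doesn’t remove the pause, but it does make COMMIT a separate, deliberate step instead of part of hitting enter on the UPDATE itself.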
I’m not asking for best practices or tooling advice.
I’m genuinely curious:
What do you personally do when you have to change data and can’t be 100% sure it’s harmless?
Is this just an unavoidable part of working with production databases?