Product Rails at the AI Junction

[AI-generated image via OpenAI’s ChatGPT: two vintage robots, one holding its head in its arms, the other holding an overlarge head up on its small body.]

I am picking back up on the blog here after spending a week visiting both Adobe Summit and Google Next. It’s pretty clear that the marketplace for AI is booming, and every enterprise-grade tech company has been given the mandate that This Is Happening, so get on board.

From a product ownership perspective, however, this feels a bit repetitive. We’ve been here. This is still in the hype cycle, regardless of how many well-intentioned slides from Senior VPs (in jobs that didn’t exist 18 months ago) insist that we’re past the trough of disillusionment and on our way to enlightenment. From the outside looking in, this feels like a ramp toward lock-in: snare as many whales as possible in your net before cinching the line, then make sure everyone pays for the opportunity to rent your intelligence by the token.

For product owners & managers like me, I think we’re all a bit trepidatious for a few reasons.

  1. We’ve been suckered by lock-in before. We know that when these enterprise platforms say they are “platform agnostic,” they don’t really mean it. Building a multi-billion-dollar data center (now measured in power consumption, not computing capability) doesn’t pencil out if your customers can migrate freely.
  2. We know what rent-seeking behavior looks like. It looks like this: every level of the technical organization building AI into every facet of every product in an effort to find the users who can no longer function without it. And then, just as the lock-in takes hold (by forcing developers into your framework), the rent hikes can come hard.
  3. Nobody is quite sure that this doesn’t all end in commoditization anyways. There’s a good chance that picking a particular horse is a waste of time & energy. Let the juggernauts beat the shit out of each other for a few more years and when the dust settles does anyone really believe that model “XYZ 9.0” is going to be truly differentiated from “ABC 14.7”?

We’re facing our own moment to start incorporating AI solutions across numerous points of contact within The Venetian Resort. Some of it is bespoke models & training built on top of one of the big 3-4 platforms. Some of it is working with the tools the platforms offer directly. We’ve had some experiments with AI systems running the SaaS playbook of trial-to-buy – but again, the product/market fit isn’t quite there yet for much of this technology.

When it comes to B2B software there’s some tolerance for errors and mistakes. I can tell an internal user to double-check a result. It’s a lot harder to brace my guests for the idea that we published software that was wrong or hallucinating. It’s hard enough to trust that the thousands of people working at the property actually know that the answer they’re giving is real – but when an AI goes off the rails, it can be so confident and believable that it’s easy to imagine the guest going along with whatever it recommends.

For now, we’re happy to play with small & controlled experiments. We want to federate our data into the hands of LLMs so the larger platforms can leverage it to drive visitors – and, hopefully, interest & conversions. We’re building internal tools with an eye toward open standards, so we don’t end up locked into a single framework or LLM back-end that can force a 10x price increase on us (which would break the economics anyway; let’s save that for another time).
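The open-standards idea above boils down to a simple pattern: application code should depend on a thin interface rather than a vendor SDK, so the back-end can be swapped when the pricing changes. A minimal sketch in Python, where all class and method names (`ChatBackend`, `EchoBackend`, `ConciergeService`) are hypothetical illustrations, not anything we’ve actually shipped:

```python
from abc import ABC, abstractmethod


class ChatBackend(ABC):
    """Provider-agnostic interface; each vendor gets its own adapter."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoBackend(ChatBackend):
    """Stand-in for a real vendor adapter; returns a canned reply."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class ConciergeService:
    """Business logic depends only on ChatBackend, so switching
    LLM providers means writing one new adapter, not a rewrite."""

    def __init__(self, backend: ChatBackend):
        self.backend = backend

    def answer(self, question: str) -> str:
        return self.backend.complete(question)


service = ConciergeService(EchoBackend())
print(service.answer("What time does the pool open?"))
```

The point isn’t the toy implementation; it’s that the 10x rent hike described above only has teeth when the vendor’s SDK calls are scattered through the codebase instead of isolated behind one adapter.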

This is an exciting time at the AI junction, but the hype is still building. Let’s see where the platforms and standards go in the next 12 months; I expect governance, guardrails, and cost management to move higher up the priority list.

Cheers, Dave
