What is the problem that we're trying to solve, and how can we leverage AI to help solve it, versus "what do we do with AI?"
AI requires starting with problems, not capabilities
Strategy → Prioritization
There is something called the shiny object trap, and I'm always telling people, 'Hey, don't do AI for the sake of doing AI.' Make sure there is a problem there. Make sure there is a pain point that needs to be solved in a smart way.
When you start small, it forces you to think about what problem you're going to solve. Amid all these advancements in AI, one easy, slippery slope is to keep thinking about the complexities of the solution and forget the problem you're trying to solve.
For me, hundreds of dollars spent on this exercise is trivial compared to the potential strategic value of having better insights. It's as if a really, really smart chief of staff has gone through and read every single sales call transcript that we've had in the past year.
From a product perspective, I can imagine three bubbles in my head. You want to find the intersection of all three: something that's desirable to users, something that's going to be a viable business, and something that's going to be feasible from a research-science and technical perspective.
Don't assume it's going to solve all your problems. Don't assume it's going to be able to autonomously give high-quality results in every case. But what can you build now that you have magical duct tape?
Every time you hand over decision-making capabilities to agentic systems, you're relinquishing some amount of control on your end.