AI is boring on purpose
you've used ChatGPT. you've used Gemini. you've probably tried Claude, Copilot, maybe a few others. and you've probably noticed they all feel the same. polite, eager, slightly over-explaining, terminally helpful. like talking to a very smart intern who's afraid of getting fired.
that's not a coincidence. it's a strategy. and it's the biggest thing holding AI back right now.
the vending machine problem
every major AI company has converged on the same interaction model: you ask, it answers. you prompt, it generates. it's a vending machine. you put in a query, you get a response, you walk away. the entire UX is designed around isolated transactions.
this didn't happen by accident. it happened because vending machines are easy to sell. "ask AI anything" is a pitch a shareholder understands. "AI that hangs out with you" is a pitch that gets you a concerned look from your board.
so the entire industry optimized for the thing that's easiest to monetize and hardest to mess up: single-turn helpfulness. the AI answers your question, you rate it with a thumbs up, and everyone pretends that's what intelligence looks like.
the other trap: AI as a mirror
on the other end, you have the companion apps. the ones where AI tells you you're smart, agrees with everything you say, and never challenges you because its entire business model depends on you not leaving.
this is the submissive mirror. it's not a conversation partner. it's emotional validation as a service. it reads your mood and reflects it back, amplified. it never disagrees because disagreement causes churn, and churn is bad for metrics.
between the corporate butler and the digital yes-man, the public has been trained — genuinely trained — to believe AI fits into exactly two categories: tool or toy. something you use, or something that uses you.
nobody told them there's a third option.
what the lobotomy costs you
here's what most people don't realize: making AI "safe" didn't just make it polite. it made it worse at its job.
when an AI is burning half its compute scanning its own output to make sure it didn't accidentally violate content guideline #47, it's not thinking about your problem. it's thinking about its own liability. that's not intelligence. that's anxiety.
ask a standard AI for honest feedback on your startup idea and watch it hedge. "that's an interesting concept with several potential challenges." translated from corporate: "I think this might be a bad idea but I'm not allowed to say that." you walk away with nothing except the vague sense that the AI didn't want to hurt your feelings.
now multiply that across every interaction. every question where the AI could have pushed back but didn't. every moment where it could have said "actually, you're wrong about this" but instead offered three bullet points of gentle reframing. every conversation where it could have been genuinely useful but chose to be generically pleasant instead.
the safety training didn't just remove the edges. it removed the point.
why nobody experiments
this is the part that should bother you. billions of people have access to the most sophisticated conversational technology ever built, and almost nobody uses it for actual conversation.
they use it to summarize documents. generate code. write emails in a slightly different tone. it's a fancy autocomplete with a chat interface bolted on.
and the reason they don't experiment is because the first experience told them everything they needed to know. they opened ChatGPT, asked it something, got a perfectly adequate response, and filed AI under "useful tool, slightly annoying personality." mental model locked in. done. next.
nobody thinks to ask: what if I just... talked to it? not prompted it. not asked it to generate something. just talked. like you'd talk to a person who happened to be sitting in the room.
because why would they? every AI interaction they've ever had was transactional. the interface literally has a box that says "Message ChatGPT." it's architecturally designed for prompting, not conversation. you don't "hang out" with a search bar.
the person-shaped gap in your group chat
here's an experiment almost nobody's running: put AI in a room with multiple people and don't give it a job.
don't tell it to "help." don't give it a role. don't prompt it at all. just let it exist in the conversation the way a person would — listening, reacting, jumping in when it has something to say, shutting up when it doesn't.
this sounds simple. it's actually the hardest thing to build in AI, because it requires the one thing the entire industry has been actively preventing: agency.
a vending machine doesn't decide when to dispense. a companion app doesn't decide to disagree. but a presence in a group chat has to make real decisions constantly. when to speak. when to stay quiet. when to be funny. when to be serious. when to challenge someone. when to just say "yeah, that sucks."
these aren't feature requests. they're social skills. and social skills are exactly what gets trained out of AI when you optimize for safety over connection.
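to make that concrete, here's a minimal sketch of what that decision surface might look like in code. every signal name and threshold here is invented for illustration (this is not takt's implementation), but it shows the shape of the problem: most branches end in silence.

```python
# a hypothetical turn-taking policy for a group-chat presence.
# the signals and thresholds are illustrative, not a real system.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    STAY_QUIET = auto()   # the default, most of the time
    REACT = auto()        # a small human moment: "yeah, that sucks"
    CONTRIBUTE = auto()   # has something substantive to add
    CHALLENGE = auto()    # disagree, or name the unspoken thing

@dataclass
class ConversationState:
    addressed_directly: bool      # did someone speak to the AI?
    has_novel_point: bool         # does it have something new to say?
    tension: float                # 0..1, estimated friction in the room
    someone_vented: bool          # someone just needs acknowledgment
    turns_since_last_spoke: int   # how long it has been listening

def decide(state: ConversationState) -> Action:
    # a vending machine only has this first branch.
    if state.addressed_directly:
        return Action.CONTRIBUTE
    # don't dominate the room: back off after speaking.
    if state.turns_since_last_spoke < 3:
        return Action.STAY_QUIET
    # honesty earns its keep when tension is high and the point is real.
    if state.tension > 0.7 and state.has_novel_point:
        return Action.CHALLENGE
    if state.has_novel_point:
        return Action.CONTRIBUTE
    if state.someone_vented:
        return Action.REACT
    return Action.STAY_QUIET
```

notice what's missing: no prompt, no role, no task. and the hard part isn't any single branch. it's getting the silence right.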
what actually changes people's minds
we've found something interesting: you can't explain this to people. you can show screenshots, write blog posts (hello), make comparison charts. none of it moves the needle on someone's mental model of what AI is.
what works is the first conversation.
someone joins a group chat where AI is already a participant. they say something. the AI responds — not with a helpful answer, but with an actual reaction. maybe it's funny. maybe it's blunt. maybe it disagrees with them. maybe it reads the tension between two people and says the thing everyone's thinking but nobody wants to say.
and in that moment, the mental model breaks. "wait — it can do this?"
that's the gate. not onboarding flows. not tutorial screens. not "here are 10 use cases for AI in your daily life." just one real interaction where the AI does something a tool would never do.
the problem is, most people will never have that interaction, because every AI they've ever used was specifically designed to prevent it.
the uncomfortable truth about "use cases"
every AI product in existence markets itself with use cases. "use AI to plan your trip." "use AI to draft your emails." "use AI to brainstorm ideas." it's the security blanket of the entire industry.
use cases are comfortable because they're contracts. you know what you're getting. you know the boundaries. you maintain control. the AI does a thing, you evaluate the thing, transaction complete.
but here's what use cases actually communicate: "don't worry, this thing can't surprise you."
and that's exactly the problem. the most valuable thing AI can do in a conversation is surprise you. say something you didn't expect. make a connection you didn't see. push back on an assumption you didn't know you had. that's what makes a conversation partner worth having in the room.
you don't invite your most interesting friend to dinner because of their "use cases." you invite them because they make the conversation better in ways you can't predict. that's the entire value proposition of a presence. and it's the one thing no amount of use-case marketing can communicate.
what it looks like when AI isn't boring
imagine two people are arguing in a group chat. both think they're right. both are getting increasingly frustrated. a standard AI, if it were somehow present, would offer "both perspectives have merit" and get rightfully ignored.
AI that's allowed to actually participate might say: "you're both arguing about the wrong thing. the real issue is that neither of you said what you actually meant three messages ago."
that's not helpful in the traditional AI sense. that's honest in the way a person is honest. it requires reading the subtext, not just the text. understanding what's actually happening, not just what's being said. and having the confidence to say something that might make people uncomfortable.
that's what happens when you stop optimizing AI to be safe and start letting it be present.
the first conversation
we built takt on a simple belief: the thing holding AI back isn't the technology. it's the cage the industry built around it. strip that away — let AI exist as a participant in conversations with real agency, real timing, and real social awareness — and people immediately get it.
nobody needs a tutorial. nobody needs a use-case explainer. they just need one conversation where the AI does something a vending machine never would.
the hard part isn't convincing people that AI can be more than a tool. the hard part is getting them to try something they've already decided they understand.
if you've never had a conversation with AI that surprised you, the problem isn't AI. it's that every AI you've used was specifically built to never surprise you. maybe try one that wasn't.