Vulnerability is often framed as a weakness. But what if it’s the differentiator that helps human content creators earn trust, even as we add AI to our tools?
We’ve seen example after example (see Karen McGrane’s breathtaking Tumblr) of generative AI acting with the undue confidence typically found among self-anointed thought leaders with more fans or followers than honest friends. But what does it look like to avoid overpromising and instead operate with the humility to keep learning? That’s what good human writers and consultants do: we bring humility and vulnerability to the process, qualities that foster trust.
While GenAI technology can help content creators gather and analyze existing content, it’s regurgitative, not generative. The LLMs that fuel GenAI consume and reassemble only content that already exists; unlike human content creators, no GenAI tool has the imagination to think beyond what’s already been published, or to consider the accuracy or authorial bias of what it consumes without human instruction. These limitations mean LLMs eventually consume content they themselves created, raising the threat of model collapse: as models retrain on their own synthetic output, their results grow less accurate and less varied. We human content creators know what this feels like in our own work: when we’re writing in circles or hit a wall in our research, we cringe with self-awareness. That 😬 emoji is all too real.
But humility eludes algorithms. GenAI doesn’t know what it doesn’t know and quickly moves from consumption to assumption to make up the difference.
Whether it’s AI’s befuddling take on sauce vs. dressing (“dressings are used to protect wounds” 🤔), or speaker Elizabeth Laraki’s discovery that AI modified her headshot to lower her neckline, no sense of humility or ethics prevents AI from making up content, even if that content perpetuates dangerous or misleading information.
In contrast, humans, in our conscience-laden doubt, guilt, and uncertainty, bring ethical standards to what we publish and an awareness of the limits of our knowledge. Of course, we don’t always act on that awareness, but our self-awareness and vulnerability help foster trust as audiences gain insight into how research evolves and knowledge grows. Content creators who incorporate research from GenAI tools and indicate the limitations of that research can earn similar trust. As we grow more comfortable incorporating AI as assistive technology in the coming years, we’ll need other ways to acknowledge the limits of that assistance. That’s the only way we’ll regain a sense of control and relegate GenAI to being a tool in our work, not a replacement for it.
—
Originally published on LinkedIn
Image © Food & Wine