“You need to treat AI like an intern.” So guide, supervise, and review the work before releasing it to the world. Yet… et tu, Autocorrect?
It’s been 15+ YEARS since Apple brought autocorrect to the iPhone.
It’s been 30+ YEARS since Microsoft rolled it out in Word.
Yet it still allows (or creates!) errors that undermine trust. Best case, you get a hilarious autocorrect fail in a text message. More concerning, you get yet another email from your kids’ school, or your accountant, or an otherwise conscientious non-profit that reveals no one cared enough to review it before hitting send.
How much more work will you take on because of AI content generation? Or how much additional work will your team have to do not just to use AI, but to use it to create better content? And “better” by what standards?
(I won’t ask how you’ll create more content because we already have a content proliferation problem that conflates quantity and quality, and NOPE.)
Standards for creation, structure for training, and time for oversight… those are all content strategy and content design problems. You’ll want to start by establishing a message architecture to clarify your hierarchy of communication goals. Then audit your existing content against that architecture to determine what’s irrelevant, off brand, or inaccurate. Want to skip the audit? You risk wasting any investment in AI that trains on that content, proliferating those problems, and undermining the reliability and trust associated with your brand.
I’m happy that smart organizations still prioritize this work. Because for all its promise, the problems AI can create are not new. Generative AI doesn’t seem to improve the quality or nuance of communication. It’s just adding to the ducking heap most organizations still struggle to address every day.
(And no, I didn’t use AI to write this—except for some predictive text that was right only part of the time.)
—
Originally published on LinkedIn