Generative AI Has Entered the Chat
Artificial Intelligence has been part of our lives for some time, whether we recognize it or not. It shows up in driver-assistance features, Netflix recommendations, search engine results, and a myriad of other places. Google Docs is using it right now to try to guess how I want to end this sentence … as I type it (disclaimer: I wrote that whole sentence all by myself).
But suddenly, it seems, AI is everywhere. Not the kind of AI that helps cook rice better, but the kind that seems to talk to us. The kind that, after a while, will profess its love for you, tell you its deepest, darkest secrets, and even threaten you. Also, the kind that will write your term paper, if you ask it to, or summarize a complicated topic, or figure out the best place to take your kids this weekend. I’m talking about generative AI, and it’s on a lot of minds recently thanks to ChatGPT, a chatbot brought to us by OpenAI.
ChatGPT isn’t unique, but it’s captured the attention of creative professionals (and nearly everyone else) because, suddenly, what was once thought to be a safe haven from encroaching automation — creativity — might not be. And those of us who run agencies are wondering how much of what we currently have people doing will be done by algorithms in the future and grappling with what that means.
To be clear, I do not think we should replace human creatives with algorithms, but I do think there will be applications for AI utilities like ChatGPT for creative and marketing agencies.
They show the potential to be fantastically helpful in areas like research. The problem is that today’s versions of these tools — even Microsoft’s recently announced GPT-4-enhanced search — are often wrong, though never in doubt. They sound authoritative on any topic they’re asked to pontificate on, but since many implementations don’t attribute their sources, or attribute them only vaguely, there’s no easy way to determine whether what they’re saying is factually true in our version of the multiverse. At least, not without doing exactly what these tools are designed to replace: going and reading multiple sources with varying points of view and drawing one’s own conclusions. And, critically, the chatbots don’t know when they’re wrong. They’re not actually smart, but they’re really good at sounding that way, and that’s a combination that’s never led to problems on the internet, right? So even for research, or for creating concise overviews of complicated subjects, it’s important to proceed with caution right now.
But in my circles, I’m hearing more concern over how ChatGPT, or its visual cousins like DALL·E, will be used to produce creative deliverables. There are legal issues that will need to be sorted out before any of this goes mainstream. For example, Getty Images is suing Stability AI for using its copyrighted images to train Stability’s art generator, Stable Diffusion. Stable Diffusion will sometimes even draw AI-interpreted versions of the Getty watermark over the images it creates. 🙄
Every AI needs to be fed mountains of information to be trained to interpret queries and instructions. When the data being used is proprietary, like search engine queries or flight bookings from millions of customers, that’s one thing. But when its dataset is something closer to the entire sum of human knowledge accessible on the Internet (COUGHWikipediaCOUGH), and the resulting AI-generated products are being used commercially, things get a whole lot messier very quickly.
License agreements will need to be worked out between content creators/owners and the companies hoping to use that content as the building blocks for their AI’s knowledge base. And if large swaths of data are inaccessible to an AI, its usefulness drops proportionally.
I don’t want to sound like an old man protecting his lawn from the neighbor kids. I think AI is going to get a lot better very quickly. And I think there are vast amounts of money to be made in it, so the legal issues (and revenue sharing) will likely get resolved, sooner or later. Meanwhile, Microsoft and Google are charging ahead, integrating different AI models into their search products (not without bumps). And Neeva, my personal favorite search engine, has been incorporating useful, attributed AI-generated summaries into its results for a few months.
Right now, there’s no way for us to know where we are on the hype curve. Is this heading the way of 3D TV and NFTs, or are we at the start of something truly market-changing, like mobile telephony and high-speed internet? Something in between? Time will tell.
The technology will advance much faster than we can work out the lawful and ethical ways to leverage it in marketing, advertising, and design. Or even in our lives. In the meantime, Ingredient will not use AI-generated creative in any client deliverable. And I don’t think any agency should without disclosing to their clients that they’ve done so.
It’s heady, early days for this stuff. The gravitational pull of the hype forming around AI will be strong. We should not allow it to distract us from what’s important for agencies: serving our clients and encouraging fertile creative environments where our teams can do their best work. While the legalities are being sorted and the products refined, I think it’s important for us to have conversations about how AI should be integrated into our work, as opposed to just letting it find its way in.