[Image: a semi-realistic, Pixar-like scene representing "the benefits of generative AI," in bright colors, featuring happy humans and robots working together.]

(Image by DALL-E. Prompt: Create an image representing the concept “the benefits of generative AI.” The style should be semi-realistic. Colors should be bright. Include happy humans and robots working together, unicorns, rainbows, just an explosion of positive energy. The image should embrace positive aspects of AI. No text is necessary. The setting is a lush hillside.)


This is the second in a four-part series of posts on IA and AI, mainly inspired by talks at IAC24, the Information Architecture Conference. Read Part 1 here.

Before I get into the problems with Generative AI (GenAI), I want to look at what it’s good for. As Jorge Arango has pointed out, there was a lot of hand-wringing and Chicken-Littleing about GenAI at IAC24 – and justifiably so – yet not a lot of acknowledgement of the benefits. So let’s look at the useful things that GenAI can do.

In her IAC24 talk “Structured Content in the Age of AI: The Ticket to the Promised Land?,” Carrie Hane offered the following list of things that Generative AI could do well:

  • Summarizing documents
  • Synthesizing information
  • Editing written content
  • Translation
  • Coding
  • Categorization
  • Text to speech / speech to text
  • Search
  • Medical research and diagnosis

This seems like an excellent starting point (I might only add image generation to the above). We might want to caveat this list a bit, though. For one thing, we should remain skeptical of the use of GenAI in specific professional applications such as medical research and coding, at least until we've had time to see whether we can poke holes in the extraordinary claims coming from AI researchers. Despite the legitimate promise of these tools, we're still learning what AI is actually capable of and how we need to use it. There may well be good uses for these tools in industry verticals, but we're almost certainly seeing overblown claims at this point.

For another thing, as Carrie would go on to point out, in many cases getting useful, trustworthy output from GenAI requires some additional technological support, such as integrating knowledge graphs and retrieval-augmented generation (RAG). These technologies have yet to become well understood or thoroughly integrated with AI models.
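To make the RAG idea concrete, here's a toy sketch of the pattern: retrieve the stored passages most relevant to a question, then build a prompt that forces the model to answer from those passages rather than from whatever it absorbed in training. Everything here – the documents, the bag-of-words "retriever," the prompt wording – is a hypothetical stand-in for the embedding models and vector stores a real system would use:

```python
# Toy sketch of retrieval-augmented generation (RAG).
# All data and names here are hypothetical; a real system would use
# an embedding model and a vector database instead of bag-of-words.

from collections import Counter
import math

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Premium plans include priority phone and email support.",
]

def vectorize(text: str) -> Counter:
    """Term counts stand in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using only the context below. "
        "If the answer isn't there, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("What is the refund window?"))
```

The payoff is that the model's answer can be checked against the retrieved passages, which is a large part of what makes RAG output more trustworthy and auditable.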

In a recent online course, Kirby Ferguson listed some additional uses for GenAI:

  • Brainstorming, problem-solving, and expanding ideas
  • Understanding and incorporating diverse experiences
  • Generating pragmatic titles with good keywords
  • Proofreading documents
  • Creating project schedules and realistic time estimates
  • Outlining and initial content structuring

Each of these use cases follows one of two patterns:

  1. Process some supplied text in a specific way according to my directions. I’ll then review the output and do something with it. (And possibly feed it back to the AI for more processing.)
  2. Output some starting text that I can then process further according to my needs.

In other words, we’re not handing control of anything to the AI engine. We’re using it as a tool to scale up our ability to do work we could have done without the AI. GenAI is a partner in our work, not a replacement.
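
As a concrete illustration of the first pattern, here's a minimal sketch using the OpenAI Python client (the model name, prompt, and file name are placeholders; any chat-style API would work the same way). I supply the text and the directions; the output is a draft that I review:

```python
# Pattern 1: process text I supply, according to my directions.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment.

from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    """Return the model's draft summary of text I provide."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

with open("meeting_notes.txt") as f:  # hypothetical input file
    draft = summarize(f.read())

print(draft)  # I review (and possibly re-prompt) before using this anywhere
```

The second pattern simply reverses the flow: the model produces starting text – an outline, say, or a list of candidate titles – and the human does the processing from there.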

I don’t think you can overstate the real benefits of these use cases. For rote, repetitive work, or for things like research or analysis that’s not novel and is relatively straightforward but time-consuming, GenAI is a real boon. As long as there’s a human taking responsibility for the end product, there are a lot of use cases where GenAI makes sense and can lead to significant productivity and quality-of-life gains.

And a lot of the use cases listed above are like that: time-consuming, boring, energy-draining tasks. Anything that relieves some of that mental drudgery is welcome. This is why GenAI has gained so much traction over the past year or so: because it’s actually quite useful.

I want to be clear: there’s a lot of hype around AI, and a lot of its benefits have been overstated or haven’t been studied enough to understand them thoroughly. There are a lot of claims around AI’s utility in everything from drug manufacturing, to medical diagnosis, to stopping climate change, to replacing search engines, to replacing whole categories of workers. Many – maybe most – of these claims will turn out to be overblown, but there will still be significant benefits to be had from the pursuit of AI applications.

The hype around Generative AI is frequently compared to the hype around autonomous vehicles a decade ago. Full self-driving turns out to be really, really hard to accomplish outside of certain very narrow use cases. Autonomous vehicles that can go safely anywhere in any conditions may be impossible to create without another generational shift in technology. But the improvements to the driving experience that we've gained in pursuit of fully autonomous driving are real and have made operating a car easier and safer.

I think it’s likely that AI – Generative AI, in particular – will turn out to be more like that: a useful tool even if it falls far short of today’s breathless predictions.

So, GenAI excels when it’s used as a thinking tool, a partner working alongside humans to scale content generation. And like any tool that humans have ever used, we have to use GenAI responsibly in order to realize the full benefits. That means we have to understand and account for its real, serious weaknesses and dangers. In Kat King’s words, we have to understand when AI is a butter knife and when it’s a rusty bayonet. In Part 3 of this series, we’ll contemplate the pointy end of the bayonet.