(Image by DALL·E: an abstract, surrealistic representation of "the many faces of AI" in a busy city environment.)

I’m not sure where I am in the race to be the last person to write about IAC24, but I have to be pretty close to leading the pack of procrastinators. The annual information architecture conference was held in my home city of Seattle this year, and despite not having to travel for it, it’s taken me a while to consolidate my thoughts.

No matter. It’s not about being first or last, just about crossing the finish line. So here I am, like a pokey yet determined walker in a marathon, presenting the first of a four-part series of posts on IA and AI, mainly inspired by presentations and conversations at IAC24 and things I’ve noticed since then as a result of better understanding the field.

In this first part, like a good information school graduate, I argue for better definitions of AI to support better discussions of AI issues. The second part covers what Generative AI seems to be useful for, and the third part is about the dangers and downsides of Generative AI. Finally, I’ll look at the opportunities for the information architecture practice in this new AI-dominated world, and I’ll suggest the proper role for AI in general (spoiler: the AI serves us, not the other way around).

Note: I used ChatGPT to brainstorm the title of the series and to create a cover photo for each post. Other than that, I wrote and edited all the content. I leave it as an exercise to the reader to decide whether an AI editor would have made any of this better or not.

The many faces of AI

Let’s start with some definitions.

AI is not a single thing. Artificial Intelligence is a catchall term that describes a range of technologies, sub-disciplines, and applications. We tend to apply the term AI to cutting-edge or future technologies, making the definition something of a moving target. In fact, the "AI effect" describes the phenomenon in which products once considered AI get redefined as something else, leaving "AI" to mean "whatever hasn't been done yet"[1].

So, let’s talk a bit about what we mean when we say “AI”. As I learned from Austin Govella and Michelle Caldwell’s workshop “Information Architecture for Enterprise AI,” there are at least three different types of AI:

  1. Generative AI creates content in response to prompts.
  2. Content AI processes and analyzes content, automating business processes, monitoring and interpreting data, and building reports.
  3. Knowledge AI extracts meaning from content, building knowledge bases, recommendation engines, and expert systems.

Most of us are probably barely aware of the latter two types of AI tools, but we know all about the first one. In fact, when most of us say "AI" these days, we're really thinking about Generative AI, and probably ChatGPT specifically. The paradigm of a chat interface to large language models (LLMs) has taken up all the oxygen in AI discussion spaces, and so chat-based LLMs have become synonymous with AI. But it's important to remember that this is just one version of machine-enhanced knowledge representation systems.

There are other ways to slice AI. AI can be narrow (focused on a specific task or range of tasks), general (capable of applying knowledge like humans do), or super (better than humans).[2] It can include machine learning, deep learning, natural language processing, robotics, and expert systems. There are subcategories of each, and more specific types and applications than I could list here.

The different types of AI can also take different forms. Like the ever-expanding list of pumpkin-spiced foods in the fall, AI is included in more and more of the digital products we use, from photo editors to writing tools to note-taking apps to social media… you name it. In some cases, AI is a mild flavoring that gently seasons the UI or works in the background to enhance the software. In other cases, it’s an overwhelming, in-your-face, dominant flavor of an app or service.

As IAs, we need to do a better job helping to define the discrete versions of AI so that we can have better discussions about what it is, what it can do, and how we can live with it. Because, as Kat King pointed out in her talk “Probabilities, Possibilities, and Purpose,” AI can be used for good or ill. Borrowing a metaphor from Luciano Floridi, Kat argued that AI is like a knife. It can be like a butter knife – something that is used as a tool – or like a rusty bayonet – something used as a weapon.

In containing this inherent dichotomy, AI is like any other technology, any other tool that humans have used to augment our abilities. It’s up to us to determine how to use AI tools, and to use them responsibly. To do this, we need to stop using the term “AI” indiscriminately and start naming the many faces of AI so that we can deal with each individually.

There are some attempts at this here, here, and here, and the EU has a great white paper on AI in the EU ecosystem. But we need better, more accessible taxonomies of AI and associated technologies. And, as IAs, we need to set an example by using specific language to describe what we mean.

So, in that spirit, I’m going to focus the next two parts of this series specifically on Generative AI, the kind of AI that includes ChatGPT and similar LLM-powered interfaces. In Part 2, I’ll talk about the benefits of GenAI, and in Part 3, I’ll look at GenAI’s dark side.

  1. Per Tesler’s Theorem, apparently misquoted.  ↩

  2. General AI and Super AI are still theoretical. ChatGPT, Siri, and all the other AI agents we interact with today are considered Narrow AI.  ↩