This is sad: the Living Computer Museum is closing for good. This was such a great place, filled with everything from old mainframes to Apple IIs to NeXT boxes to C64s… just about anything you could think of. And most of them you could actually touch and use! Losing this museum is a real bummer.
Information Architecture in the Age of AI, Part 4: The IA-powered AI Future
or: I, For One, Welcome Our New Robot Overlords
This is the fourth in a four-part series of posts on IA and AI, mainly inspired by talks at IAC24. Read Part 1, Part 2, and Part 3.
“As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential.” -Ethan Mollick, Co-Intelligence
As we’ve seen in the previous posts in this series, AI is seriously useful but potentially dangerous. As Kat King put it, AI is sometimes a butter knife and sometimes a bayonet.
It’s also inevitable. As Bob Kasenchak pointed out at IAC24, AI has become a magnet for venture capital and investment, and companies are experiencing major FOMO; nobody wants to be left behind as the GenAI train pulls out of the station.[1]
So, if that’s true, what do we do about it? Specifically as information architects: what does the IA practice have to say about an AI-spiced future? I think we need to do what we’ve always done: make the mess less messy and bring deep, systemic thinking to AI-ridden problems.
In short, we need to:
- Define the damned thing
- Make AI less wrong
- Champion AI as a tool for thinking, not thought.
Define the damned (AI) thing
As I suggested in Part 1 of this series, AI needs to be better understood, and no one is better at revealing hidden structures and defining complex things than information architects.[2] Let’s work as a community to understand AI better so that we can have and encourage better conversations around it.
To do that, we need to define the types of AI, the types of benefits, the types of downsides, and the use cases for AI. In addition, catalogs of prompts and contexts, lists of AI personas, taxonomies for describing images, and so forth – all easily accessible – would help improve the ability of users to interact with LLM-based AI agents.
Here’s a starter list of things any of us could do right now:
- Figure out your own personal taxonomy of AI. Let’s talk about GenAI or Artificial General Intelligence or Large Language Models as appropriate. Don’t fall into the trap of saying just “AI” when you want to talk about a specific AI technology. To get you started with some AI taxonomies in progress, try here, here, and here.
- Get clear on the risks of AI, and the nuanced risks for each type of AI. Define what AI does well, and what it does badly. Follow Ethan Mollick, Kurt Cagle, Mike Dillinger, and Emily Bender, for a start. They’ll point the way to more experts in the field.
- Talk with clients and colleagues about AI definitions. Help them get clear on what they mean when they say “AI.”
- Help out with projects like “Shape of AI,” which is creating a pattern library of AI interactions.[3] For instance, GenAI interfaces can’t just be an empty text box. IAs know that browse behavior is a necessary complement to search behavior. How do we ensure that’s part of a GenAI experience?
- Create and distribute resources like this ChatGPT Cheat Sheet to improve people’s ability to get the most out of GenAI experiences.
- Think about what a resource might look like that listed and evaluated use cases for AI. How might we help people understand how to use AI better?
Make AI less wrong
Y’all, this is our moment. As IAs, I mean: the world needs us now more than ever. LLMs are amazing language generators, but they have a real problem with veracity; they make up facts. For some use cases, this isn’t a big issue, but there are many other use cases where there’s real risk in having GenAI make up factually inaccurate text.
The way to improve the accuracy and trustworthiness of AI is to give it a solid foundation. Building a structure to support responsible and trustworthy AI requires tools that IAs have been building for years. Things like these (see the sketch after the list):
- ontology - an accurate representation of the world, which feeds:
- knowledge graphs - structured, well-attributed, and well-related content; which needs:
- content strategy - understanding what content is needed, what’s inaccurate or ROTting, what’s missing, and how to create and update it; which needs:
- user experience - to understand what the user needs and how they can interpret and use AI output.
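To make the knowledge graph layer of that list a little more concrete, here’s a minimal sketch in Python. The triples, entity names, and lookup function are illustrative assumptions on my part, not any particular product or standard:

```python
# A minimal, hypothetical sketch: a few well-attributed triples standing in
# for a knowledge graph, and a lookup that only answers from those triples.
# The entities and sources are made up for illustration.

from typing import Optional

# (subject, predicate, object, source) -- attribution travels with the fact
TRIPLES = [
    ("Acme 3000", "hasWarrantyPeriod", "24 months", "product-db:rev-2024-05"),
    ("Acme 3000", "isCompatibleWith", "Acme Dock v2", "support-kb:article-112"),
]

def grounded_answer(subject: str, predicate: str) -> Optional[str]:
    """Answer only from the graph; otherwise admit there's no supported answer."""
    for s, p, o, source in TRIPLES:
        if s == subject and p == predicate:
            return f"{o} (source: {source})"
    return None  # the honest "I don't know" that pure generation can't give you

print(grounded_answer("Acme 3000", "hasWarrantyPeriod"))  # "24 months (source: ...)"
print(grounded_answer("Acme 3000", "hasBatteryLife"))     # None -- no supported answer
```

The point isn’t the code; it’s that facts, their relationships, and their sources are modeled explicitly, which gives an AI system something solid to be checked against.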
As Jeffrey MacIntyre said at IAC24, “Structured data matters more than ever.” As IAs, our seat at the AI table is labeled “data quality”.
To get there, we need to define the value of data quality so that organizations understand why they should invest in it. At IAC24, Tatiana Cakici and Sara Mae O’Brien-Scott from Enterprise Knowledge gave us some clues to this when they identified the values of the semantic layer as enterprise standardization, interoperability, reusability, explainability, and scalability.
As an IA profession, we know this is true, but we’re not great at talking about these values in business terms. What’s the impact on the bottom line of interoperable or scalable data? Defining this will solidify our place as strategic operators in an AI-driven world. (For more on how to describe the value of IA for AI, pick up IAS18 keynoter Seth Earley’s book “The AI-Powered Enterprise,” and follow Nate Davis, who’s been thinking and writing about the strategic side of IA for years.)
Finally, as Rachel Price said at IAC24, IAs need to be the “adults in the room” ensuring responsible planning of AI projects. We’re the systems thinkers, the cooler heads with a long-term view. In revealing the hidden structures and complexities of projects, we can help our peers and leaders recognize opportunities to build responsible projects of all kinds.[4]
AI as a tool for thinking, not thought
In 1968, Robert S. Taylor wrote a paper titled “Question-Negotiation and Information Seeking in Libraries.” In it, he proposed a model for how information-seekers form a question (an information need) and how they express that to a reference librarian. Taylor identified the dialog between a user and a reference librarian (or a reference system) as a “compromise.” That is, the user with the information need has to figure out how to express their need in a way that the librarian (or system) can understand. This “compromised” expression may not perfectly represent the user’s interior understanding of that need. But through the process of refining that expression with the librarian (or the system), the need may become clarified.
This is a thinking process. The user and the librarian both benefit from the process of understanding the question, and knowledge is then created that both can use.
In his closing keynote at IAC24, Andy Fitzgerald warned us that “ChatGPT outputs things that LOOK like thinking.” An AI may create a domain model or a flow chart or a process diagram or some other map of concepts; but without the thinking process behind them, are they truly useful? People still have to understand the output, and understanding is a process.
As Andy pointed out, the value of these models we create is often the conversations and thinking that went into the model, not the model itself. The model becomes an external representation of a collective understanding; it’s a touchstone for our mental models. It isn’t something that can be fully understood without context. (It isn’t something an AI can understand in any sense.)
AI output doesn’t replace thinking. “The thinking is the work,” as Andy said. When you get past the hype and look at the things that Generative AI is actually good for – summarizing, synthesizing, proofreading, getting past the blank page – it’s clear that AI is a tool for humans to think better and faster. But it isn’t a thing that thinks for us.
As IAs, we need to understand this difference and figure out how to ensure that end users and people building AI systems understand it, too. We have an immensely powerful set of tools emerging into mainstream use. We need to figure out how to use that power appropriately.
I’ll repeat Ethan Mollick’s quote from the top of this post: “As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential.”
If we understand AI deeply and well, we can limit its harm and unlock its potential. Information Architecture is the discipline that understands information behavior, information seeking, data structure, information representation, and many other things that are desperately needed in this moment. We can and should apply IA thinking to AI experiences.
Epilogue
I used ChatGPT to brainstorm the title of this series and, of course, to generate the images for each post. Other than that, I wrote all the text without using GenAI tools. Why? I’m sure my writing could have been improved and I could have made these posts a lot shorter if I had fed all this into ChatGPT. I know it still would have been good, even. But it wouldn’t have been my voice, and I don’t think I would have learned as much.
That’s not to say I have a moral stance or anything against using AI tools to produce content. It’s just a choice I made for this set of posts, and I’m not even sure that it was the right one. After all, I’m kind of arguing in this series that the responsible use of AI is what we should be striving for, not that using it is bad or not using it is good (or vice versa). But I guess, as I said above, echoing Andy Fitzgerald, I wanted to think through this myself, to process what I learned at IAC24. I didn’t want to just crank out some text.
I do believe that with the rise of AI-generated text and machine-generated experiences, there’s going to be an increasing demand for authentic human voices and perspectives. You can see, for example, how search engines are becoming increasingly useless these days as more AI-generated content floods the search indexes. Human-curated information sources may become more sought-after as a result.
Look, a lot of content doesn’t need to be creative or clever. I think an AI could write a pretty competent Terms of Service document at this point. No human ever needs to create another one of those from scratch. But no GenAI is ever going to invent something truly new, or have a new perspective, or develop a unique voice. And it is never going to think. Only humans can do that. That’s still something that’s valuable.
So, use GenAI. Use it a lot. Experiment with it and figure out what it’s really good at. I think that’s our responsibility with these tools: to understand them. But don’t forget to use your own voice, too. No AI is going to replace you, but they might just make you think faster and better. Understanding AI and using it to make a better you… that’s the best of all outcomes.
1. Bob wanted us to pump the brakes on building new AI experiences, but I think that’s pretty unlikely at this point. Sorry, Bob. ↩
2. Our inability to define our own profession aside, of course. Exception that proves the rule, etc. ↩
3. I learned about this project on Jorge Arango’s podcast, The Informed Life. If you’re not already following Jorge, what are you waiting for? ↩
4. Rachel’s full talk is available on her site. ↩
The Onion, natch:
Biden Signs Executive Order To Deport All 340 Million Americans And Start From Scratch
Information Architecture in the Age of AI, Part 3: The Problems With GenAI
(Image by DALL-E. Prompt: An impressionistic image representing ‘the problems of generative AI.’ The colors are dark and muted, creating a moody and foreboding atmosphere. The scene is a dark alley in a gritty, noir-style city. A shadowy figure with a hat and trench coat lurks as if waiting for his next victim.)
This is the third in a four-part series of posts on IA and AI, mainly inspired by talks at IAC24, the information architecture conference. Read Part 1 and Part 2.
In this post, I’m riffing on the work of several IAC24 speakers, predominantly presentations by Emily Bender and Andrea Resmini.
In Part 2 of this series, I looked at the benefits of Generative AI. The use cases at which GenAI excels are generally those in which the AI is a partner with humans, not a replacement for humans. In these cases, GenAI is part of a process that a human would do anyway and where the human is ultimately in control of the output.
But GenAI output can have numerous issues, from hallucinating (making up false information) to generating content with biases, stereotypes, and other subtle and not-so-subtle errors. We can be lulled into believing the AI output is true because it’s presented as human-like language, so it seems like a human-like process created it. But although GenAI can create convincing human-like output, at its core it’s just a machine. It’s important to know the difference between what we think AI is and what it actually is, and between what we think it’s capable of and what it’s actually capable of.
AI isn’t human
Here is basically what Generative AI does: it makes a mathematical representation of some large corpus of material, such as text or images. It then makes new versions of that material using math. For example, if you feed an LLM (large language model) a lot of examples of text and have it run a bunch of statistics on how different words relate to each other, you can then use those statistics to recombine words in novel ways that are eerily human-like. A process called fine-tuning will teach the LLM the boundaries of acceptable output, but fine-tuning is necessarily limited and can’t always keep GenAI output error- or bias-free. (And the process may introduce new biases.) LLMs produce statistically-generated, unmoderated output that sounds human-generated.
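As a toy illustration of “running statistics on how words relate to each other,” here’s a minimal bigram sketch in Python. Real LLMs are enormously more sophisticated, so treat this purely as an analogy for the statistical, non-thinking nature of the process:

```python
# Toy illustration only: a bigram model that "learns" which word tends to
# follow which, then recombines them. Real LLMs are vastly more complex,
# but the spirit -- statistics in, plausible-sounding text out -- is similar.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Emit words by sampling from the observed next-word statistics."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the"
```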
Therein lies the main problem: Generative AI produces human-like output while not being human, or self-aware, or anything like an actual thinking thing. As Andrea Resmini put it in his talk at IAC24, “AI is technology that appears to be intelligent.” Emphasis on “appears.” Since GenAI output seems human but isn’t, it tricks our brains into believing that it’s more reliable and trustworthy than it actually is.
Humans want to see human qualities in everything. We anthropomorphize; we invest objects in the world with human attributes. We see human faces in things where they don’t exist.[1] We get easily wrapped up in the experience of seeing human qualities in other things and forget that the thing itself is just a thing; we’ve projected our own experience onto it.
It’s not too dissimilar from experiencing art. We bring meaning — our filters, our subjective experience — to the experience of art. We invest art with meaning. There is no experience of art without our participation in it. Your experience of a piece of culture might be very different from mine because of our different life experiences and thinking patterns and the different ways we interact with the work.
In the same way, when we participate in AI experiences we invest those experiences with meaning that comes from our unique subjective being. GenAI isn’t capable of delivering meaning. We bring the meaning to the experience. And the meaning we’re bringing in the moment is based on how we’ve learned to interact with other humans.
In a recent online article, Navneet Alang wrote about his experience of asking ChatGPT to write a story. He sought an explanation for his sensation of how human-like the story felt:
Robin Zebrowski, professor and chair of cognitive science at Beloit College in Wisconsin, explains the humanity I sensed this way: “The only truly linguistic things we’ve ever encountered are things that have minds. And so when we encounter something that looks like it’s doing language the way we do language, all of our priors get pulled in, and we think, ‘Oh, this is clearly a minded thing.’”
AI simulates human thought without human awareness
Attributing human thought to computer software has been going on long enough to have a name: The ELIZA Effect. ELIZA was an early computer program that would ask simple questions in response to text input. The experience of chatting with ELIZA could feel human-like – almost like a therapy session – though the illusion would eventually be broken by the limited capacity of the software.
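For flavor, here’s a tiny ELIZA-style exchange sketched in Python. This isn’t Weizenbaum’s original script, just a hypothetical few-rule imitation that shows how shallow the trick can be while still feeling oddly conversational:

```python
# A hypothetical, few-rule ELIZA-style responder: keyword matching and
# canned reflections, with no understanding anywhere in sight.
import re

RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r"\bi am (.+)", "How long have you been {0}?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # the all-purpose fallback

print(eliza_reply("I feel anxious about my job"))
# -> "Why do you feel anxious about my job?"
```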
GenAI is a way better ELIZA, but it’s still the same effect. When we impute human qualities to software, we’re making a category error: we’re equating symbolic computations with the ability to think.
GenAI can simulate the act of communicating intent and make statements that appear authoritative. But, as Dr. Bender said in her IAC24 keynote, AI doesn’t have intent. It can’t know what it’s saying, and it’s very possible for it to say something very confidently that is very wrong and possibly very harmful.
GenAI engines lack (at least) these important human qualities:
- humility - the ability to say “I don’t know”
- judgment - the ability to say “that’s not possible” or “that’s not advisable” or “that’s a racist question”, or to evaluate and question data that doesn’t seem right[2]
- the ability to learn from feedback mechanisms like pain receptors or peer pressure
- sentience - the ability to feel or experience through senses
- cognition - the ability to think
Those last points are really important: Computers can’t think and they can’t feel. They can’t know what they’re doing. They are machines made of code that can consolidate patterns of content and do some fancy math to recombine those patterns in novel ways. No more. And it’s right to be skeptical and maybe even scared of things that can’t reason and have no empathy but that can imitate external human behavior really well. They are not human, just human-like.
And so here’s the problem: if we confuse the human-like output from a GenAI engine with actual human output, we will have imputed all sorts of human-like qualities to that output. We will expect it to have passed through human filters like judgment and humility (because that’s what a human would do, and what we’re interacting with appears human). We will open ourselves up to accepting bad information because it sounds like it might be good. No matter that we might also get good information, if we’re not able to properly discern the bad information, we make ourselves vulnerable to all sorts of risks.
AI training data, however, is human… very, very human
So, human-like behavior without human thought behind it is a big problem with GenAI. Another major problem has to do with the data that’s been used to train the large language models GenAI engines are based on. And for the moment I’m setting aside the ethical questions around appropriating people’s work without notice or compensation. Let’s just focus for now on the issue of data quality.
GenAI scales the ability to create text and images based on what humans have already created. It does this by ingesting vast quantities of human-created content, making mathematical representations of probabilities inherent in that content, and then replaying those patterns in different combinations.
Through this process, GenAI reflects back to us a representation of ourselves. It’s like looking in a mirror in a well-lit bathroom, in a way, because it reveals not only our best qualities but also our flaws and biases. As Andrea Resmini pointed out, data is always dirty. It’s been created and edited by humans, and so it has human fallibility within it. Whatever reality was inherent in the training data will be reflected in the GenAI output. (Unless we take specific steps to hide or moderate those flaws, and those don’t always go well.)
If GenAI is sometimes a bathroom mirror, it’s also sometimes a funhouse mirror, distorting the interactions that it produces due to deficiencies in the training data (and because it is not self-aware and can’t correct itself). For any question that matches a sufficient amount of training data, the GenAI agent can sometimes give a reasonably accurate answer. Where there is little or no training data, a GenAI agent may make up an answer, and this answer will likely be wrong.
It’s garbage in, garbage out. Even if you get the occasional treasure, there’s still a lot of trash to deal with. How do we sort out which is which?
Context cluelessness and psychopathy
So, a GenAI engine might give you a response that’s 100% correct or 0% correct or anywhere in between. How do you know where on the spectrum any given response may lie? Without context clues, it’s impossible to be sure.
When we use search engines to find an answer to a question, we get some context from the sites we visit. We can see if it’s a source with a name we recognize, or if obvious care has been taken to craft a good online experience. And we can tell when a site has been built poorly, or is so cluttered with ads or riddled with spelling and grammar errors that we realize we should move on to the next source.
A GenAI engine smooths all of its sources out like peanut butter on a slice of white bread, so that those context clues disappear into the single context of the AI agent.
And if the agent is responding as if it were human, it’s not giving you human clues to its veracity and trustworthiness. Most humans feel social pressure to be truthful and accurate in most situations, to the best of their abilities. A GenAI agent does not feel anything, much less social pressure. There is nothing in a GenAI engine that can be motivated to be cautious or circumspect or to say “I don’t know.” A GenAI agent is never going to come back to you the next day and say, “You know, I was thinking about my response to you and I think I might have gotten it wrong.”
Moreover, a human has tells. A human might hedge or hesitate, or look you in the eye or avoid your gaze. A GenAI agent won’t. Its tone will be basically the same regardless of the truth or accuracy of its response. This capacity to be completely wrong while appearing confident and authoritative is the most disturbing aspect of GenAI to me.
As GenAI gets more human-like in its behaviors it becomes a more engaging and convincing illusion, bypassing our BS detectors and skepticism receptors. As a result, the underlying flaws in GenAI become more dangerous and more insidious.
Are there even more problems with GenAI? Yeah, a few…
I’ve focused here on some high-level, systemic issues with GenAI, but there are many others. I would encourage you to read this Harvard Business Review article titled “AI’s Trust Problem” for an excellent breakdown of 12 specific AI risks, from ethical concerns to environmental impact. If AI’s lack of humanity alone isn’t enough to make you cautious about how you use it, perhaps the HBR article will do the trick.
All that said…
For all its faults, Generative AI is here to stay, like it or not. Businesses are gonna business, and we humans love a technology that makes our lives easier in some way and damn the consequences. And, as I pointed out in Part 2, GenAI can be truly useful, and we should use it for what it’s good at. My point in going through the pros and cons in such detail is to help set up some structure for how to use GenAI wisely.
So, if we accept that this type of AI is our new reality, what can we do to mitigate the risks and make it a better, more useful information tool? Is it possible information architects have a big role to play here? Stay tuned, dear reader… These questions and more will be answered in the final part of this series, coming soon to this very blog.
1. This is called pareidolia. ↩
2. At IAC24, Sherrard Glaittli and Erik Lee explored the concept of data poisoning in an excellent and entertaining talk titled “Beware of Glorbo: A Use Case and Survey of the Fight Against LLMs Disseminating Misinformation.” ↩
Information Architecture in the Age of AI, Part 2: The Benefits of Generative AI
(Image by DALL-E. Prompt: Create an image representing the concept “the benefits of generative AI.” The style should be semi-realistic. Colors should be bright. Include happy humans and robots working together, unicorns, rainbows, just an explosion of positive energy. The image should embrace positive aspects of AI. No text is necessary. The setting is a lush hillside.)
This is the second in a four-part series of posts on IA and AI, mainly inspired by talks at IAC24, the information architecture conference. Read Part 1 here.
Before I get into the problems with Generative AI (GenAI), I want to look at what it’s good for. As Jorge Arango has pointed out, there was a lot of hand-wringing and Chicken-Littleing about GenAI at IAC24 – and justifiably so – yet not a lot of acknowledgement of the benefits. So let’s look at the useful things that GenAI can do.
In her IAC24 talk “Structured Content in the Age of AI: The Ticket to the Promised Land?,” Carrie Hane offered the following list of things that Generative AI could do well:
- Summarizing documents
- Synthesizing information
- Editing written content
- Translation
- Coding
- Categorization
- Text to speech // Speech to text
- Search
- Medical research and diagnosis
This seems like an excellent starting point (I might only add image generation to the above). We might want to caveat this list a bit, though. For one thing, we should remain skeptical about the use of GenAI in specific professional applications such as medical research and coding, at least until we’ve had time to see if we can poke holes in the extraordinary claims coming from AI researchers. Despite the legitimate promise of these tools, we’re still learning what AI is actually capable of and how we need to use it. There may well be good uses for these tools in industry verticals, but we’re almost certainly seeing overblown claims at this point.
For another thing, as Carrie would go on to point out, in many cases getting useful, trustworthy output from GenAI requires some additional technological support, such as integrating knowledge graphs and RAG (retrieval augmented generation). These technologies have yet to become well understood or thoroughly integrated in AI models.
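For the curious, here’s a rough sketch of the RAG pattern. Everything in it is hypothetical: the documents, the keyword-overlap retrieval (real systems use vector embeddings), and the prompt wording; in a real system the final prompt would go to an LLM rather than to print:

```python
# A minimal, hypothetical retrieval-augmented generation (RAG) skeleton:
# retrieve relevant passages from content you trust, then hand only those
# passages to the model as the basis for its answer.
import re

DOCS = {
    "returns-policy": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 5 to 7 business days.",
    "warranty": "Hardware is covered by a 24-month limited warranty.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval; real systems use vector embeddings."""
    q = tokenize(question)
    ranked = sorted(DOCS.values(), key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

question = "How long is the warranty period?"
context = "\n".join(retrieve(question))

prompt = (
    "Answer the question using ONLY the context below. "
    "If the context doesn't contain the answer, say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)

print(prompt)  # in a real system, this prompt goes to the LLM
```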
In a recent online course, Kirby Ferguson listed some additional uses for GenAI:
- Brainstorming, problem-solving, and expanding ideas
- Understanding and incorporating diverse experiences
- Generating pragmatic titles with good keywords
- Proofreading documents
- Creating project schedules and realistic time estimates
- Outlining and initial content structuring
These use cases fall into one of two patterns (a rough sketch of the first one follows the list):
- Process some supplied text in a specific way according to my directions. I’ll then review the output and do something with it. (And possibly feed it back to the AI for more processing.)
- Output some starting text that I can then process further according to my needs.
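Here’s roughly what that first pattern can look like in code, sketched against the OpenAI Python SDK (v1.x), which I’m using as an assumed example; the file name, directions, and model choice are all mine. The key part is that a human supplies the material and reviews the result:

```python
# Pattern 1, sketched with the OpenAI Python SDK (v1.x) -- an assumption on
# my part; any GenAI chat API would do. The human supplies the text and the
# directions, and the human reviews what comes back before using it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

meeting_notes = open("meeting_notes.txt").read()  # hypothetical text I supply

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Summarize these notes as five bullet points of action items:\n\n"
                   + meeting_notes,
    }],
)

draft = response.choices[0].message.content
print(draft)  # I read it, fix it, and only then does it go anywhere
```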
In other words, we’re not handing control of anything to the AI engine. We’re using it as a tool to scale up our ability to do work we could have done without the AI. GenAI is a partner in our work, not a replacement.
I don’t think you can overstate the real benefits of these use cases. For rote, repetitive work, or for things like research or analysis that’s not novel and is relatively straightforward but time-consuming, GenAI is a real boon. As long as there’s a human taking responsibility for the end product, there are a lot of use cases where GenAI makes sense and can lead to significant productivity and quality-of-life gains.
And a lot of the use cases listed above are like that: time-consuming, boring, energy-draining tasks. Anything that relieves some of that mental drudgery is welcome. This is why GenAI has gained so much traction over the past year or so: because it’s actually quite useful.
I want to be clear: there’s a lot of hype around AI, and a lot of its benefits have been overstated or haven’t been studied enough to understand them thoroughly. There are a lot of claims around AI’s utility in everything from drug manufacturing, to medical diagnosis, to stopping climate change, to replacing search engines, to replacing whole categories of workers. Many – maybe most – of these claims will turn out to be overblown, but there will still be significant benefits to be had from the pursuit of AI applications.
A frequent comparison to the hype around Generative AI is the hype a decade ago about autonomous vehicles. Full self-driving turns out to be really, really hard to accomplish outside of certain very narrow use cases. Autonomous vehicles that can go safely anywhere in any conditions may be impossible to create without another generational shift in technology. But the improvements to the driving experience that we’ve gained in pursuit of fully autonomous driving are real and have made operating a car easier and safer.
I think it’s likely that AI – Generative AI, in particular – will turn out to be more like that: a useful tool even if it falls far short of today’s breathless predictions.
So, GenAI excels when it’s used as a thinking tool, a partner working alongside humans to scale content generation. And like any tool that humans have ever used, we have to use GenAI responsibly in order to realize the full benefits. That means we have to understand and account for its real, serious weaknesses and dangers. In Kat King’s words, we have to understand when AI is a butter knife and when it’s a rusty bayonet. In Part 3 of this series, we’ll contemplate the pointy end of the bayonet.
Information Architecture in the Age of AI, Part 1: The Many Faces of AI
(Image by DALL-E)
I’m not sure where I am in the race to be the last person to write about IAC24, but I have to be pretty close to leading the pack of procrastinators. The annual information architecture conference was held in my home city of Seattle this year, and despite not having to travel for it, it’s taken me a while to consolidate my thoughts.
No matter. It’s not about being first or last, just about crossing the finish line. So here I am, like a pokey yet determined walker in a marathon, presenting the first of a four-part series of posts on IA and AI, mainly inspired by presentations and conversations at IAC24 and things I’ve noticed since then as a result of better understanding the field.
In this first part, like a good information school graduate, I argue for better definitions of AI to support better discussions of AI issues. The second part covers what Generative AI seems to be useful for, and the third part is about the dangers and downsides of Generative AI. Finally, I’ll look at the opportunities for the information architecture practice in this new AI-dominated world, and I’ll suggest the proper role for AI in general (spoiler: the AI serves us, not the other way around).
Note: I used ChatGPT to brainstorm the title of the series and to create a cover photo for each post. Other than that, I wrote and edited all the content. I leave it as an exercise to the reader to decide whether an AI editor would have made any of this better or not.
The many faces of AI
Let’s start with some definitions.
AI is not a single thing. Artificial Intelligence is a catchall term that describes a range of technologies, sub-disciplines, and applications. We tend to use the term AI to apply to cutting-edge or future technologies, making the definition something of a moving target. In fact, “AI Effect” is the term used for a phenomenon where products that were once considered AI are redefined as something else, and the term AI is then applied to “whatever hasn’t been done yet”[1].
So, let’s talk a bit about what we mean when we say “AI”. As I learned from Austin Govella and Michelle Caldwell’s workshop “Information Architecture for Enterprise AI,” there are at least three different types of AI:
- Generative AI creates content in response to prompts.
- Content AI processes and analyzes content, automating business processes, monitoring and interpreting data, and building reports.
- Knowledge AI extracts meaning from content, building knowledge bases, recommendations engines, and expert systems.
Most of us are probably barely aware of the latter two types of AI tools, but we know all about the first one. In fact, when most of us say “AI” these days, we’re really thinking about Generative AI, and probably ChatGPT specifically. The paradigm of a chat interface to large language models (LLMs) has taken up all the oxygen in AI discussion spaces, and therefore chat-based LLMs have become synonymous with AI. But it’s important to remember that’s just one version of machine-enhanced knowledge representation systems.
There are other ways to slice AI. AI can be narrow (focused on a specific task or range of tasks), general (capable of applying knowledge like humans do), or super (better than humans).[2] It can include machine learning, deep learning, natural language processing, robotics, and expert systems. There are subcategories of each, and more specific types and applications than I could list here.
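To show how little it takes to start being specific, here’s the scope of this post captured as a toy taxonomy in Python. The labels come from the distinctions above; the nesting is my own illustrative arrangement, not an authoritative classification:

```python
# A toy taxonomy capturing the distinctions discussed above -- the point is
# having named, specific things to talk about instead of a catchall "AI".
AI_TAXONOMY = {
    "by capability": {
        "narrow AI": "focused on a specific task or range of tasks",
        "general AI": "capable of applying knowledge like humans do (theoretical)",
        "super AI": "better than humans (theoretical)",
    },
    "by enterprise application": {
        "generative AI": "creates content in response to prompts",
        "content AI": "processes and analyzes content, automates processes, builds reports",
        "knowledge AI": "extracts meaning; builds knowledge bases, recommenders, expert systems",
    },
    "related sub-disciplines": [
        "machine learning", "deep learning", "natural language processing",
        "robotics", "expert systems",
    ],
}

# When someone says "AI", ask: which branch do they actually mean?
for facet, entries in AI_TAXONOMY.items():
    print(facet, "->", list(entries) if isinstance(entries, dict) else entries)
```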
The different types of AI can also take different forms. Like the ever-expanding list of pumpkin-spiced foods in the fall, AI is included in more and more of the digital products we use, from photo editors to writing tools to note-taking apps to social media… you name it. In some cases, AI is a mild flavoring that gently seasons the UI or works in the background to enhance the software. In other cases, it’s an overwhelming, in-your-face, dominant flavor of an app or service.
As IAs, we need to do a better job helping to define the discrete versions of AI so that we can have better discussions about what it is, what it can do, and how we can live with it. Because, as Kat King pointed out in her talk “Probabilities, Possibilities, and Purpose,” AI can be used for good or ill. Borrowing a metaphor from Luciano Floridi, Kat argued that AI is like a knife. It can be like a butter knife – something that is used as a tool – or like a rusty bayonet – something used as a weapon.
In containing this inherent dichotomy, AI is like any other technology, any other tool that humans have used to augment our abilities. It’s up to us to determine how to use AI tools, and to use them responsibly. To do this, we need to stop using the term “AI” indiscriminately and start naming the many faces of AI so that we can deal with each individually.
There are some attempts at this here, here, and here, and the EU has a great white paper on AI in the EU ecosystem. But we need better, more accessible taxonomies of AI and associated technologies. And, as IAs, we need to set an example by using specific language to describe what we mean.
So, in that spirit, I’m going to focus the next two parts of this series specifically on Generative AI, the kind of AI that includes ChatGPT and similar LLM-powered interfaces. In Part 2, I’ll talk about the benefits of GenAI, and in Part 3, I’ll look at GenAI’s dark side.
1. Per Tesler’s Theorem, apparently misquoted. ↩
2. General AI and Super AI are still theoretical. ChatGPT, Siri, and all the other AI agents we interact with today are considered Narrow AI. ↩
Are Generative AIs just really expensive Ouija boards?
I wonder if Generative AIs are like Ouija boards. Every time I hear someone get freaked out about something that an AI wrote, they ascribe human characteristics to it. They anthropomorphize the computer program and act as if there’s a thinking being expressing human-like desires and feelings. But what the AI produces exists in dialog with the humans who read it. It both comes from human thought (because it has access to a vast quantity of human writing) and gets filtered through human brains.
We interpret works of art the same way, as if each piece has a specific, inherent, immutable meaning. But each of us brings something to a work of art. We bring our experiences and biases and we project our own identities on the work in front of us. We create meaning in dialog with the art. The art doesn’t mean anything without an observer. A blob of AI text doesn’t mean anything until we invest it with meaning.
The current crop of AIs are really good at what they do, but they don’t think any more than a Ouija board thinks. A Ouija board isn’t controlled by a supernatural being, but we can convince ourselves that it is if we want to. The AIs are a reflection of ourselves, and that reflection can often fool us into thinking there’s something in there. Like a parakeet with a mirror. It’s a really cool and useful trick, but let’s not give the really clever software more credit than it deserves.
Just bought my first carbon removal offsets on nori.com/. It was super easy, and I wish I had done it sooner.
Well, I’ve done it: I finished my Sustainability Certificate from UCLA Extension. I enrolled in it because the climate crisis felt so huge that I couldn’t begin to think about it. I’m far from an expert now, but at least I can understand the big picture. Now to figure out how to apply what I know…
Ecological tipping points could occur much sooner than expected, study finds
“More than a fifth of ecosystems worldwide, including the Amazon rainforest, are at risk of a catastrophic breakdown within a human lifetime.”
♻️ A promising development in plastic recycling: Scientists convert everyday plastics into fully recyclable and potentially biodegradable materials. If it’s scalable, plastic waste could become raw material for plastic with the same qualities as that created from virgin petroleum. No new oil needed.
Just getting the word out there: A lot of us support climate change policies… “Research published in 2022 in Nature Communications showed that although 66 to 80 percent of Americans support climate change policies, they think only 37 to 43 percent of the population does.” –Scientific American
Boy… losing Mrs. Maisel, Barry, and Ted Lasso in the same week has me a bit emotional. Three great shows with crackerjack writing and exceptional ensembles. I’ll miss them all.
A factoid I read tonight and can’t get out of my head: In 1978, there were 4 billion people on earth. Today there are over 8 billion. By 2100, there will be 10.1 billion.
Congratulations to the Toronto Maple Leafs for getting their first Round One playoff win in nearly 20 years. And big cheers to my Tampa Bay Lightning, who have had a hell of a few years of playoff hockey. Three Cup finals in three years, winning two… that’s a hell of a run.
And that’s it for the climate quiz. Hope you enjoyed it. Remember: climate change is real and scary, but it’s not hopeless. Lots of people are working on ways to keep the worst of global warming at bay, so educate yourself, and do what you can to help. 4/4
Microscopic fossil shells can reveal climatic conditions through the amount and type of calcium carbonate they contain.
CarbonBrief.org has a great primer on “How ‘proxy’ data reveals the climate of the Earth’s distant past” 3/4
You’re familiar with paleoproxies if you’ve ever heard of studying tree rings to understand historic periods of drought, pests, or fire. Evidence of chemical changes in air and water can also be found trapped in layers of ice drilled out of ancient glaciers. 2/4
Today’s answer: By studying “paleoproxies” such as ocean sediments and sedimentary rocks, we can study ocean and atmospheric temperature from as many as tens of millions of years ago. 1/4
Sooo… How far back can we measure ocean and atmospheric temperatures? a) Hundreds of years b) Thousands of years c) Hundreds of thousands of years d) Millions of years