
October 26, 2024
How do you interest people in AI research? Not by talking about applications, says Bas Haring, a Dutch science communicator. Bas works at my institute and did a PhD in AI before it was cool. Now, he writes popular science books, makes video explainers and gives lectures in theatres. I met with him over coffee to ask for his tips on talking about AI with the general public.
First, some backstory: before I got into computer science, I studied astronomy. People around me were always fascinated and bombarded me with questions about aliens, astrological signs and black holes. As a tour guide at the Old Observatory in Leiden, I noticed how easy it was to explain technical concepts. Planets, stars, and supernovae are all easy to visualise thanks to beautiful telescope imagery. But once I switched to computer science, everything changed. Eyes glazed over as soon as I went into any detail about algorithms. So, what does a science communication expert have to say about that?
My first question to Bas was about finding a hook to grab interest. AI research is very abstract, so I wanted examples of how AI touches people’s lives. I tried this for machine learning for Earth Observation, but I quickly realised that the closest thing to people’s lives I could come up with was local governments using satellites and AI to enforce laws. Not very sexy.
Bas’s answer surprised me. He said talking about applications is not the right route with AI. AI is already relevant – a lot of people use it or have heard about it in the news. Bas’s strategy is instead to explain very low-level concepts to reveal how computers actually work. He said: “My goal is for people to understand computers the way they understand their washing machine. We don’t entirely understand how clothes are cleaned, but at least we know it’s not magic.”
His favourite metaphor describes a computer as a box of numbers. The computer picks a few numbers from the box, completes a calculation, then grabs the next set of numbers. He says that even people who already understand AI tools like ChatGPT fairly well say they learn something new. Even this very low-level example, which seems a far cry from today’s generative AI systems, helps people see AI tools in a different light. As washing machines rather than magic, if you will.
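The box-of-numbers metaphor can even be made concrete in a few lines of code. The sketch below is my own illustration, not a demo Bas uses: memory is a single list of numbers, and the machine repeatedly grabs three of them, treats them as an instruction, and writes the result back into the same list.

```python
def run(memory, steps):
    """Execute `steps` instructions. Each instruction is three numbers
    in the box: positions a and b to read, and a position dst where
    their sum is written. Instructions and data share the same box."""
    pc = 0  # position of the next instruction in the box
    for _ in range(steps):
        a, b, dst = memory[pc], memory[pc + 1], memory[pc + 2]
        memory[dst] = memory[a] + memory[b]
        pc += 3
    return memory

box = [6, 7, 8,   # instruction 1: add slots 6 and 7, store in slot 8
       8, 8, 6,   # instruction 2: add slot 8 to itself, store in slot 6
       2, 3, 0]   # the data: 2, 3, and a scratch slot
run(box, 2)
print(box[8], box[6])  # → 5 10
```

Nothing here is magic: the machine never “knows” which numbers are instructions and which are data, it just keeps grabbing numbers and calculating.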
I told him about my experience explaining AI to kindergarten kids. One kid asked me, “Ok, but how does AI actually learn to come up with stories?” I told him: “Well, a bit like you learn to tell stories: listen to a lot of them!” As I said this, I grimaced a bit: the explanation isn’t entirely correct. Bas said many scientists shy away from simplified explanations that aren’t entirely accurate, and that we need to dare to say things ‘wrong’ to help people understand.
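The “listen to a lot of stories” explanation can itself be cartooned in code. This sketch is deliberately simplified in exactly the way Bas encourages (real generative models are vastly more complicated): it just counts which word tends to follow which in example stories, then “tells a story” by always picking the most common follower.

```python
from collections import Counter, defaultdict

def learn(stories):
    """Count, for every word, which words follow it and how often."""
    followers = defaultdict(Counter)
    for story in stories:
        words = story.split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
    return followers

def tell(followers, start, length=5):
    """'Tell a story' by repeatedly picking the most common next word."""
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

stories = ["once upon a time a dragon slept",
           "once upon a time a knight rode"]
print(tell(learn(stories), "once"))  # → once upon a time a
```

Wrong in the details, but right in spirit: the program never gets told the rules of storytelling, it picks them up from examples.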
When asked about other common pitfalls for scientists talking with the general public, he said most of us are too afraid of what our colleagues think. So we use jargon, and fear saying something that might trigger other experts. Instead, we need to keep thinking about the public and how to broaden their understanding.
My final question was about calls-to-action. Books and courses in science communication often say: end with a concrete call to action for your audience. Bas says he never does this. What could you even ask people to do? His goal is to boost understanding. Instead of a call to action, he prefers to raise a question – something to take home and think about. I realised that in astronomy, I never had a call to action either. What could I have asked of people? To look up at the sky?
So here’s my question for you: can you think of a talk or article ending that gets people thinking about AI in a new light? (Meta, I know)