Amid all the talk about the potential ups and downs of generative artificial intelligence, there’s a tremendous opportunity that’s getting little attention: This kind of technology, which creates new content based on the data it’s given, can actually be instrumental in helping humans learn about each other. In fact, it offers humanity a chance to achieve an unprecedented level of knowledge about people in various walks of life all over the world.
MIT Technology Review reports that historians have already begun using AI to better understand populations of the past, such as by sifting through huge numbers of old documents to get a broader sense of how people in various populations lived or saw the world. And generative AI could suggest ways to fill “blank spots” in historical records by taking aggregated data – a set of available information – and determining likely answers to unknown questions.
Our work focuses on the here and now. At Glimpse, a global human research platform, we use AI to help clients discover what groups of people think and feel about all sorts of ideas and brands. We’re already seeing how generative AI can offer a window into different communities and subcommunities. While we help businesses understand consumers, this same system can help anyone – academic researchers at universities, for instance, or polling organizations – develop a much stronger sense of any group of people.
Supercharging Human Insight
Imagine that you want to know what South Korean women between the ages of 25 and 35 think about electric vehicles. Or how male gamers who live in California and have graduate degrees feel about a certain video game. The best way to find out is to ask people who match these profiles. But this is where traditional research runs into limits.
You can send out a traditional survey, sift through responses and try to make sense of qualitative data, such as written answers to open-ended questions. You can convene focus groups, which are often slow and expensive and involve relatively few people – an impractical method, particularly when you’re looking into many different groups of people with different characteristics.
Many researchers use technology to engage in “social listening” – that is, searching through social media posts on a given topic to see what people say. But social media generally rewards the loudest people who post a great deal, and it inevitably ignores the many people who don’t share their opinions on these platforms. This technique does not allow researchers to gauge what a truly representative sample of people is saying about anything, nor to get direct responses to specific questions.
With generative AI, researchers can use a whole new system. They can create and send out high-quality surveys in which people who have certain characteristics are invited to answer questions thoughtfully and in their own words, as though they’re chatting with a friend – using their own lingo, modes of expression, emojis and more.
The technology can then use natural language processing to read through all of the responses – no matter how many – in real time, as responses come in. It can look for common themes and ideas, as well as expressions of emotions and sentiment. And generative AI can highlight all of these findings in a clear way so that researchers get important insights instantly.
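To make the idea concrete, here is a minimal sketch of that kind of pipeline – tagging free-text answers with themes and a sentiment score as they arrive, then tallying the common themes. The responses, theme keywords and sentiment word lists below are invented placeholders, and a production system would use trained NLP models rather than simple keyword matching:

```python
# Sketch: tag free-text survey responses with themes and a crude
# sentiment score, then tally which themes come up most often.
# All data below is illustrative, not real survey output.
from collections import Counter

responses = [
    "Love the battery range, but charging stations are scarce",
    "Charging takes too long and the price is high",
    "Great price for the range you get",
]

# Hypothetical theme keywords and sentiment lexicons for this example.
THEMES = {
    "range": ["range", "battery"],
    "charging": ["charging"],
    "price": ["price"],
}
POSITIVE = {"love", "great"}
NEGATIVE = {"scarce", "high", "long"}

def tag_response(text):
    """Return (matched themes, sentiment score) for one answer."""
    words = text.lower().replace(",", "").split()
    themes = [t for t, keys in THEMES.items() if any(k in words for k in keys)]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return themes, score

# Tally themes across all responses as they "come in".
theme_counts = Counter()
for r in responses:
    themes, score = tag_response(r)
    theme_counts.update(themes)

print(theme_counts.most_common())
```

In practice the keyword lists would be replaced by a language model that clusters themes and scores sentiment, but the overall shape – per-response tagging feeding a running aggregate that researchers can inspect instantly – is the same.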
For example, in our work, we’ve looked at how different segments of a population interact with the metaverse, inflation and more. We’ve also looked specifically for insights into the experiences of BIPOC, LGBTQ+ and female-identifying gamers. More than 70% of those surveyed through our platform said that they’d witnessed toxic behavior within gaming communities.
With the help of generative AI, we were able to instantly grasp the full range of toxic experiences, from slurs to organized harassment, along with the strategies respondents suggested for dealing with personal attacks. That helped us develop recommendations for game publishers to build more inclusive digital communities.
This kind of research powered by generative AI can supercharge human insight by gathering extensive data sets, targeting any of the more than 5 billion people who currently access the internet, nearly two-thirds of the world’s population. And with the United Nations pushing for universal worldwide connectivity by 2030, possibly many more can be reached.
This same system also emphasizes why people are as crucial as ever in leading efforts to understand such populations. While generative AI can be used to produce, distribute, synthesize and interpret this kind of research, only humans can look at the findings and make sense of them. Even with today’s advanced technology, understanding why people say what they say remains a human area of expertise. And only humans can exercise the judgment necessary to act on that knowledge.
It’s also up to people to make sure that generative AI tools are built and trained in a way that avoids biases, casts a wide net and reaches as many people as possible. For example, members of minority groups must be included in efforts to use and evaluate these technologies. This is one of many ethical concerns that have led the Biden administration to announce “public assessments of generative AI systems.”
Rather than spending time sifting through troves of collected data, researchers should be able to focus on these kinds of tasks that require human intelligence.
The possibilities are endless – and, perhaps, not even limited to the human realm. Google reports that the Earth Species Project is looking into whether generative AI can help people understand animal communication – and even communicate back.
Having spent years working in disruptive technological advances, we believe generative AI should be used to expand and deepen human connections, not threaten human brain power. The more people choose to embrace these technologies in ways that advance our understanding of each other, the more we’ll all benefit – and turn generative AI into a friend, not a foe.