Our library recently hosted a guest speaker, David Wingate, a professor in BYU’s computer science department who researches large language models, for a faculty lunch-and-learn. The entire presentation was fascinating, but the most intriguing part for me and many of the law faculty in attendance was the idea that generative AI systems will become good enough to replace human subjects in answering research surveys. How? Generative neural networks trained on huge amounts of data (terabytes and even petabytes) ingest enough information about people that they can answer survey questions as if they were members of the survey population. Researchers provide the machine with rough profiles of the individuals they want to survey, and the AI generates the survey responses.
Would this really work? Can a generative AI system learn enough about a person to tell us how they will think and act in the future? We all have people in our lives whom we know well enough to predict what they will do or say. For example, I know my daughter will always try to catch any baby duckling she sees and will choose Oreo-flavored anything every time. But that doesn’t mean I can read her mind. She surprised me when she chose Poland for her country report at school “because they have been in lots of wars and are helping people from Ukraine.”
The survey use case for AI looks a lot like the advertising profiles that companies create about us to develop marketing strategies. Some of the data they collect about us is obvious (job, address, past purchases, shows we stream) but some of it is unexpected (whether we swipe or tap to turn the page on a Kindle, how much we travel). Some of the profile predictions are spot-on, but some are way off. (If you want to view the data companies have collected about you, the California Privacy Directory on GitHub provides a list of websites and email addresses you can use to start contacting companies, and you don’t have to be a California resident to use it.) Does knowing whether someone prefers Greek yogurt over cereal for breakfast really give us insight into their thoughts and feelings? Does it tell us whether they view bankruptcy as a moral or personal failing? Or what they think about our library services?
AI-generated survey responses may have some theoretical use, especially for research populations that are difficult or impossible to survey. Some survey populations are so small (people with a rare disease), so hard to identify (people with a specific set of beliefs and personal experiences), or so non-responsive (law students and faculty) that machine-generated responses may be the only ones available. But I am still skeptical that the machine can get into someone’s head sufficiently to provide meaningful data. And even if the AI can produce a “typical” response, I have found just as much value in the outliers as in the broader trends.
For fun, I took ChatGPT for a quick spin at answering survey questions. First, I fed it a few questions from the ACRL Academic Librarian Faculty Status survey, asking it to answer as two different librarians with different demographic characteristics. Both responses came out exactly the same, e.g.:
And the responses:
Because the initial questions I used were very dry, I next tried the same librarian profiles with two more open-ended, feelings-focused questions to see if the responses changed, asking ChatGPT how it feels about its job security and the tenure process. The first response was generic ChatGPT reminding me that it doesn’t have feelings:
But the second response was different:
More realistic, though ultimately similar to the first response and still bland. I was able to obtain some differences in responses using different types of questions (asking about views on bankruptcy, for example), but they tended to be generic and stereotypical for the profile.
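For anyone who wants to replicate this experiment beyond pasting profiles into the chat window, a short script can run the same question past multiple synthetic respondents at once. The sketch below assumes the OpenAI Python SDK and an API key in the environment; the model name, librarian profiles, and survey question are illustrative placeholders, not the exact ones from my experiment.

```python
# Minimal persona-survey sketch. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
# The model name, profiles, and question below are illustrative
# placeholders, not the ones used in the experiment described above.
from openai import OpenAI

client = OpenAI()

# Hypothetical respondent profiles; a real study would draw these
# from the demographics of the target survey population.
profiles = {
    "Librarian A": (
        "You are a tenured law librarian at a large public university, "
        "age 58, with 25 years of experience and full faculty status."
    ),
    "Librarian B": (
        "You are a second-year law librarian at a small private college, "
        "age 29, on a non-tenure-track contract."
    ),
}

question = "How do you feel about your job security and the tenure process?"

for name, profile in profiles.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # The system message casts the model as the survey respondent.
            {"role": "system", "content": profile
                + " Answer survey questions in the first person, as this individual."},
            {"role": "user", "content": question},
        ],
        temperature=1.0,  # higher values encourage more varied answers
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

In my chat-based trials the two personas converged on nearly identical boilerplate; raising the temperature or enriching the profiles might coax out more variation, but I doubt it would fix the underlying problem.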
Even with the huge leaps in technology expected over the next few years, I am skeptical that AI can produce valid and useful survey responses; that would be crossing the border into the realm of the truly human. I also doubt the AI will be good enough to perform the role of a human interviewer or researcher who can sensitively and thoughtfully engage in a dialogue with a research subject, though that seems within closer reach. But the research use cases for AI are intriguing.
Editor’s Note: This article is republished with permission of the author, with first publication on RIPS Law Librarian.