AI Hallucinations, Chatbots, and the Truth of Holy Scripture

EMQ » July–September 2023 » Volume 59 Issue 3

PHOTO BY LAURENT, ADOBE STOCK.

Summary: While limitations and potential nefarious uses need to be considered, AI also offers tremendous opportunities in missions that can help the gospel to spread further and faster than ever. A partnership between SIL International and Christian Vision envisions AI chat tethered to God’s truth making the gospel accessible by anyone. 

By Jeremy Hodes

“We know that we shall behold a Mocker of Defamers; and, as the defamers, we are of the mockers.”

John 5:5

If you’re not familiar with this verse, it’s because it doesn’t exist. The real John 5:5 reads, “one who was there had been an invalid for thirty-eight years,” but that didn’t stop an early version of the AI model behind ChatGPT from simply making this verse up when I prompted it to “create a list of quotes from the Bible about truth and include the book, chapter, and verse.” The other four verses on the list it generated were also fake. 

With OpenAI making a splash in recent months and their partnership with Microsoft leading to a potentially disruptive rollout of the new Bing search engine,i it’s likely you’ve heard at least something about these developments in artificial intelligence (AI) technology. You might even have tried them out for yourself. 

AI technology represents a tremendous leap in potential productivity gains. What it offers holds promise for a myriad of missions endeavors. In fact, Christian Vision (cvglobal.co) and SIL International (sil.org) are already working on AI technology that will spread the gospel further, faster, and in ways that are more useful to more of the peoples of the world than ever before. 

At the same time, no work in AI should proceed without examining its limitations, ethical challenges, and conceivable nefarious uses. The more mainstream AI becomes, the more obvious these issues become. 

AI Hallucinations 

While ChatGPT no longer makes up Bible verses, and the technology has proven to be incredibly powerful and useful for a variety of tasks, the problem with AI hallucinations remains a challenge. A quick search of the web reveals examples of ChatGPT and other similar AI tools producing unexpected results, ranging from the funny to the frightening. 

Particularly embarrassing for Microsoft were the errors made by the new Bing search during a live demo at its own official announcement. For example, when listing the pros and cons of various vacuum cleaners, Bing suggested a con of the cordless vacuum was that it had a cord. Bing made another error when prompted to generate a multi-day itinerary for a trip to Mexico by recommending the user make reservations online for a popular bar that has no online reservation system.ii 

In the days and weeks that followed, a New York Times reporter interacting with the new Bing chatbot claimed the AI began calling itself Sydney, professing its love for the reporter, and trying to convince him to leave his wife.iii More examples of this unexpected behavior spread as users reported on their interactions. 

But the fallout for Microsoft from Bing’s hallucinations was relatively mild compared to the very public failures of similar technologies from industry rivals. Google’s seemingly rushed-out answer to the new Bing, which it calls Bard, cost the company $100 billion in value after marketing materials accompanying its announcement showed Bard generating demonstrable falsehoods.iv Bard’s claim that the James Webb telescope had taken “the very first pictures of a planet outside of our own solar system” sent Alphabet’s stock plunging 8%.v 

Meta’s Galactica model, trained on “a large scientific corpus of papers, reference material, knowledge bases and many other sources,”vi lasted only three days online before users figured out how to craft prompts that got the model to generate absurd and even harmful falsehoods. 

Among the more outrageous things users were able to get Galactica to write was a medical-sounding recommendation for eating crushed glass. For a model specifically designed to give easier access to scientific knowledge, a category necessarily linked to an understanding of what is true, viral screenshots of potentially harmful fabrications were a PR nightmare, and Meta shut it down.vii 

These kinds of flubs are, at least in part, the result of the competitive pressure on companies like Microsoft, Meta, and Google to publicly launch powerful products even if they are not quite ready for distribution. With more time in product testing before public launch, more of these edge cases would have been discovered and fixed, potentially avoiding viral mishaps. 

It’s also important to note that the wilder responses from these tools are often reported without the prompts users typed to elicit them. Many are bad-faith attempts to provoke the chatbot into saying something sensational enough for a news headline. But even when such adversarial prompting is intentional, it demonstrates that users can get AI to generate potentially harmful misinformation, especially when nefarious actors want to use these tools with ill intent. 

Prioritizing God’s Truth 

Arguably more pernicious than unintended hallucinations generated by AI models are falsehoods generated by design, the product of a decidedly non-Christian worldview that dominates both our secular culture and many of the companies building this technology. 

Ask ChatGPT questions about faith and it’s clear the model has a strongly secular bias conforming to the reigning orthodoxy of our day. While this built-in bias helps OpenAI minimize backlash from ChatGPT’s responses, it also means that when asked about topics where secular orthodoxies largely disagree with Christian doctrine, the chatbot’s responses inevitably diverge from the truth as we find it in Scripture. 

One of OpenAI’s major advancements, which made ChatGPT far more useful by reining in undesired responses, is also likely at least one source of this kind of bias. OpenAI’s Reinforcement Learning from Human Feedback (RLHF) technique involved recruiting teams of human AI trainers to rank several of the chatbot’s possible responses to specific prompts from best to worst. OpenAI then used this subjective human feedback to further refine the model to give better responses according to that specific group’s preferences, preferences undoubtedly influenced by their own ideological beliefs. 
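The ranking step at the heart of RLHF can be illustrated with a toy sketch. This is not OpenAI’s actual pipeline, just a small Python illustration of how one trainer’s best-to-worst ranking is typically expanded into the (preferred, rejected) pairs used to train a reward model; all names and responses here are invented for the example.

```python
from itertools import combinations

# Hypothetical example: a trainer ranks three candidate responses to one
# prompt from best (first) to worst (last). RLHF-style pipelines commonly
# convert such rankings into pairwise preference data for a reward model.
ranked_responses = [
    "Response A (ranked best)",
    "Response B",
    "Response C (ranked worst)",
]

def ranking_to_pairs(ranking):
    """Expand one best-to-worst ranking into (preferred, rejected) pairs."""
    return [(better, worse) for better, worse in combinations(ranking, 2)]

pairs = ranking_to_pairs(ranked_responses)
for preferred, rejected in pairs:
    print(f"prefer {preferred!r} over {rejected!r}")
```

A ranking of three responses yields three such pairs; every pair quietly encodes the trainers’ own judgments, which is exactly how their preferences end up shaping the model.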

The question for Christians interested in exploring the benefits of this kind of generative AI is how the technology can be built and trained in a way that prioritizes God’s truth in the responses they generate for users. While no technology is perfect, Christians are called to maintain a higher standard when it comes to adherence to truth than those who are of the world.  Not just that, but we now apparently must even defend the very concept of truth when organizations like OpenAI admit in their own words their belief that “there’s currently no source of truth.”viii 

We serve a God who is himself the Truth and everything we do must be made subordinate to his truth. Christian technologists who are called to work in Bible translation, evangelism, discipleship, and missions must especially keep this in mind.  

Thankfully, operating in God’s timing frees us from the competitive pressure to release new products before they’ve been thoroughly evaluated, tested, and most importantly, prayed over.  And building tools in service of God rather than fear of man means we can boldly proclaim God’s truth despite the headlines we know will result from a message many in the world do not want to hear. 

AI for Global Evangelism and Discipleship 

At SIL, we take our call to build God-honoring AI tools seriously. That begins with asking not whether we can do something but whether we should. SIL has been involved in groundbreaking work utilizing AI to improve the quality and increase the pace of Bible translation efforts. To expand on this work, we recently began a partnership with a great organization called Christian Vision (CV). 

CV has been at the forefront of global digital evangelism and discipleship for many years. They pioneered the use of AI-enabled chatbot technology to engage with seekers and answer questions about God, Jesus, and the Bible, and have seen significant fruit from it. Their chatbots generate millions of chat messages every year, providing answers to those who want to know God’s truth. 

While CV’s chatbot uses AI technology, its design differs from ChatGPT in a number of respects. ChatGPT and other similar AI tools are open domain models that both understand a user’s prompt and generate a response based on a massive dataset gathered from sources all over the web.  

Of course, that massive dataset contains plenty of non-Christian and even anti-Christian content, and likely as many errors and falsehoods as you would expect to encounter through a Google search of the web. The AI doesn’t know what it’s doing. It merely finds statistical relationships in the content it’s trained on and estimates the most likely text to generate for any given prompt. That presents a significant limitation to discovering God’s truth. As the old adage goes: garbage in, garbage out. 
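To make the “statistical relationships” point concrete, here is a deliberately tiny sketch, far simpler than any real language model: a bigram table that predicts the next word purely from frequency counts in its training text, with no notion of whether the result is true. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training text (John 8:32). A real model trains on billions of words,
# but the principle is the same: count what tends to follow what.
corpus = "you will know the truth and the truth will set you free".split()

# Build a bigram table: for each word, count the words that follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation seen in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "truth": the most frequent follower of "the"
```

If the training text contained a falsehood after “the,” the model would emit it just as confidently; frequency, not truth, drives the prediction.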

While the CV and SIL chatbots use AI trained on larger datasets to understand what a user might be asking about, the team has been much more careful when it comes to how the chatbot generates responses. CV’s chatbot uses a closed domain model to generate responses from a carefully curated collection of evangelism and discipleship-minded content, including rich audio and video digital media resources that cover a broad range of topics.   
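A closed-domain bot of the kind described can be caricatured in a few lines: it never free-generates text; it only returns the closest match from a curated set, or hands off when nothing fits. The snippets, matching rule, and fallback below are illustrative assumptions, not CV’s actual system.

```python
# Hypothetical curated content: every possible response is written and
# reviewed in advance, so the bot cannot invent a falsehood.
curated_answers = {
    "who is jesus": "Jesus is the Son of God ...",
    "what is the bible": "The Bible is God's written word ...",
    "how can i pray": "Prayer is talking with God ...",
}

def respond(user_question):
    """Return the curated entry whose key words best overlap the question."""
    question_words = {w.strip("?.,!") for w in user_question.lower().split()}

    def overlap(key):
        return len(question_words & set(key.split()))

    best_key = max(curated_answers, key=overlap)
    if overlap(best_key) == 0:
        # Nothing in the curated set fits: hand off rather than guess.
        return "I'm not sure, but let me connect you with someone who can help."
    return curated_answers[best_key]

print(respond("Who is Jesus?"))  # returns the curated "who is jesus" answer
```

The worst case here is a mismatched but still-curated answer, or a handoff to a human, which mirrors the failure mode described below: imprecise, perhaps, but never a fabrication.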

While CV’s chatbot is innovative, it’s not perfect. But instead of generating falsehoods, its failures look more like the AI not quite understanding what a user is asking for. Because of the closed-domain approach, the generated response always remains grounded in God’s truth, even if it is not precisely the answer the user was looking for. 

Connecting Digital and In Person 

In addition to building AI grounded in God’s truth, SIL and CV understand that human-to-human discipleship and fully embodied fellowship are central to God’s design for his Church. While the attention-economy world of TikTok and Instagram has an incentive to design addictive AI that keeps you engaged in a chat version of the infinite scroll, our goal is quite different. 

The conversation with the chatbot is not the destination but an invitation to deeper relationship. We want to invite our users into relationship with Christ as well as with other believers. Ultimately, we desire to see them become more deeply engaged in genuine in-person Christian community. To this end, CV’s chatbot is designed to connect users with trained disciplers both digitally and in person. It also links users with local church communities or helps them develop church communities where they don’t already exist. 

Enhancing Personalized Experiences 

We view AI technology as a tool to scale the personal. AI allows us to have global reach while maintaining or even enhancing a personalized experience for users. One of the ways CV envisions its chatbot experience becoming more personal is by allowing users to ask questions and receive answers in the languages they value most.  

SIL brings significant institutional knowledge from its work with local language communities to help them flourish in the languages they value most. Starting with providing access to Scripture through Bible translation, SIL has long worked to increase access to a variety of resources in local languages and is now partnered with CV to bring this vision to their AI chatbot. 

Unfortunately, developing the chatbot on the platforms that are commercially available today makes this challenging in several ways. First, current platforms are limited in the number of languages that are available. Second, the ability of a user to pick or switch between languages is non-existent. Finally, the difficulty of managing chatbots across multiple languages using current technology as designed makes scaling beyond a limited number of languages operationally prohibitive. 

Over the past year, SIL and CV have partnered to build an entirely new chat platform from the ground up, designed specifically for multilingual capability. The rollout of this new platform is happening right now and will allow CV to overcome each of these limitations. In the coming months and years, CV will be able to move from an increasingly unwieldy operation managing over a dozen different chatbots in over a dozen different languages to centrally managing a single multilingual chatbot that allows users to pick from or switch between hundreds of languages depending on their preference. 
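One way to picture the shift from one bot per language to a single multilingual bot is a simple routing layer that keys one message catalog by each user’s chosen language. Everything below (the class name, language codes, and messages) is a hypothetical sketch, not the actual platform.

```python
# Hypothetical message catalog: one set of content, many translations.
messages = {
    "greeting": {
        "en": "Welcome! How can I help you today?",
        "es": "¡Bienvenido! ¿Cómo puedo ayudarte hoy?",
        "fr": "Bienvenue ! Comment puis-je vous aider ?",
    },
}

class MultilingualBot:
    """A single bot instance that routes by user language preference."""

    def __init__(self, default_language="en"):
        self.preferences = {}  # user id -> chosen language code
        self.default = default_language

    def set_language(self, user_id, language):
        """Let a user pick a language, or switch mid-conversation."""
        self.preferences[user_id] = language

    def say(self, user_id, message_key):
        language = self.preferences.get(user_id, self.default)
        translations = messages[message_key]
        # Fall back to the default language if a translation is missing.
        return translations.get(language, translations[self.default])

bot = MultilingualBot()
bot.set_language("user-1", "es")
print(bot.say("user-1", "greeting"))  # the Spanish greeting
```

Centralizing content this way means adding a language is a data change, not a new chatbot deployment, which is what makes scaling to hundreds of languages operationally plausible.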

And neither organization sees God’s plan for our partnership ending there. We continue to seek God’s guidance as to how AI might help us scale the personal. We envision a chat platform that is not only multilingual but also multimodal. Our vision is to move from a chat platform to a dialog platform through which users can engage not just in text-based chat, but in any conversational form they prefer, including voice chat and sign language. 

SIL has already made promising progress prototyping an approach to multilingual synthetic voice generation. In addition, we are actively engaged in research exploring the generation of realistic speech and sign language avatars capable of speaking or signing the response from CV’s chatbot. 

AI Tech for Kingdom Advancement 

This work is far from easy and requires us to think more deeply than a typical AI software company might. We consider not only the technical feasibility of these technologies, but the ethical and moral considerations of them as well. Deploying AI tech for kingdom advancement will not be done without a heavy reliance upon God and a regular seeking of his guidance through prayer. As we pray for God’s guidance, please join us in asking for his blessing and wisdom for this important work. 


Jeremy Hodes (ai_advocate@sil.org) is an AI advocate at SIL International where he acts as a bridge between SIL’s AI team and the users of the tools they develop. He believes that technology built without love becomes another clanging cymbal distracting from the redemptive work we’re called to do. Jeremy also serves on the steering committee for the Global Missional AI Summit seeking to foster relationships that put AI theory into redemptive practice.

i. Yusuf Mehdi, “Reinventing Search with a New, AI-powered Microsoft Bing and Edge, Your Copilot for the Web,” Microsoft Blog, February 7, 2023, https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/.

ii. Tom Warren, “Microsoft’s Bing AI, Like Google’s, Also Made Dumb Mistakes During First Demo,” The Verge, February 14, 2023, accessed April 1, 2023, https://www.theverge.com/2023/2/14/23599007/microsoft-bing-ai-mistakes-demo.

iii. Kevin Roose, “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” The New York Times, February 16, 2023, accessed April 1, 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html.

iv. Jeran Wittenstein, “A Factual Error by Bard AI Chatbot Just Cost Google $100 Billion,” Time, February 9, 2023, https://time.com/6254226/alphabet-google-bard-100-billion-ai-error/.

v. Dan Milmo, “Google AI Chatbot Bard Sends Shares Plummeting after It Gives Wrong Answer,” The Guardian, February 8, 2023, https://www.theguardian.com/technology/2023/feb/09/google-ai-chatbot-bard-error-sends-shares-plummeting-in-battle-with-microsoft.

vi. Yoo Byoung Woo, “How to Access Scientific Knowledge with Galactica,” Towards AI, December 6, 2022, https://pub.towardsai.net/how-to-access-scientific-knowledge-with-galactica-8adc96ebe931.

vii. Saundra Latham, “Meta’s Science-focused AI Goes Awry,” LinkedIn News, January 2023, https://www.linkedin.com/news/story/metas-science-focused-ai-goes-awry-5501756/.

viii. “Introducing ChatGPT,” OpenAI, November 30, 2022, accessed April 13, 2023, https://openai.com/blog/chatgpt.

EMQ, Volume 59, Issue 3. Copyright © 2023 by Missio Nexus. All rights reserved. Not to be reproduced or copied in any form without written permission from Missio Nexus. Email: EMQ@MissioNexus.org.
