Most consumers think humans should oversee AI. Here’s how it’s done - and why we should keep doing it.
Stories are powerful tools for driving belief in and adoption of new technologies. AI is no exception, and with examples like HAL 9000 from Stanley Kubrick’s 2001: A Space Odyssey (1968), Skynet from The Terminator (1984), and Samantha from Her (2013), there's no shortage of narratives casting AI in a negative light.
These portrayals often depict AI as a complex entity that challenges human understanding, raising questions about autonomy, ethics, and the role of technology in our lives. And they’ve understandably driven uncertainty about our future with AI. How will AI impact intellectual property and privacy rights? Or contribute to misinformation and bias in our public discourse? And how can it be responsibly and sustainably harnessed at scale?
These concerns underscore the importance of Human-in-the-Loop (HITL) as an approach to AI design, where human oversight ensures AI decisions align with ethical standards and serve the best interests of all stakeholders. There is already widespread support for HITL: Salesforce, for example, recently reported that 80% of consumers believe human oversight is crucial in validating AI-generated content.
HITL plays a pivotal role in revising these narratives by emphasizing collaboration between humans and AI. This involves human reviewers labeling datasets to ensure that information is correctly applied to various situations or use cases. Through HITL, human judgment guides AI's development and operation, refining its capabilities while maintaining accountability and transparency. Whether it's ensuring AI systems make ethically sound decisions or correcting biases in algorithmic outputs, HITL integrates human expertise to enhance AI's reliability and ethical compliance. This approach not only addresses societal concerns but also fosters a more nuanced understanding of AI's potential and limitations.
Simply put, HITL requires human supervision over AI decisions - and AI that follows HITL principles should also be designed to augment, not replace, human decision-making. As proposed by Eduardo Mosqueira-Rey et al. (2022), “humans and computers should work together on the same task doing what each of them does best at any specific moment.” The objective is for humans and machines to work together to ensure that decisions are correct and appropriate. But how does this work in practice?
HITL design depends, of course, on the kind of AI technology as well as the context in which it will be used. Consider, for example, high-stakes applications such as healthcare, finance, and autonomous driving. In the case of self-driving cars, which largely make use of computer vision technology, humans monitor the car’s decisions, providing guidance and corrective action before allowing it to make fully autonomous decisions.
This process can be continuous. Tesla's Autopilot, for instance, requires the driver to keep their hands on the steering wheel at regular intervals to confirm the car is maintaining a correct course. This HITL approach, in turn, trains the AI to make more accurate decisions and account for a wider variety of situations.
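To make that feedback loop concrete, here is a minimal Python sketch of the general idea - not Tesla's actual system, and every class, field, and value below is hypothetical - in which moments where the human overrides the AI are recorded and turned into training examples that capture what the human did instead of the model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CorrectionEvent:
    """A moment where the human overrode the AI's decision."""
    sensor_snapshot: dict  # what the model "saw" (hypothetical representation)
    model_action: str      # what the AI proposed, e.g. "continue straight"
    human_action: str      # what the human actually did, e.g. "steer left"

@dataclass
class HITLTrainingBuffer:
    """Collects human corrections so they can be replayed as training examples."""
    events: List[CorrectionEvent] = field(default_factory=list)

    def record(self, event: CorrectionEvent) -> None:
        self.events.append(event)

    def to_training_examples(self) -> List[Tuple[dict, str]]:
        # Each correction becomes a supervised example: the input is what the
        # model saw, and the label is what the human did instead.
        return [(e.sensor_snapshot, e.human_action) for e in self.events]

# Every time the driver intervenes, the disagreement is captured for retraining.
buffer = HITLTrainingBuffer()
buffer.record(CorrectionEvent(
    sensor_snapshot={"lane_offset_m": 0.4, "obstacle_ahead": True},
    model_action="continue straight",
    human_action="steer left",
))
print(buffer.to_training_examples())
```

The design point is that each disagreement between human and model becomes a labeled example, so human oversight gradually improves the system rather than merely vetoing it.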
HITL for large language models (LLMs) works a bit differently. LLMs are powerful AI systems, trained on huge datasets, that understand and generate human language and other types of content. However, these models, including ChatGPT, also pose significant risks. They have been implicated in spreading false and outdated information, which can compromise the credibility of scientific studies and unintentionally propagate misinformation. According to a recent report by Deloitte, the rapid dissemination of manipulated information through sophisticated tools like AI-driven content generation and social media bots underscores the urgency for robust validation mechanisms. These tools enable the creation of convincing fake news, deepfakes, and biased narratives that amplify public distrust, damage brand reputations, and can lead to financial losses.
HITL can significantly enhance the accuracy and reliability of LLMs by incorporating 'fact-checking' and transparency mechanisms. This approach ensures that the AI's decisions are not treated as a black box whose reasoning remains hidden. A transparent LLM application provides clear explanations of its decision-making processes, documenting the data sources, algorithms, and methodologies used and maintaining audit trails that log every step of data processing and decision-making. This oversight, in turn, trains the AI to be more accurate and to handle similar situations going forward.
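As a rough sketch of what such an audit trail could look like in code - the `call_llm` function and log format below are placeholders, not any real vendor's API - each request records its data sources, every processing step, and the outcome of a human review before an answer is released.

```python
import json
from datetime import datetime, timezone

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer for illustration."""
    return f"Draft answer to: {prompt}"

def audited_llm_call(prompt, data_sources, audit_log):
    """Run an LLM request while recording an audit trail of every step."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "data_sources": data_sources,  # where the supporting information came from
        "steps": ["prompt submitted to model"],
    }
    answer = call_llm(prompt)
    entry["steps"].append("model returned draft answer")

    # Human-in-the-loop checkpoint: in practice a reviewer approves or rejects here.
    reviewer_approved = True
    entry["steps"].append("human review: " + ("approved" if reviewer_approved else "rejected"))
    entry["final_answer"] = answer if reviewer_approved else None

    audit_log.append(entry)
    return entry["final_answer"]

audit_log = []
audited_llm_call("Summarize our exposure to interest-rate risk",
                 ["internal_risk_report_2024.pdf"], audit_log)
print(json.dumps(audit_log, indent=2))
```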
What does a transparent LLM look like in context? Imagine a financial services company using a transparent LLM to explain how it assesses investment opportunities against specific criteria, allowing users to understand and validate the rationale behind AI-driven recommendations. That user input is the equivalent of putting hands on the wheel at regular intervals: the model learns what an erroneous decision looks like and edits those decisions out of future predictions.
Without HITL, AI systems may produce results that are opaque and unverified, leading to potential errors and undermining trust in the technology. In healthcare AI, for instance, medical professionals validate AI-assisted diagnoses to ensure patient safety and effective treatment. Without such validation, the risk of misdiagnosis or inappropriate treatment rises, with potentially serious consequences for patient health.
HITL continues to face challenges in areas such as speed, efficiency, and adoption. Unlike fully automated AI systems, HITL requires human intervention to review and validate AI outputs, which adds cost and slows down decision-making while experts verify results. Moreover, humans are themselves prone to error, inaccuracy, and bias - as evidenced by the fact that LLMs exhibit these same traits, absorbed from the vast human-written datasets on which they are trained. HITL therefore still requires rigorous quality assurance controls. And integrating HITL into existing AI frameworks demands significant adaptation, potentially discouraging immediate adoption.
Despite these hurdles, HITL stands out for its commitment to transparency and ethical decision-making. One of the main challenges LLMs and other AI models continue to struggle with is accounting for context when processing information. Textual data can contain ambiguities and implicit meanings across languages, leading to multiple interpretations of words, ideas, and concepts. HITL addresses these nuances by incorporating a variety of sources, viewpoints, and cultural insights into the auditing of LLMs, helping to produce more accurate models that account for these complexities.
Imagine, for instance, the word “bank.” To an AI model, this term might simultaneously refer to a financial institution, the side of a river, or even the act of tilting an airplane. Without context, the model might confuse an article about fishing on the riverbank with one about banking regulations. HITL shines by weaving in cultural and contextual threads that a machine alone might miss.
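As a toy illustration of how that human judgment can be captured - the snippets, sense labels, and `human_label` prompt below are all hypothetical - the sketch has reviewers attach a sense label to each ambiguous use of "bank", producing examples a model can later be evaluated or fine-tuned on.

```python
# Toy human-in-the-loop disambiguation for an ambiguous word like "bank".
# The labels come from human reviewers; nothing here is a real model or dataset.

ambiguous_snippets = [
    "The trout were biting near the muddy bank all morning.",
    "The bank raised interest rates on small-business loans.",
    "The pilot had to bank sharply to avoid the storm cell.",
]

POSSIBLE_SENSES = {"river_bank", "financial_institution", "aircraft_maneuver"}

def human_label(snippet: str) -> str:
    """Stand-in for a human reviewer choosing the correct sense of 'bank'."""
    answer = input(f"Sense for {snippet!r} {sorted(POSSIBLE_SENSES)}: ").strip()
    if answer not in POSSIBLE_SENSES:
        raise ValueError(f"Unknown sense: {answer}")
    return answer

# Each (snippet, sense) pair becomes a labeled example that reflects human context.
labeled_examples = [(s, human_label(s)) for s in ambiguous_snippets]
print(labeled_examples)
```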
HITL also enhances the user experience. Consider a scientific researcher using an LLM to draft a research proposal on quantum computing. Here, human experts collaborate with the LLM, or supply fact-checked information, to ensure that accurate, domain-specific quantum computing terminology makes it into the model's output. The result is a proposal that maintains technically precise language and scientific discourse while saving the researcher time.
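One lightweight way to support that kind of expert review - again a hypothetical sketch, not Narratize's product or any particular tool - is to check a draft against a glossary curated by domain experts and surface anything that needs a human's attention.

```python
# Hypothetical terminology check: flag draft text that omits expert-approved terms.
# The glossary would be curated by domain experts (the "human in the loop").

EXPERT_GLOSSARY = {
    "qubit": "basic unit of quantum information",
    "decoherence": "loss of quantum coherence to the environment",
    "entanglement": "correlation between quantum states",
}

def terminology_review(draft: str, glossary: dict) -> list:
    """Return review notes for a human expert to resolve."""
    notes = []
    for term in glossary:
        if term not in draft.lower():
            notes.append(f"Expected term missing or unverified: '{term}'")
    return notes

draft = "Our proposal studies how entanglement degrades as decoherence increases."
for note in terminology_review(draft, EXPERT_GLOSSARY):
    print(note)  # the expert decides whether each flag actually needs a fix
```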
As users increasingly prioritize transparency in AI interactions, HITL adoption gains traction. By placing ethics at the forefront and incorporating human oversight into decision-making, HITL offers a principled alternative to conventional generative AI development. This approach not only enhances reliability and accountability but also aligns AI practices with user expectations and societal values, driving its adoption despite initial challenges.
At Narratize, Responsible AI is at the heart of everything we do. Learn more about our take on Responsible AI at Night Sky x Narratize, your source of inspiration and guidance for all things GenAI transformation and innovation.