Keith C. Lincoln – May 2, 2023
The news cycle around ChatGPT has brought AI to my family's dinner table conversations. Recently, a family member told me, "AI is so bad, I heard it on NPR the other day. We should be concerned." He was referring to the concept of responsible AI. I don't want to pick on ChatGPT, but since it's the first widely public use of a large language model (GPT-3.5 was originally the LLM behind ChatGPT), it serves as an example of how responsible AI has become a major concern.
So while it may seem like I'm throwing ChatGPT under the bus here, the truth is that large language models (LLMs) have their place in the world of artificial intelligence, and expert.ai uses them in many of the use cases in our enterprise go-to-market approach. But because ChatGPT raised public awareness of AI in general, it has led to a much broader conversation about responsible AI.
While ChatGPT has sparked much of the current debate, responsible AI has been an issue not just since ChatGPT's launch, but from the very conception of AI itself. Perhaps the most famous example (though not the first) of AI gone wrong is the line from the 1968 film 2001: A Space Odyssey: "I'm sorry, Dave. I'm afraid I can't do that." When ChatGPT first came out, much of the initial conversation focused on students cheating on school papers and the end of the college essay as we know it. Deeper questions soon became part of a more serious discussion.
This is because the LLMs that ChatGPT is based on are trained on publicly available data such as website content, Reddit posts, social media and other open sources. As a result, the content they return can be toxic.
OpenAI, the company that developed ChatGPT, used human reviewers to refine the model and minimize toxicity, but as you can imagine, toxicity in a model with roughly 175 billion parameters cannot be completely eliminated by manual human review. Also, ChatGPT's generative output sounds so reasonable that you can't immediately tell whether what it writes is true or false. It reminds me of when my then 4-year-old daughter would tell a captivating story spun from her still-developing imagination, and we would have to ask, "Is this a true story or a made-up story?"
This is similar to what ChatGPT does, and it is why the term "hallucinations" has been used to describe what it produces. Unlike a child's innocent and easily spotted make-believe, ChatGPT presents its output with an air of authority; there are examples where it creates seemingly credible links and citations that do not exist. This recent article in The New York Times cites several examples of this kind of plausible-sounding fabrication. We've seen these same types of results in response to medical inquiries (links), and if you consider medical professionals using ChatGPT for research, this is a dangerous outcome.
Naturally, this raises questions about responsible AI. Where does it come into play? What does it mean? And who is responsible for implementing it? Yet for all the coverage, there's one thing that isn't discussed much as part of the responsible AI dialogue, and therein lies the problem: there is no single definition of responsible AI.
What is responsible AI?
Responsible AI is a bit like the famous 1960s US Supreme Court opinion on the definition of obscenity, in which Justice Potter Stewart wrote: "I know it when I see it." It's good to see that sometimes practicality wins the day, even in the legal arena. With responsible AI, perhaps it's the other way around: you know it when you don't see it.
In early 2022, expert.ai spent considerable time developing and proposing our own version of the concept (I deliberately did not use the term "definition" here). Our experience with more than 250 clients and 350+ engagements has taught us that by using hybrid AI that incorporates a rules-based approach, businesses can move from "black box" AI to what we call a "green glass" approach to AI.
A hybrid AI approach uses different techniques to solve a problem. For example, in scenarios that involve natural language, you can address the use case with a combination of LLMs, machine learning and rules-based (or semantic) approaches. A hybrid approach uses more than one of these techniques to deliver a solution. You can learn more about hybrid AI here.
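To make the idea concrete, here is a minimal sketch of what "more than one technique" can look like for a text task: an explainable rules layer tried first, with a small machine learning classifier as a fallback. Everything in it (the keywords, labels, training texts and function names) is hypothetical and for illustration only; it is not expert.ai's implementation.

```python
# Hypothetical sketch of a hybrid (rules + machine learning) text classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# --- Symbolic layer: explicit, explainable rules (illustrative keywords) ---
RULES = {
    "refund": "billing",
    "invoice": "billing",
    "password": "account",
    "login": "account",
}

def rule_based_label(text: str):
    """Return a label if any rule keyword appears in the text, else None."""
    lowered = text.lower()
    for keyword, label in RULES.items():
        if keyword in lowered:
            return label
    return None

# --- Statistical layer: a tiny ML model used only as a fallback ------------
train_texts = [
    "I was charged twice for my subscription",
    "please send me a copy of my receipt",
    "I cannot sign in to my profile",
    "how do I reset my credentials",
]
train_labels = ["billing", "billing", "account", "account"]

vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(train_texts), train_labels)

def hybrid_classify(text: str):
    """Prefer the transparent rule layer; fall back to the ML model."""
    label = rule_based_label(text)
    if label is not None:
        return label, "rule"            # fully explainable path
    vec = vectorizer.transform([text])
    return model.predict(vec)[0], "ml"  # statistical fallback

print(hybrid_classify("My invoice shows the wrong amount"))  # ('billing', 'rule')
print(hybrid_classify("The app rejects my credentials"))     # ('account', 'ml')
```

The point of the sketch is the routing: when the rules fire, you can point to the exact keyword that drove the decision, which is what makes the result explainable rather than a black box.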
As an added bonus, hybrid AI applied to text using natural language processing (NLP) can also improve accuracy and reduce the carbon footprint for users. This is particularly important as businesses on every continent face required disclosure of environmental, social and governance (ESG) issues. Here, transparency (explainability) and sustainability become extremely important.
Using hybrid NLP and incorporating a knowledge-based approach provides transparency and explainability, and thus we arrive at our green glass approach. In our opinion, an AI solution should be:
- Transparent: The results should be easily explainable, so that people can understand and follow how the system achieved them.
- Sustainable: Using a hybrid AI approach is less computationally intensive than most pure machine learning or LLM approaches. Fewer parameters = less energy = lower carbon footprint.
- Practical: Use the most efficient and reliable way to solve AI problems. Don't use technology just because you can; let goals shape solutions. In other words, a right-sized solution to a problem that delivers more value than it costs.
- Human-centered: Don't just include "humans in the loop," where data and inputs can be monitored and refined by users; make sure the solution elevates people's work from trivial and redundant to engaged and valued.
Many of the independent entities we work with, consult with, and participate in have similar ideas about what constitutes responsible AI. Some of these organizations include Northeastern University's Institute for Experiential AI, NYU's Center for Responsible AI, and the US Department of Commerce's National Institute of Standards and Technology (NIST) Trustworthy and Responsible AI Resource Center. Microsoft recently outlined its own responsible AI approach in a blog post following a summit on generative AI hosted by the World Economic Forum. The common theme that unites these approaches is the need to take a purposeful and thoughtful approach to the business challenges we solve with AI and the technology choices we make along the way.