Have you heard the unsettling stories that have people from all walks of life worried about AI?
A 24-year-old Asian MIT graduate asks AI to generate a professional headshot for her LinkedIn account. The technology lightens her skin and gives her rounder, blue eyes. ChatGPT writes a complimentary poem about President Biden but refuses to do the same for former President Trump. Citizens in India take umbrage when an LLM writes jokes about central figures of the Hindu faith, but not those associated with Christianity or Islam.
These stories fuel a feeling of existential dread by presenting a picture in which AI puppet masters use the technology to establish ideological dominance. We often avoid this topic in public conversations about AI, especially since the demands of professionalism ask us to separate personal concerns from our work lives. Yet ignoring problems never solves them; it simply allows them to fester and grow. If people have a sneaking suspicion that AI is not representing them, and may even be actively discriminating against them, it is worth discussing.
What are we calling AI?
Before diving into what AI may or may not be doing, we should define what it is. Generally, AI refers to an entire toolkit of technologies, including machine learning (ML), predictive analytics and large language models (LLMs). As with any toolkit, it is important to note that each specific technology is meant for a narrow range of use cases, and not every AI tool is suited for every job. It is also worth mentioning that AI tools are relatively new and still under development. Sometimes even using the right AI tool for the job can yield undesired results.
For example, I recently used ChatGPT to assist with writing a Python program. The program was supposed to generate a calculation, plug it into a second section of code and send the results to a third. The AI did a good job on the first step with some prompting and help, as expected.
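For context, here is a minimal sketch of the kind of three-step structure I was aiming for. The function names and calculations below are placeholders chosen for illustration, not the actual program I was writing:

```python
# Toy illustration of the three-step pipeline described above.
# The specific calculations are placeholders, not the original program.

def step_one(values):
    """Step 1: generate a calculation from the input data."""
    return sum(values) / len(values)


def step_two(intermediate):
    """Step 2: plug the first result into a second calculation."""
    return intermediate ** 2


def step_three(result):
    """Step 3: send the final result onward (here, just print it)."""
    print(f"Final result: {result:.2f}")


if __name__ == "__main__":
    data = [3.0, 7.5, 12.25]
    step_three(step_two(step_one(data)))
```

My actual program was considerably longer, but the shape was the same: each step depends on the one before it, which is exactly where things went wrong.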
But when I proceeded to the second step, the AI inexplicably went back and modified the first step, which caused an error. When I asked ChatGPT to fix the error, it produced code that caused a different error. Ultimately, ChatGPT kept looping through a series of near-identical program revisions that all produced variations of the same errors.
No intention or understanding is happening on the part of ChatGPT here; the tool’s capabilities are simply limited. It became confused at around 100 lines of code. The AI has no meaningful short-term memory, reasoning or awareness. That might partly be a matter of memory allocation, but the limitation is clearly deeper than that. It understands syntax and is good at moving large blocks of language around to produce convincing results, but at its core, ChatGPT doesn’t understand that it is being asked to code, what an error is or why errors should be avoided, no matter how politely it apologizes for the inconvenience of causing one.
I’m not excusing AI for producing results that people find offensive or disagreeable. Rather, I’m highlighting the fact that AI is limited and fallible, and requires guidance to improve. In fact, the question of who should provide AI moral guidance is really what lurks at the root of our existential fears.
Who taught AI the wrong beliefs?
Much of the heartache surrounding AI involves it producing results that contradict, dismiss or diminish our own ethical framework. By this I mean the vast set of beliefs humans adopt to interpret and evaluate our worldly experience. Our ethical frameworks inform our views on subjects such as rights, values and politics, and they are a concatenation of sometimes conflicting virtues, religion, deontology, utilitarianism, negative consequentialism and so on. It is only natural that people fear AI might adopt an ethical blueprint that contradicts their own, especially when they do not necessarily know their own framework and are afraid of others imposing an agenda on them.
For example, Chinese regulators announced China’s AI services must adhere to the “core values of socialism” and will require a license to operate. This imposes an ethical framework for AI tools in China at the national level. If your personal views are not aligned with the core values of socialism, they will not be represented or repeated by Chinese AI. Consider the possible long-term impacts of such policies, and how they may affect the retention and development of human knowledge.
Worse, under such a regime, using AI for other purposes or suborning it to another ethos is not merely an error or a bug; it is arguably hacking and potentially criminal.
Dangers in unguided decisioning
What if we try to solve the problem by allowing AI to operate without guidance from any ethical framework? Assuming it can even be done, which is not a given, this idea presents a couple of problems.
First, AI ingests vast amounts of data during training. This data is human-created, and therefore riddled with human biases, which later manifest in the AI’s output. A classic example is the furor surrounding HP webcams in 2009 when users discovered the cameras had difficulties tracking people with darker skin. HP responded by claiming, “The technology we use is built on standard algorithms that measure the difference in intensity of contrast between the eyes and the upper cheek and nose.”
Perhaps so, but the embarrassing results show that the standard algorithms did not anticipate encountering people with dark skin.
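To see how an innocuous-sounding “standard algorithm” can quietly encode an assumption, consider a toy version of a contrast-based check. This is purely illustrative and not HP’s actual code; the regions, pixel values and threshold below are invented for the example:

```python
import numpy as np

# Hypothetical contrast heuristic, loosely inspired by HP's description:
# compare the brightness of an eye region against a cheek/nose region and
# declare a face "found" only if the difference clears a fixed threshold.
def face_detected(eye_patch: np.ndarray, cheek_patch: np.ndarray, threshold: float = 40.0) -> bool:
    contrast = abs(float(np.mean(cheek_patch)) - float(np.mean(eye_patch)))
    return contrast >= threshold

# Toy grayscale patches (0 = black, 255 = white), with invented values.
lighter_skin = face_detected(np.full((8, 8), 60.0), np.full((8, 8), 170.0))  # contrast ~110
darker_skin = face_detected(np.full((8, 8), 45.0), np.full((8, 8), 75.0))    # contrast ~30

print(lighter_skin, darker_skin)  # True False: the fixed threshold misses darker skin
```

Nothing in that sketch is malicious; the bias lives in the threshold and in the data it was tuned on, which is precisely the point.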
A second problem is the unforeseen consequences that can arise from an amoral AI making unguided decisions. AI is being adopted in multiple sectors, such as self-driving cars, the legal system and the medical field. Are these areas where we want expedient and efficient solutions engineered by a coldly rational and inhuman AI? Consider the story recently told (and later retracted) by a US Air Force colonel about a simulated AI drone training exercise. He said:
“We were training it in simulation to identify and target a SAM threat. And then the operator would say ‘yes, kill that threat.’ The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat — but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator, because that person was keeping it from accomplishing its objective.
We trained the system — ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This story caused such an uproar that the USAF later clarified that the simulation never happened and that the colonel had misspoken. Yet, apocryphal or not, the tale demonstrates the dangers of an AI operating without moral boundaries and its potentially unforeseen consequences.
What is the solution?
In 1914, future Supreme Court Justice Louis Brandeis wrote: “Sunlight is said to be the best of disinfectants.” A century later, transparency remains one of the best ways to combat fears of subversive manipulation. AI tools should be created for a specific purpose and governed by a review board. That way we know what the tool does and who oversaw its development. The review board should also disclose the discussions involving the ethical training of the AI, so we understand the lens through which it views the world and can review how that guidance evolves over time.
Ultimately, AI tool developers will decide which ethical framework to use for training, whether consciously or by default. The best way to ensure AI tools reflect your beliefs and values is to train and inspect them yourself. Fortunately, there is still time for people to join the AI field and make a lasting impact on the industry.
Lastly, I’d point out that many of the scary things we fear AI will do already exist independent of the technology. We worry about killer autonomous AI drones, yet the ones piloted by people right now are lethally effective. AI may be able to amplify and spread misinformation, but we humans seem to be pretty good at it too. AI might excel at dividing us, but we have endured power struggles driven by clashing ideologies since the dawn of civilization. These problems are not new threats arising from AI, but challenges that have long come from within ourselves.
Most importantly, AI is a mirror we hold up to ourselves. If we don’t like what we see, it is because the accumulated knowledge and inferences we have given AI are not flattering. The fault might not lie with these, our latest children; the reflection might instead be guidance about what we need to change in ourselves.
We could spend time and effort trying to warp the mirror into producing a more pleasing reflection, but will that really address the problem, or do we need a different answer to what we find in the mirror?
Sam Curry is VP and CISO of Zscaler.