
How should human values shape the future of AI?

AI will change the future, and the decisions we make today will determine if that future represents our values.
(Photo: Adobe Stock)

There’s a fortune to be made in data and silicon, and everyone is out for their share. Artificial intelligence is this century’s gold rush. Its promises scintillate in them there hills. But while everyone is busy setting up camp in Silicon Valley, it seems few of us have contemplated the nature of AI and weighed its potential moral consequences against its financial payouts.
Consider the following questions:

  • What’s the difference between machine learning and deep learning?
  • What is an artificial neural network, and how does it work?
  • How close are we to artificial general intelligence? How would we even recognize it?
  • Do robots fit into our projections of the future?
  • Can these machines develop consciousness?
  • What is consciousness?

Few of us would be able to answer these questions with any confidence. We'd need to enlist Google's help to tackle the technical ones, and we probably haven't touched the metaphysical ones since Philosophy 101. That's not a knock against anyone; it's completely understandable.
AI is genuinely complex. The underlying technology and techniques can take years to master. The field has branched out into a variety of specializations, such as biometrics, content creation, robotic process automation, speech recognition, and text analysis. The promises made about AI's future utility are Grade-A science fiction. No wonder so many of us leave such questions to the experts.
Here’s the thing, though: AI is not solely the domain of roboticists and software developers. Everyone’s future will change as a result of these technologies.

In this video lesson, philosopher Susan Schneider explains why our organization’s values, missions, and futures require us to consider AI deeply before we rush into it.

Be Humble

Artificial Intelligence (AI): A field of science that studies ways to build machines that can perform the kinds of tasks humans can do

  • AI has the potential to fundamentally alter human life. From intelligent robots to AI that can go inside our heads, we humans should start preparing now for a range of possibilities.
  • It’s not just about what we can do—but what we want to do and what we should do. Consider these philosophical and ethical issues:
  • If we’re going to shape the mind with AI technology, what is the mind? What is it to be a self or person? Are machines “selves”?
    • Do we want to create cyborgs?
    • Do we want to create a class of sentient robots?

Consciousness is the core question of the mind. Why do people have experiences, feel emotions, and enjoy pleasures while rocks, toasters, and combustion engines do not? All of them are made of matter. The brain seems the obvious answer, but that only raises the question of how non-conscious neurons and synapses generate conscious experience.
Truth is, we don’t know what consciousness is. Now, we’ve reached a point in history where we could develop non-organic consciousness through a combination of code and copper connectors. But if we don’t understand the nature of our consciousness, how would we recognize it elsewhere?
We don’t know, and as the questions stack up, they can leave your mind spinning. At least, we think it’s a mind.
We could move on to ethics, but that issue is no less thorny.
Researchers have already begun developing brain implant technologies. The current use cases center on treating neurological conditions such as dementia and the effects of stroke. But once the brain is unlocked, the possibilities multiply. We could create technologies that allow us to download calculus, Aztec history, and kung fu directly into our brains, Neo-style. Whoa.
Though it’s being developed with the best of intentions, this technology requires us to wrestle with major ethical issues. Given its likely expense, we may create a new class system in which the rich gain unsurpassable health and education advantages. Scholarships and college admissions would be based not on merit but on whether you could afford the prerequisite software. And the concept of mastery would be cheapened into a commodity.
If that example is even possible, it’s admittedly far off. But issues like this already exist in the AI systems we currently employ.

Back from the Future: Understanding Current AI


Machine Learning (ML): A subset of AI that enables applications to learn from data and improve task accuracy on their own
Deep Learning (DL): A subset of ML that enables applications to learn from large amounts of data using neural networks

  • Algorithms can discriminate because they’re designed by humans and they’re data-driven. We need to understand the scope and limits of the different architectures that we use.
  • If you’d like to learn more about how AI is evolving, explore the latest trade books, textbooks, podcasts, and videos.

We can’t understand AI’s impact on the future if we don’t understand current AI techniques. Consider deep learning.
Deep learning is a subset of machine learning. In traditional machine learning, a programmer tasks an algorithm with identifying patterns in data: images, text, sounds, and so on. The programmer chooses the relevant features for the algorithm to analyze; the algorithm looks for the presence or absence of those features and sorts the data according to the resulting pattern. As the algorithm learns from the data, it improves its accuracy without being explicitly programmed to do so.
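To make that concrete, here’s a minimal sketch in Python using scikit-learn. The library choice, the feature names, and every number are our own invention for illustration, not part of the lesson. The key point: the programmer hand-picks the features, and the algorithm only learns how to weigh them.

```python
# Traditional machine learning: the programmer chooses the features up front.
# All data below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each past applicant is described only by programmer-chosen features:
# [income in thousands, debt-to-income ratio]
X_train = [[45, 0.40], [82, 0.15], [30, 0.55], [95, 0.10], [52, 0.35], [28, 0.60]]
y_train = [0, 1, 0, 1, 1, 0]   # past outcomes: 1 = repaid, 0 = defaulted

model = LogisticRegression()
model.fit(X_train, y_train)           # the algorithm learns to weigh those features

print(model.predict([[60, 0.30]]))    # sort a new case according to the learned pattern
```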
With deep learning, the algorithm runs on a neural network. Programmers still set the parameters, but they don’t have to decide beforehand which features best represent the data. The algorithm discovers those features itself after analyzing vast amounts of data. Deep learning is excellent at finding patterns in data quickly and accurately. But there are drawbacks.
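For contrast, here’s an equally minimal deep learning sketch, written with PyTorch and random stand-in data. Again, this is our own illustration of the idea, not anything from the lesson: the network is handed raw pixel values and adjusts its own internal features as it trains.

```python
# Deep learning: the network learns its own internal features from raw data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(28 * 28, 128),   # hidden layer whose weights act as learned features
    nn.ReLU(),
    nn.Linear(128, 10),        # scores for 10 possible classes (e.g., digits 0-9)
)

images = torch.rand(64, 28 * 28)       # fake "images" standing in for real data
labels = torch.randint(0, 10, (64,))   # fake labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(200):                        # repeated passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong the network currently is
    loss.backward()                         # nudge the learned features to do better
    optimizer.step()
```

No one told the network which pixels matter; working that out is exactly what the hidden layer does for itself.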
Imagine, for example, a deep learning system designed to determine eligibility for housing loans. The programmer sets it up to learn from past lending data in order to predict future eligibility. The system trains itself on that data and doles out loans accordingly. But after a few months, it becomes clear the system rejects Black applicants at a higher rate than others.
It’s not that the programmer had a racist agenda; rather, the algorithm is limited by the data fed into it. The system blindly registers the gap in Black-White homeownership and interprets it as a strike against the Black applicant. Lacking any historical or socioeconomic context for the data, it can’t account for redlining or gentrification, nor can it adjust for the lasting impacts of the Great Recession. It just plugs away.
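Here’s a hypothetical sketch of how that happens, with invented numbers, made-up group labels, and a deliberately oversimplified model (real systems usually absorb the same pattern through proxy variables rather than an explicit group column):

```python
# Hypothetical illustration: a model trained on biased history reproduces the bias.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def past_decision(income, group):
    # Invented "history": equally qualified applicants in group B were approved
    # less often, standing in for the legacy of redlining-era lending.
    approval_rate = 0.8 if group == "A" else 0.5
    return 1 if income > 40 and random.random() < approval_rate else 0

applicants = [(random.randint(20, 100), random.choice(["A", "B"])) for _ in range(2000)]
X = [[income, 1 if group == "B" else 0] for income, group in applicants]
y = [past_decision(income, group) for income, group in applicants]

model = LogisticRegression().fit(X, y)   # learns the historical pattern, bias included

# Same income, different group: the model reproduces the historical gap.
print(model.predict_proba([[60, 0]])[0][1])   # approval probability, group A
print(model.predict_proba([[60, 1]])[0][1])   # approval probability, group B
```

Nothing in that code is malicious; the disparity rides in on the training data and comes straight back out in the predictions.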
While our example is hypothetical, stories like this are coming to light. A ProPublica report found that a criminal justice algorithm labeled Black defendants as more likely to commit a future crime than White ones. A follow-up investigation found the algorithm predicted future violent crime correctly only 20 percent of the time. And let’s not forget Tay, a Microsoft AI chatbot that started spouting racist and pro-Nazi messages after learning how to “be human” from Twitter users.
While AI is a powerful tool, we can’t assume it will support our company values, culture, and driving purposes. We need to stay on top of AI to assess its potential but also its current limitations. Then we need to devise strategies that utilize the potential, while also creating safeguards against any limitations we can’t eliminate.
That step can only be taken from a place of knowledge, understanding, and curiosity to learn more.
AI is here. We want this powerful technology to shape a desirable future, but we need to understand it first. With video lessons “For Business” from Big Think+, you can better prepare your team for this new paradigm. Susan Schneider joins more than 150 experts to teach lessons on AI, innovation, and leading change. Examples include:

  1. Help Shape the Future of AI: Why We Need to Have Difficult Conversations Around Technology and Human Values, with Susan Schneider, Philosopher and Author, Artificial You
  2. Proceed with Caution: How Your Organization Can Help AI Change the World, with Gary Marcus, Psychology Professor, NYU, and Author, Rebooting AI
  3. Accept the Machines, Lead Like a Human: Two Leadership Truths for the Age of Automation, with Andrew Yang, U.S. Presidential Candidate | CEO and Founder, Venture for America
  4. Tackle the World’s Biggest Problems: The 6 Ds of Exponential Organizations, with Peter Diamandis, Founder and Chairman, X Prize Foundation

Request a demo today!
