© 2017 by Nikki Mirghafori, PhD. All Rights Reserved.

"The Path of Practice in our Digital Age", March 2019, Buddhistdoor Global Magazine Special Issue

 

Link to the article on Buddhistdoor Global Magazine Website:  https://www.buddhistdoor.net/features/the-path-of-practice-in-our-digital-age

 

 

I was only four years old when I first developed an affection for computers. Mind you, I had never interacted with a computer, nor had I even seen one in person. It was 1974, long before the microcomputer revolution of the late 1970s and early 80s swept the West. And this was in Tehran, my hometown, where personal computers came into people’s lives even later.

 

There aren’t many memories I can recall from that year, but one is quite vivid. My mom’s cousin, Mammad, a young college student at Tehran University, came to visit one afternoon bearing a gift. It was a large dot matrix banner, which, when unfolded, dwarfed my tiny frame (if you have never seen one, I suggest you search for an image of a dot matrix banner—they are impressive). Not yet empowered by the superpower of literacy, I looked up at the adults in puzzlement, wondering at the significance of the large reams of white paper printed with small starlike symbols. Then Mammad told me this was my name in Farsi, and the computer had printed it. What? A benevolent computer, who hadn’t even met me, knew my name and had made this gift for me?! I was touched. The computer cared about me. So, it was only natural for my young heart to reciprocate the affection of this kindly computer with metta. I treasured the banner, and the gift hung on the wall of my room for many years. In retrospect, I have often reflected on how that moment produced strong karma, as it contributed to my decision, decades later, to become a computer scientist majoring in artificial intelligence (AI).

 

It is not uncommon for us humans to anthropomorphize computers, ascribing human-like qualities to them and feeling strong emotions toward them. What I described above was my childhood’s simple attribution of cognition to a computer. The grown-up version of this today is the “singularity” argument, which predicts that computer superintelligence will far surpass human intelligence. Fear often accompanies a dystopian version of this future, with humans serving these super-intelligent overlords.

This thesis is strongly debated. Some futurists declare that the singularity is near,* whereas many AI experts reason that it is not even likely.** I find myself in the majority camp of “not likely” votes, together with Harvard University’s Steven Pinker, who wrote: “There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jetpack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.” (IEEE Spectrum)

 

What I have often witnessed among non-technologists who fear artificially intelligent machines is a misunderstanding of how machine intelligence is defined (solving problems based on a set of rules or using previous examples). Some even go as far as wishfully ascribing artificial consciousness to machines. Both as an AI scientist and a Buddhist teacher, I appreciate the perspective of UC Berkeley philosophy professor John Searle, who captures the crux of this point succinctly: “[Computers] have, literally . . . no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. . . . The machinery has no beliefs, desires, [or] motivations.” (Searle 2014, 54)

 

The “processes” that Searle talks about are just lines of code—algorithms that primarily learn to recognize patterns in data (images, words, speech, and so on). There is indeed no real intelligence, no motivation, no autonomy, and no agency.
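For readers who have never looked inside such an algorithm, a toy sketch may make the point concrete. The following is not any particular system, just a minimal illustrative example of learning from "previous examples": a nearest-neighbor classifier that labels a new data point by copying the label of the closest example it has stored. The data and labels are invented for illustration.

```python
# A minimal sketch of "machine intelligence": a 1-nearest-neighbor
# classifier. It labels a new point by finding the closest stored
# example and copying that example's label. It recognizes patterns
# in data -- nothing more. There are no beliefs, desires, or
# motivations here, only arithmetic over stored examples.

def classify(point, examples):
    """Return the label of the stored example nearest to `point`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# "Previous examples": (features, label) pairs, invented here.
examples = [((1.0, 1.0), "cat"), ((8.0, 9.0), "dog")]

print(classify((2.0, 1.5), examples))  # -> cat
print(classify((7.5, 8.0), examples))  # -> dog
```

However sophisticated real systems become, the same observation holds: the behavior is pattern matching over data, with no psychological reality behind it.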

As I ponder our technological era, what concerns me is not so much the hyped-up threat of superintelligent machines somewhere in the future, but how we engage with and deploy technology today. I am concerned about addiction to technology interfaces—the number of hours we collectively spend endlessly browsing posts on Facebook, images on Instagram, tweets on Twitter, products on Amazon, and so on. I am troubled by the celebrated culture of multitasking, the need to be online, plugged in, and responsive 24/7 on multiple digital channels. I am saddened by the prevalence of mental health issues among children, teens, and adults who are subject to online social pressures: those feeling overwhelmed, those feeling the need to present a perfect profile, those who are bullied, or, even more heartbreakingly, those who learn the habit of bullying others through the veneer of digital distance. The list can go on…

 

Some of the responsibility rests on all of us, the users, who willingly participate in this charade, not knowing our own collective power to disengage from the overconsumption, to create a healthier digital culture, and to demand more humane and ethical technologies. The brunt of the responsibility, however, rests on the shoulders of the designers of these addictive technologies, who capitalize on clicks, eyeballs, and time spent on their platforms. It rests on the data collectors who carefully capture, analyze, store, and curate content for choice engineering, and who—sometimes unbeknownst to us—buy, sell, and otherwise compromise our privacy. Principles of ethical technology design and privacy have become more important than ever in our society.

 

In a world where the roots of greed, hatred, and delusion can give rise to unwholesome social, political, financial, and environmental structures, it is perhaps not surprising to witness the same patterns in our digital structures. But let us remember that our practice is as much internal as it is external. As we work to cultivate non-greed (generosity), non-hatred (kindness), and non-delusion (wisdom) in our hearts and minds, we must extend our practice to the world—and our digital lives are no exception. When we train ourselves to greet this moment’s experience with an appropriate response, we trust in our capacity to meet the future without fear. Similarly, instead of fearing our digital future, I invite all of us to put our energies into collectively meeting our digital present with ethics and wisdom in every capacity, as users and designers, as our path of practice.

 

Nikki Mirghafori, PhD, is an artificial intelligence scientist and a Buddhist teacher. Nikki is a lineage holder in the Theravada tradition, empowered by the Burmese master Venerable Pa Auk Sayadaw, as well as by Spirit Rock Meditation Center. She serves on the Teachers Council and Board of Directors at Spirit Rock and at the Insight Meditation Center in Redwood City, California. For the past two decades, Nikki has been a researcher and inventor in AI, holding multiple patents. She has directed international research programs, mentored postdoc and PhD students, authored many scientific articles, taught graduate courses at UC Berkeley, and been a scientific advisor to Silicon Valley technology startups. For more information, please see www.nikkimirghafori.com.

 

* Kurzweil, R. 2005. The Singularity Is Near. London: Penguin Group.

** Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. 2017. “When Will AI Exceed Human Performance? Evidence from AI Experts.” arXiv:1705.08807 [cs.AI].

References

Searle, J. R. 2014. “What Your Computer Can’t Know.” The New York Review of Books, 9 October 2014.
