The occasional, often ill-considered thoughts of a Roman Catholic permanent deacon who is ever grateful to God for his existence. Despite the strangeness we encounter in this life, all the suffering we witness and endure, being is good, so good I am sometimes unable to contain my joy. Deo gratias!


Although I am an ordained deacon of the Catholic Church, the opinions expressed in this blog are my personal opinions. In offering these personal opinions I am not acting as a representative of the Church or any Church organization.

Showing posts with label Artificial Intelligence.

Friday, September 15, 2023

Progressing…to What?

I’ve often been accused of living in, or wanting to live in, the past, as if such thoughts were some kind of weird psychological aberration. After all, who would want to live in the past when the present is so very cool? And the future? Well, maybe we shouldn’t talk about that. Too many today expect to be overwhelmed by man-made climatic disasters. In truth, we face far worse man-made calamities resulting from our sinfulness. But that’s a subject for another time.

Anyway, I suppose this basic diagnosis of my mental state has some validity. The symptoms are there. For example, if you glance through my personal library, you’ll likely notice that many of my books were written before I was born. As of this week, I’m now 79 years old, so that cut-off date was a while ago. Then there’s my rather eclectic taste in music. I listen to classical music from Bach and Vivaldi and their Baroque buddies to Vaughan Williams and everything in between. And jazz? I’m locked into those remarkable early artists like the MJQ, Cannonball Adderley, Charlie Byrd, Dave Brubeck, John Coltrane, Stan Getz, Thelonious Monk, and so many others. I’m also a fan of the big band music of the 30s and 40s, the doo-wop era of early rock ‘n’ roll, and even the folk music — Bud and Travis style — of the same period. I’m known as well for waxing eloquent about life back in the fifties and early sixties, when I came of age.

All of this leads others to accuse me of being some sort of Luddite. Doesn’t technological progress translate to a better life? Is life with cable, satellite, and streaming TV, with the Internet, smart phones, email, Amazon, electric vehicles, and all the rest better than life without them? I think not. And this conclusion comes from someone with a couple of degrees in technological fields, who taught computer science at the U.S. Naval Academy, and piloted hi-tech military aircraft. Am I conflicted? Not at all. It all depends on how you define goodness. Is technology in itself a good or an evil, or is it neutral, inherently amoral? Does its goodness depend on application? Do the technologists even care about how their creatures are used? How did Robert Oppenheimer put it when reflecting on the development of nuclear weapons? 
“It was therefore possible to argue also that you did not want it even if you could have it. The program in 1951 was technically so sweet that you could not argue about that.” 
Yes, indeed, from the researcher’s perspective, the technological challenge is so “sweet” it must be pursued, even if it might blow up the world.

I won’t even try to predict how long it will take, but the next “sweet” challenge, one that’s progressing with remarkable speed, is artificial intelligence. Where it will lead nobody knows, but many of its developers believe we’ve already passed the point of no return. Now, I’m not all that knowledgeable about the state of AI today, although I did play around with it 50 years ago. When I was teaching computer science at Annapolis I used to drop in on the ARPANET (a Department of Defense network that evolved into the Internet). I was intrigued by a program called Parry, developed by someone, as I recall, at Stanford Research Institute. Parry simulated someone suffering from paranoia and responded appropriately to questions asked by the online user. I played with it on and off and, as a lark, decided to write a poem-generating program. When the first version went public on the Academy’s network, it became our most popular program. The midshipmen would run it, generate a poem, and send it to their girlfriends. My first attempt was rather primitive free verse, but the second used an iambic pentameter rhyming scheme and was even more popular. One English professor actually examined some of its images in class. I assumed it was all tongue in cheek because the words were generated randomly, and any resulting “images” were strictly accidental. I was amazed by it all, but quickly realized there would be a real future in programming human activity and thought. In those days I was pretty good at predicting technological advances, at least in a macro way. I recall once, back in 1974, shocking the midshipmen in my advanced programming class by predicting they would one day have computers the size of a cigar box, computers more powerful than the Academy’s mainframe. They didn’t believe me.

Today AI has progressed far beyond my stupid little poems, and some of its developers strive for a consciousness that replicates and surpasses that of the human mind. The debate, of course, will ultimately turn to consciousness with or without a conscience. Personally, I don’t believe true human-like consciousness will be achieved before God steps in and ends it all. This view contradicts those espoused by folks like Ray Kurzweil — agnostic, futurist, and computer scientist — who believes we humans will soon live forever. He also looks forward to a transhuman future when nonbiological intelligence will prevail and surpass human intelligence. He believes this Singularity, as he calls it, will arrive soon because technological change is…
“so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.” 
My-oh-my, let’s pray that God spares us from such a future. But what really bothers me today is the true source of AI and what it portends. I have a hunch it didn’t drop down from heaven.

Maybe in my next post I’ll turn to the past in search of intelligence far greater than anything encountered today.


Monday, July 12, 2021

Technology, Friend or Foe?

I expect this subject to demand more than one post, so today's will be only a start. But first, some background…my own, so you’ll know where I’m coming from, at least when it comes to technology. 

I’ve been wrapped up in technology for most of my life. As a teenager I became a licensed ham radio operator and took a math/science path through high school. Okay, I also studied Latin and German, but my real interests were in the sciences. Georgetown University accepted me as an astronomy major, but then my dad convinced me to ask the university if I could switch to its School of Foreign Service. Why I agreed to this, I can’t answer today, but Dad could be persuasive. Anyway, Georgetown agreed and I spent my freshman year with a bunch of budding diplomats — nice folks, but a bit odd. 

Everything changed when I received an appointment to the U.S. Naval Academy, a school where technology rules. Four years later I graduated from Annapolis with a specialty in electrical engineering and a minor in German. I’d always been fascinated by aviation, so naval flight school was the logical next step. After earning my Navy “wings of gold,” I spent the next decade flying, attending graduate school where I studied management and computer science, teaching computer science at the Naval Academy, and doing other exciting Navy stuff at sea and ashore.

Diane and I enjoyed Navy life, but I was facing more sea duty, more time away from my family, and more moves. Once again my dad suggested a change, and asked me to join him in his sales and management training and consulting business. And once again, this time with Diane’s support, we agreed. I resigned my Regular Navy commission and transferred to the Naval Reserve, in which I served the country for another 15 years. In the meantime we moved to Cape Cod to begin this new chapter. I stayed connected with technology, applying it as a tool in our business. In fact, we had a computer long before the advent of the PC. There were other adventures: working as a low-level dean at Providence College; teaching business programs there and at Roger Williams University; and working for a Massachusetts-based hi-tech firm that specialized in programmable telecommunications switches.

Of course, throughout these years Diane and I tried to live faith-filled lives. I read and took courses in Scripture and theology in an effort to expand my knowledge and deepen my faith. About 30 years ago, I accepted a call to begin formation for the diaconate, and was ordained a permanent deacon on May 24, 1997. 

That, then, is my story in brief, at least part of it. I’ve long been somewhat of a techno-dweeb, but have also been concerned about technology’s pervasive presence and, in truth, its growing control over so many aspects of our lives.

Let me turn now to the real subject of this post by referring to a small book written almost a century ago by Romano Guardini (1885-1968), one of the Church’s great 20th-century theologians. The book, Letters from Lake Como, first published in 1926, consists of a series of letters Guardini wrote several years earlier (1923). Focused on the increasing domination of human culture by technology, these letters are remarkably prescient and lead us to question whether technology is a human accomplishment to celebrate or a means to our ultimate subjugation. This, of course, is a question many ask today as we cope with technological intrusions, both overt and covert, into even the most private aspects of human life. I find it truly remarkable that Guardini could anticipate this possibility nearly 100 years ago.

At one point, in a discussion of tools, Guardini addresses their different forms. Basic tools -- for example, a hammer -- become extensions of the human body, allowing us to accomplish tasks with greater ease, accuracy, and refinement. It would be hard indeed to hammer a nail with my fist, easier perhaps with a rock, but far more satisfactory using a hammer designed specifically for the task.

At a higher level we find the development of tools whose function does not demand direct human interaction. For example, a millstone, designed to grind wheat or other grains, can be turned by water power without the direct application of human effort. This application of natural means allows the human, uninvolved in the tool's actual work, to control the process with minimal effort. In the same way, by using horses to pull wagons or other vehicles to transport people or material, the human interacts with and controls the natural means (the horses) by which the work is accomplished.

Guardini then addresses more capable machines that "relieve us of direct work; we need only construct and supervise them." Here he includes machines that work with other machines, controlling them to accomplish increasingly complicated tasks of the sort performed in the factories of his day. The automobile and airplane would also fall into this general category.

At this point Guardini adds that many machines and instruments have become extremely complex, their development the result of expanding scientific knowledge and technological and engineering expertise. That underlying knowledge is beyond the grasp of non-experts, who no longer experience directly the totality of the tools they use; i.e., they no longer wield the hammer. They might operate the machine, yet have little understanding of the science and technology needed to make and use it effectively.

This, Guardini believed, leads to a kind of societal polarization. Here I think it best to offer a rather long quote from an address he gave in 1959 that forms an addendum to the latest edition of his book. As you read these words, keep in mind they were written over 60 years ago.
"...machines give us constantly increasing power. But having power means not only that those who have it can decide on different things; it also means that these different things will influence their own position. To gain power is to experience it as it lays claim to our mind, spirit, and disposition. If we have power, we have to use it, and that involves conditions. We have to use it with responsibility, and that involves an ethical problem. If we try to avoid these reactions, we leave the human sphere and fall under the logic of theoretical and practical relations.

"Thus dangers of the most diverse kind arise out of the power that machines give. Physically one human group subjugates another in open or concealed conflict. Mentally and spiritually the thinking and feelings of one influence the other. We need think only of the influence of the media, advertising, and public opinion." [p. 105-106]
I was particularly drawn to his comment that "one human group subjugates another in open or concealed conflict," and could not help but consider the application of artificial intelligence in a wide variety of forms to many aspects of our lives by government agencies, corporations, social media, etc. These forms are designed not only to gather information about us as individuals and members of various groups, but more disturbingly to use that information, applying it in ways that can alter what we do, what we believe, and how we think about our culture’s most basic values. 

After rereading Guardini’s book this week, I opened the latest issue (August/September 2021) of First Things and encountered two surprisingly relevant articles. (Unfortunately, I don’t believe either is accessible online unless you are a paid subscriber.) One, by Ned Desmond and entitled “The Threat of Artificial Intelligence,” offers a rather dark glimpse into the kind of future we might well encounter as we face the “open or concealed” threat posed by A.I. as government and industry conspire to exert greater control over our lives. Hmmm…sounds a lot like old-fashioned, traditional fascism to me.

The second article is really a review of a new novel, The Silence, by Don DeLillo, which depicts how its characters cope with the sudden collapse of all technology. The book doesn’t really address causes so much as it examines reactions to it all. I’ve read only one of DeLillo’s other novels, Underworld (1997), in which the author looks at Cold War America and its obsessions. He is a skillful writer well worth reading.

Both articles, however, only highlight the truth of Guardini’s 100-year-old ideas about the dangers of a technology misunderstood and misused, dangers that seem to be far closer to reality than most of us think. Later in life Romano Guardini expanded on his earlier thoughts in his book, The End of the Modern World (first published 1950), a prophetic examination of how we arrived at the world we experience today. Every literate human should read it. It’s that important a book.
 
More on the subject in future posts. Right now I need a nap.


Monday, November 25, 2019

AI: Clever Stuff, If a Bit Creepy

I've been aware of and followed the developments in artificial intelligence for decades. I'm by no means an expert, just an interested bystander who finds the field rather fascinating, if a little scary. I'm sure many of you share these same sentiments. It's convenient to allow technology to relieve us of many of the repetitive, mundane, and time-consuming tasks that fill our days, thus freeing us to focus on those things only humans can do. The trouble is, defining that strictly human work has become increasingly difficult as AI capabilities have expanded to include much more than simple tasks. 

For example, autonomous (self-driving) cars and trucks are already on our roads and will no doubt continue to improve. Ultimately, when autonomous vehicles actually prove to be safer than vehicles driven by human beings, we will have to answer the question: "Should humans still be permitted to drive cars?" I suspect at some point the answer will be, "No!"

I suppose my first involvement with AI dates to my years teaching computer science at the U.S. Naval Academy, from 1973 to 1976. Several of my faculty colleagues had managed to access an entry node into what was then called the ARPANET. I suppose, in a sense, we hacked our way into the network. ARPANET (the Advanced Research Projects Agency Network) was a Department of Defense effort aimed at creating a worldwide network accessible by government researchers, technology companies, and academic institutions, through which they could access powerful computer systems from a distance. ARPANET was actually the predecessor to the Internet that we know and love today.

I remember, during one of our ARPANET searches, coming across a program developed by someone at Stanford Research Institute. The program was called "Parry" because it simulated a person with paranoid tendencies. When you ran the program you could type in a question and it would offer seemingly paranoid responses. All very amusing 45 years ago. Not long ago I came across an interesting 1974 critique of Parry: Ten Criticisms of Parry.
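For readers curious about what was going on under the hood of programs like that, here is a toy sketch in Python of the simple keyword-matching approach typical of early conversational programs. To be clear, this is my own illustration, not Parry's actual design, which was considerably more sophisticated; the keywords and canned replies are invented for the example.

```python
import random

# Hypothetical keyword -> canned-reply table; not Parry's real rules.
RULES = {
    "why":    ["Why do you want to know?", "You ask a lot of questions."],
    "you":    ["Let's not talk about me.", "Who told you about me?"],
    "police": ["The police can't be trusted.", "Are you working with them?"],
    "feel":   ["I feel like people are watching me.", "I'd rather not say how I feel."],
}
DEFAULT = ["I don't trust your questions.", "What are you really after?"]

def reply(user_input: str) -> str:
    """Return a canned 'paranoid' response triggered by the first matching keyword."""
    text = user_input.lower()
    for keyword, responses in RULES.items():
        if keyword in text:
            return random.choice(responses)
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print(reply("Why won't you talk to the police?"))
```

Crude as it is, this kind of pattern matching was enough to give 1970s users the eerie sense of talking to someone.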

Motivated by Parry, pretty much as a lark, I decided to write a poem-generating program -- cleverly called "Poem" -- which I occasionally enhanced during my three years teaching at the Academy. It was actually a good teaching tool. It held the students' interest and showed that programming wasn't restricted to mathematical, engineering, or scientific applications.

My first, rather simple version generated unrhymed verse in ten-syllable iambic pentameter. If I recall correctly (and it's been a while), my final version generated rhyming verses in a variety of meters, and even created some rather weird similes. The program was publicly available on the Academy's computer time-sharing network and was a big hit with midshipmen who sent this computer-aided doggerel to their girlfriends. I was hoping to code a sonnet-writing program, but never had the time before I was transferred back to sea duty and flying helicopters.
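For the curious, here is a minimal sketch in Python of how such a program might be structured. It is not my original code, which was written decades ago for the Academy's time-sharing system; the word lists and syllable counts are invented for the illustration.

```python
import random

# Tiny hypothetical vocabulary, each word tagged with its syllable count.
ADJECTIVES = [("cold", 1), ("silver", 2), ("lonely", 2), ("golden", 2)]
NOUNS = [("sea", 1), ("moon", 1), ("harbor", 2), ("shadow", 2)]
VERBS = [("sings", 1), ("drifts", 1), ("wanders", 2), ("remembers", 3)]

def line_of(n_syllables):
    """Randomly string words together, cycling adjective -> noun -> verb,
    until the line totals exactly n_syllables (ten for a pentameter-like line)."""
    words, count, i = [], 0, 0
    pools = [ADJECTIVES, NOUNS, VERBS]
    while count < n_syllables:
        word, syl = random.choice(pools[i % len(pools)])
        if count + syl <= n_syllables:  # skip words that would overshoot the line
            words.append(word)
            count += syl
            i += 1
    return " ".join(words)

def poem(lines=4, syllables=10):
    return "\n".join(line_of(syllables) for _ in range(lines))

if __name__ == "__main__":
    print(poem())
```

A rhyming version would add a step that picks line-ending words from small rhyme groups, but the principle is the same: the words are chosen at random, so any resulting "imagery" is strictly accidental, which was rather the point.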

Today, thanks to Amazon and Apple, we have Alexa and Siri talking to us, recognizing our voices, answering our questions, running our homes, and listening in on our domestic lives. Very handy things, but, yes, more than a bit creepy.

For example, the other day I picked up my new iPhone and asked, "Hey Siri, what's the temperature?" She responded with, "It's about 81 degrees." Thinking the modifier "about" was somewhat odd when giving such an exact temperature, I turned to Alexa and asked, "Alexa, what's the temperature?" She said, "Dana [yes, Alexa recognizes my voice...], it's currently 81 degrees Fahrenheit in The Villages. Today's high will be 83 degrees, with a low of 67." Diane, having overheard these exchanges between me and the two disembodied female voices, said, "Alexa, you're a lot smarter than Siri." How did Alexa respond? In a way I never imagined: "We all have our gifts."

Yes, indeed, we've come a long way from paranoids and poems to personal partners who at some point will probably know more about you and me than we know about ourselves. I suppose the larger question is: what will be left for humans to do -- in the workplace, the home, the world? And what will happen when the tools become more intelligent than the toolmakers?