An Artificial Intelligence Appraisal

A seminal event in the development of machine learning occurred over a two-month period during the summer of 1956, about a half-hour drive from where I am now writing, at Dartmouth College in Hanover, New Hampshire. The year before, an Assistant Professor of Mathematics at Dartmouth, John McCarthy, had conceived the idea of assembling an eclectic group of talent to be tasked with a novel kind of inquiry, in an area so new that he needed a fresh term to describe it. Professor McCarthy coined a label for his proposed conference's topic: Artificial Intelligence.

Although it would be another decade before Gordon E. Moore, later a co-founder of Intel, prophetically surmised that the number of transistors on a given unit of space could double every two years, an observation since known as Moore's Law, by the 1950s it was already becoming clear that electronic functionality and efficiency were improving exponentially. Given this background, John McCarthy set out to investigate the potential of this trend by bringing together an interdisciplinary potpourri of researchers from fields as diverse as cognitive science, electrical engineering, communications, neuroscience, information theory, psychology, mathematics, and computer science. Their mission for what became the Dartmouth Summer Research Project on Artificial Intelligence can best be summarized by the following sentence from Professor McCarthy's 1955 conference funding proposal: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Such was the launch of one of the most revolutionary movements of our time.

Artificial Intelligence, or AI as it has become commonly known, is simply intelligence displayed by machines. Beyond this terse definition nothing about AI is simple. Nor is it docile. AI is already shaping our world in ways never before seen. We now live with tools such as search engines, robotic automation, virtual product recommendations, and data-driven medical diagnostics, to name just a few innovations unheard of by most people only a generation or two ago. And on the horizon? Self-driving cars, self-learning cybersecurity systems, robotic assistants for the elderly, and tailored genomic medicines merely begin the dazzling list of inventions coming our way. AI has already demonstrated its potential for disrupting and altering the way we live our lives, and we are just getting started.

The volatile and variable nature of AI agency prompts me to examine this phenomenon in some detail and with some urgency. At present, AI promises to be either a source of life-enhancing opportunities or a basis for grievous intrusions and dislocations that could threaten our way of life. Its strength may deliver possibilities or impose burdens. Given this potency, it is incumbent upon us to fully and intentionally participate in examining, monitoring, and directing the course of AI development. It's not as if this phenomenon is some act of God beyond our control. It is manmade. Therefore, we need to ask ourselves: are we going to steer AI advancement, or is AI going to drive us?

I approach this inquiry as a lay person. I have no significant education or experience in subject matter related to AI or machine learning, including mathematics, computer science, robotics, cognitive modeling, information theory, data science, or any of the other sub-disciplines that go into the structure of AI. I am just a John Q. Citizen who is interested in technological change and the adaptations it can demand of individuals and society. Counterintuitively, my non-technical background highly qualifies me to look into the possible ramifications of AI. And I should not be alone in doing so. When science becomes so abstract that average people simply resign themselves to letting the smarty-pants of the world make the profound decisions about how we are to live our lives, then we have a problem. I think it best to welcome the benefits AI can bring to us, while maintaining skepticism and a wary eye about its possible perils.

It is tempting to think that AI will display its power largely as a source of new products and services for the universe of consumers always eager to be swept away by the newest shiny object. Technological innovations often reveal new and improved ways of completing common tasks, and expanded capacities will undoubtedly emerge, allowing consumers to benefit from a myriad of novel ways to perform daily life functions. However, AI will manifest itself more broadly by transforming employment and, by extension, how we live our day-to-day lives. This historic relationship between employment and how we structure our lives appears to be on the verge of a mammoth AI metamorphosis.

For better and for worse, we have been at a similar economic and social crossroads before. It may be useful to consider what transformations the last great such revolution yielded as we ponder how best to prepare for the AI insurgency. Two hallmarks of the seventeenth and eighteenth centuries in Europe were the rise of the Scientific Revolution and the Enlightenment. The Scientific Revolution ushered in new ways of comprehending natural materials and phenomena, grounded in a newly developed thought process, inductive reasoning, while the Enlightenment encompassed a multitude of ideas leading to individual empowerment and the pursuit of just government. Together, the Scientific Revolution and the Enlightenment set the stage for the Industrial Revolution.

Much of the world we know is the way it is because of the Industrial Revolution. Starting in England, where modern science largely took hold, and spreading quickly to many places across the globe, economies transitioned from home-based craft and agrarian work to factory-based machine production. This conversion led to many benefits, such as more affordable and plentiful life-enhancing goods and services, effective means of transportation, labor-saving devices, and medical advances. For many non-aristocrats, wealth generation became possible and specialty professions arose. But these gains came with costs. The rapid migration of rural folk into industrialized cities, which were ill equipped to handle the influx, created inadequate, overcrowded, and disease-ridden housing. Increases in pollution, environmental degradation, and dangerous working conditions were further consequences. The lesson we can take from this historical shift from muscle to machine is that great advantages can and probably will come from upheaval in fundamental economic patterns, but these boons have great potential for trailing a load of detriments and handicaps behind them.

The Industrial Revolution impacted whole societies by changing how people lived. It ushered in an era of mass production and mass consumerism. Commensurate with this shift arose large corporate businesses, labor unions, immigration, government regulation, rural vs. urban issues, broad-based taxation, higher education, and widespread improvements in communications. Scientific progress introduced professional expertise in the form of rational problem solvers with job titles such as engineer, manager, and accountant.

The new AI revolution will likely rock the world similarly. When considering the scope and scale of such monumental change, it can seem beyond being managed. It can appear like a runaway train. Therefore, it may be helpful to view this latest grand social transition as primarily an economic one. Framing it as economics implies that what is underway is determined by people. Economics is a social science concerned simply with the production, distribution, and consumption of goods and services. It is not controlled by external forces of nature or acts of God. We are not trying to control a competing power; rather, we are trying to control ourselves. Human choice and decision making drive economics, and they should likewise drive how we face the rollout of AI.

Fortunately, with regard to human agency and control, a crucial difference between the economics of the AI revolution and that of the Industrial Revolution involves centralization, or incorporation. The Industrial Revolution engendered a corporate approach to group organization: large collections of people amassed with a common purpose and the legal authority to execute objectives, such as transacting business under a single name. Institutionalization became a widespread structuring principle across business, government, and society in general. People found that in order to benefit from this economic order they had to adopt and adjust to these institutional systems, including their rules, their schedules, and their cultures. Depersonalization and objectification ensued.

However, the novel technology and information world order we are now experiencing is by nature more networked and distributed, and less concentrated. It should therefore be possible for individuals to exercise more control over their engagements, even if this means somewhat diluting the power of institutions. New grassroots organizing constructs, ad hoc and resilient, that combine global talent as needed have real empowerment potential for people. Fitting AI into this type of schema, whereby individuals collaborate with both distributed talent and AI to realize goals, is a genuine possibility. The more AI can be harnessed in a blended sharing of ideas and solutions, the more command people will have over the direction of the future. Let's utilize AI to help invent our tomorrow.

It may be helpful to envision exactly what kind of tomorrow we wish to have for ourselves. Clarifying an image of an optimal future allows us to strategically orient our resources, such as AI, to realize a destiny of our own intentional making. And as I suggest, AI can be one of our key resources or it could be a serious impediment.

Cultivating a positive collective future begins, of course, with projecting what is best for each individual. A reasonable contention is that the greater the number of happy or content individuals present in a society, the more satisfied that society will be as a whole. Achieving this state is really very non-technical. Technology is not necessary for individuals to be grateful for what they have, devoted to family and positive friendships, optimistic in demeanor, present in the moment, and dignified and purposeful.

In addition to these very important psychological conditions, we can include an economic perspective to round out our ideal vision of the future. Economics, more than psychology, can be shaped and influenced by institutional and governmental policy initiatives. The resulting economic model of exemplary standards will need metrics to determine whether desired goals are being achieved. To be comprehensive, a blend of quantitative and qualitative assessments needs to be applied.

To start, let's not rely too much on the Gross Domestic Product (GDP), which is a monetary appraisal of the market value of all goods and services produced over a period of time. GDP has been widely accepted for decades as a broad metric of the national economy and, by inference, of how well people are doing. High GDP rankings correlate with high-growth economies, which traditionally is considered a good thing. But aside from the obvious problem of trying to distill an edifice as complex as a national economy into a single number, GDP is blamed for encouraging a style of economic growth that manifests as resource depletion and pollution. In addition, in an economy with significant inequality, such as America's, not everyone experiences GDP gains evenly.
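For reference, the single number in question is conventionally built from the textbook expenditure identity:

GDP = C + I + G + (X - M)

where C is household consumption, I is private investment, G is government spending, X is exports, and M is imports. Notice that nothing in this sum registers how the total is distributed among people, or what is depleted or polluted in producing it, which is precisely the critique.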

Zoë Baird, president of the Markle Foundation, is not alone in finding fault with GDP as an adequate measure of how Americans feel about the health of their economy and, by extension, their own lives. She proposes instead a more inclusive set of metrics, which highlight historic economic afflictions and begin to shape a more affirmative direction. They include evaluations of public sentiment, rates of job creation, numbers of new businesses, indications of growing median income, signs of increased worker protections and training opportunities, and growing numbers of young workers. In short, if we can establish evidence of enhanced economic opportunity for all citizens to pursue high-quality and secure work at good wages, then we have at the very least constituted a foundation for widespread prosperity and happiness.

Staying with this theme of envisioning the general public's best interests through economics, we can note other useful perspectives. Daron Acemoglu of MIT points out that the greatest driver of shared prosperity is good jobs, i.e., widely available, secure jobs with high wages. News flash, huh? When such employment opportunities occur within a milieu of rising productivity, persistent job formation, and equitable gains, there is less social division and more sense of community. (As an aside for any Socialist-leaning readers, Acemoglu sees the proliferation of good jobs as a far more effective and preferable model than a shared-prosperity paradigm of redistribution alone.)

Further public interest considerations involve democratic governance that takes into account the dignity of all citizens, prudent business regulation, worker rights protections, easily accessible high quality workplace education opportunities, and community development.

Let us now take a look at how AI is contributing to the realization of a positive future that includes the features just described. To date the record is spotty at best. The most significant concern is AI's inclination to enable entities to amass information and wealth. If left unchecked, this consolidation threatens to exacerbate wealth inequality among segments of the population, and among nations. Skills which benefit the AI industry will be favored, with a corresponding deemphasis on skills which do not, leaving vast numbers of redundant workers, lower wages, and a declining tax base. A shift toward monopolistic behavior has always occurred when a new game-changing technology has been introduced, whether in railroads, oil production, or telecommunications, and there is no reason to think it will not happen again.

Particularly worrisome in these early days of AI are the directions of its deployment. Much of AI is targeted toward the automation of functions, the collection of data, and the conduct of surveillance. The loss of jobs, occurring simultaneously with governments and businesses acquiring ever greater quantities of our personal data, threatens to disrupt social order and the future of democracy.

AI is being empowered to make decisions, and to make "higher quality" decisions it needs to process ever more data. Algorithms, the sequenced sets of instructions that guide AI in its decision making, are initially written by human programmers. Any biases a programmer possesses can and do find their way into the algorithmic design, for instance through the data selected for analysis by the algorithm. When AI decision making is scaled to impact large numbers of people, these biases are amplified to a significant extent, instituting discriminatory practices to a degree that can be hard to unwind.
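To make that mechanism concrete, here is a minimal sketch in Python, with wholly hypothetical applicants and a deliberately naive learning rule, of how a bias baked into historical decisions gets learned and then reproduced automatically at scale:

```python
# Hypothetical historical decisions: group "A" applicants were approved
# at lower scores than group "B" applicants with equal qualifications.
history = [
    {"group": "A", "score": 70, "approved": True},
    {"group": "A", "score": 55, "approved": True},
    {"group": "B", "score": 70, "approved": False},
    {"group": "B", "score": 85, "approved": True},
]

# "Train" a per-group approval threshold: the lowest score ever approved
# within each group. The skew in the records becomes a skew in the rule.
thresholds = {}
for record in history:
    if record["approved"]:
        g, s = record["group"], record["score"]
        thresholds[g] = min(thresholds.get(g, s), s)

def decide(group, score):
    """Automated decision that silently reproduces the historical bias."""
    return score >= thresholds[group]

# Two identically qualified applicants now receive different outcomes,
# and the rule will apply this disparity to every future applicant.
print(decide("A", 60))  # True  (learned threshold for A: 55)
print(decide("B", 60))  # False (learned threshold for B: 85)
```

Nothing in the code is overtly malicious; the skew enters through the training records, which is exactly why scaled-up versions of this pattern are so hard to detect and unwind.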

AI is likely to boost profits for those entities controlling its mobilization. According to a McKinsey report, Alphabet (Google), Facebook, Amazon, Microsoft, Netflix, Alibaba, and Baidu account for most worldwide AI research expenditures. Erik Brynjolfsson, professor at the Stanford Institute for Human-Centered AI, points out that when technology replaces human labor, wages decline and wealth concentrates. Job disappearance is not just a statistic. When people lose self-authority and motivation, much more than a job is lost. Are we to believe these mega-corporations are concerned about the rights and welfare of low-wage and minority workers, the strain put on those working 70-hour weeks, or the social consequences of what they are unleashing? Perhaps, but I'm not entirely comfortable thinking so.

Our world is growing increasingly complex. The population is growing, and with it come more problems, more initiatives, expanding interactions, and multiplying challenges. Addressing this intensification requires ever greater cognitive processing. The late Stanford University philosopher Kenneth Taylor referred to this overall cognitive load and the cognitive networks required to handle it, networks he saw as divided between humans and AI. Taylor anticipated that the enormous amount of necessary cognitive labor will be tasked more heavily toward AI over time, simply because AI will be more efficient at manipulating this expansive load. If decisions require answers best made through coordination of vast amounts of information, then the machines will always be better than the humans. Therefore, it is incumbent on the humans to be sure cognitive networks are governed to distribute cognitive processing and decision making tasks such that humans remain a relevant part of the mix. Human workers need to insist they be assigned those issues for which humans are better qualified than machines.
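Taylor's framing suggests a concrete design question: who gets which task? As a purely illustrative sketch (the task fields and routing rules below are hypothetical, not anything Taylor specified), governing a cognitive network might amount to an explicit routing policy that keeps humans in the mix:

```python
# An illustrative routing policy for a human/AI "cognitive network".
# Task fields and rules are invented for this sketch.

def route(task):
    """Assign a task to 'machine' or 'human' based on its character."""
    # Machines excel when the answer comes from coordinating vast data.
    if task["data_volume"] == "vast" and not task["needs_judgment"]:
        return "machine"
    # Anything requiring judgment, ethics, or ambiguity stays with humans.
    return "human"

tasks = [
    {"name": "scan millions of X-rays for anomalies",
     "data_volume": "vast", "needs_judgment": False},
    {"name": "decide how to deliver bad news to a patient",
     "data_volume": "small", "needs_judgment": True},
    {"name": "set the fairness criteria for ranking loan applicants",
     "data_volume": "vast", "needs_judgment": True},
]

for t in tasks:
    print(f"{t['name']} -> {route(t)}")
```

The point is not the toy rule but that the rule is ours to write; left unwritten, the default allocation will be whatever is cheapest.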

The citizenry does not, as a rule, play a science oversight role. The closest we come is when we sit back and wait for science to send us new technologies, which we either approve of or reject with our wallets. To think of ordinary citizens as overtly directing the progress of science seems almost ridiculous. But with regard to AI, this is what needs to happen. Concentrating vast power in a technology with so many inherent risks should not be an acceptable state of affairs. The people, the body politic, the proletariat must monitor, and when necessary sway, the development and deployment of AI such that the greatest good is realized by the greatest number of people.

Taking on a citizen oversight role begins with an agreed-upon code of ethics. This is where we start becoming activists. Debate about what an AI ethic should look like will be a healthy exercise for society. Guidance on this issue can come from the work of Annette Zimmermann, a lecturer in philosophy at the University of York and an expert in both the technology supporting AI and the social implications of such technologies. She encourages us to consider simple, common-sense issues and ask key questions when thinking about AI. For example: just because a technology can be made, should it be made? Are the issues AI is made to address worth addressing? What is the AI's objective? What is the likely effect of AI involvement? Might negative, unjust, or harmful consequences result because of AI?

We don't ask enough of these questions. Why not? Zimmermann offers several reasons. Government has not yet been urged by voters to impose stringent regulations on AI developers. At present, the right of AI businesses to make a profit supersedes any kind of watchdog. Techno-optimists, moreover, are encouraged to see AI as finally reaching an inflection point where its growth is becoming exponential. And an attitude exists wherein the public feels that high-end information technology is somehow sacrosanct and should be left to do its thing. Together, these conditions create social passivity just at a time when our guard should be up.

When looking at AI, the core decision might be: are the reasons for constructing an AI tool, and continually improving it, justified, or should it not be built at all? In short, is the proposed AI fair? As Rachel Thomas of the Center for Applied Data Ethics at the University of San Francisco points out, the established pattern of AI progress is too centralized among system designers and operators. As a preferred alternative, Thomas proposes that AI evolution be more democratic and participatory, involving end users and others impacted by AI. As we are seeing, a key component of AI ethics involves ordinary people being actively engaged and persuasive with both the corporate powerful and the individual AI design talent.

At Dartmouth College in 1956, John McCarthy and his cohorts saw every aspect of human intelligence as potentially capable of machine simulation. Where this has gone over the past sixty-five years is that machine intelligence is focused almost exclusively on what Kenneth Taylor called AI-as-engineering: deep learning machines that process Big Data looking for patterns from which to make predictions. This is but one slice of intelligence, but the one which at present is the most profitable, hence the most common in the AI industry.
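For a sense of how modest the core idea is, here is a minimal sketch of pattern-to-prediction: fit a line to a handful of invented observations, then extrapolate to new input. Real systems substitute deep neural networks and Big Data for the one-variable fit below, but the predict-from-patterns logic is the same in spirit:

```python
# A toy "learn a pattern, then predict" loop using ordinary least squares.
# The numbers are invented for illustration only.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g., successive years of observations
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # e.g., some measured outcome

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept of the best-fit line through the data.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Extrapolate the learned pattern to an unseen input."""
    return slope * x + intercept

print(round(predict(6.0), 2))  # roughly 12: the pattern, projected forward
```

Everything beyond this, scale, data, and layered nonlinearity, is engineering refinement of the same predictive impulse.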

Artificial Intelligence is here to stay. It will not and should not be eradicated. Great innovations will come from AI, leading to improved lives worldwide in ways that are just now being imagined. Progress which positively influences the greatest number of people possible should be welcomed, whether it originates with AI or not. However, we should never lose sight of the aims being devised for the use of AI's power. We should be able to manage disruptions to our way of life, but degeneration should not be tolerated. Democracy, our glorious experiment in self-government, empowers us to control the levers of power, including, when necessary, power in business. The time to do so is now.