An AI Bill of Rights

Often it is difficult to separate living from working. Our personal lives and professions can become so intertwined that it seems pointless to differentiate the personal from the professional. Such is the case with one of today's hottest topics: the impact of artificial intelligence. Will AI sway our lives in general, or will it be mostly an employment issue? A fair prediction is that AI is going to change the landscape of both our lives and our work.

As citizens and as workers, we should have a strong say in what the influence of AI will be on our daily lives and on our jobs. The disruptive potential is too huge to leave AI development solely to engineers and their corporate employers. If AI advancements are to be the result of free-market innovation, then those of us who are its future customers and recipients of its consequences should have the freedom to weigh in and heavily influence its maturation.

A practical way to approach this challenge is through the lens of individual rights. Ever since the seventeenth-century philosopher John Locke proposed the existence of fundamental natural rights, such as life, liberty, and property, we Westerners have organized our social, political, and economic institutions around the notion of personhood rights to both preserve and extend the enjoyment of our lives. We bestow upon ourselves the rights necessary to live fruitful lives free of destructive intrusion. Now is the time to apply these rights in the face of AI infiltration.

A useful place to ground a national debate about AI’s proliferation is with the Biden Administration’s White House Office of Science and Technology Policy’s proposal known as the Blueprint for an AI Bill of Rights (https://www.whitehouse.gov/ostp/ai-bill-of-rights/). This is a thoughtful approach to identifying the key areas of contention in the planning, application, and mobilization of AI-based automated systems. 

Five principles are presented as foundational to designating what constitutes an AI Bill of Rights. To summarize: 

Safe and Effective Systems: An AI system should be developed with input and testing from diverse sources to ensure it delivers value free from the risk of malicious or unintended consequences. Humane industry standards and protective measures should apply, including the power to shut down harmful applications. Data usage is to be transparent, necessary, and respectful of personal integrity.

Algorithmic Discrimination Protections: The biases, inequities, and discriminatory practices of people should not migrate to automated systems. Indefensible digital treatment of people based on their individual differences is to be considered unjust. Legal protections of ordinary citizens and farsighted equity assessments of intended and unintended uses of systems should be crucial in the design and deployment of AI systems. 

Data Privacy: This concern has been with us since the advent of Web 2.0. People should have ownership and agency over their data. The right to privacy is strong among free and independent people. This should be reflected in the automated systems they use. Exercising consent and having the ability to opt in and out of these systems with no restrictions should be inherent in their development. 

Notice and Explanation: It should not take a computer science degree for ordinary users to understand what they are getting into with AI systems. Clear and unambiguous language that informs operators about system functionality, intent, outcomes, updates, and risks is to be considered basic.

Human Alternatives, Consideration, and Fallback: In short, when users determine that an automated system has become too unwieldy or its functionality untenable, they should have access to a real person who can help them. No one should feel trapped within the confines of an all-powerful system they do not understand and cannot properly operate.

These principles could become a friendly conversation starter. As citizens we need a simple tool to unify the discussion as we confront this significant challenge. This AI Bill of Rights could be it. 

AI and Your Career Considered

Amper Music is an Artificial Intelligence (AI) application that can create music based on inputs from human users who may know nothing about music theory or how to play a musical instrument. Requests and conditions are submitted concerning the type of music desired for purposes such as podcast themes or home video soundtracks. Amper Music in turn generates original music.

DALL·E is an AI program that empowers human users to produce art and realistic images in a variety of modes and forms. Taking text descriptions provided by users, the AI goes about creating stunning illustrations and depictions. Little to no human artistic talent is required to develop original art.

ChatGPT is a newly released AI chatbot designed to yield fresh, high-quality written text on a wide variety of topics, including software code. Based on human users' editing suggestions, ChatGPT will even revise its text, constructing multiple drafts until the output is just what the user wants, whether a set of complicated directions or marketing copy.

Another chatbot called Franz Broseph was able to compete against twenty online players from around the world last year in a game of Diplomacy. The game compels participants to engage in political negotiations, form alliances, apply military strategies, and basically win a World War I simulation. Guess who came out on top? Yup, Franz Broseph.

We are no longer waiting to see when AI will revolutionize the world. The disruptive transformation is currently underway.

Note that I used the word disruptive above. Is this a good thing or not? Well, the term certainly brings to mind the late Clayton Christensen and his popularizing of the concept of "disruptive innovation." Christensen highlighted a process whereby a new product or service is introduced at the bottom rung of a market ladder. Eventually, it catches on and grows in usage, displacing much, if not all, of the traditional competition. What Walmart did to Sears is an example.

In my judgment, it is safe to assume that the AI examples above are representative of a larger AI disruptive innovation which is in the process of rolling over the work world as we know it. Again, is this good or bad? Well, it could be both.

The manner in which writers, music composers, and artists have operated customarily is clearly threatened. AI is now a major new competitor on the block. To be sure, in the short term at least, consumers who prefer conventionally produced text, music, and art will purposefully acquire it and shun the AI-generated material. But eventually the innovations will seep into the mainstream and could very well become the new ordinary.

As the Borg in Star Trek put it, resistance is futile. AI engineers, and self-learning AI itself, will continue to breed one disruptive innovation after another, simply because they can. Ethics or a concern for the greater well-being of humans, if it is ever considered, will not inhibit the creation of these products and services. If anything, these novelties will be presented as good for people.

Perhaps these inventions will be good for people. Maybe "better" writing, music composition, and art will result. Possibly the shift from an agrarian economy to a mechanized one during the Industrial Revolution will prove an apt analogy for what we are now experiencing. Time will tell.

One thing is clear, however. We will need to adapt to new practices and systems so that AI-fueled modernizations are integrated into the new normal while human careers continue to flourish. Perhaps first drafts of essays will be written by ChatGPT, with later iterations shaped by human edits and prompts, bringing about a spectacular essay from an otherwise mediocre human writer.

The question I ask myself is this: if the partnering of machine and human does not lead to higher-quality outcomes, then why are we bothering with AI?

Questioning the Future of AI

When I drive my E-ZPass-less car through the tollbooth on I-93 in Hooksett, NH, I intentionally swing to the right to hand a dollar to the tollbooth attendant. When checking out from a shopping trip in a big box store, I prefer paying a person at a cash register rather than using the self-serve payment scan system.

It is not that I am some sort of crotchety Luddite who shuns digital progress. I pride myself on maintaining some decent level of technical functionality as I age. But I have come to question why those who design and build our Artificial Intelligence (AI) systems are obsessed with things like automation. In fact, the more I investigate AI the more surprised I am that AI is being utilized so narrowly, unevenly, and menacingly. 

The AI movement is powerful, significant, and potentially authoritative regarding how our personal and work lives will be lived in the coming years. The scale of its reach places it in a class far beyond the technological tinkering improvements we generally see with new phone models or app developments. Machine learning is far more enigmatic than a better video camera or gaming platform. 

Momentous changes are likely in a broad range of fields from mechanics to medicine and are expected to reshape work and modify markets. Many of these transformations will be welcomed, perhaps cherished, but others perhaps should not happen at all. 

When looking at AI today, it seems too much of it is focused on building systems that automate functions, collect data, or conduct surveillance. This should be concerning. The likelihood of jobs being lost, of governments and companies holding vast quantities of our personal information, and of our personal freedoms becoming threatened is not some far-fetched paranoid delusion, but an ugly scenario we should work to prevent.

There is progress and then there is degeneration. AI could give us either or both. As an analogy, I think of my attitude toward social media ten to fifteen years ago. Then, the crowdsourcing of unregulated input from the global community augured richer and more transparent conversations about any number of topics. Or so I thought. Today social media looks like a cesspool of disinformation and disgruntlement ushering in social breakdown. Not all innovations should be welcomed.

In our democracy, while we still have one, the general public needs to be actively engaged in monitoring the AI powers we have and weighing in on policies that determine what AI engineers develop. Living with a laissez-faire attitude of "Well, whatever the markets come up with will be fine; markets know best" can lead to costly and offensive ruptures in the very framework of society. Citizens should insist that AI be deployed in a generally advantageous manner, as urged by utilitarian philosophers like Jeremy Bentham: "the greatest happiness of the greatest number."

Instead, it looks like AI development is being driven more by the acquisition of corporate profit and power than by what benefits society. One does not need to be a wild-eyed Socialist to question whether a disruption as encompassing as AI could pose hazards to society. Those who control the development and deployment of AI will have a great deal of authority over how our economy operates and how our future day-to-day lives are experienced. Concentrations of power have traditionally been held suspect in America. Well, we have one in the making. Let's pay attention.

The ultimate direction AI takes does not have to be decided solely by engineers and corporate C-levels who find business in selling only surveillance and automation tools. AI could be targeted to complement and improve the work done by real people, while also creating new activities and opportunities that keep workers gainfully employed. We have a choice — let AI rule us or we rule it. Hopefully, we will choose wisely.