When I drive my E-ZPass-less car through the tollbooth on I-93 in Hooksett, NH, I intentionally swing to the right to hand a dollar to the tollbooth attendant. When checking out from a shopping trip at a big-box store, I prefer paying a person at a cash register rather than using the self-serve scanning system.
It is not that I am some sort of crotchety Luddite who shuns digital progress. I pride myself on maintaining a decent level of technical competence as I age. But I have come to question why those who design and build our Artificial Intelligence (AI) systems are so obsessed with things like automation. In fact, the more I investigate AI, the more surprised I am that it is being applied so narrowly, unevenly, and menacingly.
The AI movement is powerful, significant, and potentially authoritative in shaping how our personal and work lives will be lived in the coming years. The scale of its reach places it in a class far beyond the incremental technological tinkering we generally see with new phone models or app releases. Machine learning is far more enigmatic than a better video camera or gaming platform.
Momentous changes are likely in a broad range of fields, from mechanics to medicine, and are expected to reshape work and modify markets. Many of these transformations will be welcomed, perhaps cherished; others perhaps should not happen at all.
Looking at AI today, it seems too much of it is focused on building systems that automate functions, collect data, or conduct surveillance. This should concern us. The likelihood of jobs being lost, of governments and companies holding vast quantities of our personal information, and of our personal freedoms coming under threat is not some far-fetched paranoid delusion, but an ugly scenario we should work to prevent.
There is progress and then there is degeneration. AI could give us either or both. As an analogy, I think of my attitude ten or fifteen years ago toward social media. Back then, the crowdsourcing of unregulated input from the global community augured richer and more transparent conversations about any number of topics. Or so I thought. Today social media looks like a cesspool of disinformation and disgruntlement ushering in social breakdown. Not all innovations should be welcomed.
In our democracy, while we still have one, the general public needs to be actively engaged in monitoring the AI powers we have and weighing in on the policies that determine what AI engineers develop. A laissez-faire attitude of ‘Well, whatever the markets come up with will be fine; markets know best’ can lead to costly and offensive ruptures in the very framework of society. Citizens should insist that AI be deployed in a generally advantageous manner, as described by utilitarian philosophers like Jeremy Bentham: “the greatest amount of good for the greatest number.”
Instead, it looks like AI development is being driven more by the acquisition of corporate profit and power than by what benefits society. One need not be a wild-eyed Socialist to question whether a disruption as encompassing as AI could pose hazards to society. Those who control the development and deployment of AI will have a great deal of authority and say over how our economy operates and how our future day-to-day lives are experienced. Concentrations of power have traditionally been held suspect in America. Well, we have one in the making. Let’s pay attention.
The ultimate direction AI takes does not have to be decided solely by engineers and corporate C-levels who see a business only in selling surveillance and automation tools. AI could be targeted to complement and improve the work done by real people, while also creating new activities and opportunities that keep workers gainfully employed. We have a choice: AI rules us, or we rule it. Hopefully, we will choose wisely.