Develop Your Career Agility

We often hear about the virtue of agility as both a requirement for business success and as a needed worker trait when navigating a career during a time of flux. Knowing how to advantageously manage change is considered smart and profitable. Many, however, may wonder how exactly agility is to be practiced. Is it just a matter of adopting a new mindset, or are there specific actions that need to be taken? What follows are my thoughts on agility in the context of career development for the individual worker.

Yes, mentally shifting away from habitual rigidity and unchallenged assumptions is a good place to start. It can help to adopt a heuristic such as the SMART goals model to guide and gauge your agility practice. To review, SMART is an acronym for change actions that are specific, measurable, achievable, relevant, and time-bound. Bringing order and a degree of simplification to the decision-making process will improve your ability to adapt.
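To make the heuristic concrete, here is a minimal sketch in Python of treating a career goal as a SMART checklist. The field names, checks, and sample goal are hypothetical illustrations of the idea, not a prescribed tool.

```python
# A minimal sketch (hypothetical fields and thresholds) of running a career goal
# through the SMART criteria and surfacing whichever criteria it still fails.
from dataclasses import dataclass
from datetime import date

@dataclass
class CareerGoal:
    description: str       # specific: what exactly will change?
    metric: str            # measurable: how will progress be tracked?
    steps: list            # achievable: concrete actions within your control
    career_relevance: str  # relevant: why this matters to your career
    deadline: date         # time-bound: when it should be done

def smart_gaps(goal: CareerGoal) -> list:
    """Return the SMART criteria this goal does not yet satisfy."""
    gaps = []
    if len(goal.description.split()) < 5:
        gaps.append("specific: describe the change in more detail")
    if not goal.metric:
        gaps.append("measurable: name a metric or milestone")
    if not goal.steps:
        gaps.append("achievable: list at least one concrete step")
    if not goal.career_relevance:
        gaps.append("relevant: tie the goal to your career direction")
    if goal.deadline <= date.today():
        gaps.append("time-bound: set a future deadline")
    return gaps

goal = CareerGoal(
    description="Move into a data-analyst role within my current employer",
    metric="Complete two portfolio projects and one internal interview",
    steps=["Finish SQL course", "Shadow the analytics team one day a month"],
    career_relevance="Aligns with the shift toward data-driven roles",
    deadline=date(2026, 6, 30),
)
print(smart_gaps(goal))  # [] if every criterion is addressed; otherwise the gaps to fix
```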

Beyond these beginning measures, it is advisable to stay aware of the market, industry, and professional trends that will impact your career. These trends are relatively short-lived, so tuning in regularly to the twists and turns of work-related currents is an approach best integrated into your professional practice. To illustrate, let us examine some current trends in the 2024 world of work worth considering when career planning.

The old career rules or conventions are changing. It has been old news for a while that workers rarely spend their entire careers with a single employer anymore the way the Boomers did. But the change does not stop there. The combination of advanced communications and analytic technology and a shifting ethic regarding work/life balance has disrupted the traditional relationship between employer and employee. In short, valued talent now has the leverage to demand concessions from employers yearning to keep them in-house for as long as possible.

For example, management has been getting the message that the cost burden placed on employees simply to work is increasingly being viewed as unacceptable. Traditionally, it has been the responsibility of workers to absorb certain costs to remain employed, such as commuting, child care, and housing expenses. One consequence of the pandemic has been for workers to realize that these costs are a liability not sufficiently covered by the compensation they receive. A corresponding recognition is that productivity for many positions does not diminish when work is performed remotely.

With return-to-work mandates increasingly seen as an expensive and unnecessary encumbrance on employees, management is beginning to offer cost offsets such as housing subsidies, caregiver benefits, student loan assistance, and even financial literacy programs. The hope is that such initiatives will retain high-quality employees.

Some employers are also becoming more agreeable to flex time arrangements. Taking extended work breaks for family reasons, delaying retirement, returning from retirement, working part time temporarily, job sharing, and other individualized deals are becoming more common. In general, management sees the need to make it simpler and more desirable for talent to stay or return from leaves.

Another trend we have heard a lot about over the past year and a half is the rise of Artificial Intelligence and how it will both improve and devastate our jobs. AI will undoubtedly change the employment landscape in ways that practically scream for workers to be agile. It is safe to say that AI-driven job refashioning is a multi-year trend already underway. One consultant specializing in future-of-work issues predicts that AI will intervene in up to 70% of text- and data-laden jobs by 2025, up from under 10% in 2023.

To call anticipating and preparing for AI’s influence in your profession advisable would seem to be the understatement of the year. As companies and individual contributors figure out how best to leverage AI’s potential while mitigating its downsides, all of us need to find the career sweet spot between over-hyping AI’s consequences and ignoring its likely effects.

Hopefully, this gives you some practical ideas to contemplate when making career plans for an unpredictable future. Workforce disruption is likely to be significant in the near term. Exercising agility is a key component of what should be your preparedness regimen.

 

Being Valued on the Job

So here you are, putting the best you have into a job you have held for several years. The compensation is decent, but not great. However, many of your other work preferences are being met, such as a hybrid work setting, respectable colleagues, a manageable workload, and, most importantly, you are largely able to exercise your strengths with minimal time spent dealing with your weak areas. On balance, it is a good job, which is why you have stuck with it this long.

But as time has gone on, you find yourself wondering whether your bosses really care about you. You recognize that positive feedback is important to you. Some sort of confirmation is needed for you to keep putting your maximum effort into this endeavor. The fact that you are questioning this at all seems to indicate that something is lacking in the rapport you have with management. Yet you just can’t put your finger on what is missing. It leaves you feeling somewhat unfulfilled with your job.

We can look at workplaces as falling into two possible camps, which I will call the traditional camp and the emotive camp. The traditional workplace carries an inherent expectation that employees are there to follow the direction of management, period. Employees either fit into this assumption or they don’t. It is not the job of the employee to question the instructions they are given. If they take issue with management decisions, then the door is just over there. The message is to shut up and do your job.

The emotive camp, on the other hand, is characterized by a management style that believes employees need to be heard and respected. This stems from an ethic that values the workplace as a place of learning, adaptability, and growth. This style of manager sees contemporary business trends like globalization, technological change such as artificial intelligence, and consumers who expect personalized brand experiences as game changers requiring employees who can function effectively in this new normal.

As an employee, it is appropriate to ask yourself which of these camps best suits you. In some situations and for certain personality types, the traditional culture may be fine. It offers a hierarchical structure with little ambiguity about who reports to whom. However, for a growing number of employees, especially from the Millennial and Zoomer (Gen Z) generations, top-down, my-way-or-the-highway supervision is unlikely to attract and retain the talent needed to meet the demands of today’s consumers.

What are some specific practices we can expect from managers in an emotive workplace? Margaret Rogers, a business consultant with a “human-centered methods” focus, cites several. It begins with managers accepting as a priority the need to understand their employees at a more personal level than was expected in the past. The goal is to merge the conditions that accentuate optimal employee performance with the needs of the organization. It is assumed each employee has career wishes aligned with related learning goals. Arranging these objectives so that company and employee both benefit can reduce turnover.

Flexibility must be built into the decision-making processes of both manager and employee so that shifts can be made to take advantage of on-the-job opportunities. Managers must have the latitude to make good on the matches they find and bring about better employee-organization fits. As an employee, you can feel your contributions matter to the degree that you are upskilled in the ways you want to be.

Rotating employees through varied on-the-job experiences can broaden the range of skills they develop while also expanding the talent pool from which the organization can draw as needed. Additionally, as in any high-quality learning setting, superior communication among all stakeholders is required. Constant feedback, like constant data, is useful for fine-tuning the improvements all parties rightly demand.

An emotive workplace is usually an organization delivering a product or service in an always competitive marketplace. Such a culture recognizes that by being a learning organization, sensitive to employees’ hopes, it is more likely to build an employee base that is both committed and adaptable.

We left you earlier wondering why your current job leaves you feeling unfulfilled. Perhaps the above analysis will help you determine where the rub is. And if you decide a change needs to happen, don’t put it off for too long. Lasting improvements are sweeter when made in the near term than when indefinitely delayed.


The Mixed Story of Women in the Workforce

First, the good news for women in the workforce. Women in America are enrolled in higher education in greater numbers than men. According to the National Center for Education Statistics, in the fall of 2021 female students comprised 61% of the higher ed student body, with men at 39%. A year earlier the split was 58% female and 42% male. Projections for 2030 indicate that there will be 2.37 million more women than men in postsecondary institutions. The trend is clear: women are more drawn than men to furthering their education.

This was not always the case. In 1970, male enrollment outnumbered female enrollment. By 1980, admissions were at parity. And now here we are. This shift should tip the balance of education’s benefits toward women.

What are these benefits? Even at a time such as ours when the high cost of college education is causing more people to question its return on investment, there are still documented advantages to getting an undergraduate degree. These include:

  • Higher earning potential and incomes
  • More employment possibilities
  • Increased job security
  • More abundant compensation packages
  • Enhanced personal development
  • Greater networking opportunities
  • Improved job satisfaction

It is not a stretch to predict that these merits will eventually give women the edge in business leadership and economic clout. Whether a feminization of the economic picture will be an overall gain has yet to be seen. Will competition be improved by defanging it to some degree, or at least softening its sharpness? Again, this has yet to be seen.

However, the outlook for women’s employment writ large is not so rosy. Among the policy areas the think tank Third Way examines is the non-college economy, and there its dive into numbers provided by the Bureau of Labor Statistics (BLS) reveals a troubling forecast for women who do not pursue a college education.

Over the next decade, the rate of job loss among non-college women is expected to increase significantly. Unfortunately, many non-college women are currently employed in industries predicted to decline.

So, let’s look at the big picture. BLS has identified the industries projected to decline economically, and by extension employment-wise, over the next ten years. Within these industries, 97% of the positions that do not require a college degree are projected to disappear. Notably, 60% of these job losses are in middle-wage jobs. And here is the kicker: two-thirds of these precarious jobs are currently held by women without a college degree.

Most non-college women work in jobs considered low-wage or middle-wage. Examples of low-wage jobs include cashiers and fast-food cooks; middle-wage jobs include office clerks and retail sales supervisors. Historically, middle-wage jobs provided the means for women to support themselves and get established in the middle class. With many of these jobs facing elimination, the strain on non-college women trying to afford a middle-class lifestyle will become more pronounced.

To add insult to injury, it is these middle-wage jobs that are most likely to be abolished, even compared with low-wage jobs. In fact, low-wage jobs, those paying under $36,700, are under less threat according to BLS than middle-wage jobs. If so, it is easy to see that a migration of non-college women from middle-wage to low-wage work is likely.

The decline of middle-wage jobs is largely being driven by automation and outsourcing. And who knows to what extent Artificial Intelligence will exacerbate this movement? Examples of middle-wage jobs include:

  • Administrative assistants outside of legal, medical, and executive
  • Customer service representatives
  • Assemblers and fabricators
  • Bookkeeping, accounting, and auditing clerks
  • Frontline office supervisors

One possibility for averting this disturbing development is the continued proliferation of industry credentials, certificates, and badges that qualify women (and men) for middle-wage positions without requiring a college degree. Although such credentialing will not replace college degrees, in the short term it may stem the tide of disappearing middle-wage jobs.

Another thought is that the college-educated women who will hold more decision-making authority in the future will design economic and employment solutions for the women who have been unable to go to college. My fingers are crossed.

 

An AI Bill of Rights

Often it is difficult to separate living from working. Our personal lives and professions can become so intertwined that it can seem pointless to differentiate the personal from the professional. Such is the case when considering one of today’s hottest topics, the impact of artificial intelligence. Is AI going to shape our lives in general or be mostly an employment issue? A fair prediction is that AI is going to change the landscape of both our lives and our work.

As citizens and as workers we should have a strong say in what the influence of AI is going to be in our daily lives and on our jobs. The disruptive potential is too huge to leave AI development solely up to engineers and their corporate employers. If AI advancements are to be the result of free market innovation, then those of us who are future customers and recipients of its consequences should have the freedom to weigh in and heavily influence its maturation. 

A practical way to approach this challenge is through the lens of individual rights. Ever since the seventeenth-century philosopher John Locke proposed the existence of fundamental natural rights, such as life, liberty, and property, we Westerners have organized our social, political, and economic institutions around the notion of personhood rights to both preserve and extend the enjoyment of our lives. We bestow upon ourselves the rights necessary to live fruitful lives free of destructive intrusion. Now is the time to apply these rights in the face of AI’s infiltration.

A useful place to ground a national debate about AI’s proliferation is with the Biden Administration’s White House Office of Science and Technology Policy’s proposal known as the Blueprint for an AI Bill of Rights (https://www.whitehouse.gov/ostp/ai-bill-of-rights/). This is a thoughtful approach to identifying the key areas of contention in the planning, application, and mobilization of AI-based automated systems. 

Five principles are presented as the foundation of an AI Bill of Rights. To summarize:

Safe and Effective Systems: An AI system should be developed with input and testing from diverse sources to ensure it can deliver value free from the risk of malicious or unintended consequences. Humane industry standards and protective measures should apply, including the power to shut down harmful applications. Data usage is to be transparent, necessary, and respectful of personal integrity.

Algorithmic Discrimination Protections: The biases, inequities, and discriminatory practices of people should not migrate to automated systems. Indefensible digital treatment of people based on their individual differences is to be considered unjust. Legal protections for ordinary citizens and farsighted equity assessments of intended and unintended uses should be central to the design and deployment of AI systems.

Data Privacy: This concern has been with us since the advent of Web 2.0. People should have ownership and agency over their data. The right to privacy is strong among free and independent people. This should be reflected in the automated systems they use. Exercising consent and having the ability to opt in and out of these systems with no restrictions should be inherent in their development. 

Notice and Explanation: It should not take a computer science degree for ordinary users to understand what they are getting into with AI systems. Clear and unambiguous language that informs operators about system functionality, intent, outcomes, updates, and risks is to be considered basic.

Human Alternatives, Consideration, and Fallback: In short, when users determine that an automated system has become too unwieldy or its functionality too untenable, they should have access to a real person who can help. No one should feel trapped within the confines of an all-powerful system they do not understand and cannot properly operate.

These principles could become a friendly conversation starter. As citizens we need a simple tool to unify the discussion as we confront this significant challenge. This AI Bill of Rights could be it. 

We Are More Than Checklists

Back in 2009 a well-received book was published called The Checklist Manifesto by Atul Gawande, a surgeon, author, and public health researcher. The book promotes developing and using checklists to improve the outcomes of complex procedures. Dr. Gawande cites many examples of how the deliberate use of checklists leads to greater efficiency, more uniform execution of protocols, and improved protections, particularly in procedures where safety is a concern.

Upon examination, the causes of unintended consequences and accidents can often be attributed to missed steps in a process which, had they been followed, would have mitigated or prevented the mishap. Sure, we all make mistakes. But if we take the time to analyze why a mistake was made, we often find it was because of things like hurrying too much, lacking focus, being distracted, or not having enough experience. These flaws almost always mean that measures which should have been taken were not taken.
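To make the mechanism plain, here is a minimal sketch of what a checklist buys you: an explicit sequence against which completed work is compared, so skipped steps surface before they become mishaps. The step names are hypothetical illustrations only.

```python
# A minimal sketch of the checklist idea: a procedure is a fixed sequence of
# steps, and the value comes from verifying that none were skipped.
PROCEDURE_CHECKLIST = [
    "confirm patient identity",
    "verify procedure site",
    "check equipment sterilized",
    "review allergies",
    "confirm team introductions",
]

def missed_steps(completed: set, checklist: list = PROCEDURE_CHECKLIST) -> list:
    """Return checklist items that were not completed, in order."""
    return [step for step in checklist if step not in completed]

done = {"confirm patient identity", "check equipment sterilized", "review allergies"}
print(missed_steps(done))
# ['verify procedure site', 'confirm team introductions']
```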

So, to deploy and use complete checklists consistently makes perfect sense. In fact, the application of step-by-step lists is considered such a best practice these days that many of our careers can be seen as little more than a requirement to effectively execute a series of predetermined, sequential actions. Take a look at almost any job description. It is little more than a laundry list of expected deliverables, a set of boxes to be checked. Much of our work is therefore formulaic.

To the extent that we reduce our careers to predictable, stringent, and rote to-do rosters, the more accommodating we make them for AI replication. Author Ian Leslie makes an interesting observation in a recent Substack piece. Responding to the fear many express about the growth of AI, he points out that we assist the machines because we are adapting our work lives to the ways AI works. When human agency is overly systematized, we hand AI the instructions for replacing us, and AI may be better at checking boxes than we humans are.

When we model our work behavior on a simple inventory, we should not be surprised when AI mimics it. AI is algorithmic. It uses models and arrangements of variables in a mechanized and calculated way. As we are finding out, AI can out-perform us in a growing number of jobs, especially the jobs that are like checklists. A pertinent quote by artist Robert Irwin in the Ian Leslie piece is, “Human beings living in and through structures become structures living in and through human beings.”

As we determined above, checklists certainly have their place. However, as people we need to see our work lives as more than just an amalgamation of discrete work tasks and responsibilities. To be human, especially in our careers, must mean more than that.

Our evolution requires innovation and novelty. It demands an expression of humanity which is an added value above any pre-arranged framework. It seeks to celebrate intuition and ingenuity and even uncertainty. The careers of tomorrow will thrive because they bring a richness of the human experience not easily cloned by a computation.

Romanticism arose in Europe toward the end of the eighteenth century in reaction to the heavy cultural emphasis on rationalism, science, and industrialization. Instead, Romanticism insisted on honoring art, music, literature, nature, and the intellectual capacity of the individual. It exalted human emotion and aesthetic experience. Above all, the message of Romanticism was that to be fully human required embracing the wide range of human expression and refusing to be limited to the mechanized worldview of materialists and rationalists.

The time may be ripe for a neo-Romanticism in the age of AI and checklists. Efficiencies have their place. But let’s not confuse them with being human.


An Artificial Intelligence Appraisal

A seminal event in the development of machine learning occurred over a two-month period during the summer of 1956, about a half-hour drive from where I am now writing, at Dartmouth College in Hanover, New Hampshire. The year before, an Assistant Professor of Mathematics at Dartmouth, John McCarthy, had the idea of assembling an eclectic group of talent to be tasked with an original type of prognostication in an area so new that he needed a fresh term to describe it. Professor McCarthy coined a label for his proposed conference’s topic: Artificial Intelligence.

Although it would be another decade before Gordon E. Moore of Intel prophetically surmised that the number of transistors on a given unit of space could double every two years, a trend known since as Moore’s Law, by the 1950s it was already becoming clear that electronic functionality and efficiency were improving exponentially. Given this background, John McCarthy set out to investigate the potential of this trend by bringing together an interdisciplinary potpourri of researchers from fields as diverse as cognitive science, electrical engineering, communications, neuroscience, information theory, psychology, mathematics, and computer science. Their mission for what became the Dartmouth Summer Research Project on Artificial Intelligence is best summarized by the following sentence from Professor McCarthy’s 1955 conference funding proposal: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Such was the launch of one of the most revolutionary movements of our time.

Artificial Intelligence, or AI as it has become commonly known, is simply intelligence displayed by machines. Beyond this terse definition nothing about AI is simple. Nor is it docile. AI is already shaping our world in ways never before seen. We now live with tools such as search engines, robotic automation, virtual product recommendations, and data-driven medical diagnostics to name just a few of the innovations unheard of by most people only a generation or two ago. And on the horizon? Self-driving cars, self-learning cybersecurity systems, robotic assistants for the elderly, and tailored genomic medicines merely begin the dazzling list of inventions coming our way. AI has already demonstrated potential for disrupting and altering the way we live our lives and we are just getting started.

The volatile and variable nature of AI agency prompts me to examine this phenomenon in some detail and with some urgency. At present, AI promises to be either a source of life-enhancing opportunities or a basis for grievous intrusions and dislocations that could threaten our way of life. Its strength may deliver possibilities or burdens. Given this potency, it is incumbent upon us to fully and intentionally participate in examining, monitoring, and directing the course of AI development. It is not as if this phenomenon is some act of God beyond our control. It is man-made. Therefore, we need to ask ourselves: are we going to steer AI advancement, or is AI going to drive us?

I approach this inquiry as a lay person. I have no significant education or experience in subject matter related to AI or machine learning, including mathematics, computer science, robotics, cognitive modeling, information theory, data science, or any of the other sub-disciplines that go into the structure of AI. I am just a John Q. Citizen who is interested in technological change and what adaptations it can bring to individuals and society. Counter-intuitively, my non-technical background highly qualifies me to look into the possible ramifications of AI. And I should not be alone in doing so. When science becomes so abstract that average people simply resign themselves to letting the smarty-pants of the world make the profound decisions of how we are to live our lives, then we have a problem. I think it best to welcome the benefits AI can bring to us, while maintaining skepticism and a wary eye about its possible perils.

It is tempting to think that AI will display its power largely as a source of new products and services for the universe of consumers always eager to be swept away by the newest shiny object. Technological innovations often reveal new and improved ways of completing common tasks. Undoubtedly, expanded capacities will emerge allowing consumers to benefit from a myriad of novel ways to perform daily life functions. However, AI will manifest itself broadly by transforming employment and by extension how we live our day-to-day lives. This historic interchange between employment and how we structure our lives appears to be on the verge of a mammoth AI metamorphosis.

For better and for worse we have been at a similar economic and social crossroads before. It may be useful to consider what transformations the last great such revolution yielded as we ponder how to best be prepared for the AI insurgency. Two hallmarks of the seventeenth and eighteenth centuries in Europe were the rise of the Scientific Revolution and the Enlightenment. The Scientific Revolution ushered in new ways of comprehending the essence of natural materials and phenomena, which resulted from a newly developed thought process, inductive reasoning, while the Enlightenment encompassed a multitude of ideas leading to individual empowerment and the pursuit of just government. Together the Scientific Revolution and the Enlightenment set the stage for the Industrial Revolution.

Much of the world we know is the way it is because of the Industrial Revolution. Starting in England, where modern science largely took hold, and spreading quickly to many places across the globe, economies transitioned from home-based craft and agrarian work to factory-based machine production. This conversion brought many benefits, such as more affordable and plentiful life-enhancing goods and services, effective means of transportation, labor-saving devices, and medical advances. For many non-aristocrats, wealth generation became possible, and specialty professions arose. But these gains came with costs. The rapid migration of rural folk into industrialized cities, which were ill equipped to handle the influx, created inadequate, overcrowded, and disease-ridden housing. Increases in pollution, environmental degradation, and dangerous working conditions were also consequences. The lesson we can take from the historical shift of muscle to machine is that great advantages can and probably will come from fundamental economic upheaval, but those boons can trail along a load of detriments and handicaps.

The Industrial Revolution impacted whole societies by changing how people lived. It ushered in an era of mass production and mass consumerism. Commensurate with this shift arose large corporate businesses, labor unions, immigration, government regulation, rural vs. urban issues, broad-based taxing, higher education, and improved widespread communications. Scientific progress introduced professional expertise in the form of rational problem solvers with job titles such as engineer, manager, and accountant.

The new AI revolution will likely rock the world similarly. When considering the scope and scale of such monumental change, it can seem beyond being managed, like a runaway train. Therefore, it may be helpful to view this latest grand social transition as primarily an economic one. Economics implies that what is underway is determined by people. Economics is a social science concerned with the production, distribution, and consumption of goods and services. It is not controlled by external forces of nature or acts of God. We are not trying to control a competing power; rather, we are trying to control ourselves. Choice and decision making among people is involved in executing economics, and it should also be involved in how we face the rollout of AI.

Fortunately, with regard to human agency and control, a crucial difference between the economics of the AI revolution and the Industrial Revolution involves centralization, or incorporation. The Industrial Revolution engendered a corporate approach to group organization. Large collections of people amassed with a common purpose and the legal authority to execute objectives, such as transacting business under a single name. Institutionalization became a widespread structuring principle across business, government, and society in general. People found that in order to benefit from this economic order they had to adopt and adjust to these institutional systems, including their rules, their schedules, and their cultures. Depersonalization and objectification ensued.

However, the new technology and information world order we are now experiencing is by nature more networked and distributed, and less concentrated. Therefore, it should be possible for individuals to exercise more control over their engagements, even if this means somewhat diluting the power of institutions. New grassroots organizing constructs, based on ad hoc resiliency and combining global talent as needed, have empowerment potential for people. Fitting AI into this type of schema, whereby individuals collaborate with both distributed talent and AI to realize goals, is a possibility. The more AI can be harnessed in a blended sharing of ideas and solutions, the more command people will have over the direction of the future. Let us use AI to help invent our tomorrow.

It may be helpful to envision exactly what kind of tomorrow we wish to have for ourselves. Clarifying an image of an optimal future allows us to strategically orient our resources, such as AI, to realize a destiny of our own intentional making. And as I suggest, AI can be one of our key resources or it could be a serious impediment.

Cultivating a positive collective future begins, of course, with projecting what is best for each individual. A reasonable contention is that the greater the number of happy or content individuals in a society, the more satisfied that society will be as a whole. Achieving this state is really very non-technical. Technology is not necessary for individuals to be grateful for what they have, devoted to family and positive friends, optimistic in demeanor, living in the present, and feeling dignified and purposeful.

In addition to these very important psychological conditions, we can add an economic perspective to round out our ideal vision of the future. Economics, more than psychology, can be shaped and influenced by institutional and governmental policy initiatives. The resulting economic model of exemplary standards will need metrics to determine whether its goals are being achieved. To be comprehensive, a blend of quantitative and qualitative assessments needs to be applied.

To start, let’s not rely too much on Gross Domestic Product (GDP), a monetary appraisal of the market value of all goods and services produced over a period of time. GDP has been widely accepted for decades as a broad metric of the national economy and, by inference, of how well people are doing. High GDP rankings correlate with high-growth economies, which traditionally is considered a good thing. Aside from the obvious problem of trying to distill as complex an edifice as a national economy into a single number, GDP is faulted for encouraging a style of economic growth that manifests as resource depletion and pollution. In addition, in an economy with significant inequality, such as America’s, not everyone experiences GDP’s advantages evenly.

Zoë Baird, president of the Markle Foundation, is not alone in finding fault with GDP as a measure of how Americans feel about the health of their economy and, by extension, their own lives. She proposes a more inclusive set of metrics that highlights historic economic afflictions and begins to shape a more affirmative direction: evaluations of public sentiment, the amount of job creation, the number of new businesses, growth in median income, signs of increased worker protections and training opportunities, and growing numbers of young workers. In short, if evidence of enhanced economic opportunity for all citizens to pursue high-quality, secure work at good wages can be established, then we have at the very least laid a foundation for widespread prosperity and happiness.

Staying with this theme of envisioning the general public’s best interests through economics, we can note other useful perspectives. Daron Acemoglu of MIT points out that the greatest driver of shared prosperity is good jobs, i.e., widely available, secure jobs with high wages. News flash, huh? When such employment opportunities occur within a milieu of rising productivity, persistent job formation, and equitable gains, there is less social division and more sense of community. (As an aside for any Socialist-leaning readers, Acemoglu sees the proliferation of good jobs as a far more effective and preferable model than a shared-prosperity paradigm of redistribution alone.)

Further public interest considerations involve democratic governance that takes into account the dignity of all citizens, prudent business regulation, worker rights protections, easily accessible high quality workplace education opportunities, and community development.

Let us now look at how AI is contributing to the realization of a positive future that includes the features just described. To date the record is spotty at best. The most significant concern is AI’s tendency to enable entities to amass information and wealth. If left unchecked, this consolidation threatens to exacerbate wealth inequality among segments of the population, and among nations. Skills that benefit the AI industry will be favored, with a corresponding de-emphasis on skills that do not, leaving vast numbers of redundant workers, lower wages, and a declining tax base. A shift toward monopolistic behavior has always occurred when a game-changing technology is introduced, whether railroads, oil production, or telecommunications, and there is no reason to think it will not happen again.

Particularly worrisome in these early days of AI are the directions of its deployment. Much of AI is targeted toward the automation of functions, the collection of data, and the conduction of surveillance. The loss of jobs occurring simultaneously with governments and business acquiring ever more quantities of our personal data threatens to disrupt social order and the future of democracy.

AI is being empowered to make decisions, and to make “higher quality” decisions it needs to process ever more data. Algorithms are sequenced sets of instructions that guide AI in its decision making. Initially these are written by human programmers. Any biases a programmer possesses can and do find their way into the algorithmic design, for example in the data selected for analysis. When AI decision making is scaled to impact large numbers of people, these biases are amplified, instituting discriminatory practices to a degree that can be hard to unwind.
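To illustrate the scaling effect, here is a minimal, hypothetical sketch: a single design choice in a screening rule, applied automatically to a large population, turns a modest proxy bias into a large disparity. The rule, the groups, and the numbers are invented for illustration only.

```python
# A hypothetical sketch of bias amplification: one programmer's proxy choice,
# applied at scale, produces a systematic disparity between two groups.
import random

random.seed(0)

def screen(applicant: dict) -> bool:
    # Suppose the designer, echoing past hiring data, requires a degree from a
    # short list of "target schools" -- a proxy that correlates with group A.
    return applicant["school"] in {"Target U", "Elite College"}

def make_applicants(group: str, n: int, target_school_rate: float) -> list:
    return [
        {"group": group,
         "school": "Target U" if random.random() < target_school_rate else "State College"}
        for _ in range(n)
    ]

# Equally qualified pools, but group B historically attended "target schools" less often.
pool = make_applicants("A", 50_000, 0.40) + make_applicants("B", 50_000, 0.10)

for group in ("A", "B"):
    members = [a for a in pool if a["group"] == group]
    pass_rate = sum(screen(a) for a in members) / len(members)
    print(f"group {group}: {pass_rate:.0%} pass the automated screen")
# One embedded assumption, run over 100,000 people, becomes roughly a 4-to-1 disparity.
```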

AI is likely to boost profits for the entities controlling its mobilization. According to a McKinsey report, Alphabet (Google), Facebook, Amazon, Microsoft, Netflix, Alibaba, and Baidu account for most worldwide AI research expenditure. Erik Brynjolfsson, professor at the Stanford Institute for Human-Centered AI, points out that when technology replaces human labor, wages decline and wealth concentrates. Job disappearance is not just a statistic. When people lose self-authority and motivation, much more than a job is lost. Are we to believe these mega-corporations are concerned about the rights and welfare of low-wage and minority workers, the strain on those working 70-hour weeks, or the social consequences of what they are unleashing? Perhaps, but I am not confident they are.

Our world is growing increasingly complex. The population is growing, and with it come more problems, more initiatives, more interactions, and multiplying challenges. The need for enhanced cognitive processing to address this intensification is upon us. The late Stanford University philosopher Kenneth Taylor referred to the overall cognitive load and the cognitive networks required to handle it. He saw cognitive networks as divided between humans and AI. Taylor anticipated that the enormous amount of necessary cognitive labor will fall more heavily to AI over time, simply because AI will be more efficient at manipulating this expansive load. If decisions require answers best reached by coordinating vast amounts of information, then the machines will always beat the humans. Therefore, it is incumbent on humans to be sure cognitive networks are governed so that cognitive processing and decision-making tasks are distributed in ways that keep humans a relevant part of the mix. Human workers need to insist they be assigned to resolve those issues for which humans are better qualified than machines.

The citizenry does not, as a rule, play a science oversight role. The closest we come is when we sit back and wait for science to send us new technologies, which we either approve of or reject with our wallets. To think of ordinary citizens as overtly directing the progress of science seems almost ridiculous. But with regard to AI, this is what needs to happen. Concentrating vast power in a technology with so many inherent risks should not be an acceptable state of affairs. The people, the body politic, the proletariat must monitor, and when necessary sway, the development and deployment of AI so that the greatest good is realized by the greatest number of people.

Taking on a citizen oversight role begins with an agreed-upon code of ethics. This is where we start becoming activists. Debate about what an AI ethic should look like will be a healthy exercise for society. Guidance on this issue can come from the work of Annette Zimmerman, a lecturer in philosophy at the University of York and an expert in both the technology supporting AI and the social implications of such technologies. She encourages us to consider simple, common-sense issues and to ask key questions when thinking about AI. For example, just because a technology can be made, should it be made? Are the issues AI is made to address worth addressing? What is the AI’s objective? What is the likely effect of AI involvement? Might negative, unjust, or harmful consequences result because of AI?

We don’t ask enough of these questions. Why not? Zimmerman offers several reasons. Government has not yet been pushed by voters to impose stringent regulations on AI developers. The right of AI businesses to make a profit currently supersedes any kind of watchdog. Furthermore, techno-optimists are encouraged to see AI as finally reaching an inflection point where its growth becomes exponential. And an attitude exists whereby the public feels that high-end information technology is somehow sacrosanct and should be left to do its thing. Together, these conditions create social passivity just when our guard should be up.

When looking at AI, the core decision might be: are the reasons for constructing an AI tool and continually improving it justified, or should it not be built at all? In short, is the proposed AI fair? As Rachel Thomas of the Center for Applied Data Ethics at the University of San Francisco points out, the established pattern of AI progress is too centralized among system designers and operators. As a preferred alternative, Thomas proposes that AI evolution be more democratic and participatory, involving end users and others impacted by AI. As we are seeing, a key component of AI ethics involves ordinary people being actively engaged and persuasive with both the corporate powerful and the individual AI design talent.

At Dartmouth College in 1956, John McCarthy and his cohorts saw the potential for every aspect of human intelligence as capable of machine simulation. Where this has gone over the past sixty-five years is that machine intelligence is focused almost exclusively on what Kenneth Taylor calls AI-as-engineering — deep learning machines that process Big Data looking for patterns from which to make predictions. This is but one slice of intelligence, but the one which at present is the most profitable, hence the most common in the AI industry.

Artificial Intelligence is here to stay. It will not and should not be eradicated. Great innovations will come from AI, leading to improved lives worldwide in ways that are just now being imagined. Progress which positively influences the greatest number of people possible should be welcomed, whether it originates with AI or not. However, we should never lose sight of the aims being devised for the use of AI’s power. We should be able to manage disruptions to our way of life. But degeneration should not be tolerated. Democracy, our glorious experiment in self-government, sanctions us to control the levers of power, including when necessary power in business. The time to do so is now.


Questioning the Future of AI

When I drive my E-ZPass-less car through the tollbooth on I-93 in Hooksett, NH, I intentionally swing to the right to hand a dollar to the tollbooth attendant. When checking out from a shopping trip at a big box store, I prefer paying a person at a cash register rather than using the self-serve checkout.

It is not that I am some sort of crotchety Luddite who shuns digital progress. I pride myself on maintaining some decent level of technical functionality as I age. But I have come to question why those who design and build our Artificial Intelligence (AI) systems are obsessed with things like automation. In fact, the more I investigate AI the more surprised I am that AI is being utilized so narrowly, unevenly, and menacingly. 

The AI movement is powerful, significant, and potentially authoritative regarding how our personal and work lives will be lived in the coming years. The scale of its reach places it in a class far beyond the technological tinkering improvements we generally see with new phone models or app developments. Machine learning is far more enigmatic than a better video camera or gaming platform. 

Momentous changes are likely in a broad range of fields from mechanics to medicine and are expected to reshape work and modify markets. Many of these transformations will be welcomed, perhaps cherished, but others perhaps should not happen at all. 

When looking at AI today it seems too much of it is focused on building systems that either automate functions, collect data, or conduct surveillance. This should be concerning. The likelihood of jobs being lost, governments and companies holding vast quantities of our personal information, and our personal freedoms becoming threatened is not some far-fetched paranoid delusion, but an ugly scenario we should work to prevent. 

There is progress and then there is degeneration. AI could give us either or both. As an analog, I think of my attitude ten to fifteen years ago about social media. Then, the crowdsourcing of unregulated input from the global community augured richer and more transparent conversations about any number of topics. Or so I thought. Today social media looks like a cesspool of disinformation and disgruntlement ushering in social breakdown. Not all innovations should be welcomed. 

In our democracy, while we still have one, the general public needs to be actively engaged in monitoring the AI powers we have and in weighing in on the policies that determine what AI engineers develop. A laissez-faire attitude of “whatever the markets come up with will be fine; markets know best” can lead to costly and offensive ruptures in the very framework of society. Citizens should insist that AI be deployed in a generally advantageous manner as described by utilitarian philosophers like Jeremy Bentham: “the greatest amount of good for the greatest number.”

Instead, it looks like AI development is being driven more by the acquisition of corporate profit and power than by what benefits society. One does not need to be a wild-eyed Socialist to question whether a disruption as encompassing as AI could pose hazards to society. Those who control the development and deployment of AI will have a lot of authority over how our economy operates and how our future day-to-day lives are experienced. Concentrations of power have traditionally been held suspect in America. Well, we have one in the making. Let’s pay attention.

The ultimate direction AI takes does not have to be decided solely by engineers and corporate C-levels who find business in selling only surveillance and automation tools. AI could be targeted to complement and improve the work done by real people, while also creating new activities and opportunities that keep workers gainfully employed. We have a choice — let AI rule us or we rule it. Hopefully, we will choose wisely. 

Factor AI into Your Career Plans

It does not matter what career field you are in; everything from finance to fashion is being, and will increasingly be, impacted by Artificial Intelligence (AI). Whether you believe AI will create lives of no-work luxury for us all or will end civilization as we know it, our challenge in the 21st century is to understand and participate in shaping AI’s repercussions. Therefore, when pondering your career long game, a critical planning component is the impact AI will have on what you do for a living.

So, what is AI? I like the working definition from Kathryn Hume (Director, Product & Business Development for Borealis AI): AI is whatever computers cannot do until they can. This implies that AI is a moving target, compiling and sorting vast amounts of data one year and leveraging machine learning that promotes employment obsolescence the next.

What once passed for AI is now integrated into standard operating procedures across many industries. Currently, we are wondering about and bracing for unexpected consequences derived from ever more sophisticated machines “thinking” like superhumans. 

AI certainly engenders anxiety. Sam Daly (Builtin.com) reports on a 2018 survey in which 72% of respondents expressed concern about human jobs being subsumed by technology. Even Elon Musk, of electric car and SpaceX fame, has called AI more dangerous than nukes. And of course, the current US Presidential campaign includes a candidate, Andrew Yang, who showcases a universal basic income for all Americans to help offset the workforce changes and employment displacement being caused by increased automation and AI.

Given this AI anguish, what is a career planner to do? To begin, it may help to view AI as something old-school, as in business development processes which require change management procedures aimed toward adoption of innovations which lead to competitive advantages. In other words, AI may be no more threatening than any other big change. In this case, the adjustment is in the area of human-machine collaboration. (But we did that once during the Industrial Revolution, right?) 

Also, let us not think of AI as Alien Intelligence. There is nothing otherworldly going on here, however opaque AI may seem to the layperson. AI is constructed from algorithms, which are sets of executable instructions leading to an output. Algorithms can be written around one or many criteria or inputs, ranging from if…then statements to text, images, videos, voice, and more. As algorithms become more complex it can be unclear which criterion dominates, but this does not diminish the validity and importance of the outputs.

The quality of the inputs determines the caliber of the results. For example, if the data sets that “train” algorithms are too narrowly selected, i.e., too old or demographically skewed, then the scope of the output is limited as well. We can think of such algorithms as biased. When relying on AI to plan market-capture strategies, for instance, this can matter a lot.
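A minimal sketch can make both of the points above concrete: the algorithm itself is just if…then logic producing an output, and a narrowly selected training sample skews what that output looks like. The screener, skills, and numbers below are invented for illustration, not any real system.

```python
# A hypothetical sketch: a trivial resume screener whose keyword weights are
# "trained" on past hires. If the training sample is old and narrow, the
# if...then scoring logic faithfully reproduces that narrowness.
def derive_keyword_weights(past_hires: list) -> dict:
    """Weight each skill by how often it appears among past hires."""
    weights = {}
    for hire in past_hires:
        for skill in hire["skills"]:
            weights[skill] = weights.get(skill, 0) + 1
    return weights

def score(candidate: dict, weights: dict) -> int:
    # The algorithm is plain if...then logic: if a skill is present, add its weight.
    return sum(weights.get(skill, 0) for skill in candidate["skills"])

# Training data drawn only from one era and one kind of hire...
past_hires = [
    {"skills": ["COBOL", "mainframe"]},
    {"skills": ["COBOL", "batch processing"]},
    {"skills": ["mainframe", "batch processing"]},
]
weights = derive_keyword_weights(past_hires)

modern_candidate = {"skills": ["Python", "cloud", "machine learning"]}
legacy_candidate = {"skills": ["COBOL", "mainframe"]}

print(score(modern_candidate, weights))  # 0 -- the narrow sample treats new skills as worthless
print(score(legacy_candidate, weights))  # 4 -- the output mirrors the biased training set
```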

“Decisions” made by computers can also be fickle, as in different from one day to the next, requiring retrospective pattern analysis. In short, algorithms now are good at processing relatively restricted tasks, but far from totally taking over the universe of human capabilities. 

Many professional job descriptions will change due to AI. To prepare, develop a nimble and adaptable attitude toward change. Do not wait for your job’s transformation to be forced onto you. Get out in front of the inevitable and think, for example, about how AI could eliminate the mundane parts of your job and free you up for more innovative endeavors.

Influence the way AI can improve your performance and the service you provide. By thinking critically about what AI can and cannot do you have a better chance of determining your professional relevance moving forward. 

Applying Technology in Hiring

Human contact, whether through professional networking, social connections, or by earned reputation still matters significantly and should in no way be minimized when describing the recruitment and hiring process. If anything, it is paramount. However, another very important track to cover when developing one’s career is the one driven by existing and emerging technologies meant to streamline and optimize the employment process. 

Today this ranges from online job boards advertising positions to Applicant Tracking Systems (ATS) that parse resumes for HR and recruiters. Artificial Intelligence (AI) and machine learning tools designed to assess the employability of candidates are also now in play.

How to position yourself advantageously for these digital aids and gatekeepers needs to be a key component of a well-planned career growth strategy. Let us take a current look at each of these technologies.

Online job boards are not new, in short supply, or complicated. They are little more than interactive websites that post job descriptions from employers. More recent are job search engines like Indeed and Simply Hired that scour the internet, aggregating job postings from a variety of sources.

These sites are seductive in that they give the appearance of a job store stocked with positions just waiting for you to pick up while shopping. A common and ineffective approach is to spend hours responding to postings, only to generate nothing but recruiters trying to lure you into high-turnover, 100%-commission sales jobs.

Nonetheless, working the job boards is not a complete waste of time, and decent jobs can result. A good rule of thumb is to spend about 10% to 20% of your job search time on the boards while being careful and discriminating about what you respond to.

ATS software allows recruiters to organize vast lists of applicants along with the criteria most useful to hiring managers when deciding whom to contact for interviews, such as qualifications, employment history, and degrees earned. For those of us trying to secure an interview, the task is to prepare resumes (and LinkedIn profiles) that are keyword-rich, with contextually used terms aligning our skills and knowledge with the responsibilities and deliverables mentioned in job descriptions.
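To see why keyword alignment matters, here is a minimal sketch of the kind of term matching an ATS is commonly assumed to perform: compare terms in a job description with terms in a resume and report the overlap and the gaps. Real ATS products vary widely; this is an illustration under that assumption, not any vendor’s actual logic.

```python
# A hypothetical sketch of naive keyword matching between a job description and
# a resume. The stopword list and sample text are invented for illustration.
import re

STOPWORDS = {"and", "the", "with", "of", "to", "in", "for", "a", "an", "or"}

def terms(text: str) -> set:
    """Lowercase the text and return its content words."""
    return {w for w in re.findall(r"[a-z][a-z+\-#]*", text.lower()) if w not in STOPWORDS}

def keyword_overlap(job_description: str, resume: str):
    wanted, offered = terms(job_description), terms(resume)
    return sorted(wanted & offered), sorted(wanted - offered)

job = "Project manager with budgeting, stakeholder communication, and Agile experience"
resume = "Led Agile teams; owned budgeting and vendor communication for three projects"

matched, missing = keyword_overlap(job, resume)
print("matched:", matched)   # ['agile', 'budgeting', 'communication']
print("missing:", missing)   # ['experience', 'manager', 'project', 'stakeholder']
```

Note that even a near-miss like "projects" versus "project" shows up as a gap in this naive matcher, which is one reason the advice to echo a job description’s own terms, in context, is so common.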

Given the need for a resume that is ATS-friendly and simultaneously attractive to human readers, the challenge is to strike a visually appealing format that won’t confuse the ATS. This can be tricky. If you want a designer resume that looks like those on Pinterest, then forget about passing ATS muster. With so many companies employing an ATS, the best strategy may be to respect the many conditions needed to avoid being digitally rejected in a millisecond, while adding enough visual polish, and of course solid content, to keep your resume from looking like just another slice of white bread. Achieving this level of resume optimization is a necessary goal.

The latest trend, which is expected to proliferate in use and sophistication, involves the impact of AI in hiring decision making. There is a growing perception that relying on a candidate’s skills alone is not consistently producing better employees. The evolving thought is to assess personality more with the goal of finding a well-rounded and compatible colleague.  

To this end, AI is being deployed to identify personality traits gleaned from resumes, online profiles, social media presences, video appearances, you name it. Apparently, this is seen as less biased than human observers. We shall see. (Cannot algorithms be biased too?) 

At any rate, developing a consistent brand and value proposition that includes both your technical talents and your work style/interpersonal characteristics across all platforms may be wise for presenting to human and technological appraisers alike. 

Being prepared for the changes and encroachment of technology into hiring decisions, and by extension career development, has become imperative in today’s employment world.