The Data Dilemma

Data-driven decision making has been all the rage in business for some time now. Data collected on a business’s key performance indicators is used to support higher quality organizational decisions, which in turn drive the formation and execution of business strategy.

Data aggregation can be crucial for tasks ranging from answering basic questions to more sophisticated functions such as developing and testing hypotheses and formulating theories. Further benefits of having vital information available include risk evaluation, resource allocation, program and policy assessment, and performance measurement. Using data is seen as a more efficient means of carrying out these activities than relying on intuition, hunches, and observation alone.

It is hard to argue against using data in running a business. McKinsey reports that data-driven organizations are twenty-three times more likely to acquire customers, six times more likely to retain those customers, and nineteen times more likely to be profitable.

It is not difficult to find examples of how data-driven decision making has led to business success. Anmol Sachdeva, an independent marketing consultant, has researched several. For instance, Red Roof Inn, which positions its hotels near airports, figured out how flight cancellation data and weather reports could be combined to increase bookings. Netflix compiles user data on watch time, location, and programming preferences to predict which shows will become big hits. Coca-Cola leverages social media data to determine who mentions or posts pictures of its products; the resulting personalized, targeted advertising has led to a fourfold increase in clickthrough rates.

In short, data collection has become the proverbial game-changer for business. By helping organizations pinpoint the factors that best address challenges and boost productivity and profits, data and its astute analysis are now essential components of business success.

However, despite the advantages of focused data, there can come a point at which the zeal to collect information becomes extreme and intrusive, particularly for employees. Of course, it is reasonable that management would want data to improve workforce productivity. Performance metrics can be used to spot shortcomings, training needs, and specifics for employee performance evaluations. An organization that leans toward a results-only performance model needs objective data more than it needs a manager’s potentially skewed or biased interpretation of how employees discharge their duties.

Quantification can be misused when it goes beyond the purposes stated above. There are now too many documented instances of employees being monitored so excessively that the workplace has become a surveillance culture. A career cannot thrive in a context where someone is always looking over one’s shoulder. Questionable monitoring may stem from management wanting to identify organizing threats such as unionization communications. Or the surveillance may be used to spot ways of automating tasks so as to reduce the workforce. In extreme cases, data may be applied to limit wage growth, exploit labor, or even discriminate.

Amazon may be the poster child for such data fanaticism. Brishen Rogers, an associate professor at Temple University’s Beasley School of Law, notes how in 2020 Amazon sought to hire two “intelligence analysts” who were to use data analytics and other means to find “labor organizing threats” within the Amazon workforce. The company also insists that its outsourced delivery providers hand over drivers’ geolocations, speeds, and movements to use however the company wants. Inordinate corporate surveillance has also been chronicled at Uber, Lyft, Tesla, and Apple.

Inappropriate use of data results in loss of privacy, greater stress, and increased pressure on workers. The workplace can become a place of distrust and fear, not an environment conducive to innovation, high morale, and career enhancement. Instead, let’s insist that data collection be ethically conducted, transparent, and legally justifiable.

 

Questioning the Future of AI

When I drive my E-ZPass-less car through the tollbooth on I-93 in Hooksett, NH, I intentionally swing to the right to hand a dollar to the tollbooth attendant. When checking out after a shopping trip in a big box store, I prefer paying a person at a cash register to using the self-serve payment scanner.

It is not that I am some sort of crotchety Luddite who shuns digital progress. I pride myself on maintaining some decent level of technical functionality as I age. But I have come to question why those who design and build our Artificial Intelligence (AI) systems are obsessed with things like automation. In fact, the more I investigate AI the more surprised I am that AI is being utilized so narrowly, unevenly, and menacingly. 

The AI movement is powerful and significant, and it will have an outsized say in how our personal and work lives are lived in the coming years. The scale of its reach places it in a class far beyond the incremental tinkering we generally see with new phone models or app updates. Machine learning is far more enigmatic than a better video camera or gaming platform.

Momentous changes are likely in a broad range of fields from mechanics to medicine and are expected to reshape work and modify markets. Many of these transformations will be welcomed, perhaps cherished, but others perhaps should not happen at all. 

When looking at AI today it seems too much of it is focused on building systems that either automate functions, collect data, or conduct surveillance. This should be concerning. The likelihood of jobs being lost, governments and companies holding vast quantities of our personal information, and our personal freedoms becoming threatened is not some far-fetched paranoid delusion, but an ugly scenario we should work to prevent. 

There is progress and then there is degeneration. AI could give us either or both. As an analog, I think of my attitude ten to fifteen years ago about social media. Then, the crowdsourcing of unregulated input from the global community augured richer and more transparent conversations about any number of topics. Or so I thought. Today social media looks like a cesspool of disinformation and disgruntlement ushering in social breakdown. Not all innovations should be welcomed. 

In our democracy, while we still have one, the general public needs to be actively engaged in monitoring the AI powers that we have and weighing in on policies that determine what AI engineers develop. A laissez-faire attitude of “Well, whatever the markets come up with will be fine; markets know best” can lead to costly and offensive ruptures in the very framework of society. Citizens should insist that AI be deployed in a generally advantageous manner, as described by utilitarian philosophers like Jeremy Bentham: “the greatest amount of good for the greatest number.”

Instead, it looks like AI development is being driven more by the acquisition of corporate profit and power than by what benefits society. One does not need to be a wild-eyed socialist to question whether a disruption as encompassing as AI could pose hazards to society. Those who control the development and deployment of AI will have a lot of authority and say in how our economy operates and how our future day-to-day lives are experienced. Concentrations of power have traditionally been held suspect in America. Well, we have one in the making. Let’s pay attention.

The ultimate direction AI takes does not have to be decided solely by engineers and corporate C-levels whose business is selling only surveillance and automation tools. AI could be targeted to complement and improve the work done by real people, while also creating new activities and opportunities that keep workers gainfully employed. We have a choice: let AI rule us, or rule it ourselves. Hopefully, we will choose wisely.

Distributive Work Gets A Boost

One of the significant consequences foisted upon the economy by the Covid-19 outbreak has been the rapid scaling of work completed outside the office, i.e., at home. What is commonly known as remote work, and now increasingly referred to as distributive work, has been growing for the past twenty years or so. But in its short history it has never received a jolt of adoption like the one it is getting now.

My guess is that, across most businesses, distributive work is conventionally thought of as secondary in its productive impact relative to being onsite, not unlike the way online courses have tried to shake off their reputation of being “course lite.” However, the severity of the social distancing needed to break the chain of virus transmission is forcing the knowledge economy to rely on high quality distributive work as never before. Indeed, it is in the knowledge economy, comprised of smart and skilled workers producing goods and services worldwide, where distributive work holds its greatest promise.

It may be useful to know the thoughts of someone who has pioneered and cultivated distributive work for years and is now a leading voice in the movement. Matt Mullenweg was one of the founding developers of WordPress, the digital content management system, and is the founder of the diversified internet company Automattic, with roughly 1,200 employees distributed across 70 countries. He not only evangelizes distributive work but leads a set of companies that practice it daily.

He is also convinced distributive work need not be just an off-the-shelf option management reaches for during times of disruption, but a model of productivity capable of surpassing the performance of traditional office-setting work. 

Mullenweg promotes worker autonomy as key to motivation and efficiency and is much more concerned with worker output than input. While he retains some in-person collaboration, in a much more reduced and targeted manner, he recognizes the impediments of cramming many people onto a single site. A myriad of distractions, such as office politics, intrusive co-workers and managers, long off-topic chats, shared facilities, a narrow set of expected in-house behaviors, and a feeling of having little control over everything from the office temperature to the smell of someone’s lunch, can leave the worker feeling a lack of autonomy.

With that in mind he identifies five levels of distributive work from low to high effectiveness. To quickly summarize: 

  • Level 1, which is now old-school, has workers using telephone and email offsite to augment their work, but with the belief that the “real” work is done at the office. 
  • Level 2 is an attempt to recreate the office elsewhere through VPN and conferencing software that supplement voice and email. Most businesses are still mired in levels 1 and 2. 
  • Level 3 demonstrates an intentional effort to adopt the best software and equipment available to share knowledge seamlessly and transparently across the organization. This can include good lighting, microphones, and communication tools like Zoom, Slack, and P2. 
  • Level 4 places a premium on asynchronous and written communication, moving away from an over-reliance on live interactions. The goal is to improve the quality of decision making even if its pace is slowed. 
  • Level 5 is where production capability is shown to be measurably improved over traditional work methods. 

Mullenweg contends the manufacturing factory model, with all employees looking busy at the same time and in the same place, does not always translate well into the cognitive economy. By valuing quantifiable and qualitative output above all, and by giving workers the means to join forces cooperatively across distance, the “workplace” can be not only redefined but rendered more fruitful.

Looking for a humane and profitable opportunity amidst a global contagion may be difficult. Perhaps, refining distributive work is one such occasion. 

Factor AI into Your Career Plans

It does not matter what career field you are in; everything from finance to fashion is being, and will increasingly be, impacted by Artificial Intelligence, or AI. Whether you believe AI will create lives of no-work luxury for us all or will end civilization as we know it, our challenge in the 21st century is to understand and participate in shaping AI’s repercussions. Therefore, when pondering your career long game, a critical planning component is to consider the impact AI will have on what you do for a living.

So, what is AI? I like the working definition of Kathryn Hume (Director of Product & Business Development at Borealis AI): AI is whatever computers cannot do until they can. This implies that AI is a moving target, compiling and sorting vast amounts of data one year and leveraging machine learning that renders jobs obsolete the next.

What once passed for AI is now integrated into standard operating procedures across many industries. Currently, we are wondering about and bracing for unexpected consequences derived from ever more sophisticated machines “thinking” like superhumans. 

AI certainly engenders anxiety. Sam Daly (Builtin.com) reports on a 2018 survey in which 72% of respondents conveyed concern about human jobs being subsumed by technology. Even Elon Musk, of electric car and SpaceX fame, calls AI more dangerous than nukes. And of course, the current US Presidential campaign includes a candidate, Andrew Yang, who proposes a universal basic income for all Americans to help offset the workforce changes and employment displacement being caused by increased automation, or AI.

Given this AI anguish, what is a career planner to do? To begin, it may help to view AI as something old-school: a business development process that requires change management aimed at adopting innovations that yield competitive advantages. In other words, AI may be no more threatening than any other big change. In this case, the adjustment is in the area of human-machine collaboration. (But we did that once during the Industrial Revolution, right?)

Also, let us not think of AI as Alien Intelligence. There is nothing otherworldly going on here, however opaque AI may seem to the layman. AI is constructed through the design and application of algorithms, which are sets of executable instructions leading to an output. Algorithms can be written with one or many criteria or inputs, ranging from if…then… statements to text, images, videos, voice, and more. As algorithms become more complex it can be unclear which criterion dominates, but this does not diminish the validity and importance of the outputs.
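To make the idea concrete, here is a toy sketch of an if…then… style algorithm in Python. The function name, criteria, and thresholds are all hypothetical, invented purely for illustration; the point is simply that an algorithm is a fixed set of executable instructions mapping inputs to an output.

```python
def loan_decision(income, credit_score):
    """A toy rule-based algorithm: a set of executable if/then
    instructions leading to an output.
    (All criteria and thresholds here are hypothetical.)"""
    if credit_score < 600:
        return "deny"
    if income >= 50_000 and credit_score >= 700:
        return "approve"
    return "review"  # ambiguous cases fall through to a human

print(loan_decision(60_000, 720))  # a clear-cut case
print(loan_decision(40_000, 650))  # falls through to human review
```

More elaborate systems layer many such criteria, or learn their thresholds from data, which is where it can become unclear which criterion dominates.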

The quality of the inputs determines the caliber of the results. For example, if the data sets that “train” algorithms are too narrowly selected, e.g., too old or demographically skewed, the scope of the output is limited accordingly. We can think of such algorithms as biased. When relying on AI to plan market capture strategies, for instance, this can matter a lot.
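Here is a minimal sketch, with made-up numbers, of how a narrowly selected training set caps a model’s output. The “model” simply averages the labels of the nearest training examples, so anything it says about inputs outside the sampled range can only echo the data it saw.

```python
def predict(train, x, k=2):
    """Average the labels of the k training points nearest to x
    (a bare-bones nearest-neighbor predictor, for illustration only)."""
    nearest = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    return sum(label for _, label in nearest) / k

# Hypothetical training data sampled only from ages 20-30:
# (age, average monthly spend) pairs -- a demographically skewed set.
narrow_sample = [(20, 100), (22, 110), (25, 120), (28, 130), (30, 140)]

# Inside the sampled range, the prediction is at least grounded in data.
print(predict(narrow_sample, 24))

# For a 60-year-old, the model can only repeat what its 28- and
# 30-year-olds did; the skewed sample limits the scope of the output.
print(predict(narrow_sample, 60))
```

The fix is not a cleverer formula but a broader, fresher training set; the algorithm faithfully reflects whatever sample it was given.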

“Decisions” made by computers can also be fickle, differing from one day to the next and requiring retrospective pattern analysis. In short, algorithms are now good at processing relatively restricted tasks, but they are far from taking over the universe of human capabilities.

Many professional job descriptions will change due to AI. To prepare, develop a nimble and adaptable perspective toward change. Do not wait for your job’s transformation to be forced onto you. Get out in front of the inevitable and think, for example, about how AI can be used to eliminate the mundane parts of your job, freeing you up for more innovative endeavors.

Influence the way AI can improve your performance and the service you provide. By thinking critically about what AI can and cannot do you have a better chance of determining your professional relevance moving forward. 

Being What You Were Meant to Be

In general, career competitiveness is likely to get, well, more competitive in the coming years. There are several factors indicating that to secure and retain a truly meaningful and satisfying career each of us will need to manage stiffer headwinds. We may not be able to change the wind velocity or direction, but we can adjust our sails. 

What headwinds am I referring to? As anyone who has read my pieces before knows, the two principal factors impacting the future of work in the U.S. and around the world are globalization and automation. These alone introduce a host of competitive actors, both living and non-living. Offering more employment value than other people around the planet, and more than the machines that keep getting better at reproducing routine and now even sophisticated tasks, makes for a tough challenge.

Beyond the gales emanating from an increasingly integrated and technology-based economy are those of our own making. We all tend to make unforced errors that make establishing the right career more difficult. These are the impediments we throw in front of ourselves, arising from flawed thinking and behavior patterns residing deep in our psyches. And with career competition expanding due to forces beyond our control, let us at least agree it is wise to confront the missteps we cause ourselves.

Who among us cannot identify imperfect responses of our own making, many of them rooted in the way we make decisions? Perhaps we are too impatient and restless, wanting quick resolutions to problems and clarity amid uncertainty before the best course of action has been adequately determined.

Stress also affects the way we decide, usually by relieving the stress temporarily at the expense of a better long-term outcome. Any actions that take us away from a carefully planned and systematic approach to making the big decisions in our lives, such as choosing and setting a course for a career, will weaken our competitiveness.

Decision making can be thought of as a process with sequential steps: clearly identifying the decision to be made, gathering necessary information, spotting alternatives, assessing evidence, selecting an option, acting, and reviewing the outcome. Doing this well requires discipline and strength of mind, but the higher quality decisions that emerge better position us for the career competition we will face.

The practice of reflection also can play a powerful role in navigating through uncharted waters. The Benedictine nun, author, and speaker Joan Chittister is quoted as saying, “Find the thing that stirs your heart and make room for it. Life is about the development of self to the point of unbridled joy.”   

The same can be said about our careers. As we reflect on what matters most to us and on what jobs need to be done in the world, we can merge the two to find our career choice; the way to realizing our career becomes more apparent.

The signs of how we should work have always been there. They began in childhood and have followed us through maturity. How we perceive and become aware of things, people, events, and ideas followed by the conclusions we make about these phenomena shape who we become as people and as career professionals. 

The interests we cultivate, the values we hold dear, the motivations that propel us, and the skills we develop lead to a unique set of criteria that form the foundation of our value proposition. In other words, they make us competitive. Reflect on what that is for you. 

We can look ahead and fear the storm clouds, or we can accept the adverse winds as a call to action to improve our competitiveness and to be the professionals we were meant to be.