There is clearly much anticipation regarding artificial intelligence (A.I.). Depending upon your system of orientation, the anticipation is of profit to be made or of problems to be mitigated. So we have to ask: is there really gold in them-thar hills?
The Context
This is happening within the context of capitalism. The capitalist system of orientation is a mechanistic, materialistic worldview in which the environment (Nature and society) is comprised of objects to be manipulated and exploited in service of one’s own material gain: self-interest maximization, profit, wealth accumulation. It is a system of orientation devoid of morality, since it is all about ‘me’ getting ‘mine’ with no regard for the impact on any ‘we’; in fact, there is no ‘we’, especially in the neoliberal version. Thus the captains of industry, those in authority, are forever seeking the next profit-making thing, often the latest disaster to exploit or technology to use as the means to this end, with little to no regard for the adverse effects upon people and society, using whatever instruments are at hand to get what they desire.
Collateral Damage
For a very large swath of people in society, the commercialization of A.I. means the increased commodification of people, tantamount to manipulation and exploitation on steroids. It seems reasonable to see this as a logical extension of the mechanistic, materialistic system of orientation upon which capitalism rests. In this context, A.I. is essentially a machine (an instrument) for use in profit-generating endeavors, as are people (units of labor). Given that machines are driven, why else would business leaders speak about ‘driving for results’? Clearly, the terms we use are telling of the (unconscious) system of orientation guiding our decision-making and behavior, including what we value.
If a cost doesn’t show up as a line item on the balance sheet or income statement, then it is an external cost (a.k.a. collateral damage in the pursuit of profit), and so concern for, and responsibility to, society is lacking. There are many such externalities when technology becomes a tool for manipulation and exploitation in the pursuit of profit and power.
A.I. Could Mitigate Soft-Skills Dilemma for Management & Reduce Costs
When people cease to be mere units of labor and act out of their inherent humanness, expressing what is felt, then managing/leading requires a very human core-to-core relationship between the leader and the led, a difficult task for those managing the machine and driving for results. Many managers, even leaders, in business organizations find it difficult to access their capacity for soft skills, those associated with both social and emotional intelligence, to work effectively with subordinates who present very human (i.e., emotional) issues.
However, A.I. replacing people will suit such business managers/leaders well, since A.I. doesn’t express feelings, a need for meaning, or a requirement to have basic human needs met. Accordingly, with fewer people and their associated costs, A.I. could well reduce the cost of labor and its related expenses. Lower expenses mean greater profit, which is the intent of business. So it’s all good! Or is it?
Past Patterns are Telling
How can we be confident that the enthusiasm and all-out excitement for A.I. is about the prospect of profit-making without regard for any unintended adverse consequences to people and society? All we have to do is what an A.I. algorithm would do: use the patterns in data of the past to decide an action. Note that to decide is to predict that the chosen action will yield a desirable outcome.
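The pattern-matching move described above, deciding by projecting past patterns forward, can be sketched in a few lines. This is a minimal, hypothetical illustration (the `history` data and the `decide` helper are invented for this sketch, not taken from any real A.I. system): the "algorithm" simply picks the action whose past outcomes scored best, i.e., it predicts that history will repeat.

```python
from collections import defaultdict

# Hypothetical past observations: (action_taken, outcome_value) pairs.
history = [
    ("invest", 3), ("invest", 5), ("divest", 1),
    ("invest", 4), ("divest", 2),
]

def decide(history):
    """Pick the action with the highest average past outcome,
    i.e., predict that the historical pattern will repeat."""
    totals = defaultdict(lambda: [0, 0])  # action -> [outcome_sum, count]
    for action, outcome in history:
        totals[action][0] += outcome
        totals[action][1] += 1
    return max(totals, key=lambda a: totals[a][0] / totals[a][1])

print(decide(history))  # the historically best-scoring action
```

Nothing in this loop asks whether the chosen action is good for anyone; it only asks what paid off before. That is the point of applying the same logic to the patterns below.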
Here are a few patterns from the past: a) the fossil fuel industry continues seeking profit, unabated by its well-established detrimental impact upon the viability of life on this planet; b) Internet technology afforded the commercialization of social media through the manipulation and exploitation of its users; and c) the gun industry continues seeking to maximize profit even as society experiences ever-increasing gun violence and death. All of these illustrate the manipulation and exploitation of people treated as objects (viewed as collateral damage and external costs) in the pursuit of profit. What’s the likelihood that A.I. in the hands of the business-minded (a.k.a. profit maximizers) would yield a different pattern?
As reported by Public Citizen:
“Right now, businesses are deploying potentially dangerous A.I. tools faster than their harms can be understood or mitigated. History offers no reason to believe that corporations can self-regulate away the known risks – especially since many of these risks are as much a part of generative A.I. as they are of corporate greed. Businesses rushing to introduce these new technologies are gambling with people’s lives and livelihoods, and arguably with the very foundations of a free society and livable world.”
It has even become evident to the business-friendly press that the foreseeable problems from A.I.’s commercialization are multiple. A short list includes: 1) manipulation through misinformation; 2) unemployment, that is, life destruction through job loss; 3) bias from big data itself; 4) a future constrained by the past, since A.I. is mere machine training on big data, which is of the past; 5) rarity of out-of-the-box thinking, since A.I. can likely only foresee what the past would suggest; 6) the atrophy of people’s capability for decision-making, much as keyboard use has eroded people’s capability for cursive writing.
Let’s just consider this: while the decision-making process involves the use of information, if not knowledge/understanding, to decide a course of action, it also requires the use of values to assess the moral soundness of each possible action. To a great extent this calls upon the morality and humanness of the decision-maker, their care and concern for others. Though A.I.’s (machine) learning uses big data, which likely includes the correlations/associations or thoughts inherent in the patterns in the data, does this mean that an A.I. algorithm is, or can be, thoughtful in the same sense as a human being?
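The two-part decision process described here, information plus values, can be made concrete with a small hypothetical sketch (the action names, scores, and the `decide` helper are all invented for illustration): morally unacceptable options are screened out first, and only then is the expected benefit maximized. Profit-only decision-making, the essay argues, skips the first step.

```python
# Hypothetical outcome scores learned from past data (information).
past_outcomes = {"exploit_users": 9.0, "respect_users": 6.0}

# A values judgment the data alone cannot supply (morality).
violates_values = {"exploit_users": True, "respect_users": False}

def decide(outcomes, violates):
    """Screen out actions that violate values, THEN pick the best remainder."""
    acceptable = {a: score for a, score in outcomes.items() if not violates[a]}
    return max(acceptable, key=acceptable.get)

print(decide(past_outcomes, violates_values))
```

Note that the `violates_values` table has to come from somewhere outside the outcome data; that is precisely the human judgment the question above asks whether an algorithm can possess.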
Which is more likely for those people whose jobs are lost to A.I.: a) that the leaders of the organization will find more meaningful, if not creative, work in the organization for them to do; or b) that the leaders will discard them as they would any other no-longer-useful tool?
Internal Impact Upon the Organization Overlooked
What about innovation from within, which requires the inventiveness of the people doing the organization’s work? After all, they possess the most knowledge about that work! It must not be overlooked that people are capable of creating new knowledge to the extent that the leaders of the organization facilitate collaboration among them, the sharing of knowledge (more accurately, the sharing of understanding of knowledge), and dialogue about ideas. If, however, A.I. replaces many people, then, even though robots can exchange information with other robots, one must ask: can new knowledge emerge through collaboration, shared knowledge-based understanding, and dialogue about ideas among robots?
Of course, the organization would not be void of people, just those whose jobs can be replaced by A.I. For those remaining, what about the mistrust enacted through the elimination of co-workers? What is the effect of mistrust upon the organization’s ability to maintain its competitiveness and remain viable? Of what benefit to society would such organizations be? Just imagine the organizational culture and climate in such organizations!
A Wise Thing to Do: Perhaps We Should Stop and Think
Let’s challenge ourselves to critically think about the prospects of A.I. Perhaps the following could be a starting point for exploration, perspective gaining and understanding:
- Who (or is it what) created A.I.? Was it people, human creativity, or did the A.I. algorithm somehow emerge organically, without direct human involvement?
- Is A.I. really an equal replacement for human intelligence (H.I.)? If so, why is such a replacement necessary? In the replacement: What’s gained? What’s lost?
- A.I. versus the human mind (H.M.): Should this be?
- In the A.I. replacement, what of the H.M.? With A.I., should we discard any further development of the mind? Why?
- The H.M. can change/transform itself to the benefit of humankind. Can A.I. do the same? If it can transform itself, would that transformation be guided by a deep sense of connection to the living world?
- What could be the benefit of A.I. for humanity? How can A.I. aid in our development as human beings? As a society? What should be the benefit of A.I. for people, for humanity?
- If A.I. development could benefit humanity, then who has the wisdom to oversee its continued development and use?
- What are the parallels between A.I. and nuclear weaponry? Should A.I. be developed?