AI – AI It’s off to work we go! (Part 2)

Introduction

In the previous post we looked at the potential social problems caused by disruptive technology.

As was seen in the Industrial Revolution, these problems can be generational.

The thing to note is that the root cause is not the technology. The one common factor in any disruptive period is people.

The problem is not AI, but the people involved in the enterprise.

As always in such situations there are winners and losers.

The trick is to tip the balance so that there are more winners and the losers are not left desperate.

In his 2014 TED talk, Nick Hanauer warned that the pitchforks are coming for the rich.

Our task as project managers and citizens is to learn and deploy the new technology, whilst being good stewards.

AI may appear wonderful to a project manager who is in work.

But if the change leaves large numbers of people behind, they will reach a point of hopelessness where they are willing to tear everything down, with no regard for the consequences.

In the USA, with its widespread possession of firearms, pitchforks may be the least of our worries.

Even before AI has had a major effect, we are seeing a rise in violence, and a governing group, many of whom have lost the discipline to govern (probably driven by an electorate that feels left behind and desperate).

My wife and I attend a Tai Chi class at a local church, where I noticed their nativity scene, with the three wise men.

It made me think that wisdom is something we seem to be lacking now.

We desperately need wise people: not just the technically or commercially savvy, but the truly wise.

I remember years ago working with a particularly smart engineer.

But he was forever screwing up the day-to-day things most engineers took in their stride. A colleague summed it up thus: “He’s so smart that he’s stupid.”

So! Can we leave AI to the brilliant engineers and entrepreneurs?

It may be that they are so smart that they are stupid, or they may just not care.

A recurring theme in the current AI literature is the need to have humans in the loop to ensure that there are no ethical violations.

How has that worked out in the past?

With humans in the loop, we have no ethical issues!

Oh! Wait! Humans are the ethical issue in the loop.

OpenAI

Most of you will be aware of the recent fiasco at OpenAI.

On November 17th, 2023, the board decided to fire co-founder and CEO Sam Altman.

In a statement, the board said they took the action because of differences, and because they felt Altman had not been completely candid with them.

The sacking was followed by resignations and a threatened walkout by most employees.

Meanwhile, Microsoft (a major investor in the company) offered Altman a job.

Under this pressure the board resigned, Altman was reinstated, and a new board was constituted.

What was all that about?

In 2015 OpenAI was set up as a non-profit, with the aim of developing AI for the benefit of all humanity.

Following the departure of Elon Musk in 2018, and with it his potential future investment, the company had to attract alternative investment, and a for-profit division was created.

However, the overall company was still controlled by the not-for-profit board.

In a November 24th article, NPR summed up the situation thus:

“Yet the for-profit entity of OpenAI will continue to recruit moneyed enthusiasts who want in on the AI goldrush. The two sides are at cross purposes, with no clear way to co-exist.”

NPR article, November 24th, 2023.1

Was the recent fiasco a case of the original mission being compromised in the eyes of the old board?

We have no way of knowing because no one is talking about it.

But as NPR states there may be no clear way for the two sides to co-exist.

As Carl Frey states in his book:

“The more serious challenge, it seems to me, exists not in technology but in the area of political economy”2.

Stewardship

As mentioned in my earlier article, the 7th Edition of the PMBOK is principles-based, and project managers are required to be good Stewards.3

I would consider that this involves an ethical deployment of AI in a way that protects society.

How do we balance the commercial needs with the ethical ones and avoid a repeat of the Industrial Revolution?

Douglas Liles published an interesting white paper on LinkedIn in April 2023.4

He bases his paper around the principles and values of seven major world religions.

(Christianity, Islam, Hinduism, Buddhism, Sikhism, Judaism, and the Baha’i Faith.)

From these he extrapolates a 10-item Morality Constitution:

    1. Respect for Life: Safeguarding the Sanctity of All Beings
    2. Truthfulness and Integrity: Fostering Trust and Honesty in AI
    3. Privacy and Personal Boundaries: Respecting Individual Autonomy
    4. Equality and Fairness: Promoting Inclusivity and Social Justice
    5. Justice and Accountability: Ensuring AI Operates within Moral and Legal Boundaries
    6. Compassion and Empathy: Cultivating an AI that Understands and Cares
    7. Environmental Stewardship: AI as a Force for Sustainability
    8. Collaboration and Consultation: Building AI through Collective Wisdom
    9. Education and Knowledge: Empowering AI to Learn and Grow
    10. Spiritual and Moral Development: Nurturing AI with a Sense of Purpose

Based on the Morality Constitution, he proposes that we ensure AI aligns with human values by developing ethical AI hardware.

Rather than reiterate everything here, I have included a link to the white paper in the notes.

Conclusion

The problem we face is not one of an avenging AI that will rise and replace us, but of the implementation of AI in a manner that is wildly destructive to society in the short term.

Remember, the short term can be a lifetime!

AI is not the problem; humans are the ethical problem in the loop.

“AI won’t replace humans, but people who can use it will.”5

Or by people who are prepared to use it for personal gain with no regard for the societal damage it will cause.

It is up to us to study and learn as much as we can to prepare for technological change, and at the same time be good Stewards by applying one of the other principles in the PMBOK: Adaptability and Resilience.

Get together with like-minded people and help develop and implement a morality framework to deliver the AI that benefits all humanity.

Take note of the final sentence in Carl Frey’s book The Technology Trap:

“The bottom line is that regardless of what the future of technology holds, it is up to us to shape its economic and societal impact.”6

NOTES

1 https://www.npr.org/2023/11/24/1215015362/chatgpt-openai-sam-altman-fired-explained

2 The Technology Trap; Chapter 13, Page 343

3 The Standard for Project Management – Section 3 – Project Management Principles – Page 24

4 https://www.linkedin.com/pulse/tethering-artificial-intelligence-morality-spiritual-circuitry-liles/

5 CNBC article, December 9th, 2023

6 The Technology Trap; Chapter 13, Page 366 (concluding sentence)

BIBLIOGRAPHY

• The Technology Trap – Carl Benedikt Frey
• NPR article, November 24th, 2023
• Weapons of Math Destruction – Cathy O’Neil
• Tethering Artificial Intelligence to Morality: Spiritual Circuitry – Douglas Liles
