Jane Marshall - Radical Wellbeing.


AI puts ethics front and centre in the boardroom


I’ve worked in senior corporate roles for 25 years, and I can count on one hand the number of conversations I’ve witnessed or been party to in which someone has asked, ‘What’s our ethical or moral position on this? What’s the right thing for us to do?’

Ethics and morality just don’t come up. It was fashionable for a while to talk about the idea that companies should have a higher purpose, and conscious capitalism had a moment. And of course there was Google with ‘Don’t be evil’. But none of this ever really penetrated traditional corporate culture, where the dialogue amongst senior leaders is only ever really about profit, productivity, KPIs, targets and plans.

The difficulties that many traditional businesses have faced since the GFC (Global Financial Crisis) - disruption and pressure on core business performance - have only cemented this.

But AI is going to change all that - ethics are about to become vitally important. AI is, more than anything, an ethical conundrum of epic proportions.

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy.
Henry A. Kissinger

It’s not just that Boards have a job to do to govern their organisation - it’s so much bigger than that. The decisions that Boards make now have impact that goes way beyond their boundaries: their decisions affect society and humanity, and put us on a path from which there is no way back.


AI is hotting up

There are some notable recent events that demonstrate why Boards need to pay attention.

Should Google use its AI capability to assist the military?

Should Google’s Duplex sound robotic?

Maybe you’re thinking: yeah, but that’s just Facebook and Google. It won’t stop with them - eventually, every company of any size will need to answer the same questions. There’s no industry or sector that AI will leave untouched.


There’s a lot of thought going into the ethics of AI

If I had to summarise in a single word what the thorniest AI controversies are about, it would be “goals”. Should we give AI goals, and if so, whose goals? … Can we ensure that these goals are retained even if the AI gets smarter? Can we change the goals of an AI that’s smarter than us?
Max Tegmark
Life 3.0
IBM, one of the leaders in this field, proposes these three rules:

  • AI should augment and be in service of humans
  • Transparency: humans need to remain in control of the system
  • AI platforms should be built with people in the industry, and companies should train human workers to use the tools to their advantage

The CEO of the Allen Institute for Artificial Intelligence proposes three slightly different rules:

  • AI must be subject to all the same laws as humans
  • An AI must clearly disclose that it is not human
  • An AI system cannot retain or disclose confidential information without explicit approval from the source of that information

And for a really detailed review of the issues, there are literally pages of questions in this excellent resource.

 

Has your company even made a start on discussing its principles?

Do you know how your company would answer any of these questions?

  • What is your company’s position on personal sovereignty of data?
  • What decisions will you allow AI to make, and which decisions should humans make? What are your processes for oversight of AI decisions?
  • Who will control your AI? What are your processes and controls, both during the experimental/learning phase of AI in your organisation and later, when your AI becomes very smart - smarter than you?
  • Will you allow AI that your company creates to be used by government or any other entity to abuse human rights or break the law?
  • How will you ensure that your AIs, or algorithms, don’t perpetuate societal bias and further entrench inequality?
  • How do you bring diversity to bear in design of the AI future of your company? How effectively are you creating AI that benefits all your customers, all of the people in your organisation, all of humanity, not just use cases that excite the tech-bros who are designing the technology?
  • How effective are your processes to prevent your AI being hacked and/or mis-used? How would you stop a cyber-terrorist hijacking your AI systems, and using them for evil?
  • What is your moral obligation to society and to the communities in which you are based? What will your company do when faced with the dilemma of laying off workers because AI can do their jobs at a lower price? Will you allow AI to replace humans in your workforce? What is your position on re-training your workforce to enable them to play a role in an AI future?


For Boards it’s time to get busy on ethics

  • What’s your ethical framework for discussing issues related to privacy, digital, AI, cyber-security?
  • Do you have an ethics committee?
  • How competent is your Board to discuss the range of issues? What expertise do you have? What expertise do you need to bring in?
  • How are you engaging with the smart people in your organisation to bring the best minds to bear on the important questions?
  • In a field that’s so dynamic, how will you keep track, how often will you review your position?
  • How do you ensure the leaders of your business understand where you’re drawing the line: what’s acceptable and what’s not?
  • How will you integrate your ethical positions and your risk management to ensure you stay on the right side of the law, and to manage any potential public disasters from AI gone wrong?
