What are the ethical implications of deep learning? - 4IRC event recap

On October 16th, a 4IRC meetup on Deep Learning was held to discuss the ethical implications of AI.

Gathering in the new Catalyst Belfast Fintech Hub, attendees came from the corporate, startup and academic worlds. Host Emer Maguire introduced the first of four speakers.

***

SPEAKER: Brian McDermott – full stack developer at Allstate

Brian focused on a practical introduction to machine learning (ML) techniques.

“This will be a stats free presentation – I’ll show you how to get started in machine learning and deep neural networks.”

“There are three different types of ML – supervised, unsupervised and reinforcement.”

Examples:

  • Teachablemachine.withgoogle.com – you can train a machine to recognise three visual inputs and play a different sound for each – this is supervised learning
  • Unsupervised – an app that asks you to draw a saxophone in 20 seconds and keeps guessing what the object is as you draw
  • Google DeepMind’s Deep Q-Learning – you can teach this agent to play and win basic games – this is reinforcement learning
  • Lyrebird – create your own voice avatar; it learns to speak just like you
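
As a rough illustration of the three types Brian lists, here is a minimal Python sketch using scikit-learn on toy data (nothing from the event itself): supervised learning fits known labels, unsupervised learning finds structure without them, and reinforcement learning, which needs an environment and a reward loop rather than a fixed dataset, is noted in a comment.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # known labels exist

# Supervised: learn a mapping from inputs to known labels.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels; the model finds structure on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# Reinforcement: an agent acts in an environment and learns from
# rewards over time (e.g. Deep Q-Learning playing basic games);
# it needs an interaction loop rather than a fixed dataset.
```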

“These are quite promising technologies – but someone unethical could do a lot with them.”

“In terms of ethical questions, Amazon’s hiring tool, based on AI, was biased towards hiring men. This demonstrates that AI is only as good as the data put into it.”
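
To make the “only as good as the data” point concrete, here is a hedged sketch (entirely synthetic data, not Amazon’s actual system): a model trained on historical decisions that favoured one group simply learns to reproduce that preference.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, n)              # synthetic protected attribute
skill = rng.normal(size=n)                  # equally distributed in both groups
# Historical labels: skill matters, but one group was hired more anyway.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("weight on skill: %.2f, weight on gender: %.2f" % tuple(model.coef_[0]))
# The model puts real weight on gender: bias in, bias out.
```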

“The best online resources to learn about AI are Google AI, and the online course Machine Learning by Andrew Ng on Coursera. This course is free or if you want to get a certificate, you can pay for that – he’s the best of the best.”

***

SPEAKER: Padraic Sheerin, founder of Squad

This previous TechWatch article discusses Padraic’s fintech startup, Squad, which helps young people to save their money.

“I want to share a view that I’ve been thinking about: how can AI achieve more for society than MLK did?”

“Let me start with my time working in the US insurance industry.”

“For years, actuaries set insurance prices based on pooled risk – grouping people together, then adjusting prices to drive consumer demand.”

“We used machine learning to come up with a better estimate of a customer’s demand, based on how likely they are to buy something at a certain price.”

“We asked: could we figure out someone’s price sensitivity using a model? It predicted how likely each customer was to shop around. We could make prices 10–20% lower than the risk-adjusted prices.”
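
A minimal sketch of the idea as described, not Padraic’s actual model: fit purchase probability as a function of the quoted price plus a shopping-around signal, then read predicted demand off at candidate prices. All variable names and numbers below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
price = rng.uniform(400, 800, n)            # quoted premium
shopper = rng.integers(0, 2, n)             # likely to shop around?
# Synthetic truth: higher prices and shopping around reduce purchases.
p_buy = 1 / (1 + np.exp(0.01 * (price - 600) + 1.2 * shopper))
bought = (rng.uniform(size=n) < p_buy).astype(int)

demand_model = LogisticRegression().fit(np.column_stack([price, shopper]), bought)

# Predicted demand for one non-shopping customer at two candidate prices:
quotes = np.array([[600.0, 0], [540.0, 0]])  # list price vs a 10% discount
print(demand_model.predict_proba(quotes)[:, 1])
```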

“We adjusted the price of millions of people’s policies – the company was doing great, the stock price was outperforming the market. This was a perfect example of using AI to drive a really good business outcome. But before long the regulators came in, and they had a problem with it.”

“At the time we thought it was really unfair, but what I’ve learned since is that by using AI to replace a human decision, you expose a level of detail that’s never been seen before in human history.”

“For the first time in history, regulators could look at how you weighted people – now they could see whether you’re biased or not. They could measure that bias and decide if it’s right or wrong.”

“They ended up banning the practice in many states – there was a general fear of AI and they didn’t understand the models.”

“As a result of that ban, people ended up paying more for insurance.”

“This is to illustrate the challenges we have with AI in future.”

“Human bias – the Trolley Problem. A self-driving car will have to decide which of two people to knock down.”

“Decisions are hard regardless of who’s making the decision.”

“Even as flawed as human morals are, what other baseline could we use to code our machines? There is no other baseline.”

Garbage in -> Garbage out

Bias in -> Bias out

“What’s important to understand is that even though AI runs the risk of encoding bias into an algorithm that exists for a long time – we now have a way to measure that bias that was never there before.”
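
One concrete way to “measure that bias”, as a hedged sketch on synthetic data: compute group-wise decision rates and the disparate-impact ratio (the “four-fifths rule” commonly used in US employment law).

```python
import numpy as np

rng = np.random.default_rng(3)
group = rng.integers(0, 2, 10_000)           # protected attribute
# Decisions from some model, with different approval rates per group.
approved = rng.uniform(size=10_000) < np.where(group == 1, 0.60, 0.42)

rate0 = approved[group == 0].mean()
rate1 = approved[group == 1].mean()
ratio = min(rate0, rate1) / max(rate0, rate1)
print(f"approval rates {rate0:.2f} vs {rate1:.2f}, ratio {ratio:.2f}")
# A ratio below 0.8 is the usual four-fifths flag for disparate impact.
```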

“I ask you this: Would Amazon be as quick to change its hiring process had the AI not shown up the bias? I don’t think so – not that quickly.”

“AI shines a light and helps us understand.”

“Do you start to slow down AI?”

“You can’t do that, for three key reasons:

  1. Globalisation
  2. The massive benefits it can give the developing world
  3. Its force to drive equality”

“MLK helped us overcome known bias – AI can shine a light on unconscious bias to help us eradicate that, and that kind of bias is even worse.”

***

SPEAKER: Angelina Villikudathil – researcher at Ulster University, currently using AI techniques to stratify patients with Type 2 Diabetes. (See this article on her work.)

“Humans, not robots are responsible agents.”

“Robots need to explain themselves – how did you do what you did, and why?”

Papers she recommends reading:

  • Machine Bias, ProPublica 2016
  • Concrete Problems in AI Safety

“In terms of fairness, how can we make sure machine learning (ML) systems don’t discriminate?”

“Key principles for ethical ML:

  • Human rights – ensuring ML doesn’t impinge on them
  • Prioritising well-being
  • Accountability – establishing responsibility and avoiding potential harm
  • Transparency – addressing the concept of traceability
  • Technology misuse and awareness – hacking, misuse of data, exploitation”

***

SPEAKER: Pete Wilson – business change and management consultant for VeroZen

“Padraic raised a really interesting story about the Trolley Problem. It’s interesting that there’s a cultural influence on how humans answer the question.”

“Now we’re in a position where we have emergent tech but haven’t got a framework.”

“I recommend Klaus Schwab’s book – it says we all have the capability to inform the future.”

“I’d suggest that our ethics are permanently under review – especially now with the likes of Twitter, which is a cesspool of opinion.”

***

Panel session – questions from the audience were read out by host Emer Maguire.

Emer: Padraic – what’s the difference between deep learning and machine learning?

Padraic: Deep learning learns relationships in the data that a human could never recognise – it trains a neural network and finds a level of patterns we can’t see.
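
A toy illustration of the distinction: a pattern built from an interaction between features (an XOR of signs) that a linear model cannot represent, but a small neural network learns easily. The library choices here are mine, not the panel’s.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(1000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)   # XOR of feature signs

linear = LogisticRegression().fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print("linear model:", linear.score(X, y))   # ~0.5, no better than chance
print("neural net:  ", net.score(X, y))      # close to 1.0
```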

Emer: How can you teach an AI machine something as complex as ethics?

Pete: For example, GDPR states that you can’t allow a computer to make a decision that you can’t explain as a human.
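
As a hedged sketch of what a human-explainable decision can look like (illustrative feature names, not a real credit system): with a linear model, each individual decision decomposes into per-feature contributions that a person can read out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
names = ["income", "debt", "years_employed"]     # hypothetical features
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -1.5, 0.8]) + rng.normal(scale=0.5, size=500) > 0)

model = LogisticRegression().fit(X, y.astype(int))
applicant = X[0]
# Per-feature contribution to this applicant's score (coefficient x value):
for name, c in zip(names, model.coef_[0] * applicant):
    print(f"{name:>15}: {c:+.2f}")
print("decision:", "approve" if model.predict([applicant])[0] else "decline")
```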

Angelina: That is why we’re breaking down each decision that’s being made – we can enforce ethics.

Pete: There’s a workforce timebomb in many countries – if they don’t automate, their economy is going to tank.

Emer: To avoid bias can you just remove that data, for instance gender and race data?

Padraic: The short answer is yes. You can train an AI system using artificial data – and the question is how you get that data to look “fair”.
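
A cautionary sketch alongside that answer (synthetic data, hypothetical feature names): simply dropping the gender column is not enough if remaining features act as proxies for it. One quick check is whether the remaining features can predict the dropped attribute.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 2000
gender = rng.integers(0, 2, n)                      # dropped from training
# A "neutral" feature that secretly correlates with gender (a proxy).
hobby_score = gender + rng.normal(scale=0.7, size=n)
other = rng.normal(size=n)

X_no_gender = np.column_stack([hobby_score, other])
acc = cross_val_score(LogisticRegression(), X_no_gender, gender, cv=5)
print("gender recoverable from remaining features: %.2f" % acc.mean())
# Well above 0.5 -> the proxy leaks the protected attribute back in.
```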

Emer: How is AI used in climate change and food waste?

Angelina: I can give a general answer – I know you can build predictive models over time; using daily weather patterns, a model can learn the pattern.

Brian: Models can take the usage from different grocery stores based on sales – by better predicting demand, you can better meet it with the right food on display.
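
A minimal sketch of the demand-prediction idea Brian describes (synthetic sales data with a weekly cycle): forecast the next day’s sales from the previous week’s sales using simple lag features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
days = np.arange(200)
# Synthetic daily sales with a weekly cycle plus noise.
sales = 100 + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(scale=5, size=200)

# Features: sales on each of the previous 7 days; target: today's sales.
X = np.array([sales[t - 7:t] for t in range(7, 200)])
y = sales[7:]

model = LinearRegression().fit(X, y)
print("next-day demand forecast:", model.predict(sales[-7:].reshape(1, -1))[0])
```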

Emer: What went wrong with Microsoft’s chatbot?

Padraic: “The chatbot created comments on the issues of the day by learning from Twitter. But Twitter already had a lot of bots, many of them created to amplify negative comments. Because the chatbot learnt from those bots – and there were more bots than people – it reflected those comments.”

***

Make sure you catch the next 4IRC debate, Digital Identity: Inclusive or Invasive?, on November 1st at Queen’s University.
