Limavady-based Jason Bell is the author of Machine Learning: Hands On, a book aimed “squarely at software developers.” Jason says, “There are lots of theoretical books on technology, but when it came to a how-to practical kind of book about AI, there wasn’t much around.”
Maybe this practical experience is why Jason was asked to speak at this week’s Big Data Belfast, being organised by Analytics Engines.
I asked Jason: is deep learning just AI on speed?
He responded with a question: “Are those terms interchangeable? Big data is a good example,” he says. “People were using it as a label, but they were just talking about normal data; they needed to put a badge against it.”
“But,” he goes on, “from a deep learning point of view, there are certain neural networks – like convolutional neural networks – that go to a far deeper level than traditional ones.”
“Deep learning would use several stages of data processing to reach a desired action, far more than most machine learning. That’s what makes it impressive.”
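Jason’s point about “several stages of data processing” can be illustrated with a minimal sketch (not from the interview – the weights and layer sizes here are arbitrary, and a real deep network would be trained rather than randomly initialised). A shallow model applies one transformation to the input; a deep model passes the input through several stacked stages before producing a prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A "traditional" shallow model: a single weighted transformation.
def shallow_model(x, w):
    return x @ w

# A deep model: the same input passes through several stacked stages,
# each one transforming the previous stage's output.
def deep_model(x, weights):
    h = x
    for w in weights[:-1]:
        h = relu(h @ w)        # one "stage of data processing"
    return h @ weights[-1]     # final layer produces the prediction

x = rng.normal(size=(4, 8))                # 4 samples, 8 features
shallow_w = rng.normal(size=(8, 1))
deep_ws = [rng.normal(size=(8, 16)),
           rng.normal(size=(16, 16)),
           rng.normal(size=(16, 1))]

print(shallow_model(x, shallow_w).shape)   # (4, 1)
print(deep_model(x, deep_ws).shape)        # (4, 1)
```

Both models map the same input to the same output shape; the difference is how many intermediate representations the data flows through on the way – which is the “deeper level” Jason describes.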
What’s the premise of your talk at Big Data Belfast?
Jason says, “I want to give a holistic view of an automated machine learning application. This starts with: Where does the data come from? How do you store it? How do you make predictions against the model using that data?”
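Those three questions sketch an end-to-end pipeline. A toy version (entirely hypothetical – the sensor feed, the SQLite store, and the mean-as-prediction “model” are stand-ins, not anything Jason describes) might look like this:

```python
import sqlite3

# 1) Where does the data come from?  Here, a hypothetical in-memory feed.
readings = [("sensor-a", 12.0), ("sensor-a", 14.0), ("sensor-b", 9.0)]

# 2) How do you store it?  An in-memory SQLite table stands in for a real store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (source TEXT, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)", readings)

# 3) How do you make predictions against that data?  A trivial "model":
# predict each source's next value as its historical mean.
def predict(source):
    (mean,) = db.execute(
        "SELECT AVG(value) FROM readings WHERE source = ?", (source,)
    ).fetchone()
    return mean

print(predict("sensor-a"))  # 13.0
```

In a production system each stage would be far more involved, but the shape of the pipeline – ingest, store, predict – is the same.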
Our 4IRC debate this week revolves around the ethical implications of deep learning. Jason says, “Ethics start with data – I don’t think it really starts with AI, that comes later.”
He goes on, “With things like the Chinese social scoring system – there are major ethical implications.”
To bring this closer to home, Jason discusses a big corporate name everyone knows: Tesco.
“Dunnhumby hold all this data on you. What if they started scoring you through your shopping habits? If Tesco was to link insurance policies, they could use Clubcard data to offer you certain premiums – or to deny you,” he says.
If we’re to look at the evolution of machine learning, AI, and deep learning, few would dispute that the ethical implications around sharing data could stymie the speed of innovation. But I believe that when it comes to your health, if it’s a matter of life or death, people will let you use whatever kind of data you want. In other words, when the stakes are high, privacy goes out the window.
As if to prove my theory, when I ask Jason what’s the real promise of deep learning, he points straight to healthcare.
“It’s immediately apparent that deep learning could be used to analyse and predict against x-rays, or MRI scans. Deep learning will be an efficient indicator, with a final decision being made by a doctor.”
“Other obvious areas will be self-driving cars and voice processing – AI-driven translation apps – using voice to be able to translate into different languages,” he says.
“When it comes to ethics, there’s always a trade off for sharing some private data. What that trade off is, and how the data is used, is really important. Is it being used against us? There’s always that possibility.”
On the darker side of data, he points out the size of our never-ending digital footprints: “There was a time when collecting data was do-able, but storing it was really difficult. No longer. Now we can store it easily, and that data will be around forever. Those data footprints can be re-evaluated at any time.”
He says, “Google and Facebook have been collecting data for 20 years and 10 years respectively; that’s why they can do the predictions that they do.”
He adds, “When it comes to an AI-driven decision changing people’s behaviour, that’s when people have a problem with it.” To go back to his first example, say you were denied an insurance policy and perceived that decision to be purely machine-made. That would irk you.
Jason, always a pragmatist, cautions that many companies claim to be using AI but aren’t really. “I’d bet that 95% of NI companies don’t have a need for deep learning. With most traditional companies they just have transactional data. They don’t need AI – they can just use SQL.”
Jason finishes our discussion – which may have elicited more questions than answers – with another question of his own: “Will we hit an AI winter, when the reality of AI is not matched by the claims?”
“A lot of the stuff we’re talking about now will not come to fruition for the next five years. I’m still sceptical, bar healthcare, where AI will be useful.”
The fact that an AI expert asks these questions is a statement in itself.