News & Views

Deep learning, deep insight, deeper resonance?

by Contagious Contributor

Jeremy Garner, creative consultant at Orange Digital, considers the potential implications if machines can ‘read’, learn and act on impulse.

The news that Google was to acquire London-based artificial intelligence company DeepMind Technologies was met with a high degree of interest from the tech community and the mainstream press earlier this year. Interestingly, however, the response from the sector for which the acquisition holds perhaps the greatest opportunity (and threat) – marketing – was fairly muted.
 
Deep learning, the field on which DeepMind Technologies centres, is concerned with developing algorithms that allow machines to learn as humans do. This is a huge ambition, but it should be remembered that such acquisitions are made not for the short-term benefit of Google’s shareholders but to solve big-scale challenges, and so may take a decade or two to come to fruition.
 
So what are the merits of the acquisition and what will it do for Google? 
 
Simply put, DeepMind could act as the skewer connecting all Google touchpoints – including those from other recent acquisitions like Boston Dynamics – such as search, maps, transportation, smart homes, devices, robots, Google Now, Google Glass, YouTube, Docs and Gmail, so that they work as one joined-up, intelligent app that constantly learns, ‘thinks’ and connects users with what they need.
 
As Ray Kurzweil – who was hired as a director of engineering focused on machine learning and language processing at the search giant – sees it, Google could eventually become a ‘cybernetic friend’. 
 
DeepMind is key to this vision as it uses (in its own words) ‘the best techniques from machine learning and systems neuroscience’ to build algorithms that can learn for themselves how to do things.
 
So you can understand Google’s excitement at this prospect. Watching Larry Page present a DeepMind program that teaches itself how to play a bunch of old-school video games, it’s easy to see why he took such an interest in the acquisition.
 
‘Imagine if this kind of intelligence were thrown at your schedule or information needs,’ he muses as he watches the machine work out how to achieve superhuman performance on a boxing game, having seen nothing more than the pixels, with no human input or help at all.
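For anyone wondering what learning from ‘nothing more than the pixels’ looks like in practice, here is a minimal, purely illustrative sketch of the underlying idea – trial-and-error (reinforcement) learning in which the agent only ever sees a screen buffer and a reward signal. It is not DeepMind’s system: its published work pairs this family of algorithm with deep neural networks, whereas the toy one-line ‘screen’, the catch-the-ball game and the lookup table standing in for a network below are all assumptions made to keep the example short.

```python
# Toy illustration only: an agent that sees nothing but raw pixels and a
# reward signal, and teaches itself to play by trial and error (Q-learning).
import random
from collections import defaultdict

WIDTH = 5                       # a one-dimensional 'screen' of five pixels
ACTIONS = [-1, +1]              # move the paddle left or right

def render(paddle, ball):
    """The only thing the agent ever sees: the raw pixel values."""
    screen = [0] * WIDTH
    screen[ball] = 1            # ball pixel
    screen[paddle] = 2          # paddle pixel (covers the ball when caught)
    return tuple(screen)

q = defaultdict(float)          # Q(pixels, action) -> expected future reward
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(2000):
    paddle, ball = random.randrange(WIDTH), random.randrange(WIDTH)
    for step in range(10):
        state = render(paddle, ball)
        # Mostly exploit what has been learned, occasionally explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        paddle = max(0, min(WIDTH - 1, paddle + action))
        reward = 1.0 if paddle == ball else 0.0
        next_state = render(paddle, ball)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        if reward:              # ball caught: start a new rally
            ball = random.randrange(WIDTH)

# After training, the agent reliably steers the paddle towards the ball,
# despite never having been told what a 'paddle' or a 'ball' is.
```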
 
The quest not only to be relevant and smart, but to connect together multiple aspects of people’s lives and truly ‘know’ the user – probably, in some respects, even better than they know themselves – is no small goal.
 
But is this really possible? And is it a good idea?
 
After all, the report on The Information that DeepMind reportedly insisted on an ethics board to determine how Google can use artificial intelligence research is a telling, if somewhat vague, detail.
 
At this point, it’s worth taking into account the activities of Facebook, which was involved in its own talks with DeepMind prior to the Google deal. Last year, Facebook enlisted Yann LeCun as its director of AI research. In this role, he’s expected to use his knowledge of deep learning to enable Facebook to better identify faces and objects amongst the 350+ million photos and videos that are uploaded onto the site every day. 
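Identifying faces and objects in photos is, at its core, a job for a convolutional neural network that turns raw pixels into a label. The sketch below is a deliberately tiny, generic example of that kind of network – the layer sizes, the 64×64 input and the two made-up labels are illustrative assumptions, nothing to do with Facebook’s actual architecture – and it is untrained, so its answer is meaningless until it has been shown millions of labelled photos.

```python
# A deliberately tiny convolutional network of the kind used for image
# recognition. Layer sizes and labels are illustrative; real systems are far
# deeper and are trained on millions of labelled photos.
import torch
from torch import nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local edge/texture filters
    nn.ReLU(),
    nn.MaxPool2d(2),                              # shrink the image, keep strong responses
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine edges into shapes and parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # score two hypothetical labels
)

photo = torch.rand(1, 3, 64, 64)                  # stand-in for one 64x64 RGB photo
scores = classifier(photo)
labels = ['contains a face', 'no face']           # hypothetical labels
print(labels[scores.argmax(dim=1).item()])        # random until the network is trained
```

The ‘deep learning’ part is the training process that tunes those filter weights against labelled examples – the expertise LeCun is expected to bring.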
 
It’s easy to imagine these AI applications as algorithmic curators in their own right, making informed choices about what users might be interested in and constantly deepening their knowledge of their subjects.
 
Suddenly, Kurzweil’s vision begins to look alarmingly possible.
 
Furthermore, these algorithms have the potential to allow different product sectors to seamlessly connect within the narrative of our lives, and in a way which goes far beyond interconnected media or entertainment platforms.
 
A typical example of that connectedness might be this: you catch a driverless car home from the gym where, because it’s a Friday afternoon and you were excited to try out those new trainers you’d bought, you played music that was on average 20bpm faster than normal. Because of this exertion, you’ll probably need some rehydration fluids from the supermarket on the way home (especially as the fridge’s contents are running a bit low); and since you’re feeling pretty elated, you’d probably be up for trying some fast-paced tunes from an interesting band you’ve never heard before. You might even be open to having the driverless taxi drop you off early so you could try a new jogging route home, using Google Maps and Glass … (You get the idea.)
 
However, all this connected logic and dovetailing with people’s lives is, I can’t help feeling, too obvious, too much of an easy answer. It’s too cold and clinical. I believe that to realise Kurzweil’s vision there needs to be room for some fallibility, some serendipity.
 
After all, if researchers were to use deep learning expertise to create algorithms that could think well enough to become a ‘trusted friend’, then surely the goal would be to go beyond logic and act in often unexpected ways, in order to read and anticipate the impulses of the user?
 
It’s only ‘trusted friends’ (in the true meaning of the term) who truly understand the unpredictability and irrationality of people. They do so by developing a deep understanding over time, and by applying their own empathy, emotional intelligence and gut instincts.
 
It’s these uniquely human powers of intuition which, as anyone involved in marketing knows, can be the difference between connecting with consumers in a meaningful way that resonates, inspires and moves, and not connecting at all.
 
So how intelligent can artificial intelligence be?
 
Yoshua Bengio, a researcher in the AI field based at the University of Montreal, estimates there are as few as 50 experts worldwide in deep learning, and about a dozen of them are at DeepMind.
 
This means the future intelligence of Google – and I’m talking about the ability to ‘read’, learn and act on the impulses of users, the real stuff that makes people people – will in large part depend on the thought processes, capacity for empathy and emotional intelligence, and outlook of these individuals.
 
It could be argued that Google’s application of deep learning might be used only to give people all the tools they need to make choices. But if they really want to go beyond mere logic – and what trusted friends do you have who act solely on logic? – then they’ve got to figure out a way for deep learning to operate that feels distinctly human.
 
Of course, there’s a long way to go.
  
It’s been said that the most complex deep learning applications are currently comparable only to the brain of an insect in terms of the number of neurons.
 
But don’t expect Google’s largest European acquisition to hang about. Could AI eventually be subject to exponential growth at a rate similar to Moore’s Law?
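As a back-of-the-envelope illustration of what that kind of growth would mean (the neuron counts below are round, commonly cited figures, and the two-year doubling period is the popular reading of Moore’s Law, not a prediction):

```python
# Back-of-envelope arithmetic only: if capacity doubled every two years,
# how long from an insect-scale system to a human-scale one?
import math

insect_neurons = 1_000_000            # roughly a bee-sized brain (round figure)
human_neurons = 86_000_000_000        # commonly cited human estimate (round figure)
doubling_period_years = 2             # the popular reading of Moore's Law

doublings = math.log2(human_neurons / insect_neurons)
print(f"{doublings:.1f} doublings, about {doublings * doubling_period_years:.0f} years")
# -> roughly 16 doublings, i.e. a little over three decades at that pace
```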
 
Whatever happens, it’ll be interesting to see what develops from an insect-brain with the resources of a $350bn company behind it. 
 
Let’s just hope it doesn’t sting. 
 
 
Jeremy Garner is the former Executive Creative Director of Weapon7 and is now a creative consultant at Orange Digital @JeremyJGarner