
DeepMind’s Lila Ibrahim: ‘It’s Hard Not To Go Through Imposter Syndrome’


Lila Ibrahim / DeepMind.
Lila Ibrahim is the first ever chief operating officer of DeepMind, one of the world’s best known artificial intelligence companies. She has no formal background in AI or research, which is the primary work of the company, yet she oversees half of its workforce, a global team of some 500 people, including engineers and scientists.

They are working on a single, rather amorphous mission: building an artificial general intelligence, a powerful mechanical version of the human brain that can advance science and humanity. Her task is to turn that vision into a structured operation.

“It’s hard not to go through imposter syndrome. I’m not the AI expert and here I am, working with some super-smart people . . . it took me a while to understand anything beyond the first six minutes of some of our research meetings,” she says. “But I realised I was not hired to be that expert, I was hired to bring my 30 years’ experience, my human aspect of understanding technology and impact, and to do so in a fearless way to help us realise this ambitious goal.”

The Lebanese-American engineer, 51, joined DeepMind in 2018, moving her family to London from Silicon Valley, where she had been chief operating officer at the online education company Coursera, via 20 years at Intel. Before she left Intel in 2010, she was chief executive Craig Barrett’s chief of staff for an organisation of 85,000 people, and had just had twins.

As an Arab-American in the Midwest, and a female engineer, Ibrahim was “always the oddball”. At DeepMind too, she was an outsider: she came from the corporate world, having worked in Tokyo, Hong Kong and Shanghai. She also runs a non-profit, Team4Tech, which recruits volunteers from the tech industry to improve education in the developing world.

DeepMind, based in London’s King’s Cross, is run by Demis Hassabis and a mostly British leadership team. In her three years there, Ibrahim has overseen a doubling of its staff to more than 1,000 in four countries, and is tackling some of the thorniest questions in AI: how do you make breakthroughs with commercial value? How do you expand the talent pipeline in the most competitive employment market in tech? And how do you invent AI that is responsible and ethical?

Ibrahim’s first challenge has been how to measure the organisation’s success and value, when it doesn’t sell tangible products. Acquired by Google in 2014 for £400m, the company lost £477m in 2019. Its revenues of £266m in that year came from other Alphabet companies such as Google, which pay DeepMind for any commercial AI applications it develops internally.

“Having sat on a public company board before, I know the pressure that Alphabet is under. In my experience, when organisations focus on the short-term, you can often get tripped up. Alphabet has to think both short-term and long-term in terms of value,” Ibrahim says. “Alphabet sees DeepMind as being an investment in the future of AI, while giving some commercial value back into the organisation. Take WaveNet, which is DeepMind technology now integrated into Google products [such as Google Assistant] and into Project Euphonia.” This is a speech-to-text service where ALS [motor neuron disease] patients can preserve their voices.

These applications are developed primarily through the DeepMind4Google team, which works exclusively on commercialising its AI for Google’s business.

She maintains that DeepMind has as much autonomy from its parent company as it “needs so far”, structuring, for instance, its own performance management goals. “I have to tell you, when I joined I was curious: is there going to be some tension? And there hasn’t been,” she says.

Another significant challenge has been hiring researchers in a competitive job market, where companies such as Apple, Amazon and Facebook are vying for AI scientists. Senior scientists are reportedly paid in the region of £500,000, with a few commanding millions. “DeepMind [pay] is competitive, regardless of what level and position you have, but it is not the only reason people stay,” Ibrahim says. “Here, people care about the mission, and see how the work they’re doing advances the mission [of building artificial general intelligence], not just in and of itself but also as part of a larger effort.”

The third challenge Ibrahim has focused on is translating ethical principles into the practicalities of DeepMind’s AI research. Increasingly, researchers are highlighting risks posed by AI, such as autonomous killer robots, and issues such as replicating human biases and the invasion of privacy through technologies such as facial recognition.

Ibrahim has always been driven by the social impact of technologies. At Intel she worked on projects such as bringing the internet to isolated populations in the Amazon rainforest. “When I had my interview with Shane [Legg, DeepMind co-founder], I went home and thought, could I work at this company and put my twin daughters to sleep at night knowing what mommy worked on?”

DeepMind’s sister company Google has faced criticism for how it has handled ethical concerns in AI. Last year, Google allegedly fired two ethical AI researchers, Timnit Gebru and Margaret Mitchell, reportedly for suggesting that language-processing AI (which Google also develops) can echo human language bias. (Google described Gebru’s departure as a “resignation”.) The public fallout prompted a crisis of faith within the AI community: are technology companies such as Google and DeepMind cognisant of the potential harms of AI, and do they have any intention of mitigating them?

To that end, Ibrahim set up an internal societal impact team drawn from a variety of disciplines. It meets with the company’s core research teams to discuss the risks and effects of DeepMind’s work. “You have to continuously revisit the assumptions . . . and decisions you’ve made and update your thinking based on that,” she says.

She adds that “if we don’t have expertise around the table, we bring in experts from outside DeepMind. We have brought in people from the security space, privacy, bioethicists, social psychologists. It was a cultural hurdle for [scientists] to open up and say ‘I don’t know how this might be used, and I’m almost scared to guess it, because what if I get it wrong?’ We have done a lot to structure these meetings to be psychologically safe.”

DeepMind has not always been cautious: in 2016, it developed a hyper-accurate AI lip-reading system from videos, with possible applications for the deaf and blind, but did not acknowledge the security and privacy risks to individuals. However, Ibrahim says DeepMind now gives much more consideration to the ethical implications of its products, such as WaveNet, its text-to-voice system. “We did think about potential opportunities for misuse. Where and how could we mitigate them and limit the applications for it,” she says.

Ibrahim says part of the job is knowing what AI cannot solve. “There are areas it shouldn’t be used. For example, surveillance applications are a concern [and] lethal autonomous weapons.”

She adds: “I often describe it as a moral calling. Everything I had done prepared me for this moment, to work on the most advanced technology to date, and [on] understanding . . . how it can be used.”