Rana el Kaliouby: Emotion AI, Social Robots, and Self-Driving Cars | Lex Fridman Podcast #322

There's a broader question here, right? As we build socially and emotionally intelligent machines, what does that mean about our relationship with them? And then, more broadly, our relationship with one another? Because this machine is going to be programmed to be amazing at empathy, by definition, right? It's going to always be there for you. It's not going to get bored. I don't know how I feel about that. I think about that a lot.

The following is a conversation with Rana el Kaliouby, a pioneer in the field of emotion recognition and human-centric artificial intelligence. She is the founder of Affectiva, deputy CEO of Smart Eye, author of Girl Decoded, and one of the most brilliant, kind, inspiring, and fun human beings I've gotten the chance to talk to. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Rana el Kaliouby.
You grew up in the Middle East, in Egypt. What is a memory from that time that makes you smile? Or maybe a memory that stands out as helping your mind take shape and helping you define yourself in this world?

The memory that stands out is that we used to live in my grandma's house. She had these mango trees in her garden, and mango season was July and August. So in the summer she would invite all my aunts and uncles and cousins, and there were maybe 20 or 30 people in the house, and she would cook all this amazing food. And us, the kids, we would go down to the garden and pick all these mangoes. I think it's the bringing people together that always stuck with me, the warmth.

Around the mango tree.

Yeah, around the mango tree. It's just the joy of being together around food. And I'm a terrible cook, so I guess that memory didn't translate into me doing the same, but I love hosting people.

Do you remember colors, smells? How does that memory work? Do you visualize people's faces, smiles? Is there a theme to the colors? Is it smells, because of the food involved?
Yeah, that's a great question. There's a particular type of Egyptian mango that I love. It's called Darwesi mangoes. They're oval, and they have a little red in them, so they're red and mango-colored on the outside. I remember that.

Does the red indicate extra sweetness?

Yes, it means it's nice and ripe.

What's a definitive food of Egypt? There are these almost stereotypical foods in different parts of the world. Like Ukraine invented borscht. Borscht is this beet soup that you put sour cream on. If you know what it is, I think you know it's delicious, but if I explain it, it's just not going to sound delicious. Beet soup just doesn't sound like it makes sense. You've probably seen pictures of it, because it's one of the traditional foods in Ukraine, in Russia, in different parts of the Slavic world. It's become so cliche and stereotypical that you almost don't mention it, but it's still delicious. When I visited Ukraine, I ate it every single day.

Do you make it yourself? How hard is it to make?

No, I don't. I think to make it well, like anything, takes skill. Like Italians say, tomato sauce is easy to make, but to make it right, that's a generational skill. Anyway, is there something like that in Egypt? Is there a culture of food?

There is. And actually, we have a similar kind of soup. It's called molokhia, and it's made of this green plant, somewhere between spinach and kale. You mince it and then cook it in chicken broth. My grandma used to make it, and my mom makes it really well. I try to make it, but it's not as great. We used to have that alongside stuffed pigeons. I'm pescatarian now, so I don't eat that anymore.

Stuffed pigeons?

Yeah, it was really yummy. It's the one thing I miss now that I'm pescatarian.

The stuffed pigeons. What are they stuffed with, if that doesn't bother you too much to describe?

No, no, it's just stuffed with rice.
You also said in your book that your first computer was an Atari, and Space Invaders was your favorite game. Is that when you first fell in love with computers, would you say?

Yeah, I would say so.

Video games, or just the computer itself? Just something about the machine? Ooh, there's magic in here.

I think the magical moment is definitely playing video games with my two younger sisters. We just had fun together, playing games. But the other memory I have is my first code. The first thing I wrote, I drew a Christmas tree. And I'm Muslim, right? So it was kind of funny that the first thing I did was this Christmas tree. That's when I realized, wow, you can write code to do all sorts of really cool stuff. I must have been six or seven at the time.

So you can write programs, and the programs do stuff for you. That's power.
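As an aside, that kind of first program is still a classic exercise. A minimal sketch in modern Python (the original would have been on an Atari, likely in BASIC; the function name and language here are illustrative choices, not what was actually used):

```python
def christmas_tree(height: int) -> str:
    """Draw a simple ASCII Christmas tree of the given height."""
    width = 2 * height - 1  # width of the widest (bottom) row
    rows = []
    for i in range(height):
        stars = 2 * i + 1            # each row has an odd number of stars
        rows.append(("*" * stars).center(width))
    trunk = "|".center(width)        # a one-character trunk under the tree
    return "\n".join(rows + [trunk])

print(christmas_tree(4))
```

Running it prints a centered triangle of asterisks with a trunk underneath, which is roughly the magic a six-year-old sees: a few lines of instructions producing a picture.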
If you think about it, that's empowering. That's AI.

Yeah.

See, I don't know if many people think of it that way when they first learn to program. They just love the puzzle of it. Like, oh, this is cool, this is pretty, it's a Christmas tree. But it's power. You couldn't at the time, but eventually this thing, if it's interesting enough, if it's a pretty enough Christmas tree, can be run by millions of people and bring them joy. And because it's digital, it's easy to spread. You just created something that's easily spreadable to millions of people.

Totally.

It's hard to think that way when you're six.
In the book, you write: "I am who I am because I was raised by a particular set of parents, both modern and conservative, forward-thinking and yet locked in tradition. I'm a Muslim, and I feel I'm stronger and more centered for it. I adhere to the values of my religion, even if I'm not as dutiful as I once was. And I am a new American, and I'm thriving on the energy, vitality, and entrepreneurial spirit of this great country." So let me ask you about your parents. What have you learned about life from them, especially when you were young?
Both my parents are Egyptian, but they moved to Kuwait. Actually, there's a cute story about how they met. My dad taught COBOL in the 70s.

Nice.

And my mom decided to learn programming, so she signed up to take his COBOL programming class. He tried to date her, and she was like, no, no, no, I don't date. And so he said, okay, I'll propose. And that's how they got married.

Whoa, strong move.

Right, exactly.

That's really impressive. Those COBOL guys know how to impress a lady. So what have you learned from them?
Definitely grit. One of the core values in our family is hard work. There were no slackers in our family, and that's definitely stayed with me, both as a professional and in my personal life. But I also think of my mom's unconditional love. I just knew my parents would be there for me, regardless of what I chose to do, and I think that's very powerful. And they got tested on it, because I challenged cultural norms and took a different path than what's expected of a woman in the Middle East. And they still love me, which I'm so grateful for.

What was the moment that was most challenging for them? Which moment did they have to come face to face with the fact that you're a bit of a rebel?
I think the first big moment was when I had just gotten married, but I decided to go do my PhD at Cambridge University. Because my husband at the time, he's now my ex, ran a company in Cairo, he was going to stay in Egypt, so it was going to be a long-distance relationship. And that's very unusual in the Middle East, for a woman to just head out and pursue her career. My dad and my parents-in-law both said, we do not approve of you doing this, but now you're under the jurisdiction of your husband, so he can make the call. And luckily for me, he was supportive. He said, this is your dream come true. You've always wanted to do a PhD. I'm going to support you. So I think that was the first time I challenged the cultural norms.

Was that scary?

Oh my God, yes. It was totally scary.
What was the biggest culture shock, going from there to Cambridge?

Well, that was right around September 11th, so everyone thought there was going to be a third world war. At the time, I used to wear the hijab, so I was very visibly Muslim, and my parents were afraid for my safety. Anyway, when I got to Cambridge, because I was so scared, I decided to take off my headscarf and wear a hat instead. So I just went to class wearing this British hat, which was, in my opinion, actually worse than just showing up in a headscarf, because it was just so awkward, sitting in class with all these...

Trying to fit in.

Yeah. So after a few weeks of doing that, I was like, to heck with that, I'm just going to go back to wearing my headscarf.
Yeah, you wore the hijab starting in 2000, and for 12 years after. So whenever you're in public, you wear the head covering. Can you speak to that, the hijab, maybe your mixed feelings about it? What does it represent at its best, and what does it represent at its worst?

I'll first start by saying I wore it voluntarily. I was not forced to wear it. In fact, I was one of the very first women in my family to decide to put on the hijab. My family thought it was really odd; they were like, why do you want to put this on? At its best, it's a sign of modesty, humility.

Yeah. It's like me wearing a suit. People are like, why are you wearing a suit? It's a step back into some kind of tradition, a respect for tradition of sorts. So you said, because it's by choice, you're free to make that choice to celebrate a tradition of modesty.

Exactly. And I actually made it my own. I remember I would match the color of my headscarf with what I was wearing; it was a form of self-expression, and at its best, I loved wearing it. But I had a lot of questions around how we practice religion. I think it was also a time when I was spending a lot of time going back and forth between the US and Egypt, and I started meeting a lot of people in the US who are just amazing people, very purpose-driven, people who have very strong core values, but they're not Muslim. And that's okay, right? So I just had a lot of questions. And politically, the situation in Egypt was that the Muslim Brotherhood ran the country, and I didn't agree with their ideology. It was also a time when I was going through a divorce. It was the perfect storm of political and personal conditions where I was like, this doesn't feel like me anymore. And it took a lot of courage to take it off, because culturally it's okay if you don't wear it, but it's really not okay to wear it and then take it off.
But you have to do that while still maintaining a deep core and pride in your origin story.

Totally.

So still being Egyptian, still being a Muslim.

Right. And being, I think, generally faith-driven. But yeah.

But what that means changes year by year for you. It's a personal journey.

Yeah, exactly.

What would you say is the role of faith in that part of the world? You mention it a bit in the book too.

Yeah. I think there is something really powerful about just believing that there's a bigger force. There's a kind of surrendering, I guess, that comes with religion. You surrender, and you have this deep conviction that it's going to be okay. The universe is out to do amazing things for you, and it's going to be okay. And there's strength in that. Even when you're going through adversity, you just know that it's going to work out.

Yeah, it gives you an inner peace, a calmness.

Exactly. Yeah.

It's faith in all the meanings of that word. Faith that everything is going to be okay. And it is, because time passes and time cures all things. It's like a calmness with the chaos of the world.

Yeah.
And also, I'm a true believer in this: something at a specific moment in time can look catastrophic, and it's not what you wanted in life, but then time passes, you look back, and there's a silver lining. Maybe it closed one door, but it opened a new door for you. So I'm a true believer that there's a silver lining in almost anything in life. You just have to have the faith, or conviction, that it's going to work out.

Such a beautiful way to see a shitty situation, if you feel shitty about your current situation. I mean, it's almost always true, unless it's the cliche thing of whatever doesn't kill you makes you stronger. It does seem that over time, when you take a perspective on things, the hardest moments and periods of your life are the most meaningful.

Yeah. So over time, you get to have that perspective.

Right. Because you mentioned Kuwait, let me ask you about war. What's the role of war and peace, maybe even the big love and hate, in that part of the world? Because it does seem to be a part of the world where there's turmoil. There was turmoil, there's still turmoil.
It is so unfortunate, honestly. It's such a waste of human resources and human mindshare. At the end of the day, we all want the same things: human connection, joy, feeling fulfilled, a life of purpose. And I just find it baffling, honestly, that we are still having to grapple with that. I have a story to share about this. I grew up in Egypt. I'm Egyptian-American now, but originally from Egypt. And when I first got to Cambridge, it turned out my office mate, my PhD colleague, who ended up becoming a friend, was from Israel. And we didn't know how it was going to be.
Did you guys sit there just staring at each other for a bit?

Actually, I arrived before she did, and it turns out she had emailed our PhD advisor and asked him if he thought it was going to be okay.

And this is around September 11th too.

Yeah. And Peter Robinson, our PhD advisor, was like, this is an academic institution, just show up. And we became super good friends. We were both new moms; we both had our kids during our PhDs. We were both doing artificial emotional intelligence: she was looking at speech, I was looking at the face. The cultures were so similar, our jokes were similar. I was like, why on earth is there all this war and tension between our countries? And I think it falls back to the narrative. If you change the narrative... whoever creates this narrative of war, I don't know. We should have women running the world.

Yeah, that's one solution. The good women, because there are also evil women in the world.

True, true. Okay.
But yes, there could be less war if women ran the world. The other aspect, regardless of gender, is the people in power. I get to see this with Ukraine and Russia, and different parts of the world around that conflict now. It's happening in Yemen as well, and everywhere else. There are these narratives told by the leaders to the populace, and those narratives take hold, and everybody believes them, and they have a distorted view of the humanity on the other side. In fact, especially during war, you don't even see the people on the other side as human, or as of equal intelligence, or worth, or value, as you. You tell all kinds of narratives about them being Nazis, or dumb, or evil, or whatever narrative you want to weave around that. But when you actually meet them face to face, you realize they're the same.

Exactly, right?

It's actually a big shock for people to realize that they've essentially been lied to within their own country. And I have faith that social media, as ridiculous as that is to say, or any kind of technology, is able to bypass the walls that governments put up and connect people directly. And then you get to realize that people fall in love across different nations and religions, and so on. That, I think, can ultimately cure a lot of our ills, especially in person. I also think that if leaders met in person to have a conversation, that would cure a lot of the ills of the world, especially in private.

Let me ask you about the women running the world.
Okay. So gender does, in part, perhaps shape the landscape of our human experience. In what ways was it limiting, and in what ways was it empowering, for you to be a woman in the Middle East?

Going back to my comment on women running the world, I think it comes back to empathy, which has been a common thread throughout my entire career. It's this idea of human connection. Once you build common ground with a person or a group of people, you build trust, you build loyalty, you build friendship, and then you can turn that into behavior change and motivation and persuasion. So empathy and emotions are at the center of everything we do. And being from the Middle East, this human connection is very strong. We have this running joke that if you come to Egypt for a visit, people will know everything about your life right away. I have no problem asking you about your personal life. There are no personal boundaries, really, in terms of getting to know people. We get emotionally intimate very, very quickly. But I think people just get to know each other authentically. There isn't this superficial level of getting to know people; you just try to get to know people really deeply.

Is empathy part of that?

Totally. Because you can put yourself in this person's shoes and imagine what challenges they're going through. So I think I've definitely taken that with me. Generosity is another one, too: being generous with your time and love and attention, and even with your wealth. Even if you don't have a lot of it, you're still very generous.

Enjoying the humanity of other people. And do you think there's a useful difference between men and women in that aspect, in empathy? Or does sorting people into these big general groups hinder progress?

I actually don't want to over-generalize. Some of the men I know are the most empathetic humans.

Yeah, I strive to be.

Yeah, you're actually very empathetic. So I don't want to over-generalize. Although, one of the researchers I worked with when I was at Cambridge, Professor Simon Baron-Cohen, he's Sacha Baron Cohen's cousin...

Yeah.
And he runs the Autism Research Centre at Cambridge, and he's written multiple books on autism. One of his theories is the empathizing-systemizing scale: the systemizers and the empathizers. There's a disproportionate number of computer scientists and engineers who are systemizers, and perhaps not great empathizers. And there are more men in that bucket, I guess, than women, and more women in the empathizers bucket. So again, not to over-generalize.

I sometimes wonder about that. It's been frustrating to me how many, I guess, systemizers there are in the field of robotics.

Yeah.

It's actually encouraging to me, because I care about, obviously, social robotics, and it means there's more opportunity for people who are empathic.

Exactly. I totally agree.

So it's nice. Every roboticist I talk to, they don't see the human as interesting, as exciting. You want to avoid the human at all costs. It's a safety concern to be touching the human, which it is, but it's also an opportunity for a deep connection, or collaboration, and all that kind of stuff. And because most brilliant roboticists don't care about the human, it's an opportunity, in your case a business opportunity, but in general an opportunity to explore those ideas.

So in this beautiful journey to Cambridge, to the UK, and then to America, what moment or moments were most transformational for you as a scientist and as a leader? You became an exceptionally successful CEO, founder, researcher, scientist, and so on. Was there a phase shift there, where you felt, I can be somebody, I can really do something in this world?
Yeah. So, just a little bit of background: the reason I moved from Cairo to Cambridge, UK, to do my PhD is that I had a very clear career plan. I was like, okay, I'll go abroad, get my PhD, crush it in three or four years, come back to Egypt, and teach. It was very clear, very well laid out.

Was the topic clear, or no?

I did my PhD around building artificial emotional intelligence, looking at the...

In your master plan, ahead of time, when you were sitting by the mango tree, did you know it was going to be artificial intelligence?

No, that I did not know. Although I think I kind of knew I was going to be doing computer science; I just didn't know the specific area. But I love teaching. I still love teaching. So I just wanted to go abroad, get a PhD, come back, teach.

Why computer science? Can we just linger on that? Because you're such an empathic person who cares about emotion and humans and so on. Aren't computers cold and emotionless?

We're changing that.

Yeah, I know. But did you see computers as having the capability to actually connect with humans?

I think that was my takeaway from my experience growing up: computers sit at the center of how we connect and communicate with one another. Or technology in general. I remember my first experience being away from my parents: we communicated with a fax machine. But thank goodness for the fax machine, because we could send letters back and forth to each other. This was pre-email and stuff.
So I think technology can be not just transformative in terms of productivity, et cetera. It actually does change how we connect with one another.

Can I just defend the fax machine? Email is all digital; there's something really nice about the haptic aspect of the fax machine. I still write letters to people. You still have to do something in the physical world to make this thing a reality, and then it comes out as a printout and you can actually touch it and read it. There's something lost when it's just an email. I wonder how we can regain some of that in the digital world, which goes to the metaverse and all those kinds of things. We'll talk about it anyway.

Actually, a question on that: do you have photo albums anymore? Do you still print photos?

No, no, but I'm a minimalist. One of the painful steps in my life was to scan all the photos and let go of them, and then let go of all my books.

You let go of your books?

Yeah, switched to Kindle, everything Kindle. I thought, okay, think 30 years from now: nobody's going to have books anymore. The technology of digital books is going to get better and better. Are you really going to be the guy that's still romanticizing physical books? Are you going to be the old man on the porch who's like, yes? So just get used to it. It still feels a little bit uncomfortable to read on a Kindle, but get used to it. I mean, I'm always trying to learn new programming languages; with technology, you have to challenge yourself to adapt to it. I forced myself to use TikTok. Now, that thing doesn't need much forcing. It pulls you in like the worst kind of, or the best kind of, drug. Anyway, I do love haptic things. There's a magic to the haptic. Even touch screens: it's tricky to get right, to get the experience of a button. Anyway, what were we talking about?

So, AI. The journey. Your whole plan was to come back to Cairo and teach.
Right. And then what? Did the plan go wrong?

Yeah, exactly. I got to Cambridge and I fell in love with the idea of research: embarking on a path nobody's explored before, building stuff nobody's built before. It's challenging and it's hard, and there are a lot of non-believers. I just totally loved that. And at the end of my PhD, there was the meeting that changed the trajectory of my life: Professor Rosalind Picard, who runs the Affective Computing group at the MIT Media Lab. I had read her book and I was following all her research.

AKA Ros.

Yes, AKA Ros. She was giving a talk at a pattern-recognition conference in Cambridge, and she had a couple of hours to kill, so she emailed the lab and said, if any students want to meet with me, just sign up here. So I signed up for a slot, and I spent the weeks leading up to it preparing for this meeting. I wanted to show her a demo of my research and everything. We met, and we ended up hitting it off; we totally clicked. At the end of the meeting, she said, do you want to come work with me as a postdoc at MIT? And this is what I told her: okay, this would be a dream come true, but there's a husband waiting for me in Cairo. I kind of have to go back. And she said, it's fine, just commute. And I literally started commuting between Cairo and Boston. It was a long commute. Every few weeks I would hop on a plane and go to Boston. But that changed the trajectory of my life. I kind of outgrew my dreams. I didn't want to go back to Egypt anymore and be faculty. That was no longer my dream. I had a new dream.

What was it like to be at MIT?
What was that culture shock?

You mean America in general, but also...

I mean, Cambridge is its own culture. So what was MIT like, and what was America like?

I wonder if this is similar to your experience at MIT. At the Media Lab in particular, "impressed" is not the right word; I just didn't expect the openness to innovation, and the acceptance of taking a risk and failing. Failure isn't really accepted back in Egypt, right? You don't want to fail. There's a fear of failure, which I think has been hardwired into my brain. But you get to MIT and it's okay to start things, and if they don't work out, it's okay, you pivot to another idea. That kind of thinking was just very new to me.
That's liberating. The MIT Media Lab, for people who don't know, is its own beautiful thing, because they, I think more than other places at MIT, reach for big ideas. Certainly with Rosalind, you try wild stuff, big things, crazy things, and you also try to take things to completion so you can demo them. So always, always have a demo. One of the sad things to me about robotics labs at MIT, and there are over 30, I think, is that usually when you show up to a robotics lab, there's not a single working robot; they're all broken. Which is the normal state of things, because you're working on them. But it would be nice if we lived in a world where robotics labs had some robots functioning. One of my favorite moments that just sticks with me: I visited Boston Dynamics, and first of all, there was seeing so many Spots, so many legged robots, in one place. I'm like, I'm home; this is where I was built. But the cool thing was that a random Spot robot was walking down the hall. It was probably doing mapping, but it looked like it wasn't doing anything in particular. In my mind, they're people; they have a backstory. And this one in particular definitely had a backstory, because he was wearing a cowboy hat. So I just saw a Spot robot with a cowboy hat walking down the hall, and there was just this feeling like he has a life. He probably has to commute back to his family at night. There's a feeling like there's life instilled in this robot, and that's magical. I don't know, it was kind of inspiring to see.
Did he say hello to you? No, it's very focused. There's a focused nature to the robot. No,
link |
no, listen, I love competence and focus. Great. Like he was not going to get distracted by the
link |
shallowness of small talk. There's a job to be done and he was doing it. So anyway, the fact
link |
that it was working is a beautiful thing. And I think Media Lab really prides itself on trying to
link |
always have a thing that's working that you could show off. Yes, we used to call it demo or die.
link |
Yeah, you could not show up with PowerPoint or something. You actually had to have it working.
link |
You know what? My son, who is now 13, I don't know if this is still his lifelong goal or not,
link |
but when he was a little younger, his dream is to build an island that's just inhabited by robots,
link |
like no humans. He just wants all these robots to be connecting and having fun.
link |
And there you go. Does he have an idea of which robots he loves most? Is it
link |
Roomba-like robots? Is it humanoid robots, robot dogs, or is it not clear yet?
link |
We used to have a Jibo, which was one of the MIT Media Lab spinouts. And he used to love Jibo.
link |
The thing with a giant head. Yes, it spins. Right, exactly.
link |
It can rotate. And it has an eye. Not glowing. Right, exactly. It's like HAL 9000, but the
link |
friendly version. He loved that. And then he just loves, I think he loves all forms of robots,
link |
actually. So, embodied intelligence. Yes. I personally like legged robots, especially.
link |
Anything that can wiggle its butt. No, that's not the definition of what I love. But that's
link |
just technically what I've been working on recently, except I have a bunch of legged
link |
robots now in Austin. And I've been doing, I've been trying to have them communicate
link |
affection with their body in different ways, just for art. For art, really. Because I love the idea
link |
of walking around with robots, as you would with a dog. I think it's inspiring to a lot of people,
link |
especially young people. Kids love robots. Kids love it. Parents, adults are scared of robots,
link |
but kids don't have this kind of weird construction of the world that's full of evil. They love
link |
cool things. Yeah. I remember when Adam was in first grade, so he must have been like seven or
link |
so. I went into his class with a whole bunch of robots and the Emotion AI demo and I asked
link |
the kids, I was like, do you, would you kids want to have a robot, you know, robot friend or robot
link |
companion? Everybody said yes. And they wanted it for all sorts of things, like to help them with
link |
their math homework and to like be a friend. So there's, it just struck me how there was no fear
link |
of robots. Whereas a lot of adults have that, like, us versus them. Yeah, none of that. Of course,
link |
you want to be very careful because you still have to look at the lessons of history and how
link |
robots can be used by the power centers of the world to abuse your rights and all that kind of
link |
stuff. But mostly it's good to enter anything new with an excitement and optimism. Speaking of
link |
Roz, what have you learned about science and life from Rosalind Picard? Oh my God, I've learned so
link |
many things about life from Roz. I think the thing I learned the most is perseverance.
link |
When I first met Roz, she invited me to be her postdoc, and we applied for a grant to the
link |
National Science Foundation to apply some of our research to autism, and we were
link |
rejected. The first time you were rejected for funding. Yeah, it was, and I basically, I just took
link |
the rejection to mean, okay, we're rejected. It's done like end of story, right? And Roz was like,
link |
it's great news. They love the idea. They just don't think we can do it. So let's build it,
link |
show them and then reapply. Oh my God, that story totally stuck with me. And she's like that in
link |
every aspect of her life. She just does not take no for an answer. To reframe all negative feedback
link |
as a challenge. Yes, exactly. Yeah, it was, it was a riot.
link |
What else about science in general, about how you see computers and
link |
also business and just everything about the world. She's a very powerful, brilliant woman
link |
like yourself. So is there some aspect of that too? Yeah, I think Roz is actually also very
link |
faith-driven. She has this, like, deep belief and conviction. Yeah, and in the good in the world
link |
and humanity. And I think that was meeting her and her family was definitely like a defining
link |
moment for me because that was when I was like, wow, like, you can be of a different background
link |
and religion and whatever. And you can still have the same core values. So that was, that was, yeah.
link |
I'm grateful to her. So Roz, if you're listening, thank you. Yeah, she's great. She's been on this
link |
podcast before. I hope she'll be on, I'm sure she'll be on again. You were the founder and CEO of
link |
Affectiva, which is a big company that was acquired by another big company, Smart Eye. And you're now
link |
the deputy CEO of Smart Eye. So you're a powerful leader, you're brilliant, you're brilliant scientists.
link |
A lot of people are inspired by you. What advice would you give, especially to young women, but
link |
people in general who dream of becoming powerful leaders like yourself in a world where perhaps,
link |
in a world that perhaps doesn't give them a clear, easy path to do so, whether we're talking about
link |
Egypt or elsewhere? You know, hearing you describe me that way kind of encapsulates,
link |
I think, what I think is the biggest challenge of all, which is believing in yourself, right?
link |
I have had to like grapple with this, what I call now the Debbie Downer voice in my head.
link |
The kind of basically, it's just chattering all the time. It's basically saying, oh, no, no, no,
link |
you can't do this. Like, you're not going to raise money. You can't start a company. Like,
link |
what business do you have, like, starting a company or running a company or selling a company?
link |
Like, you name it. It's always like, and I think my biggest advice to not just women,
link |
but people who have, who are taking a new path and, you know, they're not sure,
link |
is to not let yourself and let your thoughts be the biggest obstacle in your way.
link |
And I've had to like really work on myself to not be my own biggest obstacle.
link |
So you got that negative voice?
link |
Am I the only one? I don't think I'm the only one.
link |
No, I have that negative voice. I'm not exactly sure if it's a bad thing or a good thing. I've
link |
been really torn about it because it's been a lifelong companion. It's hard to know.
link |
It's kind of a, it drives productivity and progress, but it can hold you back from
link |
taking big leaps. I think you, the best I can say is probably you have to somehow
link |
be able to control it. So turn it off when it's not useful and turn it on when it's useful.
link |
Like I have from almost like a third person perspective.
link |
Right. Somebody who's sitting there like.
link |
Yeah. Like because it is useful to, to be critical. Like after,
link |
like I just gave a talk yesterday at MIT and I was just, you know, there's so much love and it
link |
was such an incredible experience. So many amazing people I got a chance to talk to, but
link |
you know, afterwards when I, when I went home and just took this long walk, it was mostly
link |
just negative thoughts about me. Like, one is basic stuff, like I don't deserve any of it.
link |
And second is like, like, why did you, that was so dumb that you said this, that's so dumb.
link |
Like you got, you should have prepared that better. Why did you say this?
link |
But I think it's good to hear that voice out. All right. And like sit in that.
link |
And ultimately, I think you grow from that. Now, when you're making really big decisions about
link |
funding or starting a company or taking a leap to go to the UK or take a leap to go to America,
link |
to work in the Media Lab, though, yeah, that's when you should be able to shut that off,
link |
because you should have like this weird confidence, almost like faith that you said before that
link |
everything's going to work out to take the leap of faith. Despite all the negativity. I mean,
link |
there's, there's some of that you actually tweeted a really nice tweet thread. It says, quote,
link |
a year ago, a friend recommended I do daily affirmations and I was skeptical. But I was
link |
going through major transitions in my life. So I gave it a shot and it set me on a journey of
link |
self-acceptance and self-love. So what was that like? Maybe talk through this idea of
link |
affirmations and how that helped you? Yeah, because really, like, I'm just like me, I'm a kind,
link |
I'd like to think of myself as a kind person in general, but I'm kind of mean to myself sometimes.
link |
And so I've been doing journaling for almost 10 years now. I use an app called day one and
link |
it's awesome. I just journal and I use it as an opportunity to almost have a conversation with
link |
the Debbie Downer voice in my head. It's like a rebuttal, right? Like Debbie Downer says, oh,
link |
my God, like you, you know, you won't be able to raise this round of funding. I'm like, okay,
link |
let's talk about it. I have a track record of doing X, Y and Z. I think I can do this.
link |
And it's, it's literally like, so I wouldn't, I don't know that I can shut off the voice,
link |
but I can have a conversation with it. And it just, it just, and I bring data to the table, right?
link |
Nice. So, so that was the journaling part, which I found very helpful.
link |
But the affirmation took it to a whole next level and I just love it. I'm, I'm, I'm a year
link |
into doing this. And you literally wake up in the morning and the first thing you do,
link |
I meditate first. And then, and then I write my affirmations and it's, it's the energy I want to
link |
put out in the world that hopefully will come right back to me. So I will say, I always start
link |
with my smile lights up the whole world. And I kid you not, like people in the street will stop me
link |
and say, Oh my God, like we love your smile. Like, yes. So, so my affirmations will change depending
link |
on, you know, what's happening this day. Is it funny? I know, don't judge, don't judge.
link |
No, that's not, what? Laughter is not judgment. It's just awesome. I mean, it, it's true,
link |
but you're saying affirmations somehow help kind of, what is it that they do work to like
link |
remind you of the kind of person you are and the kind of person you want to be, which
link |
actually may be inverse order, the kind of person you want to be, and that helps you become the,
link |
the kind of person you actually are. It just, it's, it brings intentionality to like what you're
link |
doing, right? And so, by the way, I was laughing because my affirmations, which I also do are the
link |
opposite. Oh, you do? Oh, what do you do? I don't, I don't have a, my smile left up the water. Maybe
link |
I should add that because like I, I have, I just, I have, oh boy, I just, it's, it's much more stoic,
link |
like about focused about this kind of stuff, but the joy, the emotion that you're just in that
link |
little affirmation is beautiful. So maybe I should add that. I have some, I have some like
link |
focused stuff, but that's usually, but that's a cool start. That's just after all the like
link |
smiling and playful and joyful and all that, and then it's like, okay, I kick butt. Let's get stuff
link |
done. Right. The let's-get-stuff-done affirmation. Okay, cool. So like what else is on there?
link |
Oh, what else is on there? Um, well, I, I have, I'm a, I'm, I'm a magnet for all sorts of things.
link |
So I'm an amazing people magnet. I attract like awesome people into my universe. I,
link |
so that's an actual affirmation. Yes. That's great. Yeah. So that, that's, and that, yeah,
link |
and that somehow manifests itself into like working. I think so. Yeah. Like, can you speak to like,
link |
why it feels good to do the affirmations? I honestly think it just grounds the day.
link |
And then it allows me to, instead of just like being pulled back and forth, like throughout
link |
the day, it just like grounds me like, okay, like this thing happened. It's not exactly what I wanted
link |
it to be, but I'm patient or I'm, you know, I'm, I trust that the universe will do amazing things
link |
for me, which is one of my other consistent affirmations. Or I'm an amazing mom, right? And so
link |
I can grapple with all the feelings of mom guilt that I have all the time. Or here's another one.
link |
I'm a love magnet. And I literally say, I will kind of picture the person that I'd love to end up
link |
with. And I write it all down, and it hasn't happened yet. But what do you, what do you picture?
link |
Is this Brad Pitt? Because that's what I picture. Okay. That's what you picture? Okay.
link |
Yeah. I'm running, holding hands, running together. No, more like Fight Club that,
link |
the Fight Club Brad Pitt, where he's like standing. All right, people will know. Okay. I'm sorry.
link |
I'll get off of that. Do you have a, like when you're thinking about the being a love magnet in
link |
that way? Are you picturing specific people? Or is this almost like
link |
in the space of like energy? Right. It's somebody who is smart and well accomplished
link |
and successful in their life, but they're generous and they're well traveled and they want to travel
link |
the world. It's things like that. Like, they're head over heels into me. Like, I know it sounds
link |
super silly, but it's literally what I write. Yeah. And I believe it'll happen one day. Oh,
link |
you actually write so you don't say it out loud? No, I write it. I write all my affirmations.
link |
I do the opposite. I say it. Yeah. If I'm alone, I'll say it out loud. Yeah. I should try that.
link |
I think it's, which, what feels more powerful to you? To me, saying stuff feels
link |
more powerful. Yeah. Writing is, writing feels like I'm losing the words, like losing the
link |
power of the words, maybe because I write slow. Do you hand write? No, I type. It's on this app.
link |
It's day one basically. And I just, I can, the best thing about it is I can look back
link |
and see, like, a year ago, what was I affirming, right? It also changes over time. It hasn't, like,
link |
changed a lot, but the focus kind of changes over time. I got it. Yeah. I say the same
link |
exact thing over and over and over. Oh, you do? Okay. There's a comfort in the, in the sameness of
link |
it. Well, actually, let me jump around, because let me ask you about... all this talk
link |
about Brad Pitt, or maybe it's just going on in my head. Let me ask you about dating in
link |
general. You tweeted, are you based in Boston in single question mark? And then you pointed to a
link |
startup singles night sponsored by a small dating app. Because I mean, this is jumping around a
link |
little bit, but since, since you mentioned, can AI help solve this dating love problem?
link |
What do you think? This problem of connection that is part of the human condition. Can AI help
link |
with that search you yourself are in? Maybe that's what I should affirm: like, build an AI.
link |
Build an AI that finds love. I think, I think there must be a science behind
link |
that first moment you meet a person and you either have chemistry or you don't, right? Like,
link |
you, I guess that was the question I was asking, and you put it brilliantly. Is that a science or
link |
an art? Ooh, I think there are like, there's actual chemicals that get exchanged when people,
link |
two people meet. Oh, well, I don't know about that. I like how you're changing. Yeah, yeah,
link |
changing your mind as we're describing it, but it feels that way. But it's what science shows us
link |
is sometimes we can explain with the rigor, the things that feel like magic. Right. So maybe
link |
you can remove all the magic. Maybe it's like, I honestly think, like I said, like Goodreads should
link |
be a dating app. I wonder, I wonder if you look at just, like, books or content
link |
you've consumed. I mean, that's essentially what YouTube does when it does recommendation. If you
link |
just look at your footprint of content consumed, if there's an overlap, but maybe interesting
link |
difference with an overlap, there's some I'm sure this is a machine learning problem that's solvable.
link |
Like this person is very likely to be not only there to be chemistry in the short term,
link |
but a good lifelong partner to grow together. I bet you it's a good machine learning problem.
link |
You just need the data. Let's do it. Well, actually, I do think there's so much data about
link |
each of us that there ought to be a machine learning algorithm that can ingest all this
link |
data and basically say, I think the following 10 people would be interesting connections for you,
link |
right? And so Smile Dating app kind of took one particular angle, which is humor. It matches
link |
people based on their humor styles, which is one of the main ingredients of a successful
link |
relationship. Like if you meet somebody and they can make you laugh, like that's a good thing.
link |
And if you develop like internal jokes, like inside jokes and you're bantering, like that's fun.
link |
Yeah. So I think. Yeah, definitely. But yeah, that's the
link |
number and the rate of inside joke generation. You could probably measure that and then optimize
link |
it over the first few days. You can see. Right. And then we're just turning this into a machine
link |
learning problem. I love it. But for somebody like you, who's exceptionally successful and busy,
link |
is there, is there science to that aspect of dating? Is it tricky? Is there advice you can give?
link |
Oh my God, I'd give the worst advice. Well, I can tell you like I have a spreadsheet.
link |
Spreadsheet. That's great. Is that a good or a bad thing? Do you regret the spreadsheet?
link |
Well, I don't know. What's the name of the spreadsheet? Is it Love?
link |
It's the date track, dating tracker. It's very like. Love tracker. Yeah.
link |
And there's a rating system, I'm sure. Yeah, there's like weights and stuff.
link |
It's too close to home. Oh, is it? Do you also have a spreadsheet? Well, I don't have a spreadsheet,
link |
but I would, now that you say it, it seems like a good idea. Oh, no.
link |
Turning into data. I do wish that somebody else had a spreadsheet about me.
link |
Like I said, like you said, collect a lot of data about us in a way that's
link |
privacy preserving, that I own the data, I can control it, and then use that data to find,
link |
I mean, not just romantic love, but collaborators, friends, all that kind of stuff. It seems like
link |
the data is there. That's the problem social networks are trying to solve, but I think they're
link |
doing a really poor job. Even Facebook tried to get into a dating app business. And I think there's
link |
so many components to running a successful company that connects human beings. And part of that is
link |
having engineers that care about the human side, right? As you know extremely well, it's not
link |
easy to find those, but you also don't want just people that care about the human,
link |
they also have to be good engineers. So it's like, you have to find this beautiful mix. And for some
link |
reason, just empirically speaking, people have not done a good job of that, of building companies
link |
like that. It must mean that it's a difficult problem to solve. Dating apps, it seems difficult.
link |
OkCupid, Tinder, all that kind of stuff. They seem to, I mean, of course they work, but they seem
link |
to not work as well as I would imagine it's possible. With data, wouldn't you be able to find
link |
better human connection? It's like arranged marriages on steroids essentially, arranged by
link |
machine learning algorithm.
link |
Arranged by machine learning algorithm, but not a superficial one. I think a lot of the dating
link |
apps out there are just so superficial. They're just matching on high level criteria that aren't
link |
ingredients for successful partnership. But you know what's missing though too? I don't know how
link |
to fix that, the serendipity piece of it. Like how do you engineer serendipity? Like this random
link |
chance encounter and then you fall in love with the person. I don't know how a dating app can do
link |
that. So that has to be a little bit of randomness. Maybe every 10th match is just a, you know,
link |
yeah, somebody that the algorithm wouldn't have necessarily recommended, but it allows for a
link |
little bit of... Well, it can also trick you into thinking of serendipity by somehow showing
link |
you a tweet of a person that it thinks you'll match well with, but do it accidentally as
link |
part of another search. And you just notice it, and then you go down a rabbit hole, and you
link |
connect with them outside the app too. Like you connect with
link |
this person outside the app somehow. So it creates that moment of meeting. Of course,
link |
you have to think of from an app perspective how you can turn that into a business. But I think
link |
ultimately a business that helps people fall in love in any way, like that's what Apple was about.
link |
Create products that people love. That's beautiful. I mean, you've got to make money somehow.
link |
If you help people fall in love personally with the product, find self love or
link |
another human being, you're going to make money. You're going to figure out a way to make money.
link |
I just feel like the dating apps often will optimize for something else than love.
link |
It's the same with social networks. They optimize for engagement as opposed to like a deep,
link |
meaningful connection that ultimately results in, like, personal growth. You as a human being
link |
growing and all that kind of stuff. Let me do like a pivot to a dark topic, which you open the book
link |
with. A story, because I'd like to talk to you about just emotion and artificial intelligence.
link |
I think this is a good story to start to think about emotional intelligence. You open the book
link |
with a story of a Central Florida man, Jamel Dunn, who drowned while five
link |
teenagers watched and laughed, saying things like, "you're going to die," and when Jamel disappeared
link |
below the surface of the water, one of them said, "he just died," and the others laughed. What does this
link |
incident teach you about human nature, and the response to it, perhaps?
link |
I mean, I think this is a really, really, really sad story and it highlights what I believe is a
link |
real problem in our world today. It's an empathy crisis. Yeah, we're living through an
link |
empathy crisis. And I mean, we've talked about this throughout our conversation. We dehumanize
link |
each other. And unfortunately, yes, technology is bringing us together. But in a way, it's just
link |
dehumanizing. It's creating this dehumanizing of the other. And I think that's a huge problem. The
link |
good news is I think the solution could be technology based. I think if we rethink the
link |
way we design and deploy our technologies, we can solve parts of this problem. But I worry about
link |
it. I mean, even with my son, a lot of his interactions are computer mediated. And I just
link |
question what that's doing to his empathy skills and you know, his ability to really connect with
link |
people. So you think you think it's not possible to form empathy through the digital medium?
link |
I think it is. But we have to be thoughtful about because the way the way we engage face to face,
link |
which is what we're doing right now, right, there's the nonverbal signals, which are a majority of
link |
how we communicate. It's like 90% of how we communicate is your facial expressions.
link |
You know, I'm saying something and you're nodding your head now and that creates a feedback loop.
link |
And if you break that. And now I have anxiety about it.
link |
Poor Lex. I am not scrutinizing your facial expressions during this interview.
link |
I am. Look normal, look human, nod head. Yeah, nod head in agreement.
link |
If Rana says yes, then nod head; else... Don't do it too much because it might be at the wrong time.
link |
And then it will send the wrong signal. Oh, God. And make eye contact sometimes because humans
link |
appreciate that. Okay. Yeah, but something about the, especially when we say mean things in person,
link |
you get to see the pain of the other person. If you're tweeting it at a person and you have no
link |
idea how it's going to land, you're more likely to do that on social media than you are in face
link |
to face conversations. So what do you think is more important?
link |
EQ or IQ? EQ being emotional intelligence, in terms of what makes us human?
link |
I think emotional intelligence is what makes us human. It's how we connect with one another.
link |
It's how we build trust. It's how we make decisions, right? Like your emotions drive
link |
kind of what you had for breakfast, but also where you decide to live and what do you want to do for
link |
the rest of your life. So I think emotions are underrated.
link |
But so emotional intelligence isn't just about the effective expression of your own emotions.
link |
It's about a sensitivity and empathy to other people's emotions and that sort of being able
link |
to effectively engage in the dance of emotions with other people.
link |
Yeah, I like that explanation. I like that kind of, yeah, thinking about it as a dance,
link |
because it is really about that. It's about sensing what state the other person's in and
link |
using that information to decide on how you're going to react. And I think it can be very powerful.
link |
Like people who are the best, most persuasive leaders in the world tap into,
link |
you know, they have, if you have higher EQ, you're more likely to be able to motivate people
link |
to change their behaviors. So it can be very powerful.
link |
On a more kind of technical, maybe philosophical level, you've written that emotion is universal.
link |
It seems that sort of like Chomsky says language is universal. There's a bunch of other stuff like
link |
cognition, consciousness. Seems a lot of us have these aspects. So the human mind generates all
link |
this. And so what do you think is the, they all seem to be like echoes of the same thing.
link |
What do you think emotion is exactly? Like how deep does it run? Is it a surface level thing
link |
that we display to each other? Is it just another form of language or something deep within?
link |
I think it's, it's really deep. It's how, you know, we started with memory. I think emotions
link |
play a really important role. Yeah, emotions play a very important role in how we encode
link |
memories, right? Our, our memories are often encoded, almost indexed by emotions. Yeah.
link |
Yeah, it's at this core of how, you know, our decision making engine is also heavily influenced
link |
by our emotions. So emotion is part of cognition. It's totally, it's intermixed into the whole thing.
link |
Yes, absolutely. And in fact, when you take it away, people are unable to make decisions.
link |
They're really paralyzed. Like they can't go about their daily or their, you know, personal or
link |
professional lives. So it does seem like there's probably some interesting interweaving of emotion
link |
and consciousness. I wonder if it's possible to have, like if they're next door neighbors somehow,
link |
or if they're actually flatmates. I don't, I don't, it feels like the, the hard problem of
link |
consciousness where it's some, it feels like something to experience the thing. Like red
link |
feels like red and it's, you know, when you eat a mango, sweet, the taste, the, the, the sweetness
link |
that it feels like something to experience that sweetness, that whatever generates emotions.
link |
But then like, I feel like emotion is part of communication. It's very much about communication.
link |
And then that means it's also deeply connected to language. But then probably human intelligence
link |
is deeply connected to the collective intelligence between humans. It's not just the standalone
link |
thing. So the whole thing is really connected. So emotion is connected to language, language is
link |
connected to intelligence. And then intelligence connected to consciousness and consciousness
link |
is connected to emotion. The whole thing is, it's a beautiful mess. So
link |
Can I comment on the emotions being a communication mechanism? Because I think
link |
there are two facets of, of our emotional experiences. One is communication, right?
link |
Like we use emotions, for example, facial expressions or other nonverbal cues to connect
link |
with other human beings and with other beings in the world, right? But even if it's not a
link |
communication context, we still experience emotions and we still process emotions and
link |
we still leverage emotions to make decisions and to learn and, you know, to experience life. So
link |
it isn't always just about communication. And we learned that very early on in our,
link |
in kind of our work at Affectiva. One of the very first applications we brought to market was
link |
understanding how people respond to content, right? So if they're watching this video of
link |
ours, like, are they interested? Are they inspired? Are they bored to death? And so we
link |
watched their facial expressions. And we had, we weren't sure if people would express any emotions
link |
if they were sitting alone. Like if you're in your bed at night, watching a Netflix TV series,
link |
would we still see any emotions on your face? And we were surprised that, yes, people still
link |
emote, even if they're alone, even if you're in your car driving around, you're singing along
link |
a song and you're joyful. We'll see these expressions. So it's not just about communicating
link |
with another person. Sometimes it really is just about experiencing the world.
link |
First of all, I wonder if some of that is because we develop our intelligence and our
link |
emotional intelligence by communicating with other humans. And so when other humans disappear
link |
from the picture, we're still kind of a virtual human. The code still runs basically. Yeah,
link |
the code still runs. And but you're also kind of, you're still, there's like virtual humans,
link |
you don't have to think of it that way. But there's a kind of, when you like chuckle like,
link |
yeah, like you're, you're kind of chuckling to a virtual human. I mean, it's possible that
link |
the code has to have another human there. Because if you just grow up alone,
link |
I wonder if emotion will still be there in this visual form. So yeah, I wonder. But anyway,
link |
what can you tell from the human face about what's going on inside? So that's the problem
link |
that Affectiva first tackled, which is using computer vision, using machine learning, to try to
link |
detect as many things as possible about the human face, and convert them into a prediction of
link |
categories of emotion, anger, happiness, all that kind of stuff. How hard is that problem?
link |
It's extremely hard. It's very, very hard because there is no one-to-one mapping between
link |
a facial expression and your internal state. There just isn't. There's this oversimplification
link |
of the problem where it's something like, if you are smiling, then you're happy. If you do a brow
link |
furrow, then you're angry. If you do an eyebrow raise, then you're surprised. And just think about
link |
it for a moment, you could be smiling for a whole host of reasons. You could also be happy and not
link |
be smiling. You could furrow your eyebrows because you're angry or you're confused about something
link |
or you're constipated. So I think this overly simplistic approach to inferring emotion from a facial
link |
expression is really dangerous. The solution is to incorporate as many contextual signals as you
link |
can. So for example, I'm driving a car and you can see me nodding my head and my eyes are closed
link |
and the blinking rate is changing. I'm probably falling asleep at the wheel because you know
link |
the context. You understand what the person is doing. Or you add additional channels like
link |
voice or gestures or even physiological sensors. But I think it's very dangerous to just take this
link |
overly simplistic approach of smile equals happy. If you're able to, in a high-resolution way,
link |
specify the context, there's certain things that are going to be somewhat reliable signals
link |
of something like drowsiness or happiness or stuff like that. I mean, when people are watching
link |
Netflix content, that problem, that's a really compelling idea that you can kind of, at least
link |
in aggregate, highlight like which part was boring, which part was exciting. How hard was
link |
that problem, on a scale of difficulty? I think that's one of the easier
link |
problems to solve because it's a relatively constrained environment. You have somebody
link |
sitting in front of, initially we started with like a device in front of you like a laptop.
link |
And then we graduated to doing this on a mobile phone, which is a lot harder just because of,
link |
you know, from a computer vision perspective, the profile view of the face can be a lot more
link |
challenging. We had to figure out lighting conditions because usually people are watching
link |
content literally in their bedrooms at night, lights are dimmed. Yeah. I mean, if you're
link |
standing, it's probably going to be the looking-up-the-nostril view. Yeah. And nobody looks good at
link |
that. I've seen data sets from that perspective. It's like, this is not a good look for anyone.
link |
Or if you're laying in bed at night, what is it? Side view or something?
link |
And half your face is like on a pillow. Actually, I would love to know, have data
link |
about like how people watch stuff in bed at night. Like, do they prop it up? Is it on a pillow?
link |
Like, I'm sure there's a lot of interesting dynamics there.
link |
Right. From a health and well-being perspective, right? Like, oh, you're hurting yourself or
link |
you're not. I was thinking from a machine learning perspective, but yes. But also, yeah. Yeah,
link |
once you have that data, you can start making all kinds of inferences about health and stuff like
link |
that. Interesting. Yeah. There was an interesting project when I was at Google
link |
called active authentication, where you want to be able to
link |
unlock your phone without using a password. So it would use the face, but also other stuff,
link |
like the way you take the phone out of the pocket. The idea was to use that kind of multimodal data
link |
with machine learning to identify that it's you, or likely to be you, or likely not to be you,
link |
that allows you to not always have to enter the password. That was the idea. But the funny thing
link |
about that, and I just want to tell a small anecdote, is that it was all male engineers.
link |
Except our boss, who's still one of my favorite humans, was a woman,
link |
Regina Dugan. Oh my God, I love her. She's awesome. She's the best. So, but anyway,
link |
there was one female engineer, a brilliant female engineer, on the team,
link |
and she was the one who actually pointed out the fact that women often don't have pockets. Right.
link |
You know, it was like, whoa, that was not even a category in the code of like, wait a minute,
link |
you can take the phone out of some other place than your pocket. So anyway, it's a
link |
funny thing: when you're considering people lying in bed watching a phone, you have to consider,
link |
you know, diversity in all its forms, depending on the problem, depending on the
link |
context. Yeah, actually, this is like a very important, I think this is, you know, you probably
link |
get this all the time, like people are worried that AI is going to take over humanity and like,
link |
get rid of all the humans in the world, and I'm like, actually, that's not my biggest concern.
link |
My biggest concern is that we are building bias into these systems. And then they're like deployed
link |
at large and at scale. And before you know it, you're kind of accentuating the bias that exists
link |
in society. And yeah, I know it's very important to worry about that,
link |
but to me the worry points to an emergent phenomenon, which is a very good one, because I think these
link |
systems are actually, by encoding the data that exists, they're revealing the bias in society,
link |
therefore teaching us what the bias is, so we can now improve on that bias within the system.
link |
So they're almost like putting a mirror to ourselves.
link |
We have to be open to looking in the mirror, though. We have to be open to scrutinizing
link |
the data. And if you just take it as ground truth... or you don't even have to look at the data, I mean,
link |
yes, the data is how you fix it, but then you just look at the behavior of the system. It's like,
link |
and you realize, holy crap, this thing is kind of racist. Like, why is that? And then you look at
link |
the data, it's like, okay. And then you start to realize it. I think that's a much more effective
link |
way to be introspective as a society than through sort of political discourse,
link |
because people are for some reason more productive and rigorous in criticizing AI
link |
than they are in criticizing each other. So I think this is just a nice method for studying society
link |
and see which way progress lies. Anyway, back to what we were talking about: the problem of
link |
watching Netflix in bed, or elsewhere, and seeing which parts are exciting, which parts are boring,
link |
you're saying that's relatively constrained, because, you know, you have a captive audience,
link |
and you kind of know the context. And one thing you said that was really key is the
link |
aggregate, you're doing this in aggregate, right? Like we're looking at aggregated response of people,
link |
and so when you see a peak, say a smile peak, they're probably smiling or laughing at something
link |
that's in the content. So that was one of the first problems we were able to solve. And
link |
when we see the smile peak, it doesn't mean that these people are internally happy. They're just
link |
laughing at content. So it's important to, you know, call it for what it is.
link |
But it's still really, really useful data. I wonder how that compares to what YouTube
link |
and other places use. Obviously, for the most part, they don't have that kind
link |
of data. They have the data of when people tune out, drop off. And I think that's,
link |
in aggregate for YouTube at least, a pretty powerful signal. I worry about what that leads to,
link |
because looking at YouTubers that really care about views and,
link |
you know, try to maximize the number of views, when they say that the video should
link |
be constantly interesting, it seems like a good goal. I feel like that leads to this manic pace
link |
of a video. Like the idea that I would speak at the current speed that I'm speaking, I don't know.
link |
And that every moment has to be engaging, right? Engaging. I think there's value to silence,
link |
there's value to the boring bits. I mean, some of the greatest movies ever,
link |
some of the greatest stories ever told, they have the boring bits, seemingly boring bits. I
link |
don't know. I wonder about that. Of course, it's not that the human face can capture that either,
link |
it's just giving an extra signal. You have to really, I don't know, you have to really collect
link |
deeper, long term data about what was meaningful to people. When they think 30 days from now, what
link |
they still remember, what moved them, what changed them, what helped them grow, that kind of stuff.
link |
You know, it would be a really, I don't know if there are any researchers out there who are doing
link |
this type of work. Wouldn't it be so cool to tie your emotional expressions while you're, say,
link |
listening to a podcast interview and then 30 days later interview people and say, hey,
link |
what do you remember? You watched this 30 days ago; what stuck with you? And then see if
link |
there's any, there ought to be, maybe there ought to be some correlation between these emotional
link |
experiences and yeah, what you, what stays with you. So the one guy listening now on the beach
link |
in Brazil, please record a video of yourself listening to this and send it to me and then
link |
I'll interview you 30 days from now. Yeah, that would be great. It would be statistically
link |
significant. Yeah, I know, an n of one, but you know, yeah, I think that's really fascinating. I think
link |
that kind of holds the key to a future where entertainment or content is both entertaining
link |
and, I don't know, makes you better, empowering in some way. So figuring out like
link |
showing people stuff that entertains them, but also they're happy they watched 30 days from
link |
now because they've become a better person because of it. Well, you know, okay, not to
link |
riff on this topic for too long, but I have two children, right? And I see my role as a parent
link |
as like a chief opportunity officer. Like I am responsible for exposing them to all sorts of
link |
things in the world. But often I have no way of knowing what stuck. Like, what was,
link |
you know, is this actually going to be transformative, you know, for them 10 years down the line? And
link |
I wish there was a way to quantify these experiences. I can tell in the moment if
link |
they're engaged, right? I can tell. But it's really hard to know if they're going to remember
link |
them 10 years from now, or if it's going to stick. Yeah, that one is weird because it seems like kids
link |
remember the weirdest things. I've seen parents do incredible stuff with their kids and the kids don't
link |
remember any of that. They remember some tiny, small, sweet thing a parent did. Right. Like,
link |
I took you to this amazing country, yeah, whatever. And then they'll remember some
link |
stuffed toy you got them, or the new PlayStation, or some silly
link |
little thing. So I think they're just designed that way, to mess with
link |
your head. But kids definitely seem to be very impacted by negative events. So
link |
minimizing the number of negative events is important, but not too much, right? You can't,
link |
you can't just like, you know, there's still discipline and challenge and all those kinds of
link |
things. So you want some adversity for sure. So yeah, I mean, I'm definitely when I have kids,
link |
I'm going to drive them out into the woods. Okay. And then they have to survive and
link |
figure out how to make their way back home, like 20 miles out. Okay. Yeah. And after that, we can go
link |
for ice cream. Anyway, I'm working on this whole parenting thing. I haven't figured it out. Okay.
link |
Okay. What were we talking about? Yes, Affectiva, the problem of emotion detection.
link |
So there's some people, maybe we can just speak to that a little more,
link |
there's folks like Lisa Feldman Barrett who challenge this idea that
link |
emotion could be fully detected, or even well detected from the human face that there's so
link |
much more to emotion. What do you think about ideas like hers, criticism like hers? Yeah,
link |
I actually agree with a lot of Lisa's criticism. So even in my PhD work, like 20-plus years
link |
ago now. Time flies when you're having fun. I know, right? That was back when I did like
link |
dynamic Bayesian networks and that was before deep learning. That was before deep learning.
link |
Yeah. Yeah. I know. Back in my day. Now you can just use, yeah, it's all the same
link |
architecture. You can apply it to anything. Yeah. Right. Right. But even then I
link |
did not subscribe to this theory of basic emotions, where there's just a simplistic
link |
one-to-one mapping between facial expressions and emotions. I actually think also,
link |
we're not in the business of trying to identify your true internal emotional state. We
link |
just want to quantify in an objective way what's showing on your face because that's an important
link |
signal. It doesn't mean it's a true reflection of your internal emotional state. I think
link |
she's just trying to highlight that this is not a simple
link |
problem and overly simplistic solutions are going to hurt the industry. And I subscribe to that.
link |
And I think multimodal is the way to go. Like whether it's additional context information
link |
or different modalities and channels of information, I think
link |
that's where we ought to go. And I think, I mean, that's a big part of what she's advocating for
link |
as well. So, but there is signal in the human face. There's definitely signal, a
link |
projection of emotion. At least in part, the inner state is captured
link |
in some meaningful way on the human face. I think it can sometimes be a reflection or
link |
an expression of your internal state, but sometimes it's a social signal. So
link |
you cannot look at the face purely as a signal of emotion. It can be a signal of cognition and it
link |
can be a signal of a social expression. And I think to disambiguate that, we have to be
link |
careful about it and we have to add additional information. Humans are fascinating, aren't
link |
they? With the whole face thing, this can mean so many things from humor to sarcasm to everything,
link |
the whole thing. Some things we can help, some things we can't help at all.
link |
In all the years of leading Affectiva, an emotion recognition company, like we talked about,
link |
what have you learned about emotion, about humans, and about AI? Big sweeping questions.
link |
Yeah, that's a big sweeping question. Well, I think the thing I learned the most is that even
link |
though like we are in the business of building AI basically, right? It always goes back to the
link |
humans, right? It's always about the humans. And so for example, the thing I'm most proud of
link |
in building Affectiva, and yeah, the thing I'm most proud of on this journey,
link |
I love the technology and I'm so proud of the solutions we've built and we've brought to market,
link |
but I'm actually most proud of the people we've developed and cultivated at the company and
link |
the culture we've created. You know, for some of the people who joined Affectiva,
link |
this was their first job. And while at Affectiva, they became American citizens and they bought
link |
their first house and they found their partner and they had their first kid, right? Like key moments
link |
in life that we got to be part of. And that's the thing I'm most proud of.
link |
Celebrating
link |
humanity in general, broadly speaking. That's a great thing to have in
link |
a company that works on AI because that's not often the thing that's celebrated in AI companies.
link |
So often it's just raw, great engineering. So celebrating the humanity, that's great. And
link |
especially from a leadership position. Well, what do you think about the movie,
link |
Her? Let me ask you that before we talk about Smart Eye, because
link |
Affectiva is and was not just about emotion. So I'd love to talk to you about Smart Eye.
link |
But before that, let me just jump into the movie Her. Do you think we'll have
link |
increasingly deep and meaningful connections with computers?
link |
Is that a compelling thing to you? Something that's already happening? I love
link |
the movie Her, by the way. The thing I love the most about this movie is it demonstrates
link |
how technology can be a conduit for positive behavior change. So I forgot the guy's name in
link |
the movie, whatever. Theodore. Theodore. So Theodore was like really depressed, right? And
link |
he just didn't want to get out of bed. He was just like done with life, right? And Samantha, right?
link |
Samantha, yeah. She just knew him so well. She had, she was emotionally intelligent.
link |
And so she could persuade him and motivate him to change his behavior. And she got him up,
link |
and they went to the beach together. And I think that represents the promise of emotion AI. If done
link |
well, this technology can help us live happier lives, more productive lives, healthier lives,
link |
more connected lives. So that's the part that I love about the movie, obviously. It's Hollywood,
link |
so it takes a twist and whatever. But the key notion that technology with emotion AI can
link |
persuade you to be a better version of who you are, I think that's awesome.
link |
Well, what about the twist? Spoiler alert:
link |
Samantha starts feeling a bit of a distance and basically leaves Theodore.
link |
You don't think that's a good feature? You think that's a bug, or a feature?
link |
Well, I think what went wrong is Theodore became really attached to Samantha. Like,
link |
I think he kind of fell in love with her. You think that's wrong?
link |
I mean, I think she was putting out the signal. This is an intimate relationship,
link |
right? There was a deep intimacy to it. Right. But what does that mean?
link |
What does that mean for an AI system? Right? We're just friends.
link |
Yeah, we're just friends. Well, I think when he realized, which is such a human thing of
link |
jealousy, when he realized that Samantha was talking to like thousands of people,
link |
she's parallel dating. Yeah, that did not go well. Right.
link |
You know, from a computer perspective, that doesn't take
link |
anything away from what we have. It's like you getting jealous of Windows 98 for being used
link |
by millions of people. It's like not liking that Alexa talks to a bunch of, you
link |
know, other families. But I think Alexa currently is just a servant. It tells you about the
link |
weather. It doesn't do the intimate deep connection. And I think there is
link |
something really powerful about the intimacy of a connection with an AI system
link |
that would have to respect and play the human game of jealousy, of love, of heartbreak, and
link |
all that kind of stuff, which Samantha does seem to be pretty good at. I think this AI system
link |
knows what it's doing. Well, actually, let me ask you this. I don't think she was talking to anyone
link |
else. You don't think so? You think she was just done with Theodore? Yeah. Oh, yeah. And
link |
she didn't have the guts to just break it off
link |
cleanly. Okay. She wanted to put him in pain. No, I don't know. Well, she could have ghosted him.
link |
She could have. Right. I'm sorry. There's our engineers. Oh God. But I think those are really,
link |
I honestly think some of it is Hollywood, but some of that is a feature from
link |
an engineering perspective, not a bug. I think AI systems that can leave us,
link |
now this is more for social robotics than it is for anything that's useful. Like I'd hate it
link |
if Wikipedia said, you know, I need a break right now. Right, right, right, right. I'm like, no,
link |
no, I need you. But if it's just purely for companionship, then I think the ability to
link |
leave is really powerful. I don't know. I never thought of that. So that's so, so fascinating
link |
because I've always taken the human perspective, right? Like for example, we had a Jibo at home,
link |
right? And my son loved it. And then the company ran out of money. And so they had to basically
link |
shut down, like Jibo basically died, right? And it was so interesting to me because we have a lot
link |
of gadgets at home and a lot of them break and my son never cares about it, right? Like if our
link |
Alexa stopped working tomorrow, I don't think he'd really care. But when Jibo stopped working,
link |
it was traumatic. Like he got really upset. And as a parent, that like made me think about this
link |
deeply, right? Was I comfortable with that? I liked the connection they had because I
link |
think it was a positive relationship. But I was surprised that it affected him emotionally so
link |
much. And I think there's a broader question here, right? As we build socially and emotionally
link |
intelligent machines, what does that mean about our relationship with them? And then, more
link |
broadly, our relationship with one another, right? Because this machine is going to be programmed
link |
to be amazing at empathy, by definition, right? It's going to always be there for you. It's not
link |
going to get bored. In fact, there's a chatbot in China, Xiaoice. And it's like the number
link |
two or three most popular app. And it basically is just a confidant. And you can tell it anything
link |
you want. And people use it for all sorts of things. They confide about things like domestic violence
link |
or suicide attempts or, you know, challenges they have at work.
link |
I don't know how I feel about that. I think about that a lot.
link |
Yeah, I think, first of all, it's obviously the future from my perspective. Second of all, I think there's
link |
a lot of trajectories where that becomes an exciting future. But I think everyone should feel very
link |
uncomfortable about how much the company knows about them, about where the data is going,
link |
how the data is being collected. Because I think, and this is one of the lessons of social media,
link |
that I think we should demand full control and transparency of the data on those things.
link |
Plus one, totally agree. Yeah. So like, I think it's really empowering, as long as you can walk
link |
away. As long as you can delete the data, or it's opt-in, or at least
link |
there's clarity about what the data is being used for by the company. And I think the CEO and leaders
link |
also matter here. Like, you need to be able to trust the basic humanity of the leader.
link |
Exactly. And also that the leader is not going to be a puppet of a larger machine,
link |
but they actually have a significant role in defining the culture and the way the company
link |
operates. So anyway, we should definitely scrutinize companies on that aspect.
link |
I'm personally excited about that future, but also, even if you're not, it's coming.
link |
So let's figure out how to do it in the least painful and the most positive way.
link |
That's great. You're the deputy CEO of Smart Eye. Can you describe the mission of the company?
link |
What is Smart Eye? Yeah. So Smart Eye is a Swedish company. They've been in business for the last 20
link |
years. And their main focus, the industry they're most focused on, is the automotive
link |
industry. So bringing driver monitoring systems to basically save lives, right? So I first met
link |
the CEO, Martin Krantz. Gosh, it was right when COVID hit. It was actually the last
link |
CES right before COVID. So CES 2020, right? January 2020. Yeah, January. Exactly. So we were there,
link |
met him in person. We were basically competing with each other. I think the difference was
link |
they'd been doing driver monitoring and had a lot of credibility in the automotive space. We
link |
didn't come from the automotive space, but we were using new technology like deep learning
link |
and building this emotion recognition. And you wanted to enter the automotive space. You wanted
link |
to operate in the automotive space. Exactly. It was one of the areas we were focused on; we had just raised
link |
a round of funding to bring our technology to the automotive industry. So we met and
link |
honestly, it was the only time I met with a CEO who had the same vision
link |
as I did. Like he basically said, yeah, our vision is to bridge the gap between humans and machines.
link |
I was like, oh my God, this is exactly, almost to the word, you know, how we describe it
link |
too. And we started talking and first it was about, okay, can we align strategically here? Like how
link |
can we work together? Because we're competing, but we're also complementary. And then
link |
I think after four months of speaking almost every day on FaceTime, he was like,
link |
is your company interested in an acquisition? I usually say no when people
link |
approach us, but it was the first time that I was like, huh, yeah, I might be interested. Let's talk.
link |
Yeah. So you just hit it off. Yeah. So they're very respected in the automotive
link |
sector for delivering products that get better and better. I mean,
link |
maybe you could speak to that, but it's driver sensing: basically having a device that's
link |
looking at the driver and is able to tell you where the driver is looking. Correct. It's also able
link |
to detect drowsiness. Correct. It does that from the face and the eyes. Exactly. Like it's
link |
monitoring driver distraction and drowsiness, but they bought us so that we could expand beyond
link |
just the driver. So in driver monitoring systems, the camera usually sits on the steering
link |
wheel or around the steering column and looks directly at the driver. But now
link |
we've migrated the camera position and partnership with car companies to the rear view mirror
link |
position. So it has a full view of the entire cabin of the car. And you can detect how many
link |
people are in the car. What are they doing? So we do activity detection, like eating or drinking or
link |
in some regions of the world smoking. We can detect if a baby's in the car seat, right? And if,
link |
unfortunately, in some cases they're forgotten, parents just leave the car and forget the kid
link |
in the car. That's an easy computer vision problem to solve, right? You can detect there's a car seat,
link |
there's a baby, you can text the parent and hopefully, again, save lives. So that was the
link |
impetus for the acquisition. It's been a year. I mean, there's a lot of questions. It's a really
link |
exciting space, especially to me. I just find it a fascinating problem. It could enrich the
link |
experience in the car in so many ways, especially because, despite COVID, I mean,
link |
COVID changed things in interesting ways, but I think the world is bouncing back, and we
link |
spend so much time in the car, and the car is such a weird little world we have for ourselves.
link |
Like people do all kinds of different stuff, like listen to podcasts, they think about stuff,
link |
they get angry, they do phone calls. It's like a little world of its own with a kind of privacy
link |
that for many people, they don't get anywhere else. And it's a little box that's like a psychology
link |
experiment because it feels like the angriest many humans in this world get is inside the car.
link |
It's so interesting. So it's such an opportunity to explore how we can enrich,
link |
how companies can enrich that experience. And also as the cars get become more and more automated,
link |
there's more and more opportunity, the variety of activities that you can do in the car increases.
link |
So it's super interesting. So I mean, in a practical sense, Smart Eye has been selected,
link |
at least I read, by 14 of the world's leading car manufacturers for 94 car models. So
link |
it's in a lot of cars. How hard is it to work with car companies? So they're all different.
link |
They all have different needs. The ones I've gotten a chance to interact with are very focused on
link |
cost. And anyone who's focused on cost, it's like, all right, do you hate fun?
link |
Let's just have some fun. Let's figure out the most fun thing we can do and worry
link |
about cost later. But I think because the way the car industry works, I mean, it's a very
link |
thin margin that you get to operate under. So you have to really, really make sure that
link |
everything you add to the car makes sense financially. So anyway, does this new industry,
link |
especially at the scale of Smart Eye, hold any lessons for you?
link |
Yeah, I think it is a very tough market to penetrate. But once you're in, it's awesome.
link |
Because once you're in, you're designed into these car models for like somewhere between
link |
five to seven years, which is awesome. And once they're on the road, you just get paid a royalty
link |
fee per vehicle. So it's a high barrier to entry. But once you're in, it's amazing. I think the thing
link |
that I struggle the most with in this industry is the time to market. So often we're asked to lock
link |
or do a code freeze two years before the car is going to be on the road. I'm like, guys,
link |
do you understand the pace with which technology moves? So I think car companies are really trying
link |
to make the Tesla transition, to become more of a software-driven
link |
architecture. And that's hard for many. It's just the cultural change. I mean,
link |
I'm sure you've experienced that, right? Oh, definitely. I think one of the biggest
link |
inventions or imperatives created by Tesla, to me personally, okay, people are going to
link |
complain about this, but I know, electric vehicles, I know, Autopilot, AI stuff. To me, over-the-air
link |
software updates are the biggest revolution in cars. And it is extremely difficult
link |
to switch to that because it is a culture shift. At first, especially if you're not comfortable
link |
with it, it seems dangerous. The approach to cars has been so safety-focused for so
link |
many decades. They're like, what do you mean we dynamically change code? The whole point
link |
is you have a thing that you test, right? And it's not reliable, because do you know
link |
how much it costs if we have to recall these cars? Right? And there's an
link |
understandable obsession with safety. But the downside of an obsession with safety
link |
is the same as with being obsessed with safety as a parent. If you do that too much, you
link |
limit the potential development and flourishing, for a human being in that particular aspect, and
link |
in this particular case, of the software, the artificial neural network of it. But it's
link |
tough to do. It's really tough to do culturally and technically, like the deployment, the mass
link |
deployment of software is really, really difficult. But I hope that's where the industry is going.
link |
One of the reasons I really want Tesla to succeed is exactly that point, not Autopilot,
link |
not the electric vehicle, but the softwareization of basically everything, of cars
link |
especially, because to me, that's actually going to increase two things, increase safety,
link |
because you can update much faster, but also increase the effectiveness of
link |
folks like you who dream about enriching the human experience with AI, because if there's a
link |
feature you want, like a new emoji or whatever, like the way TikTok
link |
releases filters, you can just release that for in-car stuff. So, but yeah, that's
link |
definitely one of the use cases we're looking into is once you know the sentiment of the passengers
link |
in the vehicle, you can optimize the temperature in the car, you can change the lighting, right?
link |
So if the backseat passengers are falling asleep, you can dim the lights, you can lower the music,
link |
right? You can do all sorts of things. Yeah. I mean, of course, you could do that kind of
link |
stuff with a two year delay, but it's tougher. Yeah. Do you think, do you think Tesla or Waymo
link |
or some of these companies that are doing semi or fully autonomous driving should be doing
link |
driver sensing? Yes. Are you thinking about that kind of stuff? So not just how we can enhance
link |
the in-cab experience for cars that are manually driven, but the ones that are increasingly
link |
more autonomously driven? Yeah. So if we fast forward to the universe where it's fully autonomous,
link |
I think interior sensing becomes extremely important because the role of the driver
link |
isn't just to drive. If you think about it, the driver almost manages the dynamics within a vehicle.
link |
And so who's going to play that role when it's an autonomous car? We want a solution that is
link |
able to say, oh my God, Lex is bored to death because the car is moving way too slow. Let's
link |
engage Lex. Or Rana is freaking out because she doesn't trust this vehicle yet. So let's tell Rana
link |
a little bit more information about the route. So I think, or somebody's having a heart attack
link |
in the car, you need interior sensing in fully autonomous vehicles. But with semi autonomous
link |
vehicles, I think it's really key to have driver monitoring because semi autonomous means that
link |
sometimes the car is in charge, sometimes the driver is in charge or the co pilot, right? And
link |
you need both systems to be on the same page. You need to know, the car needs to know if the
link |
driver's asleep before it transitions control over to the driver. And sometimes if the driver's too
link |
tired, the car can say, I'm going to be a better driver than you are right now. I'm taking control
link |
over. So this dynamic, this dance, it's so key and you can't do that without driver sensing.
link |
Yeah, there's a disagreement I've had with Elon for the longest time, that this is obvious,
link |
that this should be in the Tesla from day one. And it's obvious that driver sensing is not a
link |
hindrance. It's not obvious. I should be careful because having studied this problem, nothing
link |
is really obvious, but it seems very likely driver sensing is not a hindrance to an experience.
link |
It's only enriching to the experience and likely increases the safety. That said,
link |
it is very surprising to me just having studied semi autonomous driving how well humans are able
link |
to manage that dance. Because the intuition, before we were doing that kind of thing, was that
link |
humans would become just incredibly distracted. They would just let the thing do its thing.
link |
Because it is life and death and they're able to manage that somehow. But that said, there's no
link |
reason not to have driver sensing on top of that. I feel like that's going to allow you to do that
link |
dance that you're currently doing without driver sensing, except through the steering wheel,
link |
to do that even better. I mean, the possibilities are endless and the machine learning possibilities
link |
are endless. It's such a beautiful, constrained environment, so you could operate much
link |
more effectively than you can with the external environment. The external environment is full of
link |
weird edge cases and complexities. There's so much, it's so fascinating, such a fascinating
link |
world. I do hope that companies like Tesla and others, even Waymo, which I don't even know if
link |
Waymo is doing anything sophisticated inside the cab. I don't think so. What is it? I honestly
link |
think it goes back to the robotics thing we were talking about, which is like great engineers
link |
that are building these AI systems just are afraid of the human being. And not thinking about the
link |
human experience, they're thinking about the features and the perceptual abilities of that
link |
thing. They think the best way I can serve the human is by doing the best perception and control
link |
I can, by looking at the external environment, keeping the human safe. But there's a huge part missing:
link |
I'm here. I need to be noticed and interacted with and understood and all those kinds of things,
link |
even just in a personal level for entertainment, honestly, for entertainment.
link |
You know, one of the coolest pieces of work we did in collaboration with MIT around this was we looked
link |
at longitudinal data, right? Because MIT had access to tons of data. And just seeing the
link |
patterns of people, like driving in the morning off to work versus commuting back from work,
link |
or weekend driving versus weekday driving. And wouldn't it be so cool if your car knew that
link |
and then was able to optimize either the route or the experience or even make recommendations?
link |
I think it's very powerful. Yeah, like, why are you taking this route? You're always unhappy when
link |
you take this route. And you're always happy when you take this alternative route. Take that route
link |
instead. Exactly. I mean, to have even that little step of a relationship with a car,
link |
I think is incredible. Of course, you have to get the privacy, right? You have to get all that kind
link |
of stuff, right? But I wish, honestly, you know, people are paranoid about this, but I would like
link |
a smart refrigerator. We have such a deep connection with food as a human civilization. I would like
link |
to have a refrigerator that would understand me, because, you know, I also have a complex
link |
relationship with food; like, you know, I pig out too easily and all that kind of stuff. So,
link |
you know, maybe I want the refrigerator to be like, are you sure about
link |
this? Because maybe you're just feeling down or tired. Like, maybe let's lay off.
link |
Your vision of the smart refrigerator is way kinder than mine.
link |
Is it just, like, yelling at you? No, it's just because, you know, I don't drink
link |
alcohol, I don't smoke, but I eat a ton of chocolate, like, that's my vice. And
link |
sometimes I scream too. And I'm like, okay, my smart refrigerator will just lock down.
link |
It'll just say, dude, you've had way too many today, like, yeah.
link |
Yeah. No, but here's the thing. Do you regret having, like, let's say, not the next day,
link |
but 30 days later, what would you like the refrigerator to have done
link |
then? Well, I think actually the more positive relationship would be one where there's
link |
a conversation, right? As opposed to force. That's probably the more sustainable relationship.
link |
It's like, late at night: no, listen, listen, I know I told you an hour ago
link |
that this is not a good idea, but just listen, things have changed. I can just imagine a bunch
link |
of stuff being made up just to convince it. Oh my God, it's hilarious. But I mean, I just think that
link |
there's opportunities there. I mean, maybe not locking down, but for our systems that are such
link |
a deep part of our lives. A lot of people that commute use
link |
their car every single day. A lot of us use a refrigerator every single day, the microwave
link |
every single day. And I feel like certain things could be made more efficient,
link |
more enriching, and AI is there to help. Like some just basic recognition of you as a human being
link |
about your patterns, what makes you happy, not happy, and all that kind of stuff. And the car,
link |
obviously. Maybe it'll say, well, instead of this, like,
link |
Ben and Jerry's ice cream, how about this hummus and carrots or something? I don't know.
link |
Yeah, like a reminder. Just in time recommendation, right?
link |
But not like a generic one, but a reminder that last time you chose the carrots,
link |
you smiled 17 times more the next day. You were happier the next day, right?
link |
Yeah, you were happier the next day. But then again,
link |
if you're the kind of person that responds better to negative comments, you could
link |
say, like, hey, remember that wedding you're going to? You want to fit into that dress?
link |
Remember that? Think about that before you eat this. No, I don't know. For some,
link |
probably that would work for me. Like a refrigerator that is just ruthless, that's
link |
shaming me. I would, of course, welcome it. That would work for me.
link |
Well, I don't know. I think if it's really smart, it would optimize its nudging
link |
based on what works for you, right? Exactly. That's the whole point. Personalization,
link |
in every way, deep personalization. You were a part of a webinar titled Advancing Road Safety,
link |
The State of Alcohol Intoxication Research. So for people who don't know, every year 1.3
link |
million people around the world die in road crashes. And more than 20% of these fatalities
link |
are estimated to be alcohol related. A lot of them are also distraction related. So can AI help
link |
with the alcohol thing? I think the answer is yes. There are signals, and we know that as humans,
link |
like we can tell when a person is, you know, in different phases of being drunk, right? Yeah.
link |
And I think you can use technology to do the same. And again, I think the ultimate solution is going
link |
to be a combination of different sensors. How hard is the problem from the vision perspective?
link |
I think it's non trivial. I think it's non trivial. And I think the biggest part is
link |
getting the data, right? It's like getting enough data examples. So we, for this research project,
link |
we partnered with the Transportation Authorities of Sweden. And we literally had a race track
link |
with a safety driver. And we basically progressively got people drunk. Nice. But, you know, that's
link |
a very expensive data set to collect. And you want to collect it globally and in multiple conditions.
link |
Yeah, the ethics of collecting a data set where people are drunk is tricky.
link |
Which is funny because, I mean, let's put drunk driving aside, the number of drunk people in
link |
the world every day is very large. It'd be nice to have a large data set of drunk people getting
link |
progressively drunk. In fact, you could build an app where people can donate their data because
link |
it's hilarious. Right. Actually, yeah, but the liability, the ethics, how do you
link |
get it right? It's tricky. It's really, really tricky. Because drinking is one of those
link |
things that's funny and hilarious, and it's social, and so on and so forth. But it's
link |
also the thing that hurts a lot of people, like a lot of people. Like alcohol is one of those
link |
things. It's legal, but it's really damaging to a lot of lives. It destroys lives. And not just in
link |
driving context. I should mention, people should listen to Andrew Huberman who recently
link |
talked about alcohol. He has an amazing podcast. Andrew Huberman is a neuroscientist from Stanford
link |
and a good friend of mine. Oh, cool. And he's like a human encyclopedia about all health related
link |
wisdom. So he does a podcast. You would love it. I would love that. No, no, no, no. Oh,
link |
you don't know Andrew Huberman? Okay, listen, you'll listen to Andrew. It's called the Huberman Lab
link |
Podcast. This is your assignment. Just listen to one episode. Okay. I guarantee you this will be a thing
link |
where you say, Lex, this is the greatest human I've ever discovered. Oh my God. Because I'm
link |
really on a journey of kind of health and wellness, and I'm learning lots, and I'm trying
link |
to build these, I guess, atomic habits around just being healthy. So yeah, I'm definitely
link |
going to do this. This is great. He's a legit scientist, like
link |
really well published. But in his podcast, what he does, he's not talking about his own work.
link |
He's like a human encyclopedia of papers. And his whole thing is he takes a topic
link |
and, in a very fast, you mentioned atomic habits, very clear way, summarizes the research
link |
in a way that leads to protocols of what you should do. He's really big on like, not like
link |
this is what the science says, but like, this is literally what you should be doing according
link |
to science. There are a lot of recommendations he makes, several of which I definitely don't
link |
do, like getting sunlight as soon as possible after waking up, and for
link |
prolonged periods of time. That's a really big one. And there's a lot of science behind
link |
that one. There's a bunch of stuff about various systems. And you're gonna be like,
link |
Lex, this is my new favorite person, I guarantee it. And if you guys somehow don't know,
link |
and you're human and you care about your well being, you know, you should definitely listen to
link |
him. Love you, Andrew. Anyway, so what were we talking about? Oh, alcohol and detecting alcohol.
link |
So this is a problem you care about and you're trying to solve.
link |
And actually like broadening it, I do believe that the car is going to be a wellness center,
link |
like, because again, imagine if you have a variety of sensors inside the vehicle,
link |
tracking not just your emotional state or
link |
level of distraction, drowsiness, and intoxication, but also maybe even things like
link |
your physical state, you know, your heart rate and your heart rate variability and your breathing rate.
link |
And it can start optimizing, yeah, it can optimize the ride based on what your goals are.
link |
So I think we're going to start to see more of that. And I'm excited about that.
link |
Yeah, what are the challenges you're tackling with Smart Eye currently?
link |
What are the trickiest things to get right? Is it basically convincing more and more
link |
car companies that having AI inside the car is a good idea? Or are there
link |
more technical, algorithmic challenges? What's been keeping you mentally busy?
link |
I think a lot of the car companies we are in conversations with are already interested in
link |
driver monitoring, definitely. Like, I think it's becoming a must have. But even for interior sensing,
link |
we're engaged in a lot of advanced engineering projects and proofs of concept.
link |
And technologically, I can see a path to making it happen.
link |
I think it's the use case. Like, how does the car respond once it knows something about you?
link |
Because you want it to respond in a thoughtful way,
link |
one that isn't off-putting to the consumer in the car. So I think that's the user experience.
link |
I don't think we've really nailed that. And usually that's not our part; we're the sensing
link |
platform, and we collaborate with the car manufacturer to decide what the use case is. So,
link |
so say you, you figure out that somebody's angry while driving. Okay, what should the car do?
link |
You know, do you see your role as nudging, as, like,
link |
basically coming up with solutions, essentially, and then the car manufacturers
link |
kind of put their own little spin on it? Right. Like, we are the ideation, creative
link |
thought partner. But at the end of the day, the car company needs to decide what's on brand for
link |
them, right? Like maybe when it figures out that you're distracted or drowsy, it shows you a coffee
link |
cup, right? Or maybe it takes more aggressive action and basically says, okay, if you don't
link |
take a rest in the next five minutes, the car is going to shut down, right? Like, there's a whole
link |
range of actions the car can take. And doing the thing that best builds trust with
link |
the driver and the passengers. I think that's what we need to be very careful about.
link |
Yeah, car companies are funny because they have their own brand. I mean, that's why people get
link |
cars still. I hope that changes, but they get them because of a certain feel and look,
link |
a certain pride, like Mercedes Benz or BMW or whatever. And that's their thing.
link |
That's the family brand or something like that. Or Ford or GM, whatever; they stick to that
link |
thing. It's interesting. I don't know, it should be a little more about the technology
link |
inside. And I suppose there too, there could be a branding like a very specific style of luxury
link |
or fun, all that kind of stuff. Yeah. You know, I have an AI focused fund to invest in early stage
link |
kind of AI driven companies. And one of the companies we're looking at is trying to do what
link |
Tesla did, but for boats, for recreational boats. Yeah. So they're building an electric and kind
link |
of slash autonomous boat. And it's kind of the same issues like, what kind of sensors can you put in?
link |
What kind of states can you detect both exterior and interior within the boat? Anyways, it's
link |
like, really interesting. Do you boat at all? No, well, not in that way. I do like to get on a
link |
lake or a river and fish from a boat, but that's not boating. That's different. It's still boating.
link |
Low tech, a low tech boat. Get away, get closer to nature. But I guess going out into the ocean
link |
is also getting closer to nature in some deep sense. I mean, I guess that's why people love
link |
it. The enormity of the water just underneath you. Yeah. I love the water. I love both. I love
link |
saltwater. It's just humbling to be in front of this giant thing
link |
that's so powerful, that was here before us and will be here after us. But I also love the peace of a small
link |
wooded lake, where everything's calm. You tweeted that, I'm excited about Amazon's acquisition
link |
of iRobot. I think it's a super interesting, just given the trajectory of which you're part of,
link |
of these honestly small number of companies that are playing in this space that are like trying to
link |
have an impact on human beings. So it is an interesting moment in time that Amazon would
link |
acquire iRobot. You tweeted, I imagine a future where home robots are as ubiquitous as microwaves
link |
or toasters. Here are three reasons why I think this is exciting. If you remember, I can look
link |
it up. But why is this exciting to you? I mean, I'm trying
link |
to remember the exact order in which I put them. But one is just, it's going to be
link |
an incredible platform for understanding our behaviors within the home, right? Like, you know,
link |
if you think about Roomba, which is, you know, the robot vacuum cleaner, the flagship product of iRobot
link |
at the moment, it's running around your home, understanding the layout, understanding what's
link |
clean and what's not, how often you clean your house, and all of these behaviors
link |
are a piece of the puzzle in terms of understanding who you are as a consumer. And I think that could
link |
be, again, used in really meaningful ways, not just to recommend better products or whatever,
link |
but actually to improve your experience as a human being. So I think, I think that's very
link |
interesting. And I think about the natural evolution of these robots in the home. So it's
link |
it's interesting. Roomba isn't really a social robot, right? At the moment. But I once interviewed
link |
one of the chief engineers on the Roomba team, and he talked about how people named their Roombas.
link |
And if their Roomba broke down, they would call in and say, you know, my Roomba broke down and
link |
the company would say, well, we'll just send you a new one. And no, no, no, Rosie, like you have to
link |
like, yeah, I want you to fix this particular robot. So people have already built like
link |
interesting emotional connections with these home robots. And I think that again, that provides a
link |
platform for really interesting things, to just motivate change. Like, it could help you. I mean,
link |
one of the companies that spun out of MIT, Catalia Health, the guy who started it spent a lot of
link |
time building robots that help with weight management. So weight management, sleep, eating
link |
better. Yeah, all of these things. Well, if I'm being honest, Amazon does not exactly have a track
link |
record of winning over people in terms of trust. Now that said, it's a really difficult problem
link |
for a human being to let a robot into their home that has a camera on it. Right. That's really,
link |
really, really tough. And I think Roomba actually, I have to think about this, but I'm pretty sure
link |
by now, or for some time already, has had cameras, at least on the most recent Roomba.
link |
I have so many Roombas. Oh, you actually do? Well, I program them. I don't use a Roomba for vacuuming. In
link |
fact, people that have been to my place, they're like, yeah, you definitely don't use these Roombas.
link |
That could be a good thing. I can't tell the valence of this comment. Was it a compliment, or?
link |
No, it's a giant mess. It's just a bunch of electronics everywhere. I have six or seven
link |
computers, I have robots everywhere, I have Lego robots, I have small robots and big robots. It's
link |
just giant, just piles of robot stuff. And yeah, but including the Roombas, they're being used
link |
for their body and intelligence, but not for their intended purpose. I've repurposed them for
link |
other, deeper, more meaningful purposes than just, like, the Butter Robot. Yeah,
link |
which just brings a lot of people happiness, I'm sure. They have a camera because of the thing they
link |
advertised. I haven't used the camera myself, but on the new Roomba, they have
link |
state of the art poop detection, as they advertised, which is very difficult. Apparently it's a
link |
big problem for vacuum cleaners is if they go over like dark poop, it just runs it over and
link |
creates a giant mess. So apparently they collected a huge amount of data on
link |
different shapes and looks and whatever of poop, and are now able to avoid it and so on.
link |
They're very proud of this. So there is a camera, but you don't think of it as having a camera.
link |
Yeah, you don't think of it as having a camera because you've grown to trust it, I guess, because
link |
our phones, at least, most of us seem to trust our phones, even though there's a camera looking
link |
directly at you. I think that if you trust that the company is taking security very seriously,
link |
I actually don't know how that trust was earned with smartphones. I think it just started to provide
link |
a lot of positive value into your life where you just took it in and then the company over time
link |
has shown that it takes privacy very seriously, that kind of stuff. But Amazon has not always,
link |
in its social robots, communicated that this is a trustworthy thing, both in terms of culture
link |
and competence. Because I think privacy is not just about what do you intend to do,
link |
but also how well, how good are you at doing that kind of thing. So that's a really hard problem to
link |
solve. But a lot of us have Alexas at home, and I mean, Alexa could be listening in the whole time
link |
and doing all sorts of nefarious things with the data. Hopefully it's not, but I don't think it is.
link |
But it's such a tricky thing for a company to get right, which is to earn the trust.
link |
I don't think Alexa has earned people's trust quite yet. Yeah, I think it's not there quite yet.
link |
I agree, I agree. They struggle with this kind of stuff. In fact, when these topics are brought
link |
up, people always get nervous. And I think if you get nervous about it, the way to earn
link |
people's trust is not by like, ooh, don't talk about this. It's just be open, be frank, be
link |
transparent, and also create a culture where it radiates at every level, from engineer to CEO,
link |
that, like, you're good people that have a common sense idea of what it means to respect basic
link |
human rights and the privacy of people and all that kind of stuff. And I think that propagates
link |
throughout the company. That's the best PR, which is, like, over time, you understand that these
link |
are good folks doing good things. Anyway, speaking of social robots, have you
link |
heard about Tesla's Tesla Bot, the humanoid robot? Yes, I have. Yes, yes, yes. But I don't exactly
link |
know what it's designed to do. Do you? You probably do. I know what it's designed to do,
link |
but I have a different perspective on it. It's a humanoid form, and it's
link |
designed for automation tasks in the same way that industrial robot arms automate tasks in the
link |
factory. So it's designed to automate tasks in the factory. But I think that humanoid form,
link |
as we were talking about before, is one that we connect with as human beings. Anything legged,
link |
honestly, but the humanoid form especially, we anthropomorphize it most intensely. And so,
link |
the possibility to me, it's exciting to see both Atlas developed by Boston Dynamics and anyone,
link |
including Tesla, trying to make humanoid robots cheaper and more effective. To me, the obvious way
link |
it transforms the world is social robotics, versus automation of tasks in the factory.
link |
So, yeah, I just wanted to, in case that was something you were interested in, because I find
link |
its application to social robotics super interesting. We did a lot of work with Pepper,
link |
Pepper the Robot a while back. We were like the emotion engine for Pepper, which is SoftBank's
link |
humanoid robot. And how tall is Pepper? It's like, yeah, like, I don't know, like five foot maybe,
link |
right? Yeah, pretty big, pretty big. And it was designed to be in, like, airport lounges and retail
link |
stores, mostly customer service, right? Hotel lobbies. And I mean, I don't know where the
link |
state of the robot is, but I think it's very promising. I think there are a lot of applications
link |
where this can be helpful. I'm also really interested in, yeah, social robotics for the home,
link |
right? Like that can help elderly people, for example, transport things from one location of
link |
the home to the other, or even like just have your back in case something happens. Yeah,
link |
I don't know. I do think it's a very interesting space. It seems early though. Do you feel like
link |
the timing is now?
link |
I, yes, 100%. So it always seems early until it's not, right?
link |
Right, right, right. I definitely think that the time is now,
link |
like this decade, for social robots. Whether the humanoid form is right, I don't think so.
link |
Like, if we just look at Jibo as an example,
link |
I feel like most of the problem, the challenge, the opportunity of social connection between
link |
an AI system and a human being does not require you to also solve the problem of robot manipulation
link |
and mobility, bipedal mobility. So I think you could do that with just a screen, honestly,
link |
but there's something about the interface of Jibo, that it can rotate and so on, that's also compelling.
link |
But you get to see all these robot companies that fail, incredible companies like Jibo. And
link |
even, I mean, iRobot in some sense is a big success story in that it was able to find
link |
a niche thing and focus on it. But in some sense, it's not a success story, because it
link |
didn't build any other robot; it didn't expand into all kinds of robotics.
link |
like once you're in the home, maybe that's what happens with Amazon is they'll flourish into
link |
all kinds of other robots. But do you have a sense, by the way, why it's so difficult to
link |
build a robotics company? Like, why have so many companies failed? I think it's that you're
link |
building a vertical stack, right? Like, you're building the hardware plus the software, and you
link |
have to do this at a cost that makes sense. So I think Jibo was retailing at, like,
link |
I don't know, $700, $800, which, for the use case, right? There's a dissonance there,
link |
it's too high. So I think the cost of building the whole platform in a way that is affordable
link |
for what value it's bringing, I think that's the challenge. I think for these home robots
link |
that are going to help, you know, help you do stuff around the home, that's a challenge too,
link |
like the mobility piece of it. That's hard. Well, one of the things I'm really excited with
link |
Tesla Bot is the people working on it. And that's probably the criticism I would apply to
link |
some of the other folks who worked on social robots: the people working on Tesla Bot know
link |
how to, they're focused on and know how to do mass manufacturing and create a product that's super
link |
cheap. Very cool. That's the focus. The engineering focus, and I would say you can also
link |
criticize them for this, is that they're not focused on the experience of the robot. They're focused
link |
on how to get this thing to do the basic stuff that the humanoid form requires, as cheaply as
link |
possible, with the fewest number of actuators, the fewest number of motors, increased
link |
efficiency, decreased weight, all that kind of stuff. So that's really interesting.
link |
I would say that Jibo and all those folks, they focus on the design, the experience, all of that,
link |
and how to manufacture it is secondary. But it's like, no, you have to think,
link |
like the Tesla Bot folks, from first principles: what is the fewest number of components, the
link |
cheapest components, how can I build it as much in house as possible without having to
link |
consider all the complexities of a supply chain, all that kind of stuff.
link |
Because if you have to build a robotics company, you're not building one robot, you're building
link |
hopefully millions of robots. You have to figure out how to do that. Where the final thing, I mean,
link |
if it's Jibo type of robot, is there a reason why Jibo, like we're going to have this lengthy
link |
discussion, is there a reason why Jibo has to be over a hundred dollars?
link |
It shouldn't be. Right. Like the basic components of it.
link |
Right. Like you could start to actually discuss like, okay, what is the essential thing about
link |
Jibo? What is the cheapest way I can have a screen? What's the cheapest way I can have
link |
a rotating base, all that kind of stuff. And then you continuously drive down costs.
link |
Speaking of which, you have launched extremely successful companies, you have helped others,
link |
you've invested in companies. Can you give advice on how to start a successful company?
link |
I would say have a problem that you really, really, really want to solve. Right. Something
link |
that you're deeply passionate about. And honestly, take the first step. Like that's often the hardest
link |
and don't overthink it. Like, you know, like this idea of a minimum viable product or a minimum
link |
viable version of an idea. Right. Like, yes, you're thinking about this, like a humongous,
link |
like super elegant, super beautiful thing, but reduce it to the littlest thing
link |
you can bring to market that can solve a problem or that can, you know, that can help address
link |
a pain point that somebody has. They often tell you, like, start with a customer of one.
link |
Right. If you can solve a problem for one person, then that's probably yourself or
link |
some other person. Right. Pick a person. Exactly. It could be you. Yeah. It's actually
link |
often a good sign that if you enjoy a thing where you have a specific problem that
link |
you'd like to solve, that's a good n of one to focus on. Right. What else,
link |
what else is there to actually, so step one is the hardest, but there's other steps as well,
link |
right? I also think like who you bring around the table early on is so key. Right. Like being
link |
clear on, on what I call like your core values or your North Star. It might sound fluffy, but
link |
actually it's not. So, and Roz and I, I feel like we did that very early on. We sat around her
link |
kitchen table and we said, okay, there's so many applications of this technology. How are we going
link |
to draw the line? How are we going to set boundaries? We came up with a set of core values
link |
that in the hardest of times we fell back on to determine how we make decisions. And so,
link |
I feel like just getting clarity on these core, like for us, it was respecting people's privacy,
link |
only engaging with industries where it's clear opt in. So, for instance, we don't do any work
link |
in security and surveillance. So, things like that. And we're very big on, you know,
link |
one of our core values is human connection and empathy. Right. And that is, yes, it's an AI company,
link |
but it's about people. Well, these all become encoded in how we act, even if you're
link |
a small, tiny team of two or three or whatever. So, I think that's another piece of advice.
link |
So, what about finding people, hiring people? If you care about people as much as you do, like,
link |
it seems like such a difficult thing to hire the right people.
link |
I think early on in the startup, you want people who share the passion and the
link |
conviction, because it's going to be tough. Like, I've yet to meet a startup where it was just
link |
a straight line to success. Right. Even, not just startups, like, even everyday people's
link |
lives, right. You always, like, run into obstacles and you run into naysayers and
link |
so, you need people who are believers, whether they're people on your team or even your investors,
link |
you need investors who are really believers in what you're doing, because that means they will
link |
stick with you. They won't, they won't give up at the first obstacle. I think that's important.
link |
What about raising money? What about finding investors? First of all, raising, raising money,
link |
but also raising money from the right sources, ones that ultimately don't hinder you, but, you
link |
know, help you, empower you, all that kind of stuff. What advice would you give there?
link |
You successfully raised money many times in your life.
link |
Yeah, again, it's not just about the money. It's about finding the right investors who
link |
are going to be aligned in terms of what you want to build and believe in your core values. Like,
link |
for example, especially later on, like, yeah, in my latest round of funding,
link |
I tried to bring in investors that really care about, like, the ethics of AI, right. And
link |
the alignment of vision and mission and core values is really important. It's like you're
link |
picking a life partner, right? It's the same kind of. So you take it that seriously for investors?
link |
Yeah, because they're going to have to stick with you. You're stuck together.
link |
For a while, anyway. Yeah. Maybe not for life, but for a while, for sure.
link |
For better or worse, I forget what the vows usually sound like. For better or worse? No.
link |
Through sick, through something. Yeah. Oh, boy. Yeah. Anyway, it's romantic and deep,
link |
and you're in it for a while. So it's not just about the money. You tweeted about going to your
link |
first capital camp, an investing get-together. And then you learned a lot. So this is about
link |
investing. So what have you learned from that? What have you learned about investing in general?
link |
From both? Because you've been on both ends of it. I mean, I try to use my experience as an operator
link |
now with my investor hat on when I'm identifying companies to invest in. First of all, I think
link |
the good news is because I have a technology background, right, and I really understand machine
link |
learning and computer vision and AI, et cetera, I can apply that level of understanding, right?
link |
Because everybody says they're an AI company or they're an AI tech. And I'm like, no, no, no, no,
link |
show me the technology. So I can do that level of diligence, which I actually love.
link |
And then I have to do the litmus test of, you know, if I'm in a conversation with you,
link |
am I excited to tell you about this new company that I just met, right? And if I'm an ambassador
link |
for that company and I'm passionate about what they're doing, I usually use that. Yeah, that's
link |
important to me when I'm investing. So that means you actually can explain what they're doing
link |
and you're excited about it. Exactly. Exactly. Thank you for putting it so succinctly.
link |
I was like rambling, but exactly, that's it. No, but sometimes it's funny, but sometimes it's unclear
link |
exactly. I'll hear people tell me, you know, they'll talk for a while and it sounds cool, like they
link |
paint a picture of a world, but then when you try to summarize it, you're not exactly clear
link |
of what the core powerful idea is. Like you can't just build another Facebook.
link |
There has to be a core, simple-to-explain idea that, yeah, that then you can
link |
or can't get excited about, but it's there. It's right there. Yeah. Yeah. But like, how do you
link |
ultimately pick who you think will be successful? So it's not just about the thing you're excited
link |
about, like there's other stuff. Right. And then there's all the, you know, with early stage companies,
link |
like pre-seed companies, which is where I'm investing, sometimes the business model
link |
isn't clear yet, or the go-to-market strategy isn't clear. It's usually, like, very
link |
early on, so some of these things haven't been hashed out, which is okay. So the way I like
link |
to think about it is like, if this company is successful, will this be a multi billion slash
link |
trillion dollar market, you know, or company? And, and so that's definitely a lens that I use.
link |
What's pre-seed? What are the different stages? And what's the most exciting
link |
stage, or not what's the most exciting, what's interesting about every stage, I guess.
link |
Yeah. So pre-seed is usually when you're just starting out, you've maybe raised a friends
link |
and family round, you've raised some money from people you know, and you're getting ready to
link |
take your first institutional check, like your first check from an investor. And I love this stage.
link |
There's a lot of uncertainty. So some investors really don't like the stage because
link |
the financial models aren't there. Often the teams aren't even, like, formed, it's really, really early.
link |
But to me, it's, it's like a magical stage because it's the time when there's so much conviction,
link |
so much belief, almost delusional, right? Yeah. And there's a little bit of naivete
link |
with founders at that stage. And I just love it. It's contagious. And I love, I love that I can
link |
often they're first-time founders, not always, but often, and I can
link |
share my experience as a founder myself and I can empathize, right? And I can almost,
link |
I create a safe ground where, because, you know, you have to be careful what you tell your investors,
link |
right? And I will, I will often like say, I've been in your shoes as a founder, you can tell me
link |
if it's challenging, you can tell me what you're struggling with, it's okay to vent.
link |
So I create that safe ground. And I think, I think that's the superpower.
link |
Yeah, I guess you have to figure out if this kind of person is going to be able
link |
to ride the roller coaster, like of many pivots and challenges and all that kind of stuff. And if
link |
the space of ideas they're working in is interesting, like the way they think about the world. Yeah,
link |
because if it's successful, the thing they end up with might be very different, and the reason
link |
it's successful might be different too. Actually, you know, I was going to say the third, so the technology is one
link |
aspect, the market or the idea, right, is the second and the third is the founder, right? Is
link |
this somebody who I believe has conviction, is a hustler, you know, is going to overcome
link |
obstacles. Yeah, and I think they're going to be a great leader, right? Like as a startup,
link |
as a founder, often you are the first person and your role is to bring amazing people
link |
around you to build this thing. And so you're an evangelist, right? So how good are you
link |
going to be at that? So I try to evaluate that too. You also, in the tweet thread about it,
link |
mentioned, is this a known concept, random rich dudes, RRDs, and said that there should be like
link |
random rich women, I guess. What's the women version
link |
of dudes? Ladies? I don't know. Is this a technical term? Is this known? Random
link |
rich dudes? Well, I didn't make that up, but I was at this capital camp, which is a get together for
link |
investors of all types. And there must have been maybe 400 or so attendees, maybe 20 were women.
link |
It was just very disproportionately, you know, male dominated, which I'm used to. I think
link |
you're used to this kind of thing. I'm used to it, but it's still surprising. And as I'm raising
link |
money for this fund, my fund partner is a guy called Rob May, who's done this before. So
link |
I'm new to the investing world, but he's done this before. Most of our investors in the fund are
link |
these, I mean, awesome, I'm super grateful to them, just random rich guys. I'm like, where are the
link |
rich women? So I'm really adamant about investing in women-led AI companies. But I also would love
link |
to have women investors be part of my fund. Because I think that's how we drive change.
link |
Yeah. So, you know, that takes time, of course, but there's been quite
link |
a lot of progress. But yeah, for the next Mark Zuckerberg to be a woman and all that
link |
kind of stuff, because that's just a huge amount of wealth generated by women and
link |
then controlled by women and allocated by women and all that kind of stuff. And then beyond just
link |
women, just broadly across all different measures of diversity and so on. Let me ask you to put
link |
on your wise sage hat. Okay. So we already gave advice on startups and advice
link |
for women. But in general, advice for folks in high school or college today, how to have a career
link |
they can be proud of, how to have a life they can be proud of. I suppose you have to give this kind
link |
of advice to your kids. Yeah. Well, here's the number one piece of advice that I give to my kids.
link |
My daughter is now 19, by the way, and my son is 13 and a half. So they're not little kids anymore.
link |
But, but I think it does. They're awesome. They're my best friends. But yeah, I think the number
link |
one piece of advice I would share is to embark on a journey without attaching to outcomes and enjoy the journey.
link |
Right? So, you know, we're often so obsessed with the end goal. A, that doesn't allow us to
link |
be open to different endings of a journey or a story. So you become like so fixated on a
link |
particular path, you don't see the beauty in the alternative paths. And then you forget to
link |
enjoy the journey because you're just so fixated on the goal. And I've been guilty of that for many,
link |
many years in my life. And I'm now trying to, like, make the shift of no, no,
link |
no, I'm gonna, again, trust that things are going to work out and it'll be amazing and maybe even
link |
exceed your dreams. We have to be open to that. Yeah. Taking a leap into all kinds of
link |
things. I think you tweeted like you went on vacation by yourself or something like this,
link |
I don't know. And just going, just taking the leap, doing it, and enjoying
link |
the moment, enjoying the weeks, not looking at
link |
some kind of career ladder next step and so on. Yeah. There's something to that,
link |
like overplanning too. I'm surrounded by a lot of people that kind of, so I don't plan.
link |
You don't. No. Do you not do goal setting?
link |
My goal setting is very, like, I like the affirmations. It's very,
link |
it's almost, I don't know how to put it into words, but it's a little bit like
link |
what my heart yearns for, I guess in the space of emotions more than in, like,
link |
the rational space. Cause I just try to picture a world that I would like
link |
to be in, and that world is not clearly pictured. It's mostly in the emotional world. I mean,
link |
I think about that from, from robots, cause you know, I have this desire. I've had it my whole
link |
life to, well, it took different shapes, but I think once I discovered AI, the desire was to
link |
build what, in the context of this conversation, could be most easily described as basically a
link |
social robotics company. And that's something I dreamed of doing. And well, there's a lot,
link |
there's a lot of complexity to that story, but that's the only thing,
link |
honestly, I dream of doing. So I imagine a world that I could help create, but
link |
there are no steps along the way. And I think I'm just kind of stumbling around and following
link |
happiness and working my ass off, like an ant does, in almost random directions.
link |
But a lot of people, a lot of successful people around me say this, you should have a plan,
link |
you should have a clear goal. You have a goal at the end of the month, you have a goal at the
link |
end of the year. I don't. And there's a balance to be struck, of course, but
link |
there's something to be said about really making sure that you're living life to the fullest
link |
that goals can actually get in the way of. So one of the best, like, kind of most,
link |
what do you call it when it, like, challenges your brain? What do you call it?
link |
The only thing that comes to mind, and this is me saying it, is a mind fuck, but yes.
link |
Okay, okay. Something like that. Yes.
link |
Super inspiring talk. Kenneth Stanley, he was at OpenAI, he just left. And he has a book
link |
called Why Greatness Cannot Be Planned. And it's actually an AI book. And he's done all these
link |
experiments that basically show that when you over-optimize, like, the tradeoff is
link |
you're less creative, right? And to create true greatness and truly creative solutions to problems,
link |
you can't overplan it. You can't. And I thought that was, and so he generalizes it beyond AI.
link |
And he talks about how we apply that in our personal life and in our organizations and our
link |
companies, which are over-KPI'd, right? Like look at any company in the world. And it's all like,
link |
these are the goals. These are the, you know, weekly goals and, you know, the sprints and then
link |
the quarterly goals, blah, blah, blah. And, and he just shows with a lot of his AI experiments
link |
that that's not how you create truly game changing ideas. So there you go. Yeah. Yeah.
link |
But he's awesome. Yeah, there's a balance, of course, because that's yeah,
link |
many moments of genius will not come from planning and goals, but you still have to build factories
link |
and you still have to manufacture and you still have to deliver and there's still deadlines and
link |
all that kind of stuff. And for that, it's good to have goals. I do goal setting with my kids.
link |
We all have our goals, but I think we're starting to morph into more of these like
link |
bigger picture goals and not obsess about like, I don't know, it's hard. Well, I honestly think
link |
that, especially with kids, it's much, much better to have a plan and have goals and so
link |
on. Cause you have to learn the muscle of, like, what it feels like to get stuff done.
link |
Yeah. But I think once you learn that, there's flexibility. For me, because I spent most of my
link |
life with goal setting and so on, like, I've gotten good at grades and school. I mean,
link |
in school, if you want to be successful at school, yeah. I mean, the kind of stuff in high school
link |
and college that kids have to do in terms of managing their time and getting so much stuff done,
link |
it's like, you know, taking five, six, seven classes in college, that would break
link |
the spirit of most humans if they took one of them later in life. It's like really difficult
link |
stuff, especially engineering curricula. So I think you have to learn that skill. But once
link |
you learn it, you can maybe, cause you can be a little bit on autopilot and use that
link |
momentum and then allow yourself to be lost in the flow of life, you know, just kind of
link |
or also, like, I work pretty hard to allow myself to have the freedom to do that. That's
link |
really, that's a tricky freedom to have, because a lot of people get lost in the rat race,
link |
and also, like, financially, whenever they get a raise, they'll get
link |
like a bigger house or something like this, so you're always trapped in this
link |
race. I put a lot of emphasis on living below my means, always. And so there's a lot of
link |
freedom to do whatever the heart desires. But everyone has
link |
to decide what's the right thing for them. For some people,
link |
having a lot of responsibilities, like a house they can barely afford, or having a lot of kids,
link |
the responsibility side of that really helps them get their shit together. Like, all right,
link |
I need to be really focused. Some of the most successful people I know have kids and the
link |
kids bring out the best in them. They make them more productive, not less productive.
link |
The accountability thing. And almost something to actually live and
link |
fight and work for, like having a family. It's, it's fascinating to see because you would think
link |
kids would be a hit on productivity, but they're not for a lot of really successful people.
link |
They really are, they're like an engine of efficiency, right? Oh my God. Yeah. It's weird. Yeah.
link |
I mean, it's beautiful. It's beautiful to see. And also a source of happiness. Speaking of which,
link |
what role do you think love plays in the human condition? Love?
link |
I think love is, is, um,
link |
yeah, I think, I think it's why we're all here. I think it would be very hard to live life without
link |
love in any of its forms, right? Yeah, those are the most beautiful forms that
link |
human connection takes, right? Yeah. I feel like everybody wants to feel loved,
link |
right? And one way or another, right? And to love. Yeah. And to love too. Totally. Yeah,
link |
I agree with that. Both of it. I'm not even sure what feels better.
link |
Both, both like that. To give and, to give love too. Yeah. And, and it is like we've been talking
link |
about an interesting question, whether some of that, whether one day we'll be able to love a toaster.
link |
Okay. Get some small. I wasn't quite thinking about that when I said like, love, yeah, love.
link |
That's all I was thinking about. Love and give love. Okay. I was thinking about Brad Pitt and
link |
toasters. Toasters, great. All right. Well, I think we, we started on love and ended on love.
link |
This was an incredible conversation, Rana. And thank you so much. You're an incredible person.
link |
Thank you for everything you're doing in AI, in the space of just caring about humanity,
link |
human emotion, about love, and being an inspiration to a huge number of people in robotics, in AI,
link |
in science, in the world in general. So thank you for talking to me. It's an honor.
link |
Thank you for having me. And you know, I'm a big fan of yours as well. So it's been a pleasure.
link |
Thanks for listening to this conversation with Rana el Kaliouby. To support this podcast,
link |
please check out our sponsors in the description. And now let me leave you with some words from
link |
Helen Keller. The best and most beautiful things in the world cannot be seen or even touched.
link |
They must be felt with the heart. Thank you for listening and hope to see you next time.