Ramesh Srinivasan: “Whose Global Village? Rethinking How Technology […]” | Talks at Google


[MUSIC PLAYING] SPEAKER 1: All right, well,
thank you everyone for coming. Today, we’re delighted to
welcome Ramesh Srinivasan. Ramesh studies the intersection
of technology, politics, and society. Since 2005, he’s been a member
of the faculty and professor at UCLA in the Information
Studies Department. Prior to that, he took
his PhD from Harvard, his master’s from MIT, and
his bachelor’s from Stanford and then went on to become
a fellow at the MIT Media Lab and a teaching fellow
at Harvard’s Graduate School of Design. He’s here today to discuss
his first book, “Whose Global Village? Rethinking How Technology
Shapes Our World.” The book is a sort
of call to action to include marginalized,
non-western countries in the digital revolution. And so without any further
ado, please join me in welcoming to Google
Ramesh Srinivasan. [APPLAUSE] RAMESH SRINIVASAN: Thank you. All right, it’s absolutely
a pleasure to be here. I have a bunch of friends at
Google, two of whom are here, and my new friend David. I wanted to thank you all
for making the time to listen to some of these thoughts. What I’m going to
do is set a timer. And I’m going to go about
45 minutes, maybe even less, and then we’ll
have a conversation about this material. So what you can do also
is if there are thoughts, or reactions, or
comments that you have to any of this
material, by all means, you can tweet about
it and link it to me, and we can have a kind of
asynchronous conversation afterward. And you can also
get in touch with me through my email address
and my website, which are listed on this link right here. OK so many of us know what this
diagram represents, but let me kind of unpack it a little bit. This represents the fiber
optic cable infrastructure that provides sort of the
backhaul, if you will, the very basis of
what the internet is. And, you know, many
of us think about how this sort of environment and
this landscape is shifting. But it’s very important to note
when you see diagrams like this that the internet is
actually not immaterial, but it’s actually material. It’s rooted in the
things we build and the places we connect. But what’s also striking when
you see a diagram like this is how the networks
that are actually drawn by fiber optic cables
that represent the internet, and there’s also, of course,
satellites that are part of the internet, as well,
represent the places that are already, in a sense,
very well connected in our world today, right? So if you look at a map, for
example, of plane flights, you’ll see that there’ll be
some pretty high correlation between plane flights and other
forms of traffic and exchange and these mappings as well. So that explains why
we see, you know, the West Coast of
the United States, for example, well
connected with, you know, Shanghai and
Beijing in this map. It also explains why we see
New York City well connected with London and Western Europe. But what’s particularly notable
when you look at this diagram is what is not
connected and what is not connected to what,
particularly speaking, that we’re talking about just a
couple fiber optic cables that connect the two major
continents of the global south, right, that of Africa
and South America. So what that means is this sort
of global village, so to speak, which is, you know, partly
the title of my book, a term in my book, has not
actually come to be, that if anything, this
vision that Marshall McLuhan and many others had of
sort of an internet that brings everybody together in
equitable, and flattened, and democratic
fashions, has not really been the experience on a very,
very material and specific level itself. So to kind of continue,
let’s think about one of the major
metaphors by which we think about the internet
today and that we think about data today more
broadly, which is the cloud, right? So as we all know, as we
think about the role of metaphor in our lives, metaphors both
open up our understandings, but in some cases, they also
close our understandings, right? So when you think
about the cloud, you say, hey, who has a
problem with the cloud. The cloud is immaterial. The cloud is made of water. Our bodies are made of water. Clouds are everywhere, right? Who has a problem
with cloud itself? But, of course, as we
all know here at Google, but we also know
in other companies, the cloud is very much
transacted and determined by a few companies and their
terms of service in relation to personal data, right? And so the reason I have
these three logos here on this map,
including your own, is because in the
country of Mexico, and I’ll speak about
this very briefly in this talk today, where I’m
currently doing my field work, it’s really fascinating. There have been some
informal studies done, but emails and various forms
of specific communication that are sent between one
neighbor and their neighbor just next to them
are often forms of data that are transacted
on cloud-based servers through these three companies. So we all know this,
but if you send an email to someone who’s
right next to you, that email might actually
go through a server that’s outside your
country and that is part of a private corporate
kind of proprietary terms of service. So I think it’s
important to read our experiences of the internet
more largely and data as well through the specific
mechanisms by which it’s transacted and arranged. The reason why I
say that is we’re at an incredibly
staggering moment, I would argue an
inflection point, in how we think
about everything from how money is made, to
how labor is accumulated, to how services
are drawn, right? We’re at a point right now where
platforms, and specifically the so-called looseness
or flexibility of platform infrastructures,
mean, everything, right? So you think about these
particular examples, and you can substitute
very easily Alibaba, right, the retailer, the
one in yellow there, with Amazon as well. And you can actually realize
that ownership and labor are actually, in a large sense,
something of the past, right? Incredible amounts of money
are accumulated simply by being some sort of
transactional agent online. And I think that that is really,
really important because that of course is brilliant. It’s disruptive. It’s an example of an incredibly
creative and efficient technological
advancement, but it also has massive, massive, massive
social and economic effects. And we need to be responsible
and aware of those effects. So I’m very happy to know
that there is increasingly a movement toward ethics and
AI, and ethics and algorithms, and ethics and thinking about
the effects of technology, not just when you
click on something, but more rooted
in our societies. And we can see that from
everything from our rent prices in San Francisco to
these sorts of questions about labor and the
independent gig economy, right? So I want you to just kind
of keep this thought in mind as we think again about
how not only the cloud has very specific transactional
arrangements that are associated with companies,
but also our experiences of taking taxis,
staying in houses, sharing media content via
Facebook, for example. Those are all
transacted by platforms that are accruing a great
amount of money and power. So the reason why I’m
getting into all this is because we’re
building systems right now here at
Google, and of course in many other capacities,
many other companies, that are so-called learning
from the world, right? And as my colleague Cathy O’Neil
says in her recent book called “Weapons of Math Destruction,”
generally speaking, and you all know this
better than I do, when you create an
algorithm, you’re basically thinking about
what is the recipe, and what is the success, right? You’re thinking about
the set of instructions by which some sort of
deterministic or successful outcome can be programmed
or created, right? And so the reason that
that is problematic, and this is sort of sad but
kind of humorous example around this, is if we
are building systems that people, generally speaking,
treat as neutral and natural, because none of us have
any time anymore, right, we’re all completely
multitasking, and the technologies
allow us to do that, but they also put
us in a position where our minds,
we sort of blindly trust that which we see, right? But if those
systems are learning from a world that already
features various forms of bias, and there’s quite a
bit of science that shows that at the minimum,
racial implicit bias is the norm rather
than the exception, then those systems are
going to create outputs that we might treat
as neutral that actually reflect those
biases and normalize those biases, right? So generally speaking,
you all know this again. What we end up creating
algorithmically is based on who we are,
and what we design, and what software
we build, right? The kind of learning
model that we apply and kind of our ability to
design that learning model, and the data sets, of course,
by which that learning model is learning, and the outcomes
for which we are optimizing those algorithmic systems. So this is sort of, again, kind
of a weird and humorous example of a company out of
Russia called FaceApp, I don’t know how many
of you heard of this, out of St. Petersburg
in Russia that was attempting to go
through the internet and take people’s faces and
make them more attractive. So I think most people think
that President Obama is pretty attractive,
right, at least in 2008, right, until he
had to age by being president of this country. And so President Obama’s face,
right, we know he’s mixed race, but President Obama’s
face was turned white by this algorithmic system. I’m not really sure he looks
that much better thanks to the system than not. OK, so now I want to kind
of touch on other issues just to kind of push
these ideas and kind of engage with you
in a conversation around these issues. I don’t know how many of
you read this article, but it’s quite persuasive for
me, published by the outlet ProPublica. And this is a really,
really interesting issue where a private
company was charged with building a system
that could predict, in theory, the
rate of recidivism. So what does recidivism mean? You committed a crime. What are the chances that you’re
going to commit a crime again, right? And what it turns out
in this particular case, and this company is called
Northpointe, is that the Caucasian
man on the right, who actually had a
felony conviction already on his record, was
predicted as lower risk– so the black man was
basically predicted with a 70% higher rate
to commit a crime again than the white man. And the black man did not
have a felony conviction on his record. Now, the systems
themselves are not directly trained
on racial issues. So it’s important
to point that out. This is far more implicit and
pervasive as a form of learning phenomenon than actually
saying, oh, you’re black, therefore you’re going
to commit a crime again. They were asking
people other questions like, “Was one of your parents
ever sent to jail or to prison? How many of your friends
and acquaintances are taking drugs illegally?” Not sure why they answer
that question correctly. Or “How often did you get in
fights while at school,” right? And we environmental
factors are correlated with racial factors that
might allow this gentleman on the left to answer yes
to some of those questions and this gentlemen on
the right to answer no to these questions. So this is a huge issue because,
increasingly more and more, states and cities
themselves, as I’m going to show in just
a moment, are actually applying various sorts
of technological systems, in theory, to overcome
human bias, right? And in many cases, it is true. Humans are biased, right? But the problem
is, is that systems that we’re treating as neutral
and natural or actually complex reflections of
those forms of bias and highly inductive in
environmental matters. So this is something
that you all at Google, the most powerful
technology company in the world in my mind, has
got to do something about this, help us with these issues,
because the impacts of what you do are profound. And I'm going to
talk a little bit about some of Google’s projects in
relation to this space as well. OK, so this is another example
that’s made the rounds. This is actually research
done by a colleague of mine at the UCLA Anthropology
Department, who’s built neural network
machine learning models for the Iraq battlefield. He had Pentagon-funded research. He also was building
learning models based on archaeological data. And, now, there is an attempt
to apply some of these kind of, if you will, learning
ontological models to the theme of predictive policing. And I think all of us have
seen “Minority Report.” So we know this
idea of predicting whether a given crime in this
particular case is gang related or not, right? And we all know
that gang related itself is a quite murky
definition, right? Like, what is and is
not considered a gang? Is it simply a yes or no? Are there gangs that might be
construed more as communities? Or are there gangs that
are actually violent and a threat to society? So this is actually
being potentially implemented by the Los Angeles
Police Department right where I live in Los Angeles,
a predictive policing system, again, that will–
what this will do is it will be fed crime reports. It will actually develop– it's
called partially-generative research– partially-generative algorithms. And in the absence of
a written description, the neural network
will generate new text, an algorithmically-written crime
report that isn’t actually read by anyone, but it’s supposed
to provide context to a police report, and then it’s turned
into a mathematical vector that will relate to the prediction
of whether a crime is gang related or not.
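Here is a rough sketch of the kind of text-to-vector-to-prediction step being described; this is my own reconstruction for illustration, not the LAPD or Brantingham pipeline, and every report and label below is invented:

```python
# Free text about an incident is turned into a numeric vector, and a classifier
# trained on past labels emits a "gang-related" score. Reports and labels are
# invented; the point is only how the pieces connect.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_reports = [
    "altercation near corner store, two groups involved",
    "vehicle break-in reported overnight in parking lot",
    "dispute between neighbors over noise complaint",
    "group confrontation following earlier retaliation",
]
train_labels = [1, 0, 0, 1]  # past analyst judgments: 1 = flagged gang-related

vec = TfidfVectorizer()
X = vec.fit_transform(train_reports)          # text -> mathematical vector
clf = LogisticRegression().fit(X, train_labels)

new_report = "confrontation between two groups near the corner store"
score = clf.predict_proba(vec.transform([new_report]))[0, 1]
print(f"predicted 'gang-related' probability: {score:.2f}")
# Whatever patterns (or biases) sit in the historical labels are now encoded
# in the classifier, and nobody reads the intermediate representation.
```

And so, of course,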
as you can imagine, and I’m kind of
implying here very– I don’t even know
if I’m implying, I’m being pretty
clear, this, to me, is problematic, because
it doesn’t really consider all these sort of
qualitative social issues in the construction of
an algorithm, let alone considers how such an
algorithm might be regulated, how there might be oversight,
how it might be checked, the checks and
balances, and so on. So often I get the response to
this, well, people are biased. Judges are biased. The police department is biased,
no question about it at all. But for me, setting
this up as kind of algorithmic
bias or human bias is a bit of a false
positive, right, or it's a false comparison. The question is, what
kinds of technologies should we aspire to work? What kinds of relationships
between humans and machines are ethical and healthy for
us in our society, everything from labor, to questions
of criminality, to questions of racial bias,
which I know none of us really want to
perpetuate, right? So one of the junior authors
to this research with Jeff Brantingham, my colleague,
presented this research before this kind of made
the rounds in the media, and he was asked quite a
few critical questions, if you will. And he left the room saying– he kind of ran out
of the room saying, I’m just an engineer, right? But as we all
know, and you know, this is part of our
engineering education, we’re seeing a movement toward
ethics courses in our AI, in our AI classes as well,
to integrate the two, and other colleagues are
saying we should integrate moral philosophy classes
with computational sciences classes or
anthropological classes with computer science classes. I would love to co-teach
classes like this at UCLA. And we struggled
to do this frankly. This is a pervasive issue. But we know that we’re
not simply engineers, that what we’re building has
massive impacts on society, and what we build has a lot
to do with the cultures, not only of ourselves, but the
organizations of which we’re a part, you know, we
all know that, right? So I guess I just want to kind
of use these examples to talk about the world that we’re
constructing right now and what happens when we give
private organizations that are quite secretive, because
they have intellectual property and copyright issues, all the
power in public life, right? There is that blurring. And I think, to my
next point, this is why so many people are
upset at Facebook today. OK, so this is
research that actually informs some of the
algorithms that were used by Cambridge Analytica. I don’t think I need
to introduce Cambridge Analytica to this crowd. I’ve been able to do
some interviews with them for my new book. I’ll talk about
that in a moment. But this relates
to research done by a scholar at the
Stanford Business School right up the street. I highly recommend
you having him here. He’s a very nice guy
named Michal Kosinski. And Michal has
made the argument, and it was a real barnburner
when I brought him to my university for people
to ask him some questions, that with 10 likes,
this is on Facebook, a computer knows you better
than a colleague, with 70 likes, it knows you better than
a friend or a roommate, 150, better than a family
member, and 300 likes, it knows you better
than a spouse.
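As a toy illustration of the likes-to-traits idea, here is a minimal sketch; the users, pages, and trait labels are fabricated, whereas the published studies used millions of real profiles with questionnaire-based labels:

```python
# Each user is a sparse vector of page likes, and a simple model predicts some
# trait from those likes. Everything below is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["hiking_page", "late_night_tv", "chess_club", "fast_cars", "poetry_mag"]
likes = np.array([        # rows = users, columns = whether they liked that page
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0],
])
trait = np.array([1, 0, 1, 0, 1, 0])  # e.g. a self-reported binary trait label

model = LogisticRegression().fit(likes, trait)
new_user = np.array([[1, 0, 1, 0, 0]])   # a handful of likes as input
print("predicted probability of the trait:", model.predict_proba(new_user)[0, 1])
# With enough users and enough likes, models like this can rival the judgments
# of friends and family, which is the comparison described above.
```

But it's not simply about likes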
on Facebook because, you know, to be honest, I don’t
think my sort of lens to my personality is that well
revealed through Facebook. There is a lot of things I
believe that I don’t share on Facebook and may not
even be that implicit, but it’s the
aggregation of data. So what we think of as
disaggregated activities in our lives are
being aggregated. They’re being brought
together, right? I never thought that what
I might buy at RiteAid would somehow have
any relationship to what I post, or like,
or comment when I connect with Alex on Facebook,
right, or with Matt on Facebook, both my
Facebook friends, right? So I think that those
parts of my life are independent of one
another as a human being. I think that I want the right
to be who I am, where I am. When I go home, I'd like to be who I
am in a different way, in a different fashion, than
I am when I’m here right now. Who I am in front
of my mother is different than who I am in
front of my friends, right? I’m kind of the same
person, but still. My mom’s right here. So I think these are things
that are really, really important to kind
of consider, right, like the power over our
own agency as human beings. And more generally, the idea
of knowing someone, knowing someone, true, that’s
true on some sort of snapshot in a specific set
of activities, I might be able to
predict one’s behavior. But shouldn’t I have
the power to change? Should I have systems
that reinforce certain types of behavior,
feeding me information that then I have no power
to necessarily overcome, because we all are conditioned
by systems of all forms, not just technological
ones, school systems, et cetera, policing systems,
governmental systems that influence us in our lives. So we are socialized by the
systems of which we’re part. To think that people
somehow can overcome those forms of conditioning
is naive, right? So the question for me
is really multiple questions about morality, about ethics,
and about the agency of us as human beings. So what I’m getting at is
sort of a set of these issues that we’re starting to
normalize in our lives, and I’m going to give you one– well, kind of two very quick
final examples of this. This is a story that’s
made the rounds based on original research that
was published in “Science,” you know, which is like
the dream for all of us to publish in the
journal “Science.” This is related to the
word embedding algorithm. I don’t how many of you
know about this research, but it’s very much
worth knowing. So this research has been built
into automated systems that are being piloted by various
companies for human resources labor. As we all know, automated
labor, or at least semi-automated labor,
through computational systems are going to replace, if
not supplant, or perhaps create new opportunities,
I hope, for people’s jobs. At the end of the day, we need
to be building technologies that serve all of
us as human beings, even on a kind of
company-based level, right? We shouldn't be losing
people in our jobs unless the goal is to lose
people and cut costs, right? So what these
systems were doing, word embedding in
particular, has had a 50% bias on
its CV scanning between African-Americans’
names and Caucasian names, even when the CVs are normalized
around other variables for being more or less
equivalent in terms of their level of quality. I mean, how that’s even done
is a very interesting question. But what is
essentially happening is, again, not a problem
necessarily with the system itself, but what
it’s learning from and the overall environment
around which it’s kind of computing, right? So words– so for example,
words in this system, and generally in the
corpus of the English language, like female and woman
were more closely associated with arts and
humanities, surprise, surprise, while male and men were
closer to math and engineering professions. European-American
names, in terms of the outputs of these systems,
were more closely associated with words like gift and happy. And African-American
names were more commonly associated with
unpleasant words. So the issue is that unless
algorithms are explicitly programmed to address
some of these issues, they’re going to be riddled
with the same social prejudices, right? And so the question really
is, how do we do better? 840 billion words were
used to train the system, and it was providing
these types of outputs.
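For a sense of how such associations are measured, here is a back-of-the-envelope version of the association test. The tiny vectors below are made up for illustration; the actual study runs over embeddings trained from a web-scale corpus on the order of hundreds of billions of tokens:

```python
# Compare how close target words sit to one attribute set versus another in an
# embedding space, using cosine similarity. Toy 3-d "embeddings" only.
import numpy as np

emb = {
    "engineer": np.array([0.9, 0.1, 0.0]),
    "poet":     np.array([0.1, 0.9, 0.0]),
    "man":      np.array([0.8, 0.2, 0.1]),
    "woman":    np.array([0.2, 0.8, 0.1]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cos(emb[word], emb[a]) for a in attr_a])
            - np.mean([cos(emb[word], emb[b]) for b in attr_b]))

# A positive value means the word leans toward the first attribute set.
print(association("man",   ["engineer"], ["poet"]))
print(association("woman", ["engineer"], ["poet"]))
```

So what's going on here? Well, four or five factors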
that I want to identify. First, questions of
diversity and inclusion, which is really
central to this book that you now have in your
hands around the design and engineering of technologies. Second, our data sets,
as I mentioned already, what kinds of data sets are
these systems learning from? Third, and this is really
central to my work, the ontological learning models. That’s a fancy term. What do I mean by that? When I say I know
something, how do I articulate that which I know? When I say– when I say
I believe something, that’s epistemological. When I want to articulate
that knowledge, that’s ontological,
how I express. So, similarly, the
systems we build can be built with particular
learning models that might be more inclusive where
people who might be targeted or unfairly excluded
by such systems can be part of the process
of designing and developing such systems, et cetera. And then I also
really believe that we need independent
oversight, particularly when these are
privatized services being used in public context, right? And the best example of all is
obviously our election, right, and elections across the world. So I write in my new
book about examples like this from
Myanmar, Sri Lanka, India, the Philippines
with Rodrigo Duterte and the cyber troops
issue, all of these issues, right, and a lot of this
implicates Facebook. But, of course,
since you’re building a technology, many
technologies, that are accessed by
2-plus billion people. You guys should tell
me the actual numbers. I think it’s 2-plus
billion people. All I do is scrape
journalism, right? Pretty impactful stuff. So we’re going to have to think
about, what are the values that influence how we build systems? And how do we think
about the other? Especially because
the places where Google, and Facebook,
and other big companies are going to expand
in their reach are, guess what, of course,
the global south, right, places where there are more people,
higher density of population, less connected, right? But who are those people? How do we understand them,
not just as users, but as people who have voices, and
values, and certain histories, and things to say to us, you
know, like that they’re living, they can express and communicate
with us as we build and design such systems and as we
think about the effects that those systems pose. So the reason I
show this one is I think this one made the
rounds, but of course, there’s also a little bit of a movement
here in Silicon Valley, and I’ve interviewed Sam Altman
from Y Combinator about this from my new book and a couple
of other folks, concerns about this kind of
superintelligence that might be emerging
on a recursive level as various sorts of
algorithmic systems start to learn, not only
internally, and develop their own sorts of forms of
complex adaptive behavior, but they learn from one another. I think this was
totally overhyped. It was basically an
interface language that was developed between
two AI bot systems. I don’t know if you
know this story, but I still think that it
speaks to all the different ways in which we’re talking
about and thinking about AI, and it means a lot of
different things, right? Automation is not
the same as AI. Specialized AI is different
than generalized AI. AI and biases, like the
examples I gave earlier, is different than
this kind of AI. So these are all things that
are worth thinking through in my mind. So of all people, my favorite,
Tucker Carlson, that's a joke– I'm showing my
biases, that’s fine, we all should just
show our biases, right, has actually been speaking
to cognitive scientists. He is concerned about the
effects of this centralization of power around technology
on our political system because he is such a
great patriot, right? So he did– he interviewed
a colleague of mine, who has been quite critical
of Google, full disclosure, named Robert Epstein. And Epstein’s research has
been looking at search results in large-scale controlled
studies in different parts of the world– Australia, India, and
the United States. And he’s been doing really,
really interesting work, and he published one
piece in the “Proceedings of the National
Academy of Sciences,” and he was the head
of “Psychology Today.” And what this research
showed, and this might be very
interesting to you, that if you searched
for political issues with Google Search results,
simply changing the ordering of those results could
flip undecided voters from 50-50 to 90-10. Let me explain what that means. So Hillary Clinton-Donald
Trump, let’s say they were both, you know, I’m totally undecided
between the two of them. And I have Trump at 1 and
3 and Hillary at 2 and 4. I could flip it from
that to Hillary 1 and 3, Trump 2 and 4, 50-50 to 90-10. So that’s really,
really interesting. Did that make sense? Basically, how you
order, even if it seems sort of not
necessarily that significant, could actually largely
affect people’s voting– their kind of– their
biases toward voting.
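A rough toy simulation of the underlying mechanism, my own construction rather than Epstein's methodology, with assumed attention probabilities:

```python
# Undecided users mostly read what sits near the top of a result list, so the
# same two candidates get very different exposure under different rank orders.
import random

random.seed(1)
READ_PROB = [0.40, 0.25, 0.20, 0.15]   # assumed chance a user reads ranks 1..4

def simulate(order, n_users=100_000):
    """order lists which candidate occupies each rank, e.g. ['A', 'B', 'A', 'B']."""
    votes = {"A": 0, "B": 0}
    for _ in range(n_users):
        read = [cand for p, cand in zip(READ_PROB, order) if random.random() < p]
        a, b = read.count("A"), read.count("B")
        # An undecided voter leans toward whoever dominated what they happened to read.
        winner = "A" if a > b else "B" if b > a else random.choice("AB")
        votes[winner] += 1
    return votes

print(simulate(["A", "B", "A", "B"]))   # candidate A holds ranks 1 and 3
print(simulate(["B", "A", "B", "A"]))   # the ordering flipped
```

So I think that's a significant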
issue because, again, that’s a sort of transgression
into the public space. And one of the most
famous examples of this from a few years ago,
you all may remember this, was Facebook’s A/B testing on
the Get Out the Vote button. Do y’all remember this? So they kind of said, I voted,
like, click on this, right, remember, David? It’s like, you know, click
on this to say like, I voted and share this
with your friends. So it turns out that that
influenced Facebook users who were seeing that, right,
the A group, to vote at a 0.5% higher rate, which
doesn’t seem like a lot, but we all know when we’re
talking about millions of people, that’s
very, very significant. So simply Facebook
saying you know, I voted, will push voting to that
significant an extent. And we know that those
numbers are way, way higher than the amounts by which
Trump won in Ohio, Michigan, and Pennsylvania, in
fact, put together. I simply– I just recently
looked at those numbers. So we know– I mean, to me, that’s
not necessarily a critique of Facebook,
but it’s meant to understand that
these platforms have a profound impact
on our behavior.
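The arithmetic behind that point is simple; the figures below are placeholders to plug real numbers into, not official Facebook or election statistics:

```python
# Back-of-the-envelope sketch: even a fraction-of-a-percent lift matters at
# platform scale.
def extra_votes(users_shown_banner: int, turnout_lift: float) -> float:
    """turnout_lift is the absolute increase in turnout rate, e.g. 0.005 for 0.5%."""
    return users_shown_banner * turnout_lift

# e.g. 60 million users seeing an "I voted" prompt at a 0.5% lift:
print(f"{extra_votes(60_000_000, 0.005):,.0f} additional votes")
# Compare a figure like that against the vote margins in closely contested states.
```

So this is all to kind of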
get at some of the work that I’ve been
doing in this space. So when this book came out
that you have in your hands, I started to make the rounds. I think it’s because
I was concerned about some of these
global, and cultural, and even these blurring into
like political and economic questions that I’ve
already brought up in this introduction. So of all people, “Morning
Joe,” MSNBC’s “Morning Joe,” had me on like two or three
weeks right after the election. And Mika Brzezinski, I literally
saw her jaw drop in front me when I spoke about concerns
with groups like Cambridge Analytica, right, how
when you create sort of so-called open
ecosystems for advertisers, but closed off
for users, you can have these pernicious
effects on our democracy or what we fight for and
aspire to in a democracy. And, of course,
this is not simply an issue with
Cambridge Analytica. It’s far more pervasive with
Russia and other examples like this, right? And, really, it’s not really
even about the effects that these folks have
on our elections. We’re not really sure
with Cambridge Analytica. I’ve yet to see a
solid piece of research to show that Cambridge Analytica
actually affected the election. And Cambridge
Analytica’s content is not the same as fake news. It’s more like framed news
based on psychometrics, right, you know like Michal’s
work is influential there. He’s not happy
about it, but still. So this is also meant to
make the point that we also need to understand, as we
design and build systems, what we value in terms of
advertisers versus users, right, and how do
we balance those? How do we think about not
just short-term effects of the technologies
that we build, but also these kind of
systemic effects on our larger political systems? And I think that that’s
really, really important. And I am appreciative that a
number of folks from Google were working with the
Obama administration afterward in helping
advise them around this. And I hope that there is a
devotion to public and civic life and democratic ideals
from our tech companies because you all are
really powerful. So all right, so my
colleagues Zeynep Tufekci has been making the rounds. Some of you might know her work. She is quite a public critic
of what’s going on these days. She in particular has been
concerned about some algorithms that have been
populating YouTube in particular, especially like
the recommendation systems. I’ve yet to see very
strong empirical work showing that the autosuggestion,
is it called autosuggestion, autosuggestion
feature of YouTube actually impacts
one’s perceptions. But it’s hard not to believe
that that is the case, because I’ll speak
for myself and speak for some cognitive science
studies showing that what we see impacts what we believe. But Tufekci’s critiques, which
I’m sure have been heard here, are that– maybe not here, but
YouTube, have been– are of her own experiences. And it is anecdotal
rather than large-scale quantitative evidence. And I think that that’s
important to note. She is– you know,
it was like, let me watch the Make America
Great Again rallies on YouTube. And then she’s– she makes the
point that she’s started to see more and more radicalized
content, right? So like what was suggested
after seeing a Make America Great Again, Donald
Trump rally, might have been content that was a
bit more Alt Right, or White nationalist, or heaven
forbid, neo-Nazi, right? And, of course, it has nothing
to do with Facebook, Google, or any other company
being pro Alt Right. Of course not, right? If anything, as we saw with some
of the interrogations of Mark Zuckerberg by our
Congress, who don’t seem to understand technology
very well, at least the ones– at least in the
Senate generally– I don’t know why their
staffers weren’t schooling them on some basic
concepts of computing. But anyway, as we saw, the
Republicans really, really did not like Mark Zuckerberg. I mean, they admired
I think his wealth, but I don’t– it didn’t look
like they really liked him very much. And that’s because,
generally speaking, we are seeing that Silicon Valley
tends to be more Democrat, you know, of these two parties. That’s just sort of been
the case historically. So I guess the question is, how
do we sort of optimize, again, recommendation systems, in
this case, with YouTube, to balance again the
goal of maintaining a tension, as my colleague
Tim Wu points out in his excellent book “The
Attention Merchants,” which he describes Google
within this as well, and therefore the
gathering of data, and click throughs, and
so on, with what might be, if you will, a vegetable. You know, I want to– I like I like my
information junk food. We all do, right? I like my french fries,
but I also want that salad. Or maybe I don’t
want that salad. I want that salad
after I eat it, and I need that salad, right? So this is kind
of a point that’s been really built into a
lot of the conversations that we’ve had about technology
for quite a bit of time, including work by my colleague
Eli Pariser and his work, “The Filter Bubble,”
several years ago, that even Obama referred to
in his interview with David Letterman
that was up on Netflix. So these are examples of these
effects in different parts of the world. I won’t go too much into this. In Sri Lanka, Facebook was using
similar techniques, I suppose, to privilege more
hysterical content. There’s not a strong,
independent, journalistic media. There are not very strong
governmental institutions in Sri Lanka that can
regulate and push this back. And that actually created
real-world violence. This is a “New
York Times” article that came out fairly recently. So these battles are
very significant, not just about
data and attention, but also about the
internet itself. This is me on Joy Reid a
few weeks ago, actually, a couple of months ago, talking
about net neutrality itself, right? And so there are
some conversations which I’m very happy that
Google supports net neutrality. But there are conversations
even about that because at this point,
many, many different forms of our democratic activities,
especially including online, are under attack. So I wrote a piece when
my book came out concerned about some of these issues. And I made the very simple point
that how we learn about others culturally is very determined
by the instruments of search and what we see in terms of
the ordering and filtering of search results that
then influence how we know and understand one another. And we even know
with Wikipedia, I don’t know if you
know this research, that it tends to still
be very asymmetrically authored by men and
specifically men from Europe and North America. And that’s generally a story
of technology at this point, right? But it doesn’t have
to be that way. So I give the very simple
example in this article I wrote for “Quartz,” and
also in my book itself, that we can do something
better about this. One way to– and
the example I gave is of the country,
just kind of random, of the country Cameroon
in West Africa. I was invited by UNESCO
to visit that country. And I search for Cameroon. Of course, I use
Google like everybody. And the first couple of results
I get on Google, in fact, my first page of
search results, I didn’t see a single
web page from Cameroon, which is actually pretty high
in internet penetration rate, quite educated, and
anglophone, and francophone. So it’s not just a
francophone country. So why was it that
I was seeing that? I thought Google
might know me better. But in this
particular case, I was seeing content that likely
was correlated to that, which was more popular and
validated perhaps by PageRank and various forms of
back linking, you know, mass validation, which is not a
problematic concept in my mind. But mass validation
is not always what should count
as knowledge, and we know what comes up in our
search results essentially are treated as
truth and knowledge by a large percentage of users,
and in many cases, by myself.
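For readers who want the mechanism spelled out, here is the textbook power-iteration PageRank over an invented link graph. It is only meant to illustrate link-based "mass validation," not Google's production ranking, which blends many other signals:

```python
# Pages that attract many backlinks float to the top, whatever their provenance.
# The pages and links below are hypothetical.
import numpy as np

pages = ["cameroon_local_site", "cameroon_univ", "us_travel_blog", "intl_news_outlet"]
out = {  # out-links per page: most pages here link to the two big outlets
    "cameroon_local_site": ["cameroon_univ", "intl_news_outlet"],
    "cameroon_univ": ["intl_news_outlet", "us_travel_blog"],
    "us_travel_blog": ["intl_news_outlet"],
    "intl_news_outlet": ["us_travel_blog"],
}

n = len(pages)
idx = {p: i for i, p in enumerate(pages)}
M = np.zeros((n, n))
for src, dests in out.items():
    for d in dests:
        M[idx[d], idx[src]] = 1 / len(dests)   # column-stochastic transition matrix

d = 0.85                                        # damping factor
rank = np.full(n, 1 / n)
for _ in range(100):                            # power iteration
    rank = (1 - d) / n + d * M @ rank

for p, r in sorted(zip(pages, rank), key=lambda x: -x[1]):
    print(f"{p:22s} {r:.3f}")
# The heavily backlinked outlets rank highest in this toy graph, even though a
# local page may be the more relevant source of knowledge about the place itself.
```

So I kind of am concerned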
with these issues. I write about these
issues in this book and in my second book, which
came out with my colleague Adam Fish, where we write
about kind of hacker examples, and the Silk Road, and
the Pirate Party, and Iceland. You know, these are all things
you can ask me about later, and even the examples of
powerful uses of technology in the context of
the Arab Spring. I did my field work just kind
of after I wrote this first book in Egypt in the middle
of the Arab Spring looking at what people
were doing with technology, old and new, not
just new technology, also old technologies, and using
projectors, using bedsheets, taking stuff that
they saw on YouTube and projecting it
into public spaces. And all of this is
really, really powerful and has great promise. One of the major arguments
I make in the book that I spoke about earlier,
I referred to earlier, is this concept of ontology. I describe it in chapter– basically in chapter 1,
but I really elaborate it in chapter 3 and chapter
4 based on fieldwork that I did collaboratively
from about 2002 to about 2014 with various Native
American populations, where I was thinking about
how do I build systems from languages, to databases,
to algorithms, to interfaces that those communities
themselves who are my partners could help design with me? And I’m not nearly the
engineer that most of you are, and I’m not nearly the engineer
that Google is as a whole. But I was attempting to deal
with some of these issues and think about some of these
issues in the context of trying to support communities
who are on the other side of the digital divide and
certainly simply being connected to the internet is not
good enough to actually support their voices and their
agendas, especially people who are, in
many cases, have had a great deal of cultural
and political trauma. So one example I give in
chapter three of the book is an ontology that I designed
with these communities, something like what we
call in information studies an information architecture. So folks in these
communities were building. This was a project I did with
19 Native American reservations between 2003 and 2005. It’s described in
chapter 3 of the book. And in that part of the book, I
describe how these communities are attempting to take advantage
of this internet infrastructure that they have built and
owned with the help of Hewlett Packard and actually
build a system for them to communicate with one
another, and to build their own local economies, and
preserve culture, et cetera. And so they were building
and designing the system according to this architecture,
again, pretty hierarchical, but they were naming
categories back when we used to think
about tagging, and Web 2.0, and all this kind of like
what Tim O’Reilly wrote about back in the day, and
they were basically putting content and
sharing it with one another and deciding what
categories were relevant and how those categories would
be related to one another. So what this is, is the power
of naming and giving people the opportunity to classify
their own corpuses, if you will, within their
communications systems to support hopefully
one another.
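A minimal sketch of what a community-authored information architecture can look like in code; the category names here are invented placeholders, not the actual ontology built with those communities:

```python
# Categories, their relations, and the tagging of content are all supplied by
# community members, not imposed by the system designer.
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str
    children: list["Category"] = field(default_factory=list)
    related: list[str] = field(default_factory=list)  # cross-links chosen by the community

ontology = Category("Community Life", children=[
    Category("Language & Stories", related=["Elders"]),
    Category("Local Economy", related=["Land & Water"]),
    Category("Elders"),
    Category("Land & Water"),
])

def tag(content_id: str, category: Category, index: dict[str, list[str]]) -> None:
    """Community members attach their own content to their own categories."""
    index.setdefault(category.name, []).append(content_id)

index: dict[str, list[str]] = {}
tag("video_017", ontology.children[0], index)  # a story filed under "Language & Stories"
print(index)
```

Another example I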
give in the book in a large amount of
detail in chapter 4 is of work I did with the Zuni. This has all been funded by the
National Science Foundation. And in this work,
I describe what happens when a group of people,
and a Native American community in New Mexico, are able
to build and design, with my help, a digital museum
system where they can actually get access to images
of objects that were taken from their
communities that are sitting in museums all over the world. And this is another
part and promise of the internet, the recovery
and hopefully promotion of cultural heritage
projects, which I’m also very interested in. And one of the coolest
parts of the book is the story I tell
toward the end of chapter 4 of the Zuni coming together
to look at this system that we’ve built where
1,500 approximately objects from different museums are now
being made available to them as images. And look what’s happening here. It’s not one person per
computer, and it’s not like– it’s not this kind of
individualized experience. To look at an object, the whole
community, and different folks within the community of
different age groups, and different what we call
kiva, or kinship groups, come together. And as they’re
looking at an object– in this particular case, I’m
talking about a project called the Anahoho, an
image of an Anahoho, which is like a kachina, people
are putting their fingers in their ears. And other people are
leaving the room. Other people from other
parts of the community are coming in the room. And that’s because
knowledge at Zuni is really based on who
you are in the community. And I want to make
that point to make the point that as we think
about Google in relation to diverse cultures in
different parts of the world, we have to understand those
cultural norms and values that are part of how people–
what people know and how people
share information. And that’s a really important
ethical question as well. So the reason people were
putting their fingers in their ears is
they were not yet at the point in the community
in terms of like kind of a– what do you call that? Like a serious
ceremonial sort of kind of becoming– initiation, right? To actually know that
knowledge, right? And at different points– so looking at one
object and getting them to share information for
themselves and with the museums took like one and a half hours. And they would
change the languages with which they would
speak about the object. So it’s really, really powerful. To me, this is an example
of technology as a catalyst for a culture as it is, right? And so my book is
not just critical, but it’s really concerned
with these questions of how can we develop
and build technologies that serve people, that serve
the economic, and political, and cultural interests
of those communities? So just to kind of
like quickly wrap up, this is perhaps like the
coolest project in the world that I’m working on right now. I’m like really
excited about this. This is– this is work I’m doing
with the group Rhizomatica, who by the way, recently
won a Google prize. So thank you for
supporting them. They are the largest
community-owned cell phone network in the world
in Southern Mexico. They’re in the mountains
all around Oaxaca, which is like a magical,
magical place, one of the most biodiverse and
culturally-diverse parts of the world, dozens of Zapotec,
Mixtec, and Mixe languages. And these communities
were not served by big telecom, specifically
Carlos Slim and Telcel, who is one of the richest
guys in the world, right? So they said, hey, we want
these communication rights. Our constitution legitimates
the communication rights for indigenous
peoples in Mexico. We’re going to build
our own networks. So they are building their own
collectively-owned networks. And we see examples of
that in the context also of net neutrality here
in the United States in places like Detroit,
in Redhook in Brooklyn, in other parts of
the world as well. The largest kind of
collectively-owned, sorry, internet network, mesh
network, in the world right now is a place called
Guifi.net in Catalunya. And Guifi is how they say Wi-Fi,
so I think that’s really funny. So this is really, really
an amazing project. I’ve been doing ethnographic
work out in these mountain regions, looking at how these
networks are being built, what’s produced out of
collective ownership, does it support
people’s economies? How does it support
people who speak languages that have
never been written because of colonial histories? All of this, right? What can emerge
out of the rhizome, the rhizomatica, the rhizome
that is this project, right? So the project called
Rhizomatica, in Spanish, they also call it Telefono
Indigena Comunitaria, Indigenous Community Telephones. So it’s a really,
really amazing example of how not just we as human
beings, but our communities themselves can take
power over technology. And as I said, it’s
really cool that Google has been supporting this
project without trying to own it or own its data. So I think that
that’s to your credit. So the last thing I’ll
just mention really briefly is I’m starting to write. This is from my new
book, my third book I’m working on right
now, which will be a trade book with
the MIT Press called– I think I’m going to
call it “We the Users,” and I’m writing
about examples of how we can head into a
technology future that is human and
collective that makes sure people have enough
money, make sure people have portable health benefits. So in this context, I have
been talking to people also like Sam Altman
from Y Combinator about the universal
basic income movement. I did a really cool– so I’ve looked at this
example from Sweden, and I did a really cool
interview with this guy. He’s a Stanford graduate, one of
the youngest major-city mayors in the country, I interviewed
him yesterday, Michael Tubbs. He was on the Bill Maher
show just like two weeks ago. And Michael is
only 27 years old. He’s a Stanford Rhodes scholar,
comes from a single-parent home. His mother was near
the poverty line, and he’s implementing the
universal basic income project with the help of some folks
connected to the Obama administration,
actually, Google’s been kind of on the
side involved with this, to try to think about
what happens when you give people, poor
people, especially in Stockton, $500 a month. What do they do with
the money, right? And as an experiment,
not necessarily as some sort of legitimate
path forward, but as an experiment itself. So these are a couple of the
things I’m writing about. In my new book, I’m
writing about where we can go to kind of balance
flexibility, creativity, innovation, as it’s defined
here in Silicon Valley, with innovation in relation
to people and their lives and really, more than anything,
the implications of all this on our world in terms
of political equality, and democracy,
economic equality, and really allowing
our diverse world to maintain its diversity
through the technologies we built, right, rather
than flatten or reduce that diversity. I think all three of those– I’m kind of
overwhelmed because I’m trying to read about all
three at the same time, and I’ve had an opportunity to
talk to some really important and major figures, Vicente Fox,
the former president of Mexico. I’m talking to David Axelrod
from Obama 2008, and 2012, and on CNN next week, Van Jones,
other folks, Elizabeth Warren. I’ve had a chance to get these
people’s voices very briefly in the book itself and
also a number of folks here in Silicon Valley. I spoke to the head of Diversity
Inclusion here at Google. You’re the only
major tech company that’s talked to me so far. So thank you. Thank you for that. And I’m also writing about
examples, not just like these, of Michael and the Swedish
example, but also of the AI lab in Makerere
University in Uganda, which is so interesting. It’s an example of a company– sorry, an AI laboratory
that’s attempting to build artificial intelligence
models that are somehow supportive of Ugandan
interests, but also are seeded with Ugandan
data, as well as built around learning
models that are hopefully expressive of the cultures and
communities of Uganda itself. So this is a whole body of work. I would love to come back and
share more from this new book when it comes out. And I’m really excited to
get some of your feedback and thoughts on all of this. I tried to throw a
lot at you, but that’s because I am really excited
to hear your thoughts on this. Thank you for having me. [APPLAUSE] AUDIENCE: OK, I actually
have a question. So you mentioned the power
of these algorithms because of the scale that they have. And so when you take
that into account and then also take into account
the inevitable fallibility of humans who are creating
them, what can companies do to hold themselves
accountable ethically, like, you know,
maybe by training its employees a certain way
or whether there needs to be some sort of auditing process? Have you thought
about should that be– should the onus for that be
on the companies themselves, or should it be government? Yeah, maybe just talk a
little bit about that. RAMESH SRINIVASAN: Yeah,
thank you for asking that. I mean, I appreciate
the sympathy for what I’m trying to
say in that question. So absolutely, I
think that they’re not only should be kind of
internal auditing processes. I think that– I think one of them more
brilliant design companies here in Silicon Valley
was IDEO, and they were really smart for
having anthropologists and sociologists in the
room with their engineers and their designers. And I think that
having teams that are more inclusive
and multi-disciplinary in the design, and
kind of engineering, and even evaluation process
is really important. I think that one
proposal I’ve had that I’ve been talking about
for the last year or two is even giving folks an
opportunity on a heuristic level, right, like a
descriptive level, of helping them understand why they
see what they see, right? I understand you can’t give up
private software code, right? I don’t think people
are interested in that, nor would they even
understand it, right? Now, I don’t think almost
any of us would, right? But at the same time, you
can explain to people, this is optimized
for this output. And here are some
other options in terms of what you could see based on
other sorts of, if you will, language or values by which
something is optimized for.
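A minimal sketch of that idea, with invented items, scores, and weightings; the point is only that the objective behind a ranking can be surfaced and swapped:

```python
# Show users what a result list is optimized for and let them pick an alternative
# weighting. Everything below is hypothetical.
from typing import NamedTuple

class Item(NamedTuple):
    title: str
    engagement: float        # predicted clicks / watch time
    source_diversity: float
    locality: float          # how local or first-hand the source is

CATALOG = [
    Item("viral commentary video", 0.95, 0.20, 0.10),
    Item("local newspaper report", 0.55, 0.70, 0.95),
    Item("academic explainer",     0.40, 0.90, 0.30),
]

OBJECTIVES = {  # user-visible "what this ranking optimizes for"
    "engagement":  {"engagement": 1.0, "source_diversity": 0.0, "locality": 0.0},
    "diverse mix": {"engagement": 0.4, "source_diversity": 0.4, "locality": 0.2},
    "local first": {"engagement": 0.2, "source_diversity": 0.2, "locality": 0.6},
}

def rank(objective: str) -> list[str]:
    w = OBJECTIVES[objective]
    score = lambda it: sum(w[k] * getattr(it, k) for k in w)
    return [it.title for it in sorted(CATALOG, key=score, reverse=True)]

for name in OBJECTIVES:
    print(f"optimized for {name!r}: {rank(name)}")
```

But I also believe when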
we talk about technologies that are kind of blurred
into other realms, right, like into
our political lives, into our educational systems,
into our criminal policing and justice systems,
economic systems, there has to be some sort
of third party, not necessarily regulating
as much as coordinating to ensure that there are
checks throughout this process to ensure that we do not
legitimate or naturalize the biases that we
have, especially, not even on an individual
level, but on a level that is collective and institutional. And that’s why I tried
to talk about these more collective examples first
like the policing example, like the ProPublica
article before I talked about what Zeynep was saying
about the YouTube algorithm because I think it’s
far more pernicious, these collective
implementations, than just the individual, radicalized
content because I’ve seen radicalized content online. I don’t mind looking at that. I don’t think that
transforms me necessarily. But I think on a larger
scale level, it’s an issue. AUDIENCE: So I guess
I actually just want to build off of what’s
already been asked and what’s been answered
because I think when Mark Zuckerberg first
gave his interview after the whole
Facebook-Russia-Cambridge Analytica scandal, the
summary was, who knew? I had no idea Facebook
would ever do this. And that seems to be a pretty
common critique of technocrats of CEOs of tech companies that,
they break first, and then think about the
consequences later. And I think that cycle is
further exacerbated by the fact that a lot of these companies
are going public really quickly and are essentially pushed
to grow, and grow, and grow. And so you’ll see companies
like Facebook, and Apple, and Google try to break in in
China for the sake of growth, for the sake of
revenue, not necessarily for the sake of
inclusion, I think, but for the sake of money. And so how do we– how do
we impose some kind of, not necessarily regulation, like
you said, but like some kind of like just balance, some
check to essentially guard for these biases for this
desire for growth as opposed to like inclusion and
diversity, and you know, corrections for
algorithmic biases in ML models and things similar. RAMESH SRINIVASAN: Yeah, what
a great question, and also, it allows me to mention one
other idea I had, which is, giving people the
opportunity to visualize why they see what they see and
choose alternatives, right? Like a lot of us as designers
use to build and design systems that were
visually based, that were kind of multi-variate
in their kind of scope, and we can think about that. So let me answer your question. It’s an incredibly
difficult issue, right? Because, you know, Tim
O’Reilly makes the point that the master algorithm of all
is market cap valuation, right? And it’s an interesting point
for a father figure in computer science to say that, right,
who has made a lot of money off of his own publishing industry. I think that we have
to figure out ways to experiment with other
models to see what– and we can do this in a kind
of small-scale, lightweight fashion, we being folks in these
in companies like yourselves, to see if they are
actually creating returns that are similar or perhaps
even close to the current model, right? So I’ve been looking
at some research that is showing that you can
build very persuasive, very strong engagement products with
more diverse design teams, that even having diversity in
terms of VC investment could actually incubate highly
lucrative and profitable industries as well. Of course, it’s difficult to
want to break something or even tinker with something that
is so wildly successful. And I would say Google is
nothing if not successful, right, in terms of that level. However, move fast and
break things, right, which is kind of a casual
motto, right, that Facebook embraced and– you know, and we get it. Like, we’re– I’m a
former engineer, right? That’s just meant in that
playful way that a lot of us talk. Just like at MIT, we use
the word hack, right, like in a very loose sense. But I think what
we’re realizing is we can’t break other
aspects of our lives and kind of overweight simply
our economic bottom lines. Otherwise, the blowback
in terms of public PR, but also maybe our own
internal notions of ethics and what we’re standing for,
could be compromised, right? So you know– so I
guess my question– I would just encourage
folks to think about whether there are some different
kinds of lightweight models. It’s not a very radical
proposal, in my mind, at least, by which we could experiment
with other kinds of models of being inclusive, other kinds
of models of being transparent, other kinds of models
of being transparent, I’m sorry, kind of
accountable, right? And I guess the last thing I’ll
say is the network effects, I didn’t mention this,
the network effects of having mass amounts of users
without necessarily having to be as accountable in terms of
our governance of those systems is making a lot of money
for Facebook, right, and it probably makes a
lot of money at Google. You have a much larger
global governance team than Facebook does, which is
about two dozen people, two dozen people, for I don’t know
how many countries Facebook has, dozens of countries, right? 2-plus billion users. If you only invest that amount
of money for two dozen people to deal with all the global
effects of your technologies, you’re saying something
about what you value there. It means you’re a massive,
massively over-privileging the economic network
effects of your technology without necessarily
investing in trying to curb some of its
potentially pernicious effects. And so I guess all of this, we
should put it all on the table, we can make very
low-scale investments. Google is an experimental
space, experimental company, why not try it out? Let me help you. Going on from that,
this idea of governance, I’m interested from
you, from your travels, either your travels or
stories from colleagues and stuff like this, I’m
interested in the perception of governments, elected
governments, whether they be at a city level like
Stockton, or country level, or even an EU level,
what their thoughts are, what you’ve experienced
around their thoughts, around these issues that
you’ve talked about, specifically like the sort
of ontologies, and learning, and the biases that might
come from these huge private government private systems that
then are either incorporated directly or their ideas are
put into play in these systems. And I’m interested
in governments because governments are
ultimately, hopefully, theoretically, accountable
to their citizens. What’s the intersection
looking like there? And what are the opportunities? Like, what are some
opportunities to do good that you see are
low-hanging fruit? RAMESH SRINIVASAN:
Yeah, absolutely. I think that it’s not
that great right now when I kind of go to
different parts of the world. The kind of– the
digital divide was– it really shouldn’t have
ever been framed about access to technology.
literacy and the opportunity to produce and
create technologies in one’s own ecosystem. That’s what the
ultimate divide was. And we should also presume
that the mere ability to create technology
in a place is somehow beneficial to a place
economically or politically. But I think that there are
two major strategies that are underway amongst the more
kind of, if you will, clued in or ahead-of-the-curve
kind of folks that I’ve seen in different parts of the world. First is to think
about how to piggy back off of these large-scale,
private, technological systems and infrastructures and build
local economic and political technological ecosystems
on top of it, right? So how do we– so that’s
been pretty successful in places like India, right? You can kind of
say, hey, you know, these Google services
are there, but we’re going to innovate on
top of that and try to create systems, and
technologies, and firms that are beneficial
to our constituencies. I mean, to be honest,
that hasn’t fully– to me, that’s not like the full
way forward, just to be honest. The other idea is
to try to build, and this is really
interesting, I will be spending some time in
Nairobi this summer, right, where Ushahidi was born. I think some of us probably know
Ushahidi and other companies. It turns out in many
parts of the world, I think you all know this,
that there are thriving tech incubation communities. And Nairobi is one of
the largest, right? And so the question
is, is how do we start to– so I’m seeing that
happening in governments, less so the Kenyan
government, but it’s the local kind of
municipal authorities are supporting that
form of growth? But the issue is the access
to capital and VC funding. That’s been a big issue. So I think what we need
to do is pay attention to local innovations and
make them– make the idea, and think about the idea that
innovation is not something that simply happens when we have
seemingly infinite resources, but innovation also happens
when we have very little and we got to hustle, right? We just got to like adjust
within constraints, right? You see that in all sorts of
parts of the world with what people do like on the street,
like, how many have you been in various parts
of the global south. New Delhi, you’ll walk
through the middle of old– old parts of New
Delhi, people will be re-soldering and rejigging
phones in front of you and building informal
economies off of these systems. So I guess my point is, as
much as possible, I’m not seeing a lot of that promising,
but as much as possible, it would be great if local
institutions and government institutions could support
kind of other kinds of tech incubations and economies
in those parts of the world. I’m hopeful that that happens. I don’t see a lot of
it though right now. But I think in general, just
really quick point, in general, I don’t see governance officials
really understanding technology very significantly. SPEAKER 1: All right,
I think we have time for maybe one more question. AUDIENCE: Yeah, thanks,
last week, Google announced Duplex,
where we’re going to have Google send
through algorithms actually communicating
directly with humans. And one interesting
aspect of this was they train the
algorithms, it seems, to actually say ums and mhms,
one aspect of our culture, here in the US when we talk
to people casually, built in. So I’m curious what
your thoughts are on the specific technology that
we’re developing, how it will impact us worldwide culturally,
and what Google could be doing, from your perspective,
differently, to make sure that our
impacts are minimized? RAMESH SRINIVASAN:
I think in general, it would be– it would
be great if Google could be pretty transparent. It allows me to say
something I want to say about kind of
what data it’s collecting and how it’s using that data. I mean, it doesn’t have
to be too specific, but I think that that
could be, in general, as these sorts of like
useful technologies start to spread in their reach. Now, to directly
answer your question, I mean, unless these
technologies are– I don’t know if it’s so
much this specifically, but I see this as layered on top
of a larger potential problem, which call centers are
highly embedded within, which is the idea that based
on where the money is and where the customers are,
we’re going to build protocols around technology that are
forced into that logic, right? So what I’m getting
at is I actually just assigned a paper in
my graduate course this last week about the
Americanization of call center workers. So we’re in call centers– you all probably heard of
this, like, in call centers, folks are taught to take on
American or Western identities, to speak American
English, and to even learn those forms of speaking that
are familiar to all of us. So I think it’s really
important, obviously, that we don’t do the same thing
with the technologies that are going to be automated
that we’re going to spread to other parts of the world. But I think, more generally, my
larger point is really about– a kind of a sense of
social responsibility recognizes that not simply in
the gathering and acquisition of data and attention,
that’s not the only way to produce value, that there are
other ways of producing value, and we can have a more
balanced approach toward that. But to be honest,
I’m not an expert on this particular technology. I did see the video, and
I, like everyone else, was like wowed by it. It went super viral. David and I were just
talking about it, but I think the questions,
again, of design and what it means are
really central here. So feel free to ask
me more about it once I learn more
about it as well, yeah. Thank you. SPEAKER 1: Thank you, Ramesh. RAMESH SRINIVASAN: Thank you. [APPLAUSE]
