Mick Champayne | "Okay Google..."31.07.2019
At the 2019 I/O Developer Conference, Google presented artificial intelligence as its future, “building a more helpful Google for everyone.” But there still seems to be a great deal of confusion about machine learning and where it is taking us. With many of the presentations touching on how machine learning can help disadvantaged people, what ethical issues should its developers bear in mind as the technology creeps further into our lives?
“Okay Google, are the robots going to take over the world?”
I, for one, welcome our new robot overlords. I’ve been waiting all my life for a robot nanny/maid/life coach-in-one à la Rosie from The Jetsons, and while we’re not quite there yet, it’s safe to say we’re entering a new age of AI.
According to CEO Sundar Pichai, Google's ultimate goal is to “be more helpful.” From operating systems and apps to home devices and in-vehicle interfaces, Google is infusing AI everywhere. At the I/O Developer Conference in May this year, Google doubled down on machine learning, two years after adopting its AI-first strategy.
Machine learning is the power behind every one of today’s important emerging technologies: voice assistants, image recognition, predictive interfaces, bots, AR/VR, the Internet of Things, even human genome research. In simple terms, Stanford defines it as “the science of getting computers to act without being explicitly programmed,” and the intention is to use data to build algorithms that make accurate predictions and decisions.
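To make that definition concrete, here is a minimal, hypothetical sketch (in Python, using the scikit-learn library) of the idea: instead of someone hand-writing an “if this, then that” rule, the model infers the rule from labeled examples. The features and numbers are invented purely for illustration.

```python
# Toy example: predict whether an email is spam from two made-up features
# (number of links, number of ALL-CAPS words). Nothing here is Google's code.
from sklearn.linear_model import LogisticRegression

# Training data: each row is [links, caps_words]; labels: 1 = spam, 0 = not spam.
X_train = [[8, 12], [7, 9], [6, 15], [0, 1], [1, 0], [2, 2]]
y_train = [1, 1, 1, 0, 0, 0]

# No explicit "if links > 5 then spam" rule is written anywhere;
# the model learns a decision boundary from the examples above.
model = LogisticRegression()
model.fit(X_train, y_train)

print(model.predict([[5, 10], [1, 1]]))  # likely [1 0]: spam, not spam
```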
As evidenced by Google’s Next Generation Smart Assistant demo, one of the current aims of Google’s AI research is to design machines and algorithms that can mimic human thinking as much as possible.
“Uh-oh Google, is that such a good idea?”
Life has become so much simpler, easier, and more efficient with machine learning: it’s how YouTube serves up related content to watch, filters your email for spam, and lets you checkout with one-click. But it brings an assortment of design challenges that come with the sometimes problematic nature of imprecise and unpredictable inputs.
Alexandria Ocasio-Cortez recently tweeted, “Machines are reflections of their creators, which means they are flawed, & we should be mindful of that.” When you train an AI, you have to train it on data that is representative of what it will see in the real world. If you don't include a variety of people (ethnicities, gender, sexual orientation, disabilities, class, etc.), the AI won't learn how to process them. “There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices,” says Karen Hao of MIT Technology Review.
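As a toy illustration of the first kind of bias Hao describes (this is a simulated example, not anything drawn from Google's systems), the sketch below trains a classifier on data that under-represents one group and then measures how much worse the model performs on that group:

```python
# Simulated demonstration of unrepresentative training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulated group whose feature distribution is shifted by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific pattern
    return X, y

# Group A dominates the training set; group B barely appears.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# On balanced test sets, the model does well on the majority group
# and noticeably worse on the group it rarely saw during training.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```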
The part AI will unquestionably play in perpetuating systemic inequalities is an extremely pressing issue. Computing power has doubled on average every 18 months for 50 years now, and we are often designing and building faster than we can grasp the implications.
“Well then Google, what are you going to do about it?”
Google is currently developing a robust ethical framework of AI principles meant to address unfair bias, stereotyping, and prejudice. Its goals include accountability in the ethical deployment of AI, creating the tools needed to build such systems, and advocating for inclusivity and intersectionality.
To help achieve these goals, the I/O presentations focused on two areas: putting machine learning models directly onto devices; and using machine learning to help disadvantaged people—including people who are deaf, illiterate, or early-stage cancer patients.
Highlights from the conference included:
- Live-captioning all videos, from streaming to your camera roll. It’s built for people who are deaf or hard of hearing but could also be used in public settings or for transcribing phone calls.
- A Google Lens update that will now contextualize text and content, translate it, and read it out loud. Because it all happens on-device rather than in the cloud, it works even with poor network connectivity and offers more privacy.
- Decentralizing data with federated learning. Instead of gathering users’ data in the cloud to train models, federated learning trains AI models on mobile devices in large batches, then transfers those learnings back to a global model without the data ever leaving the device (a toy sketch of the idea follows this list).
- TCAV, or Testing with Concept Activation Vectors: an open-source technology that dissects machine learning models in order to try and understand why they make certain decisions.
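To make the federated-learning item above concrete, the following is a toy sketch of the general idea, not Google's actual implementation. Three simulated “devices” each refine a shared model on their own private data, and only the resulting weights, never the raw data, are averaged back into the global model:

```python
# Toy federated averaging on a simple linear model (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One device refines the global weights on its private data
    using plain gradient steps for a least-squares linear model."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulated private datasets that stay on three separate devices.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

# Federated averaging: the server only ever sees weights, not data.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_weights, axis=0)

print("learned weights:", global_w)  # approaches [2.0, -1.0]
```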
But it’s not enough to rely solely on the technology. Amber Case, a design researcher and author of Calm Technology, says AI and machine learning “should amplify the best of technology and the best of humanity.” Google acknowledges that human bias already shapes the data we have collected and the data we will collect in the future. The only way to influence a better future with AI and create inclusive products, practices, and outcomes is to ensure the people creating them are designing critically and challenging deeply embedded exclusionary social systems. “Building a more helpful Google for everyone means addressing bias. You need to know how a model works and how there may be bias. We will improve transparency,” said Pichai.