Tuesday, November 26, 2019

Fighting A.I. with A.I.

Weekly analysis at the intersection of artificial intelligence and industry.


Could the problems caused by A.I. be solved by artificial intelligence itself?

I put that question to IBM's Francesca Rossi, who leads Big Blue's efforts on the ethics of artificial intelligence, and Antoine Bordes, a director of Facebook's A.I. Research lab, at Fortune’s Global Forum in Paris last week.

Yes—at least in some circumstances, both researchers said.


Bordes's group, for example, is creating a benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes. And Rossi said that, in some cases, A.I. could be used to highlight potential bias in models created by other artificial intelligence algorithms.
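Rossi's idea of using A.I. to audit A.I. can be made concrete with a toy example. The sketch below flags a model whose positive-prediction rate differs sharply across demographic groups, one of the simplest automated bias checks. All data here is synthetic, and real audits, such as those supported by IBM's open-source AI Fairness 360 toolkit, use many more metrics:

```python
# A toy automated bias audit: flag a model whose positive-prediction rate
# differs sharply across groups (the "demographic parity" gap).
# All data below is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)     # hypothetical protected attribute
# Simulated model predictions that favor group 0 (60% vs. 40% positive rate).
preds = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

rate0 = preds[group == 0].mean()
rate1 = preds[group == 1].mean()
gap = abs(rate0 - rate1)
print(f"positive rate, group 0: {rate0:.2f}")
print(f"positive rate, group 1: {rate1:.2f}")
if gap > 0.1:                             # arbitrary audit threshold
    print(f"WARNING: demographic parity gap of {gap:.2f}")
```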

While technology could produce useful tools for detecting—and even correcting—problems with A.I. software, both scientists emphasized that people should not be lulled into complacency: critical human judgment remains essential.

"Addressing this issue is really a process," Rossi told me. "When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice … can bring unconscious bias." You can read more about our discussion and watch a video here.


Later in the week, I traveled to Cambridge, England, to watch the latest version of IBM's "Project Debater" A.I. help two teams of humans face off at the 200-year-old Cambridge Union debating society. The subject of the debate: whether artificial intelligence will cause more harm than good.

How it worked: An IBM website crowd-sourced more than 1,100 arguments in the week leading up to the debate. The software took those arguments and, in about two minutes, categorized them as either for or against the proposition and distilled them into five main themes for each side.
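IBM has not published Project Debater's internals, but the second half of that pipeline, grouping each side's arguments into themes, can be approximated with off-the-shelf tools. A minimal sketch using scikit-learn, with invented arguments and a cluster count of two rather than five for brevity:

```python
# A toy stand-in for Project Debater's theme extraction (NOT IBM's actual,
# unpublished pipeline): cluster one side's arguments with TF-IDF features
# and k-means, then return the argument closest to each cluster's centroid.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_into_themes(arguments, n_themes):
    X = TfidfVectorizer(stop_words="english").fit_transform(arguments).toarray()
    km = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit(X)
    representatives = []
    for k in range(n_themes):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
        representatives.append(arguments[members[dists.argmin()]])
    return representatives

# Invented examples of crowd-sourced arguments already classified as "for".
pro = [
    "A.I. will automate dangerous jobs and save lives",
    "Automation frees humans from hazardous work",
    "Medical A.I. catches diseases that doctors miss",
    "A.I. diagnosis will improve healthcare outcomes",
]
for theme in cluster_into_themes(pro, n_themes=2):
    print(theme)
```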

While Project Debater’s clustering and summarization skills were impressive, the human debaters stole the show. Sylvie Delacroix, a professor of law and ethics at the University of Birmingham, argued that A.I. is the rare example of a tool whose design is essential to its ethical value. "Yes, it is true that A.I. is only as good as the data it has been fed," she said. But, she argued, this potentially gave people tremendous power. If they came together and pooled their data in entities like data trusts, they could seize a lot of control over the development and use of A.I.


On the other hand, Neil Lawrence, the former head of machine learning at Amazon and a professor at Cambridge, said it was more prudent to assume the technology will do harm, because that way people will be alert about potential dangers and seek to prevent them—a position he compared to Pascal's Wager. You can read more about the debate here.

All companies working with A.I. should probably emphasize potential risks. On the train ride back from Cambridge, I read Fortune alum Jerry Useem's masterful analysis in The Atlantic of how Boeing lost its way in the decades leading up to the 737 Max debacle. It is a must-read cautionary tale about what happens when corporate executives divorce themselves from a deep understanding of the very technology that underpins their business—and shun the expertise of engineers who know those systems best.


Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com



A.I. IN THE NEWS


Sony Gets an A.I. Research Lab. The Japanese tech company is the latest to create its own research lab devoted to pursuing advanced artificial intelligence, joining the likes of Google, Microsoft, Facebook, Uber, and Salesforce. The Sony lab will have branches in Tokyo; Austin, Texas; and a yet-to-be-named European city. Hiroaki Kitano, president and CEO of Sony Computer Science Laboratories, Inc., will lead Sony A.I. globally, while UT Austin professor Peter Stone will head the U.S. research site.


Microsoft Boosts Bing With BERT. Just in case you thought the search engine wars were over, Microsoft has moved to match Google's use of a state-of-the-art natural language processing model to refine the results returned by Bing. The open-source model, called BERT, was originally developed by Google researchers. It models the meaning of entire phrases, taking note of the importance of syntax and recognizing phrases with identical meanings even if they use none of the same words. In the past, search engines mostly just looked at individual words. Google announced in October that it was using BERT to power its English-language search results.
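To see why phrase-level modeling matters, consider a toy experiment with the public bert-base-uncased checkpoint from Hugging Face's transformers library. Mean-pooling raw BERT outputs is a crude sentence embedding, and this is an illustration of the idea rather than Bing's or Google's production setup, but two queries that share no content words should still score as more similar than an unrelated one:

```python
# Toy demonstration of phrase-level similarity with BERT (an illustration of
# the idea, not Bing's production system): mean-pool the final hidden states
# into a sentence vector and compare queries by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)             # crude mean-pooling

a = embed("how do I fix a flat tire")
b = embed("repairing a punctured bicycle wheel")     # no shared content words
c = embed("best pizza restaurants nearby")

cos = torch.nn.functional.cosine_similarity
print("paraphrase:", cos(a, b, dim=0).item())        # should be the higher score
print("unrelated: ", cos(a, c, dim=0).item())
```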


Google Uses DeepMind Algorithm For Play Store Recommendations. Google has begun using an algorithm developed by its sister A.I. research shop, DeepMind, to improve Google Play Store recommendations. The algorithm overcomes the tendency of previous recommendation engines to favor apps that are displayed more often in the store. It also uses a transformer, the architecture responsible for some of the big leaps forward in natural language processing in the past year (see BERT, above), although the DeepMind team tweaked the model to make it more computationally efficient. You can read more about it in this DeepMind blog post.
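The blog post is light on specifics, but a standard correction for that kind of exposure bias is inverse propensity weighting: divide each logged outcome by the probability that the old ranker displayed the item. A toy off-policy estimate with invented numbers, not necessarily DeepMind's exact method:

```python
# Invented install log: app 0 is displayed nine times as often as app 1, so
# it racks up more raw installs even though users like it less. Weighting
# each logged outcome by 1/propensity (the old ranker's display probability)
# estimates how each app would perform if it were always shown.
import numpy as np

shown      = np.array([0] * 90 + [1] * 10)
installed  = np.array([1] * 45 + [0] * 45 + [1] * 8 + [0] * 2)
propensity = np.where(shown == 0, 0.9, 0.1)
N = len(shown)

for app in (0, 1):
    m = shown == app
    raw = installed[m].sum()
    ips = (installed[m] / propensity[m]).sum() / N   # inverse propensity score
    print(f"app {app}: raw installs {raw:2d}, bias-corrected rate {ips:.2f}")
# App 0 wins on raw installs (45 vs. 8), but the corrected install rates
# (0.50 vs. 0.80) reveal that app 1 is actually the better recommendation.
```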


Alphabet's X Gets Back Into Robots. X, the Alphabet-owned "Moonshot Factory," has launched a new project called Everyday Robot aimed at creating a "general-purpose learning robot," or robots that can use artificial intelligence along with a suite of cameras and sensors to learn a variety of skills without the need for hand-coded movements. The division's first robot has learned to sort trash into different piles. The new project marks Alphabet's return to robotics research after a previous strategy—led by Android co-founder Andy Rubin, who later departed the company amid sexual harassment allegations—failed to make much of an impact. It was under Rubin that X bought and then later sold Boston Dynamics, the robotics company known more for its viral videos than any real-world commercial success.


NEW YORK GETS A CHIEF ALGORITHMS OFFICER


New York City Mayor Bill de Blasio is looking for an Algorithms Management and Policy Officer to develop guidelines for the use of algorithmic tools throughout the city government. The new officer will be especially concerned with ensuring that algorithms are used fairly and without hidden biases. A task force looking into the city's use of algorithms recommended creating the role. Some have faulted de Blasio for explicitly excluding the New York Police Department from the new officer's purview.



Content From Accenture

Scaling to new heights of competitiveness with A.I.


Three out of four C-suite executives believe that if they don't scale A.I. in the next five years, they risk going out of business entirely—yet three quarters of execs also acknowledge they struggle with the how. Our new study shows what successful scalers are doing right to move past pilots and see 3X ROI on their A.I. investments.


Learn how



EYE ON A.I. TALENT


Michael Kratsios, the chief technology officer of the United States, has named Winter Casey and Lynne Parker as deputy chief technology officers. Casey, who has held policy and communications roles at Google, had been assistant director for international affairs and senior advisor for technology policy. Parker, a computer scientist, had previously been assistant director for artificial intelligence in the CTO's office.


SOPHiA Genetics, a company with offices in Lausanne, Switzerland, and Boston, Massachusetts, uses machine learning to speed up the analysis of genomics and radiomics data. It has appointed Milton Silva-Craig to its board. Silva-Craig is the CEO of Q-Centrix, a healthcare data registry and analytics company.


EYE ON A.I. RESEARCH


Amazon Improves Alexa with Reinforcement Learning. Researchers at the Everything Company have published a pre-print paper detailing their efforts to use reinforcement learning—in which an A.I. algorithm learns by trial and error, usually in a simulated environment, rather than from historical data—to improve the recommendations for "skills" that Alexa offers to users. They used 180,000 real Alexa conversations to build a conversation simulator, which was then used to train a reinforcement-learning algorithm, and then tested that algorithm against two alternatives with real Alexa users. The reinforcement learning (RL) system did best, with a 76.99% success rate; an algorithm based on a set of human-designed rules achieved 73.41%, while simply suggesting five of the most popular skills achieved only 46.42%. As businesses increasingly turn to RL-based approaches instead of traditional supervised deep learning, this may be a harbinger of things to come.
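Amazon's system is far more elaborate, but the basic pattern is easy to sketch: a simulator stands in for live users, and a policy learns by trial and error which suggestion works in each situation. A toy REINFORCE-style version, with invented states, skills, and acceptance probabilities:

```python
# Toy version of learning recommendations against a user simulator
# (NOT Amazon's system): a contextual bandit trained with REINFORCE.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_SKILLS = 3, 5
# Stand-in "user simulator": probability a suggestion succeeds in each state.
# Amazon built theirs from 180,000 real conversations; these are made up.
accept_prob = rng.uniform(0.1, 0.9, size=(N_STATES, N_SKILLS))

logits = np.zeros((N_STATES, N_SKILLS))       # policy parameters
lr = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(20000):
    state = rng.integers(N_STATES)            # simulated conversation state
    probs = softmax(logits[state])
    skill = rng.choice(N_SKILLS, p=probs)     # sample a suggestion
    reward = float(rng.random() < accept_prob[state, skill])  # accepted?
    # REINFORCE: grad of log pi(skill) is (one_hot - probs); scale by reward.
    grad = -probs
    grad[skill] += 1.0
    logits[state] += lr * reward * grad

print("policy's favorite skill per state:", logits.argmax(axis=1))
print("truly best skill per state:       ", accept_prob.argmax(axis=1))
# The two rows should (usually) match after enough simulated conversations.
```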


DeepMind Algorithm Can Win at Atari, Chess, Shogi and Go—Without Knowing the Rules. While DeepMind has created A.I. algorithms able to master these games at superhuman levels before, in each previous case the algorithm was programmed with the rules of the game before training began. In its newest research, published as a pre-print here, an algorithm DeepMind calls MuZero must infer the rules of the game as it plays. The software uses a tree-based search, similar to what DeepMind used previously with its Go-playing algorithms, combined with a learned model of the game's environment and rules. The DeepMind researchers say this approach might pave "the way towards the application of powerful learning and planning methods to a host of real-world domains for which there exists no perfect simulator."
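MuZero pairs deep networks with Monte Carlo tree search, but its core idea, planning with a model learned from experience rather than from the true rules, fits in a few lines. A drastically simplified sketch: a deterministic toy world, memorized transitions instead of a neural network, and brute-force lookahead instead of MCTS:

```python
# Drastically simplified illustration of MuZero's core idea (not DeepMind's
# algorithm): plan with a model learned from experience, never the real rules.
import random

random.seed(0)

def real_step(state, action):
    """The true rules: move left/right on a line; reward 1 for reaching 4.
    The planner never calls this directly once the model is learned."""
    nxt = max(0, min(4, state + action))
    return nxt, 1.0 if nxt == 4 else 0.0

# 1) Learn a model purely from interaction (here: memorize observed
#    transitions; MuZero instead trains a neural network).
model = {}
for _ in range(500):
    s, a = random.randint(0, 4), random.choice([-1, 1])
    model[(s, a)] = real_step(s, a)

# 2) Plan with the learned model via depth-limited lookahead.
def plan(state, depth=6, discount=0.9):
    if depth == 0:
        return 0.0, None
    best_value, best_action = -1.0, None
    for a in (-1, 1):
        if (state, a) not in model:
            continue                 # never observed; can't plan through it
        nxt, r = model[(state, a)]
        future, _ = plan(nxt, depth - 1, discount)
        value = r + discount * future
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

print("action chosen at state 0:", plan(0)[1])   # heads toward the reward
```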


This Bot Knows What You're Thinking. Researchers at the Massachusetts Institute of Technology say they have developed an A.I. algorithm that can beat humans at complex, multi-player online games in which players' allegiances and motivations are hidden. The software, which the MIT researchers call DeepRole, is trained on the game "The Resistance: Avalon," in which it must make deductions about other players from partially observable information. The researchers used a learning method called counterfactual regret minimization, which has been used successfully in poker-playing algorithms, but combined it with an additional deduction mechanism. The A.I. bot did not need to use language to communicate with or query other players, as humans tend to do in the game. It made its assessments solely by observing other players' behavior (perhaps one of the best ways to learn, as parents often discover to their chagrin).
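DeepRole's full pipeline is well beyond a snippet, but regret matching, the primitive at the heart of counterfactual regret minimization, is compact. The textbook example below (not the MIT system) learns the mixed equilibrium of rock-paper-scissors through self-play:

```python
# Regret matching, the building block of counterfactual regret minimization:
# play each action in proportion to how much you regret not having played it.
import numpy as np

# Payoff to the row player: rock, paper, scissors vs. the same three moves.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

rng = np.random.default_rng(0)
regret_sum = np.zeros(3)
strategy_sum = np.zeros(3)

def current_strategy():
    """Play in proportion to accumulated positive regret."""
    pos = np.maximum(regret_sum, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

for _ in range(100_000):
    strat = current_strategy()
    strategy_sum += strat
    me = rng.choice(3, p=strat)
    opp = rng.choice(3, p=strat)      # self-play against the same strategy
    # Regret: how much better each alternative would have scored vs. opp.
    regret_sum += PAYOFF[:, opp] - PAYOFF[me, opp]

# The *average* strategy converges to the equilibrium: ~[0.33, 0.33, 0.33].
print(strategy_sum / strategy_sum.sum())
```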


FORTUNE ON A.I.


Exclusive: Why an Artificial Intelligence Wave Could Hit the Business World in 2020 — By Lance Lambert


Salesforce Debuts New 'Einstein' A.I. Voice Assistant Features for the Workplace — By Jonathan Vanian


Apple CEO Tim Cook on the False Tradeoffs in Data Collection, A.I., and Gay Marriage — By Jonathan Vanian


Deliver Us From A.I.? This Priest-Led Network Aims to Shepherd Silicon Valley Tech Ethics — By Rebecca Heilweil


UPS Says Jobs Will Survive A.I.—But With One Condition — By David Meyer


BRAIN FOOD


The A.I. Arms Race. Earlier this week, at the Bloomberg New Economy Forum in Beijing, former U.S. Secretary of State Henry Kissinger described the U.S. and China as being "in the foothills of a Cold War." An arms race over mastery of certain strategic technologies, including artificial intelligence, certainly seems to be underway. Microsoft co-founder Bill Gates, speaking at the same conference, cautioned against falling into a kind of Thucydides Trap when it comes to A.I., saying that collaboration across borders was essential to developing the best technology and ensuring that most people can benefit from it.


But such warnings may fall on deaf ears. Luciano Floridi at the Oxford Internet Institute recently published a paper analyzing Chinese policy and regulation of A.I. He found that China's goal of becoming a world leader in A.I. by 2030, while not entirely driven by the logic of hard power, had clear military dimensions and was likely to trigger an arms race. And he said that attempts to counteract such a cycle "seem largely hollow" so far.


Meanwhile, China's political sensitivities are starting to affect A.I. research conferences. At the International Conference on Computer Vision (ICCV), held in Seoul, South Korea, earlier this month, it emerged that conference organizers were pressured to change a graphic depicting the number of participants and papers from around the world. The organizers apparently changed the title of the image from "countries" to "countries/regions" so as not to offend Chinese sensibilities over the graphic's inclusion of Taiwan.





