A.I. IN THE NEWS
Google puts more A.I. into its devices. Google unveiled its latest lineup of hardware devices last week, including many A.I.-powered features. Its new Pixel 4 phones will now process some queries for Google's A.I.-driven digital assistant directly on the device, without having to transmit data to the cloud. The phone also features an improved facial recognition system for unlocking, one that Google says will be both easier to use and more secure.
Controversy over OpenAI's Rubik's Cube demo continues. Last week's newsletter reported on OpenAI's breakthrough training a robotic hand to solve a Rubik's Cube. While video of the robot went viral, some researchers, most notably New York University's Gary Marcus, have accused the A.I. company of overhyping its accomplishment. Among their criticisms: the robot hand mastered the physical dexterity needed to manipulate the Cube, but the solution to the puzzle itself was dictated by a conventional, pre-existing Cube-solving algorithm; and the hand could actually solve a fully scrambled Cube without dropping it only about 20% of the time.
U.S. border agents want facial recognition for body cameras. U.S. Customs and Border Protection has put out a contract request for body cameras equipped with facial recognition. According to a report in The Register, which obtained a copy of the contracting specs, the system is supposed to allow agents to more easily match people's faces to their identity documents as well as to check them against lists of "people of interest."
A.I.-Powered Product Placements. Chinese tech giant Tencent and London-based A.I.-driven advertising firm Mirriad have announced a two-year partnership. Mirriad uses computer vision technology to spot opportunities for product placements in video content, such as television shows and movies, and then uses other A.I.-based techniques to automatically edit those products or ads into the video images.
DEEPFAKES. WHAT ARE THEY GOOD FOR? ABSOLUTELY...SOMETHING (IF YOU'RE HOLLYWOOD)
Most of the discussion around deepfakes, realistic-looking fake videos created using widely-available A.I. software, has concerned malicious uses of the technology, from revenge porn to political disinformation. But Hollywood studios are also eyeing the technique to help generate visual effects and perhaps even, in the future, entire films, The Financial Times reports. Some in the visual effects industry, however, told the newspaper that deepfakes can't yet reliably match the painstakingly rendered, and expensive, computer-generated imagery currently in use, and that it remains uncertain when, if ever, they will be able to.
EYE ON A.I. TALENT
Eskalera, a San Francisco-based startup developing an A.I.-powered human resources platform, has hired Dane Holmes, a long-time partner and head of human capital management at Goldman Sachs, to be its new CEO.
Vectra AI, which uses artificial intelligence to detect cybersecurity threats, has hired Dee Clinton, a former executive at Australian telecommunications firm Telstra, to be chief of its Asia-Pacific sales channel.
EYE ON A.I. RESEARCH
Using A.I. to Restore and Decipher Fragmented Ancient Texts
Ancient inscriptions, whether on stone, papyrus or paper, are often fragmentary due to damage and deterioration over the course of thousands of years. Now a team of researchers from DeepMind, the London-based A.I. company owned by Google-parent Alphabet, has used a deep neural network to help fill in the missing pieces of ancient Greek inscriptions. Called Pythia, the system, which looks at the context of missing letters and words, achieved a 30% character error rate on a test text, compared with a 57% error rate for Oxford PhD students in ancient history, according to DeepMind's blog post on the research.
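Character error rate, the metric cited above, is conventionally computed as the edit (Levenshtein) distance between the predicted and reference text, divided by the reference length. A minimal sketch (not DeepMind's code; the function names are my own):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance,
    # keeping only the previous row to save memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(predicted: str, reference: str) -> float:
    # CER = edit distance / number of reference characters.
    return edit_distance(predicted, reference) / len(reference)
```

So a restoration that gets one character wrong in an eight-character inscription scores a CER of 0.125; Pythia's reported 30% means roughly three errors per ten characters restored.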
Optimizing Manufacturing Operations with A.I.
Researchers at Hitachi America's AI research lab in California have used deep reinforcement learning to optimize the movement of goods around a simulated factory floor, according to a paper posted to the research repository arXiv. The researchers created a system of rewards for on-time delivery of items to the next stage in the manufacturing process and penalties for deliveries that were either tardy or too early. Their algorithm, which they called Deep Manufacturing Dispatching, or DMD, outperformed 18 other algorithms, both rule-based ones and different machine learning models, when tested in simulation.
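The reward scheme described above — a bonus for on-time delivery, penalties for tardy or early arrivals — can be sketched as a simple shaped reward function. This is an illustrative guess at the structure, not the paper's actual formulation; all parameter names and values here are hypothetical:

```python
def dispatch_reward(delivery_time: float, due_time: float,
                    tolerance: float = 1.0,
                    on_time_bonus: float = 1.0,
                    tardiness_penalty: float = 1.0,
                    earliness_penalty: float = 0.5) -> float:
    """Reward an RL dispatching agent for one delivery.

    Deliveries within `tolerance` of the due time earn a bonus;
    late deliveries are penalized linearly, and early ones too
    (more gently), since both disrupt the downstream stage.
    """
    gap = delivery_time - due_time
    if abs(gap) <= tolerance:
        return on_time_bonus
    if gap > 0:  # tardy delivery
        return -tardiness_penalty * (gap - tolerance)
    return -earliness_penalty * (-gap - tolerance)  # too early
```

A reinforcement learner maximizing the sum of such rewards over a shift is pushed toward schedules where items arrive at each stage neither late nor needlessly early.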
FORTUNE ON A.I.
Flaw in Google's New Pixel 4 Raises Risk of Snooping While You Sleep — By Alyssa Newcomb
Everything to Know About Google's New Pixel 4 and Pixel XL Smartphones — By JP Mangalindan
Now Hiring: People Who Can Translate Data Into Stories and Actions — By Anne Fisher
BRAIN FOOD
In Medical A.I., Performance on Disease Sub-Sets Really Matters. Two weeks ago, I highlighted an insightful analysis from Luke Oakden-Rayner, a Ph.D. candidate at the University of Adelaide's School of Public Health, on the problem with medical A.I. competitions. Now he and co-authors Gustavo Carneiro, from Adelaide, and Jared Dunnmon and Chris Re, from Stanford University, have written a paper examining a serious flaw in the way many medical A.I. algorithms are tested.
The researchers argue that average performance across a broad test set is far less important than how these algorithms perform in detecting the smaller subset of disease features associated with the worst patient outcomes. Human doctors, by contrast, tend to be highly attuned to these outliers, they contend, even if humans perform worse than the machines on average across all disease types. For instance, a computer vision algorithm might be great, on average, at spotting anomalies on chest X-rays, but do less well at finding the specific tumor type that, while rare, carries a high mortality rate if it is not detected early.
Most medical A.I. algorithms are not rigorously tested on these disease subsets, the researchers say, and as a result, may misrepresent how safe they are. Medical A.I. ought to be assessed for its impact on actual patient care and patient outcomes—much in the way drugs are tested today—not just on how well it performs on a broad test set.
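The gap the researchers describe is easy to see with a toy evaluation. In this sketch (all numbers hypothetical), a model scores 91% overall yet catches only 40% of the rare, high-mortality subset — exactly the kind of failure a single aggregate number hides:

```python
# Hypothetical test-set results, broken out by disease subset:
# subset name -> (number of cases, number correctly detected)
results = {
    "common_benign":   (900, 870),  # 96.7% detected
    "rare_aggressive": (100, 40),   # 40.0% detected
}

# Aggregate accuracy over the whole test set.
overall = (sum(correct for _, correct in results.values())
           / sum(n for n, _ in results.values()))

# Accuracy on the worst-performing subset.
worst_subset = min(correct / n for n, correct in results.values())

print(f"overall: {overall:.2f}, worst subset: {worst_subset:.2f}")
```

Reporting the per-subset minimum alongside the average, as the authors in effect recommend, makes this failure mode visible instead of averaging it away.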