EECS assistant professor Anca Dragan talks with David Brancaccio about how her native Romania gave her a leg up on studying human-robot interactions, and how she is helping develop those skills in more would-be scientists.
EECS assistant professor Anca Dragan writes about the AI4ALL education program she’s leading at Berkeley, BAIR Camp, where high school students will explore human-centered artificial intelligence.
Berkeley Engineering is a key partner in the new $253 million Advanced Robotics Manufacturing Innovation Hub, launched this month by the Department of Defense to create and deploy next-generation robotic technology.
In a profile of artificial intelligence pioneers, Peter Norvig (Ph.D.’86 CS), director of research at Google, outlines his thoughts on human-machine partnerships and the disparate goals of neuroscience and AI research.
Emerging augmented reality and virtual reality technologies are opening up a new frontier of possibilities for researchers at Berkeley’s new Center for Augmented Cognition.
China’s Huawei on Tuesday announced a $1 million partnership between its Noah’s Ark Laboratory and the Berkeley Artificial Intelligence Research (BAIR) Lab to perform basic research into machine learning, computer vision and other areas of artificial intelligence.
EECS department chair Jitendra Malik, a researcher in computer vision for three decades, doesn’t own a Tesla, but he has advice for people who do. “Knowing what I know about computer vision,” he said, “I wouldn’t take my hands off the steering wheel.”
Twenty years ago, Stuart Russell co-wrote a book titled Artificial Intelligence: A Modern Approach, destined to become the dominant text in its field. Near the end of the book, he posed a question: “What if A.I. does succeed?”
With its acquisition of self-driving truck startup Otto, Uber is hoping for a shortcut in the race to profit from driverless vehicles. But research engineer Steven Shladover of Berkeley’s California Partners for Advanced Transportation Technology Program (PATH) sees challenges – real and perceived – in putting 40-ton trucks on the road with only software behind the wheel.
As companies contemplate deploying self-driving cars, trucks and delivery drones, Berkeley engineers are embarking on a major project to improve how they interact with humans.
The NSF on Tuesday awarded $4.6 million to VeHICaL (Verified Human Interfaces, Control, and Learning for Semi-Autonomous Systems), a project led by EECS professor Sanjit Seshia that seeks to “impact the way humans collaborate and interact with automation.” Researchers include EECS professors Ruzena Bajcsy, Shankar Sastry, Bjoern Hartmann, Claire Tomlin, and Tom Griffiths.
Call it artificial intelligence with a human touch. This week, two California universities separately announced new centers devoted to studying the ways in which AI can help humanity.
Berkeley artificial intelligence expert Stuart Russell will lead a new Center for Human-Compatible Artificial Intelligence, launched this week. The primary focus of the multi-university center is to ensure that AI systems are beneficial to humans.
Anthony Levandowski (B.S.’02, M.S.’03 IEOR) is one of the most influential engineers behind self-driving vehicles. Now that Uber has bought his latest startup, Otto, he talks about how it all started with a phone call from Mom.