Articles Tagged with Social Robotics


Vine Robots: Achieving Locomotion and Construction by Growth

Sponsor: Stanford University
Allison Okamura [email protected] (Principal Investigator)
Jonathan Fan (Co-Principal Investigator)
Sean Follmer (Co-Principal Investigator)
Award Number: 1637446


In contrast to legged robots inspired by locomotion in animals, this project explores robotic locomotion inspired by plant growth. Specifically, the project creates the foundation for classes of robotic systems that grow in a manner similar to vines. Within its accessible region, a vine robot provides not only sensing, but also a physical conduit — such as a water hose that grows to a fire, or an oxygen tube that grows to a trapped disaster victim. The project will demonstrate vine-like robots able to configure or weave themselves into three-dimensional objects and structures such as ladders, antennae for communication, and shelters. These novel co-robots aim to improve human safety, health, and well-being at a lower cost than conventional robots achieving similar outcomes. Because of their low cost, vine robots offer exceptional educational opportunities; the project will include creation and testing of inexpensive educational modules for K-12 students.

This work broadens the concept of bio-inspired robots from animals to plants, and the concept of locomotion from point-to-point movement to growth. In contrast to traditional terrestrial robots, which tend to be based on the animal modality of repeated intermittent contacts with a surface, the vine modality begins with a root harboring power and logic, and extends by growth, accumulating permanent contacts throughout the process. This project will demonstrate a soft robot capable of growing to over 100 times its stored length, withstanding being stepped on, extending through gaps a quarter of its height, climbing stairs and vertical walls, and navigating rough, slippery, sticky, and aquatic terrain. The design adopts a bio-inspired strategy of moving material through the core to the tip, allowing the established part of the robotic vine to remain stationary with respect to the environment. A thin-walled tube fills with air as it grows, so the vine robot can be initially stored in a small volume at its base and extend very large distances when controllably deployed. Mechanical modeling and new design tools will enable the development of task-specific vine robots for search and rescue, reconfigurable communication antennas, and construction. The paradigm of achieving movement and construction through growth will produce new technologies for integrated actuation, sensing, planning, and control; novel principles and software tools for robot design; and humanitarian applications that push the boundaries of collaborative robotics.
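The tip-extension kinematics described above can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the project's actual controller: the established body stays fixed relative to the environment, and new length appears only at the tip at a chosen growth rate.

```python
# Minimal kinematic sketch of tip-extension ("eversion") growth.
# Illustrative only: the constant growth rate and parameters below
# are assumptions, not measurements from the vine-robot project.

def grow(initial_length, rate, dt, steps):
    """Integrate tip extension; the body behind the tip never moves."""
    length = initial_length
    history = [length]
    for _ in range(steps):
        length += rate * dt  # material everts at the tip only
        history.append(length)
    return history

# A robot stored at 0.1 m, growing at 0.35 m/s for 30 s,
# exceeds 100x its stored length:
lengths = grow(0.1, 0.35, 1.0, 30)
print(lengths[-1])               # final length in meters
print(lengths[-1] / lengths[0])  # extension ratio
```

Because growth is additive at the tip, the body never slides against the environment, which is why the robot can anchor itself on contacts as it extends.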


Stanford University Thingpedia Open Source Research: Autonomy and Privacy with Open Federated Virtual Assistants

Sponsor: Stanford University
Award Number: 1900638
Monica Lam [email protected] (Principal Investigator)
James Landay (Co-Principal Investigator)
Michael Bernstein (Co-Principal Investigator)
Christopher Manning (Co-Principal Investigator)
David Mazieres (Co-Principal Investigator)


Virtual assistants, and more generally linguistic user interfaces, will become the norm for mobile and ubiquitous computing. This research aims to create the best open virtual assistant designed to respect privacy. Rather than handling only simple commands, the assistant will perform complex tasks that connect different Internet-of-Things devices and web services. Users will also be able to decide who may access their data, and what, when, and how it is shared. By making the technology open source, this research helps create a competitive industry offering a variety of innovative products, instead of closed platform monopolies.

This project unifies internet services and "Internet of Things" (IoT) devices into an interoperable web, with an open, crowdsourced, universal encyclopedia of public application interfaces called Thingpedia. Resources in Thingpedia can be connected together using ThingTalk, a high-level virtual assistant language. Another key contribution will be the Linguistic User Interface Network (LUInet), which can understand how to operate the world's digital interfaces in natural language. LUInet uses deep learning to translate natural language into ThingTalk. Privacy with fine-grained access control is provided through open-source federated virtual assistants. Transparent third-party sharing is supported by recording human-understandable contracts and data transactions on a scalable blockchain.
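The core ThingTalk idea of composing Thingpedia resources, a monitored event stream passed through a filter into an action, can be sketched in ordinary Python. The service names and rule shape below are invented for illustration; the real ThingTalk language and Thingpedia interfaces differ.

```python
# Hypothetical sketch of a trigger -> filter -> action pipeline, in the
# spirit of a ThingTalk-style rule (names invented for illustration):
#   monitor thermostat.temperature, value > 25 => fan.on()

def run_rule(stream, predicate, action):
    """Apply the action to every stream event matching the predicate."""
    fired = []
    for event in stream:
        if predicate(event):
            fired.append(action(event))
    return fired

readings = [{"value": 21}, {"value": 26}, {"value": 28}]
results = run_rule(
    readings,
    predicate=lambda e: e["value"] > 25,          # the filter clause
    action=lambda e: f"fan.on at {e['value']}C",  # the invoked action
)
print(results)  # the action fires only for the two hot readings
```

In the federated design, such a rule would run on the user's own assistant, so the raw sensor stream never leaves the device unless a sharing contract permits it.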

This research contributes to the creation of a decentralized computing ecosystem that protects user privacy and promotes open competition. Natural-language programming expands the utility of computing to ordinary people, reducing the programming bottleneck. All the technologies developed in this project will be made available as open source, supporting further research and development by academia and industry. Thingpedia and the ThingTalk dataset will be an important contribution to natural language processing. The large-scale research program for college and high-school students, with a focus on diverse students, broadens participation and teaches technology, research, and the importance of privacy. All information related to this project, including papers, data, code, and results, will be available until at least 2026.

This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
