MIT Developing AI That Can See, Smell, Feel And Taste

Daniela Rus: Roboticist & CSAIL Lab Director.
MIT is actively developing AI that can perceive smell, taste, and potentially touch, moving beyond vision and sound. The effort, led by researchers including Paul Liang and Daniela Rus, involves creating large datasets such as SmellNet for olfactory classification and building systems such as the Anemoia Device for scent-based memory, with the goal of AI that understands the world more the way humans do. The teams combine graph neural networks with sensor technology such as gas chromatography-mass spectrometry (GC-MS) to digitize complex scent molecules, enabling AI to classify substances, detect allergens, and even generate smells from images for a richer multisensory experience.
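
To make the idea of graph-based scent classification concrete, the sketch below shows how a molecule, represented as per-atom features and an adjacency matrix, could be passed through a small graph neural network and pooled into a single scent prediction. This is a minimal illustration in plain PyTorch; the layer sizes, class count, and toy molecule are hypothetical and are not drawn from SmellNet or from MIT's actual models.

```python
# Minimal sketch of graph-based scent classification, assuming a molecule is
# given as node (atom) features plus an adjacency matrix. Dimensions, class
# count, and data are illustrative assumptions, not the SmellNet pipeline.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of neighbor averaging followed by a linear transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops; x: (N, in_dim) atom features
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = adj @ x / deg            # average messages from neighboring atoms
        return torch.relu(self.linear(msg))

class ScentClassifier(nn.Module):
    """Two message-passing layers, mean-pooled into one scent prediction."""
    def __init__(self, atom_dim=16, hidden=64, num_classes=10):
        super().__init__()
        self.gnn1 = SimpleGNNLayer(atom_dim, hidden)
        self.gnn2 = SimpleGNNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, adj):
        h = self.gnn2(self.gnn1(x, adj), adj)
        graph_embedding = h.mean(dim=0)   # pool atom embeddings into one vector
        return self.head(graph_embedding)

# Toy usage: a hypothetical 5-atom chain molecule with random features.
atoms = torch.randn(5, 16)
adj = torch.eye(5) + torch.tensor([[0, 1, 0, 0, 0],
                                   [1, 0, 1, 0, 0],
                                   [0, 1, 0, 1, 0],
                                   [0, 0, 1, 0, 1],
                                   [0, 0, 0, 1, 0]], dtype=torch.float)
logits = ScentClassifier()(atoms, adj)
print(logits.softmax(dim=-1))  # predicted distribution over scent classes
```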

Despite dramatic progress in artificial intelligence, today’s machines remain fundamentally limited by the absence of several core human senses. While vision and language processing dominate modern AI systems, crucial sensory modalities such as touch, smell, and taste remain underdeveloped. At the Massachusetts Institute of Technology, this gap has become a central focus of research led by Daniela Rus, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), whose work explores how embodied intelligence depends on rich, multimodal perception rather than computation alone.

Professor Rus and her collaborators argue that the lack of physical and chemical sensing is one of the primary reasons robots struggle outside controlled environments. Vision-based AI excels at recognizing shapes and patterns, but it cannot reliably infer texture, softness, temperature, chemical composition, or material stability. Humans continuously compensate for these limitations through tactile exploration and chemical sensing, yet most robots operate with sparse force sensors and no olfactory or gustatory input, leaving them unable to adapt to real-world uncertainty.

A cornerstone of MIT’s work in this area is the GelSight project, originally developed within CSAIL and MIT’s vision community. GelSight is a high-resolution tactile sensing technology that allows robots to “see” surface geometry through touch by capturing fine-grained deformations when an object is pressed against a soft sensor. This approach has enabled significant improvements in robotic manipulation, allowing machines to detect slippage, texture, and micro-geometry that visual systems alone cannot resolve. Rus has emphasized that tactile feedback of this kind is essential for robots intended to operate in human-centered environments such as homes, hospitals, and factories.
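
As an illustration of the kind of signal such a touch sensor provides, the sketch below compares a GelSight-style gel image against a reference frame of the unloaded gel to flag where contact has deformed the surface. It is a simplified stand-in, not the actual GelSight pipeline, which reconstructs fine surface geometry from the gel images themselves; the frame sizes, threshold, and synthetic data here are assumptions for the example.

```python
# Minimal sketch of tactile contact detection from a GelSight-style image,
# assuming a reference frame of the unloaded gel and a current frame.
# The real sensor recovers full surface geometry; here we only flag
# where the gel has visibly deformed, as an illustration.
import numpy as np

def contact_mask(reference: np.ndarray, current: np.ndarray,
                 threshold: float = 12.0) -> np.ndarray:
    """Return a boolean mask of pixels where the gel image changed noticeably."""
    diff = np.abs(current.astype(np.float32) - reference.astype(np.float32))
    # Collapse color channels (if any) into a single per-pixel deformation score.
    score = diff.mean(axis=-1) if diff.ndim == 3 else diff
    return score > threshold

def contact_summary(mask: np.ndarray) -> dict:
    """Summarize the contact region: area and centroid (useful for slip tracking)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return {"in_contact": False, "area": 0, "centroid": None}
    return {"in_contact": True,
            "area": int(mask.sum()),
            "centroid": (float(ys.mean()), float(xs.mean()))}

# Toy usage with synthetic frames: a brightened patch stands in for an indentation.
ref = np.full((240, 320, 3), 128, dtype=np.uint8)
cur = ref.copy()
cur[100:140, 150:200] += 40  # simulated deformation region
print(contact_summary(contact_mask(ref, cur)))
# Tracking how the centroid moves between frames gives a crude slip signal.
```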

Beyond touch, MIT researchers under Rus’s broader research agenda are also investigating machine olfaction and chemical perception, often referred to as electronic noses. These systems aim to detect and classify chemical signatures associated with spoilage, contamination, or environmental hazards. Unlike vision, smell does not map cleanly onto geometric representations, making it a far more complex learning problem. Chemical signals are diffuse, context-dependent, and difficult to label at scale, requiring new sensing architectures and learning models that can operate under uncertainty.
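
A rough sense of how an electronic-nose classifier might operate under uncertainty is sketched below: sensor-channel readings are classified with a probabilistic model, and predictions below a confidence threshold are flagged rather than trusted. The sensor channels, substance labels, and threshold are illustrative assumptions, not a description of MIT's systems.

```python
# Minimal sketch of an "electronic nose" classifier, assuming each sample is a
# fixed-length vector of gas-sensor channel readings with a substance label.
# Sensor channels, labels, and the confidence threshold are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: 3 substance classes, 8 sensor channels each.
labels = ["fresh", "spoiled", "contaminated"]
X_train = np.vstack([rng.normal(loc=i, scale=0.5, size=(100, 8)) for i in range(3)])
y_train = np.repeat(np.arange(3), 100)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

def classify_odor(reading: np.ndarray, min_confidence: float = 0.7) -> str:
    """Predict a substance, abstaining when the chemical signature is ambiguous."""
    proba = model.predict_proba(reading.reshape(1, -1))[0]
    best = int(np.argmax(proba))
    if proba[best] < min_confidence:
        return "uncertain -- flag for human review"
    return labels[best]

# A reading near the 'spoiled' cluster, and one that sits between clusters.
print(classify_odor(rng.normal(loc=1.0, scale=0.5, size=8)))
print(classify_odor(rng.normal(loc=0.5, scale=2.0, size=8)))
```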

Taste, closely related to smell, presents an even greater challenge. While less central to robotics than touch or olfaction, gustatory sensing has implications for food safety, healthcare, and pharmaceutical manufacturing. MIT’s approach treats taste not as a standalone sense but as part of a broader chemical sensing framework, where molecular patterns are interpreted through learned representations rather than explicit symbolic rules.

Rus’s work frames these efforts as part of a shift toward embodied AI, the idea that intelligence emerges through interaction with the physical world. From this perspective, cognition is not merely the manipulation of symbols or pixels, but a continuous feedback loop between sensing, action, and learning. Without touch, smell, and taste, robots remain disembodied reasoners, capable of impressive demonstrations yet prone to failure when confronted with real-world variability.

The implications of this research extend across industries. In healthcare, robots lacking tactile sensitivity struggle with patient interaction and rehabilitation tasks. In manufacturing and logistics, the absence of touch limits adaptability to material variation. In agriculture and food handling, the inability to smell or chemically assess products undermines quality control. Rus has consistently argued that solving these problems requires integrating new sensor technologies with learning systems that can reason across heterogeneous sensory inputs.

Ultimately, the work led by Daniela Rus and her colleagues at MIT suggests that the next major leap in artificial intelligence will not come solely from larger models or more data, but from restoring the sensory richness that machines never possessed. By teaching AI systems to feel, smell, and chemically interpret their surroundings, these projects aim to move robotics beyond brittle automation toward systems that can safely, flexibly, and intelligently coexist with humans in the real world.

"Loading scientific content..."
"If you want to find the secrets of the universe, think in terms of energy, frequency and vibration" - Nikola Tesla
Viev My Google Scholar