As industry—across both startups and big tech—continues to scale up data collection for robotic foundation models, what role should tactile sensing play in this new era? This is a highly open-ended yet fundamental question for the field. Addressing it requires answering a series of questions spanning the full stack: hardware, system integration, and algorithms and model training.
In the foundation-model era, the central hardware question is no longer whether tactile sensing is useful, but what form of tactile sensing can realistically scale across fleets of data collectors or robots. As robotic foundation models push data collection to unprecedented scale, tactile hardware faces a fundamental tension: richness vs. scalability. Traditionally, significant emphasis has been placed on tactile richness, e.g., spatial resolution, force sensitivity, and bandwidth. However, given the new need to deploy tactile sensing at scale for robotic foundation models, we encourage researchers to reconsider which characteristics of touch sensors are truly essential for scalable, real-world use.
The value of tactile sensing emerges only when it is meaningfully integrated into the overall robotic system. Beyond the sensor itself, tactile integration introduces system-level complexity. Tactile devices often require dedicated electronics and communication protocols, adding friction to already complex robotic systems and large-scale data collection platforms. Seamlessly synchronizing tactile sensing with vision, proprioception, and control remains an open systems challenge—and a critical barrier to scaling touch beyond isolated demonstrations.
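One common way to approach the synchronization challenge is nearest-timestamp alignment across asynchronous streams. The sketch below is purely illustrative: the stream names, sampling rates, and the `max_skew` tolerance are assumptions for the example, not a reference implementation of any particular platform.

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Return the index of the timestamp in a sorted list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # pick whichever neighbor is closer to t
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def align_streams(camera_ts, tactile_ts, proprio_ts, max_skew=0.01):
    """Align tactile and proprioception samples to camera frames.

    Returns (camera_idx, tactile_idx, proprio_idx) triples; frames whose
    best tactile or proprio match is farther than max_skew seconds away
    are dropped rather than paired with stale data.
    """
    triples = []
    for ci, t in enumerate(camera_ts):
        ti = nearest(tactile_ts, t)
        pi = nearest(proprio_ts, t)
        if abs(tactile_ts[ti] - t) <= max_skew and abs(proprio_ts[pi] - t) <= max_skew:
            triples.append((ci, ti, pi))
    return triples

# hypothetical rates: camera ~30 Hz, tactile ~100 Hz, proprioception ~500 Hz
triples = align_streams(
    camera_ts=[i / 30 for i in range(3)],
    tactile_ts=[i / 100 for i in range(20)],
    proprio_ts=[i / 500 for i in range(100)],
)
```

Even this toy version surfaces a real design decision: whether to drop frames with excessive skew or to interpolate, a choice that directly affects the quality of large-scale multimodal datasets.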
With scalable and easily integrable hardware in place, the final—and perhaps most critical—question becomes: how do we effectively use tactile signals? First, how should we learn tactile representations? What abstractions best capture the information that matters for interaction—raw signals, learned latent embeddings, or discrete contact events—and which of these representations generalize across sensor designs and embodiments? Second, how should tactile sensing be incorporated into multimodal learning? Should models fuse all modalities jointly, learn meta-controllers that adaptively weight or switch between them, or train modalities sequentially—using one to guide or supervise another?
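To make the fusion alternatives concrete, the toy sketch below contrasts joint fusion (concatenating per-modality embeddings) with a meta-controller-style gated fusion (a softmax-weighted sum). Everything here is an assumption for illustration—the encoders, dimensions, and gate logits are arbitrary—and real systems would learn all of these end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy per-modality encoder: linear projection followed by tanh."""
    return np.tanh(x @ W)

def gated_fusion(embeddings, gate_logits):
    """Meta-controller-style fusion: softmax-weighted sum of modality embeddings."""
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()
    return sum(wi * e for wi, e in zip(w, embeddings))

# hypothetical per-modality feature vectors
vision = rng.normal(size=64)
tactile = rng.normal(size=32)
proprio = rng.normal(size=16)

d = 24  # shared embedding dimension (arbitrary)
Wv, Wt, Wp = (rng.normal(size=(n, d)) * 0.1 for n in (64, 32, 16))
embs = [encode(vision, Wv), encode(tactile, Wt), encode(proprio, Wp)]

# joint fusion: concatenate all modality embeddings
joint = np.concatenate(embs)
# gated fusion: adaptively weight modalities (logits would normally be predicted)
fused = gated_fusion(embs, np.array([0.2, 1.5, -0.3]))
```

The gated variant makes the trade-off explicit: the model can down-weight vision during occlusion and lean on touch during contact, whereas concatenation leaves that weighting implicit in downstream layers.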
The intended audience for this workshop includes researchers and practitioners interested in the full tactile sensing stack. We place special emphasis on how tactile sensing can contribute to robotic foundation models and large-scale embodied learning. We encourage junior researchers to participate by calling for short paper submissions and poster presentations. We also particularly encourage people from underrepresented groups to attend by providing travel support generously sponsored by our industrial partners.
The workshop will feature invited talks and panel discussions with leading experts from both academia and industry. Authors of outstanding submitted papers will be invited to give short oral presentations, fostering interaction between emerging work and established perspectives. In addition, we invite speakers from startups to give 2-minute lightning pitches on how they are tackling these challenges from an industrial perspective.
In this workshop, our goal is to bring together researchers who work on tactile sensing from various fields of robotics, including control, optimization, learning, planning, sensing, and hardware. We encourage researchers to submit work in the following areas (the list is not exhaustive):
Tentative schedule
| Time | Event |
|---|---|
| 8:30 - 8:35 (5 min) | Opening remarks |
| 8:35 - 9:00 (25 min) | Speaker 1 |
| 9:00 - 9:25 (25 min) | Speaker 2 |
| 9:25 - 9:50 (25 min) | Speaker 3 |
| 9:50 - 10:20 (30 min) | Coffee break and poster session |
| 10:20 - 10:50 (30 min) | 6 oral paper presentations (5 minutes each) |
| 10:50 - 11:15 (25 min) | Speaker 4 |
| 11:15 - 11:40 (25 min) | Speaker 5 |
| 11:40 - 11:50 (10 min) | 5 industry lightning pitches (2 minutes each) |
| 11:50 - 12:20 (30 min) | Panel discussion |
| 12:20 - 12:30 (10 min) | Best paper award and closing remarks |