How tactile sensing improves model performance

Vision-language-action models are the current state of the art in robotic manipulation. They still cannot pick up a potato chip without crushing it.

That is the result published earlier this year by the team behind the Video Tactile Action Model (VTAM). On a potato chip pick-and-place task, one that demands high-fidelity force awareness because vision alone cannot distinguish a crushing grasp from a holding one, VTAM outperformed the π0.5 baseline by 80%. Across the broader contact-rich benchmark suite, VTAM held a 90% average success rate.¹

The chip is an adversarial example, and that is precisely why it is the right test. At the point of grasp, only contact dynamics carry useful signals. Pressure, vibration, and force/torque tell the policy what is happening, correcting the visual estimation errors that vision-only models cannot detect on their own. A camera, however high its resolution, cannot do that work.
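To make that concrete, here is a minimal sketch of what force-aware grasping looks like in code. Everything in it is an illustrative assumption: the SimTactileGripper stand-in, the force and vibration thresholds, and the function names are hypothetical, not VTAM's architecture or any real sensor API. The point is the control decision a camera cannot make: keep closing until the measured contact force reaches a holding level, and back off if it approaches the crushing limit.

```python
"""Minimal sketch of a force-aware grasp loop for a fragile object.

All names and numbers here are illustrative assumptions, not a real robot
or sensor API. The logic is what matters: contact force, not a
vision-predicted gripper width, decides when to stop closing.
"""

FRAGILE_FORCE_LIMIT_N = 0.8    # assumed force above which a chip cracks
HOLD_FORCE_N = 0.3             # assumed force sufficient to hold the chip
SLIP_VIBRATION_THRESH = 0.05   # assumed high-frequency signal indicating slip


class SimTactileGripper:
    """Toy stand-in for a gripper with a fingertip force sensor."""

    def __init__(self, object_width=0.02, stiffness=120.0):
        self.width = 0.08           # fully open, in metres
        self.object_width = object_width
        self.stiffness = stiffness  # newtons per metre of squeeze

    def move_to(self, width):
        self.width = max(width, 0.0)

    def read_force(self):
        # Force rises only once the fingers start compressing the object.
        squeeze = max(self.object_width - self.width, 0.0)
        return self.stiffness * squeeze

    def read_vibration(self):
        # No slip in this toy model; a real tactile sensor would report
        # high-frequency energy here.
        return 0.0


def grasp_with_tactile_feedback(gripper, step=0.0005):
    """Close until contact force reaches a holding level, never exceeding
    the fragile-object limit: the correction a camera cannot provide."""
    while gripper.width > 0.0:
        force = gripper.read_force()
        if force >= FRAGILE_FORCE_LIMIT_N:
            gripper.move_to(gripper.width + step)   # back off: crushing risk
            return "holding (force-limited)"
        if force >= HOLD_FORCE_N and gripper.read_vibration() < SLIP_VIBRATION_THRESH:
            return "holding (stable contact)"       # firm, non-slipping grasp
        gripper.move_to(gripper.width - step)       # keep closing gently
    return "missed grasp"


if __name__ == "__main__":
    print(grasp_with_tactile_feedback(SimTactileGripper()))
```

A learned model does not hard-code thresholds like these; it folds the tactile signal into its action predictions. But the information that lets it stop short of crushing the chip is the same: contact force, not pixels.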

By Jennifer Kwiatkowski on May 07, 2026 in Physical AI. 4 min read.