Video Friday: Autonomous Robots Learn By Doing in This Factory

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
ICRA 2026: 1–5 June 2026, VIENNA
Enjoy today’s videos!
To train the next generation of autonomous robots, scientists at Toyota Research Institute are working with Toyota Manufacturing to deploy them on the factory floor.
[ Toyota Research Institute ]
Thanks, Erin!
This is just one story (of many) about how we tried, failed, and learned how to improve our drone delivery system.
Okay but like you didn’t show the really cool bit...?
[ Zipline ]
We’re introducing KinetIQ, an AI framework developed by Humanoid, for end-to-end orchestration of humanoid robot fleets. KinetIQ coordinates wheeled and bipedal robots within a single system, managing both fleet-level operations and individual robot behaviour across multiple environments. The framework operates across four cognitive layers, from task allocation and workflow optimization to VLA-based task execution and reinforcement-learning-trained whole-body control, and is shown here running across our wheeled industrial robots and bipedal R&D platform.
[ Humanoid ]
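If you're wondering what "fleet-level orchestration" means in practice, here's a minimal Python sketch of the top layer's job: matching tasks to a mixed fleet of wheeled and bipedal robots. Every class and function name here is made up for illustration; Humanoid hasn't published KinetIQ's internals.

```python
# Illustrative sketch only; all names are hypothetical, not KinetIQ's API.
from dataclasses import dataclass


@dataclass
class Robot:
    name: str
    kind: str            # "wheeled" or "bipedal"
    busy: bool = False


@dataclass
class Task:
    goal: str
    preferred_kind: str  # which platform suits the task


def allocate(tasks, fleet):
    """Top layer: assign each task to an idle robot, matching type first."""
    plan = []
    for task in tasks:
        idle = [r for r in fleet if not r.busy]
        # Prefer the matching platform; fall back to any idle robot.
        idle.sort(key=lambda r: r.kind != task.preferred_kind)
        if idle:
            idle[0].busy = True
            plan.append((idle[0], task))
    return plan


def execute(robot, task):
    """Stand-in for the lower layers: a real stack would invoke a VLA
    policy and an RL whole-body controller here."""
    print(f"{robot.name} ({robot.kind}) -> {task.goal}")


if __name__ == "__main__":
    fleet = [Robot("W1", "wheeled"), Robot("B1", "bipedal")]
    jobs = [Task("move tote", "wheeled"), Task("reach high shelf", "bipedal")]
    for robot, task in allocate(jobs, fleet):
        execute(robot, task)
```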
What if a robot gets damaged during operation? Can it still perform its mission without immediate repair? Inspired by the self-embodied resilience strategies of stick insects, we developed a decentralized adaptive resilient neural control system (DARCON). This system allows legged robots to autonomously adapt to limb loss, ensuring mission success despite mechanical failure. This innovative approach leads to a future of truly resilient, self-recovering robotics.
[ VISTEC ]
Thanks, Poramate!
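The core idea, as we read it, is that each leg coordinates from local information rather than a central plan, so losing a limb just means the survivors re-space themselves. Here's a toy Python sketch of that re-spacing; it is not the DARCON controller, and all the function names are ours.

```python
# Toy sketch of decentralized gait adaptation, not the DARCON controller.
import math


def phase_offsets(working_legs):
    """Evenly re-space gait phases among whichever legs still work."""
    n = len(working_legs)
    return {leg: i / n for i, leg in enumerate(sorted(working_legs))}


def leg_command(phase, t, freq=1.0):
    """Each leg computes its swing signal from its local phase alone."""
    return math.sin(2 * math.pi * (freq * t + phase))


if __name__ == "__main__":
    legs = {"L1", "L2", "L3", "R1", "R2", "R3"}
    legs.discard("R2")                 # simulate losing a limb mid-mission
    offsets = phase_offsets(legs)      # the survivors re-coordinate locally
    for leg, phase in sorted(offsets.items()):
        print(leg, round(leg_command(phase, t=0.0), 3))
```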
This animation shows Perseverance’s point of view during a drive of 807 feet (246 meters) along the rim of Jezero Crater on Dec. 10, 2025, the 1,709th Martian day, or sol, of the mission. Captured over two hours and 35 minutes, 53 Navigation Camera (Navcam) image pairs were combined with rover data on orientation, wheel speed, and steering angle, as well as data from Perseverance’s Inertial Measurement Unit, and placed into a 3D virtual environment. The result is this reconstruction with virtual frames inserted about every 4 inches (0.1 meters) of drive progress.
[ NASA JPL ]
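The "virtual frame every 0.1 meters" trick is path reconstruction plus resampling by arc length. Here's a rough Python sketch of that idea with fabricated odometry increments; the actual JPL pipeline fuses Navcam imagery with the IMU and wheel data and is far more involved.

```python
# Illustrative only: fabricated data, not the JPL reconstruction pipeline.
import math


def dead_reckon(steps):
    """Integrate (distance_m, heading_rad) increments into 2D positions.
    On the rover, heading would come from the IMU and distance from
    wheel speed and steering angle; these increments are made up."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for dist, heading in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
        path.append((x, y))
    return path


def resample(path, spacing=0.1):
    """Emit a 'virtual frame' position every `spacing` meters of drive."""
    frames = [path[0]]
    travelled, next_mark = 0.0, spacing
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while seg > 0 and travelled + seg >= next_mark:
            t = (next_mark - travelled) / seg
            frames.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            next_mark += spacing
        travelled += seg
    return frames


if __name__ == "__main__":
    steps = [(0.25, 0.0)] * 4 + [(0.25, 0.3)] * 4   # 2 m with a gentle turn
    frames = resample(dead_reckon(steps))
    print(f"{len(frames)} virtual frames over {0.25 * len(steps):.1f} m")
```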
−47.4°C, 130,000 steps, 89.75°E, 47.21°N… On the extremely cold snowfields of Altay, the birthplace of human skiing, Unitree’s humanoid robot G1 left behind a unique set of marks.
[ Unitree ]
Representing and understanding 3D environments in a structured manner is crucial for autonomous agents to navigate and reason about their surroundings. In this work, we propose an enhanced hierarchical 3D scene graph that integrates open-vocabulary features across multiple abstraction levels and supports object-relational reasoning. Our approach leverages a Vision Language Model (VLM) to infer semantic relationships. Notably, we introduce a task reasoning module that combines a Large Language Model (LLM) and a VLM to interpret the scene graph’s semantic and relational information, enabling agents to reason about tasks and interact with their environment more intelligently. We validate our method by deploying it on a quadruped robot in multiple environments and tasks, highlighting its ability to reason about them.
[ Norwegian University of Science & Technology, Autonomous Robots Lab ]
Thanks, Kostas!
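For readers who want the flavor of a hierarchical scene graph, here's a bare-bones Python sketch: nodes carry open-vocabulary labels at building, room, and object levels, plus relations of the kind a VLM might infer. The structure and names are our own simplification, not the authors' API, and a real system would store embeddings rather than plain strings.

```python
# Simplified sketch of a hierarchical scene graph; not the authors' code.
from dataclasses import dataclass, field


@dataclass
class Node:
    label: str       # open-vocabulary label, e.g. "mug"
    level: str       # "building" | "room" | "object"
    children: list = field(default_factory=list)
    relations: list = field(default_factory=list)   # (predicate, other Node)

    def add(self, child):
        self.children.append(child)
        return child


def find(node, label):
    """Walk the hierarchy for a label, as a task reasoner might."""
    if node.label == label:
        return node
    for child in node.children:
        hit = find(child, label)
        if hit:
            return hit
    return None


if __name__ == "__main__":
    lab = Node("lab", "building")
    kitchen = lab.add(Node("kitchen", "room"))
    table = kitchen.add(Node("table", "object"))
    mug = kitchen.add(Node("mug", "object"))
    mug.relations.append(("on top of", table))   # a relation a VLM might infer
    target = find(lab, "mug")
    for predicate, other in target.relations:
        print(f"{target.label} is {predicate} the {other.label}")
```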
We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a Reinforcement Learning (RL) control policy that enhances both recovery and hovering performance.
[ HO Lab via IEEE Robotics and Automation Letters ]
In this work, we present SkyDreamer, to the best of our knowledge, the first end-to-end vision-based autonomous drone racing policy that maps directly from pixel-level representations to motor commands.
[ MAVLab ]
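"Pixels in, motor commands out" is easy to state and hard to train. As a shape sketch only, here's what such a mapping looks like as a tiny PyTorch module (assuming you have torch installed); SkyDreamer's actual learned policy is, of course, nothing this simple.

```python
# Shape sketch of a pixels-to-motors policy; not SkyDreamer's architecture.
import torch
import torch.nn as nn


class PixelToMotor(nn.Module):
    """Camera image in, four normalized rotor commands out."""

    def __init__(self, channels=1, motors=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, motors), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)


if __name__ == "__main__":
    policy = PixelToMotor()
    frame = torch.rand(1, 1, 64, 64)   # stand-in for a grayscale camera frame
    print(policy(frame))               # four values in [0, 1]
```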
This video showcases AI WORKER equipped with five-finger hands performing dexterous object manipulation across diverse environments. Through teleoperation, the robot demonstrates precise, human-like hand control in a variety of manipulation tasks.
[ Robotis ]
Autonomous following, 45° slope climbing, and reliable payload transport in extreme winter conditions — built to support operations where environments push the limits.
[ DEEP Robotics ]
Living architectures, from plants to beehives, adapt continuously to their environments through self-organization. In this work, we introduce the concept of architectural swarms: systems that integrate swarm robotics into modular architectural façades. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications.
[ SSR Lab via Science Robotics ]
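Self-organization of this kind usually comes down to simple local rules. Here's a toy Python example in that spirit: a row of façade modules, each blending its neighbors' openings with its own light reading, produces a coherent pattern with no central controller. It's purely illustrative, not the Swarm Garden code.

```python
# Toy local-rule demo in the spirit of the paper; not the Swarm Garden code.
def step(openings, light):
    """One decentralized update: each module blends its neighbors'
    average opening with its own light sensor reading."""
    updated = []
    for i, value in enumerate(openings):
        left = openings[i - 1] if i > 0 else value
        right = openings[i + 1] if i < len(openings) - 1 else value
        updated.append(0.5 * (left + right) / 2 + 0.5 * light[i])
    return updated


if __name__ == "__main__":
    openings = [0.0] * 8
    light = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # sun on the middle
    for _ in range(10):
        openings = step(openings, light)
    print([round(v, 2) for v in openings])
```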
Here are a couple of IROS 2025 keynotes, featuring Bram Vanderborght and Kyu Jin Cho.
[ IROS 2025 ]
