Discussion about this post

Roy Xing:

I’m curious about your thoughts on having more layers instead of a pure two-level HL -> LL framework. Humans seem to do something like this with the cortex -> motor cortex -> brain stem/spinal cord. It’s interesting that Figure adopted this kind of hierarchy; any thoughts on the pros/cons of splitting the layered control architecture even further?

Also, for what it’s worth, I would vote for an Isaac Sim implementation, since it might be easier to use an RL pipeline that already comes bundled together with active developer support than to piece together your own RL stack, sim, evals, etc. But idk, it is always satisfying to build something from scratch haha

Bharath Suresh:

From what I understand:

Skill Acquisition -> Runs offline during Training

Motor Adaptation -> Runs on the robot's compute

Is there something that runs offline, but after the initial training phase?

For example, after a few hours a robot sends data about a new environment to an offline computer, which then sends feedback to the robot as it continues operating in that environment.

Similar to how auto companies provide "OTA software updates" to your car even after you bought it to fix/improve something.
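The loop being proposed could be sketched roughly like this: the robot buffers experience locally, periodically ships it to an offline trainer, and swaps in whatever updated policy comes back, analogous to an OTA update. This is a minimal illustrative sketch, not anything from the post; all class and method names (`Robot`, `OfflineTrainer`, `fine_tune`, `sync`) are made up.

```python
# Hypothetical sketch of the periodic offline-feedback loop described above.
# Every name here is invented for illustration; a real system would replace
# the stubs with actual logging, transport, and fine-tuning machinery.
from dataclasses import dataclass, field


@dataclass
class Policy:
    version: int = 0


class OfflineTrainer:
    """Stands in for the datacenter-side fine-tuning job."""

    def fine_tune(self, policy: Policy, logs: list) -> Policy:
        # Placeholder: a real trainer would run RL / imitation updates
        # on the uploaded logs before returning a new policy.
        return Policy(version=policy.version + 1) if logs else policy


@dataclass
class Robot:
    policy: Policy = field(default_factory=Policy)
    logs: list = field(default_factory=list)

    def act(self, observation: str) -> str:
        self.logs.append(observation)  # buffer experience locally
        return f"action(v{self.policy.version})"

    def sync(self, trainer: OfflineTrainer) -> None:
        """Upload buffered logs, receive an updated policy (the 'OTA' step)."""
        self.policy = trainer.fine_tune(self.policy, self.logs)
        self.logs.clear()


robot, trainer = Robot(), OfflineTrainer()
for step in range(6):
    robot.act(f"obs-{step}")
    if (step + 1) % 3 == 0:  # e.g. "after a few hours" in the comment
        robot.sync(trainer)

print(robot.policy.version)  # two sync cycles -> version 2
```

The key design point in this sketch is that `act` never blocks on the trainer: the robot keeps running on its current policy, and the slow fine-tuning happens out of band, just as OTA car updates do.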
