2 Comments
Roy Xing

This reminds me of some ponderings I've had about where safety checks should reside in a hierarchical architecture. In biological systems, the safety checks seem to sit higher up the chain: you can pull ligaments and break bones by accident, and in sports you have to be consciously mindful not to injure yourself.

Now of course, for robots, I still believe we should make them as safe as possible, and we have the ability to inject safety checks at every layer of the stack. But it makes me wonder why the safety checks in biological systems sit so high up. Is it because evolution, nature's optimizer, only needed this much for survival?
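The idea of injecting safety checks at every layer of a robot's stack could be sketched roughly as follows. This is a hypothetical illustration, not from either comment: all function names, the obstacle check, and the velocity/torque limits are invented for the example.

```python
# Hypothetical layered robot control stack, with an independent safety
# check at each layer. Limits and names are made up for illustration.

def plan_safe(goal, obstacles):
    """High-level check: reject goals that collide with known obstacles."""
    return goal not in obstacles

def clamp_velocity(v, v_max=1.0):
    """Mid-level check: saturate the commanded velocity."""
    return max(-v_max, min(v_max, v))

def clamp_torque(tau, tau_max=5.0):
    """Low-level check: saturate motor torque to protect the hardware."""
    return max(-tau_max, min(tau_max, tau))

def control_step(goal, obstacles, desired_v, desired_tau):
    """One tick of the stack; every layer applies its own safety check."""
    if not plan_safe(goal, obstacles):
        return None  # unsafe goal: trigger a replan at the top level
    return clamp_velocity(desired_v), clamp_torque(desired_tau)
```

The point of the sketch is that no single layer is trusted alone: even if the planner approves a goal, the lower layers still clip what actually reaches the motors.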

Avik De

That’s an interesting point! I’m not sure I’m qualified to comment on biology, but I had two thoughts: 1) maybe what you’re calling high-level safety checks is really the eventual replanning? Some things happen too fast for a conscious, brain-level decision, like a reflex. 2) Relatedly, this post https://wheremachinesthink.substack.com/p/the-case-for-world-models-part-i discusses an architecture where the low level sends error feedback up the stack. Maybe that error signal going all the way up is related to what you’re describing?
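The two ideas in this reply (fast reflex-like correction at the low level, with an error signal forwarded up for eventual replanning) could be sketched like this. This is a speculative illustration, not the architecture from the linked post: the class names, the reflex clipping, and the thresholds are all invented.

```python
# Hypothetical two-layer loop: the low level applies a fast, bounded
# reflex correction itself, and reports its tracking error upward so the
# slow high level can decide whether to replan. Thresholds are made up.

class LowLevel:
    def __init__(self, reflex_limit=0.5):
        self.reflex_limit = reflex_limit

    def step(self, command, measured):
        """Fast path: correct toward the command, but only by a bounded
        reflex-sized amount, and pass the raw error up the stack."""
        error = command - measured
        correction = max(-self.reflex_limit, min(self.reflex_limit, error))
        return measured + correction, error

class HighLevel:
    def __init__(self, replan_threshold=1.0):
        self.replan_threshold = replan_threshold

    def should_replan(self, error):
        """Slow path: only replan when the reported error is large."""
        return abs(error) > self.replan_threshold
```

In this framing, the "safety check" a person is conscious of would correspond to `should_replan` firing, well after the reflex has already acted.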