
Understanding Moravec's Paradox

Any attention to Moravec's paradox is good imo, because it's important.

That said, the article starts with several problems.

1) Claims that it isn't a paradox, which is just silly. A paradox is a counter-intuitive result. The result is generally counter-intuitive whatever explanation you give. Zeno's paradox remains a paradox despite calculus essentially explaining it, etc.

2) Calls the article "Understanding Moravec's Paradox" when it should be called "My Explanation of Moravec's Paradox".

3) The author's final explanation seems kind of simplistic: "Human activities just have a large search space". IDK. AI still struggles with human activities other than walking, too. I mean, "not enough data" is an explanation for why neural networks can't do a bunch of things. But not all programs are neural networks. One of the things humans are really good at is learning from a few examples. A serious explanation of Moravec's Paradox would have to explain this as well imo.

2 days ago | joe_the_user

> I mean, "not enough data" is an explanation for why neural networks can't do a bunch of things... One of the things humans are really good at is learning from a few examples

I dispute the search space problem for something like folding clothes. Like a lot of human actions in space, folding clothes and other motor tasks are hierarchical sequences of smaller tasks that are strung together, similar to a sentence or paragraph of text.

We can probably learn things from each other from a few examples because we are leaning on a large library of subtasks that we have all learned or that are innate, and the actual novel learning of sequencing and ordering is relatively small to get to the new reward.

I expect we'll soon get AIs that have part of their training be unsupervised RL in a physics simulation, if that's not being done already.

2 days ago | neerajsi

> Like a lot of human actions in space, folding clothes and other motor tasks are hierarchical sequences of smaller tasks that are strung together

I disagree. You can model those tasks as hierarchical sequences of smaller tasks, but the terminal goal of folding clothes is to turn a pile of unfolded clothes into a neat pile of folded clothes.

The reason you would break down the task is that getting between those two states, with the only reward signal being "the clothes are now folded", takes a lot of steps and, given the possible actions the robot can take, results in a large search space.

2 days ago | hexhowells
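The search-space blowup described above can be made concrete with a toy back-of-envelope calculation. All the numbers here are illustrative assumptions, not measurements: suppose a robot has some number of primitive motor actions per timestep and needs a long action sequence to reach the sparse "clothes are folded" reward, versus sequencing a handful of already-learned subtasks.

```python
# Toy illustration (all numbers are made up): why a sparse terminal
# reward makes flat motor search intractable, and why reusing a
# library of learned subtasks shrinks the search dramatically.

def flat_search_space(actions_per_step: int, steps: int) -> int:
    """States a flat search over raw motor commands must consider."""
    return actions_per_step ** steps

def hierarchical_search_space(subtasks: int, sequence_len: int) -> int:
    """If primitive skills (grasp, lift, flip...) are already learned,
    the robot only searches over orderings of known subtasks."""
    return subtasks ** sequence_len

flat = flat_search_space(actions_per_step=20, steps=50)
hier = hierarchical_search_space(subtasks=8, sequence_len=5)

print(f"flat: ~10^{len(str(flat)) - 1} states, hierarchical: {hier} states")
```

The gap between the two numbers is the whole argument: the "novel learning" left to do on top of a subtask library is tiny compared to searching over raw motor commands.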

The human ability to learn from few examples can be explained with evolution (and thus search). We evolved to be fast learners because it was key to our survival. If you touched fire and felt pain, you had better learn quickly not to keep touching it. This learning from reward signals (neurotransmitters) in our brains generalises to pretty much all learning tasks.

2 days ago | hexhowells
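The "touched fire once, never again" idea in the comment above can be sketched as a one-shot value update. This is a minimal illustration, not anyone's actual model of the brain: the learning rate and reward magnitudes are made-up numbers, chosen so that a single strongly negative experience is enough to flip the value estimate.

```python
# Minimal one-shot value update sketching fast reward learning.
# All constants are illustrative assumptions.

def update(value: float, reward: float, lr: float = 0.9) -> float:
    """Exponential-moving-average value update toward the new reward."""
    return value + lr * (reward - value)

v_touch_fire = 0.0                                   # no prior opinion
v_touch_fire = update(v_touch_fire, reward=-100.0)   # one painful trial
print(v_touch_fire)                                  # strongly negative already
```

With a high learning rate, one trial dominates the estimate; a low learning rate would instead need many repetitions, which is the slow-learning regime the comment argues evolution selected against.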

Everything can "be explained by evolution" but such an explanation doesn't tell you how a particular form serves a particular task.

a day ago | joe_the_user

The point is that to be good at 'learning from a few examples', the architecture of the human brain had to be constructed from an enormous amount of trial-and-error data. This is not something you can just brush off or ignore. 'Not enough data' is a perfectly valid basis for a 'serious' explanation.

7 hours ago | og_kalu

Indeed, also ideally, the 2 second rule.

2 days ago | cwmoore

>> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as tasks that are easy for humans are difficult for machines and vice versa.

From Wikipedia, quoting Hans Moravec:

Moravec's paradox is the observation that, as Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

https://en.wikipedia.org/wiki/Moravec's_paradox

Note that Moravec is not saying anything about "much less computation", and he's also not talking about "reasoning", particularly since he was writing in the 1980s, when AI systems excelled at reasoning (because they were still predominantly logic-based, not LLMs; then again, that's just a couple of years before the AI winter of the '90s hit and took all that away).

In my opinion the author should have started by quoting Moravec directly instead of paraphrasing so that we know he's really discussing Moravec's saying and not his own, idiosyncratic, interpretation of it.

2 days ago | YeGoblynQueenne

> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as tasks that are easy for humans are difficult for machines and vice versa.

He states multiple times that this description is incorrect, but never gives a reason why.

Then he tries to simplify the paradox to a question of degree, e.g. "hard" problems for computers just have a larger search space and require more compute.

But wasn't a big part of the paradox also that we didn't even have insight into how the problems could be solved?

E.g. if you play chess or do math as a human, you're consciously aware of the patterns, strategies and "algorithms" you use, and there is a clear path to formalizing them so a computer could recreate them.

However, with vision, walking, "thinking", etc., the processes are entirely subconscious and we get very little information about the "algorithms" by introspection. Additionally, not only are the environment and the input data chaotic and "messy", but so is the goal of what we want to achieve in the first place. If you've ever hand-labeled a classification corpus, you've experienced this firsthand: if the classification criteria are even moderately abstract, labelers will often disagree on how to label individual examples.

Machine learning didn't really solve this problem; it just sort of routed around it and swept it under the rug: instead of trying to formulate a clear objective, just come up with a million examples and have the algorithm guess the objective from the examples.

I think this kind of stuff is meant with "the hard problems are easy and the easy problems are hard".

2 days ago | xg15
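The labeler-disagreement point above can be quantified; a standard measure is Cohen's kappa, which corrects raw agreement for what two annotators would agree on by chance. A self-contained sketch with made-up labels (the annotations below are hypothetical, not from any real corpus):

```python
# Cohen's kappa for two annotators over the same items.
# The annotation data below is invented for illustration.
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n          # raw agreement
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lbl] * cb[lbl] for lbl in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labeling a moderately abstract category:
ann1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
ann2 = ["pos", "neg", "neg", "neg", "pos", "pos", "pos", "neg"]
print(round(cohens_kappa(ann1, ann2), 2))  # well below 1.0 despite 6/8 raw agreement
```

A kappa well below 1.0 on a "simple" labeling task is exactly the messiness being described: even the objective itself is only loosely agreed upon.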

Why is the name "Moravec" spelled correctly twice in the article, but misspelled when it appears as link text?

2 days ago | qrios


Put all the bad robots in jail, for UBI.

EDIT: someone has to order the license plates