Not even the guy you're talking to, but...
The current crop of so-called 'AI' is not fundamentally different from the shit they had 20 years ago; it's just running on bigger, faster hardware now, so it *seems* to be """smarter""", when in fact it's as dumb as a box of rocks. An *amoeba* is smarter than any so-called 'AI' they have right now. The reason they're so fucking dumb is this:
>We have no idea how 'reasoning'/'cognition' works in a living brain
Seriously, we don't have a clue. 'Deep learning' is not 'reasoning' or 'thinking'; it's just statistical pattern-matching without any awareness. None of them """know""" the fundamental difference between a 'human being' and an 'inanimate object'; they're all just 'objects' with no special significance. Things any 5-year-old child knows instinctively, like the fact that a 'STOP' sign with graffiti or a sticker on it is still a 'STOP' sign, completely fuck up these poor excuses for 'AI'. Another example: in an unfamiliar driving situation, a human driver can 'think their way through it' and figure out what they need to do; these half-assed 'AIs' have to 'phone home' after coming to a complete stop in the middle of the road, and have a HUMAN take control to guide the car through it.
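For what it's worth, the STOP-sign failure mode is a real, reproducible phenomenon (researchers call these 'adversarial examples'). Here's a minimal sketch of the idea, using a hypothetical toy 2-D classifier rather than an actual sign detector, so the numbers and setup are illustrative only:

```python
# Toy sketch: a tiny logistic-regression "classifier" (no bias term, for
# simplicity) trained on two clean clusters, then nudged with the
# fast-gradient-sign trick. A small, targeted perturbation -- the math
# equivalent of a sticker on a STOP sign -- flips its decision.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 clustered near (-1, -1), class 1 near (+1, +1).
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Train by plain gradient descent on the logistic loss.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step

def predict(x):
    return int(x @ w > 0)                 # 1 = "class 1", 0 = "class 0"

x = np.array([0.5, 0.5])                  # clearly class-1 territory
x_adv = x - 0.6 * np.sign(w)              # worst-case nudge against the class

print(predict(x))      # 1: the clean input is classified correctly
print(predict(x_adv))  # 0: a small adversarial nudge flips the answer
```

The point: the model has no concept of what a 'STOP' sign *is*, only a decision boundary fitted to pixels, so a perturbation that no human would even notice can push an input straight across that boundary.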
There is no amount of 'machine learning' that can take the place of a quality our brains have that we take for granted, one that is *essential* to navigating and negotiating your way through the Real World: the ability to THINK.
We don't even have the capability of really observing a living, working human brain at the level we'd need to even *begin* to understand how 'thought' and 'cognition' and 'reasoning' work. We don't even have solid *theories* about it; that's how little we really know.
So-called 'self driving cars' will *never* be fully capable or *safe* until we can build machines that can actually *think*, and """close""" is not close enough.