I'm an absolute layman with a general interest in artificial intelligence, and one question keeps crossing my mind - I wonder if you can help me get closer to understanding the answer:
Why not focus more on the “artificial” bit than the “intelligence” bit?
The handful of books on the subject I've dived into all wade straight into the deep end and explain that defining intelligence is difficult. Why not explore the concept and value of "artificial" instead, and try to understand better what makes AI appear smart to us and what does not? What makes a system indistinguishable from a reasonable amount of biological intelligence? It's not as if we humans possess the utmost, ultimate intelligence - just a meaningful bit more than animals have.
For us to call a synthetic intelligence "intelligent" for the first time would be akin to a snail calling a turtle quick: there's a whole galaxy of capability beyond that. So why not define AI in terms of our own limitations? I suppose that's what the Turing test does.
In essence, I suppose my question is: why are we trying to replicate and reverse-engineer one of our own capabilities without first trying to understand our own limitations, so that we could recognise whether we'd succeeded?
I wonder if I'm making sense here. I promise I'm not posting "under the influence". The likeliest answer is probably: "that's already been (or is being) done, look over here." Thank you for enlightening (or rambling with) me!
I'm just now reading Jerry Kaplan's book on AI.