A simple question on artificial intelligence

As someone with a general interest in artificial intelligence, but an absolute layman on the subject, I keep coming back to one question, and I wonder if you can help me get closer to understanding the answer:

Why not focus more on the “artificial” bit than the “intelligence” bit?

All of the handful of books on the subject I have dived into wade straight into the deep end and explain that defining intelligence is difficult. Why not explore the concept and value of “artificial” instead, and try to understand better what makes AI appear smart to us and what does not? What makes a system indistinguishable from a reasonable amount of biological intelligence? It’s not as if we humans possess the utmost and ultimate intelligence - just a meaningful bit more than animals.

For us to call a synthetic intelligence “intelligent” for the first time would be akin to a snail calling a turtle quick: there’s a whole galaxy of capability beyond that. Thus, why not define AI in terms of our limitations? I suppose that’s what the Turing test does.

In essence, I suppose my question is: why are we trying to replicate and reverse engineer one of our own capabilities without first trying to understand our own limitations, so that we could recognise whether we had any success?

I wonder if I am making sense here. I promise that I am not posting “under the influence”. The likeliest answer is: that’s been done or is being done, look over here. Thank you for enlightening (or rambling with) me!

I am just now reading the Jerry Kaplan book on AI.

Well, let’s first clarify what you’re referring to. There are two general forms of AI: general AI and “expert systems”. What we’ve been building over the last while is not general AI; I don’t think there has been much progress in general AI at all since the idea was first formulated. A general AI would be able to replace a human and pass the Turing test. Where we have made progress is in expert-systems-style AI: recognising and translating language, some progress in self-driving cars, and playing chess or Go.

The fact is that we don’t really understand how the human brain works, how connections are made and how reasoning is done. The artificial aspect of it is that we’ve generally taken a statistical approach wherein we let the math connect the virtual neurons. The net result is that we don’t understand how AI trained this way really works. I mean we understand what we did to train it, but not what understanding it derived from the training. So what we do is give it a rigorous test with known data that was not part of the training set, and see how accurate it is. AI is still very nascent in many regards.
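
To make that last step concrete, here is a rough sketch of the “train, then test on data that was not part of the training set” routine, assuming Python with scikit-learn and its bundled handwritten-digits dataset (my own choice of illustration, not any specific system discussed here):

```python
# Sketch: "let the math connect the virtual neurons", then measure accuracy
# on data the model never saw during training.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # small, labelled handwritten-digit dataset
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# We pick the architecture; the optimiser finds the weights. Whatever
# "understanding" ends up encoded in those weights is not something we
# can read off directly.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The rigorous test: accuracy on the held-out portion.
print("accuracy on unseen data:", model.score(X_test, y_test))
```

A high score on that held-out data is really the only evidence that the training produced something useful; it still says nothing about what the model actually derived from the training.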


Interesting - thank you!

That’s coming close to my question, I think.

Are there (m)any psychologists in the field or is it really mostly statisticians and programmers? I wonder if there is a self-referential effect in which we might not precisely understand what it is we’re trying to accomplish (except for the technological steps of building tools and cybernetics).

AI as replication of our own capabilities including its shortcomings surely cannot be the goal. It seems that a satisfying synthetic intelligence would need to take things further than we might be able to imagine.

Ok, that reads pretty shallow. I find it difficult to pinpoint the thought.

The problem is, it has to be something akin to human intelligence, at least to start with, so that we can assess if it is working.

And, as @PHolder says, we have mostly only created expert systems so far. There are a couple of more general AIs out there, but the rest is applied learning and not artificial intelligence.

“AI” is also bandied around a lot. It has been the buzzword of the last half decade or so. Everything that does mathematical modelling of a problem is suddenly AI. The “AI” in camera apps, facial recognition and the like is not AI per se; it is a model that recognises patterns and applies something to those patterns.

Many of these models are also very poorly written and trained, as we see with facial recognition, which is a complete fiasco and a cataclysmic failure at the moment.


Fully agree - and I suppose this leads to the question: why would we want that? We have 8 billion perfectly good biochemical reactors that already provide us with it, and we know how to produce more. We also know that, once we had produced one, it would be ethically impossible not to bestow it with rights that make mistreatment and misuse - and indeed “use” - impossible.

I wonder if the key problem of AI today is that we’re not really sure why we’d need it or what to do with it. Maybe a common response would be: well, if it’s AI, we could scale it, make it super fast, and have it solve all of our problems. Thus, the ultimate goal would be convenience or profitability. But then again: if it really ticks all the boxes and becomes indiscernible from biological intelligence, we’d need to bestow rights on those creations too, making them inaccessible to the sort of robot slavery we may have secretly hoped for and which inspired the research in the first place.

In that sense, the research into AI does not really seem to have a valid goal - except for pure exploration of a technological opportunity or the plan to become a master race to a slave race of sentient machines.

Alexa, what’s your take on that?

But, sure, leading the horses back to the barn for a second: it IS a marketing term that many people generate a lot of research and consulting money from. It does scratch an itch that many people have today, and technological advancement may be inevitable. It does not have to make much sense. I might be taking this too seriously.

However, that’s exactly the type of problem that makes big business a bad steward of meaningful technological development on a large scale. The only valid goals here appear to be convenience and profitability. When all you have is a hammer, …

Fascinating topic. Thanks for joining the thought play!


Business is in it for the money. They don’t want true AI, they want a learning model that can accomplish a specific task. And we have seen, especially with the facial recognition datasets, that they cut corners and don’t do things properly in order to get the product “out there”. It is only after it hits the general public that the limitations and biases come to light… Cutting corners and costs, when it comes to something that could be more intelligent than us and could gain control of “everything”, is a dangerous game to play.

True AI research sits with the academics, but they don’t get the funding they need.

Excellent point - it came to my mind after posting, too. True AI would be far too much of a handful. That explains a lot. Your first paragraph makes many observations fall into place.

Good point, too. I wonder if many academics are in the subject not necessarily to develop something new but rather to better understand something old: the way we think.

That’s an interesting dichotomy: companies try to push the boundaries forward and develop innovations, but only to the degree that is commercially attractive, since that’s their governing model; academics try to expand the domain of knowledge of what is, but not necessarily of what could be, since they are scientists and not predominantly creators.

Philosophy Monday! :smiley:

With likely several big exceptions and a sizable grey zone, the above might make some general sense.
