I am wondering if AI will someday become powerful enough to find cures for diseases. How far off do you think we are from such a breakthrough?
Well, a general AI would be a success if it were as capable as a human. A general AI as capable as a human is still a long way off, according to most in the field. Since humans kinda suck at coming up with cures for diseases (in general it seems mostly like luck), I would expect an AI to suck just as much as the humans it is meant to emulate/replace. Also, unless you’re thinking of AI somehow loaded into human clones, you’re never getting there, because in the end you need to test any potential cure on the target demographic (the humans suffering from the disease).
Thanks, but could AI run millions of scenarios to the point where it could find a potential cure? Perhaps my understanding of how diseases work is limiting how I ask the question!
First we have to create Artificial Intelligence; until then we have biased machine learning.
As long as people use the misnomer “AI”, they will have false expectations of its actual capabilities. This is why CEOs, managers and politicians rave about it, while engineers worry about the limitations and flaws they have to try to negate.
There are currently projects at work using machine learning to help with simulations but it is still early days.
Using distributed computing power for medical research, such as looking for a cure for cancer, is already a long-established thing. However, listening for ET, or turning electricity into coins of purest madeupararium, is apparently more rewarding.
Imagine if all those crypto warehouses with tens of thousands of PCs, and everyone’s coin-miner box at home, were used for just one week to compute everything the cancer researchers are grinding through.
Nah, plucking coins from thin air, using up resources while never creating anything, is much more rewarding.
Yes of course!
The question you want to ask is - are we anywhere close to creating a true AI? The answer is patently no, despite what venture capital gold diggers and PR departments would have you believe.
If you’re familiar with the Mass Effect game series, what we’re developing towards is more along the lines of a Virtual Intelligence, or VI, as outlined in the game lore. Machine learning right now is basically a difference engine accelerated by the tail end of Moore’s law. You can certainly do impressive things with it, but a true general AI is not one of them.
But that’s just my two cents.
Because it’s chemistry, I think it’s orders of magnitude more difficult. If you take a dozen atoms and try to combine them in all possible ways, you get something like trillions of combinations. This is why Folding@home used distributed computing to do protein folding. Brute force would probably need to investigate, and mostly eliminate, something like 10^20 or more possibilities… which would take a lot of compute power. I don’t think you need brute force… you need insight into chemistry, a hunch, and some luck. (I am mostly talking out my ass, as I have little more than basic high-school chemistry in my background.)
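To give a toy sense of the combinatorial explosion, here’s a minimal back-of-the-envelope sketch. All the numbers (30 element types, the per-multiset structure factor) are made up for illustration, not real chemistry:

```python
import math

# Toy model: build a "molecule" from 12 atoms drawn from 30 element types
# (repeats allowed) and count the distinct multisets of atoms --
# ignoring geometry, bonds, isomers, and everything else that matters.
element_types = 30
atoms_per_molecule = 12

# Combinations with repetition: C(n + k - 1, k)
multisets = math.comb(element_types + atoms_per_molecule - 1, atoms_per_molecule)
print(f"{multisets:,} atom multisets")  # already in the billions

# Each multiset can be arranged into many distinct structures; even a
# hand-waved per-multiset factor pushes the search space to ~10^20.
structures_per_multiset = 10**11  # completely made-up factor
magnitude = len(str(multisets * structures_per_multiset)) - 1
print(f"~10^{magnitude} candidate structures")
```

Even this crude count lands in the 10^20 ballpark before considering bond arrangements at all, which is why nobody seriously expects brute force to work.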
The advantage of machine learning is the boost in iterative trial and error, but since everything it needs to know to start with must be validated, we still need to physically do a lot of the testing.
Many chemical reactions between known compounds can be predicted because we have that back catalogue of knowledge to start with.
Too much is unknown about any new compound, so machine learning would still need some test results from the real thing to calibrate against.
I forgot to answer the heart of the topic question.
Yes machine learning is and will be very useful in speeding things up.
Especially when we know what data to feed in!
You first have to define what true artificial intelligence really is. I see it as the ability to perceive patterns in the world, learn what the pattern means within the environment, and adjust the underlying knowledge base on an ongoing basis. This is a simple, and I’m sure incomplete definition.
Today, computers have been trained in many areas to scan their sensors (cameras, data streams, etc.) and classify that data with a confidence factor that approaches, but never reaches, 100%. The results should always be reviewed by humans to verify them.
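That human-in-the-loop step can be sketched as a simple confidence gate. This is purely illustrative: the threshold, the sample IDs and the labels are all invented, and in a real system the “confidence” would come from a trained model, not a hard-coded tuple:

```python
# Toy sketch of confidence-gated classification: anything below the
# threshold is routed to a human reviewer instead of being auto-accepted.
REVIEW_THRESHOLD = 0.95  # arbitrary cut-off for this example

def triage(predictions):
    """predictions: list of (sample_id, label, confidence) tuples."""
    auto_accepted, needs_review = [], []
    for sample_id, label, confidence in predictions:
        if confidence >= REVIEW_THRESHOLD:
            auto_accepted.append((sample_id, label))
        else:
            needs_review.append((sample_id, label, confidence))
    return auto_accepted, needs_review

preds = [("scan-001", "benign", 0.99),
         ("scan-002", "malignant", 0.62),
         ("scan-003", "benign", 0.97)]
accepted, review = triage(preds)
print(accepted)  # high-confidence results (still worth spot-checking)
print(review)    # low-confidence results always go to a human
```

The point is that the gate only sorts outputs by confidence; it can’t tell you whether a high-confidence answer is actually right.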
Are we at the point where computers can diagnose a disease on their own and determine a solution/cure? No, we are way off from that.
The Wikipedia “definition” is fairly straightforward: Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can.
In a more snarky mode: a lot of humans are unwilling to admit when they’re wrong… so I think solving that problem alone would be a huge advance.
I think this has already happened. People are training their AI and then handing it databases of potential molecules, which it sorts through. I remember a recent article where an AI identified a new type of antibiotic that could be used against antibiotic-resistant infections.
I expect this trend to become mainstream in the next few years.
That’s an interesting article, but they’re misusing the term AI… this was machine learning (overly glorified statistics, really.) And it didn’t synthesize anything, it helped sort through a list of known formulations. Don’t get me wrong, this is very cool, but it’s not “machine thinking”, more like “machine filtering.”
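“Machine filtering” in that sense is basically scoring and ranking a fixed list of known candidates. A minimal sketch, with an invented library of molecules and a hypothetical stand-in scorer where a real pipeline would call a trained model:

```python
# "Machine filtering", not machine thinking: score a library of known
# candidates and shortlist the top few for lab testing. Nothing new is
# synthesized; the model only sorts what it is given.

def score(molecule_features):
    # Hypothetical stand-in for a trained model's predicted activity;
    # here just the mean of some made-up feature values.
    return sum(molecule_features) / len(molecule_features)

library = {
    "mol-A": [0.9, 0.8, 0.7],
    "mol-B": [0.1, 0.2, 0.3],
    "mol-C": [0.7, 0.9, 0.9],
}

ranked = sorted(library, key=lambda m: score(library[m]), reverse=True)
shortlist = ranked[:2]  # these still have to be validated in the wet lab
print(shortlist)
```

Note the output of the whole exercise is just a shorter to-do list for human chemists, which is exactly the “filtering, not thinking” distinction.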
Yeah I see the hope there, but for now the flaw.
AI also has no ability to “know” if it is wrong, because unless it is told up front what is correct and what to ignore, it cannot infer correctness or context (which is why it is not yet “intelligent”).
It still relies on humans to interpret the results.
Maybe one day we will be discussing if it is time to stop calling it artificial once we find it is intelligent.
As I noted previously, AI is the magic bullet Politicians have been looking for and they believe it is here, because CEOs wanting to make more money have made them believe it.
The White House is struggling to comprehend that AI won’t do what they want.
If it were really as simple as throwing money at something and switching on some computers, we would have cures for AIDS and cancer by now.
The concept that something cannot be done just because you want it to be is alien to the rich people of the planet, because in their lives, if money does not produce results, lies will be produced to placate them so they believe their money was well spent.
A.I. is the Emperor’s new clothes, and as long as people keep reinforcing the use of the phrase AI, the lies will continue.
What, we can’t solve all the problems of the world with AI? I’m shocked! Every PR statement says it is already happening.
I just wish Monty Python had made AI into a Yorkshiremen sketch.
A I ?
Ee by gum.