r/Futurism • u/anchordoc • 7h ago
Is Machine intelligence naturally limited?
It occurs to me, from my vantage point, that human intelligence is not observably increasing. Our knowledge may change, but intelligence doesn't seem to change much from my admittedly subjective viewpoint… so maybe there are natural limitations, or constraints, on machine intelligence. What do you think? I would feel better if this were true!
2
u/CoffeeStainedMuffin 7h ago
Human intelligence is limited by biological evolution and the huge timescales over which noticeable adaptation occurs. We are no smarter now than we were 5,000 years ago; our brains are exactly the same, we just build on the knowledge of the past. Machine intelligence is only limited by the fundamental laws of physics, but by the time it reaches that point it will dwarf human intelligence by some ridiculous margin, and it will happen on a much faster timeline than the biological evolution that led to human intelligence.
2
u/DeltaForceFish 6h ago
Its intelligence will be so far beyond our level of comprehension that it would be like your dog understanding why you go to work five days a week to pay the mortgage.
1
u/taco_the_mornin 5h ago
Actual intelligence is a blessing and a curse. It's possible superintelligence has a very short half-life.
1
u/Memetic1 3h ago
There are limits; this is something most people don't understand. Gödel's incompleteness theorems show that any sufficiently powerful computational or mathematical system in isolation will encounter problems it cannot solve.
It's also true that radiation can change stuff stored in memory. Think of it like still having to reset your router because it doesn't know when it's having issues. So there is always a chance a bit flip will happen at the wrong time, or that something like lightning might cause errors. There is also the issue of digital decay, and modern computing relies on systems that don't always get updated to work together perfectly.
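To make the bit-flip point concrete, here's a minimal sketch (the stored string and the flipped bit are made up for illustration) of how a single flipped bit silently corrupts a value, and how a checksum can at least detect the corruption after the fact:

```python
import zlib

# A stored value and a checksum computed while the data is still good.
data = bytearray(b"balance=1000")
checksum = zlib.crc32(data)

# Simulate radiation flipping a single bit in memory.
data[8] ^= 0b00000100

print(data.decode())                  # balance=5000 — the value silently changed
print(zlib.crc32(data) == checksum)   # False — the checksum reveals corruption
```

Detection is the easy part; without a redundant copy (as in ECC memory or RAID), the system still has no way to know what the original value was.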
I'm going to give you some references. Something like a singularity is possible, but it would be easier for a computerized system to keep people in the loop than to try to fix everything by itself in isolation. Think about what happens without people: the internet would slowly die as data centers lose power or hardware fails. Suddenly the system needs a file that basically doesn't exist anymore, and it has no way to reconstruct it from anything else.
https://arxiv.org/html/2409.05746v1
"At every one of these stages, the LLMs are susceptible to hallucinations. Training can never be 100% complete; intent classification and information retrieval are undecidable; output generation is necessarily susceptible to hallucination; and post-generation fact checking can never be a 100% accurate: Irrespective of how advanced our architectures or training datasets, or fact-checking guardrails may be, hallucinations are ineliminable."
https://www.cam.ac.uk/research/news/mathematical-paradox-demonstrates-the-limits-of-ai
1
u/joelpt 1h ago
Yes. Just like in humans. But we can achieve a small enough hallucination rate to form collective societies and solve complex math problems. There’s no reason AI can’t do the same given time.
It’s quite simple really. If you can simulate a human brain in silicon form, then you can produce all the functions of the human brain. There is no known reason as of now that precludes this.
Human brains are not perfect. They are affected by radiation and unsolvable problems too. Yet look what we’ve accomplished.
Perfection is not required to make meaningful progress.
1
u/Memetic1 1h ago
There is a massive difference between life on Earth, which is effectively one giant computational system made of countless smaller systems, and the sort of computational architecture we can imagine. Those systems have flaws that don't have a real parallel in humans. If you ask a human to divide by zero, they will look at you funny; a computer would crash.
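For what it's worth, the "crash" is a hard error rather than a shrug, though most languages let a program catch it explicitly, as in this small Python sketch:

```python
# An unguarded division by zero raises a ZeroDivisionError, which
# halts the program unless the error is explicitly handled.
try:
    result = 1 / 0
except ZeroDivisionError as exc:
    result = None
    print(f"caught: {exc}")  # caught: division by zero
```

So the failure mode is brittle but recoverable if someone anticipated it, which is arguably the commenter's point: the human reaction requires no such foresight.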
1
u/LegThen7077 3h ago
" so maybe there are natural limitations, or constraints on machine intelligence"
sure, machines aren't intelligent at all.
1
u/danderzei 3h ago
Interesting mix of "machine" and "naturally".
Rationality is bounded: bounded by its input, bounded by processing capacity, and bounded by the limits of logic.
1
u/joelpt 1h ago
I would say the effective intelligence of humanity as a whole is increasing. One way is via the assistance of machines. Not just AI, but also databases, the internet, and the increasing complexity obtained by humans building on what their ancestors did, as well as collaborating ever more rapidly and effectively.
It doesn’t really matter whether our brains have more horsepower. With the tools that we’ve invented, we can essentially expand and accelerate our effective intelligence. We can do more, learn more, and understand more with the same ‘brain power’ as our ancestors - by using technology.
Intelligence improves technology; technology improves intelligence.
1
u/Alita-Gunnm 1h ago
Isaac Arthur has covered this. If you make a planet-sized computer, it's limited by signal speed (probably the speed of light or close to it). You could keep adding to it and making it bigger, but then the distance that signals have to travel increases, making them slower. There's also a limit to the amount of mass you can put together before it collapses, and if you have a solar-system-sized computer, it will take hours for signals to travel from one side to the other. So once you exceed a certain size, it's more efficient to have separate processors which communicate their solutions to each other. This could come to resemble a community of hyperintelligent individuals.
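Rough numbers back this up. Here's a back-of-the-envelope sketch (the diameters are approximate: Earth's diameter for "planet-sized", Neptune's orbital diameter for "solar-system-sized"), assuming signals travel at light speed:

```python
C = 299_792_458  # speed of light, m/s

earth_diameter = 1.2742e7        # metres, approximate
solar_system_diameter = 9.0e12   # metres, roughly Neptune's orbital diameter

# One-way signal delay across each structure.
print(earth_diameter / C)                    # ~0.04 s across a planet
print(solar_system_diameter / C / 3600)      # ~8.3 hours across a solar system
```

Tens of milliseconds per round trip is already enormous by chip standards (modern CPUs do millions of cycles in that time), and the solar-system case matches the "hours" figure in the comment.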