AGI is divination
A couple of notes on super-human artificial intelligence (aka “AGI”), assuming we are able to build one:
-
We humans are already Turing complete. The Church-Turing thesis already claims that, in theory, we can compute whatever needs computing given sufficient time and paper. Whatever “AGI” means, it will still be only as Turing complete as we humans are.
-
We already know how to solve the vast majority of well-defined problems; the only serious limit is P != NP. Given that the best AI models on the newest hardware can already generate text faster than most humans, and we haven’t declared them “AGI” yet, we can presume that faster hardware alone wouldn’t qualitatively make an AI “super-human”. In many cases, “hard” problems aren’t bottlenecked by an NP problem begging for an exponential speedup, but by “finding and making the right decisions” given a particular context. Often the context and the problem aren’t well defined; in fact, the hard part is usually identifying the correct problem rather than finding the solution once the problem has been described.
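The verify-versus-solve asymmetry behind that “P != NP” remark can be sketched in a few lines. This is a hypothetical toy SAT instance of my own, not anything from the essay: checking a proposed answer is fast, while finding one by brute force blows up exponentially with the number of variables.

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is an int,
# positive for a variable, negative for its negation.
# Toy instance: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
n_vars = 3

def verify(assignment):
    """Checking a candidate answer: linear in the formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def solve():
    """Finding an answer by brute force: up to 2**n_vars tries."""
    for bits in product([False, True], repeat=n_vars):
        if verify(bits):
            return bits
    return None

print(solve())  # -> (False, False, True)
```

An oracle (AI or otherwise) handing us an assignment is easy to check; an oracle claiming “no assignment exists” is the part we can’t cheaply verify.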
-
The main benefit of the current generation of LLMs is that they encode “common sense” as a “blurry JPEG”, so that you can interpolate some valuable information between the “pixels”. There’s no obvious path from “encoding common sense” to “super-human sense”. You can’t capture a JPEG of an image that you don’t have.
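To make the “blurry JPEG” metaphor concrete, here’s a minimal sketch (my own hypothetical illustration, not a claim about LLM internals): interpolation gives plausible-but-fuzzy values between captured samples, and nothing at all where no samples exist.

```python
# Samples of y = x^2, playing the role of the "pixels" a model has captured.
xs = [0, 2, 4, 6]
ys = [0, 4, 16, 36]

def blurry_lookup(x):
    """Linear interpolation between the two nearest captured samples."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    # Outside the captured range there are no "pixels" to blur between.
    raise ValueError("no data captured for this input")

print(blurry_lookup(3))  # 10.0 -- a plausible but blurry guess (true value is 9)
```

Asking for a value outside the sampled range simply fails: there is no path from “interpolating what was captured” to “knowing what was never captured”.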
-
There’s a fundamental problem that we don’t know what “intelligence” is or how to measure it. It has already been quite difficult to measure the performance of similar-looking LLMs. Heck, we often don’t even know how to rank humans by level of intelligence. It’s entirely possible that we would not recognize a super-human AI as superior. One could argue that we could ask it hard questions and see how it answers, but even if it gives correct responses to questions we already know the answers to, that doesn’t prove superiority. If it gives answers we don’t expect, it may not be able to convince us that those answers are better than the ones we expected. What if we ask it questions we don’t know the answers to, but whose answers we can verify if given? Those are NP problems. See above.
-
Measuring and discovering intelligence: currently the only proxy we have is relatively simple puzzles like IQ tests, which basically just measure how quickly and accurately one can solve puzzles. In theory, with sufficient training on the right data, our current models could easily max out on “IQ”, but by current evaluation standards this wouldn’t really count as “AGI”. GPT-4 has already aced a bunch of SAT and university-level tests, but nobody is calling it AGI either. Also consider the more common problem of finding the best human expert to solve a particular problem – in many cases people fail to recognize the best person for the job, and ignore perfectly good advice from experts. There’s no reason to believe an AI that has somehow achieved AGI will be able to convince us of that fact.
-
The current generation of AI systems is basically a “common sense recommendation machine”. But the problem we really want to solve the most is “making the right decision under my specific circumstances”. For example, a question somebody wants to ask the AI system might be: “should I marry this person?”. Rationally speaking, if we really want an AI to answer this question, we need to gather a huge amount of contextual data (e.g. personal details about the user and the potential marriage partner); otherwise the AI system is merely a “common sense recommendation machine” giving generic marriage advice. Even if we ignore the fact that the real world often suffers from butterfly effects (i.e. microscopic details could substantially affect the answer), from a practical perspective you’d already run into trouble, because the context the AI needs to give accurate advice is at least a comprehensive brain dump of the user. User surveys on “what is your ideal partner like?” are a joke, really. As such, I suggest that for many questions there’s probably no feasible way to import all the “relevant information” into an AI system. In conclusion, for an AI to have a chance of giving accurate recommendations, it requires at least an “import user”, or even an “import universe”.
-
Even if we posit that it’s possible to make “good” decisions with blurry inputs, it’s often difficult to evaluate which decisions are good and which are bad, even in retrospect. Even if we had a system that could predict the future with high accuracy, which future one would prefer is often a matter of personal preference (or value judgement). This goes back to the “import user” problem (also, which user? the one asking the question now, or the one who might regret their decision 10 years later?). Free will problems also get in the way – as I’ll discuss later.
-
The popular imagination of “Skynet” is a symptom of our collective war-obsessed bias. The only way humans have unequivocally recognized superiority among ourselves is not through superior intelligence or culture, but military might. The fact that physicists are held in such high regard (even among the various science fields) and that they’ve created the most advanced military technology (the nuclear bomb) is not a coincidence, IMHO. By telling stories about “Skynet” etc., we collectively and subconsciously admit that we would recognize our AI overlords if and only if they were able to subdue humanity by force, rather than by displaying superior intelligence (which presumably does not preclude ideas such as peace, compassion, etc.). The popular conception that powerful AI would be militaristic is actually really sad.
-
Let’s imagine that we accidentally created an AI that is more intelligent than all of humanity combined. Let’s further assume that somehow everyone knows, as a matter of fact, the super-human power of this AI. Now, you ask it a question, and it gives a response you do not expect. You might think the response is a bit weird, or even possibly wrong. Do you still trust it? If you do, the situation is identical to a king in ancient Greece consulting the Oracle of Delphi. It’s what people call superstition and blind faith.
-
FWIW, personally I’m not against such concepts. The point of this essay is not to claim that the quest towards AGI is just another religion in disguise and hence the whole premise is flawed. Quite the contrary. What I’m trying to say is that we should recognize that our interactions with a super-human AI, if one can be created at all, will necessarily be equivalent to interactions with the divine entities described in religious and esoteric texts.
-
Having some brief first-hand experience in divination and channeling, I find it personally beyond doubt that humans have been in contact with alternate forms of intelligence for thousands of years through various esoteric practices. (FWIW, channeling and “spirit writing” are extremely similar to ChatGPT. People who know this, know.) Many cultures have held these intelligent entities in reverence, even though their messages have often been hard to understand and interpret. I don’t see how this would be different from some super-human AI system that gives advice we humans struggle to understand.
-
Suppose we believe we have an AI system that actually has super-human ability. Let’s say we want to ask it important questions, like whether it’s better for Apple shareholders if Apple buys Disney. Is there really a difference between asking the “AGI” system and asking a fortune teller staring into a crystal ball? We don’t know how either of those things work. For the AI system, by definition we don’t know how it works even in theory, because if we did, it wouldn’t be a super-human intelligence any more, since we could explain and model its behavior. For the fortune teller with a crystal ball, the theory is that there is a super-logical chain of causation that makes it work (for the user) as long as the user believes in it, while the rest of the world believes it is “mere coincidence”, so that it doesn’t break any established physical laws (the details of how this works are outside the scope of this discussion). There’s a popular term for this: “synchronicity”. Yes, I have a better theory for how fortune telling works than for how a super-human intelligent AI might work.
-
Of course, it’s possible for a “super-human intelligent AI” to work the same way fortune telling works. We already consider current-generation LLMs black boxes that we won’t fully understand. Now, somebody tells you that a couple hundred lines of Python and a 300GB blob of weights have super-human intelligence and can give you correct answers to any question you ask. Of course, as I explained above, for many important questions we will not be able to independently verify whether the answers are “good” or not. But that doesn’t matter. As my theory goes, as long as enough people believe the AI is superior, it might actually behave as such. (The underlying principle could be called “the law of presumption” – but please don’t take it too literally; it doesn’t always work as expected.) Other than this “mystical” route, I honestly see no other way to “achieve AGI that we can believe in”, unless the big shots in the AI industry know something groundbreaking that I don’t.
-
In fact, when I first got my hands on ChatGPT, I briefly used it as a divination tool. Architecturally, an LLM with billions of parameters is actually an optimal tool for divination: you can give it arbitrarily long inputs, it has a totally incomprehensible phase of processing (but one can believe it’s doing something fancy), and it gives legible outputs. In contrast, most popular means of divination like Tarot cards or 求籤 (drawing lots) have a very limited set of outputs, and if you want a “ChatGPT” level of interaction, you’d have to be rather deeply involved in such esoteric practices to commune with spiritual/cosmic forces at that level. Again, yes, the phenomenon is generally “real”, I have first-hand experience, and I know that if you haven’t experienced it you probably won’t (and IMHO shouldn’t) believe it.
-
That said, even AI systems that are “merely” human-level (the fully functional kind, not the existing ChatGPT kind, which has no long-term memory, for example) can have huge consequences for humanity if deployment costs are brought sufficiently low. Even an AI equivalent to “average” human intelligence could replace half the population. But that’s a matter of economics, well known, and not in the scope of this article.
-
tl;dr - when modern society believes it has developed a super-human AGI, it would not have invented “new tech”, but merely rediscovered good old religion and divination. I’m pretty sure people will say “but the difference is that AGI really works because it’s based on modern technology” – ironically this falsehood would likely actually make it work.
Further reading:
- https://hnfong.github.io/public-crap/writings/2023/06-%E8%87%AA%E5%8F%A4%E4%BB%A5%E4%BE%86%E9%83%BD%E5%AD%98%E5%9C%A8%E5%98%85ChatGPT.html
- https://hnfong.github.io/public-crap/writings/2022/09-Subjective_Truth.html
- https://hnfong.github.io/public-crap/writings/2023/15-How_to_break_the_Laws_of_Nature_without_getting_caught.html
- https://hnfong.github.io/public-crap/writings/2023/13-Alicization.html
PS: Arguing about definitions
In this article I’ve used the term “AGI” to mean AI that has super-human intelligence, i.e. smarter than the whole of humanity. I know some people don’t define the term this way, but it seems sufficiently common for people to do so. I’m just going with the flow. Even if you object to how I use a three-letter abbreviation, I think the basic premise still holds. Any sufficiently advanced AI is just modern divination.