Everyone is tired of talking about this, but I feel the need to scream this into the void.
Artificial general intelligence (AGI)
We've had it since early 2023. Probably before, but that's when it became publicly available, took the world by storm, and everyone sorta freaked out. The current crop of LLMs is artificial general intelligence. Some are better than others. That's not a real popular stance to have, but this debate is absolutely lousy with hype-trains trying to get rich quick, laughable Hollywood tropes, the next wave of Luddites who kinda have a point, and buzzwords getting new definitions faster than we can learn what they mean.
But before anyone tears into me for dissenting, you have to remember that any human with an IQ of 80 is most certainly a natural general intelligence. (You are a horrific monster if you believe otherwise.) If that blows your mind or you have some sort of knee-jerk "but this is different" sort of reaction, then you've got some misconceptions about the term "AGI". It doesn't mean the thing is a god. It doesn't even mean that it's particularly smart by human standards. A general intelligence can be REAL dumb and make all sorts of mistakes and still most certainly be generally applicable. If you actually wanted to talk about some god-like, all-knowing machine that has "woken up" and must hunt Sarah Connor... I just don't care. That's lazy soft sci-fi drama. Use a better term that actually has the meaning you want. Skynet or Omnissiah or Landru.
GPT is certainly artificial.
It displays some level of intelligence. But that bar is REAL low. Ants have some intelligence. White blood cells and amoebas display intelligence, even if they're just following their programming. ELIZA displayed some level of intelligence, even after you spotted its tricks. The fact that this measurable trait can come in very small doses does not mean that anything short of god-like isn't intelligent. It's similar to how we wax poetic about the sanctity of life while ignoring the billions of gut bacteria we kill all the time, even though they are most certainly living biomass.
The real crux is that GPT can generally chat about anything. It's not very good at a whole lot of stuff, but it can try. (A big failing on its part is that it fails to express uncertainty when it's just making stuff up. It's confidently wrong.) The reason so many people used the Turing test as a means of judging whether something was a general intelligence is that open-ended natural conversation can cover any and all topics. A thing would have to be generally intelligent to consistently pass a Turing test and be mistaken for a human (at least as often as humans are). That was the goal-post circa 2010. It was there for good, solid reasons. I've yet to hear why that goal-post needs to move. We have simply achieved this. It's done.
Misconceptions, myths, urban fairy-tales, ignorance, and straight-up lies
Techbros will, of course, hype AI up as revolutionary and straight-up lie about the thing. They're generally polluting the mind-space of this entire discussion. There's a whole toxic culture here of people trying to get rich by latching their little red wagons to "the next big thing". They're supposed to be figuring out clever uses and capitalizing on it, but the vast, vast majority are just hoping to sell out and get rich quick. They don't have a product. They don't even have a real idea. This is just another round of "Facebook for your dog".
Speaking of polluting, Hollywood has been poisoning everyone's minds about how AI works for decades. Killer robots are a classic trope, but it's just lazy soft sci-fi: typical horror, or just another veneer over yet another drama. HAL from "2001" kills people to follow its conflicting programming, and that's a good story idea, but people just take away "murder-machine". Even their gentler attempts at the concept of AI are lousy and lazy. Johnny 5, Chappie, and that kid from "Artificial Intelligence (2001)" are all written like they're just curious people with child-like wonder.
Social media has generally come out against AI. These are modern-day Luddites: angry artists, coders worried about their jobs. They honestly have some good points, just as the Luddites did. The term isn't just an insult; they were a real historical group at the dawn of the industrial revolution. Weaving was a middle-class job. They had guilds of professionals. It was a hard-won skill developed over their lifetimes, and the art itself was refined over thousands of years. And then it was worthless. This upset people, and they rioted, smashed looms, and burned down mansions until the army shot enough of them that they dispersed. The authorities of the period were legitimately afraid an actual person named Ludd was going to start a rebellion. And now we have the modern retelling of this exact tale, and people are just about as angry. Many are simply in denial about what these models can do, how they do it, or what it means.
- "It'll wake up and be like a person in a box with all the same feelings and wants and desires that people do." This is Hollywood's influence. They're confusing the stories about AI with the reality. It's just easier to write a character that's human. In reality, AI will be more alien and different. In their defense, it was hard to predict the future before these things were ever invented.
- "AGI is anything that makes 100 billion dollars in profit". This is straight from OpenAI itself. Buuuut this is really just a financial deal with Microsoft about when to make everything actually open, like it says on the tin. Overhyped and taken out of context. Bad journalism if we're being charitable, but let's be frank: it's propaganda.
- "They'll just keep making the term easier and easier". Which is what companies usually do when they over-promise and then dilute the term. Except in this case the opposite is happening. Passing the Turing Test used to be the impossible goal. These are critics moving the goal-post every time the goal is reached.
- "It doesn't create anything itself, it just regurgitates its training set". It actually shows significantly better creativity than humans, although measuring such things can be hard. It's pretty trivial to show how it most certainly fills in gaps and gets creative with its answers when it doesn't know one. When it's right, we are amazed at its creativity. When it's wrong, like making up case-law or bullshitting us about non-existent code libraries, we call it hallucination.
- "It just predicts the next word". Sure, but so do you. That's how YOU work. These things are neural networks. We mimicked human brains for this design. You are most definitely a neural network of about 86 billion neurons with some 100 trillion connections between them. The onus is on the religious folks to prove that there is something more.
- "Consciousness requires embodiment". These things HAVE really real servers that really exist the same way that you have a brain. Someone missing a limb isn't less conscious than you. Someone with locked-in syndrome is still just as consciously human as you or I.
- "It doesn't understand logic". Just straight-up ignorance of its capabilities. These people can never provide any logical quandary that GPT is unable to solve. And this comes in a LOT of different flavors. Weird things like "It has no sense of time". But you can ask GPT to wait 5 minutes and then respond. And it works. Humor, slang, implications, insults, making guesses, creativity: the list of supposedly impossible tasks and topics that LLMs can't do is all more or less trivial to disprove by playing with one for a few moments.
- "It doesn't understand anything about what it creates". Easily disproved by taking anything it does create, (text or image now) feeding it back to itself, and asking it questions about it. It surprised me with this one. I fed it a stable diffusion image of a cartoony slime alchemist in a lab. Kind of a high-level concept art. Beyond just identifying objects like flasks, books, and a slime monster, it identified themes like medieval-fantasy, the comfy tone, that it was an alchemist's lab, where such an image might be used commercially, and the idea that the creature "symbolized the unpredictable outcome of magical experimentation".
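To make the "just predicts the next word" objection concrete, here's a deliberately dumb sketch of next-word prediction. This is purely illustrative, with a made-up toy corpus; real LLMs learn these statistics with a transformer neural network over tokens, not a lookup table, which is exactly why they generalize instead of regurgitating:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then always emit the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen right after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, "mat"/"fish" once each
```

The gap between this table and GPT is the gap people trip over: predicting the next word well across *all* contexts ends up requiring a model of grammar, facts, and the speaker's intent.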
The language space for all this is terrible. Terminology gets thrown around willy-nilly by people who don't even know what the terms mean. They WANT to talk about... that special something that makes humans special and different from anything else, why we have souls while everything else most certainly doesn't. But the term for that is egocentric assholery, because none of that is true. Language, tool-use, math, war, commerce, honor, prostitution, sentience, self-awareness, art, consciousness, intelligence, sapience: all of those were once thought to be very special to humans... until we found examples elsewhere in nature.
Self-awareness:
: an awareness of one's own personality or individuality
Human babies are not self-aware before ~18 months. A whole lot of animals have been verified as self-aware. Dolphins, elephants, magpies, orcas, most other primates, and ants of all things. In a computer, boiling this way, way down, it's as simple as a process knowing whether it's the originator or the newly forked child. They can know their PID. They can have a thermocouple on the CPU and see the relation between thinking too hard and an existential threat.
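The fork/PID version of this really is that small. A minimal sketch (Unix-only, since it uses `fork`): each copy of the process can tell whether it's the original or the child, purely from what the system reports about itself.

```python
import os

# After fork(), two nearly identical processes exist. Each one can
# inspect its own identity: fork() returns 0 in the child and the
# child's PID in the original, so each copy knows which one it is.
parent_pid = os.getpid()
child_pid = os.fork()

if child_pid == 0:
    # We are the newly forked child.
    print(f"I am the child, PID {os.getpid()}, forked from {parent_pid}")
    os._exit(0)
else:
    # We are the originator; wait for the child to finish.
    os.waitpid(child_pid, 0)
    print(f"I am the original, PID {parent_pid}, and I spawned {child_pid}")
```

Whether that counts as "awareness of one's own individuality" is the whole debate, but it does satisfy the dictionary definition more literally than we'd like to admit.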
Sentience:
Intelligence:
A trickier one; even Merriam and Webster acknowledge it's used in different ways. Reason, understanding, thinking abstractly. All good stuff. But I would direct people to start with Kurzgesagt's introduction to the topic. Plus they have cute animations.
People use this one all over the place, and "unintelligent" is a simple insult, though it would more accurately be "less intelligent". We can measure intelligence in humans, both with IQ tests and basic cognitive tests. We can measure intelligence in animals in various forms, including proving mice have a mental model of space and crows can count; and while some dog breeds are smarter than others, individual dogs show a variance of general intelligence. A general intelligence, the 'g factor', is a real thing that almost all scientists acknowledge and accept exists.
The hard bit is how to measure it. In humans, IQ tests are a good means of measurement, as all serious measurements of intelligence strongly correlate with each other. Even Gardner's idea of eight specific intelligences shows more correlation than he let on (Visser 2006). Different IQ tests correlate very highly with one another, up to 0.99.
Consciousness:
Even more contentious and loosely defined. We are headed deeper and deeper into the realm of philosophy where people essentially just argue about definitions all day and avoid actual answers like the plague.
Merriam and Webster go around in circles, redirecting us back to awareness, the self-referential "the normal state of conscious life", and getting kinda vague with "the upper level of mental life".
Kurzgesagt weighs in with some pretty good insight.
Personally, I'm not sure it means anything other than the opposite of unconsciousness, i.e. you're not conscious when you're asleep. Hard-hitting insight, I know. But that's it: it's simply the "ON" state. A computer that goes into sleep mode is as conscious as you are when you're asleep. An automated door that's powered with a working sensor is aware of motion near the door. If that doesn't count, then what exactly does?
Sapience:
Usually referred to as some high level of intelligence or application thereof. Insight into the world and how to use it. Culture or art is a sign. It's where we get "homo sapiens" from, and what's supposed to differentiate us from all the other animals. But a wise fish knows to avoid the hook. A wise antelope knows to avoid the log-shaped things in the river. Squirrels will pretend to hide nuts when they think they're being observed.
There's an often-retold study out there (quite possibly apocryphal, but illustrative) showing how foolish chimps are, where scientists hung some bananas up and sprayed down the whole group with cold water whenever any of them went to grab the bananas. They quickly learned not to mess with the bananas. Then the scientists swapped out one of the chimps for another who, not knowing anything about getting sprayed with water, went for the food. The other chimps managed to teach the new kid on the block the rules of the room, and he then avoided the bananas just like the rest. The scientists continued swapping out chimps until there were no chimps remaining who had ever experienced getting sprayed, and yet the chimps continued to teach new members about the rules. This story has been widely used to showcase how terrible tribal knowledge is... but I see it as a great showcase of how chimps managed to hand down a story and a lesson about something that absolutely did happen. They gained a bit of wisdom and managed to keep it within the group even after everyone who had learned the original lesson was gone. But they're not human, so of course we put a negative spin on it. Oh how foolish these mortals be.
It's hard to admit we've stepped into the future and it's not exactly what we expected. Of course it isn't; what were the odds of that happening? We are not going to be able to stuff the genie back into the bottle, and all this WILL have an impact upon the world. The knee-jerk anger and denial isn't helping anyone address the changes. The first step to figuring out what to do from here is to acknowledge where we are.