Sam Ginn on the Singularity

Sam Ginn is a second-year undergraduate student at Stanford University. He is a computer science major interested in human consciousness and whether human consciousness is artificially replicable. Sam is also a participant in the philosophical reading group at Stanford, and he is a devotee of Martin Heidegger's thought. In this show Sam discusses the […]

This is KZSU, Stanford.
Welcome to entitled opinions. My name is Robert Harrison.
And we're coming to you from the Stanford campus.
The future ain't what it used to be, friends. I've been reading Ray Kurzweil recently.
He's one of our present-day futurists, and his crystal ball is telling him that the singularity is near.
What is the singularity? It's a period in the future during which the pace of technological change will be so rapid, its impact so deep, that all of human life and the basic concepts by which we make sense of it will be irreversibly transformed, thanks to artificial intelligence.
I quote: "Within several decades, information-based technologies will encompass all human knowledge and proficiency, including ultimately the pattern recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain.
The singularity will allow us to transcend the limitations of our biological bodies and brains.
We will gain power over our fates, our mortality will be in our own hands.
We will be able to live as long as we want, and we will fully understand human thinking, and will vastly extend and expand its reach.
By the end of this century, the non-biological portion of our intelligence will be trillions of trillions of times more powerful than unaided human intelligence.
We are now, continues Kurzweil, in the early stages of this transition. The acceleration of the paradigm shift, as well as the exponential growth of the capacity of information technology, are both beginning to reach the knee of the curve.
Which is the stage at which an exponential trend becomes noticeable.
Shortly after this stage, the trend quickly becomes explosive.
Before the middle of this century, the growth rates of our technology, which will be indistinguishable from ourselves, will be so steep as to appear essentially vertical.
The singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human, but that transcends our biological roots.
There will be no distinction, post singularity between human and machine, or between physical and virtual reality.
If you wonder what will remain unequivocally human in such a world, it's simply this quality.
Ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations.
That's Kurzweil.
He goes on to discuss what he calls the six major epochs of cosmic evolution, and he understands evolution as a process of creating patterns of increasing order.
And believes that it's the evolution of these patterns that constitutes the ultimate story of our world.
So these six stages of cosmic evolution begin with stage one, physics and chemistry, namely the formation of atoms, and then with chemistry, the formation of molecules.
Stage two is biology, or information as it's stored in DNA.
Epoch three brains, information in neural patterns.
Epoch four is the epoch of technology, where information in hardware and software designs start to emerge.
And I presume that we are in stage four of this epochal cosmic evolution.
And then we have epoch five, the merger of technology and human intelligence.
The methods of biology, including human intelligence, are integrated into the exponentially expanding human technology base.
And then finally, epoch six, the universe wakes up.
Patterns of matter and energy in the universe become saturated with intelligent processes and knowledge.
So there you go.
You heard correctly, whatever is taking place right here on this third stone from the Sun, our Earth, will eventually in the sixth epoch wake up the entire universe,
saturating its matter and energy with a superintelligence of which we human beings will have been the humble origin.
Who said that anthropocentrism and geocentrism were bygone illusions?
The future ain't what it used to be unless you go back far enough, and then it starts to look a lot like that old story.
A universe that revolves all around us, baby, it all comes back home to the Terran system and to these human brains of ours that just yesterday we thought were an epiphenomenon of evolution.
That's right, the human mind is the great cosmic mind after all. Once it's able to reproduce its intelligence artificially.
So Kurzweil tells us in his book The Singularity Is Near, published in 2005. Will Entitled Opinions survive the Singularity?
Well, given that Entitled Opinions figures as the highest form that human consciousness takes in this lead-up to the Singularity, and given that we are broadcasting from Stanford University, which is one of the main incubators of the fifth epoch,
it's highly likely that the archives of this show will be carried over into the Singularity, and with any luck will infect the system with the deadly virus of free and amorous thinking.
It ain't over till it's over, and who knows, Entitled Opinions may well be the fat lady in this singular drama once it's all said and done.
Speaking of Stanford and artificial intelligence, I have with me in the studio a Stanford sophomore who is helping our technology approach the knee of the curve, where the exponential growth of artificial intelligence will begin its vertical rise into the brave new world of the Singularity.
Sam Ginn is a computer science major interested in human consciousness, or more precisely in whether human consciousness is artificially replicable.
Sam has taken a course with me on nihilism, is a participant in the philosophical reading group at Stanford that I run with my colleague, Sepp Gumbrecht, and he is, amazingly enough, a devotee of Martin Heidegger's thought.
It gives me special pleasure to welcome him to Entitled Opinions. Sam, thanks for joining us today.
Yeah, thank you so much. It's an absolute pleasure to be here.
You are immersed in the world of artificial intelligence. Could you share with us your thoughts on just how close AI is to the breakthroughs that would create the sort of exponential growth that Kurzweil and others are calling the Singularity?
Of course.
Well, I think first we have to understand what exactly this point is that Kurzweil is talking about when he talks about the Singularity.
What type of intelligence is necessary to bring about this rapid growth in progress?
Because what Kurzweil is talking about is the course of human history.
We've been advancing technologically on the scale of centuries, and recently on the scale of decades.
I mean, the smartphone was created just a couple decades ago.
But the Singularity is that point at which this progress becomes on the order of magnitude of minutes or hours.
When a computer will be able to solve special relativity, which took Einstein a decade to come up with, in a matter of minutes.
And so what kind of technological progress would be required to enable this?
And that's a computer which can start learning for itself.
So when Kurzweil talks about the Singularity, the Singularity is that point at which the human growth curve of technological progress meets and merges with the computer's growth curve of technological progress.
And the computer will then start learning on its own, with its own thought, with its own thinking, exactly as humans do now.
But at such a rate completely inconceivable to us.
A computer that can think by itself would be able to self-replicate onto every digital surface of this planet with ease and speed.
If it wants to explore a neighboring star, it can. As you said at the beginning, with the Kurzweil opening, it will permeate this entire universe with its existence.
Well, yeah, it depends on two things. First, what we mean by thinking. But why would it permeate this universe, just because our computers can get ever faster and ever broader?
How does it permeate the matter and energy of the universe?
I mean, a computer that can think for itself can perhaps be curious enough to explore Alpha Centauri, which is our neighboring star system, and build its own rocket ships.
When Kurzweil talks about permeating, it's not a mystical permeation. It is an entirely physical permeation.
Okay, let's take the television series Star Trek.
If you look at that science-fiction fantasy of the future, in either the first series, the original one, or Star Trek: The Next Generation, what you find is that on the level of electronics, or computer intelligence, we are very close to what was envisioned as something three or four centuries away in time from our own.
And in that sense, you can say the singularity is happening. On the other hand, there's a lot of evidence that we are not one inch closer to space travel today than we were then.
Our airplanes do not go 100 miles an hour faster than they did decades ago.
So this permeation of the universe would seem to require something that is in excess of what we would just call artificial intelligence.
It requires something, if you want to use the analogy, and I don't like these analogies, not on the level of hardware as such, but something that involves material travel.
And whether a computer can invent warp drive, you're telling me that it can.
Yes, I think you bring up a good point. Right now artificial intelligence, although it seems to be incredibly intelligent and can do insane mathematical computations, is nowhere close to the world that Kurzweil so eloquently paints in his picture of a world permeated by intelligence.
And that would 100% require what you say: a manufacturing ability, an ability to solve the equations of warp drive, to bend spacetime, to engage with this world physically.
And right now artificial intelligence can't do that.
And I think this brings into the interesting question of where we are right now with artificial intelligence and where we need to be to get to the singularity.
Because for Kurzweil, the singularity is not a gradual evolution of intelligence.
It's a singular point at which an artificial intelligence gains the ability, and we'll go into what this means, to think for itself, to have its own curiosity, to build itself, to build the robots that go out into the world and build a warp drive.
Right, so it's a point at which it all takes off and something explosive happens, almost unimaginable.
So how far is artificial intelligence from such a moment of singularity?
Yeah, so I think before we get into where AI is right now, we have to understand what I mean when I talk about the singularity.
What I mean when I talk about the singularity is the computer that can truly think for itself.
So in philosophy circles and computer science circles, we've divided this problem of intelligence, or the problem of consciousness, of sentience, into the easy problem and the hard problem of consciousness.
You had this great quote by Kurzweil, where he said that intelligence will be able to do information retrieval, pattern recognition, and moral and emotional intelligence.
Those first categories, information retrieval, pattern recognition, decision making, those are the easy problem.
These are things that we can solve with known methods, given enough computation.
These are things that we can come up with equations that we know how to solve with pen and paper.
Now, it might take an incredibly long time to solve with pen and paper, but it's something we can imagine doing with math, with dumb processes; and from dumb processes in the aggregate, really interesting solutions can emerge.
So for instance, image recognition is a great example: given a photo of a panda, can a computer classify it as a panda?
Well, yes. We can teach it that pandas have ears, faces that are white and black, et cetera. It can learn from a system of rules. It's something we can replicate with pen and paper.
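The rule-based picture Sam sketches here can be caricatured in a few lines of code. A real image classifier learns its features from data rather than being handed them, but the input-to-label shape is the same. The feature names below are invented purely for illustration.

```python
def classify(features):
    """Toy rule-based classifier: map a dict of hand-written visual
    features to a label, the way Sam describes teaching a computer
    what a panda looks like with a system of rules."""
    if features.get("black_and_white") and features.get("round_ears"):
        return "panda"
    return "not a panda"

print(classify({"black_and_white": True, "round_ears": True}))   # panda
print(classify({"black_and_white": False, "round_ears": True}))  # not a panda
```

The point of the caricature: every step here could be done with pen and paper, which is exactly what puts it on the "easy problem" side of the divide.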
But that type of artificial intelligence does not get at what Kurzweil also includes in this point of singularity.
That's just the first part. That's the easy problem, which is very hard to do in computers, but we have a clear path forward on how to get there.
But those last words he used: moral and emotional intelligence.
That's the hard problem.
That problem consists in our ability as humans to feel, to interpret the world, to experience the world from a point of view.
So David Chalmers is a really famous philosopher of mind who talks about consciousness, and he defined this hard problem of consciousness.
It's that subjectivity that we experience every day. In Thomas Nagel's words, it feels like something to be us.
To be anything that is sentient. Something that it feels like to be a bat, something that it feels like to be a whale.
So the subjectivity that you're sentient subjectivity you're talking about, it's not exclusively human.
Yeah, exactly. It's not necessarily linked to intelligence, by the way.
Yeah, no. Chalmers comes up with this idea of a philosophical zombie.
You could have somebody who looks like a human who acts like a human who does everything that a human can do, but without that subjectivity.
It would feel like nothing to be inside that person's head. But this subjectivity, if we could artificially create it, then that machine would have feelings. It would have curiosity. It would have wonder, it would have its own desires.
And if it can have that, it can embark on its own exponential curve of learning.
And this is what Kurzweil's singularity point is. It doesn't necessarily require this subjectivity, but it requires a computer which can emerge on its own.
It can emerge on its own and decide for itself that it wants to learn, that it wants to grow, that it might be interested in solving warp drive.
Until we get there, all computer science, all thinking on this topic will have to be things that can just be solved by pen and paper, but better.
We would have to know the theory of warp drive in order to instruct a computer to solve the mathematical equations.
What Kurzweil needs to achieve the singularity is a computer that finds the problem of warp drive interesting.
And wants to solve it and come up with the theory on its own.
So the first question that comes to mind hearing you say that is, why do we need artificial intelligence to do more than the dumb stuff?
Or the easy problems?
What do we gain by replicating human pathos, emotions, moral intelligence, and so forth?
Why can't the singularity take place without machines learning to be human in that sense?
Yes, so to answer that question, it's interesting to explore where computer science is right now.
So right now we have excellent algorithms that can do pattern recognition in superhuman feats.
In 1997, we solved the game of chess, we beat chess, and that was a huge breakthrough in artificial intelligence.
We now have robots that can travel on their own on Mars and react without any human control, navigating harsh terrain, moving through rocks and fields.
We have self-driving cars on the streets in San Francisco.
They have seemingly superhuman abilities in driving.
It's interesting: when SpaceX and NASA launch rockets,
we actually don't even trust humans to do a good job at launching rockets in those first ten seconds right before they take off.
But wouldn't it be a risk to give such a machine emotions and desires and perversions?
It would 100% be a risk. That machine might decide, maybe I don't want this rocket to launch.
It's incredibly risky. I mean, all of the great technologists right now, Elon Musk, Larry Page, Sergey Brin,
they all talk about the scary future of AI, of how it will gain sentience and consciousness and be able to do things that the human programmers didn't program it to do.
But they talk about these things because they know the limits of artificial intelligence right now
and that without imbuing these machines with this consciousness, we will never be able to get the super effects,
the really interesting things like warp drive. Because when you talk about Star Trek,
we see this beautiful world of a computer, but it's not that singularity that Kurzweil was talking about.
In order to get that singularity, you really need a computer that can think abstractly.
The ability to think abstractly is an ability that requires consciousness, that requires us to have the idea of an abstract thought.
Right now, artificial intelligence has no notion of abstract ideas. It has no notion of concepts. It just has notions of symbols.
It has no notion of what those symbols mean.
So, for instance, probably the most sophisticated artificial intelligence program right now is a computer program developed by DeepMind,
which is owned by Google, called AlphaGo, which just competed last year against the Go world champion, whose name is Lee Sedol.
What is Go, for those of us who don't know what you're talking about?
So, Go is a game that's similar to chess. I mean, it looks like chess. You put pieces on a board.
But whereas chess has always been this amazing feat of human prowess in terms of our intelligence,
it was actually solved by computers in 1997, and the reason it was solved by computers is that a computer could literally search the entire space of possible chess positions from the current position.
And then it could just see: all right, if I make this move, I can win; or, if I make this move, I will lose, fifty moves down the line.
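The exhaustive look-ahead described here is, in essence, minimax search. Below is a minimal sketch over an abstract game tree; the tiny hand-made tree stands in for a real game, and is not a chess engine.

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree. Leaves are final scores from
    the maximizing player's point of view; at interior nodes, whoever
    is to move picks the child best for them."""
    if isinstance(node, (int, float)):  # leaf: a finished game's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game: the maximizer moves first, then the minimizer.
# Left branch: minimizer picks 3. Right branch: minimizer picks 2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3
```

Chess made this feasible by pruning; Go's branching factor is what makes the same strategy hopeless, as the next exchange explains.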
But with Go, it's similar to chess, but its state space is astronomically larger, orders of magnitude larger than that of chess.
So, for instance, in any game of Go, there are more possible positions of a Go board than there are atoms in the universe.
So, there is no possible way that a computer could even hope to search this entire space.
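The comparison is easy to check with rough figures: the number of legal 19x19 Go positions is commonly estimated at about 2 x 10^170, and the number of atoms in the observable universe at roughly 10^80. Both numbers are order-of-magnitude estimates, not exact counts.

```python
go_positions = 2 * 10**170   # commonly cited estimate of legal 19x19 Go positions
atoms = 10**80               # rough estimate of atoms in the observable universe

ratio = go_positions // atoms
# len(str(ratio)) - 1 gives the order of magnitude of the ratio
print(f"Go positions outnumber atoms by a factor of roughly 10^{len(str(ratio)) - 1}")
```

So even if every atom in the universe evaluated one position, there would be about 10^90 positions left over per atom, which is why exhaustive search is off the table.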
Neither could a human mind. Yeah.
So, the question is how are humans good at playing Go?
And if you ask any Go expert, what they're going to tell you is its intuition.
The game of Go is a perfect example of something that's super hard to program a computer to do, because we have no idea what this word intuition means.
For Lee Sedol, when he talks about playing, he talks about an aesthetic nature to the game of Go.
It feels like a better move over here. He can see the game unfolding, not precisely, but in a way that's more favorable to him, painting almost like a picture.
Now, this used to be something that computers could never hope to replicate. But what Google did is they created a program that learns through something called reinforcement learning.
And it learned on its own, without searching an endless space, and it learned to beat Lee Sedol at the game of Go.
And so, in this world championship match that happened last year, AlphaGo actually beat Lee Sedol in four out of the five games.
And the commentators were blown away that a computer was able to achieve such an amazing feat.
In fact, one of the commentators said that one of the moves in game two was not a human move.
He thought that it was a bad move that AlphaGo made, a silly move.
But it turned out, twenty moves later, it was crucial to AlphaGo's victory.
And this was nothing that could be predicted. AlphaGo didn't know this would be the best move, but it had an idea that this would be correct.
And it was no move that a human would make. It learned how to play Go differently than how humans played Go.
Now, this might seem very cool. This might seem like this is the path forward to get the singularity.
We have a machine that is able to learn on its own, to learn things that we as humans find inscrutable;
we as humans can't even tell you how AlphaGo figured out how to win this game of Go.
But when we look at what AlphaGo is actually doing, what it's doing is the same math that a fifth grader can do with pen and paper, but just really, really fast.
What it's doing, how AlphaGo learned is it played itself in the game of Go millions and millions of times.
And at the end of every game, it was told whether it won the game or whether it lost the game.
And then it inferred patterns within its gameplay: this type of pattern, something that looked like this, would yield a higher probability of victory.
In the end result, what this came out to be was just a mathematical equation.
It was able to identify patterns and produce an end state, a move in this case, based on the current position of the board, that would yield a higher probability of victory in the end.
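The learning loop described here, play, receive a win/loss signal only at the very end, and nudge the preference for what you did toward or away, is the core of reinforcement learning. Below it is reduced to a two-action toy, with the game itself replaced by a biased coin; the probabilities, learning rate, and iteration count are arbitrary stand-ins, nothing like AlphaGo's actual training.

```python
import random

random.seed(42)

prefs = [0.0, 0.0]     # learned preference for each of two candidate "moves"
win_prob = [0.3, 0.7]  # hidden truth: move 1 really wins more often
lr = 0.1               # learning rate

for _ in range(5000):
    # Play the currently preferred move, but explore randomly 10% of the time.
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = 0 if prefs[0] >= prefs[1] else 1
    # The only feedback, as in self-play: did we win this game?
    won = random.random() < win_prob[a]
    prefs[a] += lr if won else -lr   # nudge the chosen move's preference

best = 0 if prefs[0] >= prefs[1] else 1
print("learned to prefer move", best)
```

No one ever tells the learner which move is good; the preference emerges purely from the end-of-game signal, which is Sam's point about AlphaGo inferring patterns from wins and losses alone.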
But AlphaGo had no idea it was playing a game. It had no idea that it was in a competition that was broadcast across the world, and no idea it was even making moves.
So when we talk about what it would take for a computer to invent something like warp drive, AlphaGo might seem like something along the lines of getting there, maybe in a decade.
But in fact, it's nothing close, because what AlphaGo required is this equation at the end of whether it won the game or not.
And it required a clear state space in which to discover patterns that would feed its positive feedback loop.
Basically, it was just optimizing a single mathematical equation given the inputs of a board game.
What do I need to multiply the inputs by, numerically, in order to get an output move?
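That description, multiply the board's inputs by learned numbers to get an output move, is literally what a single-layer policy does. The sketch below uses random numbers as stand-ins for both the board encoding and the learned weights; AlphaGo's real networks are deep and far larger, but the arithmetic at each layer is of this kind.

```python
import numpy as np

rng = np.random.default_rng(0)

board = rng.random(361)           # a 19x19 Go board flattened to 361 inputs
weights = rng.random((361, 361))  # "learned numbers": one column per candidate move

scores = board @ weights          # multiply the inputs by the weights
move = int(np.argmax(scores))     # output: the highest-scoring move
print(f"play intersection {move} of 361")
```

Nothing in this pipeline knows it is playing a game; it is an optimization over numbers, which is exactly the limitation being described.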
There is no way I can conceive of putting the problem of Warp Drive or the problem of space exploration in terms of an optimization problem.
And so this is really what artificial intelligence is right now.
Artificial intelligence can be described as optimization problems. It can be described as math.
So anything in the future that can be described through optimization is something that the AI we have right now is really good at.
So when we look at your Star Trek world, what is the computer really good at?
Well, the computer is really good at maneuvering the spaceship, maneuvering the enterprise into position, given an enemy spaceship.
How can the enterprise expertly maneuver to avoid the missiles?
Or what is the optimal firing rate and firing trajectory of its lasers in order to take down an enemy ship?
For instance, Data, the android on the Starship Enterprise: Data is really good at things like calculating really, really fast in order to accomplish exactly what Captain Picard demands of him.
But what the computers are not good at is creativity. They're not good at anything that would require Captain Picard or the other Starship Enterprise officers to decide whether we should explore this planet.
How can we explore this planet, not in the optimal way, but in the way that best suits human interests?
But, and excuse me for interrupting, it's a beautiful explanation you're giving.
What the officers of the Starship Enterprise have that Data doesn't have, is that a certain kind of intelligence?
Or is it something else? Is it on the level of your word, intuition?
Or moral intelligence? Some other kind of human faculty that is not subsumable under the rubric of intelligence?
This is the problem I have with Kurzweil, that he thinks that intelligence is the whole game.
And we know that there's a lot more to being human than what goes on in our brains.
I agree completely. What's going on in the human brain includes the intelligence that computers can do right now, includes the intelligence of Data or the Enterprise computer.
But there's something more going on in the human brain, where you can call it intuition, you can call it feeling, you can call it whatever.
That enables us not just to make decisions, not just to learn, but to grasp the immense complexity of multiple and manifold problems really well, in a representational manner.
So what computers are really good at is taking a very, very narrow topic, the game of go and solving it.
Whereas humans can do that, not as well as the computer; but what humans can do orders of magnitude better than computers, even in Star Trek, which is hundreds of years in the future, is take a whole world of data, a whole experience of data, understand it conceptually, intuitively, in some manner, and then make decisions off of that.
Computers can only make decisions off of the inputs we give them. Humans can go out in the world and experience completely unfamiliar things, and then make completely new decisions.
Can I ask a question before it slips my mind? How much of what you're describing about humans do we share in common with animals?
Yes, so that's a completely unknown question. What I'm talking about right now is human consciousness. What is it that makes us special compared to a computer? So computers right now, even the Curiosity rover on Mars that drives itself, are not really taking in brand-new ideas and making decisions.
We trained it off of earthly terrain, and we trained it how to navigate terrain. A human or a dog, when put on Mars, could encounter things completely alien, things that it had never been taught before, and somehow learn to learn how to deal with them.
And so I think, and this is debatable, is that because they have their sentient subjectivity? They have the ability to question themselves, to think about themselves.
So I think a human, undeniably, when put on Mars or even in a different universe, could think about things from their own point of view, relate to these new experiences by relating them to themselves, to their own subjectivity. I think a human can do that. Could a dog do that?
I would say a dog would be able to do that. I would say a monkey would be able to do that. Could a worm do that? Could a tree do that? I don't know. Those are interesting questions.
Yeah, we don't have to speculate too much along that line. I'm just trying to identify: we're still in the realm you were describing of the easy problem, right? Even though these are incredibly sophisticated, you know, AlphaGo is incredibly stupid, but still within the domain of the easy problem.
The hard problem is getting machines to develop something along the lines of human subjectivity. Exactly.
We can talk about the challenges that represents, but I still don't understand why it's necessary that they have that subjectivity. Is it because without it, they're not able to interact with and respond to completely unpredictable circumstances or events, and therefore not able to make decisions creatively or intuitively that would enable them to grow exponentially at the rate of a singularity?
Exactly. Exactly. And that's what matters. So when we think about computers now, AlphaGo, self-driving cars, even the Enterprise computer, they do one thing very well, or maybe multiple things very well, based on hordes of training data and past experiences of that specific thing.
But AlphaGo cannot play chess. AlphaGo cannot be put in a completely unfamiliar position and learn to learn how to solve that problem.
It's really this question of computers right now can do learning really well when somebody tells them what to learn, but they can't...
And this is what... They can't take the initiative, these computers. Yeah, they can't learn to learn. And this is what the most contemporary computer scientists are trying to create a computer to do, but we haven't been able to do it.
Would that still be within the realm of the easy problem? Yeah, no. That is getting into the realm of the hard problem. What would be required for a computer to have its own curiosity, to have its own ability to explore its own thoughts?
That would require what Chalmers and other computer scientists call the hard problem of consciousness, or strong artificial intelligence, or artificial general intelligence.
This hard problem of consciousness is that subjectivity and what it feels like to be us. That enables us to learn to learn and enables us to engage with our world curiously with wonder.
And in my opinion, and this is debatable, but without this wonder, without this subjective appreciation that could not be written on a piece of paper, computers will never learn to engage in a general sense.
So in computer science terms: right now we have artificial intelligence, but what we don't have is artificial general intelligence, an algorithm or a machine which can learn anything in general, exactly how humans do it.
And that is kind of the dividing point between weak AI and strong AI, the dividing point between the computers we have now and the computers in the Star Trek realm, and those computers after the point of singularity that would go on and permeate the universe.
Well, it could be that what they don't have yet is stages of development. In particular, they don't have an infancy or a youth, because we know that what's so singular about our species, even compared to the primates closest to us, is this protracted,
excessively prolonged infancy and childhood that we have, in terms of the percentage of our lifetime it occupies.
And we know that the extraordinary plasticity of the infant and young mind, its learning, seems to have its incubation matrix right there, in something that is somehow associated with youth.
How that's going to help solve the artificial general intelligence problem. I have no idea.
But anyway, I take it that you believe that we are very far away from solving any of the hard problems. Is that correct?
Well, this is the question. I cannot put a single timeline on this, because when we're talking about the easy problems, decision-making,
we're talking about things where we can clearly see how to improve upon existing algorithms to get there. So, for instance, I don't know how to make a plane, nobody knows how to make a plane, travel between star systems really fast and efficiently.
But we can see our current planes now and we can see if we incrementally improve them over decades at an exponential rate of progress we'll be able to do that.
So I can pinpoint maybe in a hundred years we'll visit another star system.
But nobody has any theoretical conception of how you could create this consciousness.
Right now we have, very recently, in the past decade or so, since the '90s and early 2000s, developed enough technology,
enough machinery, to completely mimic all of the power and all of the relationships in the human brain.
So right now we have the technology required to create the sentience, but we don't have the theory required to create the sentience yet.
And the reason people are really worried about this now is that the technology is so easily accessible.
If some person, maybe a Stanford student, maybe some person in their garage can develop the theory.
They can create the sentient intelligence and once they do it, it's a winner take all game.
Once they do it, that is the point of singularity.
The AI, however they programmed it, would be able to learn at the exponential rate envisioned by Kurzweil and be able to do an untold number of things.
So that's why it's a dangerous thought.
And that's why I can't put a timeline on it.
Can I ask you, are you a candidate for such a person?
Yes and no. I mean, I have the technical abilities, and every graduate student at Stanford, or any undergraduate who's taken AI classes here at Stanford, has the technical ability
and has the machinery, the capability, to program this future AI that would break through the singularity.
What none of us have as of yet is a pragmatic theory of how to create this consciousness.
So I take it, Sam, that you believe that the theories that are governing AI at the moment are unviable, that they represent a naive,
maybe naive is not the word, but extremely limited and perhaps misguided philosophical concept of what human intelligence is.
Can you speak about this, the Cartesian legacy of the theory of mind that is operative in the AI community and especially what you call the limitations of a state theory of intelligence?
Yeah, so let me explain. Computer scientists right now have wonderful theories about how we could create consciousness, but they're all based on this Cartesian framework.
So one of the absolute leading neuroscientists in the world, Christof Koch, has this idea of consciousness, that consciousness is a fundamental part of the universe.
We don't have to go into exactly what he says, but what he bases all this theory off of, and he writes this, is Descartes' formulation,
the cogito ergo sum: I think, therefore I am. What he says, his vision of how to do this, is: we can develop a weak AI, and we know exactly how to do that.
Something that can make decisions, can take in parameters, learn off of those parameters, and develop a solution to a problem. We can do that. Let's do that really, really well.
Then let's apply value predicates on top of that data. So as I said, AlphaGo plays Go right now, but it doesn't know it's playing Go.
What computer scientists want to do is teach it to play Go like it does now, and then apply value, apply meaning, to what it's currently doing.
So it's playing Go, and then we want to teach it, or inject within it, the meaning of playing Go.
We want it to somehow have a meaningful perception of the Go game, to understand that it's playing Go.
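The "weak AI" step Sam describes, a system that takes in parameters, learns from them, and outputs a solution without any notion of what the task means, can be sketched minimally. This is a hypothetical illustration (a tiny perceptron), not anything discussed on the show:

```python
# A minimal "weak AI" in Sam's sense: it adjusts parameters to fit
# examples and outputs answers, with no notion of what the task means.
def train_perceptron(examples, epochs=50, lr=0.1):
    """examples: list of (features, label) with label in {0, 1}."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # Nudge parameters toward the observed labels.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(model, features):
    weights, bias = model
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Learns logical AND from four examples; the model never "knows" it is AND,
# any more than AlphaGo knows it is playing Go.
model = train_perceptron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
```

The point of the sketch is the asymmetry Sam is after: the parameters encode a competence, but nothing in them is a perception of the task itself.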
Yeah, so they're still in this input-output way of thinking.
You put something in and it comes out as an output, which is, yes, a very Cartesian representational model of learning.
Yeah, and they think it's this kind of special subjectivity sauce that you sprinkle on a smart machine, and magically it becomes conscious.
And so, based on this framework, this is where you've gotten a whole debate:
well, is this special subjectivity sauce even computable?
So you get crazy ideas from people like David Chalmers or Christof Koch.
Maybe consciousness requires new physics. Maybe this state, which I'll explain in one second, requires a brand new idea of physics to explain, because when you think about what would be required for AlphaGo to get this subjectivity, it's almost magical.
It's almost, there is no way a computer could do that.
We need something extra.
We need a specialness to this.
And so this is where all the modern ideas of consciousness come out of.
They come out of the idea.
All right, what is this specialness that we could inject into this machine to get it in that subjectivity?
And so some people think, well, maybe it has to do with quantum physics.
So Sir Roger Penrose is a physicist who says that the consciousness in our brain is due to quantum physics,
due to the superposition and collapse of quantum states within our brain.
That's one idea. We don't have to get into it.
But another idea is that maybe consciousness is fundamental to the universe.
And the reason people are coming up with all these crazy ideas is that nobody has any viable path forward on how we can computationally inject this subjectivity, under this Cartesian framework, into AlphaGo to make it conscious.
So we have to come up with all these crazy alternative viewpoints on how we could possibly do it.
Now, I think that all of these ideas, all of these frameworks, that consciousness is fundamental,
that consciousness has to do with quantum physics, that perhaps consciousness is explained by what people call integrated information theory, which says that given enough information being rapidly moved, consumed, and processed, consciousness magically emerges,
they're all working off of the wrong definition of consciousness.
They're not embarking upon the question of what would it mean to make a conscious entity.
What they're doing is they're trying to create a conscious state.
So what Chalmers and Koch and Penrose and all these other theorists are trying to do is build a machine that one could point to and say: this is a conscious entity.
This is a conscious agent.
It is a state, a subject.
And so this is why Koch begins with Descartes.
He says, I think, therefore, I am.
He pinpoints an eye.
He pinpoints that there's a subject behind existence, which experiences.
I don't necessarily agree with that.
I don't think, if you froze me right now so that I'm an object, that you could say that that frozen Sam is still conscious.
Consciousness to me is not a state which can be computed, not a state that can be solved for, not a state that can be sprinkled in.
Not a conscious-ness, but rather an act of doing.
Think about what we do when we are conscious.
For instance, when I look at leaves of grass, what I see is not, first and foremost, the color green as RGB values, as wavelengths, as frequencies.
I see the greenness of something.
I can engage with the color green in a way that far surpasses any type of currently existing known methods of computation.
Or when you talk about Dasein, which is Heidegger's word for, roughly, a conscious entity, and its experience of a door closing.
When a door closes, we don't hear the frequency of the sound.
We hear the door closing as a door slamming.
It all takes place within a meaningful context,
a context of meaningfulness into which Dasein is thrown.
And so this ability to take things as things, this is something that artificial intelligence has no idea how to do.
And it's not the same thing as pattern recognition?
Not at all.
Not at all.
What's the difference?
So pattern recognition would be: I would first take the frequency of a door closing.
I would first take the sound waves.
And then I would actively think: all right, these sound waves sound 80% like these other sound waves, which historically have proceeded from a door closing.
Got it.
That's a pattern recognition.
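The pattern recognition Sam describes, matching new sound data against historical examples by similarity and a threshold, might be sketched like this. The feature vectors and labels are made up for illustration; nothing here comes from the show:

```python
import math

def cosine_similarity(a, b):
    """How alike two feature vectors are; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classify(sample, labeled_examples, threshold=0.8):
    """Label a sample by its most similar historical example, i.e.
    'these sound waves sound 80% like waves from a door closing'.
    The classifier has no notion of what a door *is*."""
    best_label, best_score = None, 0.0
    for features, label in labeled_examples:
        score = cosine_similarity(sample, features)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else "unknown"

# Hypothetical spectral feature vectors from past recordings.
history = [
    ([0.9, 0.2, 0.1], "door closing"),
    ([0.1, 0.8, 0.3], "footsteps"),
]
```

A new sample close enough to a stored "door closing" vector gets that label; anything below the threshold is "unknown". Meaning, on this picture, arrives only after the fact, as a label bolted onto the data.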
That is what Koch and all these other consciousness scholars are trying to create.
They want to inject this meaning after the fact of the data.
And I think this is just the wrong approach to consciousness, the wrong approach to an artificial general intelligence.
Because what Heidegger so beautifully elucidates for us is that we take things as meaning primordially,
before we engage with their substance, what Heidegger calls their presence-at-hand.
And that's basically, to simplify, what he means by being.
Being is the aspect whereby we take something as what it is.
In other words, we have access to its being,
its being this or that or something.
Go ahead.
So when you ask me what it would mean to create an entity that reaches Kurzweil's singularity, or reaches that artificial general intelligence:
it can't just look at data without meaning and then learn off of that.
That could theoretically get us really, really far.
But it is completely alien to how humans do it.
And I don't think that humans have just stumbled on another way to do it.
I think part of what has made us so brilliant,
so able now to create planes and automobiles, is that we don't look at things as meaningless data
and then apply some values on top of it, which in itself would still be meaningless.
We take things as this.
And this isn't just some ontological difference that I'm making here.
This is a critical difference in how we think about building intelligence.
If an entity first takes things as something, as meaning, it engages and learns in a way completely alien to how computers or other things learn.
They don't learn off of data.
They learn meaningfully.
They care about things.
They have this concern.
They experience the world from a point of view from a meaningful existence.
I mean, Heidegger then goes into what is required of this Dasein.
This Dasein is projected into a world that is already meaningful.
It's projected forth into a multitude of worlds of experiences.
He has wonderful examples of the person in a shop, a builder.
When they're working on a car with their tools, they don't engage all the tools, classify them all,
and calculate what patterns would be best to screw in a screw,
or to hammer with a hammer.
They understand the hammer in its meaningful existence,
its ready-to-hand property:
that which can hammer,
that which can be used for this or for something else.
It's a pre-theoretical engagement with the tool,
as opposed to the present-at-hand, which is a theoretical after-effect.
So for Heidegger, however, this ability to take things as... you mentioned earlier that for Dasein, or human existence,
or what our colleague Thomas Sheehan calls thrown-openness, we're thrown open.
We're thrown into a world in a mode of openness, and therefore things come to us,
as well as at the same time we're reaching out beyond ourselves to them.
You mentioned that things matter to us.
And in Being and Time, after Heidegger undergoes this existential analysis of the mode of being of Dasein,
he finds that the inner core, the essence of this thrown-openness in a world, is care.
And it's care, which means that on the one hand we're burdened by cares, and on the other hand things matter to us,
and we take them into our care.
And he will then go on, in division two of Being and Time, to articulate what he thinks are the conditions of possibility for this pragmatic taking-as.
And he'll find that it's in our temporal, dynamic projection beyond our immediate present into a future.
It's ecstatic: Dasein is always futural, Dasein is a being-unto-death, and this ultimate impossibility of its being, against which it can shatter before the actual event of death takes place, is what throws us back into the world.
And therefore it seems to be responsible for the mediated relation that we have to things, right?
Do you think that, for artificial intelligence to replicate what you're calling human consciousness,
one would have to give the machine some sense of care and some sense of futurity, and perhaps, like in the movie Blade Runner, a sense of imminent mortality, like the replicants',
whereby they go back to their maker and creator and want more life because they know they're going to die?
Would that be a kind of necessary condition for artificial general intelligence?
Yeah, no, I agree completely. One of the reasons why I think humans have been able to become so intelligent is in part because of this care.
When you have this great philosophy of learning, of how we have come to learn, it's this brimming over of curiosity, this brimming over of wonder, this brimming over of care and concern.
And so what I mean here is in order to create this artificial general intelligence, we need an entity that not only understands the world physically, but that has a concern for the future that understands its place.
And these aren't just meaningless philosophical requirements. These are necessary prerequisites to really general learning.
How can a machine learn in the general case without having an ability to experience the world, without having an ability to care and to have concerns for future actions?
So when I talk about consciousness, I don't talk about consciousness as a state. I talk about consciousness as intrinsically, as Heidegger and you put it, temporal.
So what consciousness needs is not necessarily a machine. It needs an understanding of time. It needs a sense of being projected towards the future.
And I think this is a key point in Heidegger's thought when he's critiquing Descartes.
The Cartesian subject exists frozen; it exists by itself. Heidegger's Dasein is inherently projected. It is never not moving. It is always, not looking towards, but engaging with the future, almost in the future without being in the future.
It's, in Heidegger's words, projected towards death, or in Sheehan's words, thrown-openness. These aren't just meaningless words; they tell us about a difference in how we think about that entity which has subjectivity.
In Heidegger's terms, it's a verb rather than a noun. It's something which experiences presently, into the future, with the past, not something that just happens in an instant.
And so this temporal dynamism of Heidegger's Dasein is what I think is necessary for an artificial general intelligence: one, to become conscious, but two, to really engage with learning and engage with this world.
How can it learn to learn without this care, without this concern and understanding of the world around it?
For Descartes, we can think of the human subject as existing in one world, the universe, and engaging with that one world through a special mediation. With Heidegger, that's not how consciousness exists at all.
It exists projected towards a multitude of worlds. One world might be the workshop; one world might be the temperature in the room.
And it has all of these independent concerns. So for instance, when I'm in the studio right now with you, I have the concern about what I'm talking about, what I'm going to talk about next.
And I'm trying to come up in my head about what the next word will come out of my mouth. But I also in the background have a feeling of the temperature in this room.
And right now it's a perfectly fine temperature, and I'm not really considering the temperature. But the moment it becomes too hot for me, my attention, my concern will be moved towards it.
And I don't necessarily know exactly how that happens. But that type of, I wouldn't quite say multitasking, but that type of concern with the world is what is required of an artificial general intelligence.
It needs this general ability to exist in the world as an experience.
Right. Well, the Scholastic philosophers of the Middle Ages, going back to Aristotle, distinguished between two different kinds of potency: call it active potency
and passive potency. Now, active potency would be associated with everything that we do when we act actively. So we read a book, we calculate; all those things that maybe have to do with the easy problem of artificial intelligence are conceived of in terms of an active potency.
What Aristotle and others mean by passive potency is, so it's a potency, namely a potential, but it's our ability to be affected.
It is our capacity for a pathos: being open to a change of temperature in the room, and therefore responding to it.
Or feeling the pathos of another human being who might break into the studio in a state of hysteria because she's been robbed or something. This sort of passive potency, I would suggest, has to play a key role in any attempt to raise artificial intelligence to the general level.
Yeah, no, I agree completely. So when I think about what I do when I read a book and I'm learning something from the book, I'm not actively, well, I mean, I am actively reading the book: I'm reading every word, I'm thinking about what the author is saying.
But when we think about what we really get out of a book, it's those relationships we make from what the book says to our own experiences.
When I'm reading a book, such as Heidegger or Aristotle, I'm reading what he says, but I'm relating that to other experiences. There's that active potency in the action of reading, but there's that passive potency in how my mind is attuned to the world in general and can then make relationships and make distinctions.
So I think when we intelligently learn things, before you can make, you have to receive. It's a receptiveness, this capacity to passively receive something in an active mode. I don't know how else to put it.
No, no, no, I agree completely. The active is what I physically do; that's the Cartesian "I think." But the passive is this conscious openness to, you can call it, intuitions or something.
So, for instance, when I think about what I do when I'm programming a computer, when I'm actually engaging in the act of coding: at one point I am actively thinking about what I'm typing and actively thinking about what is necessary for me to accomplish the next task, the next algorithm.
But there's also this passiveness, this intuition: my hands already know what they will be typing before I can consciously become aware of the ratiocination going on.
I have this inkling, this feeling of what needs to happen next before it even comes into the forefront of my head. That's that passive openness to the thought that's emerging in me.
It's not that my physical hands know it; rather the thought, not the conscious thought but that open thought, is ahead. My body somewhere, or my mind, my Dasein, in the world somewhere, knows what the next command I'm going to type into the computer is before I consciously become aware of it.
So there's this activeness of my typing, but there's this passive openness: not something that I'm consciously making, that I'm consciously ratiocinating about, but something that comes to me intuitively as an already pre-thought thought, something that I can engage with.
And this is really where Heidegger talks about how we live and engage in experience in this world. And I think this openness to thought, this passive potency, this ability to relate to the world in multiple ways, to have these intuitions that exist before you even come to them,
is essential to an artificial general intelligence. It's essential to any entity that can learn in the broad sense, because right now I can give a computer the ability to play Go, like a book on how to play Go.
But it has no conception of flowers. For instance, Go is this beautiful game in which the terminology for specific types of positions is based off of real-world things. So there's a specific type of position in Go that is named after a flower.
And the right way to play is that which completes the picture of the flower. How could a computer come to this intuition that this kind of looks like a flower? That knowledge, that this looks like a flower, comes before I even actively think about it. This idea that the board, where it's just a bunch of white and black stones, looks like a flower: I can't actively think my way to that. That's something that needs to come to me in a way that's open.
Yeah, listen, you're talking about experience, and there are so many dimensions and depths to it. And the one that you're referring to, in some ways, is what Heidegger might call our attunement to the world.
What is that attunement? Is it something that takes place in our brains? I don't think so not primarily. I think attunement is part of our whole personhood.
It's somatic, or sensory perception, or yes, our intelligence at a certain level. But it means that we are already in touch with the world, rather than this Cartesian inner ego whose relation to the world comes only through the mediation of representations, or, in the case of computers, algorithms or computational numbers and so forth.
That's a tall order though, Sam, to give a machine a sense of attunement to the world.
Well, here's where I'll get really practical with you and why I think this isn't just some abstract conversation about consciousness or philosophy, but that can actually impact artificial intelligence research right now.
So how can we create this attunement? You can never create an entity which is attuned a priori. You can't create this Cartesian subject which engages with the
world, as you say. No, attunement is not a thing but a relation to the world. And this is what I want to stress moving forward with artificial intelligence research: we need to stop thinking about how we can create the state of consciousness, and start thinking about how we can create the experience, the action, the verb, the relationship itself.
And attunement, it's a great word, but it's not a static "-ment"; it's an attuning, a continuing forward movement of the attunement.
I am attuned to the world, I have a concern for the world, I am in touch with the world. All these are active things. This, in my opinion, is where consciousness emerges.
It doesn't emerge in the special way I connect my computer network or my neural network. It emerges in the action of how this network engages with the world. It is that verb. It's in the empty space between the subject and the object. That empty space that relates the two is where consciousness, where experience, emerges. And that is what we can try to program in the future.
Remember his name, folks: Sam Ginn. We've been speaking with a Stanford sophomore about artificial intelligence, human experience, consciousness. And again, remember that name; I think we're going to hear a lot more about Sam in the future. Whether that future takes us to the singularity or not, whether he has something to do with it or not, he's going to be there somewhere.
And stay tuned for another program we're going to have on consciousness. But in that case it's going to be a show about how you expand your consciousness through a certain kind of psychotropic drug. That would be an interesting question: would an artificially intelligent machine be capable of intoxication and inebriation?
But we'll leave that for another conversation, Sam.
Thanks for joining us today on Entitled Opinions.
We'll try to get you back for another follow-up sometime very soon.
I'm Robert Harrison for Entitled Opinions.
Stay tuned, bye-bye.
- Yeah, thank you so much.
(upbeat music)