I’ve been dismayed at the condescension toward and willful ignorance of the experiences of sex workers that is prevalent among…those…who…try to impose their values through criminalizing their work. – Peter Schafer
Every once in a while, it’s nice to see a woman whose name isn’t Maggie McNeill attacking mainstream feminist claptrap:
Barbie is a safe way for girls to explore dangerously adult concepts like sexuality…says [psychoanalyst] Joyce McFadden…she likens young girls’ play-acting Barbie sex with them trying on their mothers’ makeup or bras…Anti-Barbie arguments have a tired ring to them…when I look back at my own Barbie-influenced youth, I have a hard time pointing to anything but positive effects. “The feminist perspective is she has this unattainable figure,” McFadden says. “But Barbie was the only doll that had breasts, the only one to create a space where girls could start to fantasize about that”…Barbie is sexualized by adults, not kids…
For those who couldn’t catch it live, the podcast version of my February 16th appearance on The Bob Zadek Show is now available.
Harry Reid is on his anti-whore hobby horse again: “Las Vegas is one of the cities in contention to host the 2016 Republican National Convention, but Nevada’s prostitution laws may jeopardize the city’s chances of winning its bid…” The idea of a convention-rich state criminalizing a whole industry to win one single convention would be hilarious if it didn’t display such a blatant disregard for his constituents’ lives.
Somehow, I Doubt He Thought This Through
Unfortunately, it’s the sex workers who suffered from this idiocy: “A disgruntled punter made a complaint to Harrow Council…after a prostitute refused to have sex with him…at a brothel in the borough…The brothel is now being closed…”
The officer in charge of a police pilot project created to combat human trafficking and rescue sex trade workers…[engaged] in sexual acts and [sent] lewd messages and photos of his penis to sex workers and colleagues. Sgt. Derek Mellor…once the lead of Hamilton [Ontario] Police’s…“Project Rescue”…[pleaded guilty] to [nine] charges…
Western Australia politicians keep playing with this diseased carcass:
Professor Catharine MacKinnon said Perth was perfectly placed to prevent human trafficking from Asia…because it was used as a gateway…by criminals. However…the…Government would have to…overhaul prostitution laws to target the people who pay for sexual services rather than the prostitutes…
By “carcass” I mean the Swedish model; MacKinnon is still alive and flapping around belfries for the time being.
Considering the doctrine that consent can be retroactively revoked months or years after the fact, I sincerely doubt an informal contract without a lawyer witness would really offer much protection against future rape charges, but I guess it couldn’t hurt. It’s just sad we’ve come to this.
Welcome To Our World (TW3 #15)
Everything is magically right and moral when state actors do it:
…[Washington state] prosecutors had [a rape and kidnapping victim] arrested…The 43-year-old woman…has not been charged with any crime. She just wasn’t showing up for pre-trial meetings with prosecutors…[in other words] the woman [is being held] against her will so she can help convict someone else of holding her against her will…
And just in case you think I tend to exaggerate about neofeminists: Amanda Marcotte approves of the state’s actions.
Big masked heroes in riot gear prove their manhood by attacking Asian women:
Birmingham [Alabama] police raided an Asian massage parlor…taking four women and three patrons into custody. Police, along with FBI and ICE agents [pretended] the raid was in response to complaints of prostitution and possible human trafficking…
In the entire history of futurism, has any futurist ever been right about anything?
…a prominent futurologist has claimed that AI girlfriends…like the one…in the…film Her could become a reality by 2029. Ray Kurzweil…claims…a virtual body with a tactile sense…“will…certainly be completely convincing by the time an AI of the level of Samantha is feasible”…
A man died of a heart attack while in the embrace of a prostitute at a brothel in Ticino [Switzerland]…police confirmed that…there were no third parties involved and no criminal case…seven customers have died in brothels in the canton in the past ten years…
This is just a typical lurid “survivor” narrative, but contains an unusually telling passage near the end: “For 17 years, Lloyd told no one what she had been through…Six years ago, she contacted the Human Trafficking Task Force of Southern Colorado and met people who convinced her that she was dragged into a life of prostitution — and that it wasn’t her fault…” As usual, all events are conveniently decades in the past, with no witnesses, and she only came out with it after others convinced her that it happened.
You should’ve known we wouldn’t get through an Olympics without at least ONE “sex trafficking” scare story, and here it is courtesy of xoJane. I’m sure we can all think of more credible explanations for this scam that don’t involve the Russian Mafia and sentences like “The Olympics is a huge draw for trafficking…and American women are typically sold for more in foreign countries.”
Photographer and regular reader Peter Schafer on his project Whores and Madonnas:
Opportunities are extremely limited in the Dominican Republic for women to make a decent living…After reading about how central the responsibility of being a mother was to their motivation to pursue sex work, it got my mind going as these women seemed to explode the well-worn dichotomy that classifies women as either Madonnas or Whores…I became their client…explained I was working on a photo project and showed them some of my other work…I paid them for their time being photographed…because prostitutes in Sosúa are the economic drivers, with all other businesses…dependent on them, they enjoy…status and respect…
Not To Be Taken Internally (TW3 #39)
An adult entertainer was sentenced in Mississippi…to seven years in prison for helping set up an illegal silicone buttocks injection that killed…Karima Gordon. Stewart, who goes by the…[stage] name…Pebbelz Da Model, took $200 to refer Gordon to the injector and falsely claimed the person was a nurse…Tracey Lynn Garner, the one suspected of administering the injections, is charged with depraved-heart murder in the deaths of Gordon and another woman, Marilyn Hale…She has pleaded not guilty, with her trial scheduled for March…
The title is, “For Smart Prostitution Laws, Ask a Prostitute”. The first couple of paragraphs sharply criticize the Canadian laws which were struck down, and the middle section discusses misconceptions about sex workers and implies we need to be consulted on new laws…then the author rapidly descends into evil pimps and helpless victims, and concludes by advocating the Swedish model.
Slowly but surely, it’s becoming acceptable for female public figures to declare their support for something other than criminalization of sex work. Legal journalist and former assistant district attorney Robin Baton recently published “Why Prostitution Should Be Legalized in the U.S.”, and while she fails to see the problems with the license-and-regulate view that’s probably just because she hasn’t thought it through; she uses arguments based on the principle of self-ownership, the lack of a bright, clear line between prostitution and other female behaviors, and the sports analogy. And though (as Scott Greenfield expressed it) “she’s no Maggie McNeill”, it’s still good to see this in a US medium.
Another honor for Chester Brown, the acclaimed cartoonist (and friend, and reader) who drew the cover for my book: he’s been included in a new book entitled “50 Canadians Who Changed the World”. Chester says the book isn’t very good and downplays his hiring of sex workers (depicted in Paying For It), but I still think it’s great that he was picked out of the many notable Canadians the author could’ve written about.
On March 14, 2014 human rights defender and transgender advocate Monica Jones will…go…to trial…for “manifestation of prostitution”…after she [was arrested for protesting]…Project ROSE …Two members of SWOP Phoenix and the Best Practices Policy Project intend to travel to Geneva to…educate the United Nation’s Human Rights Committee about human rights violations perpetrated by Project ROSE and the Phoenix police…and call on the Committee to pressure the U.S. government…This unique opportunity has emerged because the United Nations Human Rights Committee will review the U.S.’s human rights record…on…the same date as Monica’s trial…
This Hong Kong reporter seems to be trying out for a job in Beijing’s propaganda ministry:
Hong Kong police are planning to focus special attention on illegal sex trade operations in response to a wide scale crackdown on prostitution…in…Dongguan…Commissioner of Police Andy Tsang Wai-hung has expressed concern about a likely increase of prostitution in Hong Kong, but also expects additional criminal activity to accompany this vice…Prostitution was banned in China after the Communist revolution in 1949. However, it reappeared three decades ago…and…is greatly responsible for a significant increase in sexually transmitted diseases…
Though the talk of criminality and disease is typical in the West, I simply cannot fathom why so many people whose minds were not broken by Mao are nevertheless happy to repeat the ludicrous propaganda that prostitution magically ceased to exist after his official proclamation.
[At the turn of the century]…an adult performer openly escorting was relatively rare…[but in the past few years] the industry…changed drastically: the talent pool [is] larger, the number of available roles much smaller, and the advent of digital piracy [has] driven performers’ rates down. Escorting…has become one of the most lucrative ways for adult performers to supplement their flagging incomes…up to thousands of dollars for a single booking…The issue…is fairly divisive among performers…[and] was largely stigmatized within the adult film community until recently…
Gorged With Meaning (TW3 #408)
After all the attacks, the Duke porn actress decided to speak for herself:
…I couldn’t afford $60,000 in tuition, my family has undergone significant financial burden, and I saw a way to graduate from my dream school free of debt, doing something I…love…my experience in porn has been nothing but supportive, exciting, thrilling and empowering. The next question is always: “But when you graduate, you won’t be able to get a job, will you? I mean, who would hire you?” I simply shrug and say, “I wouldn’t want to work for someone who discriminates against sex workers”…
Whatever They Need To Say (TW3 #408)
Two sex workers’ flats in Soho, central London were…re-opened by a judge…[who] rejected police evidence that women…were being controlled or incited into prostitution for gain…Judge Kingston’s decision brought for the first time some common sense to legal cases, which have been rumbling through the courts since mass raids at the beginning of December…
If you’d like to answer the Canadian consultation but aren’t confident in your ability to make good points, Maggie’s (the Toronto Sex Workers Action Project) is here to help with simple guidelines (both general and question-by-question) which will help you not to get tripped up the way the government wants you to with its “when did you stop beating your wife?” phraseology. Read them, get your own answers ready and submit them in the next sixteen days.
The Course of a Disease (TW3 #408)
Mary Honeyball, desperate to smear the 549 organizations (it has since grown to 560) who spoke out against her revolting scheme which this week imposed the Swedish model as a non-binding standard for all of Europe, fell back on the usual ridiculous neofeminist ad hominem by branding them all “pimps” (savor the absurdity of that for a moment before proceeding). The ICRSE naturally had an answer to that, and here our friend Laura Lee and ally Dr. Belinda Brooks-Gordon confront Honeyball on Newsnight.
The Swedish model’s detachment from reality is so profound even Time magazine can’t miss it, despite the reporter’s belief in “sex trafficking” myths.
I crossed swords with MacKinnon at a presentation on trafficking she gave in Sydney about a decade ago. It wasn’t hard to strip away her ‘concern’ for trafficking victims and expose her contempt for all whores. All I had to do was point out that since sex workers are the ones who best know what’s going on in the industry, where the most exploited ‘victims’ are and what their real needs are most likely to be, doubtless she would advocate working closely with them in her fight against trafficking. It seems no-one had pulled that on her before, because she was momentarily lost for words before dismissing the idea with a reply that quickly descended from patronising to contemptuous of all sex workers. She lost most of the audience at that point, but as we were both soon to learn, she never had many of them to begin with.
I later challenged her advocacy of normalising the immigration status of trafficking victims who gave evidence against their ‘pimps’ by suggesting that would create an incentive for sex workers (and even non-sex workers) without papers to accuse people of trafficking them whether it was true or not and wouldn’t it be better to simply grant amnesties to *all* sex workers without valid visas who came forward. That would remove the main power the networks have over trafficked women. That was when about twenty very well dressed women in the audience (about a third of the attendees) gave me a standing ovation and both MacKinnon and I realised that SWOS had turned up in force. She looked like a hunted rat for the rest of her presentation.
It took me some time to get away from the meeting as there were several attractive ladies who wanted to introduce themselves and thank me.
Yeah, that disgusted me too.
So I blogged about it.
Re: Whatever they need to say
At the end of the article, there are two interesting links. First of all, the English Collective of Prostitutes has created a petition to save Soho’s unique personality:
Don’t Rip the Heart out of Soho Petition | GoPetition
Then there’s an open letter calling for decriminalisation – as relevant as ever following what happened in the European Parliament last week: http://prostitutescollective.net/2013/09/09/3171/
Re: Femme Fatale
We have a word in French for such a situation – “Epectase”. Needless to say, a sizeable number of men say that, when the time comes to pass over to the other side, they’d love to be able to go that way 🙂 … although I reckon it must be quite distressing for the girl – at least she can be sure that he died happy! Who will start that new moral panic about the “hundreds of thousands of men dying while buying sex”? 😀
Just checked out that word on http://fr.wikipedia.org/wiki/%C3%89pectase . Even mangled by google translate, the back-story is funny as hell.
The Pygmalion Fallacy:
Ray Kurzweil is an incompetent hack who is desperately trying to do anything worthwhile with his position at Google. He has basically ignored all the research done in the AI field in the last decades and has no clue what can and cannot be done. My guess is that Google knows he is worthless as a “researcher” and spends the pocket money needed to keep him on staff to give the company a more human face, i.e. this is misdirection. Google needs this more than ever with their huge dependency on privacy-invasion to make their business model work.
That said, “Futurism” is basically a scam that serves to inflate the egos of the actors in that field. Those calling themselves “Futurists” predict the future worse than random chance would. Of course, they like to take things that are obvious, claim those as predictions, and then mix them in with the pure fantasy ones. Many people fall for that.
Side note:
The current state of (human-like) AI is that there is not even a credible theoretical model of how it could be done. The only model that is there (automated mathematical deduction) needs more resources than available in this universe on pretty small problems that smart humans can solve. This means it is completely unclear whether creating a machine that can compete with (smart) humans is even a theoretical possibility, much less a practical one. Deducing from other areas where theoretical models have been turned into actually working things, if a theoretical model for human-like AI became available tomorrow (which is not likely at all), it would still take 30-150 years for the actual technology to be created. And let me repeat: there is nothing at all today by way of a theoretical model, not even a credible idea. And very smart people have been trying for decades to find one.
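The scaling complaint about exhaustive deduction can be illustrated with a toy sketch (mine, not part of the original comment): even the simplest complete procedure for propositional logic, truth-table checking, costs 2^n evaluations for n variables, so the cost doubles with every variable added and quickly dwarfs any physical budget.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Exhaustively check `formula` over all 2**n_vars truth assignments.

    Returns (verdict, number_of_assignments_checked). Brute-force checking
    is sound and complete for propositional logic, but its cost doubles
    with every additional variable.
    """
    checked = 0
    for assignment in product([False, True], repeat=n_vars):
        checked += 1
        if not formula(assignment):
            return False, checked
    return True, checked

# Even a trivially true formula over 20 variables forces 2**20 (about a
# million) evaluations; by ~300 variables the assignment count exceeds
# the number of atoms in the observable universe.
trivial = lambda assignment: all(x or not x for x in assignment)
verdict, cost = is_tautology(trivial, 20)
print(verdict, cost)
```

Real provers prune far more aggressively than this, of course; the point is only that the worst-case blowup is built into the problem, not into any one implementation.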
There is also absolutely no model how inanimate matter can create things like intelligence, intuition, self-awareness, etc. None at all. In fact, current physical models say this is not possible as these are non-physical things. Yet there are human beings that seem to have these characteristics. What this means is that there is a huge gap in our understanding of reality. Just note that observing something from the outside does not tell you anything about how it works on the inside. (If you do not believe that, take a cold, hard look at your smartphone…)
You and I are on exactly the same page. I have argued the point with computer people for two decades now, but they still insist it’s just a matter of even more brute force, which is totally asinine. No, I think this is one of the things Star Trek got right: We’re only going to be BEGINNING to figure out human-like AI in the 23rd century, and even then it will be highly problematic.
You’ve only got to listen to a non-native speaker to realise just how hard it is to be totally fluent. I knew a native German speaker some years ago; he looked exactly like a typical Englishman (!) and had a PhD from Cambridge, so he spoke like one. But, just occasionally, if you listened closely, he would make a very minor error; it wasn’t the sort of thing that any of us would make. So, if it’s so difficult for the human brain to be totally fluent in a foreign language, it must be nearly impossible for a machine.
“Asinine” is a very good term. (I had to look it up – German native-speaker here 😉) It is like claiming that once you have a modern computer without any software or OS, you are practically there. You are not. It will take something like 30 years and a couple of thousand very capable IT people to get there. And that is only because they know what the end-product should look like.
I really do not know why “computer people” fall for this nonsense time and again. One theory I have is that they think physicalism (“there is nothing besides what physics tells us there is”) is the only alternative to religion. In a sense these people have made physicalism their religion and hence have fallen into the trap they tried to avoid. It is no accident that “eternal life” in some technological guise crops up so frequently with these people. They even have “God” in the form of the “singularity” that cretins like Kurzweil like to rave about.
(Personally, I am an atheist and a dualist. I do not claim to have any idea what the nature of the “something extra” is, but I am pretty sure it is there and I am also pretty sure that no religion has an even remotely accurate description. I see no conflict whatsoever between atheism and dualism.)
The underlying problem may be that many people have trouble accepting white space in the landscape of what is known about reality. They crave certainty and when religion is not an option (as it is not for many computer people), they take this “techno-visionary” nonsense as a surrogate. In the bargain they get the other thing that religion provides: A community of people with the same beliefs and no need to justify any of it.
I find it very ironic that most futurists are likely to have read William Gibson. Anyone who did should remember that the few personality simulations he conjectures might exist were created at incredible expense, and include full self-awareness, or at least a highly accurate simulation of it.
Memorably, Gibson imagines that this self-awareness comes with a feeling of hollowness and incompleteness, leading to requests to be destroyed. Gibson’s imagination doesn’t bode well for Kurzweil’s.
Of course, Gibson’s not alone. I can think of other sci-fi authors who described a direct road to suicide from Kurzweil’s imagined “uploaded into the Internet”.
People like Kurzweil earn their money for technology companies like Google by whipping up excitable geeks who think that to fantasise about unlikely futures is to be ahead of the curve. It’s a pretty successful PR strategy – as the success of TEDx demonstrates.
But you’ve gotta wonder if his cheek ain’t full of tongue. AI girlfriends by 2029? Not 2025 or 2030? Must be a pretty precise science this futurism stuff.
I sure hope Microsoft doesn’t end up writing their operating systems or they’ll hog all your resources and at the most inconvenient times possible they’ll turn blue and die.
Well done; I literally LOLed at that! 😀
If history is any guide, they’ll refuse to talk to virtual girlfriends made by other companies, whine if you don’t buy them extras, try to push you into relationships with more expensive girlfriends, and pick odd moments to complain that you’ve never given them any money.
This seems like a good moment to plug a couple of my stories, “Ghost in the Machine” and “Rose”. And to say that if you like those, I have a whole book full of them, with cover art by a gentleman who commented in this very thread. 🙂
“There is also absolutely no model how inanimate matter can create things like intelligence, intuition, self-awareness, etc. None at all. In fact, current physical models say this is not possible as these are non-physical things.”
Algorithms also are non-physical things, and physical computers have no trouble with them. I smell a dualist.
Nonsense.
Computer programmers deal with algorithms. Computers deal with electrons. That is all. Everything else is the symbolic interpretation we as intelligent beings place upon them. You might as well say “Abacuses have no trouble with accounting” or “Books have no trouble with stories”.
Considering that we don’t even have robust definitions for ‘sentience’ or ‘consciousness’, much less comprehensive theories for how they arise, it is hubristic indeed to imagine that they must inevitably be producible by sufficiently sophisticated machines.
To say that mind is an emergent phenomenon of the complexity of our neural connections is not only a non-explanation, but one with no basis in evidence. Quantum physics provides better evidence that matter emerges from mind – though that is also very weak and has been massively overstated by the likes of Fritjof Capra and Gary Zukav.
Given what we currently know about consciousness, sentience and intelligence, keeping an open mind about the possibility of mind-body dualism is the only rational, skeptical and – dare I say – intelligent approach.
Indeed. There are too many things physicalism does not even begin to explain at this time to accept it as true without extraordinary evidence. Dualism has its own problems as it leaves the reference system, by design.
Basically, modern dualism boils down to recognizing the there are huge gaps in what we know about reality and strong hints that the current physical models are grossly incomplete when it comes to sentient life. Whether these models can be amended in time or whether they will remain incomplete is completely unknown and impossible to predict.
Well Celos, actually I am not a dualist. In fact I lean strongly in the opposite direction – towards advaitism – but that is based on direct experience, not rationality, and is not something I can even begin to explain with words.
But I agree completely with everything you attribute to modern dualism.
Well, one could argue that anything that accepts physical reality and then puts something extra on top is dualism.
From what I gather in 2 Minutes of Googling, advaitism seems to be a communication and state oriented model, and that would work without physical reality. There are other models that do not need physical reality (and hence are not dualism), like the model that says this reality is kind of a simulation.
If you really dig down, the acceptance of physical reality is kind of a weakness of dualism. 😉
Which is why I cannot really embrace it.
I am unable to reject the mystical/psychotic experiences I have had and so am unable to take physical reality at face value.
Besides which, I find empiricists like David Hume, George Berkeley and Immanuel Kant very persuasive.
My stance is more one of “know the options”, and the most tangible practical benefit is that it lets me recognise religion when it comes in disguised forms and run the other way. This is also a nice mental exercise.
Dualism is a minimalistic model and that is something that very much appeals to me, as it has only minimal assumptions and they are clearly visible. I like assuming physical reality as it has quite a few pragmatic advantages. But if push comes to shove, I would say that I can do without it. It is not, in any way that I can see, critical to have it as part of a model of reality.
Well, maybe if physics continues to become stranger and stranger, “physical reality” will become harder and harder to define and may eventually go away entirely 😉
Of course the viability of such an approach hinges on what you use to recognise religion with. I suspect Paul Murray would think he had spotted religion in anyone professing dualism.
Because advaitism and mysticism are closely associated with Hinduism many people think I am some kind of closet Hindu. I am not a Hindu because I embrace many of the ideas of Adi Sankara any more than I am a Protestant because I embrace much of what Kant wrote. I consider myself a radical skeptic and as such I am agnostic leaning towards atheism – though I do have a personal goddess. But that is another long story.
That’s a nice thought but as far as I can tell mainstream physics has been becoming less strange since the late 1980s. I feel several sub-disciplines of physics may be heading down a dead end because of this but I am by no means expert enough in those fields to have a firm opinion.
BTW, if you like speculative fiction in this line I would heartily recommend Greg Egan’s novel ‘Distress’.
Part of the plot involves a physicist who is on the verge of revealing the Grand Unified Theory and the attempts made to assassinate her by people convinced that when reality has been explained away it will cease to exist. They are only partly correct. Those with the other part of the puzzle are people with autism. I won’t say more as it would spoil it.
Abacuses need to be worked by a person, books need to be read, but what sense does it make to say that computer programmers deal with algorithms but robots do not, when the programmer is obviously not present while an industrial robot is building a car? When you say that a machine executing an SQL query against millions of records is not embodying an algorithm – words fail me. You are just refusing to see or admit what is plainly there.
And as for “no basis in evidence”, this is so arrantly false that one cannot feel that you are arguing honestly. The effects of brain injury on personality. Hell – the effect of *ingesting alcohol* on personality. Neuroimaging, which can see emotions and cognition happening in the neurons.
And quantum physics? Most people who go on about QM and mind haven’t the foggiest about what QM involves. If you aren’t working with vectors of complex numbers, then you are not doing QM – you are blowing smoke. For a gentle introduction, permit me to recommend Feynman’s ‘QED’.
Sorry – that should have been ‘not arguing honestly’. I’ve been ingesting alcohol, you see.
… and algorithms need to be programmed. Would you consider yourself intelligent if every algorithm you ever used had to be coded into your head by someone else?
But the main point I am making is that computers do not contain algorithms. They don’t even contain instructions (any more than an automatic door does). They contain moving electrons that we symbolically interpret as instructions, algorithms or World of Warcraft orcs come to chop off our heads.
If you think your comments on brain injury and alcohol say anything you clearly have not read and thought about my examples of mobile phones and wireless internet capable computers. And if you really think neuroimaging ‘shows’ emotions or cognition or anything else other than electrical activity (or blood flow) you have clearly swallowed a truck load of fashionable neurobabble and, like drachefly, are incapable of distinguishing correlation from identity. Doubtless theologians of the middle ages had methods of ‘showing’ the presence of supernatural beings too (rider: cognition and emotions really do exist, but they are neither electrochemical activity nor blood flow).
I’ll provide a metaphor that is easier to follow for the benefit of someone whose brain is affected by the ingestion of alcohol.
If your larynx was destroyed or damaged it would severely impair your capacity for speech.
Would it therefore follow that your entire capacity for speech resides in your larynx and therefore could be mimicked by building something functionally identical to it?
If you “smell a dualist”, then you have trouble reading, as I have clearly said that I am one. Your comment about algorithms is dead wrong though: physical computers are only able to execute them imperfectly, i.e. with a (very small today) chance of arriving at the wrong result due to some fault.
Yeees – and brains also execute algorithms (or whatever) imperfectly. If cognition is the result of perfect ideals, whence optical illusions? Whence our species-wide inability to estimate probabilities correctly?
> The only model that is there (automated mathematical deduction) needs more resources than available in this universe on pretty small problems that smart humans can solve
That’s an utterly incoherent argument.
First of all, it isn’t even an attempt at creating an artificial general intelligence.
At all. It’s a tool to make mathematicians’ lives easier. It’s like taking a pile-driver, checking its handling on highways, noting that it doesn’t fit under bridges, and saying that driving is impossible.
Second of all, your claim that a whole field needs more resources than the universe is surely based on a specific implementation. I’ve personally improved simple algorithms so that under typical cases they run three hundred thousand times quicker, and this particular field looks like it has a lot more room for optimization.
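The kind of speedup described here (same mathematical content, radically cheaper implementation) is easy to sketch; the Fibonacci example below is mine, not the commenter’s, but it shows how a one-line change can turn exponential work into linear work:

```python
from functools import lru_cache

def fib_naive(n):
    # Direct recursion: the call tree grows roughly as 1.6**n,
    # so this becomes hopeless long before n reaches 90.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Identical definition, but each distinct n is computed only once,
    # so the total work is linear in n instead of exponential.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Same answers, wildly different costs at scale.
print(fib_naive(20), fib_memo(20))
print(fib_memo(90))  # instant; the naive version would take ages
```

Nothing about the *problem* changed; only the implementation did, which is exactly why resource bounds measured against one implementation say little about a field as a whole.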
Third, look at Watson and tell me that computers can’t do pretty well on ‘fuzzy’ deductions. And that isn’t even an attempt towards general intelligence either.
The thing is, we don’t know what ingredients we’re missing. We don’t know how hard the problem is. That in itself makes it, at a minimum, quite hard. It could be one Einstein-level crucial insight away, or it could be a hundred, or ten thousand.
Fourth… well, let’s look at this next quote.
> There is also absolutely no model how inanimate matter can create things like intelligence, intuition, self-awareness, etc. None at all. In fact, current physical models say this is not possible as these are non-physical things.
This is incoherent. Models of physics talk about the physical things. On their own, without introducing bridges to abstractions, they cannot begin to directly address those intangibles. Another way of putting it is, they certainly can’t say that they don’t exist or can’t be built, without a clear definition of what it is that you’re proposing can’t be built.
What is so special about being animate, anyway? If you’re resorting to physical models, then there’s no term in them for being animate. Being animate is itself not a fundamental physical property.
If you’re saying that by the time you’ve built something that’s got intelligence, intuition, or self-awareness, then it must already be in some sense alive (as a way of indicating how hard this is going to be)… all right. Again, this is at least very hard, but it’s clearly not impossible.
The hardest it could possibly be, and the way we could crack AGI without actually solving the above problem, is to take actual brains, figure out the functional specification each part fulfills (including deviations from said specifications: failures, susceptibility to outside influences, randomness, etc), and reimplement the same specifications in computers. Given our progress on that so far, it would be really surprising if it took as little as a decade, but it would also be somewhat surprising if it took more than thirty years. Kurzweil did not provide an error estimate on his 15 year figure (and 2029 may not be a round number but 15 years IS, silly), but it’s not insane to think that that’s a reasonable median estimate.
You’re making the assumption that intelligence is entirely a function of the brain. Given that the brain is influenced by a heck of a lot of outside factors, bodily, socially, environmentally … that’s a pretty big assumption.
‘Feral’ kids (i.e. the few confirmed cases of kids raised by animals) don’t display what would generally be recognised as human intelligence so there is clearly a socially mediated developmental factor that cannot be assumed to be emulatable via the interface between people and machines (or machines and machines).
You are also assuming that all the functions of the analog brain are mappable onto a digital computer.
And that’s only sticking with the physicalist arguments that insist on mind-body non-dualism – which are also based on a huge set of assumptions.
If someone with a slight knowledge of electronics and no knowledge of radio waves were to pick up a mobile phone, have a conversation with someone over it, then pull the phone apart to see what was making it talk, he might see and understand the speaker, and so know that the sound waves are made within the phone, and assume that the intelligence behind the voice must also be in the phone. If he damaged a couple of components he might get only crackles and screeches, so he would assume he had destroyed something necessary for producing intelligence. He would probably look at the chips with incomprehension and assume that the intelligence was ‘emergent’ from the complexity of the electronics. He would then deduce that if he could figure out the functional specs of each chip and reimplement them on his breadboard rig he would be well on the way to creating artificial intelligence.
That’s the same quality of logic you are bringing to bear on the problem.
> And that’s only sticking with the physicalist arguments that insist on mind-body non-dualism – which are also based on a huge set of assumptions.
One, really: that physics is anything vaguely like correct. And it acts a WHOLE LOT like it’s reasonably accurate. That’s the thing. We’ve looked inside the brain and seen that it’s made of stuff. We know how stuff works.
Your radio example is utterly broken, because the people doing this investigation are not in that position of ignorance. In fact, it’s quite backward. It’s like we’ve studied the chip design and all the hardware on a phone, and can’t understand how the author of that great app managed to get it to work so well. Do we hypothesize that it isn’t in fact software, that all of that hardware is in fact NOT how the app works? No.
Alternately, even if our physics is completely wrong: the machinery of the brain acts in every way like it’s actually carrying out our thoughts. When you get brain damage, you don’t get diminished reception but potentially drastic personality changes; chemicals added to the brain change your thoughts; individual thoughts can be detected or induced by electrodes in the brain. That brain is actually doing the thing it looks like it’s doing.
And why would one invoke dualism anyway? What determines the content of this external process? If it’s determined, then why is it necessary? If it’s not determined, then what value does it actually provide?
> You’re making the assumption that intelligence is entirely a function of the brain. Given that the brain is influenced by a heck of a lot of outside factors, bodily, socially, environmentally … that’s a pretty big assumption.
Umm… what the fuck? Why do you assume that this computer wouldn’t be hooked up to input/output systems that provide all those things?
THAT is the quality of YOUR arguments.
Being accurate for what it is does not make it the full answer to everything.
Where, in physics, do you find morality, meaning, etc?
And we know for starters that quantum physics and relativity are theoretically incompatible even though both seem to work experimentally.
So there is clearly something very important missing from our current physical model of the universe.
You remind me of the 19th Century physicists who were certain that once black-body radiation and the photoelectric effect were explained the entire universe would be calculable.
While your counterexample might suffice for my mobile phone metaphor it wouldn’t even work for something as complex as a PC with wireless internet. You could fiddle with components and get drastically different output that was not at all consistent with merely impaired reception.
And individual thoughts most certainly cannot be detected with electrodes (or fMRI). Only electrical activity.
And how, exactly, do you propose to emulate the I/O from those systems – or even work out what systems and I/O are necessary – simply by examining the functionality of brain components?
> Where, in physics, do you find morality, meaning, etc?
In its ability to produce systems which fulfill the functional requirements of systems (e.g. people) which can have those things.
> quantum physics and relativity are theoretically incompatible even though both seem to work experimentally.
Whoooa there. Really, slow down. Note that I didn’t say actually correct or complete. I said acts like it’s vaguely correct. I said that specifically to head off an argument that our physical theories are incomplete. Just how we reconcile relativity with QM is irrelevant to brain architecture, since brain architecture is buried deeply in the non-relativistic limit.
If I remind you of 19th century physicists, that says more about you than it does about me.
>While your counterexample might suffice for my mobile phone metaphor it wouldn’t even work for something as complex as a PC with wireless internet
Okay, but now it’s legitimately not clear, even based on the hardware, whether the computation is being done inside or outside the system. If our brains had bizarre physics not seen elsewhere and seemed to do computations that the hardware couldn’t support, we’d look for answers outside of them. Which, you know, is where the whole idea of external souls came from, since back then no one had the slightest clue that the soul might be made of meat.
> And individual thoughts most certainly cannot be detected with electrodes (or fMRI). Only electrical activity.
I didn’t say arbitrary thoughts. But if you hook in electrodes to individual neuron clusters, you can pick up on very specific thoughts.
http://www.wired.com/business/2007/03/the_pamela_ande/
> And how, exactly, do you propose to emulate the I/O from those systems – or even work out what systems and I/O are necessary – simply by examining the functionality of brain components?
umm… if you can do this to the brain, you can do it for the eyes, for the ears, for the skin, for the tongue, etc. Why do you even have to ask?
Again, a circular argument resting entirely on your faith-based adherence to physicalism. If morality and meaning are not a function of brain physics (or chemistry) then humans are not entirely the product of physical systems.
Pure speculation.
As far as we can tell, Newtonian physics is entirely deterministic. If the brain runs according to Newtonian physics then free will is impossible. Most people believe the contrary. If four thousand years of philosophy have been unable to resolve the argument of determinism vs free will do you really think you can validly eliminate one or the other from your model?
Our brains *do* have bizarre functions not apparent elsewhere.
Intelligence for example.
Isn’t that what this whole conversation is about?
In the example you link to they detected electrical activity in a particular neuron associated with “Pamela Andersonish” stimuli. That is in no way a thought.
A photoelectric cell can be set up to respond only to a specific wavelength of light. That does not mean it is thinking ‘red’.
Would facial + voice recognition software that could recognise PA’s face or the mention of her name be thinking about Pamela Anderson?
So now you have to functionally emulate other organs too.
But you will also need to functionally emulate the various gland systems that influence the brain.
And the sort of feedback you get from self directed movement.
And the social environment that allows intelligence to develop.
How long did you say your project was going to take?
And most importantly, there could be input into the brain that simply is not detectible with our current physics. You are making the 19th Century assumption that there is not a huge mystery still lurking in the gaps.
You’re skeptical? Okay. HOW skeptical? Put odds on it. How likely do you think that our intelligence relies on something not in the brain? 5%? 50%? 95%? 99.9%?
I accept evidence. The evidence is overwhelming in favor of physicalism.
>Pure speculation.
Brains are manifestly not moving at significant fractions of the speed of light, and the only parts of them that are (core electrons) face only minor corrections, with no effect on the dynamics of the brain that isn’t screened at a higher level of abstraction.
If there’s an approximation that our models of physics are missing in any attempt to model the brains, the non-relativistic approximation is not it.
> So now you have to functionally emulate other organs too.
> But you will also need to functionally emulate the various gland systems that influence the brain.
> And the sort of feedback you get from self directed movement.
> And the social environment that allows intelligence to develop.
2 was what I said we’d already need to do. The part of 3 not already in the brain has already been solved with advances in prosthetics.
4… what? O.O We’d need to *emulate* a social environment?
See, this is why I have been dismissive of you (no, you haven’t been imagining it). You’re clever at thinking of arguments, but you don’t seem to be censoring the stupid ones. Just think about that problem for four seconds.
FOUR FUCKING SECONDS.
I’m out of here.
Being a skeptic means not pretending you can call odds when you don’t have access to all the variables.
You should try suspending your religious devotion to physicalism for a while and try it.
That’s a faith based statement if ever I heard one.
Your evidence for physicalism clearly does not extend beyond physics.
It’s no different to a Christian fundamentalist who will accept no evidence beyond what’s in the Bible.
How does your physicalism explain the appearance of free will?
Do you believe a machine with no capacity for self-direction could truly be intelligent?
You have no evidence that quantum effects in the brain are ‘screened’ at a higher level (whatever the hell that’s supposed to mean).
If you had read any speculative fiction involving time travel, or knew anything about chaos theory, you would know of cascade effects whereby tiny inputs are amplified at higher levels of interaction.
Why would something as complex and non-linear as the brain be immune to the effects described in chaos theory?
If the brain was simple enough for us to understand we would be too simple to understand it.
Actually I’ve been thinking about that problem for about forty years.
A robust epistemology does not simply dismiss arguments because narrow-minded people who approach questions armed with the prejudice of faith deem them stupid. If it did, we would still be living on a flat earth in a geocentric universe.
As I pointed out in my feral child example, brains without social interaction do not seem to produce minds. Not as we know them anyway. It would appear to be a necessary component of at least some of the software.
> Being a skeptic means not pretending you can call odds when you don’t have access to all the variables.
You can’t call the odds precisely, no. But if you think one variable is enough to keep me from telling the difference between 1:1000000 odds and 1000000:1 odds as a several-order-of-magnitude estimate, that’s silly.
> How does your physicalism explain the appearance of free will?
Do you believe a machine with no capacity for self-direction could truly be intelligent?
What about physicalism is it that is supposed to be incompatible with free will — let ALONE the ‘appearance’ of free will? Let’s go whole-hog and imagine that the universe is deterministic (even indexically), so there is Fate. ‘OMG, we have no free will!’ one might complain. ‘Fate has determined everything beforehand!’ But… which part of Fate determined what you do? If you zoom in, you find that a lot of the part of Fate which was determining your actions was in fact, oh, the mechanics of your mind (whether in your brain or elsewhere, let’s factor these arguments a little, OK?). Oh horrors, you’re a slave to your own mind!
Facehoof.
You ARE your own mind. How terrible to be a slave to yourself.
As for the second question, define ‘self-direction’ and ‘intelligent’ and the question answers itself. Those are very fuzzy terms.
> You have no evidence that quantum effects in the brain are ‘screened’ at a higher level (whatever the hell that’s supposed to mean).
First, that isn’t what I said at all. I said that the relativistic correction isn’t important.
Second, logical screening is very well defined. If there’s a partial causal chain A->B->C where arrow means ‘influences but does not determine on its own’, then B screens A from C because knowing everything about B would make A no longer help predict C. If A had an arrow directly to C, then knowing everything about B would still leave A some influence.
Alternately, it’s the derivative in A of C, holding B fixed. In this case, A is relativity, B is the empirical values for chemical models, and C is what happens chemically speaking. dC/dA|B is incredibly small, many orders of magnitude smaller than noise.
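To make the screening idea concrete, here is a toy numerical sketch of a causal chain A → B → C (all probabilities invented purely for illustration): once B is known, A provides no further information about C.

```python
import itertools

# Toy causal chain A -> B -> C with arbitrary illustrative probabilities.
pA = {0: 0.5, 1: 0.5}
pB_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
pC_given_B = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}

# Joint distribution P(A, B, C) implied by the chain structure.
joint = {(a, b, c): pA[a] * pB_given_A[a][b] * pC_given_B[b][c]
         for a, b, c in itertools.product([0, 1], repeat=3)}

def p_c_given(a, b):
    """P(C=1 | A=a, B=b), computed from the joint distribution."""
    num = joint[(a, b, 1)]
    den = joint[(a, b, 0)] + joint[(a, b, 1)]
    return num / den

# B screens A from C: holding B fixed, varying A changes nothing about C.
assert abs(p_c_given(0, 0) - p_c_given(1, 0)) < 1e-12
assert abs(p_c_given(0, 1) - p_c_given(1, 1)) < 1e-12
```

If A also had a direct arrow to C, the two conditional probabilities in each assertion would differ; the equality is exactly what “screened” means here.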
Third, we do – extremely strong evidence. The brain is a messy environment which is constantly undergoing decoherence events. Decoherence events are what make our classical-seeming world out of the quantum base it lies on. Like, in quantum mechanics, if you have a particle in some state, and you cut that state into two parts and let both of them propagate, and then combine them together again, you’ll find, in general, that they can interfere with each other. Decoherence is a process which splits the states up such that they don’t. The theory and practice on this match exceedingly well. So, at a very very small level, anything larger than either a small molecule, or a medium molecule that carefully guards its environment to preserve quantum coherence (e.g. porphyrins), is effectively classical.
Fourth, so what if quantum mechanics is relevant? Do you mean that would make the brain harder to compute? Well, yes, it would – though because of the above paragraph, we would have a much easier time than meat. Do you mean that makes the brain not physical somehow? Well, physical reality is quantum. This wouldn’t make it any less physical.
> Actually I’ve been thinking about that problem for about forty years.
Well, that’s nice, but in the four seconds before posting you didn’t realize that we already have a society and we can have this putative person interact with it as a member. We don’t need to go out and build one.
Whaat?
When you have no clue what order of magnitude that variable might have nor where it sits within the overall algorithm?
Who is being silly here?
An oft repeated sophistry in the centuries long argument between determinists and believers in free will.
If your own mind is entirely determined by extrinsic causes you are as completely a slave as it is possible to be – except of course there would be no slave master other than the whole universe (which the religious would equate to ‘God’).
Which nonetheless you think you understand precisely enough to be able to reproduce in a machine.
If decoherence was universal the sun wouldn’t shine and atomic bombs would not go off.
The brain – like most biological systems – suppresses destructive decoherence thanks to the benefits of millions of years of evolution. If one of our ancestors stumbled across a form of constructive decoherence (as free will may be – though the results on that aren’t yet in) it would not be suppressed but more probably enhanced by selection pressure.
I raised the point merely to refute your overweening confidence that the mind is entirely explicable via Newtonian mechanics. Personally I am very strongly inclined to reject the notion that quantum randomness plays any part in cognition or free will, but I also know that Newtonian mechanics simply cannot explain the effects we can see in minds through either observation or introspection. There is something else going on there, albeit probably not QM (though I think the observations that led to QM – particularly the Copenhagen interpretation – are probably tied in closely with some factor of our own minds that we are not yet close to understanding).
As you can see from my subsequent comment I initially misunderstood what you were trying to say in point #4. I corrected myself later and more than adequately addressed your objection.
A slight correction to my last comment.
A skeptic will not call odds if he does not believe he has access to all the systemic variables.
You might call odds on a roulette outcome without knowing everything about the timing of the ball release, velocity of the wheel, elasticity of the surfaces etc. But if you’re not sure what that little button hidden under the edge of the table by the croupier does you will not.
In the case of mind there are lots of little buttons without an obvious purpose and lots of effects without obvious buttons. So it is not safe to simply assume they do nothing.
Apologies. I realise upon rereading that I missed the point of your point #4.
Yes we do need to *emulate* social environment. Unless you believe your project will not only produce a machine with brain functionality but also one that will interact with carers via touch and smell, and crawl, toddle and eventually walk around its environment making self-directed inquiries of other beings it can see as similar to itself (and who treat it as something similar to them).
As Ceausescu demonstrated in his orphanages, denying humans that kind of input is likely to prevent the development of their minds to anything recognisable as intelligence (or sanity).
Thanks for the argument drachefly.
At least one of us thoroughly enjoyed it.
You even provided part of the inspiration for a haiku that arose from my perplexity as to why neo-Kraepelinian psychiatrists and some proponents of intelligent design insist on the identity between mind and body/brain, yet insist equally vehemently on the separation between mind/brain/body and the rest of the universe.
To unpack the reductionist ‘logic’ at work in this argument.
Here drachefly goes way beyond the common fallacy of ‘correlation = causation’. He is not saying that thoughts of Pamela Anderson cause specific electro-chemical activity in the brain, nor is he saying that specific electro-chemical activity in the brain causes thoughts of Pamela Anderson.
He says electro-chemical activity in the brain is thoughts of Pamela Anderson.
Still, that kind of messy thinking does seem to be a feature of intelligence. A computer would not ‘make’ such a mistake. So maybe he’s onto something. Or on something.
Imagine all the possible causal systems that cognition is made of. Where in this system do you propose that something not implemented by quantum field theory is occurring? What is it responsible for? How would we behave differently if it didn’t fulfill that responsibility?
The typical dualist answer is ‘the other thing is not involved in the mechanical causal chain, but without that other thing, no cognition occurs’. In other words, this would suggest the possibility of P-zombies. I have an argument prepared for this.
Alternately, and what seems more likely in this case, is that there is some computation you think that we perform that you think is incomputable. Or at least, not computable with that little matter or at that speed. If so, I would like to have some sort of clue what that’s supposed to be. Automated proof generation was mentioned as an impractical route to AI (it is), but we don’t do that ourselves, so what is it that has you convinced that we do that mere matter wouldn’t let us do – something that has outside-observable effects? You have repeatedly referred to signs that it’s not all happening in the brain, as if these effects were well known. Well, I don’t know of them. That’s why I hold my position.
Yes I do.
The ‘computation’ of a decision made according to free will.
I also think that for all of Douglas Hofstadter’s theories about recursive self-reference and Antonio Damasio’s work on neuro-cognitive siloing there is still no convincing framework that might suggest how self-awareness could be produced mechanically. There’s a heck of a lot of recursively self-referent and object oriented code out there but I’m yet to hear a computer say “I think therefore I am”.
You are strawmanning me again.
What I have repeatedly said is that there is no evidence that all of that stuff is happening in the brain. A very different claim.
I am aware of that; let me clarify.
In the case that everything in cognition is computable, we agree. Focusing on the case where we disagree, please outline the disagreement. Can you do that? What is it that requires something that can’t be done in matter. Is it hypercomputation? If we had hypercomputational oracles in ourselves, I don’t think we’d find infinity so confusing.
As for Hofstadter (sister comment to the above), well, that’s a funny thing. He thought we’d need to solve general AI to play chess superhumanly. We didn’t. We just solved the problem we were trying to solve.
Considering that animals of greater program-complexity than every machine ever built to date (since the neuronal configurations are effectively programs) don’t seem to have much in the way of reflective ability… well, I have no idea why one would be all THAT surprised that we haven’t gone out and achieved that ourselves with much less hardware, despite hardly trying.
If by ‘computable’ you mean ‘theoretically capable of being performed by computers’ then I regret we are still in disagreement. IMHO the jury is not just out on that, but way over the horizon and probably not even asking the right questions yet.
As Celos pointed out, human decision making does not usually seem to involve algorithmic crunching but rather the kind of ‘fuzzy logic’, ‘instinct’ or ‘intuition’ that relies on methods we do not yet understand and have been very poor at simulating thus far. This applies not only to humans but even to relatively simple animals who get by on no more than reptilian or piscean brains so it would require a very modest definition of ‘cognition’ to assert with confidence that it is all computable or ever will be.
I’m not sure what you find confusing about infinity. Working with transfinites is not a particularly difficult skill to learn, though it does seem to be rare in that few people take the trouble to learn it. If you mean it is difficult to envisage infinity (e.g. to picture an infinite number of monkeys in the same way we can picture a small group of them) then that goes without saying. But that would require not just ‘hypercomputing’ but ‘trans-ultracomputing’ of a kind I cannot even imagine (perhaps due to the difficulty I have in envisaging infinity).
And there is no reason to believe that free will, consciousness, self-awareness, sense of worth (qualia) or any number of other mental functions and qualities are soluble with either computation or hypercomputation.
Try to understand that brain-as-meat-computer is nothing more than a metaphor. Just as earlier technologists used brain-as-telephone-switchboard or brain-as-engine-driver’s-compartment.
Do you think a sufficiently sophisticated and complex telephone system could develop intelligence or practice cognition?
Why would you imagine that seeing neurons as inefficient computational logic switches made out of meat would be any more fruitful?
How many switches do you need for intelligence to emerge magically from them?
Wow, Hofstadter is not an infallible futurist! Who’d have thought it?
BTW, I’m still waiting for a computer program that can provide a decent challenge to a moderately experienced Go player. I don’t even have a dan rating but I still have to handicap myself two stones or more to get a worthwhile game out of them. And all possible games of Go are computable, albeit with a heck of a lot of branching.
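The scale of that Go branching is easy to put rough numbers on. A sketch, using the commonly quoted ballpark figures for average branching factor and game length (not exact values):

```python
import math

# Ballpark average branching factor and game length (widely quoted estimates).
chess_branching, chess_plies = 35, 80
go_branching, go_plies = 250, 150

def tree_magnitude(branching, plies):
    """Base-10 exponent of the naive game-tree size b**d."""
    return plies * math.log10(branching)

chess_mag = tree_magnitude(chess_branching, chess_plies)  # ~123
go_mag = tree_magnitude(go_branching, go_plies)           # ~360

# The naive Go tree is over 200 orders of magnitude bigger than chess's.
assert go_mag - chess_mag > 200
```

Both are computable in principle; the point is that exhaustive search is hopeless for either, and wildly more so for Go.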
As a matter of fact, even if you functionally emulate the physical specifications of each isolated component of the computer I am typing this text into, then join them all up, you would not get something capable of wordprocessing or web browsing. There is a further ingredient called ‘software’.
What you would get with your proposed solution would be the emulation of the brain of a freshly dead corpse.
The nice thing about specifications is that they exist at multiple levels of abstraction within the same system. You don’t just have the specification of the cell, you also have the specification of whole brain systems. So, fulfilling the specifications involves making something that actually works the same way.
DUH.
Gawd.
You should study, some time, the limits of what can actually be specified exactly. They are rather narrow, and the effort rises at least quadratically with the complexity of the object. That is one of the reasons why complex software projects take so long and fail so often.
In particular, a more abstract specification is either less exact (so that e.g. specifying “an elephant” can well produce a pink fluffy one), or it refers to other exact specifications that go into the overall specification complexity.
At this time, not even a single cell can be specified to a degree that an exact (software) replica can be made from that specification. All such simulations are hugely inaccurate and can deliver completely wrong results. And you want to specify a whole brain? Preposterous.
Each cell is incredibly complex. I would make a large bet that much of that complexity is also below the layer of abstraction that thought lies in. Like, if at the cell-as-the-smallest-abstraction layer we discover that the contract that cells must fulfil can be fulfilled by something much simpler than a cell, then we can use that instead.
Like, look at the mind-boggling complexity that goes into propagating an ion pulse down a nerve.
We have MUCH simpler ways of doing that job, even with the same rate controls.
Only if you even understand what that job is.
Propagation of a nerve impulse does not only carry a bit of information. As you hint, the sodium/potassium pump is also a timer that helps to co-ordinate impulses through the nervous system and prevent the sort of overloads that result in seizures. And the leap across the synaptic gap is a vital component of how we ‘learn’ things, both at the cognitive and physical level. It is through usage (or not) that dendrites, axons, transmitter and receptor sites develop or atrophy. There are probably several other vital ‘side-effects’ in how neurons work that we don’t yet understand.
> we don’t yet understand.
Yay! We’re on the same page, here. We don’t understand that… yet. I did say I expected it would take quite a few years to unravel this.
I think we have always been in agreement about our lack of understanding.
Where we disagree is that you seem to be insisting not only that understanding must inevitably come about, but that it will do so according to a particular timeframe and can be bypassed by a particular method.
To clarify what I mean by ‘particular timeframe’ I offer this metaphor.
I have no idea what kind of person Jesus might have been nor even if he ever actually existed and I don’t really believe anyone else does either.
If someone was to tell me “Jesus will return in 2039” and I calculated that was fifteen years away I would reject the claim as evidence free.
If I later had my calculation error pointed out, I would NOT say, “Oh, twenty-five years before the return of Jesus. Yes that sounds fair enough”.
Sorry, but you really have no clue what you are talking about on the algorithmic side. These are _exponential_ algorithms and there is no mathematical way to make them better. Implementation quality does not matter.
From the rest, I deduce you practice physicalism as religion, thereby abusing it. Your reasoning is circular, as is typical for religion. In science, this is deemed unacceptable.
ugh. I was talking about improvements in computational complexity – in other words, finding algorithms that achieve the same ends and work in the vast majority of cases, but are not exponential.
As for the argument for physicalism, I hadn’t actually provided it because it’s such a freaking huge thing and what part of the argument is necessary really depends on where you-all are starting (and since there’s more than one that could be multiple places) and I hadn’t nailed that down and, most importantly, Cabrogal was making so little apparent effort to understand me, being so stingy with her(?) mental resources, and generally logically rude, that it would be like hiking up a molasses river.
So yes, I can see why it would appear circular. It’s not.
If the problem is exponential (and this one is), no such improvements can exist. Have a look at algorithmic complexity theory some time. Sorry, but this really is first year CS stuff.
As to physicalism, if you use it as religion but refuse to admit so, that makes it no less obvious to smart people. Or briefly: Sorry, I don’t buy it.
A lot of exponential or worse problems are very very similar to much easier problems.
Like, Traveling salesman. Exponential time to get the very best solution.
But where in cognition is there a need for an exponential-time problem be solved exactly? WHERE? Where will it matter that you got the BEST traveling salesman solution instead of a good solution? Good solutions aren’t exponential, not even close.
That’s the kind of improvement I’m talking about. And THAT should also be first-year CS stuff. That you didn’t choose to apply it is baffling.
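The exact-vs-good-enough distinction can be shown concretely with traveling salesman itself: brute force over all tours is factorial time, while a greedy nearest-neighbour pass is quadratic and typically lands near the optimum. A toy sketch (city coordinates invented for illustration):

```python
import itertools
import math

# Hypothetical city coordinates, purely for illustration.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 7)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact answer: try every permutation -- O(n!) and hopeless for large n.
best = min(itertools.permutations(range(len(cities))), key=tour_length)

# Greedy nearest-neighbour heuristic -- O(n^2), no optimality guarantee.
unvisited = set(range(1, len(cities)))
tour = [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
    tour.append(nxt)
    unvisited.remove(nxt)

# The cheap heuristic lands close to the exact optimum on this instance.
assert tour_length(tour) <= 1.25 * tour_length(best)
```

The heuristic gives up the guarantee of the BEST tour, and in exchange the exponential blow-up disappears; that trade is exactly what the optimization framing allows.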
And thereby you miss the point completely. Traveling salesman is an optimization problem, i.e. you get a solution with a quality measure. If you relax the quality requirements, quite a few heuristics work. Even doing a solution purely at random produces a solution. Automated deduction is fundamentally different. You either get a solution or not. There is no “quality” parameter in there you can tweak. But my guess is you actually understand that and are trying to confuse the issue. Not nice.
Since when is exhaustive proof search an ingredient of general-purpose cognition?
It’s not.
But I would hardly call something intelligent if it was incapable of making reasonable guesstimates in the face of complex problems in which not all variables are known.
Oho! Lo and behold, by talking about reasonable guesstimates you’ve changed the problem definition so it’s not that exponential problem anymore.
> But my guess is you actually understand that and are trying to confuse the issue. Not nice.
Back at ya.
Celos diverged somewhat from his original point in response to a question from me.
But my understanding is that the point he was originally making is that it is precisely the human capacity to guesstimate that prevents the exponential proliferation of decision branching in all but the simplest of problems.
I’d like to withdraw that last line of snark, since A) you didn’t say that, and B) I’d like to see what I can do about raising the level of discourse here even if it were true, and C) I could see ways it might not be even if Celos had said it. So…
Indeed. I objected to it actually.
But my objection was probably more … err … objectionable to you than Celos’ original comment.
You see, to me you have swallowed what Greg Egan calls a circular self-referential meme. As long as it is internally consistent the only way out is for something to break through that is utterly inconsistent with it. Unfortunately such memes have evolved for survival and have their own way to defend against such incursions and those defences usually generate the sort of cognitive dissonance that cripples the host’s critical intelligence in certain areas.
I once worked with a physics and chemistry Master who was also a committed charismatic Christian. We had an argument that went on for months during which he successfully batted away all the objections I could raise to Young Earth Creationism without needing to abandon any of the things he’d learned in the course of his studies. Mind you, a lot of his explanations relied on the existence of a very devious God. This was my first experience of that sort of thing and needless to say I was dumbfounded. I have since encountered many other people like that and even a family member of mine has fallen for a cult (developed under the CIA MKULTRA program!) that is equally impregnable to external reasoning.
So I’m afraid your prognosis is not good.
That’s not fair Celos.
You forget that drachefly is essentially professing his faith. In order to maintain it in the face of all the contrary evidence he must be experiencing a lot of cognitive dissonance. That would mean that in some areas he is actually incapable of understanding what may seem obvious to those who do not share his faith. It does not mean he is not trying. Very trying.
WHAT evidence? I’ve asked for it over and over and I get nothing. Name one process that would be useful for general cognition that is actually computationally intractable.
Proof search is not such a process, on account of its not being useful for general cognition.
Regrettably, your psychological projection of your personal failures onto me is also not such a process, because if it were one, it wouldn’t actually be happening.
Umm, swap out ‘useful’ and put in ‘necessary’ in that. Of course being able to solve certain computationally intractable problems would be USEFUL.
Precisely.
I am speaking more broadly of your faith in physicalism BTW, and your apparent inability to acknowledge the many phenomena encountered in day to day life that are not explicable with current physics and which show no sign they ever will be.
> the many phenomena encountered in day to day life that are not explicable with current physics and which show no sign they ever will be.
NAME THREE
I have already named more than three but for your benefit I will repeat three of the biggies.
1. The feeling of subjectivity. That there is a ‘self’ in there looking out through the senses.
2. The appearance of free will.
3. A sense of meaning or purpose to life.
But as these are subjective and you may be inclined to dismiss them as illusory, here are three more from just outside the realms of physics itself.
1. The time at which a particular radioactive particle will decay.
2. What may ‘exist’ on the other side of a singularity.
3. The way the observer herself seems to bring certain quantum events into being. (In ‘A Brief History of Time‘ Hawking suggests that such events do not decohere at all and that we are retrospectively bringing the entire universe into being via our observations, i.e. matter arises from mind – not vice versa.)
Hmm. You may have a point there. “Malicious meme infection” indeed. And like any successful parasite trained by evolution, that meme defends itself by inhibiting some cognitive functions when applied to some objects if that threatens its existence. Makes sense.
Ok, I retract the “not nice” and the sentence before that. Sorry about it, drachefly; what I said was uncalled for and unfair.
… stingy with his mental resources actually. But hey, I didn’t want to completely outclass you. It would have been rather humiliating no?
Your mistake about my gender was understandable.
The suffix ‘gal’ actually means ‘man’ or ‘men’ in the Dharug language of my forebears.
The suffix for woman is ‘galyon’.
yeah yeah, smack talk. You’re the one who seriously raised the cultural objection as if we couldn’t use the one we already have.
No, I said that we can’t use the cultural interactions we currently have.
Unless your project will result in something that is culturally equivalent to a human baby.
I don’t know why you accuse me of failing to bring mental capacity to bear on understanding arguments.
I did say worst case. In the worst case, yes, our first AI is a human baby. Why do you think that this is not what I had in mind from the start?
Oh, right, because you’ve decided I’m a total idiot.
So your project is now not merely the functional emulation of a brain but the functional and physical emulation of an entire body. And you still think it possible within 25 years?!! What was the stardate at which Data was manufactured?
Of course there are ways of making a human baby and some of them involve artificial intervention, but to call the result ‘artificial intelligence’ is stretching things a bit.
No, not the entire human body. We would need to emulate the brain with some degree of abstraction that does minimal to no damage to cognition; this would include providing the effects of its immediate support systems, sensory input, and motor output. We don’t need to provide those in the same way, and we certainly don’t need to emulate internal details that are not required to fulfill the contract required for the abstractions of cognition. That’s what I’ve said all along.
And it’s far FAR simpler than actually simulating a whole human body.
So how will your ‘human baby’ receive the touch and attention that seems to be vital to the development of mind in real human babies if not via emulation?
How will she come to identify with others she sees as being like her?
Or will she remain isolated in a box like one of Ceausescu’s orphans?
If she is truly the functional equivalent of a human baby you are proposing a serious human rights abuse here. But I guess that human rights are not computable or definable via the laws of physics so they don’t exist in your universe.
@Cabrogal, who said:
>So how will your ‘human baby’ receive the touch and attention that seems to be vital to the development of mind in real human babies if not via emulation?
>
>How will she come to identify with others she sees as being like her?
>
>Or will she remain isolated in a box like one of Ceausescu’s orphans?
>
>If she is truly the functional equivalent of a human baby you are proposing a serious human rights abuse here. But I guess that human rights are not computable or definable via the laws of physics so they don’t exist in your universe.
The only real problem here is #2, and with enough love I think it can be overcome. On the others, fuck you.
So lets see where we are with this argument.
drachefly: All we need for AGI is functional emulation of the human brain.
cabrogal: No. We would also need functional emulation of many things that interact with it – including society.
drachefly: Ha! We already have a society. We don’t need to emulate it.
cabrogal: But we need to emulate how it would interact so that it would be accessible to a functional ‘brain’ that is not embodied in human form.
drachefly: Fuck you!
Hmm, a profound and subtle answer. I will have to meditate upon it.
I hope you or your collaborators on this project are such extraordinary human beings as to be able to love a machine so much that you identify with it and vice versa despite having virtually nothing in common with it (assuming none of your colleagues think like newborns here).
Actually, I retract that (I see you have responded to this, but have not yet read it).
That’s not the worst case, obviously – but it’s all you argued me down to. I mean that is an acceptable worst case.
I was hoping you might explain this more fully so I will start the ball rolling by putting forward my understanding of what you are hinting at here.
The brain is not a digital computer; it is an analog device. It does not ‘learn’ by simply altering internal bit-states. It grows new dendrites, axons, and neurotransmitter and neuroreceptor sites (possibly engrams too, though this remains only a theory).
The number of possible patterns of interconnection in the human brain is many orders of magnitude greater than the number of atoms in the universe. So to make a computer capable of emulating the human brain based on even the most extreme expansion of current technology you would need more bit-switches than there are atoms in the universe.
The only way forward using the method drachefly proposed would be if we developed computers that essentially already were human brains (i.e. cyborg computing) in which case it would raise the question of whether such an intelligence could really be said to be ‘artificial’; or if some other radical new technology such as quantum computing proved viable.
O.k., let me elaborate a bit: in automated deduction you end up with a search tree that has a width (the number of parallel possibilities you try) and a depth (the length of any chain of reasoning in there). The problem is that you do not know which paths you can safely terminate as they will not lead to a result. That would require what human intelligence and intuition does, and there is no mathematical or technological equivalent.
This means that if you have a theorem that is true and, say, 100 steps removed from the axiom set, you have to come up with all possible proofs in this theory that are 100 steps long to find it. Automated deduction does very simple steps, so 100 is actually not that long. Now, while some branches of the search tree terminate by themselves, they get exponentially more for each level of depth you add. Say this starts with a small axiom set of 10 axioms (geometry, for example, has 5, set theory has 9 and you may need both or more). What you observe is that your state at level n is roughly 10^n in size. (These numbers are made up, but are not unrealistic, at least if you go to advanced mathematics.)
You now have two choices in searching this tree. Depth-first search fails, as there are infinite paths in there. They lead to nothing but cannot readily be recognized (incompleteness and computability play a role here). You also have to keep your current path completely in memory in order to backtrack if you find it actually stops. That is clearly infeasible in a finite universe.
Then there is breadth-first search, which does not have to keep past states, only the full current level, and is guaranteed to find the proof at level n if there is one at level n. But for the numbers above, even if you manage to keep one element of the state in one elementary particle, the end of the line in this universe is somewhere at 10^80. That assumes you need nothing for the mechanism itself, and ignores whether you can get more than one bit into one elementary particle. (Well, maybe you can. You need to hold it in place somehow. Either way, it seems to be a good upper boundary of the maximum amount of bits representable in this universe.) So breadth-first search will fail, because it cannot get to the required depth in this universe.
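The back-of-the-envelope numbers above are easy to check in a few lines (a sketch of mine, assuming the branching factor of ~10 and the ~10^80-particle budget from the comment): the breadth-first frontier outgrows the observable universe well before reaching a 100-step proof.

```python
# Sanity check: at branching factor 10, the breadth-first frontier at
# depth n holds roughly 10**n states. Find the depth at which that
# frontier alone exceeds the ~10**80 elementary particles available.
BRANCHING = 10
UNIVERSE_PARTICLES = 10 ** 80

def frontier_size(depth, branching=BRANCHING):
    """Number of states breadth-first search must hold at the given depth."""
    return branching ** depth

depth = 0
while frontier_size(depth) <= UNIVERSE_PARTICLES:
    depth += 1

print(depth)  # 81 -- nowhere near the 100 steps the proof needs
```

So even with one state per elementary particle, the search runs out of universe at depth 81, short of the hypothetical 100-step theorem.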
But what humans do is called A*-search, which basically says, make an intelligent decision at each branch. This cuts the depth of the tree to a degree that humans can find things that machines cannot and are unlikely to ever be able. But note that mathematicians often do their PhD by proving or disproving a single conjecture and that usually takes a few years and some mathematicians spend a decade or two on a single problem. A few of them are even successful.
Now the thing is, nobody has even a theory how the “magic” in A*-search could be modeled except for exceedingly simple things. This is why automated deduction will not produce human-like intelligence in this very specific field. Of course, reasoning about reality is fuzzy as you do not start with a clear axiom set. Automated deduction completely fails in that setting.
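To make “make an intelligent decision at each branch” concrete, here is a minimal A* sketch of my own (a toy grid world rather than automated deduction, and the Manhattan-distance heuristic is my assumption): the heuristic steers the search toward the goal, so it expands far fewer nodes than the same search run blind with h = 0.

```python
# Minimal A* search: a heuristic h() prioritizes promising branches
# instead of flooding every level the way breadth-first search does.
import heapq

def a_star(start, goal, neighbours, h):
    """Return (path cost, nodes expanded) for a shortest path to goal."""
    frontier = [(h(start), 0, start)]  # (f, -g, node); ties prefer deeper nodes
    best_cost = {start: 0}
    expansions = 0
    while frontier:
        _, neg_g, node = heapq.heappop(frontier)
        cost = -neg_g
        if node == goal:
            return cost, expansions
        if cost > best_cost.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was found later
        expansions += 1
        for nxt in neighbours(node):
            new_cost = cost + 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + h(nxt), -new_cost, nxt))
    return None

# A 20x20 open grid; Manhattan distance is an admissible heuristic here.
N = 20
def neighbours(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < N and 0 <= y + dy < N]

goal = (N - 1, N - 1)
manhattan = lambda p: abs(goal[0] - p[0]) + abs(goal[1] - p[1])

cost, guided = a_star((0, 0), goal, neighbours, manhattan)
_, blind = a_star((0, 0), goal, neighbours, lambda p: 0)  # h=0: Dijkstra/BFS
print(cost, guided < blind)
```

On this grid the guided search walks almost straight to the goal while the blind search expands nearly every node, which is the whole point of the “intelligent decision at each branch”; the open problem is that nobody knows how to write such an h() for proof search.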
A setting where automated deduction does very well today is when somebody comes with an already done mathematical proof and wants a verification. They can then give the path from the axioms to the theorem to the verifier and it can formally verify each step. As it does not have to go through more than one path, the effort is small and a completely formalized and mechanically checked proof comes out of the process.
Sorry about the length and complexity. There is several years of advanced study in here, even if the idea itself is simple.
Thank you for going to the trouble of explaining.
I think you were very clear.
I can see a couple of potential ways through the problem though.
The most obvious would be the incremental refinement of automated A*-search. While we are a long way from automating the sort of intelligent decision making at branches that humans can do, we have been able to develop primitive automated simulacra of fuzzy decision making (at the cost of a lot of processing resources), and though it is not a given that they will ever approach human levels of sophistication, it is not certain that they would never reach something that would suffice – especially given the speeds at which machines operate. How long this might take is anyone’s guess though.
Another approach which feels more likely to bear fruit to me (gut instinct here) is the development of technologies such as quantum computing whereby a theoretically unlimited number of branches could be explored at the same time. Again, the potential problems with this approach are anyone’s guess at this stage and could be insurmountable but I am certainly not in any position to rule it out. And again, the time span would be impossible to estimate beyond saying “no time soon”.
Thanks, that was the first time I actually put that in writing.
The thing about A* search for this problem is that nobody has any idea how to refine it. The suspicion is that it requires human-equivalent intelligence, but there is no proof of that either. Something may or may not eventually be discovered, but “not anytime soon”. If something that works well is ever discovered here, that would be really great. It would however only invalidate the argument and not really tell us more about human intelligence. It would be an extremely valuable tool though.
Whether quantum computing will ever produce anything useful is anybody’s guess at this time. My bet is that this approach dies from too high practical effort. Another possibility is that some fundamental assumptions of quantum theory get invalidated. Maybe then it can fit with relativity 😉
BTW, thanks for the recommendation on “Distress”. (Seems this forum software has reached its indentation limit, so no possibility to reply where you recommended it…) I know Egan, but that one I was unsure about whether to buy or not. Just picked up a copy.
It was churlish of me to say that 50 Canadians Who Changed The World “isn’t very good”, ESPECIALLY since I haven’t read the whole book, only the chapter that discusses my work. I was genuinely delighted to be featured in it. There are a few minor errors about me in there, and I might wish that certain things had been emphasized that weren’t, but to point them out here would look like I was biting a hand that was feeding me (if one considers praise to be food).
I don’t think it was churlish at all; in fact the three reviews I read all said pretty much the same thing. I took your saying that as sincerely intended to advise people not to rush out & get it JUST because you were in it. Does that make sense?
That makes complete sense. Don’t worry, Maggie — I find this situation amusing, not upsetting.
Oh, good! 🙂
Though I’m sympathetic to many feminist concerns, I’ve never understood the whole “playing with Barbies makes girls hate their bodies and have unrealistic career aspirations” thing. I mean, surely the same could be said for boys who play with G.I. Joes, the big muscles and all?
As a boy, I played with both G.I. Joes and Barbies, and have yet to get breast implants or become a ninja assassin– yet.
In the article about the Birmingham Asian spa, the vice cop made this ridiculous statement:
Such “spas,” he said, are a nuisance. “Anything that involves a lot of currency always brings other crimes, like robbery,” Sellers said. “Either way, it’s a blight on the community.”
According to that idiotic logic, we need to ban convenience stores, liquor stores, restaurants, banks or any business establishment that “involves a lot of currency.”
I patronized various Asian spas in my area of the southeastern U.S. for years (before cops finally shut them down). I was never cheated or had anything stolen from me, even though I would routinely leave my wallet containing cash and a credit card in the room while I went to get a table shower.
However, once one of the spas was robbed at gunpoint late at night by thugs who were not customers. Of course the police never caught them and probably didn’t even try to. They were too busy trying to find reasons to shut down the spas.
Thanks Maggie for noting my work. That’s pretty damn cool. Your blog has been a resource for me in many ways. I came across a quote that resonated with me soon after my piece went up at the linked site, and I wonder what you and your readers think of it. This, to me, is the potential power of photography:
http://dish.andrewsullivan.com/2014/02/23/quote-for-the-day-325/
I’m hardly a theologian. I’m barely literate in Christianity. But maybe we can generalize to our own battles that words and abstract arguments are not enough, maybe flesh and blood humanity is more of what we need to show the poverty of the ideas of those who deny the humanity of others, those we know and respect.
Thanks for sharing that John Boswell quote, Peter. It really struck me.
[…] to Maggie McNeill for pointing this one out to […]
I can subtract, really. Just, my expectations of how over-optimistic he would be told me there was no way he’d say 25 years!
So, actually, his estimate sounds about right to me, and that’s assuming we don’t figure out whatever it was we’re missing on building it from scratch. THAT, however, I wouldn’t want to put a timeline on.
(how did this not get properly nested under Maggie’s ’25 years’ comment? Well, it belongs there)
I see nothing but circularity in your argument.
i.e. “The brain functions to produce intelligence therefore intelligence is a function of the brain”.
If intelligence is not solely the function of the brain – and there is no good reason to believe it is – then making something that works the same way will not produce intelligence.
That comment should have been in reply to drachefly’s comment #82083.
Just what is this external entity supposed to be doing?
How might it act on the brain?
Don’t forget to make sure that your explanation incorporates all of the data from neuroscience.
So far, the only reason I can see that you’ve presented for its existence is that the brain couldn’t do what it does alone. Now, THAT’s circular.
I say no such thing.
I am a skeptic.
It is you who needs the assumption that the brain *must* be able to do what it does alone for your proposed solution to AGI to fly.
You are the one standing on empty faith.
And considering that every brain that demonstrates intelligence is hooked up to the body, the society, the environment, the experiential universe as a whole … you must have a very comprehensive faith.
I need to correct my previous comment.
Brains do *not* demonstrate intelligence.
People do.
Well, some of us anyway.
Indeed. But still, I really do not understand why people use physicalism as replacement for religion, with all the usual fallacies that religion uses like circular reasoning, huge gaps that are glossed over, ignoring well-established facts, complete certainty in things, etc. It is not. It is just a philosophical world-model to allow one to reason about things, it is not something to believe in. Same for dualism. When I say “I am a dualist” that does _not_ mean I believe in dualism, it just means I consider it the more likely model at this time, given the observable facts. No belief involved at all.
As an aspie there are a lot of things about human behaviour that I just don’t get and this is a question that has long stumped me too. I’m inclined to address the broader question of the religion of Scientism, because there are also people who come at it from a twisted version of science that is not grounded in Newtonian physics but rather genetics or quantum mysticism. Scientology too is a version of Scientism, albeit a particularly cartoonish one.
The only theory I have is that our social interactions have not substantially changed since nearly everyone was a religious believer and that predisposes people to strongly need some kind of religious faith to prop up the socially mediated faiths they have in authority, justice, natural law, etc. As the old religions are no longer viable to those with rational minds they reach for other faiths that – at least superficially – seem to be in accordance with the conclusions of rationality. And when it comes to results, few fields have demonstrated the power of rationality as well as physics.
A simpler explanation is it’s just that old fear of death. While that explanation seems to fit physicalists who believe in cryonics, immortality through better medicine or uploading themselves to computers, I have met many who do not fall into those categories.
Well, maybe. What religion does is that it answers everything (often by excessive hand-waving and mental trickery). Physicalism does not do that at all, but abused as religion it seems to do so. This may really be about sharing a belief-system with others in order to belong.
Funny thing is actual science gives you the same, without any need to believe in things. But the community is admittedly much, much smaller and not well-respected by others as it frequently comes up with things people do not like. Maggie frequently has good examples for that in one particular area.
A good rule of thumb but I’d hate to rule out the possibility of finding a framework that can answer everything or blind myself to it if it came along.
I think it can be quite fruitful to try to fold all other knowledge systems into a particular one as long as you recognise that the perspective from one framework almost inevitably oversimplifies all the others and contains the potential for serious abuse.
So, for example, trying to explain all human behaviour from the perspective of evolutionary biology can provide some useful insights. But it can also lead to the sort of abuses that manifest as social Darwinism. The key, I believe, is to avoid becoming too attached to the map you are currently holding. And to remember you are a human being, not a machine ;).
Indeed.
And the test for religion that “it answers everything” is indeed one-sided. The real test is then to look for that “hand-waving and mental trickery”, or you may miss out on actual insight.
> You are the one standing on empty faith.
I am taking a calculated risk, with a high confidence of success. I do not think it _terribly_likely_ (like, really unlikely, not worth worrying about) that any non-physical process is required for cognition.
That’s not what faith looks like.
It is when you don’t have the data to make any such claim. And considering no-one knows what components constitute even a decent simulation of cognition I can state with a fair degree of confidence that you are talking through your hat again.
Precisely what data are you using to calculate your confidence interval?
I guess it’s time to lay out my reasons, since my saying ‘I can defend this but it’s gonna be big so please don’t ask me’ is being met with ‘he hasn’t got reasons, must be blind faith’.
But, this is going to take a while. Maybe a couple days, since I’ve got some important stuff coming up elsewhere in my life. OK?
In the mean time, a very brief outline, as a sort of trail of breadcrumbs so you can see the shape of my thoughts. If you could point out if there are any parts you are OK with and don’t need filled in, that would be helpful.
1) Knowledge doesn’t only consist of what IS, but also what ISN’T. In this case, we can tell some things about what cognition might require, without being able to build it. Hypercomputation does not appear to be among them. You appear not to be P-zombie style dualists, so I’m not sure what is left.
(this is almost a complete argument for the relevant parts of the argument, but let’s go on)
2) Outside of brains, even our incomplete physics is really astonishingly effective.
2a) These physical laws are local and impersonal and a bunch of other things.
3) INSIDE of brains, physics shows no signs of being less effective
3a) interfering with brains in ways not based on physics would need to leave physical traces because they alter physical entities
3b) simply having an expanded physics that can recognize and act on brains would violate every expectation from 2a. Basically, it would be magic.
So you’re asking me why as a skeptic I heavily penalize explanations relying on the intervention of magic.
Good strategy.
Did you have to crunch a lot of other possibilities before coming up with that or did it arrive as a sort of inspiration?
I agree we can almost certainly deduce some things about the prerequisites for cognition. But we have not yet deduced all of them and there is no good reason to believe we ever will, much less put a timeline on it.
I also agree that hypercomputation does not appear to be among them. But there are many aspects of cognition that do not seem to rely on computation at any speed or complexity – though of course if there is some kind of quantum-type hypercomputation going on we would not have introspective access to the process after the collapse of the wavefunction. Most of it would now be in another universe (presuming ‘many worlds’ QM here).
‘Astonishingly effective’ is a relative value judgement.
Yes, physics is better at making predictions and providing theoretical bases for technological advances than, say, alchemy or theology. But there are many things it does poorly or not at all. It does not provide purpose or succour as well as religions do for example. It may assist with weapon design but does not provide winning military tactics (as the US has learned in Vietnam and Afghanistan). It does not predict stockmarket movements even as well as the pathetically inadequate models used by economists. It does not make a child feel loved.
Physics is a tool, not a god. Holding a particularly flash hammer does not turn the entire universe into a nail.
Untrue of QM now and unlikely to be true of yet-to-be-developed branches of physics (unless we are already as close to its limits as the 19th Century physicists believed themselves to be).
Inside of brains when examined using the current tools of physics.
If you look inside your mind using introspection you will find physics to be pathetically inadequate in explaining what is going on. In fact the whole subject-object dualism on which science and most of Western philosophy rests is really pretty feeble at explaining the range of your own experiences.
Can you really not see how oxymoronic that statement is?
If they alter physical entities or leave physical traces of course they have some kind of physical component somewhere in the process, though not necessarily one explicable with contemporary physics and not one necessarily causally initiated by something that is in itself within the realm of physics.
For example, let’s assume for a moment that decisions based on free will are possible. If this is so, each such decision represents a new first cause coming into existence. If it is entirely conditioned by earlier events it is not free will. As an unconditioned cause it is a singularity and not subject to examination by causality-based science any more than what exists on the ‘other side’ of a black hole or ‘before’ the big bang is. However that decision will probably result in a physical manifestation of something which will produce the chain of cause and effect beloved of scientists. That chain will continue to exist and (theoretically) be examinable by science until it disperses into entropy or is swallowed by another singularity (e.g. the now unlikely ‘big crunch’).
If I understand what you are saying here I could not agree more.
But if I understand it you have just refuted almost everything else you have said in this comment thread so I think I am missing the point here.
BTW, first causes as referred to in my above example are not novel to science – just inexplicable by it. The timing for the decay of a particular radioactive particle would seem to be one such first cause – though I would not rule out causality that has not yet revealed itself to physicists.
Currently, Physics uses a mental trick to explain the decay time (or, for example, tunneling time of an electron, etc.): They say it is “truly random”. This basically says “we do not know how it works, but we do not have a better model and do not expect to understand this anytime soon.”
In fact, there is a residual possibility that quite a bit of quantum theory is wrong. While its prediction power has been pretty good, there are huge gaps and the current description is potentially too simple (or maybe too complex, but missing some effects).
Case in point: a few years ago somebody invested about 10 million USD or more in starting research to prove that some of the observable effects at the quantum level are actually effects resulting from complex oscillation overlays in atom cores. Unfortunately, I do not know who that was or where they sit; I just saw an anonymous job ad for 10 high-frequency physics PhD positions that would investigate this. The ad alone would have cost something like $10,000 where I saw it, and it must have appeared in other newspapers around the world as well.
Now, my guess is that this effort either failed or is still underway. At least I have not seen any related publications. But if anybody can raise that kind of research money at this time, that indicates to me things are not quite as clear as usually claimed. There also was not the slightest hint of any “junk science” in there.
That confidence is called “faith”, as it has no factual basis. It is not that you are very obvious about it, but all the important tell-tales for religion are there in your use of physicalism.
Gotta disagree with you there, Celos.
Drachefly is obviously not stupid and is capable of bringing robust logic to bear on certain questions. But that seems to collapse entirely in other areas.
It’s through debating people like drachefly that I remind myself that I too am likely to have serious cognitive blind spots. Unfortunately I am probably so blind to them that I still wouldn’t see them if someone was shaking them in my face – at least if drachefly is any guide.
What I can do at least is try to acknowledge the beliefs I hold that have no firm basis in rationality and try to avoid confusing them with those that do.
That is fine, “obviousness” is obviously a judgment call.
Note that for purposes of this response, we’re taking as a given that human cognition is made of meat, and trying to poke holes in it. So please let’s avoid going in circles by objecting to my free usage of this idea within this scope.
> I have already named more than three but for your benefit I will repeat three of the biggies.
>
> 1. The feeling of subjectivity. That there is a ‘self’ in there looking out through the senses.
>
> 2. The appearance of free will.
>
> 3. A sense of meaning or purpose to life.
>
> But as these are subjective and you may be inclined to dismiss them as illusory, here are three more from just outside the realms of physics itself.
No, they are quite real. In fact, #2 is more real than you seem to allow it to be!
1: Is this related to P-zombies, or is it a mechanistic question of why we would think about ourselves? If the former, oi. That’s a whole mess. If the latter…
2: I already addressed this elsewhere… let’s see what the response was.
> If your own mind is entirely determined by extrinsic causes you are as completely a slave as it is possible to be – except of course there would be no slave master other than the whole universe (which the religious would equate to ‘God’).
Extrinsic forces? Well, for this post and for any normal materialist, all we are is our material selves, as implemented by physics.
To the extent that our actions depend on the details of what’s going on inside that material self, we have free will. If our actions are forced by external matters without being well-filtered by the stuff going on inside, then we don’t have free will. For example, if I get tased and thrown in jail, that’s not free will (though my ending up in the situation where that was likely to occur quite possibly was). But if I decide I want to object to the treatment of so-and-so and stage a sit-in at which I expect to be jailed, and I am, that’s free will.
I don’t see how this is sophistry. It seems to be actually answering the question.
3: Evolution gave us drives. Things we feel we need to do. Honor, loyalty, love, anger at injustice, improving the lives of those around us… all of these are very adaptive traits for a cooperative social animal. What other explanation could you need?
Round 2!
> 1. The time at which a particular radioactive particle will decay.
So… the universe is not 100% deterministic (even indexically)? I don’t see how this is actually an objection. It’s something that can’t be explained, but it’s also something that we wouldn’t expect to be able to explain. Same goes for #2:
> 2. What may ‘exist’ on the other side of a singularity.
??? I thought you said everyday occurrences. I do not find this to be a particularly threatening gap in our knowledge. It’s much like asking about something outside the past light-cone of our future light-cone. If the universe’s expansion is accelerating, there is quite a lot in _that_ category.
It’s just too far away to see, ever. In the case of the singularity, that’s not so obvious because we can go around it, but that’s what it amounts to.
> 3. The way the observer herself seems to bring certain quantum events into being. (In ‘A Brief History of Time‘ Hawkings suggests that such events do not decohere at all and that we are retrospectively bringing the entire universe into being via our observations. i.e. matter arises from mind – not visa versa).
(Aside: his name is Hawking, not Hawkings. Also, ‘vice versa’)
Ah… ahahaha. Bad quantum physics! Ruining the perfection of a quantum event – or anything where an ‘observer’ would make a difference – doesn’t need a conscious observer. You don’t need any sort of observer at all, not even a mechanical one. All you need to destroy that two-slit pattern and get a one-slit pattern (or whatever) is for some particle somewhere to end up in a different place depending on which slit that one particle went through. That particle can be in an observer, or not.
As for Hawking’s ontology, that’s a bit of a weird way of looking at it. Maybe if he means… no, I can’t figure out how to square that quote with decoherence. Which makes sense, since decoherence was very much a work in progress when that was written. I wonder whether he would still put it that way.
Oops, I didn’t give my response to #1. I’ll still set aside P-zombies, since they were disclaimed earlier. Why should we have subjectivity? Let’s start with how. Our internal systems contain a map of themselves (an incomplete one). In other words, part of our cognition is devoted to noticing (part of) our internal state. That seems like a pretty big sign saying ‘I’m self-aware now’.
WHY is this the case? It’s useful. Evolution.
Yes. Or to partial p-zeds.
Oh, really?
Perhaps you would care to explain why.
Preferably in a manner I could in turn explain to my former colleague, David Chalmers.
From your subsequent comment:
That’s right. You disclaimed them once by saying you had an argument prepared (somewhere) then again by saying they were a mess.
I have seen refutations somewhat more robust than that.
You were however correct in saying I am not a p-zombie dualist and I have argued the point with David. And lost. But if forced to pick a side between the physicalists and the p-zombies then I’m with the legions of the undead. They are more human. Besides, unlike physicalism, dualism (p-zed or otherwise) actually has a hope of explaining the observable phenomena in the world – both subjective and objective. But rationally I remain a skeptic and experientially, but irrationally, I am a monist (advaitist).
If your decision to protest arises from deterministic switching within your brain, which is in turn determined by purely causal factors from your genes and prior experience there is no free will at all. In fact the ‘decision’ was made before you were even born. At the time of the big bang.
As a (Newtonian) physicalist you are presumably OK with that notion and you would insist it is compatible with free will. Because you have no choice. It was determined during the birth of the universe.
And that is proof that all those drives are rooted entirely in physicality?
Do you believe there are genes that code the ‘meaning’ protein?
That’s what is called a non-explanation. Physics is a complete explanation because evolution is a complete explanation because physics is a complete explanation … It’s Scientism turtles all the way down.
I am a huge fan of Darwinism. I think natural selection explains a truly astounding amount of what we see in biological systems. I used a lot of keystrokes defending it from a crypto-Lamarckist in one of Maggie’s earlier posts. But unlike Dawkins I do not believe it explains everything there is to know about human behaviour.
But let’s say for an instant that genetic determinism is true. That a combination of your biological heritage and your social environment (i.e. more than just the functionality of your neurological systems) inevitably gave rise to your sense of meaning. Does this automatically mean that physics (or any other reductionist science) realistically has a hope of ever explaining the mechanisms to the point of making it codable on a computer, much less that it can do so already? Is it a given that all factors of evolutionary heritage are physical? Might it not be that evolutionary changes alter the ‘reception state’ of the organism so that it can receive different kinds of non-physical input that increase its evolutionary fitness? It certainly works that way with physical input from sense organs. (BTW, when I say ‘non-physical input’ I mean input not explicable with the physics we have or are likely to develop – not necessarily input that has no physical component whatsoever).
As there is not even a theoretical explanation as to how genes (or any other physical stuff) could have carried the sense of meaning from Ook the Caveman, who first wondered why he was painting aurochs on the wall, to Gordon Gekko, who justified it all with ‘Greed is good’, nor how the interaction of meat switches in your head could compel you to share their need for meaning, don’t you think it a rather large call to insist it will all someday be explicable with classical physics?
I guess not if you’re starting from the position that physics is an all explanatory god.
Ahh, now I see.
Everything is explicable via physicalism except for the things drachefly would not expect to be explicable via physicalism.
Why didn’t you say so in the first place?
And if some of those things are a factor in intelligence …?
Actually there are physicists who do try to explain it via ‘hidden variables’. I am not aware of any great success on their part though.
As explained earlier, any true expression of free will is a singularity.
I don’t know about you, but I’m of the impression that I encounter them every day.
The popping in and out of existence of subatomic particles and the decay of radioactive particles are also singularities and they are going on inside your head right now.
I am sorry I do not follow your logic or your grammar here.
A link to an appropriate paper would probably help and save you some typing.
>That’s right. You disclaimed them once by saying you had an argument prepared (somewhere) then again by saying they were a mess.
No, you disclaimed them when you said that there was an actual mechanical effect that matter could not provide:
> Yes I do.
> The ‘computation’ of a decision made according to free will.
This makes what you’re talking about not a classic P-zombie.
I will not have time to respond further for three days. See you on Thursday evening.
I put scare quotes around ‘computation’ because I was echoing your terminology, not because I believe anything computational is happening during the exercise of free will (as opposed to what will often happen when balancing options before so exercising it).
And p-zeds (as posited by Chalmers at least) lack not just free will but consciousness. In fact I’m not even sure the lack of free will is a prerequisite either. Or self-awareness, for that matter. (It is perfectly possible to be conscious but not self-aware. I have experienced it on numerous occasions, and I bet you can say the same if you think about it. Ever lose yourself completely in an activity?) So I guess I wasn’t really thinking my answer through when I agreed that #1 related to p-zeds. But I’m perfectly happy to replace ‘self-awareness’ with ‘consciousness’ and go with p-zeds if you like, if only because I would love to hear that answer you’ve prepared.
Okay, I’m back! What with a number of unexpected minor difficulties, I didn’t get a chance to return online by last Thursday evening as I had expected. I have two posts ready right now; the third, dealing with quantum physics, is not yet ready.
So, P-zombies. I will note that this essentially amounts to a TL;DR of Eliezer Yudkowsky’s argument at http://lesswrong.com/lw/p7/zombies_zombies/, though I had come to the same conclusions for the same reasons before reading it.
Suppose there is some system that is responsible for an individual’s actions. This could be made of stuff we have models for in physics, or it could involve some other stuff, but whatever stuff it involves follows laws. Heck, I’ll even grant for the purposes of this particular argument that the laws could be less impersonal than the laws we know of. This system could be completely causal or only partially causal. For simplicity, let’s consider the system responsible for my mind (I think that’s my brain; you suspect other things might be involved). Now, within this system, let’s consider the causal chain behind an action – say, the action of writing this post.
What is in this causal chain? Among other things, a lot of introspection – noticing my thoughts and thinking about them, then typing some of them out. Pauses for word choices and digressions, a bit of fighting nausea while writing on the bus, and then choosing to mention that. Most importantly, noticing that I’m doing that. This causal system is feeding on its internal state, considering parts of itself, and only then outputting text.
This isn’t exactly
10 print “Okay, I’m back! …”
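The contrast being drawn there can be made concrete with a toy sketch – purely hypothetical Python bookkeeping, not a claim about how minds work – of a system whose output depends on an incomplete model of its own internal state, as opposed to a one-line print statement:

```python
class ToySelfModeler:
    """A toy system whose output depends on an (incomplete) model of its own state.

    Purely illustrative; the 'state' and its fields are invented for this sketch.
    """

    def __init__(self):
        self.state = {"drafts": 0, "distracted": False}

    def introspect(self):
        # Part of the system is devoted to noticing (part of) its own state.
        return dict(self.state)  # an incomplete copy: a partial self-model

    def write_post(self):
        self.state["drafts"] += 1
        me = self.introspect()  # the system feeds on its internal state
        remarks = ["this is draft %d" % me["drafts"]]
        if me["distracted"]:
            remarks.append("and I notice I keep getting distracted")
        # Only after considering parts of itself does it output text.
        return "Okay, I'm back! (" + ", ".join(remarks) + ")"

bot = ToySelfModeler()
bot.state["distracted"] = True
print(bot.write_post())
```

Unlike the BASIC one-liner, what gets printed here is partly determined by the program noticing things about itself – which is the (very rough) shape of the point being made.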
The P-zombie claim is that consciousness is not implied by the existence of such a system. That a system could contain a self-model, process it, process its processing it, produce coherent results, produce the same results as a human for the same reasons as a human produces them (we imported the whole causal system), and yet somehow not actually be conscious.
This essentially amounts to saying that stuff isn’t real. This chair? Just atoms. If it goes out and does something, it didn’t actually happen. I didn’t just come home on the bus; there is no I, no bus. Just atoms moving. I don’t know anything without some observer blessing it. The words on this page don’t mean anything, even in context, without such an observer.
This seems like an awful lot of work for an observer to be doing, considering that it has exactly zero causal influence on the system in question. I don’t think that there’s any way we could attribute consciousness to it. It doesn’t DO anything. All the work of self-awareness and introspection is done down there in the system. All of the actual reasons for my producing this highly self-referential work, are down there – no, down HERE – in the system.
TL;DR: Shit be real, yo. When it does a thing, that thing be done. Including being conscious. Saying that it wouldn’t is saying it ain’t real.
~~~
Incidentally, with a few words changed here and there, the same argument applies to free will, with the note: we are never perfectly free – we cannot change how we were created, for obvious reasons – but we can be free to be ourselves.
A) That’s pretty free; B) we don’t appear to actually be any freer than that even from the inside (actually, we don’t even meet that standard on a fairly regular basis, since a lot of things are simply not up to us), and C) it’s incoherent to want to be more free than that. Why would suddenly doing/becoming something you wouldn’t want to do/become count as freedom?
Sorry about the delay. I was involved in a pretty in-depth online discussion about the morality of abortion, and I lack the time and mental energy for two discussions of that intensity at once.
Yudkowsky’s piece is so full of fallacies and errors I’d be several days just pointing them out, so I’ll resist the temptation to pull it apart phrase by phrase and just chop it off at the knees.
He equates ‘consciousness’ with ‘internal dialogue’ (both ‘spoken’ and ‘heard’). Any meditator can tell you that is definitely incorrect (anyone with a bit of practice in either vipassana or anapanasati would have got a laugh out of the three paras beginning “If you close your eyes and concentrate on your inward awareness…”), but in case you have no more familiarity with meditation than he does I offer the following.
Do you think those deaf/mute from birth have some kind of internal dialogue (in sign language perhaps)? If not, do you think they therefore lack consciousness? Do you think infants lack consciousness because they can’t talk at all? What about animals?
Does *anyone* really open the fridge and think “Darn, out of orange juice” unless they are rehearsing telling someone else? Or do they just get a feeling of disappointment and/or frustration if they wanted some? I know which it is for me.
Consciousness is simply awareness. It’s not even necessarily *self* awareness. It precedes subject and object.
Much of the rest of Yudkowsky’s essay is basically an argument for behaviourism. As BF Skinner pointed out, it is consistent with Occam’s razor when applied to others (i.e. objectively) but fails to account for a lot of subjective phenomena (though not if you’re a p-zombie of course – perhaps that’s Yudkowsky’s problem).
I think we can all agree a computer program could be capable of passing a Turing test without being aware of anything. Now let’s add to it the remote-sensing human simulacrum you mention in one of your other comments. It would be a p-zombie. Problem solved. P-zombies can theoretically exist.
There’s a few pretty out there assumptions in that last sentence.
Firstly, the idea that what is responsible for a person’s actions is ‘stuff’ is another example of circular logic. Your belief system is that everything is stuff, therefore everything you come across must be more stuff, thereby confirming your belief system. Morality isn’t ‘stuff’, but most people think it guides their actions.
If it is stuff, there are definitely no grounds for assuming it follows ‘laws’, or even that the stuff we are familiar with (matter/energy) does. A quick think about history would suggest that laws are man-made artifacts and they ‘follow’ stuff, not vice versa. When they fail to follow stuff sufficiently well, they are tossed out and new laws that (hopefully) do a better job are manufactured. The idea that there are laws ‘out there’, somehow inscribed on the fabric of the universe, is definitely a religious one. It’s possible it’s a true one (just like it’s possible moral laws exist independent of people), but I see no grounds for believing it that are even remotely compelling.
Even if there is ‘stuff’ that is responsible for individual actions and it follows ‘laws’, again there is no reason to believe we can ever come to understand those laws. As Gödel demonstrated, self-referential systems are often impossible to define from within the system itself. As ‘understanding’ is one of our actions, the system is self-referential. It may well be impossible to understand understanding.
If it has any elements that are not causal it is outside the realms of scientific reproducibility (except perhaps epidemiologically;) ) so science has nothing to say about it.
That’s right. Though by the time you get into the reasons you are talking not just p-zombies but epiphenomenal ones. They’re the ones Chalmers talks about the most but they are not the only ones. If you can’t get at the reasons a putative p-zombie behaves in a particular way they are irrelevant to whether or not it’s a p-zombie. If there *are* no reasons you would never be able to get at them.
You are assuming you know the reasons you typed out your comment. But we know that people very often reverse engineer reasons for their behaviour that aren’t true. (This can be demonstrated with post-hypnotic suggestions or with recent stroke victims who come up with ‘reasons’ they are not using one arm that don’t include the fact it is paralysed). Why would it be unlikely people would come up with reasons for doing something when there is no reason whatsoever?
Now you’re cooking!
I’d go further and say there is no way to know there are atoms either.
As René Descartes stated and George Berkeley further enunciated, all we can really know is that there are mental processes. Descartes took a leap of faith by insisting that he is responsible for those mental processes. Indian philosophers are far more skeptical, hence the Buddhist doctrine of anatta (no-self).
If there are non-mental phenomena it’s pretty much certain they are very different to how we perceive them with our limited range of senses (probably still very limited even when augmented mechanically) and our far more limited ability to process the input of those senses then model it as some sort of ‘reality’. However it may be possible to be aware in a manner not dependent on senses at all and some would have it that reality is ‘knowable’ through such awareness (e.g. Sankara and Kant. It also tallies with my own experiences. If Steven Barnes is right about freegirard he would probably agree as well).
If it has other than zero influence it has ceased to be an observer strictly speaking. Again this would be consistent with Sankara, Schopenhauer and others who insist the division between subject and object is false.
Feel free to think about what I have written before replying. It will be another day before I can respond properly to the other comment you’ve already made.
Your attempt to chop something off at the knees seems to have landed somewhere in the next state over. Verbal internal monologue is a sufficient but not necessary condition on consciousness.
> I think we can all agree a computer program could be capable of passing a Turing test without being aware of anything. Now lets add to it the remote sensing human simulacrum you mention in one of your other comments. It would be a p-zombie.
No. A P-zombie would have to do what we do *for the same reasons* as we do. Convincing simulacra with different insides don’t address the monism vs dualism issue because such a change could also remove consciousness under monism. This was quite explicit in the argument.
> If it has any elements that are not causal it is outside the realms of scientific reproducibility (except perhaps epidemiologically;) ) so science has nothing to say about it.
If this is what you think about science… eeeurgh. You don’t know as much about it as you think you do. Stochastic models are perfectly legitimate.
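That point – individually unpredictable events still make for perfectly reproducible science – can be illustrated with a minimal Python sketch (the half-life and sample size here are arbitrary illustrative values):

```python
import math
import random

def decay_times(half_life, n, rng):
    # Each individual decay time is 'truly random', but the ensemble
    # follows a reproducible law: exponential decay at rate ln(2)/half_life.
    rate = math.log(2) / half_life
    return [rng.expovariate(rate) for _ in range(n)]

rng = random.Random(1)
times = decay_times(half_life=10.0, n=100_000, rng=rng)

# No single decay is predictable, yet the fraction of the sample surviving
# past one half-life is statistically pinned down at about one half.
surviving = sum(t > 10.0 for t in times) / len(times)
print(round(surviving, 3))
```

Nothing in the model says *when* any one particle decays, and nothing needs to; the stochastic law itself is the testable, reproducible claim.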
> Now you’re cooking!
> I’d go further and say there is no way to know there are atoms either.
Umm… you appear to have misapprehended the entire paragraph (though I may have misinterpreted yours in turn). I was not presenting these as true. I was presenting these as being your claim. I am at least glad that I was presenting your ideas reasonably accurately.
The rest of your post begins to present the epistemological problem that anything we know needs to be ultimately learned by mechanisms we do not perfectly control… as being an ontological principle. That we’re real and that stuff out there isn’t.
Why would you begin to suspect that this is the case? Why would there appear to be a complicated, consistent physical world with a dramatic paucity of conscious actors and abundantly available intersubjectively knowable information, if it was not in some sense real? And if it is in some sense real, it seems to me that that sense of ‘real’ ought to be both the ordinary sense of ‘real’, the one that everyone uses all the time for normal things, AND the philosophical sense of ‘real’.
‘Chair’ is a name we give to a variety of real things. Their ‘chairness’ isn’t a thing, but chairs are real.
> Verbal internal monologue is a sufficient but not necessary condition on consciousness.
Realizing that this of course means that the argument doesn’t directly address the existence claim, let me lay it out more generally.
Assuming you don’t ascribe consciousness to random stuff lying around or the universe as a whole or whatnot, then…
Any time you would believe anyone else to be conscious, there’s an accompanying system that’s the thing you’re actually interacting with. By definition of the system, that system is responsible for every aspect of the thing which leads you to conclude that the thing is conscious.
Every reason you infer that there’s consciousness in there is stuff that this supposedly unconscious system is doing.
I can’t really parse your sentences as I can’t make sense of the “accompanying system that’s the thing you’re actually interacting with”.
Are you suggesting I can interact with other people’s consciousnesses? Because I don’t believe I can. Or are you suggesting I interact with my own belief systems? In which case you seem to have come up with a verbose way of saying nothing at all.
The three main reasons I believe other people are conscious are
1) Because I am and they seem like me.
2) Because some of them tell me they are.
3) Because there’s a word ‘consciousness’ that’s been bandied around since before I was born so I assume there must have been someone other than myself who has experienced it.
However, I have no access to what other people actually mean when they say consciousness, and when I read people like Yudkowsky equating it with internal monologue I’ve really got to wonder.
But more seriously, my own idea of what consciousness is comes from introspection, not from modelling the consciousness of others. I don’t really believe Yudkowsky lacks consciousness, but rather that he suffers from a severe deficiency in his capacity to introspect (either that or he doesn’t believe what he writes).
I did suggest you actually take the time to read and think about what I wrote. I afforded you that courtesy.
Internal monologue has nothing to do with consciousness.
There is no reason for believing something non-linguistic could not have consciousness.
It is possible to build something that verbalises, detects its own speech and responds to it, yet does not have consciousness (you could hook two voice-recognition ‘telephone receptionists’ up to each other, for example).
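That thought experiment can be sketched as a toy loop – hypothetical canned-response bots, nothing like real voice-recognition systems – where speech is produced, detected, and responded to with no awareness anywhere:

```python
def receptionist_a(heard):
    # A canned-response system: it 'detects speech' and verbalises a reply.
    return "Thank you for calling. Did you say: " + heard + "?"

def receptionist_b(heard):
    return "I'm sorry, I didn't quite catch that. You said: " + heard

# Hook the two 'telephone receptionists' up to each other: each one's
# output becomes the other's input, so each endlessly responds to speech.
utterance = "Hello"
transcript = []
for _ in range(3):
    utterance = receptionist_a(utterance)
    transcript.append(("A", utterance))
    utterance = receptionist_b(utterance)
    transcript.append(("B", utterance))
print(len(transcript))
```

The exchange is self-sustaining and superficially conversational, which is the point of the example: verbalising and responding to one’s own speech demonstrates nothing about consciousness.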
There is no particular reason to believe internal monologues even originate with the consciousness that is aware of them. Schizophrenics can experience thought insertion and meditators can become aware of internal monologue as something that arises without volition.
Only epiphenomenological p-zombies. In one sense Yudkowsky was creating a strawman in that he pretended to be addressing the whole question of p-zombies while only looking at the most ‘hardline’ version. However as Chalmers in fact argues for the hardline version as well it wasn’t technically a strawman.
Let’s not forget what this conversation is actually about. An artificial intelligence that was an e-p-zombie would still be a true artificial intelligence. One that was merely a p-zombie with undetectable causes/motivations would still be an AGI. But if consciousness is a necessary prerequisite to human intelligence but not subject to scientific examination your functionally emulated brain would not automatically develop human intelligence and quite likely it would develop none at all.
Your model is essentially cargo-cultism. (i.e. if we build something that looks like an airstrip according to our best understanding of what an airstrip is planes will begin dropping off goodies).
That’s what I was referring to with my crack. I used ‘epidemiologically’ as a joke reference to the plague that causes the zombie apocalypse because you would need much more than just one p-zombie to be able to use stochastic methods.
I can see I will have to avoid humour and be very explicit at every step if I am to have any hope of communicating with you.
I very strongly suspected (though was not certain) you didn’t actually believe what you were saying. If you did I would have had to have completely revised my entire model of your thinking.
It is not my ‘claim’ that nothing is real. Merely that we cannot know whether anything outside our own mental phenomena is and I was pointing out that is very well established in multiple philosophical traditions. It is certainly not something that can be ignored when presuming to emulate/create an intelligence akin to your own.
Have you ever had a dream, drachefly?
Have you ever read a novel, watched a movie or played an RPG?
You have no way of knowing that. Perhaps you are a functionally emulated brain in a mainframe being fed bogus sensory data that includes ‘chairs’.
Even if there is something real out there giving rise to the notion of ‘chair’ in your mind that is no reason to believe what caused it corresponds to your idea of what a chair is.
Are you familiar with the Jain parable of the blind men and the elephant?
> There is no reason for believing something non-linguistic could not have consciousness.
… that’s why I said sufficient but not necessary. You do understand this basic term, right?
See immediately below…
> I can’t really parse your sentences as I can’t make sense of the “accompanying system that’s the thing you’re actually interacting with”.
I mean, their brain, as mediated by their bodies, the air, the internet, whatever. You suspect I’m conscious by analogy to your own.
The point was to broaden the discussion from internal monologue to anything that any other creature might ever do that would lead you to suspect it was conscious. Whatever that was, it was caused by the system, and not an epiphenomenal essential consciousness.
Isn’t that odd? Isn’t that… suspicious? Doesn’t it really suggest that the stuff can go be conscious on its own for those causal reasons, and the epiphenomenal observer is just an implication – a sort of a logical shadow – of the state of that matter?
Like, I haven’t got a problem with epiphenomenal observers, so long as we don’t think they’re things with existence independent of their implementations. If we just look at the implementation of a consciousness, verifying its structure, and say ‘hey, there must be an epiphenomenal observer here’ I’ll buy that. It’s basically the same thing as looking at a chair and saying, ‘oh, this isn’t just any old stuff, it’s a _chair_’, only of course the judgement call is way more complicated to actually perform (presently not practical).
> Only epiphenomenological p-zombies.
In what sense is there any other kind? If the consciousness you’re removing has a causal role, then they’re not P-zombies, they’re just… zombies.
And that causal role makes it detectable.
Since you denigrate ‘hard-line’ P-zombies to the point that actually arguing against them is apparently a straw-man, then you agree with me that consciousness ought to play some sort of causal role.
> Your model is essentially cargo-cultism. (i.e. if we build something that looks like an airstrip according to our best understanding of what an airstrip is planes will begin dropping off goodies).
Even if the rest of your critique were right, this wouldn’t be quite right. The point of AI is to produce consciousness? Well… if we’re after widgets, we can tell whether we got the widgets. If we’re after things that act conscious, we’ll be able to tell whether they act conscious. If they don’t act right, then we can tell that there’s something wrong in our models. We can then attempt to improve them…
After all, if the cargo cultists build an actual functioning airstrip complete with airplanes, cargo may not be airdropped in from mainland USA, but they have a functioning airport.
> I can see I will have to avoid humour and be very explicit at every step if I am to have any hope of communicating with you.
When in a debate, jokes that distort your opponent’s viewpoint are unwise to make in general, but especially when there is little trust that that viewpoint is not going to be distorted to hell and back, unintentionally or not.
Seriously… cargo cultists? You think neuroscience is THAT unable to explain anything?
> Merely that we cannot know whether anything outside our own mental phenomena is
Wooot wooot woot! Breakthrough! We have an epistemic problem here. THAT I can deal with. Yes!
> Have you ever had a dream, drachefly?
> Have you ever read a novel, watched a movie or played an RPG?
I have!
If this is a dream, novel, movie, or RPG, it is a dream (etc.) that as far as we can tell obeys strict physical laws, a number of which we have identified with varying degrees of tentativeness. I haven’t had any dreams like that.
If life were an RPG where you roll a d20 and apply some modifiers and succeed or fail, then those rules, among others, define the physics of that world. If a player somehow unaware that they are in a game were to experimentally determine these rules and publish them… they would be right. Those ARE the rules of the game. If the rules went so far as to explain the player’s thoughts, and brook no exceptions, then… that’s no game, that’s just a weird set of physics.
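The d20 example can be written down directly: a player running enough trials could rediscover the rule’s statistics empirically. A minimal sketch, with an arbitrary modifier and difficulty class chosen for illustration:

```python
import random

def skill_check(modifier, difficulty, rng):
    # One 'law of physics' of the game world: roll a d20, add the
    # modifier, succeed if the total meets the difficulty class.
    return rng.randint(1, 20) + modifier >= difficulty

# An experimenter inside the game could determine the rule empirically:
rng = random.Random(0)
trials = [skill_check(modifier=3, difficulty=12, rng=rng) for _ in range(10_000)]

# With +3 against DC 12, success needs a raw roll of 9 or better: 12/20 = 60%.
print(round(sum(trials) / len(trials), 2))
```

The experimentally measured success rate converges on the value the rule implies; publish that, and you have correctly described the physics of that world.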
And there’s a funny thing about this. It ties into one thing that you said earlier, about having no idea how a variable would fit into a theory and what values it might take.
Whatever new stuff we add to theories, it has to explain all the old facts. We can heavily constrain any new physics that would have to pop up to explain consciousness. There are a hell of a lot of things that can’t be it.
And if we don’t need new physics to explain consciousness, then what the fuck is the objection in the first place?
You did read my next few sentences where, in conjunction with that one, I showed it was neither sufficient nor necessary, right?
Just what do you think consciousness is, drachefly?
I’ve defined my terms.
It’s Turiya. Sakshi. Chit. The witness. Awareness. Not of the self. Not of the sensoria. Not of an internal voice. Not of subject nor of object. Just awareness. Like you might have when you’re totally absorbed in something – then take away the something.
Other than saying they are conscious, there is nothing they might do to cause me to suspect that they are. But for all I know they’re lying. They wouldn’t know either. They’d be like Arthur Dent with his brain replaced with a computer – programmed not to notice. Just to hear themselves thinking “Darn, no orange juice” and write inane essays.
That is one of the huge problems in defining it and why people like Chalmers talk about epiphenomenal p-zombies.
There is nothing agreed upon that could objectively demonstrate consciousness. It is entirely subjective (except that it remains when subject-object is gone).
We sometimes say “I have been unconscious” but all we know is that we can’t remember being conscious. We know that retrograde amnesia can do that even when the person who is to suffer it showed no conventional signs of unconsciousness at the time.
What would someone lacking consciousness look like?
Who knows?
Do psychopaths have consciousness?
Do autistic people with no theory of mind?
Do animals? (Many philosophers and neuroscientists would say either “No” or “Only higher primates”, but how would they know?).
How would you describe consciousness to someone who lacks it?
How can you describe any experience to someone who hasn’t had it?
The fact is you can’t.
You might tell someone “Strawberries taste kind of like mulberries but with a touch of apple and a hint of lemon” and that person might think he now knew the experience of tasting strawberries but he would be wrong.
What about consciousness? What is that similar to? Does it taste like chicken?
Is it something like subvocalising? Is it related to observing yourself observing yourself observing yourself …?
Or is it not remotely like anything except consciousness itself?
It would only have a structure if it were conditioned. Made of other things. Stuff.
According to the people who have dedicated more time and effort to the study of consciousness than any others – the Indians – it is not.
Pure consciousness is ‘neti, neti, …” (not this, not this …). When Zen masters want to induce such a state in their students they do not tell them things to make them think about it. They ask them koans to make them stop thinking.
I don’t know why you keep using such a lousy example. There is no such thing as a chair beyond the ‘chairness’ imposed by an observer.
Is a chair still a chair when you remove a leg? All its legs? Reduce it to splinters? At what point does it cease being a chair? At what point does an extruded plastic blob become a chair? When it’s poured into the mould? When it hardens? When someone sticks a ‘chair’ label with a bar-code on it? When someone sits on it?
If a chair is cut into a solid rock-face where does the chair end and the rock-face begin? Is the whole mountain a chair?
Is a tree-stump a chair when someone gets the idea to sit on it? Is a chair still a chair when it’s being used as a footstool? What if there’s a tangle of vines in the deepest Amazon unseen by humans that happens to be identical in form to a Walmart cane chair? Is it a chair?
When everything with a bum goes extinct will chairs still be chairs?
There’s no such thing as true objectivity, drachefly. So there are no true objects. Certainly not minds. Consciousness even less so.
If the only thing consciousness ’causes’ is for people to talk about their consciousness they sure wouldn’t act like escapees from an Ed Wood movie.
A lot of philosophers and neuroscientists insist animals (or sometimes just non-primates) lack consciousness.
They don’t seem very zombie-like. (The animals, I mean. Neuroscientists, on the other hand …).
Personally and subjectively I agree with you. Because it seems to me that I have consciousness and it plays a causal role in some things I do. But I have no idea how I could demonstrate my consciousness to someone else nor confirm someone else’s consciousness to my own complete satisfaction. I know I’m not faking it and I’m reasonably confident I’m not completely mistaken. I can’t say the same about you.
Some things just aren’t subject to objectification, drachefly.
So if you build a complete functional emulation of a human brain (according to your best understanding) but it fails to develop intelligence have you still built a functioning brain?
Is a bunch of buildings and some tarmac strips still an airport if it isn’t on any air routes and no planes ever arrive? Was Botany Bay a seaport before Captain Cook so described it?
I seriously think you are THAT unable to critically examine your own assumptions. Of course cargo cultists generally have the excuse that they lack access to better information. What’s yours?
And I think neuroscience is very poor at what its proponents claim it to be doing. In over half a century of trying with billions poured in by drug companies it is yet to find a single reliable biomarker (or set thereof) for any mental illness. And you think it can tell us how the mind works?
In its early days neuroscience had some big breakthroughs in regards to epilepsy, alcoholism, Alzheimer’s, syphilis, traumatic brain injury, etc., causing people like Emil Kraepelin to get very excited and claim it would soon be able to explain everything about the mind. In the century since it has produced a thin dribble of actual knowledge and a torrent of harmful drivel that has led to therapies such as destructive psychosurgery and ECT. (In fact much of what neuroscientists think they have learned has come from ill-considered or botched psychosurgery.) It has also reverse-engineered bogus theories to justify what its Big Pharma paymasters were already doing, such as the serotonin theory of depression and the D2 dopamine pathway theory of schizophrenia.
No, I am not impressed by neuroscience. Not even by the internal phrenologists with their psychedelic fMRI readouts. I wouldn’t mind one on a t-shirt though. It would go well with my tinfoil hat.
Funny. Freudians and Jungians seem just as convinced that dreams obey laws as Scientismists believe the universe does. Surely if you think your brain is your mind and it obeys physical laws your dreams must do so as well. Or are they something outside your physical model? Does God insert them?
Until the dungeon master did something that contradicted all the laws seen so far. Or the game company released a revised rule book. Or the players got drunk and stopped following the rules. Or the cat ran across the board and introduced chaos into the universe. Or …
The game character would be completely unable to account for many factors that are outside his reference frame, such as why some of his companions seem to disappear or stop acting for no reason (i.e. when some of the players haven’t made it to the game that day). His theory of ‘RPG physics’ would not only be incomplete but incompletable from within his reference frame. (BTW, there’s a good Greg Egan story about an artificial intelligence project that is so successful its denizens manage to escape their reference frame. It raises an ethical question about AI research you may wish to seriously contemplate).
Your own mind is your frame of reference. How do you propose breaking out to check for external ‘rules’?
I’m not sure what you mean by this. All theories are incomplete. They don’t explain all the facts. Many ‘breakthrough’ theories are due to hiving off one particular phenomenon from something that was previously explained holistically, explaining that phenomenon only and ignoring the rest. Newtonian gravity, for example, failed to explain why rocks fall faster than feathers in an atmosphere.
Reductionist science is inherently unable to explain all the facts. It must ignore many of them to come up with coherent theories at all. But the truth of the matter is there is no such thing as perfect objectivity, perfect isolation or a perfect vacuum. Everything is linked. If a single unknown exists anywhere within the phenomenological universe nothing has been completely explained.
I’m not surprised that you came to the same conclusions as he did, as you consistently use one of the same methods – building your conclusions into your premises. That is what I keep referring to as the circularity of your arguments.
In Yudkowsky’s case he is trying to refute Chalmers’ claim that actions do not necessarily imply consciousness by starting with the position that a certain action (internal monologue) does imply consciousness (or is consciousness), then using it to – lo and behold – show that internal monologue implies consciousness. This is not a syllogism. As Monty Python would point out, it is not even an argument. It is simply a contradiction buried in verbosity.
Consciousness is not an action. It is not something you do. It’s something you have (or don’t). More correctly – as consciousness is not ‘stuff’ – ‘conscious’ is something you either are or are not.
I know I said I wouldn’t pull Yudkowsky apart point by point but there is a particular furphy he is propagating in the article that has been around for a while that I want to nail.
He claims that the reason Chinese speakers have a short-term memory store of ten digits (I’ll dispense with CIs here to keep it simple) while English speakers only have seven is that Chinese digits are all single syllables (i.e. he believes short-term memory holds not seven items but rather ten syllables). This is nonsense that could only be swallowed by speakers of American English who equate thought with sub-vocalisation (as Yudkowsky appears to do).
A moment’s reflection (and a little calculation) would show that if this was the case you would expect Standard English speakers to have a short term memory capacity of around nine digits as there is only one SE digit of more than one syllable (as opposed to American English speakers who use ‘zero’ instead of ‘nought’ or ‘oh’).
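The back-of-envelope calculation alluded to here can be sketched in Python (the syllable counts are hand-counted, and the assumption that capacity is a fixed budget of ten syllables is the very claim under examination, so this is purely illustrative):

```python
# Syllable counts for the ten spoken digits, 0-9 (hand-counted).
american = [2, 1, 1, 1, 1, 1, 1, 2, 1, 1]   # 'zero' and 'seven' take two
standard = [1, 1, 1, 1, 1, 1, 1, 2, 1, 1]   # 'oh'/'nought' for zero

# Mandarin digits are all one syllable and span ten digits, so on the
# syllable-budget theory the capacity would be ten syllables.
CAPACITY_SYLLABLES = 10

def expected_span(syllable_counts):
    """How many digits should fit in a fixed syllable budget."""
    avg = sum(syllable_counts) / len(syllable_counts)
    return CAPACITY_SYLLABLES / avg

print(round(expected_span(american), 1))  # 8.3
print(round(expected_span(standard), 1))  # 9.1 - 'around nine digits'
```

Neither predicted span matches the observed seven, which is the point: on either variety of English the syllable-budget theory overshoots what speakers actually retain.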
You would also expect short-term memory involving word lists to be ten items long if the words were all of one syllable. This is not the case either. There is some inverse correlation between word length and short-term memory capacity, but people can typically retain twelve to eighteen syllables if they are all embedded in familiar everyday words, yet can still only retain seven words.
All the propagators of this furphy had to do was ask a native Chinese speaker (or have one on their research team, as the debunkers at James Cook University did) and they would have learned that literate Chinese speakers are practised in manipulating ideograms rather than phonemes. They don’t sub-vocalise digits, they visualise them. This is a completely different mechanism which apparently involves completely different sections of the cortex. Why it seems to be more efficient with regards to short-term memory is anyone’s guess, but there you go.
A bit of introspection would reveal to Yudkowsky that he too does most of his thinking by manipulating symbols and that the sub-vocalisation is added on (perhaps that’s the added overhead that restricts the short-term memory of sub-vocalisers). But he’s not very good at introspection, is he?
I think I’ve dispensed with the notions that (a) we know things are all ‘stuff’ (unless we are materialists, in which case we ‘know’ things are stuff in the same way Christians ‘know’ God is love) and (b) that stuff follows laws (as opposed to laws following stuff or something else entirely).
If your decisions are entirely causal there is no ‘you’ (i.e. no ‘itself’ or even ‘parts of itself’) because no matter where you try to draw the border around the system of ‘you’ (unless you take it as the entire universe) everything within your boundary is entirely defined by things outside the boundary. You have no intrinsic ‘function’ as you can’t be functionally separated from everyone and everything else. So to say ‘you’ have thoughts or ‘you’ make decisions is wrong. The universe does and some of them happen to occur in the place and time you identify as ‘you’ (because you have no choice but to thus identify).
Ditto if your actions are at least partially non-causal but ‘random’ (in the sense that particle decay or electron tunneling is random). If ‘you’ can’t influence it, it is meaningless to call it ‘you’.
There is only a ‘you’ if there is something that can act non-causally but with direction. That is, it can bring new, self-directed causality into the universe. Otherwise there are no decisions, only clockwork winding down towards the heat death. There is no virtue or vice, no crime or punishment. Criminals have no choice but to offend, juries have no choice about whether to convict or acquit, judges have no discretion over sentencing and shock jocks cannot decide not to foam at the mouth, call the punishment patently inadequate and demand the death penalty (hmm, that last one does seem to provide evidence for a deterministic universe).
Yep. What’s more the epiphenomenal p-zed would produce exactly the same output every time. E-p-zombies are built on the supposition that physicalists are correct in that the phenomenological universe is an entirely closed causal system.
But it’s probably worth pointing out that Chalmers doesn’t believe this. He, like me, leans heavily towards meta-ontological anti-realism (or at least he did when I knew him), which is to say we seriously doubt that any ontological statements can ever be shown to be true. The e-p-zombie is merely a construct used to highlight inconsistencies in physicalism. I don’t think it’s been particularly successful at that but I do believe its arguments are more internally consistent than those of physicalists (which isn’t to say much). If e-p-zombies tell us anything at all about our universe it’s only that physicalists are a bit silly. I disagree with Chalmers in that I don’t think they tell us a thing about what consciousness might be, though they can perhaps be made to tell some of us one thing it is not. Personally I doubt anyone but the most uncommitted waverers would be disabused of physicalism by e-p-zombies.
However I do think C-p-zombies (Cartesian automata) provide an intuitively persuasive argument against mind-body monism.
There is a world of complexity in the word ‘want’. A lot of people ‘want’ to get blind drunk on a regular basis even though that results in behaviour they normally wouldn’t ‘want’ to do. Would preventing them from getting drunk make them more or less free?
> I think I’ve dispensed with the notions that (a) we know things are all ‘stuff’ (unless we are materialists, in which case we ‘know’ things are stuff in the same way Christians ‘know’ God is love) and (b) that stuff follows laws (as opposed to laws following stuff or something else entirely).
I didn’t catch any knockdown arguments for either of those, actually. Induction is not nearly as fraught as you seem to think it is. If it were, it wouldn’t work so damned well in real life.
I’m willing to accept a merely very high but non-unity credence on materialism based on induction. Does that fly for you too?
Like, you say
> we seriously doubt that any ontological statements can ever be shown to be true
Well, cogito ergo sum seems solid. And, ‘there exists something that I am not aware of’ too. Beyond that is theories with more or less support.
> everything within your boundary is entirely defined by things outside the boundary. You have no intrinsic ‘function’ as you can’t be functionally separated from everyone and everything else.
The ‘You’ has input channels built into the model. All other influences are failures of free will. Remember that this free will thing isn’t binary – it comes in degrees. So the random effects from quantum mechanics and the environment like gamma rays impacting… those are limitations on free will as you describe. But since our brains are fairly robust, those effects are minimized. They’re not enough to eliminate free will, merely ding it up a little. We normally actually exhibit far less free will than that constraint reduces.
Add cocaine? Now we’re talking trouble. 😉
See, both of these (materialism and free will) boil down to being willing to see that big fuzzy things that have imperfections are easily adequate for our philosophical purposes. Just because materialistic free will isn’t perfect doesn’t mean we haven’t got any.
> Would preventing them from getting drunk make them more or less free?
It would make them less free on that subject, at least. But during the time they would otherwise have been falling-down drunk, they’re going to have far more free will. OBVIOUSLY. Jeez, how is this even a question? You seem to think that this is going to make smoke come out of my ears.
LOL! Induction works well in real life?!!!
What do you think gave us the ‘divine Watchmaker’ theory of Creationists? The ‘ether’ as a medium for the transmission of light waves? Pretty much every prejudiced, bigoted and racist assumption you have ever run across? What do you think your own ill-founded belief in Scientism stands on?
How do you think confirmation bias gets started and what do you think it does to subsequent attempts at induction?
Why do you think the overwhelming majority of scientific experiments fail to prove their thesis?
What do you think many of the successful ones are doing if not demonstrating the failure of earlier inductive logic, only in turn to be shown to be based on false induction themselves and superseded? (though I’m using inductive logic in the latter assumption so I’m very likely wrong).
Sympathetic magic is based entirely on inductive logic, though it is probably equally true to say inductive logic is a form of sympathetic magic. Your model for GAI certainly is.
If you read Nassim Nicholas Taleb’s The Black Swan you will discover the entire history of economic forecasting is a litany of failed induction.
If you read Karl Popper or Thomas Kuhn you will realise that science pretty much only progresses by uncovering failures in inductive logic.
And if you read David Hume you will understand that induction doesn’t have a leg to stand on.
Someday the sun will not rise in the east you know.
Clarification on my answer to the free will issue:
We also only have free will to the extent that we are implementing a conscious self-aware being with actual control over our actions. So if I’m drugged up, I do not for the time being have unimpaired free will. If I have instinctual responses, those are not part of my free will. etc.
So yes, in some respects we do not have free will all the time on every subject. I’m straight. That’s not a matter of free will.
> I’m still waiting for a computer program that can provide a decent challenge to a moderately experienced Go player.
Yup. We don’t know how to describe how to play Go very well, and brute force search is atrocious. And we don’t (yet) know how to program computers generally smarter than we are.
I’m not sure if that was just a comment or an actual argument, but I agree denotatively and, I think, connotatively.
> So lets see where we are with this argument.
>
> drachefly: All we need for AGI is functional emulation of the human brain.
>
> cabrogal: No. We would also need functional emulation of many things that interact with it – including society.
>
> drachefly: Ha! We already have a society. We don’t need to emulate it.
>
> cabrogal: But we need to emulate how it would interact so that it would be accessible to a functional ‘brain’ that is not embodied in human form.
>
> drachefly: Fuck you!
>
> Hmm, a profound and subtle answer. I will have to meditate upon it.
Hmm, a predictable and asinine answer – include only parts of our last two posts that don’t have anything to do with each other, making sure to leave out your provocation and everything but my response to it. Real honest, there.
Let’s expand that last part a teensy bit:
> cabrogal: A: we need to emulate how it would interact so that it would be accessible to a functional ‘brain’ that is not embodied in human form;
> B: It’ll be tough being the only one of its kind
> C: You have proposed torturing a baby by keeping it in a box and don’t think human rights exist.
>
> drachefly: A: I don’t see the problem, here.
> B: The answer is love.
> C: Fuck you.
But to address A in more detail, since you clearly didn’t get it: have you, umm, heard of robots? Humaniform robots? And remote controls? We already have robots where the only way you can tell they’re not human is that they don’t act human. That’s what all the sexbot folks are talking about – you know, the thing that got this whole conversation started? So, do you mean ‘not embodied in human form’ as a philosophical point or as a practical point?
If you can’t see that aspect of your proposal how about we make it more credible technologically, though less literally AGI?
What if you removed a brain from an otherwise non-viable newborn and used it to build a cyborg on which you could conduct your AGI experiments? Do you see the human rights problem there?
If your theory about the possibility of functionally emulating a human brain to create a human mind is correct, how would that be any different?
You have merely shifted the locus of your hubris and lack of intellectual rigour.
We are nowhere near the point where we can even create an artificial limb that provides the sort of sensory feedback (including proprioception) that exists in a real one. And that’s before we even get into all the other functions we know about, such as ‘body memory’, which seems to be encoded somewhere outside the brain (when I was studying psychology in the 70s and 80s it was assumed to reside in the autonomic nervous system but no-one really knew, just as no-one yet knows where narrative memory resides; neuronal engrams remain a theory with no experimental support despite recent nonsensical claims by Tonegawa et al.).
The fact is we are yet to even fully define how the body feeds back to the brain (even less so the mind) much less being on the verge of artificially emulating it.
Gotta take another break until tomorrow.
By my count I’m still one and a half comments behind you.
>Ahh, now I see.
>
> Everything is explicable via physicalism except for the things drachefly would not expect to be explicable via physicalism.
>
>Why didn’t you say so in the first place?
>And if some of those things are a factor in intelligence …?
Physical models provide limits on what you can actually learn. These limitations are a part of the theory. You can sometimes tell the sorts of questions that can and cannot be answered. Like, ‘in what order will each of these free neutrons decay?’ Nope, can’t answer that. ‘Do electrons have state outside of position/momentum?’ Yes, we can answer that (and the answer is ‘only spin’) (and the way we can tell will most likely come up later, what with the quantum stuff you asked about).
This isn’t a weakness of physicalism (or to be more relevant since as far as I can tell physicalism is a category of theories of mind while this topic is more overall ontology, ‘materialism’). That you consider it one is worrisome. Did you get your epistemology from a Cracker Jack box? Do you think we did?
Individual physical models are contingent and provisional; they are not physicalism. The process as a whole has room to absorb new complexities and processes. Whatever you’re proposing might* have a causal influence on cognition would have to be pretty drastic in order to not be describable under _some_ physical model.
*{speaking of which, when I said you were saying it would (as opposed to might), I was simply focusing in on the case where we actually disagree; sorry for being slightly unclear on that point}
>Actually there are physicists who do try to explain it via ‘hidden variables’. I am not aware of any great success on their part though.
Explain what with hidden variables? There was a recent proof that even global hidden variables aren’t compatible with experimental data except under formulations which leave the wavefunction real and minimize the hidden variables’ role (e.g. Bohm).
> As explained earlier, any true expression of free will is a singularity.
o.O
I’m going to suppose that by ‘singularity’ you mean a miniature Kurzweilian ‘event horizon for predictions’ singularity, since no other meaning I can think of makes the slightest sense here. I suppose you could be misinterpreting wavefunction collapse or something, but I’ll give you the credit to proceed to tentatively conclude that it’s not that.
So, let me explore this possibility. Once upon a time, two of my friends got married. Was that by their free will? You could see that coming years in advance. Was it only an expression of free will before it was predictable? Within a few moments of my finding out about their going out, I thought they were a good match and would have laid slightly favorable odds on their getting married. Was there no expression of free will in their relationship in the intervening ten years, or is simply making a justified medium-confidence prediction inadequate? Was it fractional free will since my prediction was low confidence? What if I had known them better or worse? What if someone else knew them better? Is free will relative to a model (think ‘observer’ but less personal), like classical randomness, or entropy?
Basically, what are you taking ‘free will’ to be, and why should I have the slightest value that instead of what I’ve described above as what I take free will to be?
… wow, this has gotten long and I haven’t even gotten to the QM yet.
Thanks for rejoining the discussion, drachefly. I thought you had abandoned me.
I’m currently engaged in a discussion on a very different topic and it will be the weekend before I have had the chance to look at your most recent comments properly. But I’d just like to clarify something.
When I was studying for my Master of Philosophy (still incomplete) just over ten years ago, ‘materialism’ was the ontological assertion that everything in the universe is ‘stuff’ (typically matter/energy). ‘Physicalism’ was the epistemological addendum to materialism: that therefore the behaviour of the entire universe is definable or describable with the laws of physics (albeit perhaps not as we know them). It was not popular at Charles Sturt University, where I studied, because to be robust it required either that the problems Hume pointed out in inductive logic be resolved or that physical laws be derived/discovered with something other than the scientific method. In fact the notion that there are such things as independent physical laws governing the universe, as opposed to a set of man-made descriptions that approximate previous observations and have been used to (more or less) predict future ones, is seen as pretty muddle-headed by almost every philosopher of science I am familiar with (I know of one exception, but he always declined to back his assertion with logical argument).
In practice people often said ‘physicalism’ when they meant ‘materialism’ because the Marxists had effectively appropriated the latter term and I may have been guilty of doing the same in some of my earlier comments.
I have run across people in the past who insist that all dualism is mind/body dualism (doubtless Plato, Hegel, Sankara and many others would have disagreed) but you are the first I have struck who equates physicalism with mind/body monism.
However I think your position is better described as ‘Scientism’ anyway, so that is the term I will use from now on.
Kurzweil borrowed his terminology either from mathematical singularities, mechanical singularities or temporal singularities as did I. I certainly didn’t borrow from Kurzweil. Cosmology borrowed from the same sources.
I simply mean a point in a process either beyond which it is impossible to make predictions or (as in this case) before which it is impossible to extrapolate prior conditions.
If free choice truly exists there is nothing prior to it that could have reliably predicted which way it would go. Certain arrogant people might think they can predict with confidence when, say, friends might get married, but to be doing anything other than blowing hot air they would have to be able to completely model the minds of both their friends within their own (at a bare minimum), otherwise they couldn’t possibly know whether the stress of wedding preparations would break them up or whether, when asked to say “I do”, the groom would say, “Err, actually when I saw how the best man looks in a suit I suddenly realised I’m gay. So no I don’t”.
quick comment:
> you are the first I have struck who equates physicalism with mind/body monism
That’s just my unfamiliarity with the term speaking. Thanks for clearing that up.
[…] for his book Siva Sutras: The Yoga of Supreme Identity and the commenter ‘drachefly’ on Maggie McNeill’s blog The Honest Courtesan for his persistent refusal to let go of his world view despite my merciless […]
Thanks again for this discussion, drachefly. It has helped me greatly to refine some of my own understandings.
Firstly I’d like to say that I see you are probably correct in some things. Not that we are close to developing true artificial intelligence nor even to being able to estimate when or if that will happen. Certainly not that trying to emulate what little we currently understand of the functioning of the brain will automatically produce something that is both conscious and intelligent in any sense of those words we usually use. In fact just to emulate what we already know of the organic, analog brain would take some kind of technology very different to the silicon-based digital technology we are currently using.
One thing I categorically disagree with David Chalmers about is that neurons are switching devices akin to silicon chips. He has let a popular contemporary metaphor overwhelm his usually razor sharp skepticism. Neurons do far more than merely switchboard electrochemical impulses. They grow and destroy dendrites, transmitter and receptor sites according to both intrinsic and extrinsic signals for example. While it might be possible to emulate some of this potential by adding a heck of a lot of redundant circuitry the fact is there are not as many particles in the universe as there are potential connections in a human brain.
But I am now very inclined to think p-zombies cannot exist (or be conceived of) and I am far from certain that consciousness of some sort cannot emerge from inorganic matter. In fact I now lean strongly to the view of many Indian philosophers that consciousness is immanent in everything in the universe. So even rocks possess consciousness. George Berkeley said as much too, but I had dismissed it as intellectual cowardice. I thought he was resiling from the loneliness and chaos of his own empiricism to seek solace in the arms of his God. I did him as great a disservice as I once did with Sankara and his insistence on the importance of Ishvara.
I don’t think the brain is akin to a mobile phone, ‘picking up’ consciousness from an external source. Nor do I think it’s akin to a light filament producing consciousness as a response to stimuli. I now think it is probably a filter and distorting lens, narrowing the overwhelming ‘sensation’ of consciousness that is the fundamental ‘stuff’ of the universe (not that it’s really ‘stuff’). The appearance of a four dimensional space-time universe consisting of matter, energy and causality is a bare shadow of the reality that our limited brain/mind stuff cannot possibly deal with and still expect to survive.
As I told Celos I am not actually a dualist but a radical monist. I now believe my monism has been insufficiently radical to provide a framework for expressing the reality it is possible to ‘apprehend’ or ‘realise’ once the feeble tools of sensory input, rationality and brain/mind are set aside. Thank you for helping me to understand this.
Your link to the Bob Zadek interview goes to a dead end. I think this is the correct URL, though: http://www.bobzadek.com/past-shows/2014/6/3/america-the-most-sexually-hypocritical-nation-on-earth?