
📰 Review: The AI Does Not Hate You

Author: everythingstudies.com

Full Title: Review: The AI Does Not Hate You

URL: https://everythingstudies.com/2020/03/20/review-the-ai-does-not-hate-you/

Highlights from March 2nd, 2021.

Rationalist discourse tends to assume materialism for the most part, to be finicky with the meanings of words, to take ideas seriously and with charity, and at least to try to avoid seeing beliefs as markers of social allegiance, evaluating them instead according to logical coherence and empirical plausibility. The combination of all these is rare and powerful, while also feeling so obviously correct to me. It’s homey, it’s what I identify with, and it’s what I want to protect; I feel solidarity with rationalists whenever they’re attacked or sneered at, even for things I don’t agree with.
My unscientific impression is that preoccupation with AI, transhumanism and polyamory rapidly decreases with distance from the inner circle, and that hardcore utilitarianism is at least controversial all over.
In fact, if you’re anywhere near my corner of Twitter, the topic of “what rationality even is and how it differs from post-rationality and meta-rationality” comes up periodically and most often turns into a confused mess as people throw their pithy and partially contradictory takes into the ring. In truth it’s all a loose network of people and writing, barely held together by complicated, criss-crossing strands of common beliefs, attitudes and references.
Judging by what I know of Yudkowsky and his writing, he’s excited about the possibilities. I can’t justify the feeling that I’m not, and I suspect teenage me would look down on current me for it. It’s just that the sheer enormity of the consequences is too much for me to handle on an emotional level, and I don’t feel at all comfortable with what I perceive to be an expectation of little to no sensitivity to future shock among core rationalists. Those eager to spend a lot of time thinking and talking about the prospect of a Singularity feel alien to me for this reason.
And I realised on some level that this was what the instinctive ‘yuck’ was when I thought about the arguments for AI risk. ‘I feel that parents should be able to advise their children,’ I said. ‘Anything involving AGI happening in their lifetimes – I can’t advise my children on that future. I can’t tell them how best to live their lives because I don’t know what their lives will look like, or even if they’ll be recognisable as human lives.’ I then paused, as instructed by Anna, and eventually boiled it down. ‘I’m scared for my children.’ And at this point I apologised, because I found that I was crying.