Review: The AI Does Not Hate You
Author: everythingstudies.com
Full Title: Review: The AI Does Not Hate You
URL: https://everythingstudies.com/2020/03/20/review-the-ai-does-not-hate-you/
Rationalist discourse tends to assume materialism for the most part, to be finicky with the meanings of words, to take ideas seriously and with charity, and to at least try to avoid seeing beliefs as markers of social allegiance, evaluating them instead according to logical coherence and empirical plausibility. The combination of all these is rare and powerful, while also feeling obviously correct to me. It's homey, it's what I identify with, and it's what I want to protect; I feel solidarity with rationalists whenever they're attacked or sneered at, even over things I don't agree with.
My unscientific impression is that preoccupation with AI, transhumanism and polyamory rapidly decreases with distance from the inner circle, and that hardcore utilitarianism is at least controversial all over.
In fact, if you're anywhere near my corner of Twitter, the topic of "what rationality even is and how it differs from post-rationality and meta-rationality" comes up periodically and most often turns into a confused mess as people throw their pithy and partially contradictory takes into the ring. In truth it's all a loose network of people and writing, barely held together by complicated, criss-crossing strands of common beliefs, attitudes and references.
Judging by what I know of Yudkowsky and his writing, he's excited about the possibilities. I can't justify the feeling that I'm not, and I suspect teenage me would look down on current me for it. It's just that the sheer enormity of the consequences is too much for me to handle on an emotional level, and I don't feel at all comfortable with what I perceive to be an expectation of little to no sensitivity to future shock among core rationalists. Those eager to spend a lot of time thinking and talking about the prospect of a Singularity feel alien to me for this reason.
And I realised on some level that this was what the instinctive "yuck" was when I thought about the arguments for AI risk. "I feel that parents should be able to advise their children," I said. "In any future involving AGI happening in their lifetimes, I can't advise my children. I can't tell them how best to live their lives because I don't know what their lives will look like, or even if they'll be recognisable as human lives." I then paused, as instructed by Anna, and eventually boiled it down. "I'm scared for my children." And at this point I apologised, because I found that I was crying.