My view of power is more classically liberal. In his book “Liberalism: The Life of an Idea,” Edmund Fawcett describes it neatly: “Human power was| sun.pjh.is
All of the entries posted on Sunglasses, Ideally tagged Quote| sun.pjh.is
The latest entries posted on Sunglasses, Ideally| sun.pjh.is
Ethan Mollick (my emphasis): In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about| sun.pjh.is
People no longer understand ambiance or aesthetics because they’re too distracted by the macro. We have lost the ability to notice details. Once you become a walker, you realize how much people are missing out on life. On walks, you see how perfect light can be. You see how many shades of blue exist between different flowers and skies. You notice the Grandmother who tends to her flowers. You feel her love as she caresses them. You realize that we can’t automate everything. A friend compla...| Sunglasses, Ideally
Jack Clark’s newsletter (my emphasis): The things people worry about keep on happening: A few years ago lots of people working in AI safety had abstract concerns that one day sufficiently advanced systems might start to become pathologically sycophantic, or might ‘fake alignment’ to preserve themselves into the future, or might hack their environments to get greater amounts of reward, or might develop persuasive capabilities in excess of humans. All of these once academic concerns have ...| Sunglasses, Ideally
In Tom Davidson’s words: In expectation, future AI systems will better live up to human moral standards than a randomly selected human. Because:| sun.pjh.is
Some senior figures at frontier labs think that even today’s machine learning algorithms are already sufficient to automate most white-collar work| sun.pjh.is
We now know that “LLMs + simple RL” is working really well. So, we’re well on track for superhuman performance in domains where we have many examples of good reasoning and good outcomes to train on. We have or can get such datasets for surprisingly many domains. We may also lack them in surprisingly many. How well can our models generalise to “fill in the gaps”? It’s an empirical question, not yet settled. But Dario and Sam clearly think “very well”. Dario, in particular, is sayi...| Sunglasses, Ideally
Previously| sun.pjh.is
It can be hard to “feel the AGI” until you see an AI surpass top humans in a domain you care deeply about. Competitive coders will feel it within a| sun.pjh.is
“Henry, there’s something I would like to tell you, for what it’s worth, something I wish I had been told years ago. You’ve been a consultant for a long time, and you’ve dealt a great deal with top secret information. But you’re about to receive a whole slew of special clearances, maybe fifteen or twenty of them, that are higher than top secret. “I’ve had a number of these myself, and I’ve known other people who have just acquired them, and I have a pretty good sense of what...| Sunglasses, Ideally
These are all sort of basic software things, but you’ve seen how crappy enterprise software can be; just deploying these ‘best practice’ UIs to the real world is insanely powerful. This ended up helping to drive the A350 manufacturing surge and successfully 4x’ing the pace of manufacturing while keeping Airbus’s high standards of quality. https://nabeelqu.co/reflections-on-palantir| Sunglasses, Ideally
Why does privacy matter? What are the best principled arguments for it? Where to set the boundary [i.e., when is it best to require that certain actions be more broadly known]? My own answer to this bundle of questions isn't very good. I've read a few classic books, papers,… — Michael Nielsen (@michael_nielsen) October 3, 2024 It's tempting to give purely consequence-based arguments (in terms of forestalled knowledge and invention and life experience and so on). But while I agree, somehow...| Sunglasses, Ideally
to whatever extent you can enjoy the arising of superintelligence I think that’ll be a lot more comfortable and enjoyable than contracting and trying to put your whole life force into extracting your last couple years of intelligence alpha — Nick (@nickcammarata) September 12, 2024 like this hasn’t propagated into every chart yet and the charts where it hasn’t you’re still ahead of the models. But I’d guess it won’t take more than a year or two before that strategy doesn’t wor...| Sunglasses, Ideally
The prompt: oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step Use the example above to decode: oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz The chain of thought: First, what is going on here? We are given: First, an example: “oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step” We can see that when “oyfjdnisdr rtqwainr acxz mynzbhhx” is transformed to “Think step by step” Our task is: Use the example above to decode: “oyekaijzdf aaptcg suaokybhai ouow aqht ...| Sunglasses, Ideally
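The excerpt cuts off before the rule is stated, but the worked example already pins it down: pair up the letters of each ciphertext word and average their alphabet positions, and “oyfjdnisdr rtqwainr acxz mynzbhhx” comes out as “think step by step”. A minimal Python sketch of that decoding (the function names are mine, not the post’s):

# Decode the cipher implied by the example pair:
# average the alphabet positions of each consecutive pair of letters.
def decode_word(word: str) -> str:
    positions = [ord(c) - ord("a") + 1 for c in word]
    pairs = zip(positions[0::2], positions[1::2])
    return "".join(chr((a + b) // 2 + ord("a") - 1) for a, b in pairs)

def decode(message: str) -> str:
    return " ".join(decode_word(w) for w in message.split())

# Sanity check against the given example, then decode the target string.
assert decode("oyfjdnisdr rtqwainr acxz mynzbhhx") == "think step by step"
print(decode("oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz"))
# -> there are three rs in strawberry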
“It’s dark because you are trying too hard. Lightly child, lightly. Learn to do everything lightly. Yes, feel lightly even though you’re feeling| sun.pjh.is
That’s Stephen West on Žižek—on the most popular kind of overconfidence. The philosopher seeks “cracks in the symbolic edifice that grounds our social stability.” And yeah, most people resist such thoughts—correctly.| Sunglasses, Ideally
Peter Singer famously argues that it’s difficult to come up with criteria to explain why killing babies is wrong without those criteria also| sun.pjh.is
Some notes on “Fear of Oozification”. I accept the evolutionary picture. So do Yudkowsky, Shulman, Hanson, Carlsmith and Bostrom. Key disputes:| sun.pjh.is
Bostrom defines a Singleton as follows: A world order in which there is a single decision-making agency at the highest level. Among its powers| sun.pjh.is
I don’t know who first said this. But Nathan @labenz repeats it often. He’s right to do so| sun.pjh.is
Over the past few years, Joe Carlsmith has published several blog posts that nicely articulate views that I’ve also arrived at, for similar reasons, before he published the posts.¹ My own thinking has certainly been influenced by him, but on non-naturalist realism, deep atheism and AI existential risk, and a few other topics in AI and metaethics, I was definitely there-ish before he published. But: I had not written up these views in anything approaching the quality of his blog posts. I’...| Sunglasses, Ideally
Holden Karnofsky: One of the reasons I’m so interested in AI safety standards is because kind of no matter what risk you’re worried about, I think you hopefully should be able to get on board with the idea that you should measure the risk, and not unwittingly deploy AI systems that are carrying a tonne of the risk, before you’ve at least made a deliberate informed decision to do so. And I think if we do that, we can anticipate a lot of different risks and stop them from coming at us too...| Sunglasses, Ideally
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on “counterfactual” task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the co...| Sunglasses, Ideally
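To make the idea of a “counterfactual task variant” concrete, here is one way such a pair could be built (my illustration; the paper’s eleven tasks may differ): keep the surface task fixed, say two-digit addition, while swapping out the default assumption, here base 10 for base 9. A model that has abstracted the procedure, rather than memorized base-10 patterns, should handle both prompts.

# Hypothetical sketch: the same task in a default world and a counterfactual one;
# only the assumed base changes, so surface form stays similar but the rules differ.
def to_base(n: int, base: int) -> str:
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits)) or "0"

def addition_prompt(a: int, b: int, base: int) -> tuple[str, str]:
    question = f"In base {base}, what is {to_base(a, base)} + {to_base(b, base)}?"
    return question, to_base(a + b, base)

print(addition_prompt(27, 58, base=10))  # default variant: ('In base 10, what is 27 + 58?', '85')
print(addition_prompt(27, 58, base=9))   # counterfactual variant: ('In base 9, what is 30 + 64?', '104')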
We examine whether substantial AI automation could accelerate global economic growth by about an order of magnitude, akin to the economic growth effects of the Industrial Revolution. We identify three primary drivers for such growth: 1) the scalability of an AI labor force restoring a regime of increasing returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive increase in output from rapid automation occurring over a brief period of time. Against this backdrop, we eval...| Sunglasses, Ideally
via @danielfagella| Sunglasses, Ideally
My picture of the world is drawn in perspective, and not like a model to scale. The foreground is occupied by human beings and the stars are all as small as threepenny bits.| Sunglasses, Ideally
Humanity’s self-alienation has reached such a degree that it can experience its own destruction as an aesthetic pleasure of the first order. via Curtis.| Sunglasses, Ideally
Whether or not you obsess over the particulars of overpopulation, Malthus’s theory is more broadly one of human pressures on the environment, and the lack of suitable equilibrating mechanisms at anything other than extremely high human costs. The simplest version of Malthus is an account of how the world runs when all essential factors do not grow at the same rate, and in particular those growth rates diverge in a roughly consistent and sustained manner. At some point one of those factors b...| Sunglasses, Ideally
Tyler Cowen recommends Mill’s essays on Bentham and Coleridge as among the best essays ever written, a great introduction to Mill’s thought, and “the most sophisticated perspective on a form of neo-Benthamism today, namely the effective altruism as a movement”. I found the key ideas familiar (partly because Tyler is constantly recommending them), but I was glad to read them from the man himself. According to Mill, Bentham’s chief contribution was to exemplify and spread the idea that...| Sunglasses, Ideally
One way to approach the puzzle of deontic constraint is to ask whether rational action necessarily has a consequentialist structure, or whether it can incorporate nonconsequential considerations. […] Unfortunately, many theorists (philosophers and social scientists) have been misled into believing that the technical apparatus of rational choice theory, introduced in order to handle the complications of probabilistic reasoning, is also one that prohibits the introduction of nonconsequential ...| Sunglasses, Ideally