"...weird nerds are increasingly marginalized as things optimize for legibility to large central funding sources. At this point, the movement has died....This really took off by 2018 or 2019. Since thing, EA has been a zombie movement that exists to launder the decision making of a certain kind of billionaire as the process of a distributed cognition."
just strikes me as sort of perfectly anti-correct. In many ways, EA is laundering weird ideas /into/ respectability. This is a totally conscious strategy, there are probably dozens of EA forum posts implicitly saying this.
This is Good. Laundering weird ideas into respectability is the basis of large-scale values change. If only weird nerds hold ideas, they have an intrinsically limited influence.
Mass persuasion (eg, success for EAs) basically looks like this.
And your suggestions: "Prohibitions on allowing people with histories as lobbyists, journalists, and similar to sit on EA decision committees." are basically saying [don't engage with powerful mainstream people], which is fine if you have some magical source of power that doesn't rely on already extant elites, but dumb if you. Some tech elites (Balaji, Elon, etc.) can effectively do this, but they don't work in policy, and this is a sustainable strategy for a whole movement.
that's definitely the counterargument to taking any of my suggestions: something like "keep doing the same thing and hope that indigenous EA culture wins out over the waves of newcomers with influence and money."
there might be some kind of tradeoff between a smaller, purer EA and a larger EA that ends up becoming substantially a carbon copy of what it replaced but with rationalist frills. if you assign weight to the model i offer above, i guess the thing to do is to crunch the numbers on which outcome seems best to you.
a caution i think is that the geeks may not stick around (or will find themselves removed) past a certain threshold of outsider controllers and you may want to weigh that, in particular, explicitly
The short course, in my mind, is my rule "Never give money to big charities", but this assumes the reason to give to charities is to accomplish the aims you intended rather than as a tax writeoff.
>David Chapman’s classic model of subcultural evolution is Geeks, MOPs, and Sociopaths...
I was first exposed to this via a comic spread on 4chan where anons wanted to play dnd, and one among them invite a Chad, who plays reasonably well, but then Chad invites Stacy who doesn't, and then the "game" becomes "flirt and hang out", until the nerds leave and try to make a new game/subculture.
I haven't seen the comic in years and would like assistance in finding it, if anyone has the time.
The main point of this post seems to be that the problem with institutional capture comes from investors having too much power over internal management, rather than too little: chasing whales led well-intentioned insiders astray. I think the standard argument for why this occurs in the private sector is the opposite: HR-style managerialism comes from too little investor power (whether because labor specialization empowers technocrats to act as monopolists of their skillsets, as in Burnham, or, less persuasively, because labor specialization reduces line-workers to reliance on monopsonist employers, as in Wyndham Lewis). And so my prior is that the faddish status-hungry flaws you point out actually come from a lack of memetic sovereignty among insiders, instead of self-sovereign power-seeking.
The personal advice remains the same: cultivate habits rather than memes, because memes transmit horizontally through us, like diseases, and so get selected for optimizing their own spread, whereas habits transmit vertically through us, like genes, and so get selected for optimizing our spread. But the mechanism seems entirely different: a lack of well-defined meaningful ownership over an institution means that nobody is looking out for its long-term health, and everyone is exploiting its resources in poorly aligned ways. And so my political takeaway is that funders should have more power over charities rather than less.
Ironically, I think the root of this view is that markets mostly work by clarifying ownership, not allocating scarce desirables: sure, stock trading constrained by near-perfect competition can as if magically figure out which contractor caused the Challenger explosion within 20 minutes, but the reason a city works is that we all agree on who does own each building so that we don't have to argue about who should own it (and so we mostly don't even think about where the rent goes or which entity our office leases from). In other words, clear ownership allows feedback to select for what owners actually value, while spreadsheet magic selects for signaling: first-gen EAs did well because they were the small-time funders of e.g. malaria prevention, and have gone wrong because they've become the managers of big-time funding spigots. And so I'd suggest that the movement should remain like small-time owners instead of like small-time bureaucrats for big-time owners.
I think this relates to certain blind-spots in standard economics, which focuses on already well-defined agents, who can simply be defined in terms of their expectations and preferences; I think the relevance of memetic sovereignty in this communications age means that we should also focus on the borders which dynamically define said agents. My shameless plug is that I tried writing about that here, in the paragraphs which follow "One way to partition these two types of intelligence is between borders and information": https://cebk.substack.com/p/the-power-of-babble
I unfortunately wrote this and rewrote it and considered rewriting another time to cleanly delineate two important classes of threats from agents with different values: financial (whale donors) and social (the hegemonizing swarm). Sadly I do not have it in me to iterate again. Much writing with proper punctuation is a weariness of the flesh.
I agree with your point about memetic sovereignty but it seems very difficult to maintain; just about every institution you can imagine has been overcome in the last decade.
I wonder if perhaps another solution for EA would be to do away with handling money altogether, and instead to focus on the pure rating/review aspect of what they do, letting others act on that information as they saw fit. There's no reason for these elements to be bundled.
The hilariously ironic thing about EA is that it specifically focuses on causes with not much awareness and not much support, so it's very vulnerable to a sociopathic megadonor entering EA and gaining a really efficient dollars -> prestige ratio!
And extra prestige bonus points for having a new-age sci-fi mystique. It's hard to imagine anyone getting this many puff pieces for donating to Oxfam.
If all donations become anonymous, wouldn't that create a whole host if potentially worse problems: people donating to really socially damaging causes, doing nepotism/backroom deals without any oversight etc?
This seems correct to me, and I enjoyed reading it.
Your comment distinguishing financial/social threats was likewise enlightening, although I kinda already knew that part; I had a pretty good model of the social threat, so the epistemic value to me from this was mostly delineating the financial threat.
It was also of non-negligible aesthetic value to me; clear thought has a beauty all its own.
Wanted to say thanks 🙏 for writing, and I'm sorry I don't have anything more concrete to contribute.
>No, I don’t mean you have to believe in God if you don’t want to.
From the lens of a hard consequentialist that is the modal EA enthusiast, isn't acting in the manner of the most pious saint not really any different from becoming that saint, given that the outcomes of a saint with faulty or absent theology are indistinguishable from one with truly held Christian beliefs? In fact, if you would act more admirably and with better consequences by fully emulating Christian beliefs and practices, a committed and selfless consequentialist would be *compelled* to commit to practice.
That is, it seems like their beliefs indicate they ought to effectively meme themselves into a full second birth into the light of Christ. Go to church, pray and read scripture daily, raise their children to love and fear the Lord, give to the poor, sick, and needy generously and anonymously, etc. Then they would be untethered from their temporal standing and free to do the Most Effective Good. 🙂
So I take it from this that in your discourse on Blue Rose that Shor is the sociopath taking advantage of everyone but inasmuch as political influence actors go, do you think Shor is actually, well, *effective*?
I agree with very few of his politics but I have always thought his political analysis was objective and on point.
hmm. im not quite sure what you're asking here. i think hes very sharp and i was impressed by his taking a stand a couple of years back on an empirical matter, but my basic belief is that for political operators political operations come first. there are surely cases where this isn't true, but even if you were able to isolate such rare birds its hard for me to imagine cases where it's possible to . . . i don't know. socially isolate them? i think it would take really heroic self-awareness and restraint on their part.
this isn't about shor's specific politics in particular. there are others in the same vein, some of whose politics im even more favorable to in theory, who i just will not interact with, because their social and cognitive mode is fundamentally political rather than exploratory.
to clarify what I was wondering was whether you thought he was capable when it came to prognosticating American politics (since he made his name as a prognosticator of American politics).
I had the impression from your excerpt that you had a negative view of him overall, though I wasn't sure if you meant "I think he is not a good person ethically" or "he is a grifter who is actually bad at prognosticating politics".
i should probably add a footnote about him, because while im using his as a sort of metonym for a class of people and i think hes pretty bad for ea i dont have anything against him personally and i believe hes probably as good a person as his profession permits, even supererogatory.
One quibble: the NYT frontrunning seems like it could significantly mitigate the damage this does to EA in the eyes of the public. Most damaging, I guess, would be if it gets polarized. I could see it being fully subsumed within the Aggregate Left, while, like, Fox and whoever destroy its reputation solely within the Aggregate Right.
Tons of things to respond to, but I'll try to stick to a few points for the sake of brevity. First off, this is a cultural/systems critique and is somewhat limited as such. EA as a philosophy, or EA as a list of charity recommendations, escape (mostly) unscathed. But that's OK, it just needed to be acknowledged.
Sticking to the first half. My primary issue is that this form of argument tends to go overboard when not paired with object level criticism. This is especially clear here. You, esteemed bullshit sniffer that you are, have discovered that charity, including EA charity, is actually disguised virtue signalling! Or maybe just people trying to feel better about themselves. Whatever it is, it isn't do-gooding. Other people simply aren't cynical enough and need to be more cynical. This works on anyone, whatever they say. Disagree? You just aren't cynical enough!
This tends toward the unfalsifiable. Cynicism about everything but cynicism. I think it it's unreasonable and unevidenced to deny that good-doing is powerful motivation behind many charitable efforts, even many non-EA ones. Without accounting for genuine good will, a major explanatory gap exists. This isn't to say that less noble impulses aren't also in the mix - these should definitely be scrutinized. It's certainly tempting to ignore goodwill - I think do-gooder-derogation is sometimes behind this. Other times its overcompensating over fear of potential embarrassment. Foolish naivete is much scarier than being merely overly critical. Whatever it is, I think it lead you to being a bit overly taken by the "Geeks, MOPs, and Sociopaths" theory. This also lead to your proposed remedies later, some of which seem quite extreme to me, though a few others are intriguing.
I think this is a good piece overall but:
"...weird nerds are increasingly marginalized as things optimize for legibility to large central funding sources. At this point, the movement has died....This really took off by 2018 or 2019. Since thing, EA has been a zombie movement that exists to launder the decision making of a certain kind of billionaire as the process of a distributed cognition."
just strikes me as sort of perfectly anti-correct. In many ways, EA is laundering weird ideas /into/ respectability. This is a totally conscious strategy; there are probably dozens of EA forum posts implicitly saying this.
This is Good. Laundering weird ideas into respectability is the basis of large-scale values change. If only weird nerds hold ideas, they have an intrinsically limited influence.
Mass persuasion (e.g., success for EAs) basically looks like this.
And your suggestions: "Prohibitions on allowing people with histories as lobbyists, journalists, and similar to sit on EA decision committees." are basically saying [don't engage with powerful mainstream people], which is fine if you have some magical source of power that doesn't rely on already extant elites, but dumb if you don't. Some tech elites (Balaji, Elon, etc.) can effectively do this, but they don't work in policy, and it isn't a sustainable strategy for a whole movement.
that's definitely the counterargument to taking any of my suggestions: something like "keep doing the same thing and hope that indigenous EA culture wins out over the waves of newcomers with influence and money."
there might be some kind of tradeoff between a smaller, purer EA and a larger EA that ends up becoming substantially a carbon copy of what it replaced but with rationalist frills. if you assign weight to the model i offer above, i guess the thing to do is to crunch the numbers on which outcome seems best to you.
a caution i think is that the geeks may not stick around (or will find themselves removed) past a certain threshold of outsider controllers and you may want to weigh that, in particular, explicitly
The short course, in my mind, is my rule "Never give money to big charities", but this assumes the reason to give to charities is to accomplish the aims you intend rather than to get a tax write-off.
Home run after home run, Robot.
🙏
>David Chapman’s classic model of subcultural evolution is Geeks, MOPs, and Sociopaths...
I was first exposed to this via a comic spread on 4chan where anons wanted to play D&D, and one among them invites a Chad, who plays reasonably well, but then Chad invites a Stacy, who doesn't, and then the "game" becomes "flirt and hang out", until the nerds leave and try to make a new game/subculture.
I haven't seen the comic in years and would like assistance in finding it, if anyone has the time.
This? https://i.imgur.com/sdOkUVr.jpeg
It is fairly unlike my memory of it, but it probably is what I saw all those years ago. Thanks for finding it!
The main point of this post seems to be that the problem with institutional capture comes from investors having too much power over internal management, rather than too little: chasing whales led well-intentioned insiders astray. I think the standard argument for why this occurs in the private sector is the opposite: HR-style managerialism comes from too little investor power (whether because labor specialization empowers technocrats to act as monopolists of their skillsets, as in Burnham, or, less persuasively, because labor specialization reduces line-workers to reliance on monopsonist employers, as in Wyndham Lewis). And so my prior is that the faddish status-hungry flaws you point out actually come from a lack of memetic sovereignty among insiders, instead of self-sovereign power-seeking.
The personal advice remains the same: cultivate habits rather than memes, because memes transmit horizontally through us, like diseases, and so get selected for optimizing their own spread, whereas habits transmit vertically through us, like genes, and so get selected for optimizing our spread. But the mechanism seems entirely different: a lack of well-defined meaningful ownership over an institution means that nobody is looking out for its long-term health, and everyone is exploiting its resources in poorly aligned ways. And so my political takeaway is that funders should have more power over charities rather than less.
Ironically, I think the root of this view is that markets mostly work by clarifying ownership, not allocating scarce desirables: sure, stock trading constrained by near-perfect competition can as if magically figure out which contractor caused the Challenger explosion within 20 minutes, but the reason a city works is that we all agree on who does own each building so that we don't have to argue about who should own it (and so we mostly don't even think about where the rent goes or which entity our office leases from). In other words, clear ownership allows feedback to select for what owners actually value, while spreadsheet magic selects for signaling: first-gen EAs did well because they were the small-time funders of e.g. malaria prevention, and have gone wrong because they've become the managers of big-time funding spigots. And so I'd suggest that the movement should remain like small-time owners instead of like small-time bureaucrats for big-time owners.
I think this relates to certain blind-spots in standard economics, which focuses on already well-defined agents, who can simply be defined in terms of their expectations and preferences; I think the relevance of memetic sovereignty in this communications age means that we should also focus on the borders which dynamically define said agents. My shameless plug is that I tried writing about that here, in the paragraphs which follow "One way to partition these two types of intelligence is between borders and information": https://cebk.substack.com/p/the-power-of-babble
There's a lot here.
I unfortunately wrote this and rewrote it and considered rewriting another time to cleanly delineate two important classes of threats from agents with different values: financial (whale donors) and social (the hegemonizing swarm). Sadly I do not have it in me to iterate again. Much writing with proper punctuation is a weariness of the flesh.
I agree with your point about memetic sovereignty but it seems very difficult to maintain; just about every institution you can imagine has been overcome in the last decade.
I wonder if perhaps another solution for EA would be to do away with handling money altogether, and instead to focus on the pure rating/review aspect of what they do, letting others act on that information as they see fit. There's no reason for these elements to be bundled.
and who would be the sociopathic dramatis personae of the online sphere you belong to? awkward...
lmao im not naming names
The hilariously ironic thing about EA is that it specifically focuses on causes with not much awareness and not much support, so it's very vulnerable to a sociopathic megadonor entering EA and gaining a really efficient dollars -> prestige ratio!
And extra prestige bonus points for having a new-age sci-fi mystique. It's hard to imagine anyone getting this many puff pieces for donating to Oxfam.
If all donations become anonymous, wouldn't that create a whole host of potentially worse problems: people donating to really socially damaging causes, nepotism and backroom deals without any oversight, etc.?
This seems correct to me, and I enjoyed reading it.
Your comment distinguishing financial/social threats was likewise enlightening, although I kinda already knew that part; I had a pretty good model of the social threat, so the epistemic value to me from this was mostly delineating the financial threat.
It was also of non-negligible aesthetic value to me; clear thought has a beauty all its own.
Wanted to say thanks 🙏 for writing, and I'm sorry I don't have anything more concrete to contribute.
>No, I don’t mean you have to believe in God if you don’t want to.
From the lens of the hard consequentialist that is the modal EA enthusiast, isn't acting in the manner of the most pious saint really no different from becoming that saint, given that the outcomes of a saint with faulty or absent theology are indistinguishable from those of one with truly held Christian beliefs? In fact, if you would act more admirably and with better consequences by fully emulating Christian beliefs and practices, a committed and selfless consequentialist would be *compelled* to commit to the practice.
That is, it seems like their beliefs indicate they ought to effectively meme themselves into a full second birth into the light of Christ. Go to church, pray and read scripture daily, raise their children to love and fear the Lord, give to the poor, sick, and needy generously and anonymously, etc. Then they would be untethered from their temporal standing and free to do the Most Effective Good. 🙂
Better to donate to Democrats than Yale. I’ll take that any day.
So I take it from your discourse on Blue Rose that Shor is the sociopath taking advantage of everyone, but as far as political influence actors go, do you think Shor is actually, well, *effective*?
I agree with very little of his politics, but I have always thought his political analysis was objective and on point.
>do you think Shor is actually, well, *effective*
hmm. im not quite sure what you're asking here. i think hes very sharp and i was impressed by his taking a stand a couple of years back on an empirical matter, but my basic belief is that for political operators political operations come first. there are surely cases where this isn't true, but even if you were able to isolate such rare birds its hard for me to imagine cases where it's possible to . . . i don't know. socially isolate them? i think it would take really heroic self-awareness and restraint on their part.
this isn't about shor's specific politics in particular. there are others in the same vein, some of whose politics im even more favorable to in theory, who i just will not interact with, because their social and cognitive mode is fundamentally political rather than exploratory.
oh ok that makes sense to me.
to clarify, what I was wondering was whether you thought he was capable when it came to prognosticating American politics (since he made his name as a prognosticator of American politics).
I had the impression from your excerpt that you had a negative view of him overall, though I wasn't sure if you meant "I think he is not a good person ethically" or "he is a grifter who is actually bad at prognosticating politics".
i should probably add a footnote about him, because while im using him as a sort of metonym for a class of people and i think hes pretty bad for ea, i dont have anything against him personally and i believe hes probably as good a person as his profession permits, even supererogatory.
One quibble: the NYT frontrunning seems like it could significantly mitigate the damage this does to EA in the eyes of the public. Most damaging, I guess, would be if it gets polarized. I could see it being fully subsumed within the Aggregate Left, while, like, Fox and whoever destroy its reputation solely within the Aggregate Right.
>NYT frontrunning
I'm not sure what you're pointing at here--do you mean about the more-or-less neutral-to-positive interview NYT put out on SBF a few days ago?
>Most damaging, I guess, would be if it gets polarized.
oh boy that would be stupid. im a bit relieved so far that it hasnt been picked up more broadly in media tho; fingers crossed.
This series of screenshots looks to me like a concerted effort to intimate that SBF was the victim of something that could happen to anyone.
https://twitter.com/PatrickWStanley/status/1592399856618057730
wow ok yeah that's much worse than i had thought
jeez. oof.
see ok this is what i said i would ban you for
🥱
i dont mind criticism but if you are rude to other commenters i will ban you. kindly be advised.
Tons of things to respond to, but I'll try to stick to a few points for the sake of brevity. First off, this is a cultural/systems critique and is somewhat limited as such. EA as a philosophy, or EA as a list of charity recommendations, escapes (mostly) unscathed. But that's OK, it just needed to be acknowledged.
Sticking to the first half. My primary issue is that this form of argument tends to go overboard when not paired with object level criticism. This is especially clear here. You, esteemed bullshit sniffer that you are, have discovered that charity, including EA charity, is actually disguised virtue signalling! Or maybe just people trying to feel better about themselves. Whatever it is, it isn't do-gooding. Other people simply aren't cynical enough and need to be more cynical. This works on anyone, whatever they say. Disagree? You just aren't cynical enough!
This tends toward the unfalsifiable. Cynicism about everything but cynicism. I think it's unreasonable and unevidenced to deny that good-doing is a powerful motivation behind many charitable efforts, even many non-EA ones. Without accounting for genuine good will, a major explanatory gap exists. This isn't to say that less noble impulses aren't also in the mix - these should definitely be scrutinized. It's certainly tempting to ignore goodwill - I think do-gooder derogation is sometimes behind this. Other times it's overcompensating for fear of potential embarrassment. Foolish naivete is much scarier than being merely overly critical. Whatever it is, I think it led you to be a bit overly taken by the "Geeks, MOPs, and Sociopaths" theory. This also led to your proposed remedies later, some of which seem quite extreme to me, though a few others are intriguing.
no
also fuck off
yes i am famously sympathetic to antisemites https://twitter.com/eigenrobot/status/1566632353493024770