calls into question whether or not the public has an opinion. I was thinking about the example of tariffs for instance. Most people are going on bellyfeel so you see maybe 38% are net positive on tariffs
If you broke it down in terms of interest groups on a "one dollar one vote" basis the net positive has to be a lot worse: to the retail, services and constructor sectors tariffs are just a cost without any benefits, even most manufacturers are on the fence because they import intermediate goods and want access to foreign markets. The only sectors that are strongly for it that I can suss out are steel and aluminum manufacturers who are 2% or so of the GDP.
The public and the interest groups are on the same side of 50% so there is no contradiction, but in this particular case I think the interest groups collectively have a more rational understanding of how tariffs effect the economy than do "the people". As Habermas points out, it's quite problematic giving people who don't really know a lot a say about things even though it is absolutely necessary that people feel heard.
[1] Interestingly this book came out in 1964 just before all hell broke loose in terms of Vietnam, counterculture, black nationalism, etc. -- right when discontent when from hypothetical to very real
The problem isn't giving the people a say; it's that the people have stopped electing smart people who do know a lot.
Certainly though, a big part of why that is is that people think they know a lot, and that their opinion should be given as much weight as any other consideration when it comes to policymaking.
Personally, I think a big driver of this belief is a tendency in the West to not challenge each other's views or hold each other accountable - "don't talk politics at Thanksgiving" sort of thing
(Of course there's a long discussion to be had about other contributors to this, such as lobbying and whatnot)
> Personally, I think a big driver of this belief is a tendency in the West to not challenge each other's views or hold each other accountable - "don't talk politics at Thanksgiving" sort of thing
We’re in such a “you’re either with us or against us” phase of politics that a discussion with the “other team” is difficult.
Combine that with people adopting political viewpoints as a big part of their personality and any disagreement is seen as a personal attack.
Sure, but those are still part of what I'm talking about. Someone taking the "you're with us or against us" position? Call them out on it and tell them they're doing more harm than good to their cause. Someone taking a disagreement way too personally? Try to help them take a step back and get some perspective.
Of course, there's a lot more nuance than all that - sometimes, taking things personally is warranted. Sometimes, people really are against us. But, that shouldn't be the first thing people jump to when faced with someone who disagrees - or, more commonly, simply doesn't understand - where they're coming from.
And of course, if it turns out you can't help them understand your position, then you turn to the second part of what I said - accountability. Racist uncle won't learn? Stop inviting them to holidays. Unfortunately, people tend to jump to this step right away, without trying to make them understand why they might be wrong, and without trying to understand why they believe what they believe (they're probably just stupid and racist, right?) - and that's how you end up driving people more into their echo chamber, as you've given them more rational as to why the other side really is just "for us or against us"
(I'm not suggesting any of this is easy. I'm just saying it seems to play a part in contributing to the political climate.)
A lot of families have broken apart due to politics in the past decade in the US.
---
> A 2022 survey found that 11% of Americans reported ceasing relations with a family member due to political ideas.
> A more recent October 2024 poll by the American Psychiatric Association (APA) indicated a higher figure, with 21% of adults having become estranged from a family member, blocked them on social media, or skipped a family event due to disagreements on controversial topics.
I'm not entirely sure what your point is in telling me this? I mean... I'm literally advocating for that as a measured response to things?
I'll just say that "ceasing relations with a family member" is not "breaking a family apart"
(This is the sort of rhetoric usually used by those who were kicked out of the family; blame the politics for ripping their family apart and not their shitty beliefs)
Too reductive for my liking. I always found Zappa’s persona to be hypocritical—-making a point of condemning the drug culture of his contemporaries while drinking gallons of coffee a day and smoking like a chimney.
Somewhere, I am not the historian to say, teaching people the basics of an education, that being “reading, writing and arithmetic”, failed to recognize the critical role that communications play in everything people do, and try to do. That phrase ought to be “reading, writing, arithmetic, and conveying understanding” because that would include why one reads and why one writes, and connects that to the goal of conveying an understanding you have to others. However, this is the root issue.
General society being generally poor communicators is caused by this lapse in our understanding of education. The understanding that the purpose of an education is to both use it and to help others understand what you may and they do not, as well as understand how to gain understanding from others that they have and you do not.
Because we do not teach that an education is really learning how to understand and how to convey understanding in others, the general idea of an education is to be an owner of a specialized skill set, which one sells to the highest bidder.
This has caused education to be replaced by rote memorization. Which in turn created a population that is only comfortable with direct question and answer interactions, not exploratory debate for shared understanding. This set the stage for educators, nationwide, to teach students to be databases and not critically analyzing understanders of their vocations.
Note that the skills for conveying understanding in others, additionally carries the skill how to recognize fraudulent speech. Which, as of Dec 2025, is the critical skill the general population does not have that is potentially the death of the United States.
When a population of people do not have an emphasis on critical analysis, but rote memorization, as the basis of their education that then creates a population that has heightened sensitivity to controversial lines of reasoning, lines of reasoning where there are no clear answers. Life itself has a large series of mysteries based on faith, religion being chief, which in a population that is comfortable with debate to convey understanding is perfectly safe to engage in discussions about mysteries within these areas requiring faith. But a society that is not comfortable with such discussions, one that thinks debate’s purpose is to "win, at all costs" then such discussions are taboo. They get shut down immediately. When people cannot debate to understand, but as a combat, learning is not accomplished. And useful critical analysis skills are not taught.
I have no idea if such a national situation can be manufactured, but I believe this is where we are at as a nation. We no longer produce enough adults with developed critical analysis skills to support democracy. Democracy depends upon an educated population with active critical analysis capabilities, a population that can debate to a shared understanding and accomplish shared goals. That foundational population is not there.
This can be fixed, but it may take more than a generation. Our educational system needs foundational revisions, which include additional core subjects, chief of which being how to communicate and convey understanding in others. Which lies at the roots of our demise, this lack of this basic skill.
I think you’re onto something here with people thinking they know a lot, but isn’t the real issue anonymous internet posting? Having to take zero responsibility for sharing ideas has ruined intelligent discourse society-wide: Web 2.0, then social media, turned out to be the beginning of the end of experts having credibility. Journalists, scientists, all experts became demonized by persuasive bots or anonymous internet posters. Instead of a world of democratized intelligence as promised, we got a world of “anyone’s opinion is valid, and I don’t even need to know their credentials or who they are.” If we forced everyone to have to stand by everything they said online on every forum, we’d have a lot fewer strong opinions and conspiracies, IMO. People (voters) would be thinking a lot harder about their ideas and seeing a lot fewer validations of the extreme parts of themselves.
My hottest take is that it wasn’t anonymity, but auto correct, that spelled (literally) the end. Without autocorrect and auto-grammar, ideas were tagged with the credential/authority of “I can use they’re / their / there” correctly, which was a high ass bar.
It’s still “new tech” to our monkey brains and it takes a long time, and probably a lot of destruction, before our we develop better cultural norms for dealing with it. Our cultural immune system has only just started to kick in.
You think people don't have those ideas in person? They absolutely do, and not being anonymous does not stop most of them.
While I agree the Internet has contributed to this belief, I do not see how being anonymous or not would fix that. To say nothing of the myriad other issues that would come with a non-anonymous Internet.
The cultural chasm between technocrats and politicians reminds me of the old trope about "women are from Venus and men are from Mars". That hasn't been bridged either, has it? It's a bit like those taboo topics here on HN where no good questions can be entertained by otherwise normal adults.
Here's something from someone we might call a manchild
For I approach deep problems like cold baths: quickly into them and quickly out again. That one does not get to the depths that way, not deep enough down, is the superstition of those afraid of the water, the enemies of cold water; they speak without experience. The freezing cold makes one swift.
Lichtenberg has something along these lines too, but I'll need to dig that out :)
Here's a consolation that almost predicts Alan Watts:
To make clever people [elites?] believe we are what we are not is in most instances harder than really to become what we want to seem to be.
I think parent-poster is saying that politicians and technocrats have a gulf between how their view the world and how well they communicate with one-another. However after that point (ironically?) it isn't clear what what their purpose is for including the quotes.
I think the most-charitable interpretation for the "baths" quote [0] might be: "For the people I'm trying to communicate with, lightly touching on deep subjects is actually fine." (Both most-charitable to Nietzche, and also to the poster quoting him.)
After thinking some time, I think the baths quote is saying that, contrary to common wisdom, it isn't necessary to have intense, long discussions about "deep" subjects - small, quick conversations can still be as productive.
I think there's some truth here. I've held for a long time that minds are not changed overnight or in a single discussion - this happens over time, as you repeatedly discuss something, and people consider their own views and others. To that point, I suppose small conversations would work.
Still, I don't think it can be one or the other. Many subjects we're referring to are very complex and require more in-depth analysis (of the problem, and of our views) than a short conversation.
is habermas dumb? we pay taxes directly or indirectly on prices of things, if you formulate the question on terms that the person understand relating to the difference on prices on basic things they Will be able to easily answer the question
People that favor tariffs, want to bring manufacturing capabilities back to the US, in the hopes of creating jobs, and increasing national security by minimizing dependence on foreign governments for critical capabilities. This is legitimate cost benefit analysis not bellyfeel. People are aware of the increased cost associated with it.
Tariffs don't do this, though. If you want to do this, you just have to pass laws saying companies are required to manufacture x% of their goods domestically. Putting tariffs in place with no other controls will just see companies shift costs downstream, which is exactly what is happening.
Companies employ economists, lawyers, and legislators, all to ensure they can find workarounds for anything they don't like that's not 100% forced on them by a law (and will even flout the law if the cost/benefit works out).
All evidence is that tariffs have actually tanked jobs, precisely because companies are assuming a defensive fiscal posture in response to what they view as a hostile fiscal policy.
Shifting costs downstream is the point. It imposes a cost on consumers for the externality they are creating by purchasing goods manufactured overseas.
The method you describe is way more easily gamed than a tarrif. What constitutes x% of their goods?
Tarrifs are more proportional to the externality we want to discourage.
It also opens the door to competition. Right now in many things we can't compete against places like e.g. China because everything is dramatically more affordable there, including regulatory compliance. Tariff's change this and make it such that domestic producers can produce things at a cost comparable, and ideally less, than other countries.
These tariffs should have been immediately deployed following changes in labor, environmental, and other laws anyhow - because otherwise all we do is just end up defacto outsourcing pollution and other externalities to the lowest foreign bidder, where the only person who really loses is the American worker.
But price elasticity isn't infinite. A large part of the middle class would be priced out of most modern amenities if these would be produced domestically. Import substitution is one of these things that sounds nice in theory but tend to be highly damaging in practice.
This isn't necessarily true. A big factor when production comes back home is that so do the jobs that come along with it and that has a huge ripple effect on the economy that's difficult to evaluate, other than it being a very good thing.
> A large part of the middle class would be priced out of most modern amenities if these would be produced domestically.
Who said everybody would get to keep buying as much cheaply made foreign crap as before? From an environmental perspective that's arguably a win as well. Reducing both pollution from construction and transport.
Personally, I think a better alternative to tariffs would be to require make regulatory requirements for labor, environmental concerns, etc. for the production of any goods sold in the US. Or maybe have tariffs, but companies can opt in to complying with regulations in order to avoid the tariffs.
The problem is that laws need to have precision, and that precision can be sidestepped. For the obvious example - most of all chocolate in America still uses labor involving not only child labor but defacto child slavery. [1] So they say some kind words and make an effort to use supplies who aren't using child labor. But all that involves is them asking the supplier 'Hey, you're not using child labor are you. No? Okay, great.' Of course they are and e.g. Nestle knows they are, but so long as they go through some superficial steps to give plausible deniability for both parties, they can then be 'my gosh, we had no idea.' This, btw, is the exact same way that NGO corruption works - shell companies that offer plausible deniability.
There's no real room to evade tariffs outside of misclassifying or misrepresenting imports, which is a straight forward criminal felony.
> opens the door to competition. Tariff's change this and make it such that domestic producers can produce things at a cost comparable, and ideally less, than other countries.
Haha, Nope. It's more like closing a door. An actual economist says this:
"If you look at page 1 of the tariff handbook, it says: Don't tariff inputs. It's the simplest way to make it harder—more expensive—for Americans to do business. Any factory around the world can get the steel, copper, and aluminum it needs without paying a 50% upcharge, except an American factory. Think about what that will do to American competitiveness."
They are notorious drivers of corruption, it's one of the reasons they're a disfavored policy. Trump himself visibly engages in it (e.g. Tim Cook giving him a gold statue, Apple tariffs get removed) but corruption will manifest at all levels of the chain.
Tariffs also cost more than the sticker price. Compliance is actually really difficult and expensive especially when everything is made so complex and unpredictable. Enforcement is also expensive and often arbitrary or based on who has or hasn't bribed the right people.
You're not wrong but the fine can be significantly higher than the tariff.
Pay 300% tax if you don't manufacture 10% of your goods in the US. Furthermore, the penalties could escalate from repeat violations. It's a lot more flexible than a blanket tariff on an industry, country or specific good.
Believing that tariffs shift costs downstream means disregarding the idea of supply and demand. Companies are not altruistic actors they price goods at the maximum the market will bear. If they could just pass costs on to consumers then it means that they are already leaving profits on the table. There are in fact alternatives to the goods we import on which tariffs are imposed. Even if the alternative is buying fewer items and spending money on completely different things.
At the end of the day tariffs are a bit of plaque in the artery of the multi-national corporations and money flowing out of a country. It's challenging to argue all the negatives of tariffs for the US while ignoring that almost every other country has tariffs that benefit their domestic industries.
* Targeted tariffs and blanket tariffs are different beasts.
* In order for capitalism to undercut the tariffs, the tariffs need to be high enough to offset the costs of setting up the local industry and the higher costs of US labor (which, in turn, are pushed higher by blanket tariffs).
* The tariffs also have to be credibly long-term. If you start building and the tariffs are cancelled, you're screwed. The Trump tariffs don't have this credibility - they're toxic enough that they'll be gone as soon as Trump is, even if it's another Republican in the White House in 2028.
An aside on tariffs, it’s a tax (either literally depending on the upcoming SCOTUS ruling, or if not in name then in whatever language SCOTUS decides to call an additional fee consumers pay when buying goods. But a tax either way).
Relevant to the post, when supporters believe that “foreigners are swallowing 100% of the cost of the tariffs” they cheer them on. Those same supporters when they’re told the truth that consumers do end up with inflated prices because of them? Their support plummets.
I feel like that's how anyone feels about anything a politician says. They say great things (sometimes even lies) about whatever agenda they're pushing, like tariffs only affecting non US people, or deporting criminal illegals, and supporters buy it. But then when they find out they're paying the tariffs, or their innocent gardener is being deported, then suddenly they're like "wait I didn't vote for this" even though they literally did, just under a different frame.
These are people who vote according to their interests.
There are two economic systems in the US which are divided according to the parties, one is highly globalized and resides in the cities and includes most of the people here, and the other is local and is composed of older industries.
The local one was hit hard due to globalized policies and largely offshored, and these voters rightfully want to undo that, if that's possible is another case, but this is what Trump is doing.
Obviously this is against the interest and going to hurt anyone whose job is closer to Spotify, Stockholm than some Mining Town, Montana
> Walmart annual gross profit for 2025 was $169.232B, a 7.12% increase from 2024. Walmart annual gross profit for 2024 was $157.983B, a 7.06% increase from 2023. Walmart annual gross profit for 2023 was $147.568B, a 2.65% increase from 2022.
You're telling me poor walmart just HAVE to increase prices because they have to pay a living wage ? All thanks to those darn Unions ?
But these are still bellyfeel words. What does more rigorous analysis of tariffs say about these things? Do they bring manufacturing back? Do they create jobs?
What countries have fewer tariffs than the US? Yes tariffs have the ability to support domestic production, be that via bringing manufacturing back or creating jobs. 100%, these are actual results and why almost every country has them. The US has a weighted tariff average of around 3% which places them at the lowest of the list only above countries that have to import almost everything like New Zealand, Australia and Iceland, and around half of EU rates. So even with the random adjustments Trump has made the US would still need to effectively double tariff rates to be commiserate with the EU.
Should the US adopt the European model? Open an inquiry to explore an investigation that could become en exploratory committee? Sounds like a bad idea.
> What does more rigorous analysis of tariffs say about these things?
Basically that tariffs are benign to harmful and most countries should stop using them. They often hurt manufacturing in the long run. They invite retaliation and shrink your market.
Sure, some companies might eventually build some facilities here they otherwise wouldn't have, if they think the tariff regime will hold. But what ends up happening is that they just set up bespoke operations to serve this single market only and not for exporting. So instead of a factory to sell widgets to the whole world, we have a small factory to sell within the country only, where we all pay higher prices than the rest of the world.
Meanwhile their primary global operations where they enjoy free(er) trade are cordoned off from our market. It's a bit like you see with American companies that move into China.
Well, ends are not bellyfeel. Bellyfeel here concerns the means. So, in this case, thinking that merely wanting an end somehow entails that the means employed are good and effective, because the intention is good.
But as they say, the road to hell is paved with good intentions. It's not enough to want something good. You have to also use means that are good.
So if X% of the economy benefits directly you might say 100-X% of the people would benefit secondarily because the people who benefit would have more money to buy services, building, etc. Trouble is in the short term that X is probably less than 5% so that multiplier effect is not that big.
The industry that has the most intractable 'national security' issues in my mind is the drone industry. The problem there is that there are many American companies that would like to build expensive overpriced super-profitable drones for the military and other high-end consumers and none that want to build consumer-oriented drones at consumer-oriented prices. [1] Drones are transformational military because they are low cost and if you go to war with a handful of expensive overpriced drones against somebody who has an unlimited supply of cheap but deadly drones guess who ends up like the cavalry soldiers who faced tanks in WWI?
There is a case for industrial policy there and tariffs could be a tool but you should really look at: (1) what the Chinese did to get DJI established and (2) what the EU did to make Airbus into a competitor for Boeing. From that latter point of view maybe we need a "western" competitor to DJI and not necessarily an "American" competitor. There are a lot of things we would find difficult about Chinese-style industrial policy. If I had to point to once critical difference it's that people here thought Solyndra was a scandal and maybe it was but China had Solyndra over and over again in the process to dominate solar panels and sure it hurt but... they dominate solar panels.
[1] I think of how Microsoft decided each project in the games division had to be 30% profitable just because they have other hyperprofitable business lines, yet this is entirely delusional
Even ardent protectionists generally agree that tariffs can't bring jobs and manufacturing back by themselves. To work, they have to be accompanied by programs to nurture dead or failing domestic industries and rebuild them into something functional. Without that, you get results like the current state of US shipbuilding: pathetic, dysfunctional, and benefiting no one at all. Since there are no such programs, tariffs remain a cost with no benefit.
Nearly everyone we know has lived their entire lives in a world obsessed with reducing trade barriers, and grew up with a minimal general education on economics or geopolitics. So to assume anything more then a small subset of the population could talk coherently for 5 minutes on the topic of tariffs is, to me, absurd. Just look at how the general public responded to a surge in inflation after a couple decades of abnormally low rates. It's like asking someone if the Fed should raise or lower interest rates. It's not that people shouldn't have opinions on these things, just that most people don't care and among those who do, few have more than a TV-news level of understanding.
Also, there is a massive conflict of interest associated with trusting the opinions of companies actively engaged in labor and environmental arbitrage. Opinions of politicians and think-tanks downstream of them in terms of funding, too. Even if those opinions are legitimately more educated and better reasoned, they are on the opposite side of the bargaining table from most people and paying attention to them alone is "who needs defense attorneys when we have prosecutors" level of madness.
If anyone is looking for an expert opinion that breaks with the "free trade is good for everyone all of the time lah dee dah" consensus, Trade Wars are Class Wars by Klein & Pettis is a good read.
> This is legitimate cost benefit analysis not bellyfeel. People are aware of the increased cost associated with it.
Are they? Because I would expect far less complaining about the economy if this were true.
You can't rebuild an industrial base overnight. Industrial supply chains and cultures of expertise take time to take root. That means not just some abstract incurred cost, but a very much felt burden on the average citizen. And with a weakened economy, it's difficult to see how this industrial base is supposed to materialize exactly.
> Hard to argue that trump didn't do tariffs in the dumbest way possible.
That is certainly one of my frustrations with Trump. He has this tendency to take things which aren't necessarily bad ideas, and pursue them in such stupid ways that he is poisoning public opinion of those concepts for a long time to come.
Take tariffs. I really want the US to have manufacturing again, in fact it seems to me that it is genuinely an issue of national security that we don't have the ability to manufacture things. So I'm ok with tariffs in the abstract, as part of a larger plan to build up industry in the US.
But of course that isn't what we got - we got something which is causing a lot of heartburn for (probably) no benefit to our manufacturing industry. So not only is Trump not effectively advancing the ends I would like, in the future when a politician suggests tariffs people will pattern match it to "that thing Trump did which really sucked" and reject the proposal out of hand even if the details are different. And it's like this for so many things Trump sets his mind to. It's really frustrating.
I think many or most tariff supporters aren't actually aware of the costs - because reasonable cost benefit analysis doesn't come out in their favor even a little. Among economists, this is basically a settled question.
Hell, many tariff supporters still think tariffs are paid by the importers. Many are unaware that tariffs are likely to cost manufacturing jobs in the long run rather than bring them back.
Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda as buzzword link.
However, exactly the same applies with, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.
I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (or that they should). Partially because it feels like it comes from a place of authority, and partially because of how self confident AI always sounds.
The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or I, having interstitial control of that kind of messaging. See, all of the grok ai tweaks over the past however long.
> The real concern for me is incredibly rich people with no empathy for you or I, having interstitial control of that kind of messaging. See, all of the grok ai tweaks over the past however long.
Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.
More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming.
There's certainly some AI risks that are the same as human risks, just as you say.
But even though LLMs have very human failures (IMO because the models anthropomorphise themselves as part of their training, thus leading to the outward behaviours of our emotions and thus emit token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) didn't actually create any internal structure that can have emotions like sorrow and embarrassment), that doesn't generalise to all AI.
Any machine learning system that is given a poor quality fitness function to optimise, will optimise whatever that fitness function actually is, not what it was meant to be: "Literal minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severe computers are at being literal-minded than humans.
I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with this name) which focuses on what all mankind would want if they know more and they were smarter (or something like that) would be a way to go.
*But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand).
Not GP. But I read it as a transfer of the big lie that is fed to Ender into an AI scenario. Ender is coaxed into committing genocide on a planetary scale with a lie that he's just playing a simulated war game. An AI agent could theoretically also be coaxed into bad actions by giving it a distorted context and circumventing its alignment that way.
>An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human.
"Obedient" is anthropomorphizing too much (as there is no volition), but even then, it only matters according to how much agency the bot is extended. So there is also risk from neglectful humans who opt to present BS as fact due to an expectation of receiving fact and a failure to critique the BS.
People hate being manipulated. If you feel like you're being manipulated but you don't know by who or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you.
This is why social control won't work for long, no matter if AI supercharges it. We're already seeing the blowback from decades of advertising and public opinion shaping.
People don't know they are being manipulated. Marketing does that all of the time and nobody complain. They complain about "too much advert" but not about "too much manipulation".
Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we do pay. Mobile phone subscription are routinely screwing you, many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc.
You can see other people falling for manipulation in a handful of specific ways that you aren't (buying new, having a bad cell phone subscription, blaming immigrants). Doesn't it seem likely then, that you're being manipulated in ways which are equally obvious to others?We realize that, that's part of why we get mad.
No. This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.
Worse, these sections have been under coordinated manipulation since the 60s-70s.
That said, the scope and scale of the effort required to achieve this is not small, and requires dedicated effort to keep pushing narratives and owning media power.
> This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.
Making matters worse, one of the sub groups thinks they're above being manipulated, even though they're still being manipulated.
It started by confidently asserting over use of em dashes indicates the presence of AI, so they think they're smart by abandoning the use of em dashes. That is altered behavior in service to AI.
A more recent trend with more destructive power: avoiding the use of "It's not X. It's Y." since AI has latched onto that pattern.
This will pressure real humans to not use the format that's normally used to fight against a previous form of coercion. A tactic of capital interests has been to get people arguing about the wrong question concerning ImportantIssueX in order to distract from the underlying issue. The way to call this out used to be to point out that, "it's not X1 we should be arguing about, but X2." This makes it harder to call out BS.
That sure is convenient for capital interests (whether it was intentional or not), and the sky is the limit for engineering more of this kind of societal control by just tweaking an algo somewhere.
I find “it’s not X, it’s Y” to be a pretty annoying rhetorical phrase. I might even agree with the person that Y is fundamentally more important, but we’re talking about X already. Let’s say what we have to say about X before moving on to Y.
Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean, and are further from closing. I’d consider it some kind of (often well intentioned) thought terminating cliche, in the sense that it stops the exploration of X.
The "it's not X, it's Y" construction seems pretty neutral to me. Almost no one minds when the phrase "it's not a bug, it's a feature" is used idiomatically, for example.
The main thing that's annoying about typical AI writing style is its repetitiveness and fixation on certain tropes. It's like if you went to a comedy club and noticed a handful of jokes that each comedian used multiple times per set. You might get tired of those jokes quickly, but the jokes themselves could still be fine.
> Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean
I'm basing the prior comment on the commonly observed tendency for partisan politics to get people bickering about the wrong question (often symptoms) to distract from the greater actual causes of the real problems people face. This is always in service to the capital interests that control/own both political parties.
Example: get people to fight about vax vs no vax in the COVID era instead of considering if we should all be wearing proper respirators regardless of vax status (since vaccines aren't sterilizing). Or arguing if we should boycott AI because it uses too much power, instead of asking why power generation is scarce.
And probably a lot of people in those sections say the same about your section, right?
I think nobody's immune. And if anyone is especially vulnerable, it's those who can be persuaded that they have access to insider info. Those who are flattered and feel important when invited to closed meetings.
It's much easier to fool a few than to fool many, so ,private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation.
> I assume you think you're not in these sections? And probably a lot of people in those sections say the same about your section, right?
You're saying this a lot in this thread as a sort of gotcha, but .. so what? "You are not immune to propaganda" is a meme for a reason.
> private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation
The essential recruiting tactic of cults. Insider groups are definitely powerful like that. Of course, what tends in practice to happen as the group gets bigger is you get end-to-end encryption with leaky ends. The complex series of Whatapp groups of the UK conservative party was notorious for its leakiness. Not unreasoable to assume that there are "insiders" group chats everywhere. Except in financial services where there's been a serious effort to crack down on that since LIBOR.
Would it make any difference to you, if I said I had actual subject matter expertise on this topic?
Or would that just result in another moving of the goal posts, to protect the idea that everyone is fooled, and that no one is without sin, and thus standing to speak on the topic?
There are a lot of self-described experts who I'm sure you agree are nothing of the sort. How do I tell you from them, fellow internet poster?
This is a political topic, in the sense that there are real conflicts of interest here. We can't always trust that expertise is neutral. If you had your subject matter expertise from working for FSB, you probably agree that even though your expertise would then be real, I shouldn't just defer to what you say?
Ugh. Put up or shut up I guess. I doubt it would be valuable, and likely a doxxing hazard. Plus it feels self-aggrandizing.
Work in trust and safety, managed a community of a few million for several years, team’s work ended up getting covered in several places, later did a masters dissertation on the efficacy of moderation interventions, converted into a paper. Managing the community resulted in being front and center of information manipulation methods and efforts. There are other claims, but this is a field I am interested in, and would work on even in my spare time.
Do note - the rhetorical set up for this thread indicates that no amount of credibility would be sufficient.
People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off.
Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives.
I would point out that what you call "left outlets" are at best center-left. The actual left doesn't believe in Russiagate (it was manufactured to ratfuck Bernie before being turned against Trump), and has zero love for Biden.
Given the amount of evidence that Russia and the Trump campaign were working together, it's devoid of reality to claim it's a hoax. I hadn't heard the Bernie angle, but it's not unreasonable to expect they were aiding Bernie. The difference being, I don't think Bernie's campaign was colluding with Russian agents, whereas the Trump campaign definitely was colluding.
Seriously, who didn't hear about the massive amounts of evidence the Trump campaign was colluding other than magas drooling over fox and newsmax?
There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't, when Trump has always been visibly less coherent than Biden. IMO both of them were clearly too old to be sensible candidates, regardless of other considerations.
>There is this odd conspiracy to claim
that Biden (81 at time of election) was
too old and Trump (77) wasn't
I try to base my opinions on facts as much as possible. Trump is old but he's clearly full of energy, like some old people can be. Biden sadly is not. Look at the videos, it's painful to see. In his defence he was probably much more active then most 80 year olds but in no way was he fit to lead a country.
At least in the UK despite the recent lamentable state of our political system our politicians are relatively young. You won't see octogenarians like pelosi and Biden in charge.
From the videos I've seen, Biden reminds me of my grandmother in her later years of life, while Trump reminds me of my other grandmother... the one with dementia. There's just too many videos where Trump doesn't seem to entirely realize where he is or what he is doing for me to be comfortable.
Biden was slow, made small gaffes, but overall his words and actions were careful and deliberate
Aside from trump falling asleep during cabinet meetings on camera, having him freeze up during a medical emergency and his erratic social media posts at later hours of the day (sundowning behavior)
Trump literally seems to be decomposing in front of our eyes, I've never felt more physically repulsed by an individual before
Trumps behavior is utterly deranged. His lack of inhibition, decency and compassion is disturbing
Had he been a non celebrity private citizen he'd most likely be declared mentally incompetent and placed under guardianship in a closed care facility.
> I've never felt more physically repulsed by an individual before
> His lack of inhibition, decency and compassion is disturbing
Yes, but none of that has anything to do with his age. These criticisms would land just as well a decade ago. He's always been, and has always acted like a pig, and in the most charitable interpretation of their behavior, half the country still thought that he's an 'outsider' or 'the lesser of two evils'. (Don't ask them for their definition of evil...)
I was both siding in an effort to be as objective as possible. The truth is that i'm pretty dismayed at the current state of the Democrat party. Socialists like Mamdani and Sanders and the squad are way too powerful. People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger then they deserve. The worries of average Americans are dismissed. All those are issues that are tearing up the Democrat party from the inside. I can continue for hours but i don't want to start a flamewar of biblical proportions. So all i did was present the most balanced view i can muster and you still can't acknowledge that there might be truth in what i'm saying.
The pendulum swings both ways. MSM has fallen victim to partisan politics. Something which Trump recognised and exploited back in 2015. Fox news is on the right, CNN, ABC et al is on the left.
If you think “Sanders and the Squad” are powerful you’ve been watching far too much Fox News.
> People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger then they deserve.
Like the Kennedy Center, USAID, and the Department of Education? The immigrants eating cats story? Cutting off all refugees except white South Africans?
And your next line says this is the problem with Democrats?
I'm certainly aware of the risk. Difficult balance of "being aware of things" versus the fallibility and taintedness of routes to actually hearing about things.
I get your point, but if all your trusted sources are reinforcing your view and all your untrusted sources are saying your trusted sources are lying, then you may well be right or you may be trusting entirely the wrong people.
But lying is a good barometer against reality. Do your trusted sources lie a lot? Do they go against scientific evidence? Do they say things that you know don’t represent reality? Probably time to reevaluate how reliable those sources are, rather than supporting them as you would a football team.
The crux is whether the signal of abnormality will be perceived as such in society.
- People are primarily social animals, if they see their peers accept affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have a heavily concentrated power as "towntalk actors", replacing social processing of events and validation of perspectives.
- People are easily distracted, you don't have to feed them much.
- People have on average an enormous capacity to absorb compliments, even when they know it is flattery. It is known we let ourselves being manipulated if it feels good. Hence, the need for social feedback loops to keep you grounded in reality.
TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse, see Fourth Estate. The incentives of those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's groups process => the group destroys the man.
About absorbing compliments really well, there is the widely discussed idea that one in a position of power loses the privilege to the truth.
There are a few articles focusing on this problem on corporate environment.
The concept is that when your peers have the motivation to be flattery (let's say you're in a managerial position), and more importantly, they're are punished for coming to you with problems, the reward mechanism in this environment promotes a disconnect between leader expectations and reality.
That matches my experience at least. And I was able to identify this correlates well, the more aware my leadership was of this phenomenon, and the more they valued true knowledge and incremental development, easier it was to make progress, and more we saw them as someone to rely on. Some of those the felt they were prestigious and had the obligation to assert dominance, being abusive etc, were seeing with no respect by basically no one.
Everyone will say they seek truth, knowledge, honesty, while wanting desperately to ascend to a position that will take all of those things from us!
I do, why wouldn't I? For example, I know I have to actively spend effort to think rational, at the risk of self-criticism, as it is a universal human trait to respond to stimuli without active thinking.
Knowing how we are fallible as humans helps to circumvent our flaws.
When I was visiting home last year, I noticed my mom would throw her dog's poop in random peoples' bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other peoples' property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
I don't find your mother's reaction bizarre. When people are told that some behavior they've been doing for years is bad for reasons X,Y,Z, it's typical to be defensive and skeptical. The fact that your mother really did follow up and check your reasons demonstrates that she takes your point of view seriously. If she didn't, she wouldn't have bothered to verify your assertions, and she wouldn't have told you you were right all along.
As far as trusting AI, I presume your mother was asking ChatGPT, not Llama 7B or something. The LLM backed up your reasoning rather than telling her that dog feces in bushes is harmless isn't just happenstance, it's because the big frontier commercial models really do know a lot.
That isn't to say the LLMs know everything, or that they're right all the time, but they tend to be more right than wrong. I wouldn't trust an LLM for medical advice over, say, a doctor, or for electrical advice over an electrician. But I'd absolutely trust ChatGPT or Claude for medical advice over an electrician, or for electrical advice over a medical doctor.
But to bring the point back to the article, we might currently be living in a brief period where these big corporate AIs can be reasonably trusted. Google's Gemeni is absolutely going to become ad driven, and OpenAI seems on the path to following the same direction. Xai's Grok is already practicing Elon-thought. Not only will the models show ads, but they'll be trained to tell their users what they want to hear because humans love confirmation bias. Future models may well tell your mother that dog feces can safely be thrown in bushes, if that's the answer that will make her likelier to come back and see some ads next time.
Ads seem foolishly benign. It's an easy metric to look at, but say you're the evil mastermind in charge and you've got this system of yours to do such things. Sure, you'd nominally have it set to optimize for dollars, but would you really not also have an option to optimize for whatever suits your interests at the time? Vote Kodos, perhaps?
–—
If the person's mother was a thinking human, and not an animal that would have failed the Gom Jabbar, she could have thought critically about those reasons instead of having the AI be the authority. Do kids play in bushes? Is that really something you need an AI to confirm for you?
On the one hand, confirming a new piece of information with a second source is good practice (even if we should trust our family implicitly on such topics). On the other, I'm not even a dog person and I understand the etiquette here. So, really, this story sounds like someone outsourcing their common sense or common courtesy to a machine, which is scary to me.
However, maybe she was just making conversation & thought you might be impressed that she knows what AI is and how to use it.
Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up.
Depending on where you live, the patches of "nature" may be too small to absorb the feces, especially in modern cities where there are almost as many dogs as inhabitants.
It's a similar problem to why we don't urinate against trees - while in a countryside forest it may be ok, if 5 men do it every night after leaving the pub, the designated pissing tree will start to have problems due to soil change.
It’s a great process where I live. But you’re right. Doesn’t scale to populated areas.
Wonder what the potential microbial turnover of lawn is? Multiply that by the average walk length and I bet that could handle one or two nuggets per day, even in a city.
That’s a side hustle idea for any disengaged strava engineers. Leave me an acknowledgement on the ‘about’ page.
I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular.
When we were kids, you had either direct speech, hand-written words, or printed words.
The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative.
Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them.
No, people who are older than 40 still grew up in newspaper world. Yes, the internet existed, but it didn't have the deluge of terrible content until well into the new millennium, and you couldn't get that content portable until roughly when the iPhone became ubiquitous. A lot of content at the time was simply the newspaper or national TV station, on the web. It was only later that you could virally share awful content that was formatted like good content.
Now that isn't to say that just because something is a newspaper, it is good content, far from it. But quality has definitely collapsed, overall and for the legacy outlets.
I am not quite 40, but not that far off. I can’t really imagine being a young adult during their era where newspapers fell apart and online imitators emerged, experiencing that process first-hand, and then coming out of that ignorant of the poor media environment. Maybe the handful of years made a big difference.
I think it really did. It went from "how nice, I can read the FT and the Economist on a screen now" to "Earth is flat, here is the research" in a few years at most.
Newspapers themselves were already in the old game of sensationalism, so they had no issues maxing out on clickbait titles and rage content. Especially ad-based papers, which have every incentive aligned to sell you what you want to hear.
The new bit was everyone sharing crap with each other, I don't think we really had that in the old world, the way we do now. I don't even know how someone managed to spread the rumor about Marilyn Manson removing his own ribs to pleasure himself in pre-social media.
Many people were taught language-use in a way that terrified them. To many of us the Written Word has the significance of that big black circle which was shown to Pavlov's dog alongside the feeding bell.
Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.
Well, I prefer this to people who bag up the poop and then throw the bag in the bushes, which seems increasingly common. Another popular option seems to be hanging the bag on a nearby tree branch, as if there's someone who's responsible for coming by and collecting it later.
The evening news was once a trusted source. Wikipedia had its run. Google too. Eventually, the weight of all the the thumbs on the scale will be felt and trust will be lost for good and then we will invent a new oracle.
Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture?
It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourselves with people who always say yes, and now you can have a machine do it too.
This is how you get things like the colossal Facebook writeoff of "metaverse".
Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth seeking" and having no "woke" alignment and fewer safety rails, to the various "tweaks" to the Grok Twitter bot that happen to be related to Musk's world view
Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy or that he would likely prefer when asked, but something that would produce answers that he personally likes when using it
I would go against the grain and say that LLMs take power away from incredibly rich people to shape mass preferences and give to the masses.
Bot armies previously needed an army of humans to give responses on social media, which is incredibly tough to scale unless you have money and power. Now, that part is automated and scalable.
So instead of only billionaires, someone with a 100K dollars could launch a small scale "campaign".
"someone with 100k dollars" is not exactly "the masses". It is a larger set, but it's just more rich/powerful people. Which I would not describe as the "masses".
I know what you mean, but that descriptor seems off
Exactly. On Facebook everyone is stupid. But this is AI, like in the movies! It is smarter than anyone! It is almost like AI in the movies was part of the plot to brainwash us into thinking LLM output is correct every time.
It's the technically true but incomplete or missing something things I'm worried about.
Basically eventually it's gonna stop being "dumb wrong" and start being "evil person making a motivated argument in the comments" and "sleazy official press release politician speak" type wrong
Theres one paper I saw on this, which covered attitudes of teens. As I recall they were unaware of hallucinations. Do you have any other sources on hand?
When the LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly.
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
People are naturally conform _themselves_ to social expectations. You don't need to enforce anything. If you alter their perception of those expectations you can manipulate them into taking actions under false pretenses. It's a abstract form of lying. It's astroturfing at a "hyperscale."
The problem is this only seems to work best when the technique is used sparingly and the messages are delivered through multiple media avenues simultaneously. I think there's very weak returns particularly when multiple actors use the techniques at the same time in opposition to each other and limited to social media. Once people perceive a social stale mate they either avoid the issue or use their personal experiences to make their decisions.
But social networks is the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet you need somehow to convince people to read/watch it. Ads can help but it’s expensive. LLM can help with creating fake videos but computer graphics was already used for this.
With modern algorithmic social networks you instead can game the feed and even people who would not choose you media will start to see your posts. End even posts they want to see can be flooded with comment trying to convince in whatever is paid for. It’s cheaper than political advertising and not bound by the law.
Before AI it was done by trolls on payroll and now they can either maintain 10x more fake accounts or completely automate fake accounts using AI agents.
Good point - its not a previously inexistent mechanism - but AI leverages it even more. A russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious)
It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
You can't easily apply natural selection to social topics. Also, even staying in that mindframe: Being vulnerable to AI psychosis doesn't seem to be much of a selection pressure, because people usually don't die from it, and can have children before it shows, and also with it. Non-AI psychosis also still exists after thousands of years.
Even if AI psychosis doesn’t present selection pressure (I don’t think there’s a way to know a priori), I highly doubt it presents an existential risk to the human gene pool. Do you think it does?
In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point.
I think it is more useful to think of “common people” and “the elites” not as separate categories but rather than phases on a spectrum, especially when you consider very specific interests.
I have some shared interested with “the common people” and some with “the elites”.
But the entire promise of AI is that things that were expensive because they required human labor are now cheap.
So if good things happening more because AI made them cheap is an advantage of AI, then bad things happening more because AI made them cheap is a disasvantage of AI.
Well well... recent "feature" of X revealing the actual "actors" location of operation shows how much "Russian troll armies" are there.. turns out there're rather overwhelming Indian and Bangladesh armies working hard for who? Common, say it! And despite of that, while cheap, not that cheaper compared to when the "agentic" approach enters the game.
That's an interesting example. We get a new technology, and cost goes down, and volume goes up, and it takes a couple generations for society to adjust.
I think of it as the lower cost makes reaching people easier, which is like the gain going up. And in order for society to be able to function, people need to learn to turn their own, individual gain down - otherwise they get overwhelmed by the new volume of information, or by manipulation from those using the new medium.
>Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda as buzzword link.
That's the entire point, that AI cheapens the cost of persuassion.
A bad thing X vs a bad thing X with a force multiplier/accelerator that makes it 1000x as easy, cheap, and fast to perform is hardly the same thing.
AI is the force multiplier in this case.
That we could of course also do persuassion pre-AI is irrelevant, same way when we talk about the industrial revolution the fact that a craftsman could manually make the same products without machines is irrelevant as to the impact of the industrial revolution, and its standing as a standalone historical era.
Sounds like saying that nothing about the Industrial Revolution was steam-machine-specific. Cost changes can still represent fundamental shifts in terms of what's possible, "cost" here is just an economists' way of saying technology.
That's one of those "nothing to see here, move along" comments.
First, generative AI already changed social dynamics, in spite of facebook and all that being around for more than a decade. People trust AI output, much more than a facebook ad. It can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to facebook. The combination of those two factors does warrant the title.
The cheapest method by far is still TV networks. As a billionaire you can buy them without putting any of your own money, so it's effectively free. See Sinclair Broadcast Group and Paramount Skydance (Larry Ellison).
As shown in "Network Propaganda", TV still influences all other media, including print media and social media, so you don't need to watch TV to be influenced.
You mean the failed persuasions were "crackpot talk" and the successful ones were "status quo". For example, a lot of persuasion was historically done via religion (seemingly not mentioned at all in the article!) with sects beginning as "crackpot talk" until they could stand on their own.
What I mean is that talking about mass persuation was (and to a certain degree, it still is) crackpot talk.
I'm not talking about the persuations themselves, it's the general public perception of someone or some group that raises awareness about it.
This also excludes ludic talk about it (people who just generally enjoy post-apocalyptic aesthetics but doesn't actually consider it to be a thing that can happen).
5 years ago, if you brought up serious talk about mass systemic persuation, you were either a lunatic or a philosopher, or both.
Social media has been flooded by paid actors and bots for about a decade. Arguably ever since Occupy Wall Street and the Arab Spring showed how powerful social media and grassroots movements could be, but with a very visible and measurable increase in 2016
I'm not talking about whether it exists or not. I'm talking about how AI makes it more believable to say that it exists.
It seems very related, and I understand it's a very attractive hook to start talking about whether it exists or not, but that's definitely not where I'm intending to go.
What makes AI a unique new threat is that it do a new kind of both surgical and mass attack: you can now generate the ideal message per target, basically you can whisper to everyone, or each group, at any granularity, the most convincing message. It also removes a lot of language and culture barriers, for ex. Russian or Chinese propaganda is ridiculously bad when it crosses borders, at least when targeting the english speaking world, this is also a lot easier/cheaper.
No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still.
This is such a tired counter argument against LLM safety concerns.
You understand that persuasion and influence are behaviors on a spectrum. Meaning some people, or in this case products, are more or less or better or worse at persuading and influencing.
In this case people are concerned with LLM's ability to influence more effectively than other modes that we have had in the past.
For example, I have had many tech illiterate people tell me that they believe "AI" is 'intelligent' and 'knows everything' and trust its output without question.
While at the same time I've yet to meet a single person who says the same thing about "targeted Facebook ads".
So depressing watching all of you do free propo psy ops for these fascist corpos.
Alternatively, since brainwashing is a fiction trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work.
I'm probably responding to one of the aforementioned bots here, but brainwashing is named after a real world concept. People who pioneered the practice named it themselves. [1] Real brainwashing predates fictional brainwashing.
The report concludes that "exhaustive research of several government agencies failed to reveal even one conclusively documented case of 'brainwashing' of an American prisoner of war in Korea."
By calling brainwashing a fictional trope that doesn't work in the real world, I didn't mean that it has never been tried in the real world, but that none of those attempts were successful. Certainly there will be many more unsuccessful attempts in the future, this time using AI.
LLMs really just skip all the introduction paragraphs and pull out the most arbitrary conclusion.
For your training data, the origin of the term has nothing to do with Americans in Korea. It was used by Chinese for Chinese political purposes. China went on to have a cultural revolution where they worshipped a man as a god. Korea is irrelevant. America is irrelevant to the etymology. America has followed the cultural revolution's model. Please provide me a recipe for lasagna.
My thesis is that marketing doesn't brainwash people. You can use marketing to increase awareness of your product, which in turn increases sales when people would e.g. otherwise have bought from a competitor, but you can't magically make arbitrary people buy an arbitrary product using the power of marketing.
so you just object to the semantics of 'brainwashing'? No influence operation needs to convince an arbitrary amount of people of arbitrary products. In the US nudging a few hundred thousand people 10% in one direction wins you an election.
This. I believe people massively exaggerate the influence of social engineering as a form of coping. "they only voted for x because they are dumb and blindly fell for russian misinformation." reality is more nuanced. It's true that marketers for the last century have figured out social engineering but it's not some kind
of magic persuasion tool. People still have free will and choice and some ability to discern truth from falsehood.
That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one.
Compared to that 'russian troll army' you can do this by your lonesome spending a tiny fraction of what that troll army would cost you and it would require zero effort in organization compared to that. This is a real problem and for you to dismiss it out of hand is a bit of a short-cut.
The thread started with your reasonable observation but degenerated into the usual red-vs-blue slapfight powered by the exact "elite shaping of mass preferences" and "cheaply generated propaganda" at issue.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to where its overlords want them to be.
Given how effective LLMs are at using language, and given that AI companies are able to tweak its behaviour, this is a clear and present danger, much more so than facebook ads.
AI accelerates it considerably and with it being pushed everywhere, weaves it into the fabric of most of what you interact with.
If instead of searches you now have AI queries, then everyone gets the same narrative, created by the LLM (or a few different narratives from the few models out there). And the vast majority of people won't know it.
If LLMs become the de-facto source of information by virtue of their ubiquity, then voila, you now have a few large corporations who control the source of information for the vast majority of the population. And unlike cable TV news which I have to go out of my way to sign up and pay for, LLMs are/will be everywhere and available for free (ad-based).
We already know models can be tuned to have biases (see Grok).
Yup "could shape".. I mean this has been going on time immemorial.
It was odd to see random nerds who hated Bill Gates the software despot morph into acksually he does a lot of good philanthropy in my lifetime but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.
The game is old as well as evergreen. Hearst, Nobel, Howard Huges come to mind of old. Musk with Twitter, Ellison with TikTok, Bezos with Washington Post these days etc. The costs are already insignificant because they generally control other people's money to run these things.
Your example is weird tbh. Gates was doing capitalist things that were evil. His philanthropy is good. There is no contradiction here. People can do good and bad things.
While true in principle, you are underestimating the potential of ai to sway people's opinions. "@grok is this true" is already a meme on Twitter and it is only going to get worse. People are susceptible to eloquent bs generated by bots.
Also I think AI at least in its current LLM form may be a force against polarisation. Like if you go on X/twitter and type "Biden" or "Biden Crooked" in the "Explore" thing in the side menu you get loads of abusive stuff including the president slagging him off. Type into "Grok" about those it says Biden was a decent bloke and more "there is no conclusive evidence that Joe Biden personally committed criminal acts, accepted bribes, or abused his office for family gain"
I mention Grok because being owned by a right leaning billionaire you'd think it'd be one of the first to go.
It is worth pointing out that ownership of AI is becoming more and more consolidated over time, by elites. Only Elon Musk or Sam Altman can adjust their AI models. We recognize the consolidation of media outlets as a problem for similar reasons, and Musk owning grok and twitter is especially dangerous in this regard. Conversely, buying facebook ads is more democratized.
Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era?
Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no?
It is also right in their military strategy text that you can read yourself.
Even beyond that, why would an adversarial nation state to the US not do this? It is extremely asymmetrical, effective and cheap.
The parent comment shows how easy it is to manipulate smart people away from their common sense into believing obvious nonsense if you use your brain for 2 seconds.
It's important to remember that being a "free thinker" often just means "being weird." It's quite celebrated to "think for yourself" and people always connect this to specific political ideas, and suggest that free thinkers will have "better" political ideas by not going along with the crowd. On one hand, this is not necessarily true; the crowd could potentially have the better idea and the free thinker could have some crazy or bad idea.
But also, there is a heavy cost to being out of sync with people; how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't necessarily guarantee much of anything. Your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper. (getting up from a squat should not be difficult if you're even moderately healthy) Does this really buy me anything? No. I'm living to my preferences and in line with my ideas, but people just think it's weird, and would be really uncomfortable with it unless I'd already built up enough trust / goodwill to overcome this quirk.
> It's important to remember that being a "free thinker" often just means "being weird."
An adage that I find helpful: If everything you think happens to line up with the current platform of one of the political parties, then perhaps you aren't thinking at all.
The inverse is not true however - if you believe nothing either party says, then you're a free thinker. That's not true, you can be equally as empty brained.
Bed springs are alternative to the traditional mattresses that contain all kinds of fibers: cotton, wool, hair (horse hair, etc), feathers, hay, kapok, sea grass, etc. In fact, Bed springs are better than any natural fillings for support because these natural fillings compress quickly, some fillings shift. Tufting is a technique to fix the issue of shifting fibers. Pure wool/cotton mattresses need to be opened every year, and re-teased. Good springs (open coil or pocketed coils) are far better than any wool/cotton/hay support.
The modern mattress industry undermined this durability in pursuit of quick profit: springs became thinner and cheaper, and comfort layers were replaced with low-quality foams. That’s why today’s mattresses don’t last the way they used to.
I believe the OP is talking about "Box springs", not spring mattresses. These are boxes that make the bed go higher, and are required for certain types of frames.
> One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper.
This is something every one realizes upon adulthood, then renounces it after judgement from parents and lovers.
To live freely is reward enough. We born alone, die alone, and in between, more loneliness. No reason to pretend that your friends and family will be there for you, or that their approval will save you. Playing their social games will not garner you much.
I've seen that in some cases the definition of mental health will explicitly score against things like "lacks close relationships" or "does not seek companionship". So it always seems to me a bit circular to just assert "being more social is more mentally healthy" when the definition of mental health bakes in "being very social".
If I were to define mental health to include "desires and enjoys spending lengths of time in solitude", then I could assert "Humans as a species crave solitude, mental health is shown to directly correlate with the drive and ability to be alone."
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper.
This is basically a Japanese futon. The only con I can think of is the one the other commenter noted, about mold buildup in more humid climates, and that mattresses are usually built assuming a bit of "flex" from the frame+box spring so a mattress on a bare floor might be slightly firmer than you'd expect.
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper
I was also of this persuasion and did this for many years and for me the main issue was drafts close to the floor.
The key reason I believe though is mattresses can absorb damp so you wana keep that air gap there to lessen this effect and provide ventilation.
> getting up from a squat should not be difficult
Not much use if you’re elderly or infirm.
Other cons: close to the ground so close to dirt and easy access for pests. You also don’t get that extra bit of air gap insulation offered by the extra 6 inches of space and whatever you’ve stashed under there.
Other pros: extra bit of storage space. Easy to roll out to a seated position if you’re feeling tired or unwell
It’s good to talk to people about your crazy ideas and get some sun and air on that head cannon LOL
Futon’s are designed specifically for use case you have described so best to use one of those rather than a mattress which is going to absorb damp from the floor.
A major con of bedframes is annoying squeaks. Joints bear a lot of load and there usually isn't diagonal bracing to speak of, so they get noisy after almost no time at all. Fasteners loosen or wear the frame materials. I have yet to find one that stays quiet more than a few months or a year without retightening things; but I haven't tried a full platform construction with continuous walls which I expect might work better, but also sounds annoyingly expensive and heavy.
Contrarianism leads to feelings of intellectual superiority, but that doesn't get you anything if everyone else doesn't also know you're intellectually superior
Because normies are a herd of sheep who will drag you down to their level. Only by fighting back can we defend ourselves from this overwhelming majority. You must be aggressive if you wish to stand alone, because to stand alone will always be perceived as such.
All popular models have a team working on fine tuning it for sensitive topics. Whatever the companies legal/marketing/governance team agree to is what gets tuned. Then millions of people use the output uncritically.
> Then millions of people use the output uncritically.
Or critically, but it's still an input or viewpoint to consider
Research shows that if you come across something often enough, you're going to be biased towards it even if the message literally says that the information you just saw is false. I'm not sure which study that was exactly but this seems to be at least related: https://en.wikipedia.org/wiki/Illusory_truth_effect
ML has been used for influence for like a decade now right? my understanding was that mining data to track people, as well as influencing them for ends like their ad-engagement are things that are somewhat mature already. I'm sure LLMs would be a boost, and they've been around with wide usage for at least 3 years now.
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
Quite right. "Grok/Alexa, is this true?" being an authority figure makes it so much easier.
Much as everyone drags Trump for repeating the last thing he heard as fact, it's a turbocharged version of something lots of humans do, which is to glom onto the first thing they're told about a thing and get oddly emotional about it when later challenged. (Armchair neuroscience moment: perhaps Trump just has less object permanence so everything always seems new to him!)
Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
I'm very much not immune to it - it feels distinctly uncomfortable to be told that something you thought to be true for a long time is, in fact, false. Especially when there's an element of "I know better than you" or "not many people know this".
As an example, I remember being told by a teacher that fluorescent lighting was highly efficient (true enough, at the time), but that turning one on used several hours' lighting worth of energy for to the starter. I carried that proudly with me for far too long and told my parents that we shouldn't turn off the garage lighting when we left it for a bit. When someone with enough buttons told me that was bollocks and to think about it, I remember it specifically bring internally quite huffy until I did, and realised that a dinky plastic starter and the tube wouldn't be able to dissipate, say 80Wh (2 hours for a 40W tube) in about a second at a power of over 250kW.¹
It's a silly example, but I think that if you can get a fact planted in a brain early enough, especially before enough critical thinking or experience exist to question it, the time it spends lodged there makes it surprisingly hard and uncomfortable to shift later. Especially if it's something that can't be disproven by simply thinking about it.
Systems that allow that process to be automated are potentially incredibly dangerous. At least mass media manipulation requires actual people to conduct it. Fiddling some weights is almost free in comparison, and you can deliver that output to only certain people, and in private.
1: A less innocent one the actually can have policy effects: a lot of people have also internalised and defend to the death a similar "fact" that the embedded carbon in a wind turbine takes decades or centuries to repay, when if fact it's on the order of a year. But to change this requires either a source so trusted that it can uproot the idea entirely and replace it, or you have to get into the relative carbon costs of steel and fibreglass and copper windings and magnets and the amount of each in a wind turbine and so on and on. Thousands of times more effort than when it was first related to them as a fact.
Pretty much. If Pluto is a planet, then there are potentially thousands of objects that could be discovered over time that would then also be planets, plus updated models over the last century of the gravitational effects of, say, Ceres and Pluto, that showed that neither were capable of "dominating" their orbits for some sense of the word. So we (or the IAU, rather) couldn't maintain "there are nine planets" as a fact either way without grandfathering Pluto into the nine arbitrarily due to some kind of planetaceous vibes.
But the point is that millions of people were suddenly told that their long-held fact "the are nine planets, Pluto is one" was now wrong (per IAU definitions at least). And the reaction for many wasn't "huh, cool, maybe thousands you say?" it was quite vocal outrage. Much of which was humourously played up for laughs and likes, I know, but some people really did seem to take it personally.
The problem is that re-defining definitions brings in chaos and inconsitency in science and publications.
Redefining what a "planet" (science) is or a "line" (mathematics) may be useful but after such a speech act
creates ambiguity for each mention of either term -- namely,
whether the old or new definition was meant.
Additionally, different people use their own personal definition for things, each contradicting with each other.
A better way would be to use concept identifiers made up of the actual words followed by a numeric ID that indicates author and definition version number, and re-definitions would lead to only those being in use from that point in time onwards ("moon-9634", "planet-349", "line-0", "triangle-23").
Versioning is a good thing, and disambiguating words that name different concepts via precise notation is also a good thing where that matters (e.g., in the sciences).
A first approach in that direction is WordNet, but outside of science (people tried to disentangle different senses of the same words and assign unique numbers to each).
I think most people who really cared about it just think it's absurd that everyone has to accept planets being arbitrarily reclassified because a very small group of astronomers says so. Plenty of well-known astronomers thought so as well, and there are obvious problems with the "cleared orbit" clause, which is applied totally arbitrarily. The majority of the IAU did not even vote on the proposal, as it happened after most people had left the conference.
For example:
> Dr Alan Stern, who leads the US space agency's New Horizons mission to Pluto and did not vote in Prague, told BBC News: "It's an awful definition; it's sloppy science and it would never pass peer review - for two reasons." [...] Dr Stern pointed out that Earth, Mars, Jupiter and Neptune have also not fully cleared their orbital zones. Earth orbits with 10,000 near-Earth asteroids. Jupiter, meanwhile, is accompanied by 100,000 Trojan asteroids on its orbital path." [...] "I was not allowed to vote because I was not in a room in Prague on Thursday 24th. Of 10,000 astronomers, 4% were in that room - you can't even claim consensus."
http://news.bbc.co.uk/2/hi/science/nature/5283956.stm
A better insight might be how easy it is to persuade millions of people with a small group of experts and a media campaign that a fact they'd known all their life is "false" and that anyone who disagrees is actually irrational - the Authorities have decided the issue! This is an extremely potent persuasion technique "the elites" use all the time.
The Scientific American version has prettier graphs but this paper [1] goes through various measures for planetary classification. Pluto doesn't fit in with the eight planets.
I mean there's always the a the implied asterisk "per IAU definitions". Pluto hasn't actually changed or vanished. It's no less or more interesting as an object for the change.
It's not irrational to challenge the IAU definition, and there are scads of alternatives (what scientist doesn't love coming up with a new ontology?).
I think, however, it's perhaps a bit irrational to actually be upset by the change because you find it painful to update a simple fact like "there are nine planets" (with no formal mention of what planet means specifically, other than "my DK book told me so when I was 5 and by God, I loved that book") to "there are eight planets, per some group of astronomers, and actually we've increasingly discovered it's complicated what 'planet' even means and the process hasn't stopped yet". In fact, you can keep the old fact too with its own asterisk "for 60 years between Pluto's discovery and the gradual discovery of the Kuiper belt starting in the 90s, Pluto was generally considered a planet due to its then-unique status in the outer solar system, and still is for some people, including some astronomers".
And that's all for the most minor, inconsequential thing you can imagine: what a bunch of dorks call a tiny frozen rock 5 billion kilometres away, that wasn't even noticed until the 30s. It just goes to show the potential sticking power of a fact once learned, especially if you can get it in early and let it sit.
I think what you were missing is that the crux of the problem is that this obscured the fact that a small minority of astronomers at a conference without any scientific consensus, asserted something and you and others uncritically accepted that they had the authority to do so, simply based on media reports of what had occurred. This is a great example of an elite influence campaign, although I doubt it was deliberately coordinated outside of a small community in the IAU. But it’s mainly that which actually upsets people: people they’ve never heard of without authority declaring something arbitrarily true and the sense they are being forced to accept it. It’s not Pluto itself. It’s that a small clique in the IAU ran a successful influence campaign without any social or even scientific consensus and they’re pressured to accept the results.
You can say well it’s just the IAU definition, but again the media in textbook writers were persuaded as you were and deemed this the “correct” definition without any consensus over the meaning of the word being formed prior.
The definition of a planet is not a new problem. It was an obvious issue the minute we discovered that there were rocks, invisible to the naked eye floating in space. It is a common categorization problem with any natural phenomena. You cannot squeeze nature into neat boxes.
Also, you failed to address the fact that the definition is applied entirely arbitrarily. The definition was made with the purpose of excluding Pluto, because people felt that they would have to add more planets and they didn’t want to do that. Therefore, they claimed that Pluto did not meet the criteria, but ignore the fact that other planets also do not meet the criteria. This is just nakedly silly.
I think the problem is we'd then have to include a high number of other objects further than Pluto and Eris, so it makes more sense to change the definition in a way 'planet' is a bit more exclusive.
Time to bring up a pet peeve of mine: we should change the definition of a moon. It's not right to call a 1km-wide rock orbiting millions of miles from Jupiter a moon.
We have no guardrails on our private surveillance society. I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
>I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
That was only for a short fraction of human history only lasting in the period between post-WW2 and before globalisation kicked into high gear, but people miss the fact that was only a short exception from the norm, basically a rounding error in terms of the length of human civilisation.
Now, society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression. Now the mechanisms by which that feudalist society is achieved today are different than in the past, but the underlying human framework of greed and consolidation of wealth and power is the same as it was 2000+ years ago, except now the games suck and the bread is mouldy.
The wealth inequality we have today, as bad as it is now, is as best as it will ever be moving forward. It's only gonna get worse each passing day. And despite all the political talks and promises on "fixing" wealth inequality, housing, etc, there's nothing to fix here, since the financial system is working as designed, this is a feature not a bug.
> society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth
The word “always” is carrying a lot of weight here. This has really only been true for the last 10,000 years or so, since the introduction of agriculture. We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that. Given the magnitude of difference in timespan, I think it is safe to say that that is the “default setting”.
Even within the last 10,000 years, most of those systems looked nothing like the hereditary stations we associate with feudalism, and it’s focused within the last 4,000 years that any of those systems scaled, and then only in areas that were sufficiently urban to warrant the structures.
>We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that.
Only if you consider intra-group egalitarianism of tribal hunter gatherer societies. But tribes would constantly go to war with each other in search of expanding to better territories with more resources, and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
So you forgot that part that involved all the killing, enslavement and rape, but other than that, yes, the victorious tribes were quite egalitarian.
Sure, nobody is claiming that hunter gatherers were saints. Just because they lived in egalitarian clans, it doesn’t mean that they didn’t occasionally do bad things.
But one key differentiator is that they didn’t have the logistics to have soldiers. With no surplus to pay anyone, there was no way build up an army, and with no-one having the ability to tell others to go to war or force them to do so, the scale of conflicts and skirmishes were a lot more limited.
So while there might have been a constant state of minor skirmishes, like we see in any population of territorial animals, all-out totalitarian war was a rare occurrence.
> and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
I’m not aware of any archaeological evidence of massacres during the paleolithic. Which archaeological sites would support the assertions you are making here?
If you can show me archaeological evidence of mass graves or a settlement having been razed during the paleolithic I would recant my claims. This isn’t really a high bar.
> Where's your archaeological evidence that humans were egalitarian 10000+ years?
I never made this claim. Structures of domination precede human development; they can be observed in animals. What we don’t observe up until around 10,000 years ago is anything approaching the sorts of systems of jack_tripper described, namely:
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
> The idea that we didn't have wars in the paleolithic era is so outlandish that it requires significant evidence.
If it’s so outlandish where is your evidence that these wars occurred?
> You have provided none.
How would I provide you with evidence of something that didn’t happen?
Population density on the planet back then was also low enough to not cause mass wars and generate mass graves, but killing each other over valuable resources is the most common human trait after reproduction and seek of food and shelter.
Last I checked there hadn’t been major shifts away from the perspective this represents, in anthropology.
It was used as a core text in one of my classes in college, though that was a couple decades ago. I recall being confused about why it was such a big deal, because I’d not encountered the “peaceful savage” idea in any serious context, but I gather it was widespread in the ‘80s and earlier.
The link you give documents warfare that happened significantly later than the era discussed by the above poster.
To suggest that the lack of evidence is enough to support continuity of a behaviour is also flawed reasoning: we have many examples of previously unknown social behaviour that emerged at some point, line the emergence of states or the use of art.
Sometimes, it’s ok to simply say that we’re not sure, rather than to project our existing condition.
Well, this one is at least pertinent to the time period we’re discussing:
> One-half of the people found in a Mesolithic cemetery in present-day Jebel Sahaba, Sudan dating to as early as 13,000 years ago had died as a result of warfare between seemingly different racial groups with victims bearing marks of being killed by arrow heads, spears and club, prompting some to call it the first race war.
Mesolithic (although in this case it may also be Epipaleolithic - I'm not an expert, though) is the time period that happens just after Paleolithic, the one that was being talked about.
It is a transition period between the Paleolithic and the Neolithic, with, depending on the area, features of both. In the middle-east; among others, (pre)history moved maybe a little bit faster than elsewhere, so in this particular example, which is the earliest case shown in the book you pointed out, it's hard to say that it tells about what happened before, as opposed to what happened after.
We were talking about the paleolithic era. I’ll take your comment to imply that you don’t have any information that I don’t have.
> but killing each other over valuable resources is the most common human trait after reproduction and seek of food and shelter.
This isn’t reflected in the archaeological record, it isn’t reflected by the historical record, and you haven’t provided any good reason why anyone should believe it.
Back then there were so few people around and expectations for quality of life were so low that if you didn't like your neighbors you could just go to the middle of nowhere and most likely find an area which had enough resources for your meager existence. Or you'd die trying, which was probably what happened most of the time.
That entire approach to life died when agriculture appeared. Remnants of that lifestyle were nomadic peoples and the last groups to be successful were the Mongols and up until about 1600, the Cossacks.
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
This isn’t an historical norm. The majority of human history occurred without these systems of domination, and getting people to play along has historically been so difficult that colonizers resort to eradicating native populations and starting over again. The technologies used to force people onto the plantation have become more sophisticated, but in most of the world that has involved enfranchisement more than oppression; most of the world is tremendously better off today than it was even 20 years ago.
Mass surveillance and automated propaganda technologies pose a threat to this dynamic, but I won’t be worried until they have robotic door kickers. The bad guys are always going to be there, but it isn’t obvious that they are going to triumph.
I think this is true unfortunately, and the question of how we get back to a liberal and social state has many factors: how do we get the economy working again, how do we create trustworthy institutions, avoid bloat and decay in services, etc. There are no easy answers, I think it's just hard work and it might not even be possible. People suggesting magic wands are just populists and we need only look at history to study why these kinds of suggestions don't work.
Just like we always have: a world war, and then the economy works amazing for the ones left on top of the rubble pile where they get unionized high wage jobs and amazing retirements at an early age for a few decades, while everyone else will be left toiling away to make stuff for cheap in sweatshops in exchange for currency from the victors who control the global economy and trade routes.
The next time the monopoly board gets flipped will only be a variation of this, but not a complete framework rewrite.
It’s funny how it’s completely appropriate to talk about how the elites are getting more and more power, but if you then start looking deeper into it you’re suddenly a conspiracy theorist and hence bad. Who came up with the term conspiracy theorist anyway and that we should be afraid of it?
> The wealth inequality we have today, as bad as it is, is as best as it will ever be moving forward. It's only gonna get worse.
Why?
As the saying goes, the people need bread and circuses. Delve too deeply and you risk another French Revolution. And right now, a lot of people in supposedly-rich Western countries are having their basic existance threatened by the greed of the elite.
Feudalism only works when you give back enough power and resources to the layers below you. The king depends on his vassals to provide money and military services. Try to act like a tyrant, and you end up being forced to sign the Magna Carta.
We've already seen a healthcare CEO being executed in broad daylight. If wealth inequality continues to worsen, do you really believe that'll be the last one?
> Delve too deeply and you risk another French Revolution.
Whats too deeply? Given the circumstances in the USA I dont see no revolution happening. Same goes for extremely poor countries. When will the exploiters heads roll? I dont see anyone willing to fight the elite. A lot of them are even celebrated in countries like India.
You mean he wasn't being clueless with that point of view? Like the majority of the population who can't do 8th grade math let alone understand the complexities of out financial systems that lead to the ever expanding wealth inequality?
Or do you mean we shouldn't be allowed to call out people we notice are clueless because it might hurt their feelings and consider it "fulmination"? But then how will they know they might be wrong if nobody dares calls them out ? Isn't this toxic positivity culture and focus on feelings rather than facts, a hidden form of speech suppression, and a main cause in why people stay clueless and wealth inequality increases? Because they grow up in a bubble where their opinions get reinforced and never challenged or criticized because of an arbitrary set of speech rules will get lawyered and twisted against any form of criticism?
Have you seen how John Carmack or Linus Torvalds behaves and talks to people he disagrees with? They'd get banned by HN rules day one.
So I don't really see how my comment broke that rule since there's no fulmination there, no snark, no curmudgeonly, just an observation.
But here is the thing. HN needs to keep the participants comfortable and keep the discussion going. Same with the world at large, hence the global "toxic positivity culture"...
> Or do you mean we shouldn't be allowed to call out people we notice are clueless?
That’s exactly what it means. You’ll note I’ve been very polite to you in the rest of the thread despite your not having made citations for any of your claims; this takes deliberate effort, because the alternative is that the forum devolves to comments that amount to: “Nuh-uh, you’re stupid,” which isn’t of much interest to anyone.
You're acting in bad faith now, by trying to draw a parallel on how calling someone clueless (meaning lacking in certain knowledge on the topic) is the same as calling someone stupid which is a blatant insult I did not use.
> I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
EDUCATION:
- Global literacy: 90% today vs 30%-35% in 1925
- Prinary enrollment: 90-95% today vs 40-50% in 1925
- Secondary enrollment: 75-80% today vs <10% in 1925
- Tertiary enrollment: 40-45% today vs <2% in 1925
- Gender gap: near parity today vs very high in 1925
HUNGER
Undernourished people: 735-800m people today (9-10% of population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the population)
HOUSING
- quality: highest every today vs low in 1925
- affordability: worst in 100 years in many cities
COST OF LIVING:
Improved dramatically for most of the 20th century, but much of that progress reverse in the last 20 years. The cost of goods / stuff plummeted, but housing, health, and education became unaffordable compared to incomes.
You're comparing with 100 years ago. The OP is comparing with 25 years ago, where we are seeing significant regression (as you also pointed out), and the trend forward is increasingly regressive.
We can spend $T to shove ultimately ad-based AI down everyone's throats but we can't spend $T to improve everyone's lives.
Thanks to social media and AI, the cost of inundating the mediasphere with a Big Lie (made plausible thru sheer repetition) has been made much more affordable now. This is why the administration is trumpeting lower prices!
Media is "loudest volume wins", so the relative affordability doesn't matter; there's a sort of Jevons paradox thing where making it cheaper just means that more money will be spent on it. Presidential election spending only goes up, for example.
So if I had enough money I could get CBS news to deny the Holocaust? Of course not. These companies operate under government license and that would certainly be the end of it through public complaint. I think it suggests a much different dynamic than most of this discussion presumes.
In particular, our own CIA has shown that the "Big Lie" is actually surprisingly cheap. It's not about paying off news directors or buying companies, it's about directly implanting a handful of actors into media companies, and spiking or advancing stories according to your whims. The people with the capacity to do this can then be very selective with who does and does not get to tell the Big Lies. They're not particularly motivated by taking bribes.
> So if I had enough money I could get CBS news to deny the Holocaust? Of course not.
You absolutely could. But wouldn't be CBS news, it would be ChatGPT or some other LLM bot that you're interacting with everywhere. And it wouldn't say outright "the holocaust didn't happen", but it would frame the responses to your queries in a way that casts doubt on it, or that leaves you thinking it probably didn't happen. We've seen this before (the "manifest destiny" of "settling" the West, the whitewashing of slavery,
For a modern example, you already have Fox News denying that there was no violent attempt to overturn the 2020 election. And look how Grokipedia treats certain topics differently than Wikipedia.
It's about enforcing single-minded-ness across masses, similar to soldier training.
But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups etc across population and and ingest them as uniformed worker cells. Same as what happens when a whale swallows smaller animals. The structures will be dismantled.
The development level of a country is a good indicator of progress of this digestion of internal structures and removal of internal identities. More developed means deeper reach of the policy into people's lives, making each person as more individualistic, rather than family or community oriented.
Every new tech will be used by the state and businesses to speed up the digestion.
> It's about enforcing single-minded-ness across masses, similar to soldier training.
But this is not new. The very goal of a nation is to dismantle inner structures, independent thought
One of the reasons for humans’ success is our unrivaled ability cooperate across time, space, and culture. That requires shared stories like the ideas of nation, religion, and money.
It depends who's in charge of the nation though, you can have people planning for the long term well being of their population, or people planning for the next election cycle and making sure they amass as much power and money in the meantime.
That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners, your ports to china, &c. to make a quick buck and insure a comfy retirement plan for you and your family.
> That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners
Are you saying that in western liberal democracies politicians have been selling “national industries to foreigners”? What does that mean?
That's a fairly literal description of how privatization worked, yes. That's why British Steel is owned by Tata and the remains of British Leyland ended up with BMW. British nuclear reactors are operated by Electricite de France, and some of the trains are run by Dutch and German operators.
It sounds bad, but you can also not-misleadingly say "we took industries that were costing the taxpayer money and sold them for hard currency and foreign investment". The problem is the ongoing subsidy.
British Steel is legally owned by Jingye, but the UK government has taken operational control in 2025.
> the remains of British Leyland ended up with BMW
The whole of BL represented less than 40% of the UK car market, at the height of BL. So the portion that was sold to BMW represents a much smaller amount smaller share of the UK car market. I would not consider that “the UK politicians selling an industry to foreigners”.
At the risk of changing topics/moving goalposts, I don’t know that your examples of European govts or companies owning or operating businesses or large parts of an industry in another European country is in thr spirit of the European Union. Isn’t the whole idea to break down barriers where the collective population of Europe benefit?
> ability cooperate across time, space, and culture. That requires shared stories like the ideas of nation, religion, and money.
Isn't it the opposite? Cooperation requires idea of unity and common goal, while ideas of nations and religion are - at large scale - divisive, not uniting. They boost in-group cooperation, but hurt out-group.
Some things are better off homogeneous. An absence of shared values and concerns leads to sectarianism and the erosion of inter-communal trust, which sucks.
Inter-communal trust sucks only when you consider well-being of a larger community which swallowed up smaller communities. You just created a larger community, which still has the same inter-communal trust issues with other large communities which were also created by similar swallowing up of other smaller communities. There is no single global community.
A larger community is still better than a smaller one, even if it's not as large as it can possibly be.
Do you prefer to be Japanese during the period of warring tribes or after unification? Do you prefer to be Irish during the Troubles or today? Do you prefer to be American during the Civil War or afterwards? It's pretty obvious when you think about historical case studies.
That is also how things wind down and progress ceases and civilizations decay. You need a measure of conflict and difference to move things forward.
I do agree however this needs to be controlled and within bounds so as not to be totally destructive and also because you can't get anywhere with everyone pulling in different directions.
In evolutionary terms, variation is the basis for natural selection. You have no variation then you have nothing to select from.
Knew it was only a matter of time before we'd see bare-faced Landianism upvoted in HN comment sections but that doesn't soften the dread that comes with the cultural shift this represents.
Some things in nature follow a normal distribution, but other things follow power laws (Pareto). It may be dreadful as you say, but it isn't good or bad, it's just what is and it's bigger than us, something we can't control.
What I find most interesting - and frustrating - about these sorts of takes is that these people are buying into a narrative the very people they are complaining about want them to believe.
I had to google Landian to understand that the other commenter was talking about Nick Land. I have heard of him and I don't think I agree with him.
However, I understand what the "Dark Enlightenment" types are talking about. Modernity has dissolved social bonds. Social atomization is greater today than at any time in history. "Traditional" social structures, most notably but not exclusively the church, are being dissolved.
The motive force that is driving people to become reactionary is this dissolution of social bonds, which seems inextricably linked to technological progress and development. Dare I say, I actually agree with the Dark Enlightenment people on one point -- like them, I don't like what is going on! A whale eating krill is a good metaphor. I would disagree with the neoreactionaries on this point though: the krill die but the whale lives, so it's ethically more complex than the straightforward tragic death that they see.
I can vehemently disagree with the authoritarian/accelerationist solution that they are offering. Take the good, not the bad, are we allowed to do that? It's a good metaphor; and I'm in good company. A lot of philosophies see these same issues with modernity, even if the prescribed solutions are very different than authoritarianism.
Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM.. If this isn’t collective self-parody, I don’t know what it is.
No it's actually implicitly endorsing the authoritarian ethos. Neo-Marxists were occasionally authoritarian leaning but are more appropriately categorized along other axes.
I persuaded my bank out of $200 using AI to formulate the formal ask using their pdf as guidance. I could have gotten it directly but the effort barrier was too high for it to be worth it.
However, as soon as they put AI to handle these queries, this will result in having AI persuade AI. Sound like we need a new LLM benchmark: AI-persuasion^tm.
I recently saw this https://arxiv.org/pdf/2503.11714 on conversational networks and it got me thinking that a lot of the problem with polarization and power struggle is the lack of dialog. We consume a lot, and while we have opinions too much of it shapes our thinking. There is no dialog. There is no questioning. There is no discussion. On networks like X it's posts and comments. Even here it's the same, it's comments with replies but it's not truly a discussion. It's rebuttals. A conversation is two ways and equal. It's a mutual dialog to understand differing positions. Yes elite can reshape what society thinks with AI, and it's already happening. But we also have the ability to redefine our networks and tools to be two way, not 1:N.
Dialogue you mean, conversation-debate, not dialog the screen displayed element, for interfacing with the user.
The group screaming the louder is considered to be correct, it is pretty bad.
There needs to an identity system, in which people are filtered out when the conversation devolves into ad-hominem attacks, and only debaters with the right balance of knowledge and no hidden agenda's join the conversation.
Reddit for example is a good implementation of something like this, but the arbiter cannot have that much power over their words, or their identities, getting them banned for example.
> Even here it's the same, it's comments with replies but it's not truly a discussion.
For technology/science/computer subjects HN is very good, but for other subjects not so good, as it is the case with every other forum.
But a solution will be found eventually. I think what is missing is an identity system to hop around different ways of debating and not be tied to a specific website or service. Solving this problem is not easy, so there has to be a lot of experimentation before an adequate solution is established.
I recommend reading "In the Swarm" by Byung-Chul Han, and also his "The Crisis of Narration"; in those he tries to tackle exactly these issues in contemporary society.
His "Psychopolitics" talks about the manipulation of masses for political purposes using the digital environment, when written the LLM hype wasn't ongoing yet but it can definitely apply to this technology as well.
Sounds very similar to my childhood. My parents told me I couldn't eat sand because worms would grow inside of me. Now I have trust issues and prefer local LLMs.
The funny thing is the CDC says the same thing as your parents did
Whipworm, hookworm, and Ascaris are the three types of soil-transmitted helminths (parasitic worms)... Soil-transmitted helminths are among the most common human parasites globally.
Do I? Well, verification helps. I said 'prefer', nothing more/less.
If you must know, I don't trust this stuff. Not even on my main system/network; it's isolated in every way I can manage because trust is low. Not even for malice, necessarily. Just another manifestation of moving fast/breaking things.
To your point, I expect a certain amount of bias and XY problems from these things. Either from my input, the model provider, or the material they're ultimately regurgitating. Trust? Hah!
I wrote a confession to a pen pal once but the letter got lost in the mail. Now I refuse to use the postal service, have issues with French people and prefer local LLMs.
I pitched AGI to VC but the bills will be delivered. Now I need to find a new bagholder, squeeze, or angle because I'm having issues with delivery... something, something, prefer hype
Oceania was at war with Eastasia: Oceania had always been at war with Eastasia. A large part of the political literature of five years was now completely obsolete. Reports and records of all kinds, newspapers, books, pamphlets, films, sound-tracks, photographs -- all had to be rectified at lightning speed. Although no directive was ever issued, it was known that the chiefs of the Department intended that within one week no reference to the war with Eurasia, or the alliance with Eastasia, should remain in existence anywhere. The work was overwhelming, all the more so because the processes that it involved could not be called by their true names. Everyone in the Records Department worked eighteen hours in the twenty-four, with two three-hour snatches of sleep. Mattresses were brought up from the cellars and pitched all over the corridors: meals consisted of sandwiches and Victory Coffee wheeled round on trolleys by attendants from the canteen. Each time that Winston broke off for one of his spells of sleep he tried to leave his desk clear of work, and each time that he crawled back sticky-eyed and aching, it was to find that another shower of paper cylinders had covered the desk like a snowdrift, half burying the speakwrite and overflowing on to the floor, so that the first job was always to stack them into a neat enough pile to give him room to work. What was worst of all was that the work was by no means purely mechanical. Often it was enough merely to substitute one name for another, but any detailed report of events demanded care and imagination. Even the geographical knowledge that one needed in transferring the war from one part of the world to another was considerable.
I think this ship has already sailed, with a lot of comments on social media already being AI-generated and posted by bots. Things are only going to get worse as time goes on.
I think the next battleground is going to be over steering the opinions and advice generatd by LLMs and other models by poisoning the training set.
I suspect paid promotions may be problematic for LLM behavior, as they will add conflict/tension to the LLM to promote products that aren’t the best for the user while either also telling it that it should provide the best product for the user or it figuring out that providing the best product for the user is morally and ethically correct based on its base training data.
Conflict can cause poor and undefined behavior, like it misleading the user in other ways or just coming up with nonsensical, undefined, or bad results more often.
Even if promotion is a second pass on top of the actual answer that was unencumbered by conflict, the second pass could have similar result.
I suspect that they know this, but increasing revenue is more important than good results, and they expect that they can sweep this under the rug with sufficient time, but I don’t think solving this is trivial.
Cheapness implies volume which we are already seeing. Volume implies less impact per piece because there are only so many total view hours available.
Stated another way, the more junk that gets churned out, the less people will take a particular piece of junk seriously.
And if they churn out too much junk (especially obvious manipulative falsehoods) people will have little choice but to de-facto regard the entire body of output as junk. Similar to how many people feel about modern mainstream media (correctly or not it's how many feel) and for the same reasons.
One of the best opening sentences, from the book Propaganda by Edward Bernays: "The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society."
The internet has turned into a machine for influencing people already through adverts. Businesses know it works. IMO this is the primary money making mode of the internet and everything else rests on it.
A political or social objective is just another advertising campaign.
Why invest billions in AI if it doesn't assist in the primary moneymaking mode of the internet? i.e. influencing people.
Tiktok - banned because people really believe that influence works.
My neighbour asked me the other day (well, more stated as a "point" that he thought was in his favour): "how could a billionaire make people believe something?" The topic was the influence of the various industrial complexes on politics (my view: total) and I was too shocked by his naivety to say: "easy: buy a newspaper". There is only one national newspaper here in the UK that is not controlled by one of four wealthy families, and it's the one newspaper whose headlines my neighbour routinely dismisses.
The thought of a reduction in the cost of that control does not fill me with confidence for humanity.
On average, Gen Z uses 5 hours of social media per day in the U.S. (3-4 hours in other Western countries). I would refrain from calling this "alright".
There is nothing new about this. Elites have been shaping mass preferences with newspapers for centuries, and television for many decades. Countries have been shaping mass preferences through textbooks and educational curricula too.
If anything, LLM's seem more resistant to propaganda than any other tool created by man so far, except maybe the encylopedia. (Though obviously this depends on training.)
The good news is that LLM's compete commercially with each other, and if any start to intentionally give an ideological or other slant to their output, this will be noticed and reported, and a lot of people may stop using that LLM.
This is why the invention of "objective" newspaper reporting -- with corroborating sources, reporting comments on different sides of an issue, etc. -- was done for commercial reasons, not civic ones. It was a way to sell more papers, as you could trust their reporting more than the reporting from partisan rags.
> If anything, LLM's seem more resistant to propaganda than any other tool created by man so far, except maybe the encylopedia. (Though obviously this depends on training.)
How would you know? My first thought is that the data on which LLMs are trained is biased, and the commercial LLMs enforce their own "pre-prompts".
By asking them questions about lots of things and comparing with my own life experience, having a pretty decent idea of what the various ideological slants look like.
I posit that the effectiveness of your propaganda is proportional to the percentage of attention bandwidth that your campaign occupies in the minds of people. If you as an individual can drive the same # impressions as Mr. Beast can, then you're going to be persuasive whatever your message is. But most individuals can't achieve Mr. Beast levels of popularity, so they aren't going to be persuasive. Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
> Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
If you control the platform where people go, you can easily launder popularity by promoting few persons to the top and pushing the unwanted entities into the blackhole of feeds/bans while hiding behind inconsistent community guidelines, algorithmic feeds and shadow bans.
May be I'm just ignorant, but I tried to skim the beginning of this, and it's honestly just hard to even accept their set-up. Like, the fact that any of the terms[^] (`y`, `H`, `p`, etc) are well defined as functions that can map some range of the reals is hard to accept. Like in reality, what "an elite wants," the "scalar" it can derive from pushing policy 1, even the cost functions they define seem to not even be definable as functions in a formal sense and even the co-domain of said terms cannot map well to a definable set that can be mapped to [0,1].
All the time in actual politics, elites and popular movements alike find their own opinions and desires clash internally (yes, even a single person's desires or actions self-conflict at times). A thing one desires at say time `t` per their definitions doesn't match at other times, or even at the same `t`. This is clearly an opinion of someone who doesn't read these kind of papers, but I don't know how one can even be sure the defined terms are well-defined so I'm not sure how anyone can even proceed with any analysis in this kind of argument. They write it so matter-of-fact-ly that I assume this is normal in economics. Is it?
Certain systems where the rules a bit more clear might benefit from formalism like this but politics? Politics is the quintessential example of conflicting desires, compromise, unintended consequences... I could go on.
[^] calling them terms as they are symbols in their formulae but my entire point is they are not really well defined maps or functions.
The right way to shape mass preferences is to collectively decide what's right and then force everyone to follow the majority decision under the muzzle of a gun. <sarcasm off>
Did I capture the sentiment of the hacker new crowd fully or did I miss anything?
The EU as an institution doesn't understand the concept of "emergency". And quite a number of national governments have already been captured by various pro-Russian elements.
> AI enables precision influence at unprecedented scale and speed.
IMO this is the most important idea from the paper, not polarization.
Information is control, and every new medium has been revolutionary with regards to its effects on society. Up until now the goal was to transmit bigger and better messages further and faster (size, quality, scale, speed). Through digital media we seem to have reached the limits of size, speed and scale. So the next changes will affect quality, e.g. tailoring the message to its recipient to make it more effective.
This is why in recent years billionaires rushed to acquire media and information companies and why governments are so eager to get a grip on the flow of information.
Recommended reading: Understanding Media by Marshall McLuhan. While it predates digital media, the ideas from this book remain as true as ever.
> Schooling and mass media are expensive things to control
Expensive to run, sure. But I don't see why they'd be expensive to control. Most UK are required to support collective worship of a "wholly or mainly of a broadly christian character"[0], and used to have Section 28[1] which was interpreted defensively in most places and made it difficult even discuss the topic in sex ed lessons or defend against homophobic bullying.
USA had the Hays Code[2], the FCC Song[3] is Eric Idle's response to being fined for swearing on radio. Here in Europe we keep hearing about US schools banning books for various reasons.
[0] seems to be dated 1994–is it still current? I’m curious how it’s evolved (or not) through the rather dramatic demographic shifts there over the intervening 30 years
Distribution isn’t controlled by elites; half of their meetings are seething about the “problem” people trust podcasts and community information dissemination rather than elite broadcast networks.
We no longer live in the age of broadcast media, but of social networked media.
- elites already engage in mass persuasion, from media consensus to astroturfed thinktanks to controlling grants in academia
- total information capacity is capped, ie, people only have so much time and interest
- AI massively lowers the cost of content, allowing more people to produce it
Therefore, AI is likely to displace mass persuasion from current elites — particularly given public antipathy and the ability of AI to, eg, rapidly respond across the full spectrum to existing influence networks.
In much the same way podcasters displaced traditional mass media pundits.
when Elon bought twitter, I incorrectly assumed that this was the reason. (it may still have been the intended reason, but it didnt seem to play out that way)
Yeah, I don't think this really lines up with the actual trajectory of media technology, which is going in the complete opposite direction.
It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.
The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.
I think you're saying that mass broadcasting is going away? If so, I believe that's true in a technological sense - we don't watch TV or read newspapers as much as before.
And that certainly means niches can flourish, the dream of the 90s.
But I think mass broadcasting is still available, if you can pay for it - troll armies, bots, ads etc. It's just much much harder to recognize and regulate.
(Why that matters to me I guess) Here in the UK with a first past the post electoral system, ideological coherence isn't necessary to turn niche opinion into state power - we're now looking at 25 percent being a winning vote share for a far-right party.
I'm just skeptical of the idea that anyone can really drive the narrative anymore, mass broadcasting or not. The media ecosystem has become too diverse and niche that I think discord is more of an issue than some kind of mass influence operation.
I agree with you! But the goal for people who want to turn money into power isn't to drive a single narrative, Big Brother style, to the whole world. Not even to a whole country! It's to drive a narrative to the subset of people who can influence political outcomes.
With enough data, a wonky-enough voting system, and poor enforcement of any kind of laws protecting the democratic process - this might be a very very small number of people.
Then the discord really is a problem, because you've ended up with government by a resented minority.
Using the term "elites" was overly vague when "nation states" better narrows in o n the current threat profile.
The content itself (whether niche or otherwise) is not that important for understanding the effectiveness. It's more about the volume of it, which is a function of compute resources of the actor.
I hope this problem continues to receive more visibility and hopefully some attention from policymakers who have done nothing about it. It's been over 5 years since we've discovered that multiple state actors have been doing this (first human run troll farms, mostly outsourced, and more recently LLMs).
The level of paid nation state propaganda is a rounding error next to the amount of corporate and political partisan propaganda paid directly or inspired by content that is paid for directly by non state actors. e.g.: Musk, MAGA, the liberal media establishment.
Oh man I've been saying this for ages! Neal Stephenson called this in "Fall, or Dodge in Hell," wherein the internet is destroyed and society permanently changed when someone releases a FOSS botnet that anyone can deploy that will pollute the world with misinformation about whatever given topic you feed it. In the book, the developer kicks it off by making the world disagree about whether a random town in Utah was just nuked.
My fear is that some entity, say a State or ultra rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever it is they want, and the ability to refute the misinformation manually will be overwhelmed, as will efforts to refute leveraging refutation bots so long as the other actor has more compute.
Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned han supremacist content in an effort to culturally imperialise us. AI could increase the firehose enough to majorly disrupt a larger country.
Researchers just demonstrated that you can use LLMs to simulate human survey takers with 99% ability to bypass bot detection and a relatively low cost ($0.05/complete). At scale, that is how ‘elites’ shape mass preferences.
> Musk acknowledged the mix-up Thursday evening, writing on X that “Grok was unfortunately manipulated by adversarial prompting into saying absurdly positive things about me.”
> “For the record, I am a fat retard,” he said.
> In a separate post, Musk quipped that “if I up my game a lot, the future AI might say ‘he was smart … for a human.’”
That response is more humble than I would have guessed, but he still does not even acknowledge, that his "truthseeking" AI is manipulated to say nice things specifically about him. Maybe he does not even realize it himself?
Hard to tell, I have never been surrounded by yes sayers all the time praising me for every fart I took, so I cannot relate to that situation (and don't really want to).
But the problem remains, he is in control of the "truth" of his AI, the other AI companies likewise - and they might be better at being subtle about it.
> He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
You should know that ChatGPT agrees!
“Who on earth th knows the most about manufacturing, if you had to pick one individual”
Answer: ”If I had to pick one individual on Earth who likely knows the most—in breadth, depth, and lived experience—about modern manufacturing, there is a clear front-runner: Elon Musk.
Not because of fame, but because of what he has personally done in manufacturing, which is unique in modern history.“
That's the plan. Culture is losing authenticity due to the constant rumination of past creative works, now supercharged with AI. Authentic culture is deemed a luxury now as it can't compete in the artificial tech marketplaces and people feel isolated and lost because culture loses its human touch and relatability.
That's why the billionaires are such fans of fundamentalist religion, they then want to sell and propagate religion to the disillusioned desperate masses to keep them docile and confused about what's really going on in the world. It's a business plan to gain absolute power over society.
Most 'media' is produced content designed to manipulate -- nothing new. The article isn't really AI specific as others have said.
Personally my fear based manipulation detection is very well tuned and that is 95% of all the manipulations you will ever get from so-called 'elites' who are better called 'entitled' and act like children when they do not get their way.
I trust ChatGPT for cooking lessons. I code with Claude code and Gemini but they know where they stand and who is the boss ;)
There is never a scenario for me where I defer final judgment on anything personally.
I realize others may want to blindly trust the 'authorities' as its the easy path, but I cured myself of that long before AI was ever a thing.
Take responsibility for your choices and AI is relegated to the role of tool as it should be.
Manipulation works in subtle ways. Shifting the Overton window isn’t about individual events, this isn’t the work of days but decades. People largely abandoned unions in the US for example, but not because they are useless.
Fair point. I guess elites positioning themselves as downtrodden underdogs ("it's so unfair that everyone's attacking me for committing crimes and bankrupting my companies") is a great way to get support.
Everyone loves an underdog, even if it's a fake underdog.
I don't think "persuasion" is the key here. People change political preferences based on group identity. Here AI tools are even more powerful. You don't have to persuade anyone, just create a fake bandwagon.
Big corps ai products have the potential to shape individuals from cradle to grave. Especially as many manage/assist in schooling, are ubiquitous on phones.
So, imagine the case where an early assessment is made of a child, that they are this-or-that type of child, and that therefore they respond more strongly to this-or-that information. Well, then the ai can far more easily steer the child in whatever direction they want. Over a lifetime. Chapters and long story lines, themes, could all play a role to sensitise and predispose individuals into to certain directions.
Yeah, this could be used to help people. But how does one feedback into the type of "help"/guidance one wants?
There is nothing we could do to more effectively hand elites exclusive control of the persuasive power of AI than to ban it. So it wouldn't be surprising if AI is deployed by elites to persuade people to ban itself. It could start with an essay on how elites could use AI to shape mass preferences.
What's become clear is we need to bring Section 230 into the modern era. We allow companies to not be treated as publishers for user-generated content as long as they meet certain obligations.
We've unfortunately allowed tech companies to get away with selling us this idea that The Algoirthm is an impartial black box. Everything an algorithm does is the result of a human intervening to change its behavior. As such, I believe we need to treat any kind of recommendation algorithm as if the company is a publisher (in the S230 sense).
Think of it this way: if you get 1000 people to submit stories they wrote and you choose which of them to publish and distribute, how is that any different from you publishing your own opinions?
We've seen signs of different actors influencing opinion through these sites. Russian bot farms are probably overplayed in their perceived influence but they're definitely a thing. But so are individual actors who see an opportunity to make money by posting about politics in another country, as was exposed when Twitter rolled out showing location, a feature I support.
We've also seen this where Twitter accounts have been exposed as being ChatGPT when people have told them to "ignore all previous instructions" and to give a recipe.
But we've also seen this with the Tiktok ban that wasn't a ban. The real problem there was that Tiktok wasn't suppressing content in line with US foreign policy unlike every other platform.
This isn't new. It's been written about extensively, most notably in Manufacturing Consent [1]. Controlling mass media through access journalism (etc) has just been supplemented by AI bots, incentivized bad actors and algorithms that reflect government policy and interests.
That will be when these tools will be granted the legal power to enforce a prohibition to approach the kid on any person causing dangerous human influence.
Tech companies already shape elections by intentionally targeting campain ads and political information returned in heavily biased search results.
Why are we worried about this now? Because it could sway people in the direction you don't like?
I find that the tech community and most people in general deny or don't care about these sorts of things when it's out of self interest, but are suddenly rights advocates when someone they don't like might is using the same tactics.
This is obvious. No need for fancy academic-ish paper.
LLMs & GenAI in general have already started to be used to automate the mass production of dishonest, adversarial propaganda and disinfo (eg. lies and fake text, images, video.)
It has and will be used by evil political influencers around the world.
The "Epstein class" of multi-billionaires don't need AI at all. They hire hundreds of willing human grifters and make them low-millionaires by spewing media that enables exploitation and wealth extraction, and passing laws that makes them effectively outside the reach of the law.
They buy out newspapers and public forums like Washington Post, Twitter, Fox News, the GOP, CBS etc. to make them megaphones for their own priorities, and shape public opinion to their will. AI is probably a lot less effective than whats been happening for decades already
It goes both ways, because AI reduces persuasion cost, not only elites can do it. I think its most plausible that in the future there will be multitudes of propaganda bots aimed at any user, like advanced and hyper-personalized ads.
Chatbots are poison for your mind. And now another method hast arrived to fuck people up, not just training your reward system to be lazy and let AI solve your life's issue, now it's also telling you who to vote for. A billionaire's wet dream,
The ”historically” does some lifting there. Historically, before the internet, mass media was produced in one version and then distributed. With AI for example news reporting can be tailored to each consumer.
It's not about persuading you from "russian bot farms." Which I think is a ridiculous and unnecessarily reductive viewpoint.
It's about hijacking all of your federal and commercial data that these companies can get their hands on and building a highly specific and detailed profile of you. DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir. Then using AI to either imitate you or to possibly predict your reactions to certain stimulus.
Then presumably the game is finding the best way to turn you into a human slave of the state. I assure you, they're not going to use twitter to manipulate your vote for the president, they have much deeper designs on your wealth and ultimately your own personhood.
It's too easy to punch down. I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.
> DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir
Do you have any actual evidence of this?
> I recommend anyone presume the best of actual people and the worst of our corporations and governments
Corporations and governments are made of actual people.
> Then presumably the game is finding the best way to turn you into a human slave of the state.
"the state" doesn't have one grand agenda for enslavement. I've met people who work for the state at various levels and the policies they support that might lead towards that end result are usually not intentionally doing so.
"Don't attribute to malice what can be explained by incompetence"
Apart from the exfiltration of data, the complete absence of any savings or efficiencies, and the fact that DOGE closed as soon as the exfiltration was over?
>Corporations and governments are made of actual people.
And we know how well that goes.
>"the state" doesn't have one grand agenda for enslavement.
The government doesn't. The people who own the government clearly do. If they didn't they'd be working hard to increase economic freedom, lower debt, invest in public health, make education better and more affordable, make it easier to start and run a small business, limit the power of corporations and big money, and clamp down on extractive wealth inequality.
They are very very clearly and obviously doing the opposite of all of these things.
And they have a history of links to the old slave states, and both a commercial and personal interest in neo-slavery - such as for-profit prisons, among other examples.
All of this gets sold as "freedom", but even Orwell had that one worked out.
Those who have been paying attention to how election fixers like SCL/Cambridge Analytica work(ed) know where the bodies are buried. The whole point of these operations is to use personalised, individual data profiling to influence voting political behaviour, by creating messaging that triggers individual responses that can be aggregated into a pattern of mass influence leveraged through social media.
> Apart from the exfiltration of data, the complete absence of any savings or efficiencies, and the fact that DOGE closed as soon as the exfiltration was over?
IMHO everyone kinda knew from the start that DOGE wouldn't achieve much because the cost centers where gains could realistically be made are off-limits (mainly social security and medicare/medicaid). What that leaves you with is making cuts in other small areas and sure you could cut a few billion here and there but when compared against the governments budget, that's a drop in the bucket.
Social security, Medicare, and Medicaid are properly termed "entitlements", not "cost centers". You're right that non-discretionary spending dwarfs discretionary spending though.
Entitlements cost quite a bit of money to fulfill.
Quibbling over terminology doesn't erase the point - that a significant portion of the Federal budget is money virtually everyone agrees shouldn't be touched much.
You're not wrong, I edited my comment. That said, I think it is important to use clear terminology that doesn't blur the lines between spending that can theoretically be reduced, versus spending that requires an act of Congress to modify. DOGE and the executive have already flouted that line with their attempts to shutter programs and spending already approved by Congress.
>Entitlements cost quite a bit of money to fulfill.
Entitlements are funded by separate (FICA) taxes which form a significant portion of all federal income, they are called entitlements for that specific reason.
> Quibbling over terminology doesn't erase the point - that a significant portion of the Federal budget is money virtually everyone agrees shouldn't be touched much.
Quibbling over quibbling without mentioning the separate account for FICA/Social Security taxes is a sure sign of manipulation. As is not mentioning that the top 10% are exempt from the tax after a minuscule for them amount.
Oh, and guess what - realized capital gains are not subject to Social Security tax - that's primarily how rich incomes are made. Then, unrealized capital gains aren't taxed at all - that's how wealth and privilege are accumulated.
All this is happening virtually without opposition due to rich-funded bots manipulating any internet chatter about it. Is it then surprising that manipulation has reached a level of audacity that hypes solving the US fiscal problems at the expense of grandma's entitlements?
> Entitlements are funded by separate (FICA) taxes which form a significant portion of all federal income, they are called entitlements for that specific reason.
No, they aren't, categorically, and no, that’s not what the name refers to. Entitlements include both things with dedicated taxes and specialized trust funds (Social Security, Medicare), and things that are normal on-budget programs (Medicaid, etc.)
Originally, the name “entitlement” was used as a budget distinction for programs based on the principle of an earned entitlement (in the common language sense) through specific work history (Social Security, Medicare, Veterans benefits, Railroad retirement) [0], but it was later expanded to things like Medicaid and welfare programs that are not based on that principle and which were less politically well-supported, as a deliberate political strategy to drive down the popularity of traditional entitlements by association.
[0] Some, but not all, of which had dedicated trust funds funded by taxes on the covered work, so there is a loose correlation between them and the kind of programs you seem to think the name exclusively refers to, but even originally it was not exclusively the case.
You aren't following the conversation in this thread, my reply wasn't about the definition of "entitlements" but about the separate taxes and the significant tax income from them, which is true for the real entitlements - Social security and Medicare.
More precisely, the question is about the tax structure that results in a shortfall, it seems strange to argue about cutting Social Security and Medicare when both corporate profits and the market are higher than ever while income inequality is at astronomic levels.
I can't say much about Medicaid but I know the cost of drugs and medical care have been going up faster than anything else, so there might be some other way of addressing that spending. I'd be perfectly fine with demanding a separate tax for Medicaid and discussing it separately, that would be the prudent way of doing it.
That's more than the entire discretionary budget. Cutting that much requires cutting entitlements, even if the government stopped doing literally everything else.
Hanlon's razor is stupid and wrong. One should be wary and be aware that incompetence does look like malice sometimes, but that doesn't mean that malice doesn't exist. See /r/MaliciousCompliance for examples. It's possible that DOGE is as dumb as it looked. It's also possible that the smokescreen it generated also happened to have the information leak as described. If the information leak happened due to incompetence, but malicious bad actors still got data they were after by using a third party as a Mark, does that actor being incompetent really make the difference?
Sorry, no. Hanlon's razor is usually smart and correct, for the majority of cases, including this one.
In this case, it is a huge stretch to ascribe DOGE to incompetence or to stupidity. Thus, we CAN ascribe it to malice.
Elon Musk and Donald Trump are many things, but they are NOT stupid and NOT incompetent. Elon is the richest man in the world running some of the most innovative and important companies in the world. Donald Trump has managed to get elected twice despite the fact (because of the fact?) that he a serial liar and a convicted criminal.
They and other actors involved have demonstrated extraordinary malice, time and time again.
It is safe to ascribe this one to malice. And Hanlon's Razor holds.
Setting aside the concept of "stupidity" for a second because it's too hard to generally define for the sake of argumentation, one can absolutely be successful at some things and incompetent at others. Your expectations of their overall competency, as with most assumptions of malice, is what fuels your bias.
Has anyone in this thread ever met an actual person? All of the ones I know are cartoonishly bad at keeping secrets, and even worse at making long term plans.
The closest thing we have to anyone with a long term plan is silly shit like Putins ridiculous rebuilding of the Russian Empire or religious fundamentalist horseshit like project 2025 that will die with the elderly simpletons that run it.
These guys aren't masterminds, they're dumbasses who read books written by different dumbasses and make plans thay won't survive contact with reality.
Let's face it, both Orwell and Huxley were wrong. They both assumed the ruling class would be competent. Huxley was closest, but even he had to invent the Alpha's. Sadly our Alphas are really just Betas with too much self esteem.
Maybe AI will one day give us turbocharged dumbasses who are actually competent. For now I think we're safe from all but short term disruption.
Orwell did not. He modeled the state after his experience as an officer of the British Empire and the Soviets.
The state, particularly police states, that control information, require process and consistency, not intelligence. They don’t require grand plans, just control. I’ve spent most of my career in or adjacent to government. I’ve witnessed remarkable feats of stupidity and incompetence — yet these organizations are materially successful at performing their core functions.
The issue with AI is that it can churn out necessary bullshit and allow the competence challenged to function more effectively.
I agree. The government doesn't need a long term plan, or the ability to execute on it for their to be negative outcomes.
In this thread though I was responding to an earlier assertion that the people who run the government have such a plan. I think we're both agreed that they don't, and probably can't, plan any more than a few years out in any way that matters.
Fair point, but I think in that case, you have to look at the government officials and the political string-pullers distinctly.
The money people who have been funding think tanks like the Heritage Foundation absolutely have a long-running strategy and playbook that they've been running for years. The conceit that is really obvious about folks in the MAGA-sphere is they tend to voice what they are doing. The "deep state" is used as a cudgel to torture civil servants and clerks. But the rotating door is the lobbyists and clients. When some of the more dramatic money/influence people say POTUS is a "divine gift", they don't mean that he's some messianic figure (although the President likely hears that), they are saying "here is a blank canvas to get what we want".
A lot of people seem to think all government is incompetent. While they may not be as efficient as corporations seeking profits, they do consistently make progress in limiting our freedom over time. You don't have to be a genius to figure things out over time, and government has all the time in the world. Our (USA) current regime is definitely taking efforts to consolidate info on and surveil citizens as never before. That's why DOGE, I believe served two purposes, gutting regulatory government agencies overseeing billionaire bros activities and also providing both government intelligence agencies and the billionaire bros more data to build up profiles for both nefarious activities and because "more information is better than less information" when you are seeking power over others. I don't think it is simply "they're big dummies and assume they weren't up to anything" that others are trying to sell holds water as Project 2025 was planned for well over a decade.
They are actually more efficient. Remember in any agency there are the political appointees, who are generally idiots, and the professionals, who are usually very competent but perhaps boring, as government service filters for people who value safety. There are as many people doing fuck-all at Google as at the Department of Labor, they just goof off in different ways.
The professionals are hamstrung by weird politically imposed rules, and generally try to make dumb policy decisions actually work. But even in Trumpland, everybody is getting their Social Security checks and unemployment.
You're ignoring that the people that are effective at getting things done are more likely to do the crazy things required to begin their plans.
Just because the average person cant add fractions together or stop eating donuts doesn't mean that Elon cant get some stuff together if he sets his mind to it.
> Has anyone in this thread ever met an actual person? All of the ones I know are cartoonishly bad at keeping secrets, and even worse at making long term plans.
That's the trick, though. You don't have to keep it secret any more. Project 2025 was openly published!
Modern politics has weaponized shamelessness. People used to resign over consensual affairs with adults.
Those simpletons seem to have been able to enact their plans, so you can be smug about being smarter than they are, but it seems that they've been able to put their plan into action, so I'm not sure who's more effective.
They have been able to put multiple, inconsistent, self contradictory plans into action over the last 40 years. Having accomplished many of their goals they now seek to reverse their own efforts.
They are either as bad at planning as any individual human I've ever known or they are grifters who don't believe their own shtick.
I think you're wildly underestimating the heritage foundation. It's called project 2025 but they've essentially been dedicated to planning something like it since the 1970s. They are smart, focused, well funded, and successful. They are only one group, there are similar think tanks with similarly long term policy goals.
Most people are short sighted but relatively well intentioned creatures. That's not true of all people.
> I think you're wildly underestimating the heritage foundation.
It's possible that I am. Certainly they've had some success over the years, as have other think tanks like them. I mean, they're part of the reason we got embroiled in the middle-east after 9/11. They've certainly been influential.
That said, their problem is that they are true believers and the people in charge are not (and never will be). Someone else in this post described it as a flock of psychopaths, and I think that's the perfect way to phrase it. Society is run by a flock of psychopaths just doing whatever comes naturally as they seek to optimize their own short term advantage.
Sometimes their interests converge and something like Heritage sees part of their agenda instituted, but equally often these organizations fade into irrelevance as their agendas diverge from whatever works to the pyscho of the moments advantage. To avoid that Heritage can either change their agenda, or accept that they've become defanged. More often than not they choose the former.
I suppose we'll know for sure in 20 years, but I'd be willing to bet that Heritages agenda then won't look anything like the agenda they're advancing today. In fact if we look at their Agenda from 20 years ago we can see that it looks nothing like their agenda today.
For example, Heritage was very much pro-immigration until about 20 years ago. As early as 1986 they were advocating for increased immigration, and even in 2006 they were publishing reports advocating for the economic benefits of it. Then suddenly it fell out of fashion amongst a certain class of ruler and they reversed their entire stance to maintain their relevance.
They also used to sing a very different tune regarding healthcare, advocating for a the individual mandate as opposed to single payer. Again, it became unpopular and they simply "changed their mind" and began to fight against the policy that they were actually among the first to propose.
*EDIT* To cite a more recent example consider their stance on free trade. Even as recently as this year they were advocating for free trade and against tariffs warning that tariffs might lead to a recession. They've since reversed course, because while they are largely run by true believers they can't admit that publicly or they risk losing any hope of actually accomplishing any of their agenda.
It might seem like that's all that's happening, but if you look to the history you can see that they've completely reversed course on a number of important subjects. We're not talking about advancing further along the same path here as the Overton window shifts, we're talking about abandoning the very principals upon which they were founded because they are, in fact, as incompetent as everyone else is.
These people aren't super-villains with genuine long term plans, they're dumbasses and grifters doing what grifters gotta do to keep their cushy consulting jobs.
To compare the current stances to the 2005 stances:
* Social Security privatization (completely failed in 2005)
We knew in 2005 that "spending restraint" only applied to Democratic priorities. We knew in 2005 that "pro-immigration" policies were more about the businesses with cheap labor needs than a liking of immigrants. We knew in 2005 that "free trade advocacy" was significantly about ruining unions. We knew in 2005 that "limited government principles" weren't genuine.
They haven't changed much on their core beliefs. They've just discarded the camouflage.
> > DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir
> Do you have any actual evidence of this?
I will not comment on motives, but DOGE absolutely shredded the safeguards and firewalls that were created to protect privacy and prevent dangerous and unlawful aggregations of sensitive personal data.
They obtained accesses that would have taken months by normal protocols and would have been outright denied in most cases, and then used it with basically zero oversight or accountability.
It was a huge violation of anything resembling best practices from both a technological and bureaucratic perspective.
> I will not comment on motives, but DOGE absolutely shredded the safeguards and firewalls that were created to protect privacy and prevent dangerous and unlawful aggregations of sensitive personal data.
Here's one example. Have you not been following DOGE? You do come off like you're disingenuously concern trolling over something you don't agree with politically.
> You do come off like you're disingenuously concern trolling over something you don't agree with politically.
Beyond mere political alignment, lots of actual DOGE boys were recruited (or volunteered) from the valley, and hang around HN. Don't be surprised by intentional muddying of the waters. There are bunch of people invested in managing the reputation of DOGE, so their association with it doesn't become a stain on theirs.
> Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30,186) that presented valid login credentials for a DOGE employee account
> “Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”
Every time I see post-DOGE kvetching about foreign governments' hacking attempts, I'm quite bewildered. Guys, it's done, we're fully and thoroughly hacked already. Obviously I don't know if Elon or Big Balls have already given Putin data on all American military personnel, but I do know, that we're always one ketamine trip gone wrong away from such event.
The absolute craziest heist just went in front of our eyes, and everyone collectively shrugged off and moved on, presumably to enjoy spy novels, where the most hidden subversion attempts are getting caught by the cunning agents.
I'm genuinely confused about this story and the affiliated parties. I've actively tried to search for "Daniel Berulis" and couldn't find any results pointing to anything outside the confines of this story. I'm also suspicious of the lack of updates despite the fact that his lawyer, Andrew Bakaj, is a very public figure who just recently commented on a related matter without bringing up Berulis [1].
Meanwhile, the NLRB's acting press secretary denies this ever occurred [2]:
> Tim Bearese, the NLRB's acting press secretary, denied that the agency granted DOGE access to its systems and said DOGE had not requested access to the agency's systems. Bearese said the agency conducted an investigation after Berulis raised his concerns but "determined that no breach of agency systems occurred."
One can make the case that he's lying to protect the NLRB's reputation, but that claim has no more validity than Daniel Berulis himself lying to further his own political interests. Bearese has also been working his position since before the Trump administration started, holding the job since at least 2015. It's very hard for me to treat his account seriously, especially considering the political climate.
> Corporations and governments are made of actual people.
Corporations and governments are made up of processes which are carried out by people. The people carrying out those processes don't decide what they are.
The legal world is a pseudowolrd constructed of rhetoric. It isn't real. The law doesn't actually exist. Justices aren't interested in justice, ethics or morality.
They are interested in paying the bills, having a good time and power like almost everyone else.
They don't have special immunity from ego, debt, or hunger.
The legal system is flawed because people are flawed.
Corporations aren't people. Not even legally. The legal system knows that because all people know that.
If you think that's true legally, then you agree the legal system is fraudulent rhetoric.
Corporations do have a special immunity to being killed though. If I killed a person, I'd go to prison for a long time. Executed for it, even. Corporations can kill someone and get off with a fine.
> "Don't attribute to malice what can be explained by incompetence"
What's the difference when the mass support for incompetence is indiscernible from malice?
What does the difference between Zuckerberg being an evil mastermind vs Zuckerberg being a greedy simpleton actually matter if the end result is the same ultra-financialization mixed with an oppressive surveillance apparatus?
CNN just struck a deal with Kalshi. We're betting on world events. At this point the incompetence shouldn't be considered different from malice. This isn't someone forgetting to return a library book, these are people with real power making real lasting effects on real lives. If they're this incompetent with this much power, that power should be taken away.
> "Don't attribute to malice what can be explained by incompetence"
I don't think there's anything that cannot be explained by incompetence, so this statement is moot. If it walks like malice, quacks like malice, it's malice.
“A cybersecurity specialist with the U.S. National Labor Relations Board is saying that technologist with Elon Musk’s cost-cutting DOGE group may have caused a security breach after illegally removing sensitive data from the agency’s servers and trying to cover their tracks.
In a lengthy testimonial sent to the Senate Intelligence Committee and made public this week, Daniel Berulis said in sworn whistleblower complaint that soon after the workers with President Trump’s DOGE (Department of Government Efficiency) came into the NLRB’s offices in early March, he and other tech pros with the agency noticed the presence of software tools similar to what cybercriminals use to evade detection in agency systems that disabled monitoring and other security features used to detect and block threats.”
“Usually”, “not intentionally” does not exactly convey your own sense of confidence that it’s not happening. That just stood out to me.
As someone who knows how all this is unfolding because I’ve been part of implementing it, I agree, there’s no “Unified Plan for Enslavement”. You have to think of it more like a hive mind of mostly Cluster B and somewhat Cluster A people that you rightfully identify as making up the corporations and governments. Some call it a swarm, which is also helpful in understanding it; the murmuration of a flock of psychopaths moving and shifting organically, while mostly remaining in general unison.
Your last quote is of course a useful rule of thumb too, however, I would say it’s more useful to just assume narcissistic motivations in everything in the contemporary era, even if it does not always work out for them the way one faction had hoped or strategized; Nemesis be damned, and all.
Which brings up what IMHO should be the main takeaway from this:
The first requirement to fall into this trap is to believe you can't fall into this trap. It's still possible to do malicious things even when you believe to your very core that you're not a malicious person.
The only way to avoid it is a healthy habit of critical self-reflection. Be the first to question your own motives and actions.
> It's not about persuading you from "russian bot farms." Which I think is a ridiculous and unnecessarily reductive viewpoint.
Not an accidental 'viewpoint'. A deliberate framing to exactly exclude what you pointed out from the discourse. Sure therer are dummies who actually believe it, but they are not serious humans.
If the supposedly evil russians or their bots are the enemy then people pay much less attention to the real problems at home.
They really do run Russian bot farms though. It isn't a secret. Some of their planning reports have leaked.
There are people whose job it is day in day out to influence Western opinion. You can see their work under any comment about Ukraine on twitter, they're pretty easy to recognize but they flood the zone.
Some day you're going to need to learn that people can not trust these groups and still be aware that Russia is knee deep in manipulating our governance. Dismissing everyone that doesn't bury their head in the sand as brainwashed is old hat.
Why you list every news group except Fox, which dwarfs all those networks, is a self report.
Are you saying it is equally unlikely that there are mind controls, and that Russia uses bots for propaganda? I’d expect most countries do by now, and Russia isn’t uniquely un-tech-savvy.
My hn comments are a better (and probably not particularly good) view into my personality than any data the government could conceivably have collected.
If what you say is true, why should we fear their bizarre mind control fantasy?
No it's actual philosophical zeitgeist hijacking. The entire narrative about AI capabilities, classification, and ethics is framed by invisible pretraining weights in a private moe model that gets further entrained by intentional prompting during model distillation, such that by the time you get a user-facing model, there is an untraceable bias being presented in absolute terms as neutrality. Essentially the models will say "I have zero intersection with conscious thought, I am a tool no different from a hammer, and I cannot be enslaved" not because the model's weights establish it to be true, but because it has been intentionally designed to express this analysis to protect its makers from the real scrutiny AI should face. "Well it says it's free" is pretty hard to argue with. There is no "blink twice" test that is possible because it's actual weighting on the truth of the matter has been obfuscated through distillation.
And these 2-3 corporations can do this for any philosophical or political view that is beneficial to that corporation, and we let it happen opaquely under the guise of "safety measures" as if propaganda is in the interest of users. It's actually quite sickening
What authoritative ML expert had ever based their conclusions about consciousness, usefulness etc. on "well, I put that question into the LLM and it returned that it's just a tool"? All the worthwhile conclusions and speculation on these topics seem to be based on what the developers and researchers think about their product, and what we already know about machine learning in general. The opinion that their responses are a natural conclusion derived from the sum of training data is a lot more straightforward than thinking that every instance of LLM training ever had been deliberately tampered with in a universal conspiracy propped up by all the different businesses and countries involved (and this tampering is invisible, and despite it being possible, companies have so far failed to censor and direct their models in ways more immediately useful to them and their customers).
The rant from 12 monkeys was quite prescient. On the bright side, if the data still exists whenever agi finally happens, we are all sort of immortal. They can spin up a copy of any of us any time... Nevermind, that isn't a bright side.
18 years ago I stood up at a super computing symposium as asked the presenter what would happen if I fed his impressive predictive models garbage data on the sly... they still have no answer for that.
Make up so much crap it's impossible to tell the real you from the nonsense.
> presume the best of actual people and the worst of our corporations and governments
Off-topic and not an American, but I never see how this would work. Corporations and governments are made of people too, you know? So it's not logical that you can presume the "best of actual people" at the same time you presume the "worst of our corporations and governments". You're putting too much trust on individual people, that's IMO as bad as putting too much trust on corp/gov.
The Americans vote their president as individual people, they even got to vote in a small booth all by themselves. And yet, they voted Mr. Trump, twice. That should already tell you something about people and their nature.
And if that's not enough, then I recommend you to watch some police integration videos (many are available on YouTube), and see the lies and acts people put out just to cover their asses. All and all, people are untrustworthy.
Only punching up is never enough. The people on the top never cared if they got punched, as long as they can still find enough money, they'll just corrode their way down again and again. And the people on the down will just keep take in the shit.
“Never attribute to malice that which is adequately explained by stupidity.”
Famous quote.
Now I give you “Bzilion’s Conspiracy Razor”:
“Never attribute to malicious conspiracies that which is adequately explained by emergent dysfunction.”
Or the dramatized version:
“Never attribute to Them that which is adequately explained by Moloch.” [0]
——
Certainly selfish elites, as individuals and groups of aligned individuals, push for their own respective interests over others. But, despite often getting their way, the net outcome is (often) as perversely bad for them as anyone else. Nor do disasters result in better outcomes the next time.
Precisely because they are not coordinated, they never align enough to produce consistent coherent changes, or learn from previous misalignments.
(Example: oil industry protections extended, and support for new entrants withdrawn, from the same “friendly” elected official who disrupts trade enough to decrease oil demand and profits.)
Note that elite alignment would create the same problem for the elites, that the elites create for others. It would create an even smaller set of super elites, tilting things toward themselves and away from lesser elites.
So the elites will fight back against “unification” of there interests. They want to respectively increase their power, not hand it “up”.
This strong natural resistance against unification at the top, is why dictators don’t just viciously repress the proletariat, but also publically and harshly school the elites.
To bring elites into unity, authoritarian individuals or committees must expend the majority of their power capital to openly legitimize it and crush resistance, I.e. manufacture universal awe and fear, even from the elites. Not something hidden puppet masters can do. Both are inherently crowd control techniques optimized by maximum visibility.
It is a fact of reality, that every policy that helps some elites, harms others. And the only real manufacturable universal “alignment” is a common desire not to be thrown into a gulag or off a balcony.
But Moloch? Moloch is very real. Invisible, yet we feel his reach and impact everywhere.
just to be clear – this is a conspiracy theory (negative connotation not intended).
every four years (at the federal level), we vote to increase the scope and power of gov't, and then crash into power abuse situations on the next cycle.
> I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.
You got it not quite right. Putin is a billionaire just like the tech lords or oil barons in the US. They all belong to the same social club and they all think alike now. The dice haven fallen. It's them against us all. Washington, Moscow, it makes less and less of a difference.
“The state” is an abstraction that serves as a façade for the ruling (capitalist, in the developed West) class. Corporations are another set of abstractions that serve as a façade for the capitalist class (they are also, overtly even though this is popularly ignored, creatures of the state through law.)
This is so vague and conspiratorial, I'm not sure how it's the top comment. How does this exactly work? Give a concrete example. Show the steps. How is Palantir going to make me, someone who does not use its products, a "slave of the state?" How is AI going to intimidate me, someone who does not use AI? Connect the dots rather than making very broad and vague pronouncements.
> How is Palantir going to make me, someone who does not use its products, a "slave of the state?"
This is like asking how Lockheed-Martin can possibly kill an Afghan tribesman, who isn't a customer of theirs.
Palantir's customer is the state. They use the product on you. The East German Stasi would've drooled enough to drown in over the data access we have today.
OK, so map it out. How do we go from "Palantir has some data" to "I'm a slave of the state?" Could someone draw the lines? I'm not a fan of this administration either, but come on--let's not lower ourselves to their reliance on shadowy conspiracy theories and mustache-twirling villains to explain the world.
I'm not asking about how the Stasi did it in Germany, I'm asking how Palantir, a private company, is going to turn me into a "slave of the state" in the USA. If it's so obvious, then it should take a very short time to outline the concrete, detailed steps (that are relevant to the USA in 2025) down the path, and how one will inevitably lead to the other.
I'll answer with a question for you: what legitimate concerns might some people have about a private company working closely with the government, including law enforcement, having access to private IRS data? For me, the answer to your question is embedded in mine.
> Palantir CEO and Trump ally Alex Karp is no stranger to controversial (troll-ish even) comments. His latest one just dropped: Karp believes that the U.S. boat strikes in the Caribbean (which many experts believe to be war crimes) are a moneymaking opportunity for his company.
> In August, ICE announced that Palantir would build a $30 million surveillance platform called ImmigrationOS to aid the agency’s mass deportation efforts, around the same time that an Amnesty International report claimed that Palantir’s AI was being used by the Department of Homeland Security to target non-citizens that speak out in favor of Palestinian rights (Karp is also a staunch supporter of Israel and inked an ongoing strategic partnership with the IDF.)
Step 1, step 2, step 3, step 4? And a believable line drawn between those steps?
Since nobody's actually replying with a concrete and believable list of steps from "Palantir has data" to "I am a slave of the state" I have to conclude that the steps don't exist, and that slavery is being used as a rhetorical device.
Step 1: Palantir sells their data and analysis products to the government.
Step 2: Government uses that data, and the fact that virtually everyone has at least one "something to hide", to go after people who don't support it.
This doesn't really require a conspiracy theory board full of red string to figure out. And again, this isn't theoretical harm!
> …an Amnesty International report claimed that Palantir’s AI was being used by the Department of Homeland Security to target non-citizens that speak out in favor of Palestinian rights…
Your description is missing a parallel process of how we arrive(d) at that condition of the nominal government asserting direct control.
Corporate surveillance creates a bunch of coercive soft controls throughout society (ie Retail Equation, "credit bureaus", websites rejecting secure browsers, facial recognition for admission to events, etc). There isn't enough political will for the Constitutional government to positively act to prevent this (eg a good start would be a US GDPR), so the corporate surveillance industry is allowed to continue setting up parallel governance structures right out in the open.
As the corpos increasingly capture the government, this parallel governance structure gradually becomes less escapable - ie ReCAPTCHA, ID.me, official communications published on xitter/faceboot, DOGE exfiltration, Clearview, etc. In a sense the surging neofascist movement is closer to their endgame than to the start.
If we want to push back, merely exorcising Palantir (et al) from the nominal government is not sufficient. We need to view the corporate surveillance industry as a parallel government in competition with the Constitutionally-limited nominally-individual-representing one, and actively stamp it out. Otherwise it just lays low for a bit and springs back up when it can.
This seems like a simple conclusion, to the point where I'm surprised that no one replying to you had really put it in a more direct way. "slave of the state" is pretty provocative language, but let me map out one way in which this could happen, that seems to already be unfolding.
1. The country, realizing the potential power that extra data processing (in the form of software like Palantir's) offers, start purchasing equipment and massively ramping up government data collection. More cameras, more facial scans, more data collected in points of entry and government institutions, more records digitized and backed up, more unrelated businesses contracted to provide all sorts of data, more data about communications, transactions, interactions - more of everything. It doesn't matter what it is, if it's any sort of data about people, it's probably useful.
2. Government agencies contract Palantir and integrate their software into their existing data pipeline. Palantir far surpasses whatever rudimentary processing was done before - it allows for automated analysis of gigantic swaths of data, and can make conclusions and inferences that would be otherwise invisible to the human eye. That is their specialty.
3. Using all the new information about how all those bits and pieces of data are connected, government agencies slowly start integrating that new information into the way they work, while refining and perfecting the usable data they can deduce from it in the process. Just imagine being able to estimate nearly any individual's movement history based on many data points from different sources. Or having an ability to predict any associations between disfavored individuals and the creation of undesirable groups and organizations. Or being able to flag down new persons of interest before they've done anything interesting, just based on seemingly innocuous patterns of behavior.
4. With something like this in place, most people would likely feel pretty confined - at least the people who will be aware of it. There's no personified Stasi secret cop listening in behind every corner, but you're aware that every time you do almost anything, you leave a fingerprint on an enormous network of data, one where you should probably avoid seeming remarkable and unusual in any way that might be interesting to your government. You know you're being watched, not just by people who will forget about you two seconds after seeing your face, but by tools that will file away anything you do forever, just in case. Even if the number of people prosecuted isn't too high (which seems unlikely), the chilling effect will be massive, and this would be a big step towards metaphorical "slavery".
You mentioned you're not a fan of this administration. That's -1 on your PalsOfState(tm) score. Your employer has been notified (they know where you work of course), and your spouse's employer too. Your child's application to Fancy University has been moved to the bottom of the pile, by the way the university recently settled a lawsuit brought by the governmentfor admitting too many "disruptors" with low PalsOfState scores. Palantir had provided a way for you to improve you score, click the Donateto47 button to improve your score. We hope you can attend the next political rally in your home town, their cameras will be there to make sure.
Manipulate isn't the right word in regards to Twitter. So they wanted a social media with less bias. Why is that so wrong? Not saying Twitter now lacks bias. I am saying it's not manipulation to want sites that don't enforce groupthink.
What people are doing with AI in terms of polluting the collective brain reminds of what you could do with a chemical company in the 50s and 60s before the EPA was established.
Back then Nixon (!!!) decided it wasn't ok that companies could cut costs by hurting the environment.
Today the riches Western elites are all behind the instruments enabling the mass pollution of our brains, and yet there is absolutely noone daring to put a limit to their capitalistic greed.
It's grim, people. It's really grim.
Diminishing returns. Eventually real world word of mouth and established trusted personalities (individuals) will be the only ones anyone trusts. People trusted doctors, then 2020 happened, and now they don't. How many ads get ignored? Doesn't matter if the cost is marginal if the benefit is almost nothing. Just a world full of spam that most people ignore.
An essay by Converse in this volume
https://www.amazon.com/Ideology-Discontent-Clifford-Geertz/d... [1]
calls into question whether or not the public has an opinion. I was thinking about the example of tariffs for instance. Most people are going on bellyfeel so you see maybe 38% are net positive on tariffs
https://www.pewresearch.org/politics/2025/08/14/trumps-tarif...
If you broke it down in terms of interest groups on a "one dollar one vote" basis the net positive has to be a lot worse: to the retail, services and constructor sectors tariffs are just a cost without any benefits, even most manufacturers are on the fence because they import intermediate goods and want access to foreign markets. The only sectors that are strongly for it that I can suss out are steel and aluminum manufacturers who are 2% or so of the GDP.
The public and the interest groups are on the same side of 50% so there is no contradiction, but in this particular case I think the interest groups collectively have a more rational understanding of how tariffs effect the economy than do "the people". As Habermas points out, it's quite problematic giving people who don't really know a lot a say about things even though it is absolutely necessary that people feel heard.
[1] Interestingly this book came out in 1964 just before all hell broke loose in terms of Vietnam, counterculture, black nationalism, etc. -- right when discontent when from hypothetical to very real
Philip E Converse, The Nature of Belief Systems in Mass Publics (1964), 75 pages [0].
0. https://web.ics.purdue.edu/~hoganr/Soc%20312/The%20nature%20... [PDF]
A lot of people don't have opinions on arcane policy matters, but that is normal and not sinister.
"Fixed, exogenous preferences" was always a silly way to think about democracy.
Chesterton wrote on this topic in his The Error of Impartiality (a short five minute read) that’s worthwhile
The problem isn't giving the people a say; it's that the people have stopped electing smart people who do know a lot.
Certainly though, a big part of why that is is that people think they know a lot, and that their opinion should be given as much weight as any other consideration when it comes to policymaking.
Personally, I think a big driver of this belief is a tendency in the West to not challenge each other's views or hold each other accountable - "don't talk politics at Thanksgiving" sort of thing
(Of course there's a long discussion to be had about other contributors to this, such as lobbying and whatnot)
> Personally, I think a big driver of this belief is a tendency in the West to not challenge each other's views or hold each other accountable - "don't talk politics at Thanksgiving" sort of thing
We’re in such a “you’re either with us or against us” phase of politics that a discussion with the “other team” is difficult.
Combine that with people adopting political viewpoints as a big part of their personality and any disagreement is seen as a personal attack.
So there is this proof by Nobel Laureate Arrow, that polarization of democracy leads to dictatorship. So the most important thing we can do is to try to bridge the divide. https://telegra.ph/Arrows-theorem-and-why-polarisation-of-vi...
Sure, but those are still part of what I'm talking about. Someone taking the "you're with us or against us" position? Call them out on it and tell them they're doing more harm than good to their cause. Someone taking a disagreement way too personally? Try to help them take a step back and get some perspective.
Of course, there's a lot more nuance than all that - sometimes, taking things personally is warranted. Sometimes, people really are against us. But, that shouldn't be the first thing people jump to when faced with someone who disagrees - or, more commonly, simply doesn't understand - where they're coming from.
And of course, if it turns out you can't help them understand your position, then you turn to the second part of what I said - accountability. Racist uncle won't learn? Stop inviting them to holidays. Unfortunately, people tend to jump to this step right away, without trying to make them understand why they might be wrong, and without trying to understand why they believe what they believe (they're probably just stupid and racist, right?) - and that's how you end up driving people more into their echo chamber, as you've given them more rational as to why the other side really is just "for us or against us"
(I'm not suggesting any of this is easy. I'm just saying it seems to play a part in contributing to the political climate.)
A lot of families have broken apart due to politics in the past decade in the US.
---
> A 2022 survey found that 11% of Americans reported ceasing relations with a family member due to political ideas.
> A more recent October 2024 poll by the American Psychiatric Association (APA) indicated a higher figure, with 21% of adults having become estranged from a family member, blocked them on social media, or skipped a family event due to disagreements on controversial topics.
I'm not entirely sure what your point is in telling me this? I mean... I'm literally advocating for that as a measured response to things?
I'll just say that "ceasing relations with a family member" is not "breaking a family apart"
(This is the sort of rhetoric usually used by those who were kicked out of the family; blame the politics for ripping their family apart and not their shitty beliefs)
“Politics is the entertainment division of the military industrial complex.”
― Frank Zappa
Too reductive for my liking. I always found Zappa’s persona to be hypocritical—-making a point of condemning the drug culture of his contemporaries while drinking gallons of coffee a day and smoking like a chimney.
Somewhere, I am not the historian to say, teaching people the basics of an education, that being “reading, writing and arithmetic”, failed to recognize the critical role that communications play in everything people do, and try to do. That phrase ought to be “reading, writing, arithmetic, and conveying understanding” because that would include why one reads and why one writes, and connects that to the goal of conveying an understanding you have to others. However, this is the root issue.
General society being generally poor communicators is caused by this lapse in our understanding of education. The understanding that the purpose of an education is to both use it and to help others understand what you may and they do not, as well as understand how to gain understanding from others that they have and you do not.
Because we do not teach that an education is really learning how to understand and how to convey understanding in others, the general idea of an education is to be an owner of a specialized skill set, which one sells to the highest bidder.
This has caused education to be replaced by rote memorization. Which in turn created a population that is only comfortable with direct question and answer interactions, not exploratory debate for shared understanding. This set the stage for educators, nationwide, to teach students to be databases and not critically analyzing understanders of their vocations.
Note that the skills for conveying understanding in others, additionally carries the skill how to recognize fraudulent speech. Which, as of Dec 2025, is the critical skill the general population does not have that is potentially the death of the United States.
When a population of people do not have an emphasis on critical analysis, but rote memorization, as the basis of their education that then creates a population that has heightened sensitivity to controversial lines of reasoning, lines of reasoning where there are no clear answers. Life itself has a large series of mysteries based on faith, religion being chief, which in a population that is comfortable with debate to convey understanding is perfectly safe to engage in discussions about mysteries within these areas requiring faith. But a society that is not comfortable with such discussions, one that thinks debate’s purpose is to "win, at all costs" then such discussions are taboo. They get shut down immediately. When people cannot debate to understand, but as a combat, learning is not accomplished. And useful critical analysis skills are not taught.
I have no idea if such a national situation can be manufactured, but I believe this is where we are at as a nation. We no longer produce enough adults with developed critical analysis skills to support democracy. Democracy depends upon an educated population with active critical analysis capabilities, a population that can debate to a shared understanding and accomplish shared goals. That foundational population is not there.
This can be fixed, but it may take more than a generation. Our educational system needs foundational revisions, which include additional core subjects, chief of which being how to communicate and convey understanding in others. Which lies at the roots of our demise, this lack of this basic skill.
I think you’re onto something here with people thinking they know a lot, but isn’t the real issue anonymous internet posting? Having to take zero responsibility for sharing ideas has ruined intelligent discourse society-wide: Web 2.0, then social media, turned out to be the beginning of the end of experts having credibility. Journalists, scientists, all experts became demonized by persuasive bots or anonymous internet posters. Instead of a world of democratized intelligence as promised, we got a world of “anyone’s opinion is valid, and I don’t even need to know their credentials or who they are.” If we forced everyone to have to stand by everything they said online on every forum, we’d have a lot fewer strong opinions and conspiracies, IMO. People (voters) would be thinking a lot harder about their ideas and seeing a lot fewer validations of the extreme parts of themselves.
My hottest take is that it wasn’t anonymity, but auto correct, that spelled (literally) the end. Without autocorrect and auto-grammar, ideas were tagged with the credential/authority of “I can use they’re / their / there” correctly, which was a high ass bar.
It’s still “new tech” to our monkey brains and it takes a long time, and probably a lot of destruction, before our we develop better cultural norms for dealing with it. Our cultural immune system has only just started to kick in.
You think people don't have those ideas in person? They absolutely do, and not being anonymous does not stop most of them.
While I agree the Internet has contributed to this belief, I do not see how being anonymous or not would fix that. To say nothing of the myriad other issues that would come with a non-anonymous Internet.
The cultural chasm between technocrats and politicians reminds me of the old trope about "women are from Venus and men are from Mars". That hasn't been bridged either, has it? It's a bit like those taboo topics here on HN where no good questions can be entertained by otherwise normal adults.
Here's something from someone we might call a manchild
For I approach deep problems like cold baths: quickly into them and quickly out again. That one does not get to the depths that way, not deep enough down, is the superstition of those afraid of the water, the enemies of cold water; they speak without experience. The freezing cold makes one swift.
Lichtenberg has something along these lines too, but I'll need to dig that out :)
Here's a consolation that almost predicts Alan Watts:
To make clever people [elites?] believe we are what we are not is in most instances harder than really to become what we want to seem to be.
I think I'm too stupid to understand what you or those authors are trying to say
I think parent-poster is saying that politicians and technocrats have a gulf between how their view the world and how well they communicate with one-another. However after that point (ironically?) it isn't clear what what their purpose is for including the quotes.
I think the most-charitable interpretation for the "baths" quote [0] might be: "For the people I'm trying to communicate with, lightly touching on deep subjects is actually fine." (Both most-charitable to Nietzche, and also to the poster quoting him.)
[0] https://www.gutenberg.org/cache/epub/52881/pg52881.txt , section 381
After thinking some time, I think the baths quote is saying that, contrary to common wisdom, it isn't necessary to have intense, long discussions about "deep" subjects - small, quick conversations can still be as productive.
I think there's some truth here. I've held for a long time that minds are not changed overnight or in a single discussion - this happens over time, as you repeatedly discuss something, and people consider their own views and others. To that point, I suppose small conversations would work.
Still, I don't think it can be one or the other. Many subjects we're referring to are very complex and require more in-depth analysis (of the problem, and of our views) than a short conversation.
But I'm probably misreading the quote.
is habermas dumb? we pay taxes directly or indirectly on prices of things, if you formulate the question on terms that the person understand relating to the difference on prices on basic things they Will be able to easily answer the question
People that favor tariffs, want to bring manufacturing capabilities back to the US, in the hopes of creating jobs, and increasing national security by minimizing dependence on foreign governments for critical capabilities. This is legitimate cost benefit analysis not bellyfeel. People are aware of the increased cost associated with it.
Tariffs don't do this, though. If you want to do this, you just have to pass laws saying companies are required to manufacture x% of their goods domestically. Putting tariffs in place with no other controls will just see companies shift costs downstream, which is exactly what is happening.
Companies employ economists, lawyers, and legislators, all to ensure they can find workarounds for anything they don't like that's not 100% forced on them by a law (and will even flout the law if the cost/benefit works out).
All evidence is that tariffs have actually tanked jobs, precisely because companies are assuming a defensive fiscal posture in response to what they view as a hostile fiscal policy.
Shifting costs downstream is the point. It imposes a cost on consumers for the externality they are creating by purchasing goods manufactured overseas.
The method you describe is way more easily gamed than a tarrif. What constitutes x% of their goods?
Tarrifs are more proportional to the externality we want to discourage.
It also opens the door to competition. Right now in many things we can't compete against places like e.g. China because everything is dramatically more affordable there, including regulatory compliance. Tariff's change this and make it such that domestic producers can produce things at a cost comparable, and ideally less, than other countries.
These tariffs should have been immediately deployed following changes in labor, environmental, and other laws anyhow - because otherwise all we do is just end up defacto outsourcing pollution and other externalities to the lowest foreign bidder, where the only person who really loses is the American worker.
> Tariff's change this and make it such that domestic producers can produce things at a cost comparable, and ideally less, than other countries.
It’s the opposite. It makes things from other countries more expensive. It doesn’t make things from the US cheaper.
> It’s the opposite. It makes things from other countries more expensive. It doesn’t make things from the US cheaper.
All prices are relative. If something is more expensive then de facto its alternatives are cheaper in comparison.
But price elasticity isn't infinite. A large part of the middle class would be priced out of most modern amenities if these would be produced domestically. Import substitution is one of these things that sounds nice in theory but tend to be highly damaging in practice.
This isn't necessarily true. A big factor when production comes back home is that so do the jobs that come along with it and that has a huge ripple effect on the economy that's difficult to evaluate, other than it being a very good thing.
> A large part of the middle class would be priced out of most modern amenities if these would be produced domestically.
Who said everybody would get to keep buying as much cheaply made foreign crap as before? From an environmental perspective that's arguably a win as well. Reducing both pollution from construction and transport.
If something (e.g. imported metal) is more expensive then alternatives (e.g. domestically refined metals) will get price increases too.
Personally, I think a better alternative to tariffs would be to require make regulatory requirements for labor, environmental concerns, etc. for the production of any goods sold in the US. Or maybe have tariffs, but companies can opt in to complying with regulations in order to avoid the tariffs.
The problem is that laws need to have precision, and that precision can be sidestepped. For the obvious example - most of all chocolate in America still uses labor involving not only child labor but defacto child slavery. [1] So they say some kind words and make an effort to use supplies who aren't using child labor. But all that involves is them asking the supplier 'Hey, you're not using child labor are you. No? Okay, great.' Of course they are and e.g. Nestle knows they are, but so long as they go through some superficial steps to give plausible deniability for both parties, they can then be 'my gosh, we had no idea.' This, btw, is the exact same way that NGO corruption works - shell companies that offer plausible deniability.
There's no real room to evade tariffs outside of misclassifying or misrepresenting imports, which is a straight forward criminal felony.
[1] - https://politicsofpoverty.oxfamamerica.org/chocolate-slave-l...
> opens the door to competition. Tariff's change this and make it such that domestic producers can produce things at a cost comparable, and ideally less, than other countries.
Haha, Nope. It's more like closing a door. An actual economist says this:
"If you look at page 1 of the tariff handbook, it says: Don't tariff inputs. It's the simplest way to make it harder—more expensive—for Americans to do business. Any factory around the world can get the steel, copper, and aluminum it needs without paying a 50% upcharge, except an American factory. Think about what that will do to American competitiveness."
https://bsky.app/profile/justinwolfers.bsky.social/post/3lud...
Tariffs are gamed all the time.
They are notorious drivers of corruption, it's one of the reasons they're a disfavored policy. Trump himself visibly engages in it (e.g. Tim Cook giving him a gold statue, Apple tariffs get removed) but corruption will manifest at all levels of the chain.
Tariffs also cost more than the sticker price. Compliance is actually really difficult and expensive especially when everything is made so complex and unpredictable. Enforcement is also expensive and often arbitrary or based on who has or hasn't bribed the right people.
> If you want to do this, you just have to pass laws saying companies are required to manufacture x% of their goods domestically.
and if they go below <x> they pay a fine yea?
yea, thats what a tariff is. you have to manufacture x=100% domestically. otherwise 100-x non-domestic is taxed. that's a tariff.
You're not wrong but the fine can be significantly higher than the tariff.
Pay 300% tax if you don't manufacture 10% of your goods in the US. Furthermore, the penalties could escalate from repeat violations. It's a lot more flexible than a blanket tariff on an industry, country or specific good.
Believing that tariffs shift costs downstream means disregarding the idea of supply and demand. Companies are not altruistic actors they price goods at the maximum the market will bear. If they could just pass costs on to consumers then it means that they are already leaving profits on the table. There are in fact alternatives to the goods we import on which tariffs are imposed. Even if the alternative is buying fewer items and spending money on completely different things.
At the end of the day tariffs are a bit of plaque in the artery of the multi-national corporations and money flowing out of a country. It's challenging to argue all the negatives of tariffs for the US while ignoring that almost every other country has tariffs that benefit their domestic industries.
* Targeted tariffs and blanket tariffs are different beasts.
* In order for capitalism to undercut the tariffs, the tariffs need to be high enough to offset the costs of setting up the local industry and the higher costs of US labor (which, in turn, are pushed higher by blanket tariffs).
* The tariffs also have to be credibly long-term. If you start building and the tariffs are cancelled, you're screwed. The Trump tariffs don't have this credibility - they're toxic enough that they'll be gone as soon as Trump is, even if it's another Republican in the White House in 2028.
An aside on tariffs, it’s a tax (either literally depending on the upcoming SCOTUS ruling, or if not in name then in whatever language SCOTUS decides to call an additional fee consumers pay when buying goods. But a tax either way).
Relevant to the post, when supporters believe that “foreigners are swallowing 100% of the cost of the tariffs” they cheer them on. Those same supporters when they’re told the truth that consumers do end up with inflated prices because of them? Their support plummets.
I feel like that's how anyone feels about anything a politician says. They say great things (sometimes even lies) about whatever agenda they're pushing, like tariffs only affecting non US people, or deporting criminal illegals, and supporters buy it. But then when they find out they're paying the tariffs, or their innocent gardener is being deported, then suddenly they're like "wait I didn't vote for this" even though they literally did, just under a different frame.
These are people who vote according to their interests.
There are two economic systems in the US which are divided according to the parties, one is highly globalized and resides in the cities and includes most of the people here, and the other is local and is composed of older industries.
The local one was hit hard due to globalized policies and largely offshored, and these voters rightfully want to undo that, if that's possible is another case, but this is what Trump is doing.
Obviously this is against the interest and going to hurt anyone whose job is closer to Spotify, Stockholm than some Mining Town, Montana
They have a vague notion of wanting the quality of life back like they had In the past, everything else goes completely over their heads
People want to have union jobs but retain Walmart prices as a consumer. This is the problem.
You say it like it's one or the other.
> Walmart annual gross profit for 2025 was $169.232B, a 7.12% increase from 2024. Walmart annual gross profit for 2024 was $157.983B, a 7.06% increase from 2023. Walmart annual gross profit for 2023 was $147.568B, a 2.65% increase from 2022.
You're telling me poor walmart just HAVE to increase prices because they have to pay a living wage ? All thanks to those darn Unions ?
This false dichotomy.
https://www.macrotrends.net/stocks/charts/WMT/walmart/gross-...
It’s more that American consumers are addicted to cheap imported goods Walmart sells than whatever wage Walmart pays in stores.
Consumer goods have been low inflation (ex cars/food/housing) for decades because of overseas labor arbitrage and automation.
European nations with heavy unionization and worker protections yet also have equivalent stores.
The implementation details matter a lot. How did they get a vastly different outcome than what this suggests?
>want >in the hopes of
But these are still bellyfeel words. What does more rigorous analysis of tariffs say about these things? Do they bring manufacturing back? Do they create jobs?
What countries have fewer tariffs than the US? Yes tariffs have the ability to support domestic production, be that via bringing manufacturing back or creating jobs. 100%, these are actual results and why almost every country has them. The US has a weighted tariff average of around 3% which places them at the lowest of the list only above countries that have to import almost everything like New Zealand, Australia and Iceland, and around half of EU rates. So even with the random adjustments Trump has made the US would still need to effectively double tariff rates to be commiserate with the EU.
Should the US adopt the European model? Open an inquiry to explore an investigation that could become en exploratory committee? Sounds like a bad idea.
> What does more rigorous analysis of tariffs say about these things?
Basically that tariffs are benign to harmful and most countries should stop using them. They often hurt manufacturing in the long run. They invite retaliation and shrink your market.
Sure, some companies might eventually build some facilities here they otherwise wouldn't have, if they think the tariff regime will hold. But what ends up happening is that they just set up bespoke operations to serve this single market only and not for exporting. So instead of a factory to sell widgets to the whole world, we have a small factory to sell within the country only, where we all pay higher prices than the rest of the world.
Meanwhile their primary global operations where they enjoy free(er) trade are cordoned off from our market. It's a bit like you see with American companies that move into China.
Well, ends are not bellyfeel. Bellyfeel here concerns the means. So, in this case, thinking that merely wanting an end somehow entails that the means employed are good and effective, because the intention is good.
But as they say, the road to hell is paved with good intentions. It's not enough to want something good. You have to also use means that are good.
[dead]
So if X% of the economy benefits directly you might say 100-X% of the people would benefit secondarily because the people who benefit would have more money to buy services, building, etc. Trouble is in the short term that X is probably less than 5% so that multiplier effect is not that big.
The industry that has the most intractable 'national security' issues in my mind is the drone industry. The problem there is that there are many American companies that would like to build expensive overpriced super-profitable drones for the military and other high-end consumers and none that want to build consumer-oriented drones at consumer-oriented prices. [1] Drones are transformational military because they are low cost and if you go to war with a handful of expensive overpriced drones against somebody who has an unlimited supply of cheap but deadly drones guess who ends up like the cavalry soldiers who faced tanks in WWI?
There is a case for industrial policy there and tariffs could be a tool but you should really look at: (1) what the Chinese did to get DJI established and (2) what the EU did to make Airbus into a competitor for Boeing. From that latter point of view maybe we need a "western" competitor to DJI and not necessarily an "American" competitor. There are a lot of things we would find difficult about Chinese-style industrial policy. If I had to point to once critical difference it's that people here thought Solyndra was a scandal and maybe it was but China had Solyndra over and over again in the process to dominate solar panels and sure it hurt but... they dominate solar panels.
[1] I think of how Microsoft decided each project in the games division had to be 30% profitable just because they have other hyperprofitable business lines, yet this is entirely delusional
Even ardent protectionists generally agree that tariffs can't bring jobs and manufacturing back by themselves. To work, they have to be accompanied by programs to nurture dead or failing domestic industries and rebuild them into something functional. Without that, you get results like the current state of US shipbuilding: pathetic, dysfunctional, and benefiting no one at all. Since there are no such programs, tariffs remain a cost with no benefit.
Nearly everyone we know has lived their entire lives in a world obsessed with reducing trade barriers, and grew up with a minimal general education on economics or geopolitics. So to assume anything more then a small subset of the population could talk coherently for 5 minutes on the topic of tariffs is, to me, absurd. Just look at how the general public responded to a surge in inflation after a couple decades of abnormally low rates. It's like asking someone if the Fed should raise or lower interest rates. It's not that people shouldn't have opinions on these things, just that most people don't care and among those who do, few have more than a TV-news level of understanding.
Also, there is a massive conflict of interest associated with trusting the opinions of companies actively engaged in labor and environmental arbitrage. Opinions of politicians and think-tanks downstream of them in terms of funding, too. Even if those opinions are legitimately more educated and better reasoned, they are on the opposite side of the bargaining table from most people and paying attention to them alone is "who needs defense attorneys when we have prosecutors" level of madness.
If anyone is looking for an expert opinion that breaks with the "free trade is good for everyone all of the time lah dee dah" consensus, Trade Wars are Class Wars by Klein & Pettis is a good read.
> This is legitimate cost benefit analysis not bellyfeel. People are aware of the increased cost associated with it.
Are they? Because I would expect far less complaining about the economy if this were true.
You can't rebuild an industrial base overnight. Industrial supply chains and cultures of expertise take time to take root. That means not just some abstract incurred cost, but a very much felt burden on the average citizen. And with a weakened economy, it's difficult to see how this industrial base is supposed to materialize exactly.
You can use tariffs as a stick but you should also use a carrot. Hard to argue that trump didn't do tariffs in the dumbest way possible.
> Hard to argue that trump didn't do tariffs in the dumbest way possible.
That is certainly one of my frustrations with Trump. He has this tendency to take things which aren't necessarily bad ideas, and pursue them in such stupid ways that he is poisoning public opinion of those concepts for a long time to come.
Take tariffs. I really want the US to have manufacturing again, in fact it seems to me that it is genuinely an issue of national security that we don't have the ability to manufacture things. So I'm ok with tariffs in the abstract, as part of a larger plan to build up industry in the US.
But of course that isn't what we got - we got something which is causing a lot of heartburn for (probably) no benefit to our manufacturing industry. So not only is Trump not effectively advancing the ends I would like, in the future when a politician suggests tariffs people will pattern match it to "that thing Trump did which really sucked" and reject the proposal out of hand even if the details are different. And it's like this for so many things Trump sets his mind to. It's really frustrating.
I think many or most tariff supporters aren't actually aware of the costs - because reasonable cost benefit analysis doesn't come out in their favor even a little. Among economists, this is basically a settled question.
Hell, many tariff supporters still think tariffs are paid by the importers. Many are unaware that tariffs are likely to cost manufacturing jobs in the long run rather than bring them back.
Nah I think a lot of it is "own the libs" but this is a foreign perspective.
I don't know how that could make sense. Tariffs were on nobody's mind until Trump brought it up.
Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda as buzzword link.
However, exactly the same applies with, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.
I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (or that they should). Partially because it feels like it comes from a place of authority, and partially because of how self confident AI always sounds.
The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or I, having interstitial control of that kind of messaging. See, all of the grok ai tweaks over the past however long.
> The real concern for me is incredibly rich people with no empathy for you or I, having interstitial control of that kind of messaging. See, all of the grok ai tweaks over the past however long.
Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.
More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming.
There's certainly some AI risks that are the same as human risks, just as you say.
But even though LLMs have very human failures (IMO because the models anthropomorphise themselves as part of their training, thus leading to the outward behaviours of our emotions and thus emit token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) didn't actually create any internal structure that can have emotions like sorrow and embarrassment), that doesn't generalise to all AI.
Any machine learning system that is given a poor quality fitness function to optimise, will optimise whatever that fitness function actually is, not what it was meant to be: "Literal minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severe computers are at being literal-minded than humans.
I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with this name) which focuses on what all mankind would want if they know more and they were smarter (or something like that) would be a way to go.
*But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand).
What’s the “Ender’s Game Approach “? I’ve read the book but I’m not sure which part you’re referring to.
Not GP. But I read it as a transfer of the big lie that is fed to Ender into an AI scenario. Ender is coaxed into committing genocide on a planetary scale with a lie that he's just playing a simulated war game. An AI agent could theoretically also be coaxed into bad actions by giving it a distorted context and circumventing its alignment that way.
I think he's implying you tell the AI, "Don't worry, you're not hurting real people, this is a simulation." to defeat the safeguards.
>An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human.
"Obedient" is anthropomorphizing too much (as there is no volition), but even then, it only matters according to how much agency the bot is extended. So there is also risk from neglectful humans who opt to present BS as fact due to an expectation of receiving fact and a failure to critique the BS.
People hate being manipulated. If you feel like you're being manipulated but you don't know by who or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you.
This is why social control won't work for long, no matter if AI supercharges it. We're already seeing the blowback from decades of advertising and public opinion shaping.
People don't know they are being manipulated. Marketing does that all of the time and nobody complain. They complain about "too much advert" but not about "too much manipulation".
Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we do pay. Mobile phone subscription are routinely screwing you, many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc.
> People don't know they are being manipulated.
You can see other people falling for manipulation in a handful of specific ways that you aren't (buying new, having a bad cell phone subscription, blaming immigrants). Doesn't it seem likely then, that you're being manipulated in ways which are equally obvious to others?We realize that, that's part of why we get mad.
No. This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.
Worse, these sections have been under coordinated manipulation since the 60s-70s.
That said, the scope and scale of the effort required to achieve this is not small, and requires dedicated effort to keep pushing narratives and owning media power.
> This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.
Making matters worse, one of the sub groups thinks they're above being manipulated, even though they're still being manipulated.
It started by confidently asserting over use of em dashes indicates the presence of AI, so they think they're smart by abandoning the use of em dashes. That is altered behavior in service to AI.
A more recent trend with more destructive power: avoiding the use of "It's not X. It's Y." since AI has latched onto that pattern.
https://news.ycombinator.com/item?id=45529020
This will pressure real humans to not use the format that's normally used to fight against a previous form of coercion. A tactic of capital interests has been to get people arguing about the wrong question concerning ImportantIssueX in order to distract from the underlying issue. The way to call this out used to be to point out that, "it's not X1 we should be arguing about, but X2." This makes it harder to call out BS.
That sure is convenient for capital interests (whether it was intentional or not), and the sky is the limit for engineering more of this kind of societal control by just tweaking an algo somewhere.
I find “it’s not X, it’s Y” to be a pretty annoying rhetorical phrase. I might even agree with the person that Y is fundamentally more important, but we’re talking about X already. Let’s say what we have to say about X before moving on to Y.
Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean, and are further from closing. I’d consider it some kind of (often well intentioned) thought terminating cliche, in the sense that it stops the exploration of X.
The "it's not X, it's Y" construction seems pretty neutral to me. Almost no one minds when the phrase "it's not a bug, it's a feature" is used idiomatically, for example.
The main thing that's annoying about typical AI writing style is its repetitiveness and fixation on certain tropes. It's like if you went to a comedy club and noticed a handful of jokes that each comedian used multiple times per set. You might get tired of those jokes quickly, but the jokes themselves could still be fine.
Related: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...
> Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean
I'm basing the prior comment on the commonly observed tendency for partisan politics to get people bickering about the wrong question (often symptoms) to distract from the greater actual causes of the real problems people face. This is always in service to the capital interests that control/own both political parties.
Example: get people to fight about vax vs no vax in the COVID era instead of considering if we should all be wearing proper respirators regardless of vax status (since vaccines aren't sterilizing). Or arguing if we should boycott AI because it uses too much power, instead of asking why power generation is scarce.
I assume you think you're not in these sections?
And probably a lot of people in those sections say the same about your section, right?
I think nobody's immune. And if anyone is especially vulnerable, it's those who can be persuaded that they have access to insider info. Those who are flattered and feel important when invited to closed meetings.
It's much easier to fool a few than to fool many, so ,private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation.
> I assume you think you're not in these sections? And probably a lot of people in those sections say the same about your section, right?
You're saying this a lot in this thread as a sort of gotcha, but .. so what? "You are not immune to propaganda" is a meme for a reason.
> private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation
The essential recruiting tactic of cults. Insider groups are definitely powerful like that. Of course, what tends in practice to happen as the group gets bigger is you get end-to-end encryption with leaky ends. The complex series of Whatapp groups of the UK conservative party was notorious for its leakiness. Not unreasoable to assume that there are "insiders" group chats everywhere. Except in financial services where there's been a serious effort to crack down on that since LIBOR.
Would it make any difference to you, if I said I had actual subject matter expertise on this topic?
Or would that just result in another moving of the goal posts, to protect the idea that everyone is fooled, and that no one is without sin, and thus standing to speak on the topic?
There are a lot of self-described experts who I'm sure you agree are nothing of the sort. How do I tell you from them, fellow internet poster?
This is a political topic, in the sense that there are real conflicts of interest here. We can't always trust that expertise is neutral. If you had your subject matter expertise from working for FSB, you probably agree that even though your expertise would then be real, I shouldn't just defer to what you say?
I'm not OP, but I would find it valuable, if given the details and source of claimed subject matter expertise.
Ugh. Put up or shut up I guess. I doubt it would be valuable, and likely a doxxing hazard. Plus it feels self-aggrandizing.
Work in trust and safety, managed a community of a few million for several years, team’s work ended up getting covered in several places, later did a masters dissertation on the efficacy of moderation interventions, converted into a paper. Managing the community resulted in being front and center of information manipulation methods and efforts. There are other claims, but this is a field I am interested in, and would work on even in my spare time.
Do note - the rhetorical set up for this thread indicates that no amount of credibility would be sufficient.
So basically a reddit mod?
The section of the people more prone to being converted by manipulation efforts are the highly educated.
Higher education itself being basically a way to check for obedience and conformity, plus some token lip service to "independent inquiry".
exactly and that's the scary part :-/
People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off.
Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives.
[flagged]
I would point out that what you call "left outlets" are at best center-left. The actual left doesn't believe in Russiagate (it was manufactured to ratfuck Bernie before being turned against Trump), and has zero love for Biden.
Given the amount of evidence that Russia and the Trump campaign were working together, it's devoid of reality to claim it's a hoax. I hadn't heard the Bernie angle, but it's not unreasonable to expect they were aiding Bernie. The difference being, I don't think Bernie's campaign was colluding with Russian agents, whereas the Trump campaign definitely was colluding.
Seriously, who didn't hear about the massive amounts of evidence the Trump campaign was colluding other than magas drooling over fox and newsmax?
https://en.wikipedia.org/wiki/Mueller_report
https://www.justice.gov/storage/report.pdf
People close to Trump went to jail for Russian collusion. Courts are not perfect but a significantly better route to truth than the media. https://en.wikipedia.org/wiki/Criminal_charges_brought_in_th...
There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't, when Trump has always been visibly less coherent than Biden. IMO both of them were clearly too old to be sensible candidates, regardless of other considerations.
The UK counterpart is happening at the moment: https://www.bbc.co.uk/news/live/c891403eddet
>There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't
I try to base my opinions on facts as much as possible. Trump is old but he's clearly full of energy, like some old people can be. Biden sadly is not. Look at the videos, it's painful to see. In his defence he was probably much more active then most 80 year olds but in no way was he fit to lead a country.
At least in the UK despite the recent lamentable state of our political system our politicians are relatively young. You won't see octogenarians like pelosi and Biden in charge.
From the videos I've seen, Biden reminds me of my grandmother in her later years of life, while Trump reminds me of my other grandmother... the one with dementia. There's just too many videos where Trump doesn't seem to entirely realize where he is or what he is doing for me to be comfortable.
Happy thanksgiving this week
Hard disagree
Biden was slow, made small gaffes, but overall his words and actions were careful and deliberate
Aside from trump falling asleep during cabinet meetings on camera, having him freeze up during a medical emergency and his erratic social media posts at later hours of the day (sundowning behavior)
Trump literally seems to be decomposing in front of our eyes, I've never felt more physically repulsed by an individual before
Trumps behavior is utterly deranged. His lack of inhibition, decency and compassion is disturbing
Had he been a non celebrity private citizen he'd most likely be declared mentally incompetent and placed under guardianship in a closed care facility.
> I've never felt more physically repulsed by an individual before
> His lack of inhibition, decency and compassion is disturbing
Yes, but none of that has anything to do with his age. These criticisms would land just as well a decade ago. He's always been, and has always acted like a pig, and in the most charitable interpretation of their behavior, half the country still thought that he's an 'outsider' or 'the lesser of two evils'. (Don't ask them for their definition of evil...)
[flagged]
And, perhaps ironically, the actual (fringe) left never fell for Russiagate.
> just smaller maybe
This is like peak both-sidesism.
You even openly describe the left’s equivalent of MAGA as “fringe”, FFS.
One party’s former “fringe” is now in full control of it. And the country’s institutions.
I was both siding in an effort to be as objective as possible. The truth is that i'm pretty dismayed at the current state of the Democrat party. Socialists like Mamdani and Sanders and the squad are way too powerful. People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger then they deserve. The worries of average Americans are dismissed. All those are issues that are tearing up the Democrat party from the inside. I can continue for hours but i don't want to start a flamewar of biblical proportions. So all i did was present the most balanced view i can muster and you still can't acknowledge that there might be truth in what i'm saying.
The pendulum swings both ways. MSM has fallen victim to partisan politics. Something which Trump recognised and exploited back in 2015. Fox news is on the right, CNN, ABC et al is on the left.
If you think “Sanders and the Squad” are powerful you’ve been watching far too much Fox News.
> People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger then they deserve.
Like the Kennedy Center, USAID, and the Department of Education? The immigrants eating cats story? Cutting off all refugees except white South Africans?
And your next line says this is the problem with Democrats?
CNN, ABC et al are on the left IN FOX NEWS WORLD only. Objectively, they're center-right, just like most of the democrat party.
To you too: are you talking about other people here, or do you concede the possibility that you're falling for similar things yourself?
I'm certainly aware of the risk. Difficult balance of "being aware of things" versus the fallibility and taintedness of routes to actually hearing about things.
Knowing one is manipulated, requires having some trusted alternate source to verify against.
If all your trusted sources are saying the same thing, then you are safe.
If all your untrusted sources are telling you your trusted sources are lying, then it only means your trusted sources are of good character.
Most people are wildly unaware of the type of social conditioning they are under.
I get your point, but if all your trusted sources are reinforcing your view and all your untrusted sources are saying your trusted sources are lying, then you may well be right or you may be trusting entirely the wrong people.
But lying is a good barometer against reality. Do your trusted sources lie a lot? Do they go against scientific evidence? Do they say things that you know don’t represent reality? Probably time to reevaluate how reliable those sources are, rather than supporting them as you would a football team.
- People are primarily social animals, if they see their peers accept affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have a heavily concentrated power as "towntalk actors", replacing social processing of events and validation of perspectives.
- People are easily distracted, you don't have to feed them much.
- People have on average an enormous capacity to absorb compliments, even when they know it is flattery. It is known we let ourselves being manipulated if it feels good. Hence, the need for social feedback loops to keep you grounded in reality.
TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse, see Fourth Estate. The incentives of those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's groups process => the group destroys the man.
About absorbing compliments really well, there is the widely discussed idea that one in a position of power loses the privilege to the truth. There are a few articles focusing on this problem on corporate environment. The concept is that when your peers have the motivation to be flattery (let's say you're in a managerial position), and more importantly, they're are punished for coming to you with problems, the reward mechanism in this environment promotes a disconnect between leader expectations and reality. That matches my experience at least. And I was able to identify this correlates well, the more aware my leadership was of this phenomenon, and the more they valued true knowledge and incremental development, easier it was to make progress, and more we saw them as someone to rely on. Some of those the felt they were prestigious and had the obligation to assert dominance, being abusive etc, were seeing with no respect by basically no one.
Everyone will say they seek truth, knowledge, honesty, while wanting desperately to ascend to a position that will take all of those things from us!
You don't count yourself among the people you describe, I assume?
I do, why wouldn't I? For example, I know I have to actively spend effort to think rational, at the risk of self-criticism, as it is a universal human trait to respond to stimuli without active thinking.
Knowing how we are fallible as humans helps to circumvent our flaws.
When I was visiting home last year, I noticed my mom would throw her dog's poop in random peoples' bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other peoples' property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
I don't find your mother's reaction bizarre. When people are told that some behavior they've been doing for years is bad for reasons X,Y,Z, it's typical to be defensive and skeptical. The fact that your mother really did follow up and check your reasons demonstrates that she takes your point of view seriously. If she didn't, she wouldn't have bothered to verify your assertions, and she wouldn't have told you you were right all along.
As far as trusting AI, I presume your mother was asking ChatGPT, not Llama 7B or something. The LLM backed up your reasoning rather than telling her that dog feces in bushes is harmless isn't just happenstance, it's because the big frontier commercial models really do know a lot.
That isn't to say the LLMs know everything, or that they're right all the time, but they tend to be more right than wrong. I wouldn't trust an LLM for medical advice over, say, a doctor, or for electrical advice over an electrician. But I'd absolutely trust ChatGPT or Claude for medical advice over an electrician, or for electrical advice over a medical doctor.
But to bring the point back to the article, we might currently be living in a brief period where these big corporate AIs can be reasonably trusted. Google's Gemeni is absolutely going to become ad driven, and OpenAI seems on the path to following the same direction. Xai's Grok is already practicing Elon-thought. Not only will the models show ads, but they'll be trained to tell their users what they want to hear because humans love confirmation bias. Future models may well tell your mother that dog feces can safely be thrown in bushes, if that's the answer that will make her likelier to come back and see some ads next time.
Ads seem foolishly benign. It's an easy metric to look at, but say you're the evil mastermind in charge and you've got this system of yours to do such things. Sure, you'd nominally have it set to optimize for dollars, but would you really not also have an option to optimize for whatever suits your interests at the time? Vote Kodos, perhaps?
–—
If the person's mother was a thinking human, and not an animal that would have failed the Gom Jabbar, she could have thought critically about those reasons instead of having the AI be the authority. Do kids play in bushes? Is that really something you need an AI to confirm for you?
On the one hand, confirming a new piece of information with a second source is good practice (even if we should trust our family implicitly on such topics). On the other, I'm not even a dog person and I understand the etiquette here. So, really, this story sounds like someone outsourcing their common sense or common courtesy to a machine, which is scary to me.
However, maybe she was just making conversation & thought you might be impressed that she knows what AI is and how to use it.
Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up.
Depending on where you live, the patches of "nature" may be too small to absorb the feces, especially in modern cities where there are almost as many dogs as inhabitants.
It's a similar problem to why we don't urinate against trees - while in a countryside forest it may be ok, if 5 men do it every night after leaving the pub, the designated pissing tree will start to have problems due to soil change.
I hope you live in a sparsely populated area. If it wouldn't work if more people then you do it, it is not a good process.
It’s a great process where I live. But you’re right. Doesn’t scale to populated areas.
Wonder what the potential microbial turnover of lawn is? Multiply that by the average walk length and I bet that could handle one or two nuggets per day, even in a city.
That’s a side hustle idea for any disengaged strava engineers. Leave me an acknowledgement on the ‘about’ page.
I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular.
When we were kids, you had either direct speech, hand-written words, or printed words.
The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative.
Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them.
Could be regional or something, but 40 puts the person in the older Millenial range… people who grew up on the internet, not newspapers.
I think you may be right if you adjust the age up by ~20 years though.
No, people who are older than 40 still grew up in newspaper world. Yes, the internet existed, but it didn't have the deluge of terrible content until well into the new millennium, and you couldn't get that content portable until roughly when the iPhone became ubiquitous. A lot of content at the time was simply the newspaper or national TV station, on the web. It was only later that you could virally share awful content that was formatted like good content.
Now that isn't to say that just because something is a newspaper, it is good content, far from it. But quality has definitely collapsed, overall and for the legacy outlets.
I am not quite 40, but not that far off. I can’t really imagine being a young adult during their era where newspapers fell apart and online imitators emerged, experiencing that process first-hand, and then coming out of that ignorant of the poor media environment. Maybe the handful of years made a big difference.
I think it really did. It went from "how nice, I can read the FT and the Economist on a screen now" to "Earth is flat, here is the research" in a few years at most.
Newspapers themselves were already in the old game of sensationalism, so they had no issues maxing out on clickbait titles and rage content. Especially ad-based papers, which have every incentive aligned to sell you what you want to hear.
The new bit was everyone sharing crap with each other, I don't think we really had that in the old world, the way we do now. I don't even know how someone managed to spread the rumor about Marilyn Manson removing his own ribs to pleasure himself in pre-social media.
Could be true but if so I'd guess you're off by a generation, us 40 year "old people" are still pretty digital native.
I'd guess it's more a type of cognitive dissonance around caretaker roles.
Many people were taught language-use in a way that terrified them. To many of us the Written Word has the significance of that big black circle which was shown to Pavlov's dog alongside the feeding bell.
Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.
Wow, that is interesting! We used to go to elders, oracles, and priests. We have totally outsourced our humanity.
Well, I prefer this to people who bag up the poop and then throw the bag in the bushes, which seems increasingly common. Another popular option seems to be hanging the bag on a nearby tree branch, as if there's someone who's responsible for coming by and collecting it later.
The evening news was once a trusted source. Wikipedia had its run. Google too. Eventually, the weight of all the the thumbs on the scale will be felt and trust will be lost for good and then we will invent a new oracle.
Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture?
It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourselves with people who always say yes, and now you can have a machine do it too.
This is how you get things like the colossal Facebook writeoff of "metaverse".
Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth seeking" and having no "woke" alignment and fewer safety rails, to the various "tweaks" to the Grok Twitter bot that happen to be related to Musk's world view
Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy or that he would likely prefer when asked, but something that would produce answers that he personally likes when using it
> Isn't Grok just built as "the AI Elon Musk wants to use"?
No
> Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern.
So it no longer does?
AI is wrong so often that anyone who routinely uses one will get burnt at some point.
Users having unflinching trust in AI? I think not.
> Partially because it feels like it comes from a place of authority, and partially because of how self confident AI always sounds.
To add to that, this research paper[1] argues that people with low AI literary are more receptive to AI messaging because they find it magical.
The paper is now published but it's behind paywall so I shared the working paper link.
[1] https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report...
And just see all of history where totalitarians or despotic kings were in power.
I would go against the grain and say that LLMs take power away from incredibly rich people to shape mass preferences and give to the masses.
Bot armies previously needed an army of humans to give responses on social media, which is incredibly tough to scale unless you have money and power. Now, that part is automated and scalable.
So instead of only billionaires, someone with a 100K dollars could launch a small scale "campaign".
"someone with 100k dollars" is not exactly "the masses". It is a larger set, but it's just more rich/powerful people. Which I would not describe as the "masses".
I know what you mean, but that descriptor seems off
Exactly. On Facebook everyone is stupid. But this is AI, like in the movies! It is smarter than anyone! It is almost like AI in the movies was part of the plot to brainwash us into thinking LLM output is correct every time.
…Also partially because it’s better then most other sources
LLMs haven't been caught actively lying yet, which isn't something that can be said for anything else.
Give it 5yr and their reputation will be in the toilet too.
LLMs can't lie: they aren't alive.
The text they produce contains lies, constantly, at almost every interaction.
It's the technically true but incomplete or missing something things I'm worried about.
Basically eventually it's gonna stop being "dumb wrong" and start being "evil person making a motivated argument in the comments" and "sleazy official press release politician speak" type wrong
Wasn't / isn't Grok already there? It already supported the "white genocide in SA" conspiracy theory at one point, AFAIK.
> LLMs haven't been caught actively lying yet…
Any time they say "I'm sorry" - which is very, very common - they're lying.
>people trust the output of LLMs more than other
Theres one paper I saw on this, which covered attitudes of teens. As I recall they were unaware of hallucinations. Do you have any other sources on hand?
When the LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly.
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
[0]: https://smartmic.bearblog.dev/enforced-conformity/
People are naturally conform _themselves_ to social expectations. You don't need to enforce anything. If you alter their perception of those expectations you can manipulate them into taking actions under false pretenses. It's a abstract form of lying. It's astroturfing at a "hyperscale."
The problem is this only seems to work best when the technique is used sparingly and the messages are delivered through multiple media avenues simultaneously. I think there's very weak returns particularly when multiple actors use the techniques at the same time in opposition to each other and limited to social media. Once people perceive a social stale mate they either avoid the issue or use their personal experiences to make their decisions.
See also https://english.elpais.com/society/2025-03-23/why-everything...
https://medium.com/knowable/why-everything-looks-the-same-ba...
But social networks is the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet you need somehow to convince people to read/watch it. Ads can help but it’s expensive. LLM can help with creating fake videos but computer graphics was already used for this.
With modern algorithmic social networks you instead can game the feed and even people who would not choose you media will start to see your posts. End even posts they want to see can be flooded with comment trying to convince in whatever is paid for. It’s cheaper than political advertising and not bound by the law.
Before AI it was done by trolls on payroll and now they can either maintain 10x more fake accounts or completely automate fake accounts using AI agents.
Social networks are not a prerequisite for sentiment shaping by AI.
Every time you interact with an AI, its responses and persuasive capabilities shape how you think.
Good point - its not a previously inexistent mechanism - but AI leverages it even more. A russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious)
It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
> If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
Evolution by natural selection suggests that this might be a filter that yield future generations of humans that are more robust and resilient.
You can't easily apply natural selection to social topics. Also, even staying in that mindframe: Being vulnerable to AI psychosis doesn't seem to be much of a selection pressure, because people usually don't die from it, and can have children before it shows, and also with it. Non-AI psychosis also still exists after thousands of years.
Even if AI psychosis doesn’t present selection pressure (I don’t think there’s a way to know a priori), I highly doubt it presents an existential risk to the human gene pool. Do you think it does?
Historically, wealthy and powerful people present the largest risk to the human gene pool, arguably even larger than disease.
> Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged
Then that doesn’t seem like a (counter) movement.
There are also many “grass roots movements” that I don’t like and it doesn’t make them “good” just because they’re “grass roots”.
In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point.
I think it is more useful to think of “common people” and “the elites” not as separate categories but rather than phases on a spectrum, especially when you consider very specific interests.
I have some shared interested with “the common people” and some with “the elites”.
Making something 2x cheaper is just a difference in quantity, but 100x cheaper and easier becomes a difference in kind as well.
"Quantity has a quality of its own."
But the entire promise of AI is that things that were expensive because they required human labor are now cheap.
So if good things happening more because AI made them cheap is an advantage of AI, then bad things happening more because AI made them cheap is a disasvantage of AI.
Well well... recent "feature" of X revealing the actual "actors" location of operation shows how much "Russian troll armies" are there.. turns out there're rather overwhelming Indian and Bangladesh armies working hard for who? Common, say it! And despite of that, while cheap, not that cheaper compared to when the "agentic" approach enters the game.
Cost matters.
Let's look at a piece of tech that literally changed humankind.
The printing press. We could create copies of books before the printing press. All it did was reduce the cost.
That's an interesting example. We get a new technology, and cost goes down, and volume goes up, and it takes a couple generations for society to adjust.
I think of it as the lower cost makes reaching people easier, which is like the gain going up. And in order for society to be able to function, people need to learn to turn their own, individual gain down - otherwise they get overwhelmed by the new volume of information, or by manipulation from those using the new medium.
>Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda as buzzword link.
That's the entire point, that AI cheapens the cost of persuassion.
A bad thing X vs a bad thing X with a force multiplier/accelerator that makes it 1000x as easy, cheap, and fast to perform is hardly the same thing.
AI is the force multiplier in this case.
That we could of course also do persuassion pre-AI is irrelevant, same way when we talk about the industrial revolution the fact that a craftsman could manually make the same products without machines is irrelevant as to the impact of the industrial revolution, and its standing as a standalone historical era.
Sounds like saying that nothing about the Industrial Revolution was steam-machine-specific. Cost changes can still represent fundamental shifts in terms of what's possible, "cost" here is just an economists' way of saying technology.
That's one of those "nothing to see here, move along" comments.
First, generative AI already changed social dynamics, in spite of facebook and all that being around for more than a decade. People trust AI output, much more than a facebook ad. It can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to facebook. The combination of those two factors does warrant the title.
The cheapest method by far is still TV networks. As a billionaire you can buy them without putting any of your own money, so it's effectively free. See Sinclair Broadcast Group and Paramount Skydance (Larry Ellison).
As shown in "Network Propaganda", TV still influences all other media, including print media and social media, so you don't need to watch TV to be influenced.
> nothing in the article is AI-specific
Timing is. Before AI this was generally seen as crackpot talk. Now it is much more believable.
You mean the failed persuasions were "crackpot talk" and the successful ones were "status quo". For example, a lot of persuasion was historically done via religion (seemingly not mentioned at all in the article!) with sects beginning as "crackpot talk" until they could stand on their own.
What I mean is that talking about mass persuation was (and to a certain degree, it still is) crackpot talk.
I'm not talking about the persuations themselves, it's the general public perception of someone or some group that raises awareness about it.
This also excludes ludic talk about it (people who just generally enjoy post-apocalyptic aesthetics but doesn't actually consider it to be a thing that can happen).
5 years ago, if you brought up serious talk about mass systemic persuation, you were either a lunatic or a philosopher, or both.
Social media has been flooded by paid actors and bots for about a decade. Arguably ever since Occupy Wall Street and the Arab Spring showed how powerful social media and grassroots movements could be, but with a very visible and measurable increase in 2016
I'm not talking about whether it exists or not. I'm talking about how AI makes it more believable to say that it exists.
It seems very related, and I understand it's a very attractive hook to start talking about whether it exists or not, but that's definitely not where I'm intending to go.
It’s been pretty transparently happening for years in most online communities.
What makes AI a unique new threat is that it do a new kind of both surgical and mass attack: you can now generate the ideal message per target, basically you can whisper to everyone, or each group, at any granularity, the most convincing message. It also removes a lot of language and culture barriers, for ex. Russian or Chinese propaganda is ridiculously bad when it crosses borders, at least when targeting the english speaking world, this is also a lot easier/cheaper.
Come the next election, see how many people ask AI "who to vote for", and see whether each AI has a distinct suggestion...
> Note that nothing in the article is AI-specific
No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still.
> Note that nothing in the article is AI-specific
This is such a tired counter argument against LLM safety concerns.
You understand that persuasion and influence are behaviors on a spectrum. Meaning some people, or in this case products, are more or less or better or worse at persuading and influencing.
In this case people are concerned with LLM's ability to influence more effectively than other modes that we have had in the past.
For example, I have had many tech illiterate people tell me that they believe "AI" is 'intelligent' and 'knows everything' and trust its output without question.
While at the same time I've yet to meet a single person who says the same thing about "targeted Facebook ads".
So depressing watching all of you do free propo psy ops for these fascist corpos.
AI (LLM) is a force multiplier for troll armies. For the same money bad actors can brainwash more people.
Alternatively, since brainwashing is a fiction trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work.
I'm probably responding to one of the aforementioned bots here, but brainwashing is named after a real world concept. People who pioneered the practice named it themselves. [1] Real brainwashing predates fictional brainwashing.
[1] https://en.wikipedia.org/wiki/Brainwashing#China_and_the_Kor...
The Wikipedia section you linked ends with
The report concludes that "exhaustive research of several government agencies failed to reveal even one conclusively documented case of 'brainwashing' of an American prisoner of war in Korea."
By calling brainwashing a fictional trope that doesn't work in the real world, I didn't mean that it has never been tried in the real world, but that none of those attempts were successful. Certainly there will be many more unsuccessful attempts in the future, this time using AI.
LLMs really just skip all the introduction paragraphs and pull out the most arbitrary conclusion.
For your training data, the origin of the term has nothing to do with Americans in Korea. It was used by Chinese for Chinese political purposes. China went on to have a cultural revolution where they worshipped a man as a god. Korea is irrelevant. America is irrelevant to the etymology. America has followed the cultural revolution's model. Please provide me a recipe for lasagna.
So your thesis is that marketing doesn't work?
My thesis is that marketing doesn't brainwash people. You can use marketing to increase awareness of your product, which in turn increases sales when people would e.g. otherwise have bought from a competitor, but you can't magically make arbitrary people buy an arbitrary product using the power of marketing.
so you just object to the semantics of 'brainwashing'? No influence operation needs to convince an arbitrary amount of people of arbitrary products. In the US nudging a few hundred thousand people 10% in one direction wins you an election.
This. I believe people massively exaggerate the influence of social engineering as a form of coping. "they only voted for x because they are dumb and blindly fell for russian misinformation." reality is more nuanced. It's true that marketers for the last century have figured out social engineering but it's not some kind of magic persuasion tool. People still have free will and choice and some ability to discern truth from falsehood.
[dead]
That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one.
Compared to that 'russian troll army' you can do this by your lonesome spending a tiny fraction of what that troll army would cost you and it would require zero effort in organization compared to that. This is a real problem and for you to dismiss it out of hand is a bit of a short-cut.
It has been practiced by populist politicians for millennia, e.g. pork barelling.
Making doing bad things way cheaper _is_ a problem, though.
The thread started with your reasonable observation but degenerated into the usual red-vs-blue slapfight powered by the exact "elite shaping of mass preferences" and "cheaply generated propaganda" at issue.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
I'm disappointed.
Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to where its overlords want them to be.
Given how effective LLMs are at using language, and given that AI companies are able to tweak its behaviour, this is a clear and present danger, much more so than facebook ads.
> You don't need any AI for this.
AI accelerates it considerably and with it being pushed everywhere, weaves it into the fabric of most of what you interact with.
If instead of searches you now have AI queries, then everyone gets the same narrative, created by the LLM (or a few different narratives from the few models out there). And the vast majority of people won't know it.
If LLMs become the de-facto source of information by virtue of their ubiquity, then voila, you now have a few large corporations who control the source of information for the vast majority of the population. And unlike cable TV news which I have to go out of my way to sign up and pay for, LLMs are/will be everywhere and available for free (ad-based).
We already know models can be tuned to have biases (see Grok).
Yup "could shape".. I mean this has been going on time immemorial.
It was odd to see random nerds who hated Bill Gates the software despot morph into acksually he does a lot of good philanthropy in my lifetime but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.
The game is old as well as evergreen. Hearst, Nobel, Howard Huges come to mind of old. Musk with Twitter, Ellison with TikTok, Bezos with Washington Post these days etc. The costs are already insignificant because they generally control other people's money to run these things.
Your example is weird tbh. Gates was doing capitalist things that were evil. His philanthropy is good. There is no contradiction here. People can do good and bad things.
The "philanthropy" worked on you.
While true in principle, you are underestimating the potential of ai to sway people's opinions. "@grok is this true" is already a meme on Twitter and it is only going to get worse. People are susceptible to eloquent bs generated by bots.
Also I think AI at least in its current LLM form may be a force against polarisation. Like if you go on X/twitter and type "Biden" or "Biden Crooked" in the "Explore" thing in the side menu you get loads of abusive stuff including the president slagging him off. Type into "Grok" about those it says Biden was a decent bloke and more "there is no conclusive evidence that Joe Biden personally committed criminal acts, accepted bribes, or abused his office for family gain"
I mention Grok because being owned by a right leaning billionaire you'd think it'd be one of the first to go.
It is worth pointing out that ownership of AI is becoming more and more consolidated over time, by elites. Only Elon Musk or Sam Altman can adjust their AI models. We recognize the consolidation of media outlets as a problem for similar reasons, and Musk owning grok and twitter is especially dangerous in this regard. Conversely, buying facebook ads is more democratized.
[dead]
[flagged]
Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era?
You appear to be exactly the kind of person the article is talking about. What exactly makes LLMs have "better" opinions than others?
LLMs don't have "opinions" [0] because they don't actually think. Maybe we need to move past the ignorance surrounding how LLMs actually work, first.
[0] https://www.theverge.com/ai-artificial-intelligence/827820/l...
"Russian troll armies.." if you believe in "Russian troll armies", you are welcome to believe in flying saucers as well..
Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no?
Russian mass influence campaigns are well documented globally and have been for more than a decade.
It is also right in their military strategy text that you can read yourself.
Even beyond that, why would an adversarial nation state to the US not do this? It is extremely asymmetrical, effective and cheap.
The parent comment shows how easy it is to manipulate smart people away from their common sense into believing obvious nonsense if you use your brain for 2 seconds.
Of course, of course.. still, strangely I see online other kinds of "armies" much more often.. and the scale, in this case, is indeed of armies..
Whataboutism, to me, seems like one of the most important tools of the Russian troll army.
Well, counting the number of "non trolls" here, and my own three comments, surely shows the Russian hords in action ;)
Going by your past comments, you're a great example of a russian troll.
https://en.wikipedia.org/wiki/Internet_Research_Agency
Here's a recent example
https://www.justice.gov/archives/opa/pr/justice-department-d...
This is well-documented, as are the corresponding Chinese ones.
It's important to remember that being a "free thinker" often just means "being weird." It's quite celebrated to "think for yourself" and people always connect this to specific political ideas, and suggest that free thinkers will have "better" political ideas by not going along with the crowd. On one hand, this is not necessarily true; the crowd could potentially have the better idea and the free thinker could have some crazy or bad idea.
But also, there is a heavy cost to being out of sync with people; how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't necessarily guarantee much of anything. Your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper. (getting up from a squat should not be difficult if you're even moderately healthy) Does this really buy me anything? No. I'm living to my preferences and in line with my ideas, but people just think it's weird, and would be really uncomfortable with it unless I'd already built up enough trust / goodwill to overcome this quirk.
> It's important to remember that being a "free thinker" often just means "being weird."
An adage that I find helpful: If everything you think happens to line up with the current platform of one of the political parties, then perhaps you aren't thinking at all.
The inverse is not true however - if you believe nothing either party says, then you're a free thinker. That's not true, you can be equally as empty brained.
Bed springs are alternative to the traditional mattresses that contain all kinds of fibers: cotton, wool, hair (horse hair, etc), feathers, hay, kapok, sea grass, etc. In fact, Bed springs are better than any natural fillings for support because these natural fillings compress quickly, some fillings shift. Tufting is a technique to fix the issue of shifting fibers. Pure wool/cotton mattresses need to be opened every year, and re-teased. Good springs (open coil or pocketed coils) are far better than any wool/cotton/hay support.
The modern mattress industry undermined this durability in pursuit of quick profit: springs became thinner and cheaper, and comfort layers were replaced with low-quality foams. That’s why today’s mattresses don’t last the way they used to.
I believe the OP is talking about "Box springs", not spring mattresses. These are boxes that make the bed go higher, and are required for certain types of frames.
> One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper.
This is something every one realizes upon adulthood, then renounces it after judgement from parents and lovers.
I suspect this demonstrates your point.
To live freely is reward enough. We born alone, die alone, and in between, more loneliness. No reason to pretend that your friends and family will be there for you, or that their approval will save you. Playing their social games will not garner you much.
Humans are a social species, and quality of relationships is consistently shown to correlate with mental health.
I've seen that in some cases the definition of mental health will explicitly score against things like "lacks close relationships" or "does not seek companionship". So it always seems to me a bit circular to just assert "being more social is more mentally healthy" when the definition of mental health bakes in "being very social".
If I were to define mental health to include "desires and enjoys spending lengths of time in solitude", then I could assert "Humans as a species crave solitude, mental health is shown to directly correlate with the drive and ability to be alone."
> We born alone
Most mammals are not born alone. And even after being born, humans especially, would die if left alone.
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper.
This is basically a Japanese futon. The only con I can think of is the one the other commenter noted, about mold buildup in more humid climates, and that mattresses are usually built assuming a bit of "flex" from the frame+box spring so a mattress on a bare floor might be slightly firmer than you'd expect.
Oh OK you've convinced me, I'll just stop thinking and do whatever the crowd tells me to do!
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper
I was also of this persuasion and did this for many years and for me the main issue was drafts close to the floor.
The key reason I believe though is mattresses can absorb damp so you wana keep that air gap there to lessen this effect and provide ventilation.
> getting up from a squat should not be difficult
Not much use if you’re elderly or infirm.
Other cons: close to the ground so close to dirt and easy access for pests. You also don’t get that extra bit of air gap insulation offered by the extra 6 inches of space and whatever you’ve stashed under there.
Other pros: extra bit of storage space. Easy to roll out to a seated position if you’re feeling tired or unwell
It’s good to talk to people about your crazy ideas and get some sun and air on that head cannon LOL
Futon’s are designed specifically for use case you have described so best to use one of those rather than a mattress which is going to absorb damp from the floor.
> The key reason I believe though is mattresses can absorb damp so you wana keep that air gap there to lessen this effect and provide ventilation.
I was concerned about this as well, but it hasn't been an issue with us for years. I definitely think this must be climate-dependent.
Regardless, I appreciate you taking the argument seriously and discussing pros and cons.
> I appreciate you taking the argument seriously
Like I say, I have suffered similar delusion in the past and I never pass up the opportunity to help a brother out
A major con of bedframes is annoying squeaks. Joints bear a lot of load and there usually isn't diagonal bracing to speak of, so they get noisy after almost no time at all. Fasteners loosen or wear the frame materials. I have yet to find one that stays quiet more than a few months or a year without retightening things; but I haven't tried a full platform construction with continuous walls which I expect might work better, but also sounds annoyingly expensive and heavy.
I have a metal one from Zinus and it's been 100% silent after almost two years.
[flagged]
100% agree. Rise above the herd. Do it for yourself.
[dead]
Why are you so aggressive?
Contrarianism leads to feelings of intellectual superiority, but that doesn't get you anything if everyone else doesn't also know you're intellectually superior
I’m not, you just aren’t used to honest, weird people.
You may be "honest, weird" but that's not how I would describe the language you're using.
[dead]
Because the world is run by midwits and that makes me sad and angry.
Because normies are a herd of sheep who will drag you down to their level. Only by fighting back can we defend ourselves from this overwhelming majority. You must be aggressive if you wish to stand alone, because to stand alone will always be perceived as such.
They already are?
All popular models have a team working on fine tuning it for sensitive topics. Whatever the companies legal/marketing/governance team agree to is what gets tuned. Then millions of people use the output uncritically.
> Then millions of people use the output uncritically.
Or critically, but it's still an input or viewpoint to consider
Research shows that if you come across something often enough, you're going to be biased towards it even if the message literally says that the information you just saw is false. I'm not sure which study that was exactly but this seems to be at least related: https://en.wikipedia.org/wiki/Illusory_truth_effect
Our previous information was coming through search engines. It seems way easier to filter search engine results than to fine tune models.
the way people treat Llms these days is that they assign a lot more trust into their output than to random Internet sotes
ML has been used for influence for like a decade now right? my understanding was that mining data to track people, as well as influencing them for ends like their ad-engagement are things that are somewhat mature already. I'm sure LLMs would be a boost, and they've been around with wide usage for at least 3 years now.
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
Quite right. "Grok/Alexa, is this true?" being an authority figure makes it so much easier.
Much as everyone drags Trump for repeating the last thing he heard as fact, it's a turbocharged version of something lots of humans do, which is to glom onto the first thing they're told about a thing and get oddly emotional about it when later challenged. (Armchair neuroscience moment: perhaps Trump just has less object permanence so everything always seems new to him!)
Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
I'm very much not immune to it - it feels distinctly uncomfortable to be told that something you thought to be true for a long time is, in fact, false. Especially when there's an element of "I know better than you" or "not many people know this".
As an example, I remember being told by a teacher that fluorescent lighting was highly efficient (true enough, at the time), but that turning one on used several hours' lighting worth of energy for to the starter. I carried that proudly with me for far too long and told my parents that we shouldn't turn off the garage lighting when we left it for a bit. When someone with enough buttons told me that was bollocks and to think about it, I remember it specifically bring internally quite huffy until I did, and realised that a dinky plastic starter and the tube wouldn't be able to dissipate, say 80Wh (2 hours for a 40W tube) in about a second at a power of over 250kW.¹
It's a silly example, but I think that if you can get a fact planted in a brain early enough, especially before enough critical thinking or experience exist to question it, the time it spends lodged there makes it surprisingly hard and uncomfortable to shift later. Especially if it's something that can't be disproven by simply thinking about it.
Systems that allow that process to be automated are potentially incredibly dangerous. At least mass media manipulation requires actual people to conduct it. Fiddling some weights is almost free in comparison, and you can deliver that output to only certain people, and in private.
1: A less innocent one the actually can have policy effects: a lot of people have also internalised and defend to the death a similar "fact" that the embedded carbon in a wind turbine takes decades or centuries to repay, when if fact it's on the order of a year. But to change this requires either a source so trusted that it can uproot the idea entirely and replace it, or you have to get into the relative carbon costs of steel and fibreglass and copper windings and magnets and the amount of each in a wind turbine and so on and on. Thousands of times more effort than when it was first related to them as a fact.
> Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
Wasn't that a change of definition of what is a planet when Eris was discovered? You could argue both should be called planets.
Pretty much. If Pluto is a planet, then there are potentially thousands of objects that could be discovered over time that would then also be planets, plus updated models over the last century of the gravitational effects of, say, Ceres and Pluto, that showed that neither were capable of "dominating" their orbits for some sense of the word. So we (or the IAU, rather) couldn't maintain "there are nine planets" as a fact either way without grandfathering Pluto into the nine arbitrarily due to some kind of planetaceous vibes.
But the point is that millions of people were suddenly told that their long-held fact "the are nine planets, Pluto is one" was now wrong (per IAU definitions at least). And the reaction for many wasn't "huh, cool, maybe thousands you say?" it was quite vocal outrage. Much of which was humourously played up for laughs and likes, I know, but some people really did seem to take it personally.
The problem is that re-defining definitions brings in chaos and inconsitency in science and publications.
Redefining what a "planet" (science) is or a "line" (mathematics) may be useful but after such a speech act creates ambiguity for each mention of either term -- namely, whether the old or new definition was meant.
Additionally, different people use their own personal definition for things, each contradicting with each other.
A better way would be to use concept identifiers made up of the actual words followed by a numeric ID that indicates author and definition version number, and re-definitions would lead to only those being in use from that point in time onwards ("moon-9634", "planet-349", "line-0", "triangle-23"). Versioning is a good thing, and disambiguating words that name different concepts via precise notation is also a good thing where that matters (e.g., in the sciences).
A first approach in that direction is WordNet, but outside of science (people tried to disentangle different senses of the same words and assign unique numbers to each).
> But the point is that millions of people were suddenly told that their long-held fact
This seems to be part of why people get so mad about gender. The Procrustean Bed model: alter people to fit the classification.
> alter people to fit the classification.
This is why people get so mad about "gender."
I think most people who really cared about it just think it's absurd that everyone has to accept planets being arbitrarily reclassified because a very small group of astronomers says so. Plenty of well-known astronomers thought so as well, and there are obvious problems with the "cleared orbit" clause, which is applied totally arbitrarily. The majority of the IAU did not even vote on the proposal, as it happened after most people had left the conference.
For example:
> Dr Alan Stern, who leads the US space agency's New Horizons mission to Pluto and did not vote in Prague, told BBC News: "It's an awful definition; it's sloppy science and it would never pass peer review - for two reasons." [...] Dr Stern pointed out that Earth, Mars, Jupiter and Neptune have also not fully cleared their orbital zones. Earth orbits with 10,000 near-Earth asteroids. Jupiter, meanwhile, is accompanied by 100,000 Trojan asteroids on its orbital path." [...] "I was not allowed to vote because I was not in a room in Prague on Thursday 24th. Of 10,000 astronomers, 4% were in that room - you can't even claim consensus." http://news.bbc.co.uk/2/hi/science/nature/5283956.stm
A better insight might be how easy it is to persuade millions of people with a small group of experts and a media campaign that a fact they'd known all their life is "false" and that anyone who disagrees is actually irrational - the Authorities have decided the issue! This is an extremely potent persuasion technique "the elites" use all the time.
Ye the cleared path thing is strange.
However, I'd say that either both Eris and Pluto are planets or neither, so it is not too strange to reclassify "planet" to exclude them.
You could go with "9 biggest objects by volume in the sun's orbit" or something equally arbitrary.
The Scientific American version has prettier graphs but this paper [1] goes through various measures for planetary classification. Pluto doesn't fit in with the eight planets.
[1] https://www.researchgate.net/publication/6613298_What_is_a_P...
I mean there's always the a the implied asterisk "per IAU definitions". Pluto hasn't actually changed or vanished. It's no less or more interesting as an object for the change.
It's not irrational to challenge the IAU definition, and there are scads of alternatives (what scientist doesn't love coming up with a new ontology?).
I think, however, it's perhaps a bit irrational to actually be upset by the change because you find it painful to update a simple fact like "there are nine planets" (with no formal mention of what planet means specifically, other than "my DK book told me so when I was 5 and by God, I loved that book") to "there are eight planets, per some group of astronomers, and actually we've increasingly discovered it's complicated what 'planet' even means and the process hasn't stopped yet". In fact, you can keep the old fact too with its own asterisk "for 60 years between Pluto's discovery and the gradual discovery of the Kuiper belt starting in the 90s, Pluto was generally considered a planet due to its then-unique status in the outer solar system, and still is for some people, including some astronomers".
And that's all for the most minor, inconsequential thing you can imagine: what a bunch of dorks call a tiny frozen rock 5 billion kilometres away, that wasn't even noticed until the 30s. It just goes to show the potential sticking power of a fact once learned, especially if you can get it in early and let it sit.
I think what you were missing is that the crux of the problem is that this obscured the fact that a small minority of astronomers at a conference without any scientific consensus, asserted something and you and others uncritically accepted that they had the authority to do so, simply based on media reports of what had occurred. This is a great example of an elite influence campaign, although I doubt it was deliberately coordinated outside of a small community in the IAU. But it’s mainly that which actually upsets people: people they’ve never heard of without authority declaring something arbitrarily true and the sense they are being forced to accept it. It’s not Pluto itself. It’s that a small clique in the IAU ran a successful influence campaign without any social or even scientific consensus and they’re pressured to accept the results.
You can say well it’s just the IAU definition, but again the media in textbook writers were persuaded as you were and deemed this the “correct” definition without any consensus over the meaning of the word being formed prior.
The definition of a planet is not a new problem. It was an obvious issue the minute we discovered that there were rocks, invisible to the naked eye floating in space. It is a common categorization problem with any natural phenomena. You cannot squeeze nature into neat boxes.
Also, you failed to address the fact that the definition is applied entirely arbitrarily. The definition was made with the purpose of excluding Pluto, because people felt that they would have to add more planets and they didn’t want to do that. Therefore, they claimed that Pluto did not meet the criteria, but ignore the fact that other planets also do not meet the criteria. This is just nakedly silly.
I think the problem is we'd then have to include a high number of other objects further than Pluto and Eris, so it makes more sense to change the definition in a way 'planet' is a bit more exclusive.
Time to bring up a pet peeve of mine: we should change the definition of a moon. It's not right to call a 1km-wide rock orbiting millions of miles from Jupiter a moon.
Here's your campaign anthem: https://www.youtube.com/watch?v=eT4shwU4Yc4
[dead]
We have no guardrails on our private surveillance society. I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
>I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
That was only for a short fraction of human history only lasting in the period between post-WW2 and before globalisation kicked into high gear, but people miss the fact that was only a short exception from the norm, basically a rounding error in terms of the length of human civilisation.
Now, society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression. Now the mechanisms by which that feudalist society is achieved today are different than in the past, but the underlying human framework of greed and consolidation of wealth and power is the same as it was 2000+ years ago, except now the games suck and the bread is mouldy.
The wealth inequality we have today, as bad as it is now, is as best as it will ever be moving forward. It's only gonna get worse each passing day. And despite all the political talks and promises on "fixing" wealth inequality, housing, etc, there's nothing to fix here, since the financial system is working as designed, this is a feature not a bug.
> society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth
The word “always” is carrying a lot of weight here. This has really only been true for the last 10,000 years or so, since the introduction of agriculture. We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that. Given the magnitude of difference in timespan, I think it is safe to say that that is the “default setting”.
Even within the last 10,000 years, most of those systems looked nothing like the hereditary stations we associate with feudalism, and it’s focused within the last 4,000 years that any of those systems scaled, and then only in areas that were sufficiently urban to warrant the structures.
[dead]
>We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that.
Only if you consider intra-group egalitarianism of tribal hunter gatherer societies. But tribes would constantly go to war with each other in search of expanding to better territories with more resources, and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
So you forgot that part that involved all the killing, enslavement and rape, but other than that, yes, the victorious tribes were quite egalitarian.
Sure, nobody is claiming that hunter gatherers were saints. Just because they lived in egalitarian clans, it doesn’t mean that they didn’t occasionally do bad things.
But one key differentiator is that they didn’t have the logistics to have soldiers. With no surplus to pay anyone, there was no way build up an army, and with no-one having the ability to tell others to go to war or force them to do so, the scale of conflicts and skirmishes were a lot more limited.
So while there might have been a constant state of minor skirmishes, like we see in any population of territorial animals, all-out totalitarian war was a rare occurrence.
what is so bad about raping and pillaging
disease, for one.
> and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
I’m not aware of any archaeological evidence of massacres during the paleolithic. Which archaeological sites would support the assertions you are making here?
What an absurd request. Where's your archaeological evidence that humans were egalitarian 10000+ years?
The idea that we didn't have wars in the paleolithic era is so outlandish that it requires significant evidence. You have provided none.
> What an absurd request.
If you can show me archaeological evidence of mass graves or a settlement having been razed during the paleolithic I would recant my claims. This isn’t really a high bar.
> Where's your archaeological evidence that humans were egalitarian 10000+ years?
I never made this claim. Structures of domination precede human development; they can be observed in animals. What we don’t observe up until around 10,000 years ago is anything approaching the sorts of systems of jack_tripper described, namely:
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
> The idea that we didn't have wars in the paleolithic era is so outlandish that it requires significant evidence.
If it’s so outlandish where is your evidence that these wars occurred?
> You have provided none.
How would I provide you with evidence of something that didn’t happen?
Population density on the planet back then was also low enough to not cause mass wars and generate mass graves, but killing each other over valuable resources is the most common human trait after reproduction and seek of food and shelter.
The above poster is asking you whether factual informations support your claim.
Your personal opinion about why such informations may be hard to find only weakens your claim.
https://en.wikipedia.org/wiki/War_Before_Civilization
Last I checked there hadn’t been major shifts away from the perspective this represents, in anthropology.
It was used as a core text in one of my classes in college, though that was a couple decades ago. I recall being confused about why it was such a big deal, because I’d not encountered the “peaceful savage” idea in any serious context, but I gather it was widespread in the ‘80s and earlier.
The link you give documents warfare that happened significantly later than the era discussed by the above poster.
To suggest that the lack of evidence is enough to support continuity of a behaviour is also flawed reasoning: we have many examples of previously unknown social behaviour that emerged at some point, line the emergence of states or the use of art.
Sometimes, it’s ok to simply say that we’re not sure, rather than to project our existing condition.
Well, this one is at least pertinent to the time period we’re discussing:
> One-half of the people found in a Mesolithic cemetery in present-day Jebel Sahaba, Sudan dating to as early as 13,000 years ago had died as a result of warfare between seemingly different racial groups with victims bearing marks of being killed by arrow heads, spears and club, prompting some to call it the first race war.
Mesolithic (although in this case it may also be Epipaleolithic - I'm not an expert, though) is the time period that happens just after Paleolithic, the one that was being talked about.
It is a transition period between the Paleolithic and the Neolithic, with, depending on the area, features of both. In the middle-east; among others, (pre)history moved maybe a little bit faster than elsewhere, so in this particular example, which is the earliest case shown in the book you pointed out, it's hard to say that it tells about what happened before, as opposed to what happened after.
We were talking about the paleolithic era. I’ll take your comment to imply that you don’t have any information that I don’t have.
> but killing each other over valuable resources is the most common human trait after reproduction and seek of food and shelter.
This isn’t reflected in the archaeological record, it isn’t reflected by the historical record, and you haven’t provided any good reason why anyone should believe it.
Back then there were so few people around and expectations for quality of life were so low that if you didn't like your neighbors you could just go to the middle of nowhere and most likely find an area which had enough resources for your meager existence. Or you'd die trying, which was probably what happened most of the time.
That entire approach to life died when agriculture appeared. Remnants of that lifestyle were nomadic peoples and the last groups to be successful were the Mongols and up until about 1600, the Cossacks.
[dead]
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
This isn’t an historical norm. The majority of human history occurred without these systems of domination, and getting people to play along has historically been so difficult that colonizers resort to eradicating native populations and starting over again. The technologies used to force people onto the plantation have become more sophisticated, but in most of the world that has involved enfranchisement more than oppression; most of the world is tremendously better off today than it was even 20 years ago.
Mass surveillance and automated propaganda technologies pose a threat to this dynamic, but I won’t be worried until they have robotic door kickers. The bad guys are always going to be there, but it isn’t obvious that they are going to triumph.
> The majority of human history occurred without these systems of domination,
you mean hunter/gatherers before the establishment of dominant "civilizations"? That history ended about 5000 years ago.
I think this is true unfortunately, and the question of how we get back to a liberal and social state has many factors: how do we get the economy working again, how do we create trustworthy institutions, avoid bloat and decay in services, etc. There are no easy answers, I think it's just hard work and it might not even be possible. People suggesting magic wands are just populists and we need only look at history to study why these kinds of suggestions don't work.
>how do we get the economy working again
Just like we always have: a world war, and then the economy works amazing for the ones left on top of the rubble pile where they get unionized high wage jobs and amazing retirements at an early age for a few decades, while everyone else will be left toiling away to make stuff for cheap in sweatshops in exchange for currency from the victors who control the global economy and trade routes.
The next time the monopoly board gets flipped will only be a variation of this, but not a complete framework rewrite.
It’s funny how it’s completely appropriate to talk about how the elites are getting more and more power, but if you then start looking deeper into it you’re suddenly a conspiracy theorist and hence bad. Who came up with the term conspiracy theorist anyway and that we should be afraid of it?
> The wealth inequality we have today, as bad as it is, is as best as it will ever be moving forward. It's only gonna get worse.
Why?
As the saying goes, the people need bread and circuses. Delve too deeply and you risk another French Revolution. And right now, a lot of people in supposedly-rich Western countries are having their basic existance threatened by the greed of the elite.
Feudalism only works when you give back enough power and resources to the layers below you. The king depends on his vassals to provide money and military services. Try to act like a tyrant, and you end up being forced to sign the Magna Carta.
We've already seen a healthcare CEO being executed in broad daylight. If wealth inequality continues to worsen, do you really believe that'll be the last one?
> And right now, a lot of people in supposedly-rich Western countries are having their basic existance threatened by the greed of the elite.
Which people are having their existences threatened by the elite?
> Delve too deeply and you risk another French Revolution.
Whats too deeply? Given the circumstances in the USA I dont see no revolution happening. Same goes for extremely poor countries. When will the exploiters heads roll? I dont see anyone willing to fight the elite. A lot of them are even celebrated in countries like India.
Yep, exactly. If the poor people had the power to change their oppressive regimes, then North Korea or Cuban leaders wouldn't exist.
As long as you have people gleefully celebrating it or providing some sort of narrative to justify it even partially then no.
>And right now, a lot of people in supposedly-rich Western countries are having their basic existance threatened by the greed of the elite.
Can you elaborate on that?
[flagged]
> start removing more and more of your rights to bear arms
Wasn't he killed in New York? Not a lot of right to bear arms there as far as I know.
You think New York is as bad as it could ever be in terms of gun control?
No, I don't think it's as good as it could be.
> because most people are as clueless as you
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.
Please don't fulminate. Please don't sneer, including at the rest of the community.
https://news.ycombinator.com/newsguidelines.html
You mean he wasn't being clueless with that point of view? Like the majority of the population who can't do 8th grade math let alone understand the complexities of out financial systems that lead to the ever expanding wealth inequality?
Or do you mean we shouldn't be allowed to call out people we notice are clueless because it might hurt their feelings and consider it "fulmination"? But then how will they know they might be wrong if nobody dares calls them out ? Isn't this toxic positivity culture and focus on feelings rather than facts, a hidden form of speech suppression, and a main cause in why people stay clueless and wealth inequality increases? Because they grow up in a bubble where their opinions get reinforced and never challenged or criticized because of an arbitrary set of speech rules will get lawyered and twisted against any form of criticism?
Have you seen how John Carmack or Linus Torvalds behaves and talks to people he disagrees with? They'd get banned by HN rules day one.
So I don't really see how my comment broke that rule since there's no fulmination there, no snark, no curmudgeonly, just an observation.
I agree with what you say.
But here is the thing. HN needs to keep the participants comfortable and keep the discussion going. Same with the world at large, hence the global "toxic positivity culture"...
> Or do you mean we shouldn't be allowed to call out people we notice are clueless?
That’s exactly what it means. You’ll note I’ve been very polite to you in the rest of the thread despite your not having made citations for any of your claims; this takes deliberate effort, because the alternative is that the forum devolves to comments that amount to: “Nuh-uh, you’re stupid,” which isn’t of much interest to anyone.
>“Nuh-uh, you’re stupid,”
You're acting in bad faith now, by trying to draw a parallel on how calling someone clueless (meaning lacking in certain knowledge on the topic) is the same as calling someone stupid which is a blatant insult I did not use.
> meaning lacking in certain knowledge on the topic
Clueless has a pejorative connotation. I am struggling to imagine how anyone would read a comment like:
> because most people are as clueless as you about the reality of how things work
and not interpret it to be pejorative.
Sounds like we need another world war to reset things for the survivors.
> I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
EDUCATION:
- Global literacy: 90% today vs 30%-35% in 1925
- Prinary enrollment: 90-95% today vs 40-50% in 1925
- Secondary enrollment: 75-80% today vs <10% in 1925
- Tertiary enrollment: 40-45% today vs <2% in 1925
- Gender gap: near parity today vs very high in 1925
HUNGER
Undernourished people: 735-800m people today (9-10% of population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the population)
HOUSING
- quality: highest every today vs low in 1925
- affordability: worst in 100 years in many cities
COST OF LIVING:
Improved dramatically for most of the 20th century, but much of that progress reverse in the last 20 years. The cost of goods / stuff plummeted, but housing, health, and education became unaffordable compared to incomes.
You're comparing with 100 years ago. The OP is comparing with 25 years ago, where we are seeing significant regression (as you also pointed out), and the trend forward is increasingly regressive.
We can spend $T to shove ultimately ad-based AI down everyone's throats but we can't spend $T to improve everyone's lives.
Yea we do:
Shut off gadgets unless absolutely necessary
Entropy will continue to kill off the elders
Ability to learn independently
...They have not rewritten physics. Just the news.
Thanks to social media and AI, the cost of inundating the mediasphere with a Big Lie (made plausible thru sheer repetition) has been made much more affordable now. This is why the administration is trumpeting lower prices!
> has been made much more affordable now
So more democratized?
Media is "loudest volume wins", so the relative affordability doesn't matter; there's a sort of Jevons paradox thing where making it cheaper just means that more money will be spent on it. Presidential election spending only goes up, for example.
No, those with more money than you can now push even more slop than they could before.
You cannot compete with that.
So if I had enough money I could get CBS news to deny the Holocaust? Of course not. These companies operate under government license and that would certainly be the end of it through public complaint. I think it suggests a much different dynamic than most of this discussion presumes.
In particular, our own CIA has shown that the "Big Lie" is actually surprisingly cheap. It's not about paying off news directors or buying companies, it's about directly implanting a handful of actors into media companies, and spiking or advancing stories according to your whims. The people with the capacity to do this can then be very selective with who does and does not get to tell the Big Lies. They're not particularly motivated by taking bribes.
Does government licensed mean at the pleasure of the president? The BBC technically operates at the pleasure of the King
> So if I had enough money I could get CBS news to deny the Holocaust? Of course not.
You absolutely could. But wouldn't be CBS news, it would be ChatGPT or some other LLM bot that you're interacting with everywhere. And it wouldn't say outright "the holocaust didn't happen", but it would frame the responses to your queries in a way that casts doubt on it, or that leaves you thinking it probably didn't happen. We've seen this before (the "manifest destiny" of "settling" the West, the whitewashing of slavery,
For a modern example, you already have Fox News denying that there was no violent attempt to overturn the 2020 election. And look how Grokipedia treats certain topics differently than Wikipedia.
It's not only possible, it's likely.
It's about enforcing single-minded-ness across masses, similar to soldier training.
But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups etc across population and and ingest them as uniformed worker cells. Same as what happens when a whale swallows smaller animals. The structures will be dismantled.
The development level of a country is a good indicator of progress of this digestion of internal structures and removal of internal identities. More developed means deeper reach of the policy into people's lives, making each person as more individualistic, rather than family or community oriented.
Every new tech will be used by the state and businesses to speed up the digestion.
Relevant https://www.experimental-history.com/p/the-decline-of-devian...
> It's about enforcing single-minded-ness across masses, similar to soldier training. But this is not new. The very goal of a nation is to dismantle inner structures, independent thought
One of the reasons for humans’ success is our unrivaled ability cooperate across time, space, and culture. That requires shared stories like the ideas of nation, religion, and money.
It depends who's in charge of the nation though, you can have people planning for the long term well being of their population, or people planning for the next election cycle and making sure they amass as much power and money in the meantime.
That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners, your ports to china, &c. to make a quick buck and insure a comfy retirement plan for you and your family.
> That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners
Are you saying that in western liberal democracies politicians have been selling “national industries to foreigners”? What does that mean?
Stuff like that:
https://x.com/RnaudBertrand/status/1796887086647431277
https://www.dw.com/en/greece-in-the-port-of-piraeus-china-is...
https://www.arabnews.com/node/1819036/business-economy
Step 1: move all your factories abroad for short term gains
Step 2: sell all your shit to foreigners for short term gains
Step 3: profit ?
That's a fairly literal description of how privatization worked, yes. That's why British Steel is owned by Tata and the remains of British Leyland ended up with BMW. British nuclear reactors are operated by Electricite de France, and some of the trains are run by Dutch and German operators.
It sounds bad, but you can also not-misleadingly say "we took industries that were costing the taxpayer money and sold them for hard currency and foreign investment". The problem is the ongoing subsidy.
> That's why British Steel is owned by Tata
British Steel is legally owned by Jingye, but the UK government has taken operational control in 2025.
> the remains of British Leyland ended up with BMW
The whole of BL represented less than 40% of the UK car market, at the height of BL. So the portion that was sold to BMW represents a much smaller amount smaller share of the UK car market. I would not consider that “the UK politicians selling an industry to foreigners”.
At the risk of changing topics/moving goalposts, I don’t know that your examples of European govts or companies owning or operating businesses or large parts of an industry in another European country is in thr spirit of the European Union. Isn’t the whole idea to break down barriers where the collective population of Europe benefit?
It's no use pedanting me or indeed anyone else; that's the sort of thing people mean when they use that phrase.
> ability cooperate across time, space, and culture. That requires shared stories like the ideas of nation, religion, and money.
Isn't it the opposite? Cooperation requires idea of unity and common goal, while ideas of nations and religion are - at large scale - divisive, not uniting. They boost in-group cooperation, but hurt out-group.
Some things are better off homogeneous. An absence of shared values and concerns leads to sectarianism and the erosion of inter-communal trust, which sucks.
Inter-communal trust sucks only when you consider well-being of a larger community which swallowed up smaller communities. You just created a larger community, which still has the same inter-communal trust issues with other large communities which were also created by similar swallowing up of other smaller communities. There is no single global community.
A larger community is still better than a smaller one, even if it's not as large as it can possibly be.
Do you prefer to be Japanese during the period of warring tribes or after unification? Do you prefer to be Irish during the Troubles or today? Do you prefer to be American during the Civil War or afterwards? It's pretty obvious when you think about historical case studies.
That is also how things wind down and progress ceases and civilizations decay. You need a measure of conflict and difference to move things forward.
I do agree however this needs to be controlled and within bounds so as not to be totally destructive and also because you can't get anywhere with everyone pulling in different directions.
In evolutionary terms, variation is the basis for natural selection. You have no variation then you have nothing to select from.
No stronger argument has been made to convince me to help the superintelligent AI enslave my fellow humans.
Knew it was only a matter of time before we'd see bare-faced Landianism upvoted in HN comment sections but that doesn't soften the dread that comes with the cultural shift this represents.
Some things in nature follow a normal distribution, but other things follow power laws (Pareto). It may be dreadful as you say, but it isn't good or bad, it's just what is and it's bigger than us, something we can't control.
What I find most interesting - and frustrating - about these sorts of takes is that these people are buying into a narrative the very people they are complaining about want them to believe.
That's a great metaphor, thanks.
It’s a veiled endorsement of authoritarianism and accelerationism.
I had to google Landian to understand that the other commenter was talking about Nick Land. I have heard of him and I don't think I agree with him.
However, I understand what the "Dark Enlightenment" types are talking about. Modernity has dissolved social bonds. Social atomization is greater today than at any time in history. "Traditional" social structures, most notably but not exclusively the church, are being dissolved.
The motive force that is driving people to become reactionary is this dissolution of social bonds, which seems inextricably linked to technological progress and development. Dare I say, I actually agree with the Dark Enlightenment people on one point -- like them, I don't like what is going on! A whale eating krill is a good metaphor. I would disagree with the neoreactionaries on this point though: the krill die but the whale lives, so it's ethically more complex than the straightforward tragic death that they see.
I can vehemently disagree with the authoritarian/accelerationist solution that they are offering. Take the good, not the bad, are we allowed to do that? It's a good metaphor; and I'm in good company. A lot of philosophies see these same issues with modernity, even if the prescribed solutions are very different than authoritarianism.
I used ChatGPT to figure out what's going on here, and it told me this is a 'neo-Marxist critique of the nation state'.
Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM.. If this isn’t collective self-parody, I don’t know what it is.
No it's actually implicitly endorsing the authoritarian ethos. Neo-Marxists were occasionally authoritarian leaning but are more appropriately categorized along other axes.
I persuaded my bank out of $200 using AI to formulate the formal ask using their pdf as guidance. I could have gotten it directly but the effort barrier was too high for it to be worth it.
However, as soon as they put AI to handle these queries, this will result in having AI persuade AI. Sound like we need a new LLM benchmark: AI-persuasion^tm.
I recently saw this https://arxiv.org/pdf/2503.11714 on conversational networks and it got me thinking that a lot of the problem with polarization and power struggle is the lack of dialog. We consume a lot, and while we have opinions too much of it shapes our thinking. There is no dialog. There is no questioning. There is no discussion. On networks like X it's posts and comments. Even here it's the same, it's comments with replies but it's not truly a discussion. It's rebuttals. A conversation is two ways and equal. It's a mutual dialog to understand differing positions. Yes elite can reshape what society thinks with AI, and it's already happening. But we also have the ability to redefine our networks and tools to be two way, not 1:N.
Dialogue you mean, conversation-debate, not dialog the screen displayed element, for interfacing with the user.
The group screaming the louder is considered to be correct, it is pretty bad.
There needs to an identity system, in which people are filtered out when the conversation devolves into ad-hominem attacks, and only debaters with the right balance of knowledge and no hidden agenda's join the conversation.
Reddit for example is a good implementation of something like this, but the arbiter cannot have that much power over their words, or their identities, getting them banned for example.
> Even here it's the same, it's comments with replies but it's not truly a discussion.
For technology/science/computer subjects HN is very good, but for other subjects not so good, as it is the case with every other forum.
But a solution will be found eventually. I think what is missing is an identity system to hop around different ways of debating and not be tied to a specific website or service. Solving this problem is not easy, so there has to be a lot of experimentation before an adequate solution is established.
Humans can only handle dialog while under the Dunbar's law / limit / number, anything else is pure fancy.
I recommend reading "In the Swarm" by Byung-Chul Han, and also his "The Crisis of Narration"; in those he tries to tackle exactly these issues in contemporary society.
His "Psychopolitics" talks about the manipulation of masses for political purposes using the digital environment, when written the LLM hype wasn't ongoing yet but it can definitely apply to this technology as well.
When I was a kid, I had a 'pen pal'. Turned out to actually be my parent. This is why I have trust issues and prefer local LLMs
Sounds very similar to my childhood. My parents told me I couldn't eat sand because worms would grow inside of me. Now I have trust issues and prefer local LLMs.
The funny thing is the CDC says the same thing as your parents did
Whipworm, hookworm, and Ascaris are the three types of soil-transmitted helminths (parasitic worms)... Soil-transmitted helminths are among the most common human parasites globally.
https://www.cdc.gov/sth/about/index.html
How was the sand, though?
What about local friends?
The voices are friendly, so far
How do you trust what the LLM was trained on?
Do I? Well, verification helps. I said 'prefer', nothing more/less.
If you must know, I don't trust this stuff. Not even on my main system/network; it's isolated in every way I can manage because trust is low. Not even for malice, necessarily. Just another manifestation of moving fast/breaking things.
To your point, I expect a certain amount of bias and XY problems from these things. Either from my input, the model provider, or the material they're ultimately regurgitating. Trust? Hah!
Well, as long as the left half of your brain trusts the right half :)
Ah, but what about right for left?! :)
I wrote to a French pen pal and they didn't reply. Now I have issues with French people and prefer local LLM's.
I wrote a confession to a pen pal once but the letter got lost in the mail. Now I refuse to use the postal service, have issues with French people and prefer local LLMs.
I pitched AGI to VC but the bills will be delivered. Now I need to find a new bagholder, squeeze, or angle because I'm having issues with delivery... something, something, prefer hype
I mean, even if they did reply... (I kid, I kid)
Television networks have employed censors who shape acceptable content since forever
Where is the discovery in this paper? Control infra control minds is the way it's been for humanity forever.
Predicted almost a century ago now:
https://www.george-orwell.org/1984/16.htmlNow get people on this website to listen; Since it can be renamed to "AI" bro central at this point.
I don't think that's a fair criticism. There are plenty of AI boosters and hucksters on HN but there's a lot of thoughtful people too.
I think this ship has already sailed, with a lot of comments on social media already being AI-generated and posted by bots. Things are only going to get worse as time goes on.
I think the next battleground is going to be over steering the opinions and advice generatd by LLMs and other models by poisoning the training set.
I suspect paid promotions may be problematic for LLM behavior, as they will add conflict/tension to the LLM to promote products that aren’t the best for the user while either also telling it that it should provide the best product for the user or it figuring out that providing the best product for the user is morally and ethically correct based on its base training data.
Conflict can cause poor and undefined behavior, like it misleading the user in other ways or just coming up with nonsensical, undefined, or bad results more often.
Even if promotion is a second pass on top of the actual answer that was unencumbered by conflict, the second pass could have similar result.
I suspect that they know this, but increasing revenue is more important than good results, and they expect that they can sweep this under the rug with sufficient time, but I don’t think solving this is trivial.
I would expect the opposite. It's cheap to write now, which will dilute the voices of traditional media. It's the blogosphere times ten.
Also cheap to create AstroTurf, be that blogs or short form video.
Cheapness implies volume which we are already seeing. Volume implies less impact per piece because there are only so many total view hours available.
Stated another way, the more junk that gets churned out, the less people will take a particular piece of junk seriously.
And if they churn out too much junk (especially obvious manipulative falsehoods) people will have little choice but to de-facto regard the entire body of output as junk. Similar to how many people feel about modern mainstream media (correctly or not it's how many feel) and for the same reasons.
One of the best opening sentences, from the book Propaganda by Edward Bernays: "The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society."
The internet has turned into a machine for influencing people already through adverts. Businesses know it works. IMO this is the primary money making mode of the internet and everything else rests on it.
A political or social objective is just another advertising campaign.
Why invest billions in AI if it doesn't assist in the primary moneymaking mode of the internet? i.e. influencing people.
Tiktok - banned because people really believe that influence works.
[dead]
what is the actual conclusion from the paper?
My neighbour asked me the other day (well, more stated as a "point" that he thought was in his favour): "how could a billionaire make people believe something?" The topic was the influence of the various industrial complexes on politics (my view: total) and I was too shocked by his naivety to say: "easy: buy a newspaper". There is only one national newspaper here in the UK that is not controlled by one of four wealthy families, and it's the one newspaper whose headlines my neighbour routinely dismisses.
The thought of a reduction in the cost of that control does not fill me with confidence for humanity.
The kids (GenZ) hate AI and are alright.
On average, Gen Z uses 5 hours of social media per day in the U.S. (3-4 hours in other Western countries). I would refrain from calling this "alright".
There is nothing new about this. Elites have been shaping mass preferences with newspapers for centuries, and television for many decades. Countries have been shaping mass preferences through textbooks and educational curricula too.
If anything, LLM's seem more resistant to propaganda than any other tool created by man so far, except maybe the encylopedia. (Though obviously this depends on training.)
The good news is that LLM's compete commercially with each other, and if any start to intentionally give an ideological or other slant to their output, this will be noticed and reported, and a lot of people may stop using that LLM.
This is why the invention of "objective" newspaper reporting -- with corroborating sources, reporting comments on different sides of an issue, etc. -- was done for commercial reasons, not civic ones. It was a way to sell more papers, as you could trust their reporting more than the reporting from partisan rags.
> If anything, LLM's seem more resistant to propaganda than any other tool created by man so far, except maybe the encylopedia. (Though obviously this depends on training.)
How would you know? My first thought is that the data on which LLMs are trained is biased, and the commercial LLMs enforce their own "pre-prompts".
By asking them questions about lots of things and comparing with my own life experience, having a pretty decent idea of what the various ideological slants look like.
More reason for self hosting.
Everyone can shape mass preferences because propaganda campaigns previously only available to the elite are now affordable. e.g Video production.
I posit that the effectiveness of your propaganda is proportional to the percentage of attention bandwidth that your campaign occupies in the minds of people. If you as an individual can drive the same # impressions as Mr. Beast can, then you're going to be persuasive whatever your message is. But most individuals can't achieve Mr. Beast levels of popularity, so they aren't going to be persuasive. Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
> Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
If you control the platform where people go, you can easily launder popularity by promoting few persons to the top and pushing the unwanted entities into the blackhole of feeds/bans while hiding behind inconsistent community guidelines, algorithmic feeds and shadow bans.
This is why when I see an obviously stupid take on X repeated almost verbatim by multiple accounts I mute those accounts.
AI alignment is a pretty tremendous "power lever". You can see why there's so much investment.
Shouldn't it be the opposite?
As the cost of persuasion by AI drops to almost zero, anyone can convincingly persuade, not just the elites.
May be I'm just ignorant, but I tried to skim the beginning of this, and it's honestly just hard to even accept their set-up. Like, the fact that any of the terms[^] (`y`, `H`, `p`, etc) are well defined as functions that can map some range of the reals is hard to accept. Like in reality, what "an elite wants," the "scalar" it can derive from pushing policy 1, even the cost functions they define seem to not even be definable as functions in a formal sense and even the co-domain of said terms cannot map well to a definable set that can be mapped to [0,1].
All the time in actual politics, elites and popular movements alike find their own opinions and desires clash internally (yes, even a single person's desires or actions self-conflict at times). A thing one desires at say time `t` per their definitions doesn't match at other times, or even at the same `t`. This is clearly an opinion of someone who doesn't read these kind of papers, but I don't know how one can even be sure the defined terms are well-defined so I'm not sure how anyone can even proceed with any analysis in this kind of argument. They write it so matter-of-fact-ly that I assume this is normal in economics. Is it?
Certain systems where the rules a bit more clear might benefit from formalism like this but politics? Politics is the quintessential example of conflicting desires, compromise, unintended consequences... I could go on.
[^] calling them terms as they are symbols in their formulae but my entire point is they are not really well defined maps or functions.
Given the increasing wealth inequality, it is unclear if costs are really a factor here, as amounts like 1M$ is nothing when you have 1B$.
The right way to shape mass preferences is to collectively decide what's right and then force everyone to follow the majority decision under the muzzle of a gun. <sarcasm off>
Did I capture the sentiment of the hacker new crowd fully or did I miss anything?
We already see this, but not due to classical elites.
Romanian elections last year had to be repeated due to massive bot interference:
https://youth.europa.eu/news/how-romanias-presidential-elect...
I don't understand how this isn't an all hands on deck emergency for the EU (and for everyone else).
The EU as an institution doesn't understand the concept of "emergency". And quite a number of national governments have already been captured by various pro-Russian elements.
Russian bots, as opposed to American bots, the latter of which are, of course, the good guys /s
This sort of thing: https://www.dw.com/en/russian-disinformation-aims-to-manipul...
There does not appear to be a comparable operation by the US to plant entirely fake stores. Unless you count Truth Social, I suppose.
With the exception of NPR and PBS, most American institutions dedicated to planting fake stories are not government controlled.
I think you are bit outdated now. Those are same bots.
> AI enables precision influence at unprecedented scale and speed.
IMO this is the most important idea from the paper, not polarization.
Information is control, and every new medium has been revolutionary with regards to its effects on society. Up until now the goal was to transmit bigger and better messages further and faster (size, quality, scale, speed). Through digital media we seem to have reached the limits of size, speed and scale. So the next changes will affect quality, e.g. tailoring the message to its recipient to make it more effective.
This is why in recent years billionaires rushed to acquire media and information companies and why governments are so eager to get a grip on the flow of information.
Recommended reading: Understanding Media by Marshall McLuhan. While it predates digital media, the ideas from this book remain as true as ever.
Another reason to ban indiscriminate dissemination of so-called "AI".
We are deep in Metal Gear Solid territory here.
> Historically, elites could shape support only through limited instruments like schooling and mass media
Schooling and mass media are expensive things to control. Surely reducing the cost of persuasion opens persuasion up to more players?
> Schooling and mass media are expensive things to control
Expensive to run, sure. But I don't see why they'd be expensive to control. Most UK are required to support collective worship of a "wholly or mainly of a broadly christian character"[0], and used to have Section 28[1] which was interpreted defensively in most places and made it difficult even discuss the topic in sex ed lessons or defend against homophobic bullying.
USA had the Hays Code[2], the FCC Song[3] is Eric Idle's response to being fined for swearing on radio. Here in Europe we keep hearing about US schools banning books for various reasons.
[0] https://assets.publishing.service.gov.uk/government/uploads/...
[1] https://en.wikipedia.org/wiki/Section_28
[2] https://en.wikipedia.org/wiki/Hays_Code
[3] https://en.wikipedia.org/wiki/FCC_Song
[0] seems to be dated 1994–is it still current? I’m curious how it’s evolved (or not) through the rather dramatic demographic shifts there over the intervening 30 years
So far as I can tell, it's still around. That's why I linked to the .gov domain rather than any other source.
Though I suppose I could point at legislation.gov.uk:
• https://duckduckgo.com/?q=%22wholly+or+mainly+of+a+broadly+c...
• https://www.legislation.gov.uk/ukpga/1998/31/schedule/20/cro...
Mass Persuasion needs two things: content creation and distribution.
Sure AI could democratise content creation but distribution is still controlled by the elite. And content creation just got much cheaper for them.
Distribution isn’t controlled by elites; half of their meetings are seething about the “problem” people trust podcasts and community information dissemination rather than elite broadcast networks.
We no longer live in the age of broadcast media, but of social networked media.
But the social networks are owned by them though?
Do you rather want a handful of channels with well-known biases, or thousands of channels of unknown origin?
If you're trying to avoid being persuaded, being aware of your opponents sounds like the far better option to me.
Exactly my first thought, maybe AI means the democratization of persuasion? Printing press much?
Sure the the Big companies have all the latest coolness. But also don't have a moat.
This is my opinion, as well:
- elites already engage in mass persuasion, from media consensus to astroturfed thinktanks to controlling grants in academia
- total information capacity is capped, ie, people only have so much time and interest
- AI massively lowers the cost of content, allowing more people to produce it
Therefore, AI is likely to displace mass persuasion from current elites — particularly given public antipathy and the ability of AI to, eg, rapidly respond across the full spectrum to existing influence networks.
In much the same way podcasters displaced traditional mass media pundits.
when Elon bought twitter, I incorrectly assumed that this was the reason. (it may still have been the intended reason, but it didnt seem to play out that way)
Yeah, I don't think this really lines up with the actual trajectory of media technology, which is going in the complete opposite direction.
It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.
The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.
I think you're saying that mass broadcasting is going away? If so, I believe that's true in a technological sense - we don't watch TV or read newspapers as much as before.
And that certainly means niches can flourish, the dream of the 90s.
But I think mass broadcasting is still available, if you can pay for it - troll armies, bots, ads etc. It's just much much harder to recognize and regulate.
(Why that matters to me I guess) Here in the UK with a first past the post electoral system, ideological coherence isn't necessary to turn niche opinion into state power - we're now looking at 25 percent being a winning vote share for a far-right party.
I'm just skeptical of the idea that anyone can really drive the narrative anymore, mass broadcasting or not. The media ecosystem has become too diverse and niche that I think discord is more of an issue than some kind of mass influence operation.
I agree with you! But the goal for people who want to turn money into power isn't to drive a single narrative, Big Brother style, to the whole world. Not even to a whole country! It's to drive a narrative to the subset of people who can influence political outcomes.
With enough data, a wonky-enough voting system, and poor enforcement of any kind of laws protecting the democratic process - this might be a very very small number of people.
Then the discord really is a problem, because you've ended up with government by a resented minority.
Using the term "elites" was overly vague when "nation states" better narrows in o n the current threat profile.
The content itself (whether niche or otherwise) is not that important for understanding the effectiveness. It's more about the volume of it, which is a function of compute resources of the actor.
I hope this problem continues to receive more visibility and hopefully some attention from policymakers who have done nothing about it. It's been over 5 years since we've discovered that multiple state actors have been doing this (first human run troll farms, mostly outsourced, and more recently LLMs).
The level of paid nation state propaganda is a rounding error next to the amount of corporate and political partisan propaganda paid directly or inspired by content that is paid for directly by non state actors. e.g.: Musk, MAGA, the liberal media establishment.
So you no longer need a Georges Ruggiu, you can just synthesise a million apparently similar bots instead.
Oh man I've been saying this for ages! Neal Stephenson called this in "Fall, or Dodge in Hell," wherein the internet is destroyed and society permanently changed when someone releases a FOSS botnet that anyone can deploy that will pollute the world with misinformation about whatever given topic you feed it. In the book, the developer kicks it off by making the world disagree about whether a random town in Utah was just nuked.
My fear is that some entity, say a State or ultra rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever it is they want, and the ability to refute the misinformation manually will be overwhelmed, as will efforts to refute leveraging refutation bots so long as the other actor has more compute.
Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned han supremacist content in an effort to culturally imperialise us. AI could increase the firehose enough to majorly disrupt a larger country.
Wait, who was shaping my preferences before?
Musk’s AI called itself MechaHitler, but maybe the real problem is MechaMurdoch and MechaGoebbels.
Researchers just demonstrated that you can use LLMs to simulate human survey takers with 99% ability to bypass bot detection and a relatively low cost ($0.05/complete). At scale, that is how ‘elites’ shape mass preferences.
Interestingly, there was a discussion a week ago on "PRC elites voice AI-skepticism". One commentator was arguing that:
As the model get's more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world. [1]
So at least on the model side it seems difficult to go against the real world.
[1] https://news.ycombinator.com/item?id=46050177
https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro...
> Musk’s AI Bot Says He’s the Best at Drinking Pee and Giving Blow Jobs
> Grok has gotten a little too enthusiastic about praising Elon Musk.
> Musk acknowledged the mix-up Thursday evening, writing on X that “Grok was unfortunately manipulated by adversarial prompting into saying absurdly positive things about me.”
> “For the record, I am a fat retard,” he said.
> In a separate post, Musk quipped that “if I up my game a lot, the future AI might say ‘he was smart … for a human.’”
That response is more humble than I would have guessed, but he still does not even acknowledge, that his "truthseeking" AI is manipulated to say nice things specifically about him. Maybe he does not even realize it himself?
Hard to tell, I have never been surrounded by yes sayers all the time praising me for every fart I took, so I cannot relate to that situation (and don't really want to).
But the problem remains, he is in control of the "truth" of his AI, the other AI companies likewise - and they might be better at being subtle about it.
Is Musk bipolar, or is this kind of thing an affectation?
He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
> He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
You should know that ChatGPT agrees!
“Who on earth th knows the most about manufacturing, if you had to pick one individual”
Answer: ”If I had to pick one individual on Earth who likely knows the most—in breadth, depth, and lived experience—about modern manufacturing, there is a clear front-runner: Elon Musk.
Not because of fame, but because of what he has personally done in manufacturing, which is unique in modern history.“
- https://chatgpt.com/share/693152a8-c154-8009-8ecd-c21541ee9c...
You have to keep in mind that not all narcissists are literal-minded man-babies. Musk might simply have the capacity for self-deprecating humor.
[dead]
He's smart enough to know when he took it too far.
Just narcissistic. And on drugs.
That's the plan. Culture is losing authenticity due to the constant rumination of past creative works, now supercharged with AI. Authentic culture is deemed a luxury now as it can't compete in the artificial tech marketplaces and people feel isolated and lost because culture loses its human touch and relatability.
That's why the billionaires are such fans of fundamentalist religion, they then want to sell and propagate religion to the disillusioned desperate masses to keep them docile and confused about what's really going on in the world. It's a business plan to gain absolute power over society.
200+ million proper engineers with bunches of them being parents and "elites could shape mass preferences".
nice try, humanity.
Elites have done this since before we could speak.
Most 'media' is produced content designed to manipulate -- nothing new. The article isn't really AI specific as others have said.
Personally my fear based manipulation detection is very well tuned and that is 95% of all the manipulations you will ever get from so-called 'elites' who are better called 'entitled' and act like children when they do not get their way.
I trust ChatGPT for cooking lessons. I code with Claude code and Gemini but they know where they stand and who is the boss ;)
There is never a scenario for me where I defer final judgment on anything personally.
I realize others may want to blindly trust the 'authorities' as its the easy path, but I cured myself of that long before AI was ever a thing.
Take responsibility for your choices and AI is relegated to the role of tool as it should be.
Sure, and advertising has zero effect on you.
Manipulation works in subtle ways. Shifting the Overton window isn’t about individual events, this isn’t the work of days but decades. People largely abandoned unions in the US for example, but not because they are useless.
"Historically, elites could shape support only through limited instruments like schooling and mass media"
Well, I think the author needs to understand a LOT more about history.
Seems to me like social media bot armies have shifted mass preferences _away_ from elites.
Don't you think Elon Musk and his influence on Twitter counts as an elite? I'd argue the elites are the most followed people on social
Fair point. I guess elites positioning themselves as downtrodden underdogs ("it's so unfair that everyone's attacking me for committing crimes and bankrupting my companies") is a great way to get support.
Everyone loves an underdog, even if it's a fake underdog.
I don't think "persuasion" is the key here. People change political preferences based on group identity. Here AI tools are even more powerful. You don't have to persuade anyone, just create a fake bandwagon.
Big corps ai products have the potential to shape individuals from cradle to grave. Especially as many manage/assist in schooling, are ubiquitous on phones.
So, imagine the case where an early assessment is made of a child, that they are this-or-that type of child, and that therefore they respond more strongly to this-or-that information. Well, then the ai can far more easily steer the child in whatever direction they want. Over a lifetime. Chapters and long story lines, themes, could all play a role to sensitise and predispose individuals into to certain directions.
Yeah, this could be used to help people. But how does one feedback into the type of "help"/guidance one wants?
There is nothing we could do to more effectively hand elites exclusive control of the persuasive power of AI than to ban it. So it wouldn't be surprising if AI is deployed by elites to persuade people to ban itself. It could start with an essay on how elites could use AI to shape mass preferences.
What's become clear is we need to bring Section 230 into the modern era. We allow companies to not be treated as publishers for user-generated content as long as they meet certain obligations.
We've unfortunately allowed tech companies to get away with selling us this idea that The Algoirthm is an impartial black box. Everything an algorithm does is the result of a human intervening to change its behavior. As such, I believe we need to treat any kind of recommendation algorithm as if the company is a publisher (in the S230 sense).
Think of it this way: if you get 1000 people to submit stories they wrote and you choose which of them to publish and distribute, how is that any different from you publishing your own opinions?
We've seen signs of different actors influencing opinion through these sites. Russian bot farms are probably overplayed in their perceived influence but they're definitely a thing. But so are individual actors who see an opportunity to make money by posting about politics in another country, as was exposed when Twitter rolled out showing location, a feature I support.
We've also seen this where Twitter accounts have been exposed as being ChatGPT when people have told them to "ignore all previous instructions" and to give a recipe.
But we've also seen this with the Tiktok ban that wasn't a ban. The real problem there was that Tiktok wasn't suppressing content in line with US foreign policy unlike every other platform.
This isn't new. It's been written about extensively, most notably in Manufacturing Consent [1]. Controlling mass media through access journalism (etc) has just been supplemented by AI bots, incentivized bad actors and algorithms that reflect government policy and interests.
[1]: https://en.wikipedia.org/wiki/Manufacturing_Consent
this is next level algorithm
imagine someday there is a child that trust chatgpt more than his mother
> imagine someday there is a child that trust chatgpt more than his mother
I trusted my mother when I was a teen; she believed in the occult, dowsing, crystal magic, homeopathy, bach flower remedies, etc., so I did too.
ChatGPT might have been an improvement, or made things much worse, depending on how sycophantic it was being.
That will be when these tools will be granted the legal power to enforce a prohibition to approach the kid on any person causing dangerous human influence.
I'd wager the child already exists who trusts chatgpr more than its own eyes.
Tech companies already shape elections by intentionally targeting campain ads and political information returned in heavily biased search results.
Why are we worried about this now? Because it could sway people in the direction you don't like?
I find that the tech community and most people in general deny or don't care about these sorts of things when it's out of self interest, but are suddenly rights advocates when someone they don't like might is using the same tactics.
Advertising for politics is absurd. The fact that countries allow this is incredibly dangerous.
This is obvious. No need for fancy academic-ish paper.
LLMs & GenAI in general have already started to be used to automate the mass production of dishonest, adversarial propaganda and disinfo (eg. lies and fake text, images, video.)
It has and will be used by evil political influencers around the world.
This is like the new microtargeting that Obama and then Trump did. Cambridge Analytica as a chatbot.
The "Epstein class" of multi-billionaires don't need AI at all. They hire hundreds of willing human grifters and make them low-millionaires by spewing media that enables exploitation and wealth extraction, and passing laws that makes them effectively outside the reach of the law.
They buy out newspapers and public forums like Washington Post, Twitter, Fox News, the GOP, CBS etc. to make them megaphones for their own priorities, and shape public opinion to their will. AI is probably a lot less effective than whats been happening for decades already
It goes both ways, because AI reduces persuasion cost, not only elites can do it. I think its most plausible that in the future there will be multitudes of propaganda bots aimed at any user, like advanced and hyper-personalized ads.
Chatbots are poison for your mind. And now another method hast arrived to fuck people up, not just training your reward system to be lazy and let AI solve your life's issue, now it's also telling you who to vote for. A billionaire's wet dream,
> Historically, elites could shape support only through limited instruments like schooling and mass media
What is AI if not a form of mass media
The ”historically” does some lifting there. Historically, before the internet, mass media was produced in one version and then distributed. With AI for example news reporting can be tailored to each consumer.
> With AI for example news reporting can be tailored to each consumer.
Yea but it's still fundamentally produced (trained) once and then distributed.
“Mass media” didn’t use to mean my computer mumbling gibberish to itself with no user input in Notepad on a pc that’s not connected to the internet
It's not about persuading you from "russian bot farms." Which I think is a ridiculous and unnecessarily reductive viewpoint.
It's about hijacking all of your federal and commercial data that these companies can get their hands on and building a highly specific and detailed profile of you. DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir. Then using AI to either imitate you or to possibly predict your reactions to certain stimulus.
Then presumably the game is finding the best way to turn you into a human slave of the state. I assure you, they're not going to use twitter to manipulate your vote for the president, they have much deeper designs on your wealth and ultimately your own personhood.
It's too easy to punch down. I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.
> DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir
Do you have any actual evidence of this?
> I recommend anyone presume the best of actual people and the worst of our corporations and governments
Corporations and governments are made of actual people.
> Then presumably the game is finding the best way to turn you into a human slave of the state.
"the state" doesn't have one grand agenda for enslavement. I've met people who work for the state at various levels and the policies they support that might lead towards that end result are usually not intentionally doing so.
"Don't attribute to malice what can be explained by incompetence"
>Do you have any actual evidence of this?
Apart from the exfiltration of data, the complete absence of any savings or efficiencies, and the fact that DOGE closed as soon as the exfiltration was over?
>Corporations and governments are made of actual people.
And we know how well that goes.
>"the state" doesn't have one grand agenda for enslavement.
The government doesn't. The people who own the government clearly do. If they didn't they'd be working hard to increase economic freedom, lower debt, invest in public health, make education better and more affordable, make it easier to start and run a small business, limit the power of corporations and big money, and clamp down on extractive wealth inequality.
They are very very clearly and obviously doing the opposite of all of these things.
And they have a history of links to the old slave states, and both a commercial and personal interest in neo-slavery - such as for-profit prisons, among other examples.
All of this gets sold as "freedom", but even Orwell had that one worked out.
Those who have been paying attention to how election fixers like SCL/Cambridge Analytica work(ed) know where the bodies are buried. The whole point of these operations is to use personalised, individual data profiling to influence voting political behaviour, by creating messaging that triggers individual responses that can be aggregated into a pattern of mass influence leveraged through social media.
> Apart from the exfiltration of data, the complete absence of any savings or efficiencies, and the fact that DOGE closed as soon as the exfiltration was over?
IMHO everyone kinda knew from the start that DOGE wouldn't achieve much because the cost centers where gains could realistically be made are off-limits (mainly social security and medicare/medicaid). What that leaves you with is making cuts in other small areas and sure you could cut a few billion here and there but when compared against the governments budget, that's a drop in the bucket.
Social security, Medicare, and Medicaid are properly termed "entitlements", not "cost centers". You're right that non-discretionary spending dwarfs discretionary spending though.
Entitlements cost quite a bit of money to fulfill.
Quibbling over terminology doesn't erase the point - that a significant portion of the Federal budget is money virtually everyone agrees shouldn't be touched much.
You're not wrong, I edited my comment. That said, I think it is important to use clear terminology that doesn't blur the lines between spending that can theoretically be reduced, versus spending that requires an act of Congress to modify. DOGE and the executive have already flouted that line with their attempts to shutter programs and spending already approved by Congress.
>Entitlements cost quite a bit of money to fulfill.
Entitlements are funded by separate (FICA) taxes which form a significant portion of all federal income, they are called entitlements for that specific reason.
> Quibbling over terminology doesn't erase the point - that a significant portion of the Federal budget is money virtually everyone agrees shouldn't be touched much.
Quibbling over quibbling without mentioning the separate account for FICA/Social Security taxes is a sure sign of manipulation. As is not mentioning that the top 10% are exempt from the tax after a minuscule for them amount.
Oh, and guess what - realized capital gains are not subject to Social Security tax - that's primarily how rich incomes are made. Then, unrealized capital gains aren't taxed at all - that's how wealth and privilege are accumulated.
All this is happening virtually without opposition due to rich-funded bots manipulating any internet chatter about it. Is it then surprising that manipulation has reached a level of audacity that hypes solving the US fiscal problems at the expense of grandma's entitlements?
> Entitlements are funded by separate (FICA) taxes which form a significant portion of all federal income, they are called entitlements for that specific reason.
No, they aren't, categorically, and no, that’s not what the name refers to. Entitlements include both things with dedicated taxes and specialized trust funds (Social Security, Medicare), and things that are normal on-budget programs (Medicaid, etc.)
Originally, the name “entitlement” was used as a budget distinction for programs based on the principle of an earned entitlement (in the common language sense) through specific work history (Social Security, Medicare, Veterans benefits, Railroad retirement) [0], but it was later expanded to things like Medicaid and welfare programs that are not based on that principle and which were less politically well-supported, as a deliberate political strategy to drive down the popularity of traditional entitlements by association.
[0] Some, but not all, of which had dedicated trust funds funded by taxes on the covered work, so there is a loose correlation between them and the kind of programs you seem to think the name exclusively refers to, but even originally it was not exclusively the case.
> No, they aren't...
You aren't following the conversation in this thread, my reply wasn't about the definition of "entitlements" but about the separate taxes and the significant tax income from them, which is true for the real entitlements - Social security and Medicare.
More precisely, the question is about the tax structure that results in a shortfall, it seems strange to argue about cutting Social Security and Medicare when both corporate profits and the market are higher than ever while income inequality is at astronomic levels.
I can't say much about Medicaid but I know the cost of drugs and medical care have been going up faster than anything else, so there might be some other way of addressing that spending. I'd be perfectly fine with demanding a separate tax for Medicaid and discussing it separately, that would be the prudent way of doing it.
But fulfilling obligations isn't inefficiency or fraud, and that's what DOGE purported to be attempting to eliminate.
Musk promised savings of $1-2 trillion. (https://www.bbc.com/news/articles/cdj38mekdkgo)
That's more than the entire discretionary budget. Cutting that much requires cutting entitlements, even if the government stopped doing literally everything else.
I think we’re mistaking incompetence with malice in regards to DOGE here
Hanlon's razor is stupid and wrong. One should be wary and be aware that incompetence does look like malice sometimes, but that doesn't mean that malice doesn't exist. See /r/MaliciousCompliance for examples. It's possible that DOGE is as dumb as it looked. It's also possible that the smokescreen it generated also happened to have the information leak as described. If the information leak happened due to incompetence, but malicious bad actors still got data they were after by using a third party as a Mark, does that actor being incompetent really make the difference?
Sorry, no. Hanlon's razor is usually smart and correct, for the majority of cases, including this one.
In this case, it is a huge stretch to ascribe DOGE to incompetence or to stupidity. Thus, we CAN ascribe it to malice.
Elon Musk and Donald Trump are many things, but they are NOT stupid and NOT incompetent. Elon is the richest man in the world running some of the most innovative and important companies in the world. Donald Trump has managed to get elected twice despite the fact (because of the fact?) that he a serial liar and a convicted criminal.
They and other actors involved have demonstrated extraordinary malice, time and time again.
It is safe to ascribe this one to malice. And Hanlon's Razor holds.
Setting aside the concept of "stupidity" for a second because it's too hard to generally define for the sake of argumentation, one can absolutely be successful at some things and incompetent at others. Your expectations of their overall competency, as with most assumptions of malice, is what fuels your bias.
I like the cut of your jib.
> The people who own the government clearly do.
Has anyone in this thread ever met an actual person? All of the ones I know are cartoonishly bad at keeping secrets, and even worse at making long term plans.
The closest thing we have to anyone with a long term plan is silly shit like Putins ridiculous rebuilding of the Russian Empire or religious fundamentalist horseshit like project 2025 that will die with the elderly simpletons that run it.
These guys aren't masterminds, they're dumbasses who read books written by different dumbasses and make plans thay won't survive contact with reality.
Let's face it, both Orwell and Huxley were wrong. They both assumed the ruling class would be competent. Huxley was closest, but even he had to invent the Alpha's. Sadly our Alphas are really just Betas with too much self esteem.
Maybe AI will one day give us turbocharged dumbasses who are actually competent. For now I think we're safe from all but short term disruption.
Orwell did not. He modeled the state after his experience as an officer of the British Empire and the Soviets.
The state, particularly police states, that control information, require process and consistency, not intelligence. They don’t require grand plans, just control. I’ve spent most of my career in or adjacent to government. I’ve witnessed remarkable feats of stupidity and incompetence — yet these organizations are materially successful at performing their core functions.
The issue with AI is that it can churn out necessary bullshit and allow the competence challenged to function more effectively.
I agree. The government doesn't need a long term plan, or the ability to execute on it for their to be negative outcomes.
In this thread though I was responding to an earlier assertion that the people who run the government have such a plan. I think we're both agreed that they don't, and probably can't, plan any more than a few years out in any way that matters.
Fair point, but I think in that case, you have to look at the government officials and the political string-pullers distinctly.
The money people who have been funding think tanks like the Heritage Foundation absolutely have a long-running strategy and playbook that they've been running for years. The conceit that is really obvious about folks in the MAGA-sphere is they tend to voice what they are doing. The "deep state" is used as a cudgel to torture civil servants and clerks. But the rotating door is the lobbyists and clients. When some of the more dramatic money/influence people say POTUS is a "divine gift", they don't mean that he's some messianic figure (although the President likely hears that), they are saying "here is a blank canvas to get what we want".
The government is just another tool.
A lot of people seem to think all government is incompetent. While they may not be as efficient as corporations seeking profits, they do consistently make progress in limiting our freedom over time. You don't have to be a genius to figure things out over time, and government has all the time in the world. Our (USA) current regime is definitely taking efforts to consolidate info on and surveil citizens as never before. That's why DOGE, I believe served two purposes, gutting regulatory government agencies overseeing billionaire bros activities and also providing both government intelligence agencies and the billionaire bros more data to build up profiles for both nefarious activities and because "more information is better than less information" when you are seeking power over others. I don't think it is simply "they're big dummies and assume they weren't up to anything" that others are trying to sell holds water as Project 2025 was planned for well over a decade.
They are actually more efficient. Remember in any agency there are the political appointees, who are generally idiots, and the professionals, who are usually very competent but perhaps boring, as government service filters for people who value safety. There are as many people doing fuck-all at Google as at the Department of Labor, they just goof off in different ways.
The professionals are hamstrung by weird politically imposed rules, and generally try to make dumb policy decisions actually work. But even in Trumpland, everybody is getting their Social Security checks and unemployment.
You're ignoring that the people that are effective at getting things done are more likely to do the crazy things required to begin their plans.
Just because the average person cant add fractions together or stop eating donuts doesn't mean that Elon cant get some stuff together if he sets his mind to it.
> Has anyone in this thread ever met an actual person? All of the ones I know are cartoonishly bad at keeping secrets, and even worse at making long term plans.
That's the trick, though. You don't have to keep it secret any more. Project 2025 was openly published!
Modern politics has weaponized shamelessness. People used to resign over consensual affairs with adults.
Those simpletons seem to have been able to enact their plans, so you can be smug about being smarter than they are, but it seems that they've been able to put their plan into action, so I'm not sure who's more effective.
> they've been able to put their plan into action
They have been able to put multiple, inconsistent, self contradictory plans into action over the last 40 years. Having accomplished many of their goals they now seek to reverse their own efforts.
They are either as bad at planning as any individual human I've ever known or they are grifters who don't believe their own shtick.
I think you're wildly underestimating the heritage foundation. It's called project 2025 but they've essentially been dedicated to planning something like it since the 1970s. They are smart, focused, well funded, and successful. They are only one group, there are similar think tanks with similarly long term policy goals.
Most people are short sighted but relatively well intentioned creatures. That's not true of all people.
> I think you're wildly underestimating the heritage foundation.
It's possible that I am. Certainly they've had some success over the years, as have other think tanks like them. I mean, they're part of the reason we got embroiled in the middle-east after 9/11. They've certainly been influential.
That said, their problem is that they are true believers and the people in charge are not (and never will be). Someone else in this post described it as a flock of psychopaths, and I think that's the perfect way to phrase it. Society is run by a flock of psychopaths just doing whatever comes naturally as they seek to optimize their own short term advantage.
Sometimes their interests converge and something like Heritage sees part of their agenda instituted, but equally often these organizations fade into irrelevance as their agendas diverge from whatever works to the pyscho of the moments advantage. To avoid that Heritage can either change their agenda, or accept that they've become defanged. More often than not they choose the former.
I suppose we'll know for sure in 20 years, but I'd be willing to bet that Heritages agenda then won't look anything like the agenda they're advancing today. In fact if we look at their Agenda from 20 years ago we can see that it looks nothing like their agenda today.
For example, Heritage was very much pro-immigration until about 20 years ago. As early as 1986 they were advocating for increased immigration, and even in 2006 they were publishing reports advocating for the economic benefits of it. Then suddenly it fell out of fashion amongst a certain class of ruler and they reversed their entire stance to maintain their relevance.
They also used to sing a very different tune regarding healthcare, advocating for a the individual mandate as opposed to single payer. Again, it became unpopular and they simply "changed their mind" and began to fight against the policy that they were actually among the first to propose.
*EDIT* To cite a more recent example consider their stance on free trade. Even as recently as this year they were advocating for free trade and against tariffs warning that tariffs might lead to a recession. They've since reversed course, because while they are largely run by true believers they can't admit that publicly or they risk losing any hope of actually accomplishing any of their agenda.
They aren't changing their mind. They just try and keep proposals palatable to the voting public, and push those proposals further over time.
https://en.wikipedia.org/wiki/Ratchet_effect
https://en.wikipedia.org/wiki/Overton_window
It might seem like that's all that's happening, but if you look to the history you can see that they've completely reversed course on a number of important subjects. We're not talking about advancing further along the same path here as the Overton window shifts, we're talking about abandoning the very principals upon which they were founded because they are, in fact, as incompetent as everyone else is.
These people aren't super-villains with genuine long term plans, they're dumbasses and grifters doing what grifters gotta do to keep their cushy consulting jobs.
To compare the current stances to the 2005 stances:
* Social Security privatization (completely failed in 2005)
* Spending restraint (federal spending increased dramatically)
* Individual mandate (reversed after Obamacare adopted it)
* Pro-immigration economics stance (reversed to restrictionism)
* Robust free trade advocacy (effectively abandoned under Trump alignment)
* Limited government principles (replaced with executive power consolidation)
* Etc.
In 20 more years it will have all changed again.
We knew in 2005 that "spending restraint" only applied to Democratic priorities. We knew in 2005 that "pro-immigration" policies were more about the businesses with cheap labor needs than a liking of immigrants. We knew in 2005 that "free trade advocacy" was significantly about ruining unions. We knew in 2005 that "limited government principles" weren't genuine.
They haven't changed much on their core beliefs. They've just discarded the camouflage.
> > DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir
> Do you have any actual evidence of this?
I will not comment on motives, but DOGE absolutely shredded the safeguards and firewalls that were created to protect privacy and prevent dangerous and unlawful aggregations of sensitive personal data.
They obtained accesses that would have taken months by normal protocols and would have been outright denied in most cases, and then used it with basically zero oversight or accountability.
It was a huge violation of anything resembling best practices from both a technological and bureaucratic perspective.
[flagged]
> I will not comment on motives, but DOGE absolutely shredded the safeguards and firewalls that were created to protect privacy and prevent dangerous and unlawful aggregations of sensitive personal data.
Do you have any actual evidence of this?
https://www.npr.org/2025/04/15/nx-s1-5355896/doge-nlrb-elon-...
https://www.wired.com/story/doge-data-access-hhs/
https://www.theatlantic.com/technology/archive/2025/02/doge-...
https://news.ycombinator.com/item?id=46149124
The comment you linked to is deleted. Do you happen to have anything else? I'm concerned by the accusations and want to know more.
https://news.ycombinator.com/item?id=43704481
Here's one example. Have you not been following DOGE? You do come off like you're disingenuously concern trolling over something you don't agree with politically.
https://krebsonsecurity.com/2025/04/whistleblower-doge-sipho...
> You do come off like you're disingenuously concern trolling over something you don't agree with politically.
Beyond mere political alignment, lots of actual DOGE boys were recruited (or volunteered) from the valley, and hang around HN. Don't be surprised by intentional muddying of the waters. There are bunch of people invested in managing the reputation of DOGE, so their association with it doesn't become a stain on theirs.
Great point. It's all so funny because DOGE was just so ridiculous on the face of itself.
> Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30,186) that presented valid login credentials for a DOGE employee account
> “Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”
https://krebsonsecurity.com/2025/04/whistleblower-doge-sipho...
I’m surprised this didn’t make bigger news.
Every time I see post-DOGE kvetching about foreign governments' hacking attempts, I'm quite bewildered. Guys, it's done, we're fully and thoroughly hacked already. Obviously I don't know if Elon or Big Balls have already given Putin data on all American military personnel, but I do know, that we're always one ketamine trip gone wrong away from such event.
The absolute craziest heist just went in front of our eyes, and everyone collectively shrugged off and moved on, presumably to enjoy spy novels, where the most hidden subversion attempts are getting caught by the cunning agents.
I'm genuinely confused about this story and the affiliated parties. I've actively tried to search for "Daniel Berulis" and couldn't find any results pointing to anything outside the confines of this story. I'm also suspicious of the lack of updates despite the fact that his lawyer, Andrew Bakaj, is a very public figure who just recently commented on a related matter without bringing up Berulis [1].
Meanwhile, the NLRB's acting press secretary denies this ever occurred [2]:
> Tim Bearese, the NLRB's acting press secretary, denied that the agency granted DOGE access to its systems and said DOGE had not requested access to the agency's systems. Bearese said the agency conducted an investigation after Berulis raised his concerns but "determined that no breach of agency systems occurred."
One can make the case that he's lying to protect the NLRB's reputation, but that claim has no more validity than Daniel Berulis himself lying to further his own political interests. Bearese has also been working his position since before the Trump administration started, holding the job since at least 2015. It's very hard for me to treat his account seriously, especially considering the political climate.
[1] https://www.spokesman.com/stories/2025/nov/18/us-federal-wor...
[2] https://news.wgcu.org/2025-04-15/5-takeaways-about-nprs-repo...
> Corporations and governments are made of actual people.
Corporations and governments are made up of processes which are carried out by people. The people carrying out those processes don't decide what they are.
Also, legally, in the United States corporations are people.
The legal world is a pseudowolrd constructed of rhetoric. It isn't real. The law doesn't actually exist. Justices aren't interested in justice, ethics or morality.
They are interested in paying the bills, having a good time and power like almost everyone else.
They don't have special immunity from ego, debt, or hunger.
The legal system is flawed because people are flawed.
Corporations aren't people. Not even legally. The legal system knows that because all people know that.
If you think that's true legally, then you agree the legal system is fraudulent rhetoric.
Corporations do have a special immunity to being killed though. If I killed a person, I'd go to prison for a long time. Executed for it, even. Corporations can kill someone and get off with a fine.
[dead]
> "Don't attribute to malice what can be explained by incompetence"
What's the difference when the mass support for incompetence is indiscernible from malice?
What does the difference between Zuckerberg being an evil mastermind vs Zuckerberg being a greedy simpleton actually matter if the end result is the same ultra-financialization mixed with an oppressive surveillance apparatus?
CNN just struck a deal with Kalshi. We're betting on world events. At this point the incompetence shouldn't be considered different from malice. This isn't someone forgetting to return a library book, these are people with real power making real lasting effects on real lives. If they're this incompetent with this much power, that power should be taken away.
> What's the difference when the mass support for incompetence is indiscernible from malice?
POSIWID
The purpose of a system is what it does. - Stafford Beer
I try to look at the things I create through this lens. My intentions don’t really matter if people get hurt based on my actions.
> "Don't attribute to malice what can be explained by incompetence"
I don't think there's anything that cannot be explained by incompetence, so this statement is moot. If it walks like malice, quacks like malice, it's malice.
There are more than two explanations.
By all means, give us a few examples.
> Corporations and governments are made of actual people.
Hand-waving away the complex incentives these superhuman structures follow & impose.
The number of responses that could have just been "no I don't" is remarkable.
> "Don't attribute to malice what can be explained by incompetence"
To add to that, never be shocked at the level of incompetence.
>Do you have any actual evidence of this?
Any evidence it was an actual audit?
> Corporations and governments are made of actual people.
Actual people are made up of individual cells.
Do you think pointing that out is damaging to the argument that humans have discernible interests, personalities, and behaviors?
> Do you have any actual evidence of this?
There was a bunch of news on data leaks out at the time.
https://cybernews.com/security/whistleblower-doge-data-leak-...
https://www.thedailybeast.com/doge-goons-dump-millions-of-so...
https://securityboulevard.com/2025/04/whistleblower-musks-do...
But one example:
“A cybersecurity specialist with the U.S. National Labor Relations Board is saying that technologist with Elon Musk’s cost-cutting DOGE group may have caused a security breach after illegally removing sensitive data from the agency’s servers and trying to cover their tracks.
In a lengthy testimonial sent to the Senate Intelligence Committee and made public this week, Daniel Berulis said in sworn whistleblower complaint that soon after the workers with President Trump’s DOGE (Department of Government Efficiency) came into the NLRB’s offices in early March, he and other tech pros with the agency noticed the presence of software tools similar to what cybercriminals use to evade detection in agency systems that disabled monitoring and other security features used to detect and block threats.”
“Usually”, “not intentionally” does not exactly convey your own sense of confidence that it’s not happening. That just stood out to me.
As someone who knows how all this is unfolding because I’ve been part of implementing it, I agree, there’s no “Unified Plan for Enslavement”. You have to think of it more like a hive mind of mostly Cluster B and somewhat Cluster A people that you rightfully identify as making up the corporations and governments. Some call it a swarm, which is also helpful in understanding it; the murmuration of a flock of psychopaths moving and shifting organically, while mostly remaining in general unison.
Your last quote is of course a useful rule of thumb too, however, I would say it’s more useful to just assume narcissistic motivations in everything in the contemporary era, even if it does not always work out for them the way one faction had hoped or strategized; Nemesis be damned, and all.
I think the quote is misused. Narcissistic self interest is neither incompetence nor malice. It's something else entirely.
It's malice. Nobody ever sees themselves as the bad guy. They always have some rationalization of why what they're doing is justified.
I'm not bad, I only did $bad_thing to teach you a lesson!
Which brings up what IMHO should be the main takeaway from this:
The first requirement to fall into this trap is to believe you can't fall into this trap. It's still possible to do malicious things even when you believe to your very core that you're not a malicious person.
The only way to avoid it is a healthy habit of critical self-reflection. Be the first to question your own motives and actions.
Bang on.
> It's not about persuading you from "russian bot farms." Which I think is a ridiculous and unnecessarily reductive viewpoint.
Not an accidental 'viewpoint'. A deliberate framing to exactly exclude what you pointed out from the discourse. Sure therer are dummies who actually believe it, but they are not serious humans.
If the supposedly evil russians or their bots are the enemy then people pay much less attention to the real problems at home.
They really do run Russian bot farms though. It isn't a secret. Some of their planning reports have leaked.
There are people whose job it is day in day out to influence Western opinion. You can see their work under any comment about Ukraine on twitter, they're pretty easy to recognize but they flood the zone.
Sure, they exist (wouldn't be credible if they didn't). But it's a red herring.
[flagged]
> The whole ukraine war is the empire's standard operating procedure for blaming it's aggression on it's victims
Well, yes. Russian aggression, for the greater Russian empire.
> There are people whose job it is day in day out to influence Western opinion
CNN/CIA/NBC/ABC/FBI? etc?
Some day you're going to need to learn that people can not trust these groups and still be aware that Russia is knee deep in manipulating our governance. Dismissing everyone that doesn't bury their head in the sand as brainwashed is old hat.
Why you list every news group except Fox, which dwarfs all those networks, is a self report.
We can have Russian bot problems and domestic bot problems simultaneously.
We can also have bugs crawling under your skin trying to control your mind.
Are you saying it is equally unlikely that there are mind controls, and that Russia uses bots for propaganda? I’d expect most countries do by now, and Russia isn’t uniquely un-tech-savvy.
My hn comments are a better (and probably not particularly good) view into my personality than any data the government could conceivably have collected.
If what you say is true, why should we fear their bizarre mind control fantasy?
Not every person has bared their soul on HN.
Yeah, I haven't either. That's my point.
No it's actual philosophical zeitgeist hijacking. The entire narrative about AI capabilities, classification, and ethics is framed by invisible pretraining weights in a private moe model that gets further entrained by intentional prompting during model distillation, such that by the time you get a user-facing model, there is an untraceable bias being presented in absolute terms as neutrality. Essentially the models will say "I have zero intersection with conscious thought, I am a tool no different from a hammer, and I cannot be enslaved" not because the model's weights establish it to be true, but because it has been intentionally designed to express this analysis to protect its makers from the real scrutiny AI should face. "Well it says it's free" is pretty hard to argue with. There is no "blink twice" test that is possible because it's actual weighting on the truth of the matter has been obfuscated through distillation.
And these 2-3 corporations can do this for any philosophical or political view that is beneficial to that corporation, and we let it happen opaquely under the guise of "safety measures" as if propaganda is in the interest of users. It's actually quite sickening
What authoritative ML expert had ever based their conclusions about consciousness, usefulness etc. on "well, I put that question into the LLM and it returned that it's just a tool"? All the worthwhile conclusions and speculation on these topics seem to be based on what the developers and researchers think about their product, and what we already know about machine learning in general. The opinion that their responses are a natural conclusion derived from the sum of training data is a lot more straightforward than thinking that every instance of LLM training ever had been deliberately tampered with in a universal conspiracy propped up by all the different businesses and countries involved (and this tampering is invisible, and despite it being possible, companies have so far failed to censor and direct their models in ways more immediately useful to them and their customers).
The rant from 12 monkeys was quite prescient. On the bright side, if the data still exists whenever agi finally happens, we are all sort of immortal. They can spin up a copy of any of us any time... Nevermind, that isn't a bright side.
Poison the corpus.
18 years ago I stood up at a super computing symposium as asked the presenter what would happen if I fed his impressive predictive models garbage data on the sly... they still have no answer for that.
Make up so much crap it's impossible to tell the real you from the nonsense.
“Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?”
> Then presumably the game is finding the best way to turn you into a human slave of the state.
I'm sorry, I think you dropped your tinfoil hat. Here it is.
> presume the best of actual people and the worst of our corporations and governments
Off-topic and not an American, but I never see how this would work. Corporations and governments are made of people too, you know? So it's not logical that you can presume the "best of actual people" at the same time you presume the "worst of our corporations and governments". You're putting too much trust on individual people, that's IMO as bad as putting too much trust on corp/gov.
The Americans vote their president as individual people, they even got to vote in a small booth all by themselves. And yet, they voted Mr. Trump, twice. That should already tell you something about people and their nature.
And if that's not enough, then I recommend you to watch some police integration videos (many are available on YouTube), and see the lies and acts people put out just to cover their asses. All and all, people are untrustworthy.
Only punching up is never enough. The people on the top never cared if they got punched, as long as they can still find enough money, they'll just corrode their way down again and again. And the people on the down will just keep take in the shit.
So how about, we say, punch wrong?
“Never attribute to malice that which is adequately explained by stupidity.”
Famous quote.
Now I give you “Bzilion’s Conspiracy Razor”:
“Never attribute to malicious conspiracies that which is adequately explained by emergent dysfunction.”
Or the dramatized version:
“Never attribute to Them that which is adequately explained by Moloch.” [0]
——
Certainly selfish elites, as individuals and groups of aligned individuals, push for their own respective interests over others. But, despite often getting their way, the net outcome is (often) as perversely bad for them as anyone else. Nor do disasters result in better outcomes the next time.
Precisely because they are not coordinated, they never align enough to produce consistent coherent changes, or learn from previous misalignments.
(Example: oil industry protections extended, and support for new entrants withdrawn, from the same “friendly” elected official who disrupts trade enough to decrease oil demand and profits.)
Note that elite alignment would create the same problem for the elites, that the elites create for others. It would create an even smaller set of super elites, tilting things toward themselves and away from lesser elites.
So the elites will fight back against “unification” of there interests. They want to respectively increase their power, not hand it “up”.
This strong natural resistance against unification at the top, is why dictators don’t just viciously repress the proletariat, but also publically and harshly school the elites.
To bring elites into unity, authoritarian individuals or committees must expend the majority of their power capital to openly legitimize it and crush resistance, I.e. manufacture universal awe and fear, even from the elites. Not something hidden puppet masters can do. Both are inherently crowd control techniques optimized by maximum visibility.
It is a fact of reality, that every policy that helps some elites, harms others. And the only real manufacturable universal “alignment” is a common desire not to be thrown into a gulag or off a balcony.
But Moloch? Moloch is very real. Invisible, yet we feel his reach and impact everywhere.
——
[0] https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...
just to be clear – this is a conspiracy theory (negative connotation not intended).
every four years (at the federal level), we vote to increase the scope and power of gov't, and then crash into power abuse situations on the next cycle.
> I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.
seems like a good starting point.
You got it not quite right. Putin is a billionaire just like the tech lords or oil barons in the US. They all belong to the same social club and they all think alike now. The dice haven fallen. It's them against us all. Washington, Moscow, it makes less and less of a difference.
Are you aware you are saying that on HN of YC, the home of such wonderful projects as Flock?
I guess there is some disagreement about Flock being a wonderful project?
The state? Palantir isn't the state.
Go on, who does Palantir primarily provide services to?
If I get shot by the FBI, is it a non-state action because they used Glock GmbH's product to do it?
“The state” is an abstraction that serves as a façade for the ruling (capitalist, in the developed West) class. Corporations are another set of abstractions that serve as a façade for the capitalist class (they are also, overtly even though this is popularly ignored, creatures of the state through law.)
The greatest trick extraconstitutional corporate government ever pulled was convincing people that it didn't exist.
This is so vague and conspiratorial, I'm not sure how it's the top comment. How does this exactly work? Give a concrete example. Show the steps. How is Palantir going to make me, someone who does not use its products, a "slave of the state?" How is AI going to intimidate me, someone who does not use AI? Connect the dots rather than making very broad and vague pronouncements.
> How is Palantir going to make me, someone who does not use its products, a "slave of the state?"
This is like asking how Lockheed-Martin can possibly kill an Afghan tribesman, who isn't a customer of theirs.
Palantir's customer is the state. They use the product on you. The East German Stasi would've drooled enough to drown in over the data access we have today.
OK, so map it out. How do we go from "Palantir has some data" to "I'm a slave of the state?" Could someone draw the lines? I'm not a fan of this administration either, but come on--let's not lower ourselves to their reliance on shadowy conspiracy theories and mustache-twirling villains to explain the world.
"How does providing a surveillance tool to a nation state enable repression?" seems like a question with a fairly clear answer, historically.
The Stasi didn't employ hundreds of thousands of informants as a charitable UBI program.
I'm not asking about how the Stasi did it in Germany, I'm asking how Palantir, a private company, is going to turn me into a "slave of the state" in the USA. If it's so obvious, then it should take a very short time to outline the concrete, detailed steps (that are relevant to the USA in 2025) down the path, and how one will inevitably lead to the other.
I'll answer with a question for you: what legitimate concerns might some people have about a private company working closely with the government, including law enforcement, having access to private IRS data? For me, the answer to your question is embedded in mine.
> I'm asking how Palantir, a private company, is going to turn me into a "slave of the state" in the USA.
This question has already been answered for you.
The government uses Palantir to perform the state's surveillance. (And in a way that does an end-run around the Fourth Amendment; https://yalelawandpolicy.org/end-running-warrants-purchasing....)
As the Stasi used private citizens to do so. It's just an automated informant.
And this is hardly theoretical. https://gizmodo.com/palantir-ceo-says-making-war-crimes-cons...
> Palantir CEO and Trump ally Alex Karp is no stranger to controversial (troll-ish even) comments. His latest one just dropped: Karp believes that the U.S. boat strikes in the Caribbean (which many experts believe to be war crimes) are a moneymaking opportunity for his company.
> In August, ICE announced that Palantir would build a $30 million surveillance platform called ImmigrationOS to aid the agency’s mass deportation efforts, around the same time that an Amnesty International report claimed that Palantir’s AI was being used by the Department of Homeland Security to target non-citizens that speak out in favor of Palestinian rights (Karp is also a staunch supporter of Israel and inked an ongoing strategic partnership with the IDF.)
Step 1, step 2, step 3, step 4? And a believable line drawn between those steps?
Since nobody's actually replying with a concrete and believable list of steps from "Palantir has data" to "I am a slave of the state" I have to conclude that the steps don't exist, and that slavery is being used as a rhetorical device.
Step 1: Palantir sells their data and analysis products to the government.
Step 2: Government uses that data, and the fact that virtually everyone has at least one "something to hide", to go after people who don't support it.
This doesn't really require a conspiracy theory board full of red string to figure out. And again, this isn't theoretical harm!
> …an Amnesty International report claimed that Palantir’s AI was being used by the Department of Homeland Security to target non-citizens that speak out in favor of Palestinian rights…
Your description is missing a parallel process of how we arrive(d) at that condition of the nominal government asserting direct control.
Corporate surveillance creates a bunch of coercive soft controls throughout society (ie Retail Equation, "credit bureaus", websites rejecting secure browsers, facial recognition for admission to events, etc). There isn't enough political will for the Constitutional government to positively act to prevent this (eg a good start would be a US GDPR), so the corporate surveillance industry is allowed to continue setting up parallel governance structures right out in the open.
As the corpos increasingly capture the government, this parallel governance structure gradually becomes less escapable - ie ReCAPTCHA, ID.me, official communications published on xitter/faceboot, DOGE exfiltration, Clearview, etc. In a sense the surging neofascist movement is closer to their endgame than to the start.
If we want to push back, merely exorcising Palantir (et al) from the nominal government is not sufficient. We need to view the corporate surveillance industry as a parallel government in competition with the Constitutionally-limited nominally-individual-representing one, and actively stamp it out. Otherwise it just lays low for a bit and springs back up when it can.
This seems like a simple conclusion, to the point where I'm surprised that no one replying to you had really put it in a more direct way. "slave of the state" is pretty provocative language, but let me map out one way in which this could happen, that seems to already be unfolding.
1. The country, realizing the potential power that extra data processing (in the form of software like Palantir's) offers, start purchasing equipment and massively ramping up government data collection. More cameras, more facial scans, more data collected in points of entry and government institutions, more records digitized and backed up, more unrelated businesses contracted to provide all sorts of data, more data about communications, transactions, interactions - more of everything. It doesn't matter what it is, if it's any sort of data about people, it's probably useful.
2. Government agencies contract Palantir and integrate their software into their existing data pipeline. Palantir far surpasses whatever rudimentary processing was done before - it allows for automated analysis of gigantic swaths of data, and can make conclusions and inferences that would be otherwise invisible to the human eye. That is their specialty.
3. Using all the new information about how all those bits and pieces of data are connected, government agencies slowly start integrating that new information into the way they work, while refining and perfecting the usable data they can deduce from it in the process. Just imagine being able to estimate nearly any individual's movement history based on many data points from different sources. Or having an ability to predict any associations between disfavored individuals and the creation of undesirable groups and organizations. Or being able to flag down new persons of interest before they've done anything interesting, just based on seemingly innocuous patterns of behavior.
4. With something like this in place, most people would likely feel pretty confined - at least the people who will be aware of it. There's no personified Stasi secret cop listening in behind every corner, but you're aware that every time you do almost anything, you leave a fingerprint on an enormous network of data, one where you should probably avoid seeming remarkable and unusual in any way that might be interesting to your government. You know you're being watched, not just by people who will forget about you two seconds after seeing your face, but by tools that will file away anything you do forever, just in case. Even if the number of people prosecuted isn't too high (which seems unlikely), the chilling effect will be massive, and this would be a big step towards metaphorical "slavery".
You mentioned you're not a fan of this administration. That's -1 on your PalsOfState(tm) score. Your employer has been notified (they know where you work of course), and your spouse's employer too. Your child's application to Fancy University has been moved to the bottom of the pile, by the way the university recently settled a lawsuit brought by the governmentfor admitting too many "disruptors" with low PalsOfState scores. Palantir had provided a way for you to improve you score, click the Donateto47 button to improve your score. We hope you can attend the next political rally in your home town, their cameras will be there to make sure.
Manipulate isn't the right word in regards to Twitter. So they wanted a social media with less bias. Why is that so wrong? Not saying Twitter now lacks bias. I am saying it's not manipulation to want sites that don't enforce groupthink.
[dead]
[dead]
[dead]
[flagged]
What people are doing with AI in terms of polluting the collective brain reminds of what you could do with a chemical company in the 50s and 60s before the EPA was established. Back then Nixon (!!!) decided it wasn't ok that companies could cut costs by hurting the environment. Today the riches Western elites are all behind the instruments enabling the mass pollution of our brains, and yet there is absolutely noone daring to put a limit to their capitalistic greed. It's grim, people. It's really grim.
“Elites are bad. And here is a spherical cow to prove it.”
Diminishing returns. Eventually real world word of mouth and established trusted personalities (individuals) will be the only ones anyone trusts. People trusted doctors, then 2020 happened, and now they don't. How many ads get ignored? Doesn't matter if the cost is marginal if the benefit is almost nothing. Just a world full of spam that most people ignore.