
Rooting is useless. We should be taking conscious action to reduce the bosses' manipulation of our lives and society. We will not be saved by hoping to sabotage a genuinely useful technology.




How is it useful other than for people making money off token output? Continue to fry your brain.

They’re fantastic learning tools, for a start. What you get out of them is proportional to what you put in.

You’ve probably heard of the Luddites, the group who destroyed textile mills in the early 1800s. If not: https://en.wikipedia.org/wiki/Luddite

Luddites often get a bad rap, probably in large part because of employer propaganda and influence over the writing of history, as well as the common tendency of people to react against violent means of protest. But regardless of whether you think they were heroes, villains, or something else, the fact is that their efforts made very little difference in the end, because that kind of technological progress is hard to arrest.

A better approach is to find ways to continue to thrive even in the presence of problematic technologies, and work to challenge the systems that exploit people rather than attack tools which can be used by anyone.

You can, of course, continue to flail at the inevitable, but you might want to make sure you understand what you’re trying to achieve.


Arguably the Luddites don't get a bad enough rep. The lump of labour fallacy was as bad then as it is now or at any other time.

https://en.wikipedia.org/wiki/Lump_of_labour_fallacy


Again, that may at least in part be a function of how history was written. The Luddite Wikipedia link includes this:

> Malcolm I. Thomis argued in his 1970 history “The Luddites” that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made."[10] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes.

Of course, there would have been people who just saw it as striking back at the machines, and leaders who took advantage of that tendency, but the point is it probably wasn’t as simple as the popular accounts suggest.

Also, there’s a kind of corollary to the lump of labor fallacy, which is arguably a big reason the US is facing such a significant political upheaval today: when you disturb the labor status quo, it takes time - potentially even generations - for the economy to adjust and adapt, and many people can end up relatively worse off as a result. Most US factory workers and miners didn’t end up with good service industry jobs, for example.

Sure, at a macro level an economist viewing the situation from 30,000 feet sees no problem - meanwhile on the ground, you end up with millions of people ready to vote for a wannabe autocrat who promises to make things the way they were. Trying to treat economics as a discipline separate from politics, sociology, and psychology in these situations can be misleading.


> [...] undermine lower-paid competing workers, and create solidarity among workers.

Nice 'solidarity' there!

> Most US factory workers and miners didn’t end up with good service industry jobs, for example.

Which people are you talking about? More specifically, when?

As long as overall unemployment stays low and the economy keeps growing, I don't see much of a problem. Even if you tried to keep everything exactly as is, you'd always have some people who do better and some who do worse, even if just from random chance. It's hard to blame that on change.

See e.g. how the drawdown of the domestic construction industry around 2007 was handled: construction employment fell over time, but overall unemployment stayed low and flat, indicating an orderly shuffling of workers from construction into the wider economy. (As a bonus point, contrast with how the Fed unnecessarily tanked the wider economy a few months after this re-allocation of labour had already finished.)

> Sure, at a macro level an economist viewing the situation from 30,000 feet sees no problem - meanwhile on the ground, you end up with millions of people ready to vote for a wannabe autocrat who promises to make things the way they were. Trying to treat economics as a discipline separate from politics, sociology, and psychology in these situations can be misleading.

It would help immensely if the Fed were more competent at preventing recessions. Nominal GDP level targeting would help keep overall spending in the economy on track.


The Fed is capable of doing no such thing. They can soften or delay recessions by socializing mistakes and redistributing wealth using interest rates, but an absence of recessions would imply perfect market participants.

> [...] but an absence of recessions would imply perfect market participants.

No, not at all. What makes you think so? Israel (and to a lesser extent Australia) managed to skip the Great Recession on account of having competent central banks. But they didn't have any more 'perfect' market participants than any other economy.

Russia, of all places, is also showing right now what a competent central bank can do for your economy, even though the real situation is absolutely awful on account of the 'special military operation' and the sanctions, both financial and kinetic. See https://en.wikipedia.org/wiki/Elvira_Nabiullina for the woman at the helm.

See also how after the Brexit referendum the Bank of England wisely let the Pound exchange rate take the hit---instead of tanking the real economy trying to defend the exchange rate.

> They can soften or delay recessions by socializing mistakes and redistributing wealth using interest rates, [...]

Btw, not all central banks even use interest rates for their policies.

You are right that central banks are sometimes involved in bailouts, but just as often it's the treasury and other more 'fiscal' parts of the government. I don't like 'too big to fail' either. Keeping total nominal spending on a stable path would help ease the temptation to bail out.


Today, we have found better ways to prevent machines from crushing children, e.g., regulation through democracy.

Obviously, some people loved having machines crush kids in the past. Looks like they hope it happens again...

Are you pretending to be confused?

I see millions of kids cheating on their schoolwork, and many adults outsourcing reading and thinking to GPUs. There's like 0.001% of people that use them to learn responsibly. You are genuinely a fool.

Hey, I wrote a long response to your other reply to me, but your comment seems to have been flagged so I can no longer reply there. Since I took the time to write that, I'm posting it here.

I'm glad I was able to inspire a new username for you. But aren't you concerned that if you let other people influence you like that, you're frying your brain? Shouldn't everything originate in your own mind?

> They don't provide any value except to a very small percentage of the population who safely use them to learn

There are many things that only a small percentage of the population benefit from or care about. What do you want to do about that? Ban those things? Post exclamation-filled comments exhorting people not to use them? This comes back to what I said at the end of my previous comment:

You might want to make sure you understand what you’re trying to achieve.

Do you know the answer to that?

> A language model is not the same as a convolutional neural network finding anomalies on medical imaging.

Why not? Aren't radiologists "frying their brains" by using these instead of examining the images themselves?

The last paragraph of your other comment was literally the Luddite argument. (Sorry I can't quote it now.) Do you know how to weave cloth? No? Your brain is fried!

The world changes, and I find it more interesting and challenging to change with it, than to fight to maintain some arbitrary status quo. To quote Ghost in the Shell:

All things change in a dynamic environment. Your effort to remain what you are is what limits you.

For me, it's not about "getting ahead" as you put it. It's about enjoying my work, learning new things. I work in software development because I enjoy it. LLMs have opened up new possibilities for me. In that 5 year future you mentioned, I'm going to have learned a lot of things that someone not using LLMs will not have.

As for being dependent on Altman et al., you can easily go out and buy a machine that will allow you to run decent models yourself. A Mac, a Framework desktop, any number of mini PCs with some kind of unified memory. The real dependence is on the training of the models, not running them. And if that becomes less accessible, and new open weight models stop being released, the open weight models we have now won't disappear, and aren't going to get any worse for things like coding or searching the web.
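For concreteness, here's a minimal sketch of what running one of those local open-weight models can look like, assuming the llama-cpp-python bindings and a GGUF model file you've already downloaded (the file name and prompt below are placeholders, not recommendations):

    # Minimal local-inference sketch using llama-cpp-python
    # (pip install llama-cpp-python). The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-open-weight-model-q4_k_m.gguf",  # hypothetical file
        n_ctx=8192,       # context window
        n_gpu_layers=-1,  # offload all layers to GPU / unified memory if available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "Write a Python function that merges overlapping intervals."}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])

Nothing in that snippet phones home; once the weights are on your disk, the dependence on any particular vendor is gone.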

> Keep falling for lesswrong bs.

Good grief. LessWrong is one of the most misleadingly named groups around, and their abuse of the word "rational" would be hilarious if it weren't sad. In any case, Yudkowsky advocated being ready to nuke data centers, in a national publication. I'm not particularly aware of their position on the utility of AI, because I don't follow any of that.

What I'm describing to you is based on my own experience: the enrichment I've gotten from using LLMs over the past couple of years. Over time, I suspect that kind of constructive and productive usage will spread to more people.


Out of respect for the time you put into your response, I will try to respond in good faith.

> There are many things that only a small percentage of the population benefit from or care about. What do you want to do about that?

---There are many things in our society that I would like to ban, or at least heavily regulate, even though they are useful to a small percentage of the population. Guns, for example. A more extreme example would be cars. Many people drive 5 blocks when they could walk, to their (and everyone else's) detriment. Forget the climate, it impacts everyone (brake dust, fumes, pedestrian deaths). Some cities create very expensive tolls / parking fees to prevent this; this angers most people and is seen as irrational by the masses, but it is necessary and not done enough. Open, free societies are a scam told to us by capitalists who want to exploit without any consequences.

--- I want to air-gap all computers in classrooms. I want students to be expelled for using LLMs to do assignments, as they would have been previously for plagiarism (that's all an LLM is, a plagiarism laundering machine).

---During COVID there was a phenomenon where some children did not learn to speak until they were 4-5 years old, and some of those children were even diagnosed with autism. In reality, we didn't fully understand how children learn to speak, and didn't understand the importance of the young brain's need to subconsciously process people's facial expressions. It was masks!!! (I am not making a statement on masks, fyi.) We are already observing unpredictable effects that LLMs have on the brain, and I believe we will see similar negative consequences for the young mind if we take away the struggle to read, think, and process information. Hell, I already see the effects on myself, and I'm middle-aged!

> Why not? Aren't radiologists "frying their brains" by using these instead of examining the images themselves?

--- I'm okay with technology replacing a radiologist!!! Just like I'm okay with a worker being replaced in an unsafe textile factory! The stakes are higher in both of these cases, and the replacement is obviously in the best interest of society as a whole. The same cannot be said for a machine that helps some people learn while making the rest dependent on it. It's the opposite of a great equalizer; it will lead to a huge gap in inequality for many different reasons.

We can all say we think this will be better for learning, but that remains to be seen. I don't really want to run a worldwide experiment on a generation of children so tech companies can make a trillion dollars, but here we are. Didn't we learn our lesson with social media/porn?

If Ubers were subsidized and cost only $20.00 a month for unlimited rides, could people be trusted to only use them when it was reasonable, or would they be taking Ubers to go 5 blocks, increasing the risk for pedestrians and deteriorating their own health? They would use them in an irresponsible way.

If there were an unlimited pizza machine that cost $20.00 a month to create unlimited food, people would see that as a miracle! It would greatly benefit the percentage of the population that is food insecure, but could they be trusted not to eat themselves into obesity after getting their fill? I don't think so. The affordability of food, and the access to it, has a direct correlation with obesity.

Both of these scenarios look great on the surface but are terrible for society in the long run.

I could go on and on about the moral hazards of LLMs; there are many more beyond just the dangers to learning and labor. We are being told they are game-changing by the people who profit off them.

In the past, empires bet their entire kingdoms on the words of astronomers and magicians who said they could predict the future. I really don't see how the people running AI companies are any different from those astronomers (they even say they can predict the future, LOL!)

They are Dunning-Kruger plagiarism laundering machines, as I see it. Text-extruding machines that are controlled by a cabal of tech billionaires who have proven time and time again that they do not have society's best interests at heart.

I really hope this message is allowed to send!


Just replying that I read your post, and don't disagree with some of what you wrote, and I'm glad there are some people that peacefully/respectfully push back (because balance is good).

However, I don't agree that AI is a risk to the extreme levels you seem to think it is. The truth is that humans have advanced through the use of technology since the first tool, and we are horrible at predicting what these technologies will bring.

So far they have been mostly positive; I don't see this one being different in the long term.


The kids went out and found the “cheating engines” for themselves. There was no plot from Big Tech, and believe me, academia does not like them either.

They have, believe it or not, very little power to stop kids from choosing to use cheating engines on their personal laptops. Universities are not enterprise IT environments.


They're just exploiting a bug in the educational system: instead of testing whether students know things, we test whether they can produce a product that implies they know things. We don't interrogate them in person with questions to see if they understand the topic; we give them multiple-choice questions that can be marked automatically to save time.

Ok, so there’s a clear pattern emerging here, which is that you think we should do much more to manage our use of technology. An interesting example of that is the Amish. While they take it to what can seem like an extreme, they’re doing exactly what you’re getting at, just perhaps to a different degree.

The problem with such approaches is that they involve some people imposing their opinions on others, “for their own good”. That kind of thing often doesn’t turn out well. The Amish address that by letting their children leave to experience the outside world, so that their return is (arguably) voluntary - they have an opportunity to consent to the Amish social contract.

But what you seem to be doing is making a determination of what’s good for society as a whole, and then because you have no way to effect that, you argue against the tools that we might abuse rather than the tendencies people have to abuse them. It seems misplaced to me. I’m not saying there are no societal dangers from LLMs, or problems with the technocrats and capitalists running it all, but we’re not going to successfully address those issues by attacking the tools, or people who are using them effectively.

> In the past, empires bet their entire kingdoms on the words of astronomers and magicians who said they could predict the future.

You’re trying to predict the future as well, quite pessimistically at that.

I don’t pretend to be able to predict the future, but I do have a certain amount of trust in the ability of people to adapt to change.

> that's all an LLM is, a plagiarism laundering machine

That’s a possible application, but it’s certainly not all they are. If you genuinely believe that’s all they are, then I don’t think you have a good understanding of them, and it could explain some of our difference in perspective.

One of the important features of LLMs is transfer learning: their ability to apply their training to problems that were not directly in their training set. Writing code is a good example of this: you can use LLMs to successfully write novel programs. There’s no plagiarism involved.


Hmm, so I read this today. By happenstance someone sent it to me, and it applies aptly to our conversation. It made me think a little differently about your argument and the Luddite persuasion altogether. And why we shouldn't call people Luddites (in a negative connotation)!!

https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...



