
Out of respect for the time you put into your response, I will try to respond in good faith.

> There are many things that only a small percentage of the population benefit from or care about. What do you want to do about that?

---There are many things in our society that I would like to ban, or at least heavily regulate, even though they are useful to a small percentage of the population. Guns, for example. A more extreme example would be cars. Many people drive 5 blocks when they could walk, to their own (and everyone else's) detriment. Forget the climate; it impacts everyone (brake dust, fumes, pedestrian deaths). Some cities impose very expensive tolls and parking fees to discourage this. That angers most people and is seen as irrational by the masses, but it is necessary and not done enough. "Open, free societies" are a scam sold to us by capitalists who want to exploit without consequences.

--- I want to air-gap all computers in classrooms. I want students to be expelled for using LLMs to do assignments, just as they previously would have been for plagiarism (that's all an LLM is: a plagiarism laundering machine).

---During COVID there was a phenomenon where some children did not learn to speak until they were 4-5 years old, and some of those children were even diagnosed with autism. In reality, we didn't fully understand how children learn to speak, and didn't understand how much the young brain needs to subconsciously process people's facial expressions. It was masks!!! (I am not making a statement on masks, FYI.) We are already observing unpredictable effects of LLMs on the brain, and I believe we will see similar negative consequences for young minds if we take away the struggle to read, think, and process information. Hell, I already see the effects on myself, and I'm middle-aged!

> Why not? Aren't radiologists "frying their brains" by using these instead of examining the images themselves?

--- I'm okay with technology replacing a radiologist!!! Just as I'm okay with a worker being replaced in an unsafe textile factory! The stakes are higher in both of those cases, and the replacement is obviously in the best interest of society as a whole. The same cannot be said for a machine that helps some people learn while making the rest dependent on it. It's the opposite of a great equalizer; it will widen inequality for many different reasons.

We can all say we think this will be better for learning, but that remains to be seen. I don't really want to run a worldwide experiment on a generation of children so tech companies can make a trillion dollars, but here we are. Didn't we learn our lesson with social media/porn?

If Ubers were subsidized and cost only $20.00 a month for unlimited rides, could people be trusted to use them only when reasonable, or would they take Ubers to go 5 blocks, increasing the risk to pedestrians and deteriorating their own health? They would use them irresponsibly.

If there were an unlimited pizza machine that cost $20.00 a month and produced unlimited food, people would see it as a miracle! It would greatly benefit the part of the population that is food insecure, but could people be trusted not to eat themselves into obesity after getting their fill? I don't think so. The affordability of food and access to it correlate directly with obesity.

Both of these scenarios look great on the surface but are terrible for society in the long run.

I could go on and on about the moral hazards of LLMs; there are many more beyond the dangers to learning and labor. We are being told they are game-changing by the very people who profit from them.

In the past, empires bet their entire kingdoms on the words of astronomers and magicians who said they could predict the future. I really don't see how the people running AI companies are any different from those astronomers (they even say they can predict the future, LOL!).

As I see it, they are Dunning-Kruger plagiarism laundering machines: text-extruding machines controlled by a cabal of tech billionaires who have proven time and time again that they do not have society's best interests at heart.

I really hope this message is allowed to send!





Just replying that I read your post, and don't disagree with some of what you wrote, and I'm glad there are some people that peacefully/respectfully push back (because balance is good).

However, I don't agree that AI is a risk to the extreme degree you seem to think it is. The truth is that humans have advanced through technology since the first tool, and we are horrible at predicting what these technologies will bring.

So far they have been mostly positive, and I don't see a long-term difference here.


The kids went out and found the “cheating engines” for themselves. There was no plot from Big Tech, and believe me, academia does not like them either.

They have, believe it or not, very little power to stop kids from choosing to use cheating engines on their personal laptops. Universities are not enterprise IT shops that can lock down every device.


They're just exploiting a bug in the educational system: instead of testing whether students know things, we test whether they can produce a product that implies they know things. We don't interrogate them in person with questions to see if they understand the topic; we give them multiple-choice questions that can be marked automatically to save time.
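That "bug" is easy to see in code. A minimal, hypothetical sketch (the question IDs and answer key are invented for illustration): an automated marker only compares submitted letters against a key, so any process that produces the right letters scores full marks, whether or not any understanding is behind them.

```python
# Hypothetical answer key for a machine-marked quiz.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}

def mark(submission: dict) -> int:
    """Return how many questions were answered correctly.

    Note what this checks: only that the submitted letter matches the
    key. It cannot distinguish a student who understands the topic
    from one who pasted the questions into a chatbot.
    """
    return sum(
        1
        for question, choice in submission.items()
        if answer_key.get(question) == choice
    )

student = {"Q1": "B", "Q2": "C", "Q3": "A"}
score = mark(student)  # 2 of 3 correct
```

An oral exam, by contrast, probes the reasoning behind the answer, which is exactly what this kind of grader is structurally unable to do.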

Ok, so there’s a clear pattern emerging here, which is that you think we should do much more to manage our use of technology. An interesting example of that is the Amish. While they take it to what can seem like an extreme, they’re doing exactly what you’re getting at, just perhaps to a different degree.

The problem with such approaches is that it involves some people imposing their opinions on others, “for their own good”. That kind of thing often doesn’t turn out well. The Amish address that by letting their children leave to experience the outside world, so that their return is (arguably) voluntary - they have an opportunity to consent to the Amish social contract.

But what you seem to be doing is making a determination of what’s good for society as a whole, and then because you have no way to effect that, you argue against the tools that we might abuse rather than the tendencies people have to abuse them. It seems misplaced to me. I’m not saying there are no societal dangers from LLMs, or problems with the technocrats and capitalists running it all, but we’re not going to successfully address those issues by attacking the tools, or people who are using them effectively.

> In the past, empires bet their entire kingdoms on the words of astronomers and magicians who said they could predict the future.

You’re trying to predict the future as well, quite pessimistically at that.

I don’t pretend to be able to predict the future, but I do have a certain amount of trust in the ability of people to adapt to change.

> that's all an llm is, a plagiarism laundering machine

That’s a possible application, but it’s certainly not all they are. If you genuinely believe that’s all they are, then I don’t think you have a good understanding of them, and it could explain some of our difference in perspective.

One of the important features of LLMs is transfer learning: their ability to apply their training to problems that were not directly in their training set. Writing code is a good example of this: you can use LLMs to successfully write novel programs. There’s no plagiarism involved.


Hmm, so I read this today. By happenstance someone sent it to me, and it applies aptly to our conversation. It made me think a little differently about your argument, and about the Luddite persuasion altogether. And about why we shouldn't call people Luddites (with a negative connotation)!!

https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...



