
Property. It is the one thing they cannot produce more of.

Real Estate is currently trading at the highest price-to-earnings ratios ever recorded.

Yeah, not sure how valuable lots of real estate is if there aren't any high-paying jobs nearby, in part due to AI.

And this isn't even about the total number of high-paying jobs. Even having too much income concentration (fewer, but higher-paying jobs) will mean that there's less demand at the margin. To put it another way, if the job growth in, say, Silicon Valley starts to reverse because of AI, there will still be newcomers, but not enough to buy out the available housing at an ever-increasing price.

If the price trend ever reverses and holds that way long enough to seem like a new normal, I suspect the price will suddenly correct downwards. Everyone holding on to real estate as an investment will have a great reason to sell once it becomes a depreciating asset. If it goes on long enough, people will be underwater on housing and start walking away.

The price trend is already somewhat flattened, which reduces FOMO. Why buy now when AI is uncertain and the price seems pretty flat?


Those people still need to rent. So as long as the rental income covers the mortgage, you’re ahead of the game and someone is paying an asset down for you.

I don't think the rent will cover most mortgages in California, especially not at higher interest rates.

Depends on equity and interest rate of course.
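
For a rough sense of how much the interest rate alone moves the break-even point, here is a back-of-the-envelope sketch using the standard fixed-rate amortization formula. The $800k loan balance and $4,500 rent are made-up illustrative numbers, not a claim about any particular market, and the payment excludes property tax, insurance, and maintenance:

    # Back-of-the-envelope: monthly payment on a fixed-rate mortgage
    # M = P * r * (1 + r)^n / ((1 + r)^n - 1), with r = annual_rate / 12
    # (principal and interest only; ignores taxes, insurance, maintenance)
    def monthly_payment(principal, annual_rate, years=30):
        r = annual_rate / 12
        n = years * 12
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    principal = 800_000   # hypothetical loan balance
    rent = 4_500          # hypothetical monthly rent

    for rate in (0.03, 0.07):
        payment = monthly_payment(principal, rate)
        print(f"{rate:.0%}: payment ~${payment:,.0f}/mo, "
              f"rent {'covers' if rent >= payment else 'does not cover'} it")
    # At ~3% the payment is roughly $3,400/mo; at ~7% it's roughly $5,300/mo,
    # so the same rent can flip from covering the mortgage to falling well short.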


That's what they said in 2007.

Pets.com→Chewy

Webvan → Instacart, DoorDash, Amazon Fresh

Kozmo.com → Postmates, Uber Eats, Gopuff

Boo.com (fashion) → Farfetch, Net-a-Porter, ASOS

Broadcast.com → YouTube, Netflix, Twitch

The dot-com bubble didn’t prove the internet was a fad — it proved the internet was inevitable, but the valuations assumed adoption would happen in 2 years instead of 15–20. To me it feels like the AI inevitability will be much quicker.


> To me it feels like the AI inevitability will be much quicker.

Based on what? We're only seeing linear improvements for increasing spending. There are no new algorithm ideas on the horizon, just more and more hardware, in the hopes that if we throw enough RAM and CPU at the problem, it will suddenly become "AGI."

No one has their eye on power budgets or sustainability or durability of the system. The human brain has such a high degree of energy efficiency that I don't think people understand the realities of competing with it digitally.

The main problem "AI" seems to solve is that humans get bored with certain tasks. The language models obviously don't, but they do hallucinate, and checking for hallucinations is an exceedingly boring task. It's a coffin corner of bad ideas.


>no new algorithm ideas on the horizon

The LLM algorithms seem pretty clunky to me, like a hack designed for text translation that surprised everyone by getting quite smart. The reason the human brain is so much more energy efficient is quite likely better design/algorithms. I was watching some video comparing brain and LLM function and was almost tempted to try building something myself (https://youtu.be/3SUqBUGlyh8). I'm sure there are many more competent people looking at similar things.


Everyone says "those autoregressive transformer LLMs are obviously flawed", and then fails to come up with anything that outperforms them.

I'm not too bullish on architectural gains. There are efficiencies to be had, but far closer to "+5% a year" than "+5000% in a single breakthrough".

You can try to build a novel AI architecture, at a small scale. Just be ready. This field will kick your teeth in. ML doesn't like grand ideas and isn't kind to high aspirations.


Physics is obviously incomplete and yet nobody can solve quantum gravity. Being obviously flawed doesn't mean the solution is obvious. That's the whole problem.

I think in this case, people tend to underrate just how capable and flexible the basic LLM architecture is. And, also, underrate how many gains are there in better training vs better architecture.

Not obvious, but the brain manages to think in ways LLMs really don't, and its design is presumably of fairly finite complexity, since it has to be encoded in DNA.

Most people are not ML researchers. Most of the AI industry is not AI researchers. Most of the AI spending is not going to AI researchers.

AI researchers came up with an architectural improvement that made a lot of previously impossible stuff barely possible. Then industry ran with it, scaling that particular trick to its limits by throwing as much raw compute and data at it as humanly possible.

You don't need to be an AI expert to know that there are probably more advances to be had and that funding foundational research is the way to get them.


The results from "funding foundational research" are also middling at best.

It's not certain whether something like JEPA will ever make it into production-grade AI models.


> Based on what?

The internet did not have enough devices to reach people. At the height of 2002, only a fraction of people worldwide had an already expensive computer and an internet connection to go with it.

I ran an e-commerce startup from 2005 to 2010. Having access to demand is a thing.

Today everyone has access in their pockets. Go to a small city in Africa, India, or China and observe how people use AI. See how Perplexity put AI answers in hundreds of millions of people's hands before Google did, in a matter of months.

Forgive me for saying — but asking "Based on what?" when comparing the speed of adoption in 2005 versus 2025 is discarding many huge elephants in the room, starting with that small thing in your hand that you're reading this on, and the invisible network that's sending you this comment.


The current LLM models are too fallible and inefficient for everyday use. Energy requirements have become a major concern, bucking the long-running trend of banking exponential computing gains through efficiency improvements. Until recently, year-on-year global energy demand from data centers was increasing only linearly despite the exponential increase in computing power.

This has changed since the cost of equipment and infrastructure has been close to "free" for the large corporations running a cycle of funny-money investment in a bubble. This has allowed them to sell access to the computing and models for just above operational energy costs (to the extent of increasing global energy prices), while offering free accounts to harvest data. No small competitor could possibly compete with that model.

The calculation for profitability (useful output for humans in a cost-benefit analysis) of the current setup is broken, and trades on our dreams of the future. Scaling computing forever simply does not work. It will never be profitable without further leaps forward in the technology, either more efficient models or more efficient hardware. By that time, the extent of the excessive investment in new data centers will be clear.


"The dot-com bubble didn't prove the internet was a fad -"

The internet is a medium, an interconnection of autonomous computer networks.

A "web" of hyperlinked HTML pages is one use of an internet.

However, this internet is more than a handful of popular websites incorporated as companies.

Perhaps that's why it was called the "dot-com" bubble and not the "internet" bubble.


The internet predated dot-coms by two or three decades and wasn't very bubbly - mostly government funded links between academic and military institutions. It was only when commerce got in there in the 90s that things started getting busy and then bubbly.

A lot of those early .com companies could have been profitable. They chose to go for rapid growth instead. Some people here probably remember discussion of users mattering more than revenue.

You see that same pattern with AI now. Products are being provided for free or nearly free, and plenty is being spent on marketing.


That's because both then and now there's a perception that it's a land grab.

That the key to success is developing a brand and user base faster than the next person can.

And that's where AI makes things so hard: creating a protective moat.


Also, people forget that the user base can change: Yahoo -> Gmail, AltaVista -> Google Search, etc.

A lot of companies are shocked to find user loyalty is about as good as employer loyalty.

The minute Amazon stops being good about refunds or a cheaper delivery app comes out than DoorDash I’m gone. No loyalty.

And both of those are on the verge as they focus on profits over customer experience.


Technology for all of the above existed in rudimentary form; faster internet, faster machines, and broader adoption were what was missing. But current bets are assuming AGI. No one knows how soon that will arrive, and to predict it would be foolish.

If technology had stayed the same for the past 20 years, basically none of these would have existed, or come anywhere close to as large as they are today. We needed much faster cable and mobile internet, and smartphones. Probably even smaller laptops. It was possible to predict these more or less; however, it was impossible to predict when, or whether, people would really start to utilize the internet. Even now, we needed COVID to force another shift. The general acceptance of an "internet first" worldview might never have happened without something forcing it on us.

Inevitability of what, chatbots that mark poisonous mushrooms as edible in every product?

How does this add to the discussion? Is the goal to make HN as toxic as everywhere else online? If you have something to say, say it. Otherwise this performative negativity and cynicism is boring honestly.

How is it toxic to push back against the irrational infatuation with barely useful stochastic parrots?

What I have to say is in the comment above: chatbots are mediocre at best

On the RHS, post hype, the second movers could work on the boring, unsexy problems in those domains nobody wanted to solve. And solve them extremely well. Then build a moat around that.

There is also a customer adoption curve of technology that lags far behind the technologist adoption curve. For example, video on the Web failed for a long time, until it didn't, when YouTube began to succeed. The problem became "boring" to technologists in some ways, but consumers gradually caught up.


Blockchain --> ?

Blockchain and the internet seem to have stabilized: the internet as fast, video-capable links between computers, blockchain as a tech for speculation, gambling, and some criminal stuff. AI has not, and is still on the exponential part of the S curve.

Most people would agree with this; the question is just how much faster.

> To me it feels like the AI inevitability will be much quicker.

AI is accelerating "let them eat cake" at rates never seen before in history, so I imagine the violence will follow soon after


Yeah, but AI can also generate a picture of any cake you prompt it for.

Aside from a belief that AI adoption will happen very quickly, which maybe is your main point, you're not really disagreeing with the article:

> All this means two things to us: 1)The AI revolution will indeed be one of the biggest technology shifts in history. It will spark a generation of innovations that we can’t yet even imagine. 2) It’s going to take way longer to see those changes than we think it’s going to take right now.


What bugs me the most about all this surveillance is that crime clearance rates don't seem to be improving. I guess it just makes law enforcement's job easier; they just click, click, click instead of doing actual shoe-leather work.

Murder clearance rates in the '50s were in the high ninety percent range.


There are some good reasons it is lower now, like defense lawyers and Miranda rights. Obviously it'd be good if we had both good civil rights AND high murder clearance, but they seem in obvious tension with each other.

Does anyone know if ALPRs are being combined with Bluetooth/TPMS scanning to associate devices across vehicles or if TPMS is getting associated to vehicles (like if a stolen plate is put on another vehicle because the TPMS doesn't match)?

TPMS scanning is particularly nasty. I've been meaning to read up more about all of the potential ways it can violate our privacy. Sorry that I don't have any further info about it in this case.
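
To make the question above concrete, here is a purely hypothetical sketch of how a system could flag a plate/TPMS mismatch if roadside readers also logged TPMS sensor IDs. Nothing here reflects any vendor's actual product; the data structure, field names, and IDs are made up for illustration:

    # Hypothetical only: cross-referencing plates with previously seen TPMS sensor IDs.
    # Each TPMS sensor broadcasts a (more or less) unique ID, so a plate that shows up
    # with a completely different set of tire sensors could be flagged for review.
    known_pairings = {
        # plate -> TPMS IDs observed alongside it in the past (fabricated examples)
        "7ABC123": {"1A2B3C4D", "5E6F0708", "9C0D1E2F", "3A4B5C6D"},
    }

    def plate_tpms_mismatch(plate, observed_ids, min_overlap=2):
        """True if the plate has a history but shares too few sensor IDs with this sighting."""
        expected = known_pairings.get(plate)
        if expected is None:
            return False  # first sighting: nothing to compare against
        return len(expected & set(observed_ids)) < min_overlap

    # A stolen plate moved to a different car would typically share zero sensor IDs:
    print(plate_tpms_mismatch("7ABC123", ["F0F0F001", "F0F0F002", "F0F0F003", "F0F0F004"]))  # True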

>You are able to turn these ads off, but only individually by editing the videos one by one. I spent hours going through my backlog of videos disabling ads I didn’t place.

This is like my Google Wallet, where I have hundreds of old boarding passes that can only be deleted by editing each one. No delete all, no multi-select. I consider this malicious compliance, where Google sees a way to store your history (travel, in this case) despite having all other location history off.


> I consider this malicious compliance

Hadn't thought of it that way before but this sounds spot-on.


Google Photos too

I'm not sure what platform of Google Photos you were trying, but on the web and in the mobile app, you can drag to select sequential photos (i.e., multi-select).

What if you are trying to delete all 30k of them after a Google Takeout?

Yup, it is amazing; it also focuses illumination on signs and maintains that focus as you drive towards them.

Luminar was featured in the Mark Rober video where he pitted Tesla against lidar.

Plus the separation of powers, which is nice and brilliant...

House:
- Impeach
- Power of the purse
- Break electoral tie for President

Senate:
- Try the impeachment
- Break electoral tie for Vice President
- Ratify treaties
- Confirm executive appointments


> Now check how many recalls there are with companies like Ford. Recalls are pretty much standard in the vehicle industry.

As of this story's publication, Thursday, September 4, 2025, Ford has issued 109 recalls that have covered 7,871,344 vehicles. Of course, quite a few of those cars are repeat offenders, but you get the point. It's staggering.

Of those 109 recalls, 26 of them are re-recalls. That means they're recalls on recalls Ford has already carried out.


Making cars is hard, just ask Ford...

>As of this story's publication, Thursday, September 4, 2025, Ford has issued 109 recalls that have covered 7,871,344 vehicles. Of course, quite a few of those cars are repeat offenders, but you get the point. It's staggering.

>Of those 109 recalls, 26 of them are re-recalls. That means they're recalls on recalls Ford has already carried out,

>Read More: https://www.jalopnik.com/1958179/all-ford-vehicle-recalls-20...


What is the P/E for Ford vs. the P/E for Tesla?
