Most-upvoted comments of the last 48 hours. You can change the number of hours like this: bestcomments?h=24.

This is how "end of support" should be handled. Instead of turning devices into e-waste, open-source them and let the community extend their life. Kudos to Bose for setting a good example.

More companies should follow this approach - especially as right-to-repair becomes a bigger issue.


> Bose should not receive praise for this move. Bose only took this action after community backlash.

They received the backlash and responded to it by properly addressing the criticism and doing the right thing. It should be praised, especially since it wasn't some PR-centric damage control, but a direct address of the specific points their original approach was criticized for.

Compare Bose's response to that of Sonos (another large techy audio brand). Sonos had an absolutely massive backlash recently (within the past few years, iirc) over deprecating software support for their older speakers; I read about it everywhere (including HN) for months and months.

Afaik, it didn't lead to Sonos doing the right thing in the end (unlike the scenario at hand here), despite the online outrage being way more widespread than in Bose's case.


Hey! I created Jeff Dean Facts! Not the jokes themselves, but the site that collected them.

It was in 2008 I think (give or take a year, can't remember). I worked at Google at the time. Chuck Norris Facts was a popular Internet meme (which I think later faded when he came out as MAGA, but I digress...). A colleague (who wishes to remain anonymous) thought the idea of Jeff Dean Facts would be funny, and April 1st was coming up.

At the time, there was a team working on an experimental web app hosting platform code named Prometheus -- it was later released as App Engine. Using an early, internal build I put together a web site where people could submit "facts" about Jeff Dean, rate each other's facts on a five-star scale, and see the top-rated facts. Everything was anonymous. I had a few coworkers who are funnier than me populate some initial facts.

I found a few bugs in Prometheus in the process, which the team rapidly fixed to meet my "launch date" of April 1st. :)

On the day, which I think was a Sunday, early in the morning, I sent an email to the company-wide "misc" mailing list (or maybe it was eng-misc?) from a fake email address (a google group alias with private membership), and got the mailing list moderator to approve it.

It only took Jeff an hour or two to hack his way through the back-end servers (using various internal-facing status pages, Borg logs, etc.) to figure out my identity.

But everyone enjoyed it!

My only regret is that I targeted the site specifically at Jeff and not Sanjay Ghemawat. Back then, Jeff & Sanjay did everything together, and were responsible for inventing a huge number of core technologies at Google (I have no idea to what extent they still work together today). The site was a joke, but I think it had the side effect of elevating Jeff above Sanjay, which is not what I intended. Really the only reason I targeted Jeff is because he's a bit easier to make fun of personality-wise, and because "Jeff Dean Facts" sort of rolls off the tongue easier than "Sanjay Ghemawat Facts" -- but in retrospect this feels a little racist. :(

My personal favorite joke is: Jeff Dean puts his pants on one leg at a time, but if he had more than two legs, you'd see his approach is actually O(log n).


One thing I find really funny is that when AI enthusiasts make claims about agents and their own productivity, it's always entirely anecdotal, based on their own subjective experience; but when others make claims to the contrary, suddenly there is some overwhelming burden of proof that has to be met before any sort of claims regarding the capabilities of AI workflows can be made. So which is it?

The biggest "evil" that has been committed (and is still being committed) against computing has been normalizing this idea of not having root access to a device you supposedly own. That having root access to your computer, and therefore being the ultimate authority over what gets run on it, is bad or risky or dangerous. That "sideloading" is weird and needs a separate name, and is not the normal case of simply loading and running software on your own computer.

Now, we're locking people out of society for having the audacity of wanting to decide what gets run and not run on their computers?


No ill will towards the team, but isn’t it almost absurd that a CSS library funded to the tune of $1M+ yearly is still in financial difficulty? It is technically complete. There is no major research work or churn like in React, no monstrous complexity like Webpack's.

> grass fed, free range... Because agribusiness doesn't make money with those.

Agribusiness absolutely makes money off of those. In fact they had a hilariously easy time adapting to the consumer trend because all they had to do to label a cow “free range” or “grass fed” was change the finishing stage to a lower density configuration instead of those abominable feed lots you see along highways. The first two stages, rearing and pasturing, didn’t change because they were already “free range” and “grass fed”. Half of the farmland in the US is pastureland and leaving animals in the field to eat grass was always the cheapest way to rear and grow them. They only really get fed corn and other food at the end to fatten them up for human consumption.

The dirty not-so-secret is that free range/grass fed cows eat almost exactly the same diet as regular cows; they just eat a little more grass because they’re in the field more during finishing. They’re still walking up to troughs of feed, because otherwise the beef would be unpalatable and grow quite a bit slower.

True grass fed beef is generally called “grass finished” beef, and it’s unregulated, so you won’t find it at a supermarket. It tastes gamier and usually has a metallic tang that I quite honestly doubt would ever be very popular. The marbling is also noticeably different and less consistent. Grain finished beef became popular in the 1800s and consumers in the West have strongly preferred it since.

I’m not sure you can even find a cow in the entire world that isn’t “grass fed”. Calves need the grass for their gut microbiomes to develop properly.


This is not open sourcing any actual software or hardware; it is “open-sourcing the API documentation for its SoundTouch smart speakers”. You might be able to point them at an alternative back-end [1] if you want the cloud features, but that will need to be written from scratch rather than being forked from code provided by Bose.

> When cloud support ends, an update to the SoundTouch app will add local controls to retain as much functionality as possible without cloud services

This is a far bigger move than releasing API information - IMO, from the point of view of most end users, bigger than if they had actually open sourced the software & hardware: they can keep using the local features without needing anyone else to maintain a version.

--------

[1] TFA doesn't state that this will be possible, but opening the API makes no sense if it isn't.


Hi Kenton! No worries at all. I tend to be quieter than Jeff anyway (less public speaking etc.) and I am happy to not have a dedicated website. :-). -Sanjay

This def needs to be celebrated and rewarded. I am more likely to purchase Bose now.

They are not worse - the results are not repeatable, which is a much worse problem.

As with cab hailing, shopping, social media ads, food delivery, etc.: there will be a whole ecosystem, workflows, and companies built around this. Then the prices will start going up with nowhere to run. Their pricing models are simply not sustainable. I hope everyone realizes that the current LLMs are subsidized, like your Seamless and Uber were in the early days.


For folks not following the drama: Anthropic's $200/month subscription for Claude Code is much cheaper than Anthropic's pay-as-you-go API. In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API.
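(Rough back-of-envelope with assumed usage numbers and Sonnet-class list prices of roughly $3/$15 per million input/output tokens - illustrative only, not Anthropic's actual billing:)

    # assume a heavy agentic workload: ~10M input + 1M output tokens/day
    daily_cost = 10 * 3 + 1 * 15   # ~$45/day at pay-as-you-go rates
    print(daily_cost * 30)         # ~$1,350/month, vs. a $200 subscription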

Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI. They want OpenCode users to pay API prices, which could be 5x or more.

So, of course, OpenCode has implemented a workaround, so that folks paying "only" $200/month can use their preferred OpenCode CLI at Anthropic's all-you-can-eat token buffet.

https://github.com/anomalyco/opencode/issues/7410#issuecomme...

Everything about this is ridiculous, and it's all Anthropic's fault. Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)


I know a certain set of HN users doesn't like to discuss "politics" but if the government's site about "Eat Real Food" can sit on the front page for many hours (currently at spot 14 after being posted 23 hours ago) then this can too. It's important that US citizens know what their federal government is doing in their name.

If you require a tech angle: how about the fact that smartphones have enabled this incident to be recorded from many angles by everyday citizens? A couple of decades ago we'd likely only have the government's word for it. How long before AI messes up that trust?

EDIT: what do you know? This post has disappeared from the front page. Currently in the 57th spot on page 2. And yes, "Eat Real Food" remains exactly where it was.

If you didn't already know about HN's moves to minimize visibility of government wrongdoing, well, you do now.


Fortunately, the government cannot enforce a complete blackout because thousands of Starlink terminals are active inside the country. They have been complaining about it [1] to no avail. Using these terminals, activists and journalists continue to upload videos of demonstrations to social media, which has enabled analyses showing that the demonstrations are very widespread [2] and continue to grow.

[1] https://www.itu.int/en/ITU-R/conferences/RRB/Pages/Starlink....

[2] https://www.bbc.com/news/articles/cre28d2j2zxo


This is a finding that keeps coming up, and I've certainly found it true in my life, but there's a significant chicken-and-egg problem: depression frequently precludes the motivation to exercise, and without an already deeply disciplined routine to overcome that lack of motivation, people won't do it.

Exhortation to develop those good habits in the good times, I suppose.


This thread reads like an advertisement for ChatGPT Health.

I came here to share a blog post I just published, titled "ChatGPT Health is a Marketplace, Guess Who is the Product?"

OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.

https://consciousdigital.org/chatgpt-health-is-a-marketplace...


This is an unusual L for Anthropic. The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code. Obviously, CC is a great tool, but that's more about the magic of the model than the engineering of the CLI.

The opencode team[^1][^2] built an entire custom TUI backend that supports a good subset of HTML/CSS and the TypeScript ecosystem (i.e. a generic TUI renderer, not tied to opencode). Then they built the product as a client/server, so you can use the agent part of it for whatever you want, separate from the TUI. And THEN, since they implemented the TUI as a generic client, they could also build a web view and a desktop view over the same server.

It also doesn't flicker at 30 FPS whenever it spawns a subagent.

That's just the tip of the iceberg. There are so many QoL features in opencode that put CC to shame. Again, CC is a magical tool, but the actual nuts-and-bolts engineering of it is pretty damning for "LLMs will write all of our code soon". I'm sorry, but I'm a decent-systems-programmer-but-terminal-moron and I cranked out a raymarched 3D renderer in the terminal for a Claude Wrapped[^3] in a weekend that...doesn't flicker. I don't mean that in a look-at-me way. I mean it in a "a mid-tier systems programmer isn't making these mistakes" kind of way.

Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.

[^1] https://github.com/anomalyco/opentui

[^2] From my loose following of the development, not a monolith, and the person mostly responsible for the TUI framework is https://x.com/kmdrfx

[^3] https://spader.zone/wrapped/


Before the "rewrite it in Rust" comments take over the thread:

It is worth noting that the class of bugs described here (logic errors in highly concurrent state machines, incorrect hardware assumptions) wouldn't necessarily be caught by the borrow checker. Rust is fantastic for memory safety, but it will not stop you from misunderstanding the spec of a network card or writing a race condition in unsafe logic that interacts with DMA.
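To illustrate the class of bug with a toy sketch (in Python, itself a memory-safe language - the point is that memory safety alone doesn't catch it; treat this as an illustration, not anyone's real driver code):

    import threading

    balance = 100
    withdrawn = 0

    def withdraw(amount):
        global balance, withdrawn
        # Logic race: the check and the update are not atomic, so two
        # threads can both pass the check before either one subtracts.
        if balance >= amount:
            balance -= amount
            withdrawn += amount

    threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    # On a bad interleaving, withdrawn ends up at 200 from a starting
    # balance of 100: no crash, no memory error, just a broken invariant.
    print(balance, withdrawn)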

That said, if we eliminated the 70% of bugs that are memory safety issues, the signal-to-noise ratio for finding these deep logic bugs would improve dramatically. We spend so much time tracing segfaults that we miss the subtle corruption bugs.


It seems to me that Wasm largely succeeded and meets most, if not all, of the goals it was created for. The article backs this up by listing the many niches in which it's found support, and I personally have deployed dozens of projects (both personal and professional) that use Wasm as a core component.

I'm personally a big fan of Wasm; it has been one of my favorite technologies ever since the first time I called malloc from the JS console while experimenting with an early version of Emscripten. Modern JS engines can be almost miraculously fast, but Wasm still offers the best performance and a much higher level of control over what's actually running on the CPU. I've written about this in the past.

The one way it really fell short: a lot of people were predicting that it would become a sort of total replacement for JS+HTML+CSS for building web apps, and in that regard, I'd have to agree it hasn't happened. It could be the continued lack of DOM bindings, which have been considered a key missing piece for several years now, or maybe something else more fundamental.

I've tried out some of the Wasm-powered web frameworks like Yew and not found them to provide an improvement for me at all. It just feels like an awkwardly bolted-on layer on top of JS and CSS without adding any new patterns or capabilities. Like you still have to keep all of the underlying semantics of the way JS events work, you still have to keep the whole DOM and HTML element system, and you also have to deal with all the new stuff the framework introduces on top of that.

Things may be different with other frameworks like Blazor which I've not tried, but I just find myself wanting to write JS instead. I openly admit that it might just be my deep experience and comfort building web apps using React or Svelte though.

Anyway, I strongly feel that Wasm is a successful technology. It's probably in a lot more places than you think, silently doing its job behind the scenes. That, to me, is a hallmark of success for something like Wasm.


This is good, but it doesn't necessarily mean that Tailwind is out of the financial difficulty that we talked about yesterday. You can sponsor Tailwind for as little as $6,000/year. 29 companies were already sponsoring Tailwind including 16 companies at the $60,000/year level. Maybe Google AI Studio has decided to shell out a lot more, but it could also be a relatively small sponsorship compared to the $1.1M in sponsorships that Tailwind is already getting. Google has deep pockets and could easily just say "f-it, we're betting on AI coding and this tool helps us make UIs and $2M/year is nothing compared to what we're spending on AI." It's also possible that the AI Studio team has a small discretionary budget and is giving Tailwind $6,000/year.
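(For scale, the quoted figures roughly add up, assuming the 13 smaller sponsors pay at least the minimum tier:)

    top_tier = 16 * 60_000         # $960,000/year from the $60k sponsors
    others   = (29 - 16) * 6_000   # at least $78,000/year from the rest
    print(top_tier + others)       # >= $1,038,000, consistent with ~$1.1M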

It's good, but it's important to read this as "they're offering some money" and not "Tailwind CSS now doesn't have financial issues because they have a major sponsor." This could just be a 1-5% change in Tailwind's budget. We don't know.

And that's not to take away from their sponsorship, but on the heels of the discussion yesterday it's important to note that Tailwind was already being sponsored by many companies and still struggling. This is a good thing, but it's hard to know if this moves the needle a bunch on Tailwind's problems. Maybe it'll be the start of more companies offering Tailwind money and that'd be great.


I couldn't help but focus on the vicarious-adventure aspect Kelly mentions, which was the "payment" he offered drivers in exchange for the ride. This is a mechanism that has largely been deprecated by the modern attention economy.

In the era of hitchhiking, the bandwidth for novelty was low. A driver on a long commute had no podcasts, no Spotify or audiobooks. A stranger with a story was high value. The transaction was something like: I provide logistics and you provide content, like the story of your cross-country bike trip.

Today, we have near-infinite content in our pockets. The marginal utility of a stranger's story has plummeted because the competition is Joe Rogan or an endless algorithmic feed. We have largely replaced the P2P protocol of kindness with a sort of centralized platform of service. We stripped out the human latency and the requirement for social reciprocity and replaced them with currency and star ratings. It feels surreal to think about.


I agree with others here that focusing on _using_ open source is, at the very least, an incomplete view of the problem.

What we (European software engineers) have been arguing is that software funded by public means, such as through universities or institutions, ought to be made fully public, including the ability to modify it. Solving your budget and/or political problems with open source software is not something we're interested in doing for free. This excerpt here:

> In the last few years, it has been widely acknowledged that open source – which is a public good to be freely used, modified, and redistributed – has

suggests they see it as free candy, rather than the result of love and hard work, provided for free because it's nice. Pay for what you use, especially at the government level.

Of course, I strongly encourage the European governments to invest in open source. And if you're interested in giving money, I'm interested in doing work. Same as ever.


It's a great point and everyone should know it: the core of a coding agent is really simple - a loop with tool calling.
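To make that concrete, here's a minimal sketch of such a loop using the Anthropic Python SDK (the single "bash" tool, its schema, and the prompt are my own illustrative choices, not Claude Code's actual internals):

    import subprocess
    import anthropic

    client = anthropic.Anthropic()
    tools = [{
        "name": "bash",
        "description": "Run a shell command and return its output.",
        "input_schema": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    }]
    messages = [{"role": "user", "content": "Fix the failing test in ./src"}]

    while True:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # model named in the article
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            break  # no tool requested: the model is done
        results = []
        for block in response.content:
            if block.type == "tool_use":  # run the command the model asked for
                out = subprocess.run(block.input["command"], shell=True,
                                     capture_output=True, text=True)
                results.append({"type": "tool_result",
                                "tool_use_id": block.id,
                                "content": out.stdout + out.stderr})
        messages.append({"role": "user", "content": results})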

Having said that, I think if you're going to write an article like this and call it "The Emperor Has No Clothes: How to Code Claude Code in 200 Lines of Code", you should at least include a reference to Thorsten Ball's excellent article from wayyy back in April 2025 entitled "How to Build an Agent, or: The Emperor Has No Clothes" (https://ampcode.com/how-to-build-an-agent)! That was (as far as I know) the first of these articles making the point that the core of a coding agent is actually quite simple (and all the deep complexity is in the LLM). Reading it was a light-bulb moment for me.

FWIW, I agree with other commenters here that you do need quite a bit of additional scaffolding (like TODOs and much more) to make modern agents work well. And Claude Code itself is a fairly complex piece of software with a lot of settings, hooks, plugins, UI features, etc. Although I would add that once you have a minimal coding agent loop in place, you can get it to bootstrap its own code and add those things! That is a fun and slightly weird thing to try.

(By the way, the "January 2025" date on this article is clearly a typo for 2026, as Claude Code didn't exist a year ago and it includes use of the claude-sonnet-4-20250514 model from May.)

Edit: and if you're interested in diving deeper into what Claude Code itself is doing under the hood, a good tool to understand it is "claude-trace" (https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...). You can use it to see the whole dance with tool calls and the LLM: every call out to the LLM and the LLM's responses, the LLM's tool call invocations and the responses from the agent to the LLM when tools run, etc. When Claude Skills came out I used this to confirm my guess about how they worked (they're a tool call with all the short skill descriptions stuffed into the tool description base prompt). Reading the base prompt is also interesting. (Among other things, they explicitly tell it not to use emoji, which tracks as when I wrote my own agent it was indeed very emoji-prone.)


Who could have guessed that the greedy, opportunistic, evil corporation whose sole intent is to invade our privacy in the name of "security" would be run by incompetents in the security realm?

It's an impossible thing to disprove. Anything you say can be countered by the "secret workflow" they've figured out. If you're not seeing a huge speedup, well, you're just using it wrong!

The burden of proof is 100% on anyone claiming the productivity gains.


I don't know how many others here have a Copilot+ PC, but the NPU on it is basically useless. There isn't any meaningful feature I get by having that NPU. They are far too limited to ever do any meaningful local LLM inference, image processing, or generation. It handles stuff like video-chat background blurring, but users' PCs have been doing that for years now without an NPU.

The comments here surprise me a bit. The common thread so far seems to be a general fear of US-based companies, but how does that relate to the article?

Cloudflare's post is pretty boring in that regard. They dig into how BGP works and note that similar leaks seem to be common for the Venezuelan ISP in question.

Sure, they could be wrong or even actively hiding the truth of what happened here, but the article mentions nothing of Cloudflare being involved in the action, and they're describing a networking standard by pointing to publicly available BGP log data.

What am I missing here that everyone else seemed to zero in on?


This is a healthy thing to happen to the Linux browser ecosystem imho.

We talk a lot about browser diversity, but on Linux and Windows it is a lie. You have Firefox (Gecko) and fifty flavors of Chromium. WebKit on Linux has essentially been relegated to embedded devices or the GNOME Epiphany browser, which, while a noble effort I'll admit, lags a bit in the stability and power-user features department. A big reason for that is that it lacks the commercial backing to keep up with the modern web-standards rat race.

Kagi bringing Orion to Linux changes the calculus. It introduces a third commercially incentivized, consumer-grade engine to the platform. Even if you never use Orion, you want this to succeed, because it forces WebKitGTK upstream to get better, which benefits the entire open source ecosystem.

The sticking point, as always, will be media playback (read: DRM/Widevine). That is the graveyard where Linux browsers go to die. If Kagi can legally and technically solve Widevine integration on a non-standard Linux WebKit build, they win. If not, it will be a secondary browser.


Business books sometimes get a bad rap on here, but I've never read an essay that made me think "wow, this guy really needs to read some basic business books" more than this one. Even though it was a non-profit, there is so much wisdom in those books about management and leadership that was clearly lacking throughout his experience. It's too late now. But maybe if he had understood, back when they were starting the app, some of the reasons organizations are structured the way they typically are, he wouldn't have experimented with so many poor (and ultimately failed) governance structures.

It seems like he was looking at his organization through a social lens (democracy, everyone should have a say) from a governance perspective, while focusing it through a product lens (the app). That just doesn't mesh well. Social organizations typically have social missions, not products. When the two mix it doesn't always go well (see Mozilla).

He also explicitly gave up his leadership position and then later wanted a say in management's direction. Ultimately, he sounds like a caring, nice guy, who was more interested in "having everyone heard" than learning some management skills. What happened later after he dropped out of the leadership circle is just a product of that and I imagine significant bad blood between him and those who remained.


Bose should not receive praise for this move. Bose only took this action after community backlash. In an older version of their end-of-life announcement, most functionality of the speaker systems would have been removed, transforming the devices into dumb speakers/amps.

It's good that they changed their statement and took the right action. Even better that the community stepped up and 'forced' Bose to do so.

Sources:
https://web.archive.org/web/20251201051242/https://www.bose....
https://arstechnica.com/gadgets/2025/10/bose-soundtouch-home...

