Gemini users kind of have a meltdown if you try to implement any optional features. One Gemini browser implemented favicons, and users were flaming the GitHub issues demanding the feature be removed, threatening to IP-block any client that requested the favicon URL. I tried to find the link, but the search results are drowned out by Google's Gemini.
Most of these util libraries require basically no changes ever. The problem is the package maintainers getting hacked and malicious versions getting pushed out.
If you use an LLM to generate a function, it will never be updated.
So why not do the same thing with a dependency? Install it once and never update it (and therefore hacked and malicious versions can never arrive in your dependency tree).
You're a JS developer, right? That's the group who thinks a programmer's job includes constantly updating dependencies to the latest version.
You're not a web developer, right? See my other comment about context if you want to learn more about the role of context in software development in general. If you keep repeating whatever point you're trying to make about some imaginary driving force to pointlessly update dependencies in web dev, you'll probably keep embarrassing yourself; it's not hard to understand if you read about it instead of posting the same drivel under every comment in this thread.
Oh god, without tree shaking, lodash is such a blight.
I've seen so many tiny packages pull in lodash for some little utility method so many times. 400 bytes of source code becomes 70kb in an instant, all because someone doesn't know how to filter items in an array. And I've also seen plenty of projects which somehow include multiple copies of lodash in their dependency tree.
It's such a common junior move. Ugh.
Experienced engineers know how to pull in just what they need from lodash. But ... most experienced engineers I know & work with don't bother with it. Javascript includes almost everything you need these days anyway. And when it doesn't, the kind of helper functions lodash provides are usually about 4 lines of code to write yourself. Much better to do that manually rather than pull in some 70kb dependency.
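To put the "4 lines of code" claim in concrete terms, here's a minimal sketch of the kind of helper in question (my own illustration, not lodash's implementation):

    // Minimal groupBy: index an array of items by a computed string key.
    function groupBy<T>(items: T[], key: (item: T) => string): Record<string, T[]> {
      const out: Record<string, T[]> = {};
      for (const item of items) {
        (out[key(item)] ??= []).push(item);
      }
      return out;
    }

    groupBy([{ type: 'a' }, { type: 'b' }, { type: 'a' }], x => x.type);
    // => { a: [{ type: 'a' }, { type: 'a' }], b: [{ type: 'b' }] }

(And newer runtimes even ship Object.groupBy / Map.groupBy natively, which is the standard library catching up yet again.)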
> 400 bytes of source code becomes 70kb in an instant,
This only shows how limited and/or impractical the dependency management story is. The whole idea behind semver is that, at the public-interface level, the patch version does not matter at all and minor versions can be bumped without breaking changes, therefore a release build should be safe to only include the major versions referenced (or, on the safe side, the highest version referenced).
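Concretely, that contract is what a caret range in the manifest expresses (package name made up here; exact pins belong in the lockfile):

    {
      "dependencies": {
        "some-util-lib": "^4.2.1"
      }
    }

With `^4.2.1`, any 4.x release at or above 4.2.1 is acceptable at install time, while 5.0.0 is not.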
> It's such a common junior move.
I can see this happening if a version is pinned at an exact patch version, which is good for reproducibility, but that's what lockfiles are for. The junior moves are pinning a package at an exact patch version and breaking the backwards-compatibility promises made by semver.
> Experienced engineers know how to pull in just what they need from lodash. But ...
IMO partial imports are an antipattern. I don't see much value in having the exact members imported listed out in the preamble; however, the default syntax pollutes the global namespace, which outweighs any potential benefits you get from having members listed out in the preamble. Any decent compiler should be able to shake dead code in source dependencies anyway, therefore there should not be any functional difference between importing specific members and importing the whole package.
I have heard an argument that partial imports allow one to see which exact `sort` is used, but IMO that's moot, because you still have to perform static code analysis to check if there are no sorts used from other imported packages.
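For clarity, the two import styles under discussion look something like this (lodash-es is the ESM build of lodash; a tree-shaking bundler is assumed):

    // Partial (named) import: the members in use are listed in the preamble.
    import { sortBy } from 'lodash-es';

    // Whole-package import: everything hangs off a single namespace object.
    import _ from 'lodash';

    const users = [{ name: 'b', age: 40 }, { name: 'a', age: 25 }];
    sortBy(users, 'age');
    _.sortBy(users, 'age');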
> Any decent compiler should be able to shake dead code in source dependencies anyway, therefore there should not be any functional difference between importing specific members and importing the whole package.
Part of the problem is that a javascript module is (or at least used to be) just a normal function body that gets executed. In javascript you can write any code you want at the global scope - including code with side effects. This makes dead code elimination in the compiler waay more complicated.
Modules need to opt in to even allowing tree shaking by adding sideEffects: false in package.json - which is something most people don't know to do.
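For reference, the opt-in lives in the package's own package.json (package name made up; the field can also be an array listing just the files that do have side effects, e.g. polyfills or imported CSS):

    {
      "name": "some-util-lib",
      "sideEffects": false
    }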
> I don't see much value in having exact members imported listed out at the preamble
The benefit to having exact members explicitly imported is that you don't need to rely on a "sufficiently advanced compiler". As you say, if it's done correctly, the result is indistinguishable anyway.
Anything that helps stop all of lodash being pulled in unnecessarily is a win in my book. A lot of javascript projects need all the help they can get.
> Modules need to opt in to even allowing tree shaking by adding sideEffects: false in package.json - which is something most people don't know to do.
That flag has always been a non-standard mostly-just-Webpack-specific thing. It's still useful to include in package.json for now, because Webpack still has a huge footprint.
It shouldn't be an opt-in that anything written and published purely as ESM needs; it was a hack to paper over problems with CommonJS. That's one of the reasons to be excited about dropping CommonJS support everywhere: we're finally getting to be mostly on the other side of the long and ugly transition, into a much more ESM-native JS world.
> The whole idea behind semver is that at the public interface level patch version does not matter at all and minor versions can be upped without breaking changes, therefore a release build should be safe to only include major versions referenced (or on the safe side, the highest version referenced).
... Sorry, what does that have to do with tree shaking?
I agree the JS standard library includes most of the stuff you need these days, rendering jquery and half of lodash irrelevant now. But there's still a lot of useful utilities in lodash, and potentially a new project could curate a collection of new, still relevant utilities.
It was more useful before, when browsers didn't support things like Array.prototype.map and Object.fromEntries. That's the origin of all these libraries, but browsers caught up. Things like keyBy, groupBy, debounce, uniqueId, and a few others are still useful.
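debounce is a good example: still handy, and small enough to carry yourself if you'd rather skip the dependency. A minimal sketch (lodash's real debounce additionally supports leading/trailing options and cancel):

    // Minimal debounce: delay calls to fn until `ms` of quiet have passed.
    function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), ms);
      };
    }

    const onResize = debounce(() => console.log('layout settled'), 200);
    window.addEventListener('resize', onResize);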
The problem with helper functions is that they're often very easy to write, but very hard to figure out the types for.
Take a generic function that recursively converts snake_case object keys to pascalCase. That's about 10 lines of Javascript, you can write that in 2 mins if you're a competent dev. Figuring out the types for it can be done, but you really need a lot of ts expertise to pull it off.
Not really familiar with TS, but what would be so weird with the typing? Wouldn't it be generic over `T -> U`, with T the type with snake_case fields and U the type with pascalCase fields?
Turns out in TypeScript you can model the conversion of the keys themselves from snake_case to pascalCase within the type system[0]. I assume they meant that this was the difficult part.
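For the curious, here's a rough sketch of what the key conversion can look like at the type level using template literal types (my own illustration, not the code from the linked reference; it maps snake_case keys to camelCase and ignores arrays and functions for brevity):

    // Convert a snake_case string literal type to camelCase.
    type SnakeToCamel<S extends string> =
      S extends `${infer Head}_${infer Tail}`
        ? `${Head}${Capitalize<SnakeToCamel<Tail>>}`
        : S;

    // Recursively remap object keys (arrays/functions not handled in this sketch).
    type CamelizeKeys<T> = {
      [K in keyof T as K extends string ? SnakeToCamel<K> : K]:
        T[K] extends object ? CamelizeKeys<T[K]> : T[K];
    };

    type Raw = { user_id: number; home_address: { street_name: string } };
    type Nice = CamelizeKeys<Raw>;
    // => { userId: number; homeAddress: { streetName: string } }

The runtime function is indeed the easy part; getting the compiler to track that transformation is where the TS expertise comes in.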
Unless you're part of the demoscene or your webpage is being loaded by Voyager II, why is 70kb of source code a problem?
Not wanting to use well constructed, well tested, well distributed libraries to make code simpler and more robust is not motivated by any practical engineering concern. It's just nostalgia and fetishism.
Because javascript isn't compiled. It's distributed as source. And that means the browser needs to actually parse all that code before it can be executed. Parsing javascript is surprisingly slow.
70kb isn't much on its own these days, but it adds up fast. Add react (200kb), a couple of copies of momentjs (with bundled timezone databases, of course) (250kb or something each) and some actual application code, and it's easy to end up with ~1mb of minified javascript. Load that into a creaky old android phone and your website will chug.
For curiosity's sake, I just took a look at reddit in dev tools. Reddit loads 9.4mb of javascript (compressed to 3mb). Fully 25% of the CPU cycles loading reddit (in firefox on my mac) were spent in EvaluateModule.
This is one of the reasons wasm is great. Wasm modules are often bigger than JS modules, but wasm is packed in an efficient binary format. Browsers parse wasm many times faster than they parse javascript.
It isn't, but then everyone does it, and everyone does it recursively, and 70kb becomes 300MB, and then it matters. Not to mention that "well constructed, well tested, well distributed" libraries are often actually overengineered and poorly maintained.
Of course it's difficult. Even if you could convert it to cash, you wouldn't be able to deposit it in any bank or meaningfully use it. The moment you do anything with it you'll trigger anti-money-laundering laws and have to explain where the money came from.
From a criminal perspective you may not have to launder it. Just deposit your XMR/ZEC into an exchange and sell it. If they ask, say you bought it years ago at $10.
The official Steam Deck dock seems to be fairly buggy; it's widely complained about. I've been using an Apple USB-C to HDMI adapter and it's worked perfectly on every TV I've tried it on. Since the Steam Machines don't use USB-C video out, this wouldn't be an issue.
That may be the case but “official valve hardware is buggy even after almost 4 years” is a troubling yet true statement when their entire pitch today was literally “check out our new hardware.”
I love my deck. But a smooth experience it is not. Up until idk a year ago…? Flipping from gaming mode to desktop mode or vice versa had a solid 50-50 shot of inducing a fail state requiring a hard reset.
The main problem I see is that if this is sold for any less than its hardware costs, people will buy hundreds of them and stack them in server racks for CI runners or whatever, generating only losses for Valve and making the hardware unavailable to gamers.
It needs to either be at market rate or locked down to only be useful for gaming.
I don't think they could possibly make it cheap enough for that - especially once you consider all the money being wasted on RGB/Bluetooth/a GPU you won't use.
Messing around with weird consumer hardware in a datacenter context isn't exactly attractive. If all you need is some x86 cores, an off-the-shelf blade server approach gets you far more compute in the same space with far less hassle. Even if the purchase cost is attractive, TCO won't be.
There are already small PCs without a GPU for around $200–300, and this will cost at least 2–3 times that. Valve already confirmed that the pricing will not be 'console like' and would match an entry-level PC. And the PS5 is $500.
The PS3 was weird. It had a unique architecture that made it especially useful for HPC in an era before GPUs were useful for that purpose. The CPU and GPU in the Steam Machine are not particularly high-end.
Does it have IPMI? Does it have ECC RAM? Racking Mac Minis is painful enough, and this form factor is less rackable than that. If you need to physically adjust the form factor per device, whatever you could've saved will be immediately lost in labor.
The PS3 was uniquely powerful, compared to its x86 peers. It wasn't just cheap - it provided the compute of 30 desktop computers in the space, power, and price envelope of one.
I think the limitation on server gear these days is electricity price vs compute, with the hardware price being an up front investment but not dominating the lifetime cost. At least at this end of the price range - it's a consumer GPU, not an A100 or anything.
IIUC, unlike with Sony's PS3 (which was bought and used like this), Steam is the sole distributor, so it would be easy for them to not allow (or make it really difficult to) buy thousands of machines.
(Or they could sell it everywhere for higher price but the Machine would come with a non transferable Steam gift card.)
Immutability doesn't provide this on its own. You can load any custom immutable image you want. What game devs want is full boot-chain attestation, where every part of the OS is measured and verified untampered with, and then to load their own spyware at the highest level.
The only way immutability helps here is that you could have two OS images: the user's own customisable one, and a clean one. Then when you try to load an anti-cheat game, the console could in theory reboot into the clean one and pass all the verification checks to load the game.
I am, indeed, assuming that their immutable image can generate attestations chained appropriately. If not, it’s a catastrophic business error on their part to put in all that work, and I don’t consider that degree of failure likely. Definitely curious to see if they can enable the chain on existing Steamdecks or not.
Immutable images provide many benefits that are unrelated to DRM. The main one being that the entire fleet of Steam Decks/Machines are all in a known state. Updates are a matter of pushing a new OS image, you don't have to worry about migrating files, conflicting configurations, strange user changes. And if an update fails, the bootloader shows a screen where you can boot a previous OS image that worked.
It's like docker images for the whole OS. As far as I can tell, the Steam Deck does not have secure boot or any kind of attestation enabled. They have been very forward in marketing it as an open and free system you can do anything on. The hardware does have a TPM that is seemingly unused currently, not sure if it supports some form of secure boot.
> They have been very forward in marketing it as an open and free system you can do anything on.
Attested sealed images and Open and Free systems have no conflict with each other. Mod it all you want; sure, it’ll generate a different attestation than the shipping sealed image, or if your customizations turn off attestations and/or secure boot, none at all. You do you! Source code releases will never include the private key used to sign them, just as with all open source today, so either the OS’s attestation will be signed by Valve or by you or by someone else. It takes me about sixty seconds to add my own signing key to my PC BIOS today and it would not surprise me to find Valve’s BIOS implements the same, as I’m pretty certain this is basic off-the-shelf functionality on Zen4/Zen5. But, regardless, Free/Open Source is wholly unconcerned by whose release signing key is used; otherwise it wouldn’t be Free/Open! The decision to care about whose release signature is live right now is the gaming server’s decision, not Steam Linux’s, and that decision is not restricted by any OSS-approved license that I’m aware of.
Secure boot attestations plus sealed images do enable “unmodified Valve Linux release” checks to be performed by multiplayer game servers, without needing the user to be locked out of making changes at all. This is already demonstrated in macOS today with e.g. Wallet’s Apple Pay support; you can disable and mod the OS as much as you wish, and certain server features whose attestation requirements require an Apple release signature on the booted OS will suspend themselves when the attestation doesn’t match. When you’re ready to use those servers, you secure boot to an OEM sealed environment and they resume working immediately. This is live, today, on every Apple Silicon (and T2 chipped Intel) device worldwide, and has been available for developers to use for years.
Attestations are, similarly, already available on all AMD devices with a TPM today, so long as the BIOS to OS chain implements Secure Boot — not requires, but implements, as there's no reason to deny users unsigned OS booting once you're checking attestation signatures server-side. As you note, it remains to be seen if the Steam Box will make use of it. If they do, it coexists just fine with full repurposability and modifiability, because you can do whatever you like with the device — and, correspondingly, each game may choose to require an unmodified environment to ensure a level playing field without kernel or OS modifications.
It would be a lost opportunity for them if they were not the first fully open OS with a fully secure multiplayer environment that prohibits both third-party cheating mods and third-party DRM rootkits. VAC becomes as simple as a sysctl, and patches are still welcome. Open source for the win, and one step further towards the Linux desktop finally overtaking residential Windows, and the ability to play console-grade multiplayer without the proliferation of on-device software-only hacks? Yes, please.
(Note that manufacturers who use Secure Boot to lock out device modifications are not in-scope here; that choice has no effect on attestations. Secure Boot is “the OS booted had this checksum and signature” with HSM backing, so that the software can’t lie. It is extremely unlikely that Valve would demand that the OS booted be signed by Valve. That would be no different than Xbox/PS5/Switch, and they’d be leaving a massive competitive advantage over tvOS on the table: device repurposeability.)
I think the hardest battle is going to be with anti cheat. The anti cheat that developers want basically requires dystopian levels of restrictions which are against everything valve has done on SteamOS so far.
Personally I'd love it if we all just went back to playing on personal servers with your real-life friends or people you otherwise trust. But I don't think this would go over well with the average online gamer.
Hard agreement from me, but my 16 year old bricked his PC on Sunday trying to enable Valorant’s BS anti-cheat, secure boot required crap. He even knew ahead of time that he couldn’t enable it, but the pull of online gaming turned off his brain. I don’t think we’re gonna win this battle and the war is probably done as well.
We know, and the game devs know too. But kernel anti-cheat is not a solution, just a marketing feature to make their users think they're trying.
Just look at all the gamers requesting a kernel AC for CS2, saying VAC does not work; yet they have now banned a lot of cheaters and seem to have fewer cheaters than the new Battlefield, which has kernel AC.
> I think the hardest battle is going to be with anti cheat. The anti cheat that developers want basically requires dystopian levels of restrictions which are against everything valve has done on SteamOS so far.
If anyone is capable of moving things along in this space, Valve should be it.
> Personally I'd love if we all just went back to playing on personal servers with your real life friends or people you otherwise trust. But I don't think this is would go over well with the average online gamer.
It's not the gamers that don't want this - although, yes, I do also want the option of matchmaking - it's the companies that don't allow dedicated servers, or shut down the servers after releasing that year's full-price version of the same game.
They have enough first-party games which only release on their hardware that people are willing to buy a Switch for Nintendo games, and another gaming device for everything else.
The sad part is that I would be willing to pay a substantial markup to be able to play some of those first-party titles on my PC, but since my kids have a Switch I just settle for using it. So even if I don't think I'd buy a console just for their games, I'm gonna end up buying it anyway and Nintendo still wins.