macOS updates for Apple Silicon Macs are larger than reported (eclecticlight.co)
231 points by ingve on Aug 21, 2023 | hide | past | favorite | 162 comments


APFS (Apple File System) calculates free/used space in a non-intuitive way, which causes users to be confused about file sizes.

Link from the same publication on this exact topic.

https://eclecticlight.co/2020/04/09/where-did-all-that-free-...


This is a little like a system which says "we will use all the free space beyond your filesystem's linear maximal extent as swap", and then you discover it has gobbled all your free space and df reports you have none. As long as the space comes back when you need it, you don't have to care. But Time Machine and swap both have a habit of grabbing space at high privilege that you can't always directly influence without abstruse methods. So in practice, while you can say "it doesn't matter", it can matter, depending: it can cause dismay, and it can interfere with your otherwise normal use of the machine.

"free" space being contextualised isn't entirely nice. better to be clear about how it's consumed than to say it's free.


My interpretation of this is different:

- free space is being used as swap, then transparently released when needed

- "free" uniformly means being used this way, unlike "not free" which means it will not be released transparently when space is needed, so "free" is not being contextualized

- since the space is free but reported as "not free" by df, df has a bug

(edit: formatting)


As someone who often plays near this limit, I can tell you that

> then transparently released to be used

is doing most of the heavy lifting here, because in practice it often is NOT transparently released.

To give a concrete example, imagine you have 500GB free, which is enough for you to unpack and analyze a single simulation output at a time. You unpack the first one, do your analysis, and then rm that folder and its archive to make space to pull down the next batch of results.

However, you will then find that, despite Finder saying there is plenty of space, trying to unpack another 500GB result fails with an "out of space" error unless you wait several hours/days for unnamed "background processes" to garbage-collect that drive space. What that looks like in practice is Finder reporting superlinear space usage (e.g. for every 10GB you unpack, >>10GB of usage is reported) until the operation fails.

Thus, I find that df is actually much better at predicting the expected behavior of the system than Finder itself. If there were a way to manually trigger an immediate cleanup via the command line, that would also be fine. As it is, I just work off of external drives w/o APFS, which avoids the issue entirely.
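In my experience that phantom usage is often pinned by local APFS snapshots (Time Machine takes them hourly on the boot volume). When that's the cause there actually is a command-line trigger: `tmutil thinlocalsnapshots` has shipped since High Sierra, though whether it covers every category of "purgeable" space is an assumption on my part.

```shell
#!/bin/sh
# Sketch: reclaim "free but not free" space held by local APFS snapshots.
# macOS-only commands are guarded so the script is a no-op elsewhere.

if command -v tmutil >/dev/null 2>&1; then
  # List local snapshots on the boot volume (Time Machine creates these hourly).
  tmutil listlocalsnapshots /

  # Thin (delete) local snapshots until roughly the requested number of bytes
  # is reclaimable; the trailing argument is an urgency level from 1 to 4.
  # Here: try to free ~500 GB at maximum urgency.
  sudo tmutil thinlocalsnapshots / 500000000000 4
fi

# Afterwards df and Finder should agree more closely.
df -h /
```

If the snapshots were what held the space, df's numbers drop immediately rather than hours later.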


df just uses C library calls which use system calls.

If you're going to implement something like this then ffs do it right. I don't know the "easy" way to fix this, but in any case if something is supposed to be transparently free and available then standard tools using standard calls should see it as free and available, with specialized user space tools that can look behind the screens when needed.


For clarification, by "df has a bug" I meant that it doesn't produce the result it should given the situation, but not necessarily that the problem is located in df's own source code, vs. the kernel, other process, library etc.


if the system is not reporting the space as free, then how in the world does the bug belong to df? it is the system not reporting it correctly.


question: how can space needed for swap be transparently released?

When disk, RAM, and swap are all full, does write() sometimes cause the OOM killer to kill other processes?


Hopefully, yes.


> As long as the space comes back when you need it, you don't have to care.

That's a common assumption but it's also a lie: you do have to care, because you are wearing your media and reducing its lifespan. Abusing storage with heavy doses of r/w activity (which is what swapping does) is not good. Yes, the issue is not as bad on modern "disks" as on rust-spinners, and can often look like a matter of efficiency (because of how memory cells are physically flipped, on modern drives it's more efficient to write a ton of data than a few bytes), but in reality you are wearing your disk more than you would if macOS kept its grubby hands to itself.

Of course this is not a downside for the people who will sell you replacement disks, or rather replacement machines (since drives are now soldered), who coincidentally happen to be the people who develop the disk-wearing OS.


> Yes, the issue is not as bad in modern "disks" as in rust-spinners

Unlike SSDs, hard drives never bothered to track IOP counts and accumulated read-write volume.


Swapping to a spinner made that memory borderline worthless; compared to modern SSD swap speeds, that's the better comparison, I'd guess.


> "free" space being contextualised isn't entirely nice. better to be clear how its consumed, than say its free.

I would guess that for more than 99% of users just saying "free" about swap/temp is the best option and it would just be confusing to say less disk is free or show multiple values. And if you really want to know you can just select "Get info" on the hard drive and see how much purgeable space you have.

Having really good background processes that make sure the space is free when you need it, and easy-to-understand error messages when it can't purge, is of course great for everyone.


People on forums like this seem to consistently forget that the average computer user doesn’t even know the difference between storage and ram, much less understand concepts like swap. And why would they need to? It would cause way more headaches to be “honest” here than tell the convenient “lie.”


isn't this by design, since Apple's business practice includes storage upsells?

if you are an avg Mac consumer constantly running out of space and hitting that issue, you ask support/a Genius and they tell you to buy a bigger-capacity, more expensive Mac or iCloud storage as one of their "solutions".


Obviously no, because that's a very dubious way of getting sales.

(a) An issue that the vast majority of users won't ever see.

(b) Those that would see it would only see it after they already bought a Mac (with no storage upgrade option available to buy from Apple for their current Mac).

(c) So Apple is hoping that a handful of those would do what? Buy a new Mac immediately? Totally unlikely. Get their next Macs 2-5 years down the line with more disk space?

(d) And all that combined with Apple already doing several things to make macOS need less disk space, like the "sent online, storage restored, downloadable on demand" files feature introduced in the last OS, or a couple of rounds of pruning of the OS install size.

In general this is the kind of conspiracy theory that totally misunderstands business tactics, their feasibility, and their margin for profit.


Thank you.

I grow weary of the internet meme that everything is some trick or scam, often including things that aren’t.

https://youtu.be/zvRXp35rqjk


Any advice for buying a Mac includes the tagline "get the max storage that you can afford"; I think Mac engineering is highly responsible for that mindset. People like you can claim it's not profitable, but it does lead to situations where people buy storage sizes they don't need for "peace of mind".

it's not as innocent as you imply it is.

edit: don't forget Apple solders their storage in nowadays, so buying anxiety multiplies since you can't just fix it later.


APFS has a nice feature in that it can share free space between different volumes on the same physical device. So, for example, I can have a Time Machine backup and a general-purpose volume on my external SSD, and free space is automatically allocated to whichever volume needs it.

The downside is that the output of "df" can be confusing at first if you're not used to it!

(Maybe this feature is common across OSes/filesystems nowadays, but I certainly could have used it back in my Linux-admin days. It was quite a revelation when I discovered it!)
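You can see the sharing directly; `diskutil apfs list` prints each container's single pool of unallocated space with every volume inside it drawing from that pool (macOS-only, so guarded here):

```shell
#!/bin/sh
# Sketch: inspect how APFS volumes inside one container share free space.

if command -v diskutil >/dev/null 2>&1; then
  # Every volume listed under a container draws from the same pool of
  # unallocated blocks, which is why per-volume "free" figures overlap.
  diskutil apfs list
else
  echo "diskutil not available (not macOS)"
fi
```

That overlap is also exactly why a naive read of df's per-volume numbers can look contradictory.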


How does that work with Time Machine? I once considered doing that, but I was worried that Time Machine would gobble the entire drive.


Good question. I've only had it set up for a few months and so far Time Machine is only using 660 GB of my 2 TB SSD, backing up my MacBook's 512GB internal drive.

So Time Machine is pretty space-efficient, but it will keep growing over time until it fills the entire backup drive since there is no limit on how old the backups it keeps can be[1]. I guess when it starts filling up I will just manually delete some old backups to free up space if I need it. Apparently, another alternative is to limit the size (set quota) of Time Machine's backup volume in Disk Utility.
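For the manual-deletion route, `tmutil` can do it from the terminal. It's macOS-only, the `delete` syntax has varied across macOS versions, and the backup path below is purely illustrative, so treat this as a sketch:

```shell
#!/bin/sh
# Sketch: prune old Time Machine backups by hand (guarded, non-destructive).

if command -v tmutil >/dev/null 2>&1; then
  # Show every completed backup on the current destination.
  tmutil listbackups || true

  # Then delete one backup by path, using a path printed above, e.g.:
  # sudo tmutil delete "/Volumes/BackupDisk/Backups.backupdb/MyMac/2023-05-01-120000"
fi
```

Deleting the oldest few backups frees space without touching the recent history you actually care about.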

[1] Personally I wish this could be configured. I don't really care much about keeping old files, I just want to make sure I have a recent backup in case my MacBook gets lost or stolen or its SSD fails...


The way I set mine up is the Time Machine section of the drive has a quota capped at 1.5x the size of my internal drive, unfortunately you can’t edit quotas without formatting the drive. (Is there any functional difference between a quota and a partition then…? Weird…)


Yeah, that's annoying that you can't edit quotas. But they are functionally different to a partition because the quota just limits the maximum size of the volume. If it's not filled to its quota, any unused space is still available for other volumes to use.


What does this have to do with an article about network download size?


> its ‘software update brain’ calls for the second download, whose size is written only in the log

This is also when it personalizes its boot signature (ticket) for your specific CPU via the TSS server.

It does this by transmitting your ECID (a hardware serial number for your CPU) to Apple, in plaintext over port 80, allowing anyone passively monitoring internet backbone traffic to associate that (and every other!) serial number with the client IP (and thus city-level location). It also permits Apple to associate every ECID with a client IP, city-level geolocation, and timestamp.
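You can watch this happen yourself. The host name (gs.apple.com) is the TSS server named in the linked write-up; treat the host and port as assumptions. The capture is wrapped in a function so nothing runs by itself:

```shell
#!/bin/sh
# Sketch: observe the update's plaintext TSS exchange on the wire.

watch_tss() {
  # -A prints packet payloads as ASCII, so the ECID is visible in the clear
  # if the exchange really does go out unencrypted on port 80.
  sudo tcpdump -i any -A 'tcp port 80 and host gs.apple.com'
}

# Run `watch_tss` in one terminal, then start the macOS update in another.
```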

I have reported this behavior to Apple on multiple occasions and yet the plaintext transmission continues. Apple does not permit apps in the App Store that behave like this, but Apple's own insecure boot loader updater is exempt from this security policy apparently.

https://sneak.berlin/20220409/apple-is-still-tracking-you-wi...


What’s more scandalous are the 10+ gig Xcode updates. What justifies this absurd payload size for every minor version upgrade?


This is fixed in Xcode 15.

The "absurd payload size" is the effect of having at least four different SDKs (macOS, watchOS, i[Pad]OS, tvOS) bundled, including a simulator for three of them.

Xcode 15 is a ~3.5 GB download and includes only the macOS SDK, with all of the others (including the visionOS SDK + simulator) being downloaded on demand.


I swear delta updates were a thing at one point. When did that stop being the case?


They’re still A Thing for App Store apps I think, but:

- using App Store for Xcode was historically very painful and ~nobody who uses Xcode daily downloads it from MAS

- it never worked that well for Xcode anyway. Apart from the big download sizes, the problem with Xcode and the bundled SDKs is that they're approximately a trillion little text files, and it's not unheard of to wait longer for codesign (technically unxip) verification than for the download itself, which negates the user-facing benefits of delta updates for many users.


Xcode is so painful. I had to install it on Mojave for some command-line program dependency that demanded Xcode be installed. OK, let's try installing Xcode via the command line. Turns out that did not install whatever it was in the Xcode suite that this tool depended on, so I needed the web download. Problem was, I was on Mojave. It took way too long to dig up the appropriate compatible download link. Then of course, like all large web downloads from Apple, I had to babysit it and restart the download when it failed every couple hundred MB.


Then extract the xip (requires double the space) then wait forever for it to verify. One of the small things I do not miss after quitting being an iPhone dev.


I always use MAS to install my production Xcode, specifically because it lets me have small updates.

I've been doing this for as long as it's been available there as a developer at Apple, in my own startup, and in my current role as well. Never had a problem with it.


I'm pretty sure mas-cli [https://github.com/mas-cli/mas] is abandonware. It hasn't received any meaningful updates in almost 2 years and most of its features are broken


I've tried MAS many times and never had it work. It always gets stuck downloading at 80% or something.


Never seen it get stuck, but I have seen it take 12 hours to get through that last 20%.


That might have been what happened for me, except I give up and sudo rm -rf it, leaving MAS in a confused state while I go download the dmg/xip directly.


[flagged]


Upgrades are in place for Xcode. Your theory would only make (the most minimal of) sense for someone having multiple copies of Xcode, but in that case it would be irrelevant to delta updates.


Worth noting Xcode 15 is still in beta which is why I (and other mobile developers I work with) didn't know this was a thing. Glad they eventually got to this, but judging by the upvotes (and my own personal experience) this would have been welcome much sooner.


On one of my computers Xcode always seems to download... twice. I can't for the life of me figure out whether it's doing some sort of "serial update" (if I miss a version in between, it first downloads and installs that version, then immediately starts the latest one) or if it's just a weird bug where it installs the latest version twice, consecutively, for no apparent reason. I've seen other people report this as well. I find it hard to believe it's the "serial" explanation, as one of the few benefits of downloading the entire program every single time is that it shouldn't need the intermediate diffs. So yeah, I have no idea what's going on. My other computer seems not to have this happen... yet?


A few years ago I had a Mac that did that. Pretty sure it was because I actually had two Xcodes, one from the App Store and one downloaded from developer.apple.com.


There's something charmingly frustrating about typing "git" into the command line, and then waiting 4-5 hours for "Xcode Command-line Tools" to install.


Maybe they shipped the installer pictures as uncompressed originals, like they did a long time ago with some update whose name I forget, which caused what should have been a few-MB update to balloon to hundreds of MBs.


Do the updates still take forever on new hardware? I have a M1 MacMini and it feels like every update takes like three hours.


App Store (softwareupdated) takes forever to update Xcode for some reason. I suspect some accidental nonlinear behavior. Downloading Xcode.xip from https://developer.apple.com/downloads and directly unpacking and replacing is way faster. Then I run a script to remove all the useless platforms.
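The manual route the parent describes, sketched end to end. The paths and platform names are assumptions (check what your Xcode actually contains before deleting anything), and every destructive step is guarded:

```shell
#!/bin/sh
# Sketch: manual Xcode install plus trimming unused platforms.

XCODE_XIP="$HOME/Downloads/Xcode.xip"   # fetched from developer.apple.com
PLATFORMS="/Applications/Xcode.app/Contents/Developer/Platforms"

# 1. Expand the archive into the current directory
#    (needs roughly double the archive size free).
[ -f "$XCODE_XIP" ] && xip --expand "$XCODE_XIP"

# 2. Drop SDKs/simulators for platforms you never target.
for p in AppleTVOS AppleTVSimulator WatchOS WatchSimulator; do
  [ -d "$PLATFORMS/$p.platform" ] && rm -rf "$PLATFORMS/$p.platform"
done

echo "trim complete"
```

Each simulator platform is several GB, so the trim step alone can roughly halve the installed footprint.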


This is the way I've always done it. Grab the package for manual install. Using the App Store version was always a massive pain.


Because "download too big" is a better problem than "what the fuck do I need to install to do XXX." Xcode has the first problem, Visual Studio has both.


That is crazy. I'm also bitter that you basically need Xcode for git on macOS.


If it's just git and the compiler environment ("SDK") you need then the command line developer tools suffice. The download is something like 700 MiB.
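For reference, the command-line-tools-only route looks like this (macOS-only steps guarded; the sizes quoted in this thread vary by release):

```shell
#!/bin/sh
# Sketch: git and a compiler without full Xcode, via the Command Line Tools.

if [ "$(uname)" = "Darwin" ]; then
  # Pops up Apple's GUI installer for the Command Line Tools package;
  # errors out harmlessly if they are already installed.
  xcode-select --install || true

  # Confirm which developer directory is active afterwards.
  xcode-select -p
fi

# Either way, check what git you ended up with.
git --version
```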


Not if you use the Nix package manager: https://nixos.org/

Before that though I used to only install command line tools.

At some point though, I forgot the incantation for it and realized I could just use Nix for git as I was using it for other things already.


You need the developer tools (like 2.5 GB), or Homebrew.


A lot of Homebrew installs (those built from source) require Apple's compile-time stuff, which only Xcode has.


No, you can build pretty much everything from homebrew using just the command line developer tools. It's been like that for 7-8 years, if not more. Off the top of my head I've only once ever run into a homebrew recipe requiring "xcodebuild" to proceed.


You can only install the command line tools now. You don't need a full blown Xcode install.


No you don't. You just need the devtools, which are like 2 gigs. I have developed on Mac for years and have never had Xcode installed.


Or download the official git binary (https://sourceforge.net/projects/git-osx-installer/) if you don't even want devtools. Been a while since I did that, but there are plenty of other things I avoid building or downloading from a package manager (like NodeJS or Python).


> the official git binary

Unfortunately, as the git website [1] states, these are fairly out of date (currently 2.33 from 2021-08-30). To get something recent, you have to build them yourself, which requires...devtools.

Seems bizarre to me that they don't have them built automatically.

[1] https://git-scm.com/download/mac


Yeah, it's weird how out of date it is, but it also doesn't matter a lot. I think my laptop still has a version of git from 2018.


Some tools, with git integration, require the newer versions. For example, newer PyCharm requires newer git.


That makes sense.


Well over 1GB for the Ventura 13.5.1 update and it takes 20 minutes to install on my M1 Pro MBP. "macOS Ventura 13.5.1 fixes an issue in System Settings that prevents location permissions from appearing."

I never thought I'd appreciate Windows Update but on there this would be less than 100MB and take 5 minutes to install...


I remember when this was a huge issue on Windows. Lately for me, Windows updates take maybe one minute (managed with autoupdate on), while my Mac and iPhone regularly see 10+ min updates.


Exactly that, and then they announced their Rapid Security Response updates with some hope for change, but... nope, still needing a reboot for browser updates!


The reboot is required because Safari is managed through the signed system volume (SSV). Unfortunately the inconvenience of that security feature means that most users will avoid patching, which defeats the purpose.

The requirement to reboot is supposedly fixed in Sonoma, with discrete ‘cryptexes’.

https://developer.apple.com/news/?id=3xpv8r2m


Sometimes you need a reboot to update Safari, sometimes not. It depends on which of its components are updated. Some of the stuff Safari uses consists of protected system resources that lie outside the application bundle itself, and I personally think the security scheme behind this is great.


I had a gaming laptop sit idle for about 3 months. I decided to run some updates before we played some games on it one night, and the updates took hours. They seem to always take hours.


But your computer was usable for nearly the entire time, right? I don't care how long updates take if I can still keep doing things with it in the meantime. MacOS updates are the only thing that makes me sit and watch a progress bar for 20+ minutes.

I've never experienced anything similar with any of my Windows computers, nor Linux ones, nor ChromeOS ones. MacOS stands alone as having by far the shittiest and slowest update system imaginable. Even Android during its "recompile every installed app after an OTA" phase didn't take as long as MacOS does, and that's with massively slower CPU & storage!


And then you reboot (because obviously that's necessary) and for some reason that means there's now 94 more updates available.


Didn't the M1 MBP have an absurdly fast SSD as well? I always blamed my hackintosh's SATA SSD for the slow updates, but that's absurd.


"Apple's custom NVMes are amazingly fast – if you don't care about data integrity"

https://news.ycombinator.com/item?id=30370551


Not super fast, especially the smaller sized ones. My M2 MBP is slower than the 3 year old PCIE4 Sabrent rocket in my desktop.


Those 20 minutes combined with the (out-of-the-box) inability to schedule an update at a certain time is one of the more annoying aspects of macOS.


Let's just grab a coffee before my daily call... and I come back to the dreaded update progress bar.


I don't understand how Apple has got this update process so wrong. I recently switched from an Ubuntu work laptop (which had live kernel patching and occasional non-forced reboots) to macOS's big whole-image updates arriving at the worst possible moment.


Just leave iTerm open and automatic updates will never catch you by surprise again :)


The thing is, Apple has been here before, with at least three architecture transitions under its belt: Motorola 68000 -> PowerPC -> Intel -> Arm. So they have lived this "how do we bundle upgrades, cross-architecture" problem for some time.


I wonder, how many engineers with hands on experience on the PowerPC to intel transition are still around? I suspect Apple is benefitting from software layers and testing infrastructure, rather than institutional knowledge in this ARM transition. It’s just a guess though, I’d love to be corrected by someone with direct knowledge.


I agree. Mac OS and Windows alike seem to have shifted towards being developed by people who don't grok the whole OS and the context for how it got to where it is today. Former engineers being promoted or retiring (often as young millionaires), the growing size of the timeline/code size & cross-pollination of ideas from other OSes.


For other packages, this is where we start to see "We rewrote ______ in Go/Rust/blah" type articles. Nobody working on the product groks the entire code base, and rather than reading the "archaic" language, it is decided that it would be better to eliminate the tech debt and rewrite the entire thing in the language du jour so that "more" people will be familiar with it. This is usually followed by a series of releases fixing the bugs that were re-introduced because of all the aha-gotchas originally solved in that not-understood archaic code base.


Although I agree in part, usually new languages are brought in not just because they’re hyped, but because they provide better abstractions to deal with the complexity that made the original codebase unmaintainable.

That’s certainly the case with Go/Rust with respect to C/C++.


I think the Windows platform could do with a rewrite, to be honest. The NT kernel is fantastic; they just need to develop a new consumer OS atop it that can work with any device. (Maybe rewrite it in Rust or Zig.)

And maybe they will give us a GUI framework apart from HWND that will at least last 5 years. Or integrate web standards right into the OS.


They're probably benefitting more from the cross-platform architecture they inherited from NeXT than from anything from the pre-NeXT days. Though the fact the company had been through one architecture transition already probably influenced the decision to maintain the portability just in case (Mac OS X ran on x86 from the start).


Bandwidth/quota-constrained people are by now a small minority of Mac users. Vocal, but they don't define the market. I have yet to flood my disk, 6 Macs down the line from Intel onward (I missed PowerPC), but it's for work. I help some people who own them for domestic use and I can understand their hesitancy to just let data happen.


Don't forget x86 to x86_64, which was lighter (e.g. it did not need things like Rosetta/Rosetta 2) but still involved several steps (userland, kernel, bootloader; IIRC in that order).


If I recall correctly, Apple went straight to 64 bits during the Intel transition. If they didn't, it wasn't long after so devs were still in the process of porting over to Intel anyway.


I had a "Blackbook", it was definitely a 32 bit CPU (T2500).

Software-wise, 64-bit userland processes were supported since Tiger (CLI and Cocoa only; Carbon UI stayed 32-bit) before the kernel itself moved to 64 bits in Snow Leopard (although IIRC not by default except on the Xserve and Mac Pro, the two that could have loads of RAM; you could tell it to boot the 64-bit kernel through nvram boot-args or something, though).

A few Macs had a 64 bit CPU but a 32 bit only bootloader/firmware. That made them disqualified for one of the major OS version updates when it required a full 64 bit boot chain.


There are definitely Mac 32-bit Intel games on Steam, like Portal, that won't run on anything recent.


Apple switched to Intel with the first-generation Intel Core Solo/Duo, which were 32-bit CPUs, in mid 2006. Then later that year they released "late 2006" models using Core 2 CPUs, which were 64-bit. However, those only had support for 32-bit EFI, so they could not boot a 64-bit OS.

https://everymac.com/mac-answers/snow-leopard-mac-os-x-faq/m...


There were no online internet updates during the 68k to PPC or the PPC to Intel transition periods. Additionally, the 68k to PPC transition happened when Apple used a totally different OS, with a different application file format (HFS dual-fork files vs. .app directory bundles).

I am not sure how much of that really applies here, this seems like a simple arithmetic bug.


There were online updates during the PPC to Intel era, even later versions of classic Mac OS (circa Mac OS 9) had some online update capabilities.


Why do people think that the late 1990s and early 2000s were some sort of times of computer barbarism?

Yes, the unwashed masses were just getting past the nerd factor of the Internet, but it wasn’t that bad for software distribution.

The experience of using a mobile phone, on the other hand, was brutal. If you wanted to get a new feature enabled on your device, you’d only be able to take it to a retail outlet of your phone provider.

My specific examples were (1) getting a firmware update on a Nokia 6188 to enable access to the 800 MHz band of the provider’s then-restructured CDMA network; and (2) same for a CDMA Blackberry to enable EVDO data.

Eg: http://www.arcxsites.shh.net/Nokia6188.htm


Yeah, and the mobile phone experience didn't really get any better for quite some time. I bought a Nokia N8 in 2010 while abroad, and I had to take it to a retail outlet afterwards to install a new keyboard layout for the on-screen keyboard.


> Additionally, the 68k to PPC transition happened when Apple used a totally different OS, with a different application file format (hfs dual fork file, vs .app directory bundle).

What? The 68k to PPC transition was from 1994 to 1998 (versions 7.1.2 to 8.1). The NeXTSTEP-derived Mac OS X (which uses .app bundles rather than resource forks) was released in 2001; though there were betas and developer releases as early as 1997, the OS transition is still distinct from the 68k to PPC transition and there were quite a few PPC-only releases of the pre-OS X Mac OS.


>"There were no online internet updates during the 68k to PPC or the PPC to Intel transition periods."

There certainly were during the PPC to Intel transition. That happened in 2005 with "Tiger", and at that time OS X had already been receiving updates over the Internet for quite some time.


But why do Apple Silicon Macs require an additional 1.1 GB of updates?


They may be running multiple updaters. https://eclecticlight.co/2021/12/15/how-recovery-works-on-m1... (emphasis added):

“for M1 models, their ‘firmware’ update also brings a new Recovery system which is based on the latest macOS, in this case 12.0.1 even when the update is 11.6.1”


Ah, maybe because it boots straight into the Mac kernel instead of an independent firmware/bootloader.


That's exactly it. The boot environment on Apple Silicon is a stripped down macOS system


Because the original download is combined Intel + non-platform-specific data. It then downloads the ARM-specific binaries.


I don’t think that’s exactly right. I think the second ARM package is personalized, i.e. signed by Apple for your device. That’s why it’s not cacheable, per the article.


Six of one, half a dozen of the other.


Not trying to be pedantic, but the Intel package has everything you need; it’s equivalent to packages 1 and 2 from the ARM update. I don’t think ARM package 1 contains any Intel code at all.


I'm just going by what the article says. Feel free to disagree with the author if you have better data.


My comment is supported by the article.

> …but the second 1.1 GB always has to be downloaded direct from Apple’s software update servers and can’t be served from the cache, making it slower to download.

Why can’t it be cached? Because it’s unique per device. Only thing that makes sense.

There’s no reason to think the ARM package is supplemental binaries to overlay on top of an Intel package in the way your original comment stated.

All the article says is that Intel gets one 500 MB package and ARM gets two, a 700 MB one and a 1.1 GB one. It doesn’t opine on the contents. All we know is that Intel gets one cacheable package and ARM gets two (one cacheable and one not), and that the Intel one happens to be roughly (but not exactly) the same size as one of the ARM ones.


The “Intel” version actually contains a lot of Arm-specific code in Universal Binary form.


Probably because Apple Silicon is the only ISA platform Apple still has fucks to give about. I keep running headlong into driver bugs on Intel Macs -- that work just fine on Apple Silicon.


I haven’t seen a stable intel laptop in about 10 years.

My sample includes Windows, Linux and Macs. My most recent intel mac was the worst laptop I’ve ever used, but it was the end of a run of machines that couldn’t reliably resume, had sub-2-hour battery life out of the box (advertised at 8+ hours), woke up to fry themselves in my bag, thermally throttled at random times, etc, etc.


It's okay, they'll switch ISAs again in a few years, and then ARM will get parity with Intel in updates.


Looking at the timeline (using announcement dates):

  1977      : 6502
  1984 (+ 7): 68000
  1994 (+10): PowerPC
  2005 (+11): x86
  2020 (+15): ARM
Intervals between transitions creep up. (I know those dates may not be the ones others would pick: I skipped the Apple I (1976), transition periods were sometimes long (they sold Apple IIs up to 1993), etc.)

Because of that I don’t see a next transition happen before around 2040.

If it happens, my $0.02 is on RISC-V. That's a big if, though. The main CPU is getting less and less relevant relative to custom accelerators, and they're in control of their own instruction set, so they may be able to fix a few bottlenecks while (more or less) staying on ARM.


What could Apple possibly gain by switching to RISC-V? They already design their own ARM cores anyway so licensing isn’t an issue


In that timespan RISC-V could leapfrog ARM in overall performance (including power usage and price) -- who knows? Performance improvements have been a big driver of those previous migrations.


Apple didn’t migrate from PowerPC to Intel because Intel CPUs were way faster; there was just no way to keep up with demand and get new processors into the new cash drivers of the time, i.e. laptops.

It was essentially the same with Intel and Arm, plus the fact that they could save money and further integrate vertically.


Yes they did. You had fast PowerPC chips at the time, but they couldn't compete at power efficiency, which was important for consumer hardware, especially portable devices.


I'm confused. Of course Intel chips were better for laptops, that's exactly what I said.


That’s why I said “including power usage and price” above. IBM just weren’t able and/or willing to match Intel’s chips for Apple’s purposes.

(As I recall, but maybe I misremember, Apple claimed a big performance advantage for their high-end G5 Macs but in the real world it was a bit dubious.)


That could happen for ordinary people and companies who have to rely on commodity processors or who license ARM cores and add their own peripherals around them on a custom chip, but Apple designs their own ARM processors.

They have an ARM architecture license. They design their own ARM implementations from the ground up.

If something can be done to RISC-V designs that would leapfrog ARM and ARM does not want to incorporate whatever breakthrough that is into their own cores for some reason that will mean that the people who rely on commodity ARM chips or on licensed ARM cores will be out of luck.

Apple on the other hand would be free to incorporate that into their ARM implementation.


> RISC-V could leapfrog ARM in overall performance

RISC-V is an instruction set. If any company made a better RISC-V core than Apple can make themselves, Apple could either buy it from them or make their own faster ARM core.

That’s what happened with Intel and PowerPC…

What improvements in the instruction set itself could make RISC-V fundamentally faster than equivalent ARM chips?


Apple could just buy up the world's capacity of the smallest node size process and build a faster chip with lower power draw, irrespective of whether ARM or RISC-V.


Calling 15 years “a few” is a tad dramatic.


It felt like less time because most of Apple's devices (nearly everything that wasn't a mac) already ran ARM in the interim.


I don't think it'll take 15 years for RISC-V processors to be competitive with ARM.


What’s the point if they’re merely competitive? There has to be some compelling advantage - why would Apple throw out their ARM ISA for RISC-V. Unlike other companies the licensing isn’t really an issue.


That’s true, but at the same time, throwing out their ARM ISA wouldn’t necessarily be a huge cost for them; they’ve migrated smoothly a bunch of times in the past, so they’re really good at it.

If there were a compelling advantage at some point, they could go for it.


Well, Apple already designs their own processors; the math also works out for them to cut out the middleman with ARM if you disregard their Softbank relationship. They don't rely on Cortex designs like Qualcomm and they already have extended the ISA for their own purposes. It appears that Apple could stand to benefit from an ISA they can truly control. Whether it's worth another transition is unclear, but Apple's desire for control over their digital supply chain is obvious. It's not like you can argue that they can't pull off a RISC transition, either.


How much are they even paying ARM?

> It's not like you can argue that they can't pull off a RISC transition

Apple could also possibly send a spaceship to the moon if they really wanted to. Not sure there are any practical reasons to do that.

They’ve been working on ARM chips for almost a decade to get to where they are now. Throwing that away for some ill-defined advantages doesn’t seem that rational.


A RISC transition and a moon landing are markedly different things, though. They've already migrated MacOS to a RISC architecture, whereas nothing Apple makes is really intended to go into space. Unlike building a rocket ship, we already know Apple can do this.

None of us really know the specifics of Apple's deal, though. It's a perpetual license and they are in bed with the firm that majority-owns ARM. It seems like a good deal, but obviously it's not good enough; Apple doesn't use any ARM-provided core designs and even adds their own ISA extensions when needed. They themselves are using ARM as if it were RISC-V, and they're a big enough stakeholder that they can get away with it.

> They’ve been working on ARM chips for almost a decade

https://appleinsider.com/articles/21/09/03/apple-investigati...

They've been exploring RISC-V internally for a couple years now, too.

I think it is either extremely foolish or ignorant to think that RISC-V won't undermine most ARM-licensed devices. The vast majority of places where ARM is used (even in Apple devices) are small, cheap microcontrollers whose cost is largely whatever the license demands. If Apple could switch their internal ICs to RISC-V, they would already be saving a lot of money without transitioning to RISC-V fully.


> They've already migrated MacOS to a RISC architecture,

Have they?

I certainly think you have a point about microcontrollers.

I find it very doubtful RISC-V would supplant ARM on the high end anytime soon. The licensing fees are peanuts compared to the cost of designing your own cores, and even the big players have been struggling to come up with anything that could compete against ARM's designs (besides Apple, of course).

Also, it’s not at all clear what incentive Apple would have to switch to RISC-V. Sure, they could do it… but what would they gain?


> Hoe much are they even paying ARM?

I think Apple has a perpetual license to ARM tech because they put up a bunch of investment money after ARM spun off from Acorn.


The perpetual license to the ARM ISA has nothing to do with Apple's investment in ARM or Acorn. Anyone with money could buy a perpetual license to the ISA under whatever conditions ARM and the buyer agree on.


Knowing Apple, their next move is probably going to be inventing an ISA out of whole cloth; either that or acquire RISC-V International, so that they can be sure that future ISA changes and developments support their needs first.


They already invented an ISA, it's called ARMv8. The ISA was designed for Apple Silicon, not the other way round.


You're right, it'll sadly be a lot longer.


Could be Rosetta-related, since that part is unique to the M1 Macs


It's not. Rosetta is quite small and is installed separately.

You can install Rosetta yourself from the command line with:

  softwareupdate --install-rosetta

https://lapcatsoftware.com/articles/Rosetta.html


That's merely the mechanism to enable running Intel software. Many of the system frameworks and a few of the built-in application extensions are universal binaries, so there's quite a portion of Intel code installed on the system.

Same thing happened with the PPC transition; 10.4.5 to 10.5.8 had a lot of PowerPC code even without Rosetta installed.


The Signed System Volume is exactly the same everywhere, on Intel and Apple Silicon Macs. It doesn't selectively install architecture-specific binaries.

The "personalization" is basically firmware updates specific to your Mac. These are a lot bigger and more complicated on Apple silicon than on Intel. You can see this with the --downloadassets argument to the createinstallmedia tool:

https://scriptingosx.com/2018/10/include-assets-in-external-...

The same thing happens with iOS updates.


> It doesn't selectively install architecture-specific binaries.

I know, I didn't say otherwise. I said the system is universal.

I wasn't even talking about the update mechanism.


Which makes sense, since users have come to expect software to be portable. Let's say you have an Intel machine, and an Apple Silicon machine. You could then install software to a portable USB drive, and run it on both devices.


Apple silicon offloads processing to its various sub-engines (neural ones etc). Intel Macs don't have these. I imagine the extra space will be ML models + other associated stuff.


I don't think this is it. Those systems may require larger drivers for Apple Silicon, but Intel Macs should be perfectly capable of running CoreML models too. I'm unaware of any Apple Silicon-exclusive AI features on Mac (last I used it).


Just 1 example from a google: https://9to5mac.com/2022/06/07/macos-ventura-apple-silicon-f...

There are plenty more.


What exactly is this supposed to prove around update sizes though?


Extra files for ML models etc, as it says in my comment.


The ML models for coprocessors are usually in the size range of a few MB.


I’m not sure any (large) ML models as such are bundled.


In general, what causes Apple Silicon Macs to have a larger update size than the Intel Macs?


ARM binaries in general tend to be a bit larger fwiw.


Perhaps it’s the personalized part of the update?


I’ve never heard the software updates are personalized. Can you give a source? I’m not sure what it would be for.

“According to the logged in iCloud account this user likes reggae, thriller movies, and paint programs. So… don’t patch flaw X7b-9?”


iOS apps, and by extension, app updates are encrypted with a per-account key. No idea why it would be done for system updates though. iOS system update .ipsw’s certainly aren’t.


Apps are encrypted as a DRM scheme. iOS updates aren’t encrypted but are authorized based on your hardware ID.


I can't see it either, but the bit about it not being cachable would seem to lend credence to the idea. Maybe they did something weird like signing/encrypting it with a per-device key? Not that I can see why they'd do that...


I believe this is to prevent replay attacks?


They’re signed with a key from the CPU per device.


Larger binary sizes for ARM? Is Thumb still a thing?

Still, a binary diff update would be helpful, especially since Apple seems to have more control over the system files. Maybe reproducible builds would help so you wouldn't end up with meaningless diffs.
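The idea behind a binary diff update is just to ship "copy this range from the old image" ops plus literal bytes for whatever changed. A toy block-level sketch in Python; real tools like bsdiff are far smarter, and this has nothing to do with Apple's actual update format:

```python
BLOCK = 4  # tiny block size, purely for illustration

def make_diff(old: bytes, new: bytes):
    # Index every aligned window of the old image by content.
    index = {}
    for off in range(0, len(old) - BLOCK + 1):
        index.setdefault(old[off:off + BLOCK], off)
    ops, i = [], 0
    while i < len(new):
        blk = new[i:i + BLOCK]
        if len(blk) == BLOCK and blk in index:
            ops.append(("copy", index[blk], BLOCK))  # reuse old data
            i += BLOCK
        else:
            ops.append(("lit", new[i:i + 1]))        # ship the new byte
            i += 1
    return ops

def apply_diff(old: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            _, off, n = op
            out += old[off:off + n]
        else:
            out += op[1]
    return bytes(out)
```

The update size then scales with the number of literal ops, i.e. with how much actually changed, which is exactly why non-reproducible builds (meaningless byte churn) defeat the scheme.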


Thumb is still a thing, but not on AArch64. For the most part it’s only relevant on the tiny M-class (low power embedded) Arm parts.


Thumb was a 16-bit instruction encoding for their 32-bit ARM7TDMI microcontrollers. A very different beast from AArch64.


This. The sheer size and constant growth of those megacorp products annoys me a lot.

Thanks for the writeup.


I am uneasy with the idea that a more locked-down, M1- or M2-based Mac has a 1.1 GB download that appears to be individualized and must be downloaded directly from Apple.


I imagine it isn’t a surprise that a company as philosophically dedicated to the graphical user interface as Apple wouldn’t prioritize a non-breaking bug fix for the command line update tool.


The size of the 2nd update is a bit smaller than the first (if you include both uarchs). Maybe an XOR decryption key for the first set? Bootloader, then (Intel+Apple files) XOR version-key or my-key? I am thinking of the secure boot attestation, and extensions that want you to disable that and allow custom kexts, etc.

It makes sense that maybe the x86 stuff might be there just in case of rosetta needing to do something. No idea but seems plausible that they want to have the fork there.

There definitely is some kind of per-instance registration of which laptop has which version; iPhones similarly have a very visible "requesting update" state. Apple has had secure boot on all their shit for a long time, they have hardware attestation, the OS will report when it's been updated, and they can monitor attempts at rollback attacks unless the laptop has had its Startup Config > Security policy changed to allow insecure mode and older/non-signed/non-Apple OSes or custom extensions, etc. Apple legitimately did build a very secure default system; you can unlock it, but you do have to go out of your way to go into recovery etc. By default they will very aggressively assume that Apple signing = on and version only goes forward.

Which, if any, of those files are bit-identical when received by different laptop identifier instances (via Wireshark or whatever)? Like, "it has to be downloaded from Apple": is that because it's custom to your hardware instance?

Is there a common file image and then a mask, or is the second thing a giant blob of binaries that's the same for everyone, or what? Or can you just not tell, because it's encrypted or something?


>Maybe a XOR decryption key for the first set?

Gigabytes of keys? I hope Apple isn't wasting tons of bandwidth to achieve information-theoretic security that's no more secure in practice than 256-bit AES.
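For context on why the objection follows: XOR gives one-time-pad security only when the pad is as long as the payload and never reused, so an "XOR decryption key" for a multi-gigabyte image would itself be multiple gigabytes. A toy round-trip in Python, just to illustrate the key-length constraint:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad style XOR: the key must be at least as long as the
    # data, which is where the "gigabytes of keys" cost comes from.
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

payload = b"system update image"
pad = os.urandom(len(payload))      # pad as long as the payload
ct = xor_bytes(payload, pad)
assert xor_bytes(ct, pad) == payload  # XOR is its own inverse
```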


It doesn't even make sense anyway. If the cacheable content were encrypted with a per-device key, it wouldn’t be cacheable, obviously.



