My takeaway from the demo is less that "it's different each time" and more that "it can be different for different users and their styles of operating": a power user can now see a different Settings UI than a basic user, and it can be generated in real time from the user's persona context.
Example use case (chosen specifically for tech): An IDE UI that starts basic, and exposes functionality over time as the human developer's skills grow.
On one hand, I'm incredibly impressed by the technology behind that demo. On the other hand, I can't think of many things that would piss me off more than a non-deterministic operating system.
I like my tools to be predictable. Google search trying to predict that I want the image or shopping tag based on my query already drives me crazy. If my entire operating system did that, I'm pretty sure I'd throw my computer out a window.
I know what it's doing and I'm impressed. If you understand what it's doing and aren't impressed, that's cool too. I think we just see things differently, and I doubt either of us will convince the other to change their mind on this.
I feel like one quickly hits a similar partial-observability problem as with, e.g., motion-sensing lights. How often do you wave your arms around, annoyed, because the light turned off?
To get _truly_ self driving UIs you need to read the mind of your users.
It's heavy-tailed distributions all the way down.
Interesting research problem on its own.
We already have adaptive UIs (profiles in VS Code, anyone? Vim, Emacs?). They're mostly under-utilized because they take time to set up, and most people aren't better at designing their own workflow than the sane defaults are.
I would bet good money that many of the functions they chose not to drill down into (such as settings -> volume) do nothing at all or cause an error.
It's a frontend generator. It's fast. That's cool. But it's being pitched as a functioning OS generator, and I can't help but think it isn't one, given the failure rates for those sorts of tasks. Further, the success rates for HTML generation probably _are_ good enough for a Holmes-esque (perhaps too harsh) rugpull (again, too harsh) demo.
A cool glimpse into what the future might look like in any case.
It's a brand of terribleness I've somewhat gotten used to: every time I open Google Drive, it takes me to the "Suggested" tab. I can't recall a single time it had the document I care about anywhere near the top.
There's still nothing that beats the UX of Norton Commander.
Ah yes, my operating system, most definitely a place I want to stick the Hallucinotron-3000, so that every click I make yields a completely different UI that has absolutely zero bearing on reality. We're truly entering the "Software 3.0" days (can't wait for the imbeciles shoving AI everywhere to start overusing that dogshit made-up marketing term incessantly).
We'll need to boil a few more lakes before we get to that stage I'm afraid, who needs water when you can have your AI hallucinate some for you after all?
Is not wanting the UI of my OS to shift with every mouse click a hot take? If wanting consistent "when I click here, X happens" behavior instead of "I click here and I'm Feeling Lucky happens" behavior makes me dense, so be it, I guess.
No. But you interpreting and evaluating the demo in question as suggesting the things you described - frankly, yes. It takes a deep gravity well to miss a point this clear from this close.
It's a tech demo. It shows you it's possible to do these things live, in real time (and to back Karpathy's point about tech spread patterns, it's accessible to you and me right now). It's not saying it's a good idea - but there are obvious seeds of good ideas there. For one, it shows you a vision of an OS or software you can trivially extend yourself on the fly. "I wish it did X", bam, it does. And no one says it has to be non-deterministic each time you press some button. It can just fill what's missing and make additions permanent, fully deterministic after creation.
Personally I think it's a mistake, at least at the team level. One of the most valuable things about software or a framework dictating how things are done is that it gives a group of people a common language to communicate with and rules to enforce. This is why we generally prefer a well-documented framework to letting a "rockstar engineer" roll their own: only they will understand its edge cases and ways of thinking, and everyone else will pay a price adapting to it, dragging the whole team's productivity down.
Secondly, most people don't know what they want or how they want to work with a specific piece of software. It's simply not important enough, in the hierarchy of things they care about, to form opinions about how a specific piece of software ought to work. What they want is the easiest and fastest way to get something done and move on. It takes insight, research, and testing to figure out what that is in a specific domain. This is what "product people" are supposed to figure out, not farm out to individual users.
You bake those rules into a Claude.md file in the relevant folders, and it becomes the guide when building or changing anything. Ubiquitous language and all that jazz.
Behavioral patterns are not unpredictable. Who knows how far an LLM could get by pattern-matching what a user is doing and generating a UI to make it easier. Since the user could immediately say whether they liked it or not, this could turn into a rapid and creative feedback loop.
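The loop described above can be sketched in a few lines. Everything here is a toy assumption: `propose_ui` stands in for an LLM pattern-matcher, and `user_likes` for the accept/reject signal.

```python
from collections import Counter

def propose_ui(action_log, rejected):
    # Pattern-match the most frequent actions and propose a shortcut for one,
    # skipping anything the user already rejected. (Stand-in for an LLM.)
    for action, _count in Counter(action_log).most_common():
        if f"shortcut:{action}" not in rejected:
            return f"shortcut:{action}"
    return None

def feedback_loop(action_log, user_likes):
    """Propose UI changes until the user accepts one or rejects them all."""
    rejected = set()
    while (proposal := propose_ui(action_log, rejected)) is not None:
        if user_likes(proposal):
            return proposal       # adopt the new UI element
        rejected.add(proposal)    # remember the rejection and try again
    return None                   # user prefers the UI unchanged
</n```

The immediate like/dislike signal is what keeps this from drifting: rejected proposals are never re-offered.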
So, if the user likes UIs that don't change, the LLM will figure out that it should do nothing?
One problem LLMs don't fix is the misalignment between app developers' incentives and users' incentives. Since the developer controls the LLM, I imagine that a "smart" shifting UI would quickly devolve into automated dark patterns.
A mixed ever-shifting UI can be excellent though. So you've got some tools which consistently interact with UI components, but the UI itself is altered frequently.
Take for example world-building video games like Cities Skylines / Sim City or procedural sandboxes like Minecraft. There are 20-30 consistent buttons (tools) in the game's UX, while the rest of the game is an unbounded ever-shifting UI.
The rest of the game is very deterministic; its state is controlled by the buttons. The slight variation comes from the simulation engine and follows consistent patterns (you can't have a building on fire if there's no building yet).
Tools like v0 are a primitive example of what the above is talking about. The UI maintains familiar conventions, but is laid out dynamically based on surrounding context. I'm sure there are still weird edge cases, but for the most part people have no trouble figuring out how to use the output of such tools already.
Border-line off-topic, but since you're flagrantly self-promoting, might as well add some more rule breakage to it.
You know those websites/apps that let you enter text/details and don't show the sign-in/sign-up screen until you submit, so you feel like "Oh, but I already filled it out, might as well sign up"?
They really suck, big time! It's disingenuous, misleading, and it wastes people's time. I had no interest in using your thing for real, but I thought I'd try it out and potentially leave some feedback; this bait-and-switch just made the whole thing feel sour, and I'll probably actively avoid it and anything else I feel is related to it.
Thanks for the benefit of the doubt. I typed that in a hurry, and it didn’t come out the way I intended.
We had the idea that there’s a class of apps [1] that could really benefit from our tooling - mainly Fireproof, our local-first database, along with embedded LLM calling and image generation support. The app itself is open source, and the hosted version is free.
Initially, there was no login or signup - you could just generate an app right away. We knew that came with risks, but we wanted to explore what a truly frictionless experience could look like. Unfortunately, it didn’t take long for our LLM keys to start getting scraped, so the next best step was to implement rate limiting in the hosted version.
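For what it's worth, the kind of per-user limit described above is often a token bucket; here's a minimal sketch (the numbers and class name are made up, not Vibes DIY's actual implementation).

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch: each request spends a token,
    and tokens refill at a fixed rate up to a capacity."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Bursts up to `capacity` go through instantly; sustained scraping is throttled to the refill rate.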
(Vibes DIY CEO here.) The generation runs while you log in, which appreciably decreases the wait time from idea to app: by the time you click through the login, your app is ready.
If login takes 30 seconds, and app gen 90, we think this is better for users (but clearly not everyone agrees.) Thanks for the feedback!
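The overlap being described is that perceived wait becomes max(login, generation) instead of their sum. A hedged `asyncio` sketch, with sleeps standing in for the real (much longer) durations:

```python
import asyncio

# Illustrative only: 0.2 s stands in for a 90 s generation,
# 0.05 s for a 30 s login flow.
async def generate_app():
    await asyncio.sleep(0.2)
    return "app"

async def login_flow():
    await asyncio.sleep(0.05)
    return "session"

async def main():
    gen_task = asyncio.create_task(generate_app())  # kicked off immediately
    session = await login_flow()                    # user logs in meanwhile
    app = await gen_task                            # usually ready by now
    return session, app
```

Run sequentially these would take 0.25 s; concurrently, the total is roughly the longer of the two.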
This talk https://www.youtube.com/watch?v=MbWgRuM-7X8 explores the idea of generative / malleable personal user interfaces where LLMs can serve as the gateway to program how we want our UI to be rendered.
Humans are shit at interacting with systems in a non-linear way. Just look at Jupyter notebooks and the absolute mess that arises when you execute code blocks in arbitrary order.
If you run cells out of order, you get weird results. Hence efforts like marimo, which replaces Jupyter with something that reruns all dependent cells.
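The core of that reactive idea fits in a short sketch (a toy model, not marimo's actual algorithm): when a cell changes, find its transitive dependents and rerun them in dependency order, so stale state can't survive.

```python
def downstream_order(deps: dict[str, set[str]], changed: str) -> list[str]:
    """deps maps each cell to the set of cells it reads from.
    Return the changed cell plus all transitive dependents, topologically sorted."""
    # 1. Collect every cell affected by the change (fixed-point iteration).
    affected = {changed}
    grew = True
    while grew:
        grew = False
        for cell, reads in deps.items():
            if cell not in affected and reads & affected:
                affected.add(cell)
                grew = True
    # 2. Topologically sort the affected cells so readers run after writers.
    order, done = [], set()

    def visit(cell):
        if cell in done:
            return
        for dep in deps.get(cell, set()):
            if dep in affected:
                visit(dep)
        done.add(cell)
        order.append(cell)

    for cell in affected:
        visit(cell)
    return order
```

Editing one cell then reruns exactly the cells whose results could have changed, which is what makes out-of-order execution safe.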
It immediately makes me think of an LLM that can generate a customized GUI for the topic at hand, one you can interact with in a non-linear way.