It is a VS Code fork. There were some UI glitches. Some usability was better. Cursor has some real annoying usability issues - like their previous/next code change controls never going away and no way to disable them. Design of this one looks more polished and less muddy.
I was working on a project and just continued with it. It was easy because they import settings from Cursor. Feels like the browser wars.
Anyway, I figured it was the only way to use Gemini 3, so I got started. A fast model that doesn't look for much context. Could be a preprompt issue. But you have to prod it to do stuff - no ambition and a kind of off-putting attitude, like 2.5.
But hey - a smarter, less context-rich Cursor Composer model. And that's a compliment, because the latest Composer is a hidden gem. Gemini has potential.
So I start using it for my project and after about 20 mins - oh, no. Out of credits.
What can I do? Is there a buy a plan button? No? Just use a different model?
What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.
I switched back to Cursor. And you know what? It had Gemini 3 Pro. Likely a less hobbled version. Day one. Seems like a mistake in the eyes of the big evil companies, but I'll take it.
Real developers want to pay real money for real useful things.
Google needs to not set themselves up for failure with every product release.
If you release a product, let those who actually want to use it have a path to do so.
As someone who used to work there, Google will never get product releases right in general because of how bureaucratic and heavyweight their launch processes are.
They force the developing team into a huge number of meetings and email threads, which the team must steer itself, in order to check off a ridiculously large list of "must haves" that are usually well outside their domain expertise.
The result is that any non-critical or internally contentious features get cut ruthlessly in order to make the launch date (so that the team can make sure it happens before their next performance review).
It's too hard to get the "approving" teams to work with the actual developers to iron these issues out ahead of time, so they just don't.
Spot on. I would suggest a slightly different framing where the antagonist isn't really the "approving" teams but "leaders" who all want a seat at the table and exercise their authority lest their authority muscles atrophy. Since they're not part of the development, unless they object to something, how would they have any impact or show any leadership?
I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in the "leadership" roles than in the individual contributor roles. So many "leaders", but none with the awareness, or the care, to notice the farcical volumes such announcements speak.
As someone who just GA'd an Azure service - things aren't all that different in Azure. Not sure how AWS does service launches but it would be interesting to contrast with GCP and Azure.
Yep, that and (also used to work there) the motivations of the implementing teams end up getting very detached from the customer focus and product excellence because of bureaucratic incentives and procedures that reward other things.
There's a lot of "shipping the org chart" -- competing internal products, turf wars over who gets to own things, who gets the glory, rather than what's fundamentally best for the customer. E.g. Play Music -> YouTube Music transition and the disaster of that.
> So I start using it for my project and after about 20 mins - oh, no. Out of credits.
I didn't even get to try a single Gemini 3 prompt. I was out of credits before my first one had completed. I guess I've burned through the free tier in some other app, but the error message gave me no clues. As far as I can tell there's no link to give Google my money in the app. Maybe they think they have enough.
After switching to gpt-oss:120b it did some things quite well, and the annotation feature in the plan doc is really nice. It has potential but I suspect it's suffering from Google's typical problem that it's only really been tested on Googlers.
EDIT: Now it's stuck in a loop repeating the last thing it output. I've seen that a lot on gpt-oss models but you'd think a Google app would detect that and stop. :D
EDIT: I should know better than to beta test a FAANG app by now. I'm going back to Codex. :D
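For what it's worth, that kind of degenerate repetition is cheap to detect client-side. Here is a minimal sketch of one such heuristic (hypothetical - not anything Google or the gpt-oss runtimes actually ship): flag the output once its tail is the same chunk repeated several times in a row.

```python
def is_looping(text: str, min_len: int = 10, repeats: int = 3) -> bool:
    """Heuristic loop detector: True if the output's tail is one chunk
    of at least `min_len` characters repeated `repeats` times in a row."""
    for n in range(min_len, len(text) // repeats + 1):
        if text.endswith(text[-n:] * repeats):
            return True
    return False

# A host app could run this on streamed output and abort generation:
print(is_looping("The answer is 42. " * 10))                       # True
print(is_looping("The quick brown fox jumps over the lazy dog."))  # False
```

Real runtimes tend to work on token IDs rather than characters, and add a length cap so the check stays O(1) per streamed chunk, but the idea is the same.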
Don't think so, I expect that system to use Spanner, so my best guess is that the user generated an image at the end of the credit reset window (which is around noon EST).
The first patch release (released on launch day) says: "Messaging to distinguish particular users hitting their user quota limit from all users hitting the global capacity limits." So, collectively we're hitting the quota; it's not just your quota. (One would think Google might know how to scale their services on launch day...)
The Documentation (https://antigravity.google/docs/plans) claims that "Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity."
With Ultra I hit that limit in 20 minutes with Gemini 3 low. When the rate limit cleared some hours later, I got one prompt before hitting limit again.
You'd hope so - the same way you'd hope that AI IDEs would not show package/dependency folder contents when referencing files using @, but I still get shown a bunch of shit that I would never need to reference by hand.
One would think this would have been obvious when it fails on the first or second request already, yet people here all complain about rate limits.
When I downloaded it, it already came with the proper "Failed due to model provider overload" message.
When it did work, the agent seemed great, achieving the intended changes in a React and python project. Particularly the web app looks much better than what Claude produced.
I did not see functionality to have it test the app in the browser yet.
> Cursor has some real annoying usability issues - like their previous/next code change never going away and no way to disable it.
The state of Cursor's "review" features makes me convinced that the Cursor devs themselves are not dogfooding their own product.
It drives me crazy when hundreds of changes build up, I've already reviewed and committed everything, but I still have all these "pending changes to review".
Ideally committing a change should treat it as accepted. At the very least, there needs to be a way to globally "accept all".
VSCode is Electron based which, yes, is based on Chromium. But the page you link to isn't about that; it's about using VSCode as a dev environment for working on Chromium, so I don't know why you linked it in this context.
Which came from "the KDE HTML Widget" AKA khtmlw. Wonder if that's the furthest we can go?
> if all that effort stayed inside the KDE ecosystem
Probably nowhere; people would rather not contribute to something that makes decisions they disagree with. Forking is beautiful, and I think it improves things more than it hurts. Think of all the things we wouldn't have if it weren't for forked projects :)
On the other hand, if that had stopped Google from having a browser they pushed into total dominance with the help of sleazy methods, maybe that would have been better overall.
I still prefer an open-source Chromium base vs a proprietary IE (or whatever else) web engine dominating.
(Fixing IE6 issues was no fun)
Also, I do believe the main reason Chrome gained dominance is simply that it got better from a technical POV.
I started webdev on FF with Firebug. But at some point Chrome just got faster, with superior dev tools. And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns, and otherwise engaged in shady tracking as well.
> I still prefer an open-source Chromium base vs a proprietary IE (or whatever else) web engine dominating.
Okay but that's not the tradeoff I was suggesting for consideration. Ideally nothing would have dominated, but if something was going to win I don't think it would have been IE retaking all of firefox's ground. And while I liked Opera at the time, that takeover is even less likely.
> Also, I do believe the main reason Chrome gained dominance is simply that it got better from a technical POV.
Partly it was technical prowess. But Google pushing it on their own web pages, and paying to put an "install Chrome" checkbox into the installers of unrelated programs, was a big factor in Chrome not just spreading but taking over.
> And their dev tools kept improving while FF stagnated and instead started and maintained unrelated social campaigns, and otherwise engaged in shady tracking as well.
How long has it been since you last touched Firefox or tried its dev tools?
I use FF for browsing, but every time I think of opening its dev tools, maybe even just to have a look at some site's source code... I quickly close them again and open Chrome instead.
I wouldn't know where to start, to list all the things I miss in FF dev tools.
The only interesting thing they had for me, the 3D visualizer of the DOM tree, was dropped years ago.
We might not have had Mozilla/Phoenix/Firefox in the first place either, which I'd like to think has been a net positive for the web since inception. At least I remember being saved by Firefox when the options were pretty much Internet Explorer or Opera on a Windows machine.
> Both are based on khtml. We could be living in a very different world if all that effort stayed inside the KDE ecosystem
How so?
Do you think thousands of googlers and apple engineers could be reasonably managed by some KDE opensource contributors? Or do you imagine google and apple would have taken over KDE? (Does anyone want that? Sounds horrible.)
I think they meant we wouldn't have had Safari, Chrome, Node, Electron, VSCode, Obsidian. Maybe no TypeScript or React either (before V8, JavaScript engines sucked). The world might have adopted more of Mozilla.
That's a bit misleading. It was based on WebCore, which Apple had forked from KHTML. However, Google found Apple's additions to be a drag, and I think very little of them (if anything at all, besides the KHTML foundation) survived "the great cleanup" and rewrite that became Blink. So actually WebKit was just a transitional phase that led to a dead end, and it is more accurate to say that Blink is based on KHTML.
Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.
Even more importantly, though, the more we move towards "I'm supervising a fleet of 50+ concurrent AI agents developing code on separate branches" the more the notion of the IDE starts to look like something you want to be able to launch in an unconfigured cloud-based environment, where I can send a link to my PM who can open exactly what I'm seeing in a web browser to unblock that PR on the unanswered spec question.
Sure, there's a world where everyone in every company uses Zed or similar, all the way up to the C-suite.
But it's far more likely that web technologies become the things that break down bottlenecks to AI-speed innovation, and if that's the case, IDEs built with an eye towards being portable to web environments (including their entire extension ecosystems) become unbeatable.
Many VSCode extensions are written in C++, Go, Rust, C#, or Java, exactly because performance sucks when they're written in JavaScript, and most run out of process anyway.
> Firstly, the barrier to entry is lower for people to take web experience and create extensions, furthering the ecosystem moat for Electron-based IDEs.
The last thing I want is to install dozens of JS extensions written by people who crossed that lower barrier. Most of them will probably be vibe coded as well. Browser extensions are not the reason I use specific browsers. In fact, I currently have 4 browser extensions installed, one of which I wrote myself. So the idea that JS extensions will be a net benefit for an IDE is the wrong way of looking at it.
Besides, IDEs don't "win" by having more users. The opposite could be argued, actually. There are plenty of editors and IDEs that don't have as many users as the more popular ones, yet still have an enthusiastic and dedicated community around them.
I tried switching to Zed and switched back less than 24 hours later. I was expecting it to be snappier than VS Code and it wasn’t to any significant degree, and I ran into several major bugs with the source control interface that made it unusable for me.
People dunk on VS Code but it's pretty damn good. Surely the best Electron app? I'm sure if you are heavily into Emacs it's great, but most people don't want to invest huge amounts of time into their tools; they would rather be spending that time producing.
For a feature-rich workhorse that you can use for developing almost anything straight out of the box, or within minutes after installing a few plugins, it's very hard to beat. In my opinion a lot of the hate is pure cope from people who have probably never really used it.
VS Code is technically an Electron app, but it's not the usual lazy resource hog implementation like Slack or something. A lot of work went into making it fast. I doubt you'll find many non-Electron full IDEs that are faster. Look at Visual Studio, that's using a nice native framework and it runs at the speed of fossilized molasses.
VSCode has even fewer features than Emacs, OOTB. Complaining about full IDEs' slowness is wholly irrelevant here. Full IDEs provide an end-to-end experience for implementing a project. Whatever you need, it's there. I think the only plugin I've installed on JetBrains's IDEs is IdeaVim, and I've never needed anything else for Xcode.
It's like complaining about a factory's assembly line, saying it's not as portable as the set of tools in your pelican case.
In 2025, you really picked Emacs as the hill to die on? Who is under 30 who cares about Emacs in 2025? Few. You might as well argue that most developers should be using Perl 6.
> the only plugins I've installed on Jetbrains's ones
By default, JetBrains' IntelliJ-based IDEs have a huge number of plug-ins installed. If you upgrade from Community Edition to a paid license, the number only increases. Your comment is slightly misleading to me.
Just wait until vi steps into the room. Perhaps we can recreate the Usenet emacs vs vi flame wars. Now, if only '90's me could see the tricked out neovim installs we have these days.
Please take a look at the Emacs documentation sometime.
VSCode is more popular, which makes it easy to find extensions. But you don’t see those in the Emacs world because the equivalent is a few lines of config.
So what you will see are more like meta-extensions. Something that either solve a whole class of problems, could be a full app, or provides a whole interaction model.
Like writing out of process extensions in compiled languages.
VS is much faster considering it is a full-blown IDE, not a text editor, being mostly C++/COM and a couple of .NET extensions alongside the WPF-based UI.
Load VSCode with the same amount of plugins, written in JavaScript, to see where performance goes.
They just made a big song and dance about fully updating Visual Studio so it launches in milliseconds and is finally decoupled from all the underlying languages/compilers.
It's still kinda slow for me. I've moved everything but WinForms off it now, though.
VS Code is plenty fast enough. I switched to Zed a few months back, and it's super snappy. Unless you're running on an incredibly resource constrained machine, it mostly comes down to personal preference.
I have always found JetBrains stuff super snappy. I use neovim as a daily driver but for some projects the inference and debugging integration in JetBrains is more robust.
It's funny that despite how terrible, convoluted, and maladapted web tech is for displaying complex GUIs, it still gradually ate the lunch of every native component library, and they just couldn't innovate to keep up on any front.
Amazon just released an OS that uses React Native for all of its GUI.
It's easy to design bad software and write bad code. Like the old saying: "I didn't have time to write you a short letter, so I wrote you a long one". Businesses don't have time to write good and nice software, so they wrote bad one.
I didn't really mean Electron, but rather the unholy amalgam of three languages, each with 20 years of "development", which mostly consisted of decrapifying and piling up new (potentially crappy) stuff. Although Electron, with the UI context and system (backend? background?) context both running JS, is another can of worms.
The anti-Electron meme is a vocal minority who don’t realize they’re a vocal minority. It’s over represented on Hacker News but outside of HN and other niches, people do not care what’s under the hood. They only care that it works and it’s free.
I used Visual Studio Code across a number of machines including my extremely underpowered low-spec test laptop. Honestly it’s fine everywhere.
Day to day, I use an Apple Silicon laptop. These are all more than fast enough for a smooth experience in Visual Studio Code.
At this point the only people who think Electron is a problem for Visual Studio Code either don’t actually use it (and therefore don’t know what they’re talking about) or they’re obsessing over things like checking the memory usage of apps and being upset that it could be lower in their imaginary perfect world.
Complaining about Electron is an ideological battle, not a practical argument. The people who push these arguments don’t care that it actually runs very well on even below average developer laptops, they think it should have been written in something native.
The word "developer" is doing a lot of work there spec-wise.
The extent to which electron apps run well depends on how many you're running and how much ram you had to spare.
When I complain about electron it has nothing to do with ideology, it's because I do run out of memory, and then I look at my process lists and see these apps using 10x as much as native equivalents.
And the worst part of wasting memory is that it hasn't changed much in price for quite a while. Current model memory has regularly been available for less than $4/GB since 2012, and as of a couple months ago you could get it for $2.50/GB. So even a 50% boost in use wipes out the savings since then. And sure the newer RAM is a lot faster, but that doesn't help me run multiple programs at the same time.
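A quick back-of-envelope check on those numbers (a sketch using the rough $/GB cycle-low figures above; strictly, the break-even point for the $4.00 → $2.50 drop is a 60% boost in usage, so a 50% boost eats most but not all of the savings):

```python
# Rough cycle-low prices from the comment above, in $/GB.
price_2012 = 4.00    # DDR3, circa 2012
price_recent = 2.50  # DDR5, recent low

# Memory-use growth that fully erases the price drop:
break_even = price_2012 / price_recent - 1
print(f"break-even boost: {break_even:.0%}")  # break-even boost: 60%

# A 50% boost in footprint already eats most of the savings:
base_gb = 16
print(price_2012 * base_gb)          # 64.0 (old footprint, 2012 prices)
print(price_recent * base_gb * 1.5)  # 60.0 (bloated footprint, recent prices)
```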
> The word "developer" is doing a lot of work there spec-wise.
Visual Studio Code is a developer tool, so there’s no reason to complain about that.
I run multiple Electron apps at a time even on low spec machines and it’s fine. The amount of hypothetical complaining going on about this topic is getting silly.
You know these apps don’t literally need to have everything resident in RAM all the time, right?
> I run multiple Electron apps at a time even on low spec machines and it’s fine.
"Multiple" isn't too impressive when you consider that a blank Windows install has more than a hundred processes going. Why accept bloat in some when it would break the computer if it were in all of them?
> Visual Studio Code is a developer tool, so there’s no reason to complain about that.
Even then, I don't see why developers should be forced to have better computers just to run things like editors. The point of a beefy computer is to do things like compile.
But most of what I'm stuck with Electron-wise is not developer tools.
> The amount of hypothetical complaining going on about this topic is getting silly.
I am complaining about REAL problems that happen to me often.
> You know these apps don’t literally need to have everything resident in RAM all the time, right?
Don't worry, I'm looking specifically at the working set that does need to stay resident for them to be responsive.
Ease of writing and testing extensions is actually why Electron won the IDE wars.
Microsoft made a great decision to jump on the trend and just pour money in to lap Atom and the rest in optimization and polish.
Especially when you compare it to Microsoft's efforts on the desktop. They accumulated several more-or-less-complete component libraries over the years, and I still prefer WinForms.
If you want an Electron app that doesn't lag terribly, you'll end up rewriting the UI layer from scratch anyway. VSCode already renders the terminal on the GPU, and a GPU-rendered editor area is experimental. There will soon be no web UI left at all.
> If you want an Electron app that doesn't lag terribly
My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.
I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:
- Visual Studio Code: 589.4 MB
- Visual Studio 2022: 732.6 MB
update:
I see a lot of love for Jetbrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!
That easily takes the prize for worst-designed benchmark, in my opinion.
Have you tried Emacs, Vim, Sublime, Notepad++, ...? Visual Studio and Android Studio are full IDEs, meaning that upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than to text editors.
- notepad++: 56.4 MB (went gray-window unresponsive for 10 seconds when opening the explorer)
- notepad.exe: 54.3 MB
- emacs: 15.2 MB
- vim: 5.5MB
I would argue that notepad++ is not really comparable to VSCode, and that VSCode is closer to an IDE, especially given the context of this thread. TUIs are not offering a similar GUI app experience, but vim serves as a nice baseline.
I think that when people dump on electron, they are picturing an alternative implementation like win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark, because its the most common critique I read with respect to electron when these are suggested.
It is obviously possible to beat a browser-wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where the dependency bloat and memory management is often even worse.
I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.
> I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.
I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects, front end in WebStorm, back end in IntelliJ IDEA, DB in DataGrip sometimes) then it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.
Had that issue with 32 GB of RAM on work laptop, in part also cause the services themselves took between 512 MB and 2 GB of memory to run (thanks to Java and Spring/Boot).
> RAM is so incredibly cheap compared to 5/10/15/20 years ago
Compared to 20 years ago that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes RAM actually costs more today than 10 years ago. If we ignore spikes and buy when the cycle of memory prices is low, DDR3 in 2012 was not much more than the price DDR5 was sitting at for the last two years.
Anyone saying that Java-based Jetbrains is worse than Electron-based VS Code, in terms of being more lightweight, is living in an alternate universe which can’t be reached by rational means.
Wow, it's true--Terminal is <canvas>, while the editor is DOM elements (for now). I'm impressed that I use both every day and never noticed any difference.
Could you suggest an example such application we can try / look at screenshots of?
(I've been aware of Qt for like two decades; back in the early 2000s my employer was evaluating such options as Tk, wxWindows, and ultimately settled on Java, I think with AWT. Qt seems to have a determined survival niche in "embedded systems that aren't android"?)
I wouldn't underestimate Eclipse user statistics. That may sound insane in 2025, but I've seen a lot of heavily customized eclipse editors still kicking around for vendor specific systems, setting aside that Java is still a pretty large language in its own right.
At best, that's subjective, but it's fact that JetBrains is comically far behind when it comes to AI tooling.
They have a chance to compete fresh with Fleet, but they are not making progress on even the basic IDE there, let alone getting anywhere near Cursor when it comes to LLM integration.
JetBrains' advantage is that they have full integration and better understanding of your code. WebStorm works better with TypeScript than even Microsoft's own creation. This all translates into AI performance
Have you actually given them a real test yet - either Junie or even the baseline chat?
What a strange claim. For enterprise Java, is there a serious alternative in 2025? And Rider is slowly eating the lunch of (classic) Visual Studio for C# development. I used it again recently to write an Excel XLL plug-in. I could not believe how far Rider has come in 10 years.
IME PyCharm's weakness is not integrating with modern tooling like ruff/pyright - their built-in type checker is terrible at catching stuff, and somehow there isn't an easy way to run mypy, black, or isort in it.
If there’s a workflow I’m missing please let me know because I want to love it!
I just checked and I don’t even have the JVM installed on my machine. It seems like Java is dead for consumer applications. Not saying that’s why they aren’t popular but I’m sure it doesn’t help.
I see VSCode management has been firmly redirected to prioritize GitHub's failing and lagging "AI coding" competition entry. When that predictably falters, expect them to lose interest in the editor altogether.
More like "OBS is Qt". Which it is not, OBS uses Qt. And Chrome is just a runtime and GUI framework for VS Code. Let's not confuse forks of software with software built on something.
I believe our definitions of "winning the IDE wars" are very, very different. For one thing, using "user count" as a metric for this is like using "number of lines of code added" in a performance review. And even if that were part of the metric, people who use it but don't absolutely fall in love with it - so much so that they become the ones advocating for its use - are only worth a tiny fraction of a "user".
neovim won the IDE wars before it even started. Zed has potential. I don't know what IntelliJ is.
It started as a modernized Eclipse competitor (the Java IDE) but they've built a bunch of other IDEs based on it. Idk if it still runs on Java or not, but it had potential last I used it about a decade ago. But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
I don't know what it's based on, but it works extremely well. I use Rider & WebStorm daily and I find Rider is a lot faster than Visual Studio when it comes to the Unreal Engine codebase and WebStorm seems to be a lot more reliable than VSCode nowadays (I don't know if it's at fault, but ever since copilot was integrated I find that code completion can stop working for minutes at a time. Very annoying)
Android Studio is built on the IntelliJ stack. Jetbrains just launched a dedicated Claude button (the button just opens up Claude in the IDE, but there are some pretty neat IDE integrations that it supports, like being able to see the text selection, and using the IDE's diff tool). I wonder if that's why Google decided to go with VS Code?
Uh, isn't that the regular Claude code extension that's been available for ages at this point? Not jetbrains but anthropics own development?
As a person paying for the jetbrains ultimate package (all ides), I think going with vscode is a very solid decision.
The jetbrains ides still have various features which I always miss whenever I need to use another IDE (like way better "import" suggestions as an easy to understand example)... But unless you're writing in specific languages like Java, vscode is way quicker and works just fine - and that applies even more to agentic development, where you're using these features less and less...
Jetbrains IDEs are all based on the JVM - and they work better than VSCode or the full Visual Studio for me. It's the full blown VS (which has many parts written in C++) that is the most sluggish of them all.
Since you last used IntelliJ "about a decade ago", what do you use instead?
> But running GUI apps on the JVM isn't the best for 1000 reasons, so I hope they've moved off it.
What would you recommend instead of Swing on JVM? Since you have "1000 reasons", it should easy to list a few here. As a friendly reminder, they would need to port (probably) millions of lines of Java source code to whatever framework/language you select. The only practical alternative I can think of would be C++ & Qt, but the development speed would be so much slower than Java & Swing.
Also, with the advent of wildly modern JVMs (11+), the JIT process is so insanely good now. Why cannot a GUI be written in Swing and run on the JVM?
Notice that IntelliJ uses its own UI framework, really, which I don't think has much Swing left in it after all these years. And Kotlin has been the main language for a decade now.
Lol the second I saw the antigravity release I thought "there's no way I'm using that, they will kill it within a year". Looks like they're trying to kill it at birth.
Exactly my reaction. Every time I've used something from Google, it ends up dead in a few years. Life is too short to waste so many years learning something that is destined to die shortly
These are just extended press releases for marketing and management layers, who don't have to use these things themselves but can look good when talking about them.
Agree, but at the same time there's not too much lock-in with these IDEs these days, and switching is very easy. Especially since they're all VSCode forks.
> What's the strategy here? If I am into your IDE and your LLM, how do I actually use it? I can't pay for it and it has 20 minutes of use.
I wonder how much Google shareholders paid for that 20 minutes. And whether it's more or less than the corresponding extremely small stock price boost from this announcement.
> Our modeling suggests that a very small fraction of power users will ever hit the per-five-hour rate limit, so our hope is that this is something that you won't have to worry about, and you feel unrestrained in your usage of Antigravity
You have to wonder what kind of models they ran for this.
Interesting that a next-gen open-source-based agentic coding platform with superhuman coding models behind it can have UI glitches. Very interesting that even the website itself is kind of sluggish. Surely, someone, somewhere must have ever optimized something related to UI rendering, such that a model could learn from it.
If it were true, it would be a big miss not to point that out when you run out of credit, on their pricing page, or anywhere in their app.
I should also mention that the first time I prompted it, I got an 'overloaded'-style out-of-credit message. The one I got at the end was different.
I've rotated through the $200/month plans from Anthropic, Cursor, and OpenAI. But never Google's. They have maybe the best raw power in their models - smartest, and extremely fast for what they are. But they always drop the ball on usability, both in terms of the software surrounding the model and the raw model's attitude. These things matter.
Speaking of paying for LLMs, am I doing something wrong? I paid Cursor $192 for a year of their entry-level plan and I never run out of anything. I code professionally, albeit I'm at the stage where it's 80% product dev, finding the right thing to build.
Is there another world where $200/m is needed to run hundreds of agents or something?
I pay $10/month for GitHub Copilot and I usually get to 100% burn on the final day of the month. I use it extensively for the entire month about 12 hours a day. It doesn't include any of the "Pro" models that are only on the $200/mo plans, but it does a pretty fantastic job.
When did you pay for it? There was a time when its limits were very generous. If you bought an annual plan at that time then you will continue with that until renewal. Or, alternatively, you’re using the Auto model which is still apparently unlimited. That’s going away.
It’s very easy to run into limits if you choose more expensive models and aren’t grandfathered.
The fact that they released this IDE means that they may cut Cursor off from their API in the future. Google has both the organizational history (see Google Maps) and the market power to cut clients off from their APIs with impunity.
With vendor lock-in to Google's AI ecosystem, likely scraping/training on all of your code (regardless of whatever their ToS/EULA says), and being blocked from using the main VS Code extensions library.
Thank you for saying what this entire blog post doesn't. It's actually disrespectful of Google to launch this without even a mention of the fact that it is based on VSCode.
Well, Kate has been around as a KDE-based advanced text editor for nearly two decades now - its base feature set isn't too different from a base VS Code installation. And there's also KDevelop as a more full-featured IDE.
It's a good point, and in fact I went and looked at the original announcement of VS Code and it appears that Microsoft didn't credit Chromium or Electron back then either. I guess big companies are allergic to crediting other big companies.
> given how many other companies have "created browsers" that are just Chromium forks and rubbed Google the wrong way
Has there been any indication that these folks are "rubbing Google the wrong way"? I think Chromium, as a project, is actually very happy that more people are using their engine.
It also feels like they couldn't use the GOOGLE ANTIGRAVITY logo enough times in this blog post. Gigantic image with the logo and a subtitle, plastered over and over again.
I no longer bother reading their press releases. I'd much rather read the comments and threads like these to get the real story. And I say that as a former googler.
It's so obvious from even just the vague screenshots that are hidden somewhere on the site that it's a VSCode fork, that I suppose I can see why they've tried to obfuscate that as much as possible.
VSCode isn't a Chromium fork, it's an Electron app. Utilizing something is different than making a derivative of it. For example, an empty "Hello World" Electron app wouldn't have any value for an app developer, but creating a web browser derived from Chromium means you've already finished 99.9% of the work.
Google is going to win AI and kill all the other market participants.
They have the revenues to support all of this.
They spent time learning from all the players and can now fast follow into every market. Now they're fast and nimble and are willing to clone other products wholesale, fork VSCode, etc.
They're developing all of this, meanwhile Pichai is calling it a "bubble" to put a chill on funding (read: competition). It's not like Google is slowing down.
We had a chance to break them up with regulation, and we didn't. Now they're going to kill every market participant.
This isn't healthy. We have an invasive species in the ecology eating up all the diverse, healthy species.
a16z and YC must hate this. It puts a cap on their returns.
As engineers, you should certainly hate this. Google does everything it can to push wages down. Layoffs, offshoring, colluding with competitors. Fewer startups mean fewer rewards for innovation capital and more accrual to the conglomerate taxing the entire internet.
Chrome, Android, Search, Ads, YouTube, Cloud, Workspace, Other Bets, and AI/Deepmind need to be split into separate companies.
Google? Push wages down? Google is mostly known for paying top of market to keep a zoo of engineers whose only output is blog posts about how smart they are because they solved a problem they also caused.
(presumably because if they touch the ad system it might break)
> a16z and YC must hate this. It puts a cap on their returns.
And a16z's main business is investing in financial scams.
You mean like web search, webmail, internet ads, maps, calendars, browsers, smartphone operating systems, online document editing, and translation? I mean, I'm not even including stuff they acquired early like YouTube. Google was the most feared company for a decade or more for a good reason: they absolutely devoured competition in what were thought to be mature markets.
Putting aside that several of these were acquisitions, these are all great examples of things where Google introduced something for free because it would make the money through advertising, both directly and through ecosystem effects. Even the paid enterprise versions of these services were a tiny % of Google's overall gross revenue.
Prior to the push into Cloud computing, Ad revenue was well over 90% of all Google gross income, and Cloud was the first big way they diversified. GCP is definitely a credible competitor these days, but it did not devour AWS. Other commercial Google services didn't even become credible competitors, e.g. Google Stadia was a technically exceptional platform that got nowhere with customers.
The question now is whether Google carves out an edge in AI that makes it profitable overall, directly or strategically. Like many companies, there seems to be a presumption of potentially infinite upside, which is what it would take to justify the astronomical costs.
Google’s main ability is to win by pure technical prowess. They hire a lot of bright engineers. Google Search won over Altavista by pure algorithms. Google Docs (and Writely) were way more feature complete than competitors.
You love a Google product because of its features but never actually because of the product itself. But you can’t win everything by engineering and sometimes Google struggles with the product side.
I'm not sure you can call Docs (Writely) and Android acquisitions though. Android was an OS for cameras and Writely was an experimental rich text editor, not a word processor.
It's not like YouTube, where they legitimately bought their way to dominance. And I'd argue that even in the case of DoubleClick, Google was already dominating the search advertising market when they bought DoubleClick to consolidate their dominance.
> Plaintiffs maintain that Google has monopoly power in the product market for general search services in the United States.
> According to Plaintiffs, Google has a dominant and durable share in that market (general search), and that share is protected by high barriers to entry.
> Google counters that there is no such thing as a product market for general search services. What exists instead, Google insists, is a broader market for query response.
(And yes, obviously products like Sheets or Maps were amazing, and are still very much the best. It was a joke to say that even Google denies its own success, in the same vein as the earlier comment.)
Why credit? Come on, the world has moved on from 1990s-era 4-clause BSD licenses. If you recall, the 4-clause BSD license states that all advertising materials must display an acknowledgement. It’s widely considered to be a mistake and nobody uses this license any more. Not because of legal reasons (incompatibility with GPL) but because it is madness to require so many acknowledgements. Stallman was right.
Yes, madness. VS Code wasn’t developed entirely by Microsoft. It uses plenty of other open source libraries. Why is it that VS Code should be acknowledged but not the underlying V8 engine, or Chromium, or WebKit or KHTML?
Stallman said that in 1997 there were 75 acknowledgements in a single piece of software. With today’s trend of micro libraries on npm, there will be at least thousands of acknowledgements in one piece of software.
Just the Eclipse legacy and development approach, perhaps. Eclipse is 24 years old and its codebase stems from VisualAge before that, so I imagine it's ... idiosyncratic. Probably the most successful thing built on OSGi, though I don't imagine Theia brought any of that along.
At least for a while, Eclipse seemed like an Architecture Astronaut's happy fever dream: actual bits of implementation hidden behind 5 to 10 nested interfaces and facades wherever I looked. I remember that at one point I wanted to debug some weird behavior in the Plugin Development Kit, and after sinking about half a day into exploring the source code, I hadn't seen a single line of code that was actually doing anything meaningful. It was quite shocking to my old junior-level self.
I am not a fan of Eclipse, mostly due to bad experiences with it, but this is an excellent idea. If more people and companies invested in its development, we would have an alternative to VS Code.
Interesting that they include non-Gemini models. Claude and GPT-OSS are both available on Google Cloud, so I assume that Antigravity is using GC as the provider and not making API calls to Anthropic or OpenAI.
As somebody who worked on two IDEs which didn't fork VSCode but still used Monaco for code editing views, I think forking VSCode is almost always the right solution for a new IDE. You get extensions, familiarity, and most importantly, you don't waste valuable time on the boring stuff which VSCode has already implemented.
Nothing wrong with using code other people have made open. Our whole industry is built on this.
Forking VSCode? Simple. Extensions, not so simple: they are controlled by Microsoft. Without them you'll run into continual papercuts as a vendor who has forked VSCode.
Because if they're just an extension they're stuck with whatever rules Microsoft makes up, and Google is no stranger to using this leverage against others.
Because there are plenty of good reasons why you may want to modify/extend the code and the look and feel beyond what an extension would let you do.
I never understood why people scoff at VS Code forks. I'd honestly tend to be more skeptical of new editors that don't fork VS Code, because then they're probably missing a ton of useful capabilities and are incompatible with all the VSC extensions everyone's gotten used to.
Native app dev is covered in red tape and puts you at the mercy of Apple etc. It's unfortunate that things are so inefficient now, but competition is good, and native platforms can get good.
> Native app dev is covered in red tape and puts you at the mercy of Apple etc.
... Aren't we talking about a programming IDE here? When did mobile become anything like the primary market for that? Are people expected to sit around for hours inputting symbols with an OSK?
I'm not ignoring this question, I'm just not familiar with Zed and don't know how native it feels. Maybe Vscode is quicker to modify in the future as needed (esp if Google already uses it), or Antigravity is better in ways than Zed, or Zed just has a more skilled team than Google.
Also, I'm used to vim and sensitive to lag, so I always hated vscode, but it seems a lot of people don't notice or something. And when you're using AI for 90% of the LOC, it matters less.
I tried writing a native Windows app using WinUI 3.
I wasted a day on trying to get some PNGs to render correctly, but no matter the config I used, the colors came out wrongly oversaturated.
I used Tauri with a WebView, and the app was rendering the images perfectly fine. On top of that the UI looked much better, and I was done in half the time I spent trying to fix the rendering issue in WinUI 3.
Making a VSCode fork is probably the wrong direction at this point in time. The future of agentic coding should need less support for code editor related functionality, and could eventually primarily support viewing code rather than editing code. There's a lot more flexibility in UI starting from scratch, and personally I want to see a UI that allows flexible manipulation of context and code changes with multiple agents.
GitHub is building a UI like this. I like it. I sometimes need the full IDE, but plenty of times don't. It's nice to be able to easily see what the agent is up to and converse with it in real time while reviewing its outputs.
Lots of commenters are simply calling this a VSCode fork and I think they're missing something important as far as how this product fits into the market.
Anthropic and OpenAI are investing a lot into this space and are now competing directly with companies like Cursor. Cursor's biggest moat at the moment is their tab completion model, which doesn't exist in Anthropic's and OpenAI's current offerings and is leagues ahead of GitHub Copilot's.
Antigravity is a VSCode fork that adds both Google's own tab complete and an agent composer, similar to products like https://conductor.build/. Assuming that Google doesn't shoot themselves in the foot (which they seem to like doing), we'll see if wrappers like Cursor / Windsurf / Cognition can compete against the big labs. It's worth noting that the category seems to be blurring, since Cursor has trained not only their own tab complete model but also their own agent model.
Whenever I have a model fix something new I ask it to update the markdown implementation guides I have in the docs folder in my projects. I add these files to context as needed. I have one for implementing routes and one for implementing backend tests and so on.
They then know how to do stuff in the future in my projects.
They still aren't learning. You're learning and then telling them to incorporate your learnings. They aren't able to remember this so you need to remind them each day.
That sounds a lot like '50 First Dates' but for programming.
Yes, this is something people using LLMs for coding probably pick up on the first day. They're not "learning" as humans do, obviously. Instead, the process is that you figure out what was missing from the first message you sent where they got something wrong, change it, and then restart from the beginning. The "learning" is you keeping track of what you need to include in the context; how exactly that process works is up to you. For some it's very automatic, and you don't add/remove things yourself; for others it's keeping a text file around that they copy-paste into a chat UI.
This is what people mean when they say "you can kind of do "learning" (not literally) for LLMs"
While I hate anthropomorphizing agents, there is an important practical difference between a human with no memory, and an agent with no memory but the ability to ingest hundreds of pages of documentation nearly instantly.
I believe LLMs ultimately cannot learn new ideas from their input in the same way as they can learn it from their training data, as the input data doesn't affect the weights of the neural network layers.
For example, let's say LLMs did not have examples of chess gameplay in their training data. Would one be able to have an LLM play chess by listing the rules and examples in the context? Perhaps, to some extent, but I believe it would be much worse than if it were part of the training (which of course isn't great either).
The outcome is definitely not the same, and you need to remind them all the time. Even if you feed the context automatically they will happily "forget" it from time to time. And you need to update that automated context again, and again, and again, as the project evolves
The feeding can be automated in some cases. In GitHub Copilot you can put it under .github/instructions, and each instructions markdown file starts with a section that contains a pattern of which files to apply the instructions to.
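For illustration, such an instructions file might look something like this. The file name, glob, and rules below are hypothetical, and Copilot's convention (as I understand it) uses an `applyTo` frontmatter field with glob patterns rather than a regex, so check the current docs for the exact syntax:

```markdown
<!-- .github/instructions/routes.instructions.md (hypothetical example) -->
---
applyTo: "src/routes/**"
---
When implementing a route:
- Register the route in src/routes/index.ts.
- Validate request bodies with the shared schema helpers.
- Add a matching test under tests/routes/.
```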
You can also have an index file that describes when to use each file (nest with additional folders and index files as needed) and tell the agent to check the index for any relevant documentation they should read before they start. Sometimes it will forget and not consult the docs but often it will consult the relevant docs first to load just the things it needs for the task at hand.
Yeah, if my screwdriver undid the changes I just made to my mower, constantly ignored my desire to unscrew screws and instead punched a hole in my carb - I'd be throwing that screwdriver in the garbage.
- don't have career growth that you can feel good about having contributed to
Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
- don't have a genuine interest in accomplishment or team goals
Easy to train for, if it turns out to be necessary. I'd always assumed that a competitive drive would be necessary in order to achieve or at least simulate human-level intelligence, but things don't seem to be playing out that way.
- have no past and no future. When you change companies, they won't recognize you in the hall.
Or on the picket line.
- no ownership over results. If they make a mistake, they won't suffer.
Good deal. Less human suffering is usually worth striving for.
> Humans are on the verge of building machines that are smarter than we are. I feel pretty goddamned awesome about that. It's what we're supposed to be doing.
It's also the premise of The Matrix. I feel pretty goddamned uneasy about that.
(Shrug) There are other sources of inspiration besides dystopic sci-fi movies. There's the Biblical story of the Tower of Babel, for instance. Better not work on language translation, which after all is how the whole LLM thing got started.
Sometimes fiction went in the wrong direction. Sometimes it didn't go far enough.
In any case, The Matrix wasn't my inspiration here, but it is a pithy way to describe the concept. It's hard to imagine how humans maintain relevancy if we really do manage to invent something smarter than us. It could be that my imagination is limited, though. I've been accused of that before.
> Humans are on the verge of building machines that are smarter than we are.
You're not describing a system that exists. You're describing a system that might exist in some sci-fi fantasy future. You might as well be saying "there's no point learning to code because soon the rapture will come".
That particular future exists now, it's just not evenly distributed. Gemini 2.5 Pro Thinking is already as good at programming as I am. Architecture, probably not, but give it time. It's far better at math than I am, and at least as good at writing.
Computers beat us at maths decades ago, yet LLMs are not able to beat a calculator half of the time. The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. You claiming much success in asking LLMs for math makes me question whether you have actually asked an LLM about maths.
Most AI experts not heavily invested in the stocks of inflated tech companies seem to agree that current architectures cannot reach AGI. It's a sci-fi dream, but hyping it is real profitable. We can destroy ourselves plenty with the tech we already have, but it won't be a robot revolution that does it.
> The maths benchmarks that companies so proudly show off are still the realm of traditional symbolic solvers. You claiming much success in asking LLMs for math makes me question if you have actually asked an LLM about maths.
What I really need to ask an LLM for is a pointer to a forum that doesn't cultivate proud exhibition of ignorance, Luddism, and general stupidity at the level exhibited by commenters in this entire HN story, and in this subthread in particular.
They can usually write code, but not that well. They have lots of energy and little to say about architecture and style. Don't have a well defined body of knowledge and have no experience. Individual juniors don't change, but the cast members of your junior cohort regularly do.
I actually do find there is a subset of meetings that are far more productive on Zoom. We can be voice chatting on one screen, share another screen and both be able to type, record notes, pull up side research without interrupting the conversation. It's a bit closer to co-working than a meeting but it hits a sweetspot for me.
I used to be really excited about "agents" when I thought people were trying to build actual agents like we've been working on in the CS field for decades now.
It's clear now that "agents" in the context of "AI" is really about answering the question "How can we make users make 10x more calls to our models in a way that makes it feel like we're not just squeezing money out of them?" I've seen so many people who think setting some "agents" off on a minutes-to-hours-long task of basically just driving up internal KPIs at LLM providers is cutting-edge work.
The problem is, I haven't seen any evidence at all that spending 10x the number of API calls on an agent results in anything closer to useful than last year, when people were purely vibe coding all the time. At least then people would interactively learn about the slop they were building.
It's astounding to watch a coworker walk through a PR with hundreds of added new files and repeatedly mention "I'm not sure if these actually work, but it does look like there's something here".
Now I'm sure I'll get some fantastic "no true Scotsman" replies about how my coworkers must not be skilled enough or how they need to follow xyz pattern, but the entire point of AI was to remove the need for specialized skills and make everyone 10x more productive.
Not to mention that the shift in focus on "agents" is also useful in detracting from clearly diminishing returns on foundation models. I just hope there are enough people that still remember how to code (and think in some cases) to rebuild when this house of cards falls apart.
> but the entire point of AI was to remove the need for specialized skills and make everyone 10x more productive.
At least for programming tools, nearly everything sold that way (since long before generative AI) actually succeeds or fails not on whether it eliminates the need for specialized skills and makes everyone more productive, but on whether it further rewards specialized skills, making the people who devote time to learning it more productive than if they had devoted the same time to learning something else.
Yeah it's saccharine. Reminds me quite a lot of Americans who work for tips (e.g. waiters) - disconcertingly friendly.
Someone gave me a great tip though - at least for ChatGPT there's a setting where you can change its personality to "robot". I guess that affects the system prompt in some way but it basically fixes the issue.
I am essentially in this exact role. The junior developers simply don't have the experience to evaluate the output of the agents. You wind up with a lot of slop in PRs. People can't justify why they did something. I've seen whole PRs closed, work redone, and opened anew because they were 70% garbage. Every other comment was asking "why is this here? it has nothing to do with the ticket."
Sadly, this is not sustainable and I am not sure what I'm going to do.
I enjoy getting into a good flow state and pounding out clever and elegant code but watching a good LLM generate code according to my specs and refining it is also enjoyable. I've been burning through $250 of free Claude Code Web credits and having multiple workers running at the same time is fun.
They have a capacity to "learn", it's just WAY MORE INVOLVED than how humans learn.
With a human, you give them feedback or advice, and generally by the 2nd or 3rd time the same kind of thing happens they can figure it out and improve. With an LLM, you have to specifically set up a convoluted (and potentially financially and electrically expensive) system in order to provide MANY MORE examples of how to improve via fine-tuning or other training actions.
Depending on your definition of "learn", you can also use something akin to ChatGPT's Memory feature. When you teach it something, just have it take notes on how to do that thing and include its notes in the system prompt for next time. Much cheaper than fine-tuning. But still obviously far less efficient and effective than human learning.
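A minimal sketch of that notes-in-the-system-prompt pattern; the class, method names, and prompt wording here are my own invention, not any real product's API:

```python
class NotesMemory:
    """Accumulate 'lessons' and prepend them to the system prompt next time."""

    def __init__(self):
        self.notes = []

    def learn(self, note: str) -> None:
        # Record a lesson once; duplicates are skipped.
        if note not in self.notes:
            self.notes.append(note)

    def system_prompt(self, base: str) -> str:
        # Build the system prompt with accumulated lessons appended.
        if not self.notes:
            return base
        lessons = "\n".join(f"- {n}" for n in self.notes)
        return f"{base}\n\nLessons from past sessions:\n{lessons}"


memory = NotesMemory()
memory.learn("Register new routes in src/routes/index.ts")
prompt = memory.system_prompt("You are a coding assistant.")
```

The model never changes; only the text it is shown does, which is why this is cheaper than fine-tuning but also why it has to be re-fed every session.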
I think it’s reasonable to say that different approaches to learning is some kind of spectrum, but that contemporary fine tuning isn’t on that spectrum at all.
> With an LLM, you have to specifically set up a convoluted (and potentially financially and electrical power expensive) system in order to provide MANY MORE examples of how to improve via fine tuning or other training actions.
The only way that an AI model can "learn" is during model creation, which is then fixed. Any "instructions" or other data or "correcting" you give the model is just part of the context window.
Fine tuning is additional training on specific things for an existing model. It happens after a model already exists in order to better suit the model to specific situations or types of interactions. It is not dealing with context during inference but actually modifying the weights within the model.
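To make the distinction concrete, here is a toy numeric sketch. This is nothing like a real fine-tuning pipeline; it only illustrates that fine-tuning moves the weights themselves, while in-context input leaves them untouched:

```python
def predict(weight: float, x: float) -> float:
    # A one-parameter "model": output is weight * input.
    return weight * x

def fine_tune_step(weight: float, x: float, target: float, lr: float = 0.1) -> float:
    # One gradient-descent step on (weight*x - target)**2: the weight changes.
    grad = 2 * (predict(weight, x) - target) * x
    return weight - lr * grad

w0 = 0.0                                    # "pretrained" weight
w1 = fine_tune_step(w0, x=1.0, target=1.0)  # fine-tuning: the weight moves
# In-context "learning", by contrast, would leave w0 as-is and only change
# what is fed in at inference time.
```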
As a team lead, working with people is so... cumbersome. They need time to recharge, lots of encouragement, and a nice place to work in. Give me a coding agent any time!
Well, it's a helluva lot faster to make, for one. For two, just about everyone knows how to navigate in vscode by now. Reducing the barrier to entry has obvious advantages.
I just opened the app to see what else I can bring up, and while clicking through UI I noticed I had some crappy key bindings extension installed, which apparently caused many of my annoyances.
I've probably installed it very long ago, or even by accident.
For example, I was always annoyed that the open file/directory shortcut (one of the most common operations) wasn't assigned and required mouse interaction -- fixed by disabling the extension.
The go-to-file shortcut did something completely different -- fixed by disabling the extension.
I likely won't adopt Cursor as my main IDE/Editor, but it's miles better than I thought just an hour ago.
Not the person you asked, but I hate how it screws up keyboard shortcuts.
It overrode the delete line shortcut with its own inline chat one, for example.
Decided to ditch it for claude code right after that, since I cannot be bothered to go over the entire list of keyboard shortcuts and see what else it overrode/broke.
I've found that annoying too, but you can always rebind them as you wish. It's only a few new keybinds that get in the way of my muscle memory.
That said I also have moved to CLI agents like Claude Code and Codex because I just find them more convenient and, for whatever reason, more intelligent and more likely to correctly do what I request.
> just about everyone knows how to navigate in vscode by now.
I don’t know and honestly I hate the assumption of the software industry that everyone knows or uses vs code. I stuck to sublime for years until I made the switch to Jetbrains IDEs earlier this year.
I quickly looked up the market share, and VS Code seems to have about 70%, which is a lot, but the 30% that don't use it is not that small a number either.
Like, I get it, it's very popular, but it's far from the only editor/IDE people use.
These new editors are trying to differentiate themselves via their AI features. Working on the core editor may waste resources that could have been better spent improving the AI features.
Until someone finally figures out that we need to rethink editors from the ground up to support different sort of operations and editing experience, to better facilitate LLMs doing work as agents.
But we're probably 1-2 years away from there still, so we'll live with skinned-forks, VSCode extensions and TUIs for now.
It's actually weird to me how none of the big players put their money where their mouth is and vibe coded a new IDE built from the ground up for this paradigm shift regardless of tech stack.
Zed team is writing their own in-house GUI stack [1] that leverages the computer's GPU with minimal middleware in-between. It's a lot of work short-term but IMO the payoff would be huge if they establish themselves. I imagine they could poke into the user-facing OS sector if their human-agent interaction is smooth. (I have not tried it yet though)
I am very sensitive to input latency and performance but after comparing Zed and VS Code for a while I really couldn't find any reason to stick with Zed. It's been a year or so since I last tried it but VSC just lets me do way more while still, IMO, having a nice, clean UI. I never notice any performance or key input latency with VSC.
> I wonder why they are not trying to fix up something extremely complex that only a handful of players managed to get right, using GUI stacks made with only mobile in mind that are desperately trying to catch up to desktop now
We need a VS Code fork that just exposes more interfaces, and does nothing else. Then all these forks could just use that with power extensions, and it'd force Microsoft to change its behavior.
The issue with Eclipse and that approach is the complexity of mixing plugins to do everything, which kills the UX.
When VSCode started, the differentiator from Atom and Eclipse was that the extension points were intentionally limited to optimize the user experience. But with the introduction of Copilot that wasn’t enough, hence the amount of forks.
I think that the Zed approach of having a common protocol to talk with agents (like a LSP but for agents) is much better. The only thing that holds me from switching to Zed is that so far my experience using it hasn’t been that good (it still has many rough edges)
Microsoft just forked Atom, and Atom already had good extensions, and a lot of them.
Before Microsoft bought GitHub, there was no reason to switch to VSCode instead of Atom.
When Microsoft bought GitHub, it received Atom from the GitHub team, and Microsoft stopped Atom's development.
VSCode was just Atom with the Microsoft brand and some little tweaks from Microsoft, never a game changer compared to Atom, the way Atom was in its time.
Now Antigravity is again a fork with some little tweaks on VSCode, no game changer, just with the Google branding.
Microsoft has very specific constraints on what extensions can and can't do; it's not a free-for-all. They're actively defending their moat by allowing Copilot to do things in a way that extensions couldn't. That's why all the serious contenders make a fork: it's simply not possible to have the same integration otherwise.
Because it just searches Google and HN is indexed regularly, nothing really noteworthy. If you copy paste the same quote into Google you get the same thing.
Still, it would be a lot wiser if all the forkers did one 'AI-enabled' fork together that exposes all the extras that Copilot gets. The barrier for testing would be much lower, and all the extension makers would also jump on the train. Likely MS would finally give in and make all the extras available for everyone. But all the fragmentation only helps MS.
I've had a Github Copilot subscription from work for 1yr+ and switch between the official Copilot and Roo/Kilo Code from time to time. The official Copilot extension has improved a lot in the last 3-6 months but I can't recall ever seeing Copilot do something that Roo/Kilo can't do, or am I missing something obvious?
The Copilot extension uses proposed APIs, meaning it's on an allowlist bundled with VS Code. Roo likely enables these early. An API can stay proposed for years before Microsoft opens it up to third parties.
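For context on how that gating works: an extension declares proposed APIs in its manifest, but stable VS Code only honors the declaration for extensions on a bundled allowlist; everyone else can only enable them in Insiders or a dev host. A sketch (the proposal name below is illustrative, not the exact list Copilot uses):

```jsonc
// package.json excerpt (illustrative): opting into proposed APIs
{
  "name": "my-agent-extension",
  "publisher": "example",
  "engines": { "vscode": "^1.95.0" },
  "enabledApiProposals": [
    "terminalDataWriteEvent" // example proposal; the real list varies by release
  ]
}
```

For local testing this only takes effect with something like `code-insiders --enable-proposed-api example.my-agent-extension`; in stable builds, the bundled allowlist wins.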
They're all going to have quite divergent opinions on how to structure the fork and various other design decisions that would inevitably lead to forks of the fork again.
I think forking VS Code is probably the most sensible strategy and I think that will remain the case for many years. Really, I don't think it's changing until AI agents get so ridiculously good that you can vibe code an entire full-featured polished editor in one or a few sessions with an LLM. Then we'll be seeing lots of de novo editors.
The Claude Code extension on VS Code does very little (too little, in my opinion). The integration with agentic functionality in Antigravity goes much deeper, in my 20 minutes or so of playing with it. The biggest value pieces I see are: the Agent Manager window, which provides a unified view of all my agents running across all my workspaces (!), where I can quickly approve or respond to follow-up questions and jump to the code in context for each agent; and the ability to select a piece of code and comment on it inline, with that comment sent to the correct, active agent. These two things alone are items I have been looking for... Too bad I only have approval to use Claude Code at work. This looks promising.
Well, you are entitled to your opinion but many people would disagree with you, and that's the crux of the issue, everyone has their own conflicting views on what the UX should be, hence all the forks.
I don't even know what the Claude Code extension does in vscode. I have it installed but hell if I know what it's doing. I run Claude in one of vscode's terminals, and do everything through there. I do see (sometimes) diffs pop up in the IDE, I guess that's the extent of this integration.
> AIUI the forks are required because Microsoft is gatekeeping functionality used by Copilot from extensions so they can't be used by these agents.
I always wonder how this works legally. VSCode needs to comply with the LGPL (it's based on Chromium/Blink, which is LGPL); they should provide the entire sources that allow us to rebuild our own "official" VSCode binary.
Could you give an example of what they're gatekeeping for Copilot exclusively? I'm kinda confused because Copilot in VS Code isn't exactly a powerhouse of unique features in my experience, it still feels well behind Roo/Kilo Code in most ways I can think of, although much closer to the competition than it was a year ago.
I was going to ask why all these companies choose to fork the entire IDE rather than just writing an extension like every other sane developer, and this response is the most believable reason why.
But Microsoft made VSCode, lol. I think gatekeeping things like that is fair; a billion-dollar company shouldn't get to just reuse all of your code instead of making its own IDE.
> 2024: every day a new Chrome fork browser is announced
I think this was more accurate around 2012. My local tech magazine had their own fork, and they attached a CD to the magazine which included the browser.
This whole blog post is seemingly about Google, not about the user. "Why We Built Antigravity" etc. "We want Antigravity to be the home base for software development in the era of agents" - cool, why would I as the user care about that?
This kind of cynicism is wild to me. Of course most AI products (and products in general) are for end users. Especially for a company like Google--they need to do everything they can to win the AI wars, and that means winning adoption for their AI models.
This is different. AI is an existential threat to Google. I've almost stopped using Google entirely since ChatGPT came out. Why search for a list of webpages which might have the answer to your question and then manually read them one at a time when I can instead just ask an AI to tell me the answer?
If Google doesn't adapt, they could easily be dead in a decade.
That's funny. I stopped using ChatGPT completely and use Gemini to search, because it actually integrates with Google nicely as opposed to ChatGPT which for some reason messes up sometimes (likely due to being blocked by websites while no one dares block Google's crawler lest they be wiped off the face of the internet), and for coding, it's Claude (and maybe now Gemini for that as well). I see no need to use any other LLMs these days. Sometimes I test out the open source ones like DeepSeek or Kimi but those are just as a curiosity.
If web pages don't contain the answer, the AI likely won't either. But the AI will confidently tell me "the answer" anyway. I've had atrocious issues with wrong or straight-up invented information, to the point that I must look up every single claim it makes.
My primary workflow is asking AI questions vaguely to see if it successfully explains information I already know or starts to guess. My average context length of a chat is around 3 messages, since I create new chats with a rephrased version of the question to avoid the context poison. Asking three separate instances the same question in slightly different way regularly gives me 2 different answers.
This is still faster than my old approach of finding a dry ground-truth source like a standards document, book, reference, or datasheet, and chewing through it for everything. Now I can sift through 50 secondary sources for the same information much faster, because the AI gives me hunches and keywords to google. But I will not take a single claim from an AI seriously without a link to something that says the same thing.
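The "ask several instances, compare answers" habit described above can be sketched as a tiny helper. Here `ask` is a hypothetical stand-in for whatever model call you use; the stub below just replays canned replies:

```python
from collections import Counter

def consensus(ask, question, n=3):
    """Ask n independent instances the same question and report agreement.

    `ask` is any callable question -> answer (hypothetical; swap in your
    real model call). Returns (most common answer, agreement ratio).
    """
    answers = [ask(question) for _ in range(n)]
    top, count = Counter(answers).most_common(1)[0]
    return top, count / n

# Deterministic stub standing in for three separate model instances:
replies = iter(["42", "42", "41"])
answer, agreement = consensus(lambda q: next(replies), "meaning of life?")
# answer == "42", agreement == 2/3: one instance disagreed, so verify before trusting
```

A low agreement ratio is exactly the "2 different answers out of 3" signal that tells you to go find a primary source.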
Given how embracing AI is an imperative in tech companies, "a link to something" is increasingly likely to be a product of LLM-assisted writing itself. The entire concept of checking things through the internet becomes more recursive with every passing moment.
I do not believe that Google Antigravity is aimed at wooing investors. I believe it is intended to be a genuine superior alternative to Cursor and Kiro etc. and is attempting to provide the best AI coding experience for the average developer.
Most of the other people (so far) in this sub-thread do not think this. They essentially have a conspiratorial view on it.
Colab is still going strong. Chrome inspector is still going strong.
They've never released a full-fledged IDE before, have they? I don't count the Apps Script editor as one, though it's been around for a long time as well.
I think it's much more likely that Google believes this is the future of development and wants to get in on the ground floor. As they should.
> Google believes this is the future of development
This is hardly possible, as this is definitely not the future of development, which is obvious to the developers who created this. Or to any developer, for that matter.
Agree to disagree, I guess. What you think is obvious, I think is false. And I think the rapidly growing success of Cursor is the proof of that. But I guess you must think Cursor is just a fad or something, since you don't see why Google would want to legitimately compete with it?
Cursor is obviously a fad (unlike Copilot - I'm not at all an AI hater, quite the opposite) and perhaps Google needs to present something to shareholders that will pretend to be competing.
Well, just so you know, there are lots of us who think Cursor is not a fad, and see that Google realizes this as well, and is genuinely competing with it.
A lot of people find it matters quite a lot for actual development work. If you want to ignore all that, then I guess go ahead.
But just know that what you're claiming is "obvious", is clearly not. There seems to be large disagreement over it, so it is clearly not obvious, but rather quite debatable.
I agree what you’ve listed makes sense as a product portfolio.
But AI Studio is getting vibe coding tools. AI Studio also has an API that competes with Vertex. They have IDE plugins for existing IDEs to expose Chat, Agents, etc. They also have Gemini CLI for when those don't work. There is also Firebase Studio, a browser-based IDE for vibe coding. Jules, a browser-based code agent orchestration tool. Opal, a node-based tool to build AI… things? Stitch, a tool to build UIs. Colab with AI for a different type of coding. NotebookLM for AI research (many features now available in the Gemini App). AI Overviews and AI Mode in search, which now feature a generic chat interface.
Thats just new stuff, and not including all the existing products (Gmail, Home) that have Gemini added.
This is the benefit of a big company vs startups. They can build out a product for every type of user and every user journey, at once.
In the "real world" you don't use the OpenAI or Anthropic API directly—you are forced to use AWS, GCP, or Azure. Each of these has its own service for running LLMs, which is conceptually the same as using the OpenAI or Anthropic API directly, but with much worse DX. For AWS it's called Bedrock, for GCP—Vertex, and for Azure it's AI Foundry, I believe. They may also offer complementary features like prompt management, evals, etc., but from what I've seen so far it's all crap.
Jules is the first and only one to add a full API, which I've found very beneficial. It lets you integrate agentic coding features into web apps quite nicely. (In theory you could always hack your own thing together with Claude Code or Codex to achieve a similar effect but a cloud agent with an API saves a lot of effort.)
I remember it took me a while early in my career to change my resume away from saying "I want to do this at my next job and make a lot of money" and towards "here is how I can make money and save costs for your company".
Google didn't learn that lesson here. They are describing why us using Antigravity is good for Google, not why us using Antigravity is good for us.
More accurately, it should be neither about Google nor about the user, but about the product. Describe what the product is and does, don’t make assumptions about the user, and let the user be the judge of it.
On the pricing page it says that for public preview they are offering a free individual plan with "generous rate limits". I gave it an HTML file and asked it to create Jinja templates from it and 2 minutes later (still planning, no additional prompt) I got this:
> Model quota limit exceeded. You have reached the quota limit for this model.
I'm saying it's egregious to expect all users to know the fact that an HTML document, for some reason, uses an enormous amount of context in an LLM designed specifically for working with code.
I haven't used it myself, but a few of my colleagues have used it and say it is good; they completed a huge chunk of work with Antigravity. Mind you, I am very skeptical of this.
They don't seem to be getting any rate-limiting issues, which I don't understand; maybe a bug in Antigravity is letting them use it more. They are really confident in the IDE after a few hours, and the output is really good.
It's the same problem with OpenRouter's free tiers for a long time. If something is truly $0 and widely available, people will absolutely bleed it dry.
Same here. I tried to build a super simple iOS App in antigravity and I was out of quota before it finished. The whole thing was a couple of files and a few hundred lines of code.
> Spin up agents to tackle routine tasks that take you out of your flow, such as codebase research, bug fixes, and backlog tasks.
The software of the future, where nobody on staff knows how anything is built, no one understands why anything breaks, and cruft multiplies exponentially.
After a bunch of people leave the company it's already like nobody knows how anything is built. This seems like a good thing to accelerate understanding a codebase.
it's funny - nervous funny, not haha funny - that you think drawing a real issue like this out into the open would focus an organization on solving it.
You can ask agents to identify and remove cruft. You can ask an agent why something is breaking -- to hypothesize potential causes and test them for validity. If you don't understand how something is built, you can ask the agent to give you an overview of the architecture and then dive into whatever part you want to explore more.
And it's not like any of your criticisms don't apply to human teams. They also let cruft develop, are confused by breakages, and don't understand the code because everyone on the original team has since left for another company.
> you can ask the agent to give you an overview of the architecture and then dive into whatever part you want to explore more.
This is actually a cool use that's being explored more and more. I first saw it in the wiki thing from the devin people, and now google released one as well.
Humans are just better at communicating about their process. They will spend hours talking over architectural decisions, implementation issues, writing technical details in commit messages and issue notes, and in this way they not only debug their decisions but socialize knowledge of both the code and the reasons it came to be that way. Communication and collaboration are the real adaptive skills of our species. To the extent AI can aid in those, it will be useful. To the extent it goes off and does everything in a silo, it will ultimately be ignored - much like many developers who attempt this.
I do think the primary strengths of genai are more in comprehension and troubleshooting than generating code - so far. These activities play into the collaboration and communication narrative. I would not trust an AI to clean up cruft or refactor a codebase unsupervised. Even if it did an excellent job, who would really know?
> Humans are just better at communicating about their process.
I wish that were true.
In my experience, most of the time they're not doing the things you talk about -- major architectural decisions don't get documented anywhere, commit messages give no "why", and the people who the knowledge got socialized to in unrecorded conversations then left the company.
If anything, LLMs seem to be far more consistent in documenting the rationales for design decisions, leaving clear comments in code and commit messages, etc., if you ask them to.
Unfortunately, humans generally are not better at communicating about their process, in my experience. Most engineers I know enjoy writing code, and hate documenting what they're doing. Git and issue-tracking have helped somewhat, but it's still very often about the "what" and not the "why this way".
"major architectural decisions don't get documented anywhere"
"commit messages give no "why""
This is so far outside of common industry practices that I don't think your sentiment generalizes. Or perhaps your expectation of what should go in a single commit message is different from the rest of us...
LLMs, especially those with reasoning chains, are notoriously bad at explaining their thought process. This isn't vibes, it is empiricism: https://arxiv.org/abs/2305.04388
If you are genuinely working somewhere where the people around you are worse than LLMs at explaining and documenting their thought process, I would look elsewhere. I can't imagine that is good for one's own development (or sanity).
I've worked everywhere from small startups to megacorps. The megacorps certainly do better with things like initial design documents that startups often skip entirely, but even then they're often largely out-of-date because nobody updates them. I can guarantee you that I am talking about common industry practices in consumer-facing apps.
I'm not really interested in what some academic paper has to say -- I use LLMs daily and see first-hand the quality of the documentation and explanations they produce.
I don't think there's any question that, as a general rule, LLMs do a much better job documenting what they're doing and making it easy for people to read their code, with copious comments explaining what the code is doing and why. Engineers, on the other hand, have lots of competing priorities -- even when they want to document more, the thing needs to be shipped yesterday.
Alright, I'm glad to hear you've had a successful and rich professional career. We definitely agree that engineers generally fail to document when they have competing priorities, and that LLMs can be of use to help offload some of that work successfully.
Your initial comment made it sound like you were commenting on a genuine apples-for-apples comparisons between humans and LLMs, in a controlled setting. That's the place for empiricism, and I think dismissing studies examining such situations is a mistake.
A good warning flag for why that is a mistake is the recent study that showed engineers estimated LLMs sped them up by around 24%, but when measured they were actually around 19% slower. One should always examine whether the specifics of a study really apply to them--there is no "end all be all" in empiricism--but when in doubt, the scientific method is our primary tool for determining what is actually going on.
But we can just vibe it lol. Fwiw, the parent comment's claims line up more with my experience than yours. Leave an agent running for "hours" (as specified in the comment) coming up with architectural choices, ask it to document all of it, and then come back and see it is a massive mess. I have yet to have a colleague do that, without reaching out and saying "help I'm out of my depth".
The paper and example you talk about seem to be about agent or plan mode (in LLM IDEs like Cursor, as those modes are called) while I and the parent are talking about ask mode, which is where the confusion seems to lie. Asking the LLM about the overall structure of an existing codebase works very well.
OK yes, you are right that we might be talking about employing AI toolings in different modes, and that the paper I am referring to is absolutely about agentic tooling executing code changes on your behalf.
That said, the first comment of the person I replied to contained: "You can ask agents to identify and remove cruft", which is pretty explicitly speaking to agent mode. He was also responding to a comment that was talking about how humans spend "hours talking about architectural decisions", which as an action mapped to AI would be more plan mode than ask mode.
Overall I definitely agree that using LLM tools to just tell you things about the structure of a codebase are a great way to use them, and that they are generally better at those one-off tasks than things that involve substantial multi-step communications in the ways humans often do.
I appreciate being in the weeds here, haha--hopefully we all got a little better at talking about the nuances of these things :)
Idealized industry practices that people wish to follow, but when it comes to meeting deadlines, I too have seen people eschew these practices for getting things out the door. It's a human problem, not one specific to any company.
Yes, I recognize that, for various reasons, people will fail to document even when it is a professional expectation.
I guess in this case we are comparing an idealized human to an idealized AI, given AI has equally its own failings in non-idealized scenarios (like hallucination).
Sure, you can ask the agents to "identify and remove cruft" but I never have any confidence that they actually do that reliably. Sometimes it works. Mostly they just burn tokens, in my experience.
> And it's not like any of your criticisms don't apply to human teams.
Every time the limitations of AI are discussed, we see this unfair standard applied: ideal AI output is compared to the worst human output. We get it, people suck, and sometimes the AI is better.
At least the ways that humans screw up are predictable to me. And I rarely find myself in a gaslighting session with my coworkers where I repeatedly have to tell them that they're doing it wrong, only to be met with "oh my, you're so right!" and watch them re-write the same flawed code over and over again.
This argument warrants introspection for "crusty devs", but also has holes. A compiler is tightly engineered and dependable. I have never had to write assembly because I know that my compiled code 100% represents my abstract code and any functional problems are in my abstract code. That is not true in AI coding. Additionally, AI coding is not just an abstraction over code, but an abstraction over understanding. When my code compiles, I don't need to worry that the compiler misunderstood my intention.
I'm not saying AI is not a useful abstraction, but I am saying that it is not a trustworthy one.
I do still write assembly sometimes, and it's a valued skill because it'll always be important and not everyone can do it. Compilers haven't obsoleted writing assembly by hand for some use cases, and LLMs will never obsolete actually writing code either. I would be incredibly cautious about throwing all your eggs into the AI basket before you atrophy a skill that fewer and fewer will have
How is a compiler and an LLM equivalent abstractions? I'm also seriously doubtful of the 10x claim any time someone brings it up when AI is being discussed. I'm sure they can be 10x for some problems but they can also be -10x. They're not as consistently predictable (and good) like compilers are.
The "learn to master it or become obsolete" sentiment also doesn't make a lot of sense to me. Isn't the whole point of AI as a technology that people shouldn't need to spend years mastering a craft to do something well? It's literally trying to automate intelligence.
The increasing levels of abstraction work only as long as the abstractions are deterministic (with some limited exceptions - i.e. branch prediction/preloading at CPU level, etc). You can still get into issues with leaky abstractions, but generally they are quite rare in established high->low level language transformations.
This is more akin to manager-level view of the code (who need developers to go and look at the "deterministic" instructions); the abstraction is a lot lot more leaky than high->low level languages.
In the 00s I saw so many C codebases with hand-rolled linked lists where dynamically resized arrays would be more appropriate, "should be big enough" static allocations with no real idea of how to determine that size, etc. Hardly anyone seemed to have a practical understanding of hashes. When you use a higher level language, you get steered towards the practical, fundamental data structures more or less automatically.
even JS doesn't churn as fast as the models powering vibe coding, and that cut & paste node app is still deterministic, compared to what happens when the next version of the model looks at AI-generated code from two years ago...
One thing I've noticed, though, is that when actually coding (without the use of AI; maybe a bit of tab auto-complete), I'm way faster working in my domain than I am when using AI tools.
Every time I use AI tools in my domain-expertise area, I find they end up slowing me down: introducing subtle bugs, forcing me to provide an insane amount of context and detail (at which point it becomes way faster to do it myself).
Just code and chill, man. Having spent the last 6 months really trying everything (all these context-engineering strategies, agents, CLAUDE.md files in every directory, etc., etc.), it really is still more productive to just code yourself if you know what you're doing.
The thing I love most, though, is having discussions with an LLM about an implementation, having it write some quick unit tests and performance tests for certain base cases, having it write a quick shell script, etc. For things like this, it's amazing and makes me really enjoy programming, since I save time and can focus on the actual fun stuff.
When I'm doing the coding myself, I'm at least making steady progress and the process is predictable. With LLMs, it's a crapshoot. I have to get the AI to understand what I want and may have to try again multiple times, sometimes never succeeding, and I end up writing a lot of text anyway. And in between, I have to read a lot of code that probably ends up being thrown away or heavily modified.
This probably depends a lot on what kind of project one is working on, though.
But it's like you said, I like using LLMs for completing smaller parts or asking specific kind of help or having conversations about solutions, but for anything larger, it just feels like banging my head to a wall.
Don't use agent mode, only use ask mode. Once I did that, it works as expected. I can still code but not have to rely on the randomized nature of "vibe coding."
One AI workflow I rather like seems to have largely vanished from modern tools: use a very dumb, simple model with syntax knowledge to autocomplete. It fills out what I'm about to type, and takes local variables and passes them to the functions I want to call.
It feels like writing my own code but at 50% higher wpm, especially if I can limit it to suggest only a single row; that prevents it from affecting my thought process or approach.
This is how the original GitHub copilot worked until it switched to a chat based more agentic behavior. I set it up locally with an old llama on my laptop and it's plenty useful for bash and c, and amazing for python. I ideally want a model trained only for code and not conversational at all, closer to the raw model trained to next-token predict on code.
I think this style just doesn't chew enough tokens to make tech CEOs happy. It doesn't benefit from a massive model and almost drains more networking than compute to run in the cloud.
Most editors and LSPs offer variable, method, keyword and a bunch of other completions that are 100% predictable and accurate, you don't need an LLM for this.
Devs are starting to realize that the sweet spot for AI support in coding is on a small scale, i.e. extended code completion. Generating huge chunks of code is often not reliable enough except for some niches (such as simple greenfield projects which profit from millions of examples online).
One of the core principles of my workflow (inspired by REPL development and some Unix tools) is to start with a single file (for a function or the whole project). Then I refactor the code to have better organization and to improve reliability, especially as I handle more scenarios (and failure modes).
LLMs are not useful in this workflow, because they are too verbose. Their answers are generic and handle scenarios you don't even support yet. What's useful is good documentation (as in truthful) and the code if it's open.
This approach has worked really well in my career. It gives me KISS and YAGNI for free, and every line of code is purposeful and has a reason to be there.
I’ve been actively using the first tier paid version of:
- GPT
- Claude
- Gemini
Usually it’s via the cli tool. (Codex, Claude code, Gemini cli)
I have a bunch of scripts set up that write to the tmux pane that has these chats open -- so I'll visually highlight something in nvim and pipe that into whichever pane has one of these tools open and start a discussion.
If I want it to read the full file, I'll just use the TUI's search (they all use the @ prefix to search for files) and then discuss. If I want to pipe a few files, I'll add the files I want to nvim's quickfix list, or literally pipe the files I want into a markdown file (with full paths) and discuss.
So yes - the chat interface in these cli tools mostly. I’m one of those devs that don’t leave the terminal much lol
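A minimal version of that selection-piping setup can be sketched as a single visual-mode mapping; the pane name "llm" and the <leader>ai binding here are assumptions for illustration, not the actual scripts:

```vim
" Hypothetical sketch: send the current visual selection to a tmux pane
" named 'llm' where a CLI chat tool (codex / claude / gemini) is running.
" Yank the selection, feed it to tmux's buffer via stdin, paste it into the pane.
xnoremap <leader>ai y:call system('tmux load-buffer - && tmux paste-buffer -t llm', @0)<CR>
```

The design relies on tmux's `load-buffer -` reading stdin (which Vim's `system()` supplies from the yank register) so no temp files are needed.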
I also have a personal rule that I will try something for at least 4 months actively before making my decision about it (programming language, new tools, or in this case AI assisted coding)
I made the claim that in my area of expertise, most of the time it is faster to write something myself than to write out a really detailed md file / prompt. It becomes more tedious to express myself via natural language than with code when I want something very specific done.
In these types of cases, writing the code myself allows me to express the thing I want faster. Also, I like to code with the AI auto-complete, but while it can be useful, I sometimes disable it because it's distracting and consistently incorrect with its predictions.
or the complete opposite. Very skilled people with a lot of experience in a specific project. I am like that too at my current job. I've REALLY tried to use AI but it has always slowed me down in the end. AI is only speeding me up in very specific and isolated things, tangent to the main product development.
For seasoned maintainers of open source repos, there is explicit evidence it does slow them down, even when they think it sped them up: https://arxiv.org/abs/2507.09089
Cue: "the tools are so much better now", "the people in the study didn't know how to use Cursor", etc. Regardless if one takes issue with this study, there are enough others of its kind to suggest skepticism regarding how much these tools really create speed benefits when employed at scale. The maintenance cliff is always nigh...
There are definitely ways in which LLMs, and the agentic coding tools scaffolded on top, help with aspects of development. But to say anyone who claims otherwise is either being disingenuous or doesn't know what they are doing is not an informed take.
Turn off the noise, just keep coding, and get better at it; go all-in on the fundamentals and it will pay off over time. You can leverage autocomplete to go as fast as you can. Don't waste your time writing md files; most of the world still hand-codes, sure, with some AI assistance, but without totally ceding control. They want you to believe the contrary to keep the bubble inflating, or to sell you some AI courses where you spend hundreds of dollars "learning" how to please a statistical tool into doing what you want. Spoiler alert: it eventually won't, and will steer away in unpredictable ways.
I feel this. Tbh I was excited about most of the previous tech fads I have come across. Sure, people got overzealous with them and also failed to realize that what F*NG needs is not what your three person startup needs, but there were always some good and interesting ideas there.
This just feels... a little too dystopian. Companies hoovered up more or less the entirety of our collective thoughts, writings, and output, and now want to sell it back to us, and I fear the cost is going to be extremely steep.
It's impressive, but at the same time it just feels like it's going to somehow be a net detractor to society, and yet I feel I need to keep up with each new iteration or potentially get washed over and left behind by the wave.
I am somewhat fortunate to be towards the top of the pyramid and also in a position where I could theoretically ride off into the sunset, but I fear the societal implications and the pain that is going to come for vast numbers of people.
> The agentic spam is exhausting. I just wanted to code.
Ditto. And we still can.
I've yet to use an "agent", and still use a chat UI to an LLM in Emacs. I rely on these tools for design discussion, rough prototyping, and quick reference, but they still waste my time roughly a quarter of the time I use them. They have gotten better in the last year, though, and I've been able to broaden my reach into stacks and codebases I wouldn't have felt comfortable with before, which is good.
I just have no interest in "agents". I don't want to give these companies more access to my system and data, and I want to review every thing these tools generate. If this makes me slower than a vibe coder, that's intentional. Thankfully, there are still sane people and companies willing to pay me for this type of work, so I'm not worried about being displaced any time soon. Once that happens, I'll probably close up shop, figure out an alternative income stream, and continue coding as a hobby.
Wrong. Continuing with the architect metaphor: understandably, he doesn't want to delegate digital draft drawings to an unpredictable statistical tool ridden with idiosyncrasies and biases, trained mostly on poor open-source data, and write some dumb architect.md to try to steer the will of the tool. He wants to design, and to do it better with some highly integrated, hidden AI. So far, whenever I hear that product X has AI, most of the time it's a product with zero real integration and just a chatbot in a separate window.
I've been using it today and honestly I'm impressed. I seem to be the exception when I say I don't care it's a vscode fork. Having access to the extension ecosystem seems like a boon to me and I could quickly get a setup to my liking to rival my current Jetbrains Claude Code setup.
It seems to streamline my existing Claude Code workflow with a much better UI. The tab complete seems the best I've experienced and the text/image selection, adding comments and iterating on a plan is genius.
Depressing to see everyone here unable to see the forest for the trees.
I agree! It's great. Until you hit the limit, then there's literally no path to continue.
I would happily pay 20 or whatever for 4x limits. I'm very curious what they end up offering. My major reservation is the side-project vibes: it's hard to believe in this long term unless Google themselves adopt it.
> Neither engenders user trust in the work that the agent undertook. Antigravity provides context on agentic work at a more natural task-level abstraction, with the necessary and sufficient set of artifacts and verification results, for the user to gain that trust.
I'm going to need an AI summary of this page to even start comprehending this... It doesn't help that the scrolling makes me nauseous, just like real anti-gravity probably would.
"A more intuitive task-based approach to monitoring agent activity, presenting you with essential artifacts and verification results to build trust."
The whole thing around "trust" is really weird. Why would I as a user care about that? It's not as if LLMs are perfect oracles and the only thing between us and a brave new world is blind trust in whatever the LLM outputs.
I'd say that because right now, verifying the LLM output (meaning: not trusting it) is a huge bottleneck (rightfully so). I guess they are trying to convince people that this bottleneck is no more, with this IDE.
Translation: "don't trouble your little brain actually trying to read the code that our model produced. we can show you pictures of the output instead."
It really seems like it's just standardizing into a first-class UI what a lot of people have already been doing.
I don't think I'm the target for this - I already use Claude Code with jj workspaces and a mostly design-doc first workflow, and I don't see why I would switch to this, but I think this could be quite useful for people who don't want to dive in so deep and combine raw tooling themselves.
Sure. Honestly I think you can get the same with git work trees, though I haven't tried.
After a couple iterations on this, I've ended up having claude code vibe-code a helper CLI in Go for me which I can invoke with `ontheside <new-workspace-name> <base-change>` and will
- create a new jj workspace based on the given change
- create a docker container configured with everything my unit tests need to run, my claude code config mounted, and jj configured
- it also sets up a claude code hook to run `jj` (no arguments) every time it changes a file, so that jj does a snapshot
- finishes by starting an interactive claude code session with `--dangerously-skip-permissions`
- it also cleans it all up once I exit the claude code session and fish shell that's running it
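For illustration, the steps above could be sketched as a small shell helper (the commenter's real tool is in Go; the image name, mount paths, and exact flags here are my assumptions, so this dry-run version just prints the commands it would execute):

```shell
# Hypothetical dry-run sketch of an "ontheside"-style helper.
# Swap the echo prefix for real execution once the details match your setup.
ontheside() {
  ws="${1:?usage: ontheside <new-workspace-name> <base-change>}"
  base="${2:?usage: ontheside <new-workspace-name> <base-change>}"
  # 1. create a new jj workspace based on the given change
  echo "jj workspace add --revision $base ../$ws"
  # 2. container with test deps, claude config mounted, jj configured
  echo "docker run --rm -it -v \$PWD/../$ws:/work -v \$HOME/.claude:/root/.claude -w /work dev-image claude --dangerously-skip-permissions"
  # 3. clean up the workspace once the interactive session exits
  echo "jj workspace forget $ws"
}

ontheside feature-x abc123
```

The snapshot-on-file-change hook and the fish-shell cleanup are left out here; the point is only the create/run/cleanup shape of the workflow.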
With this I can have Claude Code working asynchronously on something, while I can review (or peek) the changes in batch from my main editor by running `jj show <change-id>` / `jj diff -r "..."` (which in my case opens it up in the Goland multi-file diff viewer). I can also easily switch to the change it's working on in my main editor, to make some manual modifications if necessary.
This is, in general, primarily for "in the background async stuff" I want to have it work on. Most of the time I just have a dead-normal claude code session running in my main workspace.
Minor self-plug - if you want, I posted a jj intro article a while ago[0], though it doesn't include my current workspace usage.
The Agent Manager view providing a unified view of all active agents and allowing you to immediately respond to any approval requests or followup questions looks very useful regardless of which VCS you're using under the covers. Am I missing something here that jj does?
See my setup detailed in another sibling comment of yours, jj is just a small part of it, and you can probably get that with git too.
I’m already at full mental capacity planning and reviewing the work of two agents (one foreground which almost never asks for approval, and one background which never asks for approval).
I don’t really need the ability to juggle more of them, and noticing their messages is not a bottleneck for me, while I’m happy with the customizability and adaptability of my raw’er workflow.
Fair enough. I use git worktrees (with a script that creates the git branch, worktree and opens a new vs code workspace). You're right, managing more than about two active sessions at once is probably the limit though I'm somewhat hopeful that better tooling similar to the Agent Manager window here would allow me to scale a bit past that especially if some of those sessions are more design explorations.
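A script like the one described can be tiny; here's a hypothetical sketch (the function name, branch/directory naming, and the `--new-window` flag are my guesses, not the commenter's actual script), written to echo its commands rather than run them:

```shell
# Hypothetical worktree helper: prints what it would run (dry run).
new_worktree() {
  branch="${1:?usage: new_worktree <branch>}"
  dir="../$(basename "$PWD")-$branch"
  # create the branch + worktree, then open a fresh VS Code window on it
  echo "git worktree add -b $branch $dir"
  echo "code --new-window $dir"
}

new_worktree agent-session-1
```

Replacing the `echo`s with the bare commands gives the real helper, assuming `git` and the VS Code `code` CLI are on PATH.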
Reminds me of the pre-GitHub days, when I had to use CM tools designed to appeal to project and CM managers, not to the poor developers who had to use them every day. Anybody else remember Harvest?
"Professional riders number roughly three to six thousand worldwide, while professional drivers number roughly twenty to forty thousand across major sanctioned series."
You weren't the target audience. The target audience was manager types tired of being told no by engineers. Always listen to the quiet parts left unspoken/unacknowledged.
It's the same thing they tried to sell with low/no-code.
The problem is that the engineer turning what you want into code isn't normally the bottleneck. I would say about 50% of my job is helping people specify what they want sufficiently for someone to implement.
Non-technical people are used to a world of squishy definition where you can tell someone to do something and they will fill in the blanks and it all works out fine.
The problem with successful software is that the users are going to do all the weird things. All the things the manager didn't think about when they were dreaming up their happy path. They are going to try to update the startTime to the past, or to next year and then back to next week. They are going to get their account into some weird state and click the button you didn't think they could. And this is just the users that are trying to use the site without trying to intentionally break it.
I think if managers try to LLM up their dreams it'll go about as well as low/no-code. They will probably be able to get a bit further because the LLM will be willing to bolt on feature after feature and bug fix after bug fix until they realize they've just been piling up bandaids.
I am cautiously optimistic that there will be a thriving market for skilled engineers to come in and fix these things.
Perhaps it's worth posing the question: what sorts of "engineers" might feel threatened by agents? Those doing engineering, or those who spend their careers wading in the shallows? Competent designers with deep comprehension, or, at best, the superficial pedants?
Once the UI soup around AI dev use has settled (and it's getting closer) I bet you we will see native apps with c/c++/zig/rust backends that render so much faster on all the junctions that aren't roundtrip limited (and yes, that will still matter to many people).
Why would you bet that? It hadn't happened before AI so not sure why it would now. In fact this is why VSCode was even created, because it was easier hacking on a browser renderer than making something from scratch.
These are two of the fastest editors, and I use both, but I don't think they're for the same thing: Zed is for multi-file projects with moderate IDE abstractions (Worse than Jetbrains; better than most others); Sublime is for editing one-off files with syntax highlighting.
I would bet in time they are vindicated. Then Google et al will release something with much advertising hype about "returning to basics" or something in a couple of years.
It's also an absolutely basic fork. They haven't even bothered with a custom theme, or custom UI, it's just vscode with an agents window slapped on top.
Weirdly, out of all the vscode forks the best UI is probably bytedance's TRAE
My YC2026 startups are an AI agent that automatically manages your VSCode forks, and a public safety computer vision app for smart glasses that predicts whether someone can afford a lawyer based on their skin color
They'll have to compete with these other pending funded companies:
- ai therapist for your ai agents
- genetic ai agent container orchestration; only the best results survive so they have to fight for their virtual lives
- prompt engineering as a service
- social media post generator so you can post "what if ai fundamentally changes everything about how people interact with software" think pieces even faster
> genetic ai agent container orchestration; only the best results survive so they have to fight for their virtual lives
AI Agent Orchestration Battle Bots. Old school VMs are the brute tanks just slowly ramming everybody off the field. A swarm of erratically behaving lightweight K8s Pods madly slicing and dicing everything coming on their way. Winner takes control of the host capacity.
Is the extension system in VSCode not powerful enough to make these just normal extensions for a vanilla VSCode executable? Or is everyone just going for lock in, since if you download MyFork, you can't start using some other extension that uses OtherGuysModel?
Back when Cursor was new (before literally everything was AI hype) they explicitly called out that they wanted to do more in-depth integration with the editor than was possible with just the extension APIs.
Presumably that hasn't changed much. If you want to do any large-scale edits of the UI you need to spin up a fork.
I don't know, does Cursor offer anything substantial beyond an extension like kilocode? (I've only used vanilla VSCodium branch with various extensions but they all seem to integrate everything from tab-completion to complete UI agentic take-over very well)
Correct. One could say the same thing for browsers. I suppose on the one hand it's good that it's relatively easy to spin up a new project like this, on the other hand one must swear allegiance to their large software company of choice.
With all due to respect to the folks working on Antigravity, this feels like a vibe-coded VSCode fork to me. Font sizes, icon sizes, panel sizes are all over the place (why?). To top it all off, the first request just failed with overload/quota exceeded errors (understandable, but still).
Looks like I'll wait to see if Google cares about putting the polish into a VSCode fork that at least comes close to what Cursor did.
The UI is certainly buggy, and things are getting fixed all the time. Guess it was more a "let people try the agent manager" instead of overfocusing on looks.
I haven't used Windsurf (been using Claude Code and similar). Does it provide an Agent Manager window/view? This looks more useful to me than the browser integration piece.
Google likes to fuck with basic browser functionality for some reason. Scrolling, sometimes also how “click” intents through touch are triggered (that is, using js listeners for touch events instead of watching for the browser to communicate a “click” on an element; this does usability-killing shit like make a touch-to-stop-scrolling get interpreted as a click on whatever happens to be under your finger). I have no idea why they do this, but they do it a lot, so it must be a cultural thing.
And I don’t mean like some designers who hijack scroll to deliver a different experience, like slide-style transitions or something (which may or may not be, differently, awful); they’ll override it just to give you ordinary scrolling, except much worse (as on this page).
Seems like a lot of work to do just to make something shittier, but what do I know, I probably can't implement A* on a whiteboard from memory or whatever.
Basic-to-great rationality or skill may not be what is being rewarded here (although the baseline of course needs to be met) - it could well be compliance capability. Hence the string of arbitrary memorization exercises.
Yeah, though with theirs I can usually see why they did it (even if I’d rather they didn’t). Google’s MO is messing with scrolling for what often appears to be no user-facing reason at all.
I can't believe these "smooth scrolling" scripts are still a thing. I was wondering why I was having a hard time scrolling the page on my phone, when I got to my PC and felt the reason.
It's incredible to think how many employees of this world-leading Web technology company must have visited this site before launch, yet felt nothing wrong with its basic behavior.
I just have zero faith in Google. How long until we hear that someone mysteriously got banned by Google (as we see on HN every few months, or so it feels) and now has no AI tooling, etc., because it's all married to their Google Account?
Additionally... Google Code was shut down in 2016? I have zero confidence in such a user-hostile company. They gave you a Linux phone, they extended it, and made it proprietary. They gave you a good email account, extended it, and made it proprietary. They took away office software from you via Google Docs, so now you don't even own the software; they do.
I wanted to like it as a Gemini Pro subscriber who does not want to also pay for Claude Code, but after running Antigravity for ~10 minutes and a few back-and-forth exchanges, I got the 'Model quota limit exceeded' message with no indication of when it will reset.
You get the impression that these products from Google (including Gemini CLI ) are just made as prototypes -- they are supposed to work as a demo, but Google does not actually care about having a working workflow with them. Claude Code on the other hand is an actual product that works well.
If you want to feel something with software, leave the industry and never look back, saving programming as something you do for your own joy/reward (I'm not being hyperbolic—I'd argue we're in the early days of the web's "dark ages").
Unfortunately, once money came into the picture, quality, innovation, and anything resembling true progress flew out the window.
And agentic coding is about working at a much higher conceptual level. Further from the ground. Antigravity is a functional metaphor.
My only issue with it is that it's too long at five syllables, and "anti-" is an inherently negative connotation. I'm guessing this will eventually get renamed if it gets popular, much like Bard was.
As the parent said, actual anti-gravity is world-changing technology. It's telling the very laws of nature to go fuck themselves, you're gonna do what you want, even if all of known physics says it's impossible.
Working at a higher conceptual level is just project management. You're the legislator giving out unfunded mandates rather than the agency staff that has to figure out how to comply. There's power there, but it isn't anti-gravity.
That's why it's a metaphor. "Operation Warp Speed" also delivered vaccines quickly, but not faster than the speed of light.
The list of company and product names that are based on a metaphor that is very obviously exaggerated is endless. Google doesn't index a googol number of pages either.
I feel it's a bit ignorant of you to double down on your argument and compare a cloned product release by some MacBook-swinging Google engineers with vaccination, which actually positively impacted many human lives.
Are you for real? When someone disagrees, it's not "ignorance" or "doubling down". It's just legitimate disagreement. There's nothing I'm ignorant of here, so please don't throw around insults like that.
I just continue to stand by the fact that naming products using exaggerated metaphor is standard practice. The idea that it is "shameful" or "ignorant" seems absurd. I think it's OK not to take it too seriously. Nobody is going to be confused and walk off of a cliff or something because the product is named "antigravity"...
Do you get upset that the Milky Way candy bar doesn't actually contain a galaxy within? Or that the Chicago Bulls aren't as strong as actual bulls?
Since when did disagreeing become policing? But yes, if someone calls me ignorant without any justification, I'm going to disagree. And if you think that's "policing", I'm sorry but you seem to be the ignorant one here around the meaning of that word.
Geez, it's just a name. Is it too much to not get worked up over a perfectly innocent and fun name?
Perfectly innocent and fun name, coming from Google? Are you for real? That's one of the craziest things I've read in this forum, someone still thinking Google is an innocent startup with a "Don't be evil" motto.
You think the name is evil...? Sorry, but that's one of the craziest things I've read in this forum.
We're not talking about monopolistic business strategy here or anything. We're talking about a product name. So yeah, I think the name is perfectly innocent and fun. I cannot understand the level of conspiratorial thinking that must be involved to think "antigravity" is some kind of offensive choice. Bizarre.
My heart really sinks every time someone launches a "new IDE" and it turns out to be VS Code. VSCode can be turned into an IDE for _some_ platforms. But not for others. It remains a text editor with some nice extras (syntax highlighting, navigation) but lacking others (debugger, testing, ...).
What's most astonishing is that I can't seem to find which platforms it actually works for. I don't doubt the LLMs can write code in almost any language and for almost all frameworks, with varying success.
But which languages/platforms/framework will the IDE work for technically, having compilers etc built in? I don't care if an LLM can help me with the code, if I then can't compile it within the same IDE!
I recognize the guys in the video, they were in marketing videos for the Windsurf IDE before its founding team was cannibalized by/absorbed into Google.
"Autonomously, an Antigravity Agent writes code for a new frontend feature, uses the terminal to launch localhost, and actuates the browser to test that the new feature works."
Very interesting times; I'm glad to see browser automation becoming more mainstream as part of the AI-assisted dev loop for testing. (Disclosure: started the Selenium project, now working on something similar for a vibe coding context.)
Most people are missing the point here. Testing the GUI/feature more reliably is something Gemini 3 could unlock (looking at the ScreenSpot-Pro benchmark and its general improvement in visual understanding). At least for the (hobby) projects I attempted, this was a real bottleneck: having to test the GUI after each change, since changes quite often break something.
I really don't know why I struggle so much with this stuff. I believe these models / agents / whatever write code that is often at least as good as the code I write, and they are super helpful tools, but it just feels like it takes away so much of the joy that is programming to me. I'm not saying it's "right" of me to feel this way, but for me the struggle, and the figuring things out by testing, identifying patterns, or looking deeper into a library's implementation (etc) is part of the challenge that makes programming and software construction fun.
I'm in your boat. I picked this career because I enjoy solving problems and thinking and understanding. I'm interested in how things work inside below the layers. To me, it's like the tech has a purpose of its own, and not just to provide value.
Using agents effectively is this whole other skillset including managing requirements, prioritization and, worse yet, I'm rarely left with any knowledge. I don't nearly get the same joy out of "I finished a task with an agent" like I do with "I had a problem, I delved deep to understand it, learned something new and solved it"
Then again, I bet people making furniture out of wood felt the same about industrial furniture factories. And it can be argued that not every use case needs custom tailored furniture...
Google subscriptions and services are so terribly mismanaged that I will be staying away, no matter how incredible this shallow fork of vscode may be.
I remember a previous story months ago about Gemini that had Google PMs trying to hype their product, but it was all questions about how nobody could get Gemini API keys with any number of paid subscriptions.
> Google Antigravity is an agentic development platform, evolving the IDE into the agent-first era.
> Antigravity enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core. Agents operate across the editor, terminal, and browser, enabling them to autonomously plan and execute complex, end-to-end tasks, elevating all aspects of software development.
I have absolutely 0 idea why any developer would rely on any IDE produced by google. It'll be canned within 5 years max, with 3-4 seeming like a reasonable estimate of the lifespan of the product
I've been using my current IDE for 17 years, and plan to continue using it for at least another 15
Antigravity is based on VS Code, not designed from the ground up, and has second order revenue from the AI subscriptions (financials probably counted under the AI umbrella).
I still wouldn't trust a Google product to stick around, but these hints aren't a reliable oracle either.
When AS was launched Android was the only other viable option and it is the same even today. I don't believe Google's AI products will reach and/or sustain the same dominance as Android.
It is a product launched in the hype cycle of AI. Google has plenty of other products (launched during hype cycles) that are gathering dust.
That's not a guaranteed signal that it will meet the same fate but its something strong enough to be wary of.
It's interesting to think that Google's Antigravity is a forked version of MSFT's VS Code, which uses a browser engine built by Google, which they forked from Apple, which they forked from KHTML.
This is the fruit of Windsurf brain-drain and I think it might be better than what's out there since those guys got to start from scratch from everything they learned building Windsurf
It's insane to me that I can't pay $20 or $200 after running out of limits in ~5 messages.
Why would you not at least link it to the Pro and Ultra accounts?
At least you could upsell the Pro subs to Ultra. Millions of Claude Code and Codex users who are into agentic coding are your serviceable market paying attention today.
Now I'll delete antigravity and go back to codex / claude code / cursor ...
Does anyone here have a take on why so many people are forking VSCode instead of writing a plugin? Is AI codegen the kind of thing that would be impossible with a plugin or something?
My best guess would be that plugins are limited into what they can do within VSCode, and rewriting a whole IDE/Text editor just for AI Agent seems a lot of work.
For example, a while back the vscode-pets[1] plugin became popular; I tried it and noticed that the pet can only live within a window, whether that's the explorer section or its own panel. I thought it'd be more of a desktop pet that could roam anywhere within VSCode, but apparently there are limitations (https://github.com/tonybaloney/vscode-pets/issues/4).
So my guess is that forking VSCode and customizing it that way is much easier to do things that you can't with a plugin while also not having to maintain an IDE/Text editor.
Nice that it's built-in, Claude Code needs an MCP for this at least.
> User Feedback: Intuitively integrate feedback across surfaces and artifacts to guide and refine the agent’s work.
I wish they'd just let me edit the implementation plan directly instead of me having to explain the corrections. Claude Code has the same weakness. Explaining the corrections is slower than editing the plan manually, and it still keeps the incorrect text in context as well.
> An Agent-First Experience: Manage multiple agents at the same time
Sounds nice in theory but I assume you can run multiple agents for 5 minutes or so and then you're out of credits.
As a claude code user I'm not really sold on this product.
Why can I not authenticate into Google Antigravity?
Google Antigravity is currently available for non-Workspace personal Google accounts in approved geographies. Please try using an @gmail.com email address if having challenges with Workspace Google accounts (even if used for personal purposes).
To save others the trouble, it doesn't matter whether you use Chrome or Safari for the auth flow. It's broken on both. (I'm using a personal @gmail account.)
Had the same issue, have been able to sign in finally using a Google Cloud Identity (former Workspace) account by changing my IP via a VPN to Singapore. No idea why, but that worked. Tried a few other countries too, but only had success with Singapore.
I spent a few days with Firebase Studio when it was announced. I stopped using it because it was clearly a very early alpha - tons of bugs, and didn't seem well thought out. Now, less than a few months later (!!!), they announce a competing IDE with essentially the same functionality, but a different brand? Is the right hand talking to the left?
Petty nitpick, but this sentence doesn’t sound right
> “Google Antigravity's Editor view offers tab autocompletion, natural language code commands, and a configurable, and context-aware configurable agent.”
Is it a typo or was there a reason to add configurable twice?
Pressing the "Submit" button on their "Google Antigravity for Organizations Interest Form" (https://antigravity.google/interest-form) doesn't actually do anything for me (tried Firefox and Chrome) -> their metrics will indicate that there's no interest from organizations -> the product will be killed in a year.
I was very hyped: maybe Google finally did something new, complete, unifying CLI and IDE, a sort of Claude Code Web but as an efficient, IDE-like, local thing.
Generally if you are paying full price (paying per token), then it's not used for training.
If you are not paying, or paying a consumer level price ($20/mo) you will be trained on.
ETA: In the terms they say they use your data because "free" is the only option available in preview. However it does say you can disable sharing in your settings...
No, but you can be quite sure that somewhere deep inside the ToS there is a line saying their telemetry swallows your soul. If not, it will be added. It's Google; that's what they do.
After the first five minutes of using it on Ubuntu, it crashed with an error saying I don't have enough free memory; a quick look at system stats proved that wasn't the case.
Anyway, not a great first impression. I guess I'll try again in a few months.
I'm having the same issue. I thought it was just me or something with their cloud network. I also haven't been able to download Android Studio from the website for a month. I couldn't even download it from my Macbook so probably not the same issue.
Had some issues setting up with my Google account, never went past the setup page. Some friends faced the same problem, but managed to advance by skipping installing extensions. Trying that now to see if it works!
Tested it for roughly two hours now: far from ready for prime time, very buggy, and clearly just quickly built on Windsurf's already rather issue-laden code base. Essentially a less well-thought-out imitation of Trae's Solo mode, added in a second window on top of VSCode and not very well integrated. It struggles with terminal commands and, despite the issues visible in the browser window and in the screenshots taken by the model, proclaims the task to be completed. Tool calls also aren't as reliable as I would have expected considering their ownership of the code base; hard to tell whether that's inherent to the model (it was a major issue with 2.5 Pro) or Antigravity-specific. I'm hoping the latter.
Additionally, there are issues setting up accounts (Singapore VPN solved that for me), no support for Workspace users, only a free tier that requires data sharing, no additional rate limits for paying Pro or Ultra customers, etc. Even worse, Gemini CLI currently does NOT provide Gemini 3 Pro for Ultra Business customers despite paying over € 260,- per month, which is frankly ridiculous.
I'll be honest: I was speculating that the reason for the multi-month delay between the first A/B tests of Gemini 3 class models and the final release was so they'd have all their ducks in a row. Have some time to test everything, improve tooling, provide new paid subscriptions, and/or ensure existing ones get access to everything on day one. But they didn't.
Gemini 3 Pro seems very interesting (too early to say), but compared to every other recent launch by OpenAI (5, 5.1, Codex variants), Anthropic (Sonnet and Haiku 4.5), even Kimi (K2 Thinking) and Z.AI (GLM-4.6), this is by far the least organized launch of any frontier lab.
A buggy IDE which is unusable for paying customers, no CLI access for Ultra business (and none at all for Pro of any kind), etc. is frankly embarrassing when considering what competitors manage to provide the day a model launches.
What have they been working on these last two months besides going on X and posting "3" every couple of days? Why is there no paid Antigravity tier, no way to use Workspace accounts, etc? Before launching in this state, I feel it'd have been better to delay a bit more if it was absolutely needed.
Also, correct me if I'm wrong but isn't this the fourth or fifth IDE built by Google for LLM assisted coding? What happened to IDX and Firebase Studio and aren't they also based on VSCode?
I can't get the agent to use my MCP server. The MCP config is provided, and the application can query the tools, but the agent can't access my server; the only thing I see in my server logs is the HTTP calls the application makes to list the tools. It knows there are tools, because it sees the list, and it tells me which tool it's trying to connect to. But that connection fails.
I have lamented the fact that download buttons can be hard to find on software home pages sometimes. But having just a download button and a "more" button on your landing page seems to be taking things a bit too far.
Something incredibly weird is going on in the "AI space", everyone seems to believe that they need their own browser and IDE. All of them are just forks of each other, so why aren't they extensions or plugins?
I can't really explain what the issue is. I'd assume it's about lock-in, but I don't see a VS Code fork or yet another Chromium browser being something that a person couldn't easily replace with another similar fork, but with a different AI. Is that the pitch internally? Lock users into a browser or IDE, so they'll be forced to use a certain AI?
> Is that the pitch internally? Lock users into a browser or IDE, so they'll be forced to use a certain AI?
Shrugs that's the only reason that makes any sense short of they're just being blindly mimetic (which, let's be honest, isn't outside of the realm of possibility these days).
Seems interesting; this makes for the second VSCode clone with AI that Google has made. The demo they showed in the video avoided showing code, so I guess that's what they're aiming for. Although when they mentioned you can easily verify code quality by looking at end-product screenshots, it felt like they don't know what 'code quality' means.
On my M2 MacBook Air with 16GB of RAM, it took over 12 minutes to start up and get to a usable state. When it did, it was plainly just a jacked-up version of VSCode. Opening a project caused it to hang again. Dumped it. VSCodium, with the terminal pane open so I can talk to Claude, works fine for me...
I tried it.
Maybe I got unlucky, but Gemini performed poorly in my testing.
I gave it a task that I was working on with VSCode and GPT Codex 5.1.
Gemini 3 repeatedly failed to finish the task and started to go down a rabbit hole on an unrelated task.
The browser extension is really cool and it provides a needed tool for the agent to use. It used the extension to show the page that it updated in the task document (the task doc is great too).
However, it showed me a page and said it was done, when it was clearly not done and not what I asked for.
I was expecting weaker tooling and a better model. I got good tooling and a not very good model.
Oh no. What's with the scrolling on the blog page? What a terrible experience. It's clearly vibe-coded and AI-tested. If a senior had seen this meddling with the scrolling behavior, it would never have made it into production.
Your defense of Google might sound smart, but VSCode is not just a fork of Electron and Chromium; there's a lot of work there, not just clicking a fork button.
I was genuinely impressed by the Antigravity browser plugin for "agentic" work.
I ran into a neat website and asked it to generate a similar UX with Astro, and it did a decent-ish job of seeing how the site handled scrolling, visually and in code, and replicating it in a tidy repo.
Small feedback if any of the Antigravity people read here: "Fast" is not a great name for the "eager" option (vs. "Planning") because "Fast" is associated with "dumb" in LLMs (fast/flash/mini). Probably "Eager" would be a more descriptive name
I don’t really understand, if replacing developers is right around the corner, why throw money into so many IDEs. Or perhaps it’s really cheap to produce something like this?
Slightly off-topic, but I really hate this trend that now every developer / researcher / engineer also needs to be an actor (typically cringy one) pretending to be quirky for the camera while trying to act unnaturally excited about the technology.
I really miss the days of the professional casualness and naturalness of something like the "mother of all demos" [0]. Like, can you imagine the guy wearing a turtleneck and going, "but wait!" and acting surprised after every sentence? It would NOT have been the same demo.
Someone decided that the audience of standup comedy overlaps 100% with the tech-meetup one.
Jokes aside though, it's much broader than that, it's just that the zeitgeist dictates that everyone shifts from work to meta-work: musicians must impress not with their music but the way they make music, researchers must entertain, developers must manage agents, children watch someone else play games instead of playing themselves.
That's yet another increment to the already dizzying level of simulation per Jean Baudrillard.
> I really miss the days of the professional casualness
Yes, the professional actor that doesn't seem like a paid actor is preferable to the autistic weirdo. That's why they get paid the big bucks and we get relegated to the basement.
There is currently no support for:
Paid tiers with guaranteed quotas and rate limits
Bring-your-own-key or bring-your-own-endpoint for additional rate limits
Organizational tiers (self-serve or via contract)
So basically just another case of vendor lock-in. No matter whether the IDE is any good - this kills it for me.
Vibe coding has gotten to the point where it's become psychological, and these days all I work with is a "fast" solution like Cursor's Composer 1. I tried Gemini 3 inside Cursor, and while I had no real application to properly benchmark it with, it felt "already slow" in a very fast race.
It is my understanding that no experienced programmer would even consider touching that stuff given Google's stellar track record in developers' satisfaction.
I believe it is aimed at investors. Thus it will be forgotten the minute it stops influencing stock price.
Thus there is no need to take it literally as a developer tool - it's not.
That’s what it feels like yes. It has a lot of overlap with a ton of stuff that Google already does, and it seems like it’s one of those “rather than improving an existing product, let’s create a new one because that gets us a promotion” situations which Google is well known for.
Trip report: I'm using it now to revamp a dashboard I'm working on. TBH it's not feeling much better than Codex - it couldn't figure out how to launch chrome with my default profile nor how to regenerate the css with tailwind. I'm also getting a lot of model quota errors like everyone else.
I keep getting: "Antigravity server crashed unexpectedly. Please restart to fully restore AI features." On Ubuntu 24.04 - others on reddit reporting the same with 24.04.
Later edit: Probably this one [1], which is par for the course for Alphabet, they're, conceptually, still living in the early 2010s, when this stuff was culturally relevant.
The thing is, I tried it and it took 10 seconds to import all my settings from Cursor. The moat for VSCode clones is really small. I imagine people will jump a lot from clone to clone, like from model to model now.
I'm stuck on "setting up this account" like most people. What a botched launch. This kind of bugginess and unreliability has become so much more frequent since big tech started tightening the screws with mass layoffs.
>>Google Antigravity's Editor view offers tab autocompletion, natural language code commands, and a configurable, and context-aware configurable agent.
Okay, but is it configurable? Also, can you configure it to write DRY code?
This thing crashes on Ubuntu LTS 24.04 during start. Apparently all these agents are not able to ensure that a desktop app starts on a popular Linux distribution.
If Google has forgotten how to do software, then the future doesn't look bright.
That floating chess board was a subliminal message: Your project will teeter, and critical pieces will fall off. You will occasionally make an illegal move as the board annoyingly shifts beneath you.
Tried Antigravity for 2 queries and my model quota limit breached. Model definitely felt better than GPT 5.1 (my current daily driver). I am continuing to use Gemini 3 Pro on Cursor to evaluate further.
I don't get how these agents can work when even Claude Sonnet 4.5 (for example) needs a lot of hand-holding for basic, simple bugfixing stuff. Wouldn't the agents just be huffing and puffing their way off the rails all the time?
Did any of these VS Code forks fix their issue yet where the lack of official marketplace access leads to extensions being severely outdated and rife with security issues?
I downloaded Antigravity this morning and was able to get this Mobius Clock debugged in a few minutes - then for the heck of it added a whole list of features! I'm blown away by how fast you can work. Yes, there were a series of problems, as expected whenever you attempt hard stuff, but at the end of the day, do check out the improved mobius clock!!! https://www.mobiusclock.com
I don't want to hate on this but I remember last week, when as a backend developer doing frontend, I spent about 20 minutes prompting Claude Sonnet in a loop trying to build a landing page for a new feature.
The task was to create a header, putting the company logo in the corner and the text in the middle.
The resulting CSS was an abomination - I threw it all away and rewrote it from scratch (using my somewhat anemic CSS knowledge), ending up with like 3 selectors with like 20 lines of styles in total.
This made me think that 1: CSS and the way we do UI sucks; I still don't get why we don't have a graphical editor that can at least do the simple stuff well. 2: when these models don't wanna do what you want them to, the way you want them to, they really don't wanna.
I think AI has shown us there's a need for a new generation of simple-to-write software and libraries, where translating your intent into actual code is much simpler and the tools actually help you work instead of barely letting you fight all the accidental complexity.
We were much closer to this reality back in the 90s when you opened up a drag and drop UI editor (like VB6, Borland Delphi, Flash), wrote some glue code and out came an .exe that you could just give to people.
Somewhere along the way, the cool kids came up with the idea that GUIs are bad, and everything needs to go through the command line.
Nowadays I need a shell script that configures my typescript CDK template (with its own NPM repo), that deploys the backend infra (which is bundled via node), the database schema, compiles the frontend, and puts the code into the right places, and hope to god that I don't run into all sorts of weird security errors because I didn't configure the security the way the browser/AWS/security middleware wanted to.
>Somewhere along the way, the cool kids came up with the idea that GUIs are bad, and everything needs to go through the command line.
It's important for people to feel like "hackers"; that is the primary reason why the command line sort of exploded among devs. Most devs will never admit this... they may not even realize it, but I think this is the main reason it went big.
The irony is that the very thing that makes devs feel like "hackers" is the very thing that's enabling agentic AI and making developers get all resistant because they're feeling dumber.
Yes, it's also failing on my workspace account but worked on my personal one. Might be a bug, or a delayed deployment for workspaces b/c it might need to be "enabled" by admins?
Oh well. Uninstalled. This was my first experience doing software development guided by AI. Doesn't seem like a tool that will serve me well in the long run.
There's no way I am using such an important piece of life as an IDE from Google just because I know they are going to kill it within 3 years, if it survives that much. Probably will die with the Windsurf guy jumping ship again.
It's really kind of pathetic how we live in a future where "antigravity" is a text editor that lies to you, "hoverboards" are one-wheeled electric skateboards that burn your house down, and... well, can't think of a third thing at the moment, but you know the vibe.
Lotta people mining science fiction for cool names and then applying them to their crappy products, cheapening the source ideas.
I've seen "agent" and "agentic" so many times in the last few months that the usage of the term is quickly becoming one of my biggest pet peeves. The previous one was "enshittification", and I'm glad that one was a short-lived fad.
That's a big name for a slop fork. So many possibilities (with LLMs and without) but Google just can't bring themselves to do anything creative, let alone transformative.
I'm going to treat this like Kiro, and just use it until they start charging for it and then probably switch back to VS code with its built-in agent support.
Eventually they're going to do a rug pull, and instead of paying $10 a month for tons of AI code requests, it's going to be $200 or $300 for that. The economics just aren't there to actually make a profit; hopefully before the rug pull happens, local models on normal hardware will be fast enough.
On Windows, it behaves like malware, suddenly flashing command prompt windows when you interact with it. Not very nice (also lazy, since you don't need that flashing if you're a legitimate app).
Nobody is interested in proprietary editors. People only accepted vscode because most of it was open source. AIUI this is a fork of that, but seriously, push out the changes or pass.
Looks great; won't touch it, since they are probably going to do a switcheroo or a shutdown as usual.
And of course I would need to look at all the implications of spying, being locked out of my Google account, and the absence of support that are Google's MO. No time for that. Not for them.
yeah... almost completely broken on my iPhone. Something tells me they used Antigravity to vibe-code the website. Would explain other issues mentioned, like Vim keybindings being ignored.
..."You're absolutely right! I did mess up the internals of that feature and incorrectly reported that it works. Let me try again..."
Another Google product whose launch will be used to justify somebody's promotion, only to be left for dead only a few months later after said promoted person moves on to something else.
Why would I even bother getting mildly invested in this when the product launch/promotion incentive structure at Google is so well known?
Looking at that page makes me think I should go the other direction and switch from a graphical IDE to vim or something. You know, ground myself by adopting more gravity.
I agree that the name is horrible. It has Anti, gravity and Google in it.
Vim and all other non-graphical tools are great because they work in WSL, they work in linux servers and they may even work in Docker containers, although sometimes you have to do something like: apk add vim
Too bad they never show these magic AI-Assisted tools being used to fix real-world bugs / implement feature requests in their open source GH repositories.
My experience with GPT and Claude, is that they are fantastic for learning something brand new to me, as a kind of tutor..
But for writing code in some domain I am good in, they are pretty much useless.. I would spend a lot longer struggling to get something that barely functions from them VS writing it myself, and the one I write myself will be terse and maintainable + if it has bugs they will be like obvious ones, not insane ones that a human would never do.
Even just when getting them to write individual functions with very clear and small scopes.
Honestly, the full page doesn't give you much more. Not a SINGLE product image. All paragraphs about "agentic" blah-blah you have read 100s of times by now - I do not see how this is anything different from all the other AI VS Code forks, besides that it comes with Gemini from the start.
Seems like they jumped the gun on the website release; the first version I saw was a wall of text, and now it has photos and videos. Maybe they have an agentic CI/CD.
It’s…a VSCode fork? Really? What has become of Google? Ten years ago, when I was getting into the world of software, there was still an aura about them. They built everything in this huge monorepo, and it worked. They were this deeply technical company for whom it seems anything could be done.
And now they can’t even ship a desktop app without forking VSCode? Look, I get it. There’s this huge ecosystem. Everyone uses it. I’m not saying it’s damning or even bad to fork it.
But why is this being painted as something revolutionary? It’s a reskin of all the other tools which are variations on the same theme, dressed up in business speak (an agent-first UX!). I’m sure it’s OK. I downloaded it. The default Tokyo Night theme is unusable; the contrast can’t be read. I picked Vim bindings, but as soon as I tried to edit a file I noticed that was ignored.
What happened? Is this how these beautiful, innovative companies are bound to end up?
I can see why people don't release stuff under a permissive license anymore. It is absolutely insane that Google is even allowed to do something like this.