Hacker News: neoden's comments

Sadly, I came back to using VS Code recently. There's a lot to like in Zed, but imo the decision to write their own rendering framework is unfortunate: there are still ridiculous unresolved problems on Linux, like poor font rendering (especially on low-DPI screens) and visible UI lag in an editor being developed to be blazingly fast. So far, VS Code is faster for me.


Remember that Zed hasn't reached 1.0. On other platforms it is much faster and consumes much less battery. Just like Ghostty, to me the most attractive feature is knowing my editor/term is not backed by a browser engine.


> the most attractive feature is knowing my editor/term is not backed by a browser engine

How and why is this important? At any time I have two browsers with dozens of tabs and a Slack client active on top of them; how could one more instance of a browser engine hurt in this mad world?


try it and find out; it's not about memory overhead (maybe it is to someone), it's that it works so much better than web-backed things. it's just an absurdly fast/responsive/delightful thing to experience after years of everything going the other way [while hardware actually gets faster]

and i say this as a proud career long web developer. i just really like zed.


Zed was my daily driver for almost a year. When it's working fine and not drawing the mouse cursor at 3 FPS, it feels pretty much like anything else in terms of perceived speed. I saw mentions that you need to be younger to notice the difference, like how you can't hear frequencies above 18 kHz once you're past 20. I get the idea, I just don't see how it connects to my real-world experience.


As a quick anecdatum, I'm on macOS and I'm definitely more than 20 years old, and there is a quite perceptible difference in responsiveness between Zed and a VSCode-based IDE for me. (Though then again I did play fast-twitch games a lot when I was younger.)


Ghostty is an apt comparison, because it is a slow-launching memory hog compared to foot on Linux.


I listened to a podcast interview with one of the cofounders of Zed. One revealing thing was that he went on and on about how important latency was and how they had to do their own rendering because of the problems they ran into with Electron. He also admitted that he has never used vim, which already has imperceptible latency relative to the terminal emulator, which has in turn largely solved the text rendering problem.

I understand that there are advantages to being a native app from their perspective, but for me, it is even better for my editor to be integrated into my terminal emulator. And because they built it the opposite way, they also have to build a terminal emulator into their editor.

There is a real cost to building Zed this way. If it were an embedded terminal app, they would get full terminal support for free and would not have to implement 2D text rendering. They could even have made a bundled package with an OSS terminal emulator like kitty. Then they could have focused strictly on their editing features and value adds. Every engineer-hour spent fixing bugs and adding features to the rendering framework is an hour that could have been spent on value adds.

Personally, there is no way that latency itself can be that value add for me, because I use neovim without any language plugins, so every keystroke already renders instantly. Clearly, then, you can do everything Zed does in a terminal app within at worst tens of milliseconds of latency.

Of course, their target market uses VSCode, not vim, and either doesn't know or doesn't care that you do not need to write your own rendering engine to make a low-latency, featureful text editor. Admittedly, I am very much not the target consumer for Zed though.


Not everyone wants to use a terminal editor, and of those people, not everyone wants to use VSCode.


Learning a terminal editor will save your ass at 3am when production goes down. No joke.


Learning a client-server editor will save your nerves when developing on a remote machine with 200ms ping


Vim and emacs both also support editing remote files.


"Support editing remote files" and "work with all plugins on remote that feels like editing on local machine" are different things


Just to clarify, for vim I'm talking about the scp:// thing, for emacs there is of course TRAMP. In both cases you're running a local editor and fetching the remote file, then sending it back when making changes. So your plugins and such should work just fine. I am not talking about sshing into the machine and running vim or emacs as a TUI on that remote machine, which is also possible with both editors.
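For concreteness, the invocations being discussed look like this (the user, host, and path are illustrative):

```shell
# vim: edit a remote file over scp; note the double slash before an absolute path
vim scp://deploy@example.com//etc/nginx/nginx.conf

# Emacs TRAMP: the same file over ssh, using the /ssh: method
emacs /ssh:deploy@example.com:/etc/nginx/nginx.conf
```

In both cases the editor and all its plugins run locally; only file contents travel over the wire.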


learning both is fun!


Learning and wanting to use as a daily driver are very different things.

I use vim on my servers and for writing git commit messages. For everything else, I use another editor (used to be Sublime Text, then VS Code, now Zed).


Nailed it. I'm fine with using vim to do remote work. I'm not an expert, but I have enough muscle memory to zip through the things I need to do. I don't want to use it exclusively, though. Turns out I'm capable of learning my way around multiple tools and using different ones in different contexts. Who knew?


I'm not saying I don't know how to use one. I was a longtime vim and then neovim user. But I don't want to use one.

I still use vim for quick terminal edits, git commit messages, and doing stuff over ssh. But for my day-to-day heavy text editing, I do not want to sit in a terminal and be limited by what a terminal can display.


That's why nano exists.

At 3am I am not in the mood to try to remember where my 714-page "how to save a file and exit vim" manual is


I seem to hear this a lot, but there's probably like 10 commands (maybe a few more if you want to be fancy with copy-paste). Note: great if you like nano however!

For those that are interested, this will get you 90% there:

  :q!           exit without saving
  :wq           exit with saving
  0             go to beginning of line
  A (shift-a)   go to end of line and enter insert mode
  dd            delete line
  i             enter insert mode
  Esc           leave insert mode
  u             undo
  :set number   show line numbers
  :5            go to line 5


It is easier to remember one command: `nano <file>`


+1, nano even gives you a handy hint bar at the bottom in case you forget any of its shortcuts


Why does a text editor, of all things, need faster rendering? I have never seen this be an issue.


Why do yaks need to be shaved?


+1. I explored Atom when it was the hot thing and, more recently, an IDE called Lapce, which is built in Rust and doesn't use Electron. They've been great, but the lack of an ecosystem stares you right in the face. I don't like Electron apps, but VSCode gets the job done.

I have my entire set of plugins and tools installed here, and the productivity I get out of this setup has even exceeded my long-loved JetBrains IDEs. I hope that's temporary though, seriously. The only thing that I strictly dislike about VSCode is the small number of high-quality themes available.

Recently I stumbled upon VSCodium, which is a clean VSCode fork that can work with any VSCode plugin, but I have yet to try it out.


I miss Atom. I loved it. I loved all the crazy plugins people would make to do all sorts of things. It was a fun time.

These days I'm stuck on QTCreator most of the time. It's OK for core editing functionality but there's a lot that I miss, and some things like the generic Language Server support just aren't fleshed out much.

I'm tempted to use another editor whenever I'm not working on UI files but it adds some friction.


Was Atom ever functionally any different from VSCode today?


I only remember Atom being painfully slow. It was using Electron too btw.


At the time it was actually called "Atom Shell" iirc. Electron came from Atom.


> VSCode fork that can work with any VSCode plugin

Technically yes, but if you care about the legal side, you can't use extensions from Microsoft's marketplace, which excludes Microsoft's own closed-source extensions such as WSL/SSH/devcontainer remoting and Pylance. It ships with Open VSX instead.


Doing their own rendering is one of the most attractive things about it imo. I am on macos though.


Even on a Mac the text rendering is wonky, though, like it isn't processing the font metrics correctly or something.

At least it is noticeably fast.


Text rendering is awful on macOS too. I can’t even consider switching from Sublime Text because of it.


Wondering if you've tried rxi/lite or the community fork, Lite-XL [1], and how you've found the font rendering there?

FWIW, I've yet to find a better text editor than Sublime Text.

[1] https://github.com/lite-xl/lite-xl


Yes, I've tried them (Pragtical actually, which is another fork) and font rendering there is fine


I use it on Linux and think it's great. My laptop has a screen with some crazy-high DPI and a monitor which doesn't. Changing the font sizes in settings to suit has never left me with a poorly rendered view.


Same experience here on Linux. Given that the vast majority of potential users seems to be Linux folks who love vim mode, Zed's Linux focus is pretty bad. I would say alpha quality at this point.


VS Code also lags and becomes buggy if there are incompatible drivers and it cannot enable hardware acceleration.


Zed's still pretty beta on Linux.

It will improve. I'm glad they are at least targeting it as well.


How can I set up a linter to prohibit stuff like this in my code base?
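One hedged option, assuming "stuff like this" means format strings whose format spec is itself computed at runtime: a small custom AST check (the function name is made up) that a flake8-style plugin or pre-commit hook could wrap.

```python
import ast

def find_dynamic_format_specs(source: str) -> list[int]:
    """Report line numbers of f-string fields whose format spec
    contains a nested expression, e.g. f'{x:{width}}'."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FormattedValue) and node.format_spec:
            # The spec parses as a JoinedStr; a FormattedValue inside
            # it means the spec is computed at runtime.
            if any(isinstance(n, ast.FormattedValue)
                   for n in ast.walk(node.format_spec)):
                hits.append(node.lineno)
    return hits

print(find_dynamic_format_specs("x = f'{v:{w}{p}}'"))  # → [1]
```

Catching the equivalent str.format() calls would need a separate check on the string literal, since their specs are opaque to the parser.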


Just use Python 3.5.


str.format() can be cursed too, e.g.:

  >>> '{:{}{}}'.format('m', 'o<2', 3)
  'moooooooooooooooooooooo'
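To unpack the trick: the second and third arguments are substituted into the format spec itself before it is applied, so the effective spec is 'o<23' (fill 'o', left-align, width 23):

```python
# Nested replacement fields inside the spec are consumed from the
# remaining arguments, so '{:{}{}}' with ('m', 'o<2', 3) builds 'o<23'.
assert '{:{}{}}'.format('m', 'o<2', 3) == format('m', 'o<23')
# Left-aligning a 1-char string to width 23 with fill 'o' appends 22 o's.
assert format('m', 'o<23') == 'm' + 'o' * 22
```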


> Now, with LLMs making it easy to generate working code faster than ever, a new narrative has emerged: that writing code was the bottleneck, and we’ve finally cracked it.

This narrative is not new. Many times I've seen decisions made on the basis of "does it require writing any code or not". But I agree with the sentiment: the problem is not the code itself but the cost of ownership of that code: how it is tested, where it is deployed, how it is monitored, who maintains it, etc.


Oh, Russian is exceptionally well built for swearing. It provides possibilities barely imaginable from the perspective of languages such as English, because of how mutable and composable its word structure is. From roughly the same base set of 3-4 swear words, the number of different derived forms runs into the thousands and is hard to count, each word having its own shade of meaning and sometimes many more than one.


Tell us more.


For example, one word derived from one of the basic swear words can be used to describe or express: 1) disastrous circumstances, 2) extreme surprise, 3) an end-game event creating very negative prospects for the future.

An adjective from the same stem makes another word with a meaning on the other side of the spectrum, basically "really cool, highly approved". An adjective constructed slightly differently would mean "weird, crazy".

From the same stem you can make the three most common verbs: one with the meanings "beat up" and "steal", another quite similar one meaning "lie", and a third meaning "talk". Light modifications of the latter allow some fine-tuning of the meaning, giving words that describe more complex behaviour: 1) suddenly say something unexpected that attracts the attention of others, causing amazement and approval, 2) unintentionally give up a secret, blurting out too much, 3) get yourself in trouble by talking too much, or even 4) fall from a height or bump into an object, receiving a light injury.

and so on, and so forth..


So much scepticism in the comments. I spent last week implementing an MCP server, and I must say that "well-designed" is probably an overstatement. One of the principles behind MCP is that "an MCP server should be very easy to implement". I don't know, maybe it's a skill issue, but it's not that easy at all. What is important, imo, is that so many eyes are looking in one direction right now. That means it has a good chance of having all these problems solved quickly. And second, it's often so hard to gather a critical mass of attention around something to create an ecosystem, but that is happening right now. I wish all the participants patience and luck)


It's pretty easy if you just use the MCP Python library. You just put an annotation on a function and there's your tool. I was able to do it, and it works great without me knowing anything about MCP. Maybe it's a different story if you actually need to know the protocol and implement more of it yourself.
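Not the real SDK, but the registration pattern such libraries use can be sketched in a few lines (all names here are invented for illustration):

```python
import inspect

TOOLS: dict[str, dict] = {}

def tool(fn):
    """Toy decorator: register a function as a tool, deriving a
    minimal 'schema' from its signature and docstring."""
    TOOLS[fn.__name__] = {
        "fn": fn,
        "params": list(inspect.signature(fn).parameters),
        "description": fn.__doc__,
    }
    return fn

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(TOOLS["add"]["params"])     # → ['a', 'b']
print(TOOLS["add"]["fn"](2, 3))   # → 5
```

The real SDK does much more (JSON schema generation, transport, async handlers), but the decorate-and-register shape is the same.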


Yes, I am using their Python SDK. But you can't just add MCP to your existing API server if it isn't ready for async Python. You would probably need to deploy it as a separate server and make server-to-server calls to your API. Making authentication work with your corporate IAM provider is a path of trial and error: not all MCP hosts implement it the same way, so you need to compare the behaviour of multiple apps to decide whether it's your setup that fails or a bug in VS Code or the like. I haven't even started to think about the ability of a server to message back to the client to communicate with the LLM; AFAIK modern clients don't support such a scenario yet, or at least don't support it well.

So yes, adding a tool is trivial, adding an MCP server to your existing application might require some non-trivial work of probably unnecessary complexity.


We've done it before, it hasn't worked before, and it's only a matter of years, if not months, before apps start locking down their endpoints so ONLY chatgpt/claude/etc. servers can use them.

Interoperability means user portability. And no tech bro firm wants user portability, they want lock in and monopoly.


> One of the principles behind MCP is that "an MCP server should be very easy to implement".

I’m not familiar with the details but I would imagine that it’s more like:

”An MCP server which re-exposes an existing public/semi-public API should be easy to implement, with as few changes as possible to the original endpoint”

At least that’s the only way I can imagine getting traction.


What font is on the screenshot? https://github.com/microsoft/edit



It's indeed Maple Mono. I love that font!



That's not it, the lower case 'a' is very different.


Could it be Consolas? That would make sense; I believe it's the default monospace font on recent Windows.


So this is how a troll from The Witcher 3 would explain software engineering


> Puzzles a child can do

Certainly. I couldn't solve the Towers of Hanoi with 8 disks purely in my mind, without being able to write down the state at every step or having a physical puzzle in front of me. Are we comparing apples to apples?
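For reference, the algorithm itself is the easy part; it's holding the intermediate state in your head that's hard. The classical recursion produces 2^n - 1 moves, so 255 for 8 disks:

```python
def hanoi(n: int, src: str, dst: str, aux: str) -> list[tuple[str, str]]:
    """Return the list of (from_peg, to_peg) moves solving Towers of Hanoi."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, stack them back on top.
    return (hanoi(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi(n - 1, aux, dst, src))

print(len(hanoi(8, "A", "C", "B")))  # → 255
```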


Why without writing down each step? Would you be able to solve it by writing each required step in sequence, thinking between each one? Pretty sure I could; isn't that closer to an LLM?


I mean that I need to offload the state of the puzzle being solved from my brain to an external memory device: paper, in this case. Keeping that state in my mind would be much harder. Some people can play chess in their heads without a board, but that's obviously not something everyone can do.


But the LLM can think between writing each token, and indeed can factor its own previously written tokens into its answer; that's essentially using a piece of paper, writing stuff down, and referring back to it. That's the whole idea behind thinking models, and they are demonstrably better at many tasks than non-thinking ones.


> But the LLM can think between writing each token

Writing a token is the thinking itself. Thinking models just write some tokens behind the scenes; that's the whole difference.


Writing things down and reading them back is quite literally the only thing LLMs do.


Generating text into the current context is not the same as writing things down. It's the same as having a thought and putting it into short-term memory. An analogy for writing things down would be sending something to an MCP server that provides context-independent memory functionality.


I am so out of sync with this idea that a text editor must be blazingly fast. The latency of processing my input was never an issue for me in text editors, unless it was an obvious misbehaviour due to a bug or something. And 120 Hz text rendering is a thing I couldn't care less about.


I've seen people, and have even been one myself, for whom screen latency could be a problem for raw text-processing speed. We were well under 25 years old at the time though, using very low-level languages, and after 30 I never really felt that rendering was too slow for me.

With larger projects and more modern code this is simply not an issue, and hasn't been for decades, at least for me.

If you are a 10x developer coding assembly, sure?


In software like VSCode the milliseconds stack up fast if you're switching between projects constantly and/or doing any kind of remote development.


There's a huge chasm between VSCode-with-all-kitchen-sinks and 120 Hz. "Never freezes for more than 300ms" is a very valid point on that spectrum, and nowhere near the need for GPU acceleration.


I had a similar experience when I decided to use a pencil and paper, along with a few simple rules, to manage my to-do lists. This method worked so well for me that I started thinking about the reason for its success. These are my conclusions:

- It was MY method

- It was simple enough to fit entirely within a single thought

- It provided a clean feedback loop when I could strike out a completed task

- I like handwriting

So it relied on things that my brain finds pleasurable


An A5 squared spiral notebook and a 4-colour pen were perfect for me. Spirals help as a lefty. They also let you cleanly detach a page.

There's a lot of freedom with this. It can serve as much more than a place to write down boring task names.

