
I have yet to find any fast UI framework that uses Skia. It seems it's great for mostly static pages (like old-style HTML), but as soon as you add animations at desktop resolutions, Skia's CPU rendering falls apart.

Case in point, none of the Flutter examples are smooth on my 4K laptop. But I can play Assassin's Creed Mirage (released a few months ago) just fine at 70+ fps.

Edit: And my Android phone also can't handle scrolling up in this Flutter demo app without severe stuttering: https://play.google.com/store/apps/details?id=com.gskinner.f...

My understanding is that rendering large text blocks with Skia will max out the CPU. So they would need to chunk the content of the div inside the scroll container, like what Chrome does.
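Chrome-style chunking is essentially list virtualization: only the rows that intersect the viewport get rendered each frame. A minimal sketch of that idea (all names hypothetical, assuming fixed-height rows):

```python
def visible_range(scroll_top, viewport_h, row_h, n_rows, overscan=2):
    """Return the [start, end) row indices that need rendering.

    Rows outside this window stay unrendered, so per-frame cost
    scales with the viewport size, not the total content size.
    `overscan` pre-renders a few extra rows so fast scrolling
    doesn't show blanks.
    """
    start = max(0, scroll_top // row_h - overscan)
    end = min(n_rows, (scroll_top + viewport_h) // row_h + 1 + overscan)
    return start, end
```

With 750 rows of 120px thumbnails and a 600px viewport, only about 8 rows ever need live DOM nodes / draw calls at a time.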



You can try my Sciter. On Windows it uses either Direct2D/DX11 or Skia over DX12/Vulkan/OpenGL (whichever is available).

In both cases it uses GPU rendering, and I see no problems with scrolling in either - 60 FPS kinetic scrolling with mouse wheel and touchpad.

You can try it yourself by running the usciter.exe demo browser from

- https://gitlab.com/sciter-engine/sciter-js-sdk/-/tree/main/b... - Direct2D backend.

- https://gitlab.com/sciter-engine/sciter-js-sdk/-/tree/main/b... - Skia backend.

You can load in usciter this document:

https://gitlab.com/sciter-engine/sciter-js-sdk/-/blob/main/s...

It shows fairly large colorized texts, so you can estimate how Direct2D and Skia handle GPU-accelerated text rendering. The only difference between the two binaries is the graphics backend used - Direct2D vs. Skia.


Would be curious to see an example of what you consider a fast UI. (I hear Skia is faster than most alternatives like Cairo, so your statement makes me think that rendering 2D UIs is just slow in general.)

Last time I tried Avalonia UI on Mac, the CPU usage was really high (like in the 70s sometimes without doing anything). They might have addressed that since then though.


Sadly commercial and subscription-based, but seriously fast on pretty much any platform. Used in Baldur's Gate 3 and other AAA games.

https://www.noesisengine.com/xamltoy/61c071a0b3a34ff82dfb0e2...

"rendering 2D UIs is just slow in general"

I think mostly it's just that many existing tools use CPU rendering. Without "modern" pixel shaders, crisp anti-aliased text rendering was pretty much impossible on the GPU. At 800x600, rendering the UI on the CPU used to be no problem, but with Retina and high-DPI displays it became 4x slower and therefore unusable.

And yes, that high CPU usage is exactly the issue with Skia. Ideally, it should require a few percent on load and then afterwards everything should be running inside the GPU at almost 0% CPU cost.


I have to disagree that the high CPU usage I mentioned is an issue with Skia in the case of Avalonia.

Other frameworks like Flutter which use Skia don't have such a high CPU load on my computer so I'm pretty sure the high CPU usage is because of Avalonia specifically. (Although Skia might very well contribute to it.)

Just opened Chrome and Firefox (both of which use Skia) and both used about 1% CPU while idle while Avalonia is there with up to 70% CPU usage while idle.


My guess would be that Avalonia is doing something stupid like re-drawing the entire GUI on every mouse cursor movement (in case any hover effect changed) and then Skia is transforming that into high CPU usage. Games using proper GPU acceleration can get away with drawing highly complex GUIs at 144 FPS with negligible CPU usage [1], so while it is wasteful to constantly re-draw an application GUI, that alone will not lead to excessive CPU usage just yet.

[1] https://www.researchgate.net/figure/3-A-healers-UI-showing-t...

I'm pretty sure most apps have fewer progress bars, stats, buttons, toggles, and other interactive elements than WoW.


That sounds reasonable to me. I remember game loops being split into one draw() and one update() function and would expect UI frameworks to be the same.

I think, in games, it's usual to traverse twice through each object in a graph on each loop, once for drawing and once for updating/handling interactions.

Am I right guessing that UI frameworks do something similar? One update function traversing through each widget to check for interactions. Then two drawing-related functions, with one checking for each object whether it changed and needs to be redrawn, and the second drawing just those widgets that changed?

I'm trying to think in terms of Big O, and it doesn't sound unreasonable to drop the change-calculation function and redraw from scratch, because that way you have two O(n) functions instead of three. Maybe the constant factor of drawing pixels to the screen is high enough to justify the additional change-detection loop, though.
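For what it's worth, a toy sketch of that structure: a per-widget dirty flag folds the change-detection pass into the draw pass, so you keep two O(n) traversals instead of three. Names and structure are purely illustrative, not from any real framework:

```python
class Widget:
    def __init__(self):
        self.dirty = True  # needs an initial paint

    def update(self, events):
        # Handle input; mark dirty only if visible state changed.
        for e in events:
            if self.handle(e):
                self.dirty = True

    def handle(self, event):
        return False  # subclasses override; True means state changed

    def draw(self, surface):
        if not self.dirty:
            return False  # skip the expensive repaint entirely
        self.paint(surface)
        self.dirty = False
        return True

    def paint(self, surface):
        pass  # subclasses rasterize themselves here


def frame(widgets, events, surface):
    """One loop iteration: O(n) update pass, O(n) draw pass.
    Returns how many widgets actually repainted this frame."""
    for w in widgets:
        w.update(events)
    return sum(w.draw(surface) for w in widgets)
```

On an idle frame with no events, the draw pass touches every widget but paints none of them, which is where the near-zero idle CPU usage comes from.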


"Am I right guessing that UI frameworks do something similar?"

Most of them try to avoid the update() loop, because UI elements usually only change if the application state changes (then you call paint() or invalidate()), if the window moves or gets dis-occluded (then the OS calls invalidate()), or if the mouse cursor moves.

So typically, the UI framework will have a "dirty" rectangle flag which represents the pixel area that needs to be redrawn.
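A minimal sketch of that dirty-rectangle bookkeeping (hypothetical names; real frameworks typically track lists of damage rectangles or per-layer regions, not a single union):

```python
def union(a, b):
    """Union of two (x, y, w, h) rectangles; either may be None."""
    if a is None:
        return b
    if b is None:
        return a
    x1 = min(a[0], b[0])
    y1 = min(a[1], b[1])
    x2 = max(a[0] + a[2], b[0] + b[2])
    y2 = max(a[1] + a[3], b[1] + b[3])
    return (x1, y1, x2 - x1, y2 - y1)


class Window:
    def __init__(self):
        self.dirty = None  # None means: nothing to repaint

    def invalidate(self, rect):
        # Widgets (or the OS) call this; damage accumulates here.
        self.dirty = union(self.dirty, rect)

    def repaint(self):
        # The only region handed to the rasterizer; then reset.
        rect, self.dirty = self.dirty, None
        return rect
```

If nothing was invalidated since the last frame, repaint() hands the rasterizer nothing at all, so an idle window costs essentially zero CPU.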


Slight sidetrack, but Noesis is free for < $100k revenue, and the licenses for use above that are very affordable and royalty free.

The only problem I had is that IDE support is limited. At least at the time I was using it, Microsoft Blend felt very much like a half functioning and abandoned project that was just being dragged along for the sake of maintaining something at least reminiscent of an IDE.


It surely can be made faster. Here is a supercharged Dear ImGui running in the browser, typically using only 10-20% CPU.

[0] https://traineq.org/ImGuiBundle/emscripten/bin/demo_imgui_bu...


Just a black screen on my iPhone 14 Pro Max running Safari fwiw


I think Safari on iOS disables WebGL by default.


It doesn't. I have games using WebGL with SDL and Emscripten, and they run fine.


70%? really?

I never even had that during the alpha days of Avalonia. Give it a shot again.


I'm talking from memory (tried opening the FluentAvalonia sample so the styling may have affected it), but I will try again later and post results hopefully.

Others found high CPU usage too, but not as high as 70% (more like half that). https://github.com/AvaloniaUI/Avalonia/issues/11070


Just checked and I'm getting about 5% CPU on idle, with no animations (for FluentAvalonia) and about 30% (did observe 40% once but was an edge case) when animations are playing on screen.

Sorry for the misinformation. (I can't edit my previous post, unfortunately, because of how HN works.)


It's even worse. Skia can use the system's native font handling, but only in a half-baked way, so Chrome always has font rendering issues on Windows, and any Skia-based project probably suffers from the same issue. Firefox avoids this by handling fonts directly.


Firefox also sends it to DirectWrite, the native API.

https://github.com/servo/webrender/blob/master/wr_glyph_rast...


This is strange to me because Skia is the rendering backend used by Firefox and Chrome in many cases, and AFAIK it has hardware acceleration. Why is it using the CPU?


I believe it's caching the text as a GPU texture and then accelerating only the compositing. That's also why you can't directly draw formatted text into a WebGL context. And for pages consisting mostly of static text, that's perfectly acceptable.

"GPU accelerated rendering means that the GPU is used for composition."

https://www.electronjs.org/docs/latest/tutorial/offscreen-re...

And that page says that's also how Chromium does it.


Chromium (and Firefox) do GPU accelerated raster as well as composition. That page you linked is about a very specific case of wanting to get the pixels back to reuse in a different context. In that situation doing the work on the GPU and then copying back to the CPU may be slower than keeping it on the CPU the whole time.


This source code:

https://github.com/servo/webrender/blob/master/wr_glyph_rast...

suggests to me that glyph rasterization (which is the CPU-limiting factor for text rendering) in WebRender (Firefox's GPU-accelerated rendering engine since FF 93) is implemented in Rust and runs on the CPU. On Linux it appears to use FreeType (which is CPU-only):

https://github.com/servo/webrender/blob/master/wr_glyph_rast...

Also, that approach of pre-rendering text into textures is, sadly, very common. For example, see:

https://stackoverflow.com/questions/44062566/how-do-hardware...
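That pre-rendering approach is essentially a glyph cache: rasterize each glyph once on the CPU, store the bitmap in a texture atlas, and let the GPU composite textured quads from it. A schematic sketch - here the "atlas" is just a dict and the rasterizer is a stub, to show only the control flow:

```python
class GlyphCache:
    """Caches rasterized glyphs keyed by (char, font, size).

    Real engines pack the bitmaps into a shared GPU texture atlas;
    the dict stands in for that texture here.
    """

    def __init__(self, rasterize):
        self.rasterize = rasterize  # CPU-side, expensive
        self.atlas = {}
        self.misses = 0

    def get(self, char, font, size):
        key = (char, font, size)
        if key not in self.atlas:
            self.misses += 1  # only here does the CPU do real work
            self.atlas[key] = self.rasterize(char, font, size)
        return self.atlas[key]
```

After the first frame, repeated text composites entirely from cached bitmaps - which is why mostly static pages are cheap, while anything that changes glyph set, size, or subpixel position every frame (animated or scaled text) keeps hitting the CPU rasterizer.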


Do you use Chrome (on non-iOS platforms) or Android? Both are rendered with Skia, and I don't just mean that they can render Skia apps like Flutter apps - I mean that the system views themselves are literally drawn with Skia. Native Android apps that aren't Flutter apps are also rendered via Skia.


Yes, I sometimes test on Chrome. And I have also implemented JavaScript workarounds because Chrome still cannot handle a plain <table> with 750 rows where each row has a 120px image thumbnail and some text columns without gobbling up gigabytes of memory and/or freezing on weaker machines.

I agree with you that both Chrome and Android work quite well in general. But that doesn't change the fact that some of the design decisions have aged badly:

"Chrome is still the king resource hog on macOS"

"In previous tests, which included a lot more tab switching, and no WebGL and media consumption, Safari was using 10x less than Chrome"

https://www.reddit.com/r/macapps/comments/12n7162/part_3_fin...

9 years ago, I myself also shipped apps that rendered the GUI exclusively on the CPU, because it's just so much easier to get things to look 100% identical everywhere. But nowadays, those apps are either pixelated or feel sluggish even on modern Macs, because Retina displays have easily 4x-ed the number of pixels that need to be drawn - while single-threaded FLOPS have not 4x-ed since then.


Skia can use the GPU.

What alternative do you propose? What other library is cross-platform, has a nice API, performant, supports text well enough, and can do both CPU and GPU rendering?


Perhaps Mozilla WebRender?


I should have clarified that I am only asking about desktop solutions.


Not sure what you mean but WebRender powers Firefox which definitely works on the desktop.

You can use it to build desktop UI frameworks - see for example https://azul.rs/


Oh, I see. But it uses OpenGL, and I would have to guess it is being replaced by WebGPU / wgpu, correct?


I don't think they've started such work but I don't really know.


What the hell are we doing as an industry? We have 20 cores per CPU and enough RAM to fit the entire human civilization's knowledge in text form in everyone's pocket, but we can't scroll a webpage with animations without stuttering?


Just like every other industry, we're trying to deliver at the lowest possible price, which leads to the lowest acceptable quality level.

It would be easy to add 20% of time and budget to a project and just let your developers work on bugfixes and improvements. Maybe even let your dev team interview a customer or two to truly understand what matters to them. But that's considered too expensive. (even though some SaaS have 90%+ profit margins)


I would add to the above: like every other sector, we are sitting on top of a teetering arbitrary pile of technologies and processes that is long past due for a cleanup and overhaul, but we still haven't encountered a catastrophe big enough to motivate us to devote the resources.


Because it's cheap. You can add a ton of bling with little effort by piling on abstraction layer on top of abstraction layer so your cheapest bootcamp trained Javascript developers can write flashy GUIs without prior experience.

As long as your tool demo works slickly on the highest performance Macbooks used by the execs you're trying to sell to, performance isn't a factor.



