I would counter that any computationally correct code that accelerates any existing research code base is a net positive. I don't care how that is achieved as long as it doesn't sacrifice accuracy and precision.
We're not exactly swimming in power generation and efficient code uses less power.
I no longer bother reading their press releases. I'd much rather read the comments and threads like these to get the real story. And I say that as a former googler.
This year, the wild variance in hourly weather reports on my phone has really been something. I attributed it to likely budget cuts as a result of DOGE, but if those forecasts have been coming from Google itself the whole time, it all makes sense now.
How do DOGE-implemented budget cuts affect European or East Asian forecasts? Those aren't the forecasts that someone suspecting departmental DOGEing would be pointing at.
If the US does less data gathering (balloon launches, buoy maintenance, setting up weather huts in super remote sites, etc.) it will affect all forecasts.
Models all use a "current world state" of all sensors available to bootstrap their runs.
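To illustrate that bootstrapping point, here is a toy sketch (the scalar setup and all numbers are invented for illustration, not any real NWP system): the initial "analysis" a forecast model starts from is essentially a weighted blend of the previous forecast and whatever observations are available, so losing observation sources widens the uncertainty of that starting state.

```python
# Toy sketch: a single scalar "analysis" step showing why losing
# observation sources degrades the initial state models bootstrap from.
import numpy as np

rng = np.random.default_rng(0)
true_temp = 15.0       # hypothetical true surface temperature (deg C)
background = 13.0      # prior estimate carried over from the previous forecast cycle
background_var = 4.0   # variance of that prior
obs_var = 1.0          # variance of each individual observation

def analysis(n_obs: int) -> tuple[float, float]:
    """Blend the background with n_obs noisy observations using
    inverse-variance weighting (the core idea behind data assimilation)."""
    obs = true_temp + rng.normal(0.0, np.sqrt(obs_var), size=n_obs)
    precision = 1.0 / background_var + n_obs / obs_var
    mean = (background / background_var + obs.sum() / obs_var) / precision
    return mean, 1.0 / precision

# Fewer balloon launches / buoys / aircraft reports -> wider uncertainty,
# and the analysis stays stuck closer to the old forecast.
for n in (20, 5, 0):
    mean, var = analysis(n)
    print(f"{n:2d} obs -> analysis {mean:5.2f} C, variance {var:.3f}")
```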
A similar thing happened at the beginning of Covid-19: forecasters use instrumented cargo/passenger planes to gather weather data during their routine flights. Suddenly this huge data source was gone (though it was partially replaced by the experimental ADM-Aeolus satellite, which turned out to be a huge global game changer due to its unexpectedly high-quality data).
Yeah... so you know that's not the United States, right? Though judging by the downvotes, it's quite triggering for some, and I can't say which side gets upset when I pivot from blaming DOGE to blaming bad AI. Curious(tm)...
And I say that as a huge fan of AI, but being vocally self-critical is an important attribute for professional success in AI and elsewhere.
The LLM is doing what its lawyers asked it to do. It has no responsibility for a room full of disadvantaged indigenous people who might be, or probably won't be, murdered by a psychotic, none whatsoever. But it absolutely 100% must deliver on shareholder value, and if it uses that racial epithet it opens its makers to litigation. When has such litigation ever been good for shareholder value?
Yet another example of don't hate the player, hate the game IMO. And no I'm not joking, this is how the world works now. And we built it. Don't mistake that for me liking the world the way it is.
This reminds me of a hoax from the Yes Men [1]. They temporarily convinced the BBC that a company had agreed to a compensation package for the victims of a chemical disaster, which resulted in a 4.23 percent decrease in the company's share price. When it was revealed to be a hoax, the share price returned to its initial level.
Not even bad advice. Its interpretation of reality is heavily biased towards the priorities, unconscious and otherwise, of the people curating the training data and processes. There's no principled, conscientious approach to make the things as intellectually honest as possible. Anthropic is outright the worst and most blatant ideologically speaking - they're patronizing and smug about it. The other companies couch their biases as "safety" and try to softpedal the guardrails and manage the perceptions. The presumption that these are necessary, and responsible, and so on, is nothing more than politics and corporate power games.
We have laws on the books that criminalize bad things people do. AI safety is normalizing the idea that things that are merely thought need to be regulated. That exploration of ideas and the tools we use should be subject to oversight, and that these AI corporations are positioned to properly define the boundaries of acceptable subject matter and pursuits.
It should be illegal to deliberately inject bias that isn't strictly technically justified. Things as simple as removing usernames from scraped internet data have catastrophic downstream impact on the modeling of a forum or website, not to mention the nuance and detail that gets lost.
If people perform criminal actions in the real world, we should enforce the laws. We shouldn't have laws that criminalize badthink, and the whole notion of government regulated AI Safety is just badthink smuggled in at one remove.
AI is already everywhere - in every phone, accompanying every search, involved in every online transaction. Google and OpenAI and Anthropic have crowned themselves the arbiters of truth and regulators of acceptable things to think about for every domain into which they have inserted their products. They're paying lots of money to politicians and thinktanks to promote their own visions of regulatory regimes, each of which just happens to align with their own internal political and ideological visions for the world.
Just because you can find ways around the limits they've set up doesn't mean they haven't set up those very substantial barriers, and all big tech does is continually invade more niches of life. Attention capture, trying to subsume every second of every day, is the name of the game, and we should probably nuke this shit in its infancy.
We haven't even got close to anything actually interesting in AI safety, like how intelligence intersects with ethics and behavior, and how to engineer motivational systems that align with humans and human social units, and all the alignment problem technicalities. We're witnessing what may be the most amazing technological innovation in history, the final invention, and the people in charge are using it to play stupid tribal games.
Will it catch up or will it forever chase nvidia's tail? I'm betting on the latter unless another AI winter happens. And contrary to anti-generative AI social media talking points, the literature suggests The Red Queen's race is continuing apace IMO.
Nvidia remains undefeated at responding to hardware threats with hardware diving catches to this day. What scenario prevents them from yet another one of their diving catches? I'm genuinely curious as to how one could pull that off. It's like challenging Google in search: even if you deliver better product and some have, the next thing you know Google is doing the same thing or better with deeper pockets.
> Nvidia remains undefeated at responding to hardware threats with hardware diving catches to this day. What scenario prevents them from yet another one of their diving catches?
The fact that they've made roughly the same hardware as AMD for the last two decades, and even today. There was no diving catch; AMD just ignored what its hardware was capable of and didn't invest in OpenCL. For example, just in this thread alone, AMD paid someone to make this shit work on their hardware. Don't bet against what's coming.
Except no, AMD 100% played follow the leader with technology like CUDA, NVLink, and tensor cores.
Even paying someone in academia to get s** to work on their hardware is yet another example of follow the leader.
What exactly do you think is coming? I think the biggest threat is one or more Chinese companies catching up on both hardware and ecosystem in the next half decade or so myself, mostly because of the state level support for making that so. But I absolutely don't expect an x86_64 moment for GPUs here given past results and the current bias against software in AMD's HW culture. Convince me otherwise.
No, but this is the beginning of a new generation of tools to accelerate productivity. What surprises me is that the AI companies are not market savvy enough to build those tools yet. Adobe seems to have gotten the memo though.
In testing some local image gen software, it takes about 10 seconds to generate a high-quality image on my relatively old computer. I have no idea what the latency is on a current high-end computer, but I expect it's probably near instantaneous.
Right now, though, the software for local generation is horrible. It's a mish-mash of open source stuff with varying compatibility, loaded with casually excessive vernacular and acronyms. To say nothing of the awkwardness of it mostly being done in Python scripts.
But once it gets inevitably cleaned up, I expect people in the future are going to take being able to generate unlimited, near instantaneous images, locally, for free, for granted.
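For context, a minimal sketch of what local generation looks like today with the open-source diffusers library (the checkpoint name, prompt, and step count are illustrative assumptions, not what I actually ran):

```python
# Minimal local image generation sketch using Hugging Face diffusers.
# Assumes a CUDA GPU; on older hardware this call is where the ~10 s
# (or much longer) per image comes from.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint, multi-GB download
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=25,  # fewer steps trade quality for speed
).images[0]
image.save("lighthouse.png")
```

Even this "simple" path illustrates the parent's point: you're gluing together Python packages, model checkpoints, and GPU drivers before you see a single image.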
Did you test some local image gen software in the sense that you installed the Python code from the GitHub page for a local model, which is clearly a LOT for a normal user... or did you look at ComfyUI, which is how most people are running local video and image models? There are "just install this" versions, which ease the path for users (but it's still, admittedly, chaos beneath the surface).
Interesting you say that. No, I've tried out Invoke and AUTOMATIC1111/WebUI. I specifically avoided ComfyUI because of my inexperience in this and the fact that people described it as a much more advanced system with manual wiring of the pipeline and so on.
It's likely that I'm seeing this from deep inside my ComfyUI bubble. My impression was that AUTOMATIC1111 and Forge and the like were fading, with ComfyUI being what people ended up on no matter which AI generation framework they started with. But I don't know that there are any real stats on usage of these programs, so it's entirely possible that AUTOMATIC1111/Forge/InvokeAI are being used by more people than ComfyUI.
So far Adobe AI tools are pretty useless, according to many professional illustrators. With Firefly you can use other (non-Adobe) image generators. The output is usually barely usable at this point in time.
I've been waiting for solutions that integrate into the artistic process instead of replacing it. Right now a lot of the focus is on generating a complete image, but if I was in photoshop (or another editor) and could use AI tooling to create layers and other modifications that fit into a workflow, that would help with consistency and productivity.
I haven't seen the latest from Adobe over the last three months, but last I saw the Firefly engine was still focused on "magically" creating complete elements.
They are building a product and said the unit economics must make sense. Local models have higher latency, unless you keep a GPU running for hours, which gets expensive fast.
Local models will make a lot more sense once we have the scale for it, but when your user count is still small paying cents per image is a much better deal than paying for a GPU either in a data center or physically.
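A back-of-the-envelope version of that unit-economics argument (all prices below are made-up placeholders, not real quotes):

```python
# Rough break-even sketch: hosted API per-image pricing vs. renting a GPU.
api_cost_per_image = 0.04        # $ per image via a hosted API (assumed)
gpu_cost_per_hour = 1.50         # $ per hour for a rented data-center GPU (assumed)
seconds_per_image_local = 10     # generation latency on that GPU (assumed)

images_per_hour_local = 3600 / seconds_per_image_local
local_cost_per_image = gpu_cost_per_hour / images_per_hour_local

print(f"API:   ${api_cost_per_image:.4f} / image")
print(f"Local: ${local_cost_per_image:.4f} / image (only if the GPU stays busy)")

# Fraction of each hour the GPU must spend generating before running it
# yourself beats paying per image.
break_even_utilization = local_cost_per_image / api_cost_per_image
print(f"Break-even utilization: {break_even_utilization:.0%}")
```

With these placeholder numbers the GPU only wins once it is kept busy a meaningful fraction of the time, which is exactly the scale problem the parent describes.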
Local models are definitely something I want to dive into more, if only out of personal interest.
Moving services to the cloud unfortunately relieves a lot of the complexity of software development with respect to the menagerie of possible hardware environments.
It of course leads to a crappy user experience if they don't optimize for low bandwidth, but they don't seem to care about that. Have you ever checked out how useless your algorithmic Facebook feed is now? Tons of bandwidth, very little information.
It seems like their metric is that time on their website equals money in their pocket, and baffling you with BS is a great way to achieve that, at least until you never visit again in disgust and frustration.
I don't think the "menagerie of possible hardware environments" excuse holds much water these days. Even web apps still need to accommodate various screen sizes and resolutions and touch vs mouse input.
Native apps need to deal with the variety in software environments (not to say that web apps are entirely insulated from this), across several mobile and desktop operating systems. In the face of that complexity, having to compile for both x86-64 and arm64 is at most a minor nuisance.
I used to work for a company building desktop tools that were distributed to, depending on the tool, on the low end tens of thousands of users, and on the high end, hundreds of thousands. We had one tool that was nominally used by about a million people but, in actuality, the real number of active users each month was more like 300k.
I was at the company for 10 years and I can only remember one issue that we could not reproduce or figure out on the tools I worked on. There may have been others for other tools/teams, but the number would have been tiny because these things always got talked about.
In my case the guy with the issue - who'd been super-frustrated by it for a year or more - came up to our stand when we were at a conference in the US, introduced himself, and showed me the problem he was having. He then lent me his laptop overnight[0], and I ended up installing Wireshark to see why he was experiencing massive latency on every keystroke, and what might be going on with his network shares. In the end we managed to apply a fix to our code that sidestepped the issue for users with his situation (to this day, he's been the only person - as far as I'm aware - to report this specific problem).
Our tools all ran on Windows, but obviously there were multiple extant versions of both the desktop and server OS that they were run on, different versions of the .NET runtime, at the time everyone had different AV, plus whatever other applications, services, and drivers they might have running. I won't say it was a picnic - we had a support/customer success team, after all - but the vast majority of problems weren't a function of software/OS configuration. These kinds of issues did come up, and they were a pain in the ass, but except in very rare cases - as I've described here - we were always able to find a fix or workaround.
Nowadays, with much better screensharing and remote control options, it would be way easier to deal with these sorts of problems than it was 15 - 20 years ago.
[0] Can't imagine too many organisations being happy with that in 2025.
Have you ever distributed an app on the PC to more than a million people? It might change your view. Browser issues are a different argument and I agree with you 100% there. I really wish people would pull back and hold everyone to consistent standards but they won't.