I think AI arrived just as the industry was maturing: most frameworks/software had previous incarnations that did mostly the same thing, or could be done ad hoc anyway. And the need for libraries probably declines as the models get better, too.
Not all open source, but a lot of it, is fundamentally for humans to consume. If AI can, at the extreme (this remains to be seen), just magic up the software, then the value of libraries and a lot of open source software will decline. In some ways it's a fundamentally different paradigm of computing, and we don't yet understand what that looks like.
As AI gets better, OSS still contributes to it, but as source code feeding the training data, not as a direct framework dependency. If LLMs continue to improve, I can see the whole concept of frameworks becoming less and less necessary.
Mostly. I had the "AI bot tsunami" problem on my own personal site and blocked a bunch of bot user agents in robots.txt. Most of them were from companies I had never heard of before. The only big AI name I recognized was GPTBot from OpenAI.
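For reference, a minimal sketch of the kind of robots.txt rules involved. GPTBot is the only bot named above; the second entry is a hypothetical placeholder for whatever other crawlers turn up in your logs (and note this only works for bots that choose to honor robots.txt):

  # robots.txt - opt specific AI crawlers out by user agent
  User-agent: GPTBot
  Disallow: /

  # Hypothetical placeholder; repeat for each crawler you identify
  User-agent: SomeOtherAIBot
  Disallow: /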
https://www.anthropic.com/careers/jobs/5025624008 - "Research Engineer – Cybersecurity RL" - "This role blends research and engineering, requiring you to both develop novel approaches and realize them in code. Your work will include designing and implementing RL environments, conducting experiments and evaluations, delivering your work into production training runs, and collaborating with other researchers, engineers, and cybersecurity specialists across and outside Anthropic."
https://www.anthropic.com/careers/jobs/4924308008 - "Research Engineer / Research Scientist, Biology & Life Sciences" - "As a founding member of our team, you'll work at the intersection of cutting-edge AI and the biological sciences, developing rigorous methods to measure and improve model performance on complex scientific tasks."
The key trend in 2025 was a new emphasis on reinforcement learning - models are no longer just trained by dumping in a ton of scraped text, there's now a TON of work that goes into designing reinforcement learning loops that teach them how to do specific useful things - and designing those loops requires subject-matter expertise.
That's why they got so much better at code over the past six months - code is the perfect target for RL because you can run generated code and see if it works or not.
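As a toy illustration of why that feedback loop is so clean, here's a minimal sketch in Python. The task format and harness here are invented for the example and don't reflect any lab's actual pipeline; the point is just that "run the generated code and check" collapses into a binary reward:

  import subprocess, sys, tempfile

  def reward(generated_code: str, test_code: str) -> float:
      """1.0 if the generated program passes its tests, else 0.0."""
      # Write the model's attempt plus the checks into a temp script.
      with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
          f.write(generated_code + "\n" + test_code)
          path = f.name
      try:
          # Run it; a failed assert (or crash) gives a nonzero exit code.
          result = subprocess.run(
              [sys.executable, path],
              capture_output=True,
              timeout=10,  # broken generations shouldn't hang the loop
          )
          return 1.0 if result.returncode == 0 else 0.0
      except subprocess.TimeoutExpired:
          return 0.0

  # A model's attempt at a task, plus a checkable assertion.
  attempt = "def add(a, b):\n    return a + b\n"
  tests = "assert add(2, 3) == 5\n"
  print(reward(attempt, tests))  # prints 1.0

That automatic pass/fail signal is exactly what most other domains lack, which is why those labs are hiring cybersecurity and biology specialists to hand-build it.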
The funny part is that they think this will give them the power to take control of the de facto standard and circumvent the standards process.
Instead it will just make the AI slop easier to spot, because it doesn't work, and it will be siloed off among people who don't care about the code and so can't fix it.
If people want good, interoperable, production-ready code that can be deployed instantly, just works, meets current standards, and tracks the ongoing discussions, we've had that for many decades: it's called open source.