I've loved those novels of his that I've read, particularly War & War, and haven't watched a single one of those films. Krasznahorkai's work stands on its own perfectly fine.
You shouldn't dismiss them - they are not mere adaptations: their screenplays were written by Krasznahorkai, and he collaborated on their production.
The Turin Horse, too, is an original work by Krasznahorkai rather than an adaptation. (I've seen that one 7 or 8 times, 4 of them during its festival & cinema run.)
To dismiss them would be like dismissing his works with Max Neumann (AnimalInside being one of his best!) because they combine writing with painting instead of being pure literature.
A lot are, but an equal number are on modern versions. It’s a big landscape of employers. I’ve already begun moving my team from Java 21 to Java 25.
Funnily enough, I was using the word superlative more as an adjective than as the noun that refers to the grammatical form (of adjectives), if that makes sense.
It's not super clear but I don't think it does automatic transcription for you; I think it just provides a text editor with syntax highlighting, and that it syncs the scrolling of the rendered output to (prerecorded) audio playback.
AI looks like it understands things because it generates text that sounds plausible. Poetry requires the application of certain rules to that text, and the rules for Latin and Greek poetry are very simple and well understood. Scansion is especially easy once you understand the concept, and you actually can, as someone else suggested, train a child to scan poetry by applying these rules.
An LLM will spit out what looks like poetry, but will violate certain rules. It will generate some hexameters but fail harder on trimeter, presumably because it is trained on more hexametric data (epic poetry: think Homer) than trimetric (iambic and tragedy, where it’s mixed with other meters). It is trained on text containing the rules for poetry too, so it can regurgitate rules like defining a penthemimeral cæsura. But, LLMs do not understand those rules and thus cannot apply them as a child could. That makes ancient poetry a great way to show how far LLMs are from actually performing simple, rules-based analysis and how badly they hide that lack of understanding by BS-ing.
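To make the point concrete, here's a rough sketch (my own toy code, not from any textbook) of how mechanical the foot rules of dactylic hexameter are once syllable weights are marked - the part a child actually has to learn is marking the weights, not this:

    # Toy hexameter checker. Assumes syllables are already marked:
    # 'L' = long, 'S' = short. Real scansion also has to derive the
    # weights (vowel length, position, elision) - the foot rules
    # themselves are this mechanical.

    def is_hexameter(weights, feet=6):
        """Return True if `weights` parses as `feet` metrical feet."""
        if feet == 1:
            # Final foot: a long syllable plus an anceps (long or short).
            return len(weights) == 2 and weights[0] == "L"
        # Each of feet 1-5 is a dactyl (LSS) or a spondee (LL);
        # try both and backtrack if the rest fails to parse.
        for foot in ("LSS", "LL"):
            if weights.startswith(foot) and is_hexameter(weights[len(foot):], feet - 1):
                return True
        return False

    # First line of the Aeneid, pre-marked with one standard scansion:
    # ar-ma vi | rum-que ca | no Tro | iae qui | pri-mus ab | o-ris
    print(is_hexameter("LSSLSSLLLLLSSLL"))   # True
    print(is_hexameter("LSSLS"))             # False - doesn't parse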
This is not a useful diversion; it's like arguing about whether a submarine swims.
LLMs are simple: it doesn't take much more than high school math to explain their building blocks.
What's interesting is that they can remix tasks they've been trained on very flexibly, creating new combinations they weren't directly trained on; compare this to earlier, smaller models like T5 that had a few set prefixes per task.
They have underlying flaws. Your example, for instance, is more about the limitations of tokenization than about "understanding". But those flaws don't keep them from being useful.
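To back up the "high school math" claim, here's a minimal sketch of single-head attention in plain numpy - toy sizes and random weights, nothing from any real model, but these are the actual building-block operations:

    import numpy as np

    # Minimal single-head self-attention in toy dimensions.
    # Everything here is matrix multiplication plus one softmax.

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                  # 4 tokens, 8-dim embeddings

    x = rng.normal(size=(seq_len, d_model))  # token embeddings (made up)
    Wq = rng.normal(size=(d_model, d_model)) # learned in a real model;
    Wk = rng.normal(size=(d_model, d_model)) # random here
    Wv = rng.normal(size=(d_model, d_model))

    q, k, v = x @ Wq, x @ Wk, x @ Wv         # queries, keys, values
    scores = q @ k.T / np.sqrt(d_model)      # how strongly each token attends to each other one

    # Softmax each row into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    out = weights @ v                        # weighted mix of value vectors
    print(out.shape)                         # (4, 8): one new vector per token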
They do stop it from being intelligent though. Being able to spit out cool and useful stuff is a great achievement. Actual understanding is required for AGI and this demonstrably isn't that, right?
The fact that he's a very eclectic thinker and not very systematic, although that's one of the things that a lot of people admire about him. His religious commitments as well, I would guess. But he also had some very odd ideas, like refusing to get a tumor removed from his face. And he was not the best at communicating his ideas.
Right, politicians and officials working on behalf of the tax-filing lobby could introduce lots of changes to the tax code with a view to making this software useless.
The point of open sourcing from a dying ship is that the groups that can modify this software and resell it all start from it as a baseline. Is TurboTax lean, mean code that can sell at a low enough price, while still meeting profit expectations, if it needs drastic changes?
Intuit can spend all the money they can convince investors to lose relative to last year and to expectations, but they'll have a yggdrasil of companies to buy out, each starting from a turn-key solution, and all their costs fighting the OMB will amount to nothing if they screw one buyout up and an updated software drop goes out for a new round of $5 filers.
The companies Intuit will have to buy out don't have to make any profit per filer; they just have to take filers away from Intuit.
I mean… in some sense, it might be nice if the company doing your tax preparation is not too lean and mean; their whole point is to eat the hit if they screw it up, right? The math is not actually hard.
But, realistically, I guess if a self-service tax prep company messed up your taxes, they’d make sure you end up in arbitration.
The form you sign to authorize efiling says "I declare that I have examined a copy of the income tax return ... and to the best of my knowledge and belief, it is true, correct, and complete." If you think Intuit is going to cover you, you haven't really seen the things they do.
This attitude is depressingly common in lots of professional, white-collar industries I'm afraid. I just came from the /r/law subreddit and was amazed at the kneejerk dismissal there of Dario Amodei's recent comments about legal work, and of those commenters who took them seriously. It's probably as much a coping mechanism as it is complacency, but, either way, it bodes very poorly for our future efforts at mitigating whatever economic and social upheaval is coming.
This is the response to most new technologies; folks simply don't want to accept the future before the ramifications truly hit. If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest through the trees because their heads are buried in the sand.
LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment, and we need to come to terms with it as the work continues to develop; and we need to adapt, quickly, in order to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.
I don't think it is only (or even mostly) not wanting to accept it; I think it is in at least equal measure just plain skepticism. We've seen all sorts of wild statements about how much something is going to revolutionize X, and then it turns out to be nothing. Most people disbelieve these sorts of claims until they see real evidence for themselves... and that is a good default position.
Hedging against the possibility of being displaced economically, before it happens, is always prudent.
If the future didn't turn out to be revolutionary, at worst you've done some "unnecessary" work, but at least you might've acquired some skills or value. In the case of most well-off programmers, I suspect buying assets/investments that can afford them at least a reasonable lifestyle is likely an option too.
So the default position of being stationary, and assuming the world continues the way it has been, is not such a good idea. One should always assume the worst possible outcome, and plan for that.
> If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest through the trees because their heads are buried in the sand.
Look, we see the forest. We are just not impressed by it.
Having unlimited chaos monkeys at will is not revolutionizing anything.
Lawyers don't even use version control software a lot of the time. They burn hundreds of paralegal hours reconciling revisions, a task that could be made 100x faster and easier with Git.
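For anyone outside software, here's what "reconciling revisions" looks like when a tool does it - a toy sketch using Python's standard difflib on a made-up clause; git diff does the same thing at repository scale:

    import difflib

    # Two drafts of a (made-up) contract clause.
    draft_v1 = """The Licensee shall pay a fee of $10,000.
    Payment is due within 30 days of invoice.
    This agreement is governed by the laws of Delaware.""".splitlines()

    draft_v2 = """The Licensee shall pay a fee of $12,500.
    Payment is due within 45 days of invoice.
    This agreement is governed by the laws of Delaware.""".splitlines()

    # difflib produces the same kind of line-by-line comparison that
    # `git diff` would: every changed line, flagged automatically.
    for line in difflib.unified_diff(draft_v1, draft_v2,
                                     fromfile="draft_v1", tofile="draft_v2",
                                     lineterm=""):
        print(line)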
There's no guarantee a technology will take off, even if it's really, really good. Because we don't decide if that tech takes off - the lawyers do. And they might not care, or they might decide billing more hours is better, actually.
Attorneys have the bar to protect them from technology they don’t want. They’ve done it many times before, and they’ll do it again. They are starting to entertain LLMs, but not in a way that would affect their billable hours.
Many of us would prefer to see the technological leaps evenly distributed (even clean drinking water that does not need to be boiled before consumption is not a baseline in 2025). So if you want to adapt to your new and improved position where you are just pushing buttons, fine - but some of us are actually interested in how computers work (and are really quite uninterested in most companies' bottom lines). It's just how it is ;)
I think many people just settled in while we had no real technological change for 15 years. Real change, not an update to a web framework.
When I graduated high school, I had never been on the internet, nor did I know anyone who had. The internet was this vague "information superhighway" that I didn't really know what to make of.
If you are of a certain age, though, you would think a pointless update to React was all the change that was ever coming.
Yes. If you judge only from the hype, then you can't distinguish LLMs from crypto, or nuclear weapons from nuclear automobiles.
If you always say that every new fad is just hype, then you'll even be right 99.9% of the time. But if you want to be more valuable than a rock (https://www.astralcodexten.com/p/heuristics-that-almost-alwa...), then you need to dig into the object-level facts and form an opinion.
In my opinion, AI has a much higher likelihood of changing everything very quickly than crypto or similar technologies ever did.
I didn’t buy the hype of any of those things, but I believe AI is going to change everything, much like the introduction of the internet. People are dismissing AI because its code is not bug-free, completely ignoring the fact that it generates PRs in minutes from a poorly written text prompt. As if that’s not impressive. In fact, if you put a human engineer on the receiving end of the same prompt, with the same context as what we’re sending to the LLM, I doubt they could produce code half as good in 10x the time. It’s science fiction coming true, and it’s only going to continue to improve.
Again, there were people just as sure about crypto as you are now about AI. They dismissed criticism because they thought the technology was impressive and revolutionary. That it was science fiction come true and only going to continue to improve. It's the exact same hype-driven rhetoric.
If you want to convince skeptics, talk about examples: vibe code a successful business, show off your success with using AI. Telling people it's the future, and that if they disagree they have their heads in the sand, is wholly unconvincing.
As someone who gleefully followed along as the Web3 hype train derailed, an important distinction is that crypto turns every believer into a salesperson, by design. There were some that were truly passionate about the potential applications for blockchain technology, but by and large they were drowned out by people who, having poured $10k into the memecoin of the week, wanted to see the price of that coin rise.
This doesn't feel like that. The applications of generative AI have become self-evident to anyone that's followed their rise. Specific applications of AI resemble snake oil, and there are hucksters who pivoted from crypto to AI, but the ratio of legit use cases to scams isn't even close.
If anything, the incentives for embellishment have flipped since crypto. VC-funded AI companies will dreamily fire press releases about AI taking us to Mars, but it doesn't have the pseudo-grassroots quality of cryptocurrency hype. The average worker is incentivized to be an AI skeptic. The rise of generative AI threatens workers in several fields today, and has already negatively impacted copywriters and freelance artists. I absolutely understand why people in those fields would respond by calling AI use unethical and criticize the shortcomings of today's models.
We'll see what the next few years hold. But personally, I foresee AI integration ramping up. Even if the models themselves completely stagnate from this point on, there's a lot of missing glue between the models and the real world.
You don't have to be able to vibe code an entire business from scratch to know that the technology behind AI is significantly more impressive than VR, crypto, web3, etc. What the free version of ChatGPT can do right now, and not just in coding, would've been unimaginable to most people just 5 years ago.
Don't let people and companies lazily using AI to put out low-quality content blind you to its potential, or to the reality of what it can do right now. Look at Google's Veo 3: most people in the world right now won't be able to tell that its output is AI generated and not real.
The value of those was always far-fetched, and required a critical mass adopting them before becoming potentially useful. But LLMs' value is much more immediate and doesn't require any change in the rest of the world. If you use one and are amplified by it, you are... simply better off.
Frankly, I disagree that LLMs' value is immediate. What I do see is a whole lot of damage they're causing, just like the hype cycles before them. It's fine for us to disagree on this, but to say I'm burying my head in the sand, not wanting to accept "the future", is exactly the same hype-driven bullshit the crypto crowd was pushing.
That's exactly what I mean by immediate value. It's undeniably, incredibly amplifying to me, whether you or others agree or not. No network effect required. It doesn't matter whether I convince anyone else of the value; I can capture it all on my own. Unlike ponzi schemes like web3, or VR experiences that require an entire shift in everyday life and an ecology of supporting software.
I don't need to convince anyone that LLMs are enabling me to do a lot more. This is what makes this hype different. It has bones. Once you've found a way to leverage them, they're undeniably helpful regardless of your prior disposition. Everyone else can say they're not useful and it rings hollow, because they obviously are to me. And thus probably useful to everyone else too.
Ah yes, please enjoy living in your moment and anticipating your entirely new world. I also hear all cars will be driving themselves soon and Jesus is coming back any day now.
I found it mildly amusing to contrast the puerile dismissiveness with your sole submission to this site: UK org's Red List of Endangered & Extinct crafts.
Adapt to your manager at bigcorp who is hyping the tech because it gives him something to do? No open source project is using the useless LLM shackles.
Why would we not? If they were so effective, their effectiveness would be apparent, inarguable, and those making use of it would advertise it as a demonstration of just that. Even if there were some sort of social stigma against it, AI has enough proponents to produce copious amounts of counterarguments through evidence all on their own.
Instead, we have a tiny handful of one-off events that were laboriously tuned and tweaked and massaged over extended periods of time, and a flood of slop in the form of broken patches, bloated and misleading issues, and nonsense bug bounty attempts.
I think the main reason might be that when the output is good the developer congratulates themselves, and when it's bad they make a post or comment about how bad AI is.
Then the people who congratulate the AI for helping get yelled at by the other category.
As long as the AI people stay in their lane and work on their own projects, they're not getting yelled at. And that ignores that AI has enough proponents to sustain projects of significant size on their own. Even if they were getting shouted at from across the fence, again, AI has enough proponents who would brave the yelling.
We'd still have more than tortured, isolated one-offs. We should have at least one well-known codebase maintained through the power of Silicon Valley's top silicon-based minds.
I think it's pretty reasonable to take a CEO's statements - any CEO, in any industry - with a grain of salt. They are under tremendous pressure to paint the rosiest picture possible of their future. They actually need you to "believe" just as much as their team needs to deliver.
Just a grain? I say take it with a gargantuan train loaded with salt on all cars. An entire salt mine's worth. Markets, and CEOs, are downright insane; they are the only ones who stand to profit from this situation, and they have everything to gain.
I am not a software engineer, but I just can't imagine my job not being automated within 10 years or less.
10 years is about the time between word2vec's King – Man + Woman = Queen and now.
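For anyone who missed it, that's the famous 2013 word-embedding analogy result. A toy sketch of the mechanics, with hand-picked 2-D vectors standing in for the ~300-dimensional embeddings word2vec actually learns from raw text:

    import numpy as np

    # Hand-picked toy "embeddings": one axis roughly encodes royalty,
    # the other gender. word2vec learns this structure automatically.
    emb = {
        "king":  np.array([ 1.0,  1.0]),
        "man":   np.array([ 0.0,  1.0]),
        "woman": np.array([ 0.0, -1.0]),
        "queen": np.array([ 1.0, -1.0]),
        "apple": np.array([-1.0,  0.0]),
    }

    target = emb["king"] - emb["man"] + emb["woman"]

    def cos(a, b):
        # Cosine similarity: how close two directions are.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Nearest neighbour of the target, excluding the query words.
    best = max((w for w in emb if w not in ("king", "man", "woman")),
               key=lambda w: cos(emb[w], target))
    print(best)   # queen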
I think what is being highly underestimated is the false sense of security people feel because the jobs they interface with are also not automated, yet.
It is not hard to picture the network effect of automation: once one role is automated, the roles connected to it become easier to automate, and so on and so on, all while the models keep getting stronger.
I expect we will have a recession at some point and the jobs lost are gone forever.
Lawyers say those things, and then one law firm after another is frantically looking for a contractor to overpay to install a local RAG and chatbot combo.
In most professional industries getting to the right answer is only half the problem. You also need to be able to demonstrate why that is the right answer. Your answer has to stand up to criticism. If your answer is essentially the output of a very clever random number generator you can't ever do that. Even if an LLM could output an absolutely perfect legal argument that matched what a supreme court judge would argue every time, that still wouldn't be good enough. You'd still need a person there to be accountable for making the argument and to defend the argument.
Software isn't like this. No one cares why you wrote the code in your PR. They only care about whether it's right.
This is why LLMs could be useful in one industry and a lot less useful in another.
It's a popular view but it's massively controversial and far from being a consensus view. See here for a good overview of some of the problems with it.