
> The physical rules are well understood

Nope. They're constantly updating these models with really finicky things like cloud nucleation rates that differ depending on which tree species' pollen is in the air. They've gotten a lot better (useful high-resolution forecasts have stretched from ~2 days out to ~7), but they're still wrong a lot of the time. The reason is the chaos, as you say. But chaos is deterministic, so the fact that a deterministic method can approximate a deterministic system is really not the surprising part.
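
To see why determinism alone doesn't make forecasting easy, here's a minimal sketch using the logistic map, the textbook deterministic-but-chaotic system. Nothing here comes from the weather models under discussion; it just illustrates sensitive dependence on initial conditions:

    # The logistic map x_{n+1} = r * x_n * (1 - x_n) is fully deterministic,
    # yet at r = 4.0 two nearly identical starting points diverge rapidly.

    def logistic_map(x, r=4.0):
        return r * x * (1 - x)

    x_a, x_b = 0.200000, 0.200001  # initial conditions differing by 1e-6
    for step in range(1, 51):
        x_a, x_b = logistic_map(x_a), logistic_map(x_b)
        if step % 10 == 0:
            print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.6f}")

A perfect model with a slightly imperfect initial condition still loses the trajectory; that's the forecasting problem in miniature.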

You don't get what's going on here because your baseline understanding is a lot worse than you think it is.

What they're doing is skipping literal numerical simulation in favor of graph- (attention-) based approaches. Typical weather models simulate the atmosphere at pretty fine spatial resolution and return hourly forecasts. Google's new approach learns an approximate Markov model at 6-hour resolution directly, so they don't need to run on massive supercomputers.
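
For intuition, a learned 6-hour Markov model boils down to an autoregressive rollout: one function maps the gridded atmospheric state at time t to the state at t + 6h, and you iterate it to reach longer horizons. The sketch below is a deliberately tiny stand-in; the linear-plus-tanh "step" and the 16-feature state are invented for illustration, while the real systems use deep graph/attention networks over the full atmospheric grid, trained on decades of reanalysis data:

    import numpy as np

    # Toy autoregressive surrogate: one "learned" step advances the state
    # by 6 hours; rolling it out k times yields a 6k-hour forecast.
    rng = np.random.default_rng(0)
    n_features = 16  # stand-in for a flattened (lat, lon, level, variable) grid
    W = rng.normal(scale=0.2, size=(n_features, n_features))  # pretend-trained weights

    def step_6h(state):
        """Advance the flattened atmospheric state by one 6-hour step."""
        return np.tanh(W @ state)  # nonlinearity keeps the rollout bounded

    state = rng.normal(size=n_features)  # initial condition, e.g. from data assimilation
    for k in range(1, 5):                # four steps = a 24-hour forecast
        state = step_6h(state)
        print(f"+{6 * k:2d}h forecast, first 3 values: {np.round(state[:3], 3)}")

The point of the 6-hour stride is cost: one network evaluation replaces six hours of simulated physics, which is how it sidesteps the supercomputer.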



It's a model of a model?

And it turns out to be better?

That's so counter-intuitive I'm kinda amazed anyone even bothered to research it, let alone that it worked.

Uh..... now do horse racing.


"All models are wrong, some models are useful." Some are more wrong and more useful simultaneously ;) This is actually the typical state of things in numerical simulation: we have infinite-resolution differential equations modeling such physical systems, but to implement them in silico we need to discretize and approximate various aspects of those models to achieve usefulness re: time and accuracy. Google has merely gone one level further in the tradeoff.

For more info on Google's approach, look into surrogate models. They're becoming more common, especially in fields like weather and geology.
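
The core surrogate-model recipe: run the expensive simulator offline to generate input/output pairs, fit a cheap model to them, and answer future queries from the cheap model. A minimal sketch, where `expensive_simulator` is a made-up stand-in for a real numerical code and the polynomial fit stands in for fancier regressors (Gaussian processes, deep nets):

    import numpy as np

    def expensive_simulator(x):
        # Stand-in for a costly numerical simulation (hours of CPU in real life).
        return np.sin(3 * x) + 0.5 * x

    # 1) Sample the simulator offline at a modest number of design points.
    x_train = np.linspace(0.0, 2.0, 20)
    y_train = expensive_simulator(x_train)

    # 2) Fit a cheap surrogate to the sampled data.
    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=7))

    # 3) Answer new queries from the surrogate at negligible cost.
    x_new = 1.234
    print("simulator:", round(expensive_simulator(x_new), 4))
    print("surrogate:", round(surrogate(x_new), 4))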



