> during the first year of grad school I realized that AI, as practiced at the time, was a hoax.
I had a similar realization during grad school about a lot of the popular topics at the time (early 2000s). I even used to call them "the hoaxes of computer science". Things like grid computing or formal methods of software engineering had a lot of resources behind them, but nobody was able to use the results. Instead, very different formats of these ideas are what took root: cloud computing and advanced type systems.
> the low end eats the high end: that it's good to be the "entry level" option, even though that will be less prestigious, because if you're not, someone else will be, and will squash you against the ceiling.
I wish every grad student had been forced to memorize this statement. Build something usable, not clever.
Rather than an outright hoax, I like the term "fad". There are fads in technology, some of which are directly inspired by what has become possible and some of which are mutations of other ideas. Some fads have more worth or more longevity than others -- in the world of clothing, denim jeans are now a foundation on which to build; I might consider object-oriented language features to be similar.
Like stocks, you can buy ideas “low” and sell them “high.” Some ideas are cyclical too: AI, mainframes/cloud, and so on. And this extends beyond tech; for instance, “equity” is currently hot, but that may be short-lived, which is unfortunate.
> Things like grid computing or formal methods of software engineering had a lot of resources behind them, but nobody was able to use the results. Instead, very different formats of these ideas are what took root: cloud computing and advanced type systems.
The clearest example of this dynamic is probably the "Fifth Generation Computing Systems" initiative, which was described as a "hoax" for a long time but managed to characterize quite closely the way computing would ultimately be done in the 2010s and will probably be done in the 2020s.
Though that particular initiative had some deeply weird focus on using Prolog-derived query languages for everything, which ultimately failed because that whole paradigm lacked compositionality and was not feasibly extensible to concurrent/parallel compute (which was obviously a big focus of FGCS). Functional programming has proven a lot more influential overall.
I don't agree about grid computing. Many scientists got work done with it on aggregations of clusters. LIGO used pyGlobus to transfer large amounts of scientific data.
Absolutely. Things that were commercial failures were often huge successes in the scientific community. If you don't see why something is popular it's probably not because it's useless, it's probably because you aren't the intended user. Which is fine but a very different conclusion.
And the early Beowulf cluster stuff was definitely breaking new ground, and is the direct ancestor of the most powerful supercomputers in the world right now.
There were many cool things about grid computing and I think they got some of the abstractions right.
However, there was a larger gap between what was actually possible and what people claimed was possible. You'll see this gap in other software too, but compared to the gap between what AWS says it can do and what it actually does, grid computing's was much wider.
The quality of the systems developed by a large company with lots of resources is going to be much better than that of a collaboration of different scientists and software engineering groups at different national labs and universities.
> the low end eats the high end: that it's good to be the "entry level" option, even though that will be less prestigious, because if you're not, someone else will be, and will squash you against the ceiling.
This happens with jobs too, especially software jobs. Nobody wants to do software QA. Want to know how to get a software engineering job when the market is tight or otherwise inaccessible? Software QA.
Incidentally, I think being in QA and being a good engineer is a recipe for a very good career. A surprising number of QA software developers are... just not very good developers. Working with a good developer that just happens to specialize in QA is an amazing experience.
Would that good-engineer QA specialist have a good career in terms of appreciation and remuneration, or merely a good career in terms of being the least likely to get laid off, and providing a lot of value to the company?
Well, both, but they are fairly decoupled, just like for 'regular' software developers. Pay is based on willingness to move (and get that sweet, sweet signing bonus), and willingness to negotiate.
Essentially: If you are an amazing engineer, you will likely do well everywhere. If you are a 'good' engineer, and you want to stand out, go into QA engineering where you will be relatively better than a lot of people.
Without the benefit of hindsight we can't tell which of these building blocks will become the next paradigm. I think your expectation that progress should be a direct line where every step gets you closer is mistaken. It's often guided by a very subjective feeling of interesting-ness which cannot be formalized.
Yes, it's exactly what I was getting at. It's memetic evolution; it is extreme open-endedness at work. Planning is only good when you get close to the solution and you can see the path ahead.
I knew someone who worked at a defense research group. Their head grant writer was pulling in 3x what the senior developers made, because he tried to quit and they had to make an offer he couldn't pass up.
Usually you don't counter-offer at all, and you don't throw money at someone like that unless there's a damned good reason.
This feels like a very sterile view of science and its actual history and practice. I was recently remembering how Marconi's puzzling success in sending a transatlantic wireless signal stimulated the discovery of the ionosphere.
Do you have any idea how many areas of science were opened up by our attempt to get to the Moon?
The range is literally from discovering that unit tests are good in software to discovering the Van Allen belts to learning about the geology and history of the Moon from the rocks that we brought back.
Could we have learned more science by doing something else with the money? Of course. But it is a dramatic overstatement to say that the Apollo program didn't "constitute any progress scientifically."
If you don’t believe an anonymous person here, see what prominent physicists say clearly on this topic, e.g., Steven Weinberg.
This is not to dismiss experimental research which is quite important, but to distinguish (experimental or theoretical) science from product development.
In https://www.thespacereview.com/article/1037/1 you'll find that he is very critical of manned spaceflight in general, but about Apollo he says, "No, at the time of Apollo, the astronauts did do some useful things. They brought back Lunar samples. They placed a laser reflector on the Moon that has been used ever since to monitor the motion of the Moon with incredible accuracy."
Earlier in the same interview he criticized NASA for canceling Apollo 18 and 19 because he wanted the science that would have been done, to be done.
I guess he didn't say what you thought he said.
That said, his criticism of NASA's efforts with manned flight isn't that it is useless to go to the Moon. It is that it takes a lot of work to get humans there, and robots can do the job much more safely and cheaply. Which also explains why he thought Apollo was useful: at the time, the technology of robotics was much worse, so humans were the only way to do the job.
If I recall correctly, an interviewer asks him about the scientific impact of landing on the Moon. He says, “it was there, but it was not that great” and “I think it was money spent on public amusement, and from all money spent on public amusement this money was best spent.”
I am not pushing this view; just a relevant comment.
— update, exact statement
> I think this was not money spent on science. It was money spent on an extremely important aspect of technology, and it was money spent on public amusement. And from all money spent on public amusement this chunk of money was best spent. [The scientific value], it was there but it was not very great.
He is critical of manned spaceflight and says plainly in a number of his talks, recalling from the top of my head, “manned spaceflight has cost such and such billions of dollars and has produced nothing of scientific value,” or “this was sold to the public as a scientific project but it’s nothing of the sort,” and that “it’s all done on earth.”
He mentions one area, but then says, “but actually that could have been done much cheaper using unmanned robots”
I agree costs are the issue here; that money could have been better spent.
Here is the unmanned rockets quote that you refer to.
> Those were useful things that could have been done by unmanned rockets, but in those days, the state of the art in computers and robotics was not what it is now.
The whole "the state of the art" bit I understand as saying that with modern computers and robotics, unmanned vehicles could have done the job. But they didn't have advanced enough computers and robotics at the time.
> Lack of good quality data (and qualified people to analyze it) is a bigger problem than lack of advanced models and computing power.
There's plenty of data and compute power, but what's often lacking in the ML field is precisely models that reflect reasonable priors for one's given use case. Good feature engineering (often relying on domain experts) is similarly underrated. You see this again and again when looking at how robust SOTA results are achieved. In a way, this means that good (non-"hoax") ML is ultimately a lot more similar to traditional statistics than most practitioners are willing to acknowledge.
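Roughly the kind of thing I mean (a toy sketch; the columns, numbers, and feature choices are invented for illustration, and the point is only that a feature a domain expert would suggest often buys more than extra model complexity or compute):

    # Toy sketch of domain-informed feature engineering. Data and column
    # names are made up for illustration only.
    import pandas as pd
    from sklearn.linear_model import Ridge

    df = pd.DataFrame({
        "floor_area_m2": [55, 80, 120, 95, 60, 140],
        "rooms":         [2, 3, 5, 4, 2, 6],
        "year_built":    [1965, 1990, 2010, 2001, 1958, 2018],
        "price":         [180_000, 320_000, 560_000, 410_000, 200_000, 650_000],
    })

    # Domain knowledge encoded directly as features, rather than hoping the
    # model rediscovers it from the raw columns:
    df["area_per_room"] = df["floor_area_m2"] / df["rooms"]
    df["age"] = 2021 - df["year_built"]

    X, y = df.drop(columns="price"), df["price"]
    model = Ridge().fit(X, y)
    print(dict(zip(X.columns, model.coef_.round(1))))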
While I wholeheartedly agree with your point, he said good quality data.
I am currently working with real estate data. There is no way of knowing whether an entry in the database is a house or a house's floor.
I had a project at a death insurance company (they pay for your funeral). They had customers dying and coming back to life.
You would say those are core business issues that should be dealt with.
Something can be both legitimately revolutionary/interesting, but also significantly over-hyped and misrepresented, often with strong for-profit incentives. Some recent good examples of this include progress in cryptocurrencies, decentralization, and ML/AI.
Sure, but many people disagree that ML itself is revolutionary. The basics of it were known (referred to, quite appropriately, as 'data mining') as early as the 1990s and perhaps earlier. We've added a smattering of new techniques since then, and compute power has been expanded via GPGPU, but there was no "revolutionary" shift in the field. Even multi-layer ("deep") neural networks are very old tech.
The smattering of new techniques seem to have made the difference between success on toy problems vs. being able to match or exceed human performance on many difficult tasks. So while naysayers are correct that "the math hasn't changed since the 90s!", enough has changed to make calling DL a paradigm shift accurate.
For reference, I can now get an intern to label images for a few hours, then train a black box algorithm to automate their efforts in another few hours. This algorithm is sensitive, brittle, and may have performance issues, but it's still already orders of magnitude better than what took days or months of effort prior. That to me is a revolution, regardless of the math.
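In case it's not obvious what that workflow looks like in code, here's a minimal sketch of the standard transfer-learning recipe (freeze a pretrained backbone, train a new head), assuming a recent torchvision; the paths, class layout, and hyperparameters are made up:

    # Minimal transfer-learning sketch: freeze a pretrained ResNet-18 and
    # train only a new classification head on a small, freshly labeled set.
    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # "labeled_images/" is assumed to hold one subfolder per class, i.e. the
    # output of a few hours of manual labeling.
    train_set = datasets.ImageFolder("labeled_images/", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights="DEFAULT")   # downloads pretrained weights
    for p in model.parameters():
        p.requires_grad = False                  # freeze the backbone
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):                       # "another few hours"
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()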
I assume you're being sarcastic, but there actually isn't anything to see. Plug-in hybrids blow any EV out of the water and will do so for the foreseeable future. They're cheaper, lighter, just as efficient on short trips, and much more practical on long trips.
Hybrid vehicles are the practical option today. Pure gasoline vehicles are outmoded and EVs are all hype.
And I'll assume you've never driven an EV, because almost everyone who has purchased an EV will never go back to an ICE vehicle. An EV purchase is a ratchet. Hybrids make a lot of sense for some people today, but battery electric vehicles are the inevitable future.
Perceptrons are indeed old tech. But try training models for even something as simple as handwriting recognition using techniques from the 90s and modern techniques but with the same training set and compute resources. You'll get much better results with the modern stuff.
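A quick way to see it for yourself: hold the data and network size fixed and only swap 90s-era choices (sigmoid units, plain SGD) for modern defaults (ReLU, Adam). This is a rough sketch rather than a benchmark, and the gap gets far larger once you add convolutions, dropout, and so on:

    # Same data, same-sized network; only the training-era defaults change.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    nineties = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                             solver="sgd", learning_rate_init=0.01,
                             max_iter=200, random_state=0)
    modern = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                           solver="adam", max_iter=200, random_state=0)

    for name, clf in [("1990s-style", nineties), ("modern", modern)]:
        clf.fit(X_train, y_train)
        print(name, round(clf.score(X_test, y_test), 3))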
It isn't a hoax, but the OP is exactly right: if it were more usable, people would see it for what it is, and not for what the silly media narrative makes it sound like.
As long as your technology is only usable by a high priesthood, you can make it look like magic.
Why do people feel entitled to "usable" ML at all?
In the last 5 years, we have made incomprehensibly huge improvements in power and usability. It's an active field, and improvements are still coming at a steady pace.
We have already revolutionized search, natural language processing & machine translation, image/audio/video processing, robotics, game AI, and advertising (for better or worse).
And on top of all this, we have significantly reduced the "time to first useful model", and we have significantly lowered the math and programming requirements for building and implementing models. And now we have transfer learning, which lets any old Joe Schmo benefit from massive computing power and datasets to build small on-device models that blow away SOTA accuracy from even a few years ago.
Oh, and the ML tooling ecosystem has become a substantial source of innovation in programming language design, "developer UX", and "data ops".
What the fuck more do you want? The people who seem the most upset that ML isn't magic seem to be the most confused about what ML even is and does.
> Why do people feel entitled to "usable" ML at all?
Yeah, people are annoying, with their demands to use software themselves. It would be much easier for everyone if computers were controlled by an elite group of engineers who could hide the complexity from the rest of us. Perhaps they could wear labcoats.
> What the fuck more do you want?
If I knew the answer to that, life would be a lot simpler.
> Yeah, people are annoying, with their demands to use software themselves. It would be much easier for everyone if computers were controlled by an elite group of engineers who could hide the complexity from the rest of us. Perhaps they could wear labcoats.
What are you even talking about?
It sounds like you're upset that cutting-edge technology still requires training & expertise to use and deploy effectively in industry.
Hey, can you please not take HN threads further into flamewar? We're trying to avoid that sort of thing here. If a comment contains a swipe, please don't escalate. Also, it's good to check if there's something in your earlier comment that might have been provocative in its own right (which there was: "What the fuck more do you want?" is a hop flameward).
> Yeah, people are annoying, with their demands to use software themselves. It would be much easier for everyone if computers were controlled by an elite group of engineers who could hide the complexity from the rest of us. Perhaps they could wear labcoats.
That's clearly not what nerdponx meant. Can you please stick to the site guidelines? "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." https://news.ycombinator.com/newsguidelines.html
> If I knew the answer to that, life would be a lot simpler.
I disagree. I made a straightforward interpretation of what was written. Given what came after the part I quoted, you really have to stretch to interpret it differently.
The OP underscored the same point using profanity.
I made a response that was clearly sardonic, attempting to be funny.
You can say, with your voice, “who won the Super Bowl last year?” to a device that fits in your pocket and it responds with its own voice with the correct answer. That’s pretty accessible.
Most of these systems are so far behind any real understanding of your words, though. They behave like speech-to-text followed by a Google search, whether that's how they're implemented or not.
And Google Search is, of course, merely a natural phenomenon that is mined somewhere in Siberia and exploited without anybody truly knowing what's going on.
Sure, Google search is technological progress, but to the best of my knowledge they aren't doing any fancy natural language understanding every time you enter a search. It's a big, supersized information retrieval system that hashes all your n-grams with a few hundred thousand special cases tacked on.
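For anyone wondering what "hashes all your n-grams" means concretely, here's a minimal sketch of the idea using scikit-learn's HashingVectorizer. This is an illustration of the technique, not a claim about how Google actually implements search.

    # Hash word n-grams of a query into a fixed-size sparse feature vector.
    from sklearn.feature_extraction.text import HashingVectorizer

    vectorizer = HashingVectorizer(ngram_range=(1, 3), n_features=2**20)
    query_vec = vectorizer.transform(["who won the super bowl last year"])
    print(query_vec.shape, query_vec.nnz, "non-zero hashed n-gram features")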
There are many, many practical examples of modern ML (especially DL). It would be interesting to hear why you think those examples are not indicative of a field which is useful/not a hoax.
That is absolutely correct, but is sadly the case in a lot of fields. It doesn't mean that the practical results we see (AlphaFold, ImageNet performance, NLP performance, robotic control with RL) aren't amazing progress.
Luckily due to so many people using ML these days, what's useful vs. fluff gets sorted out over time.
It's a fair question. Is DeepMind famous for its amazingly smart toys because its useful, similarly smart stuff is secret? Or public but boring? Or doesn't exist?
In theory it should be practical. The first generation has been adapted by other teams into excellent prediction servers that can be used now. The second gen is way more hush hush and has yet to be vetted, so we’ll have to see. I am watching for news of it eagerly!
I think it depends on what you call it. Instead of calling it AI or even ML you could call it pattern recognition or automatic model parameter estimation, but it doesn't sound as cool.
With emerging digital tools at the time which made it possible to store & retrieve datasets that I was already interpreting in detail anyway.
Since it was programmable too, I ended up using the memory to store the key points from many permutations of well-characterized raw training data, then running that against new datasets to give me advice on how to save time on the greatly reduced manual work remaining.