"For an arbitrary polynomial function [...] determining whether it’s convex is what’s called NP-hard. That means that the most powerful computers in the world couldn’t provide an answer in a reasonable amount of time."
eye twitch
A problem's complexity class only tells you about worst-case asymptotic behavior, not how long real instances take. There are decent algorithms for a lot of NP-complete problems (airline/package routing, compiler backend optimizations, etc.) that don't require "the most powerful computers in the world". I'm pretty sure the definition of NP-hardness can be boiled down to a sentence that's much more accurate than the article's, something like "at least as hard as every problem in NP".
Your last statement is incorrect for one of the problems mentioned in the parent comment ("airline/package routing", which I take to mean linear programming problems).
Simplex-type algorithms are typically used for these problems. These algorithms have exponential time complexity in the worst case. In practice, though, well-designed simplex implementations run in low-order polynomial time and terminate with an exact optimum.
This case, and others, are examples of situations where conventional complexity analysis fails to provide insight into real-world performance.
Last I knew, people were trying to analyze expected time complexity of certain (analysis-friendly) simplex algorithms under distributional constraints on the input structure matrices. But it's important to understand that this is mathematical back-filling to explain what is already known to hold in practice.
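To make the "fast in practice" point concrete, here's a minimal sketch (assuming scipy; the numbers are made up): a tiny transportation LP standing in for the routing problems mentioned above. As far as I know, method="highs" runs a dual-simplex solve here, and it returns an exact vertex optimum essentially instantly, exponential worst case notwithstanding.

```python
from scipy.optimize import linprog

# Shipping cost per unit, warehouse i -> store j, flattened as
# [w1->s1, w1->s2, w1->s3, w2->s1, w2->s2, w2->s3].
cost = [4, 6, 9, 5, 3, 7]

A_ub = [[1, 1, 1, 0, 0, 0],   # warehouse 1 ships at most its stock
        [0, 0, 0, 1, 1, 1]]   # warehouse 2 likewise
b_ub = [60, 50]

A_eq = [[1, 0, 0, 1, 0, 0],   # each store receives exactly its demand
        [0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1]]
b_eq = [30, 40, 30]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x.reshape(2, 3))    # exact optimal shipment plan
print(res.fun)                # total cost
```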
I would expect an airline routing problem to be an integer program (because you can't send 2/3 of an aeroplane along one route and 1/3 along another), and integer programming is NP-hard. On the bright side, at all times you have an estimate of the best possible solution, so you can terminate early when you're within some percentage (say 1%) of optimal.
Interestingly, they still use the simplex method repeatedly during the branch-and-bound solving process, because it only takes a little work to adjust the solution at each step, whereas the methods with a guaranteed polynomial worst case tend to take that polynomial time regardless of how "close" they start to the optimum. So, as you say, conventional complexity analysis suggests that simplex would be an awful lot less useful than it is.
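A toy sketch of that loop, with loud caveats: it re-solves every relaxation from scratch with scipy's linprog, whereas a real solver would warm-start the dual simplex as described above. It branches on a fractional variable, uses each relaxation's value as a bound, and prunes any node that can't beat the incumbent by more than 1%, which is exactly the early-termination idea.

```python
import math
from scipy.optimize import linprog

c = [-8, -11, -6, -4]            # maximize 8x1+11x2+6x3+4x4, so minimize the negation
A = [[5, 7, 4, 3]]               # knapsack-style constraint: 5x1+7x2+4x3+3x4 <= 14
b = [14]
GAP = 0.01                       # accept anything within 1% of optimal

best_val, best_x = math.inf, None
stack = [[(0.0, 1.0)] * len(c)]  # each node is just a set of variable bounds

while stack:
    bounds = stack.pop()
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    cutoff = math.inf if best_x is None else best_val - GAP * abs(best_val)
    if not res.success or res.fun >= cutoff:
        continue                 # infeasible, or can't beat the incumbent by >1%
    frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:                 # integral LP solution: new incumbent
        best_val, best_x = res.fun, res.x
        continue
    i = frac[0]                  # branch on the first fractional variable
    stack.append(bounds[:i] + [(0.0, 0.0)] + bounds[i + 1:])
    stack.append(bounds[:i] + [(1.0, 1.0)] + bounds[i + 1:])

print(best_x, -best_val)
```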
for linear programming there are several well known polynomial time algorithms (the ellipsoid method, interior point methods like Karmarkar's projective algorithm, etc)--the problem itself is in P, not NP-hard.
for worst case complexities it's often just a small region of the problem space that makes the bound large--but enumerating or cutting out those cases is tricky. you just don't see them on average because they are few (cf. quicksort: O(n^2) worst case, O(n log n) average).
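the quicksort point is easy to see by counting comparisons (quick sketch; first-element pivot on purpose, so already-sorted input is the bad case):

```python
import random

def quicksort_comparisons(a):
    # iterative quicksort (Lomuto partition, first element as pivot)
    # that returns how many comparisons it made
    a, count, stack = list(a), 0, [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot, i = a[lo], lo
        for j in range(lo + 1, hi + 1):
            count += 1
            if a[j] < pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[lo], a[i] = a[i], a[lo]
        stack.extend([(lo, i - 1), (i + 1, hi)])
    return count

n = 2000
print("sorted input:  ", quicksort_comparisons(range(n)))                    # ~n^2/2
print("shuffled input:", quicksort_comparisons(random.sample(range(n), n)))  # ~n log n
```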
Well, a lot of them can approximate "the" answer to within an arbitrarily and provably small amount, so it comes down to definitions. For example, there is a polynomial-time approximation scheme for Euclidean TSP that produces a solution with length at most (1 + epsilon) * optimal.
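The (1 + epsilon) scheme (Arora's PTAS) is too involved to sketch in a comment, but here's a much simpler example of a provable guarantee, assuming points in the plane: the MST-doubling heuristic, whose tour is at most 2 * optimal, because the MST weight lower-bounds any tour and a preorder walk of the tree visits every city.

```python
import math

def mst_tour_length(points):
    # Tour via a preorder walk of the minimum spanning tree; for metric
    # (e.g. Euclidean) instances this is at most twice the optimal tour.
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm: grow the MST outward from vertex 0
    in_tree, parent = {0}, {}
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        i = min(best, key=lambda k: best[k][0])
        parent[i] = best.pop(i)[1]
        in_tree.add(i)
        for j in best:
            if dist(i, j) < best[j][0]:
                best[j] = (dist(i, j), i)

    # Preorder walk of the tree, shortcutting repeated visits
    children = {i: [] for i in range(n)}
    for child, par in parent.items():
        children[par].append(child)
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(reversed(children[v]))
    return sum(dist(order[k], order[(k + 1) % n]) for k in range(n))

print(mst_tour_length([(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]))
```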
That's nuttery. Borderline hero-worship. There are loads of epsilon-accurate proofs in math. Similarly, there are loads of non-perfect-but-more-efficient algorithms in CS.
"... is what's called NP-hard. That means that finding the answer is at least as hard as cracking a very strong cryptographic key, something that is currently considered too computationally expensive to be practical."
Would that be a better explanation, understandable by the article's target audience?
I was pleasantly surprised to find that the work of Amir Ali Ahmadi, a friend and former student at Maryland where I was his EE TA, has appeared on Hacker News. We both took the same graduate-level class on optimal control; despite being an undergrad, he was one of the best students in the class.
The result is intriguing and reasonably accessible: for all (multivariate) polynomials whose degrees are even and at least four, determining convexity (of any type! strong, strict, regular, pseudo-, quasi-) is strongly NP-hard. To prove this, the authors equate the problem of determining convexity to the problem of determining the nonnegativity of biquadratic forms -- a known NP-hard problem.
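For anyone curious how convexity turns into a statement about biquadratic forms, the connection (as I understand it; notation mine) goes through the Hessian:

```latex
p \ \text{convex}
\;\Longleftrightarrow\; \nabla^2 p(x) \succeq 0 \ \ \forall x \in \mathbb{R}^n
\;\Longleftrightarrow\; y^{\top} \nabla^2 p(x)\, y \ge 0 \ \ \forall x, y \in \mathbb{R}^n.
```

When p has degree 4, every entry of the Hessian is quadratic in x, so y^T ∇²p(x) y is quadratic in x and quadratic in y separately (a biquadratic form), and deciding nonnegativity of such forms is the known NP-hard problem they start from.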
Like most things on physorg, this is blogspam. The original is at http://web.mit.edu/newsoffice/2011/convexity-0715.html and (unlike the physorg version) includes links to more detailed information about the research.
It's also badly written, buries the lead, and exaggerates the result (the math problem itself didn't fall, but a simpler problem which is almost as useful did).