Well, the proof is perfectly valid, as on some abstract level a computer == Turing machine == something you can reason about. If you can prove that your program works in the required way, then your proof holds.
However, brute force solutions are ugly and shed little (if any) new light on the problem, and in the words of G. H. Hardy: "There is no permanent place in the world for ugly mathematics". With things like this I just can't help but think that an elegant and enlightening proof will unexpectedly pop up some time in the distant future when we're all off researching some sort of new-age hypermaths or something...
I'd agree with your dislike of ugly solutions. I guess the benefit of this being accepted as a proof is that all the conjectures that begin 'Assuming that x is true...' can now be converted to theorems saying 'x is true, therefore...', which might tidy up a lot of loose ends.
There are proofs (by humans) that are too large for all but a rare few to check. Wiles' proof of Fermat's Last Theorem was 100 pages. Courtesy of Wikipedia:
"Because Wiles had incorporated the work of so many other specialists, it had been suggested in 1994 that only a small number of people were capable of fully understanding at that time all the details of what Wiles has done." [http://en.wikipedia.org/wiki/Wiles%27_proof_of_Fermat%27s_La...]
No one would argue that an unconfirmed computer proof is more trustworthy than an unconfirmed proof from a human.
I think you're overlooking the real thrust of the situation. Wiles' proof was relatively lengthy and involved, but it can be examined and studied by humans in an extremely reasonable amount of time. It only took three days for Wiles to present his original proof. There aren't many people in the world who can understand it, and there are fewer who are knowledgeable enough to confirm its validity, but they exist.
Conversely, the proof described in the NYTimes article is of such length that no single mathematician could confirm its validity -- rather than deducing the non-existence of the object in question by a logical argument, it examined a huge number of possible cases. In that respect, it is far more similar to the proof of the four color map theorem. The issue is not so much whether or not we trust the computer's result, but more what it means for mathematics to proceed with results that we do not, in a traditional sense, understand.
This question came up in some of my more abstract classes in college. A few professors in my department were working on slightly different problems in the same general domain of automated mathematical problem solving and proof construction. The general consensus as I remember it was that the simplest way to get around the problem of no human being able to survey these proofs was to do something like this:
1. Define a machine read- and writable logic that can express your theorem and the steps you think it'll take to get there.
2. Write an automated proof-checker that can verify that proofs in this language are correct.
3. Prove the correctness of the checker.
4. Write a program that starts with your axioms and writes out a proof that ends in your theorem.
5. Verify it with the checker.
Now the only proof that needs to be human-surveyable for us to be certain that everything is correct is the one in step 3. The proof created by step 4 can fill up a skyscraper full of hard disks, and as long as the proof checker verifies it we know that it must be correct. Given a simple enough proof language (FOPL, for example) and a suitable programming language (the choice at the time was lisp, I believe) step 3's proof is easily short enough for a human to verify.
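To make the idea concrete, here is a toy sketch of the step-2 checker, in Python rather than the lisp mentioned above, for a bare-bones propositional logic whose only inference rule is modus ponens. The formula encoding and rule set are invented for illustration, not what any of those researchers actually used:

```python
# A formula is either a string (an atom like 'A') or a tuple
# ('->', p, q) meaning "p implies q".
# A proof is a list of formulas; each must be an axiom, a repeat of an
# earlier line, or follow from two earlier lines by modus ponens.

def check_proof(axioms, lines):
    proved = set(axioms)
    for formula in lines:
        if formula in proved:
            continue  # restating an axiom or an earlier line is fine
        # modus ponens: look for earlier lines p and (p -> formula)
        ok = any(
            earlier[0] == '->' and earlier[1] in proved and earlier[2] == formula
            for earlier in proved
        )
        if not ok:
            return False  # this line doesn't follow: reject the whole proof
        proved.add(formula)
    return True
```

The point is that this checker is a few lines long and surveyable by eye, no matter how enormous the proof fed into it is.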
The only hole left is the possibility of a subtle machine malfunction that causes the checker to falsely categorize a proof as correct. On modern hardware this possibility is remote enough that once a proof has been verified a number of times by independent researchers on their own hardware we can safely ignore it.
Reminds me of discussions which purported to prove, not that a particular number was prime, but that there was a 1 in BigNum probability that it was not prime.
The mathematicians in the audience seemed to consider that unacceptable to the point of being useless. Nonetheless, someone pointed out (to much laughter) that the chances of any given proof being incorrect were significantly higher than that 1 in BigNum. Of course, we've all been there--thought we proved something we didn't.
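The thread doesn't say which test was being discussed, but the standard example of that kind of claim is a Miller-Rabin-style test: each round with a random witness catches a composite with probability at least 3/4, so k independent rounds leave at most a 1-in-4^k chance of wrongly declaring a composite prime. A sketch:

```python
import random

def probably_prime(n, rounds=40):
    """Miller-Rabin: 'composite' is certain, 'prime' means
    wrong with probability below 4**-rounds."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 as 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is definitely composite
    return True                   # no witness found in any round
```

With rounds=40 the error bound is 1 in 4^40, which is the "1 in BigNum" the audience found so objectionable.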
I regard the chances of machine malfunction similarly, and have a similar standard for proof. If you can examine the code and the processes sufficiently, there is no reason not to trust the machine. At least, no more reason than there is not to trust your and others' minds. I suspect this view is common, given that most folks consider the Four Color Theorem proven.
Yes, exactly. What if the thing they proved is not what they actually coded in the end? There is a good enough probability of bugs if the code is sufficiently big!
At least to me, this is a nonissue. If the program can be proven correct (in the sense that its logic is confirmed to produce the right answer), and the hardware can be shown to execute the program properly, then the program's answer is proven.
Even the already tiny chance of a hardware error can be eliminated in all practicality by the combination of error checking within the program itself and repetition on independent hardware.
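As a small illustration of "error checking within the program itself" (my example, not one from the thread): have the program verify each result against an independent property before trusting it, so a stray hardware fault mid-computation is very unlikely to slip through:

```python
def factor(n):
    """Trial-division factorization of n into primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def checked_factor(n):
    factors = factor(n)
    # internal error check: multiplying the factors back must recover n,
    # so a bit flip during the factorization would almost surely be caught
    product = 1
    for p in factors:
        product *= p
    assert product == n, "self-check failed: possible hardware fault"
    return factors
```

Repeating such a checked run on a second, independent machine then drives the residual chance of an undetected fault down to something negligible.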
Cray 1S? Isn't that about the power of today's desktop? The Cray could do 160 to 250 MFLOPS depending on the problem (Wikipedia). That seems pretty slow. I answered my own question: http://forums.techpowerup.com/showthread.php?t=94721. There are processors doing up to 60 GFLOPS. Must be an error in the story.
EDIT: I have to start paying attention to bylines. There are archeologists among us.
Reminds me of when the four color theorem was proved using a computer program. They had to convince everyone the proof was valid, and there were lots of questions about the validity of the program.
I can't help thinking Douglas Adams brought this exact problem up ages ago, when Deep Thought found the Answer to Life, the Universe and Everything.
As Adams pointed out the answer is sometimes useless because the process (or the question) was incomprehensible to the "beings" who sought the answer.
Then later, when Earth is destroyed before the Answer is calculated, he makes an even stronger point: it doesn't matter.
The answer gave the philosophers occupation for years before it was discovered; it provided millions of years of purpose for the "mice" in finding the answer. And ultimately the lack of the answer didn't matter anyway.
Perhaps we can just accept the answer (knowing that it could be inaccurate and keeping an eye out for indicators) and make use of it? Surely that is more worthwhile. It's something of a leap to completely trust the computer, but it seems more dangerous to refuse to trust it at all :)
For brute force proofs, the proof has basically nothing to do with the output of the program. I'll explain:
A mathematical proof is just some utterance or depiction that will convince your audience that you have proven something. The actual utterance that might 'count' is totally dependent upon who is in the audience. If you are a topologist and you are talking to another topologist working on very similar stuff, your proofs would be incomprehensible to even other mathematics researchers, who could not determine if you were giving a valid proof or not, while the other topologist could easily tell.
If your audience is a really smart physics professor, that same topology proof would have to pull in many other facts, and explicitly quote accepted theorems, or you would risk the physicist jumping up and saying "not so fast" every time he sees a break in the reasoning. At least the physicist will be comfortable with the existence of theorems.
Now, a proof of the same thing, given to an alien with an IQ of 5000 but whose race never thought up the concept of a theorem, would be exquisitely painful. You might have to actually give the proof of each theorem you use, and each theorem each of those theorems uses. Otherwise, he would say "your logic is flawless, but you assume facts you have not proven, so this proof is incomplete."
If people were way smarter, proofs would be way shorter, since nobody writes down things that are really obvious. Nobody says something like "since the real numbers are commutative under multiplication..." because there is no need to say it. If people were 50 times smarter or something, way more complex things would just 'go without saying'.
So, what does some kind of brute force proof really consist of? It's a program that runs on a computer generally. So the proof is really the source code. If people are convinced the program does what you claim, and then you run it, that is certainly a proof.
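As a toy example of this view (my choice of problem, not one from the article): the source code below, together with the claim that running it prints True, is a finite-check "proof" that every even number from 4 to 10,000 is a sum of two primes. Whether the code itself counts as the proof is exactly the question in this thread:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return sieve

def goldbach_holds_below(limit):
    """Exhaustively check Goldbach's conjecture on 4..limit."""
    is_prime = primes_up_to(limit)
    for even in range(4, limit + 1, 2):
        if not any(is_prime[p] and is_prime[even - p]
                   for p in range(2, even // 2 + 1)):
            return False  # counterexample found
    return True

print(goldbach_holds_below(10000))  # prints True on any correct machine
```

If you believe the twenty-odd lines of source do what they say, then running them settles the finite claim; the program's output by itself, divorced from the source, settles nothing.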
If you can't count that, you also can't count any other non-constructive proof. I would say those are basically pointless: you can use the result of the proof in other proofs, but you can't build on it very much.
no, this is really begging the question. you've reduced "the proposition is true" to "the algorithm is correct, and the algorithm returns 'true'". but to establish the former you still have to do the hard part of actually _proving_ the latter. i suppose this amounts to mathematically proving, among other things, that your pentium 4 or whatever is incapable of executing a program incorrectly.
You miss my point completely. You are thinking that mathematical proofs are airtight, completely unassailable, and locked in for all time. In reality (in practice) they are pretty sloppy, and are really just a form of persuasion.
Now, persuading a group of mathematicians to believe a claim is a high bar, and they are very good at finding logical holes, gaps, and mistakes. As Nietzsche points out in Beyond Good and Evil, it is quite possible that humans as a species have all sorts of blind spots in our reasoning abilities, and that such blind spots are essential for life itself.
In fact, we know of many of these blind spots, or failures of human reason. One great example is the Monty Hall problem. Others are the lottery, gambling, stock market events, advertising (the effects of), and all sorts of experiments that economists do these days.
So before you get concerned about the Pentium IV's accuracy, and 'mathematically proving' it is correct, consider that your brain is a very imperfect reason engine, and there is no reason to suspect that our intuition is accurate except in (and still imperfectly in) those situations where those intuitions improved the reproductive chances of our ancestors.
>...our intuition is accurate except in (and still imperfectly in) those situations where those intuitions improved the reproductive chances of our ancestors
That actually contradicts my point. Our intuition and reasoning powers did not evolve toward accuracy or a better understanding of the world; they evolved purely toward whatever kept our monkey ancestors out of lion mouths. Sometimes that could have caused our primitive ancestors to believe that they could only safely cross the plains while holding magic rubaduba fruit. This would of course be reinforced, since perhaps the rubaduba fruit is only on trees in the one month a year that the lions are far away looking for water. The proto-humans who believed the rubaduba bullshit survived because they were able to pick up on a correlation, but they only acted on it because they believed in causation, meaning the ancestors of ours who thought the rubaduba fruit theory was idiotic got eaten by lions, while the credulous saps who filled sacks with 50 pounds of inedible crap and carried it across the plain were fine.
This is a just-so story, but it illustrates the point: not only do humans have terrible reasoning ability when it comes to money and anything remotely scary, it's quite possible we evolved specifically toward irrationality in certain situations. There are billions of dollars spent studying this every year; it's called marketing.
"[proofs] are pretty sloppy, and are really just a form of persuasion."
yes... with one of these computer brute force proofs, you must take it on faith that the result of executing an algorithm is as someone has claimed, be it the paper author or your own machine. this is proof by handwaving.
It was a metaphor, here I'll explain it. The singularity is something in the future. Some believe the singularity will be an entity smarter than humans, that can do things, including move, perhaps even move itself in time. Who knows? Thus the trotting use as an anthropomorphic description of singularity behavior.
Furthermore, as technology progresses, the date at which the singularity will arrive is getting closer and closer, thus the singularity is moving backward in time, from the future to the present.
So humanity and the singularity, in a plane beyond the space time continuum are moving toward each other, humanity moving forward through time, and the singularity moving backward through time.
It's just poetry. That's why jderick said to use your imagination.
> However, brute force solutions are ugly and shed little (if any) new light on the problem, and in the words of G. H. Hardy: "There is no permanent place in the world for ugly mathematics". With things like this I just can't help but think that an elegant and enlightening proof will unexpectedly pop up some time in the distant future when we're all off researching some sort of new-age hypermaths or something...
... well, I suppose a man's got to dream, right?