8th July 2008, 1:09 AM
http://www.wired.com/science/discoveries.../pb_theory
According to this, if you do enough comparisons you don't need science any more, or something like that. I'm not really sure how that works. Correlating facts in new ways is important, and it will speed up the process a lot and allow new realizations across multiple disciplines, but that's an "after science" thing. The initial data is still the keystone, and gathering it is still the most important part. No matter how fast we process, it's worthless without the data. It's the old programming maxim: garbage in, garbage out (REALLY old if you check my signature). The speed of analyzing data will certainly increase, and that's great. However, we still need to make observations to add data to our pool, and we still need to test new hypotheses for verification. Correlating data really fast is great, and aside from giving us new insights at a rate that could very well change the world, it will also give rise to new questions faster than ever before, but it will not, by itself, answer those questions. We'll still need to design new testable experiments and gather new data. That's never really going to change.
Wired gets stupid sometimes...
Oh, but there is one last bit of confusion. If you are dealing with frickin' petabytes, you are dealing with numbers so huge that you will find all sorts of crazy patterns which, at smaller scales, really would be meaningful. In a petabyte's worth of coin flips you'll find runs of fifty-odd heads in a row, but at that scale that's exactly what you should expect. So aside from proper data gathering (the science), you also need a proper understanding of probability and number theory, and good models. You need good math to know what counts as a meaningful result at that scale.
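Here's a rough sketch of what I mean (Python and all the specific numbers are just my own illustration, not anything from the article): generate pure noise and you'll still find pairs of variables that look "correlated", and the longest run of heads in n fair flips grows like log2(n), so big data guarantees flukes that would look amazing in a small sample.

```python
import numpy as np

# Multiple-comparisons illustration: pure noise, no real structure at all,
# yet some pairs of variables will still look "interesting" by chance.
rng = np.random.default_rng(0)
n_vars, n_obs = 200, 100
data = rng.normal(size=(n_vars, n_obs))       # 200 independent noise variables

corr = np.corrcoef(data)                      # all pairwise correlations
pairs = np.triu_indices(n_vars, k=1)          # count each pair once
strong = np.abs(corr[pairs]) > 0.3            # |r| > 0.3 "looks" meaningful
print(f"{strong.sum()} of {len(pairs[0])} noise pairs look correlated")

# Coin-flip illustration: the longest run of heads in n flips is
# typically around log2(n), so huge samples contain long runs
# that would be astonishing in a small one.
n_flips = 10**6
flips = rng.integers(0, 2, size=n_flips)      # 1 = heads, 0 = tails
longest = max(len(run) for run in "".join(map(str, flips)).split("0"))
print(f"longest run of heads in {n_flips} flips: {longest} "
      f"(log2(n) ~ {np.log2(n_flips):.0f})")
```

Scale either of those up to petabytes and the number of chance "discoveries" only grows, which is exactly why you need the math before you trust the pattern.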
"On two occasions, I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question." ~ Charles Babbage (1791-1871)