Whatever the Deal

Another fool's pleasure.
Let your feet do the learning.

"But the price of quality is often the unique imprint they leave."
Avowed antitheist.
Did leer.

"It was no excuse to be young."
One of us needs to figure out how to cross the event horizon.

Never mind martyrs, the stars had to die so that we could live!
Anyone who tells you anything else doesn't mean a thing.

"The world is... different. It's just not what we wanted it to be."
"I don't live very well alone. Some people don't. We all have different ways of defending our territory."

At the heart of the issue is this: science is any method by which we test statements to find out whether they are true; evidence (or the lack thereof) is the only important fact. If you assert that a claim is fundamentally outside the purview of science, then you are also asserting that it is fundamentally disconnected from reality, and furthermore that the claim cannot be proven true or false; it is by definition unknowable, incomplete. In summary, science is the sum of all that is truly knowable and of the processes by which anything is truly known.

"Know how to solve every problem that has been solved."
—Richard Feynman (appears to have been on his chalkboard at the time of his death)

"Science is what we have learned about how to keep from fooling ourselves."
—Richard Feynman (unsourced)

Bee says, "A definition is never wrong, it's just more or less useful." I think that is the best description I've ever heard, and much more eloquent than my attempts to explain that whether Pluto is a planet is really just up to an arbitrary definition of "planet," which should be chosen not so much for the inclusion or exclusion of Pluto as for its precision in categorizing astronomical objects. Seems so long-winded now...

I disagree strongly with Stephen Hawking on this one.
The idea that extraterrestrial life is likely to be radically different from known life is, I think, only partially applicable. There are a few levels to consider. First, is it possible for life to form under extremely different circumstances? Well, we don't know yet! If it is, we might expect radically different life; the canonical example might be Titan, with its methane analog of Earth's water cycle. But that leads to a lot of complexities I'd like to ignore for the moment. So let's just stick with what we know, starting from observations of life here on Earth. Three main points come to mind:

1. Certain areas of Earth appear to remain (largely) devoid of life, implying that such environments have no suitable "hydrocarbon-based solution" (i.e., life as we know it). This might be extended to methane-based life by describing the limited temperature range in which methane-centric chemical reactions could be expected to allow an organism to do useful work (to extract thermodynamic free energy).

2. Billions of years of evolution seem to indicate some quasi-equilibria among different strategies (e.g., sexual v. asexual reproduction: sexual reproduction allows for greater variation per generation, so much so that many species have completely lost the ability to reproduce asexually; multicellular v. single-celled, and plant v. animal, are two other apparent branchings we should probably expect).

3. Selection pressures are required to increase the complexity of life, and only life of sufficient complexity can be expected to be intelligent (at least in the way we long for). In other words, we can't really expect a bacterial colony to develop the sort of intelligence we want; we should expect it of a multicellular, animal-like creature.

I also disagree with Hawking about a number of other issues in that article, such as the Independence Day theme of locust-like E.T.s, and the analogy to Christopher Columbus and the native North Americans. The idea that you might cross the vast interstellar distances because you've consumed all your resources at home seems silly to me; it's along the lines of traveling from northern Canada to Peru to buy the last existing gallon of gasoline: if you can make it there, the rewards are relatively meaningless.

Likewise, interstellar travel is no small feat; in fact, I suspect it may simply be impossible for organisms to accomplish. We may simply be too fragile, with too much space separating the stars from one another, to expect an organism ever to make the journey. However, we are on the brink of being able to create autonomous explorers that could sleep the journey away, though it is unclear what it would mean for them to colonize the galaxy for us. (To me, this appears to be the modern, reasonable version of Fermi's paradox, which I think is most plausibly resolved by adjusting the likelihood of intelligent life in a proximally reachable space to zero. That is, I figure intelligence is very rare and spread out, and intelligent species are possibly unlikely ever to contact one another.)

The simplest argument against a Columbus scenario is the question: is that how we would expect ourselves to treat the situation if we were the explorers? Travel hundreds of trillions of miles to exploit some new land? We are even trying to take care (though it is unclear how well we are doing) not to infect other celestial bodies with Earth-based life; we don't want to taint Europa, in case it harbors its own home-grown life.
These arguments also help people understand why, say, finding oil on Mars would not really affect us at all: what makes oil so valuable in the first place is much more than its energy content; it is also the accessibility and abundance of the substance, and Mars is most certainly not accessible. (A human trip to Mars demands a roughly 26-month mission, because the orbital positions that make the trip feasible recur only that often. Even a radio signal telling a robot to stop takes anywhere from about 4 to 24 minutes to travel one way, depending on the current positions of the two planets.)
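To put that communication delay in numbers, here is a quick sketch of the one-way light travel time at Mars's closest and farthest approaches (the distance figures are approximate orbital values I've supplied for illustration, not from the text above):

```python
# One-way radio (light) delay between Earth and Mars.
# Assumed approximate distances: ~54.6 million km at closest approach,
# ~401 million km at farthest; c = 299,792.458 km/s.
C_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover distance_km at light speed."""
    return distance_km / C_KM_S / 60.0

closest = one_way_delay_minutes(54.6e6)   # ~3 minutes
farthest = one_way_delay_minutes(401e6)   # ~22 minutes
print(f"closest: {closest:.1f} min, farthest: {farthest:.1f} min")
```

So a round-trip command-and-confirm cycle can approach three quarters of an hour at the worst geometry, which is why Mars robots must largely look after themselves.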

Understanding these ideas about space and scale also helps one understand why nuclear waste is not as big a deal as one might first suspect, and why any proposal to launch it into space is really very silly.

Some comments written in response to space-based disposal of nuclear waste:
I've heard a lot of people propose space-based elimination of nuclear waste, and I think under scrutiny it is a very unreasonable idea, exactly because I don't think it can be made either cost-effective or safe.

We lost two shuttles out of only 129 launches, even though we put extra care into maintaining and inspecting the craft because of the human element (though in my opinion the losses could have been prevented with better management).

Even the Saturn V could lift only about 47,000 kg toward lunar orbit, and the shuttle cost about $50,000/lb to GTO (I'm seeing figures as low as $11,000/kg for other rockets), which isn't even escape velocity! (Additionally, it seems unlikely these prices can drop much lower, given the enormous fuel requirements.)
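Taking the most optimistic launch price above at face value, a back-of-envelope sketch of the disposal bill (the one-million-tonne waste figure is purely illustrative):

```python
# Back-of-envelope cost of launching nuclear waste into space.
# Assumptions (illustrative only): 1 million tonnes of waste, and the
# optimistic ~$11,000/kg launch price quoted above, which only reaches
# orbit, not escape.
WASTE_KG = 1_000_000 * 1_000   # one million tonnes, in kg
COST_PER_KG = 11_000           # dollars per kg to orbit

total_cost = WASTE_KG * COST_PER_KG
print(f"~${total_cost:,}")     # about $11 trillion
```

Even with every optimistic assumption, the price tag lands in the trillions before a single gram has left Earth's gravity well.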

Given the millions of tons of waste to dispose of, the odds overwhelmingly favor our eventually detonating a large quantity of radioactive waste in the atmosphere by accident.
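A rough sketch of those odds, assuming (purely for illustration) the shuttle-era loss rate of 2 in 129 and Saturn V-class payloads of 47,000 kg per launch:

```python
import math

# Probability that every launch succeeds while lofting 1e9 kg of waste.
# Assumptions (illustrative): failure rate = 2 losses / 129 flights,
# payload = 47,000 kg per launch, independent launches.
P_FAIL = 2 / 129
PAYLOAD_KG = 47_000
WASTE_KG = 1_000_000_000   # one million tonnes

launches = math.ceil(WASTE_KG / PAYLOAD_KG)      # ~21,277 launches needed
p_all_succeed = (1 - P_FAIL) ** launches         # effectively zero
print(launches, p_all_succeed)
```

With tens of thousands of launches required, the chance of getting through without a single accident is, for all practical purposes, zero.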

What I don’t understand is why everyone is so averse to storing it here? We already maintain, monitor and protect weapons, why not protect the waste the same way? No reason to seal up the mountain and hope the caskets don’t leak or rot; inspect it every few months! Guard it! Maintain the facility! Maybe even reprocess it if we choose.

In fact, these hard economic times ought to be perfect for getting Yucca Mountain going, or really, whatever site the lowest bidder offers. Development and maintenance should generate a lot of permanent jobs, ranging from higher-education engineering to lower-education construction.

Add to that transporting the waste to the site, which would free up the space the waste currently occupies at some of the reactors. Consolidating our inspection, maintenance, and protection costs for radioactive waste seems like a no-brainer, whether from a security, economic, or environmental perspective.

Some comments I recently wrote on a blog post about the Fermi paradox and the possible resolution that intelligent beings routinely destroy themselves with technological progress:

Resolving the Fermi paradox with nuclear annihilation strikes me as an awfully big jump. Even if we did experience a full nuclear exchange, and even at the height of the cold war (when maybe 10 times as many weapons existed), it seems unlikely that it would completely annihilate humanity; and the persistence of even a small society, together with the many surviving fragments of information and technology, would allow a return to an advanced society quite quickly. It is unrealistic to think we could really wipe out humanity, or even knock ourselves back to the stone age (knowledge has always defied destruction, even in times of heavy persecution).

To me, more reasonable resolutions of the Fermi paradox include: life isn't as common as we hope; intelligence is much less probable than we hope; or communication over vast distances is much more difficult than we think (just imagine the 1/r^2 drop-off with r = 100+ light-years, let alone competing with all the astronomical sources radiating noise). Maybe they use much more efficient means of communication, so that stray signals we could detect are much less likely; maybe they have some silly prime directive; or maybe they are just so evenly dispersed in the galaxy that we can't possibly ever find them. Of course, there might be many more reasons, or any combination of these.
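To illustrate the 1/r^2 point, here is a rough sketch of the flux reaching us from a hypothetical 1-megawatt isotropic transmitter 100 light-years away (both the power and the distance are assumptions chosen for illustration):

```python
import math

# Inverse-square falloff of an isotropic broadcast.
# Assumptions (illustrative): 1 MW transmitter, 100 light-years away,
# 1 light-year = 9.461e15 m. Flux = P / (4 * pi * r^2).
LY_M = 9.461e15
power_w = 1e6
r_m = 100 * LY_M

flux = power_w / (4 * math.pi * r_m**2)   # watts per square meter at Earth
print(f"{flux:.1e} W/m^2")                # ~8.9e-32 W/m^2
```

Roughly 10^-31 watts per square meter: a vanishingly faint whisper to pick out against every natural radio source in the sky.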

This is not to say that I am not on board with nuclear arms reduction or awareness; surely it is one of the few serious threats to humanity that we have complete control over, which makes it a wonderful target for our concern. I just want us to be realistic about these things, that's all.

Glenn wrote:

Your resolution of the Fermi paradox seems quite plausible. However, concerning a nuclear conflagration, I would be somewhat less confident that we would survive as a species and quickly re-establish an advanced society.

I responded:
Glenn, I'm not saying it would be pretty, but just imagine: most information would remain intact (even if nearly all electronic storage were lost, there are mountains of paper-stored knowledge spread across the globe).

Of course, I would argue that our survival would be nearly certain even in the most aggressive exchange imaginable (as I did below), but more realistic scenarios (like a rogue state detonating a device as terrorism, or say an exchange between Pakistan and India) would be much less devastating still, leaving most of the industrialized world completely intact.

Again, I still agree they are a terrible threat and we should be actively trying to decrease the number of devices, I just don't see it being as devastating as say a large asteroid impact.

Oh, I so disagree with Stuart Kauffman too! His entire argument seems centered on what I consider a misconception: that an "ultimate theory" would both describe all the fundamental rules governing the universe and also provide a complete, probabilistic, "true" prediction of the future (that is, provide all the possible outcomes, with the probability of each being the actual outcome). I don't know whether the problem is that he is misconstruing what I consider an acceptable concept of "theory," or merely that he asserts the intractability of a problem means the rules governing it cannot be described. For instance, if the universe is analogous to a computer program, then an ultimate theory of physics would merely be the source code of the program, in a human-comprehensible language. That does not mean we would be able to run the program and actually determine its output. Hell, there are simple computer programs that are provably (in the mathematical sense) uncomputable; we simply cannot predict the outcome of applying certain very simple rules in a systematic way. An interesting class of these arises from the Busy Beaver functions, which ask for the machines of a given size that run for the most steps, or output the most ones, before halting.

I suppose if all he is saying is, "an ultimate theory would both describe exactly how nature works and allow us to say, ahead of time, exactly what the universe is going to do next; and such a theory is impossible to construct," then yes, I agree. But I think it is a silly notion to state in the first place. You cannot even expect to answer this yes-or-no question: given 1,000 typical mp3s of varying lengths, is there a way to partition the set into two subsets with equal total playtimes? I've chosen such a high number that we couldn't expect you to find the answer even if the entire universe were a computer that had been working on the problem for the last 13.7 billion years!
If every observable particle in the universe (roughly 10^80 of them) had tested a trillion possible solutions per second for 13.7 billion years (about 4 × 10^17 seconds), it still could only have examined about 10^80 × 10^12 × 4 × 10^17 ≈ 10^110 answers, less than one part in 10^190 of the roughly 2^1000 ≈ 10^301 possible answers!
And of course, these numbers pale in comparison to even small Busy Beaver functions: even the six-state, two-symbol machine is known to run many orders of magnitude more steps than the "playlist problem" described above! We humans routinely write multi-state, multi-symbol programs much larger than this. In many ways, the imagination really does know no bounds.
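The playlist question above is the classic PARTITION problem. A brute-force sketch (workable only for tiny playlists, since the number of candidate subsets doubles with every added song) might look like this; the durations are made-up examples:

```python
from itertools import combinations

def can_split_evenly(durations):
    """Return True if the durations can be split into two subsets
    with equal total playtime (brute force over all subsets)."""
    total = sum(durations)
    if total % 2:               # odd total can never split evenly
        return False
    target = total // 2
    items = list(durations)
    # Check every subset size; 2^n subsets in the worst case.
    for k in range(len(items) + 1):
        for combo in combinations(items, k):
            if sum(combo) == target:
                return True
    return False

print(can_split_evenly([180, 240, 210, 230, 200, 200]))  # True: 180+240+210 = 230+200+200
print(can_split_evenly([180, 240, 211]))                 # False: total is odd
```

For six songs that's at most 64 subsets to check; for 1,000 songs it's 2^1000, which is exactly the hopeless search the paragraph above describes.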

This was pretty fascinating too.

I can't figure it out; is the problem that our language feels inadequate to express our emotions?
That is, English is not enough? Words have failed me?


Kevembuangga said...

knowledge has always defied destruction, even in times of heavy persecution

Very, very untrue!
Roman mortar, original Tyrian purple, Pilema, Library of Alexandria, etc, etc, etc...
And it keeps going: we are POSITIVELY UNABLE to rebuild copies of the Concorde or the Apollo cabin, not to speak of the infamous FOGBANK.
I personally know a Chinese man whose grand father secretly burnt his own library during the cultural revolution not to be caught with it.

cody said...

Hmm, you have a good point. What if I changed it to, "public knowledge"?

cody said...

A more thorough retort: of course, if information is held by a single individual or a single organization that wants to keep it secret, then its odds of persisting are terribly decreased. Likewise, if information is stored in a single repository, its odds of being lost are greatly increased.

However, knowledge that has been shared, especially on the internet, is very different from your examples. Proprietary information will always carry the greatest risk of loss, but that is the price paid for secrets. Another example: I think it was in an Air & Space magazine years ago that I read that the IRS tried to audit the Skunk Works program that built the F-117, and they had no receipts or anything to show for the project, because apparently they had been destroying any trace of it.

Notice, however, that all of your modern examples rely on the secrecy and competition surrounding the lost information, and that when such secrets are discovered by the public, they often rapidly spread through many media, from textbooks to the internet; it is this process, I am arguing, that makes it very difficult to reset human knowledge.

I would argue that the vast majority of the technology you rely on in modern life is very well documented in hundreds or thousands of books, hard drives, people's heads, etc. It is this knowledge that I expect would persist through a cataclysm, and would allow even a small group of survivors to rapidly recover modern technology.

cody said...

I guess, more to the point: FOGBANK and the Apollo cabin would be of little interest in rebuilding society, while the scientific and engineering knowledge required to build a lunar-faring capsule or a supersonic passenger jet would persist in the form of hundreds of different books, many of which exist in thousands, or perhaps even millions, of copies. And this ignores all the really important subjects, like modern farming techniques and technologies, electrical engineering, petroleum production and processing: the things that make up significant portions of modern society.