Filed under: commons, Legacy Systems, Speculation, The Internet, Uncategorized | Tags: General Keith Alexander, Internet, Legacy System, NSA
Here I am venturing into something I know nothing about: the Internet. Recently, I read a 1999 quote from Stewart Brand, in The Clock of the Long Now (BigRiver Link), that the internet could “easily become the Legacy System from Hell that holds civilization hostage. The system doesn’t really work, it can’t be fixed, no one understands it, no one is in charge of it, it can’t be lived without, and it gets worse every year.”
Horrible thought, isn’t it? What I don’t know about are the legions of selfless hackers, programmers, techies, and nerds who are valiantly struggling to keep all the internets working. What I do know some tiny bit about are the concerted efforts of the NSA, under General Keith Alexander (who’s due to retire this spring), to install effectively undocumented features throughout the Internets and everything connected to them, so that they can spy at will. Perhaps I’m paranoid, but I’m pretty sure that every large government has been doing the same thing. If someone wants to hack us, they can.
Well, what I’m thinking about is the question of trust, rather than danger. The idea that cyberspace is dangerous goes well back before the birth of the World Wide Web. Remember Neuromancer? Still, for the first decade of online life, especially with the birth of social media, there was this trust that it was all for the greater good. Yes, of course we knew about spam and viruses, we knew the megacorps wanted our data as a product, and anyone who did some poking or prodding knew that spy agencies were going online too, that cyberwarfare was a thing. Still, there was a greater good, and it was more or less American, and it pointed at greater freedom and opportunity for everyone who linked in.
Is that still true? We’ve seen Stuxnet, which may well have had something to do with General Alexander’s NSA, and we’ve seen some small fraction of Edward Snowden’s revelations about how the NSA has made every internet-connected device capable of spying on us. Does anyone still trust the US to be the good guys who run the Internet for the world? Even as an American, I’m not sure I do.
This lost trust may be the start of the Internets evolving into the Legacy System from Hell. Instead of international cooperation to maintain and upgrade the internet with something resembling uniform standards, we may well see a proliferation of diverse standards, all in the name of cybersecurity. It’s a trick that life learned aeons ago: diversity keeps everything from dying of the same cause. Armies of computer geeks (engineers by the acre, in 1950s parlance) will be employed creating work-arounds to keep all these systems talking to each other. Countries that fall on hard times will patch their servers, unable or unwilling to afford expensive upgrades that have all sorts of unpleasant political issues attached. Cables and satellites will fail and not be replaced, not because we can’t afford to replace them, but because we don’t trust the people on the other end of the link to deal fairly with us and not hack the systems they connect to.
I hope this doesn’t happen, of course, but I wonder. Once trust is lost, it’s difficult to regain. On a global level, can we regain enough trust to have someone run the internet as an international commons? A good place? Or is it too late for that? I’m quite sure that US, Chinese, and Russian cyberwarfare experts will all say that their expertise is defensive, designed to minimize damage, and they may even believe it. Still, in the face of so many soldiers and spies amassing online, why trust our lives to this battlefield? Anything we put online might be wiped out or compromised, a casualty of a battle we neither wanted nor approved of.
Even though I don’t have a reason to like him, it would be sad if General Alexander’s legacy was starting the conversion of the internet into a legacy system. It would also be instructive, a lesson in how the buildup of military power can backfire (something I think even Lao Tzu commented on). Fortunately or unfortunately, any history written on a legacy system will most likely vanish when the last expert walks away and the owners pull the plug. That’s the problem with legacy systems, you see. Their data can vanish very, very quickly.
Filed under: commons, economics, Speculation, sustainability | Tags: commons, Elinor Ostrom, markets
This isn’t my original idea. I’m reading John Michael Greer’s The Wealth of Nature: Economics as if Survival Mattered (Amazon link), and he makes the assertion that a free market, “in which buyers and sellers are numerous enough that free competition regulates their interactions,” is a form of commons, a resource that should ideally be free to all in a society. He goes on to point out that this is in contrast to those who think that all commons should be eliminated in favor of private ownership. The issue he’s getting at is that free markets cannot exist without regulation, something recognized even by Adam Smith, who noted in the Wealth of Nations that “people of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or some contrivance to raise prices” (reference).
I can see a long argument about how true this is, because it’s a provocative concept. Markets and commons are traditionally diametrically opposed in capitalist thinking; it’s hard to consider that they have anything in common. I’m happy to have that discussion, but there’s another related issue that, to me, is even more interesting: Can markets be managed as commons?
We don’t have any data on this, but the late Elinor Ostrom won the 2009 Nobel in economics for her studies of how commons are successfully managed. She found, through studying both successful commons (water districts, community forests, and the like) and unsuccessful commons, that there were eight “Design Principles” that distinguished the successful commons from the failures (Amazon link to reference).
Here are Dr. Ostrom’s eight design principles, as rewritten by David Sloan Wilson in The Neighborhood Project (Amazon link). I’m using Wilson’s version since it’s more general than the original.
1. Clearly defined boundaries. Members of the group should know who they are, have a strong sense of group identity, and know the rights and obligations of membership. If they are managing a resource, then the boundaries of the resource should also be clearly identified.
2. Proportional equivalence between benefits and costs. Having some members do all the work while others get the benefits is unsustainable over the long term. Everyone must do his or her fair share, and those who go beyond the call of duty must be appropriately recognized. When leaders are accorded special privileges, it should be because they have special responsibilities for which they are held accountable. Unfair inequality poisons collective efforts.
3. Collective-choice arrangements. Group members must be able to create their own rules and make their own decisions by consensus. People hate being bossed around but will work hard to do what we want, not what they want. In addition, the best decisions often require knowledge of local circumstances that we have and they lack, making consensus decisions doubly important.
4. Monitoring. Cooperation must always be guarded. Even when most members of a group are well meaning, the temptation to do less than one’s share is always present, and a few individuals might actively try to game the system. If lapses and transgressions can’t be detected, the group enterprise is unlikely to succeed.
5. Graduated sanctions. Friendly, gentle reminders are usually sufficient to keep people in solid citizen mode, but tougher measures such as punishment and exclusion must be held in reserve.
6. Fast and fair conflict resolution. Conflicts are sure to arise and must be resolved quickly in a manner that is regarded as fair by all parties. This typically involves a hearing in which respected members of the group, who can be expected to be impartial, make an equitable decision.
7. Local autonomy. When a group is nested within a larger society, such as a farmers’ association dealing with the state government or a neighborhood group dealing with a city, the group must be given enough authority to create its own social organization and make its own decisions, as outlined in items 1 and 6 above.
8. Polycentric governance. In large societies that consist of many groups, relationships among groups must embody the same principles as relationships among individuals within groups.
What’s interesting about these rules is that, superficially, it looks like these would be great rules for free markets as well. Look at the complaints such rules would solve:
– Markets should have boundaries. People get really uncomfortable when everything is for sale, whether they want it to be or not. There’s a general idea that some things should not be for sale, while markets are the appropriate venue for other things. Similarly, not everyone wants to participate in “the marketplace,” and the outsiders resent being forced in.
– Markets should be fair, the fabled level playing field. Most would agree that people should get special privileges only so that they can exercise special responsibilities, not because they have special connections. Similarly, corruption and gaming the system should be punished.
– Collective decision making. This one is tough, because everyone wants to constrain the fat cats, whether or not they’re in the market. Still, there are many complaints about top-down rulemaking, and with good reason. This is not to say that markets are all good at self-governing (and here I’m thinking about the body count in illegal drug marketing disputes), but to the extent that a market is self-governing, having rules that everyone agrees are fair is not a bad thing.
– Monitoring. This one is a no-brainer. Corruption kills markets, and they always need to be monitored to keep people from gaming the system. Interestingly, monitoring in commons can come from within, from people hired to monitor the system, or from outside officials. Any and all of these can work, depending on the circumstances.
– Fast and fair conflict resolution. This one is another no-brainer. Things work best when disputes can be settled fairly and quickly, either by a tribunal within the market or by higher authorities, so long as judgment is fast and fair.
– Local autonomy. This can be somewhat problematic when you think about Wall Street, but it’s the flip side of collective decision making within a market. If the authorities are going to let a market make its own rules, they need to let the market govern itself. Note, however, that authorities can be intimately involved in both monitoring and conflict resolution, so long as the market grants that this is their legitimate role.
– Polycentric governance. This is the idea that the relationship between individuals and a market is mirrored between markets within a greater market, if such a hierarchy exists. I’m not sure how this might work in practice, but it does embody the same ideas of group decision making (on the level of member markets), monitoring, fast and fair dispute resolution, and so forth. That’s not a bad way to handle commerce on a large scale.
To me, this is the bigger point: even if markets aren’t exactly commons, it certainly looks like the principles that lead to successful commons might lead to successful free markets. Additionally, this isn’t driven by any particular market ideology: both progressives and libertarians could agree on these design principles. Even big-government proponents tend to agree (in my experience) that the best regulations are the ones that people think are fair and fairly enforced. Getting such regulations written can be very difficult, but it’s often a major goal of regulators. What also makes this interesting is that, if you accept that markets may be commons, it’s possible to have a free market under a wide range of conditions, so long as the market is properly monitored and managed according to rules.
A truly free market won’t work, but a market commons may well be viable. What’s sad is how far Wall Street currently is from most of these design principles. Perhaps our financial markets are a lot less successful and sustainable than we might wish for? Perhaps they need (shock, horror) more regulation, not less, to last?
What do you think?
Filed under: Real Science Content, Speculation, Uncategorized, Worldbuilding | Tags: anthropocene, global warming, hobbits, Speculation
No, I haven’t seen the latest offering from Peter Jackson yet, but I will soon. Still, in honor of the latest, erm, extension of The Hobbit onto the big screen, I thought I’d pitch out an interesting possibility for the future of at least some of our descendants.
First, a definition: ATM isn’t the money machine. Rather, it’s an acronym for Anthropocene Thermal Maximum, which we’ll hit sometime after we’ve dug up and burned all the fossil fuels we’re willing to blow into the atmosphere. If we emit something over 2,500 gigatonnes of carbon, we’re going to be in the range of the PETM, the Paleocene-Eocene Thermal Maximum (Wikipedia link) about 55.8 million years ago, when global temperatures were as hot as they have been at any point in the last 60 million years. Our descendants’ future will be similar, if we can’t get that whole carbon-neutral energy economy working.
One of the interesting recent findings is that mammals shrank up to 30 percent during the PETM (link to press release). The reason given by the researchers is that increased CO2 causes plants to grow more foliage and fewer fruits (in the botanical sense, so we’re talking fruits, nuts, grains, and all the other things we like to eat). This poorer nutrition led to smaller animals. I think there’s another possible explanation for the decrease in animal size.
My thought was that, if civilization crashes due to radical climate change into a PETM-type world, humans will be at the mercy of the elements, so it’s quite likely that future people will be smaller in size. Perhaps 30 percent smaller? Sitting down with the BMI graph and making a few assumptions, I found that the 30% smaller equivalent of a 71-inch-tall male weighing 160 lbs is approximately 60 inches tall. Now, this is an interesting height, because it is the upper limit of pygmy heights in a fascinating 2007 study by Migliano et al. in PNAS (link to article). Their hypothesis was that the evolution of pygmies around the world is best explained by significant adult mortality, which they adapted to by shifting from growth to reproduction earlier in their lives. The researchers found that the average age at mortality in pygmies is 16-24, and few live into their 40s. The major cause of death is disease, rather than starvation or accidents.
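Since I glossed over the arithmetic, here’s a minimal Python sketch of that estimate. It assumes “30% smaller” means 30% less body mass, and that the smaller person keeps the same BMI (weight in kilograms divided by the square of height in meters); both assumptions are mine, not anything from the PETM literature.

```python
# Back-of-envelope check of the "30% smaller" estimate.
# Assumption: "30% smaller" = 30% less body mass at the same BMI.

LB_TO_KG = 0.453592
IN_TO_M = 0.0254

def height_at_same_bmi(height_in, weight_lb, mass_fraction):
    """Height (inches) of a person carrying mass_fraction of the
    original weight, holding BMI = kg / m**2 constant."""
    bmi = (weight_lb * LB_TO_KG) / (height_in * IN_TO_M) ** 2
    new_weight_kg = weight_lb * LB_TO_KG * mass_fraction
    new_height_m = (new_weight_kg / bmi) ** 0.5
    return new_height_m / IN_TO_M

# 71 inches, 160 lbs, 30% less mass:
print(round(height_at_same_bmi(71, 160, 0.7), 1))  # → 59.4, roughly 60 inches
```

At constant BMI the starting BMI actually cancels out: the new height is just the old height times the square root of the mass fraction (71 × √0.7 ≈ 59.4 inches), which is how the 30% figure lands so close to the 60-inch threshold.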
While I don’t know of any evidence of increased animal disease during the PETM, there is good evidence for increased plant disease and predation by insects (link), so it’s not much of a stretch to hypothesize that the animal dwarfing could have been caused by increased disease, decreased lifespans, and a resulting shift towards smaller body size and early reproduction.
So, here’s the idea: if we blow too much carbon into the air, and our ATM rivals or exceeds the PETM, at least some of our descendants will be the size of pygmies, due to the harsher environment (more disease, less medical care) favoring people who mature earlier and have kids as teenagers. They probably won’t be hobbits unless a hairy-footed morph takes off somewhere (perhaps in the jungles of Northern California?), but they will be, technically, pygmies.
It’s not the most pleasant thought, but if short lives and short stature are troubling, the good news is that post-PETM fossils show that animal species regained their former size once the carbon was out of the air. And, according to Colin Turnbull’s The Forest People, life as a pygmy isn’t necessarily nasty or brutish, even if it’s short.
Filed under: livable future, Real Science Content, Speculation, sustainability, Syria | Tags: desalination, Syria, Syrian civil war, water politics, water war
Not that I’m an expert on foreign policy or Syria (there’s someone with the same last name who is. We’re not related). The one thing I do understand, a little bit, is water politics, and that may well be one of the important drivers of the Syrian civil war. As Mark Twain said, “Whiskey’s for drinking, water’s for fighting over.” And good Muslims won’t drink whiskey. Since I’m interested in the deep future with climate change, this might be a portrait of things to come for other parts of the world, including where I live in the southwestern US.
Here’s the issue: between 2006 and 2011, the eastern 60 percent of Syria experienced “the worst long-term drought and most severe set of crop failures since agricultural civilizations began in the Fertile Crescent many millennia ago,” forcing 200,000 Syrians off the land (out of 22 million total in Syria) and causing them to abandon 160 towns entirely (source). In one region in 2007-2008, 75% of farmers suffered total crop failure, while herders in the northeast lost up to 85% of their flocks, which affected 1.3 million people (source). Assad’s policies exacerbated the problem. His administration subsidized water-intensive crops like wheat and cotton, and promoted bad irrigation techniques (source. I’m still looking for a description of what those bad irrigation techniques were.).
These refugees moved to cities like Damascus, which were already dealing with over a million refugees from Iraq and Palestine. They dug 25,000 illegal wells around Damascus, lowering the water table and increasing groundwater salinity (source). The revolt in 2011 broke out in southern Daraa and northeast Kamishli, two of the driest parts of the country, and reportedly, Al Qaeda affiliates are most active in the driest regions of the country (source).
One thing that worsened the problem was Turkey. The Tigris, Euphrates, and Orontes Rivers flow out of Kurdistan, in Turkey, into Syria. Turkey, in a bid to modernize the Kurdish region, had built 22 dams on these rivers by 2010 under the Southeastern Anatolia Project. These dams take half the water out of the Euphrates, and the Turks have used it to grow large amounts of cotton within Anatolia, doubling or trebling local incomes in that traditionally rebellious area.
So is drought destiny? Experts caution that it’s not that simple (source). In 2012, the American Midwest suffered a record drought. While that may have led to Tea Party outbursts in the 2012 elections, it didn’t lead to armed insurrection. (As an aside, you can figure out how well the drought map correlates with the 2012 Presidential election map. Washington might one day take note of this…). Still, when you couple drought with poverty, bad governance, and a witch’s brew of historical grievances and systemic injustice, drought can cause a civil war.
There are a couple of big problems here. The first is that the US didn’t see the revolt coming. Right up until the first protests started, the US thought that Syria was immune to the Arab Spring (source). This isn’t all that surprising. Due to the War on Terror, the CIA and other agencies work closely with government intelligence agencies to hunt terrorists (source), and have little or no intelligence capability for learning what’s happening on the “Arab Street.” This led to the US missing the Arab Spring movement pretty much in its entirety. The US military has been talking about climate change for years, and it’s starting to get serious about preparing to deal with it (source), but it doesn’t seem to have a functional reporting system yet, let alone a good way to respond. To put it bluntly, no one in Washington or other capitals seems to be watching things like water supplies, crop reports, rural migration to cities, or even the price of bread. Or if they are, they’re not being listened to. Spikes in bread prices throughout North Africa helped prepare the ground for the Arab Spring uprisings, and the region is still a major wheat importer (source).
The second problem is that, so far, our leaders haven’t officially acknowledged that water’s a problem. Basically, during the drought, Syrian per capita water supplies dropped by almost half. While a lot of this could be recovered through better management, growing different crops, convincing people not to eat bread in the place where wheat was first farmed, and so forth, there are probably too many people relative to the water supply, at least during droughts. Part of this is demographics. The population of the Middle East has quadrupled over the last 60 years, and the water supply, if anything, has shrunk (source). The brutal answer is to get rid of those people, which may be one reason why Assad was so willing to use chemical weapons. There are 1.851 million registered Syrian refugees at the moment, which puts over eight percent of the population outside the country. Assad (and whoever follows him) may not be interested in having them return, either. Syria likely would be more stable with fewer thirsty mouths.
What’s the solution? One important part is to get water on the negotiating table. Turkey officially helps Syria with water flows, but it’s not clear how diverting half a river is a friendly gesture, and the two countries are not on good diplomatic terms. If the Turks are using the Euphrates to water cotton, most of that water is lost to the air, rather than flowing back into the river where Syria can get it. Turkey could help stabilize Syria by letting more water out of its dams, but by doing so, it would risk insurrection in Kurdistan, so I don’t think it will voluntarily give up that water. Since Turkey’s water sources are secure for the moment, I suspect that quite a few Syrians are going to be resettling there, just as Iraqis and Palestinians are (or were) living in Damascus. More countries should volunteer to permanently take in Syrian refugees, especially in the north (as Sweden has). Why not? It increases populations in areas that are experiencing declines due to low birth rates, and it’s cheaper than trying to fight in the Middle East. Moving people to where there’s water is much less cruel than interning them in refugee camps in border deserts with inadequate resources and no hope.
One of the problems with climate change is that the northern edges of deserts are forecast to get drier, and the Middle East and the Mediterranean basin sit on one of those edges. If we want to avoid continual unrest in that region, it’s high time we all (in the international sense) started financing regional desalination plants in the Middle East and other dry areas. This has worked to secure water for Israel. Granted, it’s an energy-intensive solution, but a large-scale desalination plant is cheaper than a single day of all-out ground war, US style (source).
The other lesson here is that politics and politicians matter. Drought isn’t necessarily destiny, but bad water management choices can turn a chronic problem of scarce resources into a bloody war. If you want to know why I’m not a libertarian, this is why. It’s nice to have liberty, but it’s necessary to have water. Good politicians work to get you enough of both, and we need more of them at the moment.
Filed under: Kaiju, Pacific Rim, Real Science Content, science fiction, Speculation | Tags: Bechdel Test, Kaiju, Pacific Rim, Real Science Content
The first thought was inspired by Darren Naish’s comments about the portrayal of scientists in Pacific Rim. This is scarcely news. In fact, it’s even inspired a few entries at TV Tropes. Still, it’s frustrating, especially when the sheer stupidity of some applied phlebotinum degrades the rest of the movie (red matter, anyone?).
There are potential solutions. Movies tend to be quite sexist, and this has inspired the Bechdel Test, a litmus test for how women are portrayed in a piece. In order to pass, the piece must:
a. Include at least two women,
b. who have at least one conversation,
c. about something other than a man or men.
When you start thinking about the number of films that fail, you realize how biased most films are. This goes double for summer blockbusters, unfortunately.
Can we do something similar for science? I’m not as pithy as Bechdel, but my first thought was that if a film could be improved by hiring an out-of-work scientist to vet the script and incorporating her suggestions, then it fails the test. This would catch everything from midichlorians and red matter to the continuity gaffes in all the Star Treks, the teleportation between forests in Jurassic Park, and so forth.
Now, movie types typically argue that scientists are such a tiny percentage of the audience that there’s no point in catering to them, but that misses the point of the test. This test is more in line with Van Halen’s requirement, in their contracts, that there be no brown M&Ms backstage. The point of this bizarre-seeming contract clause was that Van Halen at the time was touring with a huge, heavy, and technically sophisticated stage rig. Their contracts ran to dozens of pages, and included things like making sure the stage they were to perform on wouldn’t collapse under the weight of all their props. The no-brown-M&Ms clause was actually there for safety: if the band spotted brown M&Ms in the bowl, they would immediately know that the venue managers hadn’t bothered to read the contract. At that point, they’d have to check every other show detail, to make sure that nothing collapsed and no one died during their show.
When a movie is stupid about the science, it’s often stupid about a lot of other things too, things that everyone notices, like a crappy plot or cardboard stock characters. Get too stupid and the movie flops. Compared to that, getting a scientist to vet the script is pretty cheap.
Now, let’s turn to Pacific Rim. At this point, I haven’t seen it (and since Darren, Mike, and Matt have seen it multiple times, I’m not sure they need my ticket money). Be that as it may, I’d like to suggest what would really happen to any kaiju, including Godzilla, that was stupid enough to make repeat visits to our little world.
Here’s the fundamental stupidity about these giant kaiju films: it’s all about killing cities. Yes, this would certainly happen the first few times, at least until someone ran an analysis on a kaiju corpse. See, kaiju are biophysically impossible as we understand reality, so if they did exist, they’d be absolutely full of bizarre chemistry. In Pacific Rim, this is all treated as hazardous waste and black market rhino-horn stand-ins. But in real life, each corpse would be a gold mine for the transnational, immensely sophisticated chemical industry. It doesn’t matter whether you’re rendering Godzilla down for radionuclides to supply the chronic shortages of medical isotopes, or rendering the blood of Pacific Rim kaiju down for all that ammonia, which is a major feedstock for both fertilizers and explosives. Those giant things are too valuable to nuke.
So if our world was invaded by kaiju, here’s what I suspect would happen. First, people would hack kaiju communications to figure out how to lure or repel them (much as the Allies hacked U-boat communications in WWII and routed the entire force; controlling attack subs from a central hub is self-defeating). Then they’d build giant killing pens, probably on the coast of China (note that I’m suggesting this not due to any bias against China, but because they have become chemical suppliers to the world, and they’ve got the huge infrastructure needed to deal with the influx of kaiju products). Once these facilities were built, fleets would lure and drive kaiju into these kill zones, dispatch them humanely, perhaps with a bunker-busting guided bomb to the back of the skull dropped from 10,000 feet, and render their carcasses for everything we could get out of them. Rather than shutting the rift down, we’d probably drop a note in, asking the kaiju masters to send more kaiju (NSFW link). For all I know, bringing in kaiju this way would render our industrial civilization a bit more sustainable, since we would have outsourced production of some highly dangerous chemicals to another planet.
Yes, I understand that Pacific Rim runs on awesome, and that what I just suggested would be titanically not awesome, more in line with The Cove than with what actually made it to the screen. In fact, given Hollywood’s limited set of plots, the only movie they would make out of this scenario is some blue-eyed mother kaiju being mercilessly herded to her doom on the industrialized Chinese coast, with impractical environmentalists struggling to save the noble beast from certain destruction. But there’s something a little sad in this whole exercise. It’s not just the bad science, it’s the lack of vision. Hollywood can only think to make kaiju in one mode: destroying coastal cities. There’s little creativity; it’s all replaying a trope that first showed up in 1954. The Japanese were more inventive with their kaiju, but Hollywood’s creativity has been leached out by the monstrous budgets they play with, since investors far prefer predictable ROI to untested creative productions. Personally, I think that adding a little real science, along with the massive dose of creativity that real science inevitably brings, would spice up the whole enterprise. Unfortunately, I doubt anyone in the industry (outside the SyFy Channel) would agree with me. And so it goes.
Filed under: Cthulhu, fantasy, fiction, science fiction, Speculation, Uncategorized, Worldbuilding | Tags: Cthulhu, Interstellar Travel, Lovecraft revisionism, science fiction
Time for something different. Admittedly, it’s inspired in part by Matt Wedel’s recent musings on how to make a proper Cthulhu idol. Since it’s July, I figured I’d trot out something I’ve been musing about. It has to do with vernal pools. And Cthulhu. And interstellar civilization.
Vernal pools, in case you don’t know, are rain-fed pools that crop up in the spring. I’m used to the California ones, which feature a wide variety of (typically rare to endangered) species that act as typical aquatic or wetland species, but only for the few weeks to months that the pools last. They have a couple of neat properties that are relevant here. One is that vernal pool species have a number of ways of dealing with the inevitable death of the pool, from flying to another pool, to going into hibernation, to producing propagules (seeds, eggs, etc.) that can survive up to a century before they grow once a new pool forms. The other thing to know is that organisms in the pool typically start at the small end (fairy shrimp, algae), followed by bigger ones (tadpoles, small aquatic plants), followed by “large” predators (dragonfly larvae, beetle larvae), followed finally by the really big things (ducks, garter snakes) as the pool dries. It all happens quite fast; a miniature Serengeti, as someone called it.
If you don’t know what Cthulhu is, well, what can I say? Go read The Call of Cthulhu, and come back later. But this is more about Lovecraft’s whole mythos of critters that lived in deep time and still live here and there, ready to jump out and go boo. Erm… Right.
Lovecraft didn’t know much about math or biology, for which I don’t blame him. It wasn’t his thing. Still, rather a lot of science has floated under the bridge since he wrote in the 1920s and 1930s, so I’d suggest it’s high time to retcon the Cthulhu mythos into modern science. That, and it’s July. In that spirit, I’d like to suggest an interstellar civilization composed of Mythos monsters, and based in part on the model of a vernal pool.
Let’s start with our galaxy. By most measures, there seem to be millions of potentially habitable planets out there, but equally, in our world, we don’t see any evidence of interstellar cultures. This is slightly bizarre, as sun-like stars have been around for something like 500 million years longer than our sun has existed. One would guess that, if interstellar civilization could exist, it would exist, and furthermore, that it would have colonized Earth long ago. That is exactly what Lovecraft posited, with his fossil cities in At the Mountains of Madness, The Shadow Out of Time, and elsewhere. Personally, I think his reasons for why we’re not over-run by alien beasties are a bit weak, so this is where the retcon starts.
The big problem with interstellar civilization is that traveling between stars is horribly energy- and resource-expensive. Lovecraft got it right when he talked about species migrating between the stars, rather than commuting (although his Outer Gods seem to not have that trouble). It follows, then, that when an interstellar civilization colonizes a planet, resource extraction begins in earnest. We’re not talking about sustainability here, not by a long shot.
Since we know what a non-sustainable civilization looks like (we’re living in one), we also know that, absent major changes, such civilizations die out in a geologic instant. This may sound non-functional, but there’s a way out of it. If the interstellar civilization on a particular world can colonize one or more new planets before the civilization dies, it can keep going. Planets recover from civilization over a 10-65 million year period (thanks to geologic processes that allow the biosphere to recover, new oil reserves that gather surplus sunlight, and erosion that uncovers ore deposits), so it’s theoretically possible for a really clever interstellar civilization to persist indefinitely by constantly moving, leaving most of the hundreds of millions of habitable worlds in the galaxy fallow for most of the time. When the civilization ends on a planet, its constituents either leave, die off, hibernate, or leave some sort of remnant or propagule to grow when civilization comes again, tens of millions of years later. Granted, it’s tricky for anything to survive intact for tens of millions of years, but with god-like technology comes god-like hibernation abilities.
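To see why the galaxy stays mostly fallow under this scheme, here’s a toy back-of-the-envelope calculation. Both numbers are my own assumptions (a mid-range recovery time from the estimate above, and a guessed lifespan for a non-sustainable civilization on any one world), not anything established:

```python
# Toy numbers -- both are assumptions, not established figures.
recovery_time_years = 50_000_000  # mid-range of the 10-65 million year recovery estimate
civ_lifespan_years = 10_000       # assumed run of a non-sustainable civilization on one world

# While one world recovers, the civilization must be cycling through others.
# Minimum pool of habitable worlds needed to rotate indefinitely:
min_worlds = recovery_time_years // civ_lifespan_years + 1
print(min_worlds)  # 5001
```

Under those guesses, a rotating civilization needs thousands of habitable worlds in its circuit, and at any moment nearly all of them sit empty and recovering, which is consistent with a galaxy that looks uninhabited from the inside.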
So what happens when civilization rains down on a planet? I suspect it’s a lot like what happens when a vernal pool fills. The little guys (elder things and their shoggoth bionanotech) show up first and most frequently. If the planet’s biosphere isn’t that suitable, that may be all that shows up, and they leave after they’ve sucked up the available resources to move on to the next suitable planet. If conditions are more favorable, the elder things are followed by all manner of beings: mi-go, the Great Race, and so forth, each preying on (excuse me, establishing trade relations with) the things that came before.
Then Cthulhu and his kind show up. They’re the megacorps, excuse me, the big predators. However, Cthulhu has an odd biology. According to The Call of Cthulhu, “[w]hen the stars were right, They could plunge from world to world through the sky; but when the stars were wrong, They could not live.” In biological terminology, Cthulhu and his ilk use two strategies: interstellar travel (“plunging through the sky”), presumably if the stars are close enough for them to make the transit, and dormancy (“could not live”), presumably through some amazingly advanced form of anhydrobiosis, to wait between bouts of civilization. Once Cthulhu’s kind is through ravaging a planet, the show’s over, and those survivors who didn’t flee settle in to wait for the planet to heal itself. This is much like what happens when a vernal pool dries to mud. The flowers bloom in the mud, and everything sets up to wait through another dry summer.
Note that colonization isn’t an organized process, but then again, vernal pool community formation isn’t organized either. Every pool is different every year, and it depends on things like how fast the pools are evaporating and what animals are close enough to colonize the pools. Most pool species can pass a year (or a hundred) without needing water. Similarly, interstellar civilization is conditioned by how far a particular species can travel between stars and by what they need to survive on a planet, whether they can pioneer an uncivilized ecosystem (as the elder things can), or whether they need a civilization present to feed their great bulk (as with Cthulhu).
When Lovecraft talked about ancient cities, his biggest problem was the lack of a viable dating technology. He wrongly assumed that species had been on Earth for hundreds of millions of years, because their fragments appear throughout the geologic record, when in fact the planet was settled repeatedly, at different times, tens or hundreds of millions of years apart. It’s an easy mistake to make.
We can even understand the nature of Lovecraft’s Other Gods in this scheme. Azathoth, the blind idiot god (or demon sultan) at the center of the universe is pretty clearly the black hole at the center of our galaxy. Without it, this galaxy wouldn’t exist, so it is our creator in its own mindless way. Yog-Sothoth, the All-in-One and One-in-All of limitless being and self, is probably our galaxy’s equivalent of the Internet, possibly powered in part by the central black hole Azathoth. After all, if civilized species don’t know what’s going on on other worlds, how can they know where to migrate next? Nyarlathotep, “that frightful soul and messenger of infinity’s Other Gods, the crawling chaos,” is Yog-Sothoth’s equivalent of Siri, or perhaps Clippy the Paperclip, which may explain humanity’s generally negative interactions with it.
This leads to some interesting ideas. Paleontology in Lovecraft’s world is likely to be rather more interesting than our world’s paleontology. Think of what the remnants of an alien interstellar city would look like in the fossil record. Moreover, there would be a rather more sinister explanation for Earth’s mass extinctions, and the evidence would be rather different.
Of course, the ultimate question for humans is, when the stars come right and galactic civilization comes to this planet yet again, do we join in the madness and plunge between the stars with them, do we resist, or do we hide out until they go away, and hope we can survive on the scraps left behind?
About a month ago, Dr. Deepak Chopra appeared on the NPR show Wait Wait Don’t Tell Me (which you can listen to at this link). At the end, he repeated the old idea that form is an illusion, because inside atoms is mostly empty space. While I have no quarrel with Dr. Chopra, I started thinking about this, and realized both that he is (most likely) dead wrong, and that form is nonetheless an illusion. Since I haven’t posted for a while, I figured I’d throw this up in the best (and increasingly endangered) tradition of late-night dorm bull sessions.
The issue with Dr. Chopra’s idea can be boiled down to two words: dark matter. According to the physicists, a majority of the stuff in the universe is dark matter, which can be seen only by its gravitational signature. Assuming they’re right, all that “empty space” inside our atoms actually has a fair amount of stuff in it: dark matter, if not dark energy. Neutrinos sleet through a bunch of the rest of it, as do all the photons that convey the radio waves I was listening to. One could, in fact, argue that space is an illusion, that even the sparsest interstellar vacuum is far from empty.
But the mystics are still right: form is illusion. It’s just a different kind of illusion. For those who watch Brain Games on the National Geographic Channel, this will sound familiar: human brains are not just prone to illusions, they are hard-wired to see them. Neuroscientists have been having a lot of fun studying the neuroscience of magic. The basic finding is that our brains use a number of systems and shortcuts to make sense of the world. Some of these are innate, while some are learned, often culturally specific. To over-simplify, the world is so complex that we cannot understand it without simplifying it, pinning meaning onto sights, sounds, scents, and so forth so that we can respond to raw sensory inputs and survive. Without meaning, we would be lost. For example, our eyes are somewhat less acute than average smart-phone cameras, but we see more because our eyes move constantly, and our brains stitch the images together to provide the illusion that we’re seeing more than we actually do.
Thing is, this is part of being human, and the downside is that we’re innately susceptible to illusions because of the way our brains process incoming data. It’s a tradeoff, honed by evolution: we see the stuff we need to see (in the evolutionary sense of needing to survive to leave behind offspring), but that means we can be fooled by everything from camouflaged snakes to clever illusionists. In this sense, forms are illusory. We don’t see only what’s there. Instead, our brains are grown to see what we find meaningful. This is the difference between a camera and an eye: a camera sees what is actually there. However, it takes an enormous amount of effort to program a computer to see with a camera, because the programmer has to figure out how to embody human norms, assumptions, and illusions as computer code to interpret the camera image in a way that makes sense to humans. We do it automatically.
Personally, I think that the idea that form is illusion should be thrown out. Anyone who aspires to enlightenment needs to realize that illusions are a fundamental property of the structure of their brains. Seeing illusions is part of being a human being. We can, however, learn to see things somewhat differently, to not be caught by some illusions. For some people overcoming some illusions may be important, whether it be spotting the rattlesnake in the dead leaves or not being bamboozled by a con artist. Unfortunately, we are limited beings, and we will never see the world as it truly is.
For a trifecta, let’s look at another common mystical statement, that now is the only moment that is real. This may be scientifically true: we don’t really know what time is, and the only moment we truly experience is now. Nonetheless, now is just as illusory as anything else. It takes something like 40 milliseconds for a sensation to travel from your toes to your brain, so your sense of what’s going on in your feet “right now” is actually 40 milliseconds behind. I have no idea how the brain integrates feelings so that you have the immensely useful illusion that your face and feet are feeling the same thing at the same time, or that sounds and sights are integrated with these feelings, but it’s all an illusion: your brain is busy compiling all this incoming data into one whole that is partially illusory. Your sense of yourself, what “you” are at any instant, contains a lot of illusion. It’s not at all a stretch to say (as the mystics do) that you are an illusion.
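The toe-to-brain figure is easy to sanity-check with a quick conduction-velocity calculation. Both numbers below are my own rough assumptions (a typical path length and a typical myelinated sensory-fiber speed), not measurements:

```python
# Rough sanity check on the "40 milliseconds from toe to brain" figure.
# Both values are assumed typical numbers, not measurements.
distance_m = 1.8           # assumed toe-to-brain path length for an adult
conduction_m_per_s = 50.0  # assumed speed of a moderately fast myelinated sensory fiber

latency_ms = distance_m / conduction_m_per_s * 1000
print(round(latency_ms))  # 36
```

That lands in the same ballpark as the 40 ms in the text, which is the point: “now” in your feet is always a few dozen milliseconds stale by the time your brain hears about it.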
All this isn’t to bash Buddhism or any other mystical religion. While these religious ideas about space, form, and nowness may be partially illusory, Buddhism in particular is aimed at enlightenment, not as a way of winning some sort of psychosocial game, but as a way of overcoming suffering. Some scientific research suggests that, in fact, Buddhist practitioners can overcome suffering and become among the happiest people studied to date. From a scientific perspective, their practices may be based on illusions and a misunderstanding of science’s reality (and I can’t say this for a fact, since I’m not a Buddhist or a scholar of Buddhism), but if they can overcome normal human suffering, I’d say that Buddhists and other meditators are definitely worth our respect regardless.