Filed under: science fiction, Speculation | Tags: Fermi Paradox, Interstellar Travel, science fiction
Wow, didn’t realize I hadn’t posted in so long. I got busy with life and writing. Here’s something that I was originally going to put in the book, but it doesn’t really fit there. It’s thoughts about how human experience might explain the Fermi Paradox.
Now, for one thing, I don’t think it’s much of a paradox. After all, as XKCD explained in “Alien Astronomers”, it would be almost impossible for a radio telescope on Alpha Centauri to pick up our radio broadcasts. Military and research radar beams, yes, but not our ordinary chatter. One critical point is that broadcasting powerful radio signals takes a lot of energy, and that’s expensive. If it’s more cost effective to be efficient, then we’ll do it (as we have with broadcasting and intercontinental cable) and that makes us more radio-invisible. At our current level of technology, the galaxy could be brimming with civilizations, and we couldn’t see them, nor could they see us. Being blind isn’t much of a paradox.
Of course, the question is, why aren’t the aliens here already? If they’ve had even a million years’ more civilization, shouldn’t they have starships? Well, here’s another answer: starships are expensive, because at high speeds, they’re a drag. This came out of an arXiv paper (link), and the pop-sci version on Io9. The basic point is that a starship traveling at high speed runs into photons from the Cosmic Microwave Background, and if it’s going fast enough, those collisions dissipate something like 2 million joules per second, acting like friction and slowing the ship down. So not only does a starship have to hit those high speeds, it has to keep generating thrust as photon collisions slow it down. You can’t just accelerate a starship and coast to another star, except at really low speeds, which would take thousands of years to get between stars. Do you know how to make a machine that functions continuously for thousands of years? That’s a non-trivial challenge. So there’s answer #2 for the Fermi Paradox: space isn’t slick enough to coast. At high speeds, the CMB acts like an aether and causes friction, requiring continuous acceleration.
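To put numbers on “really low speeds,” here’s a quick back-of-the-envelope sketch. The ~4.37 light-year distance to Alpha Centauri is my assumed target, and the coast is idealized: no acceleration phases, and none of the CMB drag discussed above.

```python
# Coast time to a nearby star at a constant fraction of lightspeed.
SPEED_OF_LIGHT_KM_S = 299_792.458
LIGHT_YEAR_KM = 9.4607e12
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def coast_years(fraction_of_c, distance_ly=4.37):
    """Travel time in years, ignoring acceleration and deceleration."""
    speed_km_s = fraction_of_c * SPEED_OF_LIGHT_KM_S
    return distance_ly * LIGHT_YEAR_KM / speed_km_s / SECONDS_PER_YEAR

for f in (0.001, 0.01, 0.1):
    print(f"{f:.1%} of c: {coast_years(f):,.0f} years")
```

At a tenth of a percent of lightspeed the trip takes over four thousand years; the “machine that runs for thousands of years” problem falls straight out of the arithmetic.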
Answer #3 for the Fermi paradox is the one I was going to stick in my book, which is about what the Earth will look like if the worst predictions of climate change come to pass, and humans don’t go extinct. This scenario could also explain the Fermi Paradox. Basically, in the roughly 500 years of the Industrial Revolution (and yes, I know that it was much longer in the run-up), we’ll have burned through all our fossil fuels, our available radioactive elements, minable elements from aluminum to phosphorus, groundwater, and so forth. After we use up all the cheap energy and readily available raw materials, we’ll be stuck recycling everything using solar and gravitational energy (or biofuels, PV, wind turbines, and hydropower, if you want mechanisms) for hundreds of thousands to millions of years, until the Earth can generate more fossil fuels. Perhaps we had a brief window in the 1970s when, if we’d gone for it and known what we were doing, we *might* have put a colony on the moon. Highly unlikely, but possible, and the chances of that colony surviving would be fairly low. We can’t get to Mars now (due to little problems like radiation in interplanetary space), and if we don’t get nuclear fusion to work real soon now (the 1970s would have been a good time for that breakthrough, too), we’re going to be downsizing civilization pretty radically in the coming century, rather than going to Mars or beyond.
Let’s assume that humans are relatively normal for sapient species, in the sense that we got our rapid spurt of technological advance by using up all the surplus energy that our planetary biosphere had squirreled away over the last 300 million years. By the time we understood the true state of our world and the galaxy, we also realized we were in trouble, because we were already deep into overconsumption and too-rapid population growth. By the time we’re technologically sophisticated enough to possibly colonize another planet, we won’t have the resources to do so. Indeed, we’ll be forced to use any terraforming techniques we work out on the Earth itself, just to keep it habitable. Once we’ve survived this peak experience, we’ll be a mature civilization (or more likely civilizations), but we’ll also be radio-quiet, highly resource efficient, and totally incapable of interplanetary travel, let alone interstellar voyaging.
That’s the #3 answer to the Fermi Paradox: scientific development marches in tandem with resource extraction, and it’s impossible to become sophisticated enough to colonize another planet without exhausting the resources of the planet you’re on. It’s possible that the universe is littered with ancient, sophisticated civilizations that have already gone through their peak resource crisis and are quietly going on with their lives, stuck on their planets, kind of like kids who went to college to change the world and got stuck with crushing college debts and jobs that weren’t their dreams. In our case, we’ve still got a billion years or so left before Earth becomes totally uninhabitable, so it’s not horrible to be “stuck” here, on the one planet we evolved to live on. It’s just sad for those of us who thought that Star Trek looked like a really cool way to live.
Filed under: economics, futurism, Speculation, sustainability, Uncategorized | Tags: brontosaur, energy use, Straight Dope
I haven’t posted recently, because I’ve been busy with a book and life throwing things at me. Anyway, as part of research for the book (which explores the idea of what the deep future looks like if severe climate change comes to pass and humans don’t go extinct), I wanted to find out how much energy the average American currently uses. So I did the usual Google Search, and tripped over Cecil Adams’ 2011 Straight Dope column about whether Americans use more energy than a blue whale (which was asserted in a 2009 New York Times article). He (actually his brainy assistant Una) cranked the calculation and came up with the basic answer of “no.” Just for thoroughness’ sake, I decided to replicate part of it.
It turns out that, in 2012 (according to <a href="https://flowcharts.llnl.gov/energy.html">LLNL</a>), the US used 97.1 quadrillion BTUs of energy (quads), of which 41.7 quads were actually used for something and 55.6 quads were lost in the system. As of December 31, 2012, there were 312.8 million people in the US. Grinding the numbers out, converting BTUs per year into watts and assuming that the population was constant throughout 2012, I got that the US generated about 10,378 watts per person, of which about 4,457 watts were used and 5,943 watts were wasted.
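Here’s the grind as a sketch. The quad figures are the LLNL numbers quoted above, and I’ve assumed a flat 365-day year, so the watts land within a fraction of a percent of the figures in the text:

```python
# Convert annual US energy use (quads) to average watts per person.
BTU_TO_JOULES = 1055.06
SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring 2012's leap day
POPULATION = 312.8e6  # as of the end of 2012

def quads_to_watts_per_person(quads):
    joules_per_year = quads * 1e15 * BTU_TO_JOULES
    return joules_per_year / SECONDS_PER_YEAR / POPULATION

total = quads_to_watts_per_person(97.1)  # ~10,386 W generated
used = quads_to_watts_per_person(41.7)   # ~4,460 W used
lost = quads_to_watts_per_person(55.6)   # ~5,947 W wasted
```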
So Cecil (actually Una) was basically right in saying that Americans draw about 11 kilowatts per capita. According to what they found in their research, a hundred-ton blue whale runs on about 65 kilowatts. So while this mythical average American isn’t consuming the energetic equivalent of a 100-ton blue whale, we’re sort of vaguely equivalent to a 15-to-20-ton blue whale (they exist too–they’re called calves).
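The crudest way to get that whale equivalence is to scale the 100-ton whale’s 65 kilowatts linearly down to our 11 kilowatts. (Real metabolic scaling is shallower than linear, so treat this as a deliberately rough sketch.)

```python
# Linear scaling: what size whale runs on an average American's power budget?
WHALE_TONS = 100
WHALE_KW = 65
AMERICAN_KW = 11

equivalent_tons = WHALE_TONS * AMERICAN_KW / WHALE_KW
print(round(equivalent_tons, 1))  # about 17 tons, i.e. calf-sized
```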
While I was wallowing around, trying to find the appropriate whale equivalent for this average American, it dawned on me that there’s a whole other class of critters that large: sauropod dinosaurs. Of course, they’re extinct, so their current metabolic rate is zero. However, it’s not entirely silly to postulate that they had whale-like metabolisms back when they were alive. We don’t know how much the large sauropods weighed either, but Brontosaurus (yes, I know it’s Apatosaurus, I’ll get back to that) is thought to have weighed in between 15 and 20 tons, if you believe Wikipedia.
In other words, the average American uses as much energy as an average brontosaurus.
Now, of course we can argue that Apatosaurus is not the right sauropod, that due to some metabolic model or reconstructed weight or other, another sauropod is a better metaphor than ol’ bronty. It’s an understandable but unwinnable argument, because the energy use of the average American is kind of a goofy concept too. A big chunk of that energy is used (and lost) transporting stuff around, supposedly to benefit us, but we never see it. It’s also averaged across everything from the energy use of a bum on skid row to that of a jet-setting star, and it’s a very uneven distribution. What does average mean? Who’s average? Whatever it means, the average human working an eight-hour office day runs pretty well on somewhere around 75 watts (resting metabolism), so we average Americans are using something like the energy of 150 humans just sitting around doing paperwork.
So, let’s just say that we are, on average, the brontosaurs of the energy world, using an outdated dinosaur name as a metaphor for how much energy we consume. We’re not the biggest energy users by country, but we’re pretty close.
Now you might think that this energy use means we’re going to go extinct like the brontosaurs, because such energy consumption isn’t sustainable. I think the truth is a little different. As humans, we can live on 75 watts, even 250 watts if we’re working hard and not sitting around. It’s our culture that constrains us to act like brontosaurs, and I’m pretty sure our culture is going to have to change radically if it doesn’t want to disappear. Ultimately, it’s a question of identity: when it’s no longer possible for us to be American brontosaurs, will it still be possible for us to be Americans, or are we going to have to find, join, or develop other cultures that are more energy efficient? Who can we be in the future? That’s one of the questions I’m working on.
Filed under: commons, Legacy Systems, Speculation, The Internet, Uncategorized | Tags: General Keith Alexander, Internet, Legacy System, NSA
Here I am venturing into something I know nothing about: the Internet. Recently, I read a 1999 quote from Stewart Brand, in The Clock of the Long Now (BigRiver Link), that the internet could “easily become the Legacy System from Hell that holds civilization hostage. The system doesn’t really work, it can’t be fixed, no one understands it, no one is in charge of it, it can’t be lived without, and it gets worse every year.”
Horrible thought, isn’t it? What I don’t know about are the legions of selfless hackers, programmers, techies, and nerds who are valiantly struggling to keep all the internets working. What I do know some tiny bit about are the concerted efforts of the NSA, under General Keith Alexander (who’s due to retire this spring), to install effectively undocumented features throughout the Internets and everything connected to them, so that they can spy at will. Perhaps I’m paranoid, but I’m pretty sure that every large government has been doing the same thing. If someone wants to hack us, they can.
Well, what I’m thinking about is the question of trust, rather than danger. The idea that cyberspace is dangerous goes well back before the birth of the World Wide Web. Remember Neuromancer? Still, for the first decade of online life, especially with the birth of social media, there was this trust that it was all for the greater good. Yes, of course we knew about spam and viruses, we knew the megacorps wanted our data as a product, and anyone who did some poking or prodding knew that spy agencies were going online too, that cyberwarfare was a thing. Still, there was a greater good, and it was more or less American, and it pointed at greater freedom and opportunity for everyone who linked in.
Is that still true? We’ve seen Stuxnet, which may well have had something to do with General Alexander’s NSA, and we’ve seen some small fraction of Edward Snowden’s revelations, about how the NSA has made every internet-connected device capable of spying on us. Does anyone still trust the US to be the good guys who run the Internet for the world? Even as an American, I’m not sure I do.
This lost trust may be the start of the Internets evolving into the Legacy System from Hell. Instead of international cooperation to maintain and upgrade the internet with something resembling uniform standards, we may well see a proliferation of diverse standards, all in the name of cyber security. It’s a trick that life learned aeons ago, that diversity collectively keeps everything from dying from the same cause. Armies of computer geeks (engineers by the acre in 1950s parlance) will be employed creating work-arounds across all the systems, to keep systems talking with each other. Countries that fall on hard times will patch their servers, unable or unwilling to afford expensive upgrades that have all sorts of unpleasant political issues attached. Cables and satellites will fail and not be replaced, not because we can’t afford to, but because we don’t trust the people on the other end of the link to deal fairly with us and not hack the systems they connect to.
I hope this doesn’t happen, of course, but I wonder. Once trust is lost, it’s difficult to regain. On a global level, can we regain enough trust to have someone run the internet as an international commons? A good place? Or is it too late for that? I’m quite sure that US, Chinese, and Russian cyberwarfare experts all will say that their expertise is defensive, designed to minimize damage, and they may even believe it. Still, in the face of so many soldiers and spies amassing online, why trust our lives to this battlefield? Anything we put online might be wiped out or compromised, victim to a battle we neither wanted nor approved of.
Even though I don’t have a reason to like him, it would be sad if General Alexander’s legacy was starting the conversion of the internet into a legacy system. It will also be instructive, a lesson in how the buildup of military power can backfire (something I think even Lao Tzu commented on). Fortunately or unfortunately, any history written on a legacy system will most likely vanish when the last expert walks away and the owners pull the plug. That’s the problem with legacy systems, you see. Their data can vanish very, very quickly.
Filed under: commons, economics, Speculation, sustainability | Tags: commons, Elinor Ostrom, markets
This isn’t my original idea. I’m reading John Michael Greer’s The Wealth of Nature: Economics as if Survival Mattered (Amazon link), and he makes the assertion that a free market, “in which buyers and sellers are numerous enough that free competition regulates their interactions,” is a form of commons, a resource that should ideally be free to all in a society. He goes on to point out that this is in contrast to those who think that all commons should be eliminated in favor of private ownership. The issue he’s getting at is that free markets cannot exist without regulation, something recognized even by Adam Smith, who noted in the Wealth of Nations that “people of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or some contrivance to raise prices” (reference).
I can see a long argument about how true this is, because it’s a provocative concept. Markets and commons are so diametrically opposed in traditional capitalist thinking that it’s hard to consider that they have anything in common. I’m happy to have that discussion, but there’s another related issue that, to me, is even more interesting: can markets be managed as commons?
We don’t have any data on this, but the late Elinor Ostrom won the 2009 Nobel in economics for her studies of how commons are successfully managed. She found, through studying both successful commons (water districts, community forests, and the like) and unsuccessful commons, that there were eight “Design Principles” that distinguished the successful commons from the failures (Amazon link to reference).
Here are Dr. Ostrom’s eight design principles, as rewritten by David Sloan Wilson in The Neighborhood Project (Amazon link). I’m using Wilson’s version since it’s more general than the original.
1. Clearly defined boundaries. Members of the group should know who they are, have a strong sense of group identity, and know the rights and obligations of membership. If they are managing a resource, then the boundaries of the resource should also be clearly identified.
2. Proportional equivalence between benefits and costs. Having some members do all the work while others get the benefits is unsustainable over the long term. Everyone must do his or her fair share, and those who go beyond the call of duty must be appropriately recognized. When leaders are accorded special privileges, it should be because they have special responsibilities for which they are held accountable. Unfair inequality poisons collective efforts.
3. Collective-choice arrangements. Group members must be able to create their own rules and make their own decisions by consensus. People hate being bossed around but will work hard to do what we want, not what they want. In addition, the best decisions often require knowledge of local circumstances that we have and they lack, making consensus decisions doubly important.
4. Monitoring. Cooperation must always be guarded. Even when most members of a group are well meaning, the temptation to do less than one’s share is always present, and a few individuals might try actively to game the system. If lapses and transgressions can’t be detected, the group enterprise is unlikely to succeed.
5. Graduated sanctions. Friendly, gentle reminders are usually sufficient to keep people in solid citizen mode, but tougher measures such as punishment and exclusion must be held in reserve.
6. Fast and fair conflict resolution. Conflicts are sure to arise and must be resolved quickly in a manner that is regarded as fair by all parties. This typically involves a hearing in which respected members of the group, who can be expected to be impartial, make an equitable decision.
7. Local autonomy. When a group is nested within a larger society, such as a farmers’ association dealing with the state government or a neighborhood group dealing with a city, the group must be given enough authority to create its own social organization and make its own decisions, as outlined in items 1 and 6 above.
8. Polycentric governance. In large societies that consist of many groups, relationships among groups must embody the same principles as relationships among individuals within groups.
What’s interesting about these rules is that, superficially, it looks like these would be great rules for free markets as well. Look at the complaints such rules would solve:
–Markets should have boundaries. People get really uncomfortable when everything is for sale, whether they want it to be or not. There’s a general idea that some things should not be for sale, while markets are the appropriate venue for other things. Similarly, not everyone wants to participate in “the marketplace,” and the outsiders resent being forced in.
–Markets should be fair, the fabled level playing field. Most would agree that people should get special privileges only so that they can exercise special responsibilities, not because they have special connections. Similarly, corruption and gaming the system should be punished.
–Collective decision making. This one is tough, because everyone wants to constrain the fat cats, whether or not they’re in the market. Still, there are many complaints about top-down rulemaking, and with good reason. This is not to say that markets are all good at self-governing (and here I’m thinking about the body count in illegal drug-trade disputes), but to the extent that a market is self-governing, having rules that everyone agrees are fair is not a bad thing.
–Monitoring: this one is a no-brainer. Corruption kills markets, and they always need to be monitored to avoid people gaming the system. Interestingly, monitoring in commons can come either from within, from people hired to monitor the system, or from outside officials. Any and all of these can work, depending on the circumstances.
–Fast and fair conflict resolution: this one is another no-brainer. Things work best when disputes can be settled fairly and quickly, either by a tribunal within the market, or by higher authorities, so long as judgement is fast and fair.
–Local autonomy. This can be somewhat problematic when you think about Wall Street, but it’s the flip side of having collective decision making within a market. If the authorities are going to let a market make its own rules, they need to let the market govern itself. Note, however, that authorities can be intimately involved in both monitoring and conflict resolution, so long as the market grants that this is their legitimate role in the market.
–Polycentric governance: This is the idea that the relationship between individuals and a market is mirrored between markets within a greater market, if such a hierarchy exists. I’m not sure how this might work, but it does embody the same ideas of group decision making (on the level of member markets), monitoring, fast and fair dispute resolution, and so forth. That’s not a bad way to handle commerce on a large scale.
To me, this is the bigger point: even if markets aren’t exactly commons, it certainly looks like the principles that lead to successful commons might lead to successful free markets. Additionally, it’s not particularly driven by any market ideology: both progressives and libertarians could agree on these design principles. Even big-government proponents tend to agree (in my experience) that the best regulations are the ones that people think are fair and fairly enforced. Getting such regulations written can be very difficult, but that fairness is often a major goal of regulation. What also makes this interesting is that, if you accept that markets may be commons, it’s possible to have a free market under a wide range of conditions—so long as the market is properly monitored and managed according to rules.
A truly free market won’t work, but a market commons may well be viable. What’s sad is how far Wall Street currently is from most of these design principles. Perhaps our financial markets are a lot less successful and sustainable than we might wish for? Perhaps they need (shock, horror) more regulation, not less, to last?
What do you think?
Filed under: Real Science Content, Speculation, Uncategorized, Worldbuilding | Tags: anthropocene, global warming, hobbits, Speculation
No, I haven’t seen the latest Peter Jackson offering yet, but I will soon. Still, in honor of the latest, erm, extension of The Hobbit onto the big screen, I thought I’d pitch out an interesting possibility for the future of at least some of our descendants.
First, a definition: ATM isn’t the money machine. Rather, it’s an acronym for Anthropocene Thermal Maximum, which we’ll hit sometime after we’ve exhausted all the fossil fuels we’re willing to dig up and burn. If we blow off something like 2,500 gigatonnes of carbon, we’re going to be in the range of the PETM, the Paleocene-Eocene Thermal Maximum (Wikipedia link) about 55.8 million years ago, when global temperatures were as hot as they’ve been at any point in the last 60 million years. Our descendants’ future will be similar, if we can’t get that whole carbon-neutral energy economy working.
One of the interesting recent findings is that mammals shrank up to 30 percent during the PETM (link to press release). The reason given by the researchers is that increased CO2 causes plants to grow more foliage and fewer fruits (in the botanical sense, so we’re talking fruits, nuts, grains, and all the other things we like to eat). This poorer nutrition led to smaller animals. I think there’s another possible explanation for the decrease in animal size.
My thought was that, if civilization crashes due to radical climate change into a PETM-type world, humans will be at the mercy of the elements, so it’s quite likely that future people will be smaller in size. Perhaps 30 percent smaller? Sitting down with the BMI graph and making a few assumptions, I found that the 30%-smaller equivalent of a 71-inch-tall male weighing 160 lbs is approximately 60 inches tall. Now, this is an interesting height, because it is the upper limit of pygmy heights in an interesting 2007 study by Migliano et al. in PNAS (link to article). Their hypothesis was that the evolution of pygmies around the world is best explained by significant adult mortality, which they adapted to by shifting from growth to reproduction earlier in their lives. The researchers found that the average age at mortality in pygmies is 16-24, and few live into their 40s. The major cause of death is disease, rather than starvation or accidents.
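The “sitting down with the BMI graph” step can be sketched in a few lines. My assumption here: the smaller person carries 30 percent less mass at the same BMI, and since BMI is mass over height squared, height scales as the square root of the mass ratio.

```python
# Height of a 30%-lighter person at constant BMI (BMI = kg / m**2).
LB_TO_KG = 0.45359237
IN_TO_M = 0.0254

height_m = 71 * IN_TO_M
mass_kg = 160 * LB_TO_KG
bmi = mass_kg / height_m ** 2  # ~22.3, comfortably "normal weight"

small_mass_kg = 0.7 * mass_kg
small_height_in = (small_mass_kg / bmi) ** 0.5 / IN_TO_M
print(round(small_height_in))  # ~59 inches, close to the 60 in the text
```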
While I don’t know of any evidence of increased animal disease during the PETM, there is good evidence for increased plant disease and predation by insects (link), so it’s not much of a stretch to hypothesize that the animal dwarfing could have been caused by increased disease, decreased lifespans, and a resulting shift towards smaller body size and early reproduction.
So, here’s the idea: if we blow too much carbon into the air, and our ATM rivals or exceeds the PETM, at least some of our descendants will be the size of pygmies, due to the harsher environment (more disease, less medical care) favoring people who mature earlier and have kids as teenagers. They probably won’t be hobbits unless a hairy-footed morph takes off somewhere (perhaps in the jungles of Northern California?), but they will be, technically, pygmies.
It’s not the most pleasant thought, but if short lives and statures are troubling, the good news is that post-PETM fossils show that animal species regained their former size once the carbon was out of the air. And, according to Colin Turnbull’s The Forest People, life as a pygmy isn’t necessarily nasty or brutish, even if it’s short.
Filed under: livable future, Real Science Content, Speculation, sustainability, Syria | Tags: desalination, Syria, Syrian civil war, water politics, water war
Not that I’m an expert on foreign policy or Syria (there’s someone with the same last name who is. We’re not related). The one thing I do understand, a little bit, is water politics, and that may well be one of the important drivers of the Syrian civil war. As Mark Twain said, “Whiskey’s for drinking, water’s for fighting over.” And good Muslims won’t drink whiskey. Since I’m interested in the deep future with climate change, this might be a portrait of things to come for other parts of the world, including where I live in the southwestern US.
Here’s the issue: between 2006 and 2011, the eastern 60 percent of Syria experienced “the worst long-term drought and most severe set of crop failures since agricultural civilizations began in the Fertile Crescent many millennia ago,” forcing 200,000 Syrians off the land (out of 22 million total in Syria) and causing them to abandon 160 towns entirely (source). In one region in 2007-2008, 75% of farmers suffered total crop failure, while herders in the northeast lost up to 85% of their flocks, which affected 1.3 million people (source). Assad’s policies exacerbated the problem. His administration subsidized water-intensive crops like wheat and cotton, and promoted bad irrigation techniques (source. I’m still looking for a description of what those bad irrigation techniques were.).
These refugees moved to cities like Damascus, which were already dealing with over a million refugees from Iraq and Palestine. They dug 25,000 illegal wells around Damascus, lowering the water table and increasing groundwater salinity (source). The revolt in 2011 broke out in southern Daraa and northeast Kamishli, two of the driest parts of the country, and reportedly, Al Qaeda affiliates are most active in the driest regions of the country (source).
One thing that worsened the problem was Turkey. The Tigris, Euphrates, and Orontes Rivers flow out of Kurdistan in Turkey into Syria. Turkey, in a bid to modernize the Kurdish region, built 22 dams on these rivers by 2010 under the Southeastern Anatolia Project. They’ve taken half the water out of the Euphrates and used it to grow large amounts of cotton within Anatolia, doubling or trebling local income in that traditionally rebellious area.
So is drought destiny? Experts caution that it’s not that simple (source). In 2012, the American Midwest suffered a record drought. While that may have led to Tea Party outbursts in the 2012 elections, it didn’t lead to armed insurrection. (As an aside, you can figure out how well the drought map correlates with the 2012 Presidential election map. Washington might one day take note of this…). Still, when you couple drought with poverty, bad governance, and a witch’s brew of historical grievances and systemic injustice, it can cause a civil war.
There are a couple of big problems here. The first is that the US didn’t see the revolt coming. Right up until the first protests started, they thought that Syria was immune to the Arab Spring (source). This isn’t all that surprising. Due to the War on Terror, the CIA and other agencies work closely with government intelligence agencies to hunt terrorists (source), and have little or no intelligence capability to learn what’s happening on the “Arab Street.” This led to the US missing the Arab Spring movement pretty much in its entirety. The US military has been talking about climate change for years, and they’re starting to get serious about preparing to deal with it (source), but they don’t seem to have a functional reporting system yet, let alone a good way to respond. To put it bluntly, no one in Washington or other capitals seems to be watching things like water supplies, crop reports, rural migration to cities, or even the price of bread. Or if they are, they’re not being listened to. Spikes in bread prices throughout North Africa helped prepare the ground for the Arab Spring uprisings, and the region is still a major wheat importer (source).
The second problem is that, so far, our leaders haven’t officially acknowledged that water’s a problem. Basically, during the drought, Syrian per capita water supplies dropped by almost half. While a lot of this could be recovered through better management, growing different crops, convincing people not to eat bread in the place where wheat was first farmed, and so forth, there are probably too many people relative to the water supply, at least during droughts. Part of this is demographics. The population of the Middle East has quadrupled over the last 60 years, and the water supply, if anything, has shrunk (source). The brutal answer is to get rid of those people, which may be one reason why Assad was so willing to use chemical weapons. There are 1.851 million registered Syrian refugees at the moment, roughly eight percent of the population, now living outside the country. Assad (and whoever follows him) may not be interested in having them return, either. Syria likely would be more stable with fewer thirsty mouths.
What’s the solution? One important part is to get water on the negotiating table. Turkey officially helps Syria with water flows, but it’s not clear how diverting half a river is a friendly gesture, and the two countries are not on good diplomatic terms. If the Turks are using the Euphrates to water cotton, most of that water is lost to the air, rather than flowing back into the river where Syria can get it. Turkey could help stabilize Syria by letting more water out of its dams, but by doing so, it would risk insurrection in Kurdistan, so I don’t think they will voluntarily give up that water. Since Turkey’s water sources are secure for the moment, I suspect that quite a few Syrians are going to be resettling there, just as Iraqis and Palestinians are (or were) living in Damascus. More countries should volunteer to permanently take in Syrian refugees, especially in the north (as Sweden has). Why not? It increases populations in areas that are experiencing population decreases due to low birth rates, and it’s cheaper than trying to fight in the Middle East. Moving people to where there’s water is much less cruel than interning them in refugee camps in border deserts with inadequate resources and no hope.
One of the problems with climate change is that the northern edges of deserts are forecast to get drier, and the Middle East and the Mediterranean basin sit on one of those edges. If we want to avoid continual unrest in that region, it’s high time we all (in the international sense) start financing regional desalination plants in the Middle East and other dry areas. This has worked to secure water for Israel. Granted, it’s an energy-intensive solution, but a large-scale desalination plant is cheaper than a single day of all-out ground war, US style (source).
The other lesson here is that politics and politicians matter. Drought isn’t necessarily destiny, but bad water management choices can turn a chronic problem of scarce resources into a bloody war. If you want to know why I’m not a libertarian, this is it. It’s nice to have liberty, but it’s necessary to have water. Good politicians work to get you enough of both, and we need more of them at the moment.
Filed under: Kaiju, Pacific Rim, Real Science Content, science fiction, Speculation | Tags: Bechdel Test, Kaiju, Pacific Rim, Real Science Content
The first thought was inspired by Darren Naish’s comments about the portrayal of scientists in Pacific Rim. This is scarcely news. In fact, it’s even inspired a few entries at TV Tropes. Still, it’s frustrating, especially when the sheer stupidity of some applied phlebotinum degrades the rest of the movie (red matter, anyone?).
There are potential solutions. Movies tend to be quite sexist, and this has inspired the Bechdel Test, a litmus test for how women are portrayed in a piece. In order to pass, the piece must:
a. Include at least two women,
b. who have at least one conversation,
c. about something other than a man or men.
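The criteria above are simple enough to express as a predicate. Here’s a minimal sketch; the data model (a `Conversation` with speakers and crude topic labels) is invented purely for illustration, not drawn from any real screenplay-analysis tool:

```python
# Hypothetical sketch: the Bechdel criteria as a predicate.
# The Conversation data model is invented for illustration.
from dataclasses import dataclass

@dataclass
class Conversation:
    speakers: set[str]   # names of the characters speaking
    topics: set[str]     # crude topic labels, e.g. {"men", "heist"}

def passes_bechdel(women: set[str], conversations: list[Conversation]) -> bool:
    """True if at least two women talk to each other
    about something other than a man or men."""
    for conv in conversations:
        women_present = conv.speakers & women
        # Criteria: two or more women (a), in conversation (b),
        # with at least one topic that isn't men (c).
        if len(women_present) >= 2 and conv.topics - {"men"}:
            return True
    return False

# Example: a scene where two women discuss a heist passes.
convs = [Conversation({"Alice", "Bob"}, {"men"}),
         Conversation({"Alice", "Carol"}, {"heist"})]
print(passes_bechdel({"Alice", "Carol"}, convs))  # prints "True"
```

The hard part in practice is obviously the topic labeling, not the predicate, which is part of why the test works better as a mental checklist than as software.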
When you start thinking about the number of films that fail, you realize how biased most films are. This goes double for summer blockbusters, unfortunately.
Can we do something similar for science? I’m not as pithy as Bechdel, but my first thought was that if a film could be improved by hiring an out-of-work scientist to vet the script and including her suggestions, then it fails the test. This would catch everything from midichlorians and red matter to the continuity gaffes in all the Star Treks, the teleportation between forests in Jurassic Park and so forth.
Now, movie types typically argue that scientists are such a tiny percentage of the audience that there’s no point in catering to them, but that misses the point of the test. This test is more in line with Van Halen’s requirement in their contracts that there be no brown M&Ms backstage. The point of this bizarre-seeming contract clause was that Van Halen at the time was touring with a huge, heavy and technically sophisticated stage rig. Their contracts ran to dozens of pages, and included things like making sure the stage they were to perform on wouldn’t collapse under the weight of all their props. The no-brown-M&Ms clause was actually there for safety: if they spotted brown M&Ms in the bowl, they would immediately know that the venue managers hadn’t bothered to read the contract. At that point, they’d have to immediately check every other show detail, to make sure that nothing collapsed and no one died during their show.
When a movie is stupid about the science, it’s often stupid about a lot of other things too, things that everyone notices, like a crappy plot or cardboard stock characters. Get too stupid and the movie flops. Compared to that, getting a scientist to vet the script is pretty cheap.
Now, let’s turn to Pacific Rim. At this point, I haven’t seen it (and since Darren and Mike Matt have seen it multiple times, I’m not sure they need my ticket money). Be that as it may, I’d like to suggest what would really happen to any kaiju, including Godzilla, that was stupid enough to make repeat visits to our little world.
Here’s the fundamental stupidity about these giant kaiju films. It’s all about killing cities. Yes, this would certainly happen the first few times, at least until someone ran an analysis on a kaiju corpse. See, kaiju are biophysically impossible as we understand reality, so if they did exist, they’d be absolutely full of bizarre chemistry. In Pacific Rim, this is all treated as hazardous waste and black-market rhino-horn stand-ins. But in real life, each corpse would be a gold mine for the transnational, immensely sophisticated chemical industry. It doesn’t matter whether you’re rendering Godzilla down for radionuclides to supply the chronic shortages of medical isotopes, or rendering the blood of Pacific Rim kaiju down for all that ammonia, which is a major feedstock for both fertilizers and explosives. Those giant things are too valuable to nuke.
So if our world was invaded by kaiju, here’s what I suspect would happen. First, people would hack kaiju communications to figure out how to lure them or repel them (much as the Allies hacked U-boat communications in WWII and routed the entire force; controlling attack subs from a central hub is self-defeating). Then they’d build giant killing pens, probably on the coast of China (note that I’m suggesting this not due to any bias against China, but because they have become chemical suppliers to the world, and they’ve got the huge infrastructure needed to deal with the influx of kaiju products). Once these facilities were built, fleets would lure and drive kaiju into these kill-zones, dispatch them humanely, perhaps with a bunker-busting guided bomb to the back of the skull dropped from 10,000 feet, and render their carcasses for everything we could get out of them. Rather than shutting the rift down, we’d probably drop a note in, asking the kaiju masters to send more kaiju (NSFW link). For all I know, bringing in kaiju this way would render our industrial civilization a bit more sustainable, since we would have outsourced production of some highly dangerous chemicals to another planet.
Yes, I understand that Pacific Rim runs on awesome, and that what I just suggested would be titanically not awesome, more in line with The Cove than with what actually made it to the screen. In fact, given Hollywood’s limited set of plots, the only movie they would make out of this scenario is some blue-eyed mother kaiju being mercilessly herded to her doom on the industrialized China coast, while impractical environmentalists struggle to save the noble beast from certain destruction. But there’s something a little sad in this whole exercise. It’s not just the bad science, it’s the lack of vision. Hollywood can only think to make kaiju in one mode: destroying coastal cities. There’s little creativity; it’s all replaying a trope that first showed up in 1954. The Japanese were more inventive with their kaiju, but Hollywood’s creativity has been leached out by the monstrous budgets they play with, since investors far prefer predictable ROI to untested creative productions. Personally, I think that adding a little real science, along with that massive dose of creativity that real science inevitably brings, would spice up the whole enterprise. Unfortunately, I doubt anyone in the industry (outside the SyFy Channel) would agree with me. And so it goes.