At the present time, we are living in a warm interglacial period: MIS 1. This was preceded by the glacial MIS 2, which lasted from 24,000 to 11,600 years ago. The peak of this cold period lasted from 22,000 to 18,000 years ago, and is known as the Last Glacial Maximum (LGM). MIS 3, from 59,000 to 24,000 years ago, was warmer, though not as warm as today, and is classed as an interstadial. MIS 4, from 74,000 to 59,000 years ago, was cold, though less so than MIS 2. MIS 5 was the last full interglacial before the present one, and lasted from 130,000 to 74,000 years ago; it was preceded by another glacial period, MIS 6, from 190,000 to 130,000 years ago.


Tertiary (Paleogene): Palaeocene (‘Ancient’, 65 to 56 million years ago); Eocene (‘Dawn’, 56 to 33.9 million years ago); Oligocene (‘Few’, 33.9 to 23 million years ago).
Tertiary (Neogene): Miocene (‘Less Recent’, 23.03 to 5.332 million years ago); Pliocene (‘More Recent’, 5.332 to 2.588 million years ago).
Quaternary: Pleistocene (‘Most Recent’, 2.588 million to 11,600 years ago); Holocene (‘Wholly Recent’, from 11,600 years ago).


On-going sexual receptivity in women probably evolved to strengthen pair-bonding and counter the appeal of infidelity.


… there is no consensus on how to best define ‘species’. The problem was noted by Darwin in The Origin of Species, and matters are little improved to this day. Even the common-sense definition that two animals are the same species if they can interbreed in the wild and produce fertile offspring is fraught with difficulties. For example, wolves (Canis lupus) can interbreed with coyotes (Canis latrans) and jackals (Canis aureus), even though all three are regarded as separate species.


In fact, it is quite common for animals to be classed as different species because subtle differences in their courtship rituals prevent them from mating.


An analogy for genetic drift is seen in small isolated villages where everybody ends up with the same surname. If for example Mr and Mrs Smith are the only Smiths in the village and they have only daughters, then the surname Smith will disappear from the next generation. Over enough generations, the villagers will ‘drift’ to just one surname.
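
The surname version of drift is easy to see in action with a toy simulation. The sketch below is purely illustrative (the village size, the random seed and the Wright–Fisher-style resampling are assumptions for the example, not anything taken from the text), but it shows how chance alone whittles many lineages down to one:

```python
import random
from collections import Counter

def drift_to_one_surname(n_families=20, seed=1):
    """Toy Wright-Fisher-style sketch of the 'village surname' analogy:
    each generation, every family line inherits a surname drawn at random
    from the previous generation. With no selection at work, chance alone
    eventually leaves a single surname, just as chance alone can leave a
    single mitochondrial lineage."""
    random.seed(seed)
    surnames = [f"Family{i}" for i in range(n_families)]  # all distinct at the start
    generation = 0
    while len(set(surnames)) > 1:
        surnames = [random.choice(surnames) for _ in range(n_families)]
        generation += 1
    return generation, Counter(surnames).most_common(1)[0][0]

gens, survivor = drift_to_one_surname()
print(f"After {gens} generations only '{survivor}' remains.")
```

Run with different seeds, the number of generations varies widely, but the end state – a single surviving surname – is always the same, which is exactly the point of the analogy.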


Archaeological evidence based on site numbers suggests that the ratio might have been as high as 10:1 in favour of the incoming modern population. Sheer weight of numbers might have enabled the modern humans to expand their overall territory at the expense of the Neanderthals.


Once they were gone from their southern Iberian refugium, the long story of the Neanderthals was at an end. Or was it? As we have seen, around 20 percent of the Neanderthal genome survives in the present-day population. With a current world population of seven billion, there is now more Neanderthal DNA in existence than ever before. The Neanderthals are far from extinct.


It is also untrue that Mitochondrial Eve was the only woman alive at the time; there were others, but their mitochondrial lineages all ended at some point with women who failed to have any daughters. This is an example of genetic drift (see Chapter 8), whereby in a small population the various mitochondrial lineages ‘drifted’ down to just one – that of Mitochondrial Eve.


More extensive regions of the mitochondrial DNA genome have been studied; the results confirm that the mitochondrial genetic diversity of Africans is far greater than that of non-Africans, and suggest that Mitochondrial Eve lived about 170,000 to 200,000 years ago.


Having answered the question of when and where modern humans emerged, the next key question is how? One possibility is that they emerged in one place among a single isolated African population, from which they spread across the whole of Africa and replaced the various archaic populations as they did so. The second possibility is that the accretion model proposed for the Neanderthals in Europe (see Chapter 8) may also be applicable to Africa. If so, the anatomical and behavioural features now associated with modern humans might have appeared at different times and places across Africa, until what we see today emerged. As with the model proposed for Neanderthals, isolated human populations, each possessing some modern characteristics, periodically encountered one another and interbred. Over time, more and more of these characteristics began to accumulate in single populations, until eventually a population bearing the full Homo sapiens ‘package’ emerged.


The fossil evidence does suggest that humans in Africa were only gradually ‘modernised’ after the split with the Neanderthals. Fossil remains begin to show increasingly modern characteristics between 300,000 and 200,000 years ago, before entering the modern range between 200,000 and 100,000 years ago. During this second period, while clearly representing modern humans, fossil remains were still more robust than those of any living people, and it is only after 35,000 years ago that people with the more gracile, fully modern skeletal form make their appearance. Overall, this is what would be expected if the accretion model is correct.


A possible scenario is that while the accretion process was gradual and geographically-widespread, only one relatively localised population eventually acquired the full Homo sapiens ‘package’, giving it a competitive edge over other human populations. It is this population, possibly numbering between 2,000 and 10,000 individuals, from which the world’s current population is largely descended.


In 2002, a team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany compared the human version of the FOXP2 gene with that of the chimpanzee, gorilla, orang-utan, rhesus macaque and mouse. They found that only three changes to the gene had occurred since humans diverged from mice, and that two of these had occurred since the far more recent split between humans and chimps. They suggested that these two changes might have been critical to the development of speech and language, and that they had occurred at some stage in the last 200,000 years.


The FOXP2 gene is not unique to humans and exists with very few differences in other animals. The first clue that it could be a ‘speech gene’ came from studies of Family KE, an extended British family living in London (their actual identity is not in the public domain). Some members of the family have problems with aspects of grammar, including the use of inflexions in tenses. They also have difficulty in producing the fine movements of the tongue and lips required for normal speech. The problem affects three generations of the family and has been studied since the 1990s. In 2001, geneticists determined that the affected members of the family all have a defective version of the FOXP2 gene.


Their work with the Neanderthal genome had shown that Neanderthals possessed exactly the same version of FOXP2 as do modern humans. The implication was that the changes to FOXP2 must have occurred before the two species diverged from one another, which was at least 370,000 years ago.


Religion is universal in human society, and it appears to be a fundamental part of the human condition. Why should people all over the world believe in a supernatural being or beings inhabiting a realm detached from our day-to-day existence? Though religions differ considerably throughout the world, a belief in such beings is a very common aspect of them. The supernatural is not confined to religion, and entities unconstrained by conventional laws of physics and biology have featured in literature going back to the time of Homer.


If we accept that religion is indeed a product of the way the human brain works, the obvious question is why? Is religion useful, or is it a ‘misfiring by-product’ of something more useful, as Richard Dawkins43 has suggested?


Robin Dunbar has noted that both religion and story-telling are universal to human culture. He claims that the social brain hypothesis provides an explanation as to why this is so. As we have seen, theory of mind refers to the ability to anticipate the thoughts of others. However, it is actually no more than a restricted case of what philosophers have termed ‘intentionality’: awareness of mind-states such as knowing, thinking, believing, desiring or intending. Simple organisms such as bacteria completely lack self-awareness and are said to possess zero-order intentionality. Anything that is aware of what it is thinking possesses first-order intentionality; organisms with brains, such as vertebrates and the more complex invertebrates, are capable of this. To have a belief about somebody else’s belief, such as ‘I believe (1) that Sally thinks (2) the marble is still in her basket’, requires second-order intentionality. Thus theory of mind may be described as second-order intentionality.

For day-to-day purposes, humans typically use up to three orders of intentionality, for example ‘I think (1) Sally believes (2) that Fred wants (3) to leave her for Anne’. In fact we can do considerably better, and are capable of fifth- and sixth-order intentionality. Dunbar suggests that in writing Othello, Shakespeare would have required six levels in order to keep track of the plot. Even if their camp-fire stories never quite matched the works of Shakespeare, early story-tellers would have required similar levels of intentionality if they were going to hold their audiences. Dunbar believes that a functioning religion also requires similar levels of intentionality.

From this, Dunbar suggests that the capacity for story-telling and religion must be advantageous from an evolutionary point of view. He notes that powering all this extra intentionality requires energetically-expensive brain power (see Chapter 5), and suggests that for this capacity to have evolved at all, the advantages must have been considerable. He notes that in small-scale societies, such as those of hunter-gatherers, story-telling plays an important role in social bonding. He argues that religion is an even more powerful bonding device, creating a sense of unity strong enough to enforce cohesion in a large group. However, there are problems with this view.


Theories of religion and ritual as a means of creating social cohesion were first advanced by the French sociologist Émile Durkheim. Though they are widely accepted, such theories have been criticised on the grounds that they invoke ‘functionalism’. In sociology, structural functionalism is a model that sees society as composed of institutions that function for the common good. While a fuller discussion is beyond the scope of this work, critics of functionalism argue that it downplays conflict and inequality. They also suggest that functionalism is logically flawed and begins by assuming the very thing it sets out to prove. Religion and ritual do indeed create social cohesion – but this, critics argue, is more a description than an explanation. It doesn’t tell us why religion and ritual exist, unless one supposes that people consciously invented them for that purpose. It is very difficult to argue that the tribal elders of early settlements sat down to discuss introducing rituals that would bind their society together in case it became more complex in the future. While Dunbar is not suggesting that this was actually the case, we need to ask just how and why religious rituals actually came about. We must also not forget that for all its perceived advantages, religion has been a deeply-divisive force throughout human history. It has been responsible for countless wars, oppression and bloodshed, all of which continue unabated to this day.


The widely accepted view, based on genetic evidence, is that modern humans dispersed from Africa in a single migration sometime around 65,000 years ago. Taking the southern route, they crossed the Bab el-Mandeb Strait and then rapidly made their way along the southern Asian coast, reaching Australia around 50,000 years ago. It is supposed that any earlier migrations eventually died out.


The fossil and archaeological evidence suggests that early migrants reached East Asia and Australia before Europe. As noted in Chapter 9, the earliest reliably-dated fossil evidence we have for undisputedly-modern humans outside Africa and the Levant is in the order of 65,000 to 40,000 years old. The Australian ‘Mungo Lady’ from New South Wales is thought to be around 42,000 years old, and possibly as much as 62,000 years old; and remains from East Asia may be of similar age. No European finds are older than around 45,000 years. The archaeological record suggests that modern humans were at Lake Mungo 46,000 to 50,000 years ago, and in Europe 46,000 years ago. While the younger dates from both East Asia and Australia are close to the European dates, humans must have arrived in northern Australia well before they reached New South Wales. Even if the Lake Mungo artefacts are only 46,000 years old, a conservative estimate would suggest that humans reached Australia at least 50,000 years ago.


Estimates based on mitochondrial genetic data suggest that the initial migratory group included no more than 500 to 2,000 reproductively-active women. Of course, the total number of men, women and children taking part in the migration would have been greater, and a total founding population of around 3,000 has been suggested.


Mitochondrial DNA studies have shown that the Andaman Islanders, the Semang, aboriginal New Guineans and Aboriginal Australians all possess localised but very ancient branches of the three founder haplogroups M, N and R. The implication is that these groups settled along the coastline of the Indian Ocean a long time ago, and that they have remained in more or less the same place ever since. Researchers applied a statistical technique known as founder analysis to the genetic data, with the aim of identifying and dating migrations into new territory. The results suggest that the settlement took place at least 60,000 years ago, and that these peoples are indeed the relict populations left over from the original migration out of Africa.


Some 73,000 years ago, the Earth experienced the largest volcanic event of the last two million years when a supervolcano beneath Lake Toba in northern Sumatra erupted with a Volcanic Explosivity Index intensity of 8 (‘Ultra-Plinian’), ejecting 2,500 to 3,000 cubic km (600 to 720 cubic miles) of magma. In comparison, historical eruptions such as Krakatau, Tambora and Pinatubo were mere firecrackers. The largest of these, Tambora in 1815, ejected just 50 cubic km (12 cubic miles) of magma.
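
Taking the figures quoted above at face value, the disparity is straightforward to quantify:

\[
\frac{2{,}500\text{–}3{,}000\ \mathrm{km^3}}{50\ \mathrm{km^3}} \approx 50\text{–}60,
\]

so Toba ejected roughly fifty to sixty times as much magma as Tambora, itself the largest eruption of the historical era.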


Much of the Indian subcontinent was blanketed in ash to a depth of 10 to 15 cm (4 to 6 in.). In places, drifting caused the ash to pile up to depths of up to 6 m (19 ft. 8 in.). Ash also fell on the Malay Peninsula, in the Indian Ocean, the Arabian Sea, and the East China Sea. The dispersal of ashes from Sumatra in both western and eastern directions indicates two contrasting wind directions, and suggests that the Toba eruption probably happened during the Southeast Asian summer monsoon season.


Anthropologist Stanley H. Ambrose has suggested that the eruption caused a bottleneck in human populations across the world. He proposed a number of scenarios, but his favoured scenario assumed modern humans had left Africa before the eruption took place. Ambrose suggests that the various ethnicities seen today arose from genetic drift among the diminished populations.


This, together with the behavioural flexibility needed to survive the eruption and its aftermath, suggests that modern humans were already in India at the time of the eruption – and that at least some of them lived through it.


This would have left a situation where human groups were largely concentrated in three isolated refugia, including the expanded coastal oasis of the Persian Gulf. It is likely that the Persian Gulf refugium attracted Neanderthals as well as modern humans. If it is the case that modern humans interbred with Neanderthals shortly after leaving Africa, and if the Persian Gulf refugium model is correct, then Neanderthals would have moved south from the Levant, encountering modern populations in the north of the refugium. As we have seen, Shi’bat Dihya 1 could represent a southern extension of the Neanderthal range, and if so, it is unlikely to have been the only one.


The demise of the Neanderthals is unlikely to have been the result of a deliberate pogrom by the modern populations, and contacts might not necessarily have been unfriendly. More likely, the Neanderthals were unable to compete for food and other resources against a demographic avalanche of newcomers that, on the basis of site sizes and numbers, might have outnumbered them by as much as ten to one.


Based on the most recent radiocarbon data, it is believed that modern humans entered southeastern Europe 46,000 years ago.


Genetic data is broadly consistent with this date: mitochondrial genetic diversity data indicates that a rapid expansion of the modern population of Europe began 42,000 years ago. Groups dispersed rapidly from east to west across Central and Western Europe, reaching Spain by 41,000 years ago. Meanwhile, other groups migrated northwards onto the East European Plain, to reach Kostenki on the River Don in southern Russia between 42,000 and 45,000 years ago.


As conditions eased, the population again began to expand. A migration from Beringia into the New World began around 18,000 years ago, probably via the Pacific coastal route. Estimates as to the size of the founding population range from as many as 10,000 to as few as 70 individuals.


Partly because it straddles the equator, Africa has been less affected by climate change than the New World and Australia. Glaciers have encompassed as much as 56 percent of North America over this time. As much as 60 percent of Australia has been overrun by windswept sand-dunes. Many large African species underwent major range contractions during Pleistocene glacial maxima, but Africa is large enough to suffer vast losses of habitable area and still retain refugia extensive enough to sustain large species. African species could survive climatic conditions that most species elsewhere could not, and on this basis alone a lower rate of megafaunal extinction in Africa would be expected.


Diamond also suggests that the farmers’ diet was generally poorer than that of hunter-gatherers, and that this affected adult stature. In Greece and Turkey at the end of the Pleistocene, men averaged 1.78 m (5 ft. 10 in.) and women 1.68 m (5 ft. 6 in.). However, with the coming of agriculture, heights fell dramatically. By 4000 BC men were averaging 1.60 m (5 ft. 3 in.) and women 1.55 m (5 ft. 1 in.). Not until classical times did average height begin to increase, but it has still not reached hunter-gatherer levels.


Roman legionnaires are popularly supposed to have been on occasions paid in salt, giving rise to the term ‘salary’ (Latin ‘salarium’).


Researchers sampled wild einkorn from a wide range of locations within the Fertile Crescent. They found that wild strains from the Karacadag Mountains of southeastern Turkey are the most genetically similar to domesticated einkorn. Accordingly, they concluded that it had been domesticated there and that domestication had occurred only once. The Karacadag Mountains are a range of hills no more than 30 km (18 miles) from Göbekli Tepe, and this has led Steven Mithen to suggest that the monument played a pivotal role in the origin and spread of domestic cereals in Southwest Asia. Mithen proposed that the need to provide food for the hundreds of people who gathered for ceremonies at Göbekli Tepe might have led to intensive cultivation of wild cereals, leading in turn to the first domestic strains. The grain grown at Göbekli Tepe soon became known for its high yields, leading visitors to take home bags of seed for sowing on their own plots. Thus domesticated cereals reached sites such as Jerf el-Ahmar, and eventually the obsidian trade routes spread them south, possibly all the way to the Jordan Valley.


While cereals and legumes were extensively cultivated during the PPNA, it is not until the PPNB that fairly clear-cut evidence for domesticated varieties first emerges. The earliest convincing examples of wheat domestication are remains of emmer and einkorn from Çayönü Tepesi and Cafer Höyük in southeastern Turkey. They date from 8600 to 7900 BC, and possess the characteristic tough, non-shattering rachis. Subsequently, domesticated emmer is reported at Tell Aswad near Damascus between 8500 and 8200 BC, and became widespread across the whole of the Levant between 8100 and 6700 BC.


The case for southeastern Turkey being an epicentre for crop domestication is strengthened by genetic results for emmer wheat that suggest that it too was domesticated in southeastern Turkey.100 Barley on the other hand has been pinpointed to the Israel/Jordan region, where it was already being extensively harvested as a wild crop.


Similar studies at Cafer Höyük have led to the conclusion that domesticated pigs were being kept there by 8300 BC.


Mitochondrial DNA evidence suggests that the wild boar originated in Island Southeast Asia and subsequently dispersed across Eurasia. Multiple independent domestication events have been identified: one in Southeast Asia, at least one in East Asia, another in India, and at least two in Europe. Curiously, the mitochondrial haplogroups of wild boar from Southwest Asia have not been found among modern domestic pigs, although mitochondrial DNA recovered from pig remains found at Neolithic sites in Europe does contain these haplogroups. Evidently in later prehistory, European domestic pigs spread eastwards, and completely replaced earlier domestic pigs of Southwest Asian origin.


Recent work at Stonehenge has shown that the monument was sited near a pair of naturally-occurring parallel ridges that happen to align on the summer solstitial sunrise in one direction and the winter solstitial sunset in the other. The ridges, formed by meltwater at the end of the last Ice Age, would have been seen as auspicious. Post holes dating back to around 8000 BC suggest that this natural alignment had been noticed in Mesolithic times, millennia before any monument-building began. The monument was originally an elite graveyard, but by 2500 BC it was hosting ritual winter feasts attended by as many as four or five thousand people, which at that time would have represented around ten percent of the British population.
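
If those figures are taken literally (a back-of-the-envelope reading of the quoted proportions, not a figure given in the text), they imply a very small total population for Britain at the time:

\[
\frac{4{,}000\text{–}5{,}000\ \text{feast-goers}}{0.10} \approx 40{,}000\text{–}50{,}000\ \text{people in Britain around 2500 BC.}
\]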


PIE has words pertaining to wheeled vehicles including wheel, axle, shaft and hub. The wheel was invented around 3300 BC and if the existence of PIE words for it is taken at face value, then we get a date for when the language was spoken that is no earlier than 3300 BC.


We cannot be certain that it is. In the 1960s, linguist and classicist Calvert Watkins suggested that terms pertaining to wheeled vehicles were chiefly metaphorical extensions of older Indo-European words with different meanings. For example, *nobh (wheel-hub) meant navel, and the word for wheel itself, *kwekwlo, is derived from the root *kwel- (to turn or to revolve). A problem with this suggestion is that there are actually at least four different PIE verbs that mean to turn or to revolve, any one of which could have been used. For the word wheel to have emerged thus, we must assume that each branch of Indo-European independently adopted *kwel- rather than one of the other possibilities. The word *kwel- would also have to have remained current from the time when the Proto-Indo-Europeans lived through to 3300 BC, when the wheel was invented.


However, Colin Renfrew has suggested that the universality of words pertaining to wheels results from widespread borrowing. He believes that innovations such as the wheel and wheeled vehicles spread so rapidly that the relevant vocabulary spread with them as loanwords.


Ethnographic studies suggest that with simple hoe agriculture, the major subsistence contribution comes from female labour in sowing, weeding and harvesting. Such societies tend to be matrilocal (where women, rather than men, remain in their place of birth after marriage) and matrilineal (descent through the female line). By contrast, plough agriculture is associated with male dominance of subsistence activities, patrilocality (men remain in their place of birth after marriage) and patrilineal descent (descent through the male line).


In Central Africa and South Africa, Proto-Bantu has differentiated into 500 languages in 19 branches over 2,000 years.


An early study obtained dates between 7800 and 5800 BC for when the various Indo-European languages began to diverge from one another, rather earlier than the figure obtained using glottochronology. More recent work has refined the date range to between 7500 and 6000 BC.


In addition, it suggests that the geographical homeland of PIE is Anatolia. Thus both the timing and root location of the Indo-European languages fit with an agricultural expansion from Anatolia.


The name Nostratic comes from the Latin nostrates meaning ‘our countrymen’. From Nostratic arose a number of languages, including PIE. Like Indo-European, Nostratic is assumed to have arisen from a single language or tight grouping of languages termed Proto-Nostratic. If PIE can be thought of as a mother tongue of the Indo-European language group, then Proto-Nostratic is the grandmother tongue.


Nostratic was first proposed by the Danish linguist Holger Pedersen in 1903, and developed by Soviet linguists in the 1960s. Under the scheme proposed by American linguist Allan R. Bomhard, it includes Indo-European, Afroasiatic (North and East Africa, Arabia), Kartvelian (South Caucasus), Uralic (Finland, Estonia and Hungary), Altaic (Central Asia through to Japan and Korea) and Dravidian (Indian subcontinent, mainly the south). All these families are said to share some degree of common origin, or at least a degree of inter-family contact.


Bellwood associates Dravidian with a larger, hypothetical grouping, Elamo-Dravidian, which links it with the extinct Elamite languages of southwestern Iran. He notes that not all these language families can be associated with an agriculturally-driven dispersal out of Southwest Asia: Uralic is believed to be derived from Mesolithic hunter-gatherer dispersals across northern Eurasia, originating in the Ural region, and Altaic probably originated in Mongolia or Manchuria.


Anatolia, as we saw in Chapter 18, represented only one of the four directions in which the agricultural expansion proceeded out of Southwest Asia. Other dispersals took place into Central Asia, the Indian subcontinent and North Africa. Might these, too, have left their linguistic mark? Peter Bellwood has suggested that the Southwest Asian expansion as a whole may be linked to a controversial grouping known as Nostratic.


Although a pre-agricultural African origin for Afroasiatic cannot be ruled out, reconstruction of early vocabulary for cultural and environmental referents suggests a Levantine origin during the early Neolithic. Similarly, there are good grounds to associate Elamo-Dravidian with an agricultural dispersal into the Indian subcontinent.


Inevitably, there have been attempts to identify a homeland and date for Proto-Nostratic. Bellwood admits that these will remain elusive, but speculates on a connection with the Natufian and its contemporaries.


If the farming/language dispersal hypothesis is correct, then the languages now spoken in much of the world are descended from the winners of linguistic lotteries that took place thousands of years ago. These were the languages that just happened to be spoken by people living in the very few parts of the world where there were indigenous plant and animal species suitable for domestication. Some of these lotteries paid out bigger ‘prizes’ than others, and Proto-Indo-European was by far the biggest winner. Originally spoken by no more than a few tens of thousands, it gave rise to languages now spoken by 45 percent of the world’s population.


We should also note that Neil Armstrong’s first words from the surface of the Moon were spoken in a language that may trace its origins to the Anatolian farming dispersal almost nine millennia earlier.


In terms of numbers of languages, Niger-Congo is the largest in the world, with 1,436 reported languages and 300 million speakers.19 Bantu is just one of the 177 subgroups that make up the Niger-Congo family, but it is one of the largest with around 500 languages.


It is now generally accepted that maize was domesticated from teosinte, which is the name given to a number of annual and perennial grasses native to Mesoamerica. The name comes from the indigenous Mexican Nahuatl language, and has been interpreted to mean ‘grain of the gods’.


Salt was a vital commodity to the Maya. As tropical farmers, they required 8 g (0.28 oz.) of salt per person per day to maintain their sodium balance. A large city such as Tikal had to import over 130 tonnes of salt per year.
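
A rough check of these figures (assuming, purely for illustration, that the 8 g daily requirement applies to every inhabitant) shows that the quoted import tonnage is consistent with a city of several tens of thousands of people:

\[
\frac{130\ \text{tonnes/year}}{8\ \mathrm{g/day} \times 365\ \mathrm{days/year}}
= \frac{1.3\times 10^{8}\ \mathrm{g}}{2{,}920\ \mathrm{g\ per\ person}} \approx 44{,}500\ \text{people}.
\]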


Earth’s total biomass is limited by the amount of energy it receives from the Sun. Before the extinctions, the solar energy available for conversion to biomass by megafauna was shared between many species. With the extinctions, megafaunal biomass crashed, creating an energy surplus. In past extinctions, such as that of the dinosaurs, crashes have been followed by explosions in biodiversity, as new species evolve to exploit the energy surplus.


As we have seen, the end of the Ice Age saw the rise and spread of agriculture in many places around the world, which enabled far larger human populations to be supported than could be sustained by hunter-gathering. Humans were not the first animal species to produce their own food; ants are thought to have learned the trick 45 to 65 million years ago.5 The consequences, though, were far more significant. Instead of remaining available to support a reestablishment of megafaunal biodiversity, solar energy was co-opted by humans to raise crops and livestock. In effect, the energy surplus was being converted into human biomass. The Earth’s large animal biomass regained pre-crash levels just before the Industrial Revolution, but now it was dominated by just one species – Homo sapiens.


Biologist Anthony Barnosky has suggested that the addition of fossil fuels to the global energy budget enabled the human population to break through the glass ceiling. The use of fossil fuels for heating goes back to Roman times,10 but their use for mechanical power had to await the invention of the steam engine. Without mechanical power, the food production and distribution that supports current global populations would not be possible. If Barnosky is correct, a global population more than an order of magnitude greater than Earth’s natural carrying capacity is being supported by a non-renewable resource. Even without the deleterious effects of CO2 emissions, this is a worrying prospect.


Logically, there are only a few explanations for why we have not heard from ET:

1. The zoo hypothesis (popularised in Star Trek as the Prime Directive). Aliens are aware of our existence, but Earth has been placed out of bounds until humans reach a certain level of technical and/or ethical attainment;
2. We are alone. Earth is the first and only planet (at least in this galaxy and its neighbourhood) to have hosted a technological civilisation;
3. Any technological civilisation must invariably wipe itself out before interstellar travel can develop;
4. Long-lived alien civilisations exist, but rarely undertake interstellar travel, and have no desire to fill the galaxy with their presence.


The message could not be clearer. If we are to survive as a species, we should concentrate on the many problems we face here on Earth. Even at current population levels, much of the world does not have enough to eat. Around 870 million people in the developing world are chronically undernourished. According to World Bank figures for 2010, around 1,220 million people lived on less than US$ 1.25 per day and 2,400 million on less than US$ 2.00 per day. The latter figure is considered to be the average poverty line in the developing world. Nobody would dispute that eliminating these evils must be the aim of any responsible global society. Furthermore, it is hardly unreasonable that people in the developing world should aspire to a moderately-comfortable developed world standard of living. From an ecological point of view, can the economic growth required to make this a reality be sustained? The answer, probably, is ‘no’. If the world’s population stabilises at 10 billion or so, most will have to accept a much lower standard of living. The ‘population optimum’ may be no more than two or three billion. A managed (as opposed to catastrophic) reduction to these levels is unlikely to be achieved for at least two centuries.
