ABSTRACT

Based on the history of the planet earth, it seems almost certain that humanity will face at least one survival crisis during the next 1,000 years.

Although some individual humans will almost certainly survive almost any exogenous or endogenous shock which does occur, whether those survivors know enough (or can learn quickly enough) to re-create what we call civilization or society depends critically on how information and knowledge are managed today.

Despite a proliferation of information and information technologies, there actually seems to be less investment today in understanding how humans learn and what needs to be known in order to survive.

This paper outlines what might need to be known to live for another 1,000 years, argues that we know most of this already, and considers how we might ensure that we still know it no matter what happens in the interim.

“There are more scientists alive today than have ever lived before.”

“The amount of knowledge on the planet is increasing exponentially.”

Quotes like these are commonplace.  It is true that we know (or think we know) more about ourselves and our world than at any time in the planet’s history.  The objections of some archeologists to such arrogant thinking don’t seem to permeate very far.

This explosion of information (and the technology to store and access it) gives some people great comfort.  My children are told at school that since they can find out anything they want to know through the internet it is less important today to remember things than it is to know how to validate and use the information which is readily at their fingertips.  Others are concerned that more information does not imply greater knowledge, and may in fact mask a despairing loss of wisdom.

I wish to focus on one particular aspect of this information explosion: of all that we know, exactly what is critical to our survival as a species, both now and in the future, and how we will ensure that this essence is actually available to those who need it, when it is needed.

Information is ultimately contingent – its value (even often its use and meaning) depends almost entirely on prevailing circumstances.

Hence, conclusions about information needs in 1,000 years pre-suppose an analysis of the circumstances which will prevail in 1,000 years (or which may occur in the interim).

Some potential circumstances are so benign as to be trivial.  For example, if there is a stable world-wide power grid in 1,000 years and a ubiquitous technology using this power, then information required for survival will be so prevalent as to need no discussion.  Such scenarios are, in my view at least, so unlikely they can be ignored for the purposes of this paper.

It is much more likely that a viable future for humanity in the year 3000 will need to be designed, and it is the features of this design to which I turn my attention here.[1]

Overwhelmingly, the history of human information storage and transfer is an oral history.  The capacity to semi-permanently record information spans less than 10% of human history (though it has caused a phenomenal explosion in the quality and quantity of stored information).
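A rough arithmetic check of that figure, offered as a sketch only and using approximate dates that are not drawn from this paper: the earliest writing systems are roughly 5,000–5,500 years old, while anatomically modern humans are usually dated to at least 200,000 years ago.

    # Rough check of the "less than 10%" claim, using approximate, assumed figures.
    writing_years = 5_200     # approximate age of the earliest writing systems (cuneiform)
    human_years = 200_000     # conservative estimate of the age of Homo sapiens
    print(f"{writing_years / human_years:.1%}")   # prints "2.6%" - well under 10%

Even with a generous estimate for writing and a conservative estimate for humanity, recorded information occupies only a few percent of our history.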

Even within the most recent 10% of human history, information has overwhelmingly been stored in ways which allow it to be directly accessed via human senses.  Only in the past 100 years or so have humans begun to record their information in ways which require technologically mediated access.  In the twenty-first century an overwhelming majority of human information is stored in ways which cannot be directly accessed by the people for whose benefit it has been stored.

In some cases (eg celluloid film) this storage is in a format which can be directly comprehended – though it may require amplification or some other relatively simple manipulation in order to be accessed – but overwhelmingly modern human information is stored in ways which can only be accessed by sophisticated technology using non-human senses.

Thus, any analysis of information needs in the year 3000 must include analysis of the technology needed to make sense of such information.  Even if required information is available in pristine form, if the technology needed to access it isn’t available, then the information is useless[2].

Despite the promises of technophiles, and notwithstanding increasingly sophisticated user interfaces, information-accessing technology is becoming more and more complex.

It may be true that computer chips, fibreoptic cable and communications satellites are ubiquitous and reducing in price – but knowledge about their construction (including the facilities to actually do the construction) is being concentrated in fewer and fewer minds[3].  This observation is equally true of relatively low-tech commodities such as loaves of bread.  How many of us in this modern world could create a loaf of bread from its raw materials?

In 2003 it is difficult to imagine accessing significant information without using Intel (or AMD or…) chips, plasma (or LCD or…) screens, Windows (or OS2 or Unix or…) and Netscape (or Explorer or…)[4]. Yet each of these represents a technology which is understood by very few, and capable of being reconstructed by even fewer.

I will return to this issue later in this paper when I consider how we might organize ourselves so that the information we require to live to the year 3000 will be available as required.

But first I would like to begin to address the question of what we do need to know in order to survive to 3000.

Archeologists and anthropologists are convinced that there have been a number of cataclysms in human history which have resulted in a loss of what was relatively common knowledge.  Many science fiction writers have similarly explored the consequences of knowledge lost through trauma, and the huge cost to society of re-acquiring this knowledge.

So, for me, the paramount thing we need to know in order to survive for the next 1,000 years is the scope of our present knowledge.  That human beings once knew, for example, how to harness steam to produce power is critical to inspiring future generations, even if the actual technology to produce a steam engine (and its many derivatives) is lost.  There is plenty of evidence throughout history of parallel discoveries or inventions, once a fundamental possibility had been revealed, but it is hard to conceive of something if one doesn’t even know that it is possible.

The first thing we need to know is that we once knew.

Secondly, we need to continuously know some fundamental properties of matter, of ourselves and of our universe.

Losing knowledge of the periodic table and the gross structure of matter, for example, would be disastrous for our capacity to shape our environment to meet our needs.

Similarly, losing fundamental knowledge of our biology would increase hugely the risks to life and longevity.  And knowledge of our place in the universe is essential to ward off a return to the sorts of mythological interpretations which delayed societal development in the past.

Thirdly, I contend we need to continuously know how to harness energy for human use. As animals we lack the ability to directly convert sunlight, air and water into energy, and regressing to harnessing only available plants and animals would be a survival disaster.  Current human knowledge includes a variety of simple (and complex) ways of harnessing energy for our use – from lighting and sustaining fire to pumping ground water and harvesting seeds.  Loss of this knowledge would greatly reduce our survival chances.

These then are the three broad categories of things we need to continuously know to the year 3000:

  • that we now know how to do marvelous and apparently magical things
  • the fundamental structure of ourselves and our world
  • how to harness the energy we need.

On this foundation we can re-create a viable civilization.[5]

Which brings me to the last question – how do we ensure that we continuously know these things?

Again I believe there are three imperatives.

First, we need to be acutely aware of the importance of multiple redundancy – having the same information dispersed widely and in multiple forms.

Under the influence of economics and business, the Western world has become very skeptical of anything which appears to be unnecessary or redundant.  Modern businesses and governments have streamlined themselves into single lines of communication with the minimum number of organizational layers.

The same relentless pursuit of efficiency has seen the replacement of violins by amplifiers in symphony orchestras, and is reducing the number of viable languages spoken in the world to a handful, to give just two examples.

Nature on the other hand recognizes the criticality of redundancy in ensuring survival, particularly after a crisis.  Humans have two eyes, ears and kidneys where one is sufficient[6], and produce millions of sperm in a single ejaculation;  and many biological pathways have at least two viable routes to achieving their outcomes.

This apparent redundancy is critical in a crisis – natural systems frequently demonstrate a resilience not often evident in systems created by humans[7].

Putting all one’s eggs in one basket is simply not a smart survival strategy.
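The arithmetic of redundancy makes the point starkly.  Here is a minimal sketch; the probabilities are illustrative assumptions, and real copies stored in one place or one format would of course not fail independently.

    # Probability that at least one of n independent copies of a record survives,
    # assuming each copy independently survives a crisis with probability q.
    def survival_probability(q: float, n: int) -> float:
        return 1 - (1 - q) ** n

    for n in (1, 2, 10, 50):
        print(n, round(survival_probability(0.2, n), 5))
    # 1 copy -> 0.2;  2 copies -> 0.36;  10 copies -> 0.89263;  50 copies -> 0.99999

Dispersing copies widely, and in different forms, is precisely what makes their failures (approximately) independent.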

In the context of this paper, redundancy means the dispersion of the same information widely and in multiple modalities.  To a large extent this is a feature of the modern world, though there are some reservations:

  • as noted earlier, the distribution of millions of copies of Encarta across the world is meaningless if the technology to access it is unavailable (or can’t be readily replicated)
  • if the distribution of Encarta crowds out the distribution of independently developed encyclopedias (with their inevitably different contents and perspectives), then information is lost
  • books (and computer disks) are necessary and critical ways of compiling and distributing information, but experience in using them is also essential.

Which leads to the second imperative if we are to know what we need to know to survive to the year 3000 – ubiquitous education.

Concentrating information and knowledge in a few minds is not a successful survival strategy – every acorn and every sperm contains enough information to seed the next generation.

We can have no idea in advance of a crisis which particular individuals will survive.  Distributing fundamental information to the widest possible number is the most intelligent survival strategy we can follow.

For human beings this means at the very least ensuring that every adult is numerate and can read and write.  It also means ensuring everyone knows where to access information critical to their survival.

This is clearly not an imperative in our current world, in which a form of information imperialism predominates – with access to information being largely mediated by the economic marketplace.

A large-scale version of the prisoner’s dilemma is evident here – humanity is clearly better able to survive if critical information is widely dispersed, but individuals can enhance their personal survival (and their current prosperity) at the expense of others by garnering critical information for themselves.

Authors have written entire books about the difficulty of resolving other versions of this dilemma.  But I would argue that this particular information dilemma is readily resolved once it is recognized that information is not a typical economic good: it is not in short supply, and it is not consumed when used by one person.  Its distribution costs are also relatively negligible, leaving us ultimately with no excuse for failing to ensure that the whole of humanity has access (at least) to its survival information needs.
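A toy payoff matrix makes the structure of the dilemma explicit.  The numbers below are purely illustrative assumptions, not data: each of two players chooses either to share critical information or to hoard it, and higher payoffs are better.

    # Toy version of the information-sharing dilemma; payoffs are assumed, not measured.
    PAYOFFS = {  # (my choice, their choice) -> (my payoff, their payoff)
        ("share", "share"): (3, 3),  # widely dispersed knowledge: best collective outcome
        ("share", "hoard"): (0, 4),  # the sharer is exploited
        ("hoard", "share"): (4, 0),  # the hoarder prospers at the sharer's expense
        ("hoard", "hoard"): (1, 1),  # everyone hoards: poor outcome for all
    }

    for mine in ("share", "hoard"):
        for theirs in ("share", "hoard"):
            me, them = PAYOFFS[(mine, theirs)]
            print(f"I {mine}, they {theirs}: my payoff {me}, theirs {them}")

Whichever choice the other player makes, hoarding pays the individual more (4 > 3 and 1 > 0), yet mutual hoarding leaves both players worse off than mutual sharing – which is exactly why the dilemma dissolves once information is recognized as a non-rival good whose distribution costs next to nothing.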

Achieving these first two imperatives is well within our grasp.  We already know what we know (and it is unambiguously sufficient to re-create civilization), and we know there is no excuse for not letting everyone share in our knowledge.

But the third imperative is more problematical.  Survival after a crisis requires being quickly able to re-create what was present (and desirable) prior to the crisis.

Nature has created self-replicating molecules which achieve this task by ensuring (as was noted earlier) that each generation contains within it the capacity to create its successors[8].

And, fortunately, human beings are a product of nature, and each subsequent generation is virtually guaranteed the genetic pre-requisites for language, consciousness and intelligence.

Human systems have some distance to go to achieve this level of self-replicability – though bio- and nano-technological developments hint at their future potential.

We will not guarantee our collective ability to survive until the self-replicability of environmental information is as reliable as that of its genetic cousin[9].
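As a toy illustration only, and not something proposed in this paper as a survival technology, software already hints at what self-replicating information looks like: a ‘quine’ is a program whose text contains everything needed to reproduce that same text, loosely analogous to the genetic property described above.

    # A classic two-line Python quine: running it prints its own source text.
    # (These comment lines are explanatory and are not part of the quine itself.)
    s = 's = %r\nprint(s %% s)'
    print(s % s)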

We already know all we need to know to ensure our survival as a species to the year 3000.  Whether we can avoid self-destruction in the meantime is another question.

Our two remaining challenges are the gaining of sufficient wisdom to recognize that each human is as important as any other with respect to survival of the species, and the development of information systems which are capable of self-replicating human-created information.


[1] Though I acknowledge the anthropocentric nature of this decision (the universe has a perfectly functioning design program right up to, and beyond, the year 3000), I do believe that some human intervention will be needed if Homo sapiens (or our genetic descendants) are to be present in that future.

[2] A real example from the present might reinforce this statement.  Australia (like most developed countries) has a long history of cataloguing and recording information on land ownership;  about structures built on that land;  and (more recently) on satellite images of the built form.  Historically, three different Government departments controlled these records, causing some difficulties for the general public.  In recent years, Governments have moved to merge these departments and create a single database (called in my country Land and Property Information, LPI).  Naturally, technological innovation has been a key feature of this consolidation, and like all well-trained IT users the LPI department makes regular back-ups of its database and sends them off-site for secure storage.  In early 2001 a new system capable of accessing and reproducing all data (text and picture, current and historical) was introduced into the department.  Employees were, naturally, given access to new hardware and software, and old equipment was discarded.  When I visited the off-site storage centre in November 2001, there was none of the old system equipment left and the organisation was incapable of even reading any backup produced before March 2001.  When this was pointed out, they rushed out and purchased two versions of the old hardware and software – only to discover that no-one in the recovery centre had ever used them before.

[3] To say nothing of the sophistication required to produce and maintain the much-vaunted future bio- and quantum-technologies.

[4] Even our modern book libraries can’t find what they own without using this technology.

[5] I hope this paper triggers a more detailed analysis of what is actually critical within each of these broad areas.

[6] Though I acknowledge there are reasons other than redundancy for having two eyes and ears.

[7] The rapidity of new growth after a fire or drought is awe-inspiring and has to be seen to be believed.

[8] Nature actually goes further than this by building the inevitability of useful mutation into this self-replication process.

[9] Though it is worth noting, as many others have, that it is possible to use current technology to send both genetic and environmental information off the planet – a course of action which, while not actually guaranteeing humanity’s survival, would almost certainly avoid any crisis which might threaten our earth-bound survival.