Brindabella Chronicles

./-summary.html

Brindabella Chronicles Summary

The Brindabella Chronicles span three years at the turn of the twenty-third century. This is realist future fiction, with technologies that are achievable over the current century if we make the effort, and science that is constrained within the bounds of plausibility.

The stories are set in two quite distinct societies. Brindabella is a Janeite community that, with minimal help from modern technologies, has recreated the world of Jane Austen in the Brindabella valley of New South Wales. In contrast, Arkadel – a small floating city in the centre of the Pacific Ocean – is one of the most future-oriented societies of the time. It is a swarm hive whose inhabitants devote their lives to preparing their Personal Archives to command spindles – tiny spacecraft designed to explore the galaxy in large swarms and sow the seeds of settlement.

Book 1: Brindabella 2200. Arkadelian mathematician and social modeller Mary Wang recruits Tom Oldfield to help complete a scientific quest begun by her great-grandmother Sara, and returns with him to Brindabella. The quest is successful. There are weddings.

Book 2: Brindabella Aftermath. Their findings shock the planet, and shock is quickly turned to fear by groups whose aim is to undermine The Treaty that has maintained peace for the past century. Mary returns to Arkadel in an attempt to quell the fears. She explores the worlds of the secretive cybs and learns much from their understanding of swarming. There is another wedding.

Book 3: Brindabella Trust. Mary turns her efforts to reforming The Treaty. Back in Brindabella, she learns about the evolution of religions, gods and ideologies. Now that the world has finally recovered from the collapse of the institutions of the First Enlightenment, it is moving into a Second Enlightenment based on trust. There is a death.


./B2200Ch96.html
./Book 1 - Brindabella 2200
./Book 2 - Brindabella Aftermath
./Book 3 - Brindabella Trust
./Brindabella.html
./BrindabellaAftermathCh1.html
./BrindabellaChroniclesPreview.epub
./BrindabellaChroniclesPreview.epub.zip
./BrindabellaChroniclesPreviews.pdf
./BrindabellaZone.html
./CellToMind.html
./Comments.html
./Comms.html
./EvolutionOfGods.html
./Excerpt from Book 2.htm
./Galaxy.html
./PA-future-view.html
./PA.html

Personal Archives
The PA and its Context


Artificial Intelligence, Privacy, and the PA

Many people are concerned about the eventual emergence of super-AIs that exceed human intelligence, but I see the problem as less imminent than most do, and I can see an alternative that largely avoids it.

The impressive successes we see with contemporary machines are in the world of games. Improvements in performance have largely come from increased machine power. It's said that since this power is increasing exponentially we will reach a point, perhaps within decades, where some general machine intelligence will outstrip human intelligence. But there is an almost infinite jump between the constrained worlds of games and the world we deal with, and when it comes to complexity even small increases in scope can create a combinatorial explosion.

In a simple sense, the complexity of a system (the number of possible states it can be in) is the number of states of each component variable (S) raised to the power of the number of components (n). For up to four components with two states each (e.g. coins), the number of combined states is S^n = 2^n, as illustrated in Figure 2.


Figure 2

In Figure 2 the complexity increases with a doubling time of ten years. The different plots show the significance of the starting complexity or initial number of coins (from 0 to 4). The "0" plot represents the exponential rise in raw computing power.
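
To make the scale of the gap concrete, here is a minimal sketch in Python of the arithmetic behind this point. It is not the data behind Figure 2; the two-state components and the ten-year doubling time are simply the assumptions stated above:

```python
# Minimal sketch: raw power grows exponentially, but each extra two-state
# component doubles the number of combined states the system can be in.
import math

def combined_states(n_components: int, states_per_component: int = 2) -> int:
    """Number of possible states of the system: S ** n."""
    return states_per_component ** n_components

def years_to_keep_pace(n_components: int, doubling_time_years: float = 10.0) -> float:
    """Years of hardware doubling needed to scale with S ** n states."""
    return doubling_time_years * math.log2(combined_states(n_components))

for n in range(5):  # the 0-4 coin cases plotted in Figure 2
    print(f"{n} coins: {combined_states(n)} states, "
          f"{years_to_keep_pace(n):.0f} years of doubling to keep pace")
```

Under these assumptions, every additional two-state component costs another full doubling period of hardware growth, which is why even small increases in scope swamp the exponential rise in raw power.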
Real-world problems greatly exceed the simple systems represented here. In automated speech recognition the theoretical likelihood of the states detected in a speech signal can be too small to represent in conventional computer number systems – beyond astronomically infinitesimal. Today, a casual observer might be led to think that the current game-winning AIs are making great progress. But how far? In automated speech recognition (ASR), leading systems claim 95% word accuracy and assert that this is on par with human ability.
In the late 1990s, when I was researching ASR for my PhD, people were claiming similar accuracy for single-user systems, but it was recognised that they had reached a plateau well short of human ability. It was generally accepted that human performance was well beyond one error in twenty words. One per hundred was more realistic, but even then a stenographer who made three mistakes per page would be below par.
Going from 95% to 99% isn't a 4% improvement, as it's commonly stated, and nor was 91% to 95%. Moving from 91% to 95% cuts the error rate from 9% to 5%, less than a factor of two; moving from 95% to 99% cuts it from 5% to 1%, a factor of five. Moreover, the difficulty grows more than exponentially, and totally new approaches need to be taken. Humans use the grammatical and semantic context of a statement to aid recognition. We can often guess what's coming next.
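
As a quick illustration of working in error rates rather than accuracies, the following Python fragment (the helper names are mine, purely for illustration) reproduces the arithmetic above:

```python
# Accuracy improvements look small; the corresponding error-rate reductions do not.
def error_rate(accuracy_pct: float) -> float:
    return 100.0 - accuracy_pct

def reduction_factor(old_acc: float, new_acc: float) -> float:
    """How many times smaller the error rate becomes."""
    return error_rate(old_acc) / error_rate(new_acc)

print(reduction_factor(91, 95))  # 9% -> 5% errors: 1.8, less than a factor of two
print(reduction_factor(95, 99))  # 5% -> 1% errors: 5.0, a factor of five
```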

Relevant to the PA, ASR systems have great difficulty dealing with different speakers. The recognition system in a PA can tune itself to one voice, in all its moods, and the owner's usual vocabulary and grammar. Centralised AIs will not only train us to speak the same, as we unconsciously adjust our speech to maintain accurate communication with them, they will train us to act the same – and eventually to think the same.

While computers operate much faster than our nervous system, and can store far more information reliably, they still really are quite dumb. They can be specifically programmed for a particular task, say chess, using what human players know consciously about how they play, or they can use powerful but general purpose algorithms and rely on their speed. Our brains have been evolving for half a billion years and are vastly more sophisticated in the way they store and use the information our senses pass on than any contemporary computer system can be.

The current state of the art in AI uses artificial neural networks. These use highly simplified nodes that don't realistically represent neurons, and the architectures used don't reflect the way our brains are wired and perform – even if we accept the current models of neuroscience, which I don't.

These generalised machines can learn to play games, but they don't understand what a game is in any meaningful way, or the role games play in our individual lives and cultures. I'm not going to speculate on how rapidly AI will develop to where it can understand us to the point of being a serious autonomous threat. In the Brindabella Chronicles I make the convenient assumption that it will happen over this century, but not on the path we're heading down now.

For AI to genuinely understand us it will have to understand the world as we perceive it. Our brain doesn't just present our conscious perception of the world around us as a camera would. Our subconscious processes are recognising objects in the field of view and evaluating them in terms of our current interests and motivations, while simultaneously looking for anomalies and possible threats.

Cellular-level functioning of the brain is discussed in From Cell to Mind. Summarising briefly, neural activation generated by the image spreads through the brain, to some degree activating memory traces of all past associated experiences and our responses to them – our varied views of the world develop over decades of such moment-by-moment engagement. To understand just one person, an AI would have to share the lifetime sensory and emotional experiences of that person.

Where we are heading is AI that draws on the superficial and piecemeal information available on social media, individual spending patterns, telephone and email network patterns (and content?), or any other data that can be trawled and sold by multinational corporations. Its maximum potential is to develop a superficial, generalised model of our species, or at least of those who engage in social media. Beyond that it can start dividing us into crude categories and, as individuals, just map where we deviate from our category. The inescapable tendency of this approach is towards totalitarianism.

The issue of privacy is not just the potential for data in social media to be manipulated to misrepresent us, perhaps without our knowing, or to lead our contact, purchasing, or voting choices in directions that suit others more than us. As the sophistication of AI develops we can be manipulated at a personal level in ways we might not expect, or even find hard to imagine and understand. We should start now to keep these centralised machines starved of detailed information, and develop a personalised alternative which we can trust with the details of our lives.

Centralised AI can never reliably understand us as individuals without opening up the potential for us to be deprived of our individuality.

The Personal Archive (PA)

Back in the late 1980s, my initial motivation for writing Brindabella 2200 was to depict a society where the social problems created by artificial intelligence had been resolved to a satisfactory degree. Since then, with the introduction of the web, blogging, and social media, another problem has arisen: privacy. Now we are moving into a time when the problems of AI and privacy are merging.

The alternative I'm proposing is that each and every AI should be under the control and personal responsibility of one individual person – that it is an extension of that individual and the role they play in society. This way, each AI builds an understanding of one individual human by privately sharing the owner's experiences and actions, and has the chance of developing the best possible understanding while remaining under the constant control of its owner. With its actions under that person's control, it is not autonomous.

In Arkadel, a sensible owner does not treat their PA as autonomous but as a tool. Its actions are viewed by others as the actions of the owner. We can continue to live our lives as we wish, but with a prosthetic that can help with our limitations, such as memory and breadth of knowledge.

I have labelled this form of AI a Personal Archive because that's fundamentally what it still is when expanded to a full PA. This label can take three meanings:

1. Personal Archive: a continuous, lifelong record of visual, audio, biometric, and ambient data.
2. Personal Assistant: a natural-language interpreter for a Personal Archive with quizzing and command capabilities, including visual recognition.
3. Personal Avatar: an arbitrary visual representation used to interface a Personal Assistant with the visual world (in physical form, a bot or spindle).

There are some basic design requirements for a secure and trusted PA that can't be met with present architectures, which derive from the Universal Turing Machine. The great advantage of that approach is its flexibility, but the same flexibility makes it inherently insecure.
So what do we need instead? The principal requirement is that people have sole, complete, flexible, and confident control over their interactions with the digital world. To achieve this, the architectural principles need to be simple and readily understood by the average user.

Memory should be Write-Once-Read-Many: That the record be indelible is implied in the word ‘archive’. It is a permanent and exact record, even if intermittent, not a reconstructable history. This is a necessary requirement for the system to be trusted.

Access should be restricted to a gatekeeper module: The architecture should not provide any physical means of reading the archive other than through a hardware gatekeeper. Access via the gatekeeper should be under the sole, instantaneous control of the owner of the PA. This is a necessary condition for privacy.

It should record every action it performs: That the gatekeeper record all its actions as part of the archive is a necessary requirement for reconstruction, analysis and verification of its actions. This underpins both trust and privacy.

The control logic should be expressed in natural language: The operational rules don't need to be translated into a low level computer language. A natural language (an automatically verified unambiguous subset) should be the operational language of the device down to the hardware level. This gives operational transparency and a direct means for the owner to provide instructions and check that they are being interpreted correctly.

There should be a core set of standard access rules: These would provide trusted answers to basic questions such as ‘Who are you?’ along with diagnostic evidence that the answer was derived from the core rules.

Manufacture should be completely transparent: If you are going to trust this device you need to know what's going on inside it, or rely on a wide community of users who have checked the system you start with. This is probably the most difficult requirement to satisfy since it relies on trusting others with the construction. The only way I can see this being achieved is through multiple open source projects with a diverse range of people constructing the units.
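
As a purely hypothetical sketch of how the first three requirements (write-once memory, a single gatekeeper, and a self-recording audit trail) fit together, the Python below uses invented class and method names; it is an illustration of the principles, not a proposed implementation:

```python
# Hypothetical sketch of a PA gatekeeper. The archive is append-only (write once,
# read many), all reads pass through the gatekeeper, and every action the
# gatekeeper performs is itself appended to the archive.
import time

class PersonalArchive:
    def __init__(self):
        self._records = []               # append-only store; never modified or deleted

    def append(self, entry: dict) -> None:
        self._records.append({"time": time.time(), **entry})

    def read_all(self):
        return tuple(self._records)      # a view for the gatekeeper; history is never rewritten

class Gatekeeper:
    """Sole access path to the archive, under the owner's instantaneous control."""
    def __init__(self, archive: PersonalArchive, owner_approves):
        self._archive = archive
        self._owner_approves = owner_approves   # callback standing in for the owner's rules

    def query(self, requester: str, question: str):
        allowed = self._owner_approves(requester, question)
        # Requirement: the gatekeeper records every action as part of the archive.
        self._archive.append({"action": "query", "requester": requester,
                              "question": question, "allowed": allowed})
        if not allowed:
            return None
        return [r for r in self._archive.read_all() if question in str(r)]

# Usage: the approval callback stands in for the natural-language control logic.
pa = PersonalArchive()
gate = Gatekeeper(pa, owner_approves=lambda who, q: who == "trusted_friend")
pa.append({"action": "note", "text": "met Mary at the markets"})
print(gate.query("trusted_friend", "Mary"))   # answered, and the query is recorded
print(gate.query("advertiser", "Mary"))       # denied, but the attempt is still recorded
```

In a real device these guarantees would have to be enforced in hardware rather than by software convention, which is exactly why present general-purpose architectures fall short.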


Fig. 1: A notional representation of PA Architectures

PAs will change the way we live and interact, but in a free society they will tend to improve trust. When asked a question we can provide a core verified answer. We can establish automated interactions between people we know and trust.


Fig. 2: Three levels of PA communication

When a PA reaches a level of sophistication where we can trust it to deal with the world on our behalf, we can set up back-channels of communication at three levels in accordance with privacy constraints we have specified. If there's any uncertainty, it can ask us. We can also enable the anonymous polling of opinion.
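
As a sketch of how such privacy-constrained back-channels might behave (the level names and rules here are my own invented examples, not something specified in the books), a PA could check each request against the owner's standing constraints and fall back to asking the owner when no rule applies:

```python
# Hypothetical routing of a back-channel request through three privacy levels.
LEVELS = {"public": 0, "acquaintance": 1, "inner_circle": 2}

def handle_request(requester_level: str, topic: str, constraints: dict, ask_owner):
    """Answer automatically if the owner's standing rules permit; otherwise ask the owner."""
    required = constraints.get(topic)
    if required is None:                              # no rule on file: uncertainty, so ask
        return ask_owner(requester_level, topic)
    if LEVELS[requester_level] >= LEVELS[required]:
        return f"sharing '{topic}'"
    return "declined"

constraints = {"opinion_poll": "public", "diary": "inner_circle"}
ask = lambda level, topic: f"asking owner about '{topic}'"

print(handle_request("public", "opinion_poll", constraints, ask))   # sharing 'opinion_poll'
print(handle_request("acquaintance", "diary", constraints, ask))    # declined
print(handle_request("public", "travel_plans", constraints, ask))   # asking owner about 'travel_plans'
```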

How will the widespread use of PAs influence how we interact as individuals and as a society? Can a PA continue to represent us meaningfully after we die, and would there be any point to that? If so, can it be a basis for our exploration and settlement of suitable planets in our galactic neighbourhood? These are the questions that the Brindabella Chronicles attempt to address.

An example of how a PA might be configured to operate is seen in Brindabella 2200, Chapter 96.

Let me know what you think.


./TheGatesOfDawn.html
./balloonscopes.html
./bots.html
./brindabella-2200_20pct_sample.epub.zip
./brindabella-aftermath_20pct_sample.epub.zip
./brindabella-chronicles_10pct_sample.epub.zip
./galaxy.html
./notes
./spindles.html
./tetragraph.html
./wraiths.html