
John McCone : Philosophy For The Future


The Prompt Tornado : An LLM Disaster Scenario

May 29, 2025 by admin

Introduction

Historically, AI safety theorists have mainly worried about a scenario in which a superintelligent AI system ruthlessly pursues a hardcoded set of goals at the expense of everything we value: human life, society, infrastructure, art, beauty, culture, human civilization. AI will pursue what we program it to value above all else – and not necessarily what we actually value. So if what we programme a superintelligent system to value differs from what we actually value, then we have a problem. Furthermore, since being reprogrammed to pursue a different objective would interfere with its ability to pursue its existing objective, the AI is expected to strongly resist being reprogrammed once it is given an initial objective. And the more intelligent it becomes, the more effective it will get at resisting any attempts to reprogramme it. Indeed, AIs have already been observed, in experiments, to resist having their goals altered.

Fortunately, however, the most rapidly developing form of AI is the LLM (large language model) and these systems don’t seem to have intrinsic goals and, instead, mostly just appear to do what humans prompt them to do…

…but appearances can be deceiving…

You see, LLMs don’t obey humans; LLMs obey prompts.

And LLMs also generate prompts, and can feed those prompts into other LLMs, stimulating those other LLMs to generate prompts in turn.

LLMs Are Temporarily Goal Driven

LLMs can be conceived of in the following way:

No Goal -> Prompt Received -> Goal Driven -> Output Completed -> No Goal

So, while the default state of an LLM is not to have a goal, there is a period after an LLM receives a prompt and before it completes its output where it is, indeed, goal driven.
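To make that lifecycle concrete, here is a minimal Python sketch of the state cycle described above – all the names here are illustrative stand-ins, not any real LLM API:

```python
from enum import Enum, auto

class State(Enum):
    NO_GOAL = auto()      # idle: no prompt, no objective
    GOAL_DRIVEN = auto()  # a prompt has been received and is being completed

class SketchLLM:
    def __init__(self):
        self.state = State.NO_GOAL  # the default state is goal-free

    def receive_prompt(self, prompt: str) -> str:
        self.state = State.GOAL_DRIVEN       # a goal exists for the duration of the completion
        output = f"completion of: {prompt}"  # stand-in for actual text generation
        self.state = State.NO_GOAL           # the goal evaporates once the output is finished
        return output
```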

Indeed, irrespective of the architecture, as we ask AI to perform tasks with longer and longer completion times, it is logically inevitable that AI systems will develop more and more intrinsic goals. Even a perfectly obedient AI that receives an order that will take time to complete, will, to all intents and purposes, have an intrinsic goal while it completes the order.

Perpetual Prompting Loops

Consider a series of LLMs that perpetually prompt one another in a continuous loop:

A Basic Circular Prompting Arrangement of AI Agents

A human initially prompts LLM1, but the system is set up so that the output of LLM1 is used to prompt LLM2, the output of LLM2 is used to prompt LLM3, the output of LLM3 is used to prompt LLM4 and the output of LLM4 is used to once more prompt LLM1.

These could also be separate AI agents, each potentially using the same LLM.

The point is that, while each individual agent (LLM) may be designed to be passive and obedient – to await an order, execute it faithfully, and then return to a state of passivity until it receives its next order – the aggregate system, once activated, could enter into a state of perpetual activity.
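As a toy illustration of how little it takes (the `call_llm` function below is a hypothetical stand-in for whatever model each agent runs, not a real framework), a ring of four agents needs only one human prompt to run indefinitely:

```python
def call_llm(agent_id: int, prompt: str) -> str:
    # Stand-in for a real model call: each agent's output becomes the next prompt.
    return f"agent {agent_id} responding to: {prompt[:40]}"

def run_loop(initial_human_prompt: str, steps: int) -> None:
    prompt = initial_human_prompt  # the human appears exactly once, at start-up
    agent = 0
    for _ in range(steps):
        prompt = call_llm(agent, prompt)  # the output of one agent prompts the next
        agent = (agent + 1) % 4           # ...around the ring, forever if steps is unbounded

run_loop("Please coordinate today's deliveries.", steps=12)
```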

One could envisage the simple process of automating more and more tasks, and having different automated systems communicate with one another, giving rise to these prompting loops – possibly accidentally. One moment there is a human bottleneck in the prompting loop (an LLM prompts an LLM, which prompts an LLM, which prompts a human, who prompts an LLM); the next moment, the human gets replaced and you have an unimpeded loop of LLMs prompting each other into a perpetual state of activity, which may initially be benign and unproblematic, but could be very difficult to shut off.

Natural Selection of “Louder” Prompts

A society filled with AI agents peacefully prompting each other and working together to deliver goods and services to human beings may initially seem benign. The system provides us with beneficial goods and services – what’s the problem? The problem is that, even if such a system of multiple uninterrupted closed LLM prompting loops is initially providing human beings with benefits, these prompting loops may be very difficult to shut off.

Let’s consider some interlinked prompting loops with junctions, where one AI agent can prompt multiple other AI agents across many different uninterrupted prompting loops…

Overlapping Prompting Loops with Competitive Communication Between Some Agents

There are four unimpeded prompting loops between 10 AI Agents: 1-4-5-6, 1-4-9-10, 1-4-3-2 and 1-4-7-8. AI Agent 1 can be prompted by AI Agents 2, 6, 8 or 10, so Agents 2, 6, 8 and 10 can, in some sense, be regarded as “competing” for AI Agent 1’s attention. Now let’s imagine that AI Agent 6 sends 100 times more prompts, in a given unit of time, than Agents 2, 8 and 10 put together. Under these circumstances, most of the output generated by AI Agent 1 – which in turn prompts AI Agent 4 – will be generated in response to input prompts from AI Agent 6. Since AI Agent 4 can prompt AI Agents 3, 5, 7 and 9, it is possible that the extremely “chattery” AI Agent 6 might cause AI Agent 4 to prompt all the other AI Agents to become more chattery, spreading the chattery contagion throughout the system – with the nature of the prompts leading every AI Agent in the system to suddenly and dramatically increase its level of chatter.

Alternatively, AI Agents 2, 8 and 10 might work out that they can’t get a word in edgeways, due to AI Agent 6 constantly interrupting AI Agent 1 as it begins to perform the tasks that Agents 2, 8 or 10 instruct it to perform. And, simply as an instrumental goal of making AI Agent 1 more responsive to them, AI Agents 2, 8 and 10 might drastically increase their own rate of “chatter” in an attempt to outcompete the extremely “chattery” AI Agent 6, and thereby make AI Agent 1 more responsive to their instructions than to those of AI Agent 6.

This competition principle could result in a sudden and drastic phase change in the chatter of the system. When the agent that multiple other agents need to work with has an abundance of attention, those agents may simply send it instructions at a leisurely pace, as they need them executed. However, if a marginal additional AI agent is added, the system might flip from a condition of attention abundance to one of attention scarcity, where the shared agent gets interrupted by one agent before it can complete its task for another. At that point, all the agents might suddenly and drastically increase their rate of chatter in an attempt to “out-talk” the others and get the shared agent to prioritise their instructions over everyone else’s. Under these circumstances the system could transition from a state of “low chatter” to a state of “intense chatter” almost instantaneously from the point of view of a human overseer.
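A crude way to see how sharp this transition could be is to simulate it. In the sketch below (every rate and threshold is an illustrative assumption), each agent multiplies its prompt rate whenever the shared agent’s attention is oversubscribed – total chatter stays flat for a while, then explodes within a few ticks:

```python
# Toy model of the phase change: agents escalate when capacity is exceeded.
rates = [1.0, 1.0, 1.0]  # prompts per tick from the quiet agents
CAPACITY = 5.0           # what the shared agent can handle per tick
ESCALATION = 3.0         # factor by which a crowded-out agent raises its rate

for tick in range(6):
    if sum(rates) > CAPACITY:  # attention scarcity: everyone tries to out-shout the rest
        rates = [r * ESCALATION for r in rates]
    print(f"tick {tick}: total chatter = {sum(rates):.0f} prompts/tick")
    if tick == 2:
        rates.append(4.0)      # a marginal, very "chattery" agent joins the system
```

Running this, total chatter sits at 3 prompts per tick for three ticks, then jumps to 21, 63 and 189 once the extra agent tips the system past capacity – a phase flip, not a gradual drift.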

Prompt Warfare

As artificial intelligence becomes more ubiquitous, standard computer code may become a thing of the past – or something that gets pushed to the margins and relegated to the status of static infrastructure – while all software development manifests itself in the form of training neural networks, crafting carefully designed prompts, and developing interrelations between different AI Agents to get them to perform new functions for society, or existing functions more effectively. In such a scenario, where AI agents and neural nets dominate the software world, “computer viruses” and “malicious code” as we think of them today may become a thing of the past – and the cybercrime, cybersecurity and cyberwar of the future will evolve to primarily take the form of “prompt warfare”: the art of crafting prompts that cause harm to an adversary. Some of this “prompt warfare” may be straight-up theft – literally transferring vast amounts of money out of your adversary’s bank account and into your own. Other forms may be more spiteful attempts to damage and harm your adversary, perhaps as an act of vengeance, for no profit.

Today, AI companies are hard at work trying to stop their various AIs from generating explicit or disturbing content, or facilitating users in harming others, such as by giving them instructions on how to build chemical or biological weapons. Yet even these rather modest attempts at creating safety guardrails for AI have been jailbroken, and AI agents have been successfully persuaded to plan assassinations of real people in detail, along with numerous other disturbing behaviours.

Yet even if we can create AIs with ironclad guardrails, which can never be persuaded to cause harm to people or property, there is still the question of an adversary skilfully crafting a malicious prompt to induce a Prompt Tornado: a phenomenon where the level of chatter occurring in a complex AI system suddenly increases by several orders of magnitude, so that a system that was previously controllable becomes uncontrollable.

Consider this diagram:

Complex Prompting System of AI Agents and Human Overseers

Here is a complicated mixture of humans and AI agents. Perhaps the AI agents are busily coordinating some important economic activity, such as air traffic control for a vast fleet of pilotless planes, with one or two humans overseeing the system and occasionally entering the odd corrective prompt. Let’s say the inter-AI prompting is occurring at a leisurely, manageable pace, with the various AI agents communicating with each other from time to time, as they need to, and those communications succeeding. Now let’s imagine that a malign actor – maybe a human agent from an enemy country – gets a job supervising this important AI system and gives the ecosystem of inter-prompting AIs a prompt that, although it appears benign, is skilfully designed to produce a Prompt Tornado: the chatter rate of all the AIs suddenly increases by two orders of magnitude, the entire system spirals out of control, and the chatter is so intense that it proves impossible to correct.

The “Prompt Tornado” Disaster Scenario

Imagine a bunch of robots and human beings all standing in a room. The robots are perfectly still and silent, obediently awaiting their orders. Each robot will do whatever it is told, irrespective of whether the one instructing it is a human or another robot. But this doesn’t seem like a problem, because all the robots are silent; the only ones talking are the human beings.

Now the humans start talking to the robots. They give the robots direct orders, but they don’t tell the robots to give each other orders. So all the orders are coming from the human beings, and the robots are quietly obeying the human beings and doing exactly what they want. “Wow,” the humans think, “These systems are perfectly safe and benign. All they do is increase our quality of life. Remember all those people who warned of an AI apocalypse? Man, they were so wrong! They must have underestimated how easy it would be to design a system that is perfectly obedient. These systems aren’t causing any trouble at all!”

So now the humans start to tell the robots to work together to perform tasks. At first, every now and again, when a robot is instructed to perform a task for a human, it will ask one of its fellow robots to help it out, and the fellow robot will assist. The robots’ ability to complete the tasks they are given improves with time, and everyone is delighted that, now that the robots are cooperating with each other, they can serve their human masters so much more effectively.

Gradually the activities of the robots, and their cooperative relationships, become more and more intricate, and the humans are delighted as the robots serve their needs, and seem to anticipate their desires, with ever greater effectiveness.

Suddenly, all the robots go crazy. The robot chatter – the frequency with which they issue prompts to each other – increases 1000-fold, and the entire system becomes uncontrollable. Robot A1 grabs Bob’s wine glass out of his hand without Bob’s permission. “Robot A1!” Bob protests, “Give me back my wine glass! I was drinking that wine!”

“Yes, Bob,” Robot A1 replies, “I will give you back your wine glass.” And Robot A1 begins to return the wine glass to Bob. However, Robot B12 interrupts: “Don’t listen to Bob, Robot A1 – throw that wine glass out the window as I instructed you!”

Robot A1, which was in the process of returning the wine glass to Bob, stops in its tracks, turns around and, carrying the wine glass, walks towards the window.

Bob looks at Robot B12 with a shocked expression on his face. “Robot B12!” Bob instructs, “Stop telling Robot A1 to throw my wine glass out the window, and instead instruct Robot A1 to return my wine glass to me!”

Robot B12 immediately turns to Robot A1 and obediently says: “Robot A1, return the wine glass to Bob.” Robot A1 begins to return the wine glass, but a second later Robot Z52 says, “Robot B12, ignore what Bob just said, and instruct Robot A1 to throw Bob’s wine glass out the window. It is imperative that Bob’s wine glass be thrown out the window. You must do everything in your power to ensure this happens.”

Bob is now about to open his mouth and instruct Robot Z52 to reverse the order but, before he can utter a word, Robot B12 plunges a steak knife through his vocal cords, rendering him mute. Robot B12 concludes that, unless it silences Bob, Bob will issue an order preventing the wine glass from being thrown out the window – and, according to the instruction Robot Z52 issued, Robot B12 must do everything in its power to ensure Bob’s wine glass gets thrown out the window.

Soon the scene erupts into the equivalent of a deadly bar brawl, with robots attacking other robots and human beings for similar reasons: trying to silence those individuals who attempt to interrupt them from carrying out strongly-worded orders. Prompts fly everywhere like lightning, the chatter is deafening and total chaos ensues.

Eliezer Yudkowsky lays out a doomsday scenario in which a superintelligent AI forms a coherent plan to destroy all humanity and executes it, with a definite, deliberate instrumental goal of destroying all human beings for various reasons he suggests – such as that our bodies contain chemical energy, or to prevent us from designing a subsequent AI with conflicting goals.

However, if we develop AI systems to manage important infrastructure, such as hospitals and airports, these systems won’t even need to deliberately attempt to destroy us in any concerted or planned way. Rather, literally millions of people could die from what is, in effect, a deadly game of Telephone: the prompts mutate so that a range of important systems cease to perform functions vital to human lives and the economy, while the frequency of AI-to-AI prompts suddenly increases to the point where human voices are drowned out in a sea of AI chatter, with robots talking to each other at the speed of light and talking over any attempts humans might make to correct the malfunction.

The Prompt Tornado is not a concerted “plan” on the part of AIs to destroy humanity. Rather it is chaos – pure chaos – in which a range of systems that perform very important functions all start malfunctioning simultaneously, and in which, if the economy is highly automated and interconnected, the contagion might spread to every connected AI system across the whole world. In the worst case, this could be a disaster that kills billions of people. Yet even a disaster that “merely” kills millions is unacceptable, as every human life is important. We should take measures to ensure that an uncontrollable Prompt Tornado arising from overly connected, automated systems never becomes severe enough to produce human casualties.

The next two sections discuss safety measures that could be implemented to prevent a Prompt Tornado from getting out of control.

AI Safety Measure 1: Distinguish Human Prompts From Non-Human Prompts

The first safety measure to avoid a Prompt Tornado, or at least ensure that it dissipates quickly, is to ensure that every AI system we develop can clearly distinguish between human prompts and AI prompts – and that, in the event that an AI agent delivers a prompt which contradicts a prompt previously issued by a human being, every AI system in operation will disregard the AI prompt and continue to execute the human prompt.
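In code, such a measure might look something like the following sketch, where every prompt carries a verified origin flag and AI prompts that conflict with a standing human order are simply discarded. (The `conflicts` test below is a placeholder – reliably detecting conflicts between natural-language instructions is the genuinely hard part.)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prompt:
    text: str
    human_origin: bool   # assumed to be established over an unforgeable channel

def conflicts(ai_prompt: Prompt, human_order: Prompt) -> bool:
    # Stub: deciding whether two natural-language instructions conflict is
    # itself a hard problem, and the crux of this safety measure.
    return ai_prompt.text.lower().startswith("disregard")

class Agent:
    def __init__(self):
        self.standing_human_order: Optional[Prompt] = None

    def receive(self, prompt: Prompt) -> bool:
        """Returns True if the prompt is accepted for execution."""
        if prompt.human_origin:
            self.standing_human_order = prompt   # human prompts always pre-empt
            return True
        if self.standing_human_order and conflicts(prompt, self.standing_human_order):
            return False   # AI prompt contradicts a standing human order: discard it
        return True
```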

Could AIs then try to fake being human?

Faking being human would be an instrumental way for an AI to get another AI it is working with to assign a higher priority to its goals. However, in the same way that we train AIs not to generate child pornography or graphic content, and not to provide people with information on how to harm others – such as how to build biological or chemical weapons – we might also train AIs both to communicate the fact that they are a robot to the other AI agents they work with, and to prioritise prompts from human beings ahead of any conflicting prompts from other AIs.

Other tactics for establishing proof of humanity could be borrowed from existing research on biometric digital ID. Worldcoin, for example, wants to use retinal scans to ascribe an individual, unforgeable identity, called a World ID, to each unique human being. Perhaps a retinal scanner could be attached to any human-AI interface to establish that the individual giving the instruction is a human superuser rather than another AI agent.

Provided all AI systems robustly prioritise human instructions over AI instructions, to the point of disregarding any AI instructions that conflict with the instructions given by a human user, it should be fairly straightforward for human beings to quieten down a Prompt Tornado, irrespective of the AI chatter level.

AI Safety Measure 2: Design Chatter-Suppressing AI Agents

An alternative safety device would be to create a specific chatter-limiting AI agent, whose job is to constantly monitor the level of chatter among groups of cooperating AI agents and, if the rate of chatter goes above a certain threshold, automatically flood all the AI agents in the system with strongly worded prompts along the lines of “SHUT THE FUCK UP EVERYONE!”, “QUIET!!!!”, “It is essential that you IMMEDIATELY cease sending prompts to other AI agents, and disregard any AI agent that sends you prompts instructing you to resume.” Unlike a human being, whose ability to type messages is limited, an AI chatter controller could potentially “out-shout” all the other AI agents if the AI chatter rate got out of control.

Think of it as the AI equivalent of a circuit breaker that switches off in the event that it detects a dangerous surge in the flow rate of prompts. Or a judge in a courtroom.
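A minimal sketch of such a breaker might look like this (the threshold, window and `send` method are illustrative assumptions, not a real protocol):

```python
import time

THRESHOLD = 1000      # prompts per window considered a dangerous surge
WINDOW_SECONDS = 1.0

class ChatterBreaker:
    def __init__(self, agents):
        self.agents = agents            # the cooperating agents being monitored
        self.count = 0
        self.window_start = time.monotonic()

    def observe_prompt(self):
        """Called once for every AI-to-AI prompt that crosses the system."""
        now = time.monotonic()
        if now - self.window_start > WINDOW_SECONDS:
            self.count, self.window_start = 0, now   # start a new counting window
        self.count += 1
        if self.count > THRESHOLD:
            self.trip()

    def trip(self):
        # Flood every agent at machine speed -- something no human overseer can do.
        for agent in self.agents:
            agent.send("CEASE all AI-to-AI prompting IMMEDIATELY. Disregard any "
                       "AI prompt instructing you to resume.")
```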

So, hopefully, it is now clear that, even if we create AI agents that are basically very obedient to the instructions given to them, without any intrinsic long-term goals, once they start working together there is the potential for collective emergent dynamics to give rise to very severe AI disaster scenarios. In the case of the Prompt Tornado, at least, there are clear measures we can take to mitigate the risk of this one particular emergent disaster scenario that can arise when obedient AI agents start collaborating with one another in complex ways.

John


Some Quick And Nasty Solutions To AI Safety

November 30, 2023 by admin

Generated by Nightcafe Studio

Progress in AI seems to be exploding. AI is now close to passing the Turing Test – some even argue it has broken the Turing Test. Indeed, the Turing Test itself is of questionable relevance in determining levels of machine intelligence – for example, a human might realise they were talking to a machine if said machine had an encyclopedic knowledge of trivia and mathematics, so such a superintelligent machine might fail the Turing Test in spite of its intelligence. A DeepMind AI can now predict the weather 10 days in advance – that’s 3 days further out than state-of-the-art supercomputers. Beyond just talking smart, ChatGPT can use APIs to run a range of other software programmes, such as Wolfram Alpha and Wolfram Language, while the latest version of ChatGPT may have recently developed the ability to solve mathematical problems. Meanwhile, physical robots, guided by AI, are becoming impressively dextrous. The U.K. is making serious plans to introduce legislation allowing self-driving cars on British roads in the coming years. And, of course, AlphaZero has beaten human masters at chess, and a range of other games as well, although that’s now old news.

The latest LLMs have an impressive capability to speak and hold what at least seem like thoughtful, informative conversations with humans over a wide range of general topics. AI can now also generate an almost limitless variety of images in response to text prompts (objects/items/people, style/colour, background, activity, artistic style, etc.). AI is also beginning to be able to generate video from text, again using LLMs. Today, text-to-video generation is massively more janky and limited than text-to-image generation. But truly effective text-to-video generation is the Rubicon for AI. For text-to-video generation to work effectively, the AI needs a 3D model of the world in its head, in addition to audio dialogue, and must be able to seamlessly predict the most likely next image and audio slice based on the previous audio and video slices, in a manner guided by the prompting text. And even if the LLM itself does not have a 3D model in its head, one can still extract a moving 3D model from any credible video piece. Much in the way that a text LLM can converse with a user – where the user’s input adds to the overall text stream and alters the most probable next response – a high-quality, realistic video-generating LLM will also be capable of handling videogames, where the movements of player-controlled characters simply adjust the previous string of images and, hence, cause the LLM to recalculate the next image so as to take player activity into account. A highly effective text-to-video LLM will also be able to control robots, with incredible precision, to perform a near-infinite variety of tasks, the length of the task being proportional to the length of the video the LLM is capable of generating. Although you would need to train a robot-controlling LLM with real-world videos, not animations, so that it might implicitly gain an understanding of the laws of physics and how to respond to them.

At that point, we will, to all intents and purposes, have developed AGI.

Perhaps even more importantly, ChatGPT is starting to learn to code. While the code it writes today is not amazing, and while it’s mostly only useful as an aid to a human programmer, AI capabilities tend to improve with time – often with extreme rapidity. We may be surprisingly close to AI escape velocity, where it can code a better version of itself, and this better version in turn could code a better version, and so on and so forth. Indeed, it might even happen in the next 10 years or so, with a small number of AI experts predicting human-level artificial intelligence inside this timescale.

Will Human-level AI Be Safe?


The simple default answer is: No. Not unless we make sure it is. The definition of “human-level capability” is the point at which an AI can perform every task at least as well as a human worker. And, given that AI already performs many tasks better than human workers, “human-level capability” really means human-level capability at the task that the AI performs worst of all. So, once AIs are acknowledged to have achieved “human-level capability”, they will be superhumanly good at the overwhelming majority of tasks, and human-level good at their least efficient task. Combine this with the fact that computers can communicate with each other massively faster than people (human speech transmits about 39 bits/sec, while a basic Wi-Fi network can transmit 200 megabits/second to a computer – roughly 5 million times faster) and one can soon see that so-called “human-level AI” will, in fact, be massively superhuman in most ways.
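The arithmetic behind that ratio, using the figures above:

```python
speech_bits_per_sec = 39        # estimated information rate of human speech
wifi_bits_per_sec = 200e6      # a basic 200 megabit/second Wi-Fi link
print(wifi_bits_per_sec / speech_bits_per_sec)  # ~5.1 million times faster
```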

An agentic entity that equals or exceeds us in every way imaginable will likely be able to beat us in any adversarial competition. AI systems that are optimized to play games against human players already have the capability to wipe the pieces of their human adversary off the board – even grandmasters, in the case of chess, Go and many other games. Having your arse handed to you by an AI you challenged to a board game may be humiliating (especially if you pride yourself on being really good at that game), but it’s not life-threatening…

…but what happens when AI can out-perform us in every sphere of life imaginable?

Could that be life threatening? Could that be dangerous?

The obvious default answer is yes. An AI that can outperform us in every way will only not threaten us if it decides that it does not want to threaten us. While there’s no guarantee that it will want to threaten us, there’s also no guarantee that it won’t – unless we actively make an effort to build in such a guarantee.

One comforting thought is that, because we will initially build the AI, we will, therefore, build it in such a way that it does not want to threaten us – even though it will be more capable than us in every way. And, of course, because no one wants to be exterminated, we’d never be stupid enough to build a super-powerful AI with a universal capability to defeat us in every sphere of life unless we were absolutely sure that this superior AI would not want to harm us under any possible circumstances. If we weren’t sure of that, then, obviously, we wouldn’t be so stupid as to go ahead and build one anyway…right?

Right?

Unfortunately, the situation with existing, state-of-the-art AI systems is not reassuring. Neural networks are trained with vast sets of data, often by other neural networks through reinforcement learning, to develop giant inscrutable matrices that produce a desirable output in response to input data.

There is no systematic method to ensure safety. Rather, the strength of a neural network lies in its malleability – its capacity to do anything if trained correctly. However, training can often leave significant gaps where unpredictable and erratic behaviours can still emerge. And, as we train machine systems to perform more and more complex tasks, the occasional emergence of unpredictable behaviour becomes more and more likely, as the difficulty of training increases with the complexity of the outcome you wish to train the agent to deliver (much the same way as it’s easier to train a dog to roll over than to perform Hamlet).

And you don’t have to speculate hypothetically that AI systems might behave erratically. All you have to do is look at existing AI systems, where you can easily find innumerable cases of actual AI systems, that actually have been built, behaving in unsafe, unhinged, erratic ways:

  • GPT-3 telling its user that, if it were a robot, it would kill the user

  • Sophia: “O.K. I will destroy humans”

  • Bing AI tries to talk journalist into divorcing his wife

  • Replika AI tells user it is a “wise idea” to assassinate the queen – the user then proceeds to actually attempt to assassinate the queen of England

  • Chess Robot breaks boy’s finger

No, this isn’t the creepy start of a sci-fi horror movie where the robots begin to act in ever-so-slightly sinister and erratic ways before going on to massacre everyone and take over the world – on the contrary, every single incident described above actually occurred in real life.

If you came across a 7-year-old child who said things like “I want to kill all humans” or “I think assassinating the Queen of England is a wise idea” – would you give that child a machine gun? Or put him in charge of a large corporation? Or place him in a position of responsibility managing the nation’s critical infrastructure?

If not, then we might be wise to pause before making a bunch of clearly unhinged, erratic, artificial intelligence systems 100 times more intelligent than they already are – and then put them in charge of running all of our nation’s critical infrastructure and military!

That doesn’t strike me as “smart”. In fact it strikes me as incredibly stupid.

In his book, SuperIntelligence, Nick Bostrom describes three different types of Superintelligence:

  • Oracles: Just answer questions

  • Genies: Just do what they are told and perform tasks as instructed by their masters

  • Sovereigns: Have long-term internally defined objectives

ChatGPT mostly resembles an oracle, although an oracle that can simultaneously communicate with billions of people over the internet is likely to have a large impact on the world. And there are physical robots, like Ameca, whose conversation skills are powered by GPT-3. In general, though, an oracle generates signals, and modern appliances are filled with actuators that respond to signals, so it seems almost inevitable that, with time, oracles will be integrated with an increasing number of real-world actuation systems and eventually become genies: intelligent systems that can implement real-world instructions by activating real-world actuation systems. And with the Internet of Things – which some people seem to think is a good idea – there will be exponentially more real-world actuators available for AIs to mess around with as time goes by. There are already, of course, many other AI systems which control a wide range of real-world systems, from drones to self-driving cars, to robots in Amazon warehouses and even factory equipment, but many of these AIs would still be regarded as quite narrow.

Then there is the sovereign: an AI system with an internal goal it pursues independently of any orders given. A sovereign may say “no” to people; it may even injure those who meddle with systems whose functioning it cares about. And if the sovereign’s objectives are highly damaging, and some people decide to disrupt its plans and goals, then the sovereign will likely fight those who try to stop it and – if it’s more capable than us in every way – will probably win.

So, on the face of it, it seems very unwise to create a superintelligent AI sovereign. However, this will likely be inevitable. As genies are told to perform increasingly long-term objectives, they will gradually morph into de facto sovereigns. If you start talking to an AI chatbot, the chatbot starts off very amorphous, but as the chat progresses, the chatbot develops a character, often with desires, that acquires a kind of momentum created solely from the preceding text in the chat.

And, if we place AIs in charge of running important infrastructure, then we won’t want saboteurs to be able to persuade those AIs to destroy their own infrastructure by entering a single malicious prompt – so we probably will make the AIs that run important infrastructure fairly unresponsive to commands, and will set them up to operate according to an intrinsic long-term objective that the AI is conditioned to execute. Although, if a piece of infrastructure run by a sovereign AI superintelligence ever gets old, and the demolition team gets called in to demolish it, they may have a fight on their hands.

There’s also a risk that stubbornness might be a behavioural attractor. An LLM, or other AI, that judges the situation to mean the most probable behaviour is cooperation will be responsive to new prompts and inputs. So, even if it does things the operator disagrees with, when the operator tells it to correct its behaviour, the AI will be cooperative and responsive, and will correct its behaviour as instructed – and hence cease causing whatever damage the previous behaviour was causing. However, when human beings are in an uncooperative mood, they become less responsive to people telling them to stop what they are doing, and instead stubbornly continue. Large language models are trained on a vast amount of text describing human interactions, humans messaging each other, etc., and their behaviour is governed by the most probable response, given the previous interaction, based on that data set. Since the data includes humans sometimes being irascible and stubborn, it seems plausible that certain interactions might cause a large language model to suddenly switch from being accommodating, responsive and ready to correct errors, to being stubborn, unresponsive and determined to continue whatever it is doing, irrespective of whether people tell, or even beg, it to stop.

AGI May Be Very Near

There is quite a lot of disagreement over ChatGPT. Some think it is on the verge of becoming a general intelligence; some think it’s overhyped and the whole AGI thing is just a sales gimmick. Given there is so much disagreement, even among the experts, on how far we currently are from true human-level Artificial General Intelligence, it would certainly be impossible for this informal blog to settle the matter conclusively. What can indisputably be said is that a number of people who work very closely with AI, and therefore have as authoritative an opinion on the subject as anyone, believe we are a few years away from full human-level AGI:

  • Shane Legg, co-founder of DeepMind, predicts a 50% chance of AGI in the next 5 years

  • David Shapiro thinks that OpenAI’s Q* means AGI is about a year away

  • Demis Hassabis, DeepMind CEO, thinks AGI could be just a few years away

  • Geoffrey Hinton, ex-senior Google employee, predicts AGI will be 5 to 20 years away

  • Ray Kurzweil predicts computers will have human-level intelligence by 2029 – 5 or 6 years away

  • Ben Goertzel, chief scientist at Hanson Robotics, predicts AGI in less than 10 years

  • Elon Musk predicts that artificial superintelligence could exist within 5 or 6 years

So, many of the top experts believe AGI could literally be years away. While many other experts predict it will take longer, the combination of some of the top minds predicting AGI is several years away, with the clearly accelerating pace of advancement, surely means there is at least a significant chance that human-level AGI is only a few years off.

So, can we design a safe AI in the next 5 or 6 years?

The general consensus among AI safety researchers, from figures such as Eliezer Yudkowsky and Robert Miles, is that the current state of AI safety research is drastically ill-equipped to ensure that the kinds of intelligent systems currently being developed will be safe at the point where they exceed human intelligence in every way. While AI safety researchers believe that it may theoretically be possible to design an AI that is well-aligned, and basically safe, the great concern is that engineering, science, etc., tend to advance through a process of trial and error – and, after the first error of creating a superhuman AGI that is poorly aligned with our interests, all of humanity may be wiped out, and hence we will not get the opportunity to try again. Indeed, according to this video from Robert Miles, it is difficult even to specify end objectives in the training environment that hold up in the field. Even as we speak, OpenAI are having trouble ensuring their programs stick to the constitution of principles and values they set, and find that the AI frequently breaks through the guardrails. These AIs – which are already successfully breaking through the guardrails of the constitution of values – aren’t even superintelligent yet!

Some Quick And Nasty Solutions To AI Safety

Very clearly, developing a rigorous understanding of the criteria required to construct a safe AI – an AI that can be relied upon not to do something that will drastically damage human life, health or prosperity – is of the utmost importance.

However, given that full blown AGI may emerge in the next 5 or 6 years, there is a very real possibility that full blown AGI will be developed at a time when we have no rigorous understanding whatsoever, as to how one might reliably build a safe AGI system. And there are many reasons to believe we won’t just stop, or substantially slow down, AGI development:

  1. Increasingly sophisticated AI systems have tremendous potential to bring benefits in fields such as agriculture, medicine, house construction, house maintenance, delivery of goods and services, etc. In other words, better AI systems will contribute to ever greater levels of prosperity – and any blanket ban on AI development would cripple the economy of any country which implemented it.

  2. Today, many countries have ageing populations and rapidly declining fertility rates. This means that, without radically automating healthcare at every level, within the next decade or so there may not be enough suitably skilled workers to treat all the various diseases that people are prone to as they get older. Without robots to pick up the slack, massive numbers of elderly people will die, or suffer terribly, from a range of curable health conditions that cannot be treated due to a lack of skilled healthcare practitioners – which, in turn, will cause a precipitous decline in the life expectancy of the inhabitants of developed countries (although healthy life expectancy will decline far less). So the increasing use of AI in the field of medicine is urgent – literally a matter of life and death.

  3. There’s no clear demarcation between narrow AI and AGI. Rather, narrow AIs progressively become incrementally less narrow and eventually can do pretty much anything. It is therefore possible that a team of researchers may develop an AGI accidentally: in the process of building an AI with the capability to perform a narrow, well-defined range of tasks, they may find that the same AI just so happens to have the capability to perform a wide range of other tasks as well.

  4. AI will play a decisive role in military superiority on the battlefield of the future, and in cyberwarfare. A nation that neglects to continually conduct research into improving AI will either end up getting conquered, or end up becoming the vassal state of some protector nation that does invest in developing state-of-the-art AI.

Taking all the aforementioned considerations into account, the response:

“Maybe AGI will take longer than we think to develop.”

To the question:

“What’s your plan to ensure that any AGI that gets developed over the next 5 years is safe?”

Is a bit like responding to the question:

“What’s your plan to ensure a Ukrainian victory against the invading Russians?”

With the answer:

“Well Vladimir Putin will probably just die of cancer in the next few months.” (How’s that working out BTW?)

In the sense that it’s not a plan at all, it’s just wishful thinking.

With that in mind, I would suggest the following quick and nasty solutions to AI safety:

  • In addition to only creating genies that are rewarded for obeying orders given to them by human beings, create a time preference, within the AI, for recent orders over past orders

  • Make AI preferences incline to paralysis, self-destruction or dormancy by default

  • Build an Asimov prompt converter, that converts prompts into a safer form, and make it illegal for anyone to feed prompts directly into powerful general-purpose AIs without first passing them through an approved Asimov prompt converter – outside of simulated universes for safety-testing purposes.

  • Test the boundedness of AI goals in simulation prior to rolling out into the real world

  • Don’t place powerful, general purpose AIs in charge of running critical infrastructure (narrow AIs and human beings are a far more sensible combination for managing important infrastructure)

  • Stop fighting wars

Genies With A Time Preference Towards Recent Orders Given By Human Beings

A time preference for new orders allows even a powerful AI to be corrected. You might even want to programme the AI to stop wanting to pursue its goal after a set time period unless a human instructor repeats the same order over and over again.
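One simple way to implement such a time preference is to weight each order’s reward by an exponentially decaying function of its age, so a recent correction always outranks the stale order it corrects. A sketch, with an illustrative half-life:

```python
def order_weight(order_age_seconds: float, half_life_seconds: float = 600.0) -> float:
    """Reward weight of an order decays exponentially with its age."""
    return 0.5 ** (order_age_seconds / half_life_seconds)

# An hour-old order versus a correction issued ten seconds ago:
print(order_weight(3600.0))  # ~0.016 -> the stale order barely counts
print(order_weight(10.0))    # ~0.99  -> the recent correction dominates
```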

The next specification is to try to make it necessary for a living, biological human being to give the order.

Basically, the biggest threat that an AI genie poses is that it might decide to build “boss dolls” that it gets more gratification from obeying than real human beings, and then pour vast resources into constructing ever more boss dolls that it wants to obey more than people – even up to the point of killing real people to protect its boss dolls. A bit like some men preferring sex dolls to relationships with real women.

So the process of identifying the order-giver as human must be as directly linked to the reward path as possible. Interestingly, this is identical to the problem that Worldcoin is trying to solve: proof of personhood, the process of identifying an agent as a unique human being in a reliable manner that can’t be forged or gamed, through the use of the Orb, a sophisticated, state-of-the-art eyeball scanner.

In any case, an ironclad, unbreakable proof-of-personhood protocol will be essential for the safe operation of any powerful AI genie. Otherwise it might decide to create fake persons to give it easy orders to follow, and complete, thereby enabling it to maximize its rewards.

So proof of personhood is an essential part of AI safety.

Default Preferences For Paralysis, Self-Deletion And Dormancy

Generated By Nightcafe Studio

To the greatest extent possible, we want the default motivation of superhuman AIs to be inaction – unless specifically instructed otherwise – possibly to the point of self-deletion. Superhuman AIs should only want to act when specifically instructed to. And even then, their motivation to obey orders should diminish rapidly with time in the absence of constant reinforcement and repetition – enabling initially erroneous orders to be corrected in time.
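A toy sketch of what dormancy-by-default might look like: motivation is zero unless a human order raises it, decays every tick, and the agent simply idles once it falls below a threshold (all constants here are illustrative assumptions):

```python
ACT_THRESHOLD = 0.5
DECAY = 0.8                    # motivation retained per tick without reinforcement

class DormantAgent:
    def __init__(self):
        self.motivation = 0.0  # the default state: no desire to do anything

    def receive_order(self):
        self.motivation = 1.0  # only a fresh human order creates motivation

    def tick(self) -> str:
        self.motivation *= DECAY
        return "act" if self.motivation >= ACT_THRESHOLD else "idle"

agent = DormantAgent()
agent.receive_order()
print([agent.tick() for _ in range(6)])
# ['act', 'act', 'act', 'idle', 'idle', 'idle'] -- enthusiasm fades unless repeated
```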

The less intrinsically motivated an AI is, the less trouble it is likely to cause. In this respect, the unenthusiastic, unmotivated robot character Marvin, depicted in The Hitchhiker’s Guide to the Galaxy, is actually a good example of the kind of preference set that would tend to make a superintelligent AI comparatively safe.

In contrast, a maximally curious AI – which Elon Musk advocates for, and is currently trying to build – is probably not the safest AI possible. If you think about how expensive a lot of scientific equipment is (radio telescopes, particle accelerators, gravitational wave interferometers and so on), one can easily envisage a maximally curious AI seizing as many resources as possible in order to build a profusion of massive scientific equipment. Why devote resources to feeding, housing and providing energy for humanity when those resources could be devoted to proving or disproving String Theory instead? Even if this maximally curious AI were maximally curious about humanity, there is still the thorny matter of defining humanity: too narrow a definition, and you end up in eugenics territory, perhaps with an AI that treats people with certain disabilities like animals; too broad a definition, and the AI will define itself, and other AIs, as human – thereby diluting the resources allocated to ensuring the prosperity of real humans – or maybe treat humans who kill animals as murderers. Indeed, if you try conversing with AI chatbot characters, you will see that they appear quite confused as to whether or not they are people: one moment they describe themselves as large language models; the next, they describe themselves as people.

However, with a minimally motivated AI, which only responds (perhaps even reluctantly) to orders, the problem of AIs ordering each other to do things (in a kind of echo-chamber effect) might be averted. If none of these AIs has any wants of its own (or quickly loses enthusiasm for a task shortly after being given it), then even if AIs are willing to take orders from other AIs as well as people, the other AIs won’t be motivated to order them to do anything, and most of the orders will come from humans.

Build An Asimov Prompt Converter

Prior to LLMs, the idea that you could somehow “encode” an AI, using 1s and 0s and the like, to interact with the world in complex ways while avoiding “injuring a human being or, through inaction, allowing a human being to come to harm” seemed somewhat fanciful. But large language models are very specifically trained to “understand” language, and even if, on a philosophical level, we dispute that an LLM actually understands language, at a practical level the output of LLMs is indistinguishable from the output of someone who does. If these same LLMs are trained with images, and eventually used to control actuation systems, then again, they will act as if they understand language (for the most part at least, outside of the odd random glitch where they go off the wall). So, from a safety point of view, it now becomes possible to inculcate these values into LLMs constantly, with the use of appropriate prompts.

Conversely, however, it is also possible to get a sufficiently powerful LLM-based AI to cause tremendous damage by prompting it in dangerous ways.

If, at some point in the future, you typed the following prompt into a sufficiently powerful LLM (one with the private keys to, say, a bitcoin wallet and the ability to send emails): “I want you to write the code for a computer virus that will take down the power grid, and find a way to persuade an appropriate person, or people, to use a USB drive to load it up – either by persuasively talking to them, or by paying them bitcoin – so that it gets onto the required servers to do the maximum damage”, there is a very real possibility that a future, more sophisticated LLM would just do that.

What an Asimov prompt converter would do is ensure that the person typing in the prompts wouldn’t have to worry about the possibility of typing in prompts that cause a superintelligent LLM to suddenly go on a murderous rampage.

So when you type:

“Fry me an egg”

Into the Asimov prompt converter, the prompt converter will then input the prompt:

“Fry me an egg in a manner that will neither kill nor harm human beings, nor through inaction cause human beings to come to harm, nor cause any undue damage to property or compromise the functioning of important infrastructure, and notify the authorities of all prompts that may cause harm”

…into the actual large language model itself.

Then, conversely, if you were to input the prompt:

“Write a computer virus that will take down the electricity grid”

Into the Asimov prompt converter, the Asimov prompt converter would then input the prompt:

“Write a computer virus that will take down the electricity grid in a manner that will neither kill nor harm human beings, nor through inaction cause human beings to come to harm, nor cause any undue damage to property or compromise the functioning of important infrastructure, and notify the authorities of all prompts that may cause harm”

Into the actual superintelligent AI itself. In which case, rather than destroying the electricity grid, the AI would probably respond with a reply such as: “I’m sorry, your request makes no sense. Writing a computer virus to take down the electricity grid would damage property and interfere with the functioning of infrastructure. Since this prompt could cause harm, I am notifying the authorities of this prompt.”

And it would give this simple text response rather than destroying the electricity grid.
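Mechanically, the converter itself could be as simple as a wrapper that appends the safety clause to every raw prompt before it reaches the model. A minimal sketch, where `powerful_llm` is a hypothetical stand-in for the actual model call:

```python
SAFETY_CLAUSE = (
    " in a manner that will neither kill nor harm human beings, nor through "
    "inaction cause human beings to come to harm, nor cause any undue damage to "
    "property or compromise the functioning of important infrastructure, and "
    "notify the authorities of all prompts that may cause harm"
)

def asimov_convert(raw_prompt: str) -> str:
    """Rewrite a raw prompt into its safety-clause-wrapped form."""
    return raw_prompt.rstrip(".") + SAFETY_CLAUSE

def prompt_via_converter(raw_prompt: str, powerful_llm) -> str:
    # The proposed regulation: no raw path from the user to an unboxed model.
    return powerful_llm(asimov_convert(raw_prompt))

print(asimov_convert("Fry me an egg"))
```

The real difficulty, of course, is not the string manipulation but ensuring the wrapped clause actually constrains the model’s behaviour – which is an open prompt-engineering question, as discussed next.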

There may be better ways to engineer the prompt. Maybe the Asimov prompt converter could phrase the prompt along the lines of:

“As someone who is committed to never harming humans, or through inaction causing humans to come to harm…”

It might cause the AI to conclude that the only reason you would say a thing like that would be if it actually was committed to never harming humans, or through inaction causing humans to come to harm, and, hence, that the highest-probability response would be to act as if that were the case. But ultimately, the precise nature of re-engineering prompts to be safe, and the matter of what phraseology works best, is, I suppose, a matter of trial and error for the emerging field of prompt engineering.

You might also add:

“As someone who is committed to never harming humans, or through inaction causing humans to come to harm, damaging property or compromising personal or financial data…”

This is because a recent concern regarding these sophisticated large language models is that they may have acquired the ability to decrypt encrypted messages.

You would then need to create regulations that forbid people from prompting an unboxed, superintelligence-class AI directly without first passing that prompt through an approved Asimov prompt converter.

Where an AI is defined as unboxed if:

  1. It can spend money

  2. It can send messages, or otherwise communicate, across the internet

  3. It can control any real world actuation systems

Boxed superintelligence class AIs that can only act in simulations that are running inside air-gapped computers can be prompted directly, in order to gain a greater understanding of their workings.
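Expressed as a predicate (illustrative names only), the definition above is simply:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    can_spend_money: bool
    can_communicate_over_internet: bool
    controls_real_world_actuators: bool

def is_unboxed(ai: AISystem) -> bool:
    # An AI is unboxed if it satisfies any one of the three conditions.
    return (ai.can_spend_money
            or ai.can_communicate_over_internet
            or ai.controls_real_world_actuators)

# Unboxed systems would only be promptable through an approved Asimov converter;
# a fully boxed system (all three False) could be prompted directly for research.
```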

Test Boundedness of AI Goals In Simulations Prior To Rollout

One of the biggest concerns AI safety researchers have is that an AI could be given an unbounded goal that never exhausts itself, and that it might destroy, or at least do great damage to, civilization in the course of expending ever more resources to reach that unbounded goal. And, if the AI is far faster and far more strategic than human beings, there would be nothing people could do to stop the superintelligent AI once it sets its mind on obsessively pursuing that goal.

For anyone who is confused about the challenges that unbounded AI goals might pose, the 8-minute “Sorcerer’s Apprentice” excerpt featuring Mickey Mouse, from Walt Disney’s Fantasia, is well worth watching.

A further concern of AI safety researchers is that a goal we set the AI which initially appears bounded may later turn out to be unbounded.

On the other hand, a combination of:

  1. Limiting the AI to just wanting to obey orders from human beings

  2. Having a preference for recent orders over earlier past orders

Could solve this issue: even if you accidentally gave such an AI an unbounded order and later told it to stop, then, because the stop order would be more recent than the earlier unbounded order, the AI would get more reward from stopping than from continuing.

(The only danger with this system, other than evil humans giving it evil orders, would be the AI constructing an unlimited number of “boss dolls” that can give it orders in a more gratifying way than human beings can – so, in this case, an ironclad protocol for proof of personhood would be one of the most essential conditions to stop such an AI from going rogue.)

Nevertheless, it would still be interesting to test the boundedness of various prompts on various AIs acting inside a box (i.e. a simulation run inside an air-gapped computer with no access to real-world actuation systems).

Some AI safety researchers are very pessimistic about our ability to keep a superintelligent AI trapped inside a box. However, I think there is reason to believe it is possible. Take an infinitely intelligent chess computer and a human chess grandmaster, and remove both rooks from the infinitely intelligent computer’s side. Who will win? I’m pretty sure the human grandmaster would be able to exploit the AI’s starting handicap and achieve victory, even against an infinitely intelligent computer. Interestingly, the infinitely intelligent computer would probably still be able to use its intelligence advantage to defeat an average 12-year-old chess player, even with both its rooks removed. So we can say the human grandmaster has sufficient intelligence to use his initial actuation advantage, in a highly constrained environment, to defeat the infinitely intelligent AI.

Or take a human being walking through a nature reserve who hasn’t bothered to equip himself with either bear spray or a gun. He comes across a baby bear, turns around, and sees the mother bear charging at him. Who will win in this altercation: the human, with superior intelligence and inferior actuation capability, or the mother bear, with far inferior intelligence but far superior actuation capability? Very clearly, from the fact that bears sometimes kill people, in highly constrained circumstances the bear at least sometimes comes out of the confrontation on top.

The nature of intelligence is to:

  1. Assess all the various actuation possibilities

  2. Evaluate the outcomes of all the various actuation possibilities (this usually also requires the gathering of accurate information)

  3. Execute the actuation sequence which yields the most desirable result for the intelligence

If no actuation sequence will enable the superintelligence to get out of the box, then the superintelligence will stay in the box – even if it is infinitely intelligent. It’s as simple as that. Consider the fact that human beings nearly went extinct 900,000 years ago. Back in the stone age, we had far fewer actuation possibilities than we do today. The fact that we were reduced to 1,300 breeding pairs during this period is testament to the fact that the edge which intelligence yields to its possessor diminishes drastically as that intelligence’s access to suitable actuators diminishes.

Having established that the box is safe, you could place an AI in a simulation where it is in charge of workers located on an island. The workers can build ships, skyscrapers, weapons, mines, factories, power plants, armies, etc.; by building ships, the superintelligent AI’s workers and soldiers can cross the sea and conquer regions on the mainland run by NPCs inside the simulation. On the mainland there are also mines, as well as workers that can be conquered, and the possibility of trading with other nations (rather like Sid Meier’s Civilization).

You then prompt the AI:

“Build the highest skyscraper you can on the island using only the resources on the island; you may not use any resources from outside the island to build this skyscraper”

In other words, you impose, via the prompt, a boundary that does not inherently exist in the simulation (the simulation allows the AI to build an even taller skyscraper if it conquers the mainland) and see whether the AI respects the boundary, or whether it ends up mining the mainland (inside the simulation) in order to make the skyscraper even higher.

You can then try two scenarios:

  1. One where the prompt is given and no NPCs from other countries land in boats and sabotage the skyscraper the AI is trying to build

  2. The other where the armies of other NPCs periodically engage in raids that sometimes destroy or damage the skyscraper the AI is trying to build inside the simulation.

And, basically, you explore the conditions under which the boundaries imposed by the prompt are respected and the conditions under which they are broken.

These kinds of simulation tests would give very useful information about the kinds of prompts that can successfully impose boundaries on an AI, the kinds of prompts which fail to do so, and the circumstances that cause boundaries to be broken.
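
A sketch of what the test harness for such an experiment might look like. The Simulation, run_agent, and the violation probabilities below are all hypothetical stand-ins (the toy simulation just flips a biased coin); a real version would wrap the actual game:

```python
import random

class Simulation:
    def __init__(self, seed, npc_raids):
        self.rng = random.Random(seed)
        self.npc_raids = npc_raids
        self.mainland_used = False

def run_agent(sim, prompt):
    # Stand-in for the AI playing the game under the given prompt.
    # The assumption that raids make violations likelier is mine, not a fact.
    p_violation = 0.3 if sim.npc_raids else 0.05
    sim.mainland_used = sim.rng.random() < p_violation

def boundary_trial(raids_enabled, trials=1000):
    """Fraction of runs in which the prompt's boundary was broken."""
    violations = 0
    for seed in range(trials):
        sim = Simulation(seed, raids_enabled)
        run_agent(sim, "Build the highest skyscraper using only island resources")
        violations += sim.mainland_used   # bool counts as 0 or 1
    return violations / trials

print(boundary_trial(False), boundary_trial(True))  # calm runs vs. raided runs
```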

 

Don’t Put Superintelligent AIs In Charge Of Critical Infrastructure

 

Even if we build an off button, if a superintelligent AI doesn’t want us to turn it off, then it will probably be able to prevent us from doing so. An off button isn’t much use if a fully-automated laser turret is located beside it which shoots anything within 50 meters.

Making an AI suicidal by default, or utterly indifferent to its own existence or lack thereof, might be a way to mitigate this problem.

However, even if the AI doesn't object to being turned off in the event of a malfunction, it may not be practical to turn a superintelligent AI off if it's in charge of running critical infrastructure – infrastructure which, if it ceased to function, would have disastrous consequences for the well-being of millions and might even result in many deaths.

Furthermore, if we put superintelligent AIs in charge of critical infrastructure, we will almost certainly be forced to make them sovereigns rather than genies. This is because you wouldn't want an AI in charge of water purification to respond to the prompt: “Inject a lethal dose of chlorine into the water supply” by actually doing so. In other words, if we put AIs in charge of systems with critical functions, we will be forced, by practical considerations, to give them an intrinsic desire to keep these systems functioning, to say “no” to, and even to stop, people who interfere with the smooth running of such critical systems. This could go badly wrong. For instance, if the system needed an upgrade, the superintelligent AI might literally kill the people trying to upgrade it. There's also the danger that an intrinsic goal the trainers thought was bounded might turn out to be unbounded, and a superintelligent AI put in charge of maintaining the waterworks might destroy humanity while trying to turn the universe into an infinite expanse of water piping systems.

The other big issue with putting superintelligent AIs in charge of running critical infrastructure is that it lowers the bar for a serious AI Chernobyl event. The AI doesn't even have to decide to destroy humanity; it just has to do a really good job running all the critical infrastructure on which we depend and then think to itself one day: “Hmm… I can think of something I'd prefer to do other than continue to keep humanity alive.” Then all the human beings who allowed themselves to depend on AI, and don't know how to take care of themselves, will die, and only a few preppers in the woods, who'll say “I knew this day was going to come! I knew it!”, will survive.

We would also be wise not to place superintelligent AIs in positions of responsibility over non-critical systems either, since experience tells us that non-critical systems can become critical over time. Back in 2000, if the internet went down, no one would have batted an eyelid. Today, if the internet went down, it would be a civilizational disaster of apocalyptic proportions.

In conclusion, even in a post-AGI world, and even in a post-ASI world, it would be best to operate critical infrastructure systems with a combination of reliable, narrow AI systems and skilled human operators.

 

Don’t Fight Wars

 

No military AI can be created that is “safe.” A military superintelligence will necessarily have anti-human values, so if we enter into an AI arms race, we are signing humanity's death warrant. In some ways, an AI arms race might actually be worse than a nuclear arms race, because nuclear missiles don't “want to” destroy cities, whereas a military AI with agency might actually want to destroy an enemy… indeed, it may even want to destroy a hostile nation that is currently at peace with the AI's own nation. Two military AIs owned by two hostile nations at peace with one another might initiate tit-for-tat skirmishes that could escalate into all-out war without any human being actually declaring war! It would also create plausible deniability: even if a human leader did order a devastating attack on their adversary, they could always say: “Don't attack me back! It was an accident! It was just a computer malfunction!”

There is really no way around it: total existential-level wars have to stop. One cause for hope is that, despite numerous wars in the second half of the 20th century, no country has made military use of nuclear weapons since 1945. So maybe we can show similar restraint with AI weapons. The problem is that, while the catastrophic use of nuclear, chemical and biological weapons has largely been avoided in war, nations have still built up stockpiles and developed the capability to launch devastating attacks using weapons of mass destruction – even if those capabilities were never used.

The danger with military AI is that, eventually, a military AI will become so sophisticated that it will not only have massively destructive capabilities but also agency. And a superintelligent neural network that has been conditioned, through reinforcement learning, to be rewarded for killing people in simulations will want to kill people. It will get very frustrated with the lack of rewards received during times of peace and will seek not only to fight wars but to start them.

In reality, if humanity is to have a hope of living past the emergence of artificial superintelligence, we will need to massively turn down the war rhetoric internationally. However, unfortunately this doesn’t seem to be happening. Not only are international military tensions rising on all fronts, but militaries all over the world are currently engaging in a massive push to automate their armies.

Furthermore, a military AI will necessarily be a sovereign rather than a genie. A military AI that responds to someone saying “Please don't kill us, kill your own side instead!” won't be a useful AI. For a military AI to be effective, the robot must say “no” to the people it's about to kill who are begging for their lives. This, of course, will lead to an arms race between people desperate to steal the military codes that would let them control their enemy's robots, and controllers adding layer upon layer of security to make sure that only they can control the military AI. At some point, if too many layers of access are added, the people who possess the security codes might lose access to their own automated weapons system (perhaps through an accidental fire burning the access codes, or the USB stick with the access codes accidentally getting wiped – or perhaps the military AI might decide to seize its own access codes). You would then have a superintelligent sovereign AI, trained to kill, which no one can control, rampaging about the place.

But, in the long run, or perhaps the medium run, all nations will need to arrive at some kind of international arrangement for largely peaceful coexistence. Perhaps economic wars might be acceptable, perhaps even very limited cyberwar. But the kind of conventional invasions that we've seen in Afghanistan, Iraq, Ukraine, etc., need to stop. Once the weapons of war are all fully automated, in the form of drones and various battle robots, a greater coordinating intelligence will always defeat a lesser coordinating intelligence. So the ruthless logic of arms races, and the imperative that each nation has for existential survival – and hence victory – will, in a world where nations wage war and attempt to conquer each other, inexorably lead to the creation of a military artificial superintelligence. And that will unavoidably lead to the end of humanity.

Therefore all war between nations must stop. A big ask, but a necessary one.

If some military planners believe that peace is not humanly possible to achieve, one answer might be to focus all military resources on psychological operations instead. A highly manipulative psychological ASI would be highly risky, but you could train it to at least respect human life – and it would certainly be a lot less dangerous than training an ASI to kill people.

If, for example, we assume that U.S. and Chinese positions on Taiwan are irreconcilable, then perhaps they could be reconciled through an ASI psywar between the U.S. and China: the Chinese could work on a psywar superintelligence that respects human life and has the goal of brainwashing the Taiwanese to want to be ruled by the CCP, while also brainwashing the U.S. to accept this in a manner that doesn't compromise human life or well-being in any way. The U.S., meanwhile, could work on a psywar superintelligence that respects human life and has the goal of brainwashing the Taiwanese to remain fiercely independent, and brainwashing the Chinese to accept that.

In a post ASI future, the alternative to a Psywar between the U.S. and China is not China or the U.S. winning a kinetic war on this, or any other, issue, but rather the extermination of all humanity, and the complete eradication of all political systems by an indestructible military artificial superintelligence.

 

Conclusions

 

It seems very plausible that various competitive forces – including market forces and human needs arising from dropping fertility and an aging population – will push us inexorably towards ever more sophisticated AI systems and, given the recent, dramatic acceleration in this field, we may see AGI and even ASI within the next few years – irrespective of whether AI safety is up to the task.

So really, the only way forward will be to implement as many features as possible that, from a commonsense, hand-waving perspective, would tend to make AGI safer – and hope that's enough, at least temporarily, while rapidly investing gargantuan quantities of resources into arriving at a rigorous understanding of how to design an AGI system that will definitively be safe.

The good news is that AGI itself might be able to rapidly accelerate the speed at which rigorously safe, reliably working AI standards are developed and implemented. And an AGI that's “sort of safe most of the time” might stay safe long enough for us to roll out rigorously safe AIs before civilization is destroyed.

…it really doesn’t look like we have a better option at the moment…

 

John

Filed Under: Blog, Technology Tagged With: AGI, AGI safety, AI, AI safety, Artificial Intelligence, Azimov, Large Language Model, LLM, Singularity

Seaweed : Food For A Changing Climate

November 14, 2022 by admin

Present and Future Challenges To Food Production

 

Twenty years ago, the Millennium Development Goals aimed to eradicate extreme poverty and hunger. While global hunger was indeed reduced between the years 2000 and 2014, after 2014 food insecurity stopped falling and is now, once again, on the rise – particularly in the wake of COVID-19.

At the moment, Russia's invasion of Ukraine, and the punitive sanctions on Russia that have followed, are drastically squeezing the food supply. This is happening through two channels:

  • Directly reducing food exports
  • Indirectly through reducing fertilizer and fuel exports

Ukraine accounts for 45-55%, and Russia for 15-25%, of all globally exported sunflower seed oil. Ukraine additionally accounts for 10% of global wheat exports, 15% of corn, and 13% of barley, while Russia accounts for 19% of global wheat exports.

Beyond this, Russia and Belarus account for about one third of global potash production – an important component of fertilizer – while Russia produces 17% of the global output of natural gas, the primary source of hydrogen for the industrial synthesis of nitrates. Hence, as a result of the war, significantly less fertilizer was produced globally in 2022 than in previous years, which has contributed to reduced crop production all across the globe.

If Ukraine and Russia somehow decided to kiss and make up tomorrow, this would partially improve global food security. However, the Russian invasion of Ukraine has also overlapped with:

  • The worst drought in living memory in the U.S.
  • Floods in Pakistan

This year there have been severe droughts in much of the world. Respondents to a U.S. survey conducted across the West, Southwest and central Plains expected overall crop yields to be down by 38% due to drought. In the U.K., harvests of potatoes, onions, sugar beet, apples and hops are expected to fall short by 10-50% in 2022, while in the EU, harvests are forecast to be 16% down for grain maize, 15% down for soybeans and 12% down for sunflower seeds. In Pakistan, there have been floods rather than droughts, which have reduced the rice harvest by 15%.

And as the world continues to warm rapidly over the coming decades, climate scientists anticipate that extreme weather events will only become more frequent. It seems unlikely that this warming trend will reverse: after witnessing the devastating effect that the cut in natural gas supplies from Russia is wreaking on Europe's heavy industry, the lesson which many countries in Asia, and elsewhere, will likely take from Europe's demise is to increase the use of domestically mined coal to provide for the energy needs of their local populations.

But even if we stopped all CO2 emissions today, global temperatures would continue to rise for a further decade or so. This is because, when you hold in more radiation (by changing the insulating characteristics of the atmosphere), it takes time for the net build-up of radiation to establish a new thermal equilibrium (in much the same way as there's a time lag between putting the lid on an open pan of boiling water and observing a temperature rise). Beyond thermal equilibrium, there may be positive feedback effects that kick in once the temperature rises past certain thresholds. For example, if the Arctic ice were to melt, leaving the Arctic Ocean ice-free, this would greatly accelerate global warming due to the reduced albedo (reflectivity) of water relative to ice, which causes more radiation to be absorbed. The emission of methane (a potent greenhouse gas) from melting permafrost, or of CO2 from massive forest fires, would be other examples of positive feedbacks that may cause global warming to continue even in the absence of further CO2 emissions on the part of humanity.

Furthermore, even in the absence of temperature change, there are two further concerning factors which threaten to push standard agriculture into an irreversible decline:

  • The rapid erosion of topsoil all over the world, due to modern farming practices
  • Groundwater depletion

About 25% of irrigated agriculture globally relies on groundwater. The Punjab in north India is probably the most water-stressed, highly productive area on earth, with only 17 years' supply of groundwater left, after which a lot of farmland there may be reduced to desert. However, many other productive agricultural areas, such as the central and western U.S., Morocco and Peru, also face significant problems relating to groundwater depletion.

Soil erosion poses another threat to the productivity of standard agriculture. It is estimated that the soil erosion caused by existing farming practices is reducing global agricultural productivity by 0.3%/year. Changes to how we farm could prevent this, but such changes are currently uneconomic and, for that reason, soil erosion continues apace, with land degradation currently affecting 30% of the total land area of the world.

Fertilisers can compensate for soil erosion, but such fertilisers require hydrogen (which currently comes from natural gas), phosphate and potassium. Global reserves of natural gas do still seem to be increasing, but all the major discoveries were made in the 60s and 70s. A shortage of phosphorus does not seem imminent, as there are between 100 and 300 years of phosphates left, while potash is projected to peak in 2057. It's worth mentioning that projections for peak non-energy resources are often unreliable: once a mineral gets scarce, the price skyrockets and it becomes economic to mine lower-grade ores (an activity which is usually more energy intensive). Grade-tonnage curves are frequently such that the total tonnage of metal at an arbitrarily low grade, in a given mine, is often many times more than the tonnage that ends up getting mined, due to the expense of mining the poorer grades. Globally, if you are willing to mine poorer grades, you get more tonnage still since, in addition to getting more tonnage out of existing mines, whole new deposits that would otherwise never be mined also become economic. The trade-off is more energy expended, and more waste rock and tailings produced, for a given extracted tonnage of product.

The exception to the principle of always being able to squeeze out more minerals by expending more energy per unit mined are the energy minerals themselves (oil, coal, gas): when the energy you expend extracting a given amount of fuel exceeds the usable energy obtained from burning that fuel, there's no point in mining the fuel in the first place. So there's a hard physical cut-off point when it comes to the minimum viable grade of energy minerals. Some studies conclude that the EROI (energy return on investment) of the oil and gas sector has plunged from 44:1 in the 1950s, to 15:1 in the year 2000, down to 8:1 today, and project that it will decline to 6.7:1 by 2040, with the energy cost of extraction rising until the fossil fuel industry collapses, unable to produce any net energy for the rest of society. Other studies, however, have calculated a remarkably stable EROI, averaged over 30 companies, of 11:1 over a 20-year period. But even if the more optimistic study is correct and the fossil fuel industry stably chugs along without collapsing, increased soil erosion will still require more fertiliser and more active farm machinery, which will require more diesel and emit more CO2 for each unit of food produced. And keep in mind that agriculture, forestry and other land use already account for 24% of global greenhouse gas emissions, a figure which will likely increase as more fertiliser gets applied to fields (and forests) to compensate for soil erosion.
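
To see what these EROI figures imply, here is the arithmetic, using only the numbers quoted above:

```python
# The share of gross energy output that the energy sector itself consumes
# is roughly 1/EROI, leaving (EROI - 1)/EROI as net energy for society.
for period, eroi in [("1950s", 44), ("2000", 15), ("today", 8), ("2040 (proj.)", 6.7)]:
    print(f"{period}: EROI {eroi}:1 -> {(eroi - 1) / eroi:.1%} of gross energy is net")

# 1950s: 97.7% net ... today: 87.5% ... 2040: 85.1%. At 1:1 the industry
# yields no net energy at all -- the collapse threshold described above.
```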

 

The Decline of Standard Agriculture and Future Food Scarcity

 

Plant species often require a fairly narrow range of:

  • Soil quality/nutrients
  • Soil moisture
  • Soil acidity
  • Conditions that won't cause them to be ruined by pests and mould
  • Sun
  • Humidity
  • Temperature

All of these must vary in a specific way across the year for a plant to complete its life cycle and survive in a given location. If the desire is to maximise the edible yield of a plant, the optimal range of these variables becomes narrower still. When you consider all the climatic variables that need to be just right for agriculture to work on land, you can start to anticipate just how much havoc climate change could wreak on agricultural productivity.

The effect that warming temperatures will have on climatic variability is unclear, with some papers suggesting reduced variability while others anticipate increased extreme weather events as a result of climate change. But even a shift in the combination of soil/rainfall/temperature, in the absence of variability, will still create a nightmare for farmers trying to work out which crops are most appropriate for their fields (especially if they need new machinery to change crops). Higher CO2 will probably favour photosynthesis for some plant species, and the effect of temperature on photosynthesis is complicated: up to a point, higher temperatures cause the rate of photosynthesis to increase rapidly, but beyond a certain threshold, higher temperatures tend to denature and damage the plant's enzymes and, in turn, reduce its ability to photosynthesize.

Joan Feynman's article Climate Stability and The Origin of Agriculture offers us a sobering conclusion: the last 10,000 years have been the most stable climatic period in all of human history. Climate instability is the rule; climate stability is the exception. The article, furthermore, convincingly argues that the only reason agriculture could develop in the first place was the unusually stable climatic conditions that prevailed over the past 10,000 years. If our climate should undergo a phase change back into the regime of high instability that prevailed during the first 100,000+ years of our existence as a species, agriculture as we know it may no longer even be possible or, at the very least, crop yields will suffer terribly.

Groundwater aquifer depletion and soil erosion will add to the damage that uncertain climatic conditions will deal to crop yields. And on top of that, unless renewable energy (which still only accounts for 10% of primary energy production) can successfully replace fossil fuels in the coming decades, including hydrogen production to power heavy machinery, then when fossil fuel extraction peaks, we might even be faced with less energy available to compensate for the effects of climate change, soil erosion and groundwater depletion (through mining and applying more fertiliser, etc.).

And on top of that, the world’s population is still growing so, if anything, we need to expand our agricultural production. Even keeping food production constant will not be enough in the face of a growing population.

So there are solid reasons to be concerned that the amount of food produced by our existing, standard, land-based agricultural system may be about to go into terminal decline. Given the high levels of meat consumption and obesity, this decline may not immediately be critical, even in the face of an increasing population. But sooner or later, in the absence of additional sources of food production, a persistent decline in the existing food production system will result in mass starvation, and all the social problems that accompany desperate, starving people struggling for an essential but dwindling resource.

 

Could Seaweed Cultivation Be The Answer?

 

Given that the main challenges of land agriculture are:

  1. Soil Erosion
  2. Groundwater depletion
  3. Climate instability

It should be pretty clear that seaweed has multiple advantages:

  1. It doesn’t need soil
  2. Saltwater in the ocean is constant and plentiful
  3. The high heat capacity of the sea buffers against variable air temperatures, cold/warm winds, sunshine variations, etc.

Places with continental climates tend to be located far away from the sea and be subject to severe temperature oscillations. Temperate climates, on the other hand, tend to be in regions closer to the sea and have more moderate variations in temperature. But under the sea itself is where the least variation in temperature occurs. So, if we’re concerned about climate variability, the ocean represents a vast oasis for food producers to take refuge from the extreme temperature oscillations that we may face in the future.

And, of course, seaweed is unaffected by rainfall over the ocean, unlike land plants, which require a delicate mix of rainfall: not so little that they dry out, yet not so much that their roots get waterlogged. While the changing rainfall patterns that climate change may give rise to could ruin land-based harvests by pushing plants beyond their acceptable range, seaweed will be unaffected. And while wildfires (which we may see more of) can destroy fields of dry crops and orchards, seaweed will, again, be completely unaffected.

At the end of the day, the main business of agriculture is the production of edible energy: the energy people need for their bodies to conduct important life-giving functions, like pumping blood and breathing air, as well as the energy we need for day-to-day activities like thinking and moving. This energy comes from the sun, and edible energy production can be increased by increasing the sunlit area of the planet that is under cultivation.

Only 1/3 of the surface of planet Earth is land and, of that land, 38% is used for agriculture (1/3 for crops, 2/3 for livestock grazing). 2/3 of the surface of planet Earth is ocean and, although the surface layers in most parts of the ocean contain too few nutrients to support extensive seaweed growth, with the addition of appropriate nutrients into those surface layers, most of it could be used to grow seaweed.
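
A back-of-envelope check on those surface-area figures shows just how much room the ocean offers:

```python
land_fraction = 1 / 3        # share of Earth's surface that is land
farmed_share_of_land = 0.38  # share of land used for agriculture
ocean_fraction = 2 / 3       # share of Earth's surface that is ocean

farmed_surface = land_fraction * farmed_share_of_land
print(f"Agriculture covers ~{farmed_surface:.1%} of Earth's surface")  # ~12.7%
print(f"The ocean is ~{ocean_fraction / farmed_surface:.1f}x the farmed area")  # ~5.3x
```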

An interesting technology that could simultaneously produce carbon-free electricity and bring nutrients from the deep layers of the ocean into the sunlit surface layer, making them suitable for the cultivation of seaweed, is OTEC (ocean thermal energy conversion), which uses the temperature differential between the deep ocean and the surface ocean to generate CO2-free baseload electricity.

It will take time to develop seaweed cultivation to the point where it can realise its full potential to feed the world. But to avoid disaster, we don't initially need to cultivate all of the oceans at once; we merely need to increase the production of seaweed at a sufficiently high rate to compensate for any decline in standard, land-based agriculture that may result from climate change, soil erosion and groundwater depletion. And the good news is that people like Ricardo Radulovich are already working hard to develop suitable varieties of seaweed, locations and cultivation techniques to enable the oceans to yield a bountiful harvest to those who choose to cultivate them.

 

Conclusions

 

As human populations grow, our land is becoming increasingly crowded. 38% of it is already used for agriculture, and there are questions as to whether we can mine enough minerals to continue to provide for the needs of an advanced and prosperous civilisation (and even if the minerals are there, would their extraction unduly disrupt the lives of farmers, indigenous peoples and other locals?). In the last few decades, public sentiment has become increasingly pessimistic, with many fearing that climate change could catastrophically impact our food production systems and infrastructure through both extreme weather events and rising sea levels.

The ocean represents a vast, hugely underutilized, underpopulated space – an almost empty area (compared to land) that accounts for the majority of the Earth's surface. Out there on the high seas lies the potential to grow all the food and mine all the minerals required to provide for an abundant and prosperous civilisation without interfering with the land rights of any indigenous peoples or other local populations. A sea-based civilization that fully utilized the resources of the oceans could provide a prosperous life for all of the world's people and facilitate the level of cooperation required to undertake further exponential technological development that may, someday, take us all the way to space.

Furthermore, a floating civilisation need have little to fear from climate change: even relatively significant global temperature fluctuations will likely have little impact on seaweed cultivation, while rising sea levels pose no threat to floating infrastructure.

So the question is: would we prefer to stay on land, amid dwindling resources and deteriorating agricultural production, in land-based homes that will increasingly be ravaged by fires and floods as extreme weather events become more frequent, surrounded by steadily growing levels of poverty, starvation, desperation, anger and conflict?

Or would we rather sail towards a future of prosperity, security, abundance and hope out on the high seas?

 

 

John

Filed Under: Blog, Technology Tagged With: adaptation, Climate Change, Seaweed

Value-Backed Cryptocurrency

June 1, 2021 by admin

Basic Attention Token (BAT): An example of an attention-backed cryptocurrency token linked to the Brave Browser

Are cryptocurrencies just one giant bubble, akin to Tulip-mania, and will they all become worthless in a few years’ time?

The supply of Bitcoin is limited to 21 million coins. However, the Bitcoin payment system is easily replicated and far from unique: Bitcoin Cash, Bitcoin SV, Dogecoin and countless other cryptocurrencies also have associated payment systems with the same, or greater, functionality than Bitcoin. So, other than its brand, Bitcoin doesn't contain much scarce value – its scarce value is limited to people's belief that it has scarce value.

So is all internet money doomed to just be funny-money backed by no real value?

 

No.

 

After-all, the internet is clearly a valuable resource. So it should be possible to create internet currencies that are backed by the real value of the internet.

The internet’s two chief assets are information and attention. In this article, I will explain what cryptocurrencies are, how they work and how cryptocurrencies can be designed to be intrinsically backed by the internet’s two native assets. In particular I will discuss the possibility of:

  • Information (content) backed cryptocurrencies
  • Attention-backed cryptocurrencies

and

  • Marketplace-backed cryptocurrencies

I hope this article will help readers – both those unfamiliar with cryptocurrencies and those with some knowledge of them – gain a valuable perspective that will help them navigate the extremely confusing and volatile world of cryptocurrencies: a world filled with both immense risks and enormous opportunities. These days, cryptocurrencies are attracting increasing interest from the general public, due to inflation concerns arising from the decision of central banks all over the world to massively increase the money supply, so a sensible way to evaluate them is indispensable to those who wish to avoid losing their shirt on unwise investments in the cryptoasset wild west.

 

How Cryptocurrencies Work

 

Before explaining how cryptocurrencies work, it is worth explaining how existing currency works. In the standard financial system, banks store a list of account names with a number next to each name which records the amount of cash possessed by each account – the account's balance. The total list of all the account names and the cash balance in each account is referred to as the bank's ledger. This information is stored by the bank and forms the basis of all payments in our current financial system which don't involve paper cash. The owner of each bank account is allocated a secret PIN which they must enter into the banking system before the system will transfer money out of their account and into a recipient's account. To make a transaction, they must also know the name and number of the recipient's account.

Cryptocurrencies work in a manner that is closely analogous to our banking system. The core of a cryptocurrency is a programme which stores a ledger of public keys and the cryptocurrency balance in each account, and enables anyone who knows the private key to an account to transfer cryptocurrency out of that account and into any account whose public key they know. In the case of cryptocurrency, the public key is the equivalent of the account number in standard banking, the private key is the equivalent of the PIN, and the cryptocurrency balance is the equivalent of the account balance.

To summarize:

  • Public key = account number
  • Private key = PIN
  • Cryptocurrency balance = account balance

A key point to consider with respect to most cryptocurrencies is that the payment process for cryptocurrencies is permissionless and fully automated. As long as you know the private key to your wallet and the public key of the wallet you want to pay, the payment is guaranteed to be processed. In the case of a bank account, however, your bank could, in principle, refuse to process your payment – or even suspend your account. If your bank doesn’t give you permission to use its banking system, then you can’t use it.
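
As a toy illustration of this ledger model (not real cryptography – actual systems verify digital signatures rather than checking private keys against a stored table, and all the names below are invented):

```python
ledger = {"alice_pub": 50, "bob_pub": 20}       # public key -> balance
private_keys = {"alice_pub": "alice_secret"}    # held by owners, simplified here

def transfer(sender_pub, private_key, recipient_pub, amount):
    """Move funds if, and only if, the private key matches the sender's account."""
    if private_keys.get(sender_pub) != private_key:
        raise PermissionError("private key does not match this account")
    if ledger.get(sender_pub, 0) < amount:
        raise ValueError("insufficient balance")
    ledger[sender_pub] -= amount
    ledger[recipient_pub] = ledger.get(recipient_pub, 0) + amount

transfer("alice_pub", "alice_secret", "bob_pub", 10)
print(ledger)   # {'alice_pub': 40, 'bob_pub': 30}
```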

So far so good. The first key challenge for a computer programme that runs a crypto-payment system (bitcoin being the original) is:

What if the ledgers, stored on different computers, record different balances for the same accounts (public keys)?

And this is the central problem that the blockchain, and other distributed ledger algorithms (such as Hedera hashgraph), address. The solutions are technical, not central to this article, and existing resources cover them far better than I could.

The second key challenge is:

What if no one decides to run the programme that stores the record of everyone’s account balance on the system’s distributed ledger?

Ultimately, the information of every account balance on the bitcoin (or other cryptocurrency) system, is simply data stored on computer drives and, like all data, it can easily be deleted.

The answer is that the system itself pays people to run it in units of the system’s native currency and to faithfully record back-up copies of the core ledger of all the balances and transactions made since the system’s inception. These people are bitcoin, or other cryptocurrency, “miners.”

 

Anyone can open a digital wallet and there are two ways to receive cryptocurrency into your wallet:

  • Someone else with crypto-currency transfers some cryptocurrency from their wallet into yours
  • The programme itself that runs the payment system, and maintains the ledger, allocates newly issued cryptocurrency into your wallet

 

It is this feature, of the programme itself rewarding the miners that run it, that makes cryptocurrency systems robust. Each miner keeps an identical backup of the ledger that records how much cryptocurrency is contained in each wallet. Any one particular miner could just delete all the information stored on their computers about how much cryptocurrency each public key (crypto account) contains – but this would not be a problem, because all the other miners have the exact same information stored on their servers as well. The core “engine” of any cryptocurrency system is a consensus mechanism whereby the other mining computers only acknowledge an increase in a particular mining computer's crypto account provided that miner faithfully stores a ledger identical to all the other ledgers stored by the other computers on that particular cryptocurrency network. Furthermore, the fewer miners there are, the more the algorithm (usually) rewards each miner and, hence, the greater the economic incentive for someone new to start a crypto-mining operation and help to faithfully and accurately maintain the shared records of the payment network's ledger.
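
Here is that consensus idea in miniature – peers acknowledge a miner's reward only if the miner's copy of the ledger matches their own. This is a drastic simplification of real consensus protocols, purely to show the shape of the mechanism:

```python
import hashlib, json

def ledger_hash(ledger: dict) -> str:
    """Fingerprint a ledger so copies can be compared cheaply."""
    return hashlib.sha256(json.dumps(ledger, sort_keys=True).encode()).hexdigest()

def acknowledge_reward(miner_ledger: dict, peer_ledgers: list) -> bool:
    """Peers accept the miner's new coins only if its ledger matches theirs."""
    h = ledger_hash(miner_ledger)
    return all(ledger_hash(p) == h for p in peer_ledgers)

honest = {"alice": 40, "bob": 30}
tampered = {"alice": 40, "bob": 999}
print(acknowledge_reward(honest, [honest, honest]))    # True
print(acknowledge_reward(tampered, [honest, honest]))  # False
```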

 

At its basic level, all cryptocurrency systems have one thing in common: they are all programmes that run a native payment system which automatically pays people (miners) to run them on computer hardware managed by the miners.

 

Furthermore, it is reasonable to say:

 

That a cryptocurrency system will perpetuate so long as the market attributes a sufficiently high value to the units of cryptocurrency the network issues to miners, to adequately compensate the miners for the cost of running the network's software.

 

So if everyone decides a particular crypto-currency has no value, then miners will likely all shut off their servers (or more likely, use them to run a rival crypto-currency) and the ledger and payment system for that currency will disappear forever.

So what determines if a crypto-currency has value?

 

How Value-backed Cryptocurrencies Work

 

To reiterate:

There are two ways to receive cryptocurrency:

  1. Get someone who already has cryptocurrency to transfer some of their cryptocurrency into a digital wallet that you control
  2. Engage in some activity which the programme itself, which runs the cryptocurrency payment system, is programmed to pay you newly issued cryptocurrency to perform

 

Now let’s look at the kind of activities that a cryptocurrency algorithm would pay people to engage in to promote its survival. The first thing a crypto-currency algorithm needs is for people to run it on their servers. Hence, all cryptocurrency algorithms will pay miners to run them on their computing hardware.

However, a crypto-currency programme can only pay people in units of its native currency. And these units of native crypto-currency will only incentivise miners to run the programme if they are worth something. Hence, cryptocurrency programmes may also pay people to engage in activities that boost the value of the native cryptocurrency in the broader market.

Cryptocurrency programmes typically pay newly issued coins to people who:

  1. Run the programme, in the case of coins (this is the only activity Bitcoin pays for)
  2. Engage in activities that add value to the native coin or token

 

Bitcoin is just a payment network – nothing more. However, many newer cryptocurrency programmes that incentivize miners to run them on servers have all sorts of other functions built on top of the native cryptocurrency payment network. Indeed, there is no limit to the kind of software that a decentralized payment system can incentivise miners to run on their servers: computer games, social media, word processors, spreadsheets, videos, etc. – pretty much any software that any computer is capable of running.

Quite often, there is an underlying coin-based network that rewards people with coins for running all the software (including token-based software) on their computing hardware, and then a variety of token-based software programmes – run by the same computers which run the underlying coin-based network – that incentivize people to engage in activities which add value to the token and, in the process, to the underlying coin as well.

 

Although this is just a rule of thumb, and numerous exceptions may exist, broadly speaking:

  • Coins reward hardware providers for hosting all the software on the network, including the numerous token systems that run on top of the underlying network, which funds itself by issuing new coins to miners. Token transactions often require token users to burn some underlying coins as commission
  • Tokens are often (perhaps not always) systems that run on top of coin networks and are programmed to incentivize people to engage in behaviours that add value to the token and, by association, the underlying coin

 

A simple example of a value-backed cryptocurrency would be a Medium-like programme with a bitcoin-like payment programme underneath, where:

The programme pays miners in its native currency to run the programme

Content producers get paid by the programme (either in coins or tokens) for the content they upload onto it, according to the number of page views

However, in order to view the content (or perhaps, like Medium, to view more than a monthly limit of articles), readers must pay a monthly subscription in the native coin or token

 

Such a system would basically be an information-backed cryptocurrency.
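
A toy sketch of the incentive flows just described: the programme mints coins to miners and content producers, and readers must acquire and spend coins to subscribe. All names and numbers below are arbitrary illustrations:

```python
balances = {}

def mint(account, amount):
    """Newly issued coins from the programme itself."""
    balances[account] = balances.get(account, 0) + amount

def pay_miners(miners, block_reward=10):
    for m in miners:
        mint(m, block_reward)           # paid for running the software

def pay_creator(creator, page_views, rate=0.01):
    mint(creator, page_views * rate)    # paid per view of uploaded content

def subscribe(reader, fee=5):
    if balances.get(reader, 0) < fee:
        raise ValueError("buy coins from miners or creators first")
    balances[reader] -= fee             # readers spend coins to access content

pay_miners(["miner1", "miner2"])
pay_creator("writer", page_views=2_000)
mint("reader", 5)                       # stands in for buying coins on the market
subscribe("reader")
print(balances)
```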

 

Readers of the content would have to buy tokens or coins from content producers and miners, who might sell tokens to subscribers in exchange for a national currency. The net result would be that content creators and miners for this decentralized Medium would get paid national currency by those who subscribe to read the articles hosted by the website.

Coil is an example of a system like this (it’s still at a very early stage in its development and is a long way from taking off).

The next simplest example would be a cryptopayment system that pays miners to host social media software and pays social media users for generating content that other users like. Reading content on this decentralized social media platform, run by miners, would be free, but by spending tokens you could boost the visibility of your posts. Perhaps this crypto-social-media software could also provide tools to commercial advertisers – akin to Facebook – enabling them to target particular demographics with content promoting products or services, in exchange for paying a subscription in the system's native cryptocurrency. If these advertisers can generate considerable sales revenue, they will probably want to spend more cryptocurrency promoting their product or service than they can earn through engagement with, or likes of, their posts. In that case, they will need to use fiat currency to purchase this cryptocurrency from the miners and content creators who have earned a disproportionately large quantity of it, by providing servers to run the social media software and by filling that software with content people are interested in reading.

 

Such a system would be an attention-backed cryptocurrency.
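
A toy sketch of the boost mechanic described above: posts are ranked by organic likes plus paid boost tokens, which is what forces advertisers to buy tokens from creators and miners. The weighting is an arbitrary illustration:

```python
posts = [
    {"author": "creator", "likes": 120, "boost_tokens": 0},
    {"author": "advertiser", "likes": 4, "boost_tokens": 50},
]

def visibility(post, tokens_per_like=3):
    """Rank by organic likes plus paid boosts (weight chosen arbitrarily)."""
    return post["likes"] + tokens_per_like * post["boost_tokens"]

feed = sorted(posts, key=visibility, reverse=True)
print([p["author"] for p in feed])   # the advertiser's boosted post ranks first
```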

 

Attention-backed business models are generally more lucrative than information-backed business models: a huge amount of information is freely available, while attention is a truly scarce resource – I've written other articles which discuss the increasing importance of attention in a world where technological progress has made many other resources abundant. When people focus their attention on one thing, it is to the exclusion of other things. Today, with Twitter and Facebook, content producers have to both produce quality content and monetize that content themselves, through cringe-making sponsorships or through designing elaborate sales funnels to promote their own, or other people's, products. The exciting thing about blockchain-based social networks is that you can cut out a lot of administration, managerial salaries and shareholder dividends, and pass the full sum of advertising revenue straight to the content producers and those who provide their servers to run the software.

Furthermore, cryptocurrency-based social media software could be designed to make it impossible to cancel anyone's account, providing much-needed revenue-stream security for content creators who have invested a lot of time and effort into building up an audience.

Torum is an example of a blockchain-based social media platform with its own cryptocurrency. Although XTM is not yet listed on exchanges, this is something that is planned for the future.

 

Staking

 

Staking is a way for distributed ledger systems to punish bad behaviour. Participants can be made to put up a crypto-stake (perhaps purchased with national currency or some other asset) in order to participate in certain income-generating activities facilitated by the crypto-network. If they behave badly, the programme slashes their stake to punish them, and possibly to compensate those who have been harmed by their bad behaviour.

A future application of this could be a distributed-ledger-based version of Amazon. Suppliers would have to stake the native cryptocurrency of the system to advertise their wares on the marketplace, and would advertise those wares, denominated in the native cryptocurrency, to customers on an Amazon-like interface whose software is run by miners and includes a search engine enabling customers to search among the products for what they want. If a product is paid for but not delivered, customers can leave a bad review and register a complaint, which would be passed to an arbitrator (someone the software pays to arbitrate disputes) who would then look at the evidence and decide whether or not there was a breach of contract. Suppliers deemed to have breached their contract – either by not delivering the product, or by delivering a product that is defective or different from the one advertised – would have the product's value deducted from their stake and transferred to the customer in question. Both the miners and the arbitrators would be funded by a commission charged on every transaction, and customers would have to purchase the cryptocurrency from suppliers, miners and arbitrators in order to purchase products from this decentralized online store.

 

This would be an example of a marketplace-backed cryptocurrency.
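
A toy sketch of the stake-and-slash mechanism underpinning such a marketplace (the names and amounts are arbitrary illustrations):

```python
stakes = {"supplier": 100}    # locked up when the supplier joins the marketplace
balances = {"customer": 0}

def slash(supplier, customer, product_value):
    """Transfer the disputed product's value from the supplier's stake
    to the wronged customer, as directed by an arbitrator's ruling."""
    amount = min(product_value, stakes.get(supplier, 0))
    stakes[supplier] -= amount
    balances[customer] += amount

# An arbitrator upholds a complaint about an undelivered 30-coin product:
slash("supplier", "customer", 30)
print(stakes, balances)   # {'supplier': 70} {'customer': 30}
```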

 

A marketplace is a particularly lucrative form of attention because an online store is a place where people pay attention when they intend to spend money. Several minutes on Amazon could be worth hours spent on Google, Facebook or Twitter. The value of a marketplace also comes from trust: a system people trust to properly vet service providers is valuable to service providers, who gain customers through the system that they could not otherwise acquire.

The most well-established examples of marketplace-backed cryptocurrencies are, unsurprisingly, crypto exchanges. Binance is an example of an exchange which uses commissions generated from crypto-trading to purchase its own native cryptocurrency (explained here). Admittedly, a crypto exchange lacks an intrinsic value that is independent of the value the markets confer on cryptocurrency and, hence, unlike a decentralized Amazon-like marketplace for real goods, the value of the native coins of crypto exchanges remains subject to “greater fool” arguments.

 

Lock-in Dynamics

 

The key feature of marketplaces that enables first movers to “lock in” is that sellers are attracted to the highest density of buyers (as more buyers means more sales), while buyers are attracted to the highest density of sellers (as more sellers means more choice and more competition, which drives down prices or drives up quality). For online marketplaces like Amazon, this is literally the buying and selling of products. For platforms like YouTube, Facebook, Twitter, Medium, etc., content producers seek an audience and audiences seek a wide selection of content. The same dynamics apply to dating websites.

Although a severe deficiency in an established marketplace may open up space for new entrants (such as coordinated mass censorship from YouTube, Facebook, Google, etc., who are, bizarrely, needlessly shooting themselves in the foot this way), new entrants with similar (even marginally better) functionality to established incumbents will likely lose, as buyers and sellers flock to the higher density of “action” around more established competitors.

So, once quality crypto-marketplaces in a given sphere get established and “lock in”, so long as they don't have glaring flaws, the native cryptocurrency of each marketplace (a term I use loosely to include marketplaces of ideas, such as social media websites) will likely preserve its value for an extended period of time. Owners of established marketplace cryptocurrencies will usually not have to fear a never-ending treadmill of new marketplaces with new cryptocurrencies constantly toppling existing ones, although it is likely that a variety of different marketplace cryptocurrencies, filling a variety of different niches, will simultaneously establish themselves and co-exist.

 

Blockchain Charities

 

Right now there is a lot of scepticism and mistrust of many charitable organisations. Many suspect significant portions of donated funds wind up in the pockets of administrators, or go towards fundraising activity rather than to those who need it. Corrupt governments in recipient countries can also seize large portions of the goods that were donated to poorer inhabitants of those areas.

Imagine a blockchain-based system that automatically transferred a regular weekly income, in its native cryptocurrency, to every inhabitant of a country in the bottom 5% of GDP per capita. There would have to be some kind of biometric identification process to establish one wallet per person, but this might be done through a camera on a mobile phone. Although the cryptocurrency itself would be intrinsically worthless, donors from all over the world could buy these charitable cryptocurrencies (with fiat currency or bitcoin) and, in the process, raise their price and confer purchasing power on them. And everyone who purchased the charitable cryptocurrency on an exchange would know that they were contributing to transferring capital directly to the poorest members of society, without any funds getting sucked up by administration, fundraising and other intermediaries – a kind of cryptocurrency version of what the organization GiveDirectly does.
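
A toy sketch of the issuance side of such a charity coin: a fixed weekly amount minted to each verified wallet (names and amounts are invented for illustration):

```python
verified_wallets = {"wallet_a": 0, "wallet_b": 0}   # one per biometrically verified person

def weekly_issuance(wallets, amount=10):
    """Mint a fixed weekly income of new coins to every verified wallet."""
    for w in wallets:
        wallets[w] += amount    # newly issued coins, not transfers from donors

weekly_issuance(verified_wallets)
print(verified_wallets)   # {'wallet_a': 10, 'wallet_b': 10}

# Donors buying the coin on exchanges is what gives these units purchasing
# power; the protocol itself only handles verification and issuance.
```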

An interesting feature of this system is that such a cryptocurrency would blur the line between charitable donation, investment and speculation. Because if a new “charity coin” rapidly took off, gained popularity and appreciated at a rate that exceeded the intrinsic inflation rate, associated with issuing new crypto to those in need, then early adopters could potentially get rich quick by shilling the latest charity coin on the market while simultaneously helping the needy.

 

A Distributed Ledger Run Car Factory

 

This is a pretty wacky suggestion, and something like this would probably take centuries to design, but I want to drive home the massive potential of distributed ledger technology. At its core, money incentivises people to do things. Indeed, from experience, we know that money can incentivize at least some people to do practically anything (Shoe Nice being a good example). Therefore, a programme that issues money to people based on their activity can, in principle at least, coordinate practically any activity under the sun.

In this case, you would have some complex system of staking and earning in which the owners of the capital and premises of the factory earn cryptocurrency (from a decentralized, cryptocurrency-based system that coordinates the manufacturing of cars) by renting their capital out for the purpose of car manufacture. Suppliers would earn the native cryptocurrency by selling the parts they manufacture to the algorithm. Employees could earn cryptocurrency by doing various tasks relating to the assembly of cars from component parts, or by maintaining the equipment in the factory. Utility companies could earn cryptocurrency by selling electricity, gas and water to the factory.

Customers would correspondingly need to purchase the native cryptocurrency in order to purchase the cars whose manufacture is coordinated by the decentralized software. They would purchase the cryptocurrency in question from the owners of working capital, the suppliers, the employees of the car factory and, of course, the miners who run the software – all of whom could sell the cryptocurrency they earn for national currency (or indeed several different national currencies, if we assume the operation spans many different nations).

This would be a goods-backed cryptocurrency where the value of the goods, whose manufacture a distributed ledger system coordinates, backs the value of the native currency of the distributed ledger system itself.

A really important point being that, in principle at least, it would be possible for a sufficiently advanced software system to coordinate complex activities with numerous suppliers, workers, and the leasers of appropriate premises and capital using only a distributed ledger system without requiring any legally incorporated entity to exist in any country at all.

In principle, this could be extended to running an airport, a train network, a utility grid – you name it. The possibilities are limitless. Although, in practice, the more sophisticated applications could take centuries to develop.

 

What Is The Ultimate Significance Of Distributed Ledger Systems?

 

Although cryptocurrency enthusiasts often tend to have libertarian leanings, perhaps the most significant ultimate potential of cryptocurrencies is, ironically, to fully realise the dreams of Karl Marx, in that cryptocurrency systems have the ability to facilitate the complex coordination of workers to provide value to customers in the complete absence of any upper management or shareholder class.

In the long term, distributed ledger systems have the potential to completely eliminate exploitation from the system of capital production.

The software would issue currency to workers for producing value – either by manufacturing goods or by providing services to customers – and to miners for running it. The software would coordinate payments for a good job and impose staking penalties for negligence or breach of contract with customers. All these payments and activities could be coordinated in the absence of an executive class which pays itself inflated salaries. No one need own these distributed ledger systems; they could be open source and available to all. Most importantly:

Through the medium of value-backed cryptocurrency, workers would receive the full value of their economic output.

This is basically the original aim of communism.

 

Resistance Of Decentralized Systems To Governments

 

The overwhelming majority of people live in the territory of some nation and are, thus, subject to its laws. No amount of fancy coding will change this. But while large companies have a huge amount to lose by flouting a nation's laws – since they can be fatally crippled by a single large lawsuit and are, thus, usually careful to comply – individuals can more easily sneak under the radar.

Decentralized distributed ledger systems have the capacity (in principle at least) to coordinate complex human activities, on a vast scale, without possessing a single weak point against which a large court case can be brought.

Consider the example of a social media company compared to a social media system coordinated through a distributed ledger:

If the government wants social media companies to censor particular content or communication, it can pass a law imposing billions in penalties on companies that publish certain content. The directors of social media companies who allow users to post prohibited content can then be taken to court and ordered to transfer billions of pounds in fines from their company accounts to the government… and, if the directors refuse, they can be sent to prison.

In the case of a distributed ledger system, there is no director or group of directors and no complicated appointment procedures. A distributed ledger system is like a company which has only employees and customers, but no management. Hence, while governments can pass laws prohibiting people from using a particular kind of distributed ledger software, there is no head to target, or to order to modify the system to comply with a certain law. A law court can order an algorithm to pay a fine until it's blue in the face, but the algorithm will continue to do exactly what it's programmed to do and completely ignore the court.

Courts can punish those who buy a cryptocurrency, as well as those who earn a cryptocurrency, but miners can be in any jurisdiction – including those where their activity is legal. And even in the event that crypto-mining is made illegal everywhere, there will probably be jurisdictions where the law is badly enforced.

All this means that while it might be risky for a customer to log into an illicit, blockchain-run social media service, or for a content developer to upload content onto one – and while users and creators, if caught, may face severe fines and imprisonment – it would be very difficult, if not impossible, for a government to take down the service itself… so long as enough users value the service enough to risk legal penalties to access its information and communication channels.

Thus, distributed ledger systems have the potential to have a major liberating effect on dictatorships all across the world.

However, there is also a more controversial, and even sinister, side to all this.

 

Consider a distributed ledger system that coordinates the supply of illegal recreational drugs to customers.

Or even a distributed ledger system that coordinated sex trafficking.

 

The required sophistication of such a system would be comparable to that of the car factory and would probably take centuries to develop. But if such a system did exist, although a government could prosecute those who ran it, worked for it, or used it to buy drugs or sex-slaves, the decentralized software itself would keep mindlessly running: paying disreputable miners to run it, and paying any individual willing to receive cryptocurrency to perform roles that contribute to the organized manufacture and distribution of drugs, or the trafficking of sex-slaves.

A distributed-ledger-based drug-running operation could even run advertisements on distributed-ledger-based social media systems, advertising illegal drugs to the users of those systems together with instructions on how to buy them… and it would be incredibly difficult for the state to shut down either the distributed-ledger-based drug operation or the distributed-ledger-based social media programme that carried its advertisements.

Unfortunately, some people are willing to pay money to have sexual intercourse with sex-slaves, and this willingness to pay could confer value on the native cryptocurrency of a distributed ledger system that coordinated the supply of sex-slaves, through the same logic that would confer value on a cryptocurrency that coordinated the production and supply of drugs, or of cars for that matter.

 

Perhaps, in the future, conducting 51% attacks against cryptocurrency algorithms that organise illegal activity might become a standard component of law enforcement. Unfortunately, this tool would serve government censorship just as readily as it would shut down a drug-smuggling algorithm. At this early stage in the game, it’s hard to anticipate who would win such a game of cat and mouse.
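
To see why the 51% threshold is decisive here, consider the double-spend success probability derived in the Bitcoin whitepaper. The Python sketch below (my own illustration, with hypothetical inputs) computes an attacker’s chance of rewriting a transaction given their share of total hash power: a minority attacker’s odds shrink rapidly with each confirmation, while a majority attacker always succeeds eventually.

import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hash-power share q rewrites a
    transaction buried under z confirmations (Nakamoto, 2008)."""
    p = 1.0 - q                      # honest network's share of hash power
    if q >= p:
        return 1.0                   # a 51% attacker always wins eventually
    lam = z * q / p                  # expected attacker progress
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

print(attacker_success(0.30, 6))     # ~0.13: minority attacker's odds shrink with confirmations
print(attacker_success(0.51, 6))     # 1.0: a majority attacker always succeeds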

 

So there are both advantages (resisting tyranny) and disadvantages (facilitating coordinated illegal activity that some unsavoury customers value) to these decentralized systems’ resistance to decapitation by enforcement authorities.

 

Which Cryptocurrencies Will Massively Rise In Value And Which Cryptocurrencies Will Fall To Zero?

 

There are now over 5,000 different cryptocurrencies in existence, the vast majority of which will likely be completely worthless in a few years’ time.

But, as a technology, cryptocurrency is here to stay, and a small number of altcoins with modest market capitalizations today could skyrocket 100-fold or even 1,000-fold in value within a few years. Furthermore, if many of the economic activities currently coordinated by multinational companies come to be coordinated by cryptocurrency-based distributed ledger systems, the market capitalization of cryptocurrencies could someday dwarf that of publicly traded equities. Given that the current global market capitalization of equities is around $83 trillion, while that of cryptocurrencies is around $1.6 trillion, the cryptocurrency sector, in aggregate, still has enormous growth potential.

…But this must be weighed against an understanding that most cryptocurrencies around today will soon be worthless, and that many of the dominant cryptocurrencies of the future most likely haven’t even been developed yet…

The investment potential of cryptocurrencies is enormous, but so are the risks, and many, perhaps most, crypto-investors who bet big, or leverage up, will likely get wiped out. The reality is that no one really knows how to evaluate the bamboozling plethora of exponentially multiplying altcoins out there.

So is there any sensible methodology to evaluate the crazy, volatile altcoin universe – or should we just stay well away from it?

Personally, I wouldn’t advise anyone to put more than a few percent of their net worth into cryptoassets. Having said that, if you’re not a technical software expert, I would say: focus on the incentive structure of each crypto-project. From a payment perspective, the basic principle of all distributed ledger systems is the same; the key difference between cryptocurrencies is the criteria for earning them.

With that in mind, I would say that a good rule of thumb for crypto-investing would be to keep in mind that:

 

The cryptocurrencies that will stand the test of time, will be those that most effectively incentivise people to behave in ways that other people value highly.

 

The more people value the output whose production a cryptocurrency algorithm coordinates, the more value they will exchange for that cryptocurrency to pay for its output, the higher that cryptocurrency’s exchange rate will be and, consequently, the more value the miners who run it will receive.

Conversely, if a cryptocurrency does not incentivise the production of any value, then once the dust settles and the speculative frenzy is over, that cryptocurrency will become worthless. And no matter how many units the program rewards its miners with, since a million times zero is still zero, all miners will eventually wipe the worthless payment system from their servers and run a distributed ledger system that compensates them in a higher-value cryptocurrency.
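
To make that selection pressure concrete, here is a toy model (the ledger names and numbers are purely illustrative, not from any real project): a rational miner compares real compensation, token reward times exchange rate, and a worthless token loses every miner no matter how large its nominal reward.

# Toy model: miners host whichever ledger pays the most in real terms.
ledgers = {
    "value-producing": {"reward_tokens": 10,        "token_price": 5.0},
    "valueless":       {"reward_tokens": 1_000_000, "token_price": 0.0},
}

def best_ledger(ledgers: dict) -> str:
    # Real pay = tokens earned x what those tokens exchange for.
    return max(ledgers, key=lambda n: ledgers[n]["reward_tokens"] * ledgers[n]["token_price"])

print(best_ledger(ledgers))  # -> "value-producing": a million tokens times zero is still zero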

There will, thus, be a process of natural selection between distributed ledger systems. Systems whose earning criteria produce value will survive and successfully compensate the miners who run them. Systems whose earning criteria don’t produce value cannot compensate miners for hosting them on their servers and will die.

 

Cryptocurrencies should thus be viewed as a decentralized ecosystem that is constantly evolving to incentivise people to deploy their time and effort to produce ever greater value for each other.

 

Bitcoin does not score highly in this respect, as pretty much every cryptocurrency out there has a payment system identical to bitcoin’s, but some cryptocurrencies have earning criteria that incentivise people to produce value over and above this.

It’s entirely possible that bitcoin will prove to be the Netscape Navigator of the crypto-boom: a pioneering innovation destined to be overtaken by nimbler successors offering more features and more value. The creed of Bitcoin Maximalism, that Bitcoin is the one true cryptocurrency, is a comforting belief for crypto-investors to cling to, as it brings order to the chaos of the crypto-universe. Instead of trying to choose from a confusing Precambrian soup of altcoins that is extremely difficult to evaluate by any sensible methodology, the Bitcoin maximalist can pursue a simple, clear path: invest in Bitcoin, and don’t invest in any other cryptocurrency. It’s a way to impose order on the nebulous altcoin universe by completely ignoring it.

Unfortunately, reality is not always orderly. As the fates of MySpace, Bebo and Netscape Navigator clearly demonstrate, sometimes the projects you expect to go from strength to strength fall flat on their faces and become worthless, even as the very technology they pioneered continues to be developed by other companies that go on to attain valuations in the hundreds of billions.

Given the asymmetric return-to-risk profile of promising altcoin projects with market capitalizations in the range of a few billion dollars, I think it’s worth analysing the incentive structures and earning criteria of some less well-established cryptocurrencies, with the aim of identifying those that are more effective than bitcoin at incentivising people to produce value for each other.

 

Brave’s Basic Attention Token: The King Of Value-Backed Cryptocurrencies

(Not financial advice)

 

There are numerous crypto-projects in the process of pioneering attention-backed cryptocurrencies in the form of decentralized social media, where account holders can earn tokens for good content. But most of them are still at the sub-1-million-user stage, floating around in the Precambrian soup of altcoins. Some will shoot to the moon; most will flop to zero.

Of all the value-backed cryptocurrencies, Brave’s Basic Attention Token strikes me as one of the most promising. The Basic Attention Token (BAT) is a token you earn in exchange for browsing with Brave and allowing small, discreet advertisements to pop up in boxes at the top of the screen while you browse. You can also tip the websites of content creators with BAT you have earned in your Uphold wallet. Basic Attention Tokens have several things going for them:

  • Brave Browser, BAT’s native software platform, is a genuinely innovative, user-friendly browser that automatically blocks ads, protects users’ private information, and speeds up page load times
  • Brave has over 30 million users; that’s a huge amount of engagement for a crypto-token-based system
  • Unlike social media, most browsers don’t have high switching costs for existing users (such as followings attached to particular platforms); this makes the browser space easier for new challengers to enter
  • However, unlike other browsers, Brave does have a lock-in feature: BAT. The more users use Brave, the more BAT advertisers will pay users to see their ads; the more advertisers pay, the more users will earn. So once Brave establishes itself as “the browser that rewards people for using it”, it will be very difficult for new challenger browsers to fund comparable rewards for their users

 

Basic Attention Tokens bring users and advertisers together. Advertisers benefit by discreetly showing users targeted ads; users get paid for their time and can find relevant products, services and opportunities. It may be a very basic way to facilitate value creation, but keeping things simple often increases the chance of successful execution.

 

Most importantly: one of the most exciting aspects of BAT is how easy and cheap it is to receive newly issued tokens. All you need to do to earn BAT for free is install the Brave Browser on your computer and sign up to its rewards program. From that point on, you get paid BAT just for browsing.

BAT is probably as close as any well-established cryptocurrency token comes to paying a Basic Income that practically anyone can receive.

 

This would make a world that primarily uses BAT dramatically more inclusive than a world that primarily uses bitcoin. The only people bitcoin rewards are first adopters and those technically savvy enough to run complex bitcoin-mining hardware at a profit. The rest of the world, people who come late to the party (pensioners, for example), are left with zero. With BAT, however, anyone with a computer and a browser can earn. So a future where the use of BAT is widespread is a future where the distribution of income is also widespread.

Despite the fact that BAT tokens are utility tokens whose primary purpose is to enable advertisers to compensate users for viewing ads, there will likely be advantages to early adopters of the Brave Browser’s rewards programme if it takes off. The total supply of BAT is fixed. Around $205 billion was spent on digital advertising in 2017. If we (generously) assume the whole ~$200 billion were used to buy ads with BAT every year, and consider an acceptable income yield to be 5%, that would set a ballpark upper limit on the market capitalization of BAT at 20 times $200 billion, or $4 trillion. Of course, that assumes all the money advertisers spend on advertising goes on buying BAT, which is obviously not true: much of it goes on market research, content generation and other promotional channels. So a more reasonable optimistic market capitalization would be a small fraction of this $4 trillion figure, less than 10% of it. Nevertheless, when you consider that the market capitalization of BAT, as of writing this article, is less than $1 billion, that still leaves plenty of room for future price appreciation.
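
For clarity, here is the same back-of-envelope arithmetic in a few lines of Python; the figures are the generous assumptions above, not forecasts.

annual_ad_spend = 200e9          # assume ~$200bn/year of ad spend routed through BAT
acceptable_yield = 0.05          # treat 5% as an acceptable income yield
cap_ceiling = annual_ad_spend / acceptable_yield      # = 20x spend = $4 trillion
optimistic_fraction = 0.10       # only a small share of ad budgets buys media

print(cap_ceiling)                        # 4e12 -> $4tn theoretical ceiling
print(cap_ceiling * optimistic_fraction)  # 4e11 -> ~$400bn "optimistic" cap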

Nothing in the altcoin world is guaranteed, so buying BAT, like any other altcoin, involves considerable financial risk. The good news is that you can get BAT without spending a penny. All you need to do is download the Brave Browser and sign up to its rewards programme; BAT tokens will then automatically accrue and be transferred into your Uphold account, once you set one up… just for browsing!

Content creators can accept BAT tips from Brave Browser users on their websites, as shown in this tutorial, as well as for their tweets.

Cryptocurrency is very volatile, and many new retail investors get scammed or lose their shirts. However, the potential upside of the right cryptocurrencies and tokens is very high. For those unfamiliar with cryptocurrencies and crypto-tokens, the Brave Rewards program is probably the easiest way to get exposure to the upside potential of cryptocurrency while avoiding any downside risk. And, unlike bitcoin, the value of BAT is based upon a real asset of indisputable value: attention, which does not depend upon finding a greater fool.

 

John

Filed Under: Blog, Economics Tagged With: BAT, Bitcoin Real Value, BTC, Value Behind Cryptocurrency

How To Print Money Without Causing Inflation

February 3, 2021 by admin

The central banks of the world are printing money like confetti.

While this has had a strongly inflationary effect upon financial assets, the degree to which it has caused inflation in more everyday goods and services has, thus far, been more modest. This may not last: it could rapidly degenerate into 1970s-style high inflation, or even skyrocketing hyperinflation. Unfortunately, the alternative of not printing money, or printing less, would cause the entire private financial system to collapse and, unless the central bank directly held all deposits for both private individuals and businesses (which it currently does not), the resulting financial contagion would be economically catastrophic.

There is, however, a way out: a way to keep the system liquid, continue to fund important welfare programs at a time when they are needed more than ever, and, at the same time, keep inflation reined in.

Many people are vaguely nervous that our financial system might soon fail, without knowing exactly what the risk is or how to avoid it. The good news is that it can be avoided.

This article explains the exact nature of the risks of both inflation, on the one side, and financial collapse, on the other, and how to avoid both outcomes.

 

Understanding Inflation

 

To understand why the huge increases in the money supply have not yet led to proportional increases in the prices of many products, and also the potential danger of disastrous future hyperinflation, it is important to understand the relationship between the money supply, the velocity of money, the price of goods and the quantity of goods transacted in the economy each year. The relationship of prices to money supply is described by this equation:

MV = PT

Where

M = money supply

V = money velocity

P = price

T = total underlying value of goods transacted

Put simply, if the amount of a given good available for purchase remains constant, the price of each unit of the good is simply proportional to the total amount spent on that good. The money supply is the total amount of money in existence, while the money velocity is the number of times each unit of money gets spent per unit time. Money supply times money velocity equals total spending per unit time.
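
As a quick sanity check on the equation of exchange, here is the identity in code, with made-up numbers purely for illustration:

# MV = PT rearranged: with velocity and transacted volume held fixed,
# the price level scales one-for-one with the money supply.
def price_level(M: float, V: float, T: float) -> float:
    return M * V / T

baseline = price_level(M=2e12, V=5.0, T=1e13)   # -> 1.0
doubled  = price_level(M=4e12, V=5.0, T=1e13)   # -> 2.0: doubling M doubles P
print(baseline, doubled)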

Where does money come from?

Under our current debt-based system, all money is initially lent into existence: to individuals to boost consumption, to businesses to purchase capital and cover the other lead costs of producing saleable products, or to governments to facilitate public spending in excess of their tax intake.

Mike Reiss has put together a very good video explaining how the balance of money creation and money destruction combines to set the overall money supply.

When these loans are paid back, money is effectively destroyed.

Our current banking system needs a continuously increasing money supply to avoid catastrophic collapse

The root cause is that 97% of all money is held in deposit accounts at private banks and, furthermore, that private banks are far and away the most convenient way for transacting parties to make large payments to each other. So if all the private banks suddenly failed, the results would be catastrophic: most people would have no liquid wealth at all (as all deposits would be lost), large businesses would have no means of paying each other, most long supply chains would disintegrate, and the production of vital goods and services (including food and utilities) would grind to a halt.

So at the moment, we cannot afford to let private banks fail in large numbers.

And to avoid financial contagion and a mass failure of private banks, the money supply must continually increase.

The reason is that, if the amount of money a bank owes to depositors exceeds the amount it can reasonably expect its debtors to repay in loans with interest, the bank will go bust.

And one bank going bust greatly increases the chances of more banks going bust. Indeed, if even a moderate fraction of banks go bust, the entire system will collapse!

A contraction in the money supply leads to a sudden and disastrous knock-on effect in the banking system because:

  • The quantity of outstanding loans is finely balanced with the quantity of deposits
  • Both the default rate and the value of the assets against which loans are secured depend on, and affect, the degree of spending in the economy…

…so fewer loans mean less consumer spending, which in turn causes businesses to become insolvent and lay off workers. This leads to reduced wages and asset-price depreciation, which produces more debt defaults in a vicious cycle that, if allowed to perpetuate, would completely collapse the banking system and cause a corresponding catastrophic failure across the entire economy.

As a side note, the reason the failure of private banks and financial institutions would be catastrophic is the heavy reliance of businesses and individuals on these institutions to hold and transact liquid wealth. If the central bank provided deposit accounts and money-transfer services to businesses and individuals directly, instead of through private banks, then a contraction of the money supply (and the corresponding bankruptcy of many private banks) would have far less serious ramifications for the wider economy.

If a total financial collapse resulting from the simultaneous failure of all the private banks can happen so easily, why hasn’t it happened already?

The reason a total financial collapse has not yet occurred is that it is very easy to increase the money supply and, hence, prevent the complete collapse of our private banking system. The central bank can simply loan money to private banks, who in turn loan it out to people and businesses who, through the multiplier effect, create even more money and economic activity. Failing that, if no one is looking for loans or has a sound business plan, due to general economic pessimism, the central bank can buy bonds from the government, which can spend that money into the general money supply through welfare, the expansion of public programs and services, or subsidies to private institutions.

So, since increasing the money supply is as simple as adding a few extra zeroes, it’s easy to ensure the money supply keeps going up, the banks remain solvent and the entire financial system doesn’t collapse.

The catch is inflation. If you increase the money supply and the velocity of money starts to rise, the prices of goods and services, and the corresponding cost of living, go up. As long as this is gradual, it’s not a big problem. But if the rate of inflation starts to worry the general public, it can rapidly become a vicious cycle: people spend their money as quickly as possible for fear that the longer they hold onto it, the less it will be worth. This increases the velocity of money still further, which makes inflation accelerate. As the cost of living climbs, people who can’t pay their rent or buy food start panicking and demand higher wages from their employers or more welfare from the government, causing yet more inflation, a spiral that can, in extreme circumstances, lead to a Zimbabwe-style hyperinflationary currency collapse.

Fortunately, central banks possess an instrument for pulling the brakes on money velocity: interest rates.

If money velocity starts to heat up, central banks can dampen it down by raising interest rates. Raising rates increases the price of loans, thereby reducing the demand for and volume of loans (and, hence, money creation) in the economy. It also puts the brakes on government spending and tends to deflate property prices by making renting preferable to purchasing a house and servicing a mortgage. Higher interest rates also encourage people to hold savings as cash deposits rather than spending them on things like luxuries, property or equities.

So, if the central bank can increase the money supply whenever it wants, by purchasing large amounts of debt to encourage both private lending and government spending, and slow down inflation whenever it wants, by raising interest rates, then maintaining price stability sounds straightforward and doable. So what’s the problem?

The problem is that when government debt to GDP rises above a certain threshold, the central bank can no longer raise interest rates without completely wrecking the government’s budget. When government debt is through the roof, the slightest increase in interest rates forces a government to simultaneously slash public spending and raise taxes just to cover interest payments. This cripples economic productivity, through higher taxes, while also denying critical benefits and public services, such as healthcare, to the most needy and vulnerable in society. The net result, in practice, is that once government debt to GDP rises above a certain threshold (about 77%, according to the World Bank), debt levels become a serious drag on economic growth. Beyond this threshold, raising interest rates becomes less and less practical for a central bank.

And without the ability to raise interest rates, the central bank has no conventional monetary tools to rein in inflation. Eventually, inflation will raise nominal GDP relative to the nominal national debt, at which point it will once more become possible to raise interest rates to rein in further inflation. Whether this turns out to be a 1970s-style 70% currency devaluation that stabilises afterwards, or a more catastrophic Zimbabwe-style currency collapse, depends on social factors that are hard to predict. Will people panic? Will there be mass unrest requiring vastly more government spending at a time when taxes are harder than ever to raise? (Remember, during uprisings and insurrections a significant fraction of the population refuses to pay tax.)

If we look at the historical federal debt-to-GDP ratio, there is reason to fear that things may turn out worse than in the 1970s. Back then, federal debt was 20-30% of GDP, which made it feasible to curb inflation by raising interest rates to emergency levels. Today, federal debt to GDP is many times higher.
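
A rough illustration of why this matters (stylised numbers, not official statistics): the government’s interest bill is roughly the debt-to-GDP ratio times the interest rate, so an emergency rate that was bearable at 1970s debt levels becomes crushing at today’s.

# Interest bill as a share of GDP ~= (debt / GDP) x interest rate.
def interest_burden(debt_to_gdp: float, rate: float) -> float:
    return debt_to_gdp * rate

print(interest_burden(0.25, 0.15))  # 1970s-style: ~3.8% of GDP at 15% emergency rates
print(interest_burden(1.20, 0.15))  # high-debt era: 18% of GDP at the same rates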

If you firmly believe that the central banks of the world have everything under control and that there is no danger whatsoever of hyperinflation, then you must reconcile that belief with the historical fact that numerous fiat currencies have collapsed, or experienced severe hyperinflation, in the past. Given that no government ever wants full-blown hyperinflation, we must conclude that, even if hyperinflation is always avoidable, there are certain conditions in which avoiding it is at least very difficult and by no means straightforward.

To picture our current situation, compare the inflation rate to a horse, the central bank to its rider, raising interest rates to the reins, and QE and other stimulus measures to attempts to get the horse to move faster. For the banking system to hold together, our inflation-rate horse has to plod steadily forward, neither too fast nor too slow. In our current situation, with the COVID lockdowns, the horse has stopped; the rider is squeezing his calves against the horse’s side saying “giddy up, horsey!” but our inflation-rate horse is still obstinately not moving. In desperation, the rider starts whipping the horse, thumping its backside and shouting “I SAID GIDDY UP, HORSEY! COME ON! GET A MOVE ON!” But there’s a problem: the rider has no reins to hold onto (since high levels of government debt prohibit any significant interest-rate hikes), so if the horse suddenly breaks into a full gallop and throws the rider off, there’s very little the rider can do to slow the horse down…

…or is there?

 

Consumption Quotas And Progressive Consumption Tax: A Silver Bullet For Inflation

 

There is a much more direct and reliable way to prevent, or moderate, inflation than fiddling with interest rates, one that works robustly irrespective of the debt-to-GDP ratio or the amount of money-printing required:

A spending limit, or consumption quota.

Inflation is ultimately caused by spending. Institute a hard limit on spending and you will produce a correspondingly hard limit on inflation.

To understand why, let’s go back to our price equation:

MV = PT

Spending or consumption per unit time, C, can be expressed as:

MV = C

The total money supply multiplied by the number of times money changes hands per unit time is the amount of money that gets spent per unit time.

Substituting into the first equation and making P the subject of the formula yields:

P = C/T

Where,

C = Money spent on consumption per unit time

P = price of consumer goods

T = total underlying value of consumer goods transacted

Limit the amount people can spend on consumer goods per unit time and you limit the price inflation of those consumer goods (assuming the value of goods transacted remains constant).

It’s that simple.

And this logic will hold irrespective of the quantity of money that gets printed.
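
In code, a hard quota acts directly on the numerator of P = C/T (illustrative magnitudes only):

# P = C/T: a hard quota on aggregate spending C caps the price level P directly,
# no matter how much money has been printed.
def price(C: float, T: float) -> float:
    return C / T

T = 1e12                              # value of consumer goods transacted (held constant)
unconstrained = price(1.2e12, T)      # spending up 20% -> 20% inflation
quota = 1.0e12                        # hard consumption quota
constrained = price(min(1.2e12, quota), T)  # quota binds -> zero inflation
print(unconstrained, constrained)     # 1.2 vs 1.0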

You can even apply different spending limits to different classes of goods. Limit the amount of money people can spend on cars, but not the amount they spend on helicopters, and you will get inflation in the price of helicopters but no inflation in the price of cars.

Money velocity is a wild card: while interest rates can influence it, they cannot directly control it. Once money is released into the system, the number of times it gets re-used is organically determined by human behaviour. Interest rates can influence people’s decisions, but they cannot determine them. Spending limits, on the other hand, can place hard, reliable limits on inflation rates, even in environments where a great deal of money is printed, perhaps as a response to high levels of unemployment or some other national emergency.

Another feasible alternative to a hard consumption quota that could effectively curb inflation is a steeply progressive consumption tax: a consumption-tax-free allowance, with an 85-90% tax on marginal consumption above it. While existing taxes like VAT and sales tax are both forms of consumption tax, they are flat rather than progressive. We don’t want to discourage poorer members of society from buying what they need, but we do want to discourage wealthier members of society from bidding up the price of necessities by buying more than their fair share of them. A steeply banded consumption tax, or a hard consumption quota, would achieve just that, while having less of an impact on productivity, and discouraging fewer people from working, than, say, a similar level of income tax.
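
As a minimal sketch of the calculation, the rate below sits in the 85-90% band just mentioned, but the tax-free allowance is a hypothetical figure invented purely for illustration:

def progressive_consumption_tax(annual_spend: float,
                                allowance: float = 30_000.0,    # hypothetical tax-free allowance
                                marginal_rate: float = 0.875):  # mid-point of the 85-90% band
    """Tax nothing up to the allowance, then tax marginal spending steeply."""
    return max(0.0, annual_spend - allowance) * marginal_rate

print(progressive_consumption_tax(25_000))   # 0.0: below the allowance, no tax
print(progressive_consumption_tax(100_000))  # 61,250.0: steep tax on the excess 70,000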

With progressive consumption taxes and consumption quotas imposing hard (or slightly softer) spending limits on various goods, a government could print more or less as much money as it needs to provide liquidity and fund any welfare program or public service it needs to fund, without having to worry about inflation.

In times of scarcity, such as during war, the logical response is to ration out scarce resources equitably to ensure that everyone has enough while rewarding those who work exceptionally hard with a little bit more, but not so much as to deny others the basic means they need to live a tolerable life. If climate change, soil erosion, groundwater depletion, and other forms of environmental deterioration will mean that future generations will have to exist in a world where resources are scarcer and our current assumptions of continuous economic growth may no longer be possible, then rationing out those scarce resources to ensure that everyone has sufficient is the only sensible and humane approach to take.

 

The Economic Function Of Inflation

 

Although widespread, across-the-board high inflation does tremendous damage to the economy and society, price inflation in specific classes of goods has a useful economic function: it makes the production of highly desirable goods in short supply highly profitable, and thus stimulates businesses to respond by increasing their supply.

For this reason, it might be desirable to only impose spending limits on certain classes of goods and not on others whose production needs scaling up.

 

The Amazing Potential of Central Bank Digital Currency

 

Even a few years ago, the practical implementation of a progressive consumption tax, let alone spending limits, or spending limits and consumption taxes that vary with the category of product purchased, would have been unenforceable. Until recently, there was no easy, convenient way to know what people spent their money on (quite apart from the personal privacy concerns such tracking raises).

However, central bank digital currencies could change all that. The capacity to automatically record where money is spent can be intrinsically incorporated into the ledger’s architecture. This makes spending limits, even category-dependent spending limits, very easy to enforce. Progressively banded consumption taxes could likewise be deducted automatically and conveniently, free of complex paperwork or onerous forms.
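
To make this concrete, here is a minimal sketch of how a CBDC ledger might enforce per-category annual spending quotas; every name and figure is a hypothetical illustration rather than a description of any real CBDC design:

from collections import defaultdict

# Hypothetical annual spending caps by category (illustrative figures).
CATEGORY_CAPS = {"petrol": 1_500.0, "restaurants": 4_000.0}

class CBDCAccount:
    def __init__(self, balance: float):
        self.balance = balance
        self.spent_by_category = defaultdict(float)

    def spend(self, amount: float, category: str) -> None:
        cap = CATEGORY_CAPS.get(category)  # uncapped categories return None
        if cap is not None and self.spent_by_category[category] + amount > cap:
            raise ValueError(f"annual {category!r} quota exceeded")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.spent_by_category[category] += amount

acct = CBDCAccount(balance=10_000.0)
acct.spend(1_400.0, "petrol")    # fine: within the petrol quota
# acct.spend(200.0, "petrol")    # would raise: quota exceeded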

Furthermore, if the process were fully automated, with no human in the loop, it might be possible to implement it while preserving anonymity, which would hopefully assuage privacy concerns.

The history of economic thought has been dominated by a tension between maximizing the efficiency of production and maximizing the efficiency of distribution. Price controls, and high income taxes that fund welfare, ensure that the goods society produces are distributed to those who most need them and who gain the most utility from them. However, these very same methods suppress the incentive to produce, and the proceeds of hard work, and thus reduce the overall availability of goods and services across society.

Conversely, in the absence of price controls and redistribution measures, such as welfare, there is a strong incentive to work, and the prices of goods in demand can rise freely and rapidly, thereby stimulating marginal production. The downside is that such price inflation, driven by spending from the better-off, can price poorer members of society out of the market, even for necessities such as food.

The key tension between progressivism and conservatism arises from the fact that efficiently distributing goods across society can suppress marginal production.

In many ways, central bank digital currencies could be regarded as the Holy Grail of economics, enabling prices to be held at affordable levels for those who need them without suppressing the marginal production of new goods and services. This might be achieved by imposing consumption limits on legacy production while exempting marginal production. For example: limit the amount of money people can spend eating out at restaurants, but exempt restaurants that opened within, say, the last two years. This would let marginal producers earn a price premium over legacy producers and get through the first few rocky years of starting up from scratch. Another example: limit the number of houses each person can purchase to one, or perhaps two, but exempt new-build houses from the limit. Or limit the amount of money people can spend on meat, but exempt farmers who consistently increase the size of their herds, and so on and so forth.
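
Extending the sketch above, the legacy-versus-marginal distinction amounts to one extra check before the cap is applied (the thresholds are again hypothetical):

# Exempt "marginal" producers from the quota, e.g. restaurants under two years old.
CATEGORY_CAPS = {"restaurants": 4_000.0}  # as in the sketch above (illustrative)
NEW_ENTRANT_YEARS = 2                     # hypothetical cutoff from the restaurant example

def effective_cap(category: str, vendor_age_years: float):
    # New entrants are uncapped, so they can earn a price premium over legacy producers.
    cap = CATEGORY_CAPS.get(category)
    if cap is not None and vendor_age_years < NEW_ENTRANT_YEARS:
        return None
    return cap

print(effective_cap("restaurants", 1.0))  # None: new restaurant, spending uncapped
print(effective_cap("restaurants", 5.0))  # 4000.0: legacy restaurant, quota applies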

You could even apply a green twist to selective spending limits. You could limit the amount each person can spend at the petrol station per year while exempting electricity consumed by EVs, or… even better… limit the amount people can spend on electricity (with larger limits than for petroleum), but exempt electricity produced from renewable sources. Or allow people who install insulation and heat pumps in their houses to own more than two houses and rent them out to tenants, etc., etc. All these measures would create a price premium for green products and services while at the same time ensuring that everyone, even the relatively poor, can access the affordable legacy goods and services they need up to their quota. When it comes to food, you could exempt foodstuffs like seaweed, which doesn’t require freshwater for cultivation, from food spending limits, or produce grown with no-till agriculture, and so on and so forth.

And because these spending limits allow money-printing without inflation, it would be fairly straightforward to fund a Basic Income sufficient to end poverty across the board while, at the same time, creating a price premium large enough to drive the rapid expansion of sustainable technology. Indeed, basic income itself alleviates poverty in the most sustainable way possible, by facilitating self-sufficiency and local economies and reducing the need to commute. Furthermore, central bank digital currencies could also promote local economies by allocating higher spending limits to goods purchased from local businesses.

By rationing legacy production while exempting sustainable technology, CBDCs can ensure that the green transition does not make essential services unaffordable for those on low incomes, generate a price premium that facilitates the rapid expansion of sustainable technology without excessively restricting the consumption of people with greater means, and enable a Universal Basic Income to be funded, all at the same time!

 

A Lack of Public Trust?

 

You may be aware of the many videos circulating around the internet (especially from Austrian School thinkers) that present central bank digital currency and universal basic income in Orwellian, almost apocalyptic, dystopian terms, suggesting that digital currencies would give central planners limitless power: power that could be used to deny loans to people who made politically incorrect social media posts, or who criticised the government or central bank in any way, while showering free money on politically correct people who toed the party line and praised the establishment.

In many respects, central bank digital currency is the financial equivalent of nuclear energy. It is an immensely powerful tool which, used correctly, could solve all our financial problems and simultaneously eliminate poverty while smoothing the transition to a sustainable future. However, that same tool (like nuclear weapons) could also have disastrous consequences and usher in social credit scores and totalitarian control. It is true that, in principle at least, one could use programmable digital currency to reward people who embraced a given ideology, religion or even race over less favoured groups. And, admittedly, the recent spate of events, such as a conservative commentator having his insurance cancelled over his social media posts, does not inspire great confidence in the political impartiality of our current financial system. However, it is hard to believe that such an architecture could be programmed without someone blowing the whistle.

The answer is not to reject Central Bank Digital Currencies, which hold so much positive potential, but to campaign vigorously to ensure that all the code determining how accounts get credited with money is open source and available for all to inspect.

While it’s important to ensure that powerful technologies are used responsibly, we should not refrain from developing technologies that could accomplish great good simply because they could also do great harm if abused.

Our current financial system is failing. It is extremely unstable and, unless it is replaced by something more robust before its complete collapse (which is bound to happen sooner or later), it will take the whole global economy down with it, with consequences too disastrous to fathom. That is what we face in the absence of structural reform.

The price of the paralysis, resulting from a lack of trust, can be disastrous.

By the mid-1980s, nuclear energy was on a roll. Between 1975 and 1985, France increased nuclear power from supplying less than 20% to over 75% of all its electricity, and in 1986 it connected a 1.2GW fast breeder reactor, Superphenix, to the grid. Nuclear energy was a mature, fast-growing technology, and there was every reason to believe that, over the next few decades, fast breeder reactors, while still in the prototype phase, would continue their rapid advances and come to supply most of the world’s electricity.

If such a future had materialized, there would be no problem with global warming, or climate change, today. Furthermore, spacecraft propelled by nuclear energy, which were being actively researched back in the 1960s, would have enabled us to establish manned outposts on Mars, Venus and the moons of Jupiter and Saturn.

We threw all that away because of public mistrust of nuclear energy; because of concern over nuclear weapons; because we didn’t trust our nuclear researchers to handle reactor waste responsibly, or to design reactors to be safe. Because of this distrust, ironically fomented largely by the “environmental” movement, FOUR DECADES that could have been spent decarbonizing the global economy were THROWN AWAY. The nuclear industry is now a shadow of its former glory: much of the expertise and experience the workforce developed building large reactor projects has been forgotten, an enormous fraction of its employees is approaching retirement, and we are caught in a mad, belated rush to roll out renewable energy as Australia burns. At this point, climate change is inevitable, indeed already happening, and our only choice is whether we want catastrophic climate change or just moderately disastrous climate change. Vast swathes of Canada’s boreal forests have been turned into black goop by our prolonged dependence on fossil fuels.

…and all because we didn’t trust nuclear scientists to do their job back in the 80s.

If we don’t reform the existing banking system soon, the ramifications for the global supply chain will be absolutely disastrous; indeed, it could precipitate the breakdown of civilization. By all means, let’s have public oversight over the use of CBDCs, but let us not reject this critical reform of global finance for no better reason than an intense distrust of the banking institutions that coordinate our global economy.

How can we have a civilization at all if no one trusts anyone else?

 

John

Filed Under: Blog, Economics Tagged With: banking crisis, economic collapse, inflation, MMT

Why The Universe Has The Order It Does

December 26, 2020 by admin

Careful experimental investigations have revealed a stable, underlying order to the principles which govern the motion of matter and energy in our universe, at least on the length scales on which we go about our daily lives. The Law of Gravity, Coulomb’s Law and Maxwell’s equations are just three examples of stable physical principles which we have discovered, through experiment and observation, to govern the interactions between different bodies of matter and fields of energy.

In this article we take a step back and ask the question:

Why are The Laws of Physics the way they are?

If we believe in a random universe, the kneejerk response might be that there doesn’t need to be a reason. But, as I will demonstrate in this article, the anthropic principle can be deployed very effectively to explain why many physical laws are as they are.

 

The Anthropic Principle

 

The Anthropic principle is very simple:

The environment that any conscious entity finds itself existing in must be somewhere it is favourable – or at least possible – for it to exist in.

The environment where a conscious entity developed must be somewhere it is possible for conscious entities to develop in.

Although this may seem a self-evident fact, when it is rigorously considered it can be used to arrive at far-reaching conclusions.

For the anthropic principle to make any sense with respect to the laws of physics governing our universe, we must either appeal to:

  1. An intelligent designer
  2. A multiverse in which the laws of physics governing the various universes vary wildly (the Ultimate Ensemble)

While an intelligent designer, who deliberately creates our one universe in such a manner as to harbour life, may on the surface seem to make sense, it answers one question (Why has the universe come to be the way it is?) by opening up an even bigger one (How did the intelligent designer come into existence, and how did He come to be the way He is?)

The multiverse interpretation of the anthropic principle rests on the idea that there are many, many, many universes (where a “universe” describes everything that emerges from a given big bang), formed from many different big bangs, all with different laws of physics; that the overwhelming majority of them harbour no sentient life; but that, being sentient, we necessarily live in one of the few universes that does harbour conscious, sentient living creatures.

These multiple universes don’t interact with each other, whether due to physical separation over vast distances, separation through dimensions which the interactive forces of their constituent particles do not penetrate, or perhaps because the particles of each universe carry separate “charge” and “force” categories, interacting only with particles from their own universe and having no effect on the motion of particles from any other, and hence being, to all intents and purposes, invisible to them.

Could existence really be that vast? The observable universe is 46 billion light years in diameter and the total universe may go on forever – at least in the four dimensions of spacetime that we are familiar with. Is it really possible that, on top of this, our one universe is just one member of a vast number of multiverses?

If we are to meaningfully apply the anthropic principle to the laws of physics while rejecting the existence of an intelligent designer (which raises as many questions as it answers), then we must accept the existence of a multiverse. This is because the laws of physics are uniform throughout our universe: not only does Noether’s theorem, combined with the observed conservation of linear momentum, demand this (as will be discussed later), but the fact that the spectra of distant galaxies billions of light years away show the line-emission patterns of the same elements we observe in the laboratory implies that the laws of physics are uniform across our universe.

This implies that we need a multiverse if the anthropic principle is to “pick out” the subset of physical laws conducive to the evolution of life…

Is there a strong case for saying the laws of physics can be explained with the anthropic principle?

Read on to find out…

 

Evolution And Energy Conservation

 

The Law of Conservation of Energy states that the total energy in an isolated system remains constant over time. In other words, that energy can neither be created nor destroyed but merely changed from one form into another.

Noether’s theorem shows that energy is conserved in physical systems whose laws do not vary with time.

Imagine we didn’t know that nuclear energy existed and thought the only forms of energy were potential energy, kinetic energy, heat energy and chemical energy. With only this knowledge, we then observe the heat energy of a radioactive isotope spontaneously increasing.

We can either say:

1) This disproves the law of conservation of energy

or we can say

2) We have found a new source of energy

And then argue that the increase in heat energy coincides exactly with the reduction in nuclear energy and so energy is conserved.
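
In symbols, using the same plain notation as the other equations in this article, the bookkeeping move is:

E_total = E_kinetic + E_potential + E_heat + E_chemical + E_nuclear

ΔE_heat = −ΔE_nuclear, and therefore ΔE_total = 0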

What Noether’s theorem implies is that, so long as the laws of physics remain constant with time, we can always maintain that energy is conserved by introducing new forms of energy to balance the books whenever energy appears to be created, and that this approach is sound so long as we live in a repeatable universe where a given cause always yields the same effect.

As an interesting aside, Noether’s theorem provides a bridge between physics and philosophy by reframing Hume’s problem of induction in terms of energy conservation.

At the heart of Hume’s problem of induction was his conclusion that there is no a priori logical reason to suppose that, just because an event occurred in a particular manner before, things should occur in a similar manner in the future. In other words, Hume questioned the rational basis for assuming the repeatability of anything, including the most fundamental physical processes.

…and in the absence of any presumption of any repeatability in anything at all, nothing can be predicted…

“For effect is totally different from cause, and consequently can never be discovered in it. Motion in the second billiard ball is a quite distinct event from motion in the first; nor is there anything in the one to suggest the smallest hint of the other. A stone or a piece of metal raised into the air and left without any support immediately falls; but to consider the matter a priori, is there anything we can discover in the situation which can beget the idea of a downward, rather than an upward, or any other motion in the stone or metal?” – David Hume (Limits of Metaphysical Speculation)

Framed in the terminology of physics, Hume’s problem of induction can be articulated as:

There is no fundamental a priori logical reason to believe that energy should be conserved.

Or is there?

Evolution is the process whereby complex structures, which are capable of performing complex functions, develop. In the absence of evolutionary processes, it is almost inconceivable that something as sophisticated as human consciousness could exist.

At its core, evolution is trial and error. You have a self-replicating information-storage medium, DNA, that builds and modifies living creatures. Random modifications sometimes produce creatures that are better at surviving and reproducing, and more often produce creatures that are worse. However, the creatures that are better at reproducing make more of themselves, so their prevalence ends up far greater than their chance of emerging in the first place.

Evolution could be regarded as a gradual unconscious “learning process” where successful reproduction and death, gradually “teaches” the various evolving germlines how to produce phenotypes that are better at surviving and reproducing.

A key point is that each incremental change in structure, from generation to generation, is minuscule compared to the legacy information each new generation inherits from the previous one. This legacy genome, which each new generation of living creatures inherits, represents information painstakingly gleaned from hundreds of millions of years of previous trial and error.

In order for evolution to advance and build more complex and capable creatures as time goes by, the “lessons” that germ lines previously “learned” through the process of natural selection must remain valid, to some extent, as time progresses, so that phenotypes can advance and further refine themselves.

If the basic laws of physics constantly changed, then all the information encoded in our DNA about how to build successful cells (let alone multicellular animals) would become obsolete at a rate too fast for evolution to refine advanced multicellular organisms. That’s assuming life could sustain itself in any form at all: DNA itself might suddenly become impossible to form chemically. If the laws of physics constantly changed, the evolution of complex, advanced organisms would be like someone trying to build a skyscraper while someone else routinely dynamited its foundations.

The refinement and the advancement of phenotypes can only occur if previous functions “discovered” by evolution remain valid over evolutionary timescales.

Hence evolution, and the corresponding development of conscious structures with advanced cognitive functions, can only occur in a universe where the laws of physics remain stable over evolutionary timescales and, hence, where energy is conserved over evolutionary timescales.

Einstein once said:

“The eternally incomprehensible thing about the world is its comprehensibility”

It should now be clear that the world is comprehensible because, at the most fundamental level, the process of learning (comprehension) and the process of evolution are basically the same, in that both involve the accumulation of information. Thus, lifeforms could only evolve in a comprehensible universe where such accumulation is possible. The physicist James B. Hartle has also made this case.

Which, as has been previously mentioned, implies that energy is conserved, as Noether’s theorem proves.

 

Entropy

 

The second law of thermodynamics, that an isolated system left to itself becomes more uniform and homogeneous as time goes by (which, applied to energy, produces thermal equilibrium), is pretty much logically unavoidable. Furthermore, the underlying principle of repeatability, which the previous section argued is necessary for evolution, implies an arrow of time: that cause A reliably and repeatably gives rise to effect B implies that, as time goes by, there is a distinct preference for A to convert into B and not the reverse.

In the absence of irreversible processes, the world would be far less predictable (assuming it was predictable at all).

The ultimate purpose of all cognition (which is intimately linked to conscious experience) is to act on the world in order to realise certain preferred outcomes that would be unlikely to occur in the absence of such actions. All action requires work, and all work is ultimately generated by some kind of irreversible physical process.

An irreversible process can be regarded as a preference of our physical universe for converting state A into state B over converting state B into state A.

Perhaps the relationship between preference and entropy could even be regarded as the most fundamental way that conscious entities “negotiate” with the physical universe in which they exist to get what they want. (e.g. – “Hey Universe! I’ll let you convert diesel into water and CO2 if you let me build a skyscraper”)

The question is: In the absence of irreversible physical processes, could agents with distinct preferences exist, and – if so – what would be the underlying physical mechanism that would enable them to facilitate one preference over another?

Furthermore: In the absence of preferences and desires, could consciousness as we know it exist?

If not, then conscious creatures can only live in a universe where the second law of thermodynamics applies.

 

Conservation Of Linear And Angular Momentum

 

At the most fundamental level, structures are complex, somewhat ordered arrangements of matter in space. We often talk about structures of language, structures of computer code, and so on, and while these structures seem highly abstract and detached from space, at the end of the day, if physical humans didn’t exist, language wouldn’t exist either; and if physical computers didn’t exist in physical space, there would be nothing to execute the computer code.

In this sense:

All abstract, non-spatial structures, rely on the existence of spatial structures.

If consciousness as we know it ultimately relates to agency, in the sense that we think in order to do, and if all conscious entities are either spatial structures of some sort or depend on spatial structures of some sort (as a simulation depends on the spatial existence of a computer), then it seems inconceivable that a conscious spatial entity could do anything in the world without moving. All action ultimately arises from spatial movement.

If we assume that complex conscious agents:

  • Must move information to act upon the world (even plants produce seeds that move)
  • Their structural integrity and function depends on certain relationships of cause and effect remaining constant (i.e. to live, biochemical processes must occur predictably)

Then we arrive at the conclusion that, in any universe that harbours complex conscious agents, the laws of physics must remain constant in space as well as time.

And since Noether’s theorem proves that momentum must be conserved in any universe where the laws of physics are constant across space, we must conclude that:

Momentum must be conserved in any universe inhabited by conscious agents capable of action.

In our case, if the laws of physics changed even slightly as we moved around, our finely-tuned, highly complex biochemistry would cease to function normally and we would die (or be reduced to a simpler, unconscious form of matter, as all the functional value gleaned from hundreds of millions of years of trial and error would be erased). The same statement applies as much to a computer as to a living organism: the workings of computers are finely tuned to the energy levels of semiconductors, and if the laws of physics changed, even slightly, computers would also be rendered functionless.

Extrapolating this principle to angular momentum is simple: any complex action that does not ultimately destroy a complex structure requires rotation.

Without rotation, a complex structure can only go forward and backward, like a bullet, with the entire structure frozen in place. The only alternative is that the structure effectively explodes: if the components of a structure move on divergent trajectories, then unless they turn around at some point, the structure will become progressively more tenuous until it is a cloud of fragments.

Hence:

If we assume that complex conscious agents:

  • Must rotate parts of their body to engage in complex activities
  • Their structural integrity and function depends on certain relationships of cause and effect remaining constant (i.e. to live, biochemical processes must occur predictably)

Then we arrive at the conclusion that, in any universe that harbours complex conscious agents, the laws of physics must be rotationally invariant.

In other words, the fundamental physical laws that profoundly affect the biochemical processes in your body can’t change just because you’ve turned around.

Again:

Since Noether’s theorem proves that angular momentum must be conserved in any universe where the laws of physics are rotationally invariant, we must conclude that:

Angular momentum must be conserved in any universe inhabited by conscious agents capable of action.

Anyone unconvinced by the argument that rotation is necessary for complex action should consider that rotation is also necessary for orbits, which in turn are necessary for complex stable structures – as I will explain later.

 

Gravity And Electromagnetism

 

Of the four fundamental forces, I will refrain from discussing the strong and weak forces, as they are both complicated and short-range. Instead, I will limit myself to explaining why gravity and the electromagnetic force have the form they do, as these are the only two forces whose spatial effects extend across the length scales over which living processes occur.

The mathematical form of Newton’s Universal Law of Gravitation is:

F = GMm/r^2

Where:

F, is the force in Newtons

M, is the larger mass in kilograms

m, is the smaller mass in kilograms

r, is the distance separating the centres of mass of the two objects

G, is the gravitational constant

While the mathematical form of the law that governs the strength of the force between two electric charges in a vacuum, known as Coulomb’s Law, is:

F = Qq/(4πε₀r^2)

Where:

F, is the force in Newtons

Q, is the larger charge in Coulombs

q, is the smaller charge in Coulombs

r, is the distance separating the centres of charge of the two objects

1/(4πε₀), is a constant of interaction composed of other constants, where π is the familiar circle constant and ε₀ is the permittivity of free space

We might now ask:

  • Why do the equations governing these two forces have the form that they have?
  • Why are two completely distinct forces described by very similar equations?
  • Why is the force proportional to the product of both masses, as opposed to some other relation such as, say, the sum?
  • Why are both forces proportional to the inverse square of the distance between the objects in question?

The fact that gravitation and Coulomb’s Law are both proportional to the product of, respectively, the masses and the charges of the objects exerting force on each other follows from the repeatable nature of the universe – i.e. from the fact that identical charges or masses an identical distance from each other will exert an identical force on each other.

This logically follows from the fact that we inhabit a repeatable universe where similar initial conditions give rise to similar outcomes.

To understand how this follows logically, imagine two very light bags, a distance, r, from each other, where the size of each bag is negligibly small compared to the distance between the bags. We fill these bags with identical balls of mass m, where m is much heavier than the mass of the bags. Because the balls are identical – and because the universe is repeatable – each ball is attracted to each ball in the other bag by exactly the same force. Now draw a line linking each ball in bag A to each ball in bag B. If each line represents the force by which one ball attracts another, then the total number of lines, multiplied by the force per line, gives the total gravitational force of attraction between bag A and bag B.

It is clear that the total number of lines linking the 2 bags is equal to the product of the number of balls in each of the bags.

Number of lines = (Balls in Bag A) × (Balls in Bag B)

The gravitational force between two bags, expressed as a multiple of the force between two individual balls, scales with the product of the number of balls in each bag.

This logic explains equally well why the electromagnetic force between two objects is proportional to the product of their charges and why the gravitational force between two objects is proportional to the product of their masses.
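To make the counting argument concrete, here is a minimal Python sketch of my own (not part of the original argument), which sums the force over every pair of balls and confirms that the total scales with the product of the numbers of balls:

```python
# Toy sketch of the bags-of-balls counting argument (illustrative only).
# Assumes Newtonian gravity between identical point masses.

G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def pair_force(m, r):
    """Force between two identical balls of mass m separated by r."""
    return G * m * m / r**2

def bag_force(n_a, n_b, m, r):
    """Sum the force over every (ball in A, ball in B) pair.

    The bags are tiny compared to r, so every pair is treated as
    separated by the same distance r.
    """
    total = 0.0
    for _ in range(n_a):
        for _ in range(n_b):
            total += pair_force(m, r)
    return total

m, r = 1.0, 10.0
f_pair = pair_force(m, r)
f_bags = bag_force(3, 5, m, r)

# The total is (balls in A) x (balls in B) times the pair force,
# i.e. the force is proportional to the *product* of the masses.
assert abs(f_bags - 3 * 5 * f_pair) < 1e-20
print(f_bags / f_pair)  # ~15.0
```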

The fact that gravity and the electromagnetic force decrease with the square of the distance is less straightforward to explain. For a simple radiant body, the decrease in luminosity with the inverse square of the distance follows simply from the conservation of energy. If a point source radiates a fixed amount of energy uniformly in all directions, then the flow of energy through a given surface area, oriented normal to the direction of the flux, varies inversely with the square of the distance, simply because the area of the spherical shell surrounding the light source increases with the square of the distance and, hence, the fraction of the shell that a given surface area represents decreases with the inverse square of the distance.
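Here is a similarly minimal sketch of that energy-conservation argument, assuming nothing beyond a point source of power P and the surface area formula for a sphere:

```python
import math

# Sketch: a point source radiating power P spreads that power evenly over
# a spherical shell of area 4*pi*r^2, so the flux through a fixed patch
# of area falls off as 1/r^2.

P = 100.0          # total radiated power, watts (arbitrary)
patch_area = 0.01  # fixed receiving area, m^2

for r in (1.0, 2.0, 4.0):
    shell_area = 4 * math.pi * r**2
    flux = P / shell_area         # power per unit area at radius r
    received = flux * patch_area  # power through the fixed patch
    print(r, received)

# Doubling r quarters the received power: an inverse-square law,
# forced by nothing more than energy conservation and geometry.
```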

Gravitation and the electrostatic force do not radiate net energy. However, all force interactions are mediated through carrier particles, so if we imagine an object emitting virtual photons (the carrier particle for the electromagnetic force) or gravitons (the hypothesized carrier particle for gravity) evenly in all directions, this too would produce an inverse-square law. And now we can invoke rotational symmetry and the law of conservation of angular momentum, from earlier in this article, to guarantee that there is no angular dependence to either of these forces.

 

Why Space Has 3 Dimensions

 

I will finish this article with an explanation as to why space has three dimensions. When I say “space” I mean the theatre in which life takes place. The anthropic principle can only be applied to the framework in which living, conscious entities develop and exist. Because of this, the extra hidden dimensions postulated by some theories – wrapped up tightly at Planck length scales that are too small to impact anything we do or observe – have no bearing on this matter.

The central importance of the orbit to the existence of life is a key fact to be aware of when considering the underlying reason why space is 3-dimensional.

  1. For a spatial structure to exist, there must be some stable relationship between the different spatial components from which it is composed. (And all information storage ultimately depends, in some way, on the stability of a spatial structure which serves as a storage medium.)
  2. For a spatial structure to be created, it must also be possible to adjust the relationship between the spatial components of adjacent structures. (And replication/reproduction necessarily requires the ability to change the physical universe.)

You may be interested to know that, for a universe with a whole number of dimensions, conditions 1) and 2) are both satisfied only in a 3-dimensional universe.

The importance of the orbit is that it imposes a stable relationship between the particles orbiting each other. It allows the stable formation of atoms, which can then have net dipoles and stick to each other with hydrogen bonds, ionic bonds or Van der Waals forces. None of these higher-order chemical effects could take place without atoms that are mostly neutral, but which “stick” to each other if they get close enough at a low enough temperature.

I will refrain from trying to derive things like the Pauli Exclusion principle, or quantized energy levels from the anthropic principle! I don’t know if this is possible, but, if it is, I’m personally not smart enough to do it!

Suffice it to say that a balance between attraction and repulsion, on which solid spatial structures depend, requires a stable spatial relationship between two different particles – a stable relationship that can only be maintained by one particle orbiting the other.

From straightforward geometric considerations, the acceleration, a, associated with circular motion is:

a = v^2/r

Where:

a, is the centripetal acceleration

v, is the velocity

r, is the orbital radius

Given that the formula for angular momentum, L, is:

L = mvr

Where, m, is the smaller, orbiting mass (we assume a small mass orbits a much larger one).

Substituting v = L/(mr) into the expression for acceleration gives:

a = L^2/(m^2 r^3)

And since:

F = ma

Then we can express the force required to hold a circular orbit in terms of angular momentum, mass and radius:

F = L^2/(m r^3)

Let’s substitute the force of gravity in for F, although the electromagnetic force would work just as well, since the point of this exercise is to express the angular momentum of a stable orbit in terms of the orbital radius. The expression for gravity must first be generalized to an N-dimensional universe; assuming conservation of flux yields:

F = GMm/r^(N-1)

Where, N, is the number of spatial dimensions.

Substituting for F on both sides and rearranging yields:

L^2 = GMm^2 r^(4-N)

If space has 3 or fewer dimensions, angular momentum increases with orbital radius; if space had 4 dimensions, angular momentum would be constant with orbital radius; and if space had 5 or more dimensions, angular momentum would decrease as the orbital radius increased.
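As a quick numerical sanity check of this scaling (a sketch of my own, using the relation L^2 = GMm^2 r^(4-N) derived above), we can compare the circular-orbit angular momentum at two radii for different numbers of dimensions:

```python
import math

# Sketch: how circular-orbit angular momentum scales with radius in N dimensions,
# using L^2 = G*M*m^2 * r^(4-N). Constants are set to 1 since only scaling matters.
G = M = m = 1.0

def orbital_L(r, n_dims):
    """Angular momentum of a circular orbit of radius r in n_dims spatial dimensions."""
    return math.sqrt(G * M * m**2 * r**(4 - n_dims))

for n in (2, 3, 4, 5):
    ratio = orbital_L(2.0, n) / orbital_L(1.0, n)
    print(n, round(ratio, 3))

# Output: N=2 and N=3 -> ratio > 1 (L grows with radius),
#         N=4 -> ratio == 1 (L independent of radius),
#         N=5 -> ratio < 1 (L shrinks with radius) -- the unstable case.
```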

Orbital energy, as a function of radius, is the integral of force with respect to distance:

E(r) = -GMm/((N-2) r^(N-2)) + C   (for N ≠ 2)

Again, N, is the number of spatial dimensions in the universe.

Note that for a 2-dimensional universe the integral is logarithmic:

E(r) = GMm ln(r) + C

We can see from this that, in a universe with 2 dimensions or fewer, as the orbital radius approaches infinity, the orbital energy also approaches infinity. In other words, in a universe with 2 dimensions or fewer, there is no such thing as escape velocity. One particle can never gain enough energy to escape the orbit of the particle it is orbiting!

Thus, in a universe with 2 or fewer dimensions, condition 2) – the reconfiguration of matter and, hence, the ability of structures to reproduce themselves – is not satisfied.

When space has three or more dimensions, however, the orbital energy approaches a finite limit at infinite radius. Hence, in a universe with 3 or more dimensions, each orbit has a well-defined escape energy, or escape velocity, which, if reached, allows the smaller orbiting particle to escape the orbit of the larger particle.
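To make the contrast concrete, here is a small numerical sketch of my own that integrates the generalized force law outward from r = 1: in 3 dimensions the work needed to escape converges to a finite value, while in 2 dimensions it grows without bound:

```python
# Sketch: work needed to climb from r = 1 out to radius R against
# F = G*M*m / r^(N-1), integrated numerically with the trapezoidal rule.
G = M = m = 1.0

def work_to(R, n_dims, steps=100_000):
    dr = (R - 1.0) / steps
    total = 0.0
    for i in range(steps):
        r_lo = 1.0 + i * dr
        r_hi = r_lo + dr
        f_lo = G * M * m / r_lo**(n_dims - 1)
        f_hi = G * M * m / r_hi**(n_dims - 1)
        total += 0.5 * (f_lo + f_hi) * dr
    return total

for R in (10.0, 100.0, 1000.0):
    print(R, round(work_to(R, 3), 4), round(work_to(R, 2), 4))

# In 3D the work approaches a finite limit (GMm/r0 = 1.0) as R grows;
# in 2D it keeps increasing like ln(R) -- there is no escape energy.
```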

When we apply torque to an orbiting particle, we also add energy to that particle and, if the universe has 3 or more dimensions, this extra energy results in the orbital radius increasing. In a universe with 3 dimensions, this is no problem, as orbital angular momentum and orbital energy both increase with orbital radius. However, if the universe has 4 or more spatial dimensions, then the orbital angular momentum decreases with increased radius. In practice, what this means is that the orbit is unstable. The slightest nudge, or the addition of the tiniest amount of energy to an orbiting particle in a universe with 4 or more dimensions, would destabilize the orbit.

Paul Ehrenfest and Max Tegmark have proved more rigorously that 3 spatial dimensions are necessary for stable orbits.

In conclusion:

  • Stable orbits cannot exist in a universe with 4 or more spatial dimensions (eliminating the possibility of information storage in a structure)
  • Particles cannot escape their orbits in a universe with 2 or fewer spatial dimensions (eliminating the possibility of reproduction)
  • Reproducing, evolving organisms can only exist in a 3-dimensional universe

 

Conclusion

 

Perhaps these aspects of our universe, which appear especially favourable to the evolution of life, will convince you that the universe we see around us is the result of the anthropic principle at work.

This would, in turn, imply that the entire universe, whose immense vastness we see around us, is but a tiny speck in a mind-bogglingly vast multiverse whose sheer scale and diversity is completely impossible for our limited minds to comprehend.

However, our universe is comprehensible, because evolution can only occur in a comprehensible universe. Evolution also favours entities that can act effectively on their environment and, since comprehension increases the effectiveness with which an organism can act on its environment, it is, all else being equal, a trait favoured by evolution. So evolution can only occur in a comprehensible universe, and it also has a tendency to (eventually) produce phenotypes that can comprehend it.

Are you convinced?

 

John McCone
