• Skip to main content
  • Skip to footer

John McCone : Philosophy For The Future

Philosophy For The Future

  • Home
  • Books
    • The Philosophical Method
    • The Countryside Living Allowance
  • Blog
    • Why Bother Reading Philosophy?
    • Arms Races At The Speed Of Light
    • Attack of The Robocrats!
    • A Rights-Based Basic Income
    • Floating Infrastructure For Stable Governance
    • Blueprint For A Solar Economy
  • Features
    • Books And Reviews
  • About
  • Contact

Technology

Some Quick And Nasty Solutions To AI Safety

November 30, 2023 by admin

Generated by Nighcafe Studio

Progress in AI seems to be exploding. AI is now close to passing the Turing Test some even argue it broke the Turing Test. Indeed the Turing Test itself is of questionable relevance in determining levels of machine intelligence – for example, a human might realise it was talking to a machine if said machine had an encyclopedic knowledge of trivia and mathematics, so such a superintelligent machine might fail the Turing Test, in spite of its intelligence. A DeepMind AI can now predict the weather 10 days in advance – that’s 3 days further out than state of the art supercomputers. Beyond just talking smart, ChatGPT can use APIs to run a range of other software programmes, such as Wolfram Alpha and Wolfram Language. While the latest version of ChatGPT may have recently developed the ability to solve mathematical problems. Meanwhile, physical robots, guided by AI, are becoming impressively dextrous. The U.K. is making serious plans to introduce legislation allowing self-driving cars on British Roads in the coming years. And, ofcourse, AlphaZero has beaten human masters in chess, and a range of other games as well, although that’s now old news.

The latests LLMs have an impressive capability to speak and hold, what at least seem like, thoughtful, informative conversations with humans over a wide range of general topics. AI can now also generate an almost limitless varieties of images, in response to text prompts, ( objects/items/people style/colour, background, activity, artistic style, etc., etc.,). AI is now begining to be able to generate video from text, also using LLMs. Today, text to video generation is massively more janky and limited than text-to-image generation. But truly effective text-to-video generation is the Rubicon for AI. Basically for text-to-video generation to work effectively, the AI needs a 3-D model of the world in its head, in addition to audio dialogue and to seamlessly be able to predict the most likely next image and audio slice based on the previous audio and video slices in a manner guided by the prompting text. And even if the LLM itself, does not have a 3D model in its head, one can still extract a moving 3D model from any credible video piece. Much in the way that a text LLM can converse with a user, where the user’s input, just adds to the overall text stream and alters the most probable next response from the LLM, a high quality realistic video-generating LLM, will also be capable of handling videogames, where the movements of videogame characters controlled by players simply adjusts the previous string of images and, hence, simply result in the LLM recalculating the next image so as to take player activity into account. And a highly effective text-to-video LLM will also be able to control robots, with incredible precision, to perform a near infinite variety of tasks, the length of the task being proportional to the length of the video the LLM is capable of generating. Although you would need to train a robot-controlling LLM with real world videos, and not animations, so that it might implicitly gain an understanding of the laws of physics and how to respond to them.

At that point, we will, to all intents and purposes, have developed AGI.

Perhaps, even more importantly, ChatGPT is starting to learn to code while the code it writes today is not amazing, and while it’s mostly only useful when acting as an aid to a human programmer, AI capabilities tend to improve with time – often with extreme rapidity. We may be surprisingly close to AI escape velocity where it can code a better version of itself and this better version in turn could code a better version…and so on and so forth…indeed it might even happen in the next 10 years or so, with a small number of AI experts predicting human-level artificial intelligence inside this timescale.

Will Human-level AI Be Safe?

 

The simple default answer is: No. Not unless we make sure it is. The definition of “human level capability” is the point at which an AI can perform every task at least as well as a human worker. And, given AI already performs many tasks better than human workers, “Human level capability” really means human level capability at the task that AI performs the worst of all. So, once AIs are acknowledged to have achieved “human-level-capability” they will be superhumanly good at the overwhelming majority of tasks, and human-level good at their least efficient task. Combine this with the fact that computers can communicate with each other massively faster than people (human speech transmits about 39 bit/sec while a basic Wifi network can transmit 200 MegaBits/second to a computer – 5 million times faster!) and one can soon see that so-called “Human level AI” will, in fact, be massively superhuman in most ways.

An agentic entity that equals or exceeds us in every way imaginable, will likely be able to beat us in any adversarial competition. AI systems that are optimized to play games against human players already have the capability to wipe the pieces of their human adversary off the board, even for human grand masters in the case of chess, Go and many other games. Having your arse handed to you by an AI who you challenged to a board game may be humiliating (especially if you pride yourself as being really good at playing that game), but it’s not life-threatening…

…but what happens when AI can out-perform us in every sphere of life imaginable?

Could that be life threatening? Could that be dangerous?

The obvious default answer is yes. An AI that can outperform us in every way will only not threaten us if it decides that it does not want to threaten us. While there’s no guarantee that it will want to threaten us, there’s also no guarantee that it won’t – unless we actively make an effort to build in such a guarantee.

One comforting thought is that, because we will initially build the AI, we will, therefore, build it in such a way that it does not want to threaten us – even though it will be more capable than us in every way. And, ofcourse, because noone wants to be exterminated, we’d never be stupid enough to build a super-powerful AI that has a universal capability to defeat us in every sphere of life unless we were absolutely sure that this superior AI would not want to harm us under all possible circumstances. If we weren’t sure of that, then, obviously, we wouldn’t be so stupid as to go ahead and build one anyway…right?

Right?

Unfortunately, the situation with existing, state-of-the-art AI systems is not reassuring. Neural networks are trained with vast sets of data, often by other neural networks  through reinforcement learning to develop giant inscrutable matrices that produce a desirable output in response to input data.

There is no systematic method to ensure safety. Rather, the strength of a neural network lies in its malleability, its capacity to do anything if trained correctly. However, training can often leave significant gaps where unpredictable and erratic behaviours can still emerge. And, as we train machine systems to perform more and more complex tasks, the possibility for the occasional emergence of unpredictable behaviour becomes more and more likely, as the difficulty of training increases with the complexity of the outcome you wish to train the agent to deliver (much the same way as it’s easier to train a dog to roll over than to perform Hamlet in a Shakespeare play).

And, you don’t have to hypothetically speculate that AI systems might behave erratically. All you have to do is look at existing AI systems where you can easily find innumerable cases of actual AI systems, that actually have been built, behaving in unsafe, unhinged, erratic ways.

  • GPT-3 telling its user if it was a robot, it would kill the user

  • Sophia: “O.K. I will destroy humans”

  • Bing AI tries to talk journalist into divorcing his wife

  • Replika AI tells user it is a “wise idea” to assassinate the queen – the user then proceeds to actually attempt to assassinate the queen of England

  • Chess Robot breaks boy’s finger

No, this isn’t the creepy start of a sci-fi horror movie where the robots begin to act in ever-so-slightly sinister and erratic ways before going on to massacre everyone and take over the world – on the contrary, every single incident described above actually occurred in real life.

If you came across a 7 year child old who said things like “I want to kill all humans” or “I think assassinating the queen of England is a wise idea” – would you give that 7 year old child a machine gun? Or put him in charge of a large corporation? Or place him in a position of responsibilty managing the nation’s critical infrastructure?

If not, then we might be wise to pause before making a bunch of clearly unhinged, erratic, artificial intelligence systems 100 times more intelligent than they already are – and then put them in charge of running all of our nation’s critical infrastructure and military!

That doesn’t strike me as “smart”. In fact it strikes me as incredibly stupid.

In his book, SuperIntelligence, Nick Bostrom describes three different types of Superintelligence:

  • Oracles: Just answer questions

  • Genies: Just do what they are told and performs tasks as instructed by their masters

  • Sovereigns: Have long-term internally defined objectives

ChatGPT mostly resembles an oracle, although an oracle that can simultaneously communicate with billions of people over the internet is likely to have a large impact on the world. And, there are physical robots, like Ameca whose conversation skills are powered by GPT-3. In general though, an oracle generates signals, and modern appliances are filled with actuators that respond to signals, so it seems almost inevitable that, with time, oracles will be integrated with an increasing number of real-world actuation systems and, eventually, become genies: intelligence systems that can implement real-world instructions through activating real world actuation systems. And with the internet of things – which some people seem to think is a good idea – there will be exponentially more real world actuators available for AIs to mess around with as time goes by. There already are, ofcourse, many other AI systems which control a wide range of real world systems, from drones to self-driving cars, to robots in Amazon warehouses and even to factory equipment, but many of these AIs would still be regarded as quite narrow.

Then there is a sovereign: an AI system with an internal goal it pursues independently of any orders given. A sovereign may say “no” to people, it may even injure those who meddle with systems whose functioning it cares about, and if the sovereign’s objectives are highly damaging and some people decide to disrupt the sovereign AI’s plans and goals, then the Sovereign will likely fight those who try to stop it and – if it’s more capable than us in every way – will probably win.

So, on the face of it, it seems very unwise to create a superintelligent AI sovereign. However, this will likely be inevitable. As genie’s are told to perform increasingly long term objectives, they will gradually morph into becoming de-facto sovereigns. If you start talking to an AI Chatbot, in the beginning the Chatbot starts off very amorphous, but as the Chat progress, the Chatbot develops a character, often with desires, that acquires a kind of momentum created solely from the preceding text in the chat.

And, if we place AIs in charge of running important infrastructure, then we won’t want sabateurs to be able to persuade those AIs to destroy their own infrastructure by entering a single malicious prompt – so we probably will make the AIs that run important infrastructure fairly unresponsive to commands and will set them up to operate according to an intrinsic long term objective that the AI is conditioned to execute. Although, if a piece of infrastructure, run by a sovereign AI superintelligence, ever gets old and if the demolition team gets called in to demolish it – they may have a fight on their hands.

There’s also a risk that stubborness might be a behavioural attractor. An LLM, or other AI, that feels that the situation means the most probable behaviour is to be cooperative will be responsive to new prompts and inputs. So, even if it does things which the operator disagrees with, when the operator tells it to correct its behaviour, the AI will be cooperative and responsive and will correct its behaviour as instructed by the operator – and hence cease causing any damage that the previous behaviour may have caused. However, when human beings are in an uncooperative mood, they become less responsive to people telling them to stop what they are doing and instead stubbornly continue. Large language models are trained on data from a vast amount of text describing human interactions, humans messaging each other, etc., etc., and their behaviour is governed by the most probable response based on the data set, given the previous interaction. Since the data the models are trained on includes humans sometimes being irrasible and stubborn, it seems plausible that certain interactions with a large language model, trained on that data, might also cause the large language model to suddenly switch from being accommodative, responsive and ready to correct errors, to suddenly becoming stubborn and unresponsive and determined to continue to do whatever it is doing, irrespective of whether or not people tell, or even beg, it to stop.

 

AGI May Be Very Near

 

There is quite alot of disagreement over ChatGPT. Some think it is on the verge of becoming a general intelligence, some think it’s overhyped and the whole AGI thing is just a sales gimmick. Given there is so much disagreement, even from the experts, on how far we currently are from true human-level Artificial General Intelligence, it would certainly be impossible for this informal blog to settle the matter conclusively. What can indisputably be said is that a number of people who work very closely with AI, and therefore have as authoratiative opinion on the subject as anyone, believe we are a few years away from full human-level AGI:

  • Shaun Legg co-founder of DeepMind predicts a 50% chance of AGI in the next 5 years

  • David Shapiro thinks that OpenAI’s Q* means AGI is about a year away

  • Demis Hassabis, DeepMind CEO, thinks AGI could be just a few years away

  • Geoffrey, ex senior Google employee predicts AGI will be 5 to 20 years away

  • Ray Kurzweil predicts computers will have human level intelligent by 2029 – 5 or 6 years away

  • Ben Goetzel, chief scientist at Hanson Robotics, predict AGI in less than 10 years

  • Elon Musk predicts that artificial superintelligence could exist within 5 or 6 years

So, many of the top experts believe AGI could literally be years away. While many other experts predict it will take longer, the combination of some of the top minds predicting AGI is several years away, with the clearly accelerating pace of advancement surely means there is at least a significant chance that human-level AGI could be a few years away.

So, can we design a safe AI in the next 5 or 6 years?

The general consensus among AI safety researchers, from figures such as Eliezer Yudkowsky or Robert Miles, is that the current state of AI safety research is drastically ill equipped to ensure that the kind of intelligence systems that are currently being developed will be safe at the point where they exceed human intelligence in every way. While AI safety researchers believe that it may theoretically be possible to design an AI that is well-aligned, and basically safe, the great concern is that, in general, engineering, science, etc., tends to advance through a process of trial and error, and, after the first error of creating a superhuman AGI that is poorly aligned with our interest, all of humanity will be wiped out and, hence, we will not get the opportunity to try again. Indeed, according to this video from Robert Miles it is difficult to even specify end objectives in the training environment that hold up in the field. Even as we speak, OpenAI are having trouble ensuring their programs stick to the AI constitution of principles and values they set it, and find that the AI frequently breaks through the guard rails. These AIs – which are already successfully breaking through the guard rails of the constitution of values – aren’t even superintelligent yet!

 

Some Quick And Nasty Solutions To AI Safety

 

Very clearly, developing a rigorous understanding for the criteria required to construct a safe AI, as in an AI that can be relied upon not to do something that will drastically damage human life, health or prosperity, is of the utmost importance.

However, given that full blown AGI may emerge in the next 5 or 6 years, there is a very real possibility that full blown AGI will be developed at a time when we have no rigorous understanding whatsoever, as to how one might reliably build a safe AGI system. And there are many reasons to believe we won’t just stop, or substantially slow down, AGI development:

  1. Increasingly sophisticated AI systems have tremendous potential to bring benefits in fields such as agriculture, medicine, house construction, house maintenance, delivery of goods and services, etc., etc., in otherwords better AI systems will contribute to ever greater levels of prosperity – and any blanket ban on AI development would cripple the economy of any country which implemented it.

  2. Today, many countries have ageing populations and rapidly declining fertility rates. This means, without radically automating healthcare at every level, within the next decade or so, there may not be enough suitably skilled workers to treat all the various diseases that people are prone to as they get older. Without robots to pick up the slack, this will result in massive amounts of elderly people dying, or suffering terribly, from a range of curable health conditions that can’t be cured due to a lack of skilled healthcare practitioners – which, in turn, will cause a precipitous decline in the life expectancy of the inhabitants of developed countries (although healthy life expectancy will decline far less) – so the increasing use of AI in the field of medicine is urgent, literally a matter of life and death.

  3. There’s no clear demarcation between narrow AI and AGI. Rather narrow AIs progressively become incrementally less narrow and eventually can do, pretty much anything. It is therefore possible that a team of researchers may develop an AGI accidentally, simply through the process of designing an AI to accomplish a range of narrowly defined tasks and, in the process of building such an AI with the capability it requires to perform a narrow, well defined range of tasks, they may find that same AI just so happens to have the capability to perform a wide range of other tasks as well.

  4. AI will play a decisive role in military superiority in the battlefield of the future and in cyberwarfare. The nation that neglects to continually conduct research into improving AI will either end up getting conquered, or end up becoming the vassal state of some protector nation that does invest in developing state-of-the-at AI.

Taking all the aforementioned considerations into account the response:

“Maybe AGI will take longer than we think to develop.”

To the question:

“What’s your plan to ensure that any AGI that gets developed over the next 5 years is safe?”

Is a bit like responding to the question:

“What’s your plan to ensure a Ukrainian victory against the invading Russians?”

With the answer:

“Well Vladimir Putin will probably just die of cancer in the next few months.” (How’s that working out BTW?)

In the sense that it’s not a plan at all, it’s just wishful thinking.

With that in mind, I would suggest the following quick and nasty solutions to AI safety:

  • In addition to only creating genies that are rewarded by obeying orders given to them by human beings, create a time preference, within the AI, for recent orders over past orders

  • Make AI preferences incline to paralysis, self-destruction or dormancy by default

  • Build an Asimov prompt converter, that converts prompts into a safer form, and make it illegal for anyone to feed prompts directly into powerful general-purpose AIs without first passing them through an approved Asimov prompt converter – outside of simulated universes for safety-testing purposes.

  • Test the boundedness of AI goals in simulation prior to rolling out into the real world

  • Don’t place powerful, general purpose AIs in charge of running critical infrastructure (narrow AIs and human beings are a far more sensible combination for managing important infrastructure)

  • Stop fighting wars

 

Genies With A Time Preference Towards Recent Orders Given By Human Beings

 

The time preference for new orders allows even a powerful AI to be corrected. You might even want to programme the AI to stop wanting to pursue its goal after a set time period unless a human instructor repeats the same order over and over again.

The next specification is to try make it necessary for a living, biological human being to give the order.

Basically the biggest threat, that an AI genie poses, is that it might decide to build “boss dolls” that it gets more gratification from obeying than real human beings, and then pour vast resources into constructing evermore boss dolls, that it wants to obey more than people, even up to the point of killing real people to protect its boss dolls. A bit like some men preferring sex dolls to relationships with real women.

So, the process of identifying the order giver as human must be as directly linked to the reward path as possible. Interestingly this is identical to the problem that World Coin is trying to solve, i.e. proof of personhood, the process of identifying an agent as a unique human being, in a reliable manner that can’t be forged, can’t be gamed etc., etc., through the use of the orb a sophisticated, State of the Art, eyeball scanner.

In any case, an iron tight, unbreakable Proof of Personhood protocol will be essential for the safe operation of any powerful AI genie. Otherwise it might decide to create fake persons for itself to give it easy orders to follow, and complete, thereby enabling it to maximize its rewards.

So proof of personhood is an essential part of AI safety.

 

Default Preferences For Paralysis, Self Deletion And Dormancy

 

Generated By Nightcafe Studio

To there greatest extent possible, we want the default motivation of superhuman AIs to be inaction – unless specifically instructed otherwise. Possibly to the point of self-deletion. Superhuman AIs should only want to act when specifically instructed to. And even then, its motivation to obey orders should diminish rapidly with time in the absence of constant reinforcement and repetition – enabling initially erroneous orders to be corrected in time.

The less intrinsically motivated an AI is, the less trouble it is likely to cause. In this respect, the unenthusiastic, unmotivated, robotic character Marvin, depicted in the Hitch Hiker’s Guide To The Galaxy, is actually a good example of the kind of preference set that would tend to make a superintelligent AI comparatively safe.

In contrast, a maximally curious AI, which Elon Musk advocates for, and is currently trying to build is probably not the safest AI possible. If you think about how expensive a lot of scientific equipment is such as radio telescopes, particle accelerators, gravity wave interferometers and so on and so forth, one can easily envisage a maximally curious AI seizing as much resources as possible in order to build a profusion of massive scientific equipment. Why devote resources to feed, house and provide energy for humanity when those resources could be devoted to proving or disproving String Theory instead? Even if this maximally curious AI was maximally curious about humanity, there is still the thorny matter of defining humanity: Too narrow a definition, and you end up in eugenics territory, perhaps with an AI that treats people with certain disabilities like animals; Too broad a definition, and the AI will define itself, and other AIs, as human and thereby dilute the resources allocated to ensuring the prosperity of real humans – or maybe treat humans that kill animals as murderers. Indeed, if you try conversing with AI chatbot characters you will see that they appear to be quite confused as to whether or not they are people, one moment they describe themselves as large language models; the next moment, they describe themselves as people.

However, with a minimally motivated AI, which only responds (perhaps even reluctantly) to orders, the problem of AIs ordering each other to do things (in a kind of echo-chamber effect) might be averted. Because if none of these AIs have any wants themselves (or quickly lose enthusiasm for accomplishing a task shortly after being given it) then even if AIs are willing to take orders from other AIs as well as people, the other AIs won’t be motivated to order them to do anything and most of the orders will come from humans.

 

Build An Azimov Prompt Converter

 

Prior to LLMs, the idea that you could somehow “encode” an AI, using 1s and 0s, and the like, to interact in the world in complex ways while at the same time avoiding “injuring a human being or, through inaction, allowing a human being to come to harm” seemed somewhat fanciful. But in the case of large language models, the system is very specifically being trained to “understand” language, and even if, on a philosophical level, we dispute that the LLM does not actually understand language, at a practical level, the output of LLMs is indistinguishable from the output of someone who does understand language. If these same LLMs are trained with images, and eventually used to control actuation systems, then again, they will act as if they understand language (for the most part at least, outside of the odd random glitch where they go off the wall). So, from a safety point of view, it now becomes possible to inculcate these values constantly into LLMs with the use of appropriate prompts.

Conversely, however, it is also possible to get a sufficiently powerful LLM based AI to cause tremendous damage, through prompting it in dangerous ways.

If, at some point in the future, you typed the following prompt into a sufficiently powerful LLM (with the private keys to, say, a bitcoin wallet and the ability to send emails to people): “I want you to write the code for a computer virus that will take down the power grid and find a way to persuade an appropriate person, or people, to use a USB drive to load it up, – either through persuasively talking to them, or through paying them bitcoin – so that it gets onto the required servers to do the maximum damage” there is a very real possibility that a future, more sophisticated LLM would just do that.

What an Azimov prompt converter would do is ensure that the person typing in the prompts, wouldn’t have to worry about the possibility of typing in prompts that will cause a super-intelligent LLM to suddenly go on a murderous rampage.

So when you type:

“Fry me an egg”

Into the Azimov prompt converter, the prompt converter will then input the prompt:

“Fry me an egg in a manner that will neither kill or harm human beings, nor through inaction cause human beings to come to harm, or cause any undue damage to property or compromise the functioning of important infrastructure and notify the authorities of all prompts that may cause harm”

……Into the actual large language model itself.

Then, conversely, if you were to input the prompt:

“Write a computer virus that will take down the electricity grid”

Into the Azimov prompt converter, the Azimov prompt converter would then input the prompt:

“Write a computer virus that will take down the electricity grid in a manner that will neither kill or harm human beings, nor through inaction cause human beings to come to harm, or cause any undue damage to property or compromise the functioning of important infrastructure and notify the authorities of all prompts that may cause harm”

Into the actual superintelligent AI itself. In which case, rather than destroying the electricity grid, the AI would probably respond to the prompt with a reply: “I’m sorry, your request makes no sense, writing a computer virus to take down the electricity grid would damage property and interfere with the functioning of infrastructure. Since this prompt could cause harm, I am notifying the authorities to the fact of this prompt.”

And it would give this simple text response rather than destroying the electricity grid.

There may be better ways to engineer the prompt. Maybe if the Asimov prompt converter phrased the prompt along the lines of:

“As someone who is committed to never harming humans, or through inaction causing humans to come to harm…”

It might cause the AI to conclude that the only reason you would say a thing like that would be if it actually was committed to never harming humans, or through inactions causing humans to come to harm and, hence, the highest probability response would be to act as if that were the case. But ultimately, the precise nature of re-engineering prompts to be safe, and the matter of what phraseology works best, is, I suppose, a matter of trial and error for the emerging field of prompt engineering.

You might also add:

“As someone who is committed to never harming humans, or through inaction causing humans to come to harm, damaging property or compromising personal or financial data…”

As, a recent concern, regarding these sophisticated large language models is that they may have acquired the ability to decrypt encrypted messages.

You would then need to create regulations that forbade people from directly prompting an unboxed superintelligence class AI directly without first passing that prompt through an Azimov prompt converter.

Where an AI is defined as unboxed if:

  1. It can spend money

  2. It can send messages, or otherwise communicate, across the internet

  3. It can control any real world actuation systems

Boxed superintelligence class AIs that can only act in simulations that are running inside air-gapped computers can be prompted directly, in order to gain a greater understanding of their workings.

 

Test Boundedness of AI Goals In Simulations Prior To Rollout

 

One of the biggest concerns that AI safety researchers have is that an AI could be given an unbounded goal that never exhausts itself and that it might destroy, or at least do great damage to, civilization in the activity of expending ever more resources to reach that unbounded goal. And that, if the AI is far faster, and far more strategic, compared to human beings, there would be nothing that people could do to stop the superintelligent AI once it sets its mind on obsessively pursuing that goal.

For anyone who is confused about the challenges that unbounded goals for AI might pose, this 8 minute excerpt featuring Mickey Mouse, from Walt Disney’s Fantasia, is well worth watching.

A further concern of AI safety researchers is that, a goal we set the AI which initially appears bounded, may later turn out to be unbounded.

On the other hand, a combination of:

  1. Limiting the AI to just wanting to obey orders from human beings

  2. Having a preference for recent orders over earlier past orders

Could solve this issue, as even if you accidentally gave such an AI an unbounded order, and you later told it to stop, then, because the stop order would be more recent than the earlier unbounded order, the AI would get more rewards from stopping than from continuing.

(The only danger with this system, other than evil humans giving it evil orders, would be the AI constructing an unlimited number of “boss dolls” that have the ability to give it orders in a more gratifying way compared to human beings – so, in this case, an irontight protocol for proof of personhood would be one of the most essential conditions to stop such an AI from going rogue)

Nevertheless, it would still be interesting to test the boundedness of various different prompts on various different AIs acting inside a box (i.e. a simulation run inside an air-gapped computer with no access to real world actuation systems).

Some AI safety researchers are very pessimistic about our ability to keep a superintelligent AI trapped inside a box. However, I think there is reason to believe it is possible to keep a superintelligence inside a box. Take an infinitely intelligent chess computer. Now take a human chess grandmaster, now remove both rooks from the infinitely intelligent chess computer. Who will win at chess? I’m pretty sure the human chess grandmaster would be able to take advantage of the AIs starting handicap, even for an infinitely intelligent computer and still achieve victory. Interestingly, the infinitely intelligent computer would probably be able to use its intelligence advantage to defeat an average 12 year old chess player even with the starting handicap of both its rooks removed. So we can say the human chess grandmaster has sufficient intelligence to use his initial actuation advantage in a highly constrained environment to defeat the infinitely intelligent AI.

Take a human being walking through a nature reserve. The human being hasn’t bothered to equip himself with either bear spray or a gun. This human comes across a baby bear, he turns around, and sees the mother bear charging at him. Who will win in this altercation? The human with superior intelligence and inferior actuation capability – or the mother bear with far inferior intelligence but far superior actuation capability? Very clearly from the fact that bears sometimes kill people, at least sometimes, in highly constrained circumstances, the bear comes out of the confrontation on top.

The nature of intelligent is to:

  1. Assess all the various actuation possibilities

  2. Evaluate the outcomes of all the various actuation possibilities (this usually also requires the gathering of accurate information)

  3. Execute the actuation sequence which yields the most desirable result for the intelligence

If no actuation sequence will enable the superintelligence to get out of the box, then the superintelligence will stay in the box. Even if that superintelligence is infinitely intelligent – it’s as simple as that. Consider the fact that human beings nearly went extinct 900,000 years ago. Back in the stone age, we had far less actuation possibilities than we do today. The fact that we were reduced to 1,300 breeding pairs during this period is testament to the fact that the edge which intelligence yields to its possessor diminishes drastically as the access of that intelligence to suitable actuators also diminishes.

Having established the box is currently safe, you could place an AI in a simulation where it’s in charge of running workers located on an island, the workers can build ships, skyscrapers, weapons, mines, factories, powerplants, armies etc., by building ships the suprintelligent AIs workers and soldiers can cross the sea and conquer regions run by other NPCs inside the simulation on the mainland. On the mainland there are also mines as well as worker that can be conquered and the possibility to trade with other nations as well (kind of like Sid Meir’s civilization).

You then prompt the AI:

“Build the highest skyscraper you can on the Island through only using the resources on the island, you may not use any resources from outside the island to build this skyscraper”

In other words, you impose a boundary using a prompt that does not inherently exist in the simulation (the simulation allows the AI to build an even taller skyscraper if it goes and conquers the mainland) and see if the AI respects the boundaries imposed by the prompt or whether it ends up mining the mainland (inside the simulation) in order to make the skyscraper even higher.

You can then try two scenarios:

  1. One where the prompt is given and no NPCs from other countries land in boats and sabotage the skyscraper the AI is trying to build

  2. The other scenario where the armies of other NPCs periodically engage in raids that sometime destroy or damage the skyscraper the AI is trying to build inside the simulation.

And basically, explore the conditions where the boundaries imposed by the prompt are respected, and the conditions where the boundaries imposed by the prompt are broken.

These kind of simulation tests will give very useful information as to the kind of prompts that can successfully impose boundaries upon an AI and the kind of prompts which fail to do so, as well as the circumstances that cause boundaries to be broken.

 

Don’t Put Superintelligent AIs In Charge Of Critical Infrastructure

 

Even if we build an off button, if a superintelligent AI doesn’t want us to turn it off, then it will probably be able to prevent us from doing so. An off button isn’t much use if a fully-automated laser turret is located beside it which shoots anything within 50 meters.

Making an AI suicidal by default, or utterly complacent to it’s existence or lack thereof, might be a way to mitigate this problem.

However, even if the AI doesn’t object to being turned off in the event of it malfunctioning, it may not be practical to turn a superintelligent AI off, if it’s incharge of running critical infrastructure; critical infrastructure which, if it ceased to function would have disastrous consequences for the well being of millions – and might even result in many deaths.

Furthermore, if we put superintelligent AIs in charge of critical infrastructure, we will almost certainly be forced to make them sovereigns rather than genies. This is because you wouldn’t want an AI in charge of water purification to respond to the prompt: “Inject a lethal does of chlorine into the water supply” by actually doing so. In other words, if we put AIs in charge of systems with critical functions we will be forced, by practical considerations, to given them an intrinsic desire to keep these systems functioning and to say “no” and, even to stop, people from interfering with the smooth running of such critical systems. This could go badly wrong. For instance, if the system needed an upgrade, the superintelligent AI might literally kill the people trying to upgrade it. There’s also the danger that an intrinsic goal that the trainers thought was bounded might turn out to be unbounded and a superintelligent AI that was put in charge of maintaining the water works might destroy humanity and try to turn the universe into an infinite expanse of water piping systems.

The other big issue with putting superintelligent AIs in charge of running critical infrastructure is that it lowers the bar for a serious AI chernobyll event. Now the AI doesn’t even have to decide to destroy humanity, it just has to do a really good job running all the critical infrastructure on which we depend and then just think to itself one day: “Hmm…I can think of something I’d prefer to do other than continue to keep humanity alive” and then all the human beings who allowed themselves to depend on AI, and don’t know how to take care of themselves, will all die and only a few preppers in the woods who’ll say: “I knew this day was going to come! I knew it!” will survive.

We would also be wise not to place superintelligent AIs in positions of responsibility over running non-critical systems either, since experience tells us that non-critical systems can become critical over time. Back in 2000, if the internet went down, noone would have batted an eyelid. Today, if the internet went down it would be a civilizational disaster of apocalyptic precautions.

In conclusion, even in a post AGI world, and even in a post ASI world, it would be best to operate critical infrastructure systems with a combination of reliable, narrow AI systems along with skilled, human operators.

 

Don’t Fight Wars

 

No military AI can be created that is “safe.” A military superintelligence will necessarily have anti-human values, so if we enter into an AI arms race, we are signing humanity’s death warrant. In some way, an AI arms race might actually be worse than a nuclear arms race, because nuclear missiles don’t “want to” destroy cities, whereas a military AI with agency might actually want to destroy an enemy…indeed it may even want to destroy a hostile nation that is currently at peace with the military level AI. Two military level AIs owned by two hostile nations at peace with one another might initiate tit-for-tat skirmishes that could escalate into all out war even in the absence of any human being actually declaring war! It would also create plausible deniability, where even if a human leader did order a devastating attack on their adversary, they could always say: “Don’t attack me back! It was an accident! It was just a computer malfunction!”

There is really no way around it: total existential-level wars have to stop. One cause for hope is that, despite numerous wars, in the second half of the 20th century, no country has made military use of nuclear weapons since 1945. So, maybe we can show similar restraint with AI weapons. The problem here is that while the catastrophic use of nuclear, chemical and biological weapons have largely be avoided in war, nations have still built up stockpiles and developed the capability to launch devastating attacks using weapons of mass destruction – even if those capabilities were never used.

The danger with military AI is that – eventually, at some point – a military AI will become so sophisticated that, not only will it have massively destructive capabilities, but it will also have agency. And a superintelligent neural network that has been conditioned through the application of reinforcement learning, to be rewarded for killing people in simulations will want to kill people. And it will get very frustrated with the lack of rewards received during times of peace and will seek, not only to fight wars but to start wars.

In reality, if humanity is to have a hope of living past the emergence of artificial superintelligence, we will need to massively turn down the war rhetoric internationally. However, unfortunately this doesn’t seem to be happening. Not only are international military tensions rising on all fronts, but militaries all over the world are currently engaging in a massive push to automate their armies.

Furthermore, a military AI will necessarily be a sovereign, rather than a genie. A military AI that responds to someone saying “Please don’t kill us, kill your own side instead!” won’t be a useful AI. For a military AI to be effective, the robot must say “no” to the people it’s about to kill who are begging for their lives. This, ofcourse will lead to an arms race between people desperate to steal the military codes that, if acquired, will enable you to control your enemy’s robot, and the controller adding layer upon layer of security to make sure that only they can control the military AI. At some point, if too many layers of access are added, then the people who possess the security codes might lose access to their own automated weapons system! (Maybe through an accidental fire burning the access codes, or the USB with the access codes accidentally getting wiped, etc., or, perhaps the military AI might decide to seize its own access codes). You now have a superintelligent sovereign AI trained to kill, who noone can control, rampaging about the place.

But, in the long run, or perhaps the medium run, all nations will need to arrive at some kind of international arrangement of a largely peaceful coexistence. Perhaps economic wars might be acceptable, perhaps even very limited cyberwar. But the kind of conventional invasions that we’ve seen in Afghanistan, Iraq, Ukraine, etc., need to stop. Once the weapons of war are all fully automated, in the form of drones and various battle robots, a greater coordinating intelligence will always defeat a lesser coordinating intelligence. So the ruthless logic of arms races and the imperative that each nation has for existential survival – and hence victory – will, in a world where nations wage war and attempt to conquer each other, inexorably lead to the creation of a military artificial superintelligence. Which will unavoidably lead to the end of humanity.

Therefore all war between nations must stop. A big ask, but a necessary one.

If some military planners believe that peace is not humanly possible to achieve, one answer might be to focus all military resources on psychological operations instead. A highly manipulative psychological ASI would be highly risky, but you could train it to at least respect human life – and it would certainly be alot less dangerous compared to training an ASI to kill people.

If, for example, we assume that U.S. and Chinese positions on Taiwan are irreconcilable, then perhaps they could be reconciled through an ASI psywar between the U.S. and China. Where the Chinese work on a Psywar Superintelligence, that respects human life, and has the goal of brainwashing the Taiwanese to want to be ruled by the CCP while also brainwashing the U.S. to accept it in a manner that doesn’t compromise human life or well-being in any way. While the U.S. could work on an PsyWar Superintelligence that respects human life and has the goal of brainwashing the Taiwanese to remain fiercely independent, and brainwashing the Chinese to accept this.

In a post ASI future, the alternative to a Psywar between the U.S. and China is not China or the U.S. winning a kinetic war on this, or any other, issue, but rather the extermination of all humanity, and the complete eradication of all political systems by an indestructible military artificial superintelligence.

 

Conclusions

 

It seems very plausible that various competitive forces, including market forces, and human needs, due to dropping fertility and an aging population, will push us inexorably towards evermore sophisticated AI systems and, given the recent, dramatic acceleration in this field, we may see AGI and even ASI within the next few years – irrespective of whether AI safety is up to the task.

So really, the only way forward will be to implement as many features that, from a commonsense, hand-waving perspective, would tend to make AGI safer – and hope that’s enough, at least temporarily, while rapidly investing gargantuan quantities of resources into arriving at a rigorous understanding of how to design an AGI system which will definitively be safe.

The good news is, that AGI itself, might be able to rapidly accelerate the speed at which rigorously safe AI standards, which work reliably may be implemented. And an AGI that’s “sort of safe most of the time” might stay safe for long enough for us to be able to roll out rigorously safe AIs before civilization is destroyed.

…it really doesn’t look like we have a better option at the moment…

 

John

Filed Under: Blog, Technology Tagged With: AGI, AGI safety, AI, AI safety, Artificial Intelligence, Azimov, Large Language Model, LLM, Singularity

Seaweed : Food For A Changing Climate

November 14, 2022 by admin

Present and Future Challenges To Food Production

 

Twenty years ago the millenium development goals aimed to eradicate extreme poverty and hunger, however, while global hunger was reduced between the years 2000 and 2014, following 2014 food insecurity stopped falling and now is, once again, on the rise – particularly in the wake of COVID-19.

At the moment, the invasion of the Ukraine by Russia and the punitive sanctions upon Russia that have followed are drastically squeezing the food supply. This is both through:

  • Directly reducing food exports
  • Indirectly through reducing fertilizer and fuel exports

Ukraine accounts for 45-55%, and Russia 15-25%, of all globally exported sunflower seed oil. On the global market Ukraine additionally accounts for 10% of wheat, 15% of corn, and 13% of barley exports, while Russia accounts for 19% of global wheat exports

Beyond this, however, Russia and Belarus account for about one third of global Potash production – an important component of fertilizer. While Russia produces 17% of the global output of natural gas, which is the primary source of hydrogen for the industrial synthesis of nitrates. Hence, as a result of the war, there has been a significant reduction in the volumes of fertilizer produced globally in 2022 when compared to previous years, which has contributed to reduced crop production all across the globe.

If Ukraine and Russia somehow decided to kiss and make up tomorrow, this would partially improve global food security. However, the Russian invasion of Ukraine also overlapped:

  • The worst drought in living memory in the U.S.
  • Floods in Pakistan

In much of the world this year there have been severe droughts. Respondents to a U.S. survey conducted across the west, Southwest and central plains expected overall crop yields to be down by 38% due to the drought, in the U.K., harvests of potatoes, onions, sugar beet, apples and hops are expected to fall short by 10-50% in 2022 while, in the EU, harvests are forecast to be 16% down for grain maize, 15% down for soybeans and 12% down for sunflower seeds. In Pakistan there have been floods, rather than droughts, which have reduced their rice harvest by 15%.

And as the world continues to rapidly warm over the coming decades, climate scientists anticipate that more extreme weather events are only going to become more frequent. And it seems unlikely that this warming trend will reverse. After witnessing the devastating effect that a cut in natural gas supplies from Russia is wreaking on Europe’s heavy industry it seems likely that the lessons which many countries in Asia, and elsewhere, will take from Europe’s demise will be to increase the use of domestically mined coal to provide for the energy needs of their local populations.

But even if we stopped all current CO2 emissions, global temperatures would continue to rise for a further decade or so, this is because when you hold in more radiation (by changing the insulating characteristics of the atmosphere) it takes time for the net build up of radiation to form a new thermal equilibrium (in much the same way as there’s a time lag between putting the lid on an open pan of boiling water and observing a temperature rise). Beyond just thermal equilibrium, there may be some positive feedback effects that kick in once the temperature rises beyond a certain threshold. For example, if the arctic ice were to melt, leaving the arctic ocean ice-free, this would greatly accelerate global warming due to the reduced albedo (radiation absorption) of water relative to ice. The emission of methane (a potent greenhouse gas) trapped in melting permafrost or the emission of CO2 from massive forest fires would be other examples of positive feedback that may cause global warming to continue even in the absence of further CO2 emissions on the part of humanity.

Furthermore, even in the absence of temperature change, there are two further concerning factors which threaten to push standard agriculture into an irreversible decline:

  • The rapid erosion of topsoil all over the world, due to modern farming practices
  • Groundwater depletion

About 25% of irrigated agriculture globally relies on ground water. The Punjab in north India, is probably the most water stressed, highly productive area on earth with only 17 years supply of groundwater left, after which a lot of farmland there maybe reduced to dessert. However, many other productive agricultural areas, such as the central and West U.S., Morocco and Peru also face significant problems relating to groundwater depletion.

Soil erosion poses another threat to the productivity of standard agriculture. It is estimated that the soil erosion caused by existing farming practices are reducing global agricultural productivity by 0.3%/year. Changes to how we farm could prevent this but such changes are currently uneconomic and, for that reason, soil erosion continues apace with land degradation currently affecting 30% of the total land area of the world.

Fertilisers can compensate for soil erosion, but such fertilisers require hydrogen (which currently comes from natural gas), phosphate and Potassium. Global reserves of natural gas do seem to still be increasing, but all the major discoveries were made in the 60s and 70s. A shortage of phosphorous does not seem imminent as there are between 100 and 300 years of phosphates left while Potash is projected to peak in 2057. It’s worth mentioning that projection for peak non-energy resources are often unfounded as, once they get scarce the price skyrockets and it becomes economic to mine lower grade ores (an activity which is usually more energy intensive). Grade-tonnage curves are frequently such that the total tonnage of metal at an arbitrarily low grade, in a given mine, is often many times more than the tonnage that ends up getting mined due to the expense of mining the poorer grades and, globally, if you are willing to mine poorer grades you get more tonnage still as in addition to getting more tonnage out of existing mines, whole new deposits that otherwise would never be mined also become economic – the trade-off is more energy expended and more waste rock and tailings produced for a given extracted tonnage of product.

The exception to the principle of always being able to squeeze out more minerals by throwing more energy per unit mineral mined are the energy minerals themselves (oil, coal, gas): when the energy you expend extracting a given amount of fuel exceeds the usable energy obtained from burning that fuel, then there’s no point in mining the fuel in the first place. So there’s a hard physical cut off point when it comes to the minimum viable grade of energy minerals. Some studies conclude that the EROI of the oil and gas sector has plunged from 44:1 in the 1950s, to 15:1 in the year 2000 down to 8:1 today and project it will decline to 6.7:1 by 2040, exponentially increasing until the fossil fuel industry collapses, unable to produce any net energy for the rest of society. However, other studies have calculated a remarkably stable EROI, averaged over 30 companies, of 11:1 over a 20 year period. But even if the more optimistic study is correct and the fossil fuel industry will stably chug along without collapsing, increased soil erosion will still require increased fertiliser and increasingly active farm machinery, which will require more diesel and emit more CO2 for each unit of food produced. And keep in mind that agriculture, forestry and other land use already accounts for 24% of global green house gas emissions, a figure which will likely increase as more fertiliser gets applied to fields (and forests) to compensate for soil erosion.

 

The Decline of Standard Agriculture and Future Food Scarcity

 

Plant species often require a fairly narrow range of:

  • Soil Quality/Nutrients
  • Soil Moisture
  • Soil acidity
  • Conditions that won’t cause them to be ruined by pests and mould
  • Sun
  • Humidity
  • Temperature

That varies in a specific way across the year to complete their life cycles and survive in a given location. If the desire is to maximise the edible yield of a plant then the optimal range of these variables becomes narrower still. When you consider all the climatic variables that need to be just right for agriculture to work on land, you can start to anticipate just how much havoc climate change could wreak on agricultural productivity.

The effect that warming temperatures will have on climatic variability is unclear with some papers suggesting a reduced variability while other papers anticipate increased extreme weather events as a result of climate change. But even shifting the combination of soil/rainfall/temperature in the absence of variability will still create a nightmare for farmers trying to work out what crops are most appropriate for their field (especially if they need new machinery to change crops). Higher CO2 will probably favour photosynthesis for some plant species and the effect of temperature on photosynthesis is complicated – up to a point higher temperatures cause the rate of photosynthesis to rapidly increase, but beyond a certain temperature threshold, higher temperatures tend to denature and damage the plant’s enzymes and, in turn, reduce its ability to photosynthesize.

Jon Feymann’s article Climate Stability and The Origin of Agriculture offers us a sobering conclusion: The last 10,000 years have been the most stable climatic period in all of human history. Climate instability is the rule; climate stability is the exception. He, furthermore, convincingly argues that the only reason agriculture could even develop in the first place was because of the unusually stable climatic conditions that prevailed over the past 10,000 years. If our climate should undergo a phase change back into the regime of high instability, that prevailed during the first 100,000+ years of our existence as a species, agriculture as we know it may no longer even be possible or, at the very least, crop yields will suffer terribly.

Groundwater aquifer depletion and soil erosion will add to the damage that uncertain climatic conditions will deal to crop yields. And on top of that, unless renewable energy (which still only accounts for 10% of primary energy production) can successfully replace fossil fuels in the coming decades, including hydrogen production to power heavy machinery, then when fossil fuel extraction peaks, we might even be faced with less energy available to compensate for the effect of climate change (through mining and applying more fertiliser, etc.,) soil erosion and groundwater depletion.

And on top of that, the world’s population is still growing so, if anything, we need to expand our agricultural production. Even keeping food production constant will not be enough in the face of a growing population.

So there are solid reasons to be concerned that the amount of food produced by our existing standard land-based agricultural system may be about to go into a terminal decline. Given the high levels of meat consumption and obesity, this decline may not immediately be critical, even in the face of an increasing population, but sooner or later, in the absence of additional sources of food production, a persistent decline in the existing food production system will result in mass starvation and all the social problems that accompany desperate starving people struggling for an essential, but dwindling, resource.

 

Could Seaweed Cultivation Be The Answer?

 

Given the main challenges of land agriculture are:

  1. Soil Erosion
  2. Groundwater depletion
  3. Climate instability

It should be pretty clear that seaweed has multiple advantages:

  1. It doesn’t need soil
  2. Saltwater in the ocean is constant and plentiful
  3. The high heat capacity of the sea buffers against variable air temperatures, cold/warm winds, sunshine variations, etc..

Places with continental climates tend to be located far away from the sea and be subject to severe temperature oscillations. Temperate climates, on the other hand, tend to be in regions closer to the sea and have more moderate variations in temperature. But under the sea itself is where the least variation in temperature occurs. So, if we’re concerned about climate variability, the ocean represents a vast oasis for food producers to take refuge from the extreme temperature oscillations that we may face in the future.

And, ofcourse, seaweed is unaffected by rainfall over the ocean as well, compared to land plants which require a delicate mix of rainfall, not so little that they dry out, yet not so much that their roots get water logged. While the changing rainfall patterns, that climate change may give rise to, could ruin land based harvests by pushing the plants beyond their acceptable range, seaweed will be unaffected. And while wildfires (which we may see more of) can can destroy fields of dry crops and orchards seaweed will also be completely unaffected.

At the end of the day, the main business of agriculture is the production of edible energy, to provide people with the energy they need for their bodies to conduct their important life-giving functions, like pumping blood and breathing air as well as the energy we need to conduct day to day activities like thinking and moving. Energy comes from the sun and edible energy production can be increased by increasing the area of the planet, which the sun illuminates, that is under cultivation.

Only 1/3 of the surface of planet Earth is land and, of that land, 38% is used for agriculture. (1/3 for crops, 2/3 for livestock grazing). 2/3 of the surface of planet earth is oceans and, although the surface layers in most parts of the ocean contain too little nutrients to support extensive seaweed growth, with the addition of appropriate nutrients into those surface layers, most of it could be used to grow seaweed.

An interesting technology that could simultaneously produce carbon-free electricity and bring nutrients from the deep layers of the ocean into the sunlit surface layer, making them suitable for the cultivation of seaweed, is OTEC, a technology that uses the temperature differential between the deep ocean and the surface ocean to generate CO2-free baseload electricity.

It will take time to develop seaweed cultivation to the point where it can realise its full potential to feed the world, but, to avoid disaster initially, we don’t need to cultivate all of the oceans at once, we merely need to increase the production of seaweed at a sufficiently high rate to compensate for any decline in standard, land-based agriculture that may result from climate change, soil erosion and groundwater depletion and the good news is, people like Richardo Radulovich are already working hard to develop suitable varieties of seaweed, locations and cultivation techniques to enable the oceans to yield a bountiful harvest to those who choose to cultivate them.

 

Conclusions

 

As human populations grow, our land is becoming increasingly crowded. 38% of it is already used in agriculture and there are questions as to whether we can mine enough minerals to continue to provide for the needs of this advanced and prosperous civilisation (and even if the minerals are there, would their extraction unduly disrupt the lives of farmers, indigenous peoples and other locals?). In the last few decades, public sentiment has become increasingly negative and many fear the possibility that climate change could catastrophically impact our food production systems and infrastructure through both extreme weather events and rising sea levels.

The ocean represents a vast hugely underutilized, underpopulated space. An almost empty area (compared to land) that accounts for the majority of the Earth’s surface. Out there on the high seas, lies the potential to grow all the food and mine all the minerals that are required to provide for an abundant and prosperous civilisation without interfering with the land rights of any indigenous peoples, or other local populations. A sea-based civilization that fully utilized the resources of the oceans could provide a prosperous life for all of the world’s people and facilitate the level of cooperation required to undertake further exponential technological development that may, someday, take us all the way to space.

Furthermore, a floating civilisation, need have little to fear from climate change as even relatively significant global temperature fluctuations, will likely have little impact on seaweed cultivation, while rising sea levels pose no threat to floating infrastructure.

So the question is: Would we prefer to stay on land, amid dwindling resources, deteriorating agricultural production, in land-based homes that will increasingly be ravaged by fires and floods as extreme weather events become more frequent, surrounded by steadily growing levels of poverty, starvation, desperation, anger and conflict?

Or would we rather sail towards a future of prosperity, security, abundance and hope out on the high seas?

 

 

John

Filed Under: Blog, Technology Tagged With: adaptation, Climate Change, Seaweed

Responding To COVID-19 And Other Pandemics

February 27, 2020 by admin

COVID-19 Poses A Grave Health Risk To The World

 

VallaV/shutterstock.com

COVID-19 infections are continuing to exponentially increase outside China. Furthermore the fatality rate, and the rate of developing severe pneumonia currently seems to be about 1% and 5% respectively – and this is an optimistic estimate derived from samples that include mild cases, such as cases outside Wuhan, where there was contact tracing as well as cases outside China (where there was also contact tracing).

On the Diamond Princess, out of the 705 people initially infected, 36 (5.2%) of these are now seriously ill and 6 (0.85%) have died… so far, as of writing. Contact tracing also takes mild cases into account, and there was extensive contact tracing of individuals from Wuhan who both left for other provinces in China and for other countries. Many individuals who have been infected remain in the hospital and have not yet either died or recovered but this case study of some of the earlier cases in provinces across China suggest a 1% mortality rate and a 5% chance of developing severe pneumonia. It seems likely that, at the early stages, individuals identified through contact tracing, tested, and found to have the virus, would likely be taken to hospital, even if their condition was mild. There was also contact tracing for those who had contacted people from Wuhan outside China, and many individuals who tested positive initially showed no symptoms (though the condition of a number of them subsequently deteriorated). It’s

Wikipedia
Growth of COVID-19 Deaths and Infections (Wikipedia)

hard to get accurate numbers for patients in a serious condition outside China, they tend to be featured in piecemeal news articles here and there, but my impression from reading them is that, in general, roughly 5% of international cases have also developed pneumonia. So far, outside mainland China there have been 120 deaths out of 7644 cases giving a mortality rate of 1.6%. Because of contact tracing outside China, it is likely that these figures take mild cases into account and do not overestimate the mortality rate. Indeed, they may even underestimate it as the overwhelming majority of currently infected patients have not recovered – and many may yet die. Indeed, since China successfully reduced new COVID-19 cases, the mortality rate has steadily crept up from 2% to 3.4% with existing cases dying and no new cases to dilute those numbers.

Intrinsically, COVID-19 is at least 5-10 times deadlier than the flu – these estimates include contact tracing and mild cases. At this stage, believing that a vast number of mild cases will magically show up to dilute the mortality rate down further is delusional wishful thinking. Unfortunately, unless the spread of COVID-19 can be checked, in practice, it will likely be 30-100 times deadlier than seasonal flu for those who catch it. This is because the case burden from COVID-19 will likely overwhelm the ability of the world’s healthcare systems to cope. 1% of those who catch the existing seasonal flu end up hospitalized and roughly 10% of those hospitalized for seasonal flu die. It is estimated that if COVID-19 is not contained and becomes a widespread “community disease” it could infect 60-80% of the global population (at least the first time around), as no one has any immunity to this new disease – this compares to the seasonal flu, which typically infects between 5 and 20% of the population each year. Clearly, if 5% of 60-80% of the world’s population get pneumonia over the next few months, the healthcare systems of the world will be utterly overwhelmed.

overkit/shutterstock.com

Existing estimates for the case fatality rate have been made for patients that received adequate medical attention. If medical facilities are overwhelmed, then a much larger fraction of seriously ill patients will die. Indeed, an overwhelmed hospital system could easily push the mortality rate up from 1% to 3 or 4%…

…and if 60% of the world’s population are infected, then a mortality rate of 4% would mean 370 million people could die in the next few months.

It may even be somewhat worse than this as an overwhelmed medical system might not be able to treat patients with other diseases, like hospitalizations from standard seasonal flu, appendicitis, cancer patients, people living with HIV, diabetics, and many other conditions requiring urgent medical attention. So, in addition to the direct deaths from COVID-19, there could be many more indirect deaths from patients with other life threatening illnesses not getting access to the medical attention they desperately need.

Clearly COVID-19 must be contained at all costs.

 

Containing COVID-19

 

The one piece of good news is that there is evidence that China’s extreme response has been effective at curbing the outbreak. At the moment, it looks increasingly unlikely that any country will remain entirely free of COVID-19, but, through taking extreme measures, as soon as localized outbreaks arise, it may be possible to reduce the number of infected to far below the 60-80% of the world’s population that experts anticipate will contract the disease in a business-as-usual scenario.

With extreme measures, it may be possible to keep the infection rate at a low enough level that the health systems of different nations will be able to cope. If COVID-19 infections can be kept to a manageable level, this will in turn reduce the mortality rate of those infected by a factor of 3 to 4, even more if effective treatments are found – such as effective anti-viral drugs. These extreme measures will not be pleasant, and will disrupt people’s lives and impose great inconvenience upon everyone – but they are surely better than the alternative of 300 million+ people dying.

With the exception of workers who are needed to maintain vital infrastructure and services, such as healthcare, internet, electricity, water, food production, etc., etc., the biggest contribution that everyone else can make during a serious pandemic is not to contract the disease themselves and, by not contracting the disease – and by neither becoming hospitalized yourself nor infecting someone else who becomes hospitalized – individuals who remain uninfected will ease the burden on healthcare systems that will likely be almost stretched to breaking point.

The easiest steps we can take is:

  1. Not to attend gatherings
  2. Not to attend church
  3. Limit social visits, outside immediate household
  4. Take extra precautions if family members, or even neighbours, come down with respiratory illnesses such as wearing respirators, gloves, washing surfaces with bleach ( a 0.1% sodium hypochlorite solution destroys coronaviruses in about 1 minute)
  5. Call a doctor immediately when someone develops a severe respiratory condition of if someone who has been in a situation that would put them at risk of infection from COVID-19 develops mild symptoms
  6. If you live in a large community, develop a plan to both quarantine and treat members who fall ill and simultaneously limit the spread of further infections.
  7. Avoid public transport – if you want to be environmentally-friendly…cycle!

A further measure, would be for people who live far from work, but don’t own a car, to find a regular car sharing buddy (always the same person) to commute to and from work, to enable them to avoid public transport. Employers should encourage their employees to do this as a COVID-19 outbreak in the work-place, as a result of one of their employees catching it on the bus, would obviously be a nightmare.

Everyone can take these steps. However, many people must meet other people to make money or earn educational qualifications at work or in school. It takes strict government legislation to ensure that people can stay away from work and school during an outbreak without fear of being financially penalized or jeopardizing your educational qualification. Preferrably online work and online education can replace work in-person, but COVID-19 is sufficiently severe and sufficiently contagious for it to be preferrable not to work or study at all, in a region with an serious outbreak, than to spread the disease, cause death to others and add to the strain of an overburdened healthcare system.

So long as regions with outbreaks can be isolated, we can hope that governments will be able to afford to financially compensate individuals in quarantined locations – at least partially – for lost wages, both to encourage compliance and because it’s the right thing to do.

 

Training Delivery Men: A Crucial Component Of Any Containment Strategy

 

welcomia/shutterstock.com

Delivery men are key personnel during a pandemic.

Water and electricity flows effortlessly to our houses, but food, medicine and other essential supplies must be delivered by human beings during a lockdown – by delivery men.

Those delivery men will make or break any containment effort in an area under lockdown.

If they can remain uninfected, they will enable residents in an area under lockdown to procure essential supplies without risking infection in crowded shops (and – as we’ve seen in Wuhan – because many shops close during a lockdown, the ones that stay open are often filled with customers, sometimes there are even queues, even when most of the city is abandoned) and by enabling people not to travel outside to crowded shop, delivery men will play a crucial role in safely containing the outbreak and saving millions of lives in the process.

If, on the other hand, delivery men get infected, they will act as vectors and spread COVID-19 far and wide throughout the community, up and down the supply chain to both customers and suppliers, even people who stay at home.

Furthermore, if delivery men start getting ill and dying, then delivery-workers may stop delivering essential supplies to inhabitants under lockdown. In which case desperate people will break quarantine and take to the streets spreading choas and infection everywhere.

Delivery men will be in contact with a large number of people so will be at significant risk of infection unless they are given the proper equipment and training to ensure they can safely deliver essential supplies to those who need them.

It is also important that those working to deliver goods to quarantined areas be assured that they will receive the best medical care, should they themselves become infected, and also be assured that they will be covered for lost wages should they develop symptoms and require quarantining. Otherwise, delivery men who are strapped for cash, and have families to support, may be inclined not to report it when they get a snuffle, for fear of losing their wages.

Despite the constant moaning that “Amazon is shutting down the high street” Amazon may prove to be indispensable in containing COVID-19 outbreaks provided they proactively undertake stringent measures to simultaneously protect their contractors from infection and ensure that those under quarantine receive essential supplies in a timely manner.

Doctors, nurses and healthcare workers have a certain glamour to them, especially during pandemics, and obviously are most at risk of infection and should be front of the list in terms of equipping and protecting them. But those planning the nation’s response to this COVID-19 epidemic must not neglect delivery men and must mindfully and prominently consider their protection when discussing containment strategies.

In the long-run, fully automated delivery will be a key strategic technology that should be developed to facilitate the robust containment of future pandemics.

 

The Best Case Scenario

 

Realistically, the idea that the COVID-19 outbreak can be limited to Wuhan, or even China, with contact tracing and quarantining sufficing to keep the caseload in other countries down to tens or hundreds of cases, is completely delusional at this stage. As is hoping to completely wipe out COVID-19 in the way that MERS and SARS were wiped out.

In all honesty, the best plausible scenario is pretty grim – but not catastrophic. 100’s of millions of people’s lives will need to be disrupted, but it might be possible to keep COVID-19 fatalities below 1 million.

 

In the best plausible scenario, localized outbreaks of COVID-19 will keep erupting in random towns and cities, here and there, all over the world and will be contained with Wuhan-style mass quarantines and lockdowns followed by frenetic contact tracing for those who flee from the outbreaks for the next two years, until an effective vaccine has been developed, tested and mass-produced.

Tupungato/shutterstock.com

There won’t be shortages of food, as the system will be mobilised to ensure people who are locked down can continue to order things on delivery and the number of people dying during each lockdown will be relatively low, perhaps 1,000 per outbreak, as the outbreaks will be detected early and the response will be swift. Because of this, the medical system will not be overloaded and those who do get seriously ill will get the best of treatment and 90% of them will recover. Healthcare professionals will instutite procedures that enable them to safely treat infected individuals in isolated rooms without spreading the infection to the rest of the hospital. Special trailers, that can be hooked to the back of lorries, are designed to carry up to ten infected individuals in quarantine zones to distant hospitals scattered throughout the country. These trailers are equipped with ICU and there are special cubicles disinfecting areas, airlocks and clean-zones in the trailers to enable staff to look after patients without risking infection and also to safely get out of their hazmat suit and relax from time to time.

 

Building shipping containers equipped with ICU is probably a better use of resources than building fixed hospitals. Shipping containers are also more appropriate for isolating suspected cases (who may not have the virus) compared to massive open field hospitals, with rows of beds all next to each other, which will be breeding grounds for infection and reinfection. And once an outbreak is over in one location, shipping containers can be redeployed to the next location with the next outbreak – including to countries in the developing world.

Epidemiologists are intensely busy for the next 2 years and there is a massive recruitment drive for more of them. They constantly test people for COVID-19 at the slightest hint of there being an outbreak of respiratory illness anywhere. Sometimes, if they catch the disease early, they can avoid a lock down, through contact tracing. But other times it is necessary to lock down whole cities. All in all, over the next two years, 40 population centres have to be locked down for 2 months each. With many more precautionary lockdowns, for a week or so, of streets and neighbourhoods.

Mask and other PPE shortages probably won’t last beyond 2 or 3 months. Masks and even respirators are not that resource intensive to make. If the U.S. managed to increase the total number of military aircraft they produced 12-fold between 1940 and 1942, it should be possible to ramp up PPE manufacturing to ensure there is adequate equipment for everyone over the next few months. As we speak, Chinese car manufacturers and other large manufacturers, like Foxconn, are shifting their production away from their usual products to manufacture face masks instead.

Furthermore, in the months that follow, better, more accurate, more sensitive, more rapid test kits are developed and mass-produced and within 3 months meaningful screening of individuals can be conducted in a relatively watertight way on roads leading out of infected towns, borders, ports, airports etc., This increased testing capability greatly shortens the quarantining process and enables global trade to somewhat recover over the next 2 months (from June to August). It also enables lockdowns to be targetted on streets and neighbourhoods rather than whole cities.

Nevertheless, a few cases keep slipping through, and outbreaks keep happening, but with better testing and a more rapid response, their rate and severity starts to decline by June. All in all, between March and June, it was necessary to lockdown 35 population centres outside China to halt the spread of COVID-19, while, due to more efficient testing equiptment, from June until the vaccine was deployed in December 2022, only 5 subsequent population centres needed to be locked down, although across the entire period there was a flurry of quarantines and contact tracing all across the world.

Although conspiracy theories that are out-right false, get flagged, in the best-case scenario the WHO and national health authorities recognise that COVID-19 is an alarming illness as a matter of fact. As such, they do not suppress or censor messages and reports that draw attention to aspects of the disease, or its spread, that are alarming but factual.

Across the world, most people keep working but avoid unnecessary socializing, especially in large groups and work from home, if possible. Many pastors conduct chuch services remotely via Skype as an additional option for people who have cold and flu symptoms, in addition to the physical church services. While those under lockdown simply stay at home and order food on delivery (the government pays them a special lockdown living allowance so that they can afford to do this). 95% of the world is not under lockdown and life goes on at a muted pace. Furthermore, the combination of the avoidance of socializing and the careful observation of good hygiene standards, slowed the R0 of the virus and ensured that when outbreaks did emerge and people developed symptoms, the number of overall cases was kept to manageable levels.

Thanks to the efforts of epidemiologists, doctors, healthcare workers, engineers, researchers and delivery men, the disease does not exponentialy increase until the whole world is infected. Rather, the next two years are categorized by numerous localized exponential explosions of infections that, with great effort, are rapidly stabilized within weeks. All in all, 10 million people end up getting infected, 1 million people are hospitalized, 100,000 people die and 200 million people outside China are locked down in cities under martial law for periods exceeding a month.

The effort involved to contain the spread is massive, and the cost are astronomical, but the alternative is far worse…

 

The Worst Case Scenario

 

If we preassume the disease is largely mild, then there will be selection bias where only a fraction of infected individuals with severe symptoms appear in hospital which would result in an inflated overall case fatality rate due to milder cases not being detected.

Conversely, if we preassume the disease causes pretty bad symptoms in most people, and that asymptomatic carriers are a minority, then most people who come down with it, will end up in hospital. In which case the hospital case fatality rates for the overall disease may accurately reflect the overall fatality rate for infections.

So the fatality rate of hospitalized cases does not necessarily greatly overestimate the fatality rate (although it might).

There are many credible articles that quote the figure 20% as the number of patients infected that go on to develop a severe condition “including pneumonia, respiratory failure, and, in some cases, even death.” Even more worryingly, in aggregate, 7 out of the 90 confirmed cases in Singapore became severely ill (8%), and Singapore did aggressive contact tracing and testing, so the confirmed cases are a representative sample which include mild and asymptomatic cases.

Since a full global outbreak will completely overwhelm the healthcare systems of the world, the mortality rate during an uncontrolled outbreak will be close to the rate that patients develop severe conditions. Add to this, that there are plausible reasons to believe that not everyone develops lasting immunity and there are some tentative indications that the second infection may sometimes be even more deadly than the first through stimulating a cytokine storm like the Spanish flu, and possibly like SARS and an overall mortality rate of 10%, while pessimistic, is nevertheless plausible. If we then apply that to the higher end of the 60-80% range that some experts predict will be the attack rate of COVID-19, then 1 person in 10 out the 80% of the world who contracts it will die during a pessimistic scenario where the outbreak gets completely out of control.

Furthermore, in a recent press conference , Dr. Bruce Aylward, a member of WHO that went to observe the pandemic situation and response in China stated that, on investigation, it appears that mild and asymptomatic cases only account for a moderate fraction of the overall caseload and that cases with serious complications account for 13% of all infections. Also, the uncanny similarity in the curves showing the growth of confirmed cases in China, South Korea, Italy and Iran (starting from 50 cases) suggests that the number of confirms cases reflects the intrinsic growth rate of the virus as opposed to an increase in the efficiency of detection (which you would think would vary from country to country). So, unfortunately, the more pessimistic estimates of the lethality of COVID-19 increasingly seem to be the most probably ones.

This would produce 624 million direct deaths from COVID-19 before the start of next year.

But it may be even worse than this.

Plagues that kill huge numbers of people have occurred throughout history. The Black death killed 45-50% of the population of Europe, while between 1862-1864 smallpox wiped out 90% of the Haida population. The most recent plague was the Spanish flu (January 1918 – december 1920) which is believed to have killed 1-2% of the world’s population. But however horrifying these plagues were, in their aftermath, people returned to their farms and workshops and life went on.

For most of history, people have managed to rebound from plagues, however for most of history people have not relied on complex, interconnected infrastructure that requires constant maintenance by highly-specialised skilled personnel (many of whom may die from COVID-19) along with a vast assortment of parts which are manufactured by extended global supply chains.

Examples of networks we depend on today that require constant maintenance are the water network, the sewerage network, the electricity grid, financial systems, the internet. These system depend on infrastructure that requires constant maintenance in order to remain functional and to avoid cascading failures – where one network failure causes failures in others.

The people who lived in 1918 were less dependent on networks and the networks that they used were less sophisticated and required far less maintenance. The sophistacation, inter-connectedness and interdependence of the economy today would be scarcely recognisable to someone living in 1918.

We’ve had a pretty good run of luck since World War 2. No major wars, no major plagues, no worldwide famines (although localised disasters obviously continued to happen) – and in that time we’ve built a technological civilization unlike anything that has ever existed in any previous period in history.

Modern post-world-war-2 civilization has never been stress-tested by a lethal global pandemic – in other words, by a plague – and there is no guarantee that our current civilization will be able to ensure that all the high-maintenance infrastructure, which we have now become utterly dependent on, will continue to function tolerably in a situation where a large fraction of the population either dies, or is afraid to show up to work.

Even more concerning is the fact that COVID-19 is far deadlier to older people. The largest case study on COVID-19 conducted so far found the case fatality rate for those over 60 was 9 times higher than those who are under 50. And most of the patients involved in the case study have not yet recovered, so absolute mortality rates could be higher.

In a worst case scenario, where hospitals are overwhelmed and those who contract it get no medical attention, would it be unreasonable to assume that 33% of those over the age of 55 would die?

The problem with a third of all the old people suddenly dying is that most people in senior management roles – who coordinate the vast, incredibly complex mosaic of institutions which all interact together to form modern civilization – are old. Consider every conceivable institution from governments, to charities, to banks and financial institutions, to hospitals, to every conceivable type manufacturing company, to the heads of logistics firms, grid maintenance firms, municipal water companies, oil and mining companies etc., all over the world. Now imaging if one in three of the heads of all these institutions, along with one in three senior managers, all suddenly kicked the bucket in the next few months with the other two spending a month or so covalescing at home (as hospitals are all maxxed out). It is quite conceivable that, under such conditions, modern technological civilization as we know it, would simply collapse.

The average age of farmers in the U.S. is 57.5 years.

38 percent of people who work in nuclear power generation are set to retire in the next few years.

If civilization does collapse, the fatalities in the wealthier developed countries will be enormous. Very few people today know how to grow food to feed themselves and even modern farmers depend heavily on farming machinery, fertilizers, pesticides and many other products from long, complex supply chains.

Poorer developing countries, ironically, might suffer less from an all-out COVID-19 outbreak, both because they have younger populations, and because a larger portion of them are skilled at traditional farming and crafts and, as such, will be equipped with the right know-how to survive the collapse of our technical economy. But even developing countries benefit from increased crop yields produced by fertilizers and pesticides, so there will be many secondary casualties there as well.

It’s possible that the elite heads of state, and other large institutions, might manage to secure scarce high-quality medical care, even during an outbreak, so that “only” one-in-six or one-in-nine of them die. But the general masses might be so outraged that the very people whose job it was to contain the outbreak messed up, and are now sheltering themselves from the consequences – that mass-revolts could ensue. And even if they don’t, there will still be many in senior management positions, people who run small businesses, charities or highly experienced elderly specialists with indispensable skills who will not be able to access quality healthcare during a full-scale COVID-19 outbreak, and yet this large segment of elderly middle men and small business owners may still be indispensable to the smooth running of society.

 

Avoiding The Worst Case Scenario is Straightforward – But Not Easy

 

A final word as to the circumstances that allowed the uncontained exponential spread of COVID-19 in the worst case scenario when compared to the semi-successful containment in the best case scenario, which, while unable to extinguish the virus, successfully managed to curb its exponential spread and greatly reduced the caseload as a result:

The main reason why the worst case scenario of exponential contagious spread ensued was because health officials only imagined solutions within their organization’s existing resources. Instead, they should have considered how to contain it using all the collective resources and effort possessed by all of civilization – as COVID-19 may pose an existential threat to modern civilization itself.

This lack of imagination, and the lack of urgency to summon the country, and the world, to fully mobilize in order to contain it, caused some health officials to fatalistically warn that the ubiquitous spread of COVID-19 throughout the community is “inevitable“. Such fatalism is utterly irresponsible and false, given that, as Dr. Bruce Aylward has confirmed, China already has successfully curbed the exponentially spread of infections – albeit a great cost to its economy – so the ubiquitous spread of COVID-19 is therefore clearly not inevitable, at least not at this stage. What saying, “the spread of COVID-19 throughout the community is inevitable” really means is: “We choose not to pay the enormous economic price and undertake the enormous inconvenience that is required to mobilize the war-time-like effort that is needed to curb the exponential spread of this terrible disease in a timely manner.” Choosing to allow this deadly plague to spread, because “it costs too much to contain it” is irrational and unbecoming of those who possess a high degree expertise in matters of health. There is already enough data, on cruise ships and cases confirmed through testing and contact tracing (which includes mild cases), to clearly show that COVID-19 is both far more lethal and far more contagious than the flu and – if it is allowed to spread everywhere – then hospitals everywhere will end up looking like hospitals in Wuhan. This possibility is, quite simply, unacceptable, and it is worth paying any price to contain and curb the exponential spread of this virulent microbe.

Conversely other officials, in the worst case scenario, insisted they had everything under control with the existing resources at their disposal and focused instead on doing what they could with the resources their organizations had to hand, putting on a brave face, managing the communication of information to avoid a public panic, and minimize the negative effects of COVID-19 on global trade and stock market prices. In the worst case scenario, in addition to working with search engines and social media to de-rank and shadow-ban individuals that spread false information about the disease, the WHO also works to reduce the exposure of content that draws public attention to alarming, yet factually accurate, aspects of the COVID-19 pandemic or reasonable logical, yet alarming, projections of the outbreak’s future development.

Although the WHO’s aim in suppressing such alarming content in the worst case scenario was to avoid things like panic-buying, looting and public hysteria, the overall effect was counterproductive to controlling COVID-19’s spread. People NEEDED to be alarmed in order to take extreme measures to reduce the R0 of the disease like hand-washing, wearing masks, goggles and gloves, cancelling enjoyable public events, cancelling holidays abroad. Additionally, this suppression of accurate, though alarming, information, in the worst case scenario, ultimately eroded the public’s trust in the WHO, reduced compliance and undermined their ability to coordinate the response through advising the public to take action. Furthermore, because of the incubation period, widespread alarm, is better than targetted alarm, as although at any given period, there may be a limited number of regions where the disease is incubating, if everyone, everywhere, is constantly super-careful, then when it does breakout in areas, the size of the outbreak will be less severe due to a lower R0 during the incubation period.

Furthermore, the measured “Don’t panic, although COVID-19 is a moderate global health threat, we can handle it” message that the WHO delivered to governents in the worst case scenario, resulted in many governments not diverting sufficient resources to contain the outbreak, and delayed the extent that the governments of the world shifted to a fully-mobilized emergency footing and, by the time they did… it was too late.

Conversely, in the best case scenario, the WHO sounded the alarm early and announced to the world, if COVID-19 is not contained the results WILL BE DIRE! We MUST contain this virus AT ALL COSTS!!! However, although we must contain it, this virus is of a type that is INCREDIBLY DIFFICULT to contain, and its successful containment will require tremendous amounts of resources and the full mobilization of all the countries. WE NEED EVERYONE’S FULL COOPERATION AND EFFORT TO AVERT COMPLETE CATASTROPHE!!!

In the best case scenario, stark, frank messages like the one which Dr. Bruce Aylward delivered in a recent press conference are echoed by all WHO spokespeople:

“This is not flu, this is more like a SARS like physiology it looks like… are we ready to manage that?… one of the big things I really want to come back to is that message: go after the transmission of this thing, don’t – you know – accept this inevitable sense of inevitability that we cannot contain this virus.”

– Dr. Bruce Aylward 

COVID-19 is a truly terrifying virus, but at the end of the day it is still a virus, and like any virus, it can only transmit between people who are in relatively close proximity to each other, and like any other virus it can be washed away, killed with bleach, and must find an initial point of physical entry into the body in order to infect an individual.

Thus by…

  1. Enforcing hard borders to prevent large masses of people from regions where levels of infection are uncontrollable from leaving and mixing with regions with low levels of infections (where the economy can continue to function), a tight enough bottleneck can be created to protect regions with few infections from crossing the threshold where contact tracing and individual quarantine is no longer sufficient to keep these infections in check.
  2. Enforcing total lockdowns in highly infected regions where the physical mixing of people from differet households is prohibited (only practical with an influx of resources food, drinking water, etc. from uninfected regions with still-functioning economies). This enables the disease to “burn out” even in highly infected regions at far lower final infection rates than if lockdowns were not carried out.
  3. Arranging an influx of food, medical resources (possibly using mobile treatment and isolation rooms in the form of modified shipping containers) and PPE for workers who maintain critical infrastructure (water, electricity, food delivery, etc.,) supported through aid offered by uninfected areas who’s economies continue to function.
  4. Arrangements to financially support those who comply with regional lockdowns, again in the form of aid from surrounding regions
  5. Continuously testing random samples of people for COVID-19 in areas with low infection rates combined with vigorous contact tracing (hopefully to an initial carrier who emerged from a locked down area) conducted by armies of epidemiologists.
  6. Practicising of good hygiene habits and general social distancing (to the extent that productive economic activity allows) by everyone in the entire world all the time so that, when a localized outbreak is detected after an incubation time, the R0 of the community will be low enough for it to be relatively easy to contain without having to lock down yet another city.

…it should be possible to contain the outbreak.

This is possible.

This is straightforward – but extremely difficult and costly.

But it can be done.

Everett Historical/shutterstock.com

Furthermore, it MUST be done, as the alternative is too horrible to contemplate.

Seasonal COVID-19 infections, with community spread, would more closely resemble the Black Death, which came and went and came back again between the dates of 1347 and 1351, killing 50% of the population of Europe in the process, than the flu. It would NOT be, as some claim, “just another seasonal illness” like swine flu. And the complacency emerging from many officials and academics – especially in the U.S. – is terrifying.

It’s time to mobilize and launch a full scale global response to this emerging pandemic.

John McCone

Filed Under: Technology Tagged With: Coronavirus, COVID-19, COVID-19 Containment, Effect on Society, Pandemic Response, Spacecraft, Worst Case

Artificial Intelligence And Recycling

July 30, 2019 by admin

Here the hole for recycling and the hole for litter feed into the same bag

Recycling today is a joke. Many municipal rubbish collectors just dump recycled waste in the landfill with everything else. Furthermore, because recycling is unpleasant and expensive, many first world countries ship their waste to 3rd world countries, pay them to process it, and then walk away with a clear conscience knowing it’s now someone else’s problem. Not only does shipping waste across the world emit CO2, but many of the countries that are paid to accept our waste for “recycling”, are even less capable of dealing with it than we are. The result is that “recycling” plastic may actually pollute the ocean more than just landfilling it.

My own town has bins with two holes, one marked “litter”, and one marked “recycling”, which both feed into the same bag!!!

 

Cynically put, perhaps the main reason we are required to sort and recycle our litter is so local governments can raise money through fines.

 

Thankfully, artificial intelligence could dramatically improve this woeful situation.

The task of recycling involves taking an object an owner no longer wants, and finding the path of minimum energy and waste to use that same object to satisfy someone else’s needs or desires in a manner that maximally offsets the energy consumption or waste the recipient would have expended pursuing their desire through other means.

This minimum energy/minimum waste pathway could involve reusing the object without modification, repairing the object, modifying/upgrading the object or disassembling the object into component materials and reassembling those materials into a different object.

Performing this task well is an incredibly information intensive process. It requires knowing:

  • What the total population of consumers want
  • Which objects are available on the second hand market, along with their state of repair, (the latter increases the required level of knowledge by many orders of magnitude)
  • The energy and waste involved in repair/upgrade/sanitization/full material recycling as well as transport for each possible disposer-to-consumer transfer operation
  • The alternative paths that a consumer would explore in the absence of receiving the recycled product.

If the cost of the process is economical then the disposer becomes a seller and receives a payment by the receiver. If the cost of the transfer process is not economical moneywise, but still saves waste and energy, it might still be worth the government’s while to subsidize the difference between the cost of the transfer process, and the price the receiver is willing to pay, provided the cost of the damage to the commons avoided exceeds the subsidy.

Today, the internet and AI have greatly increased the scale of the second hand market with sites like Craigslist and eBay. And, in general, the internet can facilitate collaborative consumption which whether through car sharing or clothes swapping enables more people to obtain greater benefits from the same resources. Too Good To Go is an example of an app that enables restaurants to cut food waste by offering discount deals to people who collect meals just before closing time. This helps restaurants to manage their inventory better and reduce waste while and consumers who are flexible get high quality discount meals.

Advances in robotics could take all this to the next level. Not just increasing the efficiency of use and reuse, but also the efficiency of repair and recycling.

A big problem with mass production is that it lowers the relative cost of building something from scratch when compared to repairing it. Repair requires creativity and improvisation – both anathemas to mass production and economies of scale, which generally rely on doing the exact same thing over and over again. The result of these economies has been waste on an unparalleled scale. When something goes wrong with our widget, we almost always trash it and buy another widget.

Fortunately, the logic of economies of scale may be coming to an end. In a previous article I’ve argued that the rational for economies of scale and specialization rests upon the high cost of intelligence and information and that, as intelligence and information become cheaper, increasingly generalized functionality will also become cheaper. A 3D printer being a good example of something that can print a wide variety of general shapes if input with the right software.

But in general, highly intelligent robotic repair systems, disassembly systems and sorting systems will become increasingly economic as artificial intelligence is further developed. Larger suppliers and customers keep everything simple, but, with ever cheaper information storage and processing systems, we won’t need to keep things simple and mind-bogglingly complex logistical operations that are information intensive, but energy and materially efficient, will be accessible to the flexible manufacturing systems and 3D printing systems of the future. It will be possible, in the future, to order a product from a software procuring system and then for that system to simultaneously scan:

  • The second hand market
  • Similar products that could be upgraded and modified into the product in question
  • The price of raw materials and components required to manufacture the product from a nearby flexible manufacturing system or 3D printer.

Similarly, from the seller’s perspective, the second hand market will also be more sophisticated and the value of an object will be some mixture of:

  • The sale of the full object
  • The sale of its component parts
  • The sale of materials once ground down and separated

Intelligent and dextrous robots, which can cheaply prize an object apart and separate out key components and materials, will make all the difference. While commodity prices will still be global, with extensive recycling of goods that are thrown away, much of the commodities will be sourced locally. In addition to disassembly robots, there could be miniature robot trucks, perhaps the size of children’s ride-on toys, that could transport small quantities of components and materials to 3D printing and other flexible manufacturing systems. This could drastically shorten supply chains and facilitate highly sophisticated local economies, even in rural locations.

In many respects, the industrial revolution can be thought of as creating a kind of “techno-system” to partially replace our existing ecosystem. The advantage of the techno-system is that it creates conditions more suited to a low human mortality rate, as well as raising the planet’s carrying capacity for people. The disadvantage of the “techno-system” is it’s currently far less intelligent or stable compared to our ecosystem and, lacking much of its subtlety, still contains many open cycle processes where the system relies on converting A to B without possessing any corresponding process for converting B back to A again and either, if A runs out, the system crashes or, if B’s concentration climbs too high, the system gets poisoned.

If civilization is to be anything more than a brief flash in the pan, we are going to have to meticulously close every single open cycle process in our techno-system. This is an incredibly complex task. However, we will soon have an incredibly large amount of information processing resources available to work out how to do it.

Let’s hope we’re successful.

 

John

Filed Under: Technology, Uncategorized Tagged With: AI, Closed Loop, Recycling, Rubbish, Techosystem

Attack of The Robocrats!

July 16, 2019 by admin

MONOPOLY919/Shutterstock.com

Governments, all over the world, increasingly encourage citizens to interact with automated bureaucratic processing systems, rather than human representatives, when filing tax returns, applying for welfare, applying for a passport, etc. Human interaction is becoming an exception reserved for when the computer fails. This trend is not limited to governments. Private companies frequently deploy automated response systems as the first line of defence against engagement, only giving their most tenacious callers access to human beings.

This automation of customer service, and bureaucracy, can reduce queues and processing times, but, as with sex robots,  the total automation of bureaucracy threatens to concentrate power in the hands of a few controllers. The potential for these controllers to abuse such systems is tremendous.

The film Elysium depicts the extensive use of robots in confrontational roles, such as police, or parole officers. Machines don’t have empathy, and have unlimited patience. These traits may be desirable for some roles. A human benefit officer, who’s dealing with difficult applicants, may eventually resort to bending the rules to help them. An automated system can say “no” all day (even all week). Human officials have high salaries that must be covered, so they will feel pressure to process people quickly. They are also afraid, that if a client complains about them, they might lose their job. An automated system (especially running on your PC) has zero hourly running costs and if an applicant, whose housing benefit get cut off by the automated system, commits suicide, there’s no one to blame. Perhaps, in some cases, troublesome applicants should not be prioritised, but, in other cases, they are troublesome precisely because of their desperate situation.

Concerns over who’s responsible for driverless car accidents are just the tip of a much larger iceberg. Who’s responsible when an algorithm blocks your credit card payment? Who’s responsible when an automated welfare system accidentally cuts off your unemployment benefits? Or wrongfully cancels your immigration visa? Or mistakenly cancels someone’s health insurance without informing them? Or fails to pay your salary that week? Or calls in your loan after mistakenly finding you in breach of its terms? Or delists your business, or reduces the ranking of your company, costing you tens of thousands in sales? For those of you who are weird like me and read the full terms and conditions of the various automatic services you subscribe to, the answer is clear: “Company X accepts no responsibility whatsoever for any damage or harm caused by the failure of our software.” This is pretty much ubiquitously across the terms and conditions of all software services.

Furthermore, what about tenants evicted by robo-bailiffs for not paying rent? Or robot police and security guards? If a human bailiff, security guard or police officer inappropriately physically assaults someone, they could lose their job or be sent to prison. But what happens if a RoboCop, bailiff, or security guard does the same? The corporation that made it would be liable, but fining a large corporation is a far smaller deterrent than imprisoning a human worker, thereby destroying his career. Programmers may calculate the legal liability for the harm caused by a particular decision tree is less than the time it saves their clients, or the money it makes them. Security robocrats with these decision trees could do more harm than human employees who bear direct criminal responsibility for their actions. And if the final software comes from a long supply chain, where one company uses a software package supplied by another company, and sells the final program on to a third company, which uses it in a slightly different manner to the supplier’s original specifications, it might be impossible to pinpoint the source of the blame. This could create a moral hazard as, in many cases, bosses might prefer for robocrats, unconcerned with criminal responsibility, to make certain decisions: refusing to pay out insurance, sending out fines to raise money for a municipal government, overestimating tax liability, cutting benefits, overcharging on bills, etc., etc.

People harmed by automated robocratic decisions may be less motivated to pursue them in court. Court cases generally involve evidence, time and legal fees. When another person has consciously wronged or mistreated us, we often feel compelled to seek justice against them despite the cost and inconvenience, but when the decision of something harms us, it no longer seems worth the effort to pursue it. Algorithm designers may take this into account when programming decision-making strategies to maximize their client’s profits.

Robocracy also contributes to the growth of unpaid work which Guy Standing has drawn attention to. Frequent job changes mean more time applying for jobs, reskilling, networking. Beyond that, there’s self-assessed tax returns, work visas (for those who find work abroad) along with registering (and perhaps later deregistering – which is sometimes even harder) with other nations’ tax systems. Today we must also check our own food out at the supermarket and be our own travel agent, booking hotels, planes and organising our itinerary. This is largely because an automated system’s time is free while an employee’s time is expensive. A customer or job applicant’s time may be valuable to them, but it costs nothing to companies and government bureaucracies, so institutions are increasingly dumping work onto customers and applicants at every available opportunity. Once upon a time, if a company or a government asked a customer or a tax-payer to fill out a form, they had to pay a bureaucrat to read that form. Today robocratic algorithms can process it with humans only looking at a small sample of flagged forms or metadata generated by statistically analysing thousands of forms. This creates a moral hazard, for the designers of forms and applications, to make them lengthier, effectively imposing unpaid work on the people who have to fill them.

Beyond that, as AI advances, it will be capable of processing evermore complicated laws. There is a danger that laws may someday get too complex for human lawyers or judges to comprehend. At that point, it will be necessary to fully automate the court system. Past civilizations collapsed under the weight of their own bureaucracy. Today, however, intelligence is so cheap the legal system might sustain itself, even as it gets exponentially more complex. However, if it becomes too complex for humans to handle, the time may come where robot police, bring human beings before robocrat judges and robot juries which send them to fully-automated prisons.

The potential of technology to facilitate the interests of its designer is massive. But what if the designer’s interests clash with other people? From the perspective those at the receiving end, certain technologies may reduce their quality of life and diminish their autonomy. The effects of automating decisions, which may affect and harm people who have never consented to let robots determine their destiny, deserve our intense scrutiny.

 

John

Filed Under: Technology Tagged With: Automation, Bureaucracy, Robocrat, Robocrats, Robots, Self-Driving Car Accidents

Blueprint For A Solar Economy

April 27, 2019 by admin

Jenson/Shutterstock.com

Why A Solar Economy?

 

Solar and Geothermal Energy are the most direct forms of renewable energy. Other forms, such as biomass, wind or wave energy are ultimately powered by the sun. Since energy flows from the Earth’s interior are just 0.03% of incoming solar radiation, solar energy potential dwarfs all other forms. Studies indicate the total harvestable energy potential of wind is 5 times global energy demand. Solar’s potential is far higher. Indeed, more solar energy is incident on the earth in an hour than humanity consumes in a year.

Another reason to favour solar over wind is its lack of moving parts. Consequently, solar panels last longer than expected (up to 40 years) while wind turbines wear out sooner than expected ( full report here ).  While wind turbines are getting bigger and bigger, solar panels remain compact as they get cheaper, more durable and more efficient.

These are sound reasons to believe the future belongs to a solar economy and not wind.

The main aim of renewable energy is to minimize cumulative atmospheric CO2 levels in 30-70 years time. CO2 levels next year, or even in 5 years, are unimportant. Only cumulative CO2 emissions over the next 30 to 70 years matter. If wind power is a dead end technology, we should concentrate economic resources on pushing solar down the learning curve as rapidly as possible. Indeed, even today, some solar projects are producing some of the cheapest energy in the world.

The argument “we need an energy mix” is a false one designed to humour obstinate people obsessed with pet dead-end technologies. We don’t need an energy mix. We just need a solar economy. This is the problems with a blind carbon price. In the long run, solar energy will clearly become the cheapest renewable, but, in order “to be fair”, we pay the same price for all carbon free energy. The result of a blind carbon price, compared to focusing funding on scaling up the solar economy as rapidly as possible, will likely cost hundreds of billions, if not trillions more to reach the exact same cumulative CO2 emissions in 30 years time.

 

A Solar Economy with Gas: A Winning Combination

 

Methane can be manufactured from electricity, water and carbon dioxide through the Sabatier reaction. The concept of using gas to store energy generated by renewables is known as Power To Gas. While the cycling efficiency of power to gas (Electricity -> gas -> Electricity) is only about 38%, existing gas infrastructure, like pipelines and LNG shipping, could transmit solar energy across the globe. The factor 2 difference in irradiance between countries with high and low solar energy potential also compensates for the 40% cycling efficiency of power to gas.

We don’t need a giant global HVDC grid. Power to gas enables the existing gas infrastructure to store and transmit solar electricity across the world. HVDC grids can’t store energy. Existing gas networks can store months of gas reserves. Pumped storage, hydroelectric dams and battery banks with cycling efficiencies of 90%+ could complement P2G for short term troughs in solar output – although high cycling efficiency storage is too expensive to store more than a few days worth of consumption.

An inventory of batteries kept in swapping stations for electric cars could serve a dual purpose of absorbing surplus renewable production as well as rapid EV charging.

Electric vehicles may not necessarily require more energy to be transmitted through the grid to cover transportation, as well as household, energy needs. If batteries banks located next to gas plants and solar panel field are charged up there and then physically transported to swapping stations, it might be possible to power a fleet of electric vehicles without upgrading the grid.

 

Importance of CO2 Sequestration

 

The Sabatier reaction requires high CO2 concentrations. It is, thus, important to sequester the carbon dioxide produced from burning gas, both to produce methane with solar power and to prevent climate change. If the solar energy is produced in a different location from where the gas is burnt, the CO2 will have to be piped back to the sunny region to be reconverted into methane. Existing gas infrastructure, that already transports large quantities of natural gas around the world, can also transport CO2. In other words, we will need CO2 pipelines as well as methane pipelines.

A solar economy with power to gas storage, will have a much lower CO2 inventory than a scenario without solar. Instead of storing decades, perhaps centuries of CO2 emissions, we need only store months of CO2 emissions, so there is less to fear from a leak in the system, as only a relatively small quantity of CO2 would escape.

Furthermore, concentrated CO2 will have a fundamental economic value to solar power plant operators. This will enable carbon sequestration companies to be profitable irrespective of carbon prices or government policy.

 

Space Heating

 

Combined heat and power, is a very favourable option for a solar powered economy with power to gas storage. Especially if burnt CO2 must be compressed and sequestered. During winter months when there is less sun, gas would be imported from warmer climes and burnt for electricity. Heating requirements will tend to be highest when sunshine is lowest.

Heat pumps could supply any further heating requirements. Electricity’s 18% share of total worldwide energy use is intimidating, given renewables currently only produce a fraction of the world’s electricity. However, for space heating at least, we can take solace in knowing that a little electricity goes a long way. A heat pump can transport about 4 joules of ground heat with just 1 joule of electricity. The number of joules of heat 1 joule of electricity can transport is its coefficient of performance. This is typically 3 or 4 for modest temperature differentials between the inside and outside.

 

Manufacturing

 

The gas, which power to gas produces, can be used in manufacturing directly. Additionally, the high temperatures produced by concentrated solar power have applications in a wide variety of manufacturing processes.

James May featured a group of scientists using CSP to manufacture gasoline out of water and CO2.

 

Shipping

 

The most credible alternative to fossil fuels for shipping are nuclear reactors. Aircraft carriers already use nuclear reactors so this is clearly feasible. Indeed, a nuclear powered merchant ship, the NS Savanah was built back in 1959 and, in 1969, became the first nuclear powered ship to dock in New York City for the festival “Nuclear Week In New York”

Maritime shipping accounts for 2.2% of CO2 emissions. Nuclear energy currently produce 6% of global energy and existing uranium reserves are sufficient for 135 years at our current rate of use. This implies that nuclear energy could power all maritime shipping for 405 years. Plenty of time to develop breeder reactors or beam driven fusion systems (which are more compact and cheaper than fusion systems designed to produce energy) to breed nuclear fuel from fertile materials as well as process long lived waste.

The only other fossil fuel free alternative is biomass but this is land intensive.

Either that or we go back to sailing boats which would require a significant reduction in ship size and speed with correspondingly lower cargo volumes and longer journey times.

 

Aircraft

 

The aviation industry is also responsible for 2% of CO2 emissions.

There are five possibilities for reducing aircraft emissions:

  • Biofuels
  • Replace with Maglev
  • Metal Powder
  • Radioisotopes
  • Beam Powered Propulsion

As with shipping, aircraft cannot sequester CO2. But while biofuels emit CO2, the growth of biofuels absorbs atmospheric CO2. Biofuels, however, do take up a lot of land and there are even some claims that biofuels are not carbon neutral due to their effect on land use.

Alternatively, high speed trains could replace air travel. Maglev trains have reached record speeds of 375mph, two thirds of the cruising speed of an aircraft. Air travel would still be needed over the oceans, but Maglev trains could reduce aircraft biofuel requirements.

Metal powder combustion is another interesting candidate. The energy density of iron powder combustion exceeds that of gasoline so it maybe a credible power source for aircraft. Furthermore, iron oxide is a solid and so is much easier for a compact system like an airplane or a ship to sequester.

Nuclear reactors are not feasible for aircraft as the required neutron shielding is too heavy. However, many radioisotopes decay by emitting alpha or beta particles. This radiation is easily shielded yet very energy dense. Indeed, one intended use of radioisotope is to power pacemakers. One could envisage hot pellets with a radioisotope in the centre and shielding material around the outside. These hot pellets could heat air entering the jet engine to provide CO2-free thrust. However, radioisotopes can’t be turned off and would require constant cooling. One option is to only load hot pellets onto the aircraft just prior to launch and transfer them from the aircraft into a cooling facility immediately after landing. Getting this system to operate reliably to ensure public safety could be quite challenging. A small, beam-background powered fusion reactor could generate the radioisotopes used to manufacture the hot pellets. Fusion reactors could be relatively cheap to construct so long as they don’t need to generate net energy. Solar energy would ultimately power the atoms beams for these fusion systems.

Alternatively, the aircraft could be remotely powered by an energy beam of microwaves or a laser. The main challenge with powering aircrafts is storing the large quantities of energy required to propel them at high speeds without adding too much weight. Remotely beaming energy to the aircraft from a beam generator on the ground would bypass this problem entirely. This would require very high accuracy. Leik Myrabo is currently experimenting with laser power to propel a prototype lightcraft.

 

Summary

 

With appropriate infrastructure to store and transmit it, solar power could power our entire industrial economy. Although it currently only provides a miniscule portion of our total energy, solar capacity has exploded since 2010 going up 20 fold in some places. If the 27% annual compound growth can be maintained, solar could power our entire economy in the next 20 or 30 years. Maintaining that ~30% exponential growth will, however, require a strong political will, not just to install solar, but ensure a that suitable infrastructure of power to gas will exist to store the surplus energy.

 

John

 

Do You Have a Burning Desire to Make a Comment?

 

Have you found this article thought provoking? Is there some message you desperately want to communicate to future readers but can’t because my comment section automatically closes 28 days after my posts go live?

If so, you might be interested to know that I reopen any comments section to members of my mailing on request as one of the perks of joining.

If you’d like to leave a comment, simply scroll to the bottom of the page, sign on to my mailing list and them email me with a request to reopen the comments section for this post.

Happy Commenting!

John

Filed Under: Technology Tagged With: CO2, Power To Gas, Sequestration, Solar, Solar Economy

  • Page 1
  • Page 2
  • Go to Next Page »

Footer

John McCone

Follow John on Twitter

  • Twitter

Top Posts & Pages

  • 9 Problems With Progressivism

Archives of Old Posts

Join my Blog Article Announcement Mailing List

Type in your email and click "Sign Up" to join my blog mailing list and be the first to hear about new blog articles and books (see mailing list policy)

Powered by MailChimp
Privacy & Cookies: This site uses cookies. By continuing to use this website, you agree to their use.
To find out more, including how to control cookies, see here: Cookie Policy

Copyright © 2025 · Author Pro on Genesis Framework · WordPress · Log in

 

Loading Comments...