Pause Giant AI Experiments: Dark Ages

2 min read · Mar 31, 2023
[Image: Photo by Breno Machado on Unsplash]

Reading the original text of Pause Giant AI Experiments: An Open Letter — Future of Life Institute, it feels like we are pushing ourselves back into the dark ages.

A couple of counterpoints to some of the claims in the letter:

Should we let machines flood our information channels with propaganda and untruth?
Yes. We have long labeled such content scams, fake news, and plagiarism, and we have learned to counter it quite effectively.

Should we automate away all the jobs, including the fulfilling ones?
Yes. We should instead get people to focus on work that makes more efficient use of their skills and time.

Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
This is an extremely worried, paranoid outlook. But as the saying goes, "only the paranoid survive." Let's think of ways to stay ahead of such systems, but pausing is not the right way out.

Should we risk loss of control of our civilization?
Same as before: extreme paranoia. Stopping the development of stronger AI models is not the way out.

Such decisions must not be delegated to unelected tech leaders
Tech should NOT have elected leaders at all. It is worrying to imagine a select group of leaders having that control. Knowledge has to be set free for it to grow.

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable
This could only be achieved by a supernatural being operating in God mode. It is naive to believe that we, or any team, would ever reach that level of confidence.

Humans have created vaccines for biological viruses (even those originating from labs), and anti-virus and cybersecurity tools for software viruses. For probable GPT misuses, I would expect teams to build misuse detectors that are themselves stronger reinforcement learners.

Stopping progress on similar AI models across labs is definitely #darkagesforai.

A versatile software engineer with a range of interests beyond coding. Earlier posts are here: