Apple co-founder Steve Wozniak, Tesla chief executive Elon Musk, and hundreds of other technology leaders have called for a pause on the development of more powerful artificial intelligence, citing the "profound risks" such systems pose to society.
ChatGPT, an AI language processing tool, has gained worldwide recognition over the past few months as knowledge workers use it to draft emails and write computer code in a matter of seconds. This breakthrough in mass-market AI has sparked an arms race among tech firms to integrate similar systems into their products and search engines.
An open letter from the Future of Life Institute warned that recent advances in AI could seriously disrupt information channels and employment prospects across many industries, and could accelerate the point at which AI is able to outsmart humans. The letter called for a six-month moratorium on developing AI systems more powerful than GPT-4, the latest model behind ChatGPT recently released by OpenAI, while the world debates the possible ramifications of the technology.
"Such decisions must not be delegated to unelected tech leaders," read the letter. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects."
Beyond Wozniak and Musk, the letter was signed by Pinterest co-founder Evan Sharp, Ripple co-founder Chris Larsen, DeepMind research scientists Zachary Kenton and Ramana Kumar, and former presidential candidate Andrew Yang. Academics from Harvard and Stanford also signed on.
"This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," the letter continued. "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."
Musk, for his part, has described ChatGPT as an example of "training AI to be woke" and has approached leading researchers about creating alternatives.