From BOMBS to AI
After discussing the terrible idea that we shouldn't be addicted to ice cream, let's move on to a lighter topic: bombs and terrorists!
Many of us have read the open letter signed by Elon Musk and other experts calling for a pause on large-scale AI experiments, citing potentially catastrophic consequences for humanity. Interestingly, something similar happened in the 1970s-1990s.
(An extremely important disclaimer: I am not saying that those who signed the open letter are terrorists like the Unabomber, whom I will introduce in the next paragraph. This should be obvious and self-explanatory, but this is the internet, and there are a lot of not-so-bright people out there, so I'd better make myself super clear.)
There was a man known as the Unabomber (take a look at the FBI link to see what he looked like) who targeted universities, airlines, and people associated with the technology industry, killing three people and injuring 23 others.
He did this to express his anger and concern about what he called "industrial society." In his infamous manifesto, "Industrial Society and Its Future," he argued that technology was reducing people to mere objects, destroying the environment, and creating a totalitarian system incompatible with human freedom. He called for a revolution against technological progress to save humanity. (Setting aside his radical and violent methods, would you agree with his argument?)
He demanded that major newspapers publish his manifesto and threatened to continue his bombing campaign if they refused. The New York Times and The Washington Post complied. Ironically, the published manifesto is what undid him: his brother recognized the writing style and tipped off the FBI, leading to his arrest.
Some of the arguments in the Unabomber's manifesto are not that different from those in the recent AI Open Letter (please see the disclaimer above again), particularly the concerns about technology's impact on humanity and the call to halt its development. The Open Letter states, "Advanced AI could represent a profound change in the history of life on Earth," while calling for a pause on the development of the most powerful AI systems.
People fear change, even the tech titans (unless they are leading a particular innovation, or are lagging only slightly behind the leaders).
It is crucial that AI be developed in a way that aligns with human values, but can we really expect the flame of AI development to be held back while regulators catch up, especially given the enormous commercial and military value it holds? Is a six-month pause enough time for laws to catch up, given the bureaucracy of not just one country but the whole world (think of the global fight against climate change)? If not, how long should we pause AI development?
Let me know your thoughts, and share this post with friends who are interested in the development of AI. And once again, thanks for reading!