AI Safety: Progress but Lots More Needed

10/29/23

Editorial team at Bits with Brains

The 78th United Nations General Assembly (UNGA) session in September 2023 saw a flurry of discussions on artificial intelligence (AI) safety at the global level. While it is encouraging to see more attention on this critical issue, the urgency remains: far more concrete action is needed to govern AI responsibly.

Earlier in the year, Mustafa Suleyman, co-founder of DeepMind, proposed creating an "IPCC for AI" - a global panel modeled after the Intergovernmental Panel on Climate Change to monitor and assess progress on AI safety. At the UNGA, he renewed this call for an international watchdog body to track AI risks. While a step in the right direction, monitoring alone may be insufficient. The UN Secretary-General also proposed a high-level advisory body on AI, but its role appears limited to providing recommendations.


More decisive coordination and regulatory action are still needed beyond merely monitoring AI's evolution. Around the same time, OpenAI executives made voluntary commitments to AI safety in appearances before the US Congress. While such pledges are better than inaction, voluntary measures have real limitations. We need binding global initiatives, not just voluntary corporate promises.

Moreover, most cutting-edge AI research happens in a profit-driven manner behind closed doors. Relying solely on voluntary corporate self-governance is inherently risky. We need global coordination and oversight of AI research guided by the public interest.


At the UN Security Council session, AI was framed mainly as a threat to global security, especially regarding autonomous weapons. The significance of this threat is captured well in "Future War" by Robert Latiff (recommended reading). However, viewing AI only as a hostile force risks becoming a self-fulfilling prophecy. Its potential as a constructive partner also needs emphasis.


AI inherently has dual-use potential for both harm and good. While risks like mass surveillance and lethal autonomous weapons are real, AI also holds huge promise for achieving the UN's Sustainable Development Goals. We need a balanced approach that addresses risks while also steering AI's benefits for humanity.


While each nation is progressing with its own AI regulations, the transnational nature of AI research needs coordinated global action. Initiatives like the proposed International AI Research Organization would enable such collaboration.


The future of AI cannot be left to chance or market forces alone. We need a coherent global effort guided by a shared vision of an AI future that uplifts humanity. The UN and allied nations must take decisive leadership here.


While encouraging steps are being taken, the pace of progress remains slow compared to the rapid advance of AI capabilities. The window of opportunity is closing fast.


Sources:

https://press.un.org/en/2023/sgsm22007.doc.htm
