Safety Exodus at OpenAI

Well, well, well. OpenAI, the supposed champion of artificial intelligence, appears to be experiencing a bit of a brain drain. Sutskever and Leike, two chaps leading the charge on making AI behave itself – or "superalignment," as they so whimsically call it – have jumped ship. And they're not alone. It seems a steady stream of safety-conscious employees have been making for the lifeboats since late last year.

Why, you ask? Well, you likely recall a rather dramatic attempted coup against CEO Sam Altman back in November. It seems Mr. Altman's penchant for secrecy wasn't quite to everyone's taste. But he managed to cling to power, emerging like a phoenix with even more control. Yet, despite the public show of camaraderie through gritted teeth, rumours of discord persist. Deleted tweets, mysterious absences – the stench of corporate intrigue hangs over the coming feast of revenue.

Shortly after Altman's return, Sutskever posted a tweet, then quickly deleted it.

And then there's Mr. Altman's rather questionable fundraising tactics. Cosying up to autocrats doesn't exactly scream "ethical AI", does it? His attempts to raise billions by visiting Saudi Arabia are well documented, but here are a few other world leaders with troubles of their own whom Sam Altman has met recently:

  • Narendra Modi (India): Prime Minister of India, leader of the Bharatiya Janata Party (BJP), which has been criticized for its Hindu nationalist ideology and policies that marginalize minorities. The BBC reports that police in India recently opened a case against his party for 'demonising Muslims'.
  • Benjamin Netanyahu (Israel): Prime Minister of Israel, leader of the Likud party, known for his right-wing policies and controversial stances on the Israeli-Palestinian conflict. He is currently involved in a military campaign in Gaza.
  • Rishi Sunak (United Kingdom): Prime Minister of the UK, leader of the Conservative Party. He has expressed interest in making the UK a global hub for AI development and regulation. Mr. Sunak has overseen rising inequality, public-sector austerity and regressive tax reforms.
  • Emmanuel Macron (France): President of France, leader of the La République En Marche! party. He aims to position France as a leader in AI research and innovation while ensuring ethical and responsible development. He also advocates for Europe-wide AI regulations, which some see as cover for using AI to smother widespread dissent.
  • Olaf Scholz (Germany): Chancellor of Germany, leader of the Social Democratic Party. While he has expressed caution regarding the rapid advancement of AI and emphasizes the importance of human oversight and control, he also stresses the need for AI to benefit society as a whole. This is against a tide of rising tensions in his country over his leadership.
  • Pedro Sánchez (Spain): Prime Minister of Spain, leader of the Spanish Socialist Workers' Party. He supports AI development but highlights the need for ethical guidelines and regulations to address potential risks and ensure fairness. He also advocates for using AI to improve public services and tackle social challenges. This comes in the wake of fury among the Spanish public at his cutting a self-serving deal with separatists to ensure his own political survival.

Is there a pattern here? Autocratic leaders picking up the whip of AI to lash their subjects, all while telling everyone else to be 'careful'.

And this obsession with OpenAI stockpiling resources, seemingly at the expense of safety... well, it's enough to make one wonder if the fox isn't guarding the henhouse.

So, what does this all mean? With the superalignment team decimated and their computing power potentially up for grabs, one has to question OpenAI's ability to keep their future AI creations in check. Leike took to social media to explain his reasoning:

With remarkable directness, Leike said,

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

And while their current offerings might not be setting off any alarm bells just yet, the march towards artificial general intelligence – a machine with smarts that rival our own – is a prospect that should give even the most optimistic among us pause.

The departure of these key figures isn't just a personnel issue; it's a sign of a deeper malaise. A loss of faith, if you will, in OpenAI's commitment to safety. And that, my friends, is a very dangerous game indeed.


Jurisdictions with Active AI Legislation and Regulation:

  1. European Union: The EU is finalizing its Artificial Intelligence Act (AIA), which categorizes AI systems based on risk levels and imposes corresponding requirements. The AIA is expected to significantly impact various sectors, including healthcare, finance, employment, and law enforcement.
  2. United Kingdom: The UK government has published a white paper outlining a pro-innovation approach to AI regulation, built on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. This approach will affect various sectors, including recruitment, local services, and the prevention of digital exclusion.
  3. United States: While no comprehensive federal AI legislation exists yet, various states are developing their own regulations. For example, California has introduced bills addressing AI bias in employment and algorithmic decision-making. The US approach to AI regulation is expected to impact industries like healthcare, finance, and autonomous vehicles.
  4. China: China has released draft regulations on generative AI, focusing on content governance and alignment with socialist values. These regulations will significantly impact the tech industry, content creators, and businesses utilizing generative AI models.

Recent News Events:

  1. AI War Conference: The AI War conference highlighted the growing concerns about AI's potential misuse in warfare and defense. This could accelerate discussions on international agreements and ethical guidelines for AI in military applications.
  2. Gaza Conflict: The reported use of AI-powered drones and autonomous weapons in the Gaza conflict raises ethical concerns and calls for greater transparency and accountability in the development and deployment of such technologies. This event could lead to stricter regulations on AI in warfare and defense.

Key Figures and Stakeholders in OpenAI:

  • Sam Altman (CEO): A charismatic and controversial leader, Altman's vision for OpenAI and his fundraising strategies have been central to the company's growth but have also raised concerns among some employees and the public.
  • Ilya Sutskever (Former Chief Scientist): A pioneer in deep learning and AI research, Sutskever's departure signals a potential shift in OpenAI's priorities and raises questions about the company's commitment to safety.
  • Jan Leike (Former Co-Lead of Superalignment Team): A leading voice in AI safety, Leike's resignation highlights concerns about OpenAI's ability to address the risks associated with future AI systems.
  • Greg Brockman (President and Co-Founder): A key figure in OpenAI's operations and strategy, Brockman's alignment with Altman is crucial for the company's future direction.
  • John Schulman (Co-Founder): Now leading the superalignment team, Schulman's ability to balance safety concerns with OpenAI's ambitious goals will be crucial.

Other Stakeholders:

  • Microsoft: A major investor in OpenAI, Microsoft's interests are intertwined with the company's success and its ability to commercialize AI technologies responsibly.