Governing AI: Learning from Gun Control to Ensure Public Global Safety: as Devil's advocate
Governing AI: Learning from Gun Control to Ensure Public Global Safety: as Devil's advocate as imagined by Dalle3


Getting all the great minds together to Govern AI, by Dalle3

Welcome to #FixTheWorld or #GiveUp newsletter no. 37

Hi, I'm Gareth Wong

This is another serious topic, so no chit-chat; let's get down to it. However, do feel free to share with others: FixTheWorld.4Good.space

TLDR: Prudent AI governance requires inclusive cooperation and democratic oversight to maximise benefits and minimise harm, much as with regulating firearms. The contrast between strict UK gun laws and lax US ones shows that determined policy can curb violence despite the difficulty. Similarly, judicious oversight of risky AI uses, guided by ethics and empowering affected groups, can steer innovation toward empowerment without harm. Restrictions, auditing, transparency and prohibitions on certain applications are crucial. Learning from successes like the British gun reforms, the dangers of unfettered AI proliferation can be contained to secure shared prosperity.

My suggested key actions are now moved to the beginning of each article:

AI Philosopher as imagined by Dalle3

Just to be clear, I'm an AI optimist; see my previous posts on FutureProofSelf, LifesSteeringWheel and AIEmpoweredPolymath. That is why this article is written in "devil's advocate" mode: to highlight opportunities and threats, and suggest some remedies.

All these issues played out in real time 12 days after this blog post was published; there is a great summary from Bloomberg below:

Key Actions Needed:

  • International coordination to prevent a regulatory "race to the bottom" as bad actors exploit jurisdictional discrepancies.

  • Categorical prohibitions on unambiguously unethical uses like mass surveillance for oppression, and autonomous weapons.

  • Mandatory licensing, testing, and external audits for high-risk AI innovations pending safety advances (plus red-teaming and thinking like terrorists would).

  • Transparency rules compelling disclosure of training data, model architectures, real-world performance and risks.

  • Participatory oversight bodies representing diverse geographic, cultural and economic perspectives.

  • Proactive interventions throughout the AI pipeline to safeguard fairness, with ongoing external auditing (with actual and punitive damages clauses for bad auditors).

  • Human rights principles and (non-political/non-religious/non-nationalistic) peaceful values as the moral foundation for regulatory regimes.

  • Inclusion of affected groups and experts beyond computer scientists in oversight processes.

  • Global cooperation guided by ethics and justice to direct this world-shaping technology towards equity and shared prosperity (& #DoTheRightThings & hopefully play major role to #FixTheWorld).

  • Create a public benefit corporation providing AI liability insurance, funded by all LLM service providers (to cover the cyber disasters that sadly will likely happen).
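To make the fairness-auditing action above concrete, here is a minimal, hypothetical sketch of one simple check an external auditor might run: the "demographic parity difference", the gap in positive-outcome rates between groups affected by an AI system. The data and the loan-approval framing are illustrative assumptions, not taken from any real audit.

```python
# Toy auditing sketch (hypothetical data): compute the demographic parity
# difference -- the largest gap in positive-outcome rates across groups.

def demographic_parity_difference(outcomes, groups):
    """Return the max gap in positive-outcome rate across groups.

    outcomes: list of 0/1 model decisions; groups: parallel list of group labels.
    """
    rates = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + outcome, total + 1)
    positive_rates = [p / t for p, t in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical audit sample: loan approvals (1 = approved) by group.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A"] * 5 + ["B"] * 5
gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.80 vs 0.40 -> 0.40
```

A real audit would use far richer metrics and context, but even a toy check like this shows why auditors need access to real-world decision data, not just marketing claims.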

My Full #FixTheWorld or #GiveUp newsletter #37 blog below 👇

Governing AI learning from Gun Control

Governing AI: Learning from Gun Control to Ensure Public Safety

The rapid development of transformative AI technologies presents governance challenges reminiscent of when firearms spread globally centuries ago. Much as guns revolutionized security and hunting but also enabled violence, AI promises immense benefits but also significant dangers if misused.

Just as prudent gun control in nations like the UK saved lives despite controversy, wise policies on risky AI uses can maximize public wellbeing. But achieving effective oversight requires determination to resist obstruction, as the contrast between strict British gun laws and lax American ones makes clear. With sustained effort and vigilance against authoritarian capture, AI can improve lives equitably worldwide.

Governing AI learning from gun control and the gunpowder revolution, via Dalle3

The Gunpowder Revolution

Firearms built on ancient Chinese inventions spread rapidly across Eurasia in the 1300s. Early guns were inaccurate and slow to reload, but they packed more destructive force than swords or bows. By the 1500s, European armies were armed with muskets, cannons, and other guns.

Firearms transformed warfare and hunting. Guns also changed law enforcement and enabled frontier expansion. Settlers defending homesteads with rifles later romanticized this "gunpowder revolution."

Yet guns proved a mixed blessing. Criminals exploited firearms, and accidental shootings took lives. Urban violence surged in the 1600s as cheap pistols spread. Critics decried the "cowardice" of attacking from afar rather than fighting honorably. But efforts to restrict guns stalled.

Firearms & Violence in American Life: A Staff Report Submitted to the National Commission on the Causes and Prevention of Violence, George D. Newton, Jr. (Library of Congress Catalog Card Number: 70-601932)

The gunpowder revolution thus improved lives but also enabled new threats. Guns clearly required oversight to minimise harm, much as today's AI does. But effective regulation would take centuries of bitter debate; the death toll, meanwhile, was shocking:

Rates of firearm homicides among high-income countries with populations over 10 million. From "On gun violence, the United States is an outlier", The Institute for Health Metrics and Evaluation, 31 October 2023.

Governing AI, using British gun control as an example, via Dalle3

The British Example

Britain once struggled with gun problems comparable to modern America's. But after mass shootings in the 1980s and 1990s, Parliament imposed strict gun control. This approach succeeded in nearly eliminating gun murders. Britain's experience holds key lessons for governing innovations like AI.

Lax British gun laws once facilitated frequent shootings. As urbanization continued in the 1800s, pistol-packing criminals stalked the streets. Illegal pistols were tied to 19th century British youth gangs, much like modern American street violence.

Sentiment against this chaos led to reforms restricting certain firearms. But rural resistance stymied comprehensive regulation. Plus new technologies like breech-loading rifles enabled faster firing. As a result, guns remained easily available despite rising homicide rates.

This changed after mass shootings in 1987 and 1996. The Hungerford massacre, in which sixteen people were shot dead with semi-automatic weapons, and then the Dunblane school shooting, which killed sixteen children and their teacher, provoked public outrage. Within months, a bipartisan coalition passed sweeping reforms.

The UK's Firearms (Amendment) Acts banned civilian ownership of nearly all handguns and semi-automatic weapons. Strict licensing procedures were mandated for shotguns and rifles. Buyers faced thorough background checks and registration, and police could inspect storage facilities.

The reforms provoked bitter complaints that government was trampling citizens' rights. But the results were striking: gun homicides fell from hundreds annually to dozens. Dunblane-style school shootings disappeared. With fewer guns in circulation, accidental deaths and suicides also declined.

Whatever one thinks of gun rights, the British approach worked. It suggests that, given political will, safety regulations on dangerous innovations can succeed despite controversy. The law could still be changed, however, which is why lobbyists remain active even within the UK:

The contrast with America's perpetual gun violence highlights the value of prudent control.

Learn from Gun control American Quagmire, via Dalle3

The American Quagmire

Lax American gun laws enable everything from awful accidents to the mass shootings that happen almost daily. Efforts to emulate Britain's sensible reforms have failed, stymied by lobbying and lawsuits.

Yet America once regulated firearms more stringently, and still could if wisdom prevailed over zealotry (we can only hope!).

In colonial America, many communities tightly restricted guns under "safe storage" laws mandating that weapons be kept disabled. Such policies aimed to prevent accidents and unauthorized use. Colonial New York, for instance, ordered that any loaded gun had to be kept unserviceable when not in use.

After the Civil War, concerns about pistols and crime spurred new gun controls. Many states and cities required permits to carry concealed weapons. In 1911, New York imposed strict licensing for all handguns. But lobbying weakened oversight, enabling mobsters to murder freely during Prohibition.

Recent decades saw renewed attempts at gun regulation after mass killings generated outrage. But the gun lobby countered each reform bill in Congress. Sweeping protections were won for weapons makers and owners. Underfunded agencies struggled to enforce poorly designed rules.

The result is America's nightmarish status quo. Hundreds are slain yearly in mass shootings, while suicide, street crime, and domestic violence take tens of thousands more lives. No other developed nation suffers such relentless carnage. Yet gridlocked politics prevent coherent policy responses.

Rates of firearm homicides among high-income countries with populations over 10 million. From "On gun violence, the United States is an outlier", The Institute for Health Metrics and Evaluation, 31 October 2023.

This failure offers a sobering lesson. Powerful economic interests will exploit divisive cultural issues to paralyze regulation. Without strong bipartisan leadership, policy becomes captive to extremists. Similar dynamics already surround AI, as debate swirls while rapid innovations continue unchecked.

Taming the AI Revolution via Dalle3

Taming the AI Revolution

AI today resembles firearms centuries ago: a transformative invention with promise and peril. Without judicious oversight, destructive misuse of AI seems inevitable. But regulating AI will prove controversial given its benefits and cultural roots. Success requires learning from examples like British gun control.

Clear parallels exist between the unfettered proliferation of guns and ungoverned AI systems. Both enable bad actors while frustrating accountability. Firearms provide remote killing capacity; AI distributes disinformation and fraud globally. Preventing violence and social harms should guide AI policy as it did British gun control.

Certain AI uses like healthcare or education aid humanity and merit encouragement. But high-risk systems enabling impersonation, surveillance, and psychological manipulation require restriction pending safety improvements, just as military-grade assault weapons do. Framing prudent regulation as maximizing social benefits, not depriving individual liberties, will help balance competing values.

Global AI regulation Agency (independently funded) via Dalle3

Global AI Regulation Agency (independently funded)

AI governance cannot realistically aim to prohibit all misuse. But well-designed oversight can greatly mitigate harm, as Britain's experience shows. Key elements include strict licensing of the riskiest systems, mandatory safety reviews, transparency requirements, and penalties for violations.

International coordination is also crucial, given AI's global impacts. Disparate regulations would just spur "jurisdiction shopping" as bad actors exploit loopholes. Major countries should thus agree on principles for restricting unethical AI while enabling its progress.

Most critically, effective oversight requires resisting the lobbying and legal obstructionism that delegates AI policy to corporate interests. Independence from lobbying pressures is vital. Regulators must also receive ample resources and staff expertise. Otherwise rules will prove toothless.

An ethics-focused AI regulatory agency should be created to develop and enforce standards. Leading researchers as well as civil society advocates should advise regulators to balance competing priorities.

Rigorous yet sensible oversight will allow AI's benefits while preventing disasters.

The future trajectory of AI remains unknown. But if cultivated prudently, AI can uplift humanity. This requires learning from successes like British gun control in reducing violence.

With wisdom and courage, society can foster AI's progress while restraining its risks. The stakes could hardly be higher, but the formula for maximizing benefits while minimizing harm is clear.

Governing AI Real Danger of AI via Dalle3

Real Dangers of Unconstrained AI

While prudent oversight offers hope, the hazards of unfettered AI proliferation remain severe.

Weaponised AI could wreak havoc through disinformation, systemic hacking, and oppression absent regulatory constraints. From election meddling to infrastructure attacks to biometric monitoring, ungoverned AI risks dystopian nightmares. Preventative policies guided by ethics offer the only reliable safeguard.

Already, crude bots spreading false narratives through social media like Facebook have inflamed ethnic hatreds from Myanmar to Sri Lanka:

But AI generating customized propaganda could stoke vastly more chaos. Hyper-realistic fake videos portraying atrocities but lacking any factual basis would further corrode social cohesion. Only constant vigilance can minimize such manipulation risks.

On the cyber front, AI-directed hacking could launch devastating attacks compromising critical systems and services. Testing has demonstrated AI's ability to find vulnerabilities, impersonate targets and automate network intrusions. Unleashed irresponsibly, such capabilities could induce societal breakdown. Strict oversight and licensing of the most dangerous tools is essential (there is also some AIHackDefence research).

Ubiquitous biometric surveillance and predictive policing powered by AI also threaten oppression. China's alarming "social credit" system has been widely reported in the West, while Israel's Red Wolf showcases how AI-enabled monitoring tools coerce conformity. Similar infrastructures in countries adopting such systems risk normalising permanent surveillance and chilling dissent. External constraints and democratisation are vital to preventing abuse.

Autonomous AI-directed weapons like slaughterbots represent another civilizational hazard requiring controls. Allowing machines full lethal authority without human supervision effectively abdicates responsibility, enabling mindless violence at vast scale. Internationally banning such systems is an urgent imperative.

An Overview of Catastrophic AI Risks, Center for AI Safety Berkeley

Like biotech and the internet, AI enables immense social benefits but also significant hazards if mishandled. Preventing anti-social applications while allowing pro-social ones calls for nuanced governance guided by ethics and human rights.

With wise cooperative effort, the threats of uncontrolled AI proliferation can be contained to realize its monumental potential for good.

Governing AI, The Need for Inclusive Governance, as imagined by Dalle3

The Need for Inclusive Governance

While dangers remain, prospects for cooperative oversight offer hope.

But inclusive participation and transparency are essential to earn legitimacy and steer AI towards a just path. Providing avenues for affected groups to shape governance prevents bias and unilateral agendas. And transparency mandates make powerful systems accountable. Institutionalising inclusive cooperation and oversight can secure AI's benefits equitably.

Expanding who governs and guides AI is challenging but vital.

Multidisciplinary teams should represent diverse geographic and socioeconomic perspectives when designing socially impactful systems.

Oversight bodies must empower cultural, gender, and economic diversity to resist dominant group biases and blind spots.

Broader participation takes many forms. Crowdsourcing constitutional principles allows societies to imbue AI with shared values. Global regulatory standards developed through participatory processes earn legitimacy worldwide. Codetermination gives workers influence over automation's impacts.

Human inputs keep AI aligned with human priorities. Multipolar governance networks distributing oversight across institutions and sectors provide checks against excessive concentrations of power. Democratic deliberation through collective debate steers progress towards justice.

By opening technology's development to its effects on humanity, AI can enable liberation over oppression. But this requires determination to share agency. However difficult, inclusion and participation are prerequisites for steering this epochal innovation towards rights and wellbeing. With cooperation and courage, wise governance can secure AI's benefits for all.

Using Public Benefit Financial instruments to prevent catastrophic results as imagined by Dalle3

Using Public Benefit Financial instruments to prevent catastrophic results

By creating a "public benefit corp" that provides a specialty mutual liability insurance market for large language models, the companies involved would be pushed to be much more prudent. Part of the money raised could also finance the compliance and regulatory authorities, and fund monitoring, surveillance and key partners to ensure nefarious parties cannot succeed.

As LLMs become more powerful and widely deployed, there is a growing risk that they could be misused in ways that cause public harm, as mentioned already, whether intentionally or accidentally. To manage this risk, LLM providers should be required to carry insurance that covers potential public liabilities from LLM failures or misuse.

Rather than leaving this to an unfettered private market, insurance requirements should be thoughtfully designed through a public interest approach, not just regulatory capture by industry. Minimum liability coverage levels should be mandated for LLM providers proportional to the scale and risks of their systems.

To mutualise risk, pools and structures like government-backed reinsurance may be needed, as private reinsurers alone may lack capacity for systemic risks. Actuarial expertise will be critical in pricing and managing this systemic liability risk.

Overall, public liability insurance for LLM providers can help ensure that those developing and profiting from these technologies internalize the risks they create. But the insurance system must be designed proactively based on public interest, not narrowed private interests, to effectively cover potential harms from LLMs gone rogue or misused.
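The mechanism above can be sketched in miniature. The sketch below is purely illustrative: the coverage tiers, the per-user figure, the expected loss rate and the loading factor are all made-up assumptions, not actuarial practice, but they show how minimum cover and premiums could be made proportional to a provider's scale and risk.

```python
# Hypothetical sketch of a mutual pool setting minimum liability cover and
# premiums proportional to an LLM provider's reach and a coarse risk tier.
# All figures and formulas are illustrative assumptions.

def minimum_cover(monthly_users: int, risk_tier: str) -> float:
    """Minimum liability cover (USD), scaling with user reach and risk tier."""
    tier_multiplier = {"low": 1.0, "medium": 3.0, "high": 10.0}[risk_tier]
    # Assumed $0.50 of mandated cover per monthly user, scaled by risk tier.
    return monthly_users * 0.50 * tier_multiplier

def annual_premium(cover: float, expected_loss_rate: float,
                   load: float = 0.25) -> float:
    """Expected annual loss plus a loading for expenses and systemic reserve."""
    return cover * expected_loss_rate * (1 + load)

cover = minimum_cover(monthly_users=10_000_000, risk_tier="high")
premium = annual_premium(cover, expected_loss_rate=0.002)
print(f"Minimum cover: ${cover:,.0f}")     # Minimum cover: $50,000,000
print(f"Annual premium: ${premium:,.0f}")  # Annual premium: $125,000
```

The design point is the incentive structure: because the premium scales with cover and expected losses, a provider that reduces its system's risk (and can demonstrate it to the pool's actuaries) directly lowers its own costs.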

However, creating a PBC at the scale needed will be challenging, but it is not insurmountable. We can make it happen!

Governing AI: Recent Global Cooperation AI safety summit in UK imagined by Dalle3

Recent Global Cooperation

While significant hazards remain, momentum towards cooperative AI governance offers hope for securing its benefits responsibly. Recent years have seen major international gatherings aimed at aligning innovation with human rights through shared principles and oversight policies. Continued inclusive effort is key to ethical progress.

In 2020 the Global Partnership on AI formed to guide responsible development centred on human rights, transparency, accountability and inclusion. Its working groups investigate policy domains while formulating recommendations. A 2019 OECD agreement established pioneering AI principles endorsed by democratic nations.

Visionary initiatives like the Vatican's Rome Call for Ethics and UNESCO's Ethics of AI further articulated ethical standards aligned with social justice. Regional bodies like the EU and African Union move towards regulatory frameworks upholding human dignity. Global hubs like AI For Humanity and APeC AI connect researchers worldwide to advance beneficial applications.

Most recently, Prime Minister Sunak hosted the pivotal 2023 UK AI Safety Summit (although the US jumped the gun and wanted to write the AI rules), engaging government teams and experts from 28 nations. The resulting Bletchley Declaration acknowledged AI's potential for catastrophic misuse and committed signatories to cooperate internationally on preventative policies, safety protocols and governance mechanisms. A 2024 follow-up summit in South Korea promises continued momentum.

This growing cooperation indicates consensus that equitable access to AI's benefits requires reasonable constraints on risks. But progress depends on principled compromises reconciling competing aims. To succeed, inclusive governance grounded in democratic ideals and human rights must guide innovation away from oppression towards liberation.

Governing AI leadership via Dalle3

The Need for Leadership

While global cooperation offers hope, leadership by principled democratic powers remains essential to prevent authoritarian capture of AI. Left unrestrained, far-right or far-left parties and totalitarian regimes will exploit these technologies for social control and for military or political dominance. Preventing a destabilising AI arms race requires proactive partnerships among rights-respecting nations.

The world MUST urgently formulate norms guiding conscientious innovation while prohibiting clearly unethical applications.

Proactive collaboration on oversight frameworks anchored in transparency, accountability and human dignity can steer AI towards liberation, not oppression. But absent assertive leadership, progress risks faltering.

No one country can alone govern this epochal global technology. But steering bodies led by democratic role models can set norms and incentives influencing worldwide development.

Europe's pioneering AI Act and the recent US executive order demonstrate how assertive leadership can encourage principled progress. But much more cooperation is needed.

The future trajectory of AI remains uncertain.

Its unprecedented capabilities could uplift humanity through knowledge and opportunity, or usher in dystopian oppression.

Outcomes depend on governance policies that maximize benefit and minimize harm. But with ethical leadership and inclusive cooperation, the threats of uncontrolled AI proliferation can be surmounted to forge a more just world for all.

The path ahead remains demanding, but recent momentum offers hope that humanity can navigate the AI revolution towards broad inclusion and prosperity through ethics and wisdom. Sustained inclusive cooperation to direct this epochal breakthrough away from injustice can realize its monumental potential to uplift humanity.

Hope this newsletter challenged your thinking and maybe created some ideas that are actionable and maybe can help us to join forces to #FixTheWorld.

Feel free to share this newsletter with others: FixTheWorld.4Good.space


References:

Global Leaders Warn A.I. Could Cause ‘Catastrophic’ Harm https://www.nytimes.com/2023/11/01/world/europe/uk-ai-summit-sunak.html

Biden Issues Executive Order to Create A.I. Safeguards https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html

A.I. Muddies Israel-Hamas War in Unexpected Way https://www.nytimes.com/2023/10/28/business/media/ai-muddies-israel-hamas-war-in-unexpected-way.html

News Group Says A.I. Chatbots Heavily Rely on News Content https://www.nytimes.com/2023/10/31/business/media/news-artificial-intelligence-chatbots.html

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Five ways AI might destroy the world: ‘Everyone on Earth could fall over dead in the same second’ https://www.theguardian.com/technology/2023/jul/07/five-ways-ai-might-destroy-the-world-everyone-on-earth-could-fall-over-dead-in-the-same-second

The First Guns: How Gunpowder Overcame the Sword https://www.thecollector.com/first-guns/#

https://www.npr.org/2022/06/01/1102239642/school-shooting-dunblane-massacre-uvalde-texas-gun-control

On gun violence, the United States is an outlier 31 October 2023 THE INSTITUTE FOR HEALTH METRICS AND EVALUATION https://www.healthdata.org/news-events/insights-blog/acting-data/gun-violence-united-states-outlier

Rome Call for AI Ethics https://www.romecall.org

The White House Is Preparing for an AI-Dominated Future https://www.theatlantic.com/technology/archive/2023/10/biden-white-house-ai-executive-order/675837/

What the data says about gun deaths in the U.S. https://www.pewresearch.org/short-reads/2023/04/26/what-the-data-says-about-gun-deaths-in-the-u-s/

Gun Control, Explained A quick guide to the debate over gun legislation in the United States NYTimes 26Jan23 https://www.nytimes.com/explain/2023/gun-control

An Overview of Catastrophic AI Risks, Center for AI Safety Berkeley

How AI can learn from the law: putting humans in the loop only on appeal https://www.nature.com/articles/s41746-023-00906-8

Facial recognition: top 7 trends (tech, vendors, use cases) https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/biometrics/facial-recognition

Data & Society — Democratizing AI: Principles for Meaningful Public Participation https://datasociety.net/library/democratizing-ai-principles-for-meaningful-public-participation/
