
[Signal-Alignment] Articulating Risks of AI project #6

@liondw

Description


Part of the purpose of the Signal-Alignment project is to help everyone catch up on the wider alignment issue.

This is just the initial outline; it requires heavy compression.

Simple AI-generated presentations:
https://gamma.app/public/4n03bbzmrgq8nrv
https://gamma.app/public/cxwkcy3ulil26f6

Articulating the Risks of AI project

(Summary of the first half of the AGI Unleashed video)
https://www.youtube.com/watch?v=fKgPg_j9eF0

  • Existential Risks
  • Social and Economic Turmoil
  • What are autonomous AIs?
  • Pace and scale of AI development feedback loop
    (advancement in one area is advancement in all) (leading to AGI)
  • The Control Problem (What is AI alignment)
  • Incomplete Solutions
  • Arguments for/against AI alignment (an updated "bad alignment take bingo" could be turned into a long-scroll post shareable on social media)
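The "feedback loop" bullet above (advancement in one area is advancement in all) might be easier to present with a toy numerical sketch. This is a hypothetical model with made-up numbers, not a claim about real growth rates: each research area gains a fraction of the *combined* capability each step, so growth compounds instead of staying linear.

```python
# Toy model of the AI development feedback loop (hypothetical numbers):
# progress in any one area feeds back into all areas, so total
# capability grows geometrically rather than linearly.

def simulate(areas, spillover=0.05, steps=10):
    """Each step, every area grows by `spillover` times the combined total."""
    for _ in range(steps):
        total = sum(areas.values())
        areas = {name: cap + spillover * total for name, cap in areas.items()}
    return areas

# Three illustrative areas, all starting at the same capability level.
caps = {"language": 1.0, "vision": 1.0, "code": 1.0}
final = simulate(caps)
print(final)  # every area ends well above its starting value of 1.0
```

With three areas and a 5% spillover, the total multiplies by 1.15 each step, i.e. roughly 4x after ten steps; the point for the presentation is the shape of the curve, not the specific numbers.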

Also incorporate visualizations from the A.I. Dilemma talk (this source has stronger citations, so it could be a better bridge to understanding for more official avenues):
https://www.youtube.com/watch?v=xoVJKj8lcNQ

  • Pace and scale of AI development feedback loop
    (advancement in one area is advancement in all) (leading to AGI)
  • Threats and Risks of powerful AI models
  • What is a GLLMM AI (generative large language multi-modal model)?
  • 2024 will be the last human election
  • GLLMMs show new emergent properties with increased scale
    (new languages, rapidly improving theory of mind, arithmetic, research-level chemistry)
  • RLHF and other alignment methods / incomplete solutions (a comment on the Swiss-cheese approach?)
  • AI making stronger AI
  • more notes
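The "Swiss cheese" bullet could also be backed by a one-line calculation. A minimal sketch with hypothetical failure rates: if several imperfect safety layers are stacked and a failure must slip through every one, the residual risk multiplies down — assuming the layers fail independently, which real alignment methods may not.

```python
# Toy "Swiss cheese" calculation (hypothetical per-layer failure rates):
# a failure only gets through if it slips past every layer, so the
# combined risk is the product of the individual slip-through rates --
# valid only under the (strong) assumption that layers fail independently.

def residual_risk(failure_rates):
    """Probability a failure slips through all layers, if layers are independent."""
    risk = 1.0
    for p in failure_rates:
        risk *= p
    return risk

layers = [0.2, 0.3, 0.5]  # hypothetical slip-through rates for three layers
print(residual_risk(layers))  # roughly 0.03, far below any single layer
```

For the talk, the caveat is the interesting part: correlated holes (layers that fail in the same situations) make the real residual risk much higher than the independent-layers product suggests.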


Metadata

Status: 📋 Backlog