AI & AGI Glossary

Comprehensive dictionary of artificial intelligence and AGI terminology with simple explanations and cross-references

📚 AI Terminology
by Independent Research & Analysis
Published Dec 20, 2024 · Updated Jul 27, 2025

This analysis represents synthesis of expert opinions and publicly available research. The author is not a credentialed AI researcher but aims to provide accurate aggregation of expert consensus.

57 Total Terms · 20 Alphabet Coverage · 5 Categories · 184 Cross-References

