Aug 3, 2025


The Second Case for Artificial Cultural Intelligence (ACI)

Why the Future of AI Demands Plural Knowledge Systems

Blog

Tech & Innovation

TL;DR: AI models reflect their knowledge foundations. AlphaEarth and recent research on LLM bias both reveal a need for Artificial Cultural Intelligence—systems built for and by diverse epistemologies.

What Did AlphaEarth Foundations Change?

Google DeepMind’s AlphaEarth Foundations now maps the planet through AI that learns directly from petabytes of Earth observation data (see the original tweet I first stumbled upon here). From what I gathered, the model creates a global “digital twin” (which reminds me of my article Why Tuvalu Needs A Digital Twin), compressing multiple sensor types—optical, radar, LiDAR—into compact, high-density embedding vectors for every patch of land and coastal water. The result is an unsupervised, sensor-driven framework for reading the state of Earth across time and space.

AlphaEarth sidesteps the need for labeled training data. It encodes patterns and relationships as they actually appear in the physical world, not filtered through language or cultural narrative. This approach yields annual, pixel-level coverage from 2017 onward, ready for deep analytics without the bias of manual labeling. The model’s structure allows for semantic search, anomaly detection, and mapping that operate independently of existing human categories or taxonomies.
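
To make this concrete, here is a minimal sketch of how similarity search and change detection can work over per-pixel embedding vectors. The 64-dimensional unit vectors and the grid below are random stand-ins I generated for illustration, not AlphaEarth’s actual API or data:

```python
# Illustrative only: AlphaEarth's real embeddings live in Google's geospatial
# stack; here we fabricate a small grid of unit-normalized 64-dim vectors.
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 100, 100, 64                              # hypothetical grid and embedding size
emb = rng.normal(size=(H, W, D))
emb /= np.linalg.norm(emb, axis=-1, keepdims=True)  # unit vectors: dot product = cosine

# Semantic search: cosine similarity of every pixel to a query pixel's embedding.
query = emb[10, 20]                                 # "find land that looks like this patch"
similarity = emb @ query                            # (H, W) map of cosine similarities
matches = np.argwhere(similarity > 0.9)             # pixel coordinates most like the query

# Change detection across years: low similarity between the same pixel's
# embeddings in consecutive annual layers flags likely land transformation.
emb_next = rng.normal(size=(H, W, D))
emb_next /= np.linalg.norm(emb_next, axis=-1, keepdims=True)
change_score = 1.0 - np.einsum("hwd,hwd->hw", emb, emb_next)
```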

This system represents a material, pattern-based way of knowing. The AI’s “language” is not words but the structure and transformation of the land itself.

AI and Cultural Distance: The Evidence

While models like AlphaEarth process the world without human-labeled input, most widely used AIs—LLMs such as ChatGPT and Claude—still draw from textual archives shaped by Western perspectives. A recent LinkedIn post by Ben Page, which one of my mentors passed on to me (referencing Harvard research), reveals a consistent pattern: the alignment between AI-generated responses and human answers drops as cultural distance from the United States increases.

The chart in Page’s post demonstrates that in nations further from the U.S. in worldview and custom, the correlation between GPT outputs and actual human sentiment falls off significantly. People in Australia, Canada, or the UK encounter high congruence. People in Egypt, Pakistan, or Jordan encounter the opposite. This drift occurs not simply at the level of translation, but at the level of how meaning, values, and reality are constructed.
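
The underlying analysis pattern is simple to sketch: score each country by the correlation between model outputs and human survey answers on the same items, then compare those scores against cultural distance from the U.S. Every number below is an invented placeholder that mimics the high- and low-congruence cases, not the study’s data:

```python
# Sketch of the per-country alignment metric behind charts like this one.
# All data here is synthetic; the real study uses survey responses.
import numpy as np

def country_alignment(model_answers: np.ndarray, human_answers: np.ndarray) -> float:
    """Pearson correlation between model and human answers on the same survey items."""
    return float(np.corrcoef(model_answers, human_answers)[0, 1])

rng = np.random.default_rng(1)
human_uk = rng.uniform(1, 10, 50)                # 50 survey items, scored 1-10
model_uk = human_uk + rng.normal(0, 0.5, 50)     # model tracks human sentiment closely
human_eg = rng.uniform(1, 10, 50)
model_eg = rng.uniform(1, 10, 50)                # model answers essentially unrelated

print(country_alignment(model_uk, human_uk))     # high congruence (close to 1.0)
print(country_alignment(model_eg, human_eg))     # near zero: the cultural-distance gap
```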

A model’s “truth” becomes less reliable the more it leaves its own cultural context. This creates hidden friction in global deployment, undermining accuracy and relevance for the majority of humanity.

Why Artificial Cultural Intelligence (ACI) Is Necessary

Every AI is an artifact of its epistemology. AlphaEarth functions as an observer of material reality, deriving patterns and trends from unmediated environmental data.

LLMs function as archivists of linguistic and cultural material, embedding the associations, logic, and cognitive frames of their primary sources. No knowledge system is neutral. Logic, metaphors, and even problem-solving approaches reflect inherited ways of seeing.

Mainstream LLMs—trained on vast but Western-centric data—carry the logic, priorities, and unseen assumptions of their sources. People outside those contexts experience AI as subtly alien, even when it speaks their language.

Artificial Cultural Intelligence, which I originally made a case for here and on Hackernoon, requires the intentional design and training of models rooted in the epistemologies and knowledge systems of specific cultures and traditions. This means (a hypothetical data schema follows the list):

  • Sourcing training data from oral history, ritual practice, ecological observation, and community narrative

  • Defining categories and queries according to the values and metaphors present in local worldview

  • Prioritizing epistemic sovereignty—enabling communities and nations to own and govern the logic that shapes their AI
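
As a rough illustration of the first and third bullets, a culturally sourced training record might carry provenance and governance fields alongside the text itself. The schema below is my own hypothetical, not an existing standard:

```python
# Hypothetical record format for culturally sourced training data.
# Field names are my illustration of the bullets above, not a standard.
from dataclasses import dataclass, field

@dataclass
class CulturalRecord:
    text: str                      # transcribed oral history, narrative, observation
    source_type: str               # e.g. "oral_history", "ritual", "ecological_obs"
    community: str                 # the knowledge-holding community
    language: str
    consent: bool                  # recorded and shared with community permission
    steward: str                   # who governs reuse (epistemic sovereignty)
    local_categories: list[str] = field(default_factory=list)  # community's own taxonomy

record = CulturalRecord(
    text="Account of seasonal reef fishing practice...",
    source_type="oral_history",
    community="Example Atoll Council",             # placeholder, not a real body
    language="tvl",
    consent=True,
    steward="Example Atoll Council",
    local_categories=["sea knowledge", "seasonal calendar"],
)
```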

AlphaEarth as Proof of Multiple Knowledge Systems

AlphaEarth Foundations demonstrates that it is possible to build robust, high-performing AI that learns from non-linguistic sources. The model’s strength lies in its capacity to encode raw reality, not merely human consensus or narrative. This shows that AI can function as a “reader” of the land, not just an interpreter of stories.

When this insight is mapped to cultural knowledge, the implications become clear: the era of universal, one-size-fits-all AI is reaching its limits. The future lies in plural systems (a toy routing sketch follows this list):

  • Linguistic models that serve communities as their own OS, not as “translations” of another culture’s frame

  • Material models that ground truth in the observed, not the described

  • Frameworks that allow for interconnection and synthesis across domains, without flattening difference
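
As a toy sketch of what plural systems could mean in practice, imagine a registry that routes each query to the knowledge system of the requesting community instead of defaulting to one universal model. Every name below is invented:

```python
# Toy router for plural knowledge systems. Registry contents are invented.
from typing import Optional

MODEL_REGISTRY = {
    "tuvaluan-linguistic": "community-governed-llm",  # a community's own "OS"
    "global-material": "alphaearth-style-model",      # grounded in observation
    "default": "general-purpose-llm",                 # the current one-size-fits-all
}

def route(query: str, community: Optional[str]) -> str:
    """Prefer a community's own model; fall back to the universal one."""
    if community and f"{community}-linguistic" in MODEL_REGISTRY:
        return MODEL_REGISTRY[f"{community}-linguistic"]
    return MODEL_REGISTRY["default"]

print(route("When does the fishing season begin?", "tuvaluan"))
# -> "community-governed-llm": answered from the community's own frame
```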

Building Culturally Sovereign AI

AI will increasingly require frameworks for cultural self-definition, not just localization.

This involves:

  • Supporting the creation of national, indigenous, and practice-based LLMs

  • Developing data infrastructures that allow communities to capture, encode, and validate their own ways of knowing

  • Creating protocols for dialogue across knowledge systems—so AI can operate as a connector, not a colonizer (a minimal message-format sketch follows this list)
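
For the last bullet, one can imagine a minimal message format in which every exchanged claim carries its epistemic frame, so neither system silently absorbs the other’s logic. This is a thought sketch under my own assumptions, not an existing protocol:

```python
# Thought sketch: claims exchanged between knowledge systems keep their frame.
import json

def wrap_claim(claim: str, frame: str, community: str, evidence_kind: str) -> str:
    """Package a claim with its originating worldview and evidence type."""
    return json.dumps({
        "claim": claim,
        "epistemic_frame": frame,        # e.g. "oral tradition", "satellite observation"
        "community": community,
        "evidence_kind": evidence_kind,
        "note": "interpret within the originating frame, not a universal one",
    })

# An ACI model and a material model could describe the same reef, each in its own frame.
msg_a = wrap_claim("The reef flat floods earlier each year.",
                   "oral tradition", "Example Atoll Council", "elder testimony")
msg_b = wrap_claim("Embedding drift detected over reef pixels, 2017 onward.",
                   "satellite observation", "AlphaEarth-style model", "annual embeddings")
```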

I’ve actually already started working on these sorts of projects (see my posts on Twitter/X).


Closing Insight

The next stage in AI is not a question of scale, but of orientation. When models reflect the knowledge systems of those they serve, technology becomes an extension of living culture and tradition—not a substitute or override.

Explore further, or join the effort to shape truly plural, culturally sovereign AI.

References & Further Reading


Subscribe to my AI newsletter

AI signals, essays, and tool/stack reviews. 3x a week.

© Copyright 2025 George (Siosi) Samuels
