Geography and AI

“AI techniques, if properly applied, should also allow researchers to spend a greater proportion of their time on creative thinking and less on technical drudgery. As with any set of tools, the techniques of AI cannot replace a hard-earned understanding of some phenomenon and will almost certainly be overvalued and misused by some practitioners. [Nevertheless], if used with care, the techniques of AI will prove of great benefit to such an applied, problem solving discipline as geography.” (Smith, 1984)

GeoAI as spatially-explicit models

Janowicz et al.’s (2020) definition:

“[…] utilizes advancements in techniques and data cultures to support the creation of more intelligent geographic information as well as methods, systems, and services for a variety of downstream tasks. […] address why (geo-)spatial matters by making a case for spatially explicit models.”

Spatially-explicit models (Goodchild, 2001)

  • are not invariant under relocation (invariance test)
  • include spatial representations in their implementations (representation test)
  • include spatial concepts in their formulations (formulation test)
  • change input spatial structure into different output spatial structure (outcome test)

~ Geospatial “for” AI

GeoAI: the “moonshot”


“Can we develop an artificial GIS analyst that passes a domain-specific Turing Test by 2030?” (Janowicz et al., 2020)


~ AI “for” Geospatial

Li et al. (2025) currently list 24 projects

Google’s Geospatial Reasoning

Chapter 1
How did we get here?

Artificial Intelligence

“When programmable computers were first conceived, people wondered whether such machines might become intelligent, over a hundred years before one was built (Lovelace, 1842).” (Goodfellow, Bengio and Courville, 2016, p. 21)

  • Turing (1950) paper on learning machines
  • John McCarthy’s workshop at Dartmouth College (1956)
  • … although Schmidhuber (2022) starts from Leibniz’s chain rule (1676), and Legendre and Gauss’s least squares (c.1805)
  • Rosenblatt (1962) multi-layer perceptron
  • Ivakhnenko, Lapa, et al. (1965) deep learning
  • Amari (1967) deep learning with stochastic gradient descent
  • Linnainmaa (1970) proposes backpropagation

Portrait of Ada King, Countess of Lovelace, by Alfred Edward Chalon (1780–1860), public domain via Wikimedia Commons

It’s (not just) the hardware, stupid!

The recent breakthroughs of deep learning rely on algorithms largely dating back to the past millennium, and would have been impossible without continually improving and accelerating computer hardware. (Schmidhuber, 2022)

  • 2012: Krizhevsky, Sutskever and Hinton (2017) showcase AlexNet
  • 2016: PyTorch (Meta AI) Python library (and other languages)

The curse of dimensionality

“the curse of dimensionality […] which has hung over the head of the physicist and astronomer for many a year” (Bellman, 1966)

“in many applications with even modest dimension \(d\), the number of samples would be bigger than the number of atoms in the universe” (Bronstein et al., 2021)
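A rough back-of-the-envelope illustration (the figures are my own, not from Bronstein et al.): sampling a unit hypercube at just 10 points per dimension requires \(10^d\) samples, so even a modest \(d = 100\) gives

\[ 10^{100} \text{ samples} \;\gg\; \sim 10^{80} \text{ atoms in the observable universe} \]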

“it’s neither obvious that we should be able to fit deep networks nor that they should generalize. A priori, deep learning shouldn’t work. And yet it does.” (Prince, 2023)

by Prince (2023), Creative Commons CC-BY-NC-ND

Priors

“For machine learning to be useful […] we need a way to choose a specific distribution from amongst the infinitely many possibilities. The preference for one choice over others is called inductive bias, or prior knowledge, and plays a central role in machine learning. [For instance,] when detecting objects in images, we can introduce prior knowledge that the identity of an object is generally independent of its location within the image.” (Bishop and Bishop, 2023)

What priors do we need in GeoAI?

  • include spatial concepts in their formulations (Goodchild, 2001)
    • first-order effects, spatial heterogeneity, non-stationarity
    • second-order effects, influence of neighbours, spatial autocorrelation
    • distance, adjacency, connectivity
    • scale and MAUP

Chapter 2
Geospatial “for” AI

Geographic priors

  • Mai, Xuan, et al. (2023) (Sphere2Vec)
    • spherical geometry prior
    • distance‑preserving prior
    • multi-scale prior
    • … and location as a prior

Sphere2Vec architecture by Mai, Xuan, et al. (2023)
  • De Sabbata and Liu (2023) (NAGAE)
    • local neighbours dependency prior
    • neighbour order invariance prior
    • neighbour importance prior (GAT)
    • primacy of attributes over structure
    • … no location prior!

NAGAE architecture by De Sabbata and Liu (2023)

Graphs in GIScience

Graphs have long been used in geography and GIScience

  • to represent networks
    • transportation networks
      • street networks (geographic)
      • space syntax
    • social networks
  • to encode proximity
    • distance weights

Contains National Statistics data Crown copyright and database right 2015; Contains Ordnance Survey data Crown copyright and database right 2015. Data by OpenStreetMap, under ODbL, and by Boeing (2020), under CC0 1.0.

Graph Neural Networks (GNN) were developed in machine learning

  • generalisation of Convolutional Neural Networks
  • “deep neural networks on graphs other than regular grids” (Bruna et al., 2014)

Graph neural networks

  • Bruna et al. (2014) proposed a spectral construction approach
  • Kipf and Welling (2017) proposed a message passing approach
    • Graph Convolutional Network (GCN) layer for a node \(v\), with weights \(W^{(l)}\) and activation function \(\sigma\), defined as

\[ h_{v}^{(l)} = \sigma \left( W^{(l)} \sum_{u \in N(v)} \frac{1}{|N(v)|} h_{u}^{(l-1)} \right) \]

  • Hamilton, Ying and Leskovec (2017) proposed a generalisation
    • in GraphSAGE a simple mean is used as aggregate and sum as combine functions

\[ h_{v}^{(l)} = \sigma \left( W^{(l)} \ {\scriptstyle COMBINE} \left( h_{v}^{(l-1)}, {\scriptstyle AGGREGATE} \left( \bigl\{ h_{u}^{(l-1)}, \forall u \in N(v) \bigr\} \right) \right) \right) \]
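A minimal sketch of both formulations (my own illustration in plain PyTorch, not the original implementations; the toy graph and weights are assumed):

```python
# Mean-aggregation message passing for the two layer equations above.
import torch

def gcn_like_layer(h, neighbours, W, sigma=torch.relu):
    """h: (N, F) node features; neighbours: list of neighbour index lists; W: (F, F') weights.
    Each node averages its neighbours' features, then applies the weights and activation."""
    agg = torch.stack([h[nbrs].mean(dim=0) for nbrs in neighbours])  # (1/|N(v)|) * sum over N(v)
    return sigma(agg @ W)

def sage_like_layer(h, neighbours, W, sigma=torch.relu):
    """GraphSAGE-style layer: COMBINE (here, a sum) the node's own features
    with the AGGREGATE (here, a mean) of its neighbours' features."""
    agg = torch.stack([h[nbrs].mean(dim=0) for nbrs in neighbours])  # AGGREGATE
    return sigma((h + agg) @ W)                                      # COMBINE, transform, activate

# toy example: a path graph 0-1-2 with 4 features per node
h = torch.rand(3, 4)
neighbours = [[1], [0, 2], [1]]
W = torch.rand(4, 8)
print(gcn_like_layer(h, neighbours, W).shape)   # torch.Size([3, 8])
print(sage_like_layer(h, neighbours, W).shape)  # torch.Size([3, 8])
```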

Example 1: Geodemographics

  • Cluster areas with similar demographics (see e.g., Webber and Burrows, 2018)
  • Carver (1998) proposed adjusting fuzzy c-means membership based on neighbours

\[ m'_i=\alpha m_i+\beta\frac{1}{A}\sum_j^n{w_{ij}m_j} \]

  • Mason and Jacobson (2007) suggested adjusting the membership at each iteration
  • Grekousis (2021) introduced a distance-based neighbourhood

Intuition: is membership update akin to graph convolution?

\[ h_{v}^{(l)} = \sigma \left( W^{(l)} \ {\scriptstyle COMBINE} \left( h_{v}^{(l-1)}, {\scriptstyle AGGREGATE} \left( \bigl\{ h_{u}^{(l-1)}, \forall u \in N(v) \bigr\} \right) \right) \right) \]
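The parallel can be made explicit with a small sketch (my own illustration, with assumed \(\alpha\) and \(\beta\) values, interpreting \(A\) as the sum of each area's weights): the membership update is an aggregate-then-combine step over a spatial weights matrix.

```python
import numpy as np

def adjust_membership(M, W, alpha=0.7, beta=0.3):
    """M: (n_areas, n_clusters) fuzzy memberships; W: (n_areas, n_areas) spatial weights.
    AGGREGATE: weighted mean of the neighbours' memberships;
    COMBINE:   weighted sum of each area's own and aggregated membership."""
    A = W.sum(axis=1, keepdims=True)          # normalising constant per area
    neighbour_term = (W @ M) / A              # AGGREGATE over neighbours
    return alpha * M + beta * neighbour_term  # COMBINE with own membership

# toy example: 3 areas, 2 clusters, contiguity along a line 0-1-2
M = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(adjust_membership(M, W))  # rows still sum to 1 when alpha + beta = 1
```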

NAGAE

Graph AutoEncoder architecture (Kipf and Welling, 2016) used for
spatial geodemographic classification (De Sabbata and Liu, 2023)

Map data source: CDRC LOAC Geodata Pack by the ESRC Consumer Data Research Centre; Contains National Statistics data Crown copyright and database right 2015; Contains Ordnance Survey data Crown copyright and database right 2015.

Results

Data source: CDRC LOAC Geodata Pack by the ESRC Consumer Data Research Centre; Contains National Statistics data Crown copyright and database right 2015; Contains Ordnance Survey data Crown copyright and database right 2015.

Example 2: Urban form

Learn effective representations of urban form from street network

  • local neighbours dependency prior
  • neighbour order invariance prior
  • primacy of structure over attributes
  • … no location prior!

Learning urban form via GAE

(De Sabbata, Ballatore, Liu, et al., 2023)

Pre-processing

  • random 1% of nodes from 137 UK cities
  • an ego-graph for each node
    • 500m network distance (min 8 nodes)
    • junctions as nodes
      • num. of segments as an attribute
      • bounded min-max (1 to 4)
    • street segments as edges
      • length as an edge attribute
      • bounded min-max (50m to 500m)
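A minimal sketch of this pre-processing (assuming OSMnx-style GraphML with “length” edge attributes and “street_count” node attributes; the file name and the handling of small ego-graphs are my own assumptions, not the authors' pipeline):

```python
import random
import networkx as nx
import osmnx as ox

def bounded_minmax(value, lo, hi):
    """Clip to [lo, hi], then rescale to [0, 1]."""
    return (min(max(value, lo), hi) - lo) / (hi - lo)

G = ox.load_graphml("city.graphml")                              # hypothetical input file
sampled = random.sample(list(G.nodes), k=max(1, len(G) // 100))  # random 1% of junctions

ego_graphs = []
for node in sampled:
    # all junctions within 500m network distance of the sampled junction
    ego = nx.ego_graph(G, node, radius=500, distance="length")
    if ego.number_of_nodes() < 8:                                # here: simply skip small ego-graphs
        continue
    for _, data in ego.nodes(data=True):
        data["x"] = bounded_minmax(data.get("street_count", 1), 1, 4)  # num. of segments
    for _, _, data in ego.edges(data=True):
        data["edge_x"] = bounded_minmax(data["length"], 50, 500)       # segment length
    ego_graphs.append(ego)
```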

Model

  • PyTorch Geometric
  • three-layer encoder
    • two GINE (Hu et al., 2020) layers
      • 64 hidden features
    • one linear layer
      • 64 features to 2 embeddings
  • trained for 1000 epochs
    • AdamW optimiser
    • 0.0001 learning rate
    • random 80% of ego-graphs
  • tested on remaining 20%
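A comparable encoder can be sketched in PyTorch Geometric as follows (layer sizes taken from the slide; the MLPs inside the GINE layers and the optimiser setup are assumptions, not the released code):

```python
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GINEConv

class StreetNetworkEncoder(torch.nn.Module):
    def __init__(self, in_channels=1, edge_dim=1, hidden=64, out=2):
        super().__init__()
        # two GINE layers with 64 hidden features each
        self.conv1 = GINEConv(Sequential(Linear(in_channels, hidden), ReLU(), Linear(hidden, hidden)), edge_dim=edge_dim)
        self.conv2 = GINEConv(Sequential(Linear(hidden, hidden), ReLU(), Linear(hidden, hidden)), edge_dim=edge_dim)
        # one linear layer from 64 features to 2 embedding dimensions
        self.lin = Linear(hidden, out)

    def forward(self, x, edge_index, edge_attr):
        x = self.conv1(x, edge_index, edge_attr).relu()
        x = self.conv2(x, edge_index, edge_attr).relu()
        return self.lin(x)

model = StreetNetworkEncoder()
optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)  # AdamW, 0.0001 learning rate
```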

Case study

Leicester (UK)

  • Population: 368,600 at the 2021 UK Census, an increase of 11.8% since 2011
  • Minority-majority city: 43.4% identify as Asian, 33.2% are White British
  • Area: about 73 km² (28 sq mi)
  • Simplified OSM street network data by Boeing (2020)

Results

Street network data by OpenStreetMap, under ODbL, and by Boeing (2020), under CC0 1.0

 

So, what is spatial about GeoAI?

Where do we (need to) “include spatial concepts in their formulations”? (Goodchild, 2001)

  • architectures
  • parameters
    • initialisation
    • regularisation
  • datasets
    • data augmentation
  • explainability
  • interpretability
  • learning approach
    • self-supervised
  • training algorithms
  • loss functions

We need to understand which geo/spatial symmetries (Bronstein et al., 2021) we should impose to learn useful functions.

  • These might be very different from e.g., learning object recognition functions!
    • scale separation vs MAUP?


“…but I thought we would be talking like ChatGPT and stuff?” (anonymous student, 2024)

Chapter 3
AI “for” Geospatial

How do generative models work?

Large Language Model (LLM) training process

  • Pre-training
    • learning to predict the next word in a sentence
    • … the “stochastic parrot” (Bender et al., 2021)
  • Fine-tuning
    • on a specific task such as
      • text summarisation
      • conversational chatbot
  • Reinforcement learning with human feedback (RLHF)
    • further training based on feedback provided by human assessors
  • Test-time compute (TTC)
    • “reasoning” models e.g. OpenAI’s o3
  • Retrieval augmented generation (RAG)

Next token prediction


LLMs estimate the most probable next token

(most modern LLMs actually use sub-word tokenisation)

  • “The capital of France” ► “is”
  • “The capital of France is” ► “Paris”
  • “The capital of France is Paris” ► “,”
  • “The capital of France is Paris,” ► “a”
  • “The capital of France is Paris, a” ► “European”
  • “The capital of France is Paris, a European” ► “city”
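A minimal sketch of this loop, using the small open GPT-2 model as an assumed stand-in (actual continuations depend on the model; modern LLMs use much larger vocabularies and sampling rather than greedy decoding):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The capital of France"
for _ in range(6):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits       # (1, sequence_length, vocabulary_size)
    next_id = logits[0, -1].argmax()          # most probable next (sub-word) token
    text += tokenizer.decode(next_id)
    print(repr(text))
```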

LLMs in GIScience

…but how do LLMs work?

LLMs are usually considered “black boxes”… GPT-4 is rumored to have 1.74T parameters

But recent years have seen a lot of work in mechanistic interpretability

Scaling Monosemanticity by Templeton et al. (2024) at Anthropic

Mechanistic interpretability

Probing for geographic information

Gurnee and Tegmark (2024) explored representations of spatial data using a linear probe.
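In essence (a minimal sketch with assumed inputs and file names, not the paper's code): extract the hidden activations at each placename's tokens, then fit a linear model from activations to coordinates and check how well it generalises.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# activations: (n_places, d_model) hidden states at each placename; coords: (n_places, 2) lat/lon
activations = np.load("placename_activations.npy")   # hypothetical files
coords = np.load("placename_coords.npy")

X_train, X_test, y_train, y_test = train_test_split(activations, coords, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(X_train, y_train)        # a linear (ridge) probe
print("R^2 on held-out placenames:", probe.score(X_test, y_test))
```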

Geospatial mechanistic interp

Results

Spatial autocorrelation as probe (De Sabbata, Roitero and Mizzaro, 2025)

Results

Spatial autocorrelation as probe (De Sabbata, Roitero and Mizzaro, 2025)
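The idea can be sketched as follows (my own illustration with assumed inputs, not the paper's implementation): compute Moran's I for each hidden dimension of the placename activations under a spatial weights matrix, and inspect the most spatially autocorrelated dimensions.

```python
import numpy as np

def morans_i(y, W):
    """Global Moran's I of values y (n,) under spatial weights W (n, n)."""
    z = y - y.mean()
    return len(y) * (z @ W @ z) / (W.sum() * (z @ z))

activations = np.load("placename_activations.npy")  # hypothetical file, (n_places, d_model)
W = np.load("spatial_weights.npy")                   # hypothetical file, e.g. k-nearest neighbours
per_dimension = np.array([morans_i(activations[:, j], W) for j in range(activations.shape[1])])
print("most spatially autocorrelated dimensions:", np.argsort(per_dimension)[-10:])
```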

Geospatial mechanistic interp

Results

Using Sparse AutoEncoders (De Sabbata, Roitero and Mizzaro, 2025)

Results

Using Sparse AutoEncoders (De Sabbata, Roitero and Mizzaro, 2025)
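As a minimal sketch of the technique (an overcomplete ReLU autoencoder with an L1 sparsity penalty on the features; sizes and coefficients are assumptions, not those used in the paper):

```python
import torch
from torch import nn

class SparseAutoEncoder(nn.Module):
    def __init__(self, d_model=768, d_features=8 * 768):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # overcomplete dictionary of features
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))          # sparse, hopefully monosemantic, features
        return self.decoder(features), features

sae = SparseAutoEncoder()
optimiser = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3                                        # assumed sparsity coefficient

def training_step(activations):                         # activations: (batch, d_model)
    reconstruction, features = sae(activations)
    loss = ((reconstruction - activations) ** 2).mean() + l1_weight * features.abs().mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```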

Some thoughts

  • The superposition hypothesis holds for geographical information
    • distributed across many polysemantic neurons
    • rather than in single monosemantic neurons
    • possibly alongside other information
    • difficult to isolate and interpret individual geographical entities and concepts
  • Sparse autoencoders improve interpretability (…maybe?)
    • from polysemantic structures to monosemantic features
    • improving interpretability for geospatial tasks
  • Applications
    • improving AI safety
    • understanding the internal workings of LLMs, including bias and diversity
    • more robust geographic question-answering and autonomous GIS

Chapter 4
Where do we go from here?

Some key points


  • LLMs have their own internal geographies (De Sabbata, Roitero and Mizzaro, 2025)
    • How does placename ambiguity impact LLMs’ learning and interpretability?
    • What scale(s) of geographical relationships and effects are encoded?
    • How does geographical information interact with other information?

What’s next in GeoAI?

Thank you for your attention


Dr Stef De Sabbata (she/her)

Associate Professor of Geographical Information Science at the School of Geography, Geology and the Environment

Research theme lead for Cultural Informatics at the Institute for Digital Culture

University of Leicester, University Road, Leicester, LE1 7RH, UK

Contact: s.desabbata@leicester.ac.uk

Check out my GitHub repos at: github.com/sdesabbata

(De Sabbata and Liu, 2023; De Sabbata, Roitero and Mizzaro, 2025)

Reviews of GeoAI

Some readings on AI ethics

References

Agarwal, M. et al. (2024) “General geospatial inference with a population dynamics foundation model,” arXiv preprint arXiv:2411.07207 [Preprint].
Amari, S. (1967) “A theory of adaptive pattern classifiers (Japanese version),” IEEE Transactions on Electronic Computers, (3), pp. 299–307.
Bellman, R. (1966) “Dynamic programming,” Science, 153(3731), pp. 34–37.
Bender, E.M. et al. (2021) “On the dangers of stochastic parrots: Can language models be too big?🦜,” in Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 610–623.
Bhandari, P., Anastasopoulos, A. and Pfoser, D. (2023) “Are large language models geospatially knowledgeable?” in Proceedings of the 31st ACM international conference on advances in geographic information systems. New York, NY, USA: Association for Computing Machinery (SIGSPATIAL ’23). doi:10.1145/3589132.3625625.
Bishop, C.M. and Bishop, H. (2023) Deep learning: Foundations and concepts. Springer Nature.
Boeing, G. (2020) “Global Urban Street Networks GraphML.” Harvard Dataverse. doi:10.7910/DVN/KA5HJ3.
Bommasani, R. et al. (2021) “On the opportunities and risks of foundation models,” arXiv preprint arXiv:2108.07258 [Preprint].
Boutayeb, A., Lahsen-cherif, I. and Khadimi, A.E. (2024) “A comprehensive GeoAI review: Progress, challenges and outlooks,” arXiv preprint arXiv:2412.11643 [Preprint].
Bronstein, M.M. et al. (2021) “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges,” arXiv preprint arXiv:2104.13478 [Preprint].
Bruna, J. et al. (2014) “Spectral networks and locally connected networks on graphs.” Available at: https://arxiv.org/abs/1312.6203.
Carver, S. (1998) “Fuzzy geodemographics: A contribution from fuzzy clustering methods,” in Innovations in GIS 5. CRC Press, pp. 141–149.
Cohn, A.G. and Blackwell, R.E. (2024) “Evaluating the ability of large language models to reason about cardinal directions (short paper),” in COSIT 2024. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPICS.COSIT.2024.28.
De Sabbata, S., Ballatore, A., Miller, H.J., et al. (2023) “GeoAI in urban analytics,” International Journal of Geographical Information Science. Taylor & Francis.
De Sabbata, S., Ballatore, A., Liu, P., et al. (2023) “Learning urban form through unsupervised graph-convolutional neural networks,” in Proceedings of the 2nd international workshop on geospatial knowledge graphs and GeoAI: Methods, models, and resources.
De Sabbata, S. and Liu, P. (2023) “A graph neural network framework for spatial geodemographic classification,” International Journal of Geographical Information Science, 37(12), pp. 2464–2486. doi:10.1080/13658816.2023.2254382.
De Sabbata, S., Roitero, K. and Mizzaro, S. (2025) “Geospatial mechanistic interpretability of large language models,” in Janowicz, K. et al. (eds.) Geography according to ChatGPT. IOS Press (Frontiers in artificial intelligence and applications).
Decoupes, R. et al. (2024) “Evaluation of geographical distortions in language models: A crucial step towards equitable representations,” arXiv preprint arXiv:2404.17401 [Preprint].
Devlin, J. et al. (2018) “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805 [Preprint]. Available at: https://arxiv.org/abs/1810.04805.
Feng, J. et al. (2024) “CityGPT: Empowering urban spatial cognition of large language models.” Available at: https://arxiv.org/abs/2406.13948.
Feng, S. et al. (2024) “Where to move next: Zero-shot generalization of llms for next poi recommendation,” in 2024 IEEE conference on artificial intelligence (CAI). IEEE, pp. 1530–1535.
Floridi, L. (2023a) “AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models,” Philosophy & Technology, 36(1), p. 15. doi:10.1007/s13347-023-00621-y.
Floridi, L. (2023b) “Machine Unlearning: Its Nature, Scope, and Importance for a Delete Culture,” Philosophy & Technology, 36(2), p. 42. doi:10.1007/s13347-023-00644-5.
Floridi, L. (2024) “Introduction to the special issues: The ethics of artificial intelligence: Exacerbated problems, renewed problems, unprecedented problems,” American Philosophical Quarterly, 61(4), pp. 301–307. doi:10.5406/21521123.61.4.01.
Fulman, N., Memduhoğlu, A. and Zipf, A. (2024) “Distortions in judged spatial relations in large language models.” Available at: https://arxiv.org/abs/2401.04218.
Gao, S. (2020) “A review of recent researches and reflections on geospatial artificial intelligence,” Geomatics and Information Science of Wuhan University, 45(12), pp. 1865–1874.
Gao, S., Hu, Y. and Li, W. (2023) Handbook of geospatial artificial intelligence. Boca Raton: CRC Press.
Goodchild, M. (2001) “Issues in spatially explicit modeling,” Agent-based models of land-use and land-cover change, pp. 13–17.
Goodfellow, I., Bengio, Y. and Courville, A. (2016) Deep learning. MIT press.
Grekousis, G. (2019) “Artificial neural networks and deep learning in urban geography: A systematic review and meta-analysis,” Computers, Environment and Urban Systems, 74, pp. 244–256.
Grekousis, G. (2021) “Local fuzzy geographically weighted clustering: A new method for geodemographic segmentation,” International Journal of Geographical Information Science, 35(1), pp. 152–174. doi:10.1080/13658816.2020.1808221.
Gurnee, W. and Tegmark, M. (2024) “Language models represent space and time.” Available at: https://arxiv.org/abs/2310.02207.
Hamilton, W., Ying, Z. and Leskovec, J. (2017) “Inductive representation learning on large graphs,” in Guyon, I. et al. (eds.) Advances in neural information processing systems. Curran Associates, Inc. Available at: https://proceedings.neurips.cc/paper_files/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf.
Hochmair, H.H., Juhász, L. and Kemp, T. (2024) “Correctness comparison of ChatGPT‐4, gemini, claude‐3, and copilot for spatial tasks,” Transactions in GIS [Preprint]. doi:10.1111/tgis.13233.
Hu, W. et al. (2020) “Strategies for Pre-training Graph Neural Networks.” arXiv. doi:10.48550/arXiv.1905.12265.
Hu, X. et al. (2024) “Toponym resolution leveraging lightweight and open-source large language models and geo-knowledge,” International Journal of Geographical Information Science, 0(0), pp. 1–28. doi:10.1080/13658816.2024.2405182.
Hu, Y. et al. (2019) “GeoAI at ACM SIGSPATIAL: Progress, challenges, and future directions,” Sigspatial Special, 11(2), pp. 5–15.
Hu, Y. et al. (2024) “A five-year milestone: Reflections on advances and limitations in GeoAI research,” Annals of GIS, 30(1), pp. 1–14.
Ilyankou, I. et al. (2024) “Do sentence transformers learn quasi-geospatial concepts from general text?” Available at: https://arxiv.org/abs/2404.04169.
Ivakhnenko, A.G., Lapa, V.G., et al. (1965) Cybernetic predicting devices.
Janowicz, K. et al. (2020) “GeoAI: Spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond,” International Journal of Geographical Information Science, 34(4), pp. 625–636. doi:10.1080/13658816.2019.1684500.
Janowicz, K., Sieber, R. and Crampton, J. (2022) “GeoAI, counter-AI, and human geography: A conversation,” Dialogues in Human Geography, 12(3), pp. 446–458.
Kang, Y., Gao, S. and Roth, R. (2022) “A review and synthesis of recent geoai research for cartography: Methods, applications, and ethics,” in Proceedings of AutoCarto, pp. 2–4.
Kipf, T.N. and Welling, M. (2016) “Variational graph auto-encoders.” Available at: https://arxiv.org/abs/1611.07308.
Kipf, T.N. and Welling, M. (2017) “Semi-supervised classification with graph convolutional networks.” Available at: https://arxiv.org/abs/1609.02907.
Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2017) “Imagenet classification with deep convolutional neural networks,” Communications of the ACM, 60(6), pp. 84–90.
Li, F., Hogg, D.C. and Cohn, A.G. (2024) “Advancing spatial reasoning in large language models: An in-depth evaluation and enhancement using the stepgame benchmark,” in Proceedings of the AAAI conference on artificial intelligence. (17), pp. 18500–18507.
Li, W. (2020) “GeoAI: Where machine learning and big data converge in GIScience,” Journal of Spatial Information Science, (20), pp. 71–77. doi:10.5311/JOSIS.2020.20.658.
Li, W. et al. (2024) “GeoAI for science and the science of GeoAI,” Journal of Spatial Information Science, (29), pp. 1–17.
Li, Z. et al. (2025) “GIScience in the era of artificial intelligence: A research agenda towards autonomous GIS,” arXiv preprint arXiv:2503.23633 [Preprint].
Linnainmaa, S. (1970) The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master’s thesis (in Finnish). University of Helsinki.
Liu, P. and Biljecki, F. (2022) “A review of spatially-explicit GeoAI applications in urban geography,” International Journal of Applied Earth Observation and Geoinformation, 112, p. 102936.
Liu, P., Zhang, Y. and Biljecki, F. “Explainable spatially explicit geospatial artificial intelligence in urban analytics,” Environment and Planning B: Urban Analytics and City Science, 0(0), p. 23998083231204689. doi:10.1177/23998083231204689.
Liu, Z. et al. (2024) “Measuring geographic diversity of foundation models with a natural language–based geo-guessing experiment on GPT-4,” AGILE: GIScience Series, 5, p. 38. doi:10.5194/agile-giss-5-38-2024.
Liu, Z., Currier, K. and Janowicz, K. (2024) “Making geographic space explicit in probing multimodal large language models for cultural subjects,” in Global AI Cultures workshop of ICLR 2024.
Lovelace, A. (1842) “Notes upon LF Menabrea’s sketch of the analytical engine invented by Charles Babbage,” Bibliotheque Universelle de Geneve, 82, pp. 245–295.
Mai, G. et al. (2022) “A review of location encoding for GeoAI: Methods and applications,” International Journal of Geographical Information Science, 36(4), pp. 639–673. doi:10.1080/13658816.2021.2004602.
Mai, G., Huang, W., et al. (2023) “On the opportunities and challenges of foundation models for geospatial artificial intelligence.” Available at: https://arxiv.org/abs/2304.06798.
Mai, G., Xuan, Y., et al. (2023) “Sphere2Vec: A general-purpose location representation learning over a spherical surface for large-scale geospatial predictions,” ISPRS Journal of Photogrammetry and Remote Sensing, 202, pp. 439–462.
Mai, G. et al. (2025) “Towards the next generation of geospatial artificial intelligence,” International Journal of Applied Earth Observation and Geoinformation, 136, p. 104368.
Mason, G. and Jacobson, R. (2007) “Fuzzy geographically weighted clustering,” in Proceedings of the 9th international conference on geocomputation, maynooth, eire, ireland, pp. 3–5.
Nelson, T. et al. (2025) “A research agenda for GIScience in a time of disruptions,” International Journal of Geographical Information Science, 39(1), pp. 1–24.
Novelli, C. et al. (2023) “Taking AI risks seriously: A new assessment model for the AI Act,” AI & SOCIETY [Preprint]. doi:10.1007/s00146-023-01723-z.
Prince, S.J. (2023) Understanding deep learning. MIT press.
Roberts, J. et al. (2023) “GPT4GEO: How a language model sees the world’s geography.” Available at: https://arxiv.org/abs/2306.00020.
Rosenblatt, F. (1962) Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. Spartan Books.
Schmidhuber, J. (2022) “Annotated history of modern AI and deep learning.” Available at: https://arxiv.org/abs/2212.11279.
Shelby, R. et al. (2023) “Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction.” arXiv. doi:10.48550/arXiv.2210.05791.
Shi, M. et al. (2025) “Geography for AI sustainability and sustainability for GeoAI,” Cartography and Geographic Information Science, pp. 1–19.
Sieber, R. et al. (2024) “What is civic participation in artificial intelligence?” Environment and Planning B: Urban Analytics and City Science, p. 23998083241296200.
Singh, S., Fore, M. and Stamoulis, D. (2024) “GeoLLM-engine: A realistic environment for building geospatial copilots,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 585–594.
Slaughter, R.K., Kopec, J. and Batal, M. (2020) “Algorithms and economic justice: A taxonomy of harms and a path forward for the Federal Trade Commission,” Yale JL & Tech., 23, p. 1.
Smith, T.R. (1984) “Artificial intelligence and its applicability to geographical problem solving,” The Professional Geographer, 36(2), pp. 147–158.
Staab, R. et al. (2023) “Beyond Memorization: Violating Privacy Via Inference with Large Language Models.” arXiv. doi:10.48550/arXiv.2310.07298.
Tan, C. et al. (2023) “On the promises and challenges of multimodal foundation models for geographical, environmental, agricultural, and urban planning applications.” Available at: https://arxiv.org/abs/2312.17016.
Templeton, A. et al. (2024) “Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet,” Transformer Circuits Thread [Preprint].
Turing, A.M. (1950) “Computing machinery and intelligence,” Mind, LIX(236), pp. 433–460. doi:10.1093/mind/LIX.236.433.
Vaswani, A. et al. (2017) “Attention is all you need,” in Guyon, I. et al. (eds.) Advances in neural information processing systems. Curran Associates, Inc. Available at: https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
Wang, S. et al. (2024) “GPT, large language models (LLMs) and generative artificial intelligence (GAI) models in geospatial science: A systematic review,” International Journal of Digital Earth, 17(1), p. 2353122. doi:10.1080/17538947.2024.2353122.
Webber, R. and Burrows, R. (2018) The predictive postcode: The geodemographic classification of British society. Sage.
Xing, J. and Sieber, R. (2023) “The challenges of integrating explainable artificial intelligence into GeoAI,” Transactions in GIS, 27(3), pp. 626–645. doi:10.1111/tgis.13045.
Xu, L. et al. (2024) “Evaluating large language models on spatial tasks: A multi-task benchmarking study.” Available at: https://arxiv.org/abs/2408.14438.
Yao, A. et al. (2024) “Bringing ethics to cartography and geographic information science: AutoCarto 2022,” Cartography and Geographic Information Science, 51(4), pp. 487–491. doi:10.1080/15230406.2024.2352534.
Zhang, Y. et al. (2024) “MapGPT: An autonomous framework for mapping by integrating large language model and cartographic tools,” Cartography and Geographic Information Science, 0(0), pp. 1–27. doi:10.1080/15230406.2024.2404868.
Zhu, H. et al. (2024) “PlanGPT: Enhancing urban planning with tailored language model and efficient retrieval.” Available at: https://arxiv.org/abs/2402.19273.