AJ Alvero, Dustin S. Stoltz, Oscar Stuhler, Marshall A. Taylor
Sociological Science January 20, 2026
10.15195/v13.a3
Abstract
Generative artificial intelligence (GenAI) has garnered considerable attention for its potential utility in research and scholarship, even among those who typically do not rely on computational tools. However, early commentators have also articulated concerns about how GenAI usage comes with enormous environmental costs, serious social risks, and a tendency to produce low-quality content. In the midst of both excitement and skepticism, it is crucial to take stock of how GenAI is actually being used. Our study focuses on sociological research as our site, and here we present findings from a survey of 433 authors of articles published in 50 sociology journals in the past five years. The survey provides an overview of the state of the discipline with regard to the use of GenAI by providing answers to fundamental questions: how (much) do scholars use the technology for their research; what are their reasons for using it; and how concerned, trustful, and optimistic are they about the technology? Of the approximately one third of respondents who self-report using GenAI at least weekly, the primary uses are for writing assistance and comparatively less so in planning, data collection, or data analysis. In both use and attitudes, there are surprisingly few differences between self-identified computational and non-computational researchers. In general, respondents are very concerned about the social and environmental consequences of GenAI. Trust in GenAI outputs is low, regardless of expertise or frequency of use. Although optimism that GenAI will improve is high, scholars are divided on whether GenAI will have a positive impact on the field.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Supplemental Materials
Reproducibility Package: A replication repository for this article can be found at: https://github.com/Marshall-Soc/genai_sociology. The data for this article are hosted on the Harvard Dataverse (Alvero et al. 2025) and can be accessed through: https://doi.org/10.7910/DVN/ICXIRP
- Citation: Alvero, AJ, Dustin S. Stoltz, Oscar Stuhler, and Marshall A. Taylor. 2025. “Generative AI in Sociological Research: State of the Discipline.” Sociological Science 13: 45-62.
- Received: August 23, 2025
- Accepted: November 8, 2025
- Editors: Arnout van de Rijt, Cristobal Young
- DOI: 10.15195/v13.a3


