Tag Archives | Bias

Social Status and the Moral Acceptance of Artificial Intelligence

Patrick Schenk, Vanessa A. Müller, Luca Keiser

Sociological Science, October 29, 2024
10.15195/v11.a36


The morality of artificial intelligence (AI) has become a contentious topic in academic and public debates. We argue that AI’s moral acceptance depends not only on its ability to accomplish a task in line with moral norms but also on the social status attributed to AI. We examine whether agent type (AI vs. computer program vs. human), gender framing, and organizational membership affect moral permissibility. In a factorial survey experiment, 578 participants rated the moral acceptability of agents performing a task (e.g., cancer diagnostics). We find that using AI is judged less morally acceptable than employing human agents. AI used in high-status organizations is judged more morally acceptable than in low-status organizations. No differences were found between computer programs and AI. Neither anthropomorphic nor gender framing had an effect. Thus, agents in high-status organizations receive a moral surplus purely based on their structural position in a cultural status hierarchy, regardless of their actual performance.
This work is licensed under a Creative Commons Attribution 4.0 International License.

Patrick Schenk: Department of Sociology, University of Lucerne
E-mail: patrick.schenk@unilu.ch

Vanessa A. Müller: Department of Sociology, University of Lucerne
E-mail: vanessa.mueller2@unilu.ch

Luca Keiser: gfs.bern
E-mail: luca.keiser@gfsbern.ch

Acknowledgements: We thank Gabriel Abend, Michael Sauder, the editor of Sociological Science, and an anonymous reviewer for their valuable comments. Earlier versions of this article were presented at the Congress of the Academy of Sociology in Bern, Switzerland, and the Conference of the European Sociological Association in Porto, Portugal.

Funding: This study was funded by the Swiss National Science Foundation (grant number 100017_200750/1).

Supplemental Materials

Reproducibility Package: A reproduction package with data, codebook, and statistical code is available through the following link: https://doi.org/10.5281/zenodo.13850548.

  • Citation: Schenk, Patrick, Vanessa A. Müller, and Luca Keiser. 2024. “Social Status and the Moral Acceptance of Artificial Intelligence.” Sociological Science 11: 989-1016.
  • Received: August 20, 2024
  • Accepted: September 29, 2024
  • Editors: Ari Adut, Stephen Vaisey
  • DOI: 10.15195/v11.a36



A Large-Scale Test of Gender Bias in the Media

Eran Shor, Arnout van de Rijt, Babak Fotouhi

Sociological Science, September 3, 2019
10.15195/v6.a20


A large body of studies demonstrates that women continue to receive less media coverage than men do. Some attribute this difference to gender bias in media reporting—a systematic inclination toward male subjects. We propose that in order to establish the presence of media bias, one has to demonstrate that the news coverage of men is disproportional even after accounting for occupational inequalities and differences in public interest. We examine the coverage of more than 20,000 successful women and men from various social and occupational domains in more than 2,000 news sources as well as web searches for these individuals as a behavioral measure of interest. We find that when compared with similar-aged men from the same occupational strata, women enjoy greater public interest yet receive less media coverage.
This work is licensed under a Creative Commons Attribution 4.0 International License.

Eran Shor: Department of Sociology, McGill University
E-mail: eran.shor@mcgill.ca

Arnout van de Rijt: Social and Behavioural Sciences, Utrecht University
E-mail: arnoutvanderijt@gmail.com

Babak Fotouhi: Program for Evolutionary Dynamics, Harvard University
E-mail: babak_fotouhi@fas.harvard.edu

  • Citation: Shor, Eran, Arnout van de Rijt, and Babak Fotouhi. 2019. “A Large-Scale Test of Gender Bias in the Media.” Sociological Science 6: 526-550.
  • Received: June 6, 2019
  • Accepted: June 13, 2019
  • Editors: Jesper Sørensen, Olav Sorenson
  • DOI: 10.15195/v6.a20



Multicollinearity and Model Misspecification

Christopher Winship, Bruce Western

Sociological Science, July 26, 2016
10.15195/v3.a27

Multicollinearity in linear regression is typically thought of as a problem of large standard errors due to near-linear dependencies among independent variables. This problem can be solved by more informative data, possibly in the form of a larger sample. We argue that this understanding of multicollinearity is only partly correct. The near collinearity of independent variables can also increase the sensitivity of regression estimates to small errors in the model specification. We examine the classical assumption that independent variables are uncorrelated with the errors. With collinearity, small deviations from this assumption can lead to large changes in estimates. We present a Bayesian estimator that specifies a prior distribution for the covariance between the independent variables and the error term. This estimator can be used to calculate confidence intervals that reflect sampling error and uncertainty about the model specification. A Monte Carlo experiment indicates that the Bayesian estimator has good frequentist properties in the presence of specification errors. We illustrate the new method by estimating a model of the black–white gap in earnings.
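The amplification the abstract describes can be sketched numerically. The snippet below is only an illustration of the standard asymptotic bias of OLS (not the authors' Bayesian estimator): with two standardized regressors correlated at rho, and a small covariance delta between the first regressor and the error term, the OLS bias is S^{-1}[delta, 0]', where S is the regressor covariance matrix, so it scales with 1/(1 - rho^2). The values of rho and delta are arbitrary choices for the demonstration.

```python
import numpy as np

def ols_bias(rho, delta):
    """Asymptotic OLS bias when regressor 1 has covariance `delta`
    with the error term, and the two unit-variance regressors have
    correlation `rho`: bias = S^{-1} @ [delta, 0]."""
    S = np.array([[1.0, rho], [rho, 1.0]])  # regressor covariance matrix
    return np.linalg.solve(S, np.array([delta, 0.0]))

# With orthogonal regressors, a small violation (delta = 0.05)
# shifts the first coefficient by only 0.05.
print(ols_bias(0.0, 0.05))

# With rho = 0.95 the same violation is amplified by
# 1 / (1 - 0.95^2) ≈ 10.3, producing bias of roughly 0.51.
print(ols_bias(0.95, 0.05))
```

This is the sense in which, under near collinearity, a tiny departure from the exogeneity assumption can swamp sampling error in the estimates.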

This work is licensed under a Creative Commons Attribution 4.0 International License.

Christopher Winship: Department of Sociology, Harvard University
E-mail: cwinship@wjh.harvard.edu

Bruce Western: Department of Sociology, Harvard University
E-mail: western@wjh.harvard.edu

Acknowledgements: We thank Kinga Makovi for help in the preparation of the manuscript. We also appreciate the editor’s suggestions for citations that we were unaware of.

  • Citation: Winship, Christopher, and Bruce Western. 2016. “Multicollinearity and Model Misspecification.” Sociological Science 3: 627-649.
  • Received: February 5, 2016
  • Accepted: March 5, 2016
  • Editors: Jesper Sørensen, Olav Sorenson
  • DOI: 10.15195/v3.a27

