The data in this report are based on a nationally representative survey of 1,011 American adults, aged 18 and older. Results in Sections 3 and 4 are reported for the subset of 861 registered voters who participated in the survey. The survey was conducted April 16 – May 1, 2023. All questionnaires were self-administered by respondents in a web-based environment. The median completion time for the survey was 22 minutes.
The sample was drawn from the Ipsos KnowledgePanel®, an online panel of members drawn using probability sampling methods. Prospective members are recruited using a combination of random-digit dialing and address-based sampling techniques that cover virtually all (non-institutional) residential phone numbers and addresses in the United States. Those who agree to join the panel but do not have access to the Internet are loaned computers and given Internet access so they may participate.
The sample therefore includes a representative cross-section of American adults – irrespective of whether they have Internet access, use only a cell phone, etc. The sample was weighted, post-survey, to match key US Census Bureau demographic norms.
From November 2008 to December 2018, no KnowledgePanel® member participated in more than one Climate Change in the American Mind (CCAM) survey. Beginning with the April 2019 survey, panel members who have participated in CCAM surveys in the past, excluding the most recent two surveys, may be randomly selected for participation. In the current survey, 267 respondents, 232 of whom are registered voters included in this report, participated in a previous CCAM survey.
The survey instrument was designed by Anthony Leiserowitz, Seth Rosenthal, Jennifer Carman, Matthew Ballew, Danning Lu, Marija Verner, Sanguk Lee, Matthew Goldberg, Jennifer Marlon, Joshua Low, Kristin Barendregt-Ludwig, Michel Gelobter, and Gerald Torres of Yale University; Edward Maibach, John Kotcher, Teresa Myers, and Nicholas Badullovich of George Mason University; Andrea Aguilar, Sha Merirei Ongelungel, Cristian Sanchez, and Karina Sahlin of the Digital Climate Coalition; Irene Burga and Mark Magaña of Green Latinos; Saad Amer of Justice Environment; Romona Taylor Williams of Mississippi Citizens United for Prosperity; Montana Burgess of Neighbours United; Grace McRae and Makeda Fakede of the Sierra Club; and Manuel Salgado and Annika Larson of WE ACT for Environmental Justice. The categories for the content analysis of the open-ended responses about groups vulnerable to global warming were developed by John Kotcher of George Mason University, and open-ended responses were coded by Patrick Ansah, Tracy Mason, and Nicholas Badullovich of George Mason University. The categories for the content analysis of the open-ended responses about climate justice were developed by Jennifer Carman of Yale University, and the open-ended responses were coded by Matthew Ballew and Danning (Leilani) Lu of Yale University. The figures and tables were constructed by Emily Goddard of Yale University.
Margins of error
All samples are subject to some degree of sampling error—that is, statistical results obtained from a sample can be expected to differ somewhat from results that would be obtained if every member of the target population were interviewed. Average margins of error, at the 95% confidence level, are as follows:
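For a single proportion, the 95% margin of error is commonly approximated as 1.96 × √(p(1 − p)/n). The sketch below applies that textbook approximation to the sample sizes reported above, assuming p = 0.5, simple random sampling, and no design-effect adjustment; it is illustrative only and does not reproduce the report's published margins of error, which may additionally account for the survey's weighting and design effect.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
    """Approximate margin of error (in percentage points) for a proportion.

    Assumes simple random sampling; a design_effect > 1 would inflate the
    variance for complex (e.g., weighted panel) designs.
    """
    return 100 * z * math.sqrt(design_effect * p * (1 - p) / n)

# Illustrative values only -- not the report's published margins of error.
print(f"All adults (n=1,011):      +/- {margin_of_error(1011):.1f} points")
print(f"Registered voters (n=861): +/- {margin_of_error(861):.1f} points")
```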
Rounding error and tabulation
In data tables, bases specified are unweighted, while percentages are weighted to match national population parameters.
For tabulation purposes, percentage points are rounded to the nearest whole number. As a result, percentages in a given chart may total slightly higher or lower than 100%. Summed response categories (e.g., “strongly support” + “somewhat support”) are rounded after sums are calculated. For example, in some cases, the sum of 25% + 25% might be reported as 51% (e.g., 25.3% + 25.3% = 50.6%, which, after rounding, would be reported as 25% + 25% = 51%).
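As a minimal illustration of this convention (using hypothetical values, not figures from the report's tables), the component percentages and the combined category are rounded separately:

```python
# Hypothetical unrounded (weighted) percentages for two response categories.
strongly_support = 25.3
somewhat_support = 25.3

# Each category is rounded to the nearest whole number for display...
displayed_components = [round(strongly_support), round(somewhat_support)]  # [25, 25]

# ...while the summed category is rounded only after the unrounded values are added.
displayed_sum = round(strongly_support + somewhat_support)  # round(50.6) -> 51

print(displayed_components, displayed_sum)  # [25, 25] 51
```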
Instructions for coding Section 1.1: Open-ended responses about groups perceived to be most harmed by global warming
A doctoral student and a postdoctoral fellow coded the open-ended responses using instructions and categories developed by one of the Principal Investigators. Percent agreement ranged from 93% to 99% for the categories coded. Differences between the two coders were resolved via discussion between them and the Principal Investigator. The “Not asked” classification was determined by a “No” or “Not sure” response to the preceding question, “Do you think that global warming harms some groups of people in the United States more than others?” Participants who provided that response were not shown this open-ended question. Definitions of the other categories used by the coders are listed below.
For the following variables, we code each survey response for the presence or absence (0 = absent; 1 = present) of each of the categories listed below. The order in which categories are mentioned in a survey response does not matter for the purposes of coding; only the presence or absence of each category is recorded.
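A minimal sketch of this 0/1 coding, and of how the percent agreement reported above can be computed from two coders' judgments, is shown below; the codes are hypothetical and stand in for the coders' actual category assignments.

```python
# Hypothetical 0/1 codes (1 = category present) assigned by two coders
# to the same ten open-ended responses for a single category.
coder_1 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
coder_2 = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Percent agreement: share of responses on which both coders assigned the same code.
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
print(f"Percent agreement: {agreement:.0%}")  # 90% in this hypothetical example
```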
Instructions for coding Section 2.2: Open-ended responses about the term “climate justice”
The three lead authors at the Yale Program on Climate Change Communication first conducted independent coding of the open-ended responses. The first author developed a codebook based on all three raters’ categories, and the other authors then coded the responses again following the final codebook. Differences in the coding were resolved via discussion among the three researchers. Percent agreement ranged from 81% to 98% for the categories coded. The “haven’t heard of climate justice” classification was determined by a “nothing at all” response to the preceding question, “How much, if anything, have you heard or read about the concept of climate justice?” Participants who provided that response were not shown this open-ended question. Definitions of the other categories used by the coders are listed below.
For the following variables, we code each survey response for the presence or absence (0 = absent; 1 = present) of each of the categories listed below. The order in which categories are mentioned in a survey response does not matter for the purposes of coding; only the presence or absence of each category is recorded.
A survey response can be coded as present for multiple content variables. For example, the response “Holding corporations accountable for pollution” was coded as “present” for both the accountability and reparations variable (for the reference to accountability) and the corporations variable (for mentioning corporations). Definitions for each content variable are provided below.
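A minimal sketch of the resulting data structure is shown below, using the example response quoted above. The keyword matching is purely illustrative; in the study, trained coders applied the codebook definitions rather than automated rules, and only two of the content variables are shown.

```python
# Illustrative only: the study used human coders applying codebook definitions,
# not keyword rules. Two content variables are shown for brevity.
def code_response(text):
    """Return 0/1 presence codes for a response across content variables."""
    text = text.lower()
    return {
        "accountability_and_reparations": int("accountab" in text or "reparation" in text),
        "corporations": int("corporation" in text),
    }

print(code_response("Holding corporations accountable for pollution"))
# {'accountability_and_reparations': 1, 'corporations': 1}
```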