The news report that Singapore is the world’s “smartest” city (ST, Oct. 3) – based on the International Institute for Management Development (IMD) Smart City Index 2019, published by the IMD World Competitiveness Centre and the Singapore University of Technology and Design (SUTD) – is useful for exploring the two concepts of sampling and construct validity. First, how did the IMD and SUTD team collect the survey data, and to what extent are the samples representative of the 102 cities (including Singapore)? And second, how did the team define and operationalise “smart”, and to what extent are the survey questions and responses reflective of whether a city is truly “smart”?
IMD is a business education school based in Lausanne, Switzerland.
From the get-go, many are likely to problematise the relatively small sample size – 120 residents from each city – but as a research colleague has previously pointed out with regard to public surveys in general, “how the sample is drawn (and how we quantify uncertainty) is more important than the size of the sample” (Shannon Ang, a PhD candidate at the University of Michigan, manages the excellent online project “Singapore Society in Numbers”). Furthermore, consistent with his critique that such surveys and studies are oftentimes not adequately transparent about their methodology, the IMD Smart City Index offered few details about sampling (and about construct validity too, as we will see later), beyond the brief note that the residents were “randomly chosen”. For instance:
- How exactly was sampling conducted in each city? Did the sampling strategies differ from city to city, and if so, how would that affect the findings?
- What were the non-response rates, or the number of people in each city who refused to complete the survey? Among those who did complete it, was there missing data? And to what degree are the individuals who refused similar to or different from those who completed it?
- In the first place, how did IMD and SUTD decide on 120 as the appropriate number? More broadly, how did they decide upon these 102 cities?
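The point that uncertainty matters as much as raw sample size can be made concrete. Below is a minimal sketch of the standard margin-of-error calculation for a proportion, assuming simple random sampling; the figures and the comparison sample of 1,200 are illustrative, since the index does not publish its estimator or design.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion,
    assuming simple random sampling (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 120 is the per-city sample reported by the index;
# n = 1200 is a hypothetical tenfold sample for comparison.
print(f"n=120:  ±{margin_of_error(120):.1%}")   # roughly ±9 percentage points
print(f"n=1200: ±{margin_of_error(1200):.1%}")  # roughly ±3 percentage points
```

A sample of 120 per city implies a margin of error of nearly nine percentage points on any single survey proportion – which is exactly why details on how the sample was drawn, and how uncertainty was quantified, matter so much.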
Beyond sampling, and also as a check on representativeness, it would have been helpful to have aggregated information about the demographic and socio-economic backgrounds of the respondents.
Construct validity is the next concept of interest. The team defined a “smart” city “as an urban setting that applies technology to enhance the benefits and diminish the shortcomings of urbanisation”. In addition, it considered the two factors of structures (“the existing infrastructure of the cities”) and technologies (“the technological provisions and services available to the inhabitants”) over five key areas: Health and safety, mobility, activities, opportunities for work and school, as well as governance.
There can be reasonable disagreements over the extent to which the two factors and the five key areas adequately characterise a city as “smart”. In other words, is there consensus over the features or traits of a “smart” city? How did the team decide what to include or exclude (in other words, what was its overall research methodology)? And how do these definitions or operationalisations compare to other studies?
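How much these definitional choices matter can be illustrated with a toy calculation. The sketch below uses entirely invented scores for three hypothetical cities – the index does not publish its aggregation formula – to show that merely changing the weight given to one of the five key areas can reorder a ranking.

```python
# Invented area scores (0-100) for three hypothetical cities, in the order:
# health and safety, mobility, activities, opportunities, governance.
scores = {
    "City A": [90, 50, 70, 80, 85],
    "City B": [70, 90, 70, 70, 70],
    "City C": [80, 75, 80, 75, 70],
}

def rank(weights):
    """Rank cities by the weighted average of their area scores."""
    composite = {
        city: sum(w * s for w, s in zip(weights, vals)) / sum(weights)
        for city, vals in scores.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

print(rank([1, 1, 1, 1, 1]))  # equal weights: City C leads
print(rank([1, 3, 1, 1, 1]))  # emphasise mobility: City B leads
```

With equal weights, City C tops this toy table; tripling the weight on mobility puts City B first. Any composite “smart city” score embeds such choices, which is why the operationalisation deserves as much scrutiny as the headline ranking.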
Such ambiguity perhaps points to the need for these studies – and news reports of these studies – to pay equal attention to how the results or findings were derived (again, their methodology), rather than focusing disproportionately on the findings per se. Increased scepticism can be healthy: readers will be less inclined to take results at face value, or to dismiss findings that run counter to their worldviews, ultimately advancing data-driven discourse in Singapore.