Page 101 - Education in a Digital World
Local Variations
noted at the time, these “increases were the result of governmental programs, as
well as support by local communities and the efforts of individual schools” (Pelgrum
et al. 1993, n.p.). That said, even during the early emergence of mainstream computer
use in schools, some interesting national differences were apparent. For example,
why were gender differences in students’ use of computers apparent in all
countries except the French-speaking systems and Greece (see Pelgrum and
Plomp 1993)?
The so-called ‘SITES’ study (i.e. the Second Information Technology in Education
Study) ran from 1997 to 2008, and focused on an extended
sample of twenty-seven countries and regions. Although documenting the generally
increasing levels of digital technology access and use in schools, here, too, the data
highlighted a number of notable differences between countries. For example, student–
computer ratios in lower secondary schools were reported to be as low as 9:1
in Canada and 12:1 in Denmark and Singapore, as opposed to ratios of 133:1
in Lithuania and 210:1 in Cyprus. The phases of the study which included obser-
vation and interview data also highlighted a “great deal of diversity and variation” with
regard to teachers’ and students’ in-school uses of digital technology. Significantly,
these differences were reported primarily at the country/system level rather than
between ‘clusters’ of countries (Law et al. 2008). In other words, different countries
in similar regions (e.g. the countries across ‘northern Europe’) would nevertheless
display noticeably different patterns of technology use. One key finding emerging
from these data was the lack of clear correlation between observed technology use
in schools and the nature of national policy drives or the general conditions of
national school systems. As the study concluded, “findings indicate that the extent
of ICT use does not only depend on overall national level ICT policies and school
level conditions” (Carstens and Pelgrum 2009, n.p.).
The data from these IEA studies, therefore, provide useful insights into the
changing (and non-changing) patterns of educational technology use throughout
the 1980s, 1990s and 2000s, and have continued into the 2010s through online
computer-based assessments of students’ skills in the guise of the ‘International
Computer and Information Literacy Study’. Although based on admittedly ‘broad
brush’ measures and indicators, and while limited to those countries whose
governments were willing to finance their participation (hence the inconsistent
inclusion of different national cases), these data nevertheless suggest a clear sense of
national difference and variation. Similar patterns and trends are also evident in
other programmes of comparative educational measurement. For example, the
OECD’s long-running PISA (Programme for International Student Assessment)
study provides a valuable measure of educational technology use over time through
its repeated surveys of secondary school students across the organisation’s member
countries. While illustrating generally rising levels of students’ access to and familiarity
with digital technology, the PISA data nevertheless highlight a number of differ-
ences between national cases (see OECD 2005, 2010). A number of interesting
questions therefore arise from the PISA data. For example, why are the reported